I had set my prototyping deadline to August 20th because I knew I would be busy after that (and for the whole month of September). Indeed, I was, and I did zero further development on the prototype.
I envy those who manage to push projects forward by working 40 minutes during their commute, or something like that, but that's not how my mind works: to do any meaningful development, I need at least a 3-4 hour slot, usually with a caffeine-powered break in the middle.
In the 16 days of "full immersion" prototyping, I did three solid slots per day, except for a day or two where other urgent stuff came up and I did just one or two.
From October 2nd, I plan to start working towards the next milestone for the project, which is going to be something like "PoC refactoring and expansion".
Having other contracting work to do for third parties, I will have to limit Particular Reality development to "one slot per day". Hopefully I will proceed at a steady, if slower, pace. That's for weekdays. No rules about weekends: occasionally, I might do two "three-slot days" to take care of some hard tasks that need a bit of extra focus. But normally, I will just take a break.
So, no progress at all?
The fact that I couldn't find any "development slot" doesn't mean I did nothing for the project.
I decided to announce it on the same day the Quest 3 would be announced, right before Meta Connect (so, on September 27th), so I had that as a deadline to prepare a basic online presence for the game.
I designed (with a bit of behind-the-scenes help) a second version of the logo, in both extended and compact/icon versions.
The first, super ugly version can be spotted in the last video of the August 20th entry. This "second-gen" logo is still not great, but it's adequate for the current phase of the project.
Additionally, I recorded some videos of myself playing the prototype while also capturing the gameplay on the headset, and I did a bit of video editing to sync and mix them together, trying to show how the locomotion and the hand-tracked interaction work.
With these core elements ready, I quickly put together a basic website (I had already registered the domain months ago) and set up the social media accounts.
After a bit of research I decided not to put the DevLog you're reading on the website itself, but to use Substack, mainly for the newsletter feature (but also hoping to reach more readers).
During prototyping, I uploaded the clips to be embedded in the articles to YouTube. I considered self-hosting the videos, but if at any point (as one would hope) a post from the DevLog managed to go viral, that could cause problems with my hosting bandwidth. Keeping them on YouTube should keep me safe from that kind of "happy problem".
I try not to depend too much on third-party software and services (because at some point, usually when you're too busy to deal with them, bad things happen), but YouTube and Substack should hopefully be fine for a long time, and if they aren't, I will still have the original videos and the Markdown version of the DevLog entries (I write them in Obsidian, and keep the vault in a Git repository). After wrapping up the project I will consider adding an "archival" self-hosted version with no third parties involved. And if that's the version you're now reading, well, hello there, reader from the future.
The Unity fiasco
In September, the "big thing" in the game development world was the disastrous announcement of new Unity terms of service that would have required developers to pay a fee for every user installation of a Unity game (after crossing a certain threshold), with the terms even applying retroactively to games already released or in an advanced state of development.
The only appropriate word to describe such terms was "insane", for a multitude of reasons, which I would summarize as:
- you can't change a deal with developers retroactively: maybe they picked Unity over Unreal precisely to have only upfront, predictable costs
- the installation-count mechanism was never clearly specified, and was possibly prone to abuse
- the maximum amount potentially owed to Unity was unpredictable: according to the new terms, devs could find themselves having to pay more than what they had earned from a game sale, and there were situations (like "games on Microsoft GamePass", not exactly a niche case) where it wasn't clear what would be owed to Unity, and by whom
After a massive backlash from the whole game development community, Unity took a huge step back and presented new terms which are much more reasonable and predictable (basically: 2.5% royalties after a certain threshold, and only when using future versions of the engine).
Does this affect Particular Reality?
Well, maybe. I've been using Unity professionally for quite a few years now (the first project I worked on was in Unity 4, and I also worked a bit with Unity 3.x to update an old project developed by others), and all this experience means I feel quite confident and fast using it.
So, I know that switching to any other engine is going to significantly drop my productivity for a while.
But I've never felt this bad about the trajectory Unity is following, and I know that no matter how long I've used it, I've never really liked it. Adopting it was a decision (not taken by me, anyway) that definitely made sense at the time, considering the scope of the projects was "3D games targeted at mobile platforms", but still, it never felt like the kind of engine I wanted it to be.
I don't like Unity. Why?
My feeling about Unity is that it attempts to provide a set of high-level features that handle complicated tasks, making them accessible to less experienced developers. However, since there are all kinds of games that can be made, targeting very different platforms, those "generic" features often end up being suboptimal for the job, and/or cumbersome to use.
So, I regularly end up developing my own, better tailored solutions for these "high level" problems (each project has its own), and I only use lower level, "core" engine features (input, audio, rendering, networking).
Given a set of target platforms for a project, what I'd like from a game engine is a solid, really portable abstraction layer that hides the platform specific details and offers a unified interface to the common subset of features supported by the target platforms.
Then, I expect to be able to easily export builds for those target platforms.
Does Unity do a good job at that?
According to my direct experience, not really.
Some examples?
I worked on an Android game. When it was time to port it to iOS, something that should have been pretty simple considering that in both cases we're talking about mobile devices with touch screens, I found out that the way I had used reflection in my C# code was not suitable for the iOS build. At least, not until I (IIRC) added something to prevent the stripping of some methods during the iOS build process.
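I don't have that code anymore, but the gist was something like the following sketch (class and method names are made up): Unity's managed code stripping removes methods it considers unused, so anything reached only via reflection has to be explicitly preserved, either with the Preserve attribute or through a link.xml file.

```csharp
using UnityEngine;
using UnityEngine.Scripting;

public class ReflectionTarget : MonoBehaviour
{
    // This method is only ever invoked via reflection (by name), so the
    // stripper sees no direct reference to it and would remove it from the
    // iOS build. Marking it with [Preserve] keeps it in.
    [Preserve]
    private void OnRemoteCommand(string payload)
    {
        Debug.Log("Remote command received: " + payload);
    }
}
```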
I worked on a project and used threads for something. At some point, the need to prepare a WebGL build arose. Guess what: threading wasn't supported in WebGL builds.
I had some networking code used in a client/server XR application. At some point, I had to develop a new client running on HoloLens. Well, the C# networking code I had written couldn't work on HoloLens, as the device only supported a specific C# backend (the .NET profiles mess - if you know, you know).
Now, you might argue that these problems are due to platform limitations. If WebGL didn't support threads, and UWP restricted C# usage to the "core profile", was it Unity's fault?
No, but it was Unity's fault that it allowed those things to work in the first place (in the editor and when building for other target platforms).
If I'm not really shielded from platform peculiarities, the engine is breaking the promise of easy portability.
Let's also briefly discuss the quality of some core Unity subsystems.
Input. The "old" input system has one of the worst API designs I've ever dealt with. I remember that, while handling multiple controllers, I found myself having to concatenate strings to access axis values from specific controllers. The moment I see a string parameter in a game engine API, I raise an eyebrow. So I was eager to see the new "InputSystem" package when it was published. I checked it for 10 minutes and decided that I hated it even more, because it made low-level handling more cumbersome and required a lot of fiddling with the inspector, while I prefer to define anything related to gameplay logic through code.
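To give an idea of the pattern I mean, here's a sketch of the old way of polling a specific controller (the axis names are hypothetical, and each of them would need to be declared by hand in the Input Manager):

```csharp
using UnityEngine;

public class MultiControllerPolling : MonoBehaviour
{
    void Update()
    {
        // With the old Input Manager, the only way to read an axis from a
        // specific joystick is to rebuild its configured name as a string.
        for (int joystick = 1; joystick <= 4; joystick++)
        {
            float throttle = Input.GetAxis("Joy" + joystick + "_Throttle");
            if (Mathf.Abs(throttle) > 0.1f)
                Debug.Log("Joystick " + joystick + " throttle: " + throttle);
        }
    }
}
```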
There's a popular third-party commercial package, Rewired, that is highly regarded in the community and should be a better input-handling solution.
Audio. I've had to deal with lots of quirks, but I'm not an audio programming expert, so maybe it was partly my fault. Still, I find the API quite poor.
Even in this case, I know that many developers with advanced audio needs resort to third-party integrations (FMOD, Wwise).
Resources management. Between the virtual "Resources" folder, asset bundles and the super-cumbersome Addressables system, dealing with resource loading and unloading has always been insanely unpleasant and inconsistent. And while with a lot of work you can solve quite complex scenarios (like add-on packages optimized for different platforms, loaded at runtime), the real pain starts when you need to deal with external resources. Which is a very common requirement in non-gaming scenarios. Need to load a PNG as a texture? Ok, doable (with a different API than the one you would use to load it from embedded resources, which is already bad). Need to load a 3D model with materials at runtime, one that would get perfectly imported with default settings in the editor? Now you're out of luck. Prepare for a lot of pain and external libraries. Basically, the whole asset-preprocessing pipeline, which does so many useful things in the editor, is not accessible at runtime.
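To make the texture case concrete, here's a minimal sketch of the two different code paths (paths and names are illustrative): one for an asset that went through the editor import pipeline, one for an external file that has to be decoded manually at runtime.

```csharp
using System.IO;
using UnityEngine;

public static class TextureLoading
{
    // Asset placed under Assets/Resources/: it has been preprocessed by the
    // editor import pipeline and is loaded through the Resources API.
    public static Texture2D LoadEmbedded(string resourcePath)
    {
        return Resources.Load<Texture2D>(resourcePath);
    }

    // External PNG/JPG on disk: no import pipeline at runtime, so the raw
    // bytes have to be read and decoded into a texture with a different API.
    public static Texture2D LoadExternal(string absolutePath)
    {
        byte[] bytes = File.ReadAllBytes(absolutePath);
        var texture = new Texture2D(2, 2);  // size is overwritten by LoadImage
        texture.LoadImage(bytes);           // decodes the image data
        return texture;
    }
}
```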
Rendering. This would require a whole article. Of course, rendering is complicated, especially considering the wide range of target platforms. At some level, portable rendering is an unsolvable problem. But I feel like Unity is doing something wrong here too. The introduction of alternative rendering pipelines (URP/HDRP) next to the "built-in" rendering pipeline has introduced fragmentation. The choice of pipeline has many ramifications (shaders, materials, VFX, post-processing) and the current architecture doesn't allow switching easily. Ideally, one would be able to have more advanced lighting and post-processing features on high-end devices, while being able to fall back to less demanding rendering techniques on less powerful systems, without having to manage a totally different project for that. I've been using the package manager to split projects so that they have two different implementations, using different rendering pipelines while sharing the common part. And it's a painful process: multiple versions of the materials, custom code for the different pipelines isolated behind a common interface, etc.
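For what it's worth, the "common interface" I mention here is nothing fancy, just something along these lines (names purely illustrative), with one implementation per rendering pipeline living in its own package:

```csharp
// Shared assembly: gameplay code only ever talks to this interface and never
// references URP, HDRP or built-in pipeline types directly.
public interface IPostProcessingControl
{
    void SetBloomIntensity(float intensity);
    void EnableVignette(bool enabled);
}

// Each pipeline-specific package (built-in, URP, HDRP) then ships its own
// implementation, keeping the pipeline-dependent code out of the shared part.
```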
Networking. What a disaster. Over the years, a number of networking solutions have been provided, then deprecated before even being stable and feature complete. I know that recently a new one was provided. Hopefully they nailed it this time, but I expect nothing at this point.
UI. Another painful topic. Initially, we had a pretty basic immediate-mode GUI. Then we got UnityUI, with its components and layout system. A core component for text display (TextMeshPro) was acquired from a third party years ago, but it still doesn't feel 100% natively integrated. There's a new system that was recently added into the mix, UI Toolkit, but for now its use is only advised for editor tools. It uses an approach inherited from web technologies for front-end development. Which I hate, and think should have no place in game development.
After writing all this, I feel a bit depressed. And that's without even considering the non-technical reasons to run away from Unity (other possible future TOS changes, the direction of the project, etc.).
So, why am I still using it?
After all these complaints, one could reasonably ask "then why are you still using Unity?".
Well, in part it's inertia, and in part it's the fact that knowing about the problems and limitations I've discussed also means being able to deal with them.
Game engines are intrinsically complicated and each one I've used in the past had its share of problems, debatable design decisions and quirks.
Maybe switching to Unreal tomorrow would be a good thing in the long term, but I feel that short term, being fast in iterating and shaping up the gameplay is more valuable. And the fact that I'm faster with Unity is related to my past experience, not to the qualities of the engine itself (that's a completely different topic).
So, for now, I'm going to postpone the final "engine" decision. It might feel counter-intuitive, but I know that by using the right architecture, I can make the game relatively easy to port, and that's what I'm going to do. Isolating the game logic from the engine features (when practical, so without being too strict about it) should lead to a better code structure anyway, even if I ultimately decide to not switch engine.
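As a sketch of what I mean (all names hypothetical): the rules of the game live in plain C# classes with no UnityEngine references, and thin adapters are the only code that talks to the engine.

```csharp
// Engine-agnostic side: plain C#, no UnityEngine using directives,
// so it can be unit-tested and, if needed, ported to another engine.
public interface IHapticsService
{
    void Pulse(float amplitude, float durationSeconds);
}

public class GrabInteraction
{
    private readonly IHapticsService haptics;

    public GrabInteraction(IHapticsService haptics)
    {
        this.haptics = haptics;
    }

    public void OnObjectGrabbed()
    {
        // Gameplay rule: a short pulse confirms the grab.
        haptics.Pulse(0.5f, 0.1f);
    }
}

// Unity-specific side (in a separate assembly): an adapter implementing
// IHapticsService would forward the call to the XR controller haptics API.
```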
While I work on the project, other interesting candidates could come up.
Today, Unreal sounds like the only reasonable alternative, especially considering the SDK support offered by Meta.
Two more adventurous candidates, in the open source space, could be Godot and Bevy. I'm trying to keep them on my radar. I think I'm much more aligned with the project direction and design choices of Bevy, and I wasn't impressed by Godot when I tried it (but that was a long time ago, at least 6 years). At some point, I might stop for a bit and try Unreal, Godot and Bevy for at least a week each, to see how I feel about them and properly evaluate the state of (at least) their OpenXR support.
And, of course, there's the other option, the unspeakable one... let's not even write it down for now.
Meta Connect, the Quest 3, and Mixed Reality
I think most of the interesting things about the Quest 3 had leaked before Meta Connect, but I still followed the keynote.
The new headset sounds like a reasonable, incremental step forward.
I'm eager to try the new optics and the wider FOV, and I'm confident that the new form factor makes it more comfortable. Of course, I'm also super happy about the increase in processing power.
The mixed reality features are cool, and the depth sensor allows you to do interesting things.
I've experienced that kind of development working with the HoloLens and the Vive Pro.
When I first had a Vive Pro around, one of the first things I developed was a mixed reality demo where I could grab and throw a ball around, and it would bounce off the walls of the room.
That required having a 3D artist model the room walls, and implementing a calibration system to easily position the model so that it matched the room (IIRC, I defined two reference points on the model, at the corners of a planar surface, and then you pointed at those two corners in reality with the Vive controller - based on that data, I positioned and scaled the model so that the virtual and real points, and consequently the whole room, would match). To have hand tracking, a Leap Motion controller was attached to the headset. Everything had to be connected to a PC, with cables.
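The original code is long gone, but the alignment step was roughly this (a reconstruction from memory, assuming the two reference points are expressed in the model's local space and the model has no parent transform):

```csharp
using UnityEngine;

public static class TwoPointCalibration
{
    // Aligns a room model so that two reference points defined on the model
    // (in its local space) match the corresponding points sampled in the real
    // room with a tracked controller. Yaw-only rotation and uniform scale.
    public static void Align(Transform roomModel,
                             Vector3 localA, Vector3 localB,
                             Vector3 realA, Vector3 realB)
    {
        // Uniform scale so the model's A-B distance matches the measured one.
        float scale = (realB - realA).magnitude / (localB - localA).magnitude;

        // Rotation around the up axis only, so the walls stay vertical.
        Vector3 localDir = localB - localA; localDir.y = 0f;
        Vector3 realDir = realB - realA;    realDir.y = 0f;
        Quaternion rotation =
            Quaternion.FromToRotation(localDir.normalized, realDir.normalized);

        roomModel.localScale = Vector3.one * scale;
        roomModel.rotation = rotation;
        // Translate so that point A of the model lands exactly on the real A.
        roomModel.position = realA - rotation * (localA * scale);
    }
}
```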
Now, just a few years later, this will be doable with the room geometry automatically derived at runtime, with hand tracking built in, and no need for a PC (or cables). All this, with much higher quality video passthrough.
Not bad at all.
But I don't think mixed reality is particularly useful for VR gaming.
There are many demos showing things like 3D animated characters jumping on the furniture, or shooters where you can hide behind your couch. My honest opinion is that these are all gimmicks that focus on the tech but forget about game design principles.
The moment you need to consider some unpredictable elements (the physical room layout) in your gameplay, you're in for a lot of trouble and basically unsolvable problems.
There could be cool exceptions though, where you play with physical elements (think miniatures on a board), but use the mixed reality augmentation for extra features and special effects. It sounds very niche, but could be interesting.
For Particular Reality, I have something I'd like to try for the menu/setup phase that could (optionally) take advantage of mixed reality features. But that's it: the core gameplay has no room for MR, and I feel that will be the case for most games.
Still reading?
This entry was quite long and contained a lot of rambling.
I might do things like this once in a while, but it won't be the norm: I expect to stay focused on the development progress.
Hopefully, it was still an interesting read.
See you soon for the next update!