I have some basic ideas about the hand input I need to handle, which I'll explain along the way.
One thing is that I'd like to make it symmetrical, so that you can use either the right or the left hand to perform the same actions.
For locomotion and combat I'm drawing inspiration from the bit of Aikido practice I did a few years ago; a key element there was being able to perform actions symmetrically with both hands and feet, and to switch guard and forward direction swiftly. I'll try to recreate that feeling and symmetry.
Additionally, symmetry is good for accessibility and saves you from handling the classic "preferred hand" configuration option.
I woke up late (after staying up equally late to complete the weapon pick-up yesterday) and spent what remained of the morning trying out the Meta Interaction SDK examples.
I must say I was quite impressed: having these features a few years ago (when I was building this kind of interaction myself using LEAP Motion) would have saved me a lot of time.
For the scope of `Particular Reality`, I was particularly happy about these examples:

- `DebugBodyJoints`
- `DebugGesture`
- `DebugPose`

The tool provided to capture custom poses (`HandGrabPoseTool`) was also pretty cool.
The good news, after trying these demos, is that I can easily incorporate features that rely not only on hand and head poses (which is what I expected), but also on the whole upper-body skeleton "guessed" by the SDK, which works better than I expected and better than other IK systems I have tried in the past.
And who knows, maybe before I'm able to complete the full game we might get feet tracking with the Quest 3? Fingers crossed.
I edited my scene so that it uses the Interaction SDK instead of the basic hand/controller input, keeping the same behaviour I had yesterday.
After that, I followed the [SDK tutorial] to understand how the pose detection works.
I only changed things a bit at the end, because I never use the `UnityEvents` wiring in the inspector as suggested.
The data-centric approach of defining poses through inspector settings and `ScriptableObjects` is fine, but calling methods on object behaviours on the basis of inspector configuration is a step too far.
For me, to stay manageable and scale well, the logic must live in the code, not hidden in the inspector settings of some node buried deep in a giant scene tree.
Consequently, I started developing my own `InputManagerBhv` that references the `ActiveStateSelector` components (configured following the Meta tutorials) and catches all the pose detection events, acting as a centralized input handling hub. These events are processed and used to update the game state in a robust way, isolating the input handling from the gameplay logic (which is generally good practice, especially for being able to easily port to other platforms).
I defined four poses: a take and a drop for each hand.
I then expanded yesterday's logic to handle both hands and to use different inputs for take and drop (yesterday, both actions were performed by the same "pinch" gesture, which I have now assigned back to the portal activation... for now).
Incidentally, thanks to the way I structured the logic, it's also possible to pass a held weapon from one hand to the other.
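As a rough illustration of the symmetric handling, here's how the weapon-holding state could react to the pose events, building on the hypothetical types from the hub sketch above. `WeaponBhv` and its attach/release calls are placeholders for whatever the real grab code does; the point is only that take, drop, and the hand-to-hand pass all fall out of the same per-hand state.

```csharp
using UnityEngine;

// Minimal stand-in for the real weapon script: just attach/release hooks.
public class WeaponBhv : MonoBehaviour
{
    public void AttachTo(HandSide hand) { /* parent to the hand anchor, etc. */ }
    public void Release() { /* unparent, re-enable physics, etc. */ }
}

// Sketch of the gameplay side: one weapon slot per hand, driven by the
// PoseEvents raised by InputManagerBhv above.
public class WeaponHolderBhv : MonoBehaviour
{
    [SerializeField] private InputManagerBhv _input;

    private WeaponBhv _leftWeapon;
    private WeaponBhv _rightWeapon;

    private void OnEnable()  => _input.OnPose += HandlePose;
    private void OnDisable() => _input.OnPose -= HandlePose;

    private void HandlePose(PoseEvent e)
    {
        if (!e.Started) return;                // react only when the pose begins

        if (e.Kind == PoseKind.Take) TryTake(e.Hand);
        else                         Drop(e.Hand);
    }

    private void TryTake(HandSide hand)
    {
        if (GetHeld(hand) != null) return;     // this hand is already full

        // Taking the weapon held by the other hand doubles as a hand-to-hand
        // pass; otherwise the real project would query what the hand touches.
        WeaponBhv candidate = GetHeld(Other(hand));
        if (candidate == null) candidate = FindNearbyWeapon(hand);
        if (candidate == null) return;

        if (candidate == GetHeld(Other(hand)))
            SetHeld(Other(hand), null);        // the other hand lets go

        SetHeld(hand, candidate);
        candidate.AttachTo(hand);
    }

    private void Drop(HandSide hand)
    {
        WeaponBhv held = GetHeld(hand);
        if (held == null) return;
        held.Release();
        SetHeld(hand, null);
    }

    private WeaponBhv FindNearbyWeapon(HandSide hand)
    {
        return null;                           // placeholder: real lookup omitted
    }

    private WeaponBhv GetHeld(HandSide hand) =>
        hand == HandSide.Left ? _leftWeapon : _rightWeapon;

    private void SetHeld(HandSide hand, WeaponBhv weapon)
    {
        if (hand == HandSide.Left) _leftWeapon = weapon;
        else                       _rightWeapon = weapon;
    }

    private static HandSide Other(HandSide hand) =>
        hand == HandSide.Left ? HandSide.Right : HandSide.Left;
}
```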
That's it for today! Tomorrow, I think I'll add some more gestures to replace the portal activation (still using the default pinch) and to perform a ranged attack.