Oh, I intend to look into it, but I was hoping, in vain, for a link to a document. I admit that I am lazy by nature, but not lazy by definition. Lazy like a fox, to quote Homer Simpson.
I appreciate your help, I truly do; it should not take me too long to stop asking the “stupid questions”.
Today, I gave the player character an idle animation, and added three more walk animations - so now I have four walks: left, right, forward and backward… time to start experimenting with weighted blending. I’m of the opinion that basing the blend weight only on which keys are being pressed is the wrong idea: I have a linear velocity and a facing direction, and feel compelled to use those to determine the weighting for blending these animations. Anyone have tips on this?
I am not certain that the animations have the same length, which could be a problem too. I currently set the animation speed in proportion to the linear velocity, but I didn’t account for blending animations of different lengths.
Today I am trying to figure out a cheap way to implement Unity’s notion of BlendTrees for blending locomotion animations. Ideally, any number of input animations can be blended together, based on the velocity of the character (with respect to the direction it faces), but in practice we only have to worry about a maximum of two animations at any moment. It works by defining a ‘characterspace direction’ in 2D for each animation to be blended - essentially we’re distributing animations around a 2D circle defined in the unrotated / identity space of the character, i.e. relative to the direction the character is facing.
My current idea goes something like this:
The character is moving with some linear velocity - a 3D vector.
Step one is to transform that vector into the local space of the character and drop the Y component, so we can think in terms of a 2D circle on the XZ plane (and normalize it, so it’s just a 2D direction with unit length), and think in terms of trigonometry, where zero degrees is our Right vector and ninety degrees is our Up vector.
Step two, without making assumptions about how many animations are involved:
We find the 2D dot product between the (transformed, 2D) velocity vector and the direction associated with each animation, capturing the results in an array. If a dot value is negative, we clamp it to zero, which effectively disables that animation. Once we have all the dot values, we normalize the array by dividing each value by the sum of all the (clamped) values. The remaining positive, normalized dot values are the weights we should apply to our animations.
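The two steps above can be sketched in plain C++ like this (engine-agnostic; the `Vec2` type and function names are mine, not from any engine):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal 2D vector for the sketch - in practice you'd use the engine's type.
struct Vec2 { float x, y; };

static float Dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

// Given the character-space 2D movement direction (unit length) and one
// 'characterspace direction' per animation, return normalized blend weights.
// Negative dot products are clamped to zero (animation disabled) BEFORE
// the weights are normalized by dividing each by the sum of the clamped values.
std::vector<float> BlendWeights(const Vec2& moveDir, const std::vector<Vec2>& animDirs)
{
    std::vector<float> weights(animDirs.size());
    float sum = 0.0f;
    for (size_t i = 0; i < animDirs.size(); ++i)
    {
        weights[i] = std::max(Dot(moveDir, animDirs[i]), 0.0f); // branchless clamp
        sum += weights[i];
    }
    if (sum > 0.0f)
        for (float& w : weights)
            w /= sum;
    return weights;
}
```

With the four cardinal walk directions (right, forward, left, backward) and the character moving diagonally forward-right, this yields a 0.5/0.5 blend of the forward and strafe-right walks, with the other two disabled - and at most two animations ever get a non-zero weight, as noted above.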
I’m aware that this could be optimized by computing which quadrant we’re working in, however that introduces a bunch of conditional logic, which invites branch mispredictions and pipeline stalls. For a fistful of operations, it’s generally faster to avoid the branches and simply perform a few operations that weren’t strictly necessary for the solution of the problem.
Anyone done any work in this area? I would love to hear your ideas / opinions!
See the Doom animation talk:
They’re quite clear on the animation side. There were some older GDC animation talks about correcting for foot sliding and root motion, which basically boiled down to: correct for the constant motion, rather than trying to lock the root in place.
I’ve been trying out the quat based retargeting from DOOM and using jiggle bones on everything for naturalizing stilted-programmer-animation. Not quite there, but getting there.
I don’t plan to recreate GDC solutions, I plan to find the cheapest path that works for me. I don’t assume GDC’s way is the right path just because someone there said it - maths has two truisms: the direct path to the answer, and the shortcut to the answer.
Today I fixed a bug in my foot-planting solution whereby the zombie was able to walk straight through static scenery. The fix involved two parts - first, I am careful to ignore the Y component of my error term, because I want the physics hull to look after changes of position in the Y axis. This looked a lot better, but still not good enough.
Secondly, I needed to add a raycast to correct the resulting Y coordinate to account for the fact that my hull is a Capsule, not a box - the feet are not positioned neatly at the bottom of the capsule.
Now the zombie is able to roam “without foot slipping” across uneven terrain without sinking into it, floating above it, or any other weirdness. Results are “close to perfect”
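The two parts of the fix can be sketched like so (the names, and the idea of feeding in the raycast result and a foot-bone offset as plain floats, are my own framing - in the engine the ground height comes from an actual downward physics raycast):

```cpp
#include <cmath>

// Minimal 3D vector for the sketch.
struct Vec3 { float x, y, z; };

// Part one: the foot-plant error term with the Y component dropped,
// so the physics hull stays in charge of vertical position.
Vec3 PlantError(const Vec3& plantedPos, const Vec3& currentPos)
{
    return Vec3{plantedPos.x - currentPos.x, 0.0f, plantedPos.z - currentPos.z};
}

// Part two: correct the foot's Y using the ground height reported by a
// downward raycast, plus whatever vertical offset the foot bone sits at
// above its ground-contact point - because the hull is a capsule, its
// bottom is not where the feet are.
float CorrectedFootY(float raycastGroundY, float footHeightAboveSole)
{
    return raycastGroundY + footHeightAboveSole;
}
```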
The only current issue with the zombie physics is that it’s using weak impulses to drive the character, which I have deliberately given a lot more mass than the player - the result is that a slow-moving zombie can’t climb gentle slopes; it needs to get some momentum happening to make it up a hill.
I’ll try switching to a force-based controller later today, and see if I can give the zombie some more “grunt”.
[EDIT] Switched to using Forces instead of Impulses - will take a bit of tweaking to get the values right, but there’s a lot more control, and no apparent problems with “hill-climbing”.
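The difference can be seen with a toy integrator (plain C++, nothing engine-specific, and the mass/strength numbers are made up for the demo): a one-off impulse gives a single fixed velocity kick (dv = J/m), while a force applied every physics step keeps accelerating the body (dv = F·dt/m per step) - so a heavy body under a sustained force keeps gaining the speed it needs to crest a slope.

```cpp
#include <cmath>

struct Body { float mass; float velocity; };

// An impulse is an instantaneous change in momentum.
void ApplyImpulse(Body& b, float impulse) { b.velocity += impulse / b.mass; }

// A force acts over the duration of a physics step.
void ApplyForce(Body& b, float force, float dt) { b.velocity += force * dt / b.mass; }
```

For a 200-unit-mass zombie, an impulse of 100 delivers a one-off 0.5 velocity change, while a force of 100 applied every step delivers that same 0.5 for every second it keeps pushing - which is the extra “grunt” on a slope.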
I have no root motion, and the walk cycle is not constant, not linear… In the case of my zombie, we don’t want constant motion - it’s limping, or staggering, and in this case constant motion actually causes foot-slipping. Therefore, I use a constant (but shifting) frame of reference, being the world position of the planted foot, assuming that only one foot (or neither) is planted at any time. The frame of reference is shifting (not moving) at a rate dictated by the animation, not by any constant. It’s actually working pretty nicely, though there are still some small teething issues to sort out.
Started working on blending locomotion animations (not using a proper blend tree, I’m using some switched logic and AnimationController layers).
First impressions are pretty bad - but the assets need some polish. My animations are not of equal length - it looks like Urho’s AnimationController somehow compensates for this, because there’s no visual glitching occurring in the legs of my character, but there is some glitching in the arms and hands, which are further from the root node. I’ll start by making all my walk animations the same play length, and see if that improves things at all. If all else fails, I can ask an artist for some diagonal walking animations.
Today I synchronized the play length of my four walk animations (for player character).
Previously, they had the following lengths:
WalkForwards = 40 frames
WalkBackwards = 30 frames
StrafeLeft = 45 frames
StrafeRight = 45 frames
I chose to make them all 45 frames in length.
In order to reduce the glitching that remained when blending / moving diagonally, I set the Weight of forward/backward to 0.7, and the Weight of left/right to 0.3
The only thing that’s stopping me from applying my “foot-planting” solution to the player character, is related to how I created the “strafe right” animation - it’s a mirror of “strafe left”, which means that the leading foot is not the same in those animations - I’ll need to cut and paste half of my keyframes in order to rectify that.
Rather than screw around with the playback speed of individual animations, I used Blender to adjust the play lengths of my animations… if anyone is interested in how to do that, feel free to ask me.
Work has begun on an improved ragdoll implementation.
The idea is to attach our ragdoll armature to our model on instantiation, with all the bodyparts set to Kinematic mode, so that they are driven by animations. If done correctly, we don’t care about the initial pose - bodyparts are instantiated in “bonespace”. I did not rotate these bodyparts! When it comes time to switch (some or all of) the armature to ragdoll mode, the bodyparts are already aligned to the skeletal armature. There are other advantages, too.
If this experiment works out well, I can probably afford to throw away the coarse outer collision hull entirely.
CreateRagdollPart(adjustNode, "RightUpLeg", ShapeType::SHAPE_CAPSULE, Vector3(0.2f, 0.45f, 0.0f), Vector3(0.0f, -0.2f, 0.0f), Quaternion::IDENTITY);
CreateRagdollPart(adjustNode, "RightLeg", ShapeType::SHAPE_CAPSULE, Vector3(0.2f, 0.45f, 0.0f), Vector3(0.0f, -0.2f, 0.0f), Quaternion::IDENTITY);
CreateRagdollPart(adjustNode, "LeftUpLeg", ShapeType::SHAPE_CAPSULE, Vector3(0.2f, 0.45f, 0.0f), Vector3(0.0f, -0.2f, 0.0f), Quaternion::IDENTITY);
CreateRagdollPart(adjustNode, "LeftLeg", ShapeType::SHAPE_CAPSULE, Vector3(0.2f, 0.45f, 0.0f), Vector3(0.0f, -0.2f, 0.0f), Quaternion::IDENTITY);
CreateRagdollPart(adjustNode, "RightArm", ShapeType::SHAPE_CAPSULE, Vector3(0.15f, 0.25f, 0.0f), Vector3(-0.15f, 0.0f, 0.0f), Quaternion(0.0f, 0.0f, 90.0f));
CreateRagdollPart(adjustNode, "RightForeArm", ShapeType::SHAPE_CAPSULE, Vector3(0.1f, 0.25f, 0.0f), Vector3(-0.15f, 0.0f, 0.0f), Quaternion(0.0f, 0.0f, 90.0f));
CreateRagdollPart(adjustNode, "LeftArm", ShapeType::SHAPE_CAPSULE, Vector3(0.15f, 0.25f, 0.0f), Vector3(0.15f, 0.0f, 0.0f), Quaternion(0.0f, 0.0f, 90.0f));
CreateRagdollPart(adjustNode, "LeftForeArm", ShapeType::SHAPE_CAPSULE, Vector3(0.1f, 0.25f, 0.0f), Vector3(0.15f, 0.0f, 0.0f), Quaternion(0.0f, 0.0f, 90.0f));
I’m starting with a code-driven approach for testing and debug purposes, but as soon as I’m happy, I’ll shove this data into a file and load it per character, as I already do for animation lists.
At this point, I don’t even need physics constraints between bodyparts, so this armature is still incomplete, yet each bodypart is doing what it should - in kinematic mode, the bodypart constraints are already enforced without any need for physics constraints.