WIP - Everyone loves zombies! (plus occasional screenshots)


Seems like a lot of coding for a minimal outcome. Or perhaps I’m not seeing the value of going through that code logic to upturn or downturn a foot.

Can’t the surface normals just be compared against the figure’s rotation, and then be done with it and move on?

What happens if someone slaps the zombie? Does its head turn round? Do the shoulders react? Does the full body go ragdoll (as per the sample, by removing the physics and collision shapes and adding bones and constraints)?

Just wondering if there isn’t an easier approach that works. Applying forces to a ragdoll body already looks fluid and realistic. (With forces, you can cook the zombie with one slap and sprinkle on some angular torque for some sweet eye candy.)


All that code is just needed for downhill locomotion, to pull the outer collision capsule (or just the character root) downwards enough such that the front foot can reach the ground!
Without it, the collision hull prevents the foot IK from reaching lower than the horizontal plane that the walk animation was designed for.

Uphill locomotion I’m not worried about - the capsule does all the work of positioning the character with satisfactory results (the error is too small to worry about) - but downhill looks very wrong when the leading foot cannot reach the ground. This is most evident when the character is placed such that one foot is standing on a raised ledge while the other is perched in mid-air, when it rightly should be much lower.
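To put a number on how far the root has to be dragged down, the required drop is simple trigonometry. A minimal self-contained sketch (plain C++, hypothetical function name - not engine code):

```cpp
#include <cassert>
#include <cmath>

// How far the character root must be pulled down so the leading foot,
// planted `strideLength` metres ahead, can touch a downhill slope of
// `slopeDegrees`. Pure trigonometry, independent of any engine.
float RequiredRootDrop(float strideLength, float slopeDegrees)
{
    const float slopeRadians = slopeDegrees * 3.14159265f / 180.0f;
    return strideLength * std::tan(slopeRadians);
}
```

On a 45-degree slope with a 0.5 m stride, the leading foot needs to reach a full 0.5 m below the planted foot - which the IK chain can’t deliver while the hull holds the root at “flat ground” height.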

Currently I have no reaction when the zombie is hit - I was thinking of using some additive blending of canned “twitch” animations, rather than full-body ik. I have implemented code for “partial ragdolls” to simulate broken limbs, but I’m going to need a few more animations before I can finish that stuff - what happens if we break both the legs? :slight_smile:
Unlike the sample, I don’t remove hull physics and add the ragdoll bodyparts/constraints at the last moment - I create the ragdoll bodyparts during initialization, leave them unconstrained, and put them into “kinematic mode” so they are animated along with the skinmesh - this lets me determine at runtime exactly which bodypart was hit.
For partial ragdoll, I switch all bodies on a bone chain from kinematic to constrained dynamic mode, so that chain is now in ragdoll mode, but the rest of the skeleton is still animated.
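As a toy model of that kinematic-to-dynamic switch (hypothetical types - the real implementation lives in Urho3D/Bullet components, not plain structs):

```cpp
#include <cassert>
#include <vector>

// Toy model of the partial-ragdoll switch: every bodypart starts in
// kinematic mode (animated along with the skeleton), and the parts on
// one bone chain are flipped to constrained dynamic mode, while the
// rest of the skeleton stays animated.
enum class BodyMode { Kinematic, ConstrainedDynamic };
struct BodyPart { int boneIndex; BodyMode mode; };

void EnterPartialRagdoll(std::vector<BodyPart>& parts, const std::vector<int>& chain)
{
    for (BodyPart& part : parts)
        for (int bone : chain)
            if (part.boneIndex == bone)
                part.mode = BodyMode::ConstrainedDynamic; // this part is now ragdoll
}
```

In the engine, flipping a part to dynamic mode would also mean enabling its joint constraint, but the bookkeeping is the same shape as above.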


OK, but why not just rotate the zombie, so that even on a downhill slope the traversed surface remains effectively horizontal relative to the zombie? Meaning a 45-degree incline being traversed would incline the zombie 45 degrees as well.

Alternatively, assuming the zombie should remain precisely vertical at all times, the depth of each step might be less or more than the depth of the previous step, based upon the normal of the surface underneath it at that point in time.

In that case the depth of the step is purely a function of the normal of the surface (or terrain) being traversed. This can work for some inclines, certainly for minimally inclined terrain.

Treating the zombie as a body of many parts, a change in step depth then causes other changes throughout the zombie body - you mention things like the hips being affected.

And yet it would seem the rotation could be applied to just the foot of the zombie for some terrain inclinations, without involving IK at all. As the inclination increases, the zombie might simply start sliding rather than walking.

What am I saying?

  1. So much logic and coding, and I’m not sure just what it achieves of lasting value. Is the code re-useable for other actions of the zombie? If you get the step perfect, can that same code be applied to other actions or reactions of the zombie (for instance, when it gets slapped, or some other envisioned motion or movement of the zombie)?

  2. Are there other, much simpler ways to get most of what you’re trying to do? Is implementing IK the most important piece of this puzzle you’re solving, or is getting the zombie to appear to step properly when examined closely the primary goal?

  3. Are you majoring in IK, when you should be minoring in it? Is IK really needed at all?

You admit this is just a rewrite of an approach used by Unreal. It might not be a viable approach - just one used for some specific reason, rather than a generally applicable and workable one.

If so, I get a gut feeling the overall approach to solving this could be simplified by doing something else once.

When I’m walking uphill or downhill it’s difficult and causes my body to do ‘unnatural’ motions which vary depending upon the slipperiness and incline amount. Trying to code up these ‘unnatural’ motions causes a lot of guessing and assumption making. If I drag a leg and try to walk uphill or downhill, it’s quite a brain teaser at times.


Thanks to some clarification about two particular lines of source in the IK Sample, I’ve been able to head down a completely different path to solving the same issue - which is the need to drag downwards the character when walking downhill.

Currently, I begin by computing the positions of the foot IK effectors as per the sample.
But before I execute the IK solver, I examine the positions of the foot effectors I’ve just computed, transform them into the character’s local space so I can tell which foot is in front of the other, and from that determine whether the character is trying to walk downhill or uphill.
If walking downhill, I can now compute an error term (in Y) for the unplanted and leading foot (indicating that the unplanted foot has crossed over in front of the planted foot), and apply it to the root node. I’m not done with the implementation - it’s both incomplete, and sub-optimal, but early tests look good.

        /// Note the worldspace position of each foot-effector
        Vector3 leftEffectorPos  = leftEffector_ ->GetTargetPosition();
        Vector3 rightEffectorPos = rightEffector_->GetTargetPosition();

        /// Note the worldspace Y coordinate of each foot-effector
        float leftEffectorHeight  = leftEffectorPos.y_;
        float rightEffectorHeight = rightEffectorPos.y_;

        /// Transform the effector positions from worldspace to the local space of the character
        Vector3 lel = node_->WorldToLocal(leftEffectorPos);
        Vector3 rel = node_->WorldToLocal(rightEffectorPos);

        /// If the right foot is planted, and the left foot is "in front" and lower than the right foot,
        /// ie the left foot has "crossed" in front of the planted right foot, and we're heading downhill
        if( rightFootPlanted && lel.z_ > rel.z_ && leftEffectorHeight < rightEffectorHeight + 0.1f )
            std::cout << "breaktime: z = " << lel.z_ << "," << rel.z_ << " and y = " << lel.y_ << "," << rel.y_ << std::endl;
        /// ELSE
        /// If the left foot is planted, and the right foot is "in front" and lower than the left foot,
        /// ie the right foot has "crossed" in front of the planted left foot, and we're heading downhill
        else if( leftFootPlanted && rel.z_ > lel.z_ && leftEffectorHeight > rightEffectorHeight + 0.1f )
            std::cout << "breaktime: z = " << lel.z_ << "," << rel.z_ << " and y = " << lel.y_ << "," << rel.y_ << std::endl;
Obviously this is not production quality code - I like to get things working, then optimize them.


IK is usually cheaper and generally more accurate than prediction on undulating terrain. I did try not to use IK, but in the end I had to decide between IK (which uses an iterative solver) and prediction (which is generally more prone to error, with little to no local knowledge of the geometry in the absence of a navmesh or even a butterfly mesh). I did not want a solution built on the assumption that the walk cycle is a linear, regular walk. IK, in combination with animation, seemed like the best option on uncertain terrain.


Why not rotate the zombie?

Unreal’s solution does lean the character a little (forwards, on uphill, and backwards, on downhill, counter to the surface normal) when walking up or down slopes.
I think this is accurate, as we need to compensate our center of mass when tackling a slope.
But the amount of rotation is small - it does not follow the surface normal per se; in fact, the direction of the lean is the reflection angle, ie counter to the normal. So far I haven’t bothered implementing it - I’m restricting myself to foot solving until I am satisfied with the results in all corner cases.
It would look silly to see a character rotated to 30 degrees on a 30 degree slope - the center of mass would be way off.


The current solution involves dragging the character’s root node down to match the height of the unplanted foot’s effector - when that foot has crossed ahead of the planted foot’s IK effector, and its effector is lower in height (ie we are walking downhill).
That is an incomplete solution, and appears to generate some jitter due to penetration correction of a dynamic hull, but generally seems to work, and indicates I am moving in the right direction.
Effectively, I move the entire character root down to match the height of the effector on the unplanted, leading foot. I let the IK solver deal with the fallout from doing so, but I know that the leading foot can at least reach the ground, minus its animated height.
After doing all that, I go on to apply the foot-slipping solver, which corrects the torso XZ position but ignores the Y correction we made, such that the character root is teleported in XZ so that the planted foot remains where it was planted.
The order of operations is in question - everything is in question - but it’s getting closer to decent.
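The ordering I’m currently using - the Y drag-down first, then an XZ-only correction for the planted foot - can be sketched with plain vectors (hypothetical names, no engine types):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// After the Y drag-down and the IK solve, translate the root in XZ only,
// so the planted foot returns to where it was planted; the Y correction
// made for the downhill drag is deliberately left untouched.
Vec3 CorrectRootXZ(Vec3 root, const Vec3& plantedFootNow, const Vec3& plantedFootAtPlantTime)
{
    root.x += plantedFootAtPlantTime.x - plantedFootNow.x;
    root.z += plantedFootAtPlantTime.z - plantedFootNow.z;
    return root; // root.y untouched
}
```

The real solver works on the character node’s world transform, but the separation of concerns - Y owned by the drag-down, XZ owned by the foot-slipping correction - is the same.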
Man, I am so tempted to get rid of the dynamic hull, but it’s good for certain things, and I already control animation speed / footfalls / foot-slipping based on the velocity of the hull in a dynamic world… still looking for a way out :stuck_out_tongue:


The camera has no lerp, so it’s flickery when we teleport the root of the model it points at, but at least here’s some content, with the downhill correction looking reasonable:
<https://www.dropbox.com/s/qzuo1wcyzylyvbz/FootSlipping.mp4?dl=0>


Zombie is now loading all Ragdoll information from its “Character Descriptor” xml file.
I use these custom xml files to “describe” my characters - they contain details for all the character’s animations, the names of important bones for Foot IK, ragdoll bodypart descriptors, and joint constraint descriptors, everything required to create a full ragdoll specific to that character. This makes my character class “data-driven” instead of “completely hardcoded and derived per character type”.

When I instantiate a ragdoll in “kinematic mode”, it perfectly follows the animations applied to the skeleton - problem is though, Bullet does not look for collisions between kinematic bodies, so it looks like a bad choice for detecting where the zombie was hit by, say, a kinematic sword in the kinematic hand of the player character.

I really need to use dynamic bodies - but tie them to the animated skeleton, disregarding Bullet dynamics while manually animating the RigidBodies.

Erwin suggests that I should be measuring the “stress” at the joint constraints, and using the resulting velocities to drive motors on the joints. I think that sounds needlessly complex?

I guess I am reaching the point where I have to reach out further than this community, but I thought it might be worth asking before I dig myself into a new hole.


I found a slightly improved algorithm for “pulling down” the root node - simply put, we move it to the animated height of the lower foot, while accounting for the bindpose offset of the foot from the character root.
Anytime in the walk cycle, this is true. No matter which direction we are moving, or on what slope.
Will implement shortly, I was side-tracked slightly by another issue.


Today I performed a technical fix on my “strafe right” animation.
I had scaled the length of my four cardinal walking animations to be the same (45 frames).

When walking forwards, backwards, or left-strafing, the animations were generally in agreement - the left foot would fall first, and roughly on the same keyframe, and later the right foot, again roughly on the same keyframe.
But the right-strafing animation had been created as a mirror of the left-strafe animation. So it began on the right foot, and although I could still set up animation triggers on the footfalls, the animation itself was unsuitable for blending with forward and backward animations to create eight cardinal directions (four pure animations, and four blended “diagonal” animations).

I used blender to cut and paste half of the animation keys on the right-strafe animation, so that the left foot was the first to move.

Now I’ve tested my changes - animation blending is still very twitchy and unsatisfying - the walk cycles are just too different to blend well. I am not blaming Urho’s blending implementation, but I am seeing random flickering during the advancement of two blended walks of equal length, whose footfalls occur at or near the same frame, with the left foot leading in all animations.

I’ll need to create some diagonal walk animations to suit myself, based on baking the existing ones in Blender. This will give me eight cardinal-direction animations and, potentially, 16 that blend more nicely.

I’m also starting to experiment with script objects - hotloading scripts is a lot cooler than rebuilding the app, and scripted classes can be promoted to C++ based on their runtime cost/benefit ratio.

    // Experimental:.. try to use pre and post physics events to deal with constraining dynamic bodies to animated skeleton
    SubscribeToEvent( E_PHYSICSPRESTEP,  URHO3D_HANDLER(Character, HandlePrePhysicsUpdate ));
    SubscribeToEvent( E_PHYSICSPOSTSTEP, URHO3D_HANDLER(Character, HandlePostPhysicsUpdate));

Essentially, I want to animate a constrained set of dynamic bodies: I want the bodies to derive their momentum from the animation. The main problem with that idea, is that in Urho, animation controllers are one of the last things to get updated in a frame.

When bodies are kinematic, Bullet will ask Urho RigidBody for their world transform (via motionstate interface), but when they are dynamic, Bullet will attempt to drive their node transforms (again, via motionstate), which indicates that, for dynamic bodies, I should at least wait until after the physics has updated… Well, I tried that, and it didn’t appear to work as expected, so I’ll take some more time to trace values and then do some head-scratching based on the empirical data.


Today, I managed to constrain dynamic bodies to the bones of an animated character.
The result is a little bit shaky, but acceptable, given that competing systems are attempting to adjust the same scene nodes.

The way I achieved this, was to listen for the “post-physics update” event, which tells me that Bullet has just finished messing with my rigidbodies, at which point I call this method:

    void Character::copyModelStateToRagdoll()
    {
        for (unsigned i = 0; i < ragdollParts_.Size(); ++i)
        {
            /// Identity local transform: directly inherit the bone's world transform
            ragdollParts_[i]->SetTransform( Vector3::ZERO, Quaternion::IDENTITY );
            RigidBody* body = ragdollParts_[i]->GetComponent<RigidBody>();
            body->SetLinearVelocity( Vector3::ZERO );   /// zero velocity after the teleport
        }
    }

Note that I connect my rigidbodies not directly to the bone nodes, but to a child node of each bone node, which I call “descaling nodes” (this particular model has not been correctly scaled, one of the nodes near the root introduces a scale factor that needs to be “cancelled” prior to attaching anything to the rig).
The above method forces the local transform of the rigidbody parent nodes to identity, such that they directly inherit the world transform of the bone nodes. This causes the dynamic rigidbodies to follow the animations applied to the bones - not as smoothly as kinematic bodies, but given the advantages of dynamic bodies over kinematic ones, I’ll call it a win :slight_smile:
Perhaps I’ll upload a video later today, which shows the “shaky but acceptable” results of my efforts.
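The descaling trick itself is just a reciprocal - here’s a sketch using a scalar scale (the real nodes carry full Vector3 scales, and the names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

// A "descaling node" is a child whose local scale is the reciprocal of
// the scale accumulated from its bone parent, so anything attached to
// the child sees an effective world scale of 1.
float DescalingChildScale(float accumulatedParentScale)
{
    return 1.0f / accumulatedParentScale;
}
```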

Oh - I should mention that I’m also creating my ragdoll constraints upfront, and have discovered that the constraint resolver is entirely responsible for the “noise” in the body transforms - I simply need to disable my constraints until I need them, or add them when I need them. For now, I chose to create them upfront, but disable them. They exist, but they are not active.

At this stage, I am ready to try to implement partial ragdolls :slight_smile:


I’m uploading a short video of “Animated Dynamic Ragdoll” (applied to Zombie) which also implements “foot-planting” and “foot-ik”. The player character currently has no ragdoll, but does have everything else.

Currently, I zero out all velocity (and force) when I teleport the dynamic bodies to match the animated pose, but I intend to deliberately omit that step for bodyparts that are “in ragdoll mode”. At the moment that a bodypart enters “ragdoll mode”, it should inherit the velocity (and implied forces) that are due to the animation. This should result in a “clean handover” from animated mode to ragdoll mode, with the ragdoll dynamics “continuing” from the last known animated pose.
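One way to compute that inherited velocity - assuming nothing about the engine - is a finite difference of the last two animated positions (hypothetical names):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Finite-difference velocity of a bodypart from its last two animated
// positions; applied at the moment a part switches from kinematic
// (animated) mode to dynamic (ragdoll) mode, for a seamless handover.
Vec3 AnimatedVelocity(const Vec3& posNow, const Vec3& posPrev, float dt)
{
    return { (posNow.x - posPrev.x) / dt,
             (posNow.y - posPrev.y) / dt,
             (posNow.z - posPrev.z) / dt };
}
```

At the switch to ragdoll mode, this velocity would be assigned to the now-dynamic body instead of being zeroed out.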


My next step is to process my “jump” animation, to get rid of that nasty old root motion.
I wish that our export toolchain could just optionally do that.
I use a dynamic character controller scheme, so I don’t want root motion - I provide the motion myself, while synchronizing the animation speed. This gives me a lot more control over a character on uncertain terrain; although it is harder to do and harder to get right than a purely kinematic solution, it is more plausible, and can adapt within a margin of error.


Today I added a “falling” animation, and some logic to deal with that.
Somewhere there’s a logic bug I missed, because the results were not as expected.
Time to remap and verify my logic.


Today I decided to set aside my Dynamic Character controller, and began experimenting with a Kinematic Character controller. To be honest, there’s a lot NOT to love about kinematic objects with respect to Bullet physics, but I always explore my options before choosing what feels right for any particular game.

The dynamic controller used dynamic velocity to adjust the playback speed of locomotion animations, which were devoid of root motion.

The kinematic controller will be taking a very different approach: animations will contain root motions, and I’ll be using them to drive the kinematics, so that the “feel” that the animators gave to character animations are not lost to the physics engine. I hope to solve foot-slipping “for free”, while still applying foot-ik. Animation playback speed will determine motion speed, and not the other way around.
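The extraction step might look something like this minimal sketch (hypothetical names; in the engine, the delta would be fed to something like btKinematicCharacterController::setWalkDirection, and the root bone re-zeroed each frame):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Per-frame root motion extraction: the delta the animation moved the
// root this frame becomes the kinematic controller's walk direction,
// and the reference position is updated so motion never accumulates.
struct RootMotionExtractor
{
    Vec3 lastRootPos{0.0f, 0.0f, 0.0f};

    Vec3 Extract(const Vec3& rootPosNow)
    {
        Vec3 delta{rootPosNow.x - lastRootPos.x,
                   rootPosNow.y - lastRootPos.y,
                   rootPosNow.z - lastRootPos.z};
        lastRootPos = rootPosNow;  // in-engine, the root bone would be snapped back here
        return delta;              // feed this to the kinematic controller
    }
};
```

This way the animators’ timing survives intact: animation playback speed determines motion speed, not the other way around.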

For now, I’m basing my work on that of 1vanK, who provided us with a small wrapper around btKinematicCharacterController, but I’m quite willing to derive from the Bullet class, or even write a proxy based on it, depending on how things turn out… if this works out as I hope, I’ll implement the resulting class as a physics component of Urho3D, and issue a PR.


The main difference between my new kinematic controller and the original (by 1vanK) is that I don’t want to use keyboard input to drive the kinematics - instead, I will be using keyboard input ONLY to play animations (containing root motions), and extracting kinematic motion information from the animated model’s node… this will present a small challenge, since animation and physics updates are asynchronous, and in those frames where both events fire, the order of execution is not ideal - my physics update will only have access to the results of the previous frame’s animated state, and I’ll need to track animated state changes in between physics frames… this could make life interesting :slight_smile:

What I’m pointing out - and please correct me if I am wrong - is that animation is pretty much the last thing that happens in a frame update. Assuming that physics runs at a lower framerate (as we would expect, given we can interpolate it for rendering), then in those frames where the physics update does fire, it fires before the animation is updated. Therefore we can’t apply the results of the current frame’s animation to the physics state - the physics state is always going to represent the previous animation frame.

In this modern age of graphics hardware, I expect that the update/render framerate is always higher than the physics update rate. Currently I get around 600 FPS (vsync disabled) versus the fixed physics rate of 30 FPS. I like to unlock vsync during development so I can budget my “spend” and quickly notice any serious issues that I have inadvertently, and recently, introduced - if I add a bottleneck by accident, I detect it almost immediately.


Now I have the new character controller implemented, I am extending it to deal with animations - this is where the fun begins for me. One of the first issues relates to non-looping animations.

So far, I’ve concentrated a lot on “locomotion”, and the animations that drive it. Walking/Running in any direction is always a Looping animation. But Jumping usually is not a looping animation.

That’s all ok, until you hold down the jump button, and expect to be able to jump again!
The character will jump, but its animation is frozen in time at the last keyframe, and remains so until you change animations :frowning: Calling Stop on the animation is not enough to fix it. And we can’t remove the AnimationState from the controller, because the remove method is private.

I tried a few different ways to kill off that animation state from its controller, but failed so far to find a way around this.


The biggest change when switching to the kinematic player controller is the lack of friction - movement feels very “plastic”, and it stops immediately when you release the controls - so it seems I’ll have to hack in a basic friction and drag model. I’m working with btPairCachingGhostObject under the hood; it has some advantages in terms of optimizing collision queries, but it doesn’t impose dynamics.
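For that friction/drag hack, a framerate-independent exponential decay is a cheap stand-in for the damping a dynamic body gets for free (hypothetical names and tuning):

```cpp
#include <cassert>
#include <cmath>

// Framerate-independent exponential damping, applied to the horizontal
// velocity each physics step. `drag` is a tunable per-second decay rate
// (a hypothetical value - real friction depends on the surface).
float Damped(float velocity, float drag, float dt)
{
    return velocity * std::exp(-drag * dt);
}
```

Because the decay is exponential in `dt`, stepping twice at `dt/2` gives the same result as stepping once at `dt`, so the feel doesn’t change with the physics timestep.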