WIP Screenshot - Everyone loves zombies!


#63

Seems like a lot of coding for a minimal outcome. Or perhaps I’m not seeing the value of going through that code logic to upturn or downturn a foot.

Can’t the surface normals be compared to figure out the rotation, and then you’re done and can move on?

What happens if someone slaps the zombie? Does its head turn around? Do the shoulders react? Does the full body go ragdoll (as per the sample, by removing physics and collision shapes and adding bones and constraints)?

Just wondering if there isn’t an easier approach that works. Applying forces to a ragdoll body already appears fluid and realistic. (With forces, you can cook the zombie with one slap and sprinkle on some angular torque for some sweet eye candy.)


#64

All that code is just needed for downhill locomotion, to pull the outer collision capsule (or just the character root) downwards enough that the front foot can reach the ground!
Without it, the collision hull prevents the foot IK from reaching any lower than the horizontal plane that the walk animation was designed for.

Uphill locomotion I’m not worried about - the capsule does all the work of positioning the character with satisfactory results (the error is too small to worry about) - but downhill looks very wrong when the leading foot cannot reach the ground. This is most evident when the character is placed such that one foot is standing on a raised ledge while the other is perched in mid-air, when it rightly should be much lower.

Currently I have no reaction when the zombie is hit - I was thinking of using some additive blending of canned “twitch” animations, rather than full-body ik. I have implemented code for “partial ragdolls” to simulate broken limbs, but I’m going to need a few more animations before I can finish that stuff - what happens if we break both the legs? :slight_smile:
Unlike the sample, I don’t remove hull physics and add the ragdoll bodyparts/constraints at the last moment - I create the ragdoll bodyparts during initialization, leave them unconstrained, and put them into “kinematic mode” so they are animated along with the skinmesh - this lets me determine at runtime exactly which bodypart was hit.
For partial ragdoll, I switch all bodies on a bone chain from kinematic to constrained dynamic mode, so that chain is now in ragdoll mode, but the rest of the skeleton is still animated.
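For anyone curious what that mode switch looks like in the abstract, here's a minimal sketch of walking a bone chain and flipping its bodies from kinematic to dynamic. The `BodyPart` struct is a made-up stand-in for illustration, not my actual engine types:

```cpp
#include <string>
#include <vector>

// Hypothetical minimal stand-in for a ragdoll body part; in the real
// implementation these would wrap engine RigidBody objects.
struct BodyPart
{
    std::string boneName;
    std::string parentBone;   // empty for the root bone
    bool kinematic = true;    // kinematic = driven by the animation
};

// Switch every body from 'startBone' down through its descendants into
// dynamic (ragdoll) mode, leaving the rest of the skeleton kinematic.
void EnablePartialRagdoll(std::vector<BodyPart>& parts, const std::string& startBone)
{
    std::vector<std::string> open{startBone};
    while (!open.empty())
    {
        std::string bone = open.back();
        open.pop_back();
        for (BodyPart& p : parts)
        {
            if (p.boneName == bone)
                p.kinematic = false;          // this part is now simulated
            if (p.parentBone == bone)
                open.push_back(p.boneName);   // descend into child bones
        }
    }
}
```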


#65

Ok, but why not just rotate the zombie, so that even on a downhill slope the traversed surface remains effectively horizontal for locomotion across it? Meaning a 45-degree incline being traversed would incline the zombie 45 degrees also.

Alternatively, assume the zombie should remain precisely vertical at all times, so the depth of each step might be less or more than the depth of the previous step, based upon the normal of the surface underneath it at that point in time.

In that case the depth of the step is only a function of the normal of the surface (or terrain) being traversed. This can work for some inclines, certainly for minimally inclined terrain.
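Roughly sketched (a hypothetical helper, assuming a planar slope, a unit surface normal, and Y up - not code from the project):

```cpp
#include <cmath>

// "Step depth from surface normal": on a planar slope whose unit normal
// has vertical component 'normalY', the slope angle theta satisfies
// cos(theta) = normalY, so a horizontal step of 'stepLength' changes
// height by stepLength * tan(theta).
float StepHeightDelta(float normalY, float stepLength)
{
    float theta = std::acos(normalY);       // slope angle from the vertical
    return stepLength * std::tan(theta);    // vertical change over one step
}
```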

In treating the zombie as a body of many parts, the depth of the step changing then causes other changes throughout the zombie body, where you mention things like the hips being affected.

And yet it would seem the rotation can be applied only to the foot of the zombie for some terrain inclinations and not have to involve IK. As the inclination increases, the zombie might just start sliding rather than walking.

What am I saying?

  1. So much logic and coding, and I’m not sure just what it achieves of lasting value. Is the code reusable for other actions of the zombie? If you get the step perfect, can that same code be applied to other actions or reactions of the zombie (for instance, when it gets slapped, or some other envisioned motion or movement of the zombie)?

  2. Are there other, much simpler ways to get most of what you’re trying to do? Is implementing IK the most important piece of this puzzle you’re solving, or is getting the zombie to appear to step properly when examined closely the primary goal?

  3. Are you majoring in IK, when you should be minoring in it? Is IK really needed at all?

You admit this is just a rewrite of some approach used by Unreal. It might not be a viable approach, just one used for some specific reason rather than a generally applicable and workable approach.

If so, I get a gut feeling the overall approach to solving this could be simplified by doing something else once.

When I’m walking uphill or downhill it’s difficult and causes my body to do ‘unnatural’ motions which vary depending upon the slipperiness and incline amount. Trying to code up these ‘unnatural’ motions causes a lot of guessing and assumption making. If I drag a leg and try to walk uphill or downhill, it’s quite a brain teaser at times.


#66

Thanks to some clarification about two particular lines of source in the IK Sample, I’ve been able to head down a completely different path to solving the same issue - the need to drag the character downwards when walking downhill.

Currently, I begin by computing the positions of the foot IK effectors as per the sample.
But before I execute the IKSolver, I examine the positions of the foot effectors I’ve just computed and transform them into the local space of the character, so I can tell which foot is in front of the other and determine whether the character is trying to walk downhill or uphill.
If walking downhill, I can now compute an error term (in Y) for the unplanted, leading foot (indicating that the unplanted foot has crossed over in front of the planted foot), and apply it to the root node. I’m not done with the implementation - it’s both incomplete and sub-optimal - but early tests look good.

        /// Note the worldspace position of each foot-effector
        Vector3 leftEffectorPos  = leftEffector_ ->GetTargetPosition();
        Vector3 rightEffectorPos = rightEffector_->GetTargetPosition();

        /// Note the worldspace Y coordinate of each foot-effector
        float leftEffectorHeight  = leftEffectorPos.y_;
        float rightEffectorHeight = rightEffectorPos.y_;        
        
        /// Transform the effector positions from worldspace to local space of character
        Vector3 lel = node_->WorldToLocal(leftEffectorPos);
        Vector3 rel = node_->WorldToLocal(rightEffectorPos);

        /// If right foot is planted, and left foot is "in front" and is lower than right foot
        /// ie the left foot has "crossed" in front of planted right foot, and we're heading downhill
        if( rightFootPlanted && lel.z_>rel.z_ && leftEffectorHeight < rightEffectorHeight+0.1f)
        {
            std::cout << "breaktime :z = " << lel.z_ <<","<< rel.z_ << " and y=" << lel.y_<<","<<rel.y_<<std::endl;
        }
        /// ELSE
        /// If left foot is planted, and right foot is "in front" and is lower than left foot
        /// ie the right foot has "crossed" in front of planted left foot, and we're heading downhill
        else if( leftFootPlanted && rel.z_>lel.z_ && leftEffectorHeight > rightEffectorHeight+0.1f)
        {
            std::cout << "breaktime :z = " << lel.z_ <<","<< rel.z_ << " and y=" << lel.y_<<","<<rel.y_<<std::endl;
        }

Obviously this is not production quality code - I like to get things working, then optimize them.


#67

IK is usually cheaper and generally more accurate than prediction on undulating terrain. I did try to avoid IK, but in the end I had to decide between IK (which uses an iterative solver) and prediction (which is generally more error-prone, given little to no local knowledge of the geometry in the absence of a navmesh or even a butterfly mesh). I did not want a solution built on the assumption that the walk cycle was a linear, regular walk. IK, in combination with animation, seemed like the best option on uncertain terrain.


#68

Why not rotate the zombie?

Unreal’s solution does lean the character a little (forwards, on uphill, and backwards, on downhill, counter to the surface normal) when walking up or down slopes.
I think this is accurate, as we need to compensate our center of mass when tackling a slope.
But the amount of rotation is small - it does not follow the surface normal per se - in fact, the direction of the lean rotation is the reflection angle, ie, counter to the normal. So far I haven’t bothered implementing it; I’m restricting myself to foot solving until I am satisfied with the results in all corner cases.
It would look silly to see a character rotated to 30 degrees on a 30 degree slope - the center of mass would be way off.


#69

The current solution involves dragging down the root node of the character to match the height of the unplanted foot’s effector - when its ankle has crossed ahead of the planted foot’s IK effector, and the unplanted foot’s effector is lower in height (ie we are walking downhill).
That is an incomplete solution, and appears to generate some jitter due to penetration correction of a dynamic hull, but generally seems to work, and indicates I am moving in the right direction.
Effectively, I move the entire character root down to match the height of the effector on the unplanted, leading foot. I let the IK solver deal with the fallout from doing so, but I know that the leading foot can at least reach the ground, minus its animated height.
After doing all that, I go on to apply the foot-slipping solver, which corrects the torso XZ position, but ignores the Y position correction we made, such that the character root is teleported in XZ to satisfy the planted foot remaining where it was planted.
The order of operations is in question - everything is in question - but it’s getting closer to decent.
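A rough sketch of those two corrections, in order. These helpers are illustrative stand-ins (names and signatures made up), assuming effector and anchor positions are already known in world space:

```cpp
struct Vec3 { float x, y, z; };

// 1) Downhill correction: pull the character root down so the unplanted,
//    leading foot's effector height becomes reachable.
Vec3 ApplyDownhillCorrection(Vec3 root, float unplantedEffectorY, float plantedEffectorY)
{
    if (unplantedEffectorY < plantedEffectorY)            // leading foot is lower: downhill
        root.y -= (plantedEffectorY - unplantedEffectorY);
    return root;
}

// 2) Foot-slip correction: teleport the root in XZ so the planted foot
//    stays where it was planted; the Y correction above is preserved.
Vec3 ApplyFootSlipCorrection(Vec3 root, Vec3 plantedFootNow, Vec3 plantedFootAnchor)
{
    root.x += plantedFootAnchor.x - plantedFootNow.x;
    root.z += plantedFootAnchor.z - plantedFootNow.z;     // Y left untouched
    return root;
}
```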
Man, I am so tempted to get rid of the dynamic hull, but it’s good for certain things, and I already control animation speed / footfalls / footslipping based on the velocity of the hull in a dynamic world. Looking for a way out :stuck_out_tongue:


#70

The camera has no lerp, so it’s flickery when we teleport the root of the model it points at, but at least here’s some content, with downhill correction looking reasonable:
<https://www.dropbox.com/s/qzuo1wcyzylyvbz/FootSlipping.mp4?dl=0>


#71

Zombie is now loading all Ragdoll information from its “Character Descriptor” xml file.
I use these custom xml files to “describe” my characters - they contain details for all the character’s animations, the names of important bones for Foot IK, ragdoll bodypart descriptors, and joint constraint descriptors - everything required to create a full ragdoll specific to that character. This makes my character class “data-driven” instead of “completely hardcoded and derived per character type”.
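For illustration, a descriptor of this sort might look something like the following - every element and attribute name here is made up for the example, not my actual schema:

```xml
<character name="Zombie">
    <animations>
        <animation name="Zombie_WalkForward" trigger="footfall" />
    </animations>
    <footik leftFoot="Bip01_L_Foot" rightFoot="Bip01_R_Foot" />
    <ragdoll>
        <bodypart bone="Bip01_Pelvis" shape="box" size="0.3 0.2 0.2" mass="3" />
        <constraint bone="Bip01_L_Thigh" parent="Bip01_Pelvis" type="cone" limits="45 45 10" />
    </ragdoll>
</character>
```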

When I instantiate a ragdoll in “kinematic mode”, it perfectly follows the animations applied to the skeleton - problem is though, Bullet does not look for collisions between kinematic bodies, so it looks like a bad choice for detecting where the zombie was hit by, say, a kinematic sword in the kinematic hand of the player character.

I really need to use dynamic bodies - but tie them to the animated skeleton, disregarding Bullet dynamics while manually animating the RigidBodies.

Erwin suggests that I should be measuring the “stress” at the joint constraints, and using the resulting velocities to drive motors on the joints. I think that sounds needlessly complex?

I guess I am reaching the point where I have to reach out further than this community, but I thought it might be worth asking before I dig myself into a new hole.


#72

I found a slightly improved algorithm for “pulling down” the root node - simply put, we move it according to the animated height of the lower foot (while accounting for the foot’s bindpose offset from the character root).
This holds at any time in the walk cycle, no matter which direction we are moving, or on what slope.
Will implement shortly, I was side-tracked slightly by another issue.
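The pull-down rule above, sketched as a hypothetical helper (assuming heights are measured in character space, with the feet near the origin in bindpose - not the actual implementation):

```cpp
#include <algorithm>

// Move the root down by however far the lower foot has been animated
// above its bindpose height, so that the lower foot can be planted.
float CorrectedRootY(float currentRootY, float leftFootY, float rightFootY,
                     float bindposeFootY)
{
    float lowerFootY = std::min(leftFootY, rightFootY);
    float lift = lowerFootY - bindposeFootY;   // lift above bindpose height
    return currentRootY - lift;                // pull the root down by that much
}
```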


#73

Today I performed a technical fix on my “strafe right” animation.
I had scaled the length of my four cardinal walking animations to be the same (45 frames).

When walking forwards, backwards, or left-strafing, the animations were generally in agreement - the left foot would fall first, and roughly on the same keyframe, and later the right foot, again roughly on the same keyframe.
But the right-strafing animation had been created as a mirror of the left-strafe animation. So it began on the right foot, and although I could still set up animation triggers on the footfalls, the animation itself was unsuitable for blending with forward and backward animations to create eight cardinal directions (four pure animations, and four blended “diagonal” animations).

I used Blender to cut and paste half of the animation keys on the right-strafe animation, so that the left foot was the first to move.

Now I tested my changes - animation blending is still very twitchy and unsatisfying - the walk cycles are just too different to be blended as such. I am not blaming urho’s blending implementation, but I am seeing random flickering during advancement of two blended walks of equal length, whose footfalls occur at or near the same frame, with the left foot leading in all animations.

I’ll need to create some diagonal walk animations to suit myself, based on baking the existing ones in Blender. This will give me eight cardinal direction animations and, potentially, 16 that blend more nicely.

I’m also starting to experiment with script objects - hotloading scripts is a lot cooler than rebuilding the app, and scripted classes can be promoted to C++ based on their runtime cost/benefit ratio.


#74
    // Experimental: try to use pre- and post-physics events to deal with constraining dynamic bodies to the animated skeleton
    SubscribeToEvent( E_PHYSICSPRESTEP,  URHO3D_HANDLER(Character, HandlePrePhysicsUpdate ));
    SubscribeToEvent( E_PHYSICSPOSTSTEP, URHO3D_HANDLER(Character, HandlePostPhysicsUpdate));

Essentially, I want to animate a constrained set of dynamic bodies: I want the bodies to derive their momentum from the animation. The main problem with that idea is that in Urho, animation controllers are one of the last things to get updated in a frame.

When bodies are kinematic, Bullet will ask the Urho RigidBody for its world transform (via the motionstate interface), but when they are dynamic, Bullet will attempt to drive their node transforms (again, via motionstate). That indicates that, for dynamic bodies, I should at least wait until after the physics has updated… Well, I tried that, and it didn’t appear to work as expected, so I’ll take some more time to trace values and then do some head-scratching based on the empirical data.
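The basic idea, sketched outside the engine (a hypothetical helper, not Urho or Bullet code): compute the velocity that would carry the body onto the animated bone over one physics timestep, then hand that to the body in the pre-step handler.

```cpp
struct Vec3 { float x, y, z; };

// Derive a body's momentum from the animation: the velocity that moves
// the rigid body from where it is to where the animated bone wants it
// over the next physics timestep.
Vec3 FollowVelocity(Vec3 bodyPos, Vec3 animatedBonePos, float timeStep)
{
    return Vec3{ (animatedBonePos.x - bodyPos.x) / timeStep,
                 (animatedBonePos.y - bodyPos.y) / timeStep,
                 (animatedBonePos.z - bodyPos.z) / timeStep };
}
```

In Urho terms I’d feed the result to RigidBody::SetLinearVelocity from the E_PHYSICSPRESTEP handler; the angular equivalent needs a quaternion delta and is messier.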