3D Intersection and HUD functionality? (Urho3D Beginner)

Thank goodness, that means I’m not crazy. I’m not even using textures though and my program is still very simple, just translucent colored boxes (actually rooms with interior walls) out of meshes and wireframes out of lines. I’ve noticed that sometimes the meshes will change color separately from the wireframe, but it is only noticeable if you move the camera by a single degree or so, otherwise it appears to happen simultaneously. It also does not happen if I move the objects so they are not overlapping.

I can’t imagine somebody having difficulty reproducing this.

@Kronix & @Dave82 Are you running similar hardware and drivers?

I’m running Android 8. It happens in Emulator and on Samsung device.

Edit: One thing I just noticed is that the wireframes, which have alpha blend set to 1, are still blending with the translucent box behind (inside) it, and this is happening the majority of the time. In the moments where the box inside becomes lighter, the wireframe is showing full color as it’s supposed to. Which leads me to believe that the temporary lighter flicker is the engine doing what it’s supposed to instead of the other way around.

2nd Edit: Screenshots (alphaerror2).

The blue box is inside the yellow-green box. In Exhibit A (left) the blue box was moved up a bit. In Exhibit B (right) the camera was moved up a bit. In Exhibit B the green box is open on the bottom, so you’re seeing the bottom of the blue box in the foreground blended only with the back of the green box behind it (in case the perspective is confusing). The lighter blue is the color it flickers to; normally it is the darker one.

Edit3: Notice in the top of Exhibit B, the vertical wireframe lines of the blue box are drawn in front of the green wireframe, even though those green lines are supposed to be in the foreground. Remember, this is what it normally is, not what it flickers to. Same goes for the horizontal lines in the left of Exhibit A, although only the left pair of green wireframe lines are supposed to be in the foreground, the right two are in the back behind the blue box. Hmm, but it looks like the light blue box wireframe in Exhibit A is also incorrectly drawn in front of green lines. Not so in Exhibit B though, that is correct.

Sometimes, rarely, I see the green wireframe showing full color on top of the darker blue box without the blue box becoming lighter. Shouldn’t the wireframe that’s in front of the blue box always show full color? Sometimes I even see the vertical green wireframe in the back being completely hidden by the blue box (but still being shown above and below the blue box).

If they’re both alpha blended materials (which I believe will be the case whenever one of the alpha materials is used, independent of the alpha blend being set to one), I think there will be flickering as you described, based on which object ends up being sorted on the CPU as closer to the camera. I’m not really sure of any of that, but I think I ran into a similar problem once, which I avoided by forcing one of the objects to always be slightly closer to the camera.

@SirNate0 Well, in this case one box is inside the other. If Urho can’t detect which is closer, that means it’s doing Z-buffering on a per-model basis instead of a per-triangle basis. I’m no expert on the topic, but I think that is a couple of decades out of date.

Anyway, I have two new questions which aren’t really about this alpha blending problem, but coincidentally sound the same:

  1. Sometimes two models can have exactly the same geometries in exactly the same positions. In this situation it seems that the colors displayed (both solids and wire frames) are random. So if I had something like, say, a 3D progress bar (rectangular cuboid), with a maximum bar colored blue and the percent complete bar colored red and gradually becoming the same size as the blue bar, how do I tell Urho that the red bar should in effect be drawn last so that no blue shows up at random in the part that is complete?

  2. If object A is behind object B, and object B is either alpha blended or opaque, this will either change the color of object A or make object A invisible. How do I tell Urho to ignore this and show object A as if nothing were in front of it? In other words, how do I make specific objects ignore the Z-buffer?

This article is old, so it’s possible not all of it is correct, but it might give some insight into the problems of properly sorting alpha blended objects (unfortunately the image links seem broken now, but the text should give you a decent enough idea): https://blogs.msdn.microsoft.com/shawnhar/2009/02/18/depth-sorting-alpha-blended-objects/.
I think this thread probably has enough info about controlling the render order: How to control render order.
Sorry I can’t be more helpful, but I really don’t know all that much about the intricacies of graphics programming…

@Modanung I’m using a GT 430 but had the same issue on my other GPUs too.

@SirNate0 thanks, SetRenderOrder was what I needed. After a small panic at not being able to access it in C#, and after looking through the DLL bindings, I found it was implemented as a private function wrapped in the RenderOrder property. Seems to work beautifully now :kissing_heart:

Why and when do they decide to make such modifications when porting the engine?


Another question: How do I make Urho interact with the GUI that contains it? For example, if it is a surface in Xamarin.Forms how would I make a camera movement in Urho change the color of a button in Xamarin?

I’m sure there’s a simple answer to this I’m missing.

That sounds like something that would be better to ask on the Xamarin forums?

UrhoSharp.SharpReality does, as shown in the code for StereoApplication.cs.

Simple Question: How do I tell if the angle between two Vector3s is greater than 90 degrees? CalculateAngle only returns values between 0 and Pi / 2.

I want to prevent my camera from revolving past the top or bottom vertical.

Edit: And is there a function to rotate a point defined by a Vector3 using a Quaternion, returning a new Vector3? That way I can test where the camera will be after rotation, before I actually rotate it.

Vector3::Angle(const Vector3& rhs) returns the angle in degrees, rather than in radians as this CalculateAngle function (which is not a part of Urho3D) seems to do.

In the samples, clamping the camera pitch is handled this way:

// Mouse sensitivity as degrees per pixel
const float MOUSE_SENSITIVITY = 0.1f;

// Use this frame's mouse motion to adjust camera node yaw and pitch. Clamp the pitch between -90 and 90 degrees
IntVector2 mouseMove = input->GetMouseMove();
yaw_ += MOUSE_SENSITIVITY * mouseMove.x_;
pitch_ += MOUSE_SENSITIVITY * mouseMove.y_;
pitch_ = Clamp(pitch_, -90.0f, 90.0f);

// Construct new orientation for the camera scene node from yaw and pitch. Roll is fixed to zero
cameraNode_->SetRotation(Quaternion(pitch_, yaw_, 0.0f));

You can rotate a Vector3 by multiplying it with a Quaternion, which makes perfect mathematical sense.


I knew Urhosharp wasn’t as mature as Urho3D, but dang…

Also, I’m talking about revolving the camera like the moon around the earth, but only within 180 degrees relative to the y axis. Not regular rotation; I’m using cameraNode.RotateAround(Vector3, Quaternion). I see there’s a Vector3 clamp function, but I don’t think that will work in this situation (180 degrees). I’m not even sure it clamps by angle in the first place.

Right, it’s Quaternion * Vector3, not the other way around.


OK, thanks. A C++ example of 180-degree vector clamping would also be appreciated :nerd_face:

Does the sample code I posted earlier not work for you?

No, your code rotates the camera. I want to move the camera by revolving it (the moon) around a center point (the earth). Luckily, Urhosharp contains a cameraNode.RotateAround(Vector3, Quaternion) that automatically rotates the camera to keep it pointing at the Vector3 point while simultaneously REVOLVING the camera around that point using the Quaternion, keeping the same distance.

But I need to make sure the camera doesn’t REVOLVE (not rotate, since the rotation happens automatically) over the top or bottom past the y axis of the center point, or else the camera and image will be upside down. That’s why I wanted to test the camera’s movement in advance: assign its position to a Vector3, rotate that via quaternion multiplication, and cancel the camera movement if it would pass the axis. Still difficult, since direct multiplication requires me to calculate the rotation axis at every point, instead of letting RotateAround do it for me.

I’m still not sure it’s easily possible to do this when I can only calculate angles within 90 degrees, since I can’t tell which side of the axis a given angle (for example 89 degrees) falls on. But C++ code handling more than 90 degrees could still help.

Try adding a SetPosition(targetPosition - cameraDirection * distance) after setting the rotation.

So I want to select 3D objects in my Android app using touch. So if Box A is the main object and is inside Box B, when I touch Box A it will be highlighted even though it is enclosed inside Box B. But if I touch it again it will be intelligent and know I want to select Box B instead (because the wall of Box B is between my finger and Box A).

So a simple task that has been done many times before. lezak mentioned in a post above to use raycasting. Anybody have some tips and examples to start? Can be mouse click instead of touch.