3D Intersection and HUD functionality? (Urho3D Beginner)


First I must admit my code is in C# with UrhoSharp, so I hope you don’t abandon me :frowning:. I tried duplicating the color assignment in the C++ DynamicGeometry sample, but ran into the problem that the C++ version doesn’t seem to have an equivalent to this:

Material material = Material.FromColor(solidColor);
material.SetTechnique(0, CoreAssets.Techniques.NoTextureAlpha, 1, 1);

namely the FromColor method in the library. How does C++ assign the material color? Once I find that out I can try duplicating the problem in C++.

Edit: If I use material.SetTechnique(0, CoreAssets.Techniques.NoTextureUnlit, 1, 1); then all the RGB values work between 0 and 1, but the alpha blend does not work and the surface is always solid.

solidColor was passed to the function, so it is something like new Color(5f,7.5f,1f,0.2f). That produces a greenish yellow translucent surface. The alpha works fine, and is still between 0 and 1. I actually use alpha of 1 with the same color and similar code to produce the wireframe around the solid using line vertices (groups of 2) instead of triangle vertices (groups of 3).

In the meantime here’s the C#:

My Vertex Buffer looks like this:

    List<VertexBuffer.PositionNormal> triVertices = new List<VertexBuffer.PositionNormal>();

    VertexBuffer edgeBuffer = new VertexBuffer(Urho.Application.CurrentContext, false);
    edgeBuffer.SetSize((uint)triVertices.Count, ElementMask.Position | ElementMask.Normal, false);

The Vertices were defined as follows (this is just a small piece of about 100 vertices):

    VertexBuffer.PositionNormal[] axisVertices = new VertexBuffer.PositionNormal[]
    {
        new VertexBuffer.PositionNormal { Position = new Vector3(-1f, 0, 0) },
        new VertexBuffer.PositionNormal { Position = new Vector3(-0.5f, 2.1f, 0) },
        new VertexBuffer.PositionNormal { Position = new Vector3(-0.5f, 0, 0) },
        new VertexBuffer.PositionNormal { Position = new Vector3(-1f, 0, 0) },
        new VertexBuffer.PositionNormal { Position = new Vector3(-1f, 2.1f, 0) },
        new VertexBuffer.PositionNormal { Position = new Vector3(-0.5f, 2.1f, 0) },
    };


material->SetShaderParameter("MatDiffColor", yourColor);


I’ll try using that later. For now I have a different problem: my translucent objects randomly become lighter in color when they move up or down through other translucent objects, and only in the parts that blend (overlap) with the other objects. By lighter, I mean even lighter than the correct blend. The error depends on a combination of the positions of the objects and the position of the camera, so just changing the camera angle can make it temporarily disappear. Is this a known problem? Also, I have not defined any light nodes and am using NoTexture with Alpha. Does Urho create a LightNode on its own by default that could be causing this?


I already mentioned this issue before; unfortunately the devs couldn’t reproduce it. It has something to do with manual material creation…

No. Urho does not add anything by default to a scene.


Thank goodness, that means I’m not crazy. I’m not even using textures though and my program is still very simple, just translucent colored boxes (actually rooms with interior walls) out of meshes and wireframes out of lines. I’ve noticed that sometimes the meshes will change color separately from the wireframe, but it is only noticeable if you move the camera by a single degree or so, otherwise it appears to happen simultaneously. It also does not happen if I move the objects so they are not overlapping.

I can’t imagine somebody having difficulty reproducing this.


@Kronix & @Dave82 Are you running similar hardware and drivers?


I’m running Android 8. It happens in Emulator and on Samsung device.

Edit: One thing I just noticed is that the wireframes, which have alpha blend set to 1, are still blending with the translucent box behind (inside) it, and this is happening the majority of the time. In the moments where the box inside becomes lighter, the wireframe is showing full color as it’s supposed to. Which leads me to believe that the temporary lighter flicker is the engine doing what it’s supposed to instead of the other way around.

2nd Edit: Screenshots.

The blue box is inside the yellow-green box. On Exhibit A (left) the blue box was moved up a bit. On Exhibit B (right) the camera was moved up a bit. On Exhibit B the green box is open on the bottom, so you’re seeing the bottom of the blue box in the foreground only blended with the back of the green box behind it (in case the perspective is confusing). The lighter blue is the color it flickers to; normally it is the darker one.

Edit3: Notice in the top of Exhibit B, the vertical wireframe lines of the blue box are drawn in front of the green wireframe, even though those green lines are supposed to be in the foreground. Remember, this is what it normally is, not what it flickers to. Same goes for the horizontal lines in the left of Exhibit A, although only the left pair of green wireframe lines are supposed to be in the foreground, the right two are in the back behind the blue box. Hmm, but it looks like the light blue box wireframe in Exhibit A is also incorrectly drawn in front of green lines. Not so in Exhibit B though, that is correct.

Sometimes, rarely, I see the green wireframe showing full color on top of the darker blue box without the blue box becoming lighter. Shouldn’t the wireframe that’s in front of the blue box always show full color? Sometimes I even see the vertical green wireframe in the back being completely hidden by the blue box (but still being shown above and below the blue box).


If they’re both alpha-blended materials (which I believe will be the case whenever one of the alpha materials is used, independently of the alpha value being set to one), I think there will be flickering as you described, based on which object ends up being sorted on the CPU as closer to the camera. I’m not really sure of any of that, but I think I ran into a similar problem once that I avoided by forcing one of the objects to always be slightly closer to the camera.
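If that sorting guess is right, the flicker can be pictured as a per-object painter's sort: each transparent drawable gets one distance to the camera, and interpenetrating boxes can swap draw order from a tiny camera move. A minimal sketch of that kind of sort, in plain C++ with hypothetical `Vec3`/`Drawable` stand-ins (not the engine's actual types):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-ins for engine types; not the Urho3D API.
struct Vec3 { float x, y, z; };

static float DistanceSquared(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Drawable { int id; Vec3 center; };

// Transparent drawables are drawn back-to-front: farthest from the
// camera first. The sort key is one distance per *object*, so two
// interpenetrating boxes can swap order as the camera moves slightly,
// which would show up as the flicker described above.
void SortBackToFront(std::vector<Drawable>& drawables, const Vec3& camera)
{
    std::sort(drawables.begin(), drawables.end(),
              [&](const Drawable& a, const Drawable& b) {
                  return DistanceSquared(a.center, camera) >
                         DistanceSquared(b.center, camera);
              });
}
```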


@SirNate0 Well, in this case one box is inside the other. If Urho can’t detect which is closer, that means it’s doing Z-Buffering on a model basis instead of a mesh (per triangle) basis. I’m no expert on the topic, but I think that is a couple decades out of date.

Anyway, I have two new questions which aren’t really about this alpha blending problem, but coincidentally sound the same:

  1. Sometimes two models can have exactly the same geometries in exactly the same positions. In this situation it seems that the colors displayed (both solids and wire frames) are random. So if I had something like, say, a 3D progress bar (rectangular cuboid), with a maximum bar colored blue and the percent complete bar colored red and gradually becoming the same size as the blue bar, how do I tell Urho that the red bar should in effect be drawn last so that no blue shows up at random in the part that is complete?

  2. If object A is behind object B, and object B is either alpha blended or opaque, this will either change the color of object A or make object A invisible. How do I tell Urho to ignore this and show object A as if nothing was in front of it? In other words, how do I tell it to ignore the Z-buffer for specific objects?


This article is old, so it’s possible not all of it is still correct, but it might give some insight into the problems of properly sorting alpha-blended objects (unfortunately the image links seem broken now, but the text should give you a decent enough idea): https://blogs.msdn.microsoft.com/shawnhar/2009/02/18/depth-sorting-alpha-blended-objects/
I think this thread probably has enough info about controlling the render order: How to control render order.
Sorry I can’t be more helpful, but I really don’t know all that much about the intricacies of graphics programming…


@Modanung I’m using a GT 430 but had the same issue on my other GPUs too.


@SirNate0 thanks, SetRenderOrder was what I needed. After a small panic at not being able to access it in C# and looking through the DLL bindings, I found it was implemented as a private function encapsulated in the RenderOrder property. Seems to work beautifully now :kissing_heart:

Why and when do they decide to make such modifications when porting the engine?


Another question: How do I make Urho interact with the GUI that contains it? For example, if it is a surface in Xamarin.Forms how would I make a camera movement in Urho change the color of a button in Xamarin?

I’m sure there’s a simple answer to this I’m missing.


That sounds like something that would be better to ask on the Xamarin forums?


UrhoSharp.SharpReality does, as shown in the code for StereoApplication.cs.


Simple question: How do I tell if the angle between two Vector3s is greater than 90 degrees? CalculateAngle only returns values between 0 and Pi / 2.

I want to prevent my camera from revolving past the top or bottom vertical.

Edit: And is there a function to rotate a point defined by a Vector3 using a Quaternion, returning a new Vector3? That way I can test where the camera will be after rotation, before I actually rotate it.
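For the first question, the sign of the dot product alone already tells you whether two vectors are more than 90 degrees apart (negative dot means obtuse), and acos of the normalized dot gives the full 0–180 range. A standalone sketch, using a hypothetical `Vec3` rather than Urho's type:

```cpp
#include <cmath>

// Hypothetical minimal vector type, not Urho's Vector3.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Full angle in degrees, 0..180, via acos of the normalized dot.
// Helpers that top out at 90 degrees typically take the absolute
// value of the dot first; skipping that keeps the obtuse half.
static float AngleDegrees(const Vec3& a, const Vec3& b)
{
    const float lenA = std::sqrt(Dot(a, a));
    const float lenB = std::sqrt(Dot(b, b));
    const float c = Dot(a, b) / (lenA * lenB);
    const float clamped = std::fmax(-1.0f, std::fmin(1.0f, c));
    return std::acos(clamped) * 180.0f / 3.14159265f;
}

static bool IsObtuse(const Vec3& a, const Vec3& b)
{
    return Dot(a, b) < 0.0f; // negative dot product => angle > 90 degrees
}
```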


Vector3::Angle(const Vector3& rhs) returns the angle in degrees instead of radians like this CalculateAngle function - which is not a part of Urho3D - seems to do.

In the samples, clamping the camera pitch is handled this way:

// Mouse sensitivity as degrees per pixel
const float MOUSE_SENSITIVITY = 0.1f;

// Use this frame's mouse motion to adjust camera node yaw and pitch. Clamp the pitch between -90 and 90 degrees
IntVector2 mouseMove = input->GetMouseMove();
yaw_ += MOUSE_SENSITIVITY * mouseMove.x_;
pitch_ += MOUSE_SENSITIVITY * mouseMove.y_;
pitch_ = Clamp(pitch_, -90.0f, 90.0f);

// Construct new orientation for the camera scene node from yaw and pitch. Roll is fixed to zero
cameraNode_->SetRotation(Quaternion(pitch_, yaw_, 0.0f));

You can rotate a Vector3 by multiplying it with a Quaternion, which makes perfect mathematical sense.
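In Urho terms that rotation is simply `rotatedVec = someQuaternion * someVec3`. For reference, the math behind that operator can be sketched without the engine (hypothetical minimal `Vec3`/`Quat` types, not Urho's, and sign conventions may differ from the engine's handedness):

```cpp
#include <cmath>

// Hypothetical minimal types, not the Urho3D classes.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; }; // assumed to be a unit quaternion

// Rotate v by unit quaternion q:
//   t  = 2 * cross(q.xyz, v)
//   v' = v + w*t + cross(q.xyz, t)
static Vec3 Rotate(const Quat& q, const Vec3& v)
{
    const Vec3 u{q.x, q.y, q.z};
    const Vec3 t{2.0f * (u.y * v.z - u.z * v.y),
                 2.0f * (u.z * v.x - u.x * v.z),
                 2.0f * (u.x * v.y - u.y * v.x)};
    return Vec3{v.x + q.w * t.x + (u.y * t.z - u.z * t.y),
                v.y + q.w * t.y + (u.z * t.x - u.x * t.z),
                v.z + q.w * t.z + (u.x * t.y - u.y * t.x)};
}

// Axis-angle about Y: w = cos(angle/2), y = sin(angle/2).
static Quat AxisAngleY(float degrees)
{
    const float half = degrees * 3.14159265f / 360.0f;
    return Quat{std::cos(half), 0.0f, std::sin(half), 0.0f};
}
```

This lets you test where the camera would land: rotate the current offset from the pivot and inspect the result before committing the move.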


I knew Urhosharp wasn’t as mature as Urho3D, but dang…

Also I’m talking about revolving the camera like the moon around the earth, but only within 180 degrees to the y axis. Not regular rotation, I’m using cameraNode.RotateAround(Vector3, Quaternion). I see there’s a Vector3 clamp function, but I don’t think that will work in this situation (180 degrees). I’m not even sure it works according to angle in the first place.


Right, it’s Quaternion * Vector3, not the other way around.



OK, thanks. A C++ example of 180-degree vector clamping would also be appreciated :nerd_face:
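One possible sketch of such a clamp, offered as a starting point rather than an Urho-specific answer: compute where the camera would end up after the orbit, take the angle between that offset (camera position minus pivot) and world up, and only apply the rotation if it stays away from the poles. Plain C++ with hypothetical types; names like `OrbitAllowed` are made up for illustration:

```cpp
#include <cmath>

// Hypothetical minimal vector type, not Urho's Vector3.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// Polar angle in degrees (0..180) between a camera offset and world up.
static float PolarAngle(const Vec3& offset)
{
    const Vec3 up{0.0f, 1.0f, 0.0f};
    const float c = Dot(offset, up) / Length(offset);
    const float clamped = std::fmax(-1.0f, std::fmin(1.0f, c));
    return std::acos(clamped) * 180.0f / 3.14159265f;
}

// Accept a proposed camera offset only if it stays off the poles.
// 'proposed' would be the post-RotateAround position, computed up
// front by rotating the current offset with the same quaternion.
static bool OrbitAllowed(const Vec3& proposed, float marginDegrees = 1.0f)
{
    const float a = PolarAngle(proposed);
    return a > marginDegrees && a < 180.0f - marginDegrees;
}
```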