3D Intersection and HUD functionality? (Urho3D Beginner)


#1

Hello, I am considering using Urho3D with C# in Visual Studio and Xamarin.Forms to create an app that will be used on both Windows PCs and iPads. I would like to ask if Urho3D (UrhoSharp) is the right choice for what I intend to make, as follows:

I need to build an app that checks whether machines of different sizes can fit into given spaces. I will represent the machines with simple objects like boxes, rhombohedrons, and cylinders. I will also be modeling rectangular rooms with 1 or 2 door openings, and the thickness of the walls must also be modeled. All these things are simple enough to be modeled with vertices directly in the code, and there will only be a small number of these template objects whose dimensions (including door placement and size in the case of rooms) will be changed by entering numbers for each object in text fields for each project.

I don’t want textures. The objects will be colored wireframes (blue, green, or purple) with translucent colored sides, so that I can see through them but still tell which side is facing me.

Here is the part that I am wondering whether Urho3D can do (Edit: Also wondering if the above colored wireframe + colored translucent sides is possible):

The objects can be moved around freely, and can move through each other. But whenever they move through each other, I want the intersection of the objects to be modeled in red. That is, the polyhedron that represents the 3D intersection of the objects should be colored with a red wireframe and red translucent sides. This way, I can see the intersection even when it is hidden from my line of sight.

Also, whenever an intersection occurs, I want a 2D yellow HUD reticle - either a circle or a rectangle - to bring attention to the intersection. This is like a targeting reticle in games or fighter jets, and it will change size to match the size of the intersection, although it will have a minimum size so I can find intersections that might be very small. There can be multiple intersections and thus multiple reticles. I also want to be able to tap the display (or click in Windows) on the red intersection in the yellow reticle to bring up information about the objects that are intersecting – displayed on a side of the screen split from the rendering area – with the option to change the models’ sizes or positions. It would also be nice if text labels would appear in the HUD for relevant objects, with a line pointing from the text to the object.

The intersections and reticles should appear, disappear, and mutate in real-time as I move machines around the room or change the dimensions of the room.

It would also be convenient to be able to rotate the model space by dragging and pinch to zoom in/out, although I will probably want to limit that to one axis at a time and I will have sliders as backup.

So my specific questions are:

  1. Can the 3D intersection be done in Urho3D?
  2. Can the HUD reticle (sized to enclose the intersection snugly) be done?
  3. Can the HUD text labels be done?
  4. Can I tap the screen to select the intersections?
  5. Can all of this be done with the rendering done on one side of the screen, and the data entry text fields for each object on the other side?
  6. Can all this be done in real-time?
  7. Can all the modeling be done in code without a separate CAD program?
  8. Will it be easy to develop and update this program in both iPad and Windows using all the same code?
  9. And of course, can it all be done easily? If not, can somebody recommend a better solution?

Edit 10. Can the colored wireframe rendering with translucent colored sides be done in Urho3D?

Thanks!


#2

First of all, welcome to the forums and congratulations on the longest post so far! :confetti_ball: :slight_smile:

Now I shall read.
EDIT: I will continue reading tomorrow.


#3

Here are some suggestions:

  1. Yes, I think the best way would be to cast rays between selected vertices (along the edges of the object); this will detect intersections and will give You positions that can be used to place the vertices of an object representing the intersection;
  2. There are at least 2 options to do this - You can use the Camera::WorldToScreenPoint method to place some UI elements (see the sketch after this list) or You can use billboards and place them at the intersection;
  3. Once again at least 2 options: use the UI + Camera::ScreenToWorldPoint + CustomGeometry to draw lines, or Text3D + CustomGeometry to draw lines pointing to the object;
  4. Raycast to detect the “intersection” object, or if You decide to use UI elements to mark intersections, You can detect the UI element under the cursor;
  5. You can use the UI system and a View3D object to place the viewport wherever You want, or You can change the size and position of the viewport You’ll be using to render and then place the UI next to it (or over it);
  6. Yes;
  7. Once again yes: creating geometry in code is covered in sample 34_DynamicGeometry (in Urho; I don’t know what samples are in UrhoSharp). Basically You can create a model from code by defining vertex and index buffers, or use the CustomGeometry component.
  8. This one I don’t know - I’m a Windows user and have never had an iPad in my hands;
  9. Well… I would risk saying yes, but You’ll have to spend some time getting familiar with Urho;
  10. Once again yes - for example the debug geometry of NavArea is drawn this way; though it’s using DebugRenderer, You can get the general idea - use 2 geometries, the first to draw “solid” walls (with an unlit, transparent material) and the second to draw lines along the edges. You can use the DebugRenderer or a CustomGeometry component to draw the lines. Another approach would be to use 2 cameras, of which one renders in wireframe, and then overlay the image from one camera on the other (there should be some threads on the forum about how to do this) - though I don’t know if the result would be any good.
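
For point 2, a rough sketch of the WorldToScreenPoint approach (native API; UrhoSharp exposes equivalent methods). Here “reticle” is assumed to be a BorderImage already added to the UI root and “intersectionCenter” a world-space point You computed yourself:

// Project a world-space point into normalized (0-1) screen coordinates
// and center a UI element on it.
Vector2 screenPos = camera->WorldToScreenPoint(intersectionCenter);
auto* graphics = GetSubsystem<Graphics>();
int x = (int)(screenPos.x_ * graphics->GetWidth());
int y = (int)(screenPos.y_ * graphics->GetHeight());
reticle->SetPosition(x - reticle->GetWidth() / 2, y - reticle->GetHeight() / 2);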

#4

Thanks a lot, it’ll take time to figure out all those solutions, but right now I’m wondering if you can provide examples of using raycasting for the intersection modeling in numbers 1 and 4. That was always the thing that appeared most difficult to me if there’s no prebuilt method. Let me pose a few conceptual questions about this before looking at how it might be coded:

A. From what I gather, every edge for every object in the scene will have to have a ray cast along it to make this work, and every one of those rays will have to be compared with every plane (mesh) for every object in the scene. Will that be a big performance hit?
B. Does Urho3D have the ability to treat an entire “side” as a plane for the purposes of the raytest, or do I have to test for every triangle in the mesh?
C. Can the raycasting determine the object that was hit?
D. After I determine the points of intersection, how will I know how those points should be connected to form the enclosed 3D intersection object? My first mathematical intuition is that all raycast points of intersection calculated from the same vertex must have an edge between them unless that new edge is broken by another (secondary test) raycast intersection along it.


#5

It can all be done.

To create the intersection meshes I think you should use something like CARVE. For the reticles you’re probably best off using a BillboardSet. There’s a sample demonstrating its use.
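
For the reticle itself, a rough sketch of setting up a BillboardSet (“Materials/Reticle.xml” is a placeholder for whatever unlit, alpha-blended reticle material you make, and “intersectionCenter” is assumed to come from your own intersection test):

Node* reticleNode = scene_->CreateChild("Reticles");
auto* billboards = reticleNode->CreateComponent<BillboardSet>();
billboards->SetMaterial(cache->GetResource<Material>("Materials/Reticle.xml")); // placeholder material
billboards->SetNumBillboards(1);
Billboard* bb = billboards->GetBillboard(0);
bb->position_ = intersectionCenter;   // world-space center of the detected intersection
bb->size_ = Vector2(0.5f, 0.5f);      // scale to enclose the intersection, with a minimum size
bb->enabled_ = true;
billboards->Commit();                 // upload the changed billboard data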


#6

Octree raycasts are quite fast in Urho, but of course performance will depend on the complexity and number of objects. The first thing to do would be to set a custom view mask for the ray, to avoid unnecessary checks (for example against billboards placed on intersections).
Another thing is that there is no reason for every object to check for intersections all the time. Two quick ideas to limit the number of checks:

  1. use physics and check for intersections only when an object is colliding with another (using triggers will allow You to move one through another);
  2. use octree queries when some object is ‘active’ (moved or changed in some other way). You can use Octree::GetDrawables with a BoxOctreeQuery to check whether any objects intersect the BoundingBox of the active object(s), and if there are some, do raycasts only between them (see the sketch after this list). In that case You wouldn’t have to do an octree raycast, because You can call Drawable::ProcessRayQuery directly.
    If You are new to Urho and not familiar with it, I would suggest taking a good look at: Octree, OctreeQuery, RayOctreeQuery, RayQueryResult (this one gives the answer to Your question C) - for sure You’ll find something useful there.
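
A rough sketch of the second idea (native API; “activeModel” and the edge endpoints are assumptions for whatever object You just moved):

// Gather drawables whose bounds overlap the moved object's bounding box.
PODVector<Drawable*> candidates;
BoxOctreeQuery boxQuery(candidates, activeModel->GetWorldBoundingBox(), DRAWABLE_GEOMETRY);
scene_->GetComponent<Octree>()->GetDrawables(boxQuery);

for (unsigned i = 0; i < candidates.Size(); ++i)
{
    if (candidates[i] == activeModel)
        continue;
    // Cast a ray along one edge of the active object against this drawable only.
    Ray edgeRay(edgeStart, (edgeEnd - edgeStart).Normalized());
    PODVector<RayQueryResult> results;
    RayOctreeQuery rayQuery(results, edgeRay, RAY_TRIANGLE, (edgeEnd - edgeStart).Length());
    candidates[i]->ProcessRayQuery(rayQuery, results);
    // Each RayQueryResult carries position_, normal_, distance_ and drawable_ (Your question C).
}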

#7

Thanks, it looks like Carve CSG would do what I want and doing my own raycasting would be a waste of time. Can you give me some function names for collision/physics that I can research? I don’t see a reason why I would opt for the bounding box check if I can check for collisions directly.

If anybody comes up with some more alternatives I’m all ears; I’m still getting accustomed to the code.


#8

I’ll save you some time: if you use Carve CSG for what you describe you’re absolutely going to have to use Urho3D’s WorkItem and worker-threads to do the CSG in the background.

Before you add a task, first remove every queued CSG task you can - there’ll likely be one locked in flight, so you’ll still get new results to render without flooding the worker thread with a thousand CSG tasks in the queue.

Carve is fast, but it’s not realtime fast. It’s definitely slower than bespoke CSG methods like the one in Godot - but its advantage is being really easy to map to any kind of vertex and triangle data (you can fake BSP in it very easily by tagging triangles to a surface).
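
As a rough sketch of what that looks like with Urho3D’s WorkQueue (CsgJob and RunCarveCsg are hypothetical placeholders for your own Carve wrapper, and exact signatures may differ by engine version):

auto* workQueue = GetSubsystem<WorkQueue>();
SharedPtr<WorkItem> item = workQueue->GetFreeItem();
item->priority_ = M_MAX_UNSIGNED;
item->aux_ = csgJob;                             // hypothetical struct holding the two input meshes
item->workFunction_ = [](const WorkItem* item, unsigned threadIndex)
{
    auto* job = static_cast<CsgJob*>(item->aux_);
    job->result = RunCarveCsg(job->meshA, job->meshB);   // hypothetical Carve call
};
workQueue->AddWorkItem(item);
// Poll the job from the main thread and rebuild the intersection model once it has finished.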


#9

Yeah, Sinoid, that’s what I was afraid of. I’m wondering if I really need Carve. Is it possible to do one or both of the following without Carve?

A. Change the color of edges of an object that are within another to red. That means the line segments from the vertex of model A that is within model B to the points where each edge of model A first intersects the surface of model B will be colored red.

B. Create a HUD reticle that surrounds the intersection of the two objects, without actually rendering the intersection (and thus not needing Carve)?

I’m guessing if this is possible, it could be done with Urho3D’s built-in collision classes? But that’s just a guess. Can the collision classes help for objects/vertices that are already “inside” of each other?


#10

There’s no easy way. You’re going to have to accept shortfalls or come up with ways to conceal things.

A. Change the color of edges of an object that are within another to red. That means the line segments from the vertex of model A that is within model B to the points where each edge of model A first intersects the surface of model B will be colored red.

You could add a depth-fail stage to the existing DebugRenderer class. Since the DebugRenderer is one of the last things drawn when a view is rendered you’ll have a complete z-buffer at that point and the depth-fail case will allow you to add lines for every edge in geometry that will only be drawn when they fail the depth test.

This won’t work if you need to see those lines when you’re inside the geometry that should cause the depth failure.

It also won’t work if you can have penetrations out of both view-facing and back-facing sides, all it can really do is say “I’m behind this”.

i.e. it’ll mostly work if your program has a sort of lazy-Susan behaviour, or if the camera can never enter the geometry.


Right now DebugRenderer only has no-depth and depth-test modes; that should be enough reference that adding a depth-fail mode would be fairly straightforward - though sort of unpleasant, because you’d break the interface (or just say screw it and do it as a switch: “I’m in depth-fail mode! FTW!”).

B. Create a HUD reticle that surrounds the intersection of the two objects, without actually rendering the intersection (and thus not needing Carve)?

I think it was said before, but you could run raycasts along the edges of one geometry against the other to check for intersections. If you’re looking at geometry that has relatively few polys, that shouldn’t be bad at all - I’m picturing something simple like a crate pack.

In theory you could do Bokeh based on the results of a depth-fail addition to DebugRenderer; no idea how practical that would be. If the colors you want to draw for edges are reliably not colors present in the models used, then it’s really about how reliably you can get the bokeh point to land at the end-points. I think Bokeh are stupid though and have never touched them, so I have no idea how viable this would be.

I’m guessing if this is possible, it could be done with Urho3D’s built-in collision classes? But that’s just a guess. Can the collision classes help for objects/vertices that are already “inside” of each other?

No, collision in 3D is done via Bullet. It’s fairly naive and doesn’t have the information you want with any degree of reliability, because it uses a moving manifold. You could try drawing the manifold points, but they might not be stable from frame to frame.


#11

I suppose I could use lezak’s two suggestions above for finding intersection locations for the HUD reticle? He mentioned the Octree BoundingBox query as well as the physics/trigger approach.

Which classes should I be looking at for this?

I assume his suggestions are easier than raycasts. He only mentioned octree raycasts for manually modelling the 3D intersection, which I already decided was too complex. I just want something for determining the HUD reticle.


#12

Trigger is a part of RigidBody.

You’re going to have to dredge through the forums or experiment yourself, as I can’t recall which events trigger bodies send. They don’t send all of them, IIRC.

Several RigidBody-related events have a P_CONTACTS field in their event data that you can read to get a list of positions and normals for the contact points. They aren’t guaranteed to be stable, but they’re usually close.
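
For reference, the pattern the official physics samples use for reading those contacts looks roughly like this (MyApp is a placeholder for your own application class, and whether trigger bodies fire this particular event is something you’ll have to verify, as noted above):

// Setup: "body" and "machineNode" are assumed to belong to one of the machine objects.
body->SetTrigger(true);   // report collisions without pushing the objects apart
SubscribeToEvent(machineNode, E_NODECOLLISION, URHO3D_HANDLER(MyApp, HandleNodeCollision));

void MyApp::HandleNodeCollision(StringHash eventType, VariantMap& eventData)
{
    using namespace NodeCollision;
    MemoryBuffer contacts(eventData[P_CONTACTS].GetBuffer());
    while (!contacts.IsEof())
    {
        Vector3 position = contacts.ReadVector3();
        Vector3 normal = contacts.ReadVector3();
        float distance = contacts.ReadFloat();
        float impulse = contacts.ReadFloat();
        // Use the contact positions to place and size the reticle.
    }
}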


#13

Hi again, I’ve been working on my project and have a few questions:

  1. What is the difference between the Geometry and CustomGeometry classes? As it is, I’m just feeding the VertexBuffer into my Geometry class, and the Geometry into the Model class. CustomGeometry seems to be used exactly the same way in examples I’ve seen.

  2. Where can I get an explanation of the options for Material.SetTechnique() and what the defaults are if I don’t use it?

  3. Why do my Urho.Color RGBA colors always seem to lock into a solid primary color? For example, putting in (1,250,1,0.8) produces the same color as (1,150,1,0.8), and if all the numbers are small, I always get grey.

  4. Where can I get a sample of how pop-up Message Boxes are used in Urho?


#14

Color values should be normalized. Don’t use values greater than 1.0f, because they will most likely be clamped between 0.0f and 1.0f. Examples:

Color(1.0f , 0.0f , 0.0f , 1.0f) // Full red
Color(0.0f , 1.0f , 0.0f , 1.0f) // Full Green
Color(1.0f , 0.5f , 0.0f , 1.0f) // Orange

#15

CustomGeometry just wraps everything together and uses a simpler interface for defining the geometry and getting it set up to render as a Drawable.

If you go the raw route you have to set up a static/animated model component and give it your created model, and when creating the vertex buffers you need to be aware of the vertex layout, etc. The more important thing is that the Model resource you create when going the raw route is reusable (i.e. you’ve lofted a column mesh and want to use it a whole bunch).

You use whichever you want for the most part - CustomGeometry’s only caveats are that it isn’t set up to share the geometry with anything else and can’t set up bones/weights/morphs - which you may or may not care about.
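
For a sense of how small the CustomGeometry path is, here’s a sketch of drawing one red line with it (vertexColorMaterial is assumed to be a material whose technique uses vertex colours):

auto* geom = node->CreateComponent<CustomGeometry>();
geom->SetNumGeometries(1);
geom->BeginGeometry(0, LINE_LIST);              // two vertices per line segment
geom->DefineVertex(Vector3(0.0f, 0.0f, 0.0f));
geom->DefineColor(Color::RED);
geom->DefineVertex(Vector3(1.0f, 0.0f, 0.0f));
geom->DefineColor(Color::RED);
geom->Commit();                                 // builds the internal vertex buffers
geom->SetMaterial(vertexColorMaterial);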


#16

Dave82, I’ve discovered the following using my eyes:

If I only set the material this way: Material material = Material.FromColor(myColor);

before assigning it to the StaticModel, then the maximum values for each RGBA field appear to be 10.0f. That is, (10,0,0,1) produces the brightest red, (150,0,0,1) produces the same brightest red, and (9,0,0,1) produces a slightly darker red. The fourth value can be adjusted between 0 and 1 to change the alpha blend.

However, if I add these two lines before assigning to the StaticModel:

material.SetTechnique(0, CoreAssets.Techniques.NoTextureUnlit, 1, 1);
material.LineAntiAlias = true;

then the maximum RGBA values become 1.0f. So (1,0,0,1) produces the brightest red, (15,0,0,1) produces the same brightest red, and (0.9,0,0,1) produces a slightly darker red. However, using this second way, I can’t alpha blend, so (1,0,0,0.2) would produce the same opaque brightest red.

Same goes for combinations: (10,10,0,1) produces the brightest yellow in the first example, (1,1,0,1) produces the brightest yellow in the second example.

Can somebody explain this?


#17

Again, you should ALWAYS use values between 0.0f and 1.0f. If you use values greater than 1.0f, even God couldn’t predict how the value will end up on your screen.
Is it clamped by Urho3D? Is it clamped by a shader? Does the graphics API handle this? If it does, is there a difference between OpenGL and DX9, DX11, etc. in how they handle this situation?
Maybe @Sinoid could explain what happens if your color values are not in a normalized range.

AFAIK alpha blending will not work out of the box. Try using the NoTextureAlpha or NoTextureVCol techniques for vertex alpha blending.
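
Roughly, switching the material to one of those techniques looks like this with the native API (UrhoSharp exposes the same technique files):

auto* cache = GetSubsystem<ResourceCache>();
material->SetNumTechniques(1);
material->SetTechnique(0, cache->GetResource<Technique>("Techniques/NoTextureAlpha.xml"));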


#18

The range is 0 - 1.

Vertex data

In the vertex data, colors are sent as unsigned normalized bytes with a range of 0-255; because the vertex semantic says it’s an unsigned normalized type, it gets interpreted as the 0-1 range that the shader needs (if sent as a plain ubyte4 it would have to be divided, which would be a wasteful division).

The floating point color is converted to ubyte4 by:

unsigned Color::ToUInt() const
{
    auto r = (unsigned)Clamp(((int)(r_ * 255.0f)), 0, 255);
    auto g = (unsigned)Clamp(((int)(g_ * 255.0f)), 0, 255);
    auto b = (unsigned)Clamp(((int)(b_ * 255.0f)), 0, 255);
    auto a = (unsigned)Clamp(((int)(a_ * 255.0f)), 0, 255);
    return (a << 24) | (b << 16) | (g << 8) | r;
}

which brings the values up into the 0-255 byte range and packs them. Notably, they’re clamped and must be, since a unorm ubyte4 can’t go outside of the 0-1 range; otherwise they’d wrap around. So in vertex data, colors outside of the range are pointless.


Uniforms / CBuffers

Colors passed to uniforms/c-buffers/material-variables are sent as regular float4s and are still expected to be in their normalized range (you would deviate if doing HDR/rgbm/etc).


It’s the vertex semantics that matter, so if you really, really wanted to use 0-255 ranged colors for some reason, you can change how you construct the vertex buffers (use a different type and something other than Color). It’ll then fall on you to do the division in the shader to get it into the 0-1 range that the graphics API, lighting functions, etc. (basically everything) expect. Or leave things as-is, but use your own unsigned char[4] color type.
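
Concretely, the two declarations differ only in the element type; the default layout uses the unsigned-normalized form, which is why the values get clamped before the shader ever sees them:

{ TYPE_UBYTE4_NORM, SEM_COLOR }   // default: packed and clamped to the 0-1 range
{ TYPE_VECTOR4, SEM_COLOR }       // raw floats: not clamped, but everything downstream still expects 0-1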


#19

Thanks, but what could be causing my colors to only reach maximum brightness when I set the value to 10.0f, as stated in my post above? 1.0f only gives me 10% color, which is very dark. Besides the default settings (when no SetTechnique is called), I’ve found that CoreAssets.Techniques.NoTextureAlpha also requires 10.0f for the brightest color. Even Peek Definition shows me the comments in the definition of the Color class telling me the values should be between 0 and 1, but my own eyes are telling me I only get maximum color at 10 (and going above that doesn’t change anything).

What is this HDR/rgbm/etc deviation you speak of?


#20

Thanks, but what could be causing my colors to only reach maximum brightness when I set the value to 10.0f, as stated in my post above?

You mentioned that you were creating the vertex buffers yourself; can you post your VertexBuffer::SetSize call (and the data you feed it)? It’ll look something like this:

vertBuffer->SetSize(vertexData.Size(), {
                { TYPE_VECTOR3, SEM_POSITION },
                { TYPE_VECTOR3, SEM_NORMAL },
                { TYPE_VECTOR4, SEM_TANGENT },
                { TYPE_VECTOR2, SEM_TEXCOORD }
            }, false);

… what we need to see is how you’ve set up your SEM_COLOR value. It is possible to set it up such that you’re able to go outside the normal range, but we’d have to see your code to know whether you’ve done that (i.e. { TYPE_VECTOR4, SEM_COLOR }).

Seeing how you set the value of a vertex in the buffer (don’t need the whole code, just whatever loop sets the buffer-data) would be useful as well. You shouldn’t be able to go outside of the 0-1 range unless you have intentionally or unknowingly set things up so that you can (there isn’t a right/wrong to it, it just makes it really hard to communicate what’s going on if it’s non-standard).

1.0f only gives me 10% color, which is very dark.

You’re using alpha, right? What’s your alpha value? You can’t hold the same value and have transparency; the alpha will make it appear darker.

When you’re getting the value your eyes say is correct are you also losing any transparency you previously had?

Screenshots would really help here.

What is this HDR/rgbm/etc deviation you speak of?

It’s not what you need, it’s using real-world or extended units for lighting/color/etc. A discipline in itself.