3D Intersection and HUD functionality? (Urho3D Beginner)


#41

Does the sample code I posted earlier not work for you?


#42

No, your code rotates the camera. I want to move the camera by revolving it (the moon) around a center point (the earth). Luckily, UrhoSharp contains cameraNode.RotateAround(Vector3, Quaternion), which automatically rotates the camera to keep it pointing at the Vector3 point while simultaneously REVOLVING the camera around that point using the Quaternion, keeping the same distance.

But I need to test that the camera doesn’t rotate (Edit: I mean REVOLVE, not rotate, since the rotation happens automatically) over the top or bottom past the Y axis of the center point, or else the camera and image will end up upside down. That’s why I wanted to test the movement of the camera in advance by assigning it to a Vector3 and rotating that using quaternion multiplication, canceling the camera rotation if it moves past the axis. That’s still difficult, since direct multiplication requires me to calculate the rotation axis at any point, instead of letting RotateAround do it for me.

I’m still not sure this is easily possible when I can only calculate angles within 90 degrees, since I can’t tell which side of the axis a given angle (for example 89 degrees) is on. But C++ code using more than 90 degrees could still help.
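As a rough sketch of that “cancel if it goes past the axis” idea (untested; the node and target names are placeholders): apply RotateAround first, then check whether the camera has come too close to the pole and roll the transform back if it has. That way RotateAround still computes the rotation axis, and no angle bookkeeping is needed:

    void SafeRotateAround(Node cameraNode, Vector3 target, Quaternion delta)
    {
        // Remember the transform so the revolve can be undone.
        Vector3 oldPos = cameraNode.Position;
        Quaternion oldRot = cameraNode.Rotation;

        cameraNode.RotateAround(target, delta, TransformSpace.World);

        // Direction from the center point out to the camera: if it is nearly
        // parallel to the world Y axis, the camera is about to cross the pole
        // and flip upside down, so cancel the revolve.
        Vector3 toCamera = Vector3.Normalize(cameraNode.Position - target);
        if (Math.Abs(Vector3.Dot(toCamera, Vector3.UnitY)) > 0.99f)
        {
            cameraNode.Position = oldPos;
            cameraNode.Rotation = oldRot;
        }
    }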


#43

Try adding a SetPosition(targetPosition - cameraDirection * distance) after setting the rotation.
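Something like this (a sketch only; newRotation, targetPosition, and distance stand in for whatever your input handling computes):

    // Point the camera first, then put it back on the orbit sphere by
    // stepping out from the target along the new view direction.
    cameraNode.Rotation = newRotation;
    cameraNode.Position = targetPosition - cameraNode.Direction * distance;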


#44

So I want to select 3D objects in my Android app using touch. If Box A is the main object and sits inside Box B, then when I touch Box A it will be highlighted even though it is enclosed in Box B. But if I touch it again, the selection should be smart enough to know I want Box B instead (because the wall of Box B is between my finger and Box A).

So, a simple task that has been done many times before. lezak mentioned in a post above to use raycasting. Does anybody have some tips and examples to start with? Mouse clicks instead of touch would be fine too.
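Roughly the behavior I’m after, if I understand the raycasting idea (untested; I’m guessing Octree.Raycast takes the same parameters as RaycastSingle and returns all hits nearest-first, lastSelected is a field I’d keep between touches, and ToList via System.Linq just normalizes whatever collection comes back):

    Ray ray = camera.GetScreenRay((float)pos.X / Graphics.Width, (float)pos.Y / Graphics.Height);
    // Raycast (unlike RaycastSingle) returns every drawable along the ray, nearest first.
    var hits = scene.GetComponent<Octree>().Raycast(ray, RayQueryLevel.Triangle, 250f, DrawableFlags.Geometry).ToList();
    if (hits.Count > 0)
    {
        int index = hits.FindIndex(r => r.Drawable == lastSelected);
        // A fresh touch (index == -1) picks the deepest hit, i.e. the inner
        // main object; each repeated touch steps one hit nearer (Box B's
        // wall), wrapping around when it reaches the front.
        lastSelected = (index <= 0) ? hits[hits.Count - 1].Drawable
                                    : hits[index - 1].Drawable;
    }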


#45

I explored raycasting a bit here.

The goal for that was to be able to raycast individual components and leave them in a raycasted state where their own processing routine takes over to determine when the component can be raycasted again, and what to do once raycasted.

This pattern worked well for my particular case, but will likely need adjusting for other uses.

Also, I queued up raycast events so that as much processing as possible happens on background threads, but while the queued events were awaiting processing, I didn’t want the same object raycasted again with another event queued up for it.

The PhysicsWorld component also offers SphereCast and ConvexCast.

Also, just reviewing that pattern again, I now realize that the update should change the viewmask before queueing the event up, so raycasting can continue immediately rather than inserting a 0.1 second delay.
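Roughly, the adjusted pattern would look like this (names are placeholders, and the raycast query would then pass RaycastableBit as its viewMask):

    const uint RaycastableBit = 0x01000000;  // bit reserved for "may be raycasted"

    void OnRaycastHit(Drawable drawable)
    {
        // Clear the bit before queueing: the very next raycast already
        // ignores this drawable, so no artificial delay is needed.
        drawable.ViewMask &= ~RaycastableBit;
        pendingEvents.Enqueue(drawable);  // processed later on a background thread
    }

    void OnEventProcessed(Drawable drawable)
    {
        // The component's own routine decides when it may be hit again.
        drawable.ViewMask |= RaycastableBit;
    }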


#46

I can’t get this raycasting stuff to work. RaycastSingle always returns null and Raycast always returns a 0-size list. I thought it’s supposed to return the drawable of the node owning the first triangle it hits in the direction of the Ray? Do I need to set up a certain bounding box? I only have bounding boxes for all my Geometries, and they go from (-10,-10,-10) to (10,10,10). But I noticed the objects disappear when I zoom in if I make the bounding boxes smaller.


#47

Be sure your raycast distance is adequate for the object placement. Maybe the objects are 10f away and you are raycasting only 5f. Or perhaps the masks are set so the objects can’t be raycasted.

RaycastSingle returns a single result or none, provided the object is within distance and can be raycasted.
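For example, spell both limits out explicitly to rule them out (values are placeholders; I believe the binding also accepts a viewMask argument after the drawable flags):

    // Generous distance plus an all-ones view mask: if this still misses,
    // the problem is the ray or the geometry, not the query limits.
    var result = octree.RaycastSingle(cameraRay, RayQueryLevel.Triangle, 1000f,
                                      DrawableFlags.Geometry, uint.MaxValue);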


#48

@Kronix Could you maybe share some of your ray casting code?


#49
TouchState state;
Vector3 hitPos;
Drawable hitDrawable;

if (input.NumTouches > 0) state = input.GetTouch(0); else return;
IntVector2 pos = state.Position;
Ray cameraRay = camera.GetScreenRay((float)pos.X / Graphics.Width, (float)pos.Y / Graphics.Height);
var result = scene.GetComponent<Octree>().RaycastSingle(cameraRay, RayQueryLevel.Triangle, 250f, DrawableFlags.Geometry);
if (result != null)
{
    hitPos = result.Value.Position;
    hitDrawable = result.Value.Drawable;
    hitDrawable.Node.Translate(new Vector3(0.1f, 0, 0));
}

It should move a box to the right when I touch it. My camera is 20 units back. The models have a bounding box from (-10,-10,-10) to (10,10,10), although the models themselves only span (0, 0, 0) to about (2, 2, 3). result is always null! I haven’t changed any ViewMasks anywhere in my code.


#50

I’m used to these raycast methods accepting a RayOctreeQuery&, which includes a PODVector<RayQueryResult>&. This could be a Xamarin binding difference for all I know.
What’s the return type of this RaycastSingle function you’re using?


#51


'tis indeed RayQueryResult

Edit: Are BoundingBoxes necessary for raycasting? And do I only need to set one on the Model object? Does Model.NumGeometries matter?

Edit: Here’s my code for creating the objects. Nothing here preventing raycasting, right?

            VertexBuffer edgeBuffer = new VertexBuffer(Urho.Application.CurrentContext, false);
            edgeBuffer.SetSize((uint)triVertices.Count, ElementMask.Position | ElementMask.Normal, false);
            edgeBuffer.SetData(triVertices.ToArray());

            Geometry edgeGeometry = new Geometry();
            edgeGeometry.SetVertexBuffer(0, edgeBuffer);
            edgeGeometry.SetDrawRange(PrimitiveType.TriangleList, 0, 0, 0, (uint)triVertices.Count, true);

            Model edgeModel = new Model();
            edgeModel.NumGeometries = 1;
            edgeModel.SetGeometry(0, 0, edgeGeometry);
            edgeModel.BoundingBox = new BoundingBox(new Vector3(-10, -10, -10), new Vector3(10, 10, 10));

            mainNode.RemoveChild(mainNode.GetChild("edgeNode1"));
            Node edgeNode = mainNode.CreateChild("edgeNode1");
            StaticModel edge = edgeNode.CreateComponent<StaticModel>();
            edge.Model = edgeModel;

            Material material = Material.FromColor(solidColor);
            edge.SetMaterial(material);

Edit: Added a plane using CoreAssets.Models.Plane, and the raycast against the plane works, but still not against the other objects created from vertices. Here’s the code I added. Why would it work for this but not for the code above?

Node planeNode = mainNode.CreateChild("planeNode1");
var plane = planeNode.CreateComponent<StaticModel>();
plane.Model = CoreAssets.Models.Plane;

#52

Just reviewing my code: everything I raycast is a StaticModel or AnimatedModel (which inherits from StaticModel). When something isn’t detected, it’s usually because I’ve set a viewMask, or the ray is missing the object for some reason.

You could put a StaticModel in the same place as that object and see if it gets detected.


#53

Did you see the last edit in my previous post? It looks like all of your models are loaded pre-made. Is it possible that manually created models like mine have a different default ViewMask?


#54

Set the viewMask for both the raycast and the object, just to be sure.


#55

Already tried that. I tried setting some to 0x70000000 and some to 0x60000000, and tried setting the raycast ViewMask to both of those. No effect.


#56

Perhaps put in some big floating objects, like cubes, get that working, and then revert to your custom geometry? If the plane raycast works, other objects should work too. I’d also check the values coming from state.Position; perhaps they’re not the values you’re expecting. I mean, what if (float)pos.X is already scaled to the Graphics size? Then you would be raycasting to a different place than you expect, but the plane is big enough to still be hit.


#57

My plane is default size, not big. I just replaced a bunch of things with built-in cylinders. That works, so the touch state position is fine. My self-made geometry still doesn’t work. Btw, I’m using Geometry, not CustomGeometry.

I wonder if SetSize, SetVertexBuffer, or SetGeometry in my code above needs different options passed.


#58

Here’s an example of a model being built from vertex data, and these models work with raycasts.


#59

I figured out the problem: I needed to enable the vertex buffer’s Shadowed property, which keeps a CPU-side copy of the vertex data that triangle-level raycasts read. And it needs to be set before SetSize.
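For reference, the fixed buffer setup from my earlier post (everything else unchanged):

    VertexBuffer edgeBuffer = new VertexBuffer(Urho.Application.CurrentContext, false);
    edgeBuffer.Shadowed = true;   // must come before SetSize
    edgeBuffer.SetSize((uint)triVertices.Count, ElementMask.Position | ElementMask.Normal, false);
    edgeBuffer.SetData(triVertices.ToArray());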

Now I am trying to figure out how to drag the 3D node after I’ve raycasted it, while the finger is moving, until the finger is lifted. I used camera.GetScreenRay to touch the node:

  Ray cameraRay = camera.GetScreenRay((float)pos.X / Graphics.Width, (float)pos.Y / Graphics.Height);
  Ray oldCameraRay = camera.GetScreenRay((float)lastPos.X / Graphics.Width, (float)lastPos.Y / Graphics.Height);

pos and lastPos contain the coordinates of the current and last finger touches. How do I find the difference in world coordinates between the current finger touch and the last one? I want to convert the rays into vectors, find the difference in world coordinates, and then translate the node by that difference. Only X and Y, of course, since that’s all touch can sense. And the coordinates need to be measured relative to the point where the finger first touched the model.
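Roughly what I have in mind, if it works (assuming I save the Distance from the original RayQueryResult as hitDistance when the node is grabbed):

    // World positions of the old and new finger rays at the grab depth;
    // keeping the distance constant keeps the node at the same depth.
    Vector3 newPoint = cameraRay.Origin + cameraRay.Direction * hitDistance;
    Vector3 oldPoint = oldCameraRay.Origin + oldCameraRay.Direction * hitDistance;
    // Translate by the world-space difference, not in the node's local space.
    hitDrawable.Node.Translate(newPoint - oldPoint, TransformSpace.World);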


#60

You could do it the same way as a HoloLens air tap: air-tap and hold to grab, drag, then air-tap to release.