I’ve gotten myself into a bit of a corner with quaternions, and I could use some input.
What I want:
I want a 3D transform widget for moving entities by dragging them. For now that’s just 3 arrows, each along one of the cardinal axes. A click and drag on one of the arrows should move the entity along that arrow’s axis.
How I’m approaching it:
The widget should be centered at the model’s center, but it needs to be visible and clickable at all times (it can’t intersect the model). That means the widget needs to go in the GUI node and track the screen-space projected position/orientation of the model it controls.
What I have:
Node entityModelRoot = ...; // get entity model node from entity system
// getScreenCoordinates() returns pixel x/y (plus projected depth in z), so it can drive a guiNode translation directly
setLocalTranslation(cam.getScreenCoordinates(entityModelRoot.getWorldTranslation()));
setLocalRotation(...); // Got nothing for this one here
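For context, here’s roughly how I have the tracking wired up: a small control attached to the widget node in the guiNode that re-projects every frame. This is just a sketch; the class and field names are mine, not from any existing library.

import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Node;
import com.jme3.scene.control.AbstractControl;

// Hypothetical control attached to the widget node living in the guiNode.
// Each frame it re-projects the tracked entity's world position into screen
// coordinates so the widget stays glued to the entity on screen.
public class WidgetTrackingControl extends AbstractControl {

    private final Camera cam;
    private final Node entityModelRoot; // the entity's model node, from the entity system

    public WidgetTrackingControl(Camera cam, Node entityModelRoot) {
        this.cam = cam;
        this.entityModelRoot = entityModelRoot;
    }

    @Override
    protected void controlUpdate(float tpf) {
        // World position -> screen position (x/y in pixels, z is projected depth in [0,1])
        Vector3f screenPos = cam.getScreenCoordinates(entityModelRoot.getWorldTranslation());
        spatial.setLocalTranslation(screenPos);
        // Rotation is the part that's still unsolved (see below).
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // nothing to do at render time
    }
}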
What works:
The position of the widget tracks the entity perfectly.
What doesn’t work:
I’ve tried about a dozen approaches for transforming the rotation into screen space (including multiplying the model’s world rotation by the ViewProjection matrix) but I haven’t figured out how to get it quite right.
Multiplying a Quaternion by IDENTITY just gives you a copy of the original, so forget that.
One approach would be to convert the Quaternion into 3 basis vectors, then transform those vectors from world coordinates to screen coordinates, using only the rotation portion of the transform, not the translation or scaling.
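Something like this, roughly (untested, just to illustrate the idea; the helper name and the choice of worldToScreenRot are up to you):

import com.jme3.math.Matrix3f;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;

public class RotationProjection {

    // Rebuild a rotation for the widget by pushing the object's basis vectors
    // through a world-to-"screen" rotation. worldToScreenRot is whatever rotation
    // you think maps world space into the space the widget lives in (for example
    // the rotation part of the view matrix); picking the right one is the hard part.
    public static Quaternion toScreenRotation(Quaternion worldRot, Matrix3f worldToScreenRot) {
        // The columns of the rotation are the object's local axes expressed in world space.
        Vector3f x = worldRot.getRotationColumn(0);
        Vector3f y = worldRot.getRotationColumn(1);
        Vector3f z = worldRot.getRotationColumn(2);

        // Rotate each axis into the target space: rotation only, no translation or scale.
        x = worldToScreenRot.mult(x).normalizeLocal();
        y = worldToScreenRot.mult(y).normalizeLocal();
        z = worldToScreenRot.mult(z).normalizeLocal();

        // Re-orthogonalize to guard against numerical drift, then rebuild the quaternion.
        z.set(x.cross(y)).normalizeLocal();
        y.set(z.cross(x)).normalizeLocal();

        Quaternion result = new Quaternion();
        result.fromAxes(x, y, z);
        return result;
    }
}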
The multiplication by IDENTITY wasn’t meant to be posted; I was experimenting with a value I already knew to see how it behaved, and I accidentally posted before I meant to.
I like the basis vectors approach! I haven’t tried that yet; I’ll give it a shot now.
It’s close, but as I orbit the camera around the model, the widget’s axes don’t stay aligned with the corresponding object-space axes. My guess is that the ViewProjection matrix is the wrong choice here, but since that’s what’s used to transform world space into screen space, I don’t see why it would be wrong.
Edit: I tried using just the View matrix instead of the ViewProjection matrix (I don’t think that’s mathematically correct, but I wanted to see what would happen) and it’s much better, but still not quite right: as I pan around, in some conditions the arrows appear to “drift” out of alignment, and they frequently look like they’re projected wrong (which makes sense, since the projection matrix isn’t accounted for).
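For reference, the View-matrix-only variant I described looks roughly like this (a sketch, reusing the toScreenRotation helper from the post above):

// The rotation part of the view matrix maps world-space directions into camera space.
Matrix3f viewRot = cam.getViewMatrix().toRotationMatrix();
Quaternion widgetRot = RotationProjection.toScreenRotation(entityModelRoot.getWorldRotation(), viewRot);
setLocalRotation(widgetRot);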
Thank you! I remembered seeing a transform widget just like this one somewhere, but I didn’t remember that it was part of Spix. This approach works very nicely; I briefly considered something like this but thought I’d have to resort to a custom shader to force it to draw over other objects in all circumstances. I also wasn’t aware of Lemur’s pick layers… that’ll be very handy!
I suspect I’ll end up settling on this approach, as it’s a lot simpler and cleaner than the GUI-node overlay, but I’m still intrigued by what went wrong with my attempts to transform world-space rotations into screen space.
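In case it’s useful to anyone finding this thread later, the “always visible” part needs no custom shader; stock render state is enough. A rough sketch (the method name is mine; the material path is the standard Unshaded matdef):

import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.scene.Geometry;

public class WidgetMaterials {

    // Give an arrow geometry a material that ignores the depth buffer, so the
    // widget draws over the model even when it intersects it, and put it in a
    // bucket that renders after the opaque scene. Lemur's pick layers handle
    // the picking side separately and aren't shown here.
    public static void makeAlwaysVisible(Geometry arrow, AssetManager assetManager, ColorRGBA color) {
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", color);
        mat.getAdditionalRenderState().setDepthTest(false);  // never hidden behind the model
        mat.getAdditionalRenderState().setDepthWrite(false); // don't punch holes in the depth buffer
        arrow.setMaterial(mat);
        arrow.setQueueBucket(RenderQueue.Bucket.Translucent); // drawn after the opaque geometry
    }
}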
I noticed that projection does play a part… but wouldn’t transforming something through a perspective projection and then viewing it through an orthographic projection look the same as just viewing it through the first perspective projection?
I just know that if you try to make a 3D widget in the guiNode match the 3D object in the 3D scene, the axes aren’t going to match. I don’t know how that would ever look sensible, really.
I mean, if you think of looking at an x/y/z axes widget at 45 degrees, 45 degrees… the near tips are going to look farther apart than the far tips (in perspective 3D space). Under an orthographic projection, they won’t.
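As a rough back-of-the-envelope illustration: put the widget 10 units in front of the camera with unit-length arms, so one tip sits at depth 9 and the opposite tip at depth 11. Under a perspective projection, equal lateral offsets at those depths scale roughly as 1/9 versus 1/11, so the near arm looks about 20% longer on screen; under an orthographic projection both project identically, which is why a guiNode copy can never line up exactly with the perspective view.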
That makes sense. I was hoping to find a path forward where that wouldn’t be the case, but at any rate using Lemur’s pick layers with an always-visible material is a much more straightforward solution.