I have been able to do mouse picking with solid geometries.
However, while building a 3D editor, I now want to let the user pick individual lines and points on the screen. By default this does not seem to work: no matter what width or size the lines and points are given, they are never hit, as collision detection does not appear to be implemented for them.
One solution would be to perform collision detection with cylinders and spheres rather than lines and points. I am confident that this would work, but I have two issues with that solution:
The further away a line or point is, the harder it becomes to pick, since it appears smaller on screen. I would prefer that a line or point be pickable whenever the mouse is within a fixed number of pixels of it.
A complex model can put thousands of pickable lines and points on the screen at once, so I am concerned this solution may be inefficient.
Is anyone aware of other (better) methods to make mouse picking work with lines and points?
There are many uses for this functionality, but basically what I want to do is:
I have a list of lines and points that should be able to interact with the mouse. They interact when the mouse is within a specified number of pixels of the line or point. Sometimes, lines and points can be dragged by the mouse (with the actual effect depending on the situation). Sometimes, they should simply be clickable. There should also be a visual indication of when the mouse is close enough to a line or point to select it.
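The fixed-pixel-threshold behaviour described above can be tested in screen space: project each line's endpoints to 2D and measure the mouse's distance to the resulting segment. A minimal sketch in plain Java (the `distanceToSegment` helper is my own, not an engine API; it assumes the endpoints have already been projected to pixel coordinates):

```java
public class ScreenSpacePick {
    // Distance in pixels from mouse (mx, my) to segment (x1,y1)-(x2,y2).
    static double distanceToSegment(double mx, double my,
                                    double x1, double y1,
                                    double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        double lenSq = dx * dx + dy * dy;
        // Degenerate segment: treat it as a single point.
        double t = lenSq == 0 ? 0 : ((mx - x1) * dx + (my - y1) * dy) / lenSq;
        t = Math.max(0, Math.min(1, t));           // clamp to the segment
        double px = x1 + t * dx, py = y1 + t * dy; // closest point on segment
        return Math.hypot(mx - px, my - py);
    }

    public static void main(String[] args) {
        // Mouse at (5, 4), segment from (0,0) to (10,0): distance is 4 px.
        double d = distanceToSegment(5, 4, 0, 0, 10, 0);
        System.out.println(d <= 5.0); // within a 5-pixel pick threshold
    }
}
```

The same distance value can drive the visual hover indication: highlight the line whenever the distance falls below the threshold, independent of how far away the line is in 3D.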
Currently, for all geometry that has to interact with the mouse, I set up a separate node containing hidden geometries that are transformed in the same way as the actual scene being shown. This hidden geometry is then used for mouse picking. That way I control exactly which geometry is pickable and what the picking shape is (the picking geometry can thus be larger than the displayed geometry).
Ideally, I would add a line or point to this hidden geometry with a large line width or point size so that it can be picked easily. Unfortunately, that does not work, so I am looking for something that behaves in just the same way.
Your workaround doesn’t sound very efficient, but it should work…
You may be better off doing your own picking instead for this case. Just cast the ray from the screen through the click location and then use whatever logic you like to determine what lines/points/etc are affected by that ray.
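Once that ray exists, a point can be tested by its perpendicular distance to the ray. A hedged sketch in plain Java, using bare `double[]` triples as stand-in vectors rather than any engine type:

```java
public class RayPointPick {
    // Perpendicular distance from point p to a ray (origin o, direction d):
    // |cross(d, p - o)|. Assumes d is already normalized.
    static double rayToPointDistance(double[] o, double[] d, double[] p) {
        double vx = p[0] - o[0], vy = p[1] - o[1], vz = p[2] - o[2];
        double cx = d[1] * vz - d[2] * vy;
        double cy = d[2] * vx - d[0] * vz;
        double cz = d[0] * vy - d[1] * vx;
        return Math.sqrt(cx * cx + cy * cy + cz * cz);
    }

    public static void main(String[] args) {
        double[] origin = {0, 0, 0};
        double[] dir = {0, 0, 1};    // ray looking down +Z
        double[] point = {3, 4, 10}; // point off to the side of the ray
        System.out.println(rayToPointDistance(origin, dir, point)); // 5.0
    }
}
```

Note that this distance is in world units, so a fixed world-space tolerance reproduces the depth problem from the original post: distant points become harder to hit. For a fixed pixel tolerance, compare distances after projecting to screen space instead.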
Yes, I am experimenting with that too. It is not as practical though since then I have to duplicate the node structure. It’s more work and not as pretty, making me wonder if there was an elegant solution to this.
I'm trying to understand why you would have to duplicate your node structure.
Okay, so to be clear about what I’m doing, I am trying three approaches to the problem:
First I tried the simplest approach: put the lines and points in geometries whose meshes are in points or lines mode, with line widths and point sizes large enough to pick them easily. This appears not to work. (If it should, tell me!)
The second approach is to use cylinders and spheres instead of lines and points, so they have a solid body that can be picked. This works but is inefficient, and it doesn’t behave exactly as it should, since far-away geometry is harder to pick than intended.
The third approach is to treat the lines and points separately and write my own logic to determine which of them are hit. With this approach, a node tree consisting only of lines and points is generated. I have started this implementation, but it looks like a fair amount of work, so I wonder whether there is a simpler approach that keeps my code maintainable.
If it were me, I’d do my own traversal and collision testing… external to the Node/Geometry classes. On the way down you can do the same bounding volume elimination that the regular collision detection does. When you get to a leaf (a Geometry) you can either let the Geometry handle its own collision or if it’s one of your lines or points then you can do your own collision… which personally, I’d do in 2D space at that point.
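The traversal described above could look roughly like this. `Node` and `PointLeaf` are minimal stand-ins of my own invention, not engine classes; a real scene graph would supply the bounding volumes and the screen projection itself. The internal nodes here prune with a screen-space bounding circle, and the leaves do the 2D pixel-distance test:

```java
import java.util.ArrayList;
import java.util.List;

public class PickTraversal {
    interface Spatial {
        void pick(double mx, double my, double threshold, List<Spatial> hits);
    }

    static class Node implements Spatial {
        // Bounding circle of the subtree, already projected to screen pixels.
        double cx, cy, radius;
        List<Spatial> children = new ArrayList<>();
        Node(double cx, double cy, double radius) {
            this.cx = cx; this.cy = cy; this.radius = radius;
        }
        public void pick(double mx, double my, double threshold, List<Spatial> hits) {
            // Bounding-volume elimination: skip the whole subtree if the
            // mouse is farther from the bound than the pick threshold.
            if (Math.hypot(mx - cx, my - cy) > radius + threshold) return;
            for (Spatial c : children) c.pick(mx, my, threshold, hits);
        }
    }

    static class PointLeaf implements Spatial {
        double x, y; // screen-projected position of the point
        PointLeaf(double x, double y) { this.x = x; this.y = y; }
        public void pick(double mx, double my, double threshold, List<Spatial> hits) {
            if (Math.hypot(mx - x, my - y) <= threshold) hits.add(this);
        }
    }

    public static void main(String[] args) {
        Node root = new Node(50, 50, 80);
        root.children.add(new PointLeaf(52, 53));   // near the mouse
        root.children.add(new PointLeaf(100, 100)); // too far away
        List<Spatial> hits = new ArrayList<>();
        root.pick(50, 50, 5, hits);
        System.out.println(hits.size()); // only the near point is hit
    }
}
```

A line leaf would slot in the same way, replacing the point-distance check with a point-to-segment distance, and a solid-geometry leaf could simply delegate to the engine's own collision test.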
You could even signal that a geometry should use your alternate collision handling by sticking a boolean in the Geometry’s user data so you don’t have to have special subclasses or anything.
That sounds like a good option, which I had just started considering as well. It would be one “pickable” tree of nodes whose leaves can be geometry, lines, or points, with the lines and points handled as projected onto the screen and the geometry using its default collision check.