Voxel Style Point Cloud View

I am currently working on the DARPA Robotics Challenge, getting a humanoid robot to do crazy things. I am using JME for our operator user interface, and I am looking for some suggestions on making my 3D laser data view better/faster. Currently I am batching up 100,000 small boxes and using a custom rainbow shader that colors the boxes based on their height. I can get about 50,000 points out there before things start becoming a little slow.
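The height-to-rainbow mapping described above can be sketched in plain Java. The class and method names here are hypothetical, and the real work would happen in a GLSL fragment shader, but the color math is the same: normalize the height into [0,1] and sweep the hue from blue (low) to red (high).

```java
import java.awt.Color;

public class RainbowHeight {
    /**
     * Maps a height value to a rainbow color by sweeping the hue
     * from blue (low) to red (high). In the actual view this mapping
     * would run per-fragment in a shader; this is just the math.
     */
    static Color colorForHeight(float y, float minY, float maxY) {
        float t = (y - minY) / (maxY - minY);   // normalize to [0,1]
        t = Math.max(0f, Math.min(1f, t));      // clamp out-of-range heights
        float hue = (1f - t) * (240f / 360f);   // 240 degrees = blue, 0 = red
        return Color.getHSBColor(hue, 1f, 1f);
    }

    public static void main(String[] args) {
        System.out.println(colorForHeight(0f, 0f, 2f)); // lowest point: blue
        System.out.println(colorForHeight(2f, 0f, 2f)); // highest point: red
    }
}
```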

The following video shows more of how our interface works.

At 2:40 you can see how we are currently displaying our lidar point cloud, using a combined quadtree/octree.

And this is how I would like things to look:

I would like to be able to display at least 100,000 points, adding and removing points quickly in real time.
Is there anything out there currently that would be good for this situation?
Any suggestions on what I should be doing / where I should be looking?
If anyone out there wants to write some code up real quick, there is a good chance your code could help run an awesome robot :slight_smile:

This is a video showing some of the work we did for the first part of this competition, including some of our JME interface.


This is really cool!

Have you tried creating a custom mesh instead of batching cubes? I think a custom point cloud mesh would be faster to render.


Also, I see that you only have colored cubes (no texture) and large bands of cubes with the same color. I have a lib which batches coplanar faces into a single one.
But as it's not entirely stable yet, I think you should consider other options, like using a cube library (there are some good cube libraries out there). If you really need mine, I will try to make a release of it.

I don't know if you can use this, but I remembered a recent topic which needed many boxes as well; maybe it's of some use to you: http://hub.jmonkeyengine.org/forum/topic/attach-thousands-of-childs-to-a-node-and-optimize-fps/

@kwando said: This is really cool! Have you tried creating a custom mesh instead of batching cubes? I think a custom point cloud mesh would be faster to render.

Do you mean make a mesh with each point in the cloud being a vertex and set the mesh mode to Points? If so, I have done this and I am able to view 2 million points easily. The problem is that I also need to interact with this point cloud, so a solid surface, like I get with boxes, is better.
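For reference, the Points-mode approach comes down to packing every scan point into the mesh's position buffer. This is a minimal plain-Java sketch of that packing step (the class and method names are made up for illustration); in JME you would hand a buffer like this to a `Mesh` with its mode set to `Mesh.Mode.Points`, normally using a direct buffer from `BufferUtils` rather than `FloatBuffer.allocate`.

```java
import java.nio.FloatBuffer;

public class PointBuffer {
    /**
     * Packs x,y,z triples into a FloatBuffer with the interleaved
     * layout a Points-mode mesh expects for its position buffer.
     * (JME wants a direct buffer; allocate() is used here only to
     * keep the sketch stdlib-only.)
     */
    static FloatBuffer packPositions(float[][] points) {
        FloatBuffer buf = FloatBuffer.allocate(points.length * 3);
        for (float[] p : points) {
            buf.put(p[0]).put(p[1]).put(p[2]);
        }
        buf.flip(); // rewind so the renderer reads from the start
        return buf;
    }
}
```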

@bubuche, is there a cube library that you would recommend?

@jcarff said: Do you mean make a mesh with each point in the cloud being a vertex and set the mesh mode to Points? If so, I have done this and I am able to view 2 million points easily. The problem is that I also need to interact with this point cloud, so a solid surface, like I get with boxes, is better.

@bubuche, is there a cube library that you would recommend?

"Interact"? Like with JME's default collideWith()?

That will generate a ton of collision data, when you could probably do your collision checks yourself on the original data instead. I'm presuming your original data is organized in some efficient form, but I may be over-assuming.

@pspeed said: "Interact"? Like with JME's default collideWith()?

That will generate a ton of collision data, when you could probably do your collision checks yourself on the original data instead. I'm presuming your original data is organized in some efficient form, but I may be over-assuming.

Interactions in this case are just simple mouse clicks into the world. We do have our data organized, but I am using the JME collisions, as that is what we used when we started the project and it has not caused any problems/slowdowns for us yet.

I found the "Cubes" plugin; not sure if there are any other cube worlds out there for JME. I will be setting this up and running some tests to see if it meets my needs. Let me know if any of you have any other ideas/suggestions. Thank you all.

@jcarff said: Interactions in this case are just simple mouse clicks into the world. We do have our data organized, but I am using the JME collisions, as that is what we used when we started the project and it has not caused any problems/slowdowns for us yet.

I found the "Cubes" plugin; not sure if there are any other cube worlds out there for JME. I will be setting this up and running some tests to see if it meets my needs. Let me know if any of you have any other ideas/suggestions. Thank you all.

Well, when you do collideWith() on a mesh, it will generate a bunch of collision data that partitions the space into a bounding volume hierarchy of triangles. For a point cloud, I expect this to be pretty expensive in both time and size.

Whereas if you already have spatially organized source data then you could just do a ray intersection directly with that. Ray->sphere collisions should be pretty quick. Then you can go back to supporting millions of points again.
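The ray->sphere check suggested above is cheap enough to run directly over the organized point data. This is a minimal stand-alone sketch (names are hypothetical, and a real implementation would use the octree to cull candidates before testing individual points):

```java
public class RaySphere {
    /**
     * Returns true if a ray (origin o, normalized direction d) passes
     * within 'radius' of scan point c -- i.e. the ray hits a small
     * sphere around the point. Running this over the source data
     * avoids building a triangle collision tree for the whole cloud.
     */
    static boolean rayHitsPoint(float[] o, float[] d, float[] c, float radius) {
        // vector from the ray origin to the point
        float vx = c[0] - o[0], vy = c[1] - o[1], vz = c[2] - o[2];
        // project it onto the ray direction to find the closest approach
        float t = vx * d[0] + vy * d[1] + vz * d[2];
        if (t < 0) return false; // the point is behind the ray origin
        // closest point on the ray, then squared distance to c
        float px = o[0] + t * d[0], py = o[1] + t * d[1], pz = o[2] + t * d[2];
        float dx = c[0] - px, dy = c[1] - py, dz = c[2] - pz;
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }
}
```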

Ok, so going back a couple of steps:

Do you mean make a mesh with each point in the cloud being a vertex and set the mesh mode to Points? If so, I have done this and I am able to view 2 million points easily. The problem is that I also need to interact with this point cloud, so a solid surface, like I get with boxes, is better.
Ignoring the interaction part: yes, it can be done through the data, and if it becomes a problem I will switch over to using that data. The bigger problem for me with using points is that, to visualize what you are seeing, the points really need some thickness in order to give a sense of depth. If you have a scan of a drill sitting on a table, even if the drill is covered in thousands of scan points, it is hard to distinguish what you are seeing when points from the table show through that scan. If I use sprites I can add "thickness" to the points, but then I run into a sorting problem: when looking at sprites I cannot tell which sprite is in front of the other, which also makes it nearly impossible to judge depth.

You can also batch quads that you rotate to face the camera in the shader. It’s a nice compromise when point sprites won’t work. It takes more space than the points on their own but much less space than full boxes would.
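The quad-batching idea comes down to expanding each point into four corners offset along the camera's right and up vectors. A stand-alone sketch of that corner math follows (names are made up; in practice you would do this in a vertex shader so the batched mesh never needs re-uploading when the camera moves):

```java
public class BillboardQuad {
    /**
     * Expands a point into the four corners of a camera-facing quad,
     * offset along the camera's (unit) right and up vectors. The same
     * math, run per-vertex in a shader, turns a static batch of quads
     * into billboards without touching the mesh on the CPU.
     */
    static float[][] corners(float[] center, float[] right, float[] up, float halfSize) {
        float[][] out = new float[4][3];
        int[][] signs = { {-1, -1}, {1, -1}, {1, 1}, {-1, 1} }; // CCW winding
        for (int i = 0; i < 4; i++) {
            for (int k = 0; k < 3; k++) {
                out[i][k] = center[k]
                        + signs[i][0] * halfSize * right[k]
                        + signs[i][1] * halfSize * up[k];
            }
        }
        return out;
    }
}
```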

Actually, I'm not sure why point sprites are an issue. I was pretty sure that they behave properly when depth writing is on. As long as depth write and depth test are enabled for the point sprite material, and no semi-transparent pixels are in play, there should be no issues with rendering order.

A cube has 6 sides, 2 triangles each, and there are 100K of them... that's 1.2M triangles. Plus all the other triangles for rendering the robot and the rest of the scene.

This might be a scenario where you just have to throw more hardware at it. Also, I noticed you're using Ubuntu. I love Ubuntu, but the 3D drivers aren't as fast as on Windows.

Ubuntu was a requirement at that stage of the competition because of limitations DARPA put on us for connecting to their remote simulators; we are now running everything on Windows. I will run a few more tests with sprites and see if I can't find what I am doing wrong.

@jcarff said: Ubuntu was a requirement at that stage of the competition because of limitations DARPA put on us for connecting to their remote simulators; we are now running everything on Windows. I will run a few more tests with sprites and see if I can't find what I am doing wrong.

If you can post a test case (assuming you don’t spot your problem right away) then we can also help look at it.