How to Create a Mesh from Cloud Points

@Ogli said:
Here are the first results:
- interface
- implementation, part 1
- implementation, part 2
- a simple test application (2 million random points)

Even with this very early brute-force technique I get the maximum framerate on my two-year-old machine.
I simply used point sprites - and hardware culling speeds things up, as I saw during the test.

The only thing I struggled with and could not fix yet: the points appear semi-transparent. I would have liked them to be fully opaque.

Also, they are currently squares, not circles. This is by design (point sprites are square) and could be fixed easily. There are two options for a fix: write a simple shader that treats fragments with (u*u + v*v) > 1.0 as transparent, or simply use the particle material with a texture that shows a circle. As I just said - both are very simple ways to turn squares into circles...
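The shader option can be sketched in plain Java (class and method names are made up for illustration). This is the same test a GLSL fragment shader would perform on gl_PointCoord: remap the sprite coordinates from [0,1] to [-1,1] and treat everything outside the unit circle as transparent.

```java
// Sketch of the per-fragment circle test a point-sprite shader would do.
// In GLSL this would read gl_PointCoord (components in [0,1]) and
// discard (or zero the alpha of) fragments outside the unit circle.
public class PointSpriteCircleTest {
    /** u, v in [0,1], as gl_PointCoord would provide them. */
    public static boolean isTransparent(float u, float v) {
        // remap from [0,1] to [-1,1] so the circle is centered on the sprite
        float x = u * 2f - 1f;
        float y = v * 2f - 1f;
        return (x * x + y * y) > 1.0f;
    }
}
```

The sprite corners (e.g. u = 0, v = 0) fail the test and become transparent, while the center and the inscribed circle stay opaque.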


what do you mean by "particle material", please? jmonkey doesn't make any equivalent to gl_PointCoords available to shaders
@hcavra said:
what do you mean by "particle material", please? jmonkey doesn't make any equivalent to gl_PointCoords available to shaders

This is false.
See RenderState.setPointSprite().
@hcavra said:
what do you mean by "particle material", please? jmonkey doesn't make any equivalent to gl_PointCoords available to shaders

I'm not sure where you got this information, actually. You do know the source code is available, and Google Code easily allows you to search it?

Notice this from the particle material:

Notice the use of: gl_PointCoord

It's only another quick step to find the jME tests that use this material... I provided that in the other thread, though. It was taking apart that example and related classes that let me write my own point-sprite shaders - of which I have at least four or so.

hmmm. alright, many thanks, guys.

Registered just to say thanks to Ogli. I struggled to find a way to plot points without creating a new object for each one until finding this.

Code is simple to use and now I am using it for a robotics project with the Kinect while keeping all the niceties of Java.


Hello @Ogli,
I’m using your cloud generator here: GitHub - metaquanta/jove: OpenCV visualization in jMonkey

thanks a lot!

@metaquanta said: Hello @Ogli, I'm using your cloud generator here:

thanks a lot!

Visualization is a nice and useful application of render engines like jME. :slight_smile:
Both science and industry can make use of such visualization tools.
I wish you merry coding times and success!

Thank you @Ogli!!

I’ve been looking for weeks for a way to do point cloud segmentation in Java - so desperate that I had already started learning C++ just to use the PCL. This saved me months!!

Do you know how I can convert pc. files to a float array? I’m trying to parse .ply files but I find this difficult…

Do you have a screenshot of what you are doing there? :wink:
I’m still trying to picture it, as I’m not familiar with this.

It’s for object recognition. I’ve no implementation yet, but this will give you a good idea of what I will be doing:


Hello dobr,

I’m sorry, but I haven’t done any more work on point clouds - though I had a lot of ideas for this back then.
It seems this is what you need for your object recognition - some 3D points with color.
You would typically have a vast number of points - the recently scanned Christ the Redeemer statue in Rio consists of over 100,000,000 points.
To render this efficiently, you would implement an LOD (Level Of Detail) algorithm that does not visualize every one of those points, but only the most important ones.
Typically I would use color to indicate density (red means many points, green means few points), but since you probably need the color channel for... well... your photo colors :slight_smile: this is not possible. So instead, fuse some points in a binary tree or something like that, and also segment your point cloud into cubes (an octree or grid-based approach). In other words: your point cloud consists of several boxes that each hold only part of the whole cloud, so that the in-game virtual camera can clip out the boxes that are currently not visible. Then keep each box in several versions: full points (e.g. 10,742 points in a particular box), a 50% version (only 5,371 points), a 25% version (2,685), a 12.5% version (1,342), and so on.
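A minimal sketch of the "box with LOD versions" idea, with hypothetical class and method names: each box keeps its full point list and can hand out progressively halved subsets for rendering at lower detail.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a box of the segmented cloud that serves
// its points at several levels of detail by uniform subsampling.
public class PointBox {
    final List<float[]> fullPoints = new ArrayList<>(); // each entry is {x, y, z}

    /** Returns every 2^level-th point: level 0 = 100%, 1 = 50%, 2 = 25%, ... */
    public List<float[]> lodPoints(int level) {
        int step = 1 << level;
        List<float[]> out = new ArrayList<>();
        for (int i = 0; i < fullPoints.size(); i += step) {
            out.add(fullPoints.get(i));
        }
        return out;
    }
}
```

A real implementation would pick the "most important" points rather than every n-th one, and would precompute the LOD lists instead of rebuilding them per frame, but the halving scheme is the same.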

About those file formats: I have not looked into them myself. First determine whether you can find some info on the format (e.g. is it binary or text, what data is stored, and how it is stored), then read the file according to that format (Java can read both binary and text files; your loader will fill an internal data structure that represents the boxes and their LOD versions).
It might also help to look for an existing loader for that format on the internet.
But what you really need is a solid understanding of the file format. (If I were to design such a format, I would incorporate the ideas of ‘boxes’ and ‘LOD’, but also provide a brute-force (raw) format with simply all points in one long list, maybe with a file header: magic number, version number, minimum and maximum 3D coordinates, number of points stored, and such.)

Hope that helps,


Hi @Ogli, thanks a lot for your feedback!!

The file reading is working now. What I’m currently doing is basically stripping all the additional information from the file (.txt) manually, so that I’m left with just the points (like the raw format you mentioned). Fortunately, the formats are well documented, so I’m confident I’ll get it working without cheating soon.

Well, that’s pretty cool. Wish you luck and success.

I was experimenting with my Kinect 2, with a cheap photogrammetry program, and with the free 123D Catch cloud service. Unfortunately, the more sophisticated tools like Agisoft PhotoScan cost a lot of money. :smile:

Another hint for the ‘box segmentation’:
It would be good to fuse boxes hierarchically, since it doesn’t make much sense to render boxes with fewer than a few hundred or a few thousand points (2^16 is a full buffer in jME3, which means 65k points). You always try to make large batches. An octree is such a fuse-up (or split-down) structure - e.g. eight maximum-resolution boxes with about 10k points each fuse into one parent box with 10k points, then eight parent boxes fuse into one grandparent box, and so on.
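The hierarchical fusing could be sketched like this (hypothetical names; uniform subsampling stands in for a smarter point-selection strategy): eight child boxes merge into one parent that keeps roughly as many points as a single child.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of hierarchical octree fusing: merge the children's points,
// then subsample back down so the parent is about one child's size.
public class OctreeFuse {
    public static List<float[]> fuseChildren(List<List<float[]>> children, int targetSize) {
        List<float[]> merged = new ArrayList<>();
        for (List<float[]> c : children) merged.addAll(c);
        if (merged.size() <= targetSize) return merged; // nothing to thin out
        List<float[]> fused = new ArrayList<>(targetSize);
        double step = (double) merged.size() / targetSize;
        for (int i = 0; i < targetSize; i++) {
            fused.add(merged.get((int) (i * step))); // keep every ~8th point
        }
        return fused;
    }
}
```

Applied recursively, this gives each level of the tree a batch of constant size, which is what keeps the draw calls large regardless of camera distance.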

Also run some tests with raw rendering on your machine first - a few million points are possible on current hardware. For smaller objects this might be sufficient. For example: a human body typically has about 1.73 m² of surface area (roughly 4.3 by 4.3 feet) - which means that with 1,730,000 points you get one point for every square millimeter (about 1/25 by 1/25 inch).
Of course, this is after the human has been separated from the background (the living room or photo studio).

Happy coding and experimenting,


By the way - the smileys of the new forum suck!
A “smile” is rendered as stupid laughter with closed eyes.
I wish we had the good old monkey smileys back - or at least some reasonable standard smileys…


(2^16 is a full buffer in jME3, which means 65k points).

Oh, I made a little mistake here - this limit of 65k points is only for indexed triangle meshes. So, you might directly render a million points or so. Index buffers are only needed when you use triangles instead of points.
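To make the limit concrete - a small sketch of how many vertices an unsigned index of a given byte width can address; the 2^16 = 65,536 figure corresponds to a 16-bit ("short") index buffer, while a 32-bit ("int") index buffer removes the limit for practical purposes.

```java
// Sketch: an unsigned n-byte index can address 2^(8n) distinct vertices,
// which is where the 65k limit of short index buffers comes from.
public class IndexBufferLimit {
    public static long maxIndexableVertices(int bytesPerIndex) {
        return 1L << (8 * bytesPerIndex);
    }
}
```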

Is there a complete list of codes for the new smileys?
I only have the codes for the old ones in a txt file.

Type a colon and wait for the autocomplete window :wink:

Is this because you are using a ‘short’ index buffer? You can also use ‘int’ index buffers.

Well, I was using the default sphere object when I discovered this.
I didn’t know that int buffers are possible too - thanks for that hint, pspeed.

Also, I wonder whether index buffers are needed when using triangle strips - I guess not.
I will check this when my bitmap text library and project PDF are finished.

It’s a trade-off. If you just have positions, then the trade-off is easy to calculate. If you also have things like shared normals, shared texture coordinates, etc., then even triangle strips benefit from index buffers.

Edit: at least for the most common use-cases for triangle strips.