I am new to JMonkey and game engines. At the moment I am a bit confused about where to start; it would be great if someone could help me get going.
Basically I have millions of points (400-500 M), each represented by an x, y, z coordinate and each with some data attached to it. Now I would like to create a representation of these points in 3D space, and this representation has to be interactive: the user can move inside this 3D space, select any of the points to display its attached information, zoom in/out, and so on.
I Googled a bit about 3D Java programming, but I am not sure whether that can help me deal with this vast amount of information. Some people also suggested having a look at Java game engines/JMonkey. I have no idea at all where to start. Is this task possible with JMonkey, and if yes, how should I proceed? Your suggestions will be a great help. I am looking forward to hearing from you guys.
Well, you could get away with up to 7 million visible points depending on hardware, but more than that is not really interactive anymore.
Not showing stuff further away than some distance x might help.
Look at how to create a custom point mesh by searching the forum for that and similar posts.
(The limits are, btw, a hardware thing; using C++ or whatever won't really help much.)
None of the out-of-the-box solutions will work here. Forget about display - even holding all the coordinates in memory at the same time is too much. This issue is non-trivial; it might be hard to solve it and learn 3D basics at the same time.
Just a few pointers for you to mull over:
You need a hierarchical representation of the data. An axis-aligned octree would be the first guess. Pick some reasonable number of points (1000?) and subdivide until the cells are small enough to hold no more than that (or are below a certain minimum size).
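The split-when-full rule above can be sketched in plain Java. The class name, the thresholds and the `float[]` point layout here are all made up for illustration - this is not a jme3 class:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an axis-aligned octree cell that splits once it
// holds more than MAX_POINTS, stopping below a minimum cell size.
class OctreeNode {
    static final int MAX_POINTS = 1000; // the "reasonable amount" from the post
    static final float MIN_SIZE = 1f;   // stop splitting below this half-extent

    final float cx, cy, cz, half;       // cell center and half-extent
    List<float[]> points = new ArrayList<>();
    OctreeNode[] children;              // null while this node is a leaf

    OctreeNode(float cx, float cy, float cz, float half) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
    }

    void insert(float[] p) {
        if (children != null) {         // interior node: push down to a child
            childFor(p).insert(p);
            return;
        }
        points.add(p);
        if (points.size() > MAX_POINTS && half > MIN_SIZE) {
            split();
        }
    }

    private void split() {
        children = new OctreeNode[8];
        float h = half / 2;
        for (int i = 0; i < 8; i++) {   // one child per octant
            children[i] = new OctreeNode(
                cx + ((i & 1) == 0 ? -h : h),
                cy + ((i & 2) == 0 ? -h : h),
                cz + ((i & 4) == 0 ? -h : h), h);
        }
        for (float[] p : points) childFor(p).insert(p);
        points = new ArrayList<>();     // interior nodes hold no points
    }

    private OctreeNode childFor(float[] p) {
        int i = (p[0] >= cx ? 1 : 0) | (p[1] >= cy ? 2 : 0) | (p[2] >= cz ? 4 : 0);
        return children[i];
    }
}
```

For the real data you would of course build this offline and store each cell as its own file or record, rather than inserting 500 M points into a live in-memory tree.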
You need to precompute all LOD levels for all blocks. If you want to keep it simple, just think about displaying a uniform cube with color/transparency computed from point type/density. If you want to go non-trivial, you can compute particle-like impostors from multiple sides and use those instead, with some blending when switching between them (similar to the way trees are rendered when far away in games).
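As a minimal illustration of the "uniform cube" variant, one could average the point colors in a block and let density drive the transparency. The class name and the `capacity` parameter are invented for this sketch:

```java
// Hypothetical sketch: precompute one impostor color per block by
// averaging the colors of its points; alpha encodes point density.
class BlockImpostor {
    final float r, g, b, a;

    BlockImpostor(float r, float g, float b, float a) {
        this.r = r; this.g = g; this.b = b; this.a = a;
    }

    // colors: one {r,g,b} per point in the block;
    // capacity: point count at which the block is considered fully opaque
    static BlockImpostor fromPoints(float[][] colors, int capacity) {
        int n = colors.length;
        if (n == 0) return new BlockImpostor(0, 0, 0, 0); // empty block: invisible
        float r = 0, g = 0, b = 0;
        for (float[] c : colors) { r += c[0]; g += c[1]; b += c[2]; }
        float alpha = Math.min(1f, n / (float) capacity);
        return new BlockImpostor(r / n, g / n, b / n, alpha);
    }
}
```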
All of that has to be streamed from disk. You should not depend on built-in culling alone; you need to actively load/unload/switch LOD for nodes coming in and out of view. I think (please correct me if I'm wrong) that the current octree implementation in jme3 is more focused on optimizing rendering than on paging data from disk, so it might be a base, but not the final solution.
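The active load/unload/switch-LOD decision could start as simply as a distance check per block. The thresholds, level count and class name below are assumptions for the sketch, not anything jme3 provides:

```java
// Hypothetical sketch of the active-paging decision: choose an LOD level
// per block from camera distance; blocks past the far limit get unloaded.
class LodPager {
    static final float[] LOD_RANGES = {50f, 200f, 800f}; // assumed thresholds
    static final int UNLOADED = -1;

    // returns 0 (full point mesh), 1, 2 (impostor), or UNLOADED
    static int lodFor(float[] cam, float[] blockCenter) {
        float dx = cam[0] - blockCenter[0];
        float dy = cam[1] - blockCenter[1];
        float dz = cam[2] - blockCenter[2];
        float dist = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        for (int level = 0; level < LOD_RANGES.length; level++) {
            if (dist < LOD_RANGES[level]) return level;
        }
        return UNLOADED; // too far: drop it, stream it back in when closer
    }
}
```

In practice you would run this over the visible octree cells every few frames (on a background thread) and queue disk loads for blocks whose level changed.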
For the actual display of close-up data, you probably want to use a point mesh - optionally with a texture. I don't know how well picking will work with a point mesh - you might need to implement your own raycast against your internal data for collision.
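Such a do-it-yourself raycast can be as simple as finding the nearest point within a small radius of the pick ray. This is a hypothetical sketch against raw point arrays, not jme3's picking API (it assumes `dir` is normalized):

```java
// Hypothetical picking sketch: test each candidate point against the pick
// ray and keep the closest one lying within `radius` of the ray.
class PointPicker {
    // origin/dir define the ray (dir must be normalized);
    // returns the index of the best hit, or -1 for a miss
    static int pick(float[] origin, float[] dir, float[][] points, float radius) {
        int best = -1;
        float bestT = Float.MAX_VALUE;
        for (int i = 0; i < points.length; i++) {
            float px = points[i][0] - origin[0];
            float py = points[i][1] - origin[1];
            float pz = points[i][2] - origin[2];
            float t = px * dir[0] + py * dir[1] + pz * dir[2]; // distance along ray
            if (t < 0) continue;                               // behind the camera
            float ox = px - t * dir[0];
            float oy = py - t * dir[1];
            float oz = pz - t * dir[2];
            float off2 = ox * ox + oy * oy + oz * oz;          // squared distance to ray
            if (off2 <= radius * radius && t < bestT) {
                bestT = t;
                best = i;
            }
        }
        return best;
    }
}
```

With the octree in place you would only run this over the handful of leaf cells the ray actually passes through, not over all points.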
This is definitely doable - but you will need to be smart… and run it on reasonably hefty hardware; no Android for you.
Basically, break the points down by region of space and batch together all of the points in an XxXxX block - probably using a custom mesh with point sprites instead of faces. This is because graphics cards are capable of throwing around thousands of vertices easily - but will struggle with thousands of individual separate objects.
You may even want to do a tree-type structure with nodes containing nodes (repeat as needed) containing the geometry blocks - this is all to make the initial bounding-box culling as efficient as possible. (Culled objects aren't even sent to the graphics card in the first place.)
The blocks out of view will then be culled and won't need processing, so you immediately get big savings.
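The "break down by region and batch per block" step might start with bucketing points into a uniform grid, one bucket per future geometry. The key encoding and class name are assumptions for the sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: bucket points into a uniform grid keyed by integer
// cell coordinates, so each cell can later be batched into one point mesh.
class GridBucketer {
    static Map<String, List<float[]>> bucket(float[][] points, float cellSize) {
        Map<String, List<float[]>> cells = new HashMap<>();
        for (float[] p : points) {
            int ix = (int) Math.floor(p[0] / cellSize);
            int iy = (int) Math.floor(p[1] / cellSize);
            int iz = (int) Math.floor(p[2] / cellSize);
            String key = ix + "," + iy + "," + iz;
            cells.computeIfAbsent(key, k -> new ArrayList<>()).add(p);
        }
        return cells;
    }
}
```

Each resulting cell would then become one custom mesh in point-sprite mode, attached under a Node whose bounding box covers the cell, so whole cells can be culled at once.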
I’d approach this one step at a time:
First create the point mesh cloud. Test how many points in it still gives decent performance.
Choose a grid size a bit smaller than that.
Construct a 2x2x2 block of those.
Construct a 2x2x2 block of the original 8 blocks.
Test at each step and experiment with different numbers of points per geometry block, different grid sizes (i.e. 3x3x3), etc.
Thanks to Empire, abies, zarch and kwando for your immediate replies. Although I didn't completely understand what you guys are suggesting (I'm a complete newbie to most of the terms), it has definitely given me some points to start from. Just to mention, I would like to keep things as simple as possible.
It sounds like visualization is the least of the problems right now. Scientists like to put huge arrays into computer memory because that's the data they have. Most quickly come to the conclusion that they first have to get better at data management. Like the Mathematica developer, who is actually going back to active physics and research now that he has implemented the tools he needed in the form of Mathematica (no joke).
I don't think you need to go as small as 1k for the minimum block size though. I'd try 10k and maybe even 100k.
I was suggesting 1k with the focus more on LOD impostors than on the preferred geometry size for close-up geometries. It might be too harsh to replace 100k points with a single averaged color cube. On the other hand, having half a million cubes in the distance is also not good… Maybe a single huge, colored/alpha point cloud for distant things as well?
I suppose it really depends on the point distribution. But indeed, 10k is probably a better starting point to work from. I suppose you will need to tune the heuristics many times - having limits on point count, on area size, maybe on some properties of neighbouring cells, etc.
What could be possible is to actually use two applications: one rendering the nearest 1 million points as a point mesh, and a very slow second application that builds a skybox out of all the remaining points -> and transfers the texture with some clever threading, e.g. render-to-memory, to the other application. If the spectator cannot move fast, something like 0.5 fps for the skybox would probably be enough.