Dynamic data loading depending on view

Hey graphic-fanatics :wink:
I recently stumbled upon the Bolshoi simulation (dark matter distribution in our universe), and since the generated data is freely available, I figured I'd give it a try and create a visualization for it… Since the amount of particles used in the simulation is rather massive (depending on the simulation run, we're talking about 8 to 54 billion particles), I can't just grab them all, throw them into my renderer and get a nice output… Using sprites, I am stuck at about 22 million particles, which render at ~80 fps with ~4.6 GB of RAM usage initially and ~1 GB after the scene is created… But because of the massive amount of particles in the dataset, ~22 million just isn't enough to create a nice scene, and I'm stuck with two options right now:

  1. go over my database and throw out a massive portion of the data to get a sufficiently small set that still covers the whole volume, resulting in a huge loss of detail, which I would rather avoid

or

  2. find a way to query my database depending on what is needed on screen… Explanation: it would be perfect to query additional data while moving through the clouds of particles, fetching data for a new area as it comes into sight, but also to constantly check whether a pixel (on screen) still represents just one point, or whether there is additional data for another point that would now be visible because the camera moved (zoomed, or reached another angle so the "new" point becomes distinguishable from the previous one), so it can ultimately be created…

So all in all, any thoughts on the "view-dependent query" approach? How would I optimally (in terms of performance) a) check for points which are out of the scene and can be discarded and deleted from the scene graph, b) check whether a pixel can now be divided into more points, and c), as a precondition for a) and b), calculate the volume which becomes visible, to check in the database whether points are available there? (the frustum planes — near, far, top, bottom — should be sufficient, but I need world coordinates… there is a solution, but I can't put my finger on it right now :wink: )
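For c), one option is to unproject the four screen corners at the near and far planes with jME3's `Camera.getWorldCoordinates` and take the axis-aligned bounds of the eight resulting points as a coarse query volume. A minimal sketch (the class and method names are mine, not from any library):

```java
import com.jme3.bounding.BoundingBox;
import com.jme3.math.Vector2f;
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;

public class FrustumQuery {

    /**
     * World-space AABB enclosing the current view frustum, usable as a
     * coarse bounding box for a database range query.
     */
    public static BoundingBox frustumBounds(Camera cam) {
        Vector3f min = new Vector3f(Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY);
        Vector3f max = new Vector3f(Float.NEGATIVE_INFINITY, Float.NEGATIVE_INFINITY, Float.NEGATIVE_INFINITY);
        float[][] corners = {{0, 0}, {cam.getWidth(), 0}, {0, cam.getHeight()}, {cam.getWidth(), cam.getHeight()}};
        for (float[] c : corners) {
            for (float depth : new float[]{0f, 1f}) { // 0 = near plane, 1 = far plane
                Vector3f p = cam.getWorldCoordinates(new Vector2f(c[0], c[1]), depth);
                min.minLocal(p);
                max.maxLocal(p);
            }
        }
        BoundingBox box = new BoundingBox();
        box.setMinMax(min, max);
        return box;
    }
}
```

The AABB overshoots the actual frustum, so the query will return some points that are not visible; those can be filtered afterwards, e.g. with `Camera.contains()` on their bounding volumes.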

i would really appreciate some input for this problem, maybe even a whole new approach i might not have thought of??

Appreciate your help,
greez Ben

EDIT: just for the record, does jME check for the above performance-gain possibility while rendering? What I mean is: merging elements that are too close together (perspective-wise) to be distinguishable pixels into one single pixel, and rendering just one while ignoring the others? To be clear, I am not talking about objects that are hidden behind others, but about pixels "merging" because of angle or distance.

Does it need to be realtime?
If not, I would initially create a skybox image for each frame, rendering at crawling-slow speed,
and later just replay them all in realtime.

Of course, that won't work if you can move.

Well, I would like to be able to move through it, of course; it is already amazing as it is right now :wink: So maybe it is possible to calculate the "new" segment each frame using the altered angle of the camera and the frustum, and throw the coordinates of this box at my database to query for possible points in that area… Likewise, the box on the side that moved out of the frustum could be used to throw those points out of the scene graph (I'm not sure how exactly to check for those points, as jME does not seem to have a ready-made method for it?). However, I'm not sure whether it is possible to calculate, query, process and add those new points to the scene graph in realtime; it might be a matter of testing how responsive it gets…
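For the throwing-out part, jME3 does expose enough to do this yourself: `Camera.contains()` tests a bounding volume against the frustum, so you could walk your particle batches every frame (or every few frames) and detach whatever has fallen completely outside. A rough sketch, assuming the particles are grouped into per-cell `Geometry` batches under one node (that grouping is my assumption):

```java
import com.jme3.bounding.BoundingVolume;
import com.jme3.renderer.Camera;
import com.jme3.scene.Node;
import com.jme3.scene.Spatial;
import java.util.ArrayList;
import java.util.List;

public class FrustumCuller {

    /**
     * Detaches every child batch whose bounds are fully outside the frustum.
     * Returns the detached batches so their cells can be re-queried later.
     */
    public static List<Spatial> cullOutside(Node particleRoot, Camera cam) {
        List<Spatial> removed = new ArrayList<>();
        for (Spatial batch : new ArrayList<>(particleRoot.getChildren())) {
            cam.setPlaneState(0); // reset plane state before a manual contains() check
            BoundingVolume bounds = batch.getWorldBound();
            if (bounds != null && cam.contains(bounds) == Camera.FrustumIntersect.Outside) {
                batch.removeFromParent();
                removed.add(batch);
            }
        }
        return removed;
    }
}
```

Note that jME already frustum-culls spatials for rendering; the point of detaching here would be to free scene-graph entries and memory, and to know which cells to re-query once they come back into view.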

Sounds like you need to partition the data somehow and then have an LOD system - so stuff near you is displayed in more detail than stuff further away.

There are lots of ways to achieve this; googling level of detail (LOD) and paging systems will give you a lot of info.

Well, that is kind of the problem… there is no "more detail": either a particle is displayed or it isn't. Since they are just 1 px large sprites, there is no higher detail level, or even a possibility of paging :confused:

Depending on how the result looks, you might be able to get away with a look-alike imposter for every few thousand particles further away, and update the imposter texture every x frames.
For the far-away ones the angle cannot change much per frame anyway, so if you update every x frames it should work well enough.
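In jME3 that could be done with an offscreen `ViewPort` rendering the far cluster into a texture that is then shown on a camera-facing quad, re-rendering only every n-th frame. A rough sketch based on jME3's render-to-texture API (the resolution, interval and the cluster/quad setup are my assumptions):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Node;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

public class ImposterView {

    private final ViewPort offView;
    private final Texture2D texture;
    private int frame;

    public ImposterView(RenderManager rm, Camera offCam, Node farCluster) {
        texture = new Texture2D(256, 256, Image.Format.RGBA8);
        FrameBuffer fb = new FrameBuffer(256, 256, 1);
        fb.setDepthBuffer(Image.Format.Depth);
        fb.setColorTexture(texture);

        offView = rm.createPreView("imposter", offCam);
        offView.setClearFlags(true, true, true);
        offView.setOutputFrameBuffer(fb);
        // farCluster must still receive its updateLogicalState/updateGeometricState
        // calls each frame (e.g. drive it from simpleUpdate), as it is not under rootNode
        offView.attachScene(farCluster);
    }

    /** Call once per frame; re-renders the imposter only every n-th frame. */
    public void update(int n) {
        frame++;
        offView.setEnabled(frame % n == 0);
    }

    public Texture2D getTexture() {
        return texture; // apply as ColorMap on a camera-facing quad
    }
}
```

While the viewport is disabled, the framebuffer texture simply keeps its last contents, which is exactly the imposter behaviour described above.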

Yes there is.

Pre-process the data to get a "density" value.

i.e. for each 1 m cube, count how many particles are in it; likewise for each 10 m cube, each 100 m cube, etc.

Then, depending on how far away a region is, you look up its density and display one pixel whose brightness is based on the average density.

So for far away you display one particle. At medium range you display 10×10×10 particles, close up you display 100×100×100 particles, and very close you display the real particles.

Obviously those distances are just examples. For the real thing you would need to choose appropriate values.
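That is essentially a hierarchical density grid (an octree of particle counts). A small sketch of the pre-processing side, assuming the raw particle positions can be streamed through once per level (the names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class DensityGrid {

    /**
     * Counts particles per cubic cell of the given edge length.
     * The key encodes the integer cell coordinates; the value is the count.
     * Run this once per level (e.g. cellSize = 1, 10, 100) and store the
     * results in the database alongside the raw particles.
     */
    public static Map<Long, Integer> countPerCell(float[] xyz, float cellSize) {
        Map<Long, Integer> counts = new HashMap<>();
        for (int i = 0; i < xyz.length; i += 3) {
            long cx = (long) Math.floor(xyz[i]     / cellSize);
            long cy = (long) Math.floor(xyz[i + 1] / cellSize);
            long cz = (long) Math.floor(xyz[i + 2] / cellSize);
            // pack three 21-bit cell coordinates into one long key
            long key = ((cx & 0x1FFFFF) << 42) | ((cy & 0x1FFFFF) << 21) | (cz & 0x1FFFFF);
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }
}
```

At render time you would pick the level whose cells project to roughly one pixel at their distance from the camera, and map each cell's count to the brightness of a single point sprite.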


Hey!
Thanks a lot zarch, that might do the trick; I will look into it as soon as I have some spare time… As a little "eye-candy" for your help, this is it as it is right now:

Rendering with 25.6 million particles (not all visible here, I flew into it for a better view :stuck_out_tongue: )… I just screenshotted it out of jME, so sorry for the bad quality.


Well, I did some thinking again: your approach, zarch, would help me if I had fps issues, which I don't… so I still need to load data into the scene graph "on the fly". I will try my approach soon, now that I have some time…
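For the "on the fly" loading, the usual jME3 pattern is to run the database query and mesh building on a background thread and hand the finished geometry to the render thread via `Application.enqueue()`, since the scene graph may only be modified there. A minimal sketch; `queryDatabase` is a hypothetical placeholder for the real query:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.bounding.BoundingBox;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.util.BufferUtils;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StreamingLoader {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final SimpleApplication app;

    public StreamingLoader(SimpleApplication app) {
        this.app = app;
    }

    /** Queries a region in the background, then attaches the batch on the render thread. */
    public void loadRegion(final BoundingBox region) {
        executor.submit(() -> {
            // heavy work off the render thread: DB query + mesh building
            final Geometry batch = buildPointMesh(queryDatabase(region));
            // scene graph changes must happen on the render thread
            app.enqueue(() -> {
                app.getRootNode().attachChild(batch);
                return null;
            });
        });
    }

    /** Hypothetical placeholder — replace with the real database query. */
    private float[] queryDatabase(BoundingBox region) {
        return new float[0];
    }

    /** Builds a 1 px point mesh from interleaved x,y,z positions. */
    private Geometry buildPointMesh(float[] xyz) {
        Mesh mesh = new Mesh();
        mesh.setMode(Mesh.Mode.Points);
        mesh.setBuffer(VertexBuffer.Type.Position, 3, BufferUtils.createFloatBuffer(xyz));
        mesh.updateBound();
        Geometry geom = new Geometry("batch", mesh);
        Material mat = new Material(app.getAssetManager(), "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.White);
        geom.setMaterial(mat);
        return geom;
    }
}
```

Combined with the frustum bounds from above, `loadRegion(FrustumQuery.frustumBounds(cam))` would be the basic loop: query what is (about to be) visible, cull what is not, and let the executor keep the render thread responsive.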