Implementation of a scientific 3D viewer using JME

Hello everyone,



I need to implement the following: a 3D viewer for a set of points, each with almost arbitrary coordinates.



A real example of a point's coordinates:

(659146.9424195625f, 5297539.099442002f, 613.822972409427f).



The set of points can be large, e.g. 10,000 points (not really sure whether that is actually large).



The viewer must have various navigation tools (Rotate, Zoom In, Zoom Out). And that is just for a start.



I have already tried JOGL and faced a lot of problems, like setting the visible volume, etc.



I am thinking about JME because it is high-level, feature-rich, and has a notable usage record.



However, I am not sure whether it is a good idea at all, because the primary focus of JME is computer games, not scientific visualization.



So, what do you think, is it a good idea?



Thanks in advance,

Sergey

Sure you can! Check this out, for instance: Whole Brain Catalog, a neuroscience application built on jMonkeyEngine.



http://www.youtube.com/watch?v=1YzfXv4yNzg&feature=related



Another example is Betaville, by our very own Skye Book. The gaps between interactive 3D applications and games aren’t that big.

This is a fairly simple application; you should be able to do it with jME quite easily.

What accuracy requirements do you have? 659146.9424195625f, for example, has more significant digits than a single-precision IEEE-754 float can hold (about 7) and will cause issues if used without some sort of partitioning mechanism. As far as whole numbers go, you shouldn't have too much trouble into the millions, but that many significant digits could cause issues.
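
For example, here is a minimal sketch of the precision loss and the usual workaround of offsetting against a local origin. The origin value below is just an arbitrary number I picked near the data; you'd subtract whatever fits your data set:

```java
// Minimal illustration: subtract a local origin in double precision,
// then hand the small offsets to the engine as floats.
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        double x = 659146.9424195625;          // original coordinate
        float direct = (float) x;              // what the scene graph would store
        System.out.println("double : " + x);
        System.out.println("float  : " + direct);   // decimal places are rounded away

        double originX = 659000.0;             // arbitrary local origin near the data
        float local = (float) (x - originX);   // small value, much finer resolution
        System.out.println("local  : " + local);
    }
}
```

The camera and any picking math can keep working in doubles; only the values you push into the meshes need to be floats.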

sbook said:

What accuracy requirements do you have?


No accuracy requirements so far... But I think once I come up with something working, they are very likely to appear...

Thanks everyone for responses, really like this community!

Well, for realtime I would say that 6 million nodes are a bit heavy to compute, but it depends on what framerate you target.

jME only uses a frustum check to discard parts of the geometry, nothing more intelligent; you have to implement that yourself.

Hi, I have a similar application; however, I'm concerned about the efficiency of high-resolution meshes with 6+ million nodes, loaded from the output files of our in-house software.



When JME3 loads a model, is it displayed in an intelligent manner where occluded nodes are not computed as much, or with some sort of dynamic level-of-detail geometry? What is the complete list of these kinds of intelligent rendering features?



I also need to be able to make slices, cuts, and sections, or highlight aspects of the geometry efficiently, hopefully using the graphics card to accelerate this (geometry shaders?). Is this possible? Is it easy? Do I need to know how to program shaders?



Since my research involves modeling electrical properties of the heart, this geometry will often have a color for each node (vertex) that represents a parameter such as voltage. We have simulation software that outputs a file containing the geometry and the parameter associated with each node. This file is like a movie file but with geometry. Is this a reasonable thing to implement with jME?
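
Roughly what I have in mind for the per-node coloring, as a rough sketch (assuming jME3; the blue-to-red color ramp is made up, and loading our file format is left out):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;

public class VoltageCloud {

    /** Build a point mesh whose per-vertex color encodes a scalar such as voltage. */
    public static Geometry build(AssetManager assetManager,
                                 float[] positions,   // x,y,z per node, already re-centered
                                 float[] voltage,     // one scalar per node
                                 float vMin, float vMax) {
        int n = voltage.length;
        float[] colors = new float[n * 4];            // RGBA per vertex
        for (int i = 0; i < n; i++) {
            float t = (voltage[i] - vMin) / (vMax - vMin);  // normalize to 0..1
            colors[i * 4]     = t;                    // red grows with voltage
            colors[i * 4 + 1] = 0f;
            colors[i * 4 + 2] = 1f - t;               // blue shrinks with voltage
            colors[i * 4 + 3] = 1f;
        }

        Mesh mesh = new Mesh();
        mesh.setMode(Mesh.Mode.Points);               // render the nodes as points
        mesh.setBuffer(Type.Position, 3, positions);
        mesh.setBuffer(Type.Color, 4, colors);
        mesh.updateBound();

        Geometry geom = new Geometry("voltageCloud", mesh);
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setBoolean("VertexColor", true);          // use the per-vertex colors
        geom.setMaterial(mat);
        return geom;
    }
}
```

For the time-varying part, I'd presumably just overwrite the color buffer with the values for the next time step.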



Thanks

Do the usual optimizations. Load only what is necessary, and make sure to batch it properly. Most video cards can only handle on the order of 4,000 objects, so combine objects with GeometryBatchFactory or other methods. jME won't render objects that are outside the camera frustum. If the meshes are large in scale and there's not much detail at the lower scale, you should be able to get away with just doing the above. Otherwise, you may need to implement some sort of LOD (Level of Detail) algorithm.
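
The batching step is essentially one call; a rough sketch, assuming the individual chunk geometries are built elsewhere:

```java
import java.util.List;

import com.jme3.scene.Geometry;
import com.jme3.scene.Node;

import jme3tools.optimize.GeometryBatchFactory;

public class BatchingExample {

    /** Attach many small geometries under one node and merge them. */
    public static Node batch(List<Geometry> chunks) {
        Node batched = new Node("batchedChunks");
        for (Geometry g : chunks) {
            batched.attachChild(g);
        }
        // Merges geometries that share a material into a few large meshes,
        // bringing the object count down toward what the GPU handles well.
        GeometryBatchFactory.optimize(batched);
        return batched;
    }
}
```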



If you need to write shaders, it is fairly easy to do with jME3, and there are tutorials on the wiki.



Also, given the nature of your project, I wouldn’t expect anything to be easy …

If there is only one object in the scene but with 6M vertices, will frustum culling still occur? Also, for my specific requirements, the whole object will usually be in view; only on occasion will I need to zoom in. I'm thinking the LOD algorithm may be the area to invest the most time in; however, I don't know of a way to generalize simplifying complex geometry while still keeping a reasonable map from each vertex to its parameter (voltage) in the data files.

Frustum culling happens on the object level, so no.
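
If you want culling to have something to work with, you'd have to split the data into several geometries yourself, e.g. along a uniform grid. A rough sketch for a plain point set (a connected triangle mesh needs more care, since triangles cross cell boundaries, and the cell size is something you'd have to tune):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridSplitter {

    /** Group point indices by which cell of a uniform grid they fall into. */
    public static Map<String, List<Integer>> split(float[] positions, float cellSize) {
        Map<String, List<Integer>> cells = new HashMap<>();
        for (int i = 0; i < positions.length / 3; i++) {
            int cx = (int) Math.floor(positions[i * 3]     / cellSize);
            int cy = (int) Math.floor(positions[i * 3 + 1] / cellSize);
            int cz = (int) Math.floor(positions[i * 3 + 2] / cellSize);
            String key = cx + "," + cy + "," + cz;
            cells.computeIfAbsent(key, k -> new ArrayList<>()).add(i);
        }
        return cells;   // build one Mesh/Geometry per entry, then batch within each cell
    }
}
```

Each entry can then become its own Geometry, so the cells that fall outside the frustum get culled.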