jME2.1 design document

Here's a list of objectives I compiled for jME2.1 (or any next-gen engine based on jME). If you have lots of time and the latest jME2 source, you can start working now  :mrgreen:



Systems of jME2.1


  • Application

  • Display

  • Materials

  • Geometry

  • Model Importing

  • Resource Management

  • Scene post processing

  • Light and Shadow Management

  • Animation



Application System

  • Integrated Pass System

        ◦ Frustum-bound check and sorting, separate from the render stage

        ◦ Rendering

  • Callable Queue

        ◦ Injecting code into the rendering thread

  • Frame limit

  • Automated Error Handling

  • Runs in a separate thread

  • Configuration from file or registry
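A rough sketch of the Callable Queue idea: any thread enqueues a Callable, and the rendering thread drains the queue once per frame. GameTaskQueue and its methods are assumptions for illustration, not existing jME API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Hypothetical sketch of "injecting code into the rendering thread".
// GameTaskQueue is an assumed name, not jME API.
class GameTaskQueue {
    private final ConcurrentLinkedQueue<FutureTask<?>> tasks =
            new ConcurrentLinkedQueue<>();

    // May be called from any thread; the returned Future completes
    // once the render thread has executed the task.
    public <T> Future<T> enqueue(Callable<T> callable) {
        FutureTask<T> task = new FutureTask<>(callable);
        tasks.add(task);
        return task;
    }

    // Called once per frame, from the rendering thread only.
    public void execute() {
        FutureTask<?> task;
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }
}
```

A caller could then do `queue.enqueue(...)` from a loader thread and wait on the Future if it needs the result.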



Rendering System
- Pluggable Rendering API
- Can render to display, canvas, texture, or buffer on CPU (RenderTarget system)
- Renderer must be a singleton for each rendering API (e.g. LWJGL and JOGL each have one Renderer for the whole VM)
- FSAA & CSAA support for all RenderTargets
- Full support of HDR
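The "one Renderer per rendering API per VM" point could be sketched like this; RendererRegistry and the minimal Renderer interface are assumed names, not proposed API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: one Renderer instance per rendering API for the whole VM.
final class RendererRegistry {
    public interface Renderer { String getApiName(); }

    private static final Map<String, Renderer> INSTANCES = new HashMap<>();

    // Returns the single Renderer for the given API ("LWJGL", "JOGL", ...),
    // creating it lazily on first request.
    public static synchronized Renderer getRenderer(final String api) {
        Renderer r = INSTANCES.get(api);
        if (r == null) {
            r = new Renderer() {
                public String getApiName() { return api; }
            };
            INSTANCES.put(api, r);
        }
        return r;
    }

    private RendererRegistry() {}
}
```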

Material System
- Extendable materials (material controller)
- Material Contains RenderStates, Spatial contains Material
- Simplified Interface for user access

Geometry System
- Extendable geometry
- Mesh data separate from scene graph
- User only specifies a usage mode (static, modifiable, dynamic, animated); the engine does the rest

Resource System
- Locating a resource by name
- Unified access to resources from filesystem, jar-file, classloader, or network
- Resources are cached, garbage collected when memory is low
- Cloning to allow sharing of resources
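A minimal sketch of "locating a resource by name" with unified classpath/filesystem access; ResourceLocator is a hypothetical name, and a real implementation would also cover jar files and network URLs:

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

// Hedged sketch of unified resource lookup: try the classpath first,
// then fall back to the filesystem. ResourceLocator is an assumed name.
class ResourceLocator {
    public static URL locate(String name) {
        URL url = Thread.currentThread().getContextClassLoader().getResource(name);
        if (url == null) {
            File f = new File(name);
            if (f.exists()) {
                try {
                    url = f.toURI().toURL();
                } catch (MalformedURLException e) {
                    return null;
                }
            }
        }
        return url; // null when the resource is not found anywhere
    }
}
```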

Model Importing
- Exporters to JME-XML from popular modeling tools (Blender, 3DS, Maya)
- Command-line model converter and manager
  * Convert from supported model formats to JME binary
  * XML <-> Binary conversion
  * Optimize vertices, combine meshes, generate bounding volumes
  * Generate LOD data

Post Processing
- Renderer writes to offscreen surface, processed by "Filter" classes
- ColorFilters operate on RGB values (bloom, grayscale, tone mapper)
- ColorDepthFilters operate on RGB+Depth values (SSAO, Depth of field)
- GeometryFilters operate on geometry (motion blur)

Incomplete:

  • Light and Shadow Management

  • Animation

  • Other?

Some examples of how some systems might operate.



Usage example of Material System:


public class TestMaterialSystem extends SimpleGame {
   public void simpleInitGame(){
      Quad quad = new Quad(100, 100);
      Geometry quadGeom = new Geometry("quad", quad);

      // material controller: NormalSpec.
      Material nsMat = new NormalSpecMaterial();

      // simplified interface for accessing various material settings without using RenderStates
      nsMat.setDiffuse(ColorRGBA.red);
      nsMat.setSpecular(ColorRGBA.white);
      nsMat.setShininess(64);

      nsMat.setFrontFace(Face.Smooth);
      nsMat.setBackFace(Face.Cull);

      // resource management system
      nsMat.setTexture(ResourceManager.getTexture("diffuse.dds"),  0);
      nsMat.setTexture(ResourceManager.getTexture("normals.dds"),  1);
      nsMat.setTexture(ResourceManager.getTexture("specular.dds"), 2);     

      // attach the material to the spatial (Spatial contains Material)
      quadGeom.setMaterial(nsMat);

      rootNode.attachChild(quadGeom);
      rootNode.updateGeometricState(0, true);
   }
   // ...
}



Usage example of geometry system:


public class TestGeometry extends SimpleGame {
   public void simpleInitGame(){
      // box is the geometry data
      Box box = new Box(2, 2, 2);

      // generates static VBO before rendering, deletes geometry data from CPU
      box.setUsage(Usage.Static);

      // generates static VBO before rendering, keeps geometry data on CPU
      box.setUsage(Usage.Modifyable);

      // generates "dynamic" VBO
      box.setUsage(Usage.Dynamic);

      // generates "stream" VBO, or doesn't use VBO at all
      box.setUsage(Usage.Animated);

      // boxGeom is a scene graph element that contains a mesh
      Geometry boxGeom = new Geometry("mybox", box);

      rootNode.attachChild(boxGeom);
   }
}



Usage example of post processing:


BloomFilter bloom = new BloomFilter();
bloom.setScale(2);
// ... set other parameters

PostProcessPass ppPass = new PostProcessPass();
ppPass.addFilter(bloom);

passManager.addPass(ppPass);



Usage example of writing a filter:


public class BloomFilter extends ColorFilter {

   private TextureRenderTarget rtt1;
   private TextureRenderTarget rtt2;

   public void init(Renderer r){
      rtt1 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
      rtt2 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
   }

   public void execute(Texture source, RenderTarget target, Renderer r){
       // example of using Processor class
       // does processing on an image, writing the output to a RenderTarget
       Processor p = new Processor(r);

       p.setSource(source);
       p.setTarget(rtt1);
       p.setMaterial(ResourceManager.getMaterial("bloom_extract.mat"));
       p.process();

       p.setSource(rtt1.getTexture());
       p.setTarget(rtt2);
       p.setMaterial(ResourceManager.getMaterial("blur_x.mat"));
       p.process();

       p.setSource(rtt2.getTexture());
       p.setTarget(rtt1);
       p.setMaterial(ResourceManager.getMaterial("blur_y.mat"));
       p.process();

       p.setSource(source, 0);
       p.setSource(rtt1.getTexture(), 1);
       p.setTarget(target);
       p.setMaterial(ResourceManager.getMaterial("bloom_combine.mat"));
       p.process();
   }
}

I really like it so far.



What I'd add:

Things like texture splatting (something almost everybody seems to need but has to search the forum for and add) and very simple-to-use collision detection, e.g. to place an object on the ground (a lot of questions were posted, so I see a general demand). The latter should not try to replace a physics engine but merely help to get an object moving over a geometry (not only TerrainPage/TerrainBlock) and keep it from going through blocking objects. A CollisionManager with listeners might be the easiest to use.

Hi there



Sounds good to me. Is your list influenced by vear's “2.1 branch”? I see some analogies, e.g. Materials?



With “incomplete” do you mean that you haven't thought about it yet, and so didn't write it into your design document?



I would like to see the PostProcessingSystem in action, because I think this part is very important for great graphics :slight_smile:



Regards,



Snare

+1

and +1 for Galuns request.



Also, Lights and Shadows are very important IMHO; shadows (especially the new ShadowMapping) should be well integrated with lights.

Momoko_Fan said:

- Exporters to JME-XML from popular modeling tools (Blender, 3DS, Maya)

I can do Python. That means Jedimace1 can make exporters! Yay! I believe Blender takes Python exporters, and maybe Maya. 3DS Max can probably take it too. It is very common for exporters in 3D modelers. I can work on making one that can export to .jme or .jbin if I know how the file is laid out. Something like this would be nice: http://linux.ucla.edu/~phaethon/q3a/formats/md2-schoenblum.html.

Also, it would be nice to make shadows and LOD easier and more dynamic. How about a way to automatically have LOD on a node, then be able to switch it on or off? Also, shadows could just be set on or off in the light state or on the lights.

Vardamir said:


I would not create a 2.1 branch; I would create a 2.0 branch when 2.0 is done, and then trunk would be milestone 2.1. But if trunk is considered 3.0 (with lots of redesign and such), then a 2.1 branch would be OK.



Yep lots of redesign and stuff

Jedimace1 said:

Also, it would be nice to make shadows and LOD easier and more dynamic. How about a way to automatically have LOD on a node, then be able to switch it on or off? Also, shadows could just be set on or off in the light state or on the lights.


Things like this need careful consideration so as not to impact applications that don't need the features. It is probably better not to couple LOD closely with spatials, but to use utilities instead.






snareoj2 said:

sounds good to me. Is your list influenced by vears "2.1 branch"? I see some analogies, e.g. Materials ?


Might well be worth getting together with vear - looks like he has some of the above ready to implement

Is the 2.1 branch created? If not, let's do it.

Maybe for each point we can set up a thread to discuss the implementation.
theprism said:

Is the 2.1 branch created? If not, let's do it.

Maybe for each point we can set up a thread to discuss the implementation.


I would not create a 2.1 branch; I would create a 2.0 branch when 2.0 is done, and then trunk would be milestone 2.1.
But if trunk is considered 3.0 (with lots of redesign and such), then a 2.1 branch would be OK.
What I want to say is that trunk should always be the version where everything is in motion; it's never guaranteed to be stable.
But what we can do is create issues for those items. We need to create a milestone label:
Milestone-2.1 = Version 2.1 of the jMonkeyEngine

So every item on this list can be labeled correctly, and then the details for that item should be documented in the issue. Of course it's better to discuss it here, because the forum gets more attention than the issues list.

But an issue can be assigned, rejected, implemented, etc.

This is a logical development of my work. I like it, with some remarks.


Momoko_Fan said:

Some examples of how some systems might operate.

Usage example of Material System:


public class TestMaterialSystem extends SimpleGame {
   public void simpleInitGame(){
      Quad quad = new Quad(100, 100);
      Geometry quadGeom = new Geometry("quad", quad);



The Box is a "Geometry"; the "quadGeom" is a "ModelInstance" or "GeometryInstance" or "Mesh" or named something else. Looking at this, I presume you don't want the batch system back, or that the batches should be created by the constructor automatically? If you looked at my code, you saw that TriBatch has duplicated Transforms, so that one can be updated while the other is used for the current rendering. You don't want the duplication, or the duplication would be better in "GeometryInstance", and thus without batches? I'd prefer batches, because then there is a clear separation between the scene graph and the queue system. The batch system can be totally invisible to the scene-graph user; only render passes need to know how to handle them.



      // material controller: NormalSpec.
      Material nsMat = new NormalSpecMaterial();

      // simplified interface for accessing various material settings without using RenderStates
      nsMat.setDiffuse(ColorRGBA.red);
      nsMat.setSpecular(ColorRGBA.white);
      nsMat.setShininess(64);

      nsMat.setFrontFace(Face.Smooth);
      nsMat.setBackFace(Face.Cull);

      // resource management system
      nsMat.setTexture(ResourceManager.getTexture("diffuse.dds"),  0);
      nsMat.setTexture(ResourceManager.getTexture("normals.dds"),  1);
      nsMat.setTexture(ResourceManager.getTexture("specular.dds"), 2);      

      rootNode.attachChild(quadGeom);
      rootNode.updateGeometricState(0, true);
   }
   // ...
}



Hiding the render states in the material is a very good idea. There are two things to this:
1. Making pass-specific settings for the materials (called a "technique"). There is such a thing implemented in my work, but only for the shaders; it automatically chooses different shaders based on the render pass or other circumstances. While this is not necessarily exposed to the user, did you consider it?
2. It is still not clear to me whether Material objects representing the same rendering parameters should be immutable, shared, shared for a single model, or strictly per-batch. I used them however I saw fit, but it could lead to confusion among API users. I feel that the proper usage of Material should be better defined (or not, but then it requires attention from the user's side).


Usage example of geometry system:


public class TestGeometry extends SimpleGame {
   public void simpleInitGame(){
      // box is the geometry data
      Box box = new Box(2, 2, 2);

      // boxGeom is a scene graph element that contains a mesh
      Geometry boxGeom = new Geometry("mybox", box);

      // generates static VBO before rendering, deletes geometry data from CPU
      boxGeom.setUsage(Usage.Static);

      // generates static VBO before rendering, keeps geometry data on CPU
      boxGeom.setUsage(Usage.Modifyable);

      // generates "dynamic" VBO
      boxGeom.setUsage(Usage.Dynamic);

      // generates "stream" VBO, or doesn't use VBO at all
      boxGeom.setUsage(Usage.Animated);

      rootNode.attachChild(boxGeom);
   }
}



Do you want to adopt the VertexBuffer or not? If yes, then the VBO attributes should be moved from Geometry to the VertexBuffer, because those attributes are common to the whole buffer, and not only to the range of the buffer used by the Geometry (though it could be enhanced further in that direction). The methods could still be implemented for compatibility, but the usage attributes should be in the VertexBuffer. I would change the Usage types:

Instead of:

boxGeom.setUsage(Usage.Modifyable);

I would use:

boxGeom.setReadable(true);
This keeps a direct ByteBuffer of the geometry for read-back.

boxGeom.setWritable(true);
If the buffer is readable too, then this updates the VBO from the direct buffer. If it is not readable, then requesting the buffer returns a write-only mapped buffer.
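One way to read the readable/writable proposal is as a mapping to buffer-usage hints. This sketch is only an illustration of the semantics described above; VertexBuffer and the hint names here are assumptions, not vear's actual code:

```java
// Sketch of the setReadable/setWritable proposal: write-only data streams,
// read+write data stays dynamic, untouched data can be uploaded once as a
// static VBO. All names in this sketch are assumed.
class VertexBuffer {
    public enum Hint { STATIC, DYNAMIC, STREAM }

    private boolean readable;
    private boolean writable;

    public void setReadable(boolean readable) { this.readable = readable; }
    public void setWritable(boolean writable) { this.writable = writable; }

    public Hint usageHint() {
        if (writable && !readable) return Hint.STREAM;  // write-only mapped buffer
        if (writable) return Hint.DYNAMIC;              // CPU copy kept, VBO updated
        return Hint.STATIC;                             // upload once, may discard
    }
}
```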


Usage example of post processing:


BloomFilter bloom = new BloomFilter();
bloom.setScale(2);
// ... set other parameters

PostProcessPass ppPass = new PostProcessPass();
ppPass.addFilter(bloom);

passManager.addPass(ppPass);



Usage example of writing a filter:


public class BloomFilter extends ColorFilter {

   private TextureRenderTarget rtt1;
   private TextureRenderTarget rtt2;

   public void init(Renderer r){
      rtt1 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
      rtt2 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
   }

   public void execute(Texture source, RenderTarget target, Renderer r){
       // example of using Processor class
       // does processing on an image, writing the output to a RenderTarget
       Processor p = new Processor(r);

       p.setSource(source);
       p.setTarget(rtt1);
       p.setMaterial(ResourceManager.getMaterial("bloom_extract.mat"));
       p.process();

       p.setSource(rtt1.getTexture());
       p.setTarget(rtt2);
       p.setMaterial(ResourceManager.getMaterial("blur_x.mat"));
       p.process();

       p.setSource(rtt2.getTexture());
       p.setTarget(rtt1);
       p.setMaterial(ResourceManager.getMaterial("blur_y.mat"));
       p.process();

       p.setSource(source, 0);
       p.setSource(rtt1.getTexture(), 1);
       p.setTarget(target);
       p.setMaterial(ResourceManager.getMaterial("bloom_combine.mat"));
       p.process();
   }
}




Is the RenderTarget here the same functionality as the FrameBuffer class in my code? Actually, a render target is one render-to-texture of an FBO, so FrameBuffer is the more precise name. I would not name these ColorFilter and ColorDepthFilter, because for deferred shading the PostProcessPass would need one color render target, another color render target storing the normals, and another render target for depth (and there can be even more render targets: specular, ambient occlusion, whatnot). So deferred shading needs a whole bunch of textures (a whole FBO setup: FrameBuffer), not just one color + one depth texture.

No parameters for resource creation? I use a HashMap for passing parameters to create resources (Texture, Material, Model). I presume that the resource will not be changed by the user code, so when given all the parameters, the resource system can cache the created resources. I'd like to go the full length on this and prevent changes to resources if the resource was requested as shared. Maybe a getCachedTexture() with an immutable texture, and a getTexture() which isn't cached at all. The same principle could apply to Material too.

You propose that the ResourceManager be a singleton. I see no problem with that, though does that also mean that jME 2.1 should not adopt a context system? It is highly debated whether singletons are good or not; I personally don't like them and try to avoid them. Using an ApplicationContext makes it easy to replace the global singletons with user-defined ones, while enforcing static singletons with the "Singleton.getInstance()" scheme makes that impossible (hope that was clear; if not, I'll explain more).
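The ApplicationContext idea can be sketched as a simple typed service registry. All names here are hypothetical; the point is only that `context.get(Type.class)` can be swapped out by user code, while `Singleton.getInstance()` cannot:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the ApplicationContext alternative to static singletons.
// Services are registered and looked up by type, so a user can replace
// e.g. the resource manager without touching engine code.
class ApplicationContext {
    private final Map<Class<?>, Object> services = new HashMap<>();

    public <T> void register(Class<T> type, T implementation) {
        services.put(type, implementation);
    }

    public <T> T get(Class<T> type) {
        return type.cast(services.get(type)); // null if nothing registered
    }
}
```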

I'd like to get the stable parts of my code (stable in the sense that I would not further change the way those parts work, only make some adjustments) into jME 2.1. Those are the Geometry and the GLSL handling. Other parts could be ported too, to produce a stable (in the sense of well-tested and bug-free) jME 2.1, but I have ideas on how to further develop pretty much every part of the code.

The ResourceManager caching should have targets that register to share a resource; that way a policy can be introduced to clean up the resource on delete if there are no targets needing it.



A simple HashMap should do it: HashMap&lt;String, ArrayList&lt;ResourceConsumer&gt;&gt;, mapping a resource name to its consumers, where a consumer can be a node or a spatial. A ResourceConsumer needs a method isAttachedToScene so it can be determined whether the resource is in use.
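That suggestion might look like the following sketch; ResourceConsumer and isAttachedToScene are the hypothetical names from the paragraph above, and the eviction policy shown is just one option:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of consumer registration for shared resources. A resource is
// considered in use while at least one consumer is still in the scene.
class ResourceTracker {
    public interface ResourceConsumer { boolean isAttachedToScene(); }

    private final Map<String, List<ResourceConsumer>> consumers = new HashMap<>();

    public void register(String resource, ResourceConsumer c) {
        consumers.computeIfAbsent(resource, k -> new ArrayList<>()).add(c);
    }

    // True if any registered consumer is still attached to the scene.
    public boolean inUse(String resource) {
        List<ResourceConsumer> list = consumers.get(resource);
        if (list == null) return false;
        for (ResourceConsumer c : list) {
            if (c.isAttachedToScene()) return true;
        }
        return false;
    }
}
```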


You propose that the ResourceManager be a singleton. I see no problem with that, though does that also mean that jME 2.1 should not adopt a context system? It is highly debated whether singletons are good or not; I personally don't like them and try to avoid them. Using an ApplicationContext makes it easy to replace the global singletons with user-defined ones, while enforcing static singletons with the "Singleton.getInstance()" scheme makes that impossible (hope that was clear; if not, I'll explain more).


I totally support this.

The use of singletons must be considered carefully. In the case of the renderer, I don't think this should be a singleton. Maybe a singleton in a specific context, but not globally.

Vardamir said:

The use of singletons must be considered carefully. In the case of the renderer, I don't think this should be a singleton. Maybe a singleton in a specific context, but not globally.


A singleton with synchronized accessors and private instantiation

or

A class with a static instance of itself and a public static getter for the instance


theprism said:


A singleton with synchronized accessors and private instantiation



Making access to the renderer synchronized (locked) is required, but I'm against all kinds of private instantiation (and generally against declaring stuff private). Java's "private" keyword prevents the subclass from redefining its parent, while protected allows access from the same package too, not just from the subclass. I think it's the lesser evil to risk that a class in the same package uses a protected method than to disallow subclassing. There aren't too many classes in the com.jme.Renderer package, and the classes are closely related, so the Renderer should be protected enough with protected instantiation.

Galun wrote:
Is your list influenced by vears "2.1 branch"?
theprism wrote:
looks like he has some of the above ready to implement
vear wrote:
This is a logical development of my work.
I have looked at it, and took what I found most useful for a next-gen engine and ignored what I found not useful. In no way did I base my design on his. Most of this list was established by me, based on what kinds of things I see being in a next-gen engine.

I would prefer to avoid using his code. If vear's implementation can be more easily adapted to fit the design paradigm of jME2.1 than writing the code from scratch, then by all means we'll do so; but otherwise, I see no point in simply copying the code over. Vear himself said that if he had to write his engine now, he would do it differently.
vear wrote:
i presume you dont want the batch system back
I do not see the point in implementing something without purpose. That's why they were removed from jME2.
vear wrote:
the reason that one can be updated, while the other is used for current rendering
That solution is not scalable and has very little benefit.
vear wrote:
Hiding the renderstates in the material is very good idea. There are two things to this:
1. Making pass specific settings for the materials (called "technique").
The separation of material from render states was to allow extendability of materials and more streamlined cloning and sharing of materials.
The "technique" concept can be implemented in a subclass of Material if needed. A NormalSpecMaterial, for example, may choose to use regular lighting if GLSL shaders are not supported.
vear wrote:
It is still not clear to me if Material objects representing the same rendering parameters should be immutable, shared, shared for a single model, or strictly per-batch.
The concept is similar to how textures are implemented now: you may choose to load a texture from a file (in which case the texture will be shared among all models that use it), or you can dynamically create it and assign it parameters and image data.
vear wrote:
the VBO attributes should be moved from Geometry to the VertexBuffer (...)
Sorry, that was a mistake in code, fixed now.
vear wrote:
Is the RenderTarget here the same functionality as the FrameBuffer class in my code? Actually a rendertarget is one render-to-texture of an FBO, so the FrameBuffer is more precise name.
RenderTarget is similar to an "OutputStream" for graphics; it is set using the method Renderer.setTarget(RenderTarget), which keeps all renderer parameters the same, only changing the buffer to which geometry is drawn. It is more efficient than your FrameBuffer/TextureRenderer because a unique context and camera are not needed in many cases.

RenderTarget does not necessarily represent an FBO; it can represent the following:
DisplayRenderTarget - Renders to the default screen or canvas
TextureRenderTarget - Renders to a texture on GPU
BufferRenderTarget - Renders to a buffer which can be read by the CPU
vear wrote:
Deferred Shading (...)
Deferred rendering is a completely different approach to rendering and should be implemented as a specialized RenderPass; it has nothing to do with post processing.
vear wrote:
No parameters for resource creation? I use a HashMap for passing parameters to create resources (Texture, Material, Model).
That is reasonable; there will be a "default" setting, set by the user, for loading resources without a parameter map.

Singleton ResourceManager issue

I do not think the ResourceManager should be associated with a context. I do not see the purpose of using different ResourceManagers for different contexts, as that leads to duplicated resources for no reason at all. Although jME2.1 is supposed to handle multiple contexts, its main purpose is single-context, full-screen 3D rendering. Other platforms are better suited to multicontext rendering.
theprism wrote:
The ResourceManager caching should have targets that register to share a resource (...)
The policy is determined by the VM. When the VM is low on memory, it will automatically deallocate the resources weakly referenced by the ResourceManager.
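In Java terms, "deallocated when memory is low" actually matches SoftReference (cleared before an OutOfMemoryError) better than WeakReference (cleared at the next GC regardless of memory pressure). A minimal cache sketch, with assumed names:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of VM-driven eviction: the cache holds SoftReferences, which
// the VM clears under memory pressure. ResourceCache is an assumed name.
class ResourceCache<T> {
    private final Map<String, SoftReference<T>> cache = new HashMap<>();

    public void put(String name, T resource) {
        cache.put(name, new SoftReference<>(resource));
    }

    // Returns null if never cached or already reclaimed by the GC.
    public T get(String name) {
        SoftReference<T> ref = cache.get(name);
        return ref == null ? null : ref.get();
    }
}
```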
Vardamir wrote:
In the case of the renderer, i dont think this should be a singleton. Maybe a singleton in a specific context, but not globally
It's not globally a singleton, only for an implementation of a rendering system.
vear wrote:
Making access to renderer synchronized (locked) is required
That is not necessary if rendering is only done from a single thread.

Another popular topic seems to be picking (objects, terrain, etc.)

Maybe a simple callback method, à la onClick, could be available for all Geometry subclasses? The engine would provide all the picking logic for free, and the callback method would receive the local coordinates of the picked object and be queued…

Different picking algorithms could be available too.



Or maybe it's not feasible and I'm dreaming ? :wink:
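To make the idea concrete, a listener on a Geometry could look roughly like this sketch; PickableGeometry, PickListener, and firePick are hypothetical names, not proposed engine API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an onClick-style picking callback on a geometry. The engine's
// picking logic would call firePick with the local-space hit point.
class PickableGeometry {
    public interface PickListener { void onClick(float x, float y, float z); }

    private final List<PickListener> listeners = new ArrayList<>();

    public void addPickListener(PickListener l) { listeners.add(l); }

    // Invoked by the (hypothetical) engine picking pass on a hit.
    public void firePick(float x, float y, float z) {
        for (PickListener l : listeners) l.onClick(x, y, z);
    }
}
```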

The Librarian said:

Another popular topic seems to be picking (objects, terrain, etc.)
Maybe a simple callback method, à la onClick, could be available for all Geometry subclasses? The engine would provide all the picking logic for free, and the callback method would receive the local coordinates of the picked object and be queued...
Different picking algorithms could be available too.

Or maybe it's not feasible and I'm dreaming ? ;)

I vote for this too!
Momoko_Fan said:

I do not see the point in implementing something without purpose. That's why they were removed from jME2.

I think that batches are very useful, but I won't argue. The engine can do without them.


That solution is not scalable and has very little benefit.

That's how some engines actually do it (along with keeping a copy of a lot more stuff).


The separation of material from render states was to allow extendability of materials and more streamlined cloning and sharing of materials.
The "technique" concept can be implemented in a subclass of Material if needed. A NormalSpecMaterial, for example, may choose to use regular lighting if GLSL shaders are not supported.

I'm not just talking about a subtype of material, but about what effect a material has when used in different render passes.


I do not think the ResourceManager should be associated with a context. I do not see the purpose of using different ResourceManagers for different contexts, as that leads to duplicated resources for no reason at all. Although jME2.1 is supposed to handle multiple contexts, its main purpose is single-context, full-screen 3D rendering. Other platforms are better suited to multicontext rendering.

I was not talking about an OGL context system, but about an application context system. If you have a single application, then naturally you have a single application context. The goal is not necessarily to have multiple contexts, but to have everything in the context. No static, self-managed objects.

What I got from your post is that for jME 2.1 there is no need for my help. Which is good for me, because I can move on and work on something else. I'm sure jME 2.1 will be useful for the community, a step toward the engine becoming even better.