This is a logical development of my work. I like it, with some remarks.
Momoko_Fan said:
Some examples of how some systems might operate.
Usage example of Material System:
public class TestMaterialSystem extends SimpleGame {
public void simpleInitGame(){
Quad quad = new Quad(100, 100);
Geometry quadGeom = new Geometry("quad", quad);
The Quad here is the "Geometry" (the mesh data), while "quadGeom" is a "ModelInstance" or "GeometryInstance" or "Mesh" or named something else. Looking at this, I presume you don't want the batch system back, or that the batches should be created by the constructor automatically? If you looked at my code, you saw that TriBatch has duplicated Transforms, so that one can be updated while the other is used for the current rendering. Do you not want that duplication, or would the duplication be better placed in "GeometryInstance", and thus without batches? I'd prefer batches, because then there is a clear separation between the scene graph and the queue system. The batch system can be totally invisible to the scene graph user; only render passes need to know how to handle them.
// material controller: NormalSpec.
Material nsMat = new NormalSpecMaterial();
// simplified interface for accessing various material settings without using RenderStates
nsMat.setDiffuse(ColorRGBA.red);
nsMat.setSpecular(ColorRGBA.white);
nsMat.setShininess(64);
nsMat.setFrontFace(Face.Smooth);
nsMat.setBackFace(Face.Cull);
// resource management system
nsMat.setTexture(ResourceManager.getTexture("diffuse.dds"), 0);
nsMat.setTexture(ResourceManager.getTexture("normals.dds"), 1);
nsMat.setTexture(ResourceManager.getTexture("specular.dds"), 2);
rootNode.attachChild(quadGeom);
rootNode.updateGeometricState(0, true);
}
// ...
}
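To make the duplicated-Transform idea from my TriBatch concrete, here is a minimal sketch. The GeometryInstance class, its method names, and the stand-in Transform are all hypothetical, not actual jME API; a real version would hold full rotation/scale as well.

```java
// Minimal sketch of the double-buffered Transform idea: one transform is
// written during update while the other is read by the renderer, then the
// two are swapped at a safe point.
public class GeometryInstance {

    // stand-in for a real transform; only a translation, for brevity
    public static class Transform {
        public float x, y, z;
    }

    private Transform updateTransform = new Transform(); // written by the update pass
    private Transform renderTransform = new Transform(); // read by the renderer

    public Transform getUpdateTransform() { return updateTransform; }
    public Transform getRenderTransform() { return renderTransform; }

    // called at a safe point between updating and rendering
    public void swapTransforms() {
        Transform tmp = renderTransform;
        renderTransform = updateTransform;
        updateTransform = tmp;
    }
}
```

With this, the renderer always reads a consistent transform while the next one is being computed.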
Hiding the render states in the material is a very good idea. There are two things to this:
1. Making pass-specific settings for the materials (called a "technique"). There is such a thing implemented in my work, but only for the shaders: it automatically chooses different shaders based on the render pass, or other circumstances. While this is not necessarily exposed to the user, did you consider it?
2. It is still not clear to me whether Material objects representing the same rendering parameters should be immutable, shared, shared within a single model, or strictly per-batch. I used them however I saw fit, but that could lead to confusion among API users. I feel that the proper usage of Material should be better defined (or not, but then it requires attention on the user's side).
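To make point 1 concrete, the automatic per-pass shader selection could look roughly like this. The class and method names are hypothetical, and shader names stand in for real shader objects, so the example stays self-contained.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-pass techniques: the material keeps one shader per render
// pass name and falls back to a default shader for passes it does not know.
public class TechniqueMaterial {
    private final Map<String, String> techniques = new HashMap<>();
    private final String defaultShader;

    public TechniqueMaterial(String defaultShader) {
        this.defaultShader = defaultShader;
    }

    public void setTechnique(String passName, String shaderName) {
        techniques.put(passName, shaderName);
    }

    // the render pass asks the material which shader to bind;
    // the user never needs to call this directly
    public String selectShader(String passName) {
        return techniques.getOrDefault(passName, defaultShader);
    }
}
```

A shadow pass, for example, could register a cheap depth-only shader while the opaque pass keeps the full normal/specular shader.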
Usage example of geometry system:
public class TestGeometry extends SimpleGame {
public void simpleInitGame(){
// box is the geometry data
Box box = new Box(2, 2, 2);
// boxGeom is a scene graph element that contains a mesh
Geometry boxGeom = new Geometry("mybox", box);
// generates static VBO before rendering, deletes geometry data from CPU
boxGeom.setUsage(Usage.Static);
// generates static VBO before rendering, keeps geometry data on CPU
boxGeom.setUsage(Usage.Modifyable);
// generates "dynamic" VBO
boxGeom.setUsage(Usage.Dynamic);
// generates "stream" VBO, or doesn't use VBO at all
boxGeom.setUsage(Usage.Animated);
rootNode.attachChild(boxGeom);
}
}
Do you want to adopt the VertexBuffer or not? If yes, then the VBO attributes should be moved from Geometry to the VertexBuffer, because those attributes are common to the whole buffer, not only to the range of the buffer used by the Geometry (though it could be enhanced further in that direction). The methods could still be implemented for compatibility, but the usage attributes should live in the VertexBuffer. I would change the Usage types:
Instead of:
boxGeom.setUsage(Usage.Modifyable);
I would use flags:
boxGeom.setReadable(true);
which keeps a direct ByteBuffer of the geometry for read-back, and:
boxGeom.setWritable(true);
If the buffer is readable too, then it updates the VBO from the direct buffer. If it is not readable, then requesting the buffer returns a write-only mapped buffer.
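A minimal sketch of how these flags could sit on the VertexBuffer. The class and methods are hypothetical; the actual GL upload and the write-only mapping are stubbed out, since only the readable/writable semantics are being illustrated.

```java
import java.nio.ByteBuffer;

// Sketch of readable/writable flags on a VertexBuffer: a readable buffer
// keeps its direct ByteBuffer on the CPU for read-back, a write-only
// buffer would hand out a mapped buffer from the driver instead.
public class VertexBuffer {
    private boolean readable;
    private boolean writable;
    private ByteBuffer cpuCopy;

    public void setReadable(boolean readable) { this.readable = readable; }
    public void setWritable(boolean writable) { this.writable = writable; }

    public void setData(ByteBuffer data) {
        // the upload to the VBO would happen here;
        // keep a CPU copy only if the buffer is readable
        cpuCopy = readable ? data : null;
    }

    public ByteBuffer getData() {
        if (readable) {
            return cpuCopy; // read-back from the retained CPU copy
        }
        if (writable) {
            // real code would return a write-only mapped buffer here
            throw new UnsupportedOperationException("write-only mapping not sketched");
        }
        throw new IllegalStateException("buffer is neither readable nor writable");
    }
}
```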
Usage example of post processing:
BloomFilter bloom = new BloomFilter();
bloom.setScale(2);
// ... set other parameters
PostProcessPass ppPass = new PostProcessPass();
ppPass.addFilter(bloom);
passManager.addPass(ppPass);
Usage example of writing a filter:
public class BloomFilter extends ColorFilter {
private TextureRenderTarget rtt1;
private TextureRenderTarget rtt2;
public void init(Renderer r){
rtt1 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
rtt2 = r.createTextureRenderTarget(r.getWidth(), r.getHeight(), Format.RGB8);
}
public void execute(Texture source, RenderTarget target, Renderer r){
// example of using Processor class
// does processing on an image, writing the output to a RenderTarget
Processor p = new Processor(r);
p.setSource(source);
p.setTarget(rtt1);
p.setMaterial(ResourceManager.getMaterial("bloom_extract.mat"));
p.process();
p.setSource(rtt1.getTexture());
p.setTarget(rtt2);
p.setMaterial(ResourceManager.getMaterial("blur_x.mat"));
p.process();
p.setSource(rtt2.getTexture());
p.setTarget(rtt1);
p.setMaterial(ResourceManager.getMaterial("blur_y.mat"));
p.process();
p.setSource(source, 0);
p.setSource(rtt1.getTexture(), 1);
p.setTarget(target);
p.setMaterial(ResourceManager.getMaterial("bloom_combine.mat"));
p.process();
}
}
Is the RenderTarget here the same functionality as the FrameBuffer class in my code? Actually, a render target is one render-to-texture attachment of an FBO, so FrameBuffer is the more precise name. I would not name these ColorFilter and ColorDepthFilter, because for deferred shading the PostProcessPass would need one color render target, another color render target storing the normals, and another render target for depth (and there can be even more render targets: specular, ambient occlusion, and so on). So deferred shading needs a whole bunch of textures (a whole FBO setup: a FrameBuffer), not just one color texture plus one depth texture.
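A FrameBuffer along these lines would simply collect several color targets plus a depth target. This is a hypothetical sketch, with textures represented by names only so it stays self-contained.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a FrameBuffer holding multiple color attachments and one depth
// attachment, as deferred shading needs (diffuse, normals, depth, ...).
public class FrameBuffer {
    private final List<String> colorTargets = new ArrayList<>();
    private String depthTarget;

    public void addColorTarget(String texture) { colorTargets.add(texture); }
    public void setDepthTarget(String texture) { depthTarget = texture; }

    public int getColorTargetCount() { return colorTargets.size(); }
    public String getDepthTarget() { return depthTarget; }
}
```

A deferred setup would then attach diffuse, normals and depth to one FrameBuffer, instead of being limited to one color plus one depth texture.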
No parameters for resource creation? I use a HashMap for passing parameters when creating resources (Texture, Material, Model). I presume that the resource will not be changed by the user code, so when given all the parameters, the resource system can cache the created resources. I'd like to go full-length on this and prevent changes to resources if the resource was requested as shared. Maybe a getCachedTexture() returning an immutable texture, and a getTexture() which isn't cached at all. The same principle could apply to Material too.
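The parameter-keyed caching I have in mind could be sketched like this. Everything here is hypothetical: a real implementation would load Textures or Materials rather than plain Objects, and the key scheme is just one possible choice.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Supplier;

// Sketch of a resource cache keyed by name plus creation parameters, so two
// requests with the same name but different parameters yield different
// resources, while identical requests share one cached instance.
public class ResourceCache {
    private final Map<String, Object> cache = new HashMap<>();

    public Object getCached(String name, Map<String, String> params,
                            Supplier<Object> loader) {
        // TreeMap makes the key independent of the parameter insertion order
        String key = name + "|" + new TreeMap<>(params);
        return cache.computeIfAbsent(key, k -> loader.get());
    }
}
```

The cached instance is the one that would be handed out as shared and therefore locked against modification.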
You propose that the ResourceManager be a singleton. I see no problem with that, though does that also mean that jME 2.1 should not adopt a context system? It is highly debated whether singletons are good or not; I personally don't like them and try to avoid them. Using an ApplicationContext makes it easy to replace the global singletons with user-defined ones, while enforcing static singletons with the "Singleton.getInstance()" scheme makes that impossible (hope that was clear; if not, I'll explain more).
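What I mean by ApplicationContext, as a minimal sketch with hypothetical names: the context itself is the single replaceable global, so user code can swap in its own service implementations, which a hard-wired Singleton.getInstance() does not allow.

```java
// Sketch of a replaceable ApplicationContext: services are looked up
// through the context, and the whole context can be swapped by user code.
public class ApplicationContext {
    private static ApplicationContext current = new ApplicationContext();

    public static ApplicationContext get() { return current; }

    // the one thing a static singleton cannot offer: replacement
    public static void set(ApplicationContext ctx) { current = ctx; }

    // stands in for a real service accessor such as getResourceManager()
    public String describeResourceManager() {
        return "default resource manager";
    }
}
```

User code could then subclass the context and override describeResourceManager() (or, in a real engine, getResourceManager()) before starting the application.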
I'd like to get the stable parts of my code into jME 2.1 (stable in the sense that I would not further change the way those parts work, only make some adjustments). Those are the Geometry and the GLSL handling. Other parts could be ported as well, to produce a stable (in the sense of well-tested and bug-free) jME 2.1, but I have ideas for further developing pretty much every part of the code.