I am creating this thread to avoid cluttering the monthly screenshot thread with lit/shadowed cubes, and to have a place to discuss features and whatever else regarding this project.
This is going to be open sourced once I am confident that at least the public api is stable enough. But I am favoring an early alpha release to get some feedback, as well as other computers to test everything on. Since I am running on Windows with Nvidia, I have the worst setup for testing shaders.
Most of the low level stuff is working. I have VSM shadows for spot and point lights.
On the high level side I ran into some design issues that I had not thought of.
I have created a simple but flexible enough pipeline that allows each render pass to work on the results of previous render passes without caring which pass produces them. So far so good. Here are the issues with the current design:
When implementing a reflection camera I would have to create a new pipeline, since I cannot reuse the one currently in use. I would, however, like the same pipeline to be reused. That alone would require only a small change in my design, but while thinking about it the bigger problem showed up: shadows, for example. I really do not want to re-render the shadow maps for each of the pipelines, since they are the same. That requires some kind of versioning on the pipeline resources.
What I have in mind is some kind of dependency tree that gets built and processed each frame.
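Roughly, such a versioned dependency tree could look like this (a minimal CPU-side sketch; all class and method names are placeholders of mine, not the actual code). Each pass declares its inputs, and each resource carries the frame index it was last produced for, so a shared producer like a shadow map pass runs only once per frame no matter how many pipelines (main camera, reflection camera, ...) consume its output:

```java
import java.util.*;

public class FrameGraphSketch {

    static class Resource {
        final String name;
        int version = -1;          // frame index this resource was last produced for
        Resource(String name) { this.name = name; }
    }

    interface Pass {
        Resource output();
        List<Resource> inputs();
        void execute();            // stand-in for the actual GPU work
    }

    static class CountingPass implements Pass {
        final String name;
        final Resource out;
        final List<Resource> in;
        int executions = 0;
        CountingPass(String name, Resource out, Resource... in) {
            this.name = name; this.out = out; this.in = List.of(in);
        }
        public Resource output() { return out; }
        public List<Resource> inputs() { return in; }
        public void execute() { executions++; }
    }

    final Map<Resource, Pass> producers = new HashMap<>();

    void register(Pass pass) { producers.put(pass.output(), pass); }

    // Depth-first resolve: produce the inputs first, then the pass itself,
    // skipping anything already produced for this frame.
    void resolve(Resource target, int frame) {
        if (target.version == frame) return;   // already up to date this frame
        Pass producer = producers.get(target);
        for (Resource input : producer.inputs()) resolve(input, frame);
        producer.execute();
        target.version = frame;
    }

    public static void main(String[] args) {
        FrameGraphSketch graph = new FrameGraphSketch();
        Resource shadowMap = new Resource("shadowMap");
        Resource mainScene = new Resource("mainScene");
        Resource reflection = new Resource("reflection");

        CountingPass shadows = new CountingPass("shadows", shadowMap);
        graph.register(shadows);
        graph.register(new CountingPass("main", mainScene, shadowMap));
        graph.register(new CountingPass("reflection", reflection, shadowMap));

        graph.resolve(mainScene, 0);
        graph.resolve(reflection, 0);  // shadow pass is NOT executed a second time
        System.out.println("shadow pass executions: " + shadows.executions); // prints 1
    }
}
```

The version check is the whole trick: both pipelines simply ask for the shadow map, and the first request per frame renders it.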
Depending on the progress in this area, I hope to have a possible alpha soon.
So it can be used as an extension to the engine? That is cool!
I was thinking the addition of deferred lighting would require changes on the engine side, so it is good to know it can be used as an extension!
If I have to make changes to the engine (like with the stencil), I will propose minimal changes as pull requests.
I was always for keeping the engine as tight as possible. As long as jme allows me to do everything I need, I can release/update much faster and am not dependent on the engine release cycle.
Also, there is the issue of maintaining the code when the engine gets bloated with what should be user projects/plugins/extensions.
Implemented 4 high level apis. Nothing yet that qualifies as a clean api without me having to add hacks. I had to focus my attention on something different for a while. I also got bored of the boxes, so we are now at Sponza. That was good, since it already showed me some bugs (already fixed) and some missing features (like alpha discarding during shadow map rendering).
I have published a pre-pre-pre alpha version to GitHub for those who want to test. I have not tested this on ATI cards, *nix, or macOS systems. Basically, it has only been tested on the worst development environment possible (Windows + Nvidia).
How does it work?
//Define your light mode (currently only BlinnPhong is available)
LightMode lightMode = new BlinnPhong(Constants.WorldNormals, Constants.BaseColorsSpecular, Constants.DepthStencil, Constants.GBufferBlinnPhong);
//Setup your default pipeline
RenderPipeline renderPipeline = new RenderPipeline("Default Pipeline", Constants.PostProcessingFP, Constants.DepthStencil);
//Enable the renderer
viewPort.addProcessor(new IlluminasRenderer(renderPipeline, assetManager, cam, rootNode));
//Use the currently only material:
Material material = new Material(app.getAssetManager(), "Materials/Illuminas/BlinnPhong.j3md");
//And you are good to go.
If you want shadows, use:
//Currently only point and spot lights are supported
ExtendedPointLight or ExtendedSpotLight
light.setShadowMode(ShadowMode.VSM); //VSM and VSM_GF are available
That's it for now. Next on the task list:
Extend the material to support animation/instancing and so on.
Add some post processing effects.
Alternative ShadowModes / fix the current ones.
I hope that the public api does not change that much.
As far as I can see, the LightLogic things are there to set the correct shader parameters for the different lighting modes.
Now, one of the benefits of deferred shading is that the actual shader used for rendering the geometries does not know anything about lights. It might be possible to use the LightLogic to set up the correct framebuffer for rendering, but I think that's it. At least some postprocessing is required if you want something lit.
This is also one of the downsides of deferred shading: you cannot have different light logic for different geometries. Also, your lights light everything, not only the subpart of the scene graph where they are added.
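To illustrate the split, here is a toy CPU-side sketch of the two passes (my own illustration, not the library's code). The geometry "shader" only fills the G-buffer and never sees a light; the lighting pass is a pure screen-space loop, which is also why every light inevitably touches every pixel:

```java
public class DeferredSketch {
    static final int W = 4, H = 4;
    // G-buffer: per-pixel albedo (grey value) and surface normal (z only, toy case)
    static double[] albedo = new double[W * H];
    static double[] normalZ = new double[W * H];

    // "Geometry pass": fill the G-buffer, no light information needed here.
    static void geometryPass() {
        for (int i = 0; i < W * H; i++) { albedo[i] = 0.8; normalZ[i] = 1.0; }
    }

    // "Lighting pass": pure screen-space loop over the G-buffer,
    // accumulating every light's contribution for every pixel.
    static double[] lightingPass(double[] lightDirZ, double[] lightIntensity) {
        double[] out = new double[W * H];
        for (int i = 0; i < W * H; i++) {
            for (int l = 0; l < lightIntensity.length; l++) {
                double ndotl = Math.max(0, normalZ[i] * lightDirZ[l]);
                out[i] += albedo[i] * ndotl * lightIntensity[l];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        geometryPass();
        double[] lit = lightingPass(new double[]{1.0, 1.0}, new double[]{0.5, 0.25});
        System.out.println("pixel 0: " + lit[0]); // 0.8*0.5 + 0.8*0.25 = 0.6
    }
}
```

Note there is no per-geometry hook anywhere in `lightingPass` — once the G-buffer is written, the information about which material or scene graph branch a pixel came from is gone unless you explicitly store it (e.g. via a stencil or material id channel).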
Additionally, I did not want to touch the engine itself. That is out of scope for now.
I am going to write a doc on the design decisions I took and on how to use the RenderPipeline. Actually, about 99% of the code in my lib is for the flexible pipeline, and only very little is about deferred shading itself. The scene processor, the way it is currently added, is only a suboptimal and misleading solution, but I did not find a way to hook in any deeper.
I am actually quite sure that it currently interferes with any other scene processors that are added. As said, this is pre-alpha, and lots of stuff I am going to need is not yet added.
While it would be nice to be able to use the core materials, I do not see a way without engine modifications. (I was thinking about some integration utils to be able to quickly test deferred rendering without having to replace the materials, but I have not implemented more than a scene traverser that replaces the materials.)
A single light is of course not a good example of when to use deferred shading. You have to pay the memory cost of the G-buffer textures and lose the ability to multisample.
If you do not have a very complex scene with lots of overdraw, I expect the deferred version to be slightly slower.
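A back-of-the-envelope cost model makes the trade-off visible (my own simplification, not a benchmark: it only counts shaded fragments and ignores bandwidth, culling, and light volumes). Forward shades every rasterized fragment per light, while deferred pays the G-buffer fill once plus one screen-space pass per light:

```java
public class CostModel {
    // Forward: every rasterized fragment (screen pixels * overdraw) is shaded per light.
    static long forwardCost(long screenPixels, double overdraw, int lights) {
        return (long) (screenPixels * overdraw) * lights;
    }

    // Deferred: fill the G-buffer once per fragment, then shade each screen pixel per light.
    static long deferredCost(long screenPixels, double overdraw, int lights) {
        long gBufferFill = (long) (screenPixels * overdraw);
        return gBufferFill + screenPixels * lights;
    }

    public static void main(String[] args) {
        long px = 1920L * 1080L;
        // One light, little overdraw: deferred is the slower path.
        System.out.println(forwardCost(px, 1.5, 1) < deferredCost(px, 1.5, 1));   // true
        // Many lights, heavy overdraw: deferred wins clearly.
        System.out.println(forwardCost(px, 4.0, 32) > deferredCost(px, 4.0, 32)); // true
    }
}
```

In other words: the break-even point moves in deferred's favor as light count and overdraw grow, which is exactly why a single-light Sponza is a bad showcase for it.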
Once finished i hope to have my integration util setup in a way that you have to add a few lines in simpleinit only to try it out. (If you are working with default jme materials)
The main advantage over stock jme rendering, imho, is the ability to change the light mode without having to change anything in your shaders, once my Lighting.glsllib is used. There is still a detail missing, but if that gets solved you might even implement a light mode in a separate project and all materials would use that technique.
Deferred PBR vs JmeStockPBR both without indirect lighting.
Not sure why jme does not light up the curtains, but I have already spent too much time on fine-tuning.
Jme still seems to be slightly brighter. There are a few pieces of code regarding sRGB and light attenuation that I have not yet implemented.
Shadows for directional lights and Image based lighting are still major features that are missing.
As for the PBR shader, it is nearly feature- and naming-equal with jme PBR. I have the same in mind for the classic Lighting material as well as terrain. I should make fast progress on this part, since I have the glsllibs at hand.
Well, I understand that gamma is a coefficient in the exponent used for final output color correction.
To me, that does not mean that the input colors (i.e. loaded textures or generated framebuffers) are treated differently, but I might be wrong.
Maybe zzuegg can prove or disprove it. I can hardly do that because my materials are not related to the jme materials, but it is difficult to find a combination between all the different settings which works “well enough” - not even speaking of an exact match between forward and deferred rendering.
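The distinction does matter in practice: in a gamma-correct pipeline, sRGB-encoded inputs (most color textures) are decoded to linear before lighting, and only the final output is re-encoded. A toy calculation shows that this is not the same as computing directly on the encoded values (using the common gamma-2.2 approximation rather than the exact piecewise sRGB curve):

```java
public class GammaDemo {
    // sRGB transfer functions, approximated with the common gamma-2.2 curve.
    static double srgbToLinear(double c) { return Math.pow(c, 2.2); }
    static double linearToSrgb(double c) { return Math.pow(c, 1.0 / 2.2); }

    public static void main(String[] args) {
        double texel = 0.5;        // value as stored in an sRGB texture
        double lightScale = 0.5;   // attenuation applied by the shader

        // Gamma-correct: decode the input, light in linear space, encode the output.
        double correct = linearToSrgb(srgbToLinear(texel) * lightScale);

        // Naive: light directly on the encoded value.
        double naive = texel * lightScale;

        // correct ~= 0.365, naive = 0.25 — a visibly different (darker) result.
        System.out.printf("linear-space: %.3f, gamma-space: %.3f%n", correct, naive);
    }
}
```

So whether the renderer decodes its inputs (and whether the texture formats are flagged as sRGB at all) directly changes the final brightness, which could plausibly account for small forward/deferred mismatches like the one above.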