I am creating this thread so I don't clutter the monthly screenshot thread with lit/shadowed cubes, and to have a place to discuss features and whatever else regarding this project.
This is going to be open sourced once I am confident that at least the public API is stable enough. But I am favoring an early alpha release to get some feedback, as well as other machines to test everything on. Since I am running on Windows with Nvidia, I have the worst setup for testing shaders.
Most of the low-level stuff is working. I have VSM shadows for spot and point lights.
On the high-level side I ran into some design issues that I did not think of.
I have created a simple but flexible pipeline that allows each render pass to work on the result of previous render passes without caring which one produces the actual result. So far so good. Here are the issues with the current design:
When implementing a reflection camera I would have to create a new pipeline, since I cannot reuse the current one. I would, however, like the same pipeline to be usable. That would require only a small change in my design, but while thinking about it a bigger problem showed up: shadows, for example. I really do not want to re-render the shadow maps for each of the pipelines, since they are the same. That requires some kind of versioning on the pipeline resources.
What I have in mind is some kind of dependency tree that gets built and processed each frame.
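To make the versioning idea concrete, here is a minimal, self-contained sketch. All names in it (`VersionedResource`, `renderIfStale`) are hypothetical and not the actual library API: a shared pass result such as a shadow map remembers the frame it was last rendered for, so when a second pipeline (e.g. the reflection camera) asks for it in the same frame, it gets the cached result instead of triggering a re-render.

```java
// Hypothetical sketch, not the actual library API: a shared pass result
// (e.g. a shadow map) tagged with the frame it was last rendered for.
class VersionedResource {
    private long renderedFrame = -1; // frame this resource was last rendered for
    int renderCount = 0;             // how many actual renders happened

    /** Returns true (and "renders") only if the resource is stale for this frame. */
    boolean renderIfStale(long frame) {
        if (renderedFrame == frame) {
            return false; // already up to date; the other pipeline reuses it
        }
        renderedFrame = frame;
        renderCount++; // the expensive shadow-map render would happen here
        return true;
    }
}

public class PipelineVersioningSketch {
    public static void main(String[] args) {
        VersionedResource shadowMaps = new VersionedResource();
        // Frame 0: the main pipeline renders, the reflection pipeline reuses.
        boolean mainRendered = shadowMaps.renderIfStale(0);
        boolean reflectionRendered = shadowMaps.renderIfStale(0);
        System.out.println(mainRendered + " " + reflectionRendered
                + " renders=" + shadowMaps.renderCount);
        // prints: true false renders=1
    }
}
```

A per-frame dependency tree would then walk the resources each pipeline needs and call something like `renderIfStale` on each, so shared nodes are processed exactly once per frame.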
Depending on the progress in this area, I hope to have a possible alpha soon.
So it can be used as an extension to the engine? That is cool!
I was thinking the addition of deferred lighting would require changes on the engine side; it is good to know that it can be used as an extension!
If I have to make changes to the engine (like with the stencil), I will propose minimal changes as a pull request.
I was always for keeping the engine as tight as possible. As long as jME allows me to do everything I need, I can release/update much faster and do not depend on the engine release cycle.
There is also the issue of maintaining the code when the engine gets bloated with what should be user projects/plugins/extensions.
Implemented 4 high-level APIs; none of them is yet a clean API that works without me adding hacks. I had to focus my attention on something different for a while. I also got bored of the boxes, so we are now at Sponza. That was good, since it already showed me some bugs (already fixed) and some missing features (like alpha discarding during shadow map rendering).
The screenshot shows how the shadows mix with each other. I am quite happy with the result, even though I am not yet filtering the shadow maps.
It took me two more days to implement the point light shadows. Turns out those were much trickier than the spot lights.
I have published a pre-pre-pre-alpha version to GitHub for those who want to test. I have not tested this on ATI cards, *nix, or macOS systems. Basically, it is only tested on the worst development environment possible (Windows/Nvidia).
How does it work?
//Define your light mode (currently only BlinnPhong is available)
LightMode lightMode = new BlinnPhong(Constants.WorldNormals, Constants.BaseColorsSpecular, Constants.DepthStencil, Constants.GBufferBlinnPhong);
//Set up your default pipeline
RenderPipeline renderPipeline = new RenderPipeline("Default Pipeline", Constants.PostProcessingFP, Constants.DepthStencil);
//Enable the renderer
viewPort.addProcessor(new IlluminasRenderer(renderPipeline, assetManager, cam, rootNode));
//Use the (currently only) material:
Material material = new Material(app.getAssetManager(), "Materials/Illuminas/BlinnPhong.j3md");
//And you are good to go.
If you want shadows, use:
//Currently only point and spot lights are supported
ExtendedPointLight or ExtendedSpotLight
light.setShadowMode(ShadowMode.VSM); //VSM and VSM_GF are available
That's it for now. Next on the task list:
- Extend the material to support animation/instancing and so on.
- Add some post processing effects
- Alternative ShadowModes / fix the current ones
I hope that the public API does not change that much.
I cloned the repo and compiled the code. If I wanted to test, what would I do next? I don’t see a test application …
Yeah, sorry for that. The Intel Sponza models alone are 4 GB of assets. I had to remove the examples subproject for now, until I have cleaned that up.
Is it possible to add it to the existing Lighting.j3md via lighting logics?
Like we have for SinglePass and MultiPass, and enable it by setting the light mode on the render manager?
As far as I can see, the LightLogic classes are there to set the correct shader parameters for the different lighting modes.
Now, one of the benefits of deferred shading is that the actual shader used for rendering the geometries does not know anything about lights. It might be possible to use the LightLogic to set up the correct framebuffer for rendering, but I think that's it. At least some postprocessing is required if you want something lit.
This is also one of the downsides of deferred shading: you cannot have different light logic for different geometries. Also, your lights light everything, not only the subpart of the scene graph where they are added.
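A toy, self-contained sketch of that last point (plain Java arrays standing in for the G-buffer and the fullscreen lighting pass; none of this is the actual library code): the lighting pass only sees per-pixel attributes, so every light is applied to every pixel, and there is no information left about which scene-graph branch a pixel came from.

```java
public class DeferredSketch {
    // Toy G-buffer: per-pixel world-space normals for 4 "pixels".
    static double[][] normals = {
        {0, 1, 0}, {0, 1, 0}, {1, 0, 0}, {0, 0, 1}
    };

    /** Fullscreen lighting pass: applies a directional light to EVERY pixel. */
    static double[] lightingPass(double[] lightDir) {
        double[] lit = new double[normals.length];
        for (int i = 0; i < normals.length; i++) {
            double nDotL = 0;
            for (int c = 0; c < 3; c++) {
                nDotL += normals[i][c] * lightDir[c];
            }
            lit[i] = Math.max(0, nDotL); // simple Lambert term
        }
        return lit;
    }

    public static void main(String[] args) {
        // A light pointing up-to-down lights every up-facing pixel; nothing in
        // the G-buffer says which geometry (or scene-graph branch) a pixel was.
        double[] lit = lightingPass(new double[]{0, 1, 0});
        System.out.println(java.util.Arrays.toString(lit));
        // prints: [1.0, 1.0, 0.0, 0.0]
    }
}
```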
Additionally, I did not want to touch the engine itself. That is out of scope for now.
I am going to write a doc on the design decisions I took and on how to use the RenderPipeline. Actually, about 99% of the code in my lib is for the flexible pipeline, and only very little is about deferred shading. A scene processor, the way it is currently added, is only a suboptimal and misleading solution, but I did not find a way to hook in deeper.
I am actually quite sure that it currently interferes with any other scene processors that are added. As said, this is pre-alpha, and lots of stuff I am going to need is not yet added.
While it would be nice to be able to use the core materials, I do not see a way without engine modifications. (I was thinking about some integration utils to quickly test deferred without having to replace the materials, but I have not implemented more than a scene traverser that replaces the materials.)