Enhancements to the JME3 Rendering Engine

Hello,
About two years ago (one year and seven months, to be exact), I posted a thread: Regarding the development of the core engine, I would like to know what everyone thinks - #27 by Ali_RS. I originally planned to complete this work back then, but had to step away from JME3 temporarily because work and making a living kept me busy. I have continued to follow JME3, however.
During my time away, I mainly worked in graphics roles on UE4/UE5 engine development. Dead by Daylight Mobile, built with UE4, is a recent title I helped launch. Although I'm still quite busy, my goals from two years ago are unchanged: I'm happy to contribute any technologies I can implement for JMonkeyEngine3, and I hope to merge them into the current core engine. I've now organized some of my previous modifications to JME3, which include:

  1. Framegraph (a Scenegraph organizes scene objects, while a Framegraph organizes a modern rendering workflow; see: https://www.gdcvault.com/play/1024612/FrameGraph-Extensible-Rendering-Architecture-in). I've implemented a basic framework; it isn't perfect, but it works and is compatible with the JME3.4 core.
  2. Implemented Forward, Deferred, TileBasedDeferred, Forward+ and other render paths. They are fully compatible with the current JME3 core (and should be compatible with most existing code), and are almost entirely non-invasive for existing projects. You can switch render paths at runtime with a single function call.
  3. Global Illumination (I originally planned to implement two main GI methods). Some may already know that JME3's PBR indirect lighting is similar to SkyLight + ReflectionProbes in UE/Unity3D/Godot. If you've used other engines, you'll have noticed they have ReflectionProbes, LightProbes, and SkyLights. In UE4/UE5/Unity3D, a SkyLight is the same as JME3's PBR indirect environment diffuse SH, while ReflectionProbes are like JME3's PBR indirect specular IBL. However, for real GI, a SkyLight and environment reflections aren't enough. In Unity3D, Godot, and UE4 (UE5 now uses Lumen throughout), GI is a combination of techniques: Lightmaps/LightProbes/LPV/VXGI/DDGI + SSGI/ReflectionProbes + SkyLight. See Godot's docs for details: Introduction to global illumination — Godot Engine (stable) documentation in English. So I've implemented a mainstream LightProbe system (though probe placement needs visualization tools - placing probes in pure code is too painful - so it isn't perfect yet); I've explained the details in code comments. I've named it LightProbeVolume for now to stay compatible with the current JME3 core, but some issues may need discussion with core devs - perhaps a separate thread?
  4. ShadingModel - now that there are multiple render paths, ShadingModels are needed so that materials shade correctly under deferred rendering (similar to UE4).
    There may still be bugs in the above, but this is ongoing work that I can maintain and fix regularly. What I want to confirm is whether these can be merged into the current core. I may implement more advanced graphics features later (UE4's HierarchicalInstancedStaticMesh, a GPU-driven pipeline, state-of-the-art occlusion culling, variable-rate shading, AMD FSR, skin rendering, etc.).

Below are some example screenshots from my tests. The following shows the differences when switching between the Forward, Deferred, and TileBasedDeferred render paths at runtime:




For generality, I have not optimized the deferred renderer too aggressively: it uses 4 RTs + 1 depth buffer by default. This follows the UE4 renderer design, which is sufficient for most advanced rendering data-packing needs. I pack the LightData information into textures instead of UniformBuffers (although I retain a UniformBuffer fallback option). Below is an example of the data in one frame:

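To make the 4 RT + depth layout concrete, here is a minimal sketch of how such a G-buffer could be allocated with the stock JME3 FrameBuffer API (the formats and the per-channel packing are illustrative assumptions, not the exact layout used in my branch):

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

public class GBufferSetup {

    /** Builds an MRT framebuffer with 4 color targets plus a depth texture. */
    public static FrameBuffer createGBuffer(int width, int height) {
        FrameBuffer gBuffer = new FrameBuffer(width, height, 1);
        gBuffer.setMultiTarget(true); // enable MRT rendering

        // Illustrative packing, loosely following a UE4-style layout:
        Texture2D rt0 = new Texture2D(width, height, Image.Format.RGBA16F); // base color + AO
        Texture2D rt1 = new Texture2D(width, height, Image.Format.RGBA16F); // world normal + roughness
        Texture2D rt2 = new Texture2D(width, height, Image.Format.RGBA16F); // metallic/specular + ShadingModelId
        Texture2D rt3 = new Texture2D(width, height, Image.Format.RGBA16F); // emissive / custom data
        Texture2D depth = new Texture2D(width, height, Image.Format.Depth);

        gBuffer.addColorTexture(rt0);
        gBuffer.addColorTexture(rt1);
        gBuffer.addColorTexture(rt2);
        gBuffer.addColorTexture(rt3);
        gBuffer.setDepthTexture(depth);
        return gBuffer;
    }
}
```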
TileBasedDeferred should in theory be faster than plain Deferred: it performs a tile-culling operation for PointLights, and the resulting data is packed into a fetch texture and a light-source texture:


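To illustrate the tile-culling idea, here is a rough CPU-side sketch of the principle (all names are mine; the actual implementation does this more carefully and packs the per-tile light lists into the fetch/light textures for the shading pass):

```java
import com.jme3.light.PointLight;
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import java.util.ArrayList;
import java.util.List;

public class TileLightCuller {

    /** Assigns each point light to the screen tiles its bounding sphere may touch. */
    public static List<Integer>[] cullLights(Camera cam, List<PointLight> lights, int tileSize) {
        int tilesX = (cam.getWidth() + tileSize - 1) / tileSize;
        int tilesY = (cam.getHeight() + tileSize - 1) / tileSize;

        @SuppressWarnings("unchecked")
        List<Integer>[] tileLights = new List[tilesX * tilesY];
        for (int i = 0; i < tileLights.length; i++) {
            tileLights[i] = new ArrayList<>();
        }

        for (int li = 0; li < lights.size(); li++) {
            PointLight light = lights.get(li);
            Vector3f center = cam.getScreenCoordinates(light.getPosition());
            if (center.z > 1f) continue; // crude cull for lights outside the frustum

            // Crude screen-space radius estimate; a real implementation would do a
            // tight per-tile frustum vs. sphere test instead.
            Vector3f edge = cam.getScreenCoordinates(
                    light.getPosition().add(light.getRadius(), 0, 0));
            float screenRadius = edge.subtract(center).length();

            int minX = Math.max(0, (int) ((center.x - screenRadius) / tileSize));
            int maxX = Math.min(tilesX - 1, (int) ((center.x + screenRadius) / tileSize));
            int minY = Math.max(0, (int) ((center.y - screenRadius) / tileSize));
            int maxY = Math.min(tilesY - 1, (int) ((center.y + screenRadius) / tileSize));

            for (int y = minY; y <= maxY; y++) {
                for (int x = minX; x <= maxX; x++) {
                    tileLights[y * tilesX + x].add(li); // light index, later packed into a texture
                }
            }
        }
        return tileLights;
    }
}
```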
For shadows, under deferred rendering you can only use post-process shadows, so you are limited to ShadowFilters. Unlike before, however, in the future I may add visibility information - storing Cast, Receive, Mask, etc. into the G-Buffer - so that more complex shadow logic can be implemented during the post-process shadow pass.



I noticed @zzuegg implemented a better shadow filtering effect here (Illuminas - Deferred Shading and more); I may consider porting that in.
Next is my current LightProbeVolume implementation. I referenced many sources, including Unity3D's LightProbe groups, Godot's approach, and UE4's Lightmass, and then implemented a LightProbeVolume that is relatively compatible with JME3:


Note that the image is not PBR, but Phong lighting + GI. As I mentioned, advanced graphics rendering requires adjusting some of JME3's existing workflows, so I plan to refine this further after discussing with the core developers.
Note the difference between constant ambient lighting and the LightProbeVolume: the first image uses constant ambient lighting, the second a LightProbeVolume:


Since this isn't running in an HDR pipeline, the colors are actually inaccurate - that can be fixed in future adjustments. But for now, let's look at the differences between the new LightProbeVolume and JME3's LightProbe. The most obvious difference: the number of probes is much higher, and the stored data includes not just irradiance but also data to prevent light leaks in the GI, similar to Unity3D:



As you can see, there is a wall between the blue box and red box, so GI does not leak between them.
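For anyone curious how the leak prevention works in principle: when a sample point blends the surrounding probes, each probe's weight is multiplied by a baked visibility/occlusion term, so probes behind a wall contribute nothing. A simplified Java sketch of that weighting (illustrative only, not the exact code from my branch):

```java
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;

public class ProbeBlendSketch {

    /** Minimal probe record: position, baked irradiance, and a baked visibility term. */
    static class Probe {
        Vector3f position;
        ColorRGBA irradiance;   // stands in for the full SH coefficient set
        float visibility;       // 0 = fully occluded from the sample region, 1 = unoccluded
    }

    /** Blends the probes of the enclosing cell, discounting occluded ones. */
    static ColorRGBA sampleIrradiance(Vector3f point, Probe[] cellProbes) {
        ColorRGBA result = new ColorRGBA(0, 0, 0, 1);
        float weightSum = 0;
        for (Probe p : cellProbes) {
            float w = 1f / (0.0001f + point.distance(p.position)); // distance falloff
            w *= p.visibility; // probes behind a wall drop out here
            result.addLocal(p.irradiance.mult(w));
            weightSum += w;
        }
        if (weightSum > 0) {
            result.multLocal(1f / weightSum);
        }
        result.a = 1;
        return result;
    }
}
```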
Alright, I need some feedback. The most important question is whether I can merge this into the core module (my code is based on JME3.4, but the JME3.6 core shouldn't have changed too much). Cheers! :wink:

24 Likes

The rendering path is fully compatible with the existing PBRLighting:




You just need to switch the rendering path dynamically in code.
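Roughly like the following sketch (the enum and method names here are placeholders, not the final API):

```java
// Sketch: switching the active render path at runtime for the main viewport.
// "RenderPath" / "setRenderPath" are hypothetical names, not confirmed identifiers.
ViewPort vp = renderManager.getMainView("Default");

renderManager.setRenderPath(vp, RenderPath.Forward);           // stock behavior

// Later, at runtime, the same scene can flip to deferred shading...
renderManager.setRenderPath(vp, RenderPath.Deferred);

// ...or tile-based deferred, with no changes to materials or scene code.
renderManager.setRenderPath(vp, RenderPath.TileBasedDeferred);
```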
Custom MaterialDefs are not sent down the deferred path by default, even with deferred rendering enabled. You can add a Technique block called GBufferPass to pack your data and write your ShadingModelId, then extend ShadingModelMatDef to implement shading for your ShadingModelId. This allows your custom MaterialDef to work under deferred rendering.
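As a rough sketch of what such a MaterialDef could look like (the shader paths, parameter names, and the exact mechanism for writing the ShadingModelId are illustrative assumptions, not the final convention):

```
MaterialDef MyCustomDef {
    MaterialParameters {
        Texture2D DiffuseMap
        Color Diffuse
    }
    Technique {
        // the usual forward technique, unchanged
        VertexShader GLSL150:   MatDefs/MyCustom.vert
        FragmentShader GLSL150: MatDefs/MyCustom.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
    Technique GBufferPass {
        // packs this material's data into the 4 G-buffer RTs (you don't have
        // to use all of them) and writes a ShadingModelId, so that the
        // (Tiled)DeferredShadingPass knows how to shade these pixels
        VertexShader GLSL150:   MatDefs/MyCustomGBuffer.vert
        FragmentShader GLSL150: MatDefs/MyCustomGBuffer.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
```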

12 Likes

With shadows disabled, the first image uses constant ambient lighting and the second a LightProbeVolume:



There is still no HDR, so the light colors are inaccurate; for now, just look at the differences between them.

12 Likes


15 Likes

Another global illumination technique I plan to implement soon in JME3 is the Light Propagation Volumes approach from CryEngine 3, which targets real-time dynamic scenes. It may not look as good as VXGI, but it is more memory-efficient and easier to implement and integrate into JME3. Another GI solution I may implement later is DDGI - or rather SDF DDGI, the global illumination used in Godot 4. (Side note: considering how complex graphics rendering is becoming, I currently have no plans to write something like a Vulkan-based JME4, or to implement an RHI (an abstract graphics API layer unifying DX, GL, VK, and Metal) like Unreal Engine's. But I see @danielp has this idea in this post, which is great - JME4 or JME5 should definitely embrace new technologies; at the very least it should use Vulkan. To be honest, OpenGL is really ancient: the spec is no longer being updated, and Vulkan (and Metal) is recommended on mobile too, given the large mobile Vulkan market, except for really old devices that may still need OpenGL. The biggest problem is that OpenGL is helpless for the latest GPU techniques, such as multi-threaded command submission, PSO caching, and RTX ray tracing. Still, I plan to enhance the graphics features in JME3 - and of course, if someone has already started on a new renderer, I'd be happy to port the related advanced graphics features over.)

14 Likes

This is reference material for implementing LightProbeVolume, some people might be interested :grinning::

http://melancholytree.com/thesis.pdf
https://research.nvidia.com/sites/default/files/pubs/2017-02_Real-Time-Global-Illumination/light-field-probes-final.pdf

9 Likes

I see - this is almost non-invasive from the user's perspective.

I myself use something like 6-8 viewports in my code, and it would require me to change almost nothing.

I'm not a low-level API specialist, and I'm not a core dev here, just a contributor, that's all. But I try to understand everything as much as I can.

I believe it would be very cool if Riccardo's pipeline could be merged (ported) together with the awesome work you have done here :slight_smile:
The Rendering Dependency Graph, or "framegraph", looks very similar to the new pipeline Riccardo was working on. I remember it could also split passes and re-use them, etc.
But I do not recall whether it could "schedule" their rendering - I believe not. As I understand it, your code uses asynchronous-compute scheduling, which I do not recall being implemented in Riccardo's deferred code.

I'm not sure if I understand correctly here: the Framegraph will just be a background process, while the user only has access to the renderer's "rendering paths"? Of course, I understand that advanced users might still get access to Framegraph passes if they want to use them for anything.

The light probe changes (the global illumination topic) will indeed change a lot and will be invasive for users,
so it would be best to find the best solution for creating and placing the required probes.

In any case, deferred lighting is in my opinion a very important addition for JME.

There was also another topic about changing the low-level API to something different like WebGPU, etc. I'm no expert there, so I'm not sure how many changes it would require or whether it would even be possible; but as far as I can see and understand, most of this code is built on the higher-level API, so for the Framegraph it would be a matter of changing how specific passes are rendered on a given low-level API.

2 Likes

In simple terms, a Framegraph is a system for managing the rendering process (the Passes). In modern renderers, a frame's rendering consists of several Passes (e.g. PrePass->BasePass->OpaquePass->TranslucentPass, etc.). Passes have an execution order between them, and resource dependencies between them. This is different from rendering paths - rendering paths are just methods for optimizing lighting calculations. Of course, different rendering paths may contain different Framegraphs. For example, Forward (as a Framegraph) may be PrePass->BasePass->OpaquePass->TranslucentPass, etc., while Deferred may be PrePass->GBufferPass->DeferredShadingPass->BasePass->TranslucentPass, etc., and TileBasedDeferred may be PrePass->GBufferPass->TileBasedDeferredShadingPass->BasePass->TranslucentPass, etc. In summary, Passes can be modularized, connected to other Passes via inputs/outputs, and executed in a specified order, which makes building out complex frames (Framegraphs) more flexible.
You may wonder: why not just use the existing JME3 SceneProcessors? Actually, a SceneProcessor may itself need multiple Passes internally - see BloomFilter: this SceneProcessor (Filter) has multiple Passes inside, so a SceneProcessor effectively contains a Framegraph.
Let’s look at a frame (Framegraph) in UE4:

And what it looks like in code:


For UE, a Pass can be a simple callback function, an encapsulated drawing module, a compute task (ComputeShader), or a custom profiling stage for detailed GPUProfiler recording.

Currently, for compatibility reasons, I implemented a basic Framegraph, which looks roughly like this in code:

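In spirit, the setup is something like the following sketch (class and method names are illustrative placeholders, not the actual API in my branch):

```java
// Illustrative sketch of pass/resource wiring; every name here is a placeholder.
FrameGraph graph = new FrameGraph(renderManager);

GBufferPass gBufferPass = new GBufferPass();                 // produces RT0..RT3 + depth
DeferredShadingPass shadingPass = new DeferredShadingPass();

graph.addPass(gBufferPass);
graph.addPass(shadingPass);

// Connect the G-buffer outputs to the shading pass inputs; the graph derives
// execution order (and resource lifetimes) from these connections.
shadingPass.bindInput("GBuffer0", gBufferPass.getOutput("RT0"));
shadingPass.bindInput("GBuffer1", gBufferPass.getOutput("RT1"));
shadingPass.bindInput("GBuffer2", gBufferPass.getOutput("RT2"));
shadingPass.bindInput("GBuffer3", gBufferPass.getOutput("RT3"));
shadingPass.bindInput("Depth",    gBufferPass.getOutput("Depth"));

// Each frame: sort passes by dependency, allocate transient resources, execute.
graph.execute();
```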
Currently the way resources are bound looks quite primitive; this can be improved later. In short, deferred rendering can be implemented even without a Framegraph - it is a separate concern from rendering paths - but the Framegraph is designed to modularize a frame's rendering (for flexibility) and to prepare for implementing advanced graphics features in the future (e.g. a GPU-driven pipeline). Rendering can work without a Framegraph, just as a 3D renderer can be written without a Scenegraph, but for modernization I think it is still quite important (at least, the advanced modern engines I've worked with - UE, Unity3D, CE5 - all organize a frame's complex rendering via a Framegraph).

Regarding LightProbeVolume and the existing JME3 LightProbes: my understanding is that JME3's LightProbe is effectively SkyLight + ReflectionProbe in UE4/Unity3D terms. Usually we may only need the diffuse GI from a SkyLight (from a procedural environment or a specified sky image), not specular reflection. On the other hand, when we do need specular reflection, UE/Unity/Godot provide specular IBL GI via separate ReflectionProbes, while for the SkyLight we usually need only one sky diffuse GI (SH, spherical harmonics lighting). So I plan to split JME3's LightProbe into SkyLight and ReflectionProbe in the future. One SkyLight + one ReflectionProbe equals the current JME3 LightProbe, but in the future it may be one SkyLight + multiple ReflectionProbes + other GI (LightProbeVolume / Light Propagation Volumes / VXGI / SDF DDGI / SSGI), so LightProbeVolume won't conflict with the current LightProbe (which handles SkyLight + ReflectionProbe).
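For reference, this is roughly how the current combined LightProbe is baked with the stock jme3-environment API (JME 3.4-era; the exact overloads vary a little across 3.x versions):

```java
import com.jme3.environment.EnvironmentCamera;
import com.jme3.environment.LightProbeFactory;
import com.jme3.light.LightProbe;
import com.jme3.math.Vector3f;

// Inside a SimpleApplication: a single probe currently bakes BOTH the diffuse
// SH ("SkyLight") and the prefiltered specular env map ("ReflectionProbe").
EnvironmentCamera envCam = new EnvironmentCamera(256, Vector3f.ZERO);
stateManager.attach(envCam);

// Once envCam has initialized (e.g. on a later update pass):
LightProbe probe = LightProbeFactory.makeProbe(envCam, rootNode);
rootNode.addLight(probe);
```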
Regarding the placement of LightProbeVolumes, I plan to add Volume tools similar to UE/Godot's in the SDK, so users can drag a Volume into the scene and adjust a few parameters to auto-place probes:


13 Likes

Nice work. From the few snippets, I can see that the basic idea I had when implementing Illuminas was the same.
Yours seems to be more integrated, at the cost of requiring core changes (not a bad thing).

I am curious to look through your implementation details.

3 Likes

Hi zzuegg :grinning:, I looked at your Illuminas - it's great. It's encapsulated internally with things like RenderTask; a Framegraph could serve as a more fine-grained manager of RenderTasks. I didn't spend too much time on the Framegraph, though - I implemented the basic framework; maybe someone can do better.

Regarding the rendering-path section - how I made it compatible with the existing material definitions in the system, and almost seamlessly compatible with most of the existing code (the Examples) - if the code gets merged into the core modules, you may be interested in studying it, though maybe the code is not written skillfully enough…
Here is a brief explanation of my implementation. To be non-invasive to existing projects and to accommodate various shading models, the general idea is as follows:
When the deferred rendering path is enabled, a GBufferPass node and DeferredShadingPass/TiledDeferredShadingPass nodes are added to the Framegraph. When execution reaches the GBufferPass node, the SetupMeshDrawCommands callback draws the meshes in the render queue that meet the conditions (i.e. their material contains a GBufferPass; UE4's approach is to dispatch a task at the start of the frame that submits eligible MeshDrawCommands to the various Passes). Then the DeferredShadingPass/TiledDeferredShadingPass executes (these two Passes use a global MatDef).
Custom materials do not include a GBufferPass by default, so even if the deferred rendering path is enabled they will not execute in it; instead they are dispatched to the subsequent ForwardPass node. If you want a custom material to run in the deferred rendering path, you need to add a GBufferPass block to the custom material definition and pack your own G-buffer data in it (there are 4 RTs available; you don't have to use all of them), and also write your ShadingModelId there. Then you need to extend the DeferredShadingPass/TiledDeferredShadingPass MatDef to parse your ShadingModelId, and then parse your own G-buffer data to implement your own shading model.
Through the ShadingModelId, PBRLighting, PhongLighting, Unshaded, SSS (skin shading), custom shading models, and so on can all be drawn in one DeferredShading/TiledDeferredShading pass.
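Conceptually, the dispatch keys off a small per-pixel ID stored in the G-buffer, something like the table below (the actual values and names in my branch may differ; this is just to show the idea):

```java
// Hypothetical ID table: the GBufferPass writes one of these per pixel, and the
// deferred shading pass branches on it to pick the right shading code.
public final class ShadingModelIds {
    public static final int UNSHADED   = 0;
    public static final int PHONG      = 1;
    public static final int PBR        = 2;
    public static final int SUBSURFACE = 3;  // SSS / skin shading
    public static final int CUSTOM     = 16; // user-defined models start here
}
```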
These are the Shading Models in UE4:


These are the Shading Models I implemented in JME3:

Feedback is welcome, but I have to go to bed now - it's already 11:36 pm here, late at night. :wink:

10 Likes

:open_mouth: Great job - I'm especially looking forward to features like deferred rendering and global illumination. The FrameGraph-based modifications will also be beneficial for future use with Vulkan and DX12.

5 Likes

Looks like amazing work you did. I hope you can push some of it into JME.
I don't understand all this low-level tech, but as an end user, can I try your implementation? Is it compatible with the current API? Do you plan to deploy it to Maven so we can consume it via a Gradle script?
Thanks a lot!

5 Likes

Hi adi.barda, I'm planning to merge it into jme3-core: these modifications are based on changes to the renderer module, so they can't be built as a standalone plugin or jar (but everything original is preserved - the new features work well with existing code and are non-intrusive and plug-and-play for users). Since this modifies the core module, and it's my first attempt at contributing to JME3 core code, I'm not too sure if I should get consent from core members before pushing the code, so I'm waiting for advice from the engine leadership…

8 Likes

Usually a pull request is made, where discussion regarding the implementation and whatever else can take place. I assume there is quite a lot to review.
Basically, push the changes to your fork of the engine and create a pull request.

Another benefit of going that route is that anyone willing to try it out can apply the changes to their own fork.

4 Likes

Thank you for your guidance, zzuegg. I will try to proceed according to the steps you outlined. :wink:

2 Likes

Also, if you're waiting for engine-leader advice, let's see:

Core Devs / Core Team:
pspeed - I'm not sure if Paul is a low-level API expert, but probably somewhat, yes.
RiccardoBlb - IMO the main person who can give advice on this topic.
sgold - our physics specialist (but not only that); I don't know whether he could give advice here.
Darkchaos - SDK developer. Of course, changes like the probes will affect SDK tools, so I think it would be important to discuss the probes topic with him.
Tonihele - SDK developer too.

I'm not sure who the "core of core" person is; I believe the first four are the main decision-makers here.

Additional important mentions:
Ali_RS - recent animation changes. I'm not sure whether this work relates to animations, but if so, it might be worth mentioning.
rickard - recent SDK changes; probably not related to this topic, but maybe.

Sorry if I skipped someone important to the topic.

I would really like another expert like you, JhonKkk, to contribute your amazing work.
Because, as I understand it, you would still maintain this and fix issues that might follow after the merge - so it's not as if it would be added with nobody to maintain it, right?
The almost-two-year break might be a little discouraging, but even the PBR work was merged into JME without the original author maintaining it, because people needed it too (of course, fixes had to be made later). But here you've said you're going to maintain it and even help with a future JME4 when needed, so that sounds very cool.

If you have a JME fork that contains the changes, I could help try it out. But yes, I hope it will just go into the JME core as a new sub-version, because the initial idea (at least for me) was to follow the core branch - the same way I would like Riccardo's pipeline merged into core JME functionality.

As for the merge, I guess that, as zzuegg said, creating a fork and a pull request is the first step to adding anything into jme3-core anyway; there the core devs and others can look into it and comment on GitHub.
Since this can't work as a plugin/library, the only testing users can do is against your JME fork anyway.
(Also, it's only been one day, so we should of course give it a little more time.)

5 Likes

Thank you for your patient guidance. Regarding some of the points you mentioned:

  1. I'm sorry for leaving for those two years. I was forced to make compromises with life, and only recently have I had time to come back to the JME3 community.
  2. If merged into the core module, I will continue maintenance and bug fixing until it is stable.
  3. As you said, since the modifications may involve several parts, further communication with RiccardoBlb may be needed for some internal renderer adjustments (though he seems quite busy?). For the LightProbeVolume, I may seek help from SDK developers like Darkchaos or tonihele. For potential issues, I may discuss with pspeed, sgold, and Ali_RS…
  4. What I may do in the near future:
    a. Fork the latest JME3.6 source code, migrate local changes to JME3.6, and test all existing Examples.
    b. Sync progress in this thread.
    c. Try a pull request to the JME repository.
    d. Progress may not be very fast (though I have committed the current JME3.4 mods to my personal repo so interested folks can take an early look), because currently I only have weekends for JME3.
    e. To be honest, this is my first time contributing source code. Although I posted some things two years ago, those were mainly personal GitHub links, so I'm not too familiar with the workflow here. Again, thank you for your patience and for @zzuegg's guidance. :grinning:
5 Likes

Welcome back, @JhonKkk!

Buried in this sudden, lengthy discussion are some vital questions:

some issues may need discussion with core devs - perhaps a separate thread?

This forum topic (thread) is a fine place for discussion with core devs. However, I plan to relocate it to the “Development” category.

What I want to confirm is whether these can be merged into the current core.

I’m not too sure if I should get consent from core members before pushing the code, so I’m waiting for advice from the engine leadership…

As @zzuegg and @oxplay2 indicated, the customary procedure for integrating changes into the Engine is to create a fork and pull request at GitHub. @oxplay2 also did a good job of enumerating the current leadership of the JMonkeyEngine project.

Despite being labelled an “Engine leader” and occasionally managing Engine releases (including v3.6.1-stable, our most recent), I don’t feel qualified to evaluate the merits of the changes you’ve described. They sure sound exciting, though!

In my personal view, the Engine side of the project (as opposed to the SDK) is currently adrift and without active leadership. The time is ripe for people with skills and energy to fill that void. I don’t know you well, but I’m hoping you’re such a person. Perhaps our best way forward will be to set you loose as a core dev and let you manage the v3.7 release of the Engine. (I’m speaking for myself here, remember.)

My advice is for you to get in touch with @RiccardoBlb. I suspect he’s busy with other projects these days. However, he’s probably the person best equipped to evaluate your ideas and help you get them integrated into the Engine. He understands the Engine’s internals. He also has the authority to approve and integrate pull requests and add people to the project.

12 Likes

Hi @sgold , thank you for taking the time to provide valuable suggestions. I really appreciate it, thanks again. :slightly_smiling_face:

2 Likes

Hi, did you manage to create a GitHub JME fork with a pull request into the original GitHub JME repo?
I tried to find whether there is one, but could not find it.

1 Like