I’ve been thinking a lot about what features the engine would most benefit from, and so far I’ve found 3 features which I think would significantly boost performance and save time for developers.
Virtualized geometry (dynamic LODs) and software rasterization. This would allow the engine to render far more detailed meshes and far more triangles at once. UE uses this to render over a billion triangles in real time.
Virtualized textures. This would allow the engine to use higher resolution textures far more efficiently.
Global illumination. This would make even simple scenes look far, far better. If we choose a dynamic method, it would additionally require no work from developers. @JhonKkk already made dynamic diffuse global illumination, but unfortunately I cannot find it anywhere.
JhonKkk’s GI stuff is in his repository, under a branch called “gi lab” or something like that.
If I had to choose from the 3 points above, I think GI would bring the most benefit to current projects. (Though I am still unsure how many projects are using PBR.)
Streamlined asset pipeline (not needing Blender skills and several tools just to get an animated model imported), based on the forum posts from last month.
I think GI is the option with the highest impact, and it’s certainly what I currently miss most. Lack of GI in jME has been enough to make me consider switching to Godot (though I am quite reluctant to move away from jME).
I did look at that branch, but I think it only contains code for light probe volumes. If you manage to find the DDGI work, please send it to me.
In the meantime, I was thinking about trying voxel cone tracing for global illumination. It has several things going for it:
No hardware ray tracing required! We’d otherwise need Vulkan or OptiX (NVIDIA only).
Fully dynamic, no precomputation required.
Produces pretty good results (not perfect ofc).
Probably one of the simpler methods available.
I’m definitely open to looking at other methods, but this seems like the best one at the moment, especially for someone who hasn’t done GI before.
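For anyone unfamiliar with the technique, the heart of a cone trace is just a front-to-back accumulation loop: march along the cone axis, sample an increasingly coarse mip of the voxel texture as the cone footprint widens, and composite until opaque. Here’s a CPU-side Java sketch with a stubbed-out voxel sampler — the real thing lives in a fragment shader sampling a 3D mip chain, so every name here is illustrative, not actual engine code:

```java
// Illustrative front-to-back accumulation loop for voxel cone tracing.
// The voxel sampler is stubbed out as a constant semi-transparent medium.
public class ConeTraceSketch {

    // Stand-in for sampling the voxel texture at a distance along the cone,
    // where 'diameter' would normally select the mip level.
    static float[] sampleVoxels(float dist, float diameter) {
        return new float[] {0.2f, 0.2f, 0.2f, 0.1f}; // rgba
    }

    // apertureTan = tan(half-angle of the cone); returns rgb + occlusion.
    static float[] traceCone(float apertureTan, float maxDist) {
        float[] accum = {0f, 0f, 0f, 0f};
        float dist = 0.5f; // small start offset to avoid self-sampling
        while (dist < maxDist && accum[3] < 1f) {
            float diameter = 2f * apertureTan * dist; // cone footprint here
            float[] s = sampleVoxels(dist, diameter);
            float w = (1f - accum[3]) * s[3]; // front-to-back blend weight
            for (int i = 0; i < 3; i++) accum[i] += w * s[i];
            accum[3] += w;
            dist += diameter * 0.5f; // step grows with the footprint
        }
        return accum;
    }

    public static void main(String[] args) {
        float[] r = traceCone((float) Math.tan(Math.toRadians(30)), 40f);
        System.out.printf("rgb %.3f, occlusion %.3f%n", r[0], r[3]);
    }
}
```

The step size growing with the footprint is what keeps the trace cheap: far samples are big and coarse, so the whole cone costs only a handful of texture fetches.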
Edit: here’s a really awesome article on voxel cone tracing if anyone is interested:
I don’t think it’s too bad at the moment, plus there are tools already made such as MonkeyWrench, jmec, and Maud (hopefully Envision3D soon as well). Not knowing how to use Blender is another problem entirely, imo.
I sometimes think that a Blender addon for exporting directly to j3o would be neat.
…and then have to potentially re-export for each new JME version…
Even for the models I create, there is always some level of tweaking they go through post-export to use them in the game… even if it’s just tagging materials to export, normalizing textures, or something. Always something. (Edit: I ALWAYS create a jmec script for every model even if it will only have one little thing in it… I just know there is a high likelihood to need “something”)
That goes up exponentially with “off the shelf” models.
I’ll admit, I haven’t done a whole lot of post-export tweaking, but yeah, I can see that a j3o exporter wouldn’t be great when that tweaking does need to happen.
Also, maintaining a blender exporter seems like an awful job… I’m happy to let the gltf folks do the heavy lifting.
(And note: “back in the day”, I was a big proponent of a JME-specific exporter… though I would probably not have used j3o as the format. There are so many differences between “an application that can make feature films” and a “game engine” that some game-level-specific exporter is definitely the right idea. Fortunately, that’s a role that gltf is kind of meant to fill now.)
I’m very much of the opinion that anything is better than nothing for GI, so please take the following as additional algorithms/implementations that may be worthwhile to look into rather than a suggestion to pursue instead of VCT GI.
Godot has VoxelGI & SDFGI (I assume VoxelGI is an implementation of Voxel Cone Tracing, but not positive), and I get the impression that SDFGI tends to be a bit more robust. I know they’re experimenting with a successor to SDFGI too, so that might bear keeping an eye on.
DVBGI looks quite promising also, though the paper mentions using ray-tracing shaders so I’m not sure if performance would be competitive without ray tracing hardware.
Just my 2 cents’ worth. On the topic of “pipeline”, active support for more than glTF would be fantabulous. At the very least, I would recommend Collada (it’s a standard), and FBX because there is existing established work there. I know glTF is “all the rage” atm, but there is a lag in adopting glTF in some vendor products ( /glares at Strata Design ).
Collada is kind of a pseudo-standard. There are so many “extensions” possible that it can be very difficult to fully support. It was also not designed with “game assets” in mind.
I think JME already had collada support at some point but I find it such an awful format that I never used it… so I didn’t track what happened to it.
…which is literally the only reason to support this otherwise proprietary format. FBX is at least more game-oriented, though. There is some support for FBX already in JME. It just needs someone motivated (like yourself) to extend and maintain it.
glTF is specifically designed for the purpose we intend to use it for… and while some tools may lag in adopting it, it’s still the best thing we have today. Transferring models from glTF-lacking tools into glTF-supporting tools is still the easier way to go. And for every “But that’s hard because of this incompatibility…”: welcome in advance to every conversation you will have about the other formats supported by those tools.
I’ve gotten voxelization working consistently. Here is a visualization of a scene voxelized to a 64x64x64 texture and lit with a single spotlight. The visualization clips the scene to within the voxel map’s global coordinates, which is a bounding box of size 40x40x40 centered on zero.
There is still a lot of flickering going on around the edges of the cubes, and especially where they intersect each other, and there are also a few unexplained black spots. It doesn’t have to be exact, so for now I think those problems can be ignored.
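As an aside, the world-to-voxel mapping for that setup (a 64³ texture over a 40×40×40 box centered on the origin) is just a scale and offset, with anything outside the box clipped. A tiny illustrative sketch — not the actual shader code:

```java
// Illustrative mapping from world space to voxel coordinates for a
// 64x64x64 grid covering a 40x40x40 box centered on the origin.
public class VoxelMap {
    static final int RESOLUTION = 64;
    static final float EXTENT = 40f; // box edge length in world units

    // Returns the voxel index along one axis, or -1 if outside the box
    // (mirroring how the visualization clips the scene to the box).
    static int toVoxel(float worldCoord) {
        float normalized = (worldCoord + EXTENT / 2f) / EXTENT; // [0,1) inside
        if (normalized < 0f || normalized >= 1f) return -1;
        return (int) (normalized * RESOLUTION);
    }

    public static void main(String[] args) {
        System.out.println(toVoxel(0f));   // center of the box -> 32
        System.out.println(toVoxel(-20f)); // lower bound -> 0
        System.out.println(toVoxel(25f));  // outside the box -> -1
    }
}
```

At this resolution each voxel covers 40/64 = 0.625 world units, which is part of why edges and intersections shimmer: small geometry straddles voxel boundaries.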
I don’t have an image to show of this, but I also got the actual cone tracing part “working”. It’s still unconfirmed because I couldn’t decide whether the result looked wrong because the cone tracer is messed up or because of the lack of shadows (I’m hoping it’s the latter). The indirect lighting is most definitely there, so I’ll call it a win.
Currently, I’m working on adding shadows using a compute shader to calculate exposure for each voxel. I think it’ll be a pretty cool technique… once I get the compute shader to actually run. I’m spending most of my time debugging my compute shader library, unfortunately.
Among all the issues, I think this one should be considered seriously:
Use the new java.lang.foreign API to replace LWJGL in the desktop modules:
The LWJGL3 .dll and .so files we ship are not exactly the ones you download from the official site; they come from the LWJGL CI, and some of the source code is modified. With java.lang.foreign there would also be no need to wait for LWJGL releases to pick up bug fixes.
It would also give better lifecycle control over direct ByteBuffers and native method invocation: NativeObject would no longer need to worry about native-side memory and GC, since the Arena class handles that.
There would be no need to ship the .dll and .so files with the program; users could install the natives themselves, with the copies bundled in the jar kept only as a fallback.
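For context, here is roughly what that looks like with the FFM API (java.lang.foreign, final since JDK 22): a minimal, self-contained sketch that binds the C library’s strlen, with a confined Arena owning the native memory lifecycle that NativeObject currently has to manage by hand. This is an illustration of the API, not proposed jME code:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

// Minimal sketch of the FFM (java.lang.foreign) API, final since JDK 22.
public class FfmSketch {

    // Binds the C library's strlen and calls it on a native copy of the string.
    static long nativeStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        // A confined Arena frees its native allocations deterministically on
        // close, instead of waiting for the GC to reclaim a direct ByteBuffer.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s); // NUL-terminated copy
            return (long) strlen.invokeExact(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("jMonkeyEngine")); // prints 13
    }
}
```

The same Linker/Arena pattern scales up to whole bindings, which is how an OpenGL or GLFW layer without LWJGL would presumably be built.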
IMHO, Vulkan is already possible with the merge of PR2304. The problem is that you won’t be able to use much of JME’s existing architecture, so it’ll practically be a new engine.
I did fiddle around with Vulkan support a little bit. It doesn’t work yet, but in theory it “should”. There’s a bug that will need the validation layers to track down.
MonkeyWrench supports Collada and FBX (though to what extent, I don’t know).
MonkeyWrench is fine for importing Collada models.
FBX models can be imported into JME using MonkeyWrench or jme3-plugins, but only if the model is 10 to 13 years old. From a support standpoint, the FBX format is even more of a nightmare than Blend.