jME’s future development is a topic we’ve all discussed multiple times before (here being the first thread that comes to mind), but in light of some new industry developments I think it bears revisiting.
I see two primary themes that have emerged over the years:
1. Will jME support Vulkan?
2. jME’s API could be improved (@RiccardoBlb’s work on Pipelines comes to mind here)
Regarding (1), the consensus here (which I concur with) seems to be that jME doesn’t stand to gain much (if anything) from Vulkan without fundamental architectural changes. (2) has been approached on a per-feature basis, without breaking changes to the core API (which would only be appropriate for a major release) - though I don’t know whether the Pipelines feature builds on top of the core or refactors it.
My interest in this topic was piqued by recent developments around the WebGPU standard, which seems close to finalization: preview releases are beginning to ship in browsers, and a 1.0 release of the spec looks near. This is intriguing for jME because, in addition to the embedded use in web browsers, Google and Mozilla are producing C/C++ APIs (Dawn and wgpu-native, respectively) for using their WebGPU implementations as standalone native libraries. We’ve had similar discussions in the past about using ANGLE to abstract the OpenGL API, and WebGPU could fill a similar role here.
Abstracting the native API brings multiple potential advantages on desktop alone: a consistent interface across platforms (no concern over OpenGL core/compatibility profiles, no more being capped at OpenGL 4.1 on macOS), and getting the best/preferred graphics driver for each platform with all the benefits that brings (both ANGLE and WebGPU abstract over Direct3D, Metal, Vulkan, and/or OpenGL). This abstraction could also make browser-based games made with jME far more achievable (see @RiccardoBlb’s demo of a minimal jME in the browser). I expect this could also benefit mobile devs, though I’m not as familiar with that space, so I’ll let others address that.
WebGPU brings another benefit: it’s far less verbose than Vulkan while supporting many of the same (or very similar) high-performance techniques. For comparison, consider the difference between a “Hello, World” triangle in Vulkan (~1900 LOC - not the most minimal example possible, but not unreasonably larger than typical from what I’ve seen) and roughly the same in WebGPU (~85 LOC). Despite that brevity, WebGPU supports many modern high-performance GPU techniques, such as DrawIndirect (which allows compute shaders to perform GPU-driven rendering and achieve some phenomenally fast results).
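To ground the DrawIndirect point, here’s a small self-contained Java sketch (the class and method names are mine, purely illustrative - not from any real binding) that packs the four-u32 argument struct WebGPU’s drawIndirect() reads from a GPU buffer. In a GPU-driven renderer a compute shader would write these values on the GPU after culling, so the CPU never loops over objects at all:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative sketch only: packs the four little-endian u32 fields that
// WebGPU's drawIndirect() consumes from a GPU buffer.
public final class IndirectDrawArgs {
    public static final int BYTE_SIZE = 4 * Integer.BYTES;

    public static ByteBuffer pack(int vertexCount, int instanceCount,
                                  int firstVertex, int firstInstance) {
        ByteBuffer args = ByteBuffer.allocateDirect(BYTE_SIZE)
                                    .order(ByteOrder.LITTLE_ENDIAN);
        args.putInt(vertexCount)    // u32 vertexCount
            .putInt(instanceCount)  // u32 instanceCount
            .putInt(firstVertex)    // u32 firstVertex
            .putInt(firstInstance); // u32 firstInstance
        args.flip();
        return args;
    }
}
```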
As others have pointed out, we need a clear end goal and value proposition in mind, or we’re probably going to end up wasting time. From what I’ve seen over the years, I’d be inclined towards a direction like the following:
- Layered modularity. Fundamental modules (math, rendering interface, etc.) export packages/types/classes that can be used by higher layers, but do not depend on those layers themselves. Behavior that must be provided by a higher layer is expressed by implementing an interface the lower layer provides (see the first sketch after the diagram below).
- Replaceable layers. The functions provided by each layer are expressed as interfaces describing implementation-independent behavior, whose implementations can be swapped out at will (jME already does this to a large degree).
- Data-driven design. Scenegraphs are difficult to learn and carry a lot of caveats around using them efficiently - especially for large scenes. They’re also likely to be scattered sparsely in memory, which never helps a modern CPU (though in practice large caches usually keep it from being terrible). I’d suggest providing a core ECS-style interface to the engine (the “scene”), modeling relationships such as parented hierarchies between entities via components, and letting the renderer implementation determine the most efficient way to cull/render - which in a GPU-driven architecture is probably going to be very different from a CPU-culled BVH. Taking this idea far enough could even result in a situation where (invisibly from the end-user’s view) the renderable objects’ data components are stored in native memory buffers and passed off to WebGPU compute shaders with very little overhead - a handful of large CPU->GPU buffer copies rather than many small ones (see the second sketch after the diagram below).
This might look something like the following:
```
Effect system (animation, audio, etc.) ---\       /--> Audio Layer (OpenAL, WebAudio)
ECS (object transform, physical data)  ----+-----+
Physics system                         ----|      \--> Rendering Layer (Scenegraph/WebGPU)
Pipelines                              ----/
```
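To make the layered-modularity and replaceable-layers bullets concrete, here’s a minimal Java sketch (all names here are hypothetical, not a proposed jME API): the core module owns a small backend interface, and concrete renderers implement it from separate modules, so the core never depends on WebGPU, Vulkan, or anything else above it:

```java
// Core rendering-api module (hypothetical): owns the contract and has no
// dependency on any concrete renderer. Handles are opaque longs so GPU
// resource details never leak into the core's API surface.
public interface RenderBackend {
    void beginFrame();
    void draw(long meshHandle, long materialHandle, int instanceCount);
    void endFrame();
}

// Would live in a separate backend module; swapping renderers means
// swapping which of these the engine is wired up with at startup.
final class WebGpuBackend implements RenderBackend {
    @Override public void beginFrame() { /* acquire surface texture, begin pass */ }
    @Override public void draw(long mesh, long material, int instanceCount) {
        /* bind pipeline + resources, issue draw / drawIndirect */
    }
    @Override public void endFrame() { /* submit command buffer, present */ }
}
```

A VulkanBackend or MetalBackend is then just another class - which is all the “replaceable layers” bullet amounts to in practice.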
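And a second sketch for the data-driven bullet (again, hypothetical names, purely illustrative): a component store that keeps every entity’s 4x4 world transform in one contiguous native buffer, where an entity is just an index. Handing the whole scene to a WebGPU storage buffer then becomes a single bulk copy instead of a per-Spatial traversal:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Illustrative sketch: flat, contiguous storage for one component type.
public final class TransformStore {
    private static final int FLOATS_PER_XFORM = 16; // column-major 4x4

    private final FloatBuffer data;

    public TransformStore(int maxEntities) {
        this.data = ByteBuffer
                .allocateDirect(maxEntities * FLOATS_PER_XFORM * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
    }

    /** Writes one entity's transform (16 floats, column-major). */
    public void set(int entity, float[] m) {
        data.position(entity * FLOATS_PER_XFORM);
        data.put(m, 0, FLOATS_PER_XFORM);
    }

    /** Whole store as one read-only view, ready for a single GPU upload. */
    public FloatBuffer view() {
        FloatBuffer copy = data.duplicate();
        copy.clear(); // position 0, limit = capacity
        return copy.asReadOnlyBuffer();
    }
}
```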
This is effectively a complete rewrite of jME, at least as far as scene/rendering handling goes (I hope the math/animation libraries could be largely re-used), and I think API compatibility is pretty well out of the question. I don’t see this as a problem - Unreal Engine, for example, tends to make major (and I believe largely breaking) changes between engine releases (UE 4 → 5 comes to mind). jME 2 → 3 was such a change, and it was appropriate for moving from a primarily fixed-function OpenGL scene to a shader-based scene. The native graphics capabilities have undergone just as seismic a conceptual shift between OpenGL 2 and OpenGL 4+/D3D 12/Vulkan/Metal, so I think it’s appropriate for the engine to change again to take advantage of it. A jME 4 that is totally new and incompatible with jME 3 is no more problematic than jME 3 being incompatible with jME 2 was.
The last time this topic came up, the questions all revolved around Vulkan, and as many here pointed out, Vulkan is incredibly verbose and time-consuming to develop with - and no panacea. WebGPU changes that picture because it allows expressing most of what Vulkan can with a tiny fraction of the code, and in a way that’s uniquely cross-platform. This design also isn’t married to WebGPU - one could just as well replace the WebGPU renderer with a native Vulkan, D3D, Metal, etc. implementation if so inclined - but building the core “supported” renderer on WebGPU brings some unique advantages.
Thoughts?
Edit: Java’s Project Valhalla could very well have some major (and highly favorable) implications for a data-driven design.
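For anyone unfamiliar with why Valhalla matters here, a sketch using its *proposed* value-class syntax (draft JEPs, subject to change, not in any shipped JDK): value classes have no identity, which frees the JVM to flatten arrays of them into contiguous primitive data - exactly the memory layout a data-driven component store (and a bulk GPU upload) wants:

```java
// Proposed Valhalla syntax (draft; subject to change). A value class has no
// identity, so Vec3[] may be flattened into one packed run of floats with
// no per-element object headers or pointer chasing.
value class Vec3 {
    final float x, y, z;
    Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

class PositionStore {
    // With flattening, this behaves like one contiguous float[3 * N] in memory.
    final Vec3[] positions = new Vec3[100_000];
}
```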