JME - Future Development

I’m not aware of any place where serialization uses reflection, though it’s been quite a while since I last browsed anything related to that.

On this topic there are good discussions and forum threads in the libGDX community.
I have already played Slay the Spire on Xbox; I know it was ported by Sickhead Games, first to C# and then to C++. Maybe we could build automated tooling that does for jME everything Sickhead does by hand. The process involves porting the codebase to a different language that can run on the target platform, which can be done manually or with a compiler/transpiler. As for GraalVM, I think the effort to make it work would be very significant - practically reinventing the wheel.

Compiler/transpiler support is bad enough, let alone manually rewriting. I just finished the initial implementation of my second compiler project - running WASM on the JVM via a (partially implemented) Futamura projection - and I’ve tinkered with LLVM code generation before (the C API is surprisingly pleasant and LWJGL has a very credible binding for it). I’ve never attempted going from JVM → native/WASM before, and if we decide to try to support AOT compilation for console targets we might be better off filing PRs for TeaVM (which already has a C target, though I don’t know how well it’s supported or what its state is) rather than starting from scratch. (My biggest concern with TeaVM is that it partially reimplements the JDK class library rather than projecting from OpenJDK bytecode to JS/WASM.)
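For readers unfamiliar with the term, the first Futamura projection says that specializing an interpreter to a fixed program yields a compiled form of that program. A toy Java illustration of that idea (nothing to do with the actual WASM project - the Insn/interpret/specialize names are invented just for this sketch):

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

public class FutamuraSketch {
    // A trivial "instruction set": each instruction maps an int accumulator to a new one.
    sealed interface Insn permits Add, Mul {}
    record Add(int n) implements Insn {}
    record Mul(int n) implements Insn {}

    // The interpreter: walks the program for every single input.
    static int interpret(List<Insn> program, int input) {
        int acc = input;
        for (Insn i : program) {
            if (i instanceof Add a) acc += a.n();
            else if (i instanceof Mul m) acc *= m.n();
        }
        return acc;
    }

    // The "specializer": given a fixed program, build a function with the
    // instruction dispatch resolved ahead of time (the first Futamura projection).
    static IntUnaryOperator specialize(List<Insn> program) {
        IntUnaryOperator compiled = x -> x;
        for (Insn i : program) {
            IntUnaryOperator prev = compiled;
            if (i instanceof Add a) compiled = x -> prev.applyAsInt(x) + a.n();
            else if (i instanceof Mul m) compiled = x -> prev.applyAsInt(x) * m.n();
        }
        return compiled;
    }

    public static void main(String[] args) {
        List<Insn> program = List.of(new Add(2), new Mul(3));
        System.out.println(interpret(program, 4));             // 18, interpreted
        System.out.println(specialize(program).applyAsInt(4)); // 18, without per-call dispatch
    }
}
```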

While I rather pity the poor fellow who starts off to develop a new JVM (or AOT specialization of JVM bytecode), I think the best route is either LLVM code generation or transpiling to CIL, since the CLR has good AOT support for many different platforms. (Though projecting bytecode to C/C++ sources also has some notable advantages for closed-off platforms that may have their own compilers.)

The biggest concern I have with GraalVM is that it’s an Oracle product aimed at enterprise uses - so good luck getting a PR accepted, and even if you did, single-party corporate governance of a project doesn’t always end well.

2 Likes

I have a suggestion:
We should separate this discussion into several sections, for example:

  1. Fundamental rendering techniques:
    WebGL, WebGPU, Vulkan, GLES…
  2. Shader system: materials, shader nodes, what is next?
    a new rendering pipeline?
    new GL shader support: geometry, tessellation, compute shaders…
    upgrade the filter/post-processor system?
    Shader code:
    basic shader libraries?
    new effects?
  3. System issues:
    asset management?
    ECS?
    scene management: save, reload…?

We should first establish what we want and what we are able to do,
then choose what to do next.
Either way, we need a clear to-do list.

Another goal I expect is:
during these discussions, most of us will learn and come to understand the engine better.
You can only contribute once you fully understand the engine’s mechanisms and OpenGL.
If you are making your own game, you may want to focus on system issues.
If you are learning GL, you will want to spend more time on the shader system.
If you want more for your game, then focus on effects.

1 Like

I’ve been thinking about this for a while now. I agree we should have several discussion topics here on the forum for the big-picture questions, but GitHub PRs might be a good place for deeply technical, detailed discussions about implementation details. On the other hand, keeping everything here gives us one place to look for everything, which would be quite nice for the historical record. I’d prefer to keep deeply technical conversations attached to specific PRs though, as it’s important to have concrete code examples to discuss and compare.

I think once we come to a consensus on the fundamental engine architecture, the to-do list will largely write itself. I’m concerned about diving into implementations of subsystems without at least a credible rough draft of how systems will interact and what guarantees are provided by engine interfaces (exposed functionality, threading models, etc.). At a high level we already seem to largely have consensus about what we’re looking for (modularity, strong encapsulation of implementations, etc.), but many implementation details remain that have major implications for subsystems.
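To make “guarantees provided by engine interfaces” concrete, here is a purely illustrative sketch (the AssetLoader/LoadedAsset names are invented for this example, not a proposal) of an interface whose contract spells out its threading model:

```java
import java.util.concurrent.CompletableFuture;

public interface AssetLoader {
    /**
     * Loads the asset at the given path.
     *
     * Contract: safe to call from any thread; the returned future completes on a
     * background I/O thread, never on the render thread. GPU resource creation is
     * deferred until the rendering module uploads the data during its own frame.
     */
    CompletableFuture<LoadedAsset> loadAsync(String path);
}

// Plain data carrier: decoded bytes plus the path they came from.
record LoadedAsset(String path, byte[] data) {}
```

The point is less the specific API and more that the threading and ownership rules are part of the interface’s documented contract, so subsystem authors can build against them independently.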

Shading languages are a bit tricky and require some community discussion if we’re going to try to implement core shaders once in a way that’s portable across rendering APIs (which is possible via a handful of different technologies/implementation techniques, but is not a simple issue). Not attempting portable shaders is certainly another option, but that also requires community consensus.

This part I disagree with, for several reasons.

  1. We’re an open source project. We rely entirely on people taking enough interest in something to make a PR, follow through with discussion, and get it merged. That’s a lot to ask of someone even before we put any other obstacles in their path.

  2. Many would-be contributors don’t need knowledge of the entire engine to make great contributions. If we’ve settled on an audio system interface, someone who knows OpenAL well doesn’t need to know anything about rendering to make some huge contributions. Besides, how would we even meaningfully measure or determine who has sufficient knowledge, and what even is sufficient knowledge? While it may not be your intention, this is a form of gatekeeping, and I’ve never seen an instance of it ending well in open source projects when done this way. Technical gatekeeping (via in-depth discussion and review of PRs before acceptance) is often a good thing; contributor gatekeeping is a fantastic way to kill a project.

  3. There are still some open questions over which rendering API we’ll pick for “core” implementations, but I don’t see much pushback against not biasing the core engine towards OpenGL. That makes GL knowledge irrelevant to anyone not working on an OpenGL rendering implementation.

1 Like

IMO shader nodes should be removed from the core; they are very hard to keep up to date, and they compile to plain GLSL shaders anyway.
They can be moved to an external editor or library.

6 Likes

August is a busy month for me so I’ve had little time to devote to this the past week, and I’ll have effectively no time for development until the end of August, but I’d like to keep the conversation going.

Judging from responses above, it seems there is little opposition to the idea of a future version of jME with some fundamental changes. With that in mind, I’d propose the following:

  1. This work be done as the next major version of jME - jME 4.0.
  2. We have a forum subcategory under “Development” for posts discussing development of jME 4.0.
  3. We decide how to handle GitHub resources for the new version - either a branch in the current repository, or a new repository (potentially with the current repository as a submodule to track changes across files that are ported from jME 3 → jME 4). There are probably a couple decent options here, though I’d propose that for expediency a small group (3-4 + current engine leaders) have sufficient permissions to merge directly to the main-equivalent branch for jME 4.0.

Once we have discussion & GitHub resources settled, my personal first agenda items are:

  1. Discussion of core architecture.
  2. Discussion of how shaders/materials will be handled - whether by a separate implementation for each renderer type, by a single core implementation that renderers translate to their respective shading languages, or by a symbolic (shader node/AST) form that renderers compile (a rough sketch of this last option follows below).
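To give a feel for the symbolic option, here is a deliberately tiny sketch (hypothetical types, not a proposal) of shaders kept as an expression tree that each renderer lowers to its own shading language:

```java
// Core-side representation: no GLSL/WGSL/HLSL strings, just a small AST.
sealed interface ShaderExpr permits Constant, Input, Mul {}
record Constant(float value) implements ShaderExpr {}
record Input(String name) implements ShaderExpr {}
record Mul(ShaderExpr left, ShaderExpr right) implements ShaderExpr {}

final class GlslEmitter {
    // A GLSL backend walks the tree and prints GLSL; a WGSL or HLSL backend
    // would do the same with its own syntax, so the core stays language-neutral.
    static String emit(ShaderExpr e) {
        if (e instanceof Constant c) return Float.toString(c.value());
        if (e instanceof Input i) return i.name();
        if (e instanceof Mul m) return "(" + emit(m.left()) + " * " + emit(m.right()) + ")";
        throw new AssertionError("unhandled node: " + e);
    }

    public static void main(String[] args) {
        ShaderExpr baseColor = new Mul(new Input("albedo"), new Constant(0.5f));
        System.out.println(emit(baseColor)); // (albedo * 0.5)
    }
}
```
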
4 Likes

To throw a few ideas into the mix, even without knowing whether time will allow me to contribute anything: during winter I hope to have a dozen hours a week of free time, which I would contribute to this endeavor.

One of the big questions that has to be answered first is: how backwards compatible should “jME 4” be?

I have a few things in mind that I currently cannot do in a nice way. I will post them in the dedicated locations once they exist. But I fear that any required backwards compatibility will interfere with the design of the “modern” features.

1 Like

Glad to hear it!

My intention with suggesting a new version is to assume that anything at all can and probably will break compatibility. I think we should design new APIs/implementations around modern best practices (& lessons learned from jME 3/desired quality-of-life improvements), and only keep backwards compatibility in situations where it’s easy to do because we haven’t come up with anything better. That’s what I was getting at with the earlier comparison of jME 2 & jME 3 - jME 3 broke compatibility with jME 2 in a lot of ways, which made sense because graphics capabilities and programming models had changed a lot. I think we’re in a very comparable state of affairs now, and we’d be best served by redesigning the engine according to modern techniques and architectures.

1 Like

For the renderer, do we want to base it on a third-party library (e.g. Google ANGLE) that targets many backends out of the box for us, or do we want to implement our own, just as we do now?

(I brought this up in several areas above, but will recap here too.)

I’m pretty firmly of the opinion that we’d be best served by fully decoupling the “core” engine (everything except rendering) from the rendering backend used, and placing renderers behind high-level, backend agnostic interfaces. This is very similar to what jME already does, except that the rendering (and the engine API as a whole) is biased towards OpenGL-style state machine rendering. To make full use of modern backend APIs & techniques, we really need to lose that bias.
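To illustrate the kind of decoupling I mean, here is a minimal sketch (invented names, not a proposed final API) of a backend-agnostic rendering interface the core would talk to, with a WebGPU/Vulkan/GLES module implementing it:

```java
// The core engine depends only on these types; backend modules implement them.
public interface RenderBackend {
    GpuBuffer createBuffer(long sizeBytes, BufferUsage usage);
    GpuTexture createTexture(int width, int height, TextureFormat format);
    CommandEncoder beginFrame();
    void submit(CommandEncoder encoder);
}

// Commands are recorded explicitly rather than mutating a global state machine.
interface CommandEncoder {
    void setPipeline(PipelineHandle pipeline);
    void bindGroup(int slot, BindGroupHandle group);
    void draw(int vertexCount, int instanceCount);
}

// Opaque handles keep backend-specific objects (VkPipeline, WGPUBuffer, GL ids)
// out of the core API entirely.
record PipelineHandle(long id) {}
record BindGroupHandle(long id) {}
record GpuBuffer(long id) {}
record GpuTexture(long id) {}
enum BufferUsage { VERTEX, INDEX, UNIFORM }
enum TextureFormat { RGBA8, DEPTH24_STENCIL8 }
```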

I expect that for practicality we’ll probably need to support 2 or (worst case) 3 rendering implementations in core. I’m partial to a webgpu.h implementation (wgpu or Dawn) as the backend interface (easy native support for rendering on Vulkan, Direct3D, Metal, or OpenGL as a fallback), but I also see value in a GLES 3+ based renderer (which I believe ANGLE is rather ideally suited for).

I’ve long been against using a higher-level wrapper such as either of the above, but I’ve recently been convinced of the merits - there just don’t seem to be any other good options for getting solid cross-platform support without extensive time, testing, and expertise across many platforms. Vulkan was supposed to help with this, but besides being freakishly verbose, I’ve heard that it now sometimes behaves differently on different platforms, has lots of extensions for important things that may not be available everywhere, etc. Using a native WebGPU implementation or ANGLE would shift all of that hassle onto the wrapper maintainers and free us up to focus on jME itself, and would probably give us a lot of the consistency and reliability across platforms that we lack now. The only downside I really see in practice is the potential for some overhead, but from what I’ve heard those overheads tend to be minuscule in practice - and if using WebGPU, you can handily make up for them with a number of high-performance rendering techniques that you can’t really do (or that are harder to do) on OpenGL.

1 Like

IMO the default renderer should be based only on the OpenGL ES 3.0 spec as the lowest common denominator, dropping anything older than that. Binding it to ANGLE, WebGL, or anything else is then just a matter of swapping the wrapper.

But having an engine entirely built around a specific wrapper or graphics library (which is pretty much what jME is right now, since everything is based on the logic defined in GLRenderer) can be quite limiting. That’s why in my pipeline experiments I tried to isolate all the rendering logic and provide a way to bind custom generic states to any object (the StatefulObjects PR). I think this can be a good approach; I will look into getting back to this project once the WebGL backend is at a good point.
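For anyone who hasn’t read that PR, the general idea of binding generic states to any object might look roughly like this (a hypothetical sketch with invented names, not the actual StatefulObjects API):

```java
import java.util.HashMap;
import java.util.Map;

// A typed key identifying one piece of renderer-defined state.
final class StateKey<T> {
    private final String name;
    StateKey(String name) { this.name = name; }
    @Override public String toString() { return name; }
}

// Any scene object can carry arbitrary state the core knows nothing about.
class StatefulObject {
    private final Map<StateKey<?>, Object> states = new HashMap<>();

    <T> void setState(StateKey<T> key, T value) { states.put(key, value); }

    @SuppressWarnings("unchecked")
    <T> T getState(StateKey<T> key, T defaultValue) {
        return (T) states.getOrDefault(key, defaultValue);
    }
}

// A backend defines and manages its own keys without touching core classes.
class VulkanBackend {
    static final StateKey<Long> PIPELINE_HANDLE = new StateKey<>("vkPipeline");

    void prepare(StatefulObject geometry) {
        long pipeline = geometry.getState(PIPELINE_HANDLE, 0L);
        if (pipeline == 0L) {
            pipeline = 42L; // stand-in for real pipeline creation
            geometry.setState(PIPELINE_HANDLE, pipeline);
        }
    }
}
```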

3 Likes

We should decide on the target platforms and see if WebGPU can run on them.
Personally, I think we should target Windows/Linux/Mac (obviously), recent Android, and at least one console if possible, plus the web browser, which can double as a wildcard for platforms that can run any of the web app frameworks (e.g. Electron) but not Java.

2 Likes

Agreed - console support is tricky though, and none of the graphics API wrappers seem to explicitly support any console, though I do see some rumors floating around that ANGLE might be able to run on Xbox. If it does, I would expect that at least Dawn soon will as well - which would mean that a WebGPU-based renderer would be portable across all of our targets.

I expect early on we’d be better served by focusing on desktop, mobile, and web since (to my knowledge) all modern consoles can run HTML5 applications, so we kind of get them as a “freebie” (albeit with some compromises) via the web target. I certainly don’t want to permanently neglect consoles, and if someone wants to pick up and run with support for a console platform early on, I’m all for it - I just don’t think we would be well served by treating it as a requirement.

Note: I’m partial to WebGPU over GLES 3 because of its modern, concurrent-capable architecture and the lower-level GPU operations it exposes. (I also see a couple points of concern in the ANGLE platform support matrix - support seems to be lagging on Mac/iOS, which is not a problem I think we’re likely to see soon with a WebGPU implementation due to the nature of WebGPU and momentum carrying it forward right now.)

2 Likes

Can this possibly be of any help?

2 Likes

I’ve never developed for console and don’t particularly plan to start so I don’t know how helpful that would likely be in practice (I’d probably try going the WebGPU → D3D 12 route first), but it certainly can’t hurt!

1 Like

A couple things that come to mind off the top of my head:

  1. Use JOML for math operations (a small usage sketch follows below)
  2. LWJGL 4 will be based on Panama and is currently in progress
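On point 1, a minimal sketch of what leaning on JOML for the math layer looks like (the calls are JOML’s real API; the camera values are made up purely for illustration):

```java
import org.joml.Matrix4f;
import org.joml.Vector3f;

public class JomlExample {
    public static void main(String[] args) {
        // Build a combined view-projection matrix.
        Matrix4f viewProj = new Matrix4f()
                .perspective((float) Math.toRadians(60.0), 16f / 9f, 0.1f, 1000f)
                .lookAt(new Vector3f(0, 2, 5),   // eye
                        new Vector3f(0, 0, 0),   // target
                        new Vector3f(0, 1, 0));  // up

        // Project a world-space point into normalized device coordinates.
        Vector3f p = new Vector3f(1, 1, 0);
        viewProj.transformProject(p);
        System.out.println(p);
    }
}
```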

There are other great libraries that can be bolted on later, such as ImGUI for a UI, and physx4j is working great. There’s no need to support legacy Bullet, but a modern Bullet option such as Minie is good too.

Just some ideas. I recommend thinking about what resources already exist and can be utilized instead of building everything from the ground up.

3 Likes

I entirely agree!

We should figure out a list of standalone independent things we want to improve or change and see if we can get a volunteer for each one.

We can also consider the possibility of incentivizing contributions with the project funds.

1 Like

Sounds good to me! What are your thoughts on a forum Development subcategory for these types of discussions? I think it would be beneficial to have a label to distinguish between conversations about maintaining the current version and the next-gen work.

This concerns me a bit - we have limited funds, and my first concern is what happens with contributions when the incentives budget runs out. My second concern is what happens down the road if we need to pay for, say, CI/CD infrastructure (I know we’ll usually be able to find free or heavily subsidized CI/CD - I’m just pulling this out as an example), Mac-in-the-cloud machines, etc. My last concern is the types of quarrels that tend to come up in open source projects when money gets involved. At the moment I’m of the opinion that we’d be better off with a volunteer model driven by folks who have a personal interest in the results.

1 Like