JME - Future Development

This is true. However, it is also a very unlikely issue and one that I have never encountered.

I did start writing support in my editor to convert an existing scene and save it entirely in json (which would also make my paging system more dynamic), but I stopped halfway through because it felt pointless to worry about when I could be working on gameplay, so I told myself I’ll go back and do it if I ever hit a problem that requires it.

I also have not seen anyone in the JME community make any type of editor that relies entirely on json and throws out j3o.

So while I do agree that would be best practice, I think it falls into the area of overkill until a game reaches a certain size, a scale which my game, and many other JME games that rely on non-procedurally generated maps, have not yet reached.

I would also argue that a game dev should NEVER change a j3o assetPath or texture ref later on once it’s used in a scene. This is a very important rule I have enforced for myself, because I did encounter issues with this early on when using JME, but learned quickly not to do that. Best case, this leaves you with ugly “missing texture” objects in your world that you may never know about because nothing crashes; worst case, you have scenes that won’t load if you do things the way I am, as you mentioned. TBH (in the case of a changed textureRef, for example) I personally prefer that my game crash so I know about the problem, rather than letting models with missing textures linger around my world. But I haven’t had this issue in years, since I know better than to edit the names of existing assets now.

Yes - I think of user data as a concession that needs to be made because sometimes it’s just the best way to associate a scene graph node to game data. I do this myself - in MyWorld, every object is an entity that’s managed entirely by my ES (technically a bit different from an ECS, but a similar idea). For UI, I need to detect clicks on game objects in the scene and react appropriately, so my two options are (1) use user data to associate the node to an entity so I know which entity got clicked, or (2) completely re-implement scene picking against my entities instead of using jME’s perfectly good picking system. This is a useful concession to associate visual objects with game data that doesn’t cross into blurring the distinction between data and visuals or using visual objects as data objects. Removing user data would mean that examples such as mine above would be far more of a headache and likely require some nasty, error-prone workarounds, so it’s a useful design feature if it’s not abused.
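As a rough sketch of that pattern (using a simplified stand-in class rather than jME’s real Spatial, so these types are illustrative, not the engine’s):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for jME's user-data API: the method names mirror
// Spatial.setUserData/getUserData, but this is a simplified illustration.
class SpatialStandIn {
    private final Map<String, Object> userData = new HashMap<>();

    void setUserData(String key, Object value) { userData.put(key, value); }

    @SuppressWarnings("unchecked")
    <T> T getUserData(String key) { return (T) userData.get(key); }
}

class PickingExample {
    // Given the spatial the picking system reported, look up the entity id.
    static long entityIdFor(SpatialStandIn picked) {
        Long id = picked.getUserData("entityId");
        return id != null ? id : -1L;
    }

    public static void main(String[] args) {
        SpatialStandIn goblin = new SpatialStandIn();
        goblin.setUserData("entityId", 42L);
        System.out.println(entityIdFor(goblin)); // 42
    }
}
```

The point is that the node carries nothing but an opaque ID; all real game state stays in the ES, so the visual object never becomes the data object.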


Yes, but you see, this whole paragraph exists because the Spatials were used as game objects.

Instead of your level being “tree here”, “tree here”, “goblin here”.

Your level is “tree mesh at this location with a lighting material that has a diffuse color override of brown with a texture of wood.jpg and another mesh at a slightly different location with a leaf material and a texture of leaf-atlas.png and another tree mesh at this location using a lighting material with wood2.jpg and a leaf mesh at a slightly different location with leaf-atlas2.jpg and a mesh of a goblin using the lighting material with the goblin1.png texture and a skeleton with these bones and an animation with these 12 animations of various lengths and probably half-a-dozen things I’m not thinking about.”

In the first way, if the goblins need to have a different skin then you just redefine what a goblin looks like. In the second way, you have to re-edit every level/map of your game. Or if you just want to fix the one little glitch in an animation. Or if you wanted to build levels with placeholder assets for now and fill in real assets later.
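To make the contrast concrete, a data-driven level in the first style might look something like this (a hypothetical format; the field names and structure are invented for illustration):

```json
{
  "entities": [
    { "type": "tree",   "position": [12.0, 0.0, -4.5] },
    { "type": "tree",   "position": [15.2, 0.0, -6.0] },
    { "type": "goblin", "position": [20.0, 0.0, -3.0] }
  ]
}
```

What a “goblin” looks like (mesh, material, skeleton, animations) lives in one shared definition, so a reskin or animation fix touches one file instead of every level.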


I did an ECS test with Urho3D (now U3D Community) using EnTT. Urho is, to me, similar to jMonkey, except that it’s C++ and has no AppStates, only controls. I chose not to serialize any data into the models, but to keep a database instead, so I could replace a model without many changes. I could easily change meshes at runtime since the entities would remain.

I plan to port the project to JME in the future.


But wouldn’t I be able to achieve most of this with AssetLinkNode? I usually only link things in my scene editor, so if I change the original tree model or its texture, then every linked copy of that tree is automatically updated in all my scenes. (I also made a utility for my editor so that anything batched still holds a reference to the original link nodes and can be un-batched, reloaded, and then re-batched again all in one click, since by default JME does not handle batching linked nodes very well.)

And I do think I understand and agree with your goblin example pertaining to a game object like an NPC, especially if player save files are stored that way. So maybe I explained how I do that poorly. But when saving more complex game objects (like a goblin NPC) in my editor, the model is irrelevant and is just there for debug purposes, so you can see where a spawn point is located and what it’s supposed to be. I’m really just saving a blank node with an integer ID representing the NPC’s type; at runtime that ID is used to generate the correct NPC and apply any new or updated textures or animations associated with that NPC type. And for game save files, I obviously save a list of NPC IDs with their locations in a json file and would never save a full map. I just reload a fresh copy of the map and change things based on save data read from the json file.
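Sketched in code, that indirection might look like this (all names here are hypothetical; the idea is only that the saved marker carries an ID, and everything visual is resolved at runtime):

```java
import java.util.Map;

// Hypothetical sketch: a spawn marker saved in the scene carries only an
// npc type id and a position; the model is looked up when the level loads.
record SpawnPoint(int npcTypeId, float x, float y, float z) {}

class NpcCatalog {
    // Illustrative mapping from type id to the current asset for that type.
    static final Map<Integer, String> MODELS = Map.of(
            1, "Models/goblin.j3o",
            2, "Models/wolf.j3o");

    // Resolve the model at load time, so updating the catalog re-skins
    // every spawn point without touching any saved scene.
    static String modelFor(SpawnPoint sp) {
        return MODELS.getOrDefault(sp.npcTypeId(), "Models/placeholder.j3o");
    }
}
```

Swapping an entry in the catalog then changes what spawns everywhere, which is exactly the property the goblin example asks for.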

So I agree it is not acceptable at all to do game saves for the player with a whole spatial like that. But with AssetLinkNode and some utilities in my own editor, I think I’ve at least managed to get by saving my scenes in j3o format solely for editing purposes. But I think in the future, especially as my project grows, I’ll put more effort into shifting entirely to another format like json.

Yes, AssetLinkNode was added much later to address the issues that I bring up. It’s a band-aid on top of an abused feature (j3o as level) that wasn’t viewed as “abuse” until people started actually using it and having those problems.

The band-aid works… with some effort, as you’ve pointed out. But if you forget to put a band-aid somewhere then you bleed all over the floor.


I’ve not used AssetLinkNode myself, nor am I familiar with its intended use so I don’t really have an opinion on this, but how does everyone feel about its future in the engine?


I don’t have any serious trouble using AssetLinkNode now that I’ve set my editor up to work with it, but I can see how it is a band-aid solution: it requires following certain rules, and it took some trial and error to learn how not to mess up and leave a scene un-loadable. And it was certainly rough to work with large j3o levels in the SDK prior to having my own editor, even after I learned about AssetLinkNodes and started using them to make things easier.

So I think AssetLinkNode should probably either be improved or replaced with something better that isn’t considered a band-aid solution and does the job right. In either case, maybe it would be worthwhile to consider adding another file type better suited for levels, so that JME devs know to use .j3o for models and something else for levels that is equally easy to load with JME’s asset manager, without requiring registration of extra loaders.

Or if a new file type is undesirable, then maybe at least include examples and some tutorials showing how to use another standard file type (like json or xml… whatever’s considered best) for setting up a level. That way people will know there’s a better and equally easy solution than j3o before they go in too deep with j3o and AssetLinkNodes like I did. :laughing:

“What are JME’s CORE values?” I think our core values are:

  1. JAVA (of course)
  2. cross-platform (Windows, Linux, Mac, Android, and if possible iOS)

So, OpenGL ES is the best common denominator. Other newly emerged techs like Vulkan or WebGPU might need a long time to prove their staying power. But there are good ideas I think we could introduce to our engine, for example the timeline model from WebGPU.

Other features I think we should consider for the next version are:

  1. fully transition to the shader node system?
  2. new rendering techs (unshaded → light → light shadow → PBR → ?; Lumen? Nanite? ray tracing?)

I favor Shader nodes, but I want to keep the GLSL and material defs as an alternative for certain effects and special materials.

As for rendering techs, I would vote for global illumination if possible.


I agree fully on cross-platform, though I’d add web to the list of desired platforms (note that this target also aids in mobile deployment, if your game’s technical requirements allow it and you don’t wish to deal with the complexities of the native mobile platforms yourself).

I disagree with tying the engine to OpenGL ES, but as long as renderer implementation details are well-encapsulated from the point of view of any interface the engine itself (or user-created extensions such as post processing filters) uses then any graphics API could be used. This requires some care to design but is pretty doable for high-level interfaces. Ideally, if a user wanted to, say, create a Direct3D 12 backend, they should be able to do that by implementing only the renderer (+ shader translation/compilation, if that’s not encapsulated entirely in the renderer).

OpenGL ES is also quite frankly a liability to the cross-platform goal (through no fault of its own). Apple has decided to move away from OpenGL, and we’ve seen the effects of that for years on Mac. To my knowledge, every version of every variant of OpenGL is now deprecated on every Apple platform, and I don’t think that it’s worthwhile to count on that deprecation meaning nothing more than “no new features” (a painful limitation in its own right) when overhauling an entire game engine.

Now, WebGPU is looking to be a whole different story. At this point, I think it’s likely here to stay given the industry weight and the industry players who have thrown their weight behind it - it’s not just another Google thing that they might kill off next year because the returns were consistently below target. However, even if WebGPU in the browser ceased to be viable, Mozilla’s wgpu implementation has a lot of history, a lot of weight, and a lot of features that would be beneficial (and reasonably accessible) outside of WebGPU (for example, you can write shaders in GLSL, HLSL, or WGSL and they’ll be translated to the backend used at runtime). Note also that it supports OpenGL ES as a fallback (though this backend doesn’t seem to get as much attention as the others), so theoretically that’s always available as a fallback should the “preferred” API for a platform be unavailable.

TL;DR I’m proposing:

  1. Cross-platform Java is a must and should include Windows, Linux, Mac, iOS, Android, and Web, at least to the extent of being able to run a viable engine + game code that will run on the developer’s intended target(s) (some platforms have limitations that can’t be avoided - iOS, for example, has no way to runtime-load code that’s not packaged in the app except for using a Safari web view to run JS/WASM). Console platforms are very difficult to support with Java, and so will be considered out of scope.

  2. High-level (engine + game code, unless an individual developer opts for lower-level access to specific backend APIs) graphics API agnostic rendering interface, with a default core implementation based on WebGPU.

Note: Point #2 implies that a properly designed interface between the renderer and other engine layers means that nobody is locked into any specific rendering technology and can freely select whichever graphics API they see fit for their application (though they may need to create or maintain a renderer for it if it’s outside of the core-supported scope and it’s not popular within the community).

By the way, it might be worth taking a look at Khronos’s ANARI API (they released version 1.0 recently). AFAIK no Java wrapper has been made yet, but I saw a request for one on the LWJGL GitHub.

I think there are already multiple backends implemented and available. Not sure if there are backends available for mobile devices or not.

Also, AFAIK they provide abstractions for new rendering techniques like ray tracing, …


I’m not as technical as you friends are in OpenGL, Vulkan, and so on, but I know this OpenGL engine still has a lot of room to improve the things we have: better terrain, effects, PBR, a water system, day and night, sky, grass and trees on terrain, asset workflow, the SDK, and more. Every part of the engine can get a lot better, and new features can be added; more effort on the SDK would be really good. We are all indies, so it is better if we make our lives easier without needing to get into Vulkan, I think. I want more improvements at least in asset I/O and better FBX support; PBR is really dated! The water system can get far, far better than what it is now. I’m not saying that using new architectures and techs isn’t good, but what is the point if the engine itself doesn’t move much in the areas I mentioned?

Edit: Also, just mentioning that AI has already found its way into game development. I suggest keeping an eye on Replica’s new demo, “Replica Smart NPCs”; it is one of many use cases where we could adopt AI in our engine. What I seek is more and more ease of use and less effort as a developer when using the engine.

I think this is a big point. The JVM will never run on consoles, but compiling to native with GraalVM, ClearwingVM, or something similar may or may not be possible in the future. It seems hard; I don’t think there is much to work with at this point. But I guess we should try to make it simple to add “backends” in case these technologies become available at some point.

Currently this is a hard limitation: if your game is a hit on desktop, then porting to consoles means a rewrite with another engine (of course, this is a good problem to have).
For hobbyists this isn’t likely an issue, but I suspect (wild guess) that studios may not want to invest in adopting jME3 because console games are likely to make them more money.

This would be good to have, but I see it as an independent development, so in order to keep the scope small it could be done later, once all the problems in the first point are solved.
If this higher-level API is a layer on top of the rest, it could be added later, and multiple alternative implementations could even arise from the community.

That’s my impression as well. It seems there are reasonable ways to do this with C#/CIL bytecode, but there’s not much for Java - especially recent versions. HTML5 sort of gets you consoles because most any console will have an HTML5 compatible browser (there are games that have deployed to console via this route), but I don’t know how good the performance characteristics are so that may not be a good solution for higher-demand games. I’m wondering if a JVM bytecode → CIL bytecode → native console deploy will be the route we have to take if we go that way. Unfortunately I don’t really see GraalVM going the console gaming route. :grimacing:

As you said, for hobbyists I don’t think this is an issue in practice - console deployment is a walled garden and it seems that the console makers often reject games outright for whatever picky reason they want. We have a very long ways to go before we’d tempt studios to adopt jME for a major project, and if we reach the point where studios are asking about console deployment we’ll probably be better equipped to target them then than we are now.

I’d actually approach this from the opposite direction - the renderer interface would expose a very high-level interface (along the lines of “render this mesh with this material”, “add this post-process filter with this shader”, etc) which anything calling from the engine into the renderer would use. Renderer implementations could, if they wish, expose a low-level API that a game developer could access via casting (for WebGPU this would be something like obtain a command buffer recorder, sync a buffer to GPU memory, etc). This helps to prevent tight coupling from the engine to a particular rendering implementation, provided that we’re careful about what makes it into that interface. Porting to a new platform would then only mean providing implementations of the platform-dependent interfaces (or at least ones that didn’t have implementations already supporting that platform), and this alone to me justifies explicitly decoupling from OpenGL - any device that has a suitable graphics API becomes a potential target as long as it has a JVM or we have a way to compile to it.
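A minimal sketch of what that split could look like (all of the types and method names below are invented for illustration, not a proposal for jME’s actual API):

```java
// Opaque high-level resources the engine codes against.
interface Mesh {}
interface Material {}

// The only surface the engine (and add-on libraries) sees.
interface Renderer {
    void render(Mesh mesh, Material material);
}

// A specific backend may expose low-level access, reached via casting.
interface WebGpuRenderer extends Renderer {
    Object obtainCommandRecorder(); // stand-in for a real recorder type
}

class Engine {
    private final Renderer renderer;

    Engine(Renderer renderer) { this.renderer = renderer; }

    void drawFrame(Mesh m, Material mat) {
        renderer.render(m, mat); // engine never touches backend details
    }
}
```

Porting then means supplying a new `Renderer` implementation; game code that wants backend-specific features casts to the concrete interface and accepts the coupling knowingly.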

I also think that a well-designed high-level API would largely (completely?) obviate the need to interact directly with the low-level capabilities of the renderer in probably 99% of cases. If you have meshes, textures, generic buffers, and vertex, fragment, and compute shaders, and a mechanism to feed data from any one of those into any other (note that these constructs are enough to replace both tessellation & geometry shaders with compute shaders), you should be able to render pretty much anything without knowing or caring too much about the specific API. The more we use higher-level constructs, the more reusable and maintainable add-on libraries become (no more “this awesome library requires OpenGL 4.2+” or “these features work on OpenGL 4.0+ but not 3.3 or lower”).


IIRC the biggest problem with transpiling / native compilation with Graal etc. was the use of reflection. Maybe we can work around that and remove it from the core entirely?
TeaVM, which I am using to write the WebGL renderer, actually supports transpilation to C; maybe that could be interesting too?


Yes, that’s my hope since reflection pretty much always crops up as a problem anytime compilation to anything other than bytecode comes into the picture. GraalVM does sort of support reflection, but last I saw you had to manually provide it with a list of reflectable classes to get reflection data built into the executable. With the caveat that I’m not familiar with all of the places that reflection happens in core, I generally find that most (or all) uses of reflection can be eliminated by using interfaces to expose whatever it was the reflection was getting at.
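One common way to do that, sketched here with invented names (this is not jME’s actual Savable machinery, just the general registry pattern):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: replace Class.forName(name).newInstance() — which AOT compilers
// like GraalVM native-image can't see — with explicitly registered factories.
interface Savable {}

class SavableRegistry {
    private final Map<String, Supplier<? extends Savable>> factories = new HashMap<>();

    void register(String name, Supplier<? extends Savable> factory) {
        factories.put(name, factory);
    }

    Savable create(String name) {
        Supplier<? extends Savable> f = factories.get(name);
        if (f == null) {
            throw new IllegalArgumentException("No factory registered for " + name);
        }
        return f.get();
    }
}
```

Since every constructible type is named in ordinary code, the AOT compiler can see it all; the trade-off is that user-defined types must be registered explicitly instead of being discovered by name at runtime.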

I didn’t know that TeaVM supports transpiling to C - that’s a big deal, and while I don’t know details/caveats of what it supports that’s a huge, huge step in the right direction. If we can have one “extra” (non-JVM) toolchain that supports “everything else” that would be a huge plus for cross-platform support.


I think they have added a tool (the Tracing Agent) to auto-generate the reflection, JNI, etc. configurations by tracing the code on a regular Java VM.


You need to do the same in TeaVM. It is pretty easy to isolate the classes that use reflection in the core; the problem is with all the user-defined Savables, Cloneables, and network messages, probably.


I don’t remember how much reflection JME’s serialization uses, since it has read/write methods all over the place that seem to deal with the fields manually.

For networking, I guess 99% of the reflection is done by the standard serializers in that one package. Primarily FieldSerializer (maybe only FieldSerializer).