JME on top of Vulkan

I know that 3.1 is being prepared with an OpenGL API abstraction layer, but it looks like the hype of the week currently is the Vulkan API (https://www.khronos.org/vulkan). Detailed specs/implementations are not available yet, but it is said to be very close to Mantle (http://www.amd.com/Documents/Mantle-Programming-Guide-and-API-Reference.pdf) in design.
Are there any plans to adapt JME to Vulkan when it comes out? Will it be worth it if possible (for both technical and political/bragging reasons)? Would it mean JME 4.0 with 70% of code rewrite, or just reasonably small changes around renderer and shader interface, with the rest kept intact?

JOGL will support Vulkan; there is already a request for enhancement about it, and there is already a renderer for JMonkeyEngine 3 implemented with JOGL 2.2 (I have to update it soon). In my humble opinion, the code rewrite will be smaller than you think. Personally, I prefer keeping some compatibility with OpenGL too, as it isn’t dead. It is possible to convert GLSL to SPIR-V, so I’m not worried about the shaders.

It seems that all major engines already got a head start (like Unity and Unreal). There are alpha video drivers that support it too.

Everybody else will need to wait until next year for the official specs and support.

It should be possible without too many headaches.

Vulkan should support GLSL cross-compiling, so we are off to a good start there.
The rest of jME is very abstracted anyway; you barely use any OpenGL-specific classes/logic in normal use. After all, meshes etc. must also exist in Vulkan :smile:

I’m personally more interested in the support for multithreading in a cleaner way; maybe we can finally do fully async texture/asset loading without too many ugly hacks.

My feeling is that the major difference is that with Vulkan you are supposed to prepare and reuse command buffers, which are considerably richer than just meshes. I suppose that something similar to the material-defines check would have to be done: recreate the command buffer on bigger changes, but only change the parameterized parts of it for smaller changes.

I’m still not sure how big they expect command buffers to be in Vulkan. Is it one command buffer per mesh-with-material, or rather huge command buffers for things like ‘all particles’, ‘all skeletal meshes’, ‘terrain’, ‘trees’, etc.?

My understanding is that command buffers are only needed for multithreaded stuff, e.g. resource loading from multiple threads. It is the same concept as an OpenGL context, except more lightweight.
Also, you can already do multithreaded resource loading in jME3 by using a Java ExecutorService and calling the AssetManager from there.
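A minimal sketch of that pattern using only standard Java, so it runs on its own. The class and the `loadAsset` method here are made up for illustration; in an actual jME3 app the background work would be a call like `assetManager.loadTexture(...)`, and the "render thread" drain step would go through `app.enqueue(...)` instead of a plain queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncLoadSketch {

    // Hypothetical stand-in for a slow AssetManager load (disk I/O + decode).
    static String loadAsset(String name) throws InterruptedException {
        Thread.sleep(10); // simulate loading time
        return "texture:" + name;
    }

    public static List<String> loadAll(List<String> names) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Results land here; the render thread drains this queue,
        // analogous to jME3's app.enqueue(...) hand-off.
        ConcurrentLinkedQueue<String> ready = new ConcurrentLinkedQueue<>();
        CountDownLatch done = new CountDownLatch(names.size());

        for (String n : names) {
            pool.submit(() -> {
                try {
                    // Only the loading happens off-thread; no scene-graph access.
                    ready.add(loadAsset(n));
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            });
        }

        done.await(); // a real game loop would poll once per frame instead
        pool.shutdown();

        // "Render thread": drain the queue and attach results to the scene.
        return new ArrayList<>(ready);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAll(List.of("rock.png", "grass.png")));
    }
}
```

The key point is that only the loading itself runs on worker threads; anything touching the scene graph still has to happen on the render thread, which is why the queue hand-off is needed.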

I understand command buffers as display lists with some limitations and some extensions (better parameterization). An OpenGL context is most similar to a Vulkan queue, but there is no real context, because I don’t think any state is shared between command buffer executions.