RFC: Core Profile Management

Context:
By default jME runs the compatibility profile. That means the driver tries its best to work with Frankenstein shaders containing multiple GLSL version statements and much more.
This is a pitfall for devs, as it can lead to driver-specific (mis-)behavior.
Furthermore, it may swallow compilation warnings or errors and silently make assumptions, which may be undesired.

Core profiles for specific versions, on the other hand, can be seen as regular APIs. Some things got deprecated in later versions and new things have been added, but you cannot use statements that are unsupported for that particular version. Compilers are typically also more strict there, so it helps with writing clean GLSL shaders. Therefore, every developer should enable the core profile for the minimum supported version.

This is even more relevant on macOS, where the compatibility profile only supports OpenGL 2.1, but a core profile for 3.2 is available (4.1 on more recent Macs).
There is GLSLCompat.glsllib, which e.g. allows our OpenGL 3.1 based PBR shaders to run on OpenGL 2.0, but that’s always a workaround; the native version should be preferred, which needs the core profile on macOS.

Problem:
Setting the core profile currently happens without feedback. It’s just setting a string via the AppSettings, and that string contains both the rendering backend and the OpenGL version (so we basically parse it with substring; the same goes for custom rendering backends, which are prefixed with CUSTOM).
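For illustration, this is roughly what that looks like today; the constant names follow recent jME versions and may differ in older ones, and MyRenderer is a placeholder:

import com.jme3.system.AppSettings;

AppSettings settings = new AppSettings(true);
// Backend and GL version are fused into a single string constant
// (e.g. "LWJGL-OpenGL33"), which the engine later parses via substring.
settings.setRenderer(AppSettings.LWJGL_OPENGL33);
// Custom backends are selected by prepending "CUSTOM" to a class name:
// settings.setRenderer("CUSTOM" + MyRenderer.class.getName());
app.setSettings(settings);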


While this is good for saving (the AppSettings will then contain the OpenGL version), it only really works when an installer detects the available core profiles beforehand.

You can basically only try to wrap the whole application (and especially start()) in a try-catch and brute-force every string constant AppSettings offers.
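A rough sketch of that brute-force approach, just to show how awkward it is; whether a context-creation failure even surfaces as a catchable exception here depends on the backend, so treat this as illustrative only:

String[] candidates = {
    AppSettings.LWJGL_OPENGL45,
    AppSettings.LWJGL_OPENGL33,
    AppSettings.LWJGL_OPENGL2
};
for (String renderer : candidates) {
    try {
        AppSettings settings = new AppSettings(true);
        settings.setRenderer(renderer);
        app.setSettings(settings);
        app.start(); // may fail asynchronously, which is exactly the problem
        break;       // context came up, keep this renderer string
    } catch (Exception e) {
        // hope the failure actually propagates to us...
    }
}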

Imagine you want to build a bleeding-edge application using OpenGL 4.5 features, but then there’s this guy with an OpenGL 3.1 laptop.

You basically have 3 options:

  1. Compatibility Profile
  2. Only use 3.1 Features (and maybe add 4.5 replacements to GLSLCompat.glsllib, where possible)
  3. Screw that laptop guy

How about an option 4? You probe the available profiles and then selectively disable/replace some shaders to keep core profiles where possible. Chances are, the more modern features are only required for advanced techniques, while a simple baseline can be produced that only requires, e.g., OpenGL 3.1.

Even if not, having a better way to handle non-existent core profiles, e.g. to display an error dialog, would be cool.

Proposal:
I am open to hearing your ideas, but off the top of my head, I came up with two:

  1. Application#setCoreProfile throws an UnsupportedOperationException if the requested profile is unavailable. This would entail having proper context restarts and would enable an in-game setting to change the renderer as well as the profile, which is helpful for debugging issues too (see the sketch after option 2’s example below).
    It may be hard to achieve with the current code structure, though, since typically renderers open the whole context (e.g. the window) and then stay.

  2. A callback, or rather a Function<List<Profile>, Profile>:

app.chooseCoreProfile(profiles -> {
    if (profiles.contains(Profile.OPENGL44)) {
        return Profile.OPENGL44;
    } else if (profiles.contains(Profile.OPENGL31)) {
        // user-supplied field to handle 3.1; could also be replaced by
        // something like App.getActiveCoreProfile()
        app.openGL31Compat = true;
        return Profile.OPENGL31;
    } else {
        app.showDialog("A minimum of OpenGL 3.1 is required to run this application.");
        return null;
    }
});
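For option 1, a minimal sketch of how the proposed API could be used; setCoreProfile, Profile, and the exact exception type are proposed names, nothing here exists yet:

try {
    // Proposed API: fails fast if the driver does not offer this profile.
    app.setCoreProfile(Profile.OPENGL45);
} catch (UnsupportedOperationException e) {
    // Fall back to a lower profile, or show a proper error dialog.
    app.setCoreProfile(Profile.OPENGL31);
}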

What are your thoughts? Ideas? Don’t see any use? etc.


I don’t know if this may help you or not, but I was having problems with system profiles lately: my laptop was running the core profile on Linux and the compatibility profile on Windows, and that of course broke some functionality when in the core profile, basically the shaders handled by GLSLCompat. BTW, what do you mean by an installer detecting the current system profile?
And how can my system be detected as core on Linux and compatibility on Windows?
And is there currently a way to switch from the core to the compatibility profile?

Option #2 seems both quite powerful and the least invasive. I’d suggest creating a new functional interface for it though, rather than using the JDK Function interface, for the sake of those who prefer to implement functional interfaces the old-fashioned way rather than via lambdas.

I agree that specifying the renderer / OpenGL version is quite messy (it probably turned into that over time while keeping things backwards compatible).

I once imagined that and quickly found I’d need to write my own renderer :smiley:
I mean, what does one actually gain from selecting a GL 4.5 renderer?
Yeah, sure, if the hardware does not support the tessellation shaders that you use in the fourth level of your game, it crashes right away instead of when starting the fourth level, but you can also do caps checks at the start of your application.
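For reference, such a caps check is already possible today, along these lines (a minimal sketch inside simpleInitApp(); the enum constant spelling follows jME’s Caps):

import com.jme3.renderer.Caps;

// Bail out early instead of crashing in level 4.
if (!renderer.getCaps().contains(Caps.TesselationShader)) {
    // disable the advanced technique, or show an error and quit
}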

I always thought it was the other way round: all shaders are written in ancient GLSL using varying and gl_FragColor, and GLSLCompat.glsllib fixes them when the version of the shader (which is appended at runtime depending on what the hardware supports and the j3md declares) is >= 130.
It also adds implementations for some functions that are not available in ancient GLSL (determinant, inverse).

If it’s mostly about custom shaders, I would love to hear all input, since with one eye on a possible future Vulkan renderer (whoever might write that), shaders need to be rewritten entirely anyway. Well, not the actual algorithm code, but the includes, scattered uniform declarations, attributes with locations assigned by the driver, etc…

EDIT: Reading my post again, I feel it sounds a bit harsh and I have to apologize for that. It just takes me time to realize the community is more valuable than the source code.


Option 4 (the proposal) sounds nice, but then why do we have caps?

I always thought you just need to check caps to determine what to enable/disable based on them?

I understand the difference is that you can also set up whether to use openGL31Compat or not?

Going to need a reference for this. What kind of shader errors are not detected in compatibility mode?

Are you sure you aren’t conflating that with the nVidia compatibility issue when no #version was specified back in the early JME 3.x days? That one has been fixed for a few years now. (Most common issue was nvidia shader-compatibility mode allowing raw 0s when 0.0 is required, etc.)

As to the other, the callback is probably the least invasive.

BUT, in general, if your game doesn’t have some utility to reset the app settings in some way, you are going to mess with a lot of users… so in the end there may not be any way around a launcher as the most friendly approach (which also gives an opportunity for automatic updates, etc.). Or at least being able to pass a command line option to reset settings to defaults. That applies to any game that uses in-game display settings, which I assumed because that’s already a Mac requirement (no AWT = no settings dialog).

…launcher, command line switch, etc… Else a driver change could make the game unplayable and unrecoverable.
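For instance, a minimal sketch of such a command-line switch; the flag name, the preferences key, and the MyGame class are made up:

import java.util.Arrays;
import com.jme3.system.AppSettings;

public static void main(String[] args) throws Exception {
    AppSettings settings = new AppSettings(true); // engine defaults
    if (!Arrays.asList(args).contains("--reset-settings")) {
        // load whatever the user saved last time (hypothetical key)
        settings.load("com.mygame.MyGame");
    }
    MyGame app = new MyGame(); // hypothetical application class
    app.setSettings(settings);
    app.start();
}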

Well, in order to properly define a core profile, the application should at least know which profiles are available. An installer could do that.

I guess your issue is that the GPU driver’s game profiles somehow play into that; typically you always get the compatibility profile unless you specify a core profile (which your game probably should do).

Technically you should be able to instantiate Function<> as an anonymous class, shouldn’t you?
It’s actually the other way around: even if you specify a custom one-method interface, it can still be lambda-fied.
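Right, even with the plain JDK interface, the old-fashioned way works; the lambda in the proposal is just sugar for something like this (Profile and chooseCoreProfile are still the proposed, non-existent API):

import java.util.List;
import java.util.function.Function;

app.chooseCoreProfile(new Function<List<Profile>, Profile>() {
    @Override
    public Profile apply(List<Profile> profiles) {
        if (profiles.contains(Profile.OPENGL44)) {
            return Profile.OPENGL44;
        }
        return null; // no supported profile found
    }
});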

That’s a good point. What’s the benefit compared to caps? I mean, the driver knows what you’re up to: you can’t mix GLSL 150 with 330, some stuff is deprecated, but after all you could still go with the lowest possible version or compat.

Hmm, the latter is at least the reason why PBR didn’t work: there were no determinant/inverse defines.
And I guess having those built in is better for performance, at least.
There’s also type safety at compile time versus runtime (you have to manage caps manually and ensure the correct shaders are picked). Speaking of which: I think jME also already supports choosing a shader based on the supported GL version, where it actually looks at caps.

I guess Vulkan may also need some kind of renderer management. Just putting in a string “Vulkan” would be bad; it would be nice to have something more professional.

That’s a good point: for development, the driver ensures you don’t access anything wrong, whereas caps checks depend on you, and I guess you otherwise still always end up in compatibility mode.

I need to take that back then; a quick search didn’t confirm it, and all I knew was hearsay, e.g. things like the 0 instead of 0.0 issue, and there were some things with how vectors were accessed, etc.

That’s true; however, there should be some way to at least query the available core profiles, because otherwise you really need to somehow catch exceptions or guess that when java -jar returns exit code 1, it means the renderer couldn’t be started.

So something that gives more feedback would be nice, maybe also a query-only mode, much like a wrapper around LWJGL3 to query resolutions etc.; that’s also something that is missing there as well (see the sketch below).
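As a point of comparison, querying resolutions through raw LWJGL3/GLFW looks roughly like this; a thin engine-level wrapper around exactly this kind of call is what’s missing:

import org.lwjgl.glfw.GLFW;
import org.lwjgl.glfw.GLFWVidMode;

if (GLFW.glfwInit()) {
    // Enumerate the video modes of the primary monitor.
    GLFWVidMode.Buffer modes =
            GLFW.glfwGetVideoModes(GLFW.glfwGetPrimaryMonitor());
    for (int i = 0; i < modes.limit(); i++) {
        GLFWVidMode mode = modes.get(i);
        System.out.println(mode.width() + "x" + mode.height()
                + " @ " + mode.refreshRate() + "Hz");
    }
    GLFW.glfwTerminate();
}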


Yeah, the classic 0 instead of 0.0 problem was because if you didn’t specify a #version line then nVidia would drop into “let you do anything” mode… which was really nice for rapidly trying stuff out but super painful if you ever actually wanted to run your stuff anywhere else.

And in the olden days, JME’s default GLSL100 would leave the #version out… because 1.0 is not a real GLSL version and that was the easy way to say “work on desktop and Android”. JME later decided to put the correct #version in when you are running on desktop or Android while specifying GLSL100.

Prior to that, the solution was to always use GLSL110 in your matdefs.


Well, one could argue that that’s the same issue, just another “API”, as in: with no #version in the shader, the driver falls into compatibility mode; with #version 110, the driver uses the GLSL 110 profile.

But yeah, that somewhat reduces the motivation for this proposal, because then at least the shader compiler won’t be less strict. However, it’s probably still beneficial to set the core profile to the required version.