Why were RGB16/RGBA16 removed from the development version?

Is there any reason why RGB16/RGBA16 were removed from the 3.1 development version? Is it temporary, or will they not be supported in 3.1 at all?

mhh, not sure. There has been a big refactoring of the core rendering, and maybe this removal was not intended.
You still have RGB16F and RGBA16F though.

I’ll investigate and tell you.

I’m coming back to jme3 after a long break, and when I was trying to update to the new version, I hit the problem mentioned above. I can try to add it back and provide a pull request, but maybe there was a reason to remove it.
BTW, there seem to be two TextureUtils classes which are not used anymore (one in android, one in jogl) - I assume they can be safely ignored?

One year already… damn it…
@Momoko_Fan any idea?

They are not supported by OpenGL ES.

Go to the OpenGL ES 2.0 or 3.0 specification and scroll down to “Required Texture Formats”. RGB(A)16 is not listed.

yeah but… maybe we could provide it with a proper warning… not everybody is using OGL ES

Indeed, it seems this extension is required:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_texture_norm16.txt
In any case, what is the current verdict on such functionality? Will only the lowest common denominator be supported, or will it be up to the developer to somehow test what is supported and provide fallbacks where needed?

Man, I hope not.

Limiting desktop to only what’s supported in ES seems like a bad idea. Might as well just call JME a mobile engine (that oh by the way can run on desktop).

Though I guess we could rip a whole lot out of our shaders in that case. Lighting could be reduced to almost nothing. :wink:

Well… I for one went from libGDX to JME because I wanted to benefit from the bigger capabilities of the desktop platform… so limiting the features to what is available on mobile defeats the purpose.

While there is a lot of overlap, I have always thought that libGDX is optimized for mobile, while JME is optimized for desktop.

Me too.

Yep, I see it similarly: JME has more full-power stuff, while libGDX has more lightweight stuff :slight_smile:

I think the idea is not necessarily the lowest common denominator, but rather supporting modern features which can be expected to be widely available, meaning the intersection between the forward-compatible / core desktop OpenGL 3.2 profile and OpenGL ES 3.x, which are fairly similar already.
RGB(A)16 is kind of a weird format because it gives you really high precision but can still only represent values between 0.0 and 1.0. In most cases the RGB(A)16F (floating-point) or the leaner RGB111110F formats would be preferable, giving higher precision and a wider value range.

Just to give some background on why I was asking about it: I’m encoding normals into the g-buffer using two components, and it turned out that with both 8- and 10-bit precision I can observe very strong artifacts in screen-space reflections. With 16-bit precision they completely disappear.

Maybe there is some smart way of encoding the normal into 16F that utilizes some of the exponent bit depth (the default 10 bits of mantissa will probably suffer from the same artifacts as RGB10_A2). Or, alternatively, use RGBA8 and compose a high-precision value by using the two extra components as detail info (turning it into a kind of RG16 format).
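
Roughly what I have in mind for the RGBA8 route - just an untested sketch with helper names of my own, spreading one [0,1] component over two unorm8 channels (x would then go to .rg and y to .ba of the target):

// Untested sketch: one [0,1] value spread over two unorm8 channels,
// giving 16 bits of effective precision per encoded component.
vec2 pack16(in float v) {
    float q  = floor(clamp(v, 0.0, 1.0) * 65535.0 + 0.5); // nearest 16-bit integer
    float hi = floor(q / 256.0);                           // upper 8 bits
    float lo = q - hi * 256.0;                             // lower 8 bits
    return vec2(hi, lo) / 255.0;
}

float unpack16(in vec2 p) {
    vec2 b = floor(p * 255.0 + 0.5);
    return (b.x * 256.0 + b.y) / 65535.0;
}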

Edit: I made a small video showing the difference (using 16F to emulate the old RGB10 and 32F to emulate the old RGB16). Please open it in full HD and observe the bottom part of the screen. Ignore the parts of the reflection that disappear - that is normal for SSR, as I haven’t implemented the fallback env map yet; this is about reflection stability.

Last year, I found a nice paper and implementation to encode normals in RGB8 with good precision (I used it for the full-res normal buffer; the lower-res normal buffer uses RGB32F (not optimized, WIP for MSSAO)).

My code + URL of the paper: jme3_ext_deferred/UnitVector.glsllib at master · davidB/jme3_ext_deferred · GitHub

and

#import "ShaderLib/UnitVector.glsllib"


vec3 encodeNormal(in vec3 normal){
	return snorm12x2_to_unorm8x3(float32x3_to_oct(normal));
}

vec3 decodeNormal(in vec3 unorm8x3Normal){
	return oct_to_float32x3(unorm8x3_to_snorm12x2(unorm8x3Normal));
	//return hemioct_to_float32x3(unorm8x3_to_snorm12x2(intNormal.rgb));
}

vec3 readNormal(in sampler2D normalBuffer, in vec2 uv) {
	vec3 intNormal = texture(normalBuffer, uv).rgb;
	return normalize(decodeNormal(intNormal));
}

I’m using

// Spheremap transform (the "Lambert" encoding from Aras' Compact Normal Storage article):
// store only x and y; z is reconstructed on decode.
vec2 encodeNormal (vec3 n)
{
    float f = sqrt(8.0*n.z+8.0);
    return n.xy / f + 0.5;
}
vec3 decodeNormal(vec2 enc)
{
    vec2 fenc = enc.xy*4.0-2.0;
    float f = dot(fenc,fenc);
    float g = sqrt(1.0-f/4.0);
    vec3 n;
    n.xy = fenc*g;
    n.z = 1.0-f/2.0;
    return n;
}

And I can probably use a similar trick to utilize the 3rd component for an extra 4 bits of precision (or the 3rd and 4th components for an extra 8 bits). Let me see if 12 bits are enough (I know that 10 were not and 16 were good).
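
Something like this is what I mean by the extra-bits trick (again just an untested sketch of mine): split each of the two encoded components into its top 8 bits plus a 4-bit remainder, and pack both remainders together into the third channel of an RGB8/RGBA8 target.

// Untested sketch: two 12-bit components spread over three unorm8 channels.
vec3 pack12x2(in vec2 v) {                                // v assumed in [0,1]
    vec2 q  = floor(clamp(v, 0.0, 1.0) * 4095.0 + 0.5);  // nearest 12-bit integers
    vec2 hi = floor(q / 16.0);                            // top 8 bits of each component
    vec2 lo = q - hi * 16.0;                              // bottom 4 bits of each component
    return vec3(hi, lo.x * 16.0 + lo.y) / 255.0;          // both remainders share the b channel
}

vec2 unpack12x2(in vec3 p) {
    vec3 b   = floor(p * 255.0 + 0.5);
    float lx = floor(b.z / 16.0);
    float ly = b.z - lx * 16.0;
    return (b.xy * 16.0 + vec2(lx, ly)) / 4095.0;
}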

Take a look at the paper; it’s not only about precision and encoding/deducing z from xy, but more about increasing precision over the set of possible values (the octahedral mapping). Anyway, you can give it a try (the code is ready to use with jME).
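
For reference, the core of the octahedral mapping looks roughly like this (a simplified sketch; the actual UnitVector.glsllib additionally handles the 12-bit quantization and the hemisphere-only variants):

// A sign() that never returns zero, so the fold below always picks a quadrant.
vec2 signNotZero(in vec2 v) {
    return vec2(v.x >= 0.0 ? 1.0 : -1.0, v.y >= 0.0 ? 1.0 : -1.0);
}

// Project the unit normal onto the octahedron |x|+|y|+|z|=1 and unfold it to [-1,1]^2.
vec2 octEncode(in vec3 n) {
    vec2 p = n.xy / (abs(n.x) + abs(n.y) + abs(n.z));
    return (n.z <= 0.0) ? (1.0 - abs(p.yx)) * signNotZero(p) : p;
}

vec3 octDecode(in vec2 e) {
    vec3 n = vec3(e, 1.0 - abs(e.x) - abs(e.y));
    if (n.z < 0.0) n.xy = (1.0 - abs(n.yx)) * signNotZero(n.xy);
    return normalize(n);
}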

I have tested the encodings for my use case. Previously I was using the Lambert encoding from Compact Normal Storage for small G-Buffers · Aras' website, but even when extending it to 12 bits it was still a bit jumpy. Using the one you have chosen makes it stable with 12 bits.

Thanks for the hint. It seems that I don’t need RGB16 in the near future, which doesn’t necessarily mean that jme3 should not support it anyway :wink: