OpenGL resetting bound textures causes intermittent black geometry (OpenXR)

Context

I’ve been working on using OpenXR (i.e. modern virtual reality) with JMonkeyEngine. I’ve found a bug, but I’m unsure whether it’s in my code or in JMonkeyEngine. The basic process is: set up two cameras, move them in an AppState’s update method to wherever OpenXR says they should be, then in postRender copy their frame buffers into OpenXR.

public class XrAppState extends BaseAppState{

    @Override
    protected void initialize(Application app){
        Texture2D leftOffTex = new Texture2D(width, height, Image.Format.RGBA8);
        leftOffTex.setMinFilter(Texture.MinFilter.BilinearNoMipMaps);
        leftOffTex.setMagFilter(Texture.MagFilter.Bilinear);

        leftEyeFrameBuffer = new FrameBuffer(width, height, 1);
        leftEyeFrameBuffer.setDepthBuffer(Image.Format.Depth);
        leftEyeFrameBuffer.setColorTexture(leftOffTex);

        leftViewPort = app.getRenderManager().createPreView("Left Eye", leftCamera);
        leftViewPort.setClearFlags(true, true, true);
        leftViewPort.setOutputFrameBuffer(leftEyeFrameBuffer);

        //same for right eye
    }

    @Override
    public void update(float tpf){
        leftCamera.setLocation(observer.localToWorld(inProgressXrRender.leftEye.eyePosition(), null));
    }

    public void postRender(){
        xrSession.presentFrameBuffersToOpenXr(inProgressXrRender, leftEyeFrameBuffer, rightEyeFrameBuffer);
    }
}
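
For context, the state is attached to the application in the usual way; a minimal sketch, assuming a SimpleApplication subclass:

@Override
public void simpleInitApp(){
    // once attached, the state manager drives initialize, update and postRender
    getStateManager().attach(new XrAppState());
}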

The bug

Sometimes (and it seems to depend on the viewing angle) a geometry will go black (not culled, just rendered black).

“A” solution

The way I present these frame buffers to OpenXR (called within the postRender method) is like this:

private void renderWithJMEFrameBuffer(XrSwapchainImageOpenGLKHR swapchainImage, FrameBuffer targetBuffer){
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, targetBuffer.getId());
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, swapchainImage.image());
    GL11.glCopyTexSubImage2D(
            GL11.GL_TEXTURE_2D,
            0,                   // level
            0, 0,                // xoffset, yoffset
            0, 0,                // x, y (read from the bound framebuffer)
            swapchainWidth,      // width
            swapchainHeight      // height
    );

    GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, 0); // unbind framebuffer
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0); // unbind texture
}

I.e. I bind the texture and the frame buffer and copy across, then set both bindings back to zero since I’m done with them.

If I instead change it to this, the problems go away:

private void renderWithJMEFrameBuffer(XrSwapchainImageOpenGLKHR swapchainImage, FrameBuffer targetBuffer){

    IntBuffer intBuffer = BufferUtils.createIntBuffer(1);
    GL11.glGetIntegerv(GL11.GL_TEXTURE_BINDING_2D, intBuffer);
    int previouslyBoundTexture = intBuffer.get(0);

    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, targetBuffer.getId());
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, swapchainImage.image());
    GL11.glCopyTexSubImage2D(
            GL11.GL_TEXTURE_2D,
            0,                   // level
            0, 0,                // xoffset, yoffset
            0, 0,                // x, y
            swapchainWidth,      // width
            swapchainHeight      // height
    );

    GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, 0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, previouslyBoundTexture); // <-- put things back how I found them
}

So it seems that JMonkeyEngine already has a texture bound and doesn’t like me messing with it.

Question

So it seems like there are 3 possibilities

  • JMonkey wants full control of the bound textures and I should put them back how I found them (my solution is correct, there was a bug in my code)
  • JMonkey shouldn’t be relying on these bound textures not being reset but it is (bug in JMonkeyEngine, my solution is a work around)
  • I shouldn’t be copying around frame buffers in the postRender method and should be doing it elsewhere

Does anyone know which of these 3 is true? (Or if it’s a 4th I haven’t considered).

I’m on the most recent JME: 3.6.1.


This code looks a bit odd. I am not at the PC to check the specs, but I think glCopyTexSubImage2D expects a framebuffer bound at the GL_READ_FRAMEBUFFER target.

I am unsure if you have to unbind it first: set the framebuffer to 0, then bind it as GL_READ_FRAMEBUFFER.

It seems to me that you are binding framebuffer 0 as the read framebuffer, which is the display output. I am kind of surprised that this works at all.

My guess is that the code does not work at all in the first frame, since you have nothing bound to GL_READ_FRAMEBUFFER.

Not sure about the textures, but in general you should never query the GL context. For most stuff, JME keeps track of the binding states internally. If you mess with it you have to revert the changes; this is expected.

Or, if you change something, also update JME’s binding state.

This is the docs for glCopyTexSubImage2D:

  Respecifies a rectangular subregion of an existing texel array. No change is made to the
  internalformat, width, height, or border parameters of the specified texel array, nor is any
  change made to texel values outside the specified subregion. See CopyTexImage2D for
  more details.

  Params:
  target – the texture target. One of: TEXTURE_2D, TEXTURE_1D_ARRAY, TEXTURE_RECTANGLE, TEXTURE_CUBE_MAP
  level – the level-of-detail number
  xoffset – the left texel coordinate of the texture subregion to update
  yoffset – the lower texel coordinate of the texture subregion to update
  x – the left framebuffer pixel coordinate
  y – the lower framebuffer pixel coordinate
  width – the texture subregion width
  height – the texture subregion height

Not sure about the textures, but in general you should never query the GL context. For most stuff, JME keeps track of the binding states internally.

I’m not sure I understand this; are you saying I shouldn’t be calling any GL11 (etc.) methods? I’m not sure how else to present the cameras’ rendered output to OpenXR (but I’ll admit I am somewhat new to this).

Not sure about the textures, but in general you should never query the GL context. For most stuff, JME keeps track of the binding states internally. If you mess with it you have to revert the changes; this is expected.

Ok, that sounds like it’s a bug in my code and I should put the textures back the way I found them (which is a nice answer, as it’s also what sorts it out).

You want to read from the targetBuffer? If yes, you have to bind it to GL_READ_FRAMEBUFFER instead of GL_FRAMEBUFFER, as the first GL command in your method.

That does also work. Why is GL_READ_FRAMEBUFFER better than GL_FRAMEBUFFER? (Both seem to work but I assume there is something I’m missing).

(Edit: reading around a bit, it seems GL_FRAMEBUFFER binds both the read and draw targets, which is why it still works but isn’t ideal, as I only need the read binding.)

I think it is because you are binding framebuffer 0 to GL_READ_FRAMEBUFFER in the last line, so in frame 2 you have something bound.

You are most likely in the realm of unspecified behaviour, so while it might work on your GPU and this driver version, it is likely that a different vendor outputs something different.

If you bind to the read target, you notify OpenGL that all write operations have to be completed first.

AMD is usually less forgiving, while NVidia tends to write black when it comes to texture access violations.

Ok, that makes sense. This is what I have now: it uses GL_READ_FRAMEBUFFER, and it puts everything back the way it found it.

private void renderWithJMEFrameBuffer(XrSwapchainImageOpenGLKHR swapchainImage, FrameBuffer targetBuffer){

    IntBuffer intBufferTexture = BufferUtils.createIntBuffer(1);
    GL11.glGetIntegerv(GL11.GL_TEXTURE_BINDING_2D, intBufferTexture);
    int previouslyBoundTexture = intBufferTexture.get(0);

    IntBuffer intBufferFrameBuffer = BufferUtils.createIntBuffer(1);
    GL30.glGetIntegerv(GL30.GL_READ_FRAMEBUFFER_BINDING, intBufferFrameBuffer);
    int previousFramebuffer = intBufferFrameBuffer.get(0);

    GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, targetBuffer.getId());
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, swapchainImage.image());
    GL11.glCopyTexSubImage2D(
            GL11.GL_TEXTURE_2D,
            0,                   // level
            0, 0,                // xoffset, yoffset
            0, 0,                // x, y
            swapchainWidth,      // width
            swapchainHeight      // height
    );

    GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, previousFramebuffer);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, previouslyBoundTexture);
}

If you have access, you can use JME’s context to get the current binding state.

As in calls to Renderer#setFrameBuffer etc.? I think I could do that for the frame buffer (which comes from JME), but I’m less sure about the swapchain image (which comes from OpenXR’s OpenGL extension).
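
For illustration, something like this is what I have in mind: let JME bind the framebuffer through Renderer#setFrameBuffer (so its cached state stays consistent) and only save/restore the raw texture binding, which JME knows nothing about. Here `application` is just a placeholder for however the Application instance is held, and I’m not certain setFrameBuffer also updates the read binding, so this is a sketch rather than a drop-in replacement:

private void renderWithJMEFrameBuffer(XrSwapchainImageOpenGLKHR swapchainImage, FrameBuffer targetBuffer){
    // let JME bind the framebuffer so its internal state tracking stays in sync
    application.getRenderer().setFrameBuffer(targetBuffer);

    // the swapchain texture is unknown to JME, so save and restore that binding manually
    IntBuffer previouslyBound = BufferUtils.createIntBuffer(1);
    GL11.glGetIntegerv(GL11.GL_TEXTURE_BINDING_2D, previouslyBound);

    GL11.glBindTexture(GL11.GL_TEXTURE_2D, swapchainImage.image());
    GL11.glCopyTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 0, 0, swapchainWidth, swapchainHeight);

    GL11.glBindTexture(GL11.GL_TEXTURE_2D, previouslyBound.get(0));
}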

Hm, out of interest: why don’t you create a framebuffer with the image from OpenXR as the target, and let JME render to it? It would save the texture copy (on the order of 700 MB/sec) if it works.

Honestly, because I didn’t realise that was an option!

Would this be the int I get from XrSwapchainImageOpenGLKHR#image() being made into a frame buffer and given to the viewport during the update method? (There is some buffering going on, so I think I’d need to do it each frame.)

Assuming that is true, how do I go from that int to a JMonkey FrameBuffer?

That int is an image, so you would create a common framebuffer in JME and add that int as the color target. I have to check the API when I am at home.

Something with RenderTargets.newRenderTarget.


I think there is no super clean way to do it, but there is a nearly clean way.

You could extend Image.java to get access to the protected Image(id) constructor, then create a Texture2D with that image, and then create your framebuffer as usual.

For the buffering, you should check how many different textures OpenXR gives you and create a framebuffer for each of them, but reuse them.


Nice! Thank you! It’s a lot cleaner now.

For completeness this was what I needed to do.

Create a child of Image to set the required parameters; it needed to do a few other things beyond just the ID:

public class SwapchainImage extends Image{

    public SwapchainImage(int id, Format format, int width, int height){
        super(id);
        data = new ArrayList<>(1); //or else Texture2D's constructor NPEs
        setFormat(format);
        setWidth(width);
        setHeight(height);
    }
}

Then within the update loop I create or refetch a FrameBuffer:

    private FrameBuffer getOrCreateFrameBuffer(int swapchainImageId){
        return frameBuffers.computeIfAbsent(swapchainImageId, id -> {
            Image.Format format;
            if(glColorFormat == GL11.GL_RGB10_A2){
                format = Image.Format.RGB10A2;
            } else if (glColorFormat == GL30.GL_RGBA16F){
                format = Image.Format.RGBA16F;
            } else {
                throw new IllegalStateException("Unknown color format: " + glColorFormat);
            }

            Texture2D texture = new Texture2D(new SwapchainImage(id, format, swapchainWidth, swapchainHeight));
            FrameBuffer frameBuffer = new FrameBuffer(swapchainWidth, swapchainHeight, 1);
            frameBuffer.addColorTarget(FrameBuffer.FrameBufferTarget.newTarget(texture));
            frameBuffer.setDepthBuffer(Image.Format.Depth);
            return frameBuffer;
        });
    }

Within the update loop of XrAppState I set the viewports’ output frame buffers to the currently requested buffers:

     leftViewPort.setOutputFrameBuffer(inProgressXrRender.getLeftBufferToRenderTo());
     rightViewPort.setOutputFrameBuffer(inProgressXrRender.getRightBufferToRenderTo());

Does that all look right? (It certainly works, but that doesn’t mean it’s optimal.)


Glad it worked out to a cleaner solution. I don’t know how the VR app state renders stuff; it might be possible that you can share a single depth texture across all of your framebuffers.
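
A minimal sketch of what that sharing could look like, assuming FrameBuffer#setDepthTarget is available in 3.6 and that both eye viewports clear the buffer before rendering (leftColorTexture and rightColorTexture stand in for the swapchain-backed textures from above):

// one depth texture attached to both eye framebuffers (sketch only)
Texture2D sharedDepth = new Texture2D(swapchainWidth, swapchainHeight, Image.Format.Depth);

FrameBuffer leftBuffer = new FrameBuffer(swapchainWidth, swapchainHeight, 1);
leftBuffer.addColorTarget(FrameBuffer.FrameBufferTarget.newTarget(leftColorTexture));
leftBuffer.setDepthTarget(FrameBuffer.FrameBufferTarget.newTarget(sharedDepth));

FrameBuffer rightBuffer = new FrameBuffer(swapchainWidth, swapchainHeight, 1);
rightBuffer.addColorTarget(FrameBuffer.FrameBufferTarget.newTarget(rightColorTexture));
rightBuffer.setDepthTarget(FrameBuffer.FrameBufferTarget.newTarget(sharedDepth));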