Bloom filter causes flicker (in a VR context)

I’ve been trying to get Tamarin 2 working in some of my real projects and I’m hitting an issue where, if the cameras have a bloom filter applied, they flicker (without the filter they are fine). I think the flicker is the glow being applied only on some frames.

A minimal example that includes Tamarin is:

import com.jme3.app.LostFocusBehavior;
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.post.FilterPostProcessor;
import com.jme3.post.filters.BloomFilter;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;
import com.jme3.system.AppSettings;
import com.onemillionworlds.tamarin.openxr.XrAppState;

public class Main extends SimpleApplication{
    
    public static void main(String[] args) {
        AppSettings settings = new AppSettings(true);
        settings.put("Renderer", AppSettings.LWJGL_OPENGL45);
        settings.setTitle("Tamarin OpenXR Example");
        settings.setVSync(false); 
        Main app = new Main();
        app.setLostFocusBehavior(LostFocusBehavior.Disabled);
        app.setSettings(settings);
        app.setShowSettings(false);
        app.start();
    }
    
    @Override
    public void simpleInitApp(){
        XrAppState xrAppState = new XrAppState();

        xrAppState.configureBothViewports(v -> {
            FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
            BloomFilter bf = new BloomFilter(BloomFilter.GlowMode.Objects);
            bf.setBlurScale(0.5f);
            fpp.addFilter(bf);
            v.addProcessor(fpp);
        });

        getStateManager().attach(xrAppState);

        Box b = new Box(0.1f, 0.1f, 0.1f);
        Geometry geom = new Geometry("Box", b);
        Material mat = new Material(assetManager,
                "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Gray);
        geom.setMaterial(mat);

        geom.setLocalTranslation(0, 1, 5f);
        xrAppState.playerLookAtPosition(new Vector3f(0,1,5));
        rootNode.attachChild(geom);
    }

}

I’ll try to create a minimal example that doesn’t involve Tamarin if the problem isn’t immediately obvious to someone (but that will probably be hard, so I wanted to at least see if someone had an idea first).

The way this works is:

During XrAppState#update the viewport is given an output framebuffer to render into:

@Override
public void update(float tpf){
    ...
    leftViewPort.setOutputFrameBuffer(inProgressXrRender.getLeftBufferToRenderTo()); 
    ...
}

Then, later during postRender, OpenXR is informed that the scenes are ready:

@Override
public void postRender(){
    super.postRender();
    if (inProgressXrRender !=null){
        xrSession.presentFrameBuffersToOpenXr(inProgressXrRender);
        inProgressXrRender = null;
    }
}

My two current working theories are:

  • The bloom filter only applies to the framebuffer that was attached to the viewport at the moment the filter was added; the rendering is triple buffered so that would make a kind of sense, but the processors belong to the viewport, not the framebuffer
  • The bloom filter may or may not be “ready” by the time I present it to OpenXR; but the flicker feels too regular for that theory to make sense

Anything people think could plausibly be causing these problems?

My initial point of research would be the FilterPostProcessor. I am quite sure there have been some issues with it when using multiple viewports.

To replicate: rendering with two viewports should cause the same issues.

I do not know at what point the FilterPostProcessor gets a handle to the output framebuffer.

A regular flicker might hint that the FilterPostProcessor does not update its output target.
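
Something like this (an untested sketch; width, height, viewPort and frameIndex are assumed to exist) should show it without any VR involved: attach the FilterPostProcessor as usual, then cycle the viewport’s output framebuffer each frame, the way XrAppState#update does.

FrameBuffer[] buffers = new FrameBuffer[3];
// three offscreen buffers standing in for the OpenXR swapchain
for(int i = 0; i < 3; i++){
    FrameBuffer fb = new FrameBuffer(width, height, 1);
    fb.addColorTarget(FrameBuffer.FrameBufferTarget.newTarget(Image.Format.RGBA8));
    fb.setDepthTarget(FrameBuffer.FrameBufferTarget.newTarget(Image.Format.Depth));
    buffers[i] = fb;
}

@Override
public void simpleUpdate(float tpf){
    // swap the output buffer every frame, as the triple buffering does
    viewPort.setOutputFrameBuffer(buffers[frameIndex++ % 3]);
}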


I think you’re right. Getting the debugger out: in FilterPostProcessor the outputBuffer is only set once (for each eye).

At FilterPostProcessor.java#L483

    if (renderFrameBuffer == null && renderFrameBufferMS == null) {
        outputBuffer = viewPort.getOutputFrameBuffer();
    }

And the output buffer is constantly cycling through 3 different buffers for me.

Does this mean that this approach of setting the output buffer isn’t supported alongside FilterPostProcessors? I was originally letting JMonkey draw to its own framebuffer and then copying it into the swapchain each frame, but that seems much less efficient (copying loads of memory every single frame). It would probably not suffer from this problem, though.

I think it would be best if the FilterPostProcessor checked whether the viewport’s render target has changed.

As I see no setter for the output buffer, the alternative is Java reflection.
Or, copy the output texture as you mentioned.

Ultimately the jME FilterPostProcessor is not flexible enough for VR. You always end up doing lots of unnecessary double rendering. Shadow maps are a prime example.
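
For the reflection route, something like this (untested; fpp is the FilterPostProcessor instance, and the private field is named outputBuffer as in the jME source):

// repoint the FilterPostProcessor's private output buffer each frame
Field outputBufferField = FilterPostProcessor.class.getDeclaredField("outputBuffer");
outputBufferField.setAccessible(true);
outputBufferField.set(fpp, swapchainBufferForThisFrame);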

Ok, I think I understand a bit more of what’s going on.

What’s going on

Simple explanation:

When the filter initialises, it sets itself to output to whatever the viewport was outputting to, and sets the viewport to output into the filter instead. I then also try to edit the viewport’s output.

Complex explanation:

When the FilterPostProcessor initialises it steals the original viewport’s outputFrameBuffer, storing it as FilterPostProcessor#outputBuffer. This is now the output from the post processor’s perspective. It also updates the viewport’s outputFrameBuffer to be the renderFrameBuffer, which is the input to the filter: the route from the camera into the post processor. At this point the FilterPostProcessor effectively owns the viewport’s output.

Simultaneously, I am also updating the viewport’s outputFrameBuffer in order to do my triple buffering. Clearly these two things fight with each other.
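
Putting the two competing code paths side by side (paraphrased from the snippets above, not the actual sources):

// what FilterPostProcessor.initialize() effectively does, once:
outputBuffer = viewPort.getOutputFrameBuffer();   // steals the current swapchain buffer
viewPort.setOutputFrameBuffer(renderFrameBuffer); // camera now renders into the filter

// what XrAppState#update does, every frame:
leftViewPort.setOutputFrameBuffer(inProgressXrRender.getLeftBufferToRenderTo());
// ...so the filter keeps writing to the one swapchain buffer it stole,
// which is the correct buffer on only every third frame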

Bad solution

So what I thought was the correct solution: if a FilterPostProcessor exists then, within Tamarin, I should be cycling FilterPostProcessor#outputBuffer between my 3 OpenXR swapchain buffers, not the viewport’s.

But not all scene processors work that way; I could get the FilterPostProcessor to work, but that would mess up the SimpleWaterProcessor etc. (multiple scene processors would be especially hard to handle, as the first in the chain would need to be the FilterPostProcessor).

Possibly better solution

So the solution I’m currently using is to create a DelegatingFrameBuffer. This “extends” FrameBuffer but holds a delegate within it, and allows the inner framebuffer to be swapped out:

public class DelegatingFrameBuffer extends FrameBuffer{

    @Setter // Lombok; generates setDelegatedFrameBuffer(FrameBuffer)
    private FrameBuffer delegatedFrameBuffer;
    
    public DelegatingFrameBuffer(){
        //the below is irrelevant as we always delegate to a different framebuffer
        super(1, 1, 1);
    }

    @Override
    public void addColorTarget(FrameBufferBufferTarget colorBuf){
        delegatedFrameBuffer.addColorTarget(colorBuf);
    }

    .... etc

Then I set my camera up with this as its output buffer:

    leftFrameBuffer = new DelegatingFrameBuffer();
    leftViewPort = app.getRenderManager().createMainView("Left Eye", leftCamera);
    leftViewPort.setClearFlags(true, true, true);
    leftFrameBuffer.setDelegatedFrameBuffer(leftViewPort.getOutputFrameBuffer());
    leftViewPort.setOutputFrameBuffer(leftFrameBuffer);

Then the FilterPostProcessor steals this DelegatingFrameBuffer when it initialises, and I can change the delegated buffer to point at whatever I want before each frame. This feels a little unclean, but it works nicely for glow and I expect for any other scene processor too.
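
Per frame the swap then looks something like this (sketch; getLeftBufferToRenderTo is the same call as in the update snippet earlier in the thread):

@Override
public void update(float tpf){
    ...
    // the viewport's output framebuffer is left alone (the FilterPostProcessor
    // owns it now); instead the delegate is repointed at whichever swapchain
    // image OpenXR wants filled this frame
    leftFrameBuffer.setDelegatedFrameBuffer(inProgressXrRender.getLeftBufferToRenderTo());
    ...
}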

Aside

The way FilterPostProcessor does a full buffer copy does make me wonder if I’m being over the top in avoiding my own full buffer copy into the OpenXR swapchain. I’m not sure what the cost of that buffer copy really is.


Ah, nothing is ever that easy. I think that because FrameBuffer extends NativeObject, something in JME notices it and tries to clean it up on application shutdown, giving me an assertion error and shortly after that a fatal error:

SEVERE: Uncaught exception thrown in Thread[#45,jME3 Main,5,main]
java.lang.AssertionError
	at com.jme3.util.NativeObjectManager.deleteNativeObject(NativeObjectManager.java:129)
	at com.jme3.util.NativeObjectManager.deleteAllObjects(NativeObjectManager.java:209)
	at com.jme3.renderer.opengl.GLRenderer.cleanup(GLRenderer.java:724)
	at com.jme3.system.lwjgl.LwjglWindow.destroyContext(LwjglWindow.java:501)
	at com.jme3.system.lwjgl.LwjglWindow.deinitInThread(LwjglWindow.java:694)
	at com.jme3.system.lwjgl.LwjglWindow.run(LwjglWindow.java:728)

Ok, I’m now on my 2nd and 3rd implementations of this, and I think I’m actually happy with number 3.

Implementation 2: Compatibility mode

I tried just accepting the framebuffer copy: the viewport/scene processor renders to a FrameBuffer as normal, and I then copy from that framebuffer into the swapchain.

This worked but:

  • Theoretically slow (a whole bunch of extra copying)
  • Has a bunch of if-else branches to either do compatibility or not (because I wasn’t prepared to accept the framebuffer copy when scene processors aren’t being used)
  • Required the authors of the consuming application to make a complicated choice
  • Required me to detect and throw an exception when the authors of the consuming application made the wrong choice (so I don’t get “why is this flickering” bug reports)
  • Generally felt yucky

I still have this in a branch, but I don’t think I’m going to use it.
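
For reference, the copy at the heart of this approach was roughly the following (a sketch, assuming the four-argument Renderer#copyFrameBuffer available in jME 3.6; renderedBuffer and swapchainBufferThisFrame are illustrative names):

// blit what jME (and its filters) rendered into the OpenXR swapchain image
renderManager.getRenderer().copyFrameBuffer(
        renderedBuffer,           // source: the buffer the viewport drew into
        swapchainBufferThisFrame, // destination: the OpenXR swapchain image
        true,                     // copy colour
        false);                   // the compositor doesn't need depth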

Implementation 3: As many viewports as your heart desires

In this implementation, whenever OpenXR presents me with a new framebuffer to fill for the swapchain, I create a new viewport to go with it (if it’s one I’ve seen before I reuse the old ViewPort). This means that when the consuming application asks to configure viewports I need to remember that configuration, so I can apply it to any newly synthesised ViewPorts as well as to the existing ones.

public void setViewportConfiguration(Consumer<ViewPort> configureViewport)

This means typically there will be 6 viewports (2 eyes * triple buffering).

On each frame I set the 2 ViewPorts for that frame as enabled and the other 4 as disabled. This means I don’t need to touch a ViewPort’s output buffer after creation, so if a SceneProcessor has replaced it then that’s fine.
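
The bookkeeping is roughly this (a sketch; the field and method names here are illustrative, not Tamarin’s actual internals):

// one ViewPort per swapchain framebuffer, created lazily and then reused
private final Map<FrameBuffer, ViewPort> leftViewPorts = new HashMap<>();
private Consumer<ViewPort> viewportConfiguration = vp -> {};

private ViewPort obtainLeftViewPort(FrameBuffer swapchainBuffer){
    return leftViewPorts.computeIfAbsent(swapchainBuffer, buffer -> {
        ViewPort vp = application.getRenderManager().createMainView("Left Eye", leftCamera);
        vp.setClearFlags(true, true, true);
        vp.setOutputFrameBuffer(buffer); // a scene processor may replace this later; that's fine
        viewportConfiguration.accept(vp); // replay the remembered configuration
        return vp;
    });
}

// each frame: enable only the viewport whose buffer OpenXR wants filled
private void selectLeftViewPort(FrameBuffer swapchainBuffer){
    ViewPort active = obtainLeftViewPort(swapchainBuffer);
    leftViewPorts.values().forEach(vp -> vp.setEnabled(vp == active));
}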

I think that a disabled ViewPort should have no (or negligible) cost, so this should be as performant as my original solution. I’m going to go with this approach unless anyone can see any major flaws in my reasoning.
