Copy FrameBuffer contents, except for pixels at a certain distance

I’m toying with another virtual reality idea. Since depth perception falls off at a certain distance, you should be able to re-use the same “far” object renders for the second eye. I’ve got this almost working with a special SceneProcessor that copies the FrameBuffer from one eye to the other. I then set the far frustum short on the second eye. The only problem is that things up close, which are rendered for both eyes, show up twice in the second eye. What I’d like to do is clip nearby things out of the FrameBuffer copy so they won’t be duplicated in the second eye. How would I go about this? It has to be possible… some way… the performance improvements for my game, 5089, would be quite significant since so much is far away.
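For reference, the copy processor is roughly along these lines — a minimal sketch, not my actual code, assuming a jME3 SceneProcessor attached to the left eye’s ViewPort and the jME 3.1-era three-argument copyFrameBuffer; `rightEyeBuffer` is a placeholder for however the right eye’s buffer gets obtained:

```java
import com.jme3.post.SceneProcessor;
import com.jme3.profile.AppProfiler;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;

/** Copies the finished left-eye render into the right eye's FrameBuffer. */
public class EyeCopyProcessor implements SceneProcessor {

    private RenderManager renderManager;
    private final FrameBuffer rightEyeBuffer; // output buffer of the right-eye ViewPort

    public EyeCopyProcessor(FrameBuffer rightEyeBuffer) {
        this.rightEyeBuffer = rightEyeBuffer;
    }

    @Override
    public void initialize(RenderManager rm, ViewPort vp) {
        renderManager = rm;
    }

    @Override
    public void postFrame(FrameBuffer leftEyeBuffer) {
        // Left eye is done rendering; copy it into the right eye's buffer before
        // the right-eye ViewPort (with its shortened far frustum) renders on top.
        // Third argument: whether to copy the depth buffer along with the color.
        renderManager.getRenderer().copyFrameBuffer(leftEyeBuffer, rightEyeBuffer, true);
    }

    // Remaining callbacks are no-ops for this sketch.
    @Override public void reshape(ViewPort vp, int w, int h) {}
    @Override public boolean isInitialized() { return renderManager != null; }
    @Override public void preFrame(float tpf) {}
    @Override public void postQueue(RenderQueue rq) {}
    @Override public void cleanup() {}
    @Override public void setProfiler(AppProfiler profiler) {}
}
```

The right-eye ViewPort also has to skip its color clear (e.g. setClearFlags(false, true, true)) so the copied pixels survive its own pass.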

If you have the depth buffer then I guess you could do it, but on the CPU it would be slow. Perhaps there is some trick to take the left eye’s framebuffer in a post-processing filter, do some framebuffer ‘cleaning’ there, and make the copy without rendering it back in post-proc.

Like, you could render a black full-screen polygon and set the depth test function such that only nearer stuff gets painted black… i.e., backwards from the normal depth test.
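A minimal sketch of that render state in jME terms (made-up names; setDepthFunc assumes jME 3.1+):

```java
// Black "mask" material whose depth test is reversed: fragments only pass where
// something *nearer* is already in the depth buffer.
Material black = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
black.setColor("Color", ColorRGBA.Black);
black.getAdditionalRenderState().setDepthTest(true);
black.getAdditionalRenderState().setDepthWrite(true); // also reset depth where it paints, so later passes can draw there
black.getAdditionalRenderState().setDepthFunc(RenderState.TestFunction.Greater);
```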

The ham-fisted approach would be to do three renders: a far pass with the near plane pushed out, then the left eye with a short far plane, then the right eye with a short far plane… that’s bound to be faster than a CPU-based copy solution if you can’t find a way to do it on the GPU.
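In camera terms that split is roughly this (the boundary distance, FOV, aspect, and camera names are just placeholders):

```java
float boundary = 200f; // near/far split distance (made up for illustration)

// Pass 1: shared "far" render - near plane pushed out to the boundary.
farCam.setFrustumPerspective(45f, aspect, boundary, 10000f);

// Passes 2 & 3: per-eye "near" renders with a short far plane.
leftCam.setFrustumPerspective(45f, aspect, 0.1f, boundary);
rightCam.setFrustumPerspective(45f, aspect, 0.1f, boundary);
```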

Topic on the Oculus forums – it is an old idea:

Not sure if anyone actually tried implementing it, though. Thank you for the feedback, pspeed… looking into it more now. Trying to at least see if the performance gains are big enough to be worth figuring out how to clean it up.

I was able to see up to around a 38% performance improvement just by culling far things in the second camera. That improvement would shrink once far things are properly displayed in the second eye, however that ends up being done (3 passes or some framebuffer magic).

A problem would arise when more things are up close, and this “optimization” may actually slow things down if not much can be culled away in the background to offset the overhead.

The other performance idea, using instancing to do everything in one pass, might have a higher “return on investment”. It seems to be the approach Unreal & Unity are pursuing. It just seems more involved to implement…

It occurs to me that the problem of ‘duplicates’ becomes a problem of ‘gaps’. If duplicates of near stuff are showing in the right eye, for example, then what would show there if you somehow removed those pixels? Nothing… not background for sure.

You can render the far stuff to the left eye first, then do a framebuffer blit from the left eye to the right eye (only color components – no depth). Then render the near things to the left and right eyes.
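Sketch of that blit, again assuming the jME 3.1-era three-argument copyFrameBuffer; the buffer names are placeholders:

```java
// Copy the left eye's far render into the right eye, color only.
// Leaving depth out means the right eye's depth buffer stays cleared,
// so its own near pass depth-tests normally on top of the copied image.
renderManager.getRenderer().copyFrameBuffer(leftEyeBuffer, rightEyeBuffer, false);
```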

Yep… three renders. A small but significant optimization over my 3 pass comment above.

You’d still have to decide what to do about the near/far transition to not leave gaps in the right eye render for anything that crosses the near/far boundary.

I’m still fiddling with this. I found the TestFunction options for depth testing & got some interesting results. If I don’t copy over the depth buffer, and force a RenderState with its TestFunction set to “NotEqual”, I get this:

(screenshot hosted on Imgur)

… which seems to be the opposite of the effect I’m going for. Newly rendered objects in the right eye are causing the skybox to render over them, leaving just the little tidbits I want removed. Not sure I am any closer to a solution, but thought I’d share in case it sparks any more suggestions…
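For reference, the “forcing” part above is roughly this (a sketch of the idea, not my exact setup):

```java
// A forced render state overrides the per-material depth settings for everything
// rendered while it is set on the RenderManager.
RenderState forced = new RenderState();
forced.setDepthTest(true);
forced.setDepthFunc(RenderState.TestFunction.NotEqual);
renderManager.setForcedRenderState(forced);
// ... render the right eye's near pass here ...
renderManager.setForcedRenderState(null); // back to normal afterwards
```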

So which of the mentioned approaches is this?

Note: for any of the ones mentioned so far, you’d only have to mess with depth test to render a single quad.

The above example was doing just 2 passes: a full left-eye pass (both near & far), copied to the right eye. Then I just tried to do a near pass for the right eye. Anything rendered in the right eye causes the skybox to replace what was rendered. It looks interesting, but I’m not sure it is really helpful.

Toying with 3 passes now, which fixes the problem with near stuff being sent to the right eye…

To use that approach, you would have to draw a quad or something to clear out the near stuff. The regular scene still has to be drawn in the regular way. So I’m not sure what you are changing the depth test function for.

See, the idea was to render a single quad in 3D space at your near/far boundary but with TestFunction.Greater so that the quad is only rendered over nearer pixels.
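A rough sketch of what that might look like, reusing the reversed-depth material from the earlier snippet (the boundary distance, names, and per-frame wiring are made up):

```java
// Size the quad from the eye camera's frustum so it covers the whole view at the
// boundary distance; every fragment it writes then sits at exactly that depth.
float boundary = 200f; // near/far split distance (placeholder)
float halfH = boundary * cam.getFrustumTop()   / cam.getFrustumNear();
float halfW = boundary * cam.getFrustumRight() / cam.getFrustumNear();

Geometry maskGeom = new Geometry("nearMask", new Quad(halfW * 2f, halfH * 2f));
maskGeom.setMaterial(black); // the TestFunction.Greater material from before
maskGeom.setLocalTranslation(-halfW, -halfH, 0f); // center the quad on its node
black.getAdditionalRenderState().setFaceCullMode(RenderState.FaceCullMode.Off); // ignore which way it faces

Node nearMask = new Node("nearMaskNode");
nearMask.attachChild(maskGeom);
// Re-do this every frame so the quad tracks the (right-eye) camera:
nearMask.setLocalRotation(cam.getRotation());
nearMask.setLocalTranslation(cam.getLocation().add(cam.getDirection().mult(boundary)));
```

The quad would only be drawn in whatever pass is cleaning up the copied image, with its transform updated each frame from the right-eye camera.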

Note: if you render your scene again with a closer far plane then you need to turn off the sky in that render or it will overwrite anything in the background as it renders at the far plane. You’d want to keep the old sky anyway.
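E.g. (placeholder spatial name), toggling the sky’s cull hint around the near pass:

```java
sky.setCullHint(Spatial.CullHint.Always); // keep the sky out of the short-far render
// ... render the near pass ...
sky.setCullHint(Spatial.CullHint.Never);  // restore it for the next full/far render
```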

All of these approaches are flawed for the exact same reason that you’re seeing duplicates in the original two-pass case. Those duplicate areas are going to be replaced by “voids” in any of these approaches.

Just tried out a “prototype” version of this on my Vive, which wasn’t perfect but at least copied the far stuff into the right eye. Turns out, this just won’t work as planned. Even far things need to be rendered differently for each eye, because the projection matrix is different for the second eye. Things simply weren’t converging & warped very oddly when I moved my head around.

This is something that worked when doing cross-eyed, side-by-side rendering. It might have even worked on the DK2 with a single flat screen. However, it isn’t working on the Vive and likely won’t on the CV1. It was an interesting thing to experiment with, but alas wasn’t phr00tful :stuck_out_tongue:

On to other performance improvement ideas…