I am working on elevators in my top-down-view game. I want to keep the camera locked on the opaque player/elevator while the current level translates out and fades away, then the next level translates in and fades in.
Like so -
The green area representing the level contains all the complexity and geometry of the world. Setting alphas individually on every object in the level will not look right because, for example, a partially transparent wall would reveal the objects behind it. This can be fixed by rendering the level normally to a framebuffer, then rendering that framebuffer to the screen with the appropriate alpha instead.
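For concreteness, the compositing pass I mean is roughly this fragment shader on a full-screen quad, with standard alpha blending enabled (a sketch; the uniform and varying names are my own):

```glsl
#version 330 core

uniform sampler2D u_levelColor; // color attachment of the level's framebuffer
uniform float u_fade;           // current fade factor for the whole level, 0..1

in vec2 v_uv;                   // UVs covering the full-screen quad
out vec4 fragColor;

void main() {
    vec4 c = texture(u_levelColor, v_uv);
    // Scale the level's alpha uniformly; blended with
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
    fragColor = vec4(c.rgb, c.a * u_fade);
}
```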
My problem is combining the framebuffer level render with the player and the elevator, which may be both in front of and behind parts of the level as it moves:
Rendering the fading part of the level as a full-screen quad, I lose the framebuffer's depth data: everything renders at the depth of the quad.

Blitting the framebuffer does copy the depth, but glBlitFramebuffer cannot blend colors or apply alpha.
Is there some way to apply the framebuffer's depth to a full-screen quad? As far as I know OpenGL's depth test can run before the fragment shader, so binding the depth attachment as a texture in my shader and doing my own depth comparison feels hacky. Is that the way forward, or am I down the wrong path entirely?
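The version of this I can picture would write gl_FragDepth from the level's depth attachment, so the fixed-function depth test still applies when the player/elevator is drawn afterwards, rather than doing the comparison manually (an untested sketch; names are my own, and I understand writing gl_FragDepth forfeits early depth testing for this pass):

```glsl
#version 330 core

uniform sampler2D u_levelColor; // color attachment of the level's framebuffer
uniform sampler2D u_levelDepth; // depth attachment, bound as a texture
uniform float u_fade;           // current fade factor for the whole level

in vec2 v_uv;
out vec4 fragColor;

void main() {
    vec4 c = texture(u_levelColor, v_uv);
    fragColor = vec4(c.rgb, c.a * u_fade);
    // Forward the level's per-pixel depth so subsequent draws
    // (player, elevator) depth-test against the level correctly.
    gl_FragDepth = texture(u_levelDepth, v_uv).r;
}
```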