Properly organizing multi-pass scene rendering

I’m trying to get familiar with the concepts of ViewPort and SceneProcessor in JME3, and I’m wondering how they should be configured to achieve the following.

I have three layers of geometry:

  1. Very far, opaque
  2. Very far, translucent, shaders require depth map from layer 1
  3. Near (opaque and translucent)

The first two layers may interleave with respect to their depth order, whereas layer 3 can always be rendered on top of the result of the first two.

What I’ve managed to understand so far is that in JME3 one would render the far objects (layers 1 and 2) in a separate viewport that is rendered before the main viewport, which handles layer 3. Hence, for the rest of this question we can forget about layer 3.

I’ve thought of the following process to properly render layers 1 and 2 within the pre-viewport:

  1. Render the far opaque objects to framebuffer 1, which consists of colorbuffer 1 and depth texture 1 (see the setup sketch below).
  2. Copy colorbuffer 1 to colorbuffer 0 of the main framebuffer 0.
  3. Render the far translucent geometry, whose funky shaders are supplied with depth texture 1, to the main framebuffer 0.

Then proceed with layer 3, i.e. the main viewport, which continues rendering on top of the main framebuffer.
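
For concreteness, here is a minimal sketch of how framebuffer 1 from step 1 might be set up in JME3. The class name and the idea of passing in a fixed resolution are just assumptions for illustration; the resolution should match the pre-viewport’s camera.

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class FarPassTarget {

    // Builds "framebuffer 1": colorbuffer 1 as a readable texture plus depth texture 1.
    public static FrameBuffer create(int width, int height) {
        Texture2D colorTex = new Texture2D(width, height, Format.RGBA8);
        Texture2D depthTex = new Texture2D(width, height, Format.Depth);

        FrameBuffer fb = new FrameBuffer(width, height, 1);
        fb.setColorTexture(colorTex); // colorbuffer 1, copied to framebuffer 0 in step 2
        fb.setDepthTexture(depthTex); // depth texture 1, fed to the translucent shaders in step 3

        // The textures can be retrieved again via fb.getColorBuffer().getTexture()
        // and fb.getDepthBuffer().getTexture().
        return fb;
    }
}
```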

My question is: How to realize the above process in JME3?

My approach is to use a SceneProcessor to realize step 1, i.e. render the opaque bucket to a separate framebuffer. I think switching framebuffers and drawing one large quad, textured with colorbuffer 1, will do the trick for step 2 (see the sketch below). But then, for step 3, I will have to render only the translucent geometry, and I don’t see a way to do that. Is it perhaps possible to exchange the viewport’s “main scene processor” with a custom one that only renders the translucent bucket? If not, how could one realize step 3?
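
For step 2 I picture something like the following (just a sketch; the helper and the colorTex parameter are made up, mimicking the fullscreen-quad approach the post processors use):

```java
import com.jme3.asset.AssetManager;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.Texture2D;
import com.jme3.ui.Picture;

public class FullscreenCopy {

    // Draws a fullscreen quad textured with colorbuffer 1 into whatever framebuffer is
    // currently set as output (intended to be the main framebuffer 0).
    public static void copy(RenderManager rm, ViewPort vp, AssetManager assetManager,
                            Texture2D colorTex, int width, int height) {
        Picture quad = new Picture("copy colorbuffer 1");
        quad.setTexture(assetManager, colorTex, false);
        quad.setWidth(width);
        quad.setHeight(height);
        quad.updateGeometricState();

        // Render the quad with an orthographic projection, then restore the camera.
        rm.setCamera(vp.getCamera(), true);
        rm.renderGeometry(quad);
        rm.setCamera(vp.getCamera(), false);
    }
}
```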

Thanks in advance.

I’m not sure why you can’t just render both far layers using a regular viewport that already renders opaque and transparent objects as separate passes. Then use a second viewport (with depth clear) for the near stuff.
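
Roughly like this (just a sketch, the names are made up):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.math.ColorRGBA;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Node;

public class TwoViewportsExample extends SimpleApplication {

    private final Node farRoot = new Node("far root"); // layers 1 and 2; rootNode holds layer 3

    public static void main(String[] args) {
        new TwoViewportsExample().start();
    }

    @Override
    public void simpleInitApp() {
        // Pre-view for the far geometry; opaque and transparent buckets are
        // already rendered as separate passes within this one viewport.
        ViewPort farView = renderManager.createPreView("far view", cam);
        farView.setClearFlags(true, true, true);
        farView.setBackgroundColor(ColorRGBA.Black);
        farView.attachScene(farRoot);

        // Main viewport: keep the colors of the far pass, but clear depth so
        // the near geometry always ends up on top.
        viewPort.setClearFlags(false, true, false);
    }

    @Override
    public void simpleUpdate(float tpf) {
        // Scenes attached to extra viewports have to be updated manually.
        farRoot.updateLogicalState(tpf);
        farRoot.updateGeometricState();
    }
}
```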

Thanks for the reply, @pspeed. The problem I see with your approach: How am I supposed to pass the depth map of the opaque geometry to the translucent shaders?

As far as I know, one has to switch the framebuffer, because it’s not possible to draw to a framebuffer whose bound depth texture is being sampled at the same time. Is this wrong?

JME is already doing this. It renders all of the buckets into the same framebuffer, depth and all.

Or maybe I’m misunderstanding something… why would you need to pass the depth map to the translucent shaders? Just for normal depth buffer processing or because the shaders need to query depth on their own?

You will need to define what you mean by a translucent object. What kind of effect are you trying to achieve? Alpha blending? Additive blending? Refraction?
The first two can be handled by the GPU’s blending pipeline. The last one requires a shader to sample the framebuffer, which is an undefined operation if it is the same framebuffer being drawn to.
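
For the first two, the standard render state setup is all you need, e.g. (sketch):

```java
import com.jme3.material.Material;
import com.jme3.material.RenderState.BlendMode;
import com.jme3.renderer.queue.RenderQueue.Bucket;
import com.jme3.scene.Geometry;

public class BlendSetup {

    // Plain alpha (or additive) blending handled entirely by the GPU pipeline.
    public static void makeBlended(Geometry geom, Material mat) {
        mat.getAdditionalRenderState().setBlendMode(BlendMode.Alpha); // or BlendMode.Additive
        geom.setMaterial(mat);
        geom.setQueueBucket(Bucket.Transparent); // sorted back to front by the engine
    }
}
```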

Yes, they do.

Indeed, that is what I thought, and it is why I wrote down my approach above, where I switch framebuffers.

Just to be sure… what is it that your shaders are doing with depth?

It might be that the FilterPostProcessor can be coaxed into helping here. I’ve used it for something similar just because it already handles the framebuffer issues for me.

I need to perform volume rendering whose shaders query the depth values.

Nevertheless, the depth is not the problem. The problem is that I need to switch framebuffers between buckets. My approach was to use a SceneProcessor, but this requires the default rendering not to draw the opaque bucket again, and that is what I don’t know how to achieve.

I render volumes with my drop shadow filter (part of the Monkey Trap sample game). It’s a FilterPostProcessor filter that renders regular geometry (which needs depth) and then simply does nothing in the actual filter pass. It’s convenient for me because it already handles the buffer issues.

…you could also look at what FilterPostProcessor does.

OK, then you’re doing something very similar to soft particles; this will require an approach along the lines of what you’re thinking of.

You can see how soft particles are implemented here:

Essentially, the filter hijacks the depth texture at the end of the filter chain and then passes it on to all the soft particles in the render queue. In the future we might expose more convenient ways of passing such information, e.g. via custom material globals.
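
If you end up writing your own filter for this, the rough shape would be something like the sketch below. This is not the actual soft-particle code: the pass-through material used for the screen pass, the way the depth texture is fetched from the scene buffer, and the “DepthTexture” parameter name of your volume material are all assumptions and may differ between engine versions.

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Texture;

public class DepthForwardingFilter extends Filter {

    private final Material volumeMaterial; // material of the translucent volume geometry

    public DepthForwardingFilter(Material volumeMaterial) {
        super("DepthForwardingFilter");
        this.volumeMaterial = volumeMaterial;
    }

    @Override
    protected boolean isRequiresDepthTexture() {
        return true; // tells the FilterPostProcessor to render depth into a texture
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager rm, ViewPort vp, int w, int h) {
        // A do-nothing filter still needs a material for its screen pass; here a
        // fade material at full opacity is (ab)used as a simple pass-through.
        material = new Material(manager, "Common/MatDefs/Post/Fade.j3md");
        material.setFloat("Value", 1f);
    }

    @Override
    protected Material getMaterial() {
        return material;
    }

    @Override
    protected void postFrame(RenderManager rm, ViewPort vp,
                             FrameBuffer prevFilterBuffer, FrameBuffer sceneBuffer) {
        // Assumption: when a filter requires depth, the scene buffer carries the depth
        // as a texture; hand it to the volume shader under whatever name it defines.
        Texture depth = sceneBuffer.getDepthBuffer().getTexture();
        if (depth != null) {
            volumeMaterial.setTexture("DepthTexture", depth);
        }
    }
}
```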

Using a filter is something I haven’t considered yet. This might be the way it was meant to be done. I will look deeper into this later.

And for reference, here is my DropShadowFilter that renders a bunch of boxes that need to know depth.
https://code.google.com/p/jmonkeyplatform-contributions/source/browse/trunk/zay-es/examples/MonkeyTrap/src/trap/filter/DropShadowFilter.java
