Post Processing Filter - multiple depth passes and general issues

I have been stuck on this for a while, across a few projects; it’s one of those stupid little things that’s probably trivial to fix, but I just can’t see the solution. After a few days of going in circles I need a nudge in the right direction :slight_smile:



I’m working on a postFilter that extends com.jme3.post.Filter.



I need to perform several render passes on several different scenes, every frame (so I can render, say, a box by itself, then the rest of a scene minus the box, then combine them; or similar to how the Water filter renders a reflection then uses it as a texture). I think I have this much under control.



First question: I have attempted to run these passes in initFilter, postQueue and preFrame; where is the correct place to be doing this?



My main issue: running and then reading back a depth pass for these renders. I can never get a value in getOutputFrameBuffer().getDepthBuffer().getTexture() for anything other than the final pass (for which there’s probably a very simple explanation). I just can’t find a way to get access to a depth texture.



this is the sort of crap I have been trying, but I’m beginning to think I might just be completely missing the point…



[java]depthPass = new Pass() {
    @Override
    public boolean requiresDepthAsTexture() { return true; }
};
depthPass.init(renderManager.getRenderer(), w, h, Format.RGBA8, Format.Depth);[/java]





I have been trying to use depth textures because I thought:

  • this is hardware accelerated, is this correct?
  • it’s more efficient than using the inPosition vector, is that correct?



    I’m really trying to avoid using an inPosition vector, but I may end up having to.



    ooh, another question(s):
  1. are the render techniques (used by forceTechnique) pre-defined, or can we add our own?
  2. do you need to “define” or “register” a new Technique anywhere before using it?
  3. is it as simple as, in the material: Technique MyTechnique { and renderManager.setForcedTechnique(“MyTechnique”)?





    Thanks for your help

    James

Hey James,


thetoucher said:
First question: I have attempted to run these passes in initFilter, postQueue and preFrame; where is the correct place to be doing this?

Certainly not in initFilter, because this method is called once when you add the filter to the processor and is never called again.
preFrame is called before any operation is done to compute and render the frame.
postQueue is called once all the update/culling operations have been done in the viewport, BUT before the actual rendering of the frame.

If you are using a completely different scene, you can do it in preFrame; if you want to render the actual scene but with a specific technique, do it in postQueue.

thetoucher said:
My main issue: running and then reading back a depth pass for these renders. I can never get a value in getOutputFrameBuffer().getDepthBuffer().getTexture() for anything other than the final pass (for which there's probably a very simple explanation). I just can't find a way to get access to a depth texture.

this is the sort of crap I have been trying, but I'm beginning to think I might just be completely missing the point…

[java]depthPass = new Pass() {
@Override
public boolean requiresDepthAsTexture() { return true; }
};
depthPass.init(renderManager.getRenderer(), w, h, Format.RGBA8, Format.Depth);[/java]


I have been trying to use depth textures because I thought:
- this is hardware accelerated, is this correct?
- it's more efficient than using the inPosition vector, is that correct?

ATM you can't render hardware depth for a pre pass. There was this feature at first, but since I never used it in Filters I removed it. I can add it again of course if you need it.
The usage will be to just grab the texture by calling pass.getDepthTexture() and linking it to a material parameter (the method is still there, btw...).
requiresDepthAsTexture for a pass means this pass needs the main scene depth texture as an input.

To answer your questions: yes, it's hardware accelerated; it's the actual hardware depth buffer in a texture.
I don't really get the inPosition question, but I suspect you need the position of each fragment in world space or in view space.
There are 2 main techniques to do this:
- rendering the position to a texture (position buffer) and just fetching the texture later: this is the fastest way of doing it, BUT it requires a lot of bandwidth, because you'll have to store the position (assuming it's in view space, as it usually is) in an RGBA32F texture. This means the hardware needs to support floating-point textures and that you'll have a 128-bits-per-pixel texture in memory; at high resolutions it can be huge, around 50 MB.
- reconstructing the position from the depth buffer: the depth buffer is a 24-bits-per-pixel map and its rendering is "free", so you need less bandwidth, but you'll have some GPU overhead due to the calculation to reconstruct the position for each fragment. That's what I do for SSAO; SSAO.frag has a convenient method to reconstruct view-space position from depth using the "one frustum corner" method (there are several methods, but I won't go into that much detail).
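To make the second option concrete, here is a minimal plain-Java sketch of the "one frustum corner" idea (illustrative only, not jME API or the actual SSAO.frag code; the corner values and method names are made up for the example). Given the fragment's linear depth and its screen coordinates, you scale the eye-to-far-plane ray by the depth:

```java
// Sketch of "one frustum corner" view-space position reconstruction.
// Assumptions (not from jME): linearDepth in [0,1] is view-space z over the far
// plane distance; (u, v) are the fragment's texture coordinates in [0,1];
// (cornerX, cornerY) is the view-space top-right corner of the far plane.
public class DepthReconstruct {
    static float[] reconstructViewPos(float linearDepth, float u, float v,
                                      float cornerX, float cornerY, float far) {
        // Build the ray from the eye through this fragment to the far plane:
        // x and y sweep from -corner to +corner across the screen, z is -far
        // (camera looks down the negative z axis in view space).
        float rayX = (u * 2f - 1f) * cornerX;
        float rayY = (v * 2f - 1f) * cornerY;
        float rayZ = -far;
        // Scaling the far-plane ray by the linear depth lands on the surface.
        return new float[] { rayX * linearDepth, rayY * linearDepth, rayZ * linearDepth };
    }

    public static void main(String[] args) {
        // Fragment at screen centre, halfway to a far plane at distance 100:
        float[] p = reconstructViewPos(0.5f, 0.5f, 0.5f, 57.7f, 43.3f, 100f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 0.0 0.0 -50.0
    }
}
```

In a real filter this arithmetic runs per fragment in the .frag shader, with the corner passed in as a material parameter.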


thetoucher said:
ooh, another question(s) :
1) are the render techniques (used by forceTechnique) pre-defined, or can we add our own?
2) do you need to "define" or "register" a new Technique anywhere before using it ?
3) is it as simple as, in material: Technique MyTechnique { and renderManager.setForcedTechnique("MyTechnique") ?

1) Yes and no: they are predefined in the provided shaders (Unshaded and Lighting), but if you have your own lighting shader, you can add whatever technique you want. Actually, see a technique as a way to render the scene with a different shader.
Predefined techniques are Shadow and Glow, and also Gbuf, which is intended for future deferred rendering but does not completely work atm.
2) Yes, it needs to be added to the j3md file of your material definition and... you have to implement the shaders associated with it :p
3) Almost :p Look at the BloomFilter and how the Glow technique is used.

Check the last revision: the Filter Pass can now render a depth texture.

There is a new init method



[java]init(Renderer renderer, int width, int height, Format textureFormat, Format depthBufferFormat, int numSamples, boolean renderDepth)[/java]



Set renderDepth to true and then you’ll have the pass depth texture by calling pass.getDepthTexture().


thank you so much man! perfect answer.



I still can’t get it working; when I try to use the depth texture as a texture, I always end up with something like this…

http://i.imgur.com/P7wfQ.png



any more ideas? I’m guessing it’s not rendering anything and that is just leftover screen garbage.



Thanks again.

mhh ok, maybe it does not work… I’m gonna experiment a bit and report back.



A workaround would be to render the depth yourself during the pass using multi render target or rendering in another pass…

I have somewhere an algorithm that packs the 24-bit depth float values into an RGB8 texture. You’ll have some overhead, but it’s handy and avoids the use of 32-bit float textures. I’ll search my hard drive tonight and post it here

@nehon have you had a chance to look into this at all? is there anything I can have a look into to help out/figure it out myself?



I’m so painfully close to a breakthrough on some code that is getting hung up by this tiny thing :confused:

I’m sorry, I didn’t.



I’ll look into it asap



The thing is, it should work, so maybe I made a mistake initializing the depth texture in the Pass.

You can look into Filter.java, the Pass class sources; look at the texture init, maybe there is an error in the texture format or something like that…

Thanks man :slight_smile: Keep in mind the error may well be mine.



Another question: gl_FragDepth. I’m starting to toy with it and getting some unexpected results… is gl_FragDepth OK for us to be using?

gl_FragDepth outputs the depth like gl_FragColor outputs the color, but it’s only useful if you want to write the depth yourself.

http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_FragDepth.xml

Here you want the hardware buffer.



Btw, I was thinking: do you clear the depth before rendering your pass?

It may be filled with the depth of the previous frame, so if you are not rendering the entire scene (just some objects in the scene), that might be the problem.

Use renderer.clearBuffers(boolean color, boolean depth, boolean stencil) with true for the 3 booleans, just before rendering your pass.

I have tried clearing the depth before and after my pass, and every pass; still nothing. If I clear too much, I get all mixed-up screen fragments (from whatever was behind the window when it launched); if I don’t clear enough, I get the screenshot above, which is a mix of my output and complete garbage.



Do you know if gl_FragCoord.z gets filled in correctly? I have read what it is supposed to do; for me it’s always zero. Is this something set/adjusted by jME, or is it handled elsewhere (GL, hardware, LWJGL)?



Why is this so damn tough? All I’m trying to do is have 2 depth values per fragment: the real depth, and the depth of another object/scene that is being occluded in a normal render. Perhaps there is another way…

Ok I quickly tested it yesterday evening and there is obviously an issue.



I need more time to figure that out.

You should really use some workaround for now and render the depth yourself in another pass: in the vertex shader you need to compute the position in view space (inPosition * g_WorldViewMatrix), then the depth value is viewSpacePos.z / viewSpacePos.w (we need a value from 0.0 to 1.0). Put that in a varying, then in the frag shader just do gl_FragColor = vec4(depth);

where depth is the name of your varying.
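For clarity, that workaround's arithmetic can be transcribed into plain Java (a sketch only; the real code belongs in the .vert/.frag shader pair, and the class/method names here are made up, not jME API):

```java
// Plain-Java transcription of the workaround's math: inPosition transformed by
// g_WorldViewMatrix, then z divided by w as described above. Illustrative only.
public class DepthWorkaround {
    // Apply a row-major 4x4 matrix to the point (x, y, z, 1),
    // like the shader's inPosition * g_WorldViewMatrix.
    static float[] transform(float[] m, float x, float y, float z) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[4 * row] * x + m[4 * row + 1] * y
                     + m[4 * row + 2] * z + m[4 * row + 3];
        }
        return out;
    }

    // viewSpacePos.z / viewSpacePos.w: the value you would put in the varying
    // and then write in the frag shader with gl_FragColor = vec4(depth).
    static float depthOf(float[] pos) {
        return pos[2] / pos[3];
    }

    public static void main(String[] args) {
        float[] identity = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1
        };
        float[] p = transform(identity, 0f, 0f, 0.5f);
        System.out.println(depthOf(p)); // 0.5
    }
}
```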


Ok great, thanks a lot @nehon. Let me know if there is anything I can be doing to help out



Cheers, that’s what I’ve already been doing in the interim.

Ok, sorry for the delay, but that’s fixed; now you should properly get the depth rendered from additional passes.

no worries, that’s great thanks. I will test it out over the next couple of days, cheers :slight_smile: