Hey James,
thetoucher said:
First question: I have attempted to run these passes in initFilter, postQueue and preFrame, where is the correct place to be doing this?
Certainly not in initFilter, because this method is called once when you add the filter to the processor and is never called again.
preFrame is called before any operation is done to compute and render the frame.
postQueue is called once all the update/culling operations have been done on the viewport, BUT before the actual rendering of the frame.
If you are rendering a completely different scene, you can do it in preFrame; if you want to render the actual scene with a specific technique, do it in postQueue.
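To make those hooks concrete, here is a bare-bones sketch of a Filter subclass (method signatures and visibility may differ slightly between jME3 versions, so treat it as a sketch rather than the reference API):
[java]import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;

public class MyPrePassFilter extends Filter {

    private RenderManager renderManager;
    private ViewPort viewPort;

    @Override
    public void initFilter(AssetManager manager, RenderManager rm, ViewPort vp, int w, int h) {
        // called once, when the filter is added to the FilterPostProcessor:
        // create your materials and passes here and keep the references you need later
        renderManager = rm;
        viewPort = vp;
    }

    @Override
    public void preFrame(float tpf) {
        // called before any update/cull/render work on the frame:
        // render a completely different scene here
    }

    @Override
    public void postQueue(RenderQueue queue) {
        // called after update/culling but before the frame is rendered:
        // re-render the actual scene here with a forced technique
    }

    @Override
    public Material getMaterial() {
        // material used for the final full-screen pass of this filter
        return material;
    }
}[/java]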
thetoucher said:
My main issue: running and then reading back a depth pass for these renders, I can never get a value in getOutputFrameBuffer().getDepthBuffer().getTexture(); for anything other than the final pass (for which there's probably a very simple explanation). I just can't find a way to get access to a depth texture.
This is the sort of crap I have been trying, but I'm beginning to think I might just be clean missing the point....
[java]depthPass = new Pass() {
    @Override
    public boolean requiresDepthAsTexture() { return true; }
};
depthPass.init(renderManager.getRenderer(), w, h, Format.RGBA8, Format.Depth);[/java]
I have been trying to use depth textures because I thought:
- this is hardware accelerated, is this correct?
- it's more efficient than using the inPosition vector, is that correct?
ATM you can't render hardware depth for a pre-pass. There was this feature at first, but since I never used it in Filters I removed it. I can add it again of course if you need it.
The usage will be to just grab the texture with pass.getDepthTexture() and link it to a material parameter (the method is still there, btw...).
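Once it's back in, the usage would look something like this (the "DepthTexture" parameter name is just an example here, use whatever your material definition declares):
[java]// a pre-pass with a depth attachment
Pass depthPass = new Pass();
depthPass.init(renderManager.getRenderer(), w, h, Format.RGBA8, Format.Depth);

// ... render the pre-pass ...

// then bind its depth texture to the material that needs it
material.setTexture("DepthTexture", depthPass.getDepthTexture());[/java]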
requiresDepthAsTexture for a pass means this pass needs the main scene's depth texture as an input.
To answer your questions: yes, it's hardware accelerated; it's the actual hardware depth buffer in a texture.
I don't really get the inPosition question, but I suspect you need the position of each fragment in world space or in view space.
There are two main techniques to do this:
- rendering the position to a texture (position buffer) and just fetching it later: this is the fastest way of doing it, BUT it requires a lot of bandwidth, because you have to store the position (assuming it's in view space, it usually is) in an RGBA32F texture. This means the hardware needs to support floating-point textures and that you'll have a 128-bit-per-pixel texture in memory; at high resolutions that can be huge, around 50 MB.
- reconstructing the position from the depth buffer: the depth buffer is a 24-bit-per-pixel map and its rendering is "free", so you need less bandwidth, but you'll have some GPU overhead due to the per-fragment calculation to reconstruct the position. That's what I do for SSAO; SSAO.frag has a convenient method to reconstruct the view-space position from depth using the "one frustum corner" method (there are several methods, but I won't go into that much detail). The Java-side setup for it is sketched below.
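For the "one frustum corner" method, the Java side only has to hand the far-plane corner and the near/far values to the material. Something along these lines (the parameter names follow the SSAO filter's convention, adapt them to your own shader; w and h are the screen dimensions):
[java]Camera cam = viewPort.getCamera();

// view-space position of the top-right corner of the far plane
float farY = (cam.getFrustumTop() / cam.getFrustumNear()) * cam.getFrustumFar();
float farX = farY * ((float) w / (float) h);

material.setVector3("FrustumCorner", new Vector3f(farX, farY, cam.getFrustumFar()));
material.setVector2("FrustumNearFar", new Vector2f(cam.getFrustumNear(), cam.getFrustumFar()));[/java]
The fragment shader then linearizes the sampled depth and scales the corner vector by it to get the view-space position back.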
thetoucher said:
ooh, another question(s) :
1) are the render techniques (used by forceTechnique) pre-defined, or can we add our own?
2) do you need to "define" or "register" a new Technique anywhere before using it ?
3) is it as simple as, in material: Technique MyTechnique { and renderManager.setForcedTechnique("MyTechnique") ?
1) Yes and no: they are predefined in the provided shaders (Unshaded and Lighting), but if you have your own lighting shader you can add whatever technique you want. Actually, see a technique as a way to render the scene with a different shader.
Predefined techniques are Shadow and Glow, plus Gbuf, which is intended for future deferred rendering but does not completely work atm.
2) Yes, it needs to be added to the j3md file of your material definition and... you have to implement the shaders associated with it :p
3) Almost :p Look at the Bloom Filter and how the Glow technique is used.
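From memory, the interesting part of the Bloom filter's postQueue looks roughly like this (simplified; preGlowPass is the pre-pass the filter created in initFilter, check the actual source for the details):
[java]// render the scene again into the pre-pass buffer, with the Glow technique forced
renderManager.getRenderer().setFrameBuffer(preGlowPass.getRenderFrameBuffer());
renderManager.getRenderer().clearBuffers(true, true, true);
renderManager.setForcedTechnique("Glow");
renderManager.renderViewPortQueues(viewPort, false);
renderManager.setForcedTechnique(null);

// restore the viewport's output buffer for the rest of the frame
renderManager.getRenderer().setFrameBuffer(viewPort.getOutputFrameBuffer());[/java]
Your own technique works the same way: force it, render the view port queues into your pass's frame buffer, then set the forced technique back to null.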