A Depth Blur Filter…

On the path to something else, I created a Depth Blur Filter. It could probably be improved but it’s pretty nifty.



It looks like this in Mythruna (which, ironically, will probably not be using it ;)):





That picture has the focus distance set to 64 and the range set to 20 or something, so both nearer and farther objects are out of focus while things around that distance stay sharp.



Also, I think the code makes a reasonably good template for any depth-based filter, because reverse-engineering what’s in the z-buffer can be tricky. I’ve tried to be as illuminating as possible, but I’m a bit of a shader noob, too.



Here is the DepthBlurFilter.java file:

[java]
package yourPackageHere;

import com.jme3.asset.AssetManager;
import com.jme3.post.Filter;
import com.jme3.material.Material;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.Renderer;
import com.jme3.renderer.ViewPort;

/**
 *  A post-processing filter that performs a depth range
 *  blur using a scaled convolution filter.
 *
 *  @version   $Revision: 779 $
 *  @author    Paul Speed
 */
public class DepthBlurFilter extends Filter
{
    private float focusDistance = 50f;
    private float focusRange = 10f;
    private float blurScale = 1;

    // These values are set internally based on the
    // viewport size.
    private float xScale;
    private float yScale;

    public DepthBlurFilter()
    {
        super( "Depth Blur" );
    }

    /**
     *  Sets the distance at which objects are purely in focus.
     */
    public void setFocusDistance( float f )
    {
        this.focusDistance = f;
    }

    public float getFocusDistance()
    {
        return focusDistance;
    }

    /**
     *  Sets the range to either side of focusDistance where the
     *  objects go gradually out of focus.  Less than focusDistance - focusRange
     *  and greater than focusDistance + focusRange, objects are maximally "blurred".
     */
    public void setFocusRange( float f )
    {
        this.focusRange = f;
    }

    public float getFocusRange()
    {
        return focusRange;
    }

    /**
     *  Sets the blur amount by scaling the convolution filter up or
     *  down.  A value of 1 (the default) performs a sparse 5x5 evenly
     *  distributed convolution at pixel level accuracy.  Higher values skip
     *  more pixels, and so on until you are no longer blurring the image
     *  but simply hashing it.
     *
     *  The sparse convolution is as follows:
     *  1 0 1 0 1
     *  0 1 0 1 0
     *  1 0 x 0 1
     *  0 1 0 1 0
     *  1 0 1 0 1
     *  Where 'x' is the texel being modified.  Setting blur scale higher
     *  than 1 spaces the samples out.
     */
    public void setBlurScale( float f )
    {
        this.blurScale = f;
    }

    public float getBlurScale()
    {
        return blurScale;
    }

    @Override
    public boolean isRequiresDepthTexture()
    {
        return true;
    }

    @Override
    public Material getMaterial()
    {
        material.setFloat( "FocusDistance", focusDistance );
        material.setFloat( "FocusRange", focusRange );
        material.setFloat( "XScale", blurScale * xScale );
        material.setFloat( "YScale", blurScale * yScale );

        return material;
    }

    @Override
    public void preRender( RenderManager renderManager, ViewPort viewPort )
    {
    }

    @Override
    public void initFilter( AssetManager assets, RenderManager renderManager,
                            ViewPort vp, int w, int h )
    {
        material = new Material( assets, "MatDefs/DepthBlur.j3md" );
        xScale = 1.0f / w;
        yScale = 1.0f / h;
    }

    @Override
    public void cleanUpFilter( Renderer r )
    {
    }
}
[/java]



    And the material definition:

[java]
MaterialDef Depth Blur {

    MaterialParameters {
        Int NumSamples
        Int NumSamplesDepth
        Texture2D Texture
        Texture2D DepthTexture
        Float FocusRange;
        Float FocusDistance;
        Float XScale;
        Float YScale;
    }

    Technique {
        VertexShader GLSL100:   Common/MatDefs/Post/Post.vert
        FragmentShader GLSL100: MatDefs/DepthBlur.frag

        WorldParameters {
            WorldViewProjectionMatrix
        }
    }

    Technique FixedFunc {
    }
}
[/java]



    And the .frag where the ‘magic’ happens:

[java]
uniform sampler2D m_Texture;
uniform sampler2D m_DepthTexture;
varying vec2 texCoord;

uniform float m_FocusRange;
uniform float m_FocusDistance;
uniform float m_XScale;
uniform float m_YScale;

vec2 m_NearFar = vec2( 0.1, 1000.0 );

void main() {

    vec4 texVal = texture2D( m_Texture, texCoord );

    float zBuffer = texture2D( m_DepthTexture, texCoord ).r;

    //
    // z_buffer_value = a + b / z;
    //
    // Where:
    //  a = zFar / ( zFar - zNear )
    //  b = zFar * zNear / ( zNear - zFar )
    //  z = distance from the eye to the object
    //
    // Which means:
    // zb - a = b / z;
    // z * (zb - a) = b
    // z = b / (zb - a)
    //
    float a = m_NearFar.y / (m_NearFar.y - m_NearFar.x);
    float b = m_NearFar.y * m_NearFar.x / (m_NearFar.x - m_NearFar.y);
    float z = b / (zBuffer - a);

    // Above could be the same for any depth-based filter

    // We want to be purely focused right at
    // m_FocusDistance and be purely unfocused
    // at +/- m_FocusRange to either side of that.
    float unfocus = min( 1.0, abs( z - m_FocusDistance ) / m_FocusRange );

    if( unfocus < 0.2 ) {
        // If we are mostly in focus then don't bother with the
        // convolution filter
        gl_FragColor = texVal;
    } else {
        // Perform a wide convolution filter and we scatter it
        // a bit to avoid some texture look-ups.  Instead of
        // a full 5x5 (25-1 lookups) we'll skip every other one
        // to only perform 12.
        //  1 0 1 0 1
        //  0 1 0 1 0
        //  1 0 x 0 1
        //  0 1 0 1 0
        //  1 0 1 0 1
        //
        // You can get away with 8 just around the outside but
        // it looks more jittery to me.

        vec4 sum = vec4(0.0);

        float x = texCoord.x;
        float y = texCoord.y;

        float xScale = m_XScale;
        float yScale = m_YScale;

        // In order from lower left to right, depending on how you look at it
        sum += texture2D( m_Texture, vec2(x - 2.0 * xScale, y - 2.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 0.0 * xScale, y - 2.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x + 2.0 * xScale, y - 2.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 1.0 * xScale, y - 1.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x + 1.0 * xScale, y - 1.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 2.0 * xScale, y - 0.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x + 2.0 * xScale, y - 0.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 1.0 * xScale, y + 1.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x + 1.0 * xScale, y + 1.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 2.0 * xScale, y + 2.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x - 0.0 * xScale, y + 2.0 * yScale) );
        sum += texture2D( m_Texture, vec2(x + 2.0 * xScale, y + 2.0 * yScale) );

        sum = sum / 12.0;

        gl_FragColor = mix( texVal, sum, unfocus );

        // I used this for debugging the range
        //gl_FragColor.r = unfocus;
    }
}
[/java]



    Improvements: it currently hard-codes the near/far clip values to what I use for my camera. These could potentially be passed in or maybe there’s a better way to get them.
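
Something like this might work (an untested sketch; it assumes adding a "Vector2 NearFar" parameter to the .j3md and replacing the hard-coded vec2 in the .frag with "uniform vec2 m_NearFar;"):

[java]
// Untested sketch: pull near/far from the viewport's camera instead of
// hard-coding them in the shader. Assumes the .j3md gains a
// "Vector2 NearFar" material parameter and the .frag declares
// "uniform vec2 m_NearFar;" instead of the local constant.
// (Also needs imports for com.jme3.math.Vector2f and com.jme3.renderer.Camera.)
@Override
public void initFilter( AssetManager assets, RenderManager renderManager,
                        ViewPort vp, int w, int h )
{
    material = new Material( assets, "MatDefs/DepthBlur.j3md" );
    xScale = 1.0f / w;
    yScale = 1.0f / h;

    Camera cam = vp.getCamera();
    material.setVector2( "NearFar",
                         new Vector2f( cam.getFrustumNear(), cam.getFrustumFar() ) );
}
[/java]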



Also, the blur might look better if it were clamped radially around the center of the screen. Anything from focusDistance - focusRange to focusDistance + focusRange is in focus, which means that the focused area is a big strip across the screen centered at that distance.
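
One way that might work (an untested sketch; the scale and the squared falloff are just guesses) would be to fold the distance from the screen center into the unfocus value:

[java]
// Untested sketch: add a radial term so the sharp region becomes a
// circle around the screen center instead of a depth strip. The 2.0
// scale and the squared falloff are arbitrary tuning choices.
float radial = length( texCoord - vec2(0.5, 0.5) ) * 2.0;
float unfocus = min( 1.0, abs( z - m_FocusDistance ) / m_FocusRange + radial * radial );
[/java]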



    Let me know if I’ve done anything the hard way.

Note: it would also be pretty trivial to modify it to set the focus distance to the depth at the center of the screen and get a dynamic depth blur based on what the player was looking at.

Replace the ‘unfocus’ calculation with this to get dynamic depth:



[java]
//float unfocus = min( 1.0, abs( z - m_FocusDistance ) / m_FocusRange );
float dynamicDepth = b / (texture2D( m_DepthTexture, vec2(0.5, 0.5) ).r - a);
float unfocus = min( 1.0, abs( z - dynamicDepth ) / m_FocusRange );
[/java]

That’s depth of field!! That’s nice, this was on my to-do list :smiley:

I’ll add it to the core, thanks!

I’m using it for my underwater effect… combined with a clamped depth fog filter. Here are some shots:








Nice !!!

Also, what could be cool is some sinusoidal distortion applied to the texCoord when you fetch a sample.
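
Something like this might do it (an untested sketch; it assumes adding Time to the j3md’s WorldParameters and declaring the g_Time uniform at the top of the .frag, and the constants are arbitrary):

[java]
// Untested sketch: wobble the sample coordinate with a time-based sine
// before fetching. Assumes "Time" is listed in the j3md WorldParameters
// and "uniform float g_Time;" is declared at the top of the shader;
// 0.003 and 40.0 are arbitrary tuning values.
vec2 wobble = vec2( sin( g_Time * 2.0 + texCoord.y * 40.0 ),
                    cos( g_Time * 2.0 + texCoord.x * 40.0 ) ) * 0.003;
vec4 texVal = texture2D( m_Texture, texCoord + wobble );
[/java]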

Yeah… that was the original plan. This looked so cool on its own I was going to leave it a while. :slight_smile:

Awesome, thanks for oss’ing!

Very pretty! I like.

That’s just Brilliant !

Hi pspeed,



very nice underwater effect :smiley:

how do you handle the “above/under” water stuff?

With an AppState?

I want to have this for bloxel too … I guess the water elements should have no “physics”, right?



Regards

Andreas

For underwater, I reduce the acceleration due to gravity to simulate buoyancy. I also turn jump into swim.



For the above to below transition, I just wait until the head is sufficiently close to the water before turning on the effect. A lot of games do this and usually it’s hard to notice unless you are looking for it… though mine still needs some tweaking in that regard.
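
In rough pseudo-jME terms it boils down to something like this (an untested sketch, not my actual code; head, waterLevel, normalGravity, player, and depthBlur are all made-up names):

[java]
// Untested sketch with hypothetical names: fake buoyancy by reducing
// gravity while underwater, and only enable the blur filter once the
// head is close enough to the surface. The 0.25 threshold is arbitrary.
boolean underwater = head.y < waterLevel + 0.25f;
player.setGravity( underwater ? normalGravity * 0.1f : normalGravity );
depthBlur.setEnabled( underwater );
[/java]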

And it’s now in the latest revision: DepthOfFieldFilter

See TestDepthOfField

Thanks Paul!!


Can’t wait to swim in the sea of mythruna :slight_smile:

Yaaay, awesome effect, thanks for doing that! I’m putting that into all my code now, muhahahah! (well, almost all)

I’ll make sure it’s mentioned on the effects wiki page, too.

Hey there, I’m working with shaders myself at the moment and, looking at your code, I was wondering about something. I have little experience but this bit struck me:

[java]
if( unfocus < 0.2 ) {
    // If we are mostly in focus then don't bother with the
    // convolution filter
    gl_FragColor = texVal;
} else {
[/java]



I cannot guarantee anything, but as far as I understand the theory behind processing on the graphics card, this if statement may even slow your system down instead of speeding it up.

Correct me if I’m wrong, as this is exactly the reason I pose this question, but the graphics card starts a ‘warp’ every time it can, thus starting something like 16 (or even more on modern cards) copies of the same little process together. Then, running these processes simultaneously, it goes through your whole image. Whenever an if statement occurs, the processes that return true in this statement progress through the if-part, whereas the other processes are kept on hold until the if statement is over. After that, the else bit is started for the rest of the processes, while the processes that were previously true for the if statement are kept on hold.



So, now I’d like to know: seeing there is nothing going on after the if and else statements, does the GPU behave the way I just described, or can it partially start on the next warp when the if statement is over, thus ending those processes and freeing up computational power? As far as I understood, it can do the same computation in parallel many times, but when computations start to deviate from each other, all processes with different commands are paused.



Edit: In addition to what is stated above, I would like to add:

[java]
float xScale = m_XScale;
float yScale = m_YScale;
[/java]

Is there any reason for doing that? The variables are not changed; I wonder if it costs memory space or computation time to do this.



No offense of course, just questioning why it is done this way as I want to learn and possibly improve things :wink:

The if probably can and should be removed. For really tight depth ranges it can create some stability in the image that simulates a more conical equation but for most uses it probably just gums things up, as you say. I haven’t done any testing, though. Also, when I originally wrote the block, I wasn’t sure what size of a convolution kernel I was going to need and might have ended up with one way more expensive than the one there. …though 12 texture look-ups isn’t free, either.
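
Removing it would basically mean always running the convolution and letting the final mix() sort it out, along these lines (untested sketch; the 12 taps are the same ones as in the original else branch):

[java]
// Untested sketch of the branchless version: every fragment takes the
// same path, so there is no divergence; mix() blends by "unfocus".
vec4 sum = vec4(0.0);
// ... the same 12 texture2D( m_Texture, ... ) taps as before ...
sum = sum / 12.0;
gl_FragColor = mix( texVal, sum, unfocus );
[/java]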



The xScale, yScale thing is just an iteration artifact. Originally, those were values set right in the code so I could test different ideas and then I just set them to the values I exposed to the ‘user’ when I finally added those uniforms. I also find them easier to type. :wink: I’d expect the compiler to sort it out either way… and at least this way the code has the option of varying them from what the user set them to if it decides to for some reason in the future. Though that’s a pretty weak explanation.

Howkay, thanks for the information :slight_smile:

Looks great! Thanks.

Hi,

I’m new to this area and I would like to try out this effect. Is there a particularly simple sample to test it?

Thanks in advance.

:roll: