Applying post effect to only selected objects + Screen Space Displacement Filter

Hi Monkeys,

After a reasonably long hiatus, I have returned =), and for my returning contribution, I present a Screen Space Distortion Filter (SSDF):

This is a test scene rendering showing the effect in action in several forms: a simple static box, an animated Sinbad and a very basic particle fire. The scene along with the full source is available for download (~686kb). Keep in mind this is still a work in progress, so the filter has not been completely finalised. However, the source is a good learning tool to accompany the explanation below, so I will offer it up in its current form. Oh, and the video was recorded using an older version of the source, so the download doesn’t exactly match the video (mostly just the debug view).

(fire close up)

Some other possible applications would be explosion shock waves, Matrix style bullet trails, water splashes on the screen, glass effects, water effects, Predator camouflage, heat haze… I’m sure you can think of more.

I can’t take full credit for this, @Nehon had a lot of input and helped me optimise the performance significantly.

The filter itself is pretty trivial. It uses a system similar to normal mapping: the red channel is used for the x (or u) offset, from 0 - 255, with 128 (half red) being 0% offset, 0 (no red) being -100% offset, and 255 (full red) being 100% offset. The green channel is used for the y (or v) offset, and the blue channel is a multiplier. These details are still being toyed with and finalised.
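As a rough illustration of the encoding just described, here is a small, self-contained Java sketch. The class and method names are mine, not from the filter source, and note that the 8-bit range is slightly asymmetric around 128, so 0 maps to exactly -100% while 255 lands just under +100%:

```java
public class DistortionEncoding {

    // Decode an 8-bit channel value (0-255) into a signed offset,
    // where 128 maps to 0% offset and 0 maps to -100% offset.
    // (255 maps to 127/128, just shy of +100%, due to the asymmetric range.)
    public static float channelToOffset(int channel) {
        return (channel - 128) / 128.0f;
    }

    // Decode a full RGB sample: red = x (u) offset, green = y (v) offset,
    // blue = a multiplier scaling both offsets.
    public static float[] decode(int r, int g, int b) {
        float multiplier = b / 255.0f;
        return new float[] {
            channelToOffset(r) * multiplier,
            channelToOffset(g) * multiplier
        };
    }
}
```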

For the test, I used a simple variation on the ColoredTextured material. How the colour for the offset is decided (i.e. what material is used) really doesn’t matter all that much. A solid colour would work and would give a uniform offset across an entire object. A simple Fresnel-based shader would work great, as it could be used to make the edges distort more than the middle. Animated textures, like in the particle material, could also be used, and would be effective on objects that aren’t moving, to give the displacement some movement (ripples on the surface of a pond, for example).
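For instance, the Fresnel-style edge weighting mentioned above boils down to a small piece of math. This Java sketch (names and signature are illustrative, not from the filter source) computes a weight that rises toward 1 at grazing angles, which could then scale the red/green offsets so edges distort more than the middle:

```java
public class FresnelWeight {

    // Approximate Fresnel term: where the surface normal is nearly
    // perpendicular to the view direction (edges), the weight approaches 1;
    // where the surface faces the camera head-on, it approaches 0.
    // Both vectors are assumed to be normalised 3-component directions.
    public static float weight(float[] normal, float[] viewDir, float power) {
        float dot = normal[0] * viewDir[0]
                  + normal[1] * viewDir[1]
                  + normal[2] * viewDir[2];
        float facing = Math.max(0.0f, dot);
        return (float) Math.pow(1.0f - facing, power);
    }
}
```

A higher `power` concentrates the distortion into a thinner rim around the silhouette.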

Applying Post-Process Filter effects to only selected objects

The key to getting the filter to work was finding a way to isolate certain objects in the scene and apply a filter only to them. The technique I present should be handy for many shader developers, and is something I have wanted the ability to do for a while now. I will try to explain how I achieved this…

Step 1: Find a way to flag or tag the items we are trying to isolate.
For this I used a custom Material Technique. The idea is that I can create a new material, or modify an existing one, to include this new technique; then, when it comes to post, I can single out only the objects using a material with this technique via a forced-technique render pass. For the SSDF, I created a technique called “ScreenSpaceDistortion”.
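To make the idea concrete, a material definition carrying such a technique might look roughly like this. This is only a sketch — the MaterialDef name, parameter names and shader paths here are placeholders, not the actual files from the download:

```
MaterialDef SimpleDistortion {
    MaterialParameters {
        Texture2D ColorMap
    }
    Technique {
        // default technique: picked up by the normal scene render
        VertexShader GLSL100:   Shaders/SimpleTextured.vert
        FragmentShader GLSL100: Shaders/Blank.frag
    }
    Technique ScreenSpaceDistortion {
        // only rendered when the filter forces this technique
        VertexShader GLSL100:   Shaders/SimpleTextured.vert
        FragmentShader GLSL100: Shaders/SimpleTextured.frag
    }
}
```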

Step 2: Make sure the objects using the new material don’t get rendered with the rest of the scene.
This isn’t always strictly required for all effects, but for simplicity, I do this with my new material. To achieve this, I simply changed the default Technique fragment shader to use a new shader, “Blank.frag”, which is empty:

void main() {
}
(I tried to simply remove the default Technique altogether, but this causes all sorts of problems; a change to the core would be required to get this to work.)

At this point, our scene can be rendered without the objects we are trying to isolate in post, so we move on to the Filter itself.

    Step 3: Set up a render pass to render only the objects we wish to apply the filter to (those using a material with the new technique).

    This is done with a simple render pass, set up in the initFilter method.

    distortPass = new Pass();
    distortPass.init(renderManager.getRenderer(), w, h, Format.RGB8, Format.Depth, 1, true);

    Step 4: Set up our final composition shader, and pass the colour and depth information from the render pass to the shader.

    material = new Material(manager, "MatDefs/SSDFinal.j3md");
    material.setTexture("DistortionTex", distortPass.getRenderedTexture());
    material.setTexture("DistortionDepth", distortPass.getDepthTexture());

    Step 5: Render only our target objects.

    Within postQueue:

    renderManager.getRenderer().setFrameBuffer(distortPass.getRenderFrameBuffer()); // we want to render to the distortPass Frame Buffer
    renderManager.getRenderer().clearBuffers(true, true, true); // clear the buffer
    renderManager.setForcedTechnique("ScreenSpaceDistortion"); // set our forced technique
    renderManager.renderViewPortQueues(viewPort, false); // do the render
    renderManager.setForcedTechnique(null); // clear our forced technique so we don't screw up the rest of the application

    Step 6: Apply our effect (or filter) to the selected target objects.

    This is done within the final composition shader (SSDFinal.j3md in this case), which has access to four key textures:

    uniform sampler2D m_Texture; // original scene colour
    uniform sampler2D m_DepthTexture; // original scene depth
    uniform sampler2D m_DistortionTex; // target object colours
    uniform sampler2D m_DistortionDepth; // target objects depths

    It’s now in the hands of the shader developer to work their magic.

    Step 7: Compose the final render by merging the original scene with the new effect render.

    In the case of the distortion filter, as with most filters, it is important to retain the depth order, so foreground objects occlude (hide) background objects. A depth test is achieved by using a step function in the final shader (basically a simple less than / greater than comparator).

    This is how it looks in SSDFinal.frag:

    gl_FragColor = mix(texture2D(m_Texture, displacementCoord), origColor, step(sceneDepth, distortionDepth));

    but a more generic version would be:

    gl_FragColor = mix(texture2D(m_Texture, texCoord), texture2D(m_CustomObjectPassTex, texCoord), step(texture2D(m_DepthTexture, texCoord), texture2D(m_CustomObjectPassDepth, texCoord)));

    … which basically says: at a point on the screen (texCoord), compare the depth of the original scene render with the depth of our selected target objects render (step…), and display whichever is closer to the camera (mix…).
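The step/mix selection can be sanity-checked outside of GLSL. This small Java sketch (my own illustrative names, not part of the filter) mirrors the GLSL step and mix built-ins for a single channel, and shows the composite picking whichever render is closer to the camera:

```java
public class DepthComposite {

    // GLSL step(edge, x): 0.0 if x < edge, otherwise 1.0.
    static float step(float edge, float x) {
        return x < edge ? 0.0f : 1.0f;
    }

    // GLSL mix(a, b, t): linear blend; with t restricted to 0 or 1
    // it degenerates into a branchless select between a and b.
    static float mix(float a, float b, float t) {
        return a * (1.0f - t) + b * t;
    }

    // Pick the displaced colour when the distortion pass is closer to the
    // camera (smaller depth) than the original scene; otherwise keep the
    // original colour, so foreground geometry occludes the effect.
    public static float composite(float displacedColor, float origColor,
                                  float sceneDepth, float distortionDepth) {
        return mix(displacedColor, origColor, step(sceneDepth, distortionDepth));
    }
}
```

Restricting t to 0 or 1 is exactly why the shader uses mix with step instead of an if: the select is branchless.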

    Step 8: ???

    Step 9: Profit!

    That is about it. This technique may not be the best solution, or the cleanest, but it works well and is pretty flexible, so I hope you can find some use for it.

    Feel free to chip in with any feedback or ideas. I hope this can help some of you better understand how post-processing works, and how to leverage it to your advantage =)



    Cool stuff. I could only give you one thumbs up but I would have given 50 or so. :slight_smile:

    Welcome back buddy we missed you big time :wink:

    Great work! I’ve been thinking about this kind of filter as well =)

    about the empty shader: will that empty shader still cause draw calls for geometries that should not be rendered?

    @kwando said: about the empty shader: will that empty shader still cause draw calls for geometries that should not be rendered?
    Yes, unfortunately. With a small modification, JME could just not render a geometry when no technique is selected and there is no default technique, and that would fix the issue. Right now it throws an error: “no default technique on the material”. Not sure what’s the better approach… removing this error can lead to silent failure when you “forgot” to make a default technique.

    “no default technique on the material”… This error and I have a rough history together =P

    I think it is a valid use case to have geometries/things in your scene graph that you want to render in a separate pass (suppress the default technique) and I’m not afraid of silent errors (in this case).

    But a better approach might be to change something in the default technique block, like replacing the definition block with the keyword Empty or adding a skip directive inside the block. That way default rendering will be opt-out not opt-in =)

    Technique Default Empty;

    …it’s too bad there isn’t a way to define our own buckets… :wink: (sorry, inside joke)




    idk, or we could have a more flexible bucket system to decide exactly what you want to render and when. And optionally reuse the depth buffer of the scene.
    We talked about it with Paul, and the ideas will end up in a design… We’ll just have to convince Kirill that it’s useful :smiley:

    Edit : uh…I was answering to @kwando , but Paul ninjaed me :stuck_out_tongue:


    Yeah, custom buckets would be really awesome… last time I saw it discussed here, dreams of a flexible bucket system were shot to pieces…

    Good luck with your convincing.

    screen space displblablabla…, this can only be effin’ Predator camouflage!!! and you know it.

    p.s. why didn’t you use Jaime? If you were missing some animations I’m sure you could just cozy up to Rémy and give him something to think about.

    I understand how you’re applying this filter, and I’m using the instructions given on my own project. However, it’s always easiest to understand code that you can walk through. Is there any way either the OP or someone who downloaded the .zip could throw up a new link? It’d be much appreciated…


    The download link is no longer available. Could you share the code you used in those vids again? I would like to learn to make that kind of distortion, and that code would be a good reference, given that it’s already done for jME3.

    Sorry guys.

    I have updated the download link in the original post.



    It works almost perfectly. There is a problem (at least for my use): if there is an object in front of the effect (between the camera and the object with the distortion effect), it is also distorted. Do you know a fix to get rid of this issue?

    Anyway, thanks for the code ;).