Apply various shaders to an image

In my application I have a quad with video files playing on it. During playing the videos, I want to apply various shaders to this quad material to make various VFX for the video, like distortions, scanlines, color shifts, overlay images, morphing and other stuff. The change of the shaders is to be controlled manually or by an AI, or programmed on a timeline (so it is rather arbitrary). So, assuming that I have all the frag shaders I want, then the question:

What would be the best approach to doing so? How do I change shaders on the fly? How do I better approach this task with JME and the material system?

PS: The video is played with the help of JavaCV. Currently the JavaCV-based player dumps video frames into a JME Image that is embedded in a Texture2D, which is set as the ColorMap of a Material using the Unshaded shader on the Quad.

I am ready to take a look at radically different approaches, like maybe using FrameBuffers or something, if that fits well with jME's way of doing things.

I imagine there are several ways you can accomplish this. I'm assuming that you want to be able not only to switch from one effect to the next, but also to stack multiple effects on the same frame.

If your video quad is the only thing being displayed in the viewport, then the easiest way to manage this would probably be to write different post processors that work with your shaders, and then just add, remove, and stack those effects however you like. Again, that's probably the easiest way to go about it, especially since it sounds like you've already written the fragment shaders.

However, a more efficient method from a run-time perspective would be to create different fragment shaders, each containing the code for one or more visual effects. So you might have a shader that produces a distortion effect, another that produces a color shift effect, and another that produces both a distortion and a color shift effect. Once you have all those shaders written, you would just swap out the material on your quad when the desired time arrives.
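As a rough, engine-agnostic illustration of the swapping idea (the material asset names and effect names here are hypothetical, not part of jME), you could keep one prebuilt material per effect combination and look the right one up when the time arrives:

```java
import java.util.Map;
import java.util.Set;

public class EffectMaterials {
    // Hypothetical material definitions, one per effect combination.
    // In jME you would load each Material once at startup and then call
    // geom.setMaterial(...) with the looked-up material when the effect changes.
    static final Map<Set<String>, String> MATERIALS = Map.of(
            Set.of(), "MatDefs/PlainVideo.j3md",
            Set.of("distort"), "MatDefs/Distort.j3md",
            Set.of("colorshift"), "MatDefs/ColorShift.j3md",
            Set.of("distort", "colorshift"), "MatDefs/DistortColorShift.j3md");

    static String materialFor(Set<String> enabledEffects) {
        // fall back to the plain video material for unknown combinations
        return MATERIALS.getOrDefault(enabledEffects, "MatDefs/PlainVideo.j3md");
    }
}
```

The lookup itself can be driven manually, by your AI, or by a timeline, since it is just a function of the currently enabled effect set.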

Oh, I did not mention, the quad is not the only thing in there, it is just one of the objects… and also there can be several quads :smile:

So, as I understand it, you recommend creating a material for each shader. These materials must be able to assign a texture color map to the object, just like Unshaded does, and apply the desired fragment effect to it, am I right? (But then it won't be possible to combine them, I guess.)

Again, several ways to accomplish this.

One would be to, as stated before, write different shaders that apply a different combination of the effects you’d like. So one shader for distortion, one shader for color shift, one shader for distortion and color shift. Then assign those shaders to different materials and swap out the materials as desired.

Another approach would be to render the quad to an off-screen buffer multiple times applying different shaders with each pass then render with the final shader to your visible viewport.

Yet another approach would be to write a single shader that contains all of your visual effects and use a variable in that shader to switch different effects on and off. Here you wouldn’t need to swap materials at all.

For instance:

uniform sampler2D m_DiffuseMap;

uniform bool m_Distort;
uniform bool m_ColorShift;

varying vec2 texCoord;

void main() {
    vec4 col = texture2D(m_DiffuseMap, texCoord);
    if (m_Distort) {
        //distort effect
    }
    if (m_ColorShift) {
        //colorshift effect
    }
    gl_FragColor = col;
}
Edit: Now of course that doesn't really detail everything that would be necessary, depending on what your effects do. For instance, the distort effect might modify the UV coordinates, in which case you should check whether distort is enabled before doing the texture grab, so that you never need more than one texture fetch:

vec4 col = vec4(0.0, 0.0, 0.0, 1.0);
if (m_Distort) {
    col = texture2D(m_DiffuseMap, texCoord * m_Distortion);
} else {
    col = texture2D(m_DiffuseMap, texCoord);
}

I think I am aware of the approaches 1 and 3 that you suggest, but the approach 2 is the most interesting from the POV of flexibility! It feels like a dedicated rendering pipeline for the image! A dream! Could you please tell me more about basic steps on how to do the task with offscreen buffers (or give links for reading)? Never worked with offscreen buffers in JME before…

Node vidRoot = new Node("Video Root Node");
Camera vidCam = cam.clone(); //clone the default jME camera
vidCam.setLocation(new Vector3f(0f, 10f, 0f));
vidCam.setRotation(new Quaternion(new float[] {1.5708176978793961078628152543852f, 0f, 0f}));

ViewPort vidVP = rm.createPreView("vidVP", vidCam); //rm is the RenderManager
vidVP.setClearFlags(true, true, true);
vidVP.attachScene(vidRoot); //pre-view scenes are not updated for you, so update vidRoot's logical/geometric state each frame

vidCam.resize(256, 128, true); //pixel size of your desired output texture
vidCam.setFrustum(1f, 20f, -128, 128, 64, -64); //modify these units according to the world size of your video quad (near, far, left, right, top, bottom)

FrameBuffer vidBuff = new FrameBuffer(256, 128, 1); //pixel size of the output texture; samples must be at least 1

Texture2D vidTex = new Texture2D(256, 128, Image.Format.RGBA8);
vidBuff.setColorTexture(vidTex); //the pre-view renders into this texture
vidVP.setOutputFrameBuffer(vidBuff);

FilterPostProcessor fpp = new FilterPostProcessor(assetManager);

DistortFilter distortFilter = new DistortFilter();

ColorShiftFilter csFilter = new ColorShiftFilter();

fpp.addFilter(distortFilter);
fpp.addFilter(csFilter);
vidVP.addProcessor(fpp);


You'll create two quads: one attached to the vidRoot node, to which the unmodified video is rendered, and one attached to the rootNode, assuming your visible scene is using the rootNode. The quad attached to the rootNode should use a standard Unshaded material with the vidTex texture.

In this case you'll need to write different post processors that apply the desired effects; in the example above, DistortFilter and ColorShiftFilter are those filters. Once the filters are added you can just enable and disable them with filter.setEnabled(boolean enable);
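For reference, the fragment shader behind a filter like the hypothetical DistortFilter above might look something like this minimal sine-wave distortion sketch. m_Texture and texCoord follow the conventions of jME's post-processing vertex shader, while m_Strength and m_Time are assumed material parameters you would define yourself in the filter's material definition:

```glsl
uniform sampler2D m_Texture; //the scene as rendered so far, provided by the filter framework
uniform float m_Strength;    //assumed parameter: distortion amplitude
uniform float m_Time;        //assumed parameter: animates the wobble

varying vec2 texCoord;

void main() {
    //offset the UVs with a sine wave to get a simple wobble distortion
    vec2 uv = texCoord;
    uv.x += sin(uv.y * 30.0 + m_Time) * m_Strength;
    gl_FragColor = texture2D(m_Texture, uv);
}
```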

The effects will be applied in the order they were added to the FilterPostProcessor.


Thank you very much! I am now heading to implement this technique!

No problem noncom. Obviously that was a very generic implementation so be sure to let me or the community know if you need any help with writing the post processors or anything else. Best of luck to ya!
