Post filter just for some objects in scene

Is it possible to apply the screen space post filters (com.jme3.post.filters) to just some objects in the scene?



For example, what if I want some objects to be blurred or bloomed, or if I want to draw CartoonEdged objects together with normal “photorealistic” graphics? I was looking at the examples in jme3test.post, but it looks like the filters are always applied to the whole scene.



I think such a thing should be possible, for example using the stencil buffer (with two passes: one for the objects the effect is applied to, and one for the others)

… and I’ve seen similar effects in many games.



I was trying to look into the code of com.jme3.post.filters, but maybe it would be better to ask a simple question first.

Oh, I put this topic in “Troubleshooting – General” instead of “Troubleshooting – Graphics”, sorry for that (it seems I’m not able to change the category now via “Edit topic”).

Yes this is possible.



Forgive me here… trying to remember this off the top of my head. You are going to need to render the objects separately, forcing a GeometryList that contains the items you want to apply the effects to. You’ll have to set up a separate FrameBuffer to render to… and you may have to port the post-processing filters unless there is a way of forcing the FrameBuffer they’re using.



If there is any easier way to do this, I’m sure someone will say something sooner or later, but with my limited knowledge, this is the way I would approach it.
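Something along these lines, maybe (a rough, untested sketch; the class name, the forced material and the sizes are placeholders, not existing engine code):

```java
import com.jme3.material.Material;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.queue.GeometryList;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class OffscreenGeomPass {

    /** Renders only the given geometries into an offscreen texture, optionally with a
     *  forced material; call this from a SceneProcessor's postQueue/postFrame. */
    public Texture2D renderSpecialObjects(RenderManager rm, GeometryList specialGeoms,
                                          Material forcedMat, int width, int height) {
        // offscreen target that will receive only the selected geometries
        Texture2D offTex = new Texture2D(width, height, Format.RGBA8);
        FrameBuffer offBuffer = new FrameBuffer(width, height, 1);
        offBuffer.setDepthBuffer(Format.Depth);
        offBuffer.setColorTexture(offTex);

        rm.getRenderer().setFrameBuffer(offBuffer);
        rm.getRenderer().clearBuffers(true, true, true);
        rm.setForcedMaterial(forcedMat);       // force a material for this pass (may be null)
        rm.renderGeometryList(specialGeoms);   // render only the listed geometries
        rm.setForcedMaterial(null);
        rm.getRenderer().setFrameBuffer(null); // back to the default framebuffer

        // offTex now contains just those objects and can be fed to a filter material
        return offTex;
    }
}
```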

t0neg0d > thanks for the answer,

If you’ve already used this approach in some of your code, it would be great if you could post a piece of it as an example.

I haven’t done much more than research ways of doing this. However… I believe someone was working on a projected texture renderer, which looks like it took this approach.



I know it at least uses a forced material rendered against a specific GeometryList, and I believe the final output is added as a post-processing filter. It is still someone’s WIP, but it would be a great place to start with how to go about doing what you want. Search the forum for “projected texture” and you should be able to find it.



Speaking of which… it is probably a good idea to note that when rendering specific geometries, you’ll have to figure out a way of rendering a mask of the objects closer to your camera that might be covering them. I unfortunately have NO clue how to overcome this issue. Hopefully someone else will have that answer.

An easier solution would be to use several viewports, with filters on each of them.

Render them to framebuffers and combine them at the end on a full-screen quad. You have to copy the depth buffer from one framebuffer to another if you want a correct depth test to be performed.
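A minimal sketch of that idea, assuming it sits in a SimpleApplication’s simpleInitApp() and that specialNode is a placeholder node holding only the objects that should get the effect (untested; the composition and depth copy still have to be wired up as described above):

```java
import com.jme3.post.FilterPostProcessor;
import com.jme3.post.filters.BloomFilter;
import com.jme3.renderer.ViewPort;

// Second viewport, rendered after the main one, sharing the same camera
ViewPort effectView = renderManager.createPostView("EffectView", cam);
effectView.setClearFlags(false, true, false); // keep the main color, clear depth
effectView.attachScene(specialNode);          // only the "special" objects live here

// Filters added to this processor affect only what this viewport renders
FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
fpp.addFilter(new BloomFilter());
effectView.addProcessor(fpp);

// For correct occlusion against the main scene you still need the main pass depth,
// e.g. renderer.copyFrameBuffer(mainSceneBuffer, thisViewsBuffer, true) each frame.
```

Note that a scene attached to an extra viewport this way is not updated by SimpleApplication, so you have to call specialNode.updateLogicalState() and updateGeometricState() yourself each frame.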

@nehon definitely… any ideas on how to overcome the ordering issue? I was thinking about finishing up the projected texture renderer, and I’m not really sure how to tackle that.

Just perform a depth test; the hardware does it for you if you give it the depth buffer from the previous passes. That’s why I talked about copying the depth buffer.



I do that in the FilterPostProcessor for the TranslucentBucketFilter.
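For reference, copying between framebuffers boils down to a call like this (a sketch, not the actual FilterPostProcessor code; the buffer names are placeholders):

```java
import com.jme3.renderer.Renderer;
import com.jme3.texture.FrameBuffer;

void shareDepth(Renderer renderer, FrameBuffer sceneBuffer, FrameBuffer filterBuffer) {
    // the last argument controls whether the depth buffer is copied along with the color
    renderer.copyFrameBuffer(sceneBuffer, filterBuffer, true);
}
```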


@nehon Sorry to drudge up an old topic, but I’m having trouble seeing where you are talking about (for the example). Is it in the TranslucentBucketFilter? If so, I’m clueless… because I am not seeing it :(



Maybe I should ask a few simple questions while I am at it, to help me understand the process a bit more and stop me from asking silly questions.



I look at a SceneProcessor and a Filter of any kind and they look pretty much the same to me. Both have the bulk of their code in the postFrame method… a SceneProcessor receives a single frameBuffer (I think the main buffer), while a Filter receives the previous filter’s frameBuffer and the main frameBuffer.



What am I missing here?

When is a sceneProcessor called?

What decent reading is there on JMEs render process?

The idea of the depth buffer makes sense to me… sort of… but only if the frameBuffer was storing multiple renders. Is it? I’m fairly sure it is not. /shrug



I guess the area I am struggling with mostly would be… (in the case of the texture projection)



It would seem that having the scene rendered prior to adding the projected texture is pointless, because everything is going to need to be rendered again, separated out to multiple viewPorts to ensure that any objects that may potentially cover the projected texture end up in front of the texture. Is this assumption correct?



And if it is… I think the code that Survivor wrote (as cool as it is) would need to be reworked taking a different approach.



Thanks for putting up with me.



Oh… one last note… if I remember correctly, doesn’t everything in the translucent bucket end up in front of everything else?

Phew… that’s a lot of questions.

@t0neg0d said:
@nehon Sorry to drudge up an old topic, but I'm having trouble seeing where you are talking about (for the example). Is it in the TranslucentBucketFilter? If so, I'm clueless... because I am not seeing it :(

That's in the postFrame of the TranslucentBucketFilter, but I do it the other way around: instead of copying the depth of the scene into a new FrameBuffer, I copy the previous filter's color buffer (and not the depth) into the main scene framebuffer (which has the depth buffer we want).
Then I render the translucent bucket into this framebuffer, and the depth test is done against the depth values of the main scene.

I don't remember exactly why I do it like that; maybe I had issues copying only the depth.
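As a sketch of that sequence (not the actual TranslucentBucketFilter source; the buffer and viewport names are placeholders), inside the filter's postFrame it is roughly:

```java
// copy only the color from the previous filter's pass into the scene framebuffer,
// which still holds the main scene's depth
renderer.copyFrameBuffer(previousFilterBuffer, sceneFrameBuffer, false);
renderer.setFrameBuffer(sceneFrameBuffer);
// render the translucent bucket; its depth test now runs against the scene depth
renderManager.renderTranslucentQueue(viewPort);
```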

@t0neg0d said:
I look at a SceneProcessor and a Filter of any kind and they look pretty much the same to me. Both have the bulk of their code in the postFrame method... a SceneProcessor receives a single frameBuffer (I think the main buffer), while a Filter receives the previous filter's frameBuffer and the main frameBuffer.

What am I missing here?

The goal of the FilterPostProcessor is that most of the pre-processing and post-processing is handled there. The filter only has to handle its own pass.
It also handles a lot of things that would have to be duplicated in each filter if filters were scene processors (framebuffer initialization, resizing of the screen, ordering of filters, and so on...).

This way the Filter code is a lot simpler. The goal was to let people like you develop their own Filters without having to think about the "plumbing" behind it and focus on the effect itself. It's complicated enough right now, so having to handle all the framebuffer stuff could have been overwhelming for some.
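To give an idea of how little is left to write, a minimal custom filter looks roughly like this (a sketch; "MatDefs/MyEffect.j3md" is a made-up material definition, not a stock asset):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;

public class MyEffectFilter extends Filter {

    private Material material;

    public MyEffectFilter() {
        super("MyEffectFilter");
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager renderManager,
                              ViewPort vp, int w, int h) {
        // the FilterPostProcessor takes care of the framebuffers, resizing, ordering...
        material = new Material(manager, "MatDefs/MyEffect.j3md");
    }

    @Override
    protected Material getMaterial() {
        // the FPP renders a full-screen quad with this material,
        // feeding it the result of the previous pass
        return material;
    }
}
```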

@t0neg0d said:
When is a sceneProcessor called?
What decent reading is there on JMEs render process?

The scene processor's preFrame() method is called before any culling or queuing is done on the main scene.
The scene processor's postQueue() is called after queuing has been done (queue buckets are sorted and shadow buckets are populated).
The scene processor's postFrame() is called after all the buckets (except the translucent one) of the main scene have been rendered.
There is no doc about that; I've been meaning to write one for a long time but never found the time, and I must admit writing docs is not my favorite thing... but I'll do it eventually.
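A skeleton processor showing where those hooks fall (a sketch against the jME 3.0-era SceneProcessor interface):

```java
import com.jme3.post.SceneProcessor;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;

public class SkeletonProcessor implements SceneProcessor {

    private boolean initialized = false;

    public void initialize(RenderManager rm, ViewPort vp) {
        initialized = true;         // called once, when the processor is attached to a ViewPort
    }

    public void reshape(ViewPort vp, int w, int h) {
        // called when the screen/viewport is resized
    }

    public boolean isInitialized() {
        return initialized;
    }

    public void preFrame(float tpf) {
        // before any culling or queuing of the main scene
    }

    public void postQueue(RenderQueue rq) {
        // queues are filled and sorted; a good place to render your own GeometryList
    }

    public void postFrame(FrameBuffer out) {
        // every bucket except Translucent has been rendered into "out"
    }

    public void cleanup() {
        initialized = false;        // called when the processor is removed
    }
}
```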

@t0neg0d said:
The idea of the depth buffer makes sense to me… sort of… but only if the frameBuffer was storing multiple renders. Is it? I’m fairly sure it is not. /shrug

No, it's not. A FrameBuffer holds a color buffer and a depth buffer (and a stencil buffer, but that's not relevant to this conversation). When you render a scene into a framebuffer, the color is written to the color buffer and the depth to the depth buffer. The depth test (does the pixel I'm currently rendering belong to an object that is not occluded by one I have already rendered?) is done against the depth buffer.

So, if you are planning on rendering two parts of the scene separately into two separate framebuffers, the second render has to account for the depth of the first render, because there might be objects in the second render that should be behind objects in the first render.
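For reference, a framebuffer that keeps its depth as a texture (so a later pass can reuse or copy it) can be set up like this (a sketch, e.g. inside simpleInitApp(); the size is a placeholder):

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

int w = 1280, h = 720;                         // placeholder size
Texture2D colorTex = new Texture2D(w, h, Format.RGBA8);
Texture2D depthTex = new Texture2D(w, h, Format.Depth);

FrameBuffer fb = new FrameBuffer(w, h, 1);
fb.setColorTexture(colorTex);                  // where the colors end up
fb.setDepthTexture(depthTex);                  // where the depth values end up
```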

@t0neg0d said:
I guess the area I am struggling with mostly would be… (in the case of the texture projection)

It would seem that having the scene rendered prior to adding the projected texture is pointless, because everything is going to need to be rendered again, separated out to multiple viewPorts to ensure that any objects that may potentially cover the projected texture end up in front of the texture. Is this assumption correct?

No, depth test again. You are mixing up two things: the rendering order and the Z position of an object in view space.
For example, opaque objects are rendered front to back, which means the nearest objects are rendered before the far ones.
Intuitively you would do it the other way around, rendering the far objects first so that any closer object would be rendered over them. But that generates a lot of overdraw and can kill the frame rate depending on the view angle.
When rendering front to back, the nearest objects are rendered first, color and DEPTH; then a far object's pixels are rendered only where they pass the depth test, i.e. where there isn't already a nearer depth value in the depth buffer.

@t0neg0d said:
Oh… one last note… if I remember correctly, doesn’t everything in the translucent bucket end up in front of everything else?

No. Same confusion as above: the translucent bucket is rendered AFTER everything else, but that doesn't mean objects in the translucent bucket can't end up behind opaque objects.

@nehon Thanks sooooo much! This clears up a ton of misconceptions and will make it much easier to determine the best approach when developing something!



So, in theory, I should be able to change the buffer the projected texture is rendered to, and the depth check will be performed for me?



HAH! More than in theory… it worked flawlessly. Can’t thank you enough. I’ll post the fix for the texture projector… this is definitely something that should be added to the core engine, if at all possible.
