Per-object Depth-of-field


I am trying to develop a visualization based on the Depth-of-field render pass. However:

-  the DepthTexture, which contains a calculated depth per pixel, should not correspond to the z distance of the geometry from the camera but to custom values.

  • the values in the DepthTexture should be the ones that are somehow defined for each single spatial (one value between 0 and 1 per spatial, representing something like: 0 = no blur, 1 = max blur).

    Some help would be much appreciated, thank you!

    Best regards,


If you want a depth for each spatial, you should render each spatial separately with render-to-texture, and for each texture find the minimum and maximum grey values to normalize them to 0…1.

That can be a solution, but it will be slow.

Thanks for your answer,

Well, not exactly what I had in mind, but that was my fault.

In the meantime (from my post until now) I managed to refine my ideas and realized that what I really wanted is blur on objects. But I want to do the blur per-object.

Let me clarify the thought: it would be great to have a BlurRenderPass that receives a node containing a set of objects. For each of them, the BlurRenderPass would blur the object according to a value defined somewhere in the object.

Other render passes could co-exist for other distinct nodes without the blur leaking into the displayed results (for the leak problem see

Best regards,


With basixs' suggestion I managed to solve my blur problem. So, for a kind of "BlurRenderPass", see

The part of the problem that remains without a solution (to my knowledge, ofc) is how to blur a set of objects by an amount specified per object. I will post this in another topic to avoid a misleading subject.

You can probably render each model with a different blur intensity; if you have a small number of models it will work fine.

Thank you for the help, but I can actually have a few hundred objects, and applying a distinct BlurRenderPass for each object may be a bit CPU demanding.

In a worst-case scenario I could have 100 BlurRenderPasses, each one set for a distinct blur amount, and have each object's parent node switched to the BlurRenderPass corresponding to the object's blur amount… somehow 100 render passes don't seem like a good idea to me…

Is there no way to add a field "objectData" to the objects to define this value?

If the answer to the above is affirmative, can this "objectData" value be made available in the .frag file?


Okay… Then you should draw the blur intensity value into a small MRT and process that MRT together with the rendered-scene MRT in the blur shader, using the read value as the weight for the blur. This is a more scalable solution and should work for you. If you don't want the blur bleeding onto front objects, make sure the neighbour value sampled in the blur shader is behind the current pixel (you will need another MRT with linear depth values for that).
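A minimal fragment-shader sketch of that idea, assuming old-style GLSL (texture2D/varying); the sampler and uniform names (m_SceneTexture, m_BlurWeightTexture, m_DepthTexture, m_PixelSize) are made up for illustration, not an existing API:

```glsl
// Variable-strength blur weighted by a per-object intensity texture,
// with a depth comparison to limit bleeding onto closer objects.
uniform sampler2D m_SceneTexture;      // rendered scene color
uniform sampler2D m_BlurWeightTexture; // per-object blur intensity (0..1)
uniform sampler2D m_DepthTexture;      // linear depth
uniform vec2 m_PixelSize;              // 1.0 / screen resolution

varying vec2 texCoord;

void main() {
    float weight      = texture2D(m_BlurWeightTexture, texCoord).r;
    float centerDepth = texture2D(m_DepthTexture, texCoord).r;

    vec4  sum   = texture2D(m_SceneTexture, texCoord);
    float total = 1.0;

    // Simple 9-tap box blur; the sampling radius scales with the
    // per-object weight read from the small MRT.
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            if (x == 0 && y == 0) continue;
            vec2 offset = vec2(float(x), float(y)) * m_PixelSize * (weight * 4.0);
            // Only accept neighbours that are not in front of this pixel,
            // so the blur does not bleed over closer objects.
            if (texture2D(m_DepthTexture, texCoord + offset).r >= centerDepth) {
                sum   += texture2D(m_SceneTexture, texCoord + offset);
                total += 1.0;
            }
        }
    }
    gl_FragColor = sum / total;
}
```

The weight MRT itself would be filled by rendering each object with a flat grey equal to its blur value, so a single full-screen pass handles any number of objects.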

This could be a bit complicated if you have never worked with screen-space shaders before; I think the depth-of-field shader will help you a lot with this.