[Solved] Can I send more information to a Post-Processor Filters

Is it possible to send something else that is object specific? Or is that information lost going through the graphics pipeline?

I’ve been trying to read all the documentation on Post-Processor Filters.
I have found that they only take in the rendered image of the screen and the depth as a texture.
The source looks like it’s at jmonkeyengine/FilterPostProcessor.java at master · jMonkeyEngine/jmonkeyengine · GitHub

This is my start at trying to learn JME3’s graphics pipeline.
My main goal is to have object-specific cartoon filter colors - at the moment I’m open to any ideas on how to do this.

A filter has a material that takes a custom fragment shader, so you can add any parameter to that material just as you would to any other material.
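For example - a sketch from memory, and the parameter names here are made up, not from any real filter - the filter’s .j3md can declare your own parameters alongside the ones the filter framework fills in, and your Filter subclass sets them like any other material parameter (e.g. material.setColor("EdgeColor", ColorRGBA.Yellow) in Java):

```
MaterialDef MyToonFilter {
    MaterialParameters {
        Texture2D Texture      // filled in by the filter framework
        Texture2D DepthTexture // filled in when the filter requests depth
        Color EdgeColor        // your own custom parameters
        Float EdgeWidth
    }
    // ... Technique block pointing at your custom fragment shader ...
}
```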

I feel like I’ve only half answered your question, though.

This sounds like you want something back. Materials never return anything.

There is a cartoon filter already made. Maybe take a look and see how far that takes you.


Yeah, sorry, forgot to mention that it only allows one colour at a time: Cartoon Edge Filter - insight needed - #2 by nehon.
Related test class: jmonkeyengine/TestCartoonEdge.java at master · jMonkeyEngine/jmonkeyengine · GitHub

I’m asking what it would take to ‘make it work’ - I’m willing to put in the effort.

Also, if it’s this: Google Code Archive - Long-term storage for Google Code Project Hosting. and I’m just dumb and couldn’t figure it out, that’s okay too. Although it looks like it only does object outlines, where I think I need edges as well.

Looking into the Glow Filter might help you, because it has a glow mode “Objects”.
What it does, IIRC, is render the scene again with a Technique named Glow - which means every shader has to support that technique - and that re-render is what produces the glow.

So for your outlines you might render only the outlined geometries again and then run your algorithm in a fragment shader. Then you can do what most games with outlines do: compare the depth of your outline to the scene depth to draw the outline only partially (namely where the object is hidden) - like a merge pass in your filter.
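A hedged GLSL sketch of what that merge pass’s fragment shader could look like - all of the uniform names here are made up for illustration, not from any real filter:

```glsl
uniform sampler2D m_Texture;        // the scene as rendered so far (hypothetical name)
uniform sampler2D m_OutlineColor;   // color from the outline-only pass (hypothetical name)
uniform sampler2D m_OutlineDepth;   // depth from the outline-only pass (hypothetical name)
uniform sampler2D m_DepthTexture;   // scene depth (hypothetical name)
varying vec2 texCoord;

void main() {
    vec4 scene = texture2D(m_Texture, texCoord);
    vec4 outline = texture2D(m_OutlineColor, texCoord);
    float sceneZ = texture2D(m_DepthTexture, texCoord).r;
    float outlineZ = texture2D(m_OutlineDepth, texCoord).r;

    // keep the outline only where it is not behind scene geometry
    if (outline.a > 0.0 && outlineZ <= sceneZ + 0.0001) {
        gl_FragColor = mix(scene, outline, outline.a);
    } else {
        gl_FragColor = scene;
    }
}
```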

Oh no, I was looking at the glow filter but didn’t understand it and felt it relied on engine-specific support.

And that’s nice to hear, as I was thinking of doing something like a merge pass.
I’m concerned, based on how the filter works, that it might be too expensive.
(My understanding of the ToonEdge filter is: it highlights places where the depth buffer jumps and puts the colour there.) I think it might need a pass per colour.

Without knowing exactly what “every shader has to support it” implies, I feel like with what I’m going for I should be okay. (No lighting or textures, hopefully - lazy, hehe.)

Just skimmed the glow filter here @ BloomFilter.java; I see it does 2 ‘extra’ render passes, and that’s how it renders the objects’ glow shapes.
I’m not sure how to do this per object though.

Also, I’m guessing that this Stack Overflow question - the depth addendum in the answer - is basically what the current CartoonEdgeFilter class does.
Are you suggesting the MultiRenderTarget comment at the bottom? (Which a quick google shows is out of my depth, haha.)

I am not experienced with that at all but:

What a ToonEdge filter should do is “edge detection” on the normals of an object. It also depends on whether you want toon edges or only object outlines. One pass per color doesn’t make sense, though.

Well: every shader you use needs this jmonkeyengine/Lighting.j3md at master · jMonkeyEngine/jmonkeyengine · GitHub
Not a big deal though.
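For reference, the Glow technique a material definition needs is short - roughly this shape, quoted from memory, so double-check it against the linked Lighting.j3md:

```
Technique Glow {
    VertexShader GLSL100:   Common/MatDefs/Misc/Unshaded.vert
    FragmentShader GLSL100: Common/MatDefs/Light/Glow.frag

    WorldParameters {
        WorldViewProjectionMatrix
    }

    Defines {
        HAS_GLOWMAP : GlowMap
        HAS_GLOWCOLOR : GlowColor
    }
}
```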

The Bloom Filter renders twice to blur the scene, because that’s what glow is, after all: a blurred color. You would do your outlines there instead.

What you are searching for is GlowMode.Objects which starts here: jmonkeyengine/BloomFilter.java at master · jMonkeyEngine/jmonkeyengine · GitHub and is somehow “extracted”.

Hmmm… I start to feel like we’re in a “How do I inject things into my stomach?” conversation where you may have gotten to an “I need this solution, how do I do it?” without stopping to check that your path along the way was correct.

What effect are you actually trying to achieve? Because it’s sounding more and more like post-processing is not at all what you want.

Edit: background on the “stomach injection” thing. A humorous look at how we often get to a place we didn’t intend through purely logical (but incorrect) jumps:

yeah, XY problem

Basically I’m trying to avoid doing any real work with textures to put lines on everything:

But if the environment and everything else is also outlined with yellow, it’s not going to work.

So you essentially want a GlowMode.Objects for the already-existing ToonShader. Might actually be a good idea to add this to the engine, I guess?

Though doing this with textures would be the most efficient way and would also allow some play. A “good” texturing tool might be able to generate this, because after all those are edges with a big angle/normal change.

Probably one “simply” needs to take this Extract pass and integrate it into the toon shader.

Is edge detection what you really want or is it wireframe (without the internal lines) that you really want?

The problem with wireframes is that everything becomes triangles, which looks way too busy. That’s okay to look at in Blender, but it’s not a particularly good look, even if it’s a pretty common one.

…and while I was typing this I just thought I could use a second wireframe as the ‘highlight’ anyway.
That does sound way easier on CPU/memory/my time (and generating those wireframes automatically is a very off-topic problem).
Edit: Oh no, this method won’t work with the silhouette edges, which are very important here.

That’s what I meant by that. Finding “edges” for a proper wireframe is just a math problem.
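The per-edge part of that math problem is small: two adjacent faces form a visible “crease” when their unit normals diverge enough, i.e. their dot product drops below the cosine of some crease angle. A minimal sketch in plain Java (plain float[] triples standing in for jME’s Vector3f):

```java
class CreaseTest {
    /**
     * Returns true when the edge shared by two faces should be drawn,
     * i.e. the unit normals n1 and n2 differ by more than the crease
     * angle whose cosine is cosThreshold.
     */
    static boolean isCreaseEdge(float[] n1, float[] n2, float cosThreshold) {
        float dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
        return dot < cosThreshold;
    }
}
```

With a threshold of 0.9 (an angle of roughly 26°), coplanar faces produce no edge while perpendicular faces do.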

Yeah, but not outside the conversation and the “do I include this edge or not” detection is similar to the “do I highlight this pixel or not” detection.

Yes, but there may be two separate problems. And the conversation is starting to yield results on what you are actually looking for.

Now we know you need the silhouette edges (which weren’t even perfect in your screenshot). Do you need perfect silhouette edges or imperfect ones like in your (hand-drawn?) example?

Edit: something to note is that even edge detection will always be a bit imperfect, too. It will make chunky edges, won’t AA right all the time, etc…

So sometimes it’s what you can live with. Here is an example of just a wireframe effect with no special mesh processing. Basically it’s rendering a slightly larger wireframe “inside out”:

Edit 2: note: this effect works well for Jaime because he has a decent number of triangles to work with.

Yay math - but yes, this should be an ‘on game load’ cost at worst.

Wow, nice find. I didn’t notice the toon edges weren’t on the back-facing faces (which seems like a not-quite-bug in the filter). So I don’t know about that honestly; I’ll have to see when I get something working.

Also, come to think of it, there shouldn’t be many viewport edges which aren’t actual edges themselves…
which may give a lot more merit to the wireframe method, so thanks.

About the Jaime image: how do the wireframe edges on the chest and the closest fingers not show up?
Is that because it’s drawn using CullMode.Front?

Ideally I would draw some other major lines, such as the eyebrow ridge.

You have given me a lot to check, so thanks. I have a long way to go yet.

FWIW, here is the code that sets up the material for that Jaime image:
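(The snippet itself didn’t survive the copy/paste, so here is my own hedged sketch of an inverted-hull wireframe setup in jME - this is not pspeed’s actual code, and the scale factor and color are guesses:)

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.material.RenderState;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;

class OutlineSketch {
    // Build a slightly larger copy of the geometry, drawn as a wireframe
    // with front faces culled, so only the "inside out" hull shows.
    static Geometry makeOutline(Geometry geom, AssetManager assetManager) {
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Yellow);
        mat.getAdditionalRenderState().setWireframe(true);
        mat.getAdditionalRenderState().setFaceCullMode(RenderState.FaceCullMode.Front);

        Geometry outline = geom.clone();
        outline.setMaterial(mat);
        outline.setLocalScale(outline.getLocalScale().mult(1.02f)); // a bit bigger than the original
        return outline;
    }
}
```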


So thanks for the help.

Implemented the logic you were talking about:

Which looks close enough, although smoother lines are something I’ll want later.

Code (this is not great code, but good enough for a test): (EDITED 2019-05-03)

public static Mesh findAllHighlightEdges(Mesh m) {
	final float DIFF = 0.9f;
	// Steps for this method:
	// read edges->triangles into a map: edge -> list<tri>
	// iterate through the edges
	// collect any edge with only one triangle (an open edge)
	// collect any edge with a large difference between its triangle normals
	Map<Edge, List<Triangle>> edgeMap = new HashMap<Edge, List<Triangle>>();

	int count = m.getTriangleCount();
	System.out.println("Triangle count: " + count);
	for (int i = 0; i < count; i++) {
		Triangle tri = new Triangle();
		m.getTriangle(i, tri);

		// add the triangle's three edges
		addToMap(edgeMap, new Edge(tri.get1(), tri.get2()), tri);
		addToMap(edgeMap, new Edge(tri.get3(), tri.get2()), tri);
		addToMap(edgeMap, new Edge(tri.get1(), tri.get3()), tri);
	}

	System.out.println("Edge count: " + edgeMap.size());
	List<Edge> edges = new LinkedList<>();
	for (Entry<Edge, List<Triangle>> a : edgeMap.entrySet()) {
		// detect open edges (only have a triangle on one side)
		if (a.getValue().size() == 1) {
			edges.add(a.getKey());
		}
		// detect large normal differences
		if (a.getValue().size() == 2) {
			// has 2 triangles
			Triangle tri1 = a.getValue().get(0);
			Triangle tri2 = a.getValue().get(1);
			Vector3f normal1 = tri1.getNormal();
			Vector3f normal2 = tri2.getNormal();
			if (normal1.dot(normal2) < DIFF) {
				edges.add(a.getKey());
			}
		}
	}

	Vector3f[] vertices = new Vector3f[edges.size() * 2];
	int i = 0;
	for (Edge e : edges) {
		vertices[i] = e.a;
		vertices[i + 1] = e.b;
		i += 2;
	}

	Mesh newMesh = new Mesh();
	newMesh.setMode(Mesh.Mode.Lines);
	newMesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(vertices));
	newMesh.updateBound();
	return newMesh;
}
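The addToMap helper isn’t shown above; it’s just the usual “append to the list stored under a key, creating the list on first use” idiom. A minimal generic sketch in plain Java:

```java
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

class MapUtil {
    // Append a value to the list stored under key, creating the list on first use.
    static <K, V> void addToMap(Map<K, List<V>> map, K key, V value) {
        map.computeIfAbsent(key, k -> new LinkedList<>()).add(value);
    }
}
```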

The Edge class is just a class holding two Vector3f’s, with a symmetric equals() and hashCode().
The symmetric hashCode() method is fairly important.
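For reference, a sketch of such an Edge class (plain Java; a and b would be com.jme3.math.Vector3f in the real thing - Object here just keeps the sketch self-contained). The point is that equals() and hashCode() must treat (a, b) and (b, a) as the same edge, or shared edges end up in different map buckets:

```java
final class Edge {
    final Object a, b; // endpoints; Vector3f in the real code

    Edge(Object a, Object b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Edge)) return false;
        Edge other = (Edge) o;
        // symmetric: (a, b) equals (b, a)
        return (a.equals(other.a) && b.equals(other.b))
            || (a.equals(other.b) && b.equals(other.a));
    }

    @Override
    public int hashCode() {
        // order-independent combine, so equal edges hash alike
        return a.hashCode() + b.hashCode();
    }
}
```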


Issue solved?

Yep, edited the title.
