You are correct that I am writing a transparency handler to render transparencies correctly.
You are correct that it may not be able to handle every circumstance correctly.
However, it will handle the majority of intersecting transparencies, which is a lot better than how jme3 handles them currently; at the moment there is basically no handling at all.
I have written a shader that processes the fragments to take advantage of hardware acceleration, so I am hoping the finished product won't run too expensively, but I will only find that out once I have finished my prototype.
To perform the rendering correctly I need to write a TransparencyComparator that sorts the geometries by back edge, so that the geometries with the deepest z-order are processed first.
I am hoping that someone with in-depth knowledge of the above routine can convert it to a back-edge distance a lot more efficiently than I can. I will compile my own versions of the bounding volumes with this routine added.
I would be happy to share the code as a contribution if it turns out to be good enough to be added to your standard library or similar. My finished prototype would of course need to be scrutinised, and to progress I would like the back-edge distance routine.
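To make the intent concrete, here is a rough sketch of the comparator idea in plain Java (the `Bound` class is a hypothetical sphere-bounded stand-in for jme3's Geometry and BoundingVolume types, not the actual engine API):

```java
import java.util.Arrays;
import java.util.Comparator;

public class TransparencySortSketch {

    // Minimal stand-in for a sphere-bounded geometry.
    static class Bound {
        final double[] center;
        final double radius;

        Bound(double[] center, double radius) {
            this.center = center;
            this.radius = radius;
        }

        // Distance from the camera to the deepest (back) edge of the bound:
        // for a sphere this is simply |center - cam| + radius.
        double backEdgeDistance(double[] cam) {
            double dx = center[0] - cam[0];
            double dy = center[1] - cam[1];
            double dz = center[2] - cam[2];
            return Math.sqrt(dx * dx + dy * dy + dz * dz) + radius;
        }
    }

    // Sort so the geometry whose back edge is deepest is processed first.
    static Comparator<Bound> backEdgeComparator(double[] cam) {
        return Comparator.comparingDouble((Bound b) -> b.backEdgeDistance(cam)).reversed();
    }

    public static void main(String[] args) {
        double[] cam = {0, 0, 0};
        Bound near = new Bound(new double[]{0, 0, 5}, 1);   // back edge at 6
        Bound far  = new Bound(new double[]{0, 0, 20}, 2);  // back edge at 22
        Bound[] queue = {near, far};
        Arrays.sort(queue, backEdgeComparator(cam));
        System.out.println(queue[0] == far && queue[1] == near); // prints true
    }
}
```

A real jme3 comparator would implement the engine's GeometryComparator interface instead; this only illustrates the sort key.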
This is incorrect. It will handle maybe 25% of them if you are lucky… but I don’t know what magic you are attempting in the shader. There is really only so much you can do, though.
Let’s look at it from the top, pretend we have two intersecting cubes… (pictures presume camera is at the bottom looking up)
Case where your way might sort of work if you ignore the internal faces:
Case where your way won’t work:
No matter what sorting you use, it will only cover a small fraction of the common cases. Any choice is arbitrary.
Anyway, you can prove it to yourself by getting your approach to at least sort properly… since you don’t actually want a general point distance but a screen distance, you can simply reverse your location relative to the shape and do a nearest-edge distance from there. That will give you the farthest edge.
You are correct that back face sorting will not solve the issues with rendering transparencies.
However, you are reading too much into the need for back face sorting and my reasons for it.
I would like to add that this thread is discussing how to turn the first mentioned routine into a back face distance and is not discussing how to handle transparencies in any way. Please try to stick to the topic. Thanks.
ie: for each object, pretend you are standing on the opposite side of it along the view direction… how far doesn’t particularly matter if you’ve already filtered based on actual distance (dot product with view dir) as long as you are far enough away to include everything.
I understand what you mean: calculate the vector from centre to centre, then get the face distance from the location at twice that vector, then subtract that value from twice the vector length to get the back face distance.
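That calculation can be sanity-checked numerically. A minimal sketch in plain Java, with `sphereDistanceToEdge` as a hypothetical stand-in for the engine's nearest-edge routine on a sphere bound:

```java
public class BackFaceDistance {

    // Nearest-edge distance from a point to a sphere bound:
    // |point - center| - radius (negative if the point is inside).
    static double sphereDistanceToEdge(double[] point, double[] center, double radius) {
        double dx = point[0] - center[0];
        double dy = point[1] - center[1];
        double dz = point[2] - center[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz) - radius;
    }

    // Back-face distance via the mirrored-point trick: walk twice the
    // camera-to-centre vector, take the nearest-edge distance from there,
    // and subtract it from the doubled vector length.
    static double sphereBackFaceDistance(double[] cam, double[] center, double radius) {
        double[] mirrored = new double[3];
        double doubledLenSq = 0;
        for (int i = 0; i < 3; i++) {
            double v = center[i] - cam[i];
            mirrored[i] = cam[i] + 2 * v;
            doubledLenSq += (2 * v) * (2 * v);
        }
        double doubledLen = Math.sqrt(doubledLenSq);
        return doubledLen - sphereDistanceToEdge(mirrored, center, radius);
    }

    public static void main(String[] args) {
        // Camera at the origin, sphere of radius 2 centred 10 units ahead:
        // the back face should sit 12 units away.
        System.out.println(sphereBackFaceDistance(
                new double[]{0, 0, 0}, new double[]{0, 0, 10}, 2)); // prints 12.0
    }
}
```

With the camera at the origin and a sphere of radius 2 centred 10 units ahead, the doubled vector has length 20, the mirrored nearest-edge distance is 8, and 20 - 8 = 12, which is indeed centre distance plus radius.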
Whilst that works, thank you, that's a lot more computation than I had hoped for. It's not as optimal as altering the original code, but thank you.
I’m not so sure. Wouldn't the single projection cause a different angle? Fine for sphere bounds, but it would cause inaccuracies for box bounds. Each centre-distance vector would need to be the opposite in each case.
If I calculate a vector from the camera position to double the far plane and then use that to perform the TransparencyComparator, will this in effect create the render queue in order of back face distance?
Yeah. distanceToEdge() is already pretty bad for the regular way to determine painter order… so it should be fine for munged reverse painter order also.
For any example where it fails, it’s easy to come up with cases where it failed in the other direction also (and cases where either way will sort things properly that a real screen-parallel paint order would miss… because there really is no proper sort order).
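A quick sketch of that "munged reverse painter order" with sphere bounds, in plain Java (`distanceToEdge` here is a hypothetical stand-in for the engine's routine, and the reference point is placed at double the far plane as suggested above):

```java
import java.util.Arrays;
import java.util.Comparator;

public class MungedPainterSort {

    // Nearest-edge distance from a point to a sphere bound
    // (stand-in for the engine's distanceToEdge()).
    static double distanceToEdge(double[] point, double[] center, double radius) {
        double dx = point[0] - center[0];
        double dy = point[1] - center[1];
        double dz = point[2] - center[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz) - radius;
    }

    // Sort indices so the bound whose back face is deepest comes first,
    // by measuring nearest-edge distance from a reference point placed
    // well behind everything along the view direction: the edge nearest
    // that point is each object's back face.
    static Integer[] reversePainterOrder(double[] reference, double[][] centers, double[] radii) {
        Integer[] order = new Integer[centers.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(
                (Integer i) -> distanceToEdge(reference, centers[i], radii[i])));
        return order;
    }

    public static void main(String[] args) {
        // Camera at the origin looking down +z with a far plane of 100,
        // so the reference point sits at twice the far plane distance.
        double[] reference = {0, 0, 200};
        double[][] centers = {{0, 0, 5}, {0, 0, 50}, {0, 0, 20}};
        double[] radii = {1, 3, 2};
        // Deepest back face first: index 1, then 2, then 0.
        System.out.println(Arrays.toString(reversePainterOrder(reference, centers, radii))); // prints [1, 2, 0]
    }
}
```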