distanceToEdge only does closest edge


I need to make a routine, distanceToFurthestEdge, as a counterpart to distanceToEdge, which uses the closest edge.

BoundingSphere is simple enough, as that is just the distance to the centre plus (rather than minus) the radius.

However, BoundingBox is a little more convoluted.

The following code is the BoundingBox’s function…

public float distanceToEdge(Vector3f point) {
    // compute coordinates of point in box coordinate system
    TempVars vars = TempVars.get();
    Vector3f closest = vars.vect1.set(point).subtractLocal(center);

    // project test point onto box
    float sqrDistance = 0.0f;
    float delta;

    if (closest.x < -xExtent) {
        delta = closest.x + xExtent;
        sqrDistance += delta * delta;
        closest.x = -xExtent;
    } else if (closest.x > xExtent) {
        delta = closest.x - xExtent;
        sqrDistance += delta * delta;
        closest.x = xExtent;
    }

    if (closest.y < -yExtent) {
        delta = closest.y + yExtent;
        sqrDistance += delta * delta;
        closest.y = -yExtent;
    } else if (closest.y > yExtent) {
        delta = closest.y - yExtent;
        sqrDistance += delta * delta;
        closest.y = yExtent;
    }

    if (closest.z < -zExtent) {
        delta = closest.z + zExtent;
        sqrDistance += delta * delta;
        closest.z = -zExtent;
    } else if (closest.z > zExtent) {
        delta = closest.z - zExtent;
        sqrDistance += delta * delta;
        closest.z = zExtent;
    }

    vars.release();
    return FastMath.sqrt(sqrDistance);
}

How could I turn this into distance to furthest edge? Is it a simple matter of swapping the chunks in the less-than and greater-than blocks?

I am hoping someone with in-depth understanding of this routine could explain; I am sure it is a matter of swapping a few pluses and minuses.
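For concreteness, here is a sketch of the maths I believe I am after, using the same conventions as the routine above (point already in box coordinates, box centred at the origin); the class and method names are mine, not jme3's. For a box, the farthest surface point is the corner on the opposite side of the point on every axis, so each axis contributes (|p| + extent) rather than the clamped delta:

```java
public class FurthestEdgeSketch {

    // Farthest surface point of an axis-aligned box (point in box
    // coordinates): the corner whose sign is opposite the point's on
    // every axis, so each axis contributes |p| + extent.
    public static double boxFurthest(double px, double py, double pz,
                                     double xExtent, double yExtent, double zExtent) {
        double dx = Math.abs(px) + xExtent;
        double dy = Math.abs(py) + yExtent;
        double dz = Math.abs(pz) + zExtent;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Sphere case: distance to the centre plus the radius.
    public static double sphereFurthest(double distanceToCenter, double radius) {
        return distanceToCenter + radius;
    }
}
```

Note this is not quite a matter of swapping the blocks: the if/else-if structure of distanceToEdge collapses, because every axis always contributes, even when the point is inside the box on that axis.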


Aside: if you think this will fix transparent sorting then I can pretty trivially come up with examples where it will still break. Maybe I’m reading too much into the need, though.

You are correct in that I am writing a transparency handler to handle transparencies correctly.

You are correct in that it may not be able to handle every circumstance correctly.

However, it will handle the majority of intersecting transparencies, which is a lot better than how jme3 handles them currently; basically, there isn't any handling at all.

I have written a shader which processes the fragments to take advantage of hardware acceleration, so I am hoping the finished product won't run too expensively, but I will only find this out once I have finished my prototype.

To perform the rendering correctly I need to write a TransparencyComparator that sorts the geometries by back edge to ensure the geometries with the deepest z order are processed first.

I am hoping that someone with in-depth knowledge of the above routine can convert it to a back-edge version a lot more efficiently than I can. I will compile my own versions of the bounding volumes with this routine added.

I would be happy to share the code as a contribution if it turns out to be good enough to be added to your standard library or similar. My finished prototype would need to be scrutinised, and of course, to progress, I need the back-edge distance routine.


This is incorrect. It will handle maybe 25% of them if you are lucky… but I don’t know what magic you are attempting in the shader. There is really only so much you can do, though.

Let’s look at it from the top, pretend we have two intersecting cubes… (pictures presume camera is at the bottom looking up)

Case where your way might sort of work if you ignore the internal faces:

Case where your way won’t work:

No matter what sorting you use, it will only cover a small fraction of the common cases. Any choice is arbitrary.

Anyway, you can prove it to yourself by getting your approach to at least sort properly… since you don’t actually want a general point distance but a screen distance, you can simply reverse location relative to the shape and do a nearest-edge distance that way. It will give you the farthest edge.

To help frame your approach better let me simplify the problem even further.

How would you handle the case of two simple triangles (as viewed from the top)?

Which one do you render first? How is it not obscuring the other one in some way?

The sooner I can get back face distance the sooner I will be able to show you my prototype which will handle all of the issues you point out.

I already told you how to do that.

Note: if you can handle this case then sort order doesn’t matter… because either way you sort it something gets drawn in the other order.

Sorry, please point out where you explained how to get the back face distance.

You are correct that back face sorting will not solve the issues with rendering transparencies.

You are correct that you are reading too much into the need for back-face sorting and the reasons behind it.

I would like to add that this thread is discussing how to turn the first mentioned routine into a back face distance and is not discussing how to handle transparencies in any way. Please try to stick to the topic. Thanks.

i.e.: for each object, pretend you are standing on the opposite side of it along the view direction… how far doesn't particularly matter, if you've already filtered based on actual distance (dot product with the view direction), as long as you are far enough away to include everything.

I understand what you mean. Calculate the vector distance from centre to centre. Then get the face distance from the location of the vector distance * 2. Then subtract that value from the vector distance * 2 to get the back face distance.

Whilst that works, thank you, that's a lot more computation than I had hoped for. It's not as optimal as altering the original code, but thank you.

You can actually just project location once as long as it’s behind all of the objects.

I’m not so sure. Wouldn't the single projection cause a different angle? Fine for sphere bounds, but it would cause inaccuracies for box bounds. Each centre-distance vector would need to be the opposite in each case.

Unless I have not understood you properly?

No, because you only care about distance to the plane… and anyway, nearest edge isn’t accurate in the forward case. It will be exactly as accurate in the backwards case.

But this is why you project along the view direction; then you will get a roughly opposite ordering (within the grossly inaccurate nature of distanceToEdge() for sorting paint order).

Remember that all bounding shapes are effectively axis-aligned.
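The distance-to-plane point can be sketched like this (illustrative names; real code would work with jme3's Vector3f and bounding volumes): project each bound's centre onto the unit view direction with a dot product, then sort descending on that scalar so the deepest geometry comes first.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ViewProjectionSortSketch {

    // Toy bound: just a centre point (illustrative only).
    record Bound(String name, double cx, double cy, double cz) {}

    // Signed depth of a centre along the (unit) view direction:
    // a plain dot product with the camera-relative position.
    static double depthAlongView(Bound b,
                                 double camX, double camY, double camZ,
                                 double dirX, double dirY, double dirZ) {
        return (b.cx() - camX) * dirX
             + (b.cy() - camY) * dirY
             + (b.cz() - camZ) * dirZ;
    }

    // Sort deepest-first by projecting onto the view direction.
    static List<String> deepestFirst(List<Bound> bounds,
                                     double camX, double camY, double camZ,
                                     double dirX, double dirY, double dirZ) {
        List<Bound> sorted = new ArrayList<>(bounds);
        sorted.sort(Comparator.comparingDouble(
                (Bound b) -> depthAlongView(b, camX, camY, camZ, dirX, dirY, dirZ))
                .reversed());
        return sorted.stream().map(Bound::name).toList();
    }
}
```

Since only the distance to the plane matters, the angle concern above disappears: every bound is measured against the same plane normal.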

OK, to see if I have understood you correctly…

If I calculate a point from the camera position out to double the far plane, and then use that in the TransparencyComparator, this will in effect create the render queue in order of back-face distance?

With the inaccuracies considered, that may well provide the functionality I need.


Yeah. distanceToEdge() is already pretty bad for the regular way to determine painter order… so it should be fine for munged reverse painter order also.

For any example where it fails, it's easy to come up with cases where it failed in the other direction also (and cases where either way will sort things properly that a real screen-parallel paint order would miss… because there really is no proper sort order).