Sure - it uses Quadric Error Metrics - which is nothing new to the maths world, but it runs faster because it avoids sorting, using a predetermined incremental threshold instead. The original method calculates a cost for every candidate edge (an exhaustive iterative procedure in itself) and always removes the edge with the least cost, which requires keeping the edges sorted; this method simply removes an edge whenever its cost falls below the threshold, so the sorting and global comparison are skipped entirely. The result is slightly "uglier" than the former method, but that's offset by the speed. In practice it's actually not bad at all - especially since the whole point is reducing meshes for detail over distance (we're not going to be up close to them anyway).
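For reference, the "cost" in QEM is the quadric form v^T Q v, where Q is a symmetric 4x4 matrix accumulated from the planes of the triangles surrounding a vertex. A minimal sketch of that evaluation (the class and method names here are illustrative, not from any particular implementation):

```java
// Sketch of the classic QEM cost: the error of placing a vertex at
// position v is v^T * Q * v, where Q is a symmetric 4x4 quadric built
// by summing p * p^T over the planes p = (a, b, c, d) of the faces
// around the original vertex (plane equation ax + by + cz + d = 0).
public class Quadric {
    // A full 4x4 matrix keeps the sketch readable; real implementations
    // usually store only the 10 unique entries of the symmetric matrix.
    final double[][] m = new double[4][4];

    // Accumulate one face plane (unit normal assumed).
    void addPlane(double a, double b, double c, double d) {
        double[] p = {a, b, c, d};
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += p[i] * p[j];
    }

    // Cost of placing a vertex at (x, y, z): v^T Q v with v = (x, y, z, 1).
    double evaluate(double x, double y, double z) {
        double[] v = {x, y, z, 1.0};
        double sum = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                sum += v[i] * m[i][j] * v[j];
        return sum;
    }
}
```

The cost is zero when the vertex lies exactly on all accumulated planes, and grows quadratically with distance from them - which is why flat regions collapse cheaply and detailed regions resist removal.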
The threshold increases each iteration, and the aggressiveness level controls how quickly it grows. The lower the aggressiveness, the slower the threshold grows per iteration, hence more iterations and better quality. Conversely, the higher the aggressiveness, the faster the threshold grows per iteration, so fewer iterations are needed to reach the desired count - and the "uglier" the result.
final double threshold = 0.000000001d * Math.pow(iteration + 3d, aggressiveness);
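Dropping that threshold formula into the outer loop, the overall shape looks something like the sketch below. This is an assumed structure, not the actual implementation: the `Edge` record, `simplify` signature, and the 100-iteration cap are all illustrative, and a real version would merge vertices and recompute quadrics on each collapse rather than just flagging edges.

```java
import java.util.List;

public class FastSimplify {
    // Hypothetical edge record: a precomputed quadric error and a flag.
    static class Edge {
        double error;
        boolean collapsed;
        Edge(double error) { this.error = error; }
    }

    // Collapse edges until the count reaches targetCount (or we give up).
    // No sorting: an edge collapses as soon as its error is under the
    // current threshold, and the threshold grows each iteration.
    static int simplify(List<Edge> edges, int targetCount, double aggressiveness) {
        int remaining = edges.size();
        for (int iteration = 0; iteration < 100 && remaining > targetCount; iteration++) {
            // The formula from the post: higher aggressiveness makes the
            // threshold climb faster, so fewer (but coarser) passes.
            double threshold = 0.000000001d * Math.pow(iteration + 3d, aggressiveness);
            for (Edge e : edges) {
                if (!e.collapsed && e.error < threshold) {
                    e.collapsed = true;  // real code: merge vertices,
                    remaining--;         // update quadrics, fix topology
                    if (remaining <= targetCount) break;
                }
            }
        }
        return remaining;
    }
}
```

Note how low-error (flat-area) edges are swallowed in the first few iterations at a tiny threshold, while high-error (detailed) edges only become eligible once the threshold has grown - which is exactly the behaviour described below.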
In reality a lower aggressiveness level tends to leave more vertices in the detailed areas and focuses removal on the flat areas, while a higher aggressiveness level tends to "average out" the removal distribution across the whole mesh. Which makes sense if you think about it.
A picture is worth a thousand words, and all that:
Some reading if you’re interested:
Surface Simplification Using Quadric Error Metrics:
There are implementations I looked through that do implement sorting - along with exhaustive documentation (Paul Bourke seems to have had his hand in almost everything I've read lately) - and I found this link in particular very educational; it comes with source code, too.
The link died many moons ago. All hail the wayback machine.