And at the risk of beating a dead horse by now, I want to talk for a second about the importance of the stack trace.
When debugging an issue, stack traces are glorious. They are the one (and often only) bit of ground truth data we get when trying to diagnose a problem. Java doesn't lie about its stack traces. You can often infer huge amounts, or rule out entire trees of possibilities, just from the stack trace.
Every single other piece of information a developer provides is tainted by perception. Even the debug printlns are subject to preconceived notions of what was expected to be found. (This is why the output of debug printlns is nearly useless without also showing the exact code producing those lines in full context.)
The presumption is that most developers who post about their problems like this do so because they have reached the end of their rope and exhausted their own tricks and procedures. So, by definition, the bug has surpassed their own ability to diagnose. Thus, every single assumption on their part has become suspect. Consequently, any information that developer provides about what they think is going on is the least reliable information. Useful anecdotally, sure, but chances are that if that dev had a good idea of what was actually happening, they'd have already solved the problem.
This particular stack trace was an excellent example. If you’d posted nothing else I would have been able to tell you what your problem was. Let’s take a look at how:
If you had told me nothing else about your problem…
Clue #1: the exception indicates that a collection is being modified while it is being iterated. This can happen for ONLY two reasons:
1. the loop is modifying the collection while iterating
2. multiple threads are modifying the same collection
Given the location in the stack trace, I can already 100% rule out number 1. (So technically there is a clue 1.5.)
Conclusion: multiple threads are modifying this data structure.
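If you want to see both causes with your own eyes, here is a minimal, self-contained sketch. All the names in it are mine, nothing here is from the actual code in question, and note that the cross-thread case is only detected on a best-effort basis, so it's probabilistic rather than guaranteed:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Cause 1: the loop itself structurally modifies the collection
        // mid-iteration. The iterator notices on its next step and throws.
        List<String> results = new ArrayList<>(List.of("a", "b", "c"));
        try {
            for (String s : results) {
                if (s.equals("a")) {
                    results.remove(s); // modCount changes under the iterator
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("cause 1: " + e);
        }

        // Cause 2: another thread modifies the collection while this one
        // iterates. Timing-dependent, hence the hammering loop.
        List<String> shared = new ArrayList<>(List.of("a", "b", "c"));
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                shared.add("x");
                shared.remove(shared.size() - 1);
            }
        });
        writer.start();
        try {
            while (writer.isAlive()) {
                for (String s : shared) { /* just iterate */ }
            }
            System.out.println("got lucky this run, no exception");
        } catch (RuntimeException e) { // almost always ConcurrentModificationException
            System.out.println("cause 2: " + e);
        }
        writer.join();
    }
}
```

The single-thread case is deterministic; the two-thread case depends on timing, which is exactly why these bugs show up intermittently in the field.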
Clue #2: with just a few clicks through the online SVN (still relevant in this case, and GitHub's interface is horrible) I can see that this is the collision results internal to BIHTree. But actually, before I even looked, I had my suspicions, because why would BIHTree be calling getNearestCollision() on the supplied results? It wouldn't. So I checked the code to confirm. In theory, I could have diagnosed this even without looking.
Conclusion: multiple threads are performing collisions on the same Mesh instance.
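To make that concrete, here is a sketch of the pattern the stack trace implies, written against jME3's public collideWith() API. This is not your code: RacyCollisionSketch, the Box target, and the loop counts are all placeholders of mine. The point is that even a fresh CollisionResults per thread doesn't protect you, because (per clue #2) BIHTree keeps its own results object internally, shared by every query against that mesh:

```java
import com.jme3.collision.CollisionResults;
import com.jme3.math.Ray;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

public class RacyCollisionSketch {
    public static void main(String[] args) {
        Geometry sharedGeom = new Geometry("target", new Box(1, 1, 1));
        sharedGeom.updateModelBound();     // depending on jME version, bounds and
        sharedGeom.updateGeometricState(); // transforms may also update lazily

        Runnable work = () -> {
            Ray ray = new Ray(new Vector3f(0, 0, -10), Vector3f.UNIT_Z);
            for (int i = 0; i < 1_000_000; i++) {
                CollisionResults results = new CollisionResults(); // per-thread, but not enough
                sharedGeom.collideWith(ray, results);
            }
        };
        new Thread(work).start();
        new Thread(work).start(); // two threads, one mesh: the race behind the trace
    }
}
```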
Clue #3: now we get a little speculative. One conclusion I can make 100% is that threads are being used (but I already knew that 100%, see above). A less certain but probable conclusion is that multiple threads in the pool itself are performing collisions on the same mesh instance.
But even if it were just the render thread also performing collisions on this same scene, the issue would happen.
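And once the diagnosis is in hand, the shape of the fix follows directly: make sure only one thread at a time can be inside a collision query against the shared mesh. A minimal sketch, assuming you keep your threading setup (SafeCollider, collisionLock, and collideSafely are names I made up; funneling every query through a single thread, e.g. the render thread via app.enqueue(), works just as well):

```java
import com.jme3.collision.CollisionResults;
import com.jme3.math.Ray;
import com.jme3.scene.Spatial;

public class SafeCollider {
    // One lock per shared scene/mesh; any scheme that guarantees one query
    // at a time (a lock, a dedicated collision thread, app.enqueue()) works.
    private final Object collisionLock = new Object();
    private final Spatial sharedScene;

    public SafeCollider(Spatial sharedScene) {
        this.sharedScene = sharedScene;
    }

    public int collideSafely(Ray ray, CollisionResults results) {
        synchronized (collisionLock) {
            return sharedScene.collideWith(ray, results);
        }
    }
}
```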
Those are the 100% true facts, and then all that's left is to apply them to your code. Working with ground truth is wonderful.