jME uses float for almost everything, right?
Are there any benchmarks comparing it against double usage?
And why not use long with a fixed scale, where 1.0 in float would mean 1000 in long (it could even be 1,000,000), so we get absolute precision, at least for the positioning of the world's objects?
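A minimal sketch of that fixed-point idea (the class and method names are mine, not anything from jME): positions stored as long with a scale of 1000, so addition of positions is exact integer math, and you only convert back to float at the edge (e.g. when handing a local-space position to the renderer).

```java
public class FixedPoint {
    // Assumption: 1.0 world unit == 1000 fixed-point units (millimeter-ish precision)
    static final long SCALE = 1000;

    static long toFixed(double v)   { return Math.round(v * SCALE); }
    static double toDouble(long f)  { return (double) f / SCALE; }
    static float toFloat(long f)    { return (float) f / SCALE; } // lossy, edge-only

    public static void main(String[] args) {
        long a = toFixed(1.5);   // 1500
        long b = toFixed(0.25);  // 250
        long sum = a + b;        // exact integer add: 1750
        System.out.println(toDouble(sum)); // 1.75
    }
}
```

The catch is that multiplication and division need rescaling (and can overflow long), which is why most engines that tackle this use doubles or floating origins instead of raw fixed point.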
Also, the library dependencies all use float, right? Could that be a problem (performance loss from all the float casting/converting)?
So the main concerns would be:
- lowered performance
- higher memory usage
- dependency adaptation (requiring changes to the dependency code, or to the calls into it)
And the main advantage would be… precision.
Not that float is so imprecise that we *need* double or long; I just mean it would be interesting to have more precision. Worlds wouldn't be endless, just a lot bigger, and maybe it would make the engine more future-ready? (Not that I'm an oracle or anything; it just seems like the obvious direction, though aiming for 128 bits doesn't seem warranted yet.)
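To make the precision point concrete: far from the origin, a float simply cannot represent small offsets anymore. The spacing between adjacent floats (`Math.ulp`) at 10,000,000 units is a full 1.0, so a half-unit move is rounded away entirely, while a double at the same magnitude still resolves nanometer-scale steps. A small self-contained demo:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        float far = 10_000_000f; // ~10,000 km from the origin if 1 unit == 1 m

        // Gap between adjacent representable floats at this magnitude:
        System.out.println(Math.ulp(far));        // 1.0

        // Moving by half a unit does nothing at all:
        System.out.println(far + 0.5f == far);    // true

        // double at the same magnitude still has ~1.9e-9 resolution:
        System.out.println(Math.ulp(10_000_000d));
    }
}
```

This is the classic "jitter far from origin" problem; the usual float-only workaround is recentering the world around the camera rather than widening the type.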
And no, I don't have a test case showing it really needs to be 64-bit as soon as possible… I don't even have a test case showing any problem with it as it is now.
And no, I'm not saying jME is bad… I love it; that's exactly why I wonder whether it could go 64-bit, maybe in jME 4.0?
And yes, I'd like to hear your thoughts on the subject. Maybe I should look at this differently? Maybe I shouldn't care so much about it, and if so, why? Only you would know.
PS: should I just sed-replace all float/Float with double/Double (adding casts where/if needed) and benchmark it myself?