The reason I want this project accomplished is that OpenJDK floating point errors interfere with my programming, and with the programming of others. The workarounds are only partial: if you are re-using a closed-source Java class that uses naive floating point programming, you have no clear way to know what it is doing, nor any way to repair it internally. The only realistic way to look at floating point underflow and overflow is as a computer error, a logic bug inside the language itself. It is a bug scenario with very real consequences, and so I want it fixed so that I and others, even in commercial spheres, can have an accurate OpenJDK back again. It is the principle of the thing, but more than that, the consequences of the thing.

Is there anyone knowledgeable or interested in these areas who could pitch in, or point me in a better direction?

It just means that you temporarily and carefully forget that base 10 or base 16 fractional values, 0.0 <= v <= 1.0, are fractions at all. You treat that part of a whole value as a whole number anyway. You treat arithmetic that crosses the boundary between whole and decimal digits very carefully, and evaluate it appropriately.

That is not necessarily true. Floating point was originally created so that the location of the decimal point could be treated as a secondary matter within one giant number, or two related ones. This approximation business is something of a secondary tradition that has grown up after the original fact. The problem is that there is a case for total accuracy of figures, as fast and as efficient as possible. The need for total range accuracy, such as dovetails with the rational numbers, is greater and more important than some accuracy tradeoff. Workarounds based on BigInteger and BigDecimal, where the extra range is not required, are a waste of memory, speed and programming effort, given how clumsy they are to program with and deal with in source code. To say nothing of library access, where a needed class is already compiled and protected, is written using the floating point approach, and there is no way to know, access or replace the Java source code within its methods.

What should be done is that range accuracy for floating point arithmetic and method/function mathematics needs to be established again, or granted as an in-syntax, switchable, mutual option.

I am talking about the original Java OpenJDK float and double ranges. There would be, as I see it, a small percentage reduction from the original range. Please note that the original ranges actually aren't ranges, because they are not contiguous.

Java Primitive Floating Point Original Ranges:
float, Float: 32 bits, (+/-)(1.40x10^(-45) to 3.40x10^(+38))
double, Double: 64 bits, (+/-)(4.94x10^(-324) to 1.79x10^(+308))

Percentage (%) Value Range Loss for Floating Point Corrected Types:
float: 32 bits, approx. -15.56% general range loss.
double: 64 bits, approx. -4.94% range loss.

Is there any good team who could volunteer for what I am talking about? Or can someone
on the JMonkeyEngine forums point me to a team, in the right productive direction?

Have you tried posting an issue or PR on the openJDK repo? I suspect that would be the best place to find people who are interested in improving OpenJDK.

It may also help if you have any kind of code or basic implementation of what you're trying to do that you can show to potential volunteers, so others can understand what your exact plan is, and so it can be tested and critiqued. Some basic code demonstrating and proving your concept would be much more useful than a million theoretical explanations could ever be.

I have gone through the process of using the OpenJDK mailing lists, and am turned down and ignored every time. I have tried creating a JEP before, and it was deleted and ignored.

Where is the 'issue or PR on the openJDK repo'? I believe I have used it before, and have just been sent on to the mailing lists. Can someone please reply with this internet address, even though I believe I have exhausted this avenue anyway?

If you are collecting people, I am interested in this study. I am primarily an Android developer, and recently I do GNU/Linux and AVR programming too, so I want to expand my knowledge. But there are many things in your project that are still unexplained, for example: building the JDK (I think this will be the toughest part), and the number and type of platforms you would like to support (AMD/ARM on Windows/Linux/Android/Mac), etc. You can send me a private message with the project roadmap (if you have one); that may help me to organise my resources and start the journey. Just so you know, I haven't touched MS-DOS C code before and I don't have a Mac either, so my study is all about Linux and Android.

No, I am not proposing re-routing to BigDecimal/BigInteger.

No, it is not impossible in the same way as a recurring decimal. Floating point overflow and underflow in Java are a separate phenomenon. They are due to adherence to the formula that comes with IEEE 754:
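For reference, the standard IEEE 754 encoding being referred to here is, in the normalized binary case (this formula is supplied by the editor for context; the original post appears to have shown it as an image):

```latex
v = (-1)^{s} \times 1.f_{2} \times 2^{E - \mathrm{bias}}
```

where s is the sign bit, f is the fraction (mantissa) bit string, E is the stored exponent, and the bias is 127 for float (8 exponent bits, 23 fraction bits) and 1023 for double (11 exponent bits, 52 fraction bits).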

What I am talking about is altering half of the present floating point equation, as a partial departure from IEEE 754. I am also talking about repairing the sub-implementation for java.lang.StrictMath, which is its own independent source of overflow and underflow.

It refers to the internal workings of an adjusted, half-IEEE-754, corrected floating point equation. The following curve presently only applies to integers, and not to fractional values, whether one is using base 10 or base 16 with float and double. This can be changed so that it applies to fractional values as well, if you temporarily treat them as being the same as integer values, using the positive exponent part of the curve only, for absolutely all situations. Observe:

I am hoping to find a team to do all this, so that the extent of the work gets done quicker, and with better error removal. It's all Java sub-implementation work, and I'm not perfectly sure what will be involved. C programming will be required; I'm not perfectly sure about assembly or Java intrinsics. I also don't want the time for this project to drag out forever.

I must admit I still don’t understand what your proposal is, which may have been the problem with the mailing lists as well.

Possibly some concrete examples would be helpful: what they currently result in, and what you are suggesting they result in after your proposed changes.

You’ve mentioned overflow

The maximum value for a Java float is (2-2^(-23))*2^(127). If we take that value and multiply it by 1.1 we get infinity. I assume that's what you'd expect?
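That overflow can be demonstrated in a couple of lines (a small illustrative sketch added by the editor, not from the original posts):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Float.MAX_VALUE is exactly (2 - 2^-23) * 2^127, approx. 3.4028235e38
        float max = Float.MAX_VALUE;
        float result = max * 1.1f; // no representable float is this large
        System.out.println(result);                          // prints Infinity
        System.out.println(result == Float.POSITIVE_INFINITY); // prints true
    }
}
```

IEEE 754 defines this saturation to infinity deliberately, so that overflow is detectable rather than silently wrapping the way integer arithmetic does.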

Equally, if we start with a very large number and add a very, very small number, the result ends up equalling the very large number, because the small number is below the precision of the large number.

E.g.

public static void main(String[] args) {
    float start = 10000000000000000000f;    // 1e19
    float addition = 0.0000000000000000005f; // 5e-19
    float result = start + addition;
    // prints true: the large number cannot contain the tiny addition and it's sheared off
    System.out.println(start == result);
}

Are you suggesting you'd expect a different result? You couldn't support anything else without allowing a float to vary the number of bits it uses, the way BigDecimal does. But a float has a set number of bits available to it. Perhaps explaining how those bits are to be used differently in your suggestion would help (or whether you want to allow a variable bit count).
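For context on that fixed layout, a float's 32 bits (1 sign, 8 exponent, 23 fraction) can be inspected directly in Java (an illustrative sketch added by the editor, not part of the original posts):

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.1f);       // raw IEEE 754 bit pattern
        int sign = bits >>> 31;                      // 1 bit
        int exponent = (bits >>> 23) & 0xFF;         // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;              // 23 fraction bits
        // 0.1f is stored as roughly 1.6 * 2^-4, so the biased exponent is 123
        System.out.println("sign=" + sign
                + " exponent=" + exponent
                + " mantissa=0x" + Integer.toHexString(mantissa));
        // prints sign=0 exponent=123 mantissa=0x4ccccd
    }
}
```

Any proposal for a "corrected" float has to say how these same 32 bits are to be repartitioned, which is what the question above is asking.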

Providing individual examples of all the possible Java floating point errors, within float and double, from their arithmetic operators and via java.lang.StrictMath, is too large a task to immediately request. What I want is the equation scheme altered so that they never happen, via assignment and operators, and also via StrictMath, which is its own area.

I am aware of the Java Language Specification and floating point standard IEEE 754. For this, I am asking for a partial departure from them, which inasmuch is going on already.

Take a look at the graph image on the white background, with a red line and black diamonds, that I have supplied above. It is an initial example of how the present scheme does decimal to binary, and binary to decimal, conversion. Something similar happens with binary to hexadecimal, and hexadecimal to binary. The included graph works sensibly in Java, because it refers to how whole numbers are treated with the floating point types. But fractional decimals, between 1 and 0, are not treated the same way, and that is the source of the problem. If those fractional decimal or hexadecimal values are briefly treated as whole numbers again, for the sake of all those digits after the dot separator ("the decimal point"), there would never be a Java floating point underflow or overflow error ever again. It is true that the present lower (decimal/hexadecimal) ranges for float and double would be reduced by about 15% and 5% respectively, as I have tried to calculate, but it would be a range with no holes in it and no spurious repeats. It would be a contiguous range; in fact a real range, unlike the present situation, where there are floating point error value holes, which produce not one range but a whole series of them. It is a price that should be paid. Correcting the operational code for all this, and updating tightly bound language classes and interfaces as well, is what I am discussing.

I'm not suggesting all the combinations. I'm suggesting just 2 or 3. For example, how would the Java example I showed behave in your new scheme (and why: what would the bit representation that made it possible be)?

Integer and fractional numbers are fundamentally different things. All integers can be expressed in all bases. The same is not true of fractional numbers. So there needs to be a bit of explanation as to how you can treat them as the same.

When I have used the term "fractional number", it is only an attempt to find another term for either base 10 digit numbers between 0 and 1, or base 16 digit numbers between 0 and 1. That's the only thing going on, just because the totality of what I have to submit on this subject can get very wordy.

There are any number of particular examples of Java floating point underflow and overflow (floating point errors) in action. I'll just include a few that Google allows one to find. Try these:

public static void main(String[] args) {
    double a = 0.7;
    double b = 0.9;
    double x = a + 0.1;
    double y = b - 0.1;
    System.out.println("x = " + x);
    System.out.println("y = " + y);
    System.out.println(x == y);
}
// x = 0.7999999999999999
// y = 0.8
// false

public static void main(String... args) {
    double total = 0.2;
    for (int i = 0; i < 100; i++) {
        total += 0.2;
    }
    System.out.println("total = " + total);
}
// Expected output: total = 20.2
// Actual output: total = 20.19999999999996

Thanks for the example; that gives something concrete to talk about. Decimal 0.1 can't be exactly represented in binary:

0.1_{10} ≈ 0.0001100110011001101_{2}

The same is true of 0.7 and 0.9. That is where the effect you are seeing comes from. The same effect happens when 0.1_{3} (often called "a third") is converted into decimal: it ends up as approximately 0.33333333_{10}.
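The inexactness can be observed directly in Java: the `new BigDecimal(double)` constructor exposes the exact binary value a double actually stores (an illustrative sketch added by the editor, not from the original posts):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The double literal 0.1 is really the nearest representable binary fraction,
        // and BigDecimal's double constructor shows that value exactly:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

This is why the BigDecimal documentation recommends the String constructor, `new BigDecimal("0.1")`, when the decimal digits themselves are what you mean.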

How are you proposing to represent 0.1_{10} so that these effects don't happen? Are you proposing to do all calculations in decimal instead of binary? That is certainly possible (see BigDecimal), but it is vastly slower (and as game developers we'd be particularly resistant to slowing Java down to prevent microscopic inaccuracies*).
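For comparison, the earlier 0.7 + 0.1 example behaves exactly when run through decimal arithmetic (a sketch added by the editor for illustration; BigDecimal built from Strings keeps the decimal digits exact):

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // String constructors preserve the decimal digits exactly
        BigDecimal x = new BigDecimal("0.7").add(new BigDecimal("0.1"));
        BigDecimal y = new BigDecimal("0.9").subtract(new BigDecimal("0.1"));
        System.out.println(x);                  // prints 0.8
        System.out.println(y);                  // prints 0.8
        System.out.println(x.compareTo(y) == 0); // prints true
    }
}
```

The cost is that every operation allocates objects and works digit-by-digit, which is the speed objection raised above.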

* although we only think of these as "inaccuracies" because we have a base-10 mindset, because we have 10 fingers. A species that used a base-3 mindset would think base 10 was very odd for its inability to represent 1/3 exactly.

I have already explained all this. The problems occur because the present IEEE 754 curve treats whole numbers and decimals differently. Things in fact CAN be corrected if the decimal digits are temporarily treated as whole numbers anyway, and are derived by means of the whole-number part of the curve equation already in use. You get problems from that curve equation when its exponents become negative rather than staying positive all the time. The negative exponents lead to fraction terms, not whole-number ones, which is where the overflow and underflow come from.
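One way to read "temporarily treat the decimal digits as whole numbers" is as scaled-integer (fixed-point) arithmetic. This is the editor's interpretation of the proposal, not code from it; the class and constant names here are hypothetical. Scaling every value by a fixed power of ten turns the earlier 0.7 + 0.1 example into exact integer arithmetic:

```java
public class FixedPointSketch {
    // Hypothetical scale factor: 10^1 keeps one decimal digit exactly.
    static final long SCALE = 10;

    public static void main(String[] args) {
        long a = 7; // represents 0.7 as 7 tenths
        long b = 9; // represents 0.9 as 9 tenths
        long x = a + 1; // 0.7 + 0.1 -> 8 tenths, exact integer addition
        long y = b - 1; // 0.9 - 0.1 -> 8 tenths, exact integer subtraction
        System.out.println(x == y);             // prints true, unlike the double version
        System.out.println((double) x / SCALE); // prints 0.8
    }
}
```

Whether this is actually what the proposal intends is unclear; fixed-point trades the floating exponent (and with it the huge dynamic range) for exactness at one fixed scale, which is consistent with the range-loss percentages quoted earlier only if the exponent field is repurposed rather than removed.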

decimal 0.1 can't be exactly represented in binary

Incorrect. Decimal 0.1 can't be exactly represented in the IEEE 754 binary equation scheme.

There are other schemes out there under which decimal 0.1 in fact CAN be exactly represented in binary. Indeed, the IEEE 754 equation scheme can be altered so that decimal 0.1 CAN be exactly represented again. The white graph I have included here, Graph of n=2^m, is an optimised and corrected version of IEEE 754's equation.

Way out of my league, but I'm just curious: this is a forum for a Java game development engine, where hobbyists try to implement their game ideas. Wouldn't there be a forum better suited to this type of Java engine problem solving? There must be forums for OpenJDK, AdoptOpenJDK, Corretto, Oracle's, or similar that would have people more interested in this problem? I hope you have already tried looking there for people to assist you.