Seeking Volunteers for Serious Java Project

Yeah, they have.

I think the communication problems we see in this thread would probably get it excluded from the proper forums.

It will probably end the same way here, but we are a curious bunch and so we asked questions.


This doesn’t require IEEE or floating point; this is a maths question, not a programming question. The number 0.1₁₀ cannot (without infinitely many places) be written exactly as a binary (base-2) number. By which I mean a number like 0.0001100110011001101₂. If you believe it can, please tell me what it is.
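
A quick way to check this in Java, for anyone curious: the BigDecimal(double) constructor converts a double’s exact stored binary value to decimal, so it reveals what 0.1 actually becomes once rounded to binary (a snippet you can paste into jshell):

import java.math.BigDecimal;

// Prints the exact value the double 0.1 actually stores:
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal(0.1));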

The problems occur because the present IEEE 754 curve treats whole numbers and decimals differently

I’m going to try to guess at what you mean by this; please let me know if this is not your meaning. So my reading of this is that you mean floating the point in decimal, i.e. a decimal floating-point representation.

So 0.131 would be represented as 131 thousandths. And obviously the numbers 131₁₀ and 1000₁₀ can be exactly represented in binary (they are 10000011₂ and 1111101000₂). And this is part of a larger principle: any number that can be expressed as a fraction in one base can be expressed as a fraction in any other base.
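
As it happens, this unscaled-integer-plus-scale layout is exactly how java.math.BigDecimal stores its values, so a short sketch makes the idea concrete:

import java.math.BigDecimal;
import java.math.BigInteger;

// 0.131 stored exactly as the integer 131 with a scale of 3, i.e. 131 / 10^3
BigDecimal d = new BigDecimal(BigInteger.valueOf(131), 3);

System.out.println(d); // 0.131
System.out.println(d.unscaledValue()); // 131
System.out.println(d.scale()); // 3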

Obviously a language that worked like this could get questions like “what is 0.7₁₀ - 0.1₁₀?” exactly correct. Would this system remain decimal-obsessed, or would one third also be supported? In any case, it would be slow, and slow-but-decimal-accurate maths is already supported by BigDecimal. Few applications exist that would benefit from slow-but-decimal-accurate maths (money is one of the few where it would, and BigDecimal is one solution to that use case).
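
To make that contrast concrete, here is a small sketch using BigDecimal’s string constructor, which takes the decimal inputs exactly. (Note that the plain double version of 0.7 - 0.1 happens to print 0.6 anyway, because the rounding errors cancel; 0.1 + 0.2 is the classic case where binary doubles visibly drift.)

import java.math.BigDecimal;

// Exact decimal arithmetic: 0.7 and 0.1 are exact in base 10
System.out.println(new BigDecimal("0.7").subtract(new BigDecimal("0.1"))); // 0.6

// Binary doubles, by contrast, drift visibly on these inputs:
System.out.println(0.1 + 0.2); // 0.30000000000000004
System.out.println(0.1 + 0.2 == 0.3); // false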


Firstly, the scheme that allows 0.1 to be written finitely and accurately in binary is just a mirror scheme of any binary representation, one that does not allow for negative powers to map values between 0 and 1. You can treat those numbers, superficially, as whole numbers.

a language that worked like this could get questions like “what is 0.7 - 0.1?” exactly correct.

This is true. In base 10, it would treat decimals and integers the same way, for the production of all digits.
For the evaluation of something like 1/3, it would (and should) give you

double a = 1.0D/3.0D;

// 0.3333333......3

which will carry as many 3s as the allowed precision permits. It would also allow

double b = a*3.0;

System.out.println(b == 1.0D);

// true

Despite this not being possible from operations on the stored digits alone.
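
One aside worth flagging: plain IEEE 754 doubles in Java also print true for this exact check, because 1.0/3.0 rounds to a value that, multiplied by 3, lands exactly halfway between two doubles, and round-half-to-even picks 1.0. So this particular test does not distinguish the proposed scheme from IEEE 754; a test that does fail under binary doubles is shown below:

double a = 1.0D / 3.0D;

System.out.println(a * 3.0D == 1.0D); // true, even under IEEE 754

System.out.println(0.1 + 0.1 + 0.1 == 0.3); // false under IEEE 754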

Such a system would work for base 10 and base 16 whole numbers, and base 10 and base 16 rational numbers. By the way, a rational number is named for being expressible as a ratio of two integers; in positional notation its representation either terminates or repeats, depending on the base (1/3 repeats in base 10 but terminates in base 3).
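
BigDecimal makes the terminates-or-repeats distinction tangible: dividing 1 by 3 with no rounding context throws, because the base-10 expansion never terminates, while supplying a precision yields a rounded result:

import java.math.BigDecimal;
import java.math.MathContext;

// With an explicit precision, the repeating expansion is cut off:
System.out.println(BigDecimal.ONE.divide(new BigDecimal("3"), new MathContext(20))); // 0.33333333333333333333

// With no precision given, 1/3 throws ArithmeticException:
// "Non-terminating decimal expansion; no exact representable decimal result."
// BigDecimal.ONE.divide(new BigDecimal("3"));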

All this is what my included white graph diagram at the top of this discussion represents: there is no negative-exponent X axis, either for the whole numbers or for decimal values that are not just whole numbers. Since the partial (fractional) part looks very similar to a whole number, you just use an echo of exactly the same mechanism to produce its digits, with no particular regard to their placing; you treat the partial numbers between all occurrences of the whole numbers as though they were whole numbers.

Observationally, this would mean that for any one whole/partial number value with one dot separator, the speed for the whole digits and the partial ones would have to be the same. If this suggested change is made, they are no longer treated differently, as IEEE 754 currently treats them; the partial digits echo how the whole ones are treated, and so speed must be uniform for both, since only one mechanism is involved.

Btw, what instructions would the final implementation use? Are there supported CPU instructions for such float maths, or would you have to emulate them?
Is it necessary for your use case to implement it inside the JDK, or would using a library or a class-rewriting tool be enough?


If you consider the way that java.lang.StrictMath does it now, it leverages a C library (fdlibm). I’m not perfectly sure about the state of CPU instructions for maths in the floating-point unit. I’m hoping that you won’t have to emulate those instructions, but if you did, it wouldn’t make enough of a difference to care about, so long as things are still fast enough.
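
For reference, the practical difference shows up like this: Math is free to be replaced by JIT intrinsics using platform instructions, while StrictMath pins results to the fdlibm algorithms so they are reproducible bit for bit across platforms (the two calls may well print the same value on your machine):

double x = 1e10;

System.out.println(Math.sin(x)); // may use a hardware/JIT intrinsic
System.out.println(StrictMath.sin(x)); // fdlibm-derived, same on every platform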

It might seem nice to implement it all inside the JDK using only its intrinsics, but the rumours have always been that with Project Valhalla the Java intrinsics at the SE and OpenJDK ends will get a rewrite. I would think the fact that tips the scales is the approach that is at least attempted now. If that is a JVM Java-intrinsics approach, and it is still possible given the consequences of these mooted changes, then that might be better. Java is built on C and assembly, so if the meaning of a Java intrinsic is something that is built on C, then that would be good. The approach would be to parallel what is done now, with a view to any OpenJDK intrinsic changes in the future, which would require updating at that time. It’s a tricky question: when you drill down into the software subsystems of Java I’m not sure of everything, and “Java intrinsics” may undergo more root changes in times to come.