Issue
The value 0.1 is not representable as a 64-bit float. The exact stored value is roughly equal to 0.10000000000000000555:
https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/
You can highlight this behavior with this simple code:
timestep = 0.1
iterations = 1_000_000
total = 0
for _ in range(iterations):
    total += timestep  # each addition rounds, and the error accumulates
print(total - timestep * iterations)  # output is not zero but 1.3328826753422618e-06
I totally understand why 0.1 is not representable as an exact float64 value, but what I don't get is why print(0.1) outputs 0.1 and not the underlying float64 value.
Of course, the underlying value has many more digits in base 10, so some rounding must be involved, but I am looking for the specification of how this works for all values and for how to control it.
I had the issue with an application storing data in a database (sketched below):
- the Python app (using str(0.1)) would show 0.1
- another database client UI would show 0.10000000000000000555, which would throw off the end user
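A minimal sketch of that mismatch, assuming the database column holds an ordinary IEEE-754 double and the other client simply formats it with more digits:

value = 0.1

# What the Python app shows: the shortest round-tripping string
print(str(value))             # 0.1

# What a client printing ~20 significant digits would show
print(format(value, ".20g"))  # 0.10000000000000000555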
P.S.: I had similar issues with other values.
Regards,
Solution
First, you are right: floats (single, double, whatever) have an exact value.
For a 64-bit IEEE-754 double, the nearest representable value to 0.1 is exactly 0.1000000000000000055511151231257827021181583404541015625, quite long as you can see. Note that every representable floating-point value has a finite number of decimal digits, because the base (2) divides a power of 10.
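You can inspect that exact value from Python itself; one way, a sketch using the standard decimal module, is to convert the float to a Decimal, which is an exact conversion:

from decimal import Decimal

# Decimal(float) converts the binary64 value exactly, with no rounding
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625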
For a REPL language like Python, it is essential to have this property:
- the printed representation of a float shall be reinterpreted as the same value
A consequence is that
- any two different floats shall have different printed representations
To obtain those properties, there are several possibilities:
- print the exact value. That can be many digits, and for the vast majority of humans, just noise.
- print enough digits so that any two different floats have different representations. For double precision, that's 17 significant digits in the worst case, so a naive implementation would always print 17 significant digits.
- print the shortest representation that would be reinterpreted unchanged. (The last two options are illustrated in the sketch after this list.)
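Here is a small sketch of options 2 and 3 in Python; the ".17g" format spec stands in for the naive 17-digit approach, and repr gives the shortest round-tripping form:

x = 0.1

# Option 2: always print 17 significant digits (enough to distinguish any two doubles)
print(format(x, ".17g"))   # 0.10000000000000001

# Option 3: shortest representation that reads back as the same value
print(repr(x))             # 0.1

# Both strings round-trip to the exact same float
assert float(format(x, ".17g")) == x
assert float(repr(x)) == x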
Python, and many other languages, have chosen the third solution, because it is considered annoying to print 0.10000000000000001 when the user entered 0.1. The printed representation is for human consumption, and human users generally prefer the shorter form: the shorter, the better.
The downside is that it can give the false impression that those floating-point values store exact decimal values like 1/10. That is a point that has to be evangelized here and in many other places.
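If you want to control how many digits the user sees, or to really work with the decimal value 1/10, one possible approach (a sketch, not the only option) is to format explicitly or to build a decimal.Decimal from a string:

from decimal import Decimal

value = 0.1

# Pick the number of digits explicitly instead of relying on repr/str
print(f"{value:.2f}")    # 0.10
print(f"{value:.20f}")   # 0.10000000000000000555

# A Decimal built from a string really does store the decimal value 1/10
exact = Decimal("0.1")
print(exact + exact + exact)   # 0.3, with no binary rounding error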
Answered By - aka.nice Answer Checked By - Robin (PHPFixing Admin)