Issue
I've never understood why a float datatype is considered an approximation while a decimal datatype is considered exact. I'm looking for a good explanation. Thanks.
Solution
Well, you're right: it's misleading to make such a blanket statement. To understand the distinction fully, you need to grasp two things.
First, decimal is intended for storing, exactly, decimal values with a fixed number of decimal places; typically money, where the fractional digits are cents. That's a very specific use case. It is not an exact store for arbitrary values; it is exact only for decimal values with a fixed number of decimal places, and the implementation is tailored to do that correctly.
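As a minimal sketch of both sides of that statement (using Python's decimal module as a stand-in, since the question names no particular language or database): a fixed-decimal value such as ten cents is stored and multiplied exactly, while a value with no finite decimal expansion, such as 1/3, is not.

```python
from decimal import Decimal

# A value with a fixed number of decimal places is stored exactly.
price = Decimal("0.10")        # ten cents
print(price * 3)               # 0.30 -- exact

# But decimal is not exact for *every* value: 1/3 has no finite
# base 10 expansion, so the result is rounded to the context's
# precision (28 significant digits by default).
print(Decimal(1) / Decimal(3)) # 0.3333333333333333333333333333
```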
Second, float is intended to be a more general datatype: it is used to store "any" value, and the implementation reflects that (for example, it aims to cover a wide range of scales and to support operations as efficiently as possible). In particular, it uses a binary representation that cannot represent all decimal values exactly. So it can store 0.5 exactly, but it cannot store 0.1 exactly. That is simply a fact of life of the binary (base 2) representation used, but it means that floats are a bad idea for money: if you cannot store 0.10 exactly as a float, then any calculation involving 10 cents may accumulate unexpected errors, as the sketch below shows.
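Here is a short demonstration, again in Python (the same behaviour applies to IEEE 754 binary floats in any language). Converting a float to Decimal exposes the exact binary value the float actually holds, and the tiny representation error shows up in arithmetic:

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary float really stores.
print(Decimal(0.5))  # 0.5 -- exact, because 0.5 is a power of two
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625

# The representation error surfaces and accumulates in arithmetic:
print(0.1 + 0.2)                 # 0.30000000000000004
print(sum([0.1] * 10) == 1.0)    # False -- ten dimes don't make a dollar
```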
In other words, both have their limitations. The only sense in which decimal is "more exact" than float is that it is easier to understand: the values it does store exactly are clearly defined, useful, and match the "natural" base 10 representation we use. In contrast, it is much harder to predict which values a float will store exactly and which it will not, because that depends on the underlying base 2 representation.
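If you want to check a particular value, one Python-flavoured way is to compare the decimal literal against what survives the round trip through a float. Only values that reduce to a fraction with a power-of-two denominator come back exact:

```python
from decimal import Decimal

# A decimal literal is stored exactly as a binary float only when it
# reduces to a fraction whose denominator is a power of two.
for text in ["0.5", "0.25", "0.375", "0.1", "0.2", "0.3"]:
    exact = Decimal(text) == Decimal(float(text))
    print(f"{text}: {'exact' if exact else 'approximated'} as a float")
```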
Answered By - andrew cooke
Answer Checked By - Marie Seifert (PHPFixing Admin)