Issue
The new IEEE 128-bit decimal floating-point type (https://en.wikipedia.org/wiki/Decimal128_floating-point_format) specifies that the significand (mantissa) can be represented in one of two ways: either as a simple binary integer, or in densely packed decimal (in which case every ten bits represent three decimal digits).
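For illustration only (this sketch is mine, not part of the original question), here is how a single densely packed decimal "declet" of ten bits decodes into three decimal digits, following the published DPD decoding table:

    using System;

    class DpdDecodeSketch
    {
        // Decodes one DPD declet (10 bits, b9..b0) into three decimal digits.
        static (int d2, int d1, int d0) DecodeDeclet(int declet)
        {
            int B(int i) => (declet >> i) & 1;                                        // bit b_i
            int Small(int hi, int md, int lo) => (B(hi) << 2) | (B(md) << 1) | B(lo); // digit 0..7
            int Large(int lo) => 8 + B(lo);                                           // digit 8 or 9

            if (B(3) == 0)                                    // all three digits are 0..7
                return (Small(9, 8, 7), Small(6, 5, 4), Small(2, 1, 0));

            switch ((B(2) << 1) | B(1))                       // which digit(s) are 8 or 9
            {
                case 0b00: return (Small(9, 8, 7), Small(6, 5, 4), Large(0));
                case 0b01: return (Small(9, 8, 7), Large(4), Small(6, 5, 0));
                case 0b10: return (Large(7), Small(6, 5, 4), Small(9, 8, 0));
                default:
                    switch ((B(6) << 1) | B(5))
                    {
                        case 0b00: return (Large(7), Large(4), Small(9, 8, 0));
                        case 0b01: return (Large(7), Small(9, 8, 4), Large(0));
                        case 0b10: return (Small(9, 8, 7), Large(4), Large(0));
                        default:   return (Large(7), Large(4), Large(0));
                    }
            }
        }

        static void Main()
        {
            var (d2, d1, d0) = DecodeDeclet(0b10_1000_1101); // the canonical declet for 905
            Console.WriteLine($"{d2}{d1}{d0}");              // prints 905
        }
    }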
C#'s decimal type predates this standard but is built on the same idea. It went with a binary integer significand.
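Since decimal.GetBits is documented to expose exactly this layout, a short program (my own example) can show the 96-bit binary integer significand and the power-of-ten scale directly:

    using System;

    class ShowDecimalLayout
    {
        static void Main()
        {
            // decimal.GetBits returns four ints: the 96-bit binary integer
            // significand in bits[0..2] (low, middle, high) and the sign and
            // scale packed into bits[3].
            int[] bits = decimal.GetBits(1.23m);

            int lo = bits[0], mid = bits[1], hi = bits[2];
            int scale = (bits[3] >> 16) & 0xFF;   // power-of-ten exponent, 0..28
            bool negative = bits[3] < 0;

            // For 1.23m this prints significand 123 with scale 2, i.e. 123 * 10^-2.
            Console.WriteLine($"lo={lo} mid={mid} hi={hi} scale={scale} negative={negative}");
        }
    }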
On the face of it, this seems inefficient; for addition and subtraction, to line up the significands, you have to divide one of them by a power of ten; and division is the most expensive of all the arithmetic operators.
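To spell out that alignment step, here is a deliberately simplified sketch (the (significand, scale) pair and the Add helper are my own illustration, not C#'s actual implementation, which also has to divide and round when the scaled significand would no longer fit in 96 bits):

    using System;
    using System.Numerics;

    class AlignmentSketch
    {
        // A value here is significand * 10^-scale.
        static (BigInteger Significand, int Scale) Add(
            (BigInteger Significand, int Scale) a,
            (BigInteger Significand, int Scale) b)
        {
            // Bring both operands to the larger scale by multiplying the other
            // significand by a power of ten. With a binary significand this is a
            // real multiplication; with a decimal encoding it is a digit shift.
            if (a.Scale < b.Scale)
                a = (a.Significand * BigInteger.Pow(10, b.Scale - a.Scale), b.Scale);
            else if (b.Scale < a.Scale)
                b = (b.Significand * BigInteger.Pow(10, a.Scale - b.Scale), a.Scale);

            return (a.Significand + b.Significand, a.Scale);
        }

        static void Main()
        {
            // 1.23 + 4.5: (123, 2) + (45, 1) -> (123 + 450, 2) = 573 * 10^-2 = 5.73
            var sum = Add((123, 2), (45, 1));
            Console.WriteLine($"{sum.Significand} * 10^-{sum.Scale}");
        }
    }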
What was the reason for the choice? What corresponding advantage was considered worth that penalty?
Solution
Choosing one representation over another is almost always a matter of trade-offs.
From here:
A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII, Unicode, etc.) and BCD. A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data.
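To see why that conversion direction matters, here is a small sketch of my own showing that pulling decimal digits out of a binary integer significand takes a divide-and-remainder by ten per digit, whereas a decimal-encoded significand already stores its digits:

    using System;
    using System.Numerics;
    using System.Text;

    class ToDecimalDigits
    {
        // Converting a binary integer significand to its decimal digit string.
        static string DigitsOf(BigInteger significand)
        {
            if (significand.IsZero) return "0";

            var sb = new StringBuilder();
            while (!significand.IsZero)
            {
                // One division and one remainder by 10 for every digit produced.
                sb.Insert(0, (char)('0' + (int)(significand % 10)));
                significand /= 10;
            }
            return sb.ToString();
        }

        static void Main()
        {
            // The full 96-bit significand of decimal.MaxValue needs 29 such divisions.
            Console.WriteLine(DigitsOf(BigInteger.Parse("79228162514264337593543950335")));
        }
    }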
Here you can find more about the relative performance.
Basically, it confirms your intuition: a decimal significand is generally faster, but most operations show similar performance, and binary even wins at division. Also keep in mind that since Intel mostly seems to rely on binary significands (I couldn't find hints about other manufacturers), they are more likely to get hardware support and might then beat decimal significands by a good margin.
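If you want a rough feel on your own machine for what missing hardware support costs, an unscientific timing sketch like the one below (my own, comparing C#'s software-implemented decimal against hardware-backed double rather than the two decimal encodings themselves) is a starting point; use something like BenchmarkDotNet for serious measurements:

    using System;
    using System.Diagnostics;

    class RoughTiming
    {
        static void Main()
        {
            const int n = 10_000_000;

            decimal dm = 0m;
            var sw = Stopwatch.StartNew();
            for (int i = 1; i <= n; i++) dm = 12345.6789m / i;   // software-emulated division
            sw.Stop();
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (last = {dm})");

            double db = 0.0;
            sw.Restart();
            for (int i = 1; i <= n; i++) db = 12345.6789 / i;    // hardware division
            sw.Stop();
            Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (last = {db})");
        }
    }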
Answered By - Patrick Beynio Answer Checked By - Robin (PHPFixing Admin)