Issue
As per C# rules, dividing an int by an int produces an int, while a division involving a floating-point operand produces a floating-point result. But the following produces an unexpected output:
int a = 45; //integer
decimal b = 5.0m; //floating point
Console.WriteLine(a/b); // "9"
Output is: 9 - it looks like an integer. Why?
Note that +, -, and * produce the expected result:
Console.WriteLine(a * b); // "225.0"
Dividing by a non-whole number produces the expected result:
int a = 45; //integer
decimal b = 5.5m; //floating point
Console.WriteLine(a/b); // "8.181818181818181818"
Output is: 8.181818181818181818 - a floating-point result, which is okay.
Can anyone explain this?
The results are more consistent for float / double: there are no trailing zeros in the output for any operation (which makes sense, as those types don't store information about the number of digits after the decimal point).
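For example (a minimal sketch of my own illustrating the float/double behavior described above - double has no notion of scale, so its default formatting drops trailing zeros):

double x = 45;
double y = 5.0;               // the ".0" carries no extra scale information for a double
Console.WriteLine(x / y);     // "9"   (not "9.0")
Console.WriteLine(x * y);     // "225" (not "225.0")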
Solution
The C# standard goes into detail on this in section 12.9.3:
The scale of the result, before any rounding, is the closest scale to the preferred scale that will preserve a result equal to the exact result. The preferred scale is the scale of x less the scale of y.
So, to apply that, we've got x with a value of 45m (after an implicit conversion to decimal), which has a scale of 0, and 5.0m, which has a scale of 1. Therefore the preferred scale is -1, which would be invalid. (The scale is always non-negative.) The closest scale that can preserve the exact result is 0, so that's the actual scale - the result is equivalent to 9m rather than 9.0m.
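To illustrate the rule with a few more divisions (a minimal sketch based on the explanation above, not part of the original answer; the comments show the scales involved):

decimal x = 45m;                    // scale 0
Console.WriteLine(x / 5.0m);        // "9"     preferred scale 0 - 1 = -1, so the closest valid scale is 0
Console.WriteLine(45.00m / 5.0m);   // "9.0"   preferred scale 2 - 1 = 1, and 9.0 is exact at scale 1
Console.WriteLine(45m / 4.0m);      // "11.25" preferred scale is -1, but scale 2 is needed to keep the exact result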
Answered By - Jon Skeet