# Issue

The following C# code prints 0.1 - why?

```
var s = "0.1";
var dbl = Double.Parse(s);
var dcml = Convert.ToDecimal(dbl);
Console.WriteLine(dcml.ToString());
```

Isn't 0.1 unrepresentable exactly in binary, so shouldn't it print 0.100000001490116?

# Solution

The value of `dbl` is precisely 0.1000000000000000055511151231257827021181583404541015625.

That's 0.1 when rounded to 15 significant digits.
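You can see that the stored value is not exactly 0.1 by formatting the double with the "G17" format specifier, which always emits enough digits to round-trip a `Double`:

```
using System;

class Program
{
    static void Main()
    {
        double dbl = 0.1;

        // "G17" shows enough digits to distinguish the stored value
        // from exactly 0.1.
        Console.WriteLine(dbl.ToString("G17"));  // 0.10000000000000001
    }
}
```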

The documentation for `Convert.ToDecimal(Double)` states:

> The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
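A quick way to see that 15-digit rounding in action (a small sketch, not part of the original answer) is to convert a double whose decimal expansion is long, such as 1.0/3.0:

```
using System;

class Program
{
    static void Main()
    {
        double third = 1.0 / 3.0;  // stored as approximately 0.33333333333333331482961625624739...

        // Convert.ToDecimal(Double) keeps at most 15 significant digits,
        // so the trailing binary noise is rounded away.
        Console.WriteLine(Convert.ToDecimal(third));  // 0.333333333333333
    }
}
```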

The conversion from `Single` (aka `float`) is documented to round at fewer digits:

> The Decimal value returned by this method contains a maximum of seven significant digits. If the value parameter contains more than seven significant digits, it is rounded using rounding to nearest.
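That seven-digit rounding is why converting 0.1f directly as a `float` recovers exactly 0.1: its stored value, 0.100000001490116119384765625, rounds to 0.1000000 at seven significant digits. A short sketch:

```
using System;

class Program
{
    static void Main()
    {
        float f = 0.1f;  // stored as exactly 0.100000001490116119384765625

        // Convert.ToDecimal(Single) keeps at most seven significant digits,
        // so the representation error in the float is rounded away.
        Console.WriteLine(Convert.ToDecimal(f));  // 0.1
    }
}
```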

If you instead convert to decimal from a `double` whose value was initially converted from 0.1f, it will display 0.100000001490116:

```
double dbl = 0.1f;             // dbl is exactly 0.100000001490116119384765625
decimal dcml = (decimal) dbl;  // rounded to 15 significant digits
Console.WriteLine(dcml);       // prints 0.100000001490116
```

Answered By - Jon Skeet
Answer Checked By - Gilberto Lyons (PHPFixing Admin)
