PHPFixing
Showing posts with label floating-point. Show all posts

Tuesday, November 1, 2022

[FIXED] Why does changing 0.1f to 0 slow down performance by 10x?

 November 01, 2022     c++, compilation, floating-point, performance, visual-studio-2010     No comments   

Issue

Why does this bit of code,

const float x[16] = {  1.1,   1.2,   1.3,     1.4,   1.5,   1.6,   1.7,   1.8,
                       1.9,   2.0,   2.1,     2.2,   2.3,   2.4,   2.5,   2.6};
const float z[16] = {1.123, 1.234, 1.345, 156.467, 1.578, 1.689, 1.790, 1.812,
                     1.923, 2.034, 2.145,   2.256, 2.367, 2.478, 2.589, 2.690};
float y[16];
for (int i = 0; i < 16; i++)
{
    y[i] = x[i];
}

for (int j = 0; j < 9000000; j++)
{
    for (int i = 0; i < 16; i++)
    {
        y[i] *= x[i];
        y[i] /= z[i];
        y[i] = y[i] + 0.1f; // <--
        y[i] = y[i] - 0.1f; // <--
    }
}

run more than 10 times faster than the following bit (identical except where noted)?

const float x[16] = {  1.1,   1.2,   1.3,     1.4,   1.5,   1.6,   1.7,   1.8,
                       1.9,   2.0,   2.1,     2.2,   2.3,   2.4,   2.5,   2.6};
const float z[16] = {1.123, 1.234, 1.345, 156.467, 1.578, 1.689, 1.790, 1.812,
                     1.923, 2.034, 2.145,   2.256, 2.367, 2.478, 2.589, 2.690};
float y[16];
for (int i = 0; i < 16; i++)
{
    y[i] = x[i];
}

for (int j = 0; j < 9000000; j++)
{
    for (int i = 0; i < 16; i++)
    {
        y[i] *= x[i];
        y[i] /= z[i];
        y[i] = y[i] + 0; // <--
        y[i] = y[i] - 0; // <--
    }
}

when compiling with Visual Studio 2010 SP1. The optimization level was -O2 with SSE2 enabled. I haven't tested with other compilers.


Solution

Welcome to the world of denormalized floating-point! They can wreak havoc on performance!!!

Denormal (or subnormal) numbers are kind of a hack to get some extra values very close to zero out of the floating point representation. Operations on denormalized floating-point can be tens to hundreds of times slower than on normalized floating-point. This is because many processors can't handle them directly and must trap and resolve them using microcode.
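Denormals are easy to observe directly. A quick sketch (in Python, whose float is an IEEE-754 double) shows values living below the smallest normal double:

```python
import sys

smallest_normal = sys.float_info.min   # 2.2250738585072014e-308
subnormal = smallest_normal / 2        # nonzero, but below the normal range

# Subnormals fill the gap between 0 and the smallest normal value.
assert 0.0 < subnormal < smallest_normal

print(5e-324)   # the smallest positive subnormal double
```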

If you print out the numbers after 10,000 iterations, you will see that they have converged to different values depending on whether 0 or 0.1 is used.

Here's the test code compiled on x64:

#include <cstdlib>    // system("pause")
#include <iostream>
#include <omp.h>      // omp_get_wtime(); build with OpenMP enabled (/openmp or -fopenmp)
using namespace std;

int main() {

    double start = omp_get_wtime();

    const float x[16]={1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,2.0,2.1,2.2,2.3,2.4,2.5,2.6};
    const float z[16]={1.123,1.234,1.345,156.467,1.578,1.689,1.790,1.812,1.923,2.034,2.145,2.256,2.367,2.478,2.589,2.690};
    float y[16];
    for(int i=0;i<16;i++)
    {
        y[i]=x[i];
    }
    for(int j=0;j<9000000;j++)
    {
        for(int i=0;i<16;i++)
        {
            y[i]*=x[i];
            y[i]/=z[i];
#ifdef FLOATING
            y[i]=y[i]+0.1f;
            y[i]=y[i]-0.1f;
#else
            y[i]=y[i]+0;
            y[i]=y[i]-0;
#endif

            if (j > 10000)
                cout << y[i] << "  ";
        }
        if (j > 10000)
            cout << endl;
    }

    double end = omp_get_wtime();
    cout << end - start << endl;

    system("pause");
    return 0;
}

Output:

#define FLOATING
1.78814e-007  1.3411e-007  1.04308e-007  0  7.45058e-008  6.70552e-008  6.70552e-008  5.58794e-007  3.05474e-007  2.16067e-007  1.71363e-007  1.49012e-007  1.2666e-007  1.11759e-007  1.04308e-007  1.04308e-007
1.78814e-007  1.3411e-007  1.04308e-007  0  7.45058e-008  6.70552e-008  6.70552e-008  5.58794e-007  3.05474e-007  2.16067e-007  1.71363e-007  1.49012e-007  1.2666e-007  1.11759e-007  1.04308e-007  1.04308e-007

//#define FLOATING
6.30584e-044  3.92364e-044  3.08286e-044  0  1.82169e-044  1.54143e-044  2.10195e-044  2.46842e-029  7.56701e-044  4.06377e-044  3.92364e-044  3.22299e-044  3.08286e-044  2.66247e-044  2.66247e-044  2.24208e-044
6.30584e-044  3.92364e-044  3.08286e-044  0  1.82169e-044  1.54143e-044  2.10195e-044  2.45208e-029  7.56701e-044  4.06377e-044  3.92364e-044  3.22299e-044  3.08286e-044  2.66247e-044  2.66247e-044  2.24208e-044

Note how in the second run the numbers are very close to zero.

Denormalized numbers are generally rare and thus most processors don't try to handle them efficiently.


To demonstrate that this has everything to do with denormalized numbers, if we flush denormals to zero by adding this to the start of the code:

_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);  // from <xmmintrin.h>

Then the version with 0 is no longer 10x slower and actually becomes faster. (This requires that the code be compiled with SSE enabled.)

This means that rather than using these weird lower precision almost-zero values, we just round to zero instead.

Timings: Core i7 920 @ 3.5 GHz:

//  Don't flush denormals to zero.
0.1f: 0.564067
0   : 26.7669

//  Flush denormals to zero.
0.1f: 0.587117
0   : 0.341406

In the end, this really has nothing to do with whether it's an integer or floating-point. The 0 or 0.1f is converted/stored into a register outside of both loops. So that has no effect on performance.



Answered By - Mysticial
Answer Checked By - Timothy Miller (PHPFixing Admin)

Saturday, October 29, 2022

[FIXED] How to use feof to read an undefined number of float values from a binary file?

 October 29, 2022     binaryfiles, c, eof, file, floating-point     No comments   

Issue

I write some float values to a binary file, and after that I want to read them back with another .c program.

This is how I write them:

#include <stdio.h>

int main() {
    /* Create the file */
    float x = 1.1;
    FILE *fh = fopen ("file.bin", "wb");
    if (fh != NULL) {
       for (int i = 0; i < 10; ++i)
       {
            x = 1.1*i;
            fwrite (&x,1, sizeof (x), fh);
            printf("%f\n", x);
       }
        fclose (fh);
    }

    return 0;
}

And this is how I want to read them:

#include <stdio.h>


int main(){
    /* Read the file back in */
    
    FILE *fh = fopen ("file.bin", "wb");
    
    float x = 7.7;
    fh = fopen ("file.bin", "rb");
    if (fh != NULL) {
       
        while(!feof(fh)){
            if(feof(fh))
                break;
            
            fread (&x, 1, sizeof (x), fh);
            printf ("Value is: %f\n", x);
        }
        


        fclose (fh);
    }

    return 0;
}

But I got back 7.7 which means that the reader never found any of the values.

How can I do this? What did I miss here?


Solution

In your second program, FILE *fh = fopen ("file.bin", "wb"); opens the file for writing and truncates it to zero length, destroying the data in it. Change that to FILE *fh = fopen ("file.bin", "rb"); and remove the later fh = fopen ("file.bin", "rb");.

Additionally, do not use feof for testing whether there is more data in a file. feof only reports whether a previous read operation hit end-of-file. It does not tell you that the file position indicator is currently at the end of the file if no attempt to read past it has been made. Instead, check the return value of fread to see how many items it read.

If you use size_t result = fread(&x, 1, sizeof (x), fh);, you ask fread to read sizeof (x) bytes, and it will return the number of bytes read. If that is less than sizeof (x), then a complete x was not read. In contrast, if you use size_t result = fread(&x, sizeof x, 1, fh);, you ask fread to read 1 object of size sizeof x. Then fread will return the number of complete objects read, which will be 0 or 1.



Answered By - Eric Postpischil
Answer Checked By - Marilyn (PHPFixing Volunteer)

Monday, October 24, 2022

[FIXED] How is Python's decimal (and other precise decimal libraries) implemented and why are they slower than built in floating point calculations?

 October 24, 2022     decimal, floating-point, python     No comments   

Issue

I've been reading the floating point guide to try to clarify some points about floating point numbers and I assume Python's decimal library is an implementation of "Limited-Precision Decimal" mentioned on the linked page.

It mentions that "Limited-Precision Decimal" is "Basically the same as a IEEE 754 binary floating-point, except that the exponent is interpreted as base 10. As a result, there are no unexpected rounding errors. Also, this kind of format is relatively compact and fast, but usually slower than binary formats."

Is Python decimal implemented the same way? If all else is equal in the representation besides the exponent being interpreted differently, why is it slower and why isn't this representation always preferred over the IEEE 754 implementation? Finally, why does using the exponent as base 10 prevent unexpected rounding errors?

Thanks!


Solution

It mentions that "Limited-Precision Decimal" [...] Is Python decimal implemented the same way?

No, internally Python's Decimal uses a base-10 exponent, along with an arbitrarily large integer. Since the size of the integer is unlimited, the potential precision is unlimited too.

why is [Python's Decimal] slower

There are a few reasons for this. First, adding two Decimal values of different exponents requires multiplying by ten, and multiplying by ten is more expensive than multiplying by two on a computer which uses binary. Second, doing an exact calculation requires more digits of precision than doing an approximate calculation. Third, IEEE754 floating point has hardware acceleration because it's such a common operation.

why isn't this representation always preferred over the IEEE 754 implementation?

Speed is a feature, and not all calculations benefit from being done exactly. The use of inexact calculations is more widespread than you might think. For example, Excel uses floating-point numbers internally. Yet, it has hundreds of millions of users, so evidently you can get pretty far with only floating point.

Finally, why does using the exponent as base 10 prevent unexpected rounding errors?

The key word in that sentence is "unexpected." You wouldn't be surprised to learn that a base 10 number system can't represent the number 1/3 without rounding it. We understand and are okay with not being able to represent 1/3, 1/7, and 1/9 perfectly accurately. But people are much less accepting of computer systems which can't represent 1/5 accurately.

If you tried to represent 0.2 in binary, you'd get 0.0011(0011), with the 0011 part repeating forever. A floating point number doesn't have an infinite number of bits, so it rounds off everything after 53 bits (assuming double precision) and approximates it.
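You can see that approximation directly: passing a float to Decimal converts the binary value exactly, so it exposes the digits the float really holds (a small illustration, not part of the original answer):

```python
from decimal import Decimal

# Constructed from the float 0.2: shows the value the double actually stores.
print(Decimal(0.2))
# -> 0.200000000000000011102230246251565404236316680908203125

# Constructed from the string "0.2": exactly two tenths.
print(Decimal("0.2"))
# -> 0.2
```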

This is not to say that Decimal is perfectly accurate. There are lots of situations that force rounding. For example, if you took the square root of two, that's an irrational number, and can't be represented as an exact decimal.

Example:

>>> from decimal import Decimal
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724')
>>> Decimal(2).sqrt() ** 2
Decimal('1.999999999999999999999999999')

Decimal is a way of doing math that agrees with the answer you'd get by doing it with pencil and paper. For this, it trades off speed and memory use.



Answered By - Nick ODell
Answer Checked By - Mary Flores (PHPFixing Volunteer)

[FIXED] Why is the data stored in a Float datatype considered to be an approximate value?

 October 24, 2022     approximation, decimal, floating-point, types     No comments   

Issue

I've never understood why a float datatype is considered an approximation while a decimal datatype is considered exact. I'm looking for a good explanation, thanks.


Solution

Well, you're right - it's misleading to make such a blanket statement. To understand it completely you need to grasp two things.

First, decimal is intended for storing (exactly) decimal values with a fixed number of decimal places - typically money (where the decimals are cents, for example). That's a very specific use case. It's not an exact store for any value; it's only exact for decimal values with a fixed number of decimal places, and the implementation is tailored to do that correctly.

Second, floats are intended to be a more general datatype - they are used to store "any" value - and the implementation reflects that (for example, it aims to cover a wide range of scales and to support operations as efficiently as possible). In particular, it uses a binary representation that cannot represent all decimal values exactly. So, for example, it can store 0.5 exactly, but it can't store 0.1 exactly. That's just a fact of life of the binary - base 2 - representation used, but it means that for money, floats are a bad idea: if you can't store 0.10 exactly as a float, then any calculation involving 10 cents may accumulate unexpected errors.

In other words, both have their limitations. The only way decimal is "more exact" than float is that it's easier to understand: the values it does store exactly are clearly defined, useful, and match the "natural" base-10 representation we use. In contrast, it's much harder to understand which values floats will store exactly and which they won't, because that depends on the underlying base-2 representation.
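The 0.5-versus-0.1 point can be checked in a few lines (Python here, but the behaviour comes from the IEEE-754 double representation, not the language):

```python
from decimal import Decimal

# 0.5 is a binary fraction (1/2), so float arithmetic on it is exact.
assert 0.5 + 0.5 + 0.5 == 1.5

# 0.1 is not representable in base 2, so the error shows up immediately.
assert 0.1 + 0.1 + 0.1 != 0.3

# A decimal type with fixed decimal places handles the same sum exactly.
assert Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")

print("all checks passed")
```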



Answered By - andrew cooke
Answer Checked By - Marie Seifert (PHPFixing Admin)

Monday, October 17, 2022

[FIXED] how to check if a float number is an integer in cpp?

 October 17, 2022     c++, floating-point, integer     No comments   

Issue

For example, 14.2 is not an integer but 14.0 is, from a mathematical perspective. What I've tried to do is the following: let n be a long double; to check if it's an integer I compared it to its integer form:

if (n == (int) n) 
{
    // n is an integer
}

It all looks perfect, but when I applied it, it didn't work. When I debugged the program I discovered that the long double number is never a whole number; it's not 14.0, instead it's 14.0000000000000000000002, the last 2 being added by the compiler.

Does someone know how to fix it?


Solution

The cleanest approach is to use floor() and not concern yourself with casting to integer types which makes the false assumption that there will be no overflow in converting a floating-point value to integer. For large floats that's obviously not true.

#include <iostream>
#include <cmath> // This is where the overloaded versions of floor() are.


bool is_whole(double d){
    return d==floor(d);
}

int main() {
    std::cout << is_whole(14.0) << '\n';
    std::cout << is_whole(14.2) << '\n';
    std::cout << is_whole(-123.4) << '\n';
    std::cout << is_whole(-120394794.0) << '\n';
    std::cout << is_whole(-120394794.44) << '\n';
    std::cout << is_whole(3681726.0) << '\n';

    return 0;
}

Expected Output:

1
0
0
1
0
1


Answered By - Persixty
Answer Checked By - Mary Flores (PHPFixing Volunteer)

Tuesday, October 4, 2022

[FIXED] How to fix PhpExcel extra decimal points bug?

 October 04, 2022     excel, floating-point, php, phpexcel, precision     No comments   

Issue

I'm using PHPExcel for a client project I'm working on. There's this bug I've been facing where extra decimal places are added randomly for some cells while reading the excel file. For example, instead of 56.25, I get 56.24999999999. I've traced this problem to PHPExcel itself and I've spent a lot of time looking through the documentation but I haven't found anything that can solve this. I can't just round off the numbers to a given decimal point because it happens randomly and different cells have numbers with different decimal points. Please help!!!

EDIT: It is critical that the decimal points remain exactly as they were manually input. So if the client manually inputs 34.25 in one cell and then 51.3456 in another cell, they should remain exactly like that! So rounding off isn't the ideal solution here.


Solution

After reading the comment by Mark Baker,

These aren't random; decimal float values cannot always be represented exactly on computers that use binary representation.... this is not a PHPExcel problem, or even a PHP problem.... it's a well-documented digital computer problem; and the solutions for formatting are provided by sprintf() or number_format() –

I discovered that making sure all numbers are saved as text solves this problem. PHP will read them as strings hence its weird floating point shenanigans won't be a problem.



Answered By - ninjacoder
Answer Checked By - Pedro (PHPFixing Volunteer)

Saturday, September 17, 2022

[FIXED] Why python print() prints a rounded value rather than the exact value for non-representable float

 September 17, 2022     floating-point, precision, printf, printing, python     No comments   

Issue

The value 0.1 is not representable as a 64-bit float. The exact value stored is roughly equal to 0.10000000000000000555.

https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/

You can highlight this behavior with this simple code:

timestep = 0.1
iterations = 1_000_000
total = 0

for _ in range(iterations):
    total += timestep

print(total - timestep * iterations)  # output is not zero but 1.3328826753422618e-06

I totally understand why 0.1 is not representable as an exact float64 value, but what I don't get is why, when I do print(0.1), it outputs 0.1 and not the underlying float64 value.

Of course, the underlying value has many more digits on a base 10 system so there should be some rounding involved, but I am looking for the specification for all values and how to control that.

I had the issue with some application storing data in database:

  • the python app (using str(0.1)) would show 0.1
  • another database client UI would show 0.10000000000000000555, which would throw off the end user

P.S.: I had other issues with other values.

Regards,


Solution

First, you are right, floats (single, double, whatever) have an exact value.

For a 64-bit IEEE-754 double, the nearest representable value to 0.1 is exactly 0.1000000000000000055511151231257827021181583404541015625 - quite long, as you can see. But representable floating point values all have a finite number of decimal digits, because the base (2) is a divisor of a power of 10.

For a REPL language like python, it is essential to have this property:

  • the printed representation of the float shall be reinterpreted as the same value

A consequence is that

  • any two different floats shall have different printed representations

To obtain those properties, there are several possibilities:

  • print the exact value. That can be many digits, and for the vast majority of humans, just noise.
  • print enough digits so that every two different floats have a different representation. For double precision, that's 17 significant digits in the worst case. So a naive implementation for representing floating point values would be to always print 17 significant digits.
  • print the shortest representation that would be reinterpreted unchanged.

Python, and many other languages, have chosen the 3rd solution, because it is considered annoying to print 0.10000000000000001 when the user has entered 0.1. Human users generally choose the shorter representation, and printed representation is for human consumption. The shorter, the better.

The bad property is that it can give the false impression that those floating point values store exact decimal values like 1/10. That misconception is addressed here and in many other places.
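The three options above can be compared directly in Python (a small illustration of the answer's point):

```python
x = 0.1

# Option 1: the exact stored value - many digits, mostly noise.
print(format(x, ".55f"))
# -> 0.1000000000000000055511151231257827021181583404541015625

# Option 2: 17 significant digits always round-trip for a double.
print("%.17g" % x)   # -> 0.10000000000000001
assert float("%.17g" % x) == x

# Option 3 (what Python does): the shortest string that round-trips.
print(repr(x))       # -> 0.1
assert float(repr(x)) == x
```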



Answered By - aka.nice
Answer Checked By - Robin (PHPFixing Admin)

Monday, August 15, 2022

[FIXED] How to show actual result for floating-point division in C?

 August 15, 2022     c, division, floating-point, output, printf     No comments   

Issue

I am learning the C programming language and, in Xcode 13.x, when I write this:

float a = 3 / 2;
float b = 1 / 3;
printf("3 divided by 2 as a float is %f, 1 divided by 3 as a float is %f\n", a, b);

The console shouts out this:
3 divided by 2 as a float is 1.000000, 1 divided by 3 as a float is 0.000000

I would expect it to show 1.500000 and 0.333333, but why doesn't it? I am sure the solution is obvious and simple but basic googling/searching did not help.
I tried to #include float.h thinking it would have helped but no luck there.
What am I blindly missing?
Thank you


Solution

You are trying to store the result of the division into a float, but the division itself happens first - and in both cases, both of its operands are integers (they have no fractional part). C therefore performs integer division: the fractional part of the result is discarded (truncated toward zero), and only then is the integer result converted to float and assigned to your variable.

How to correct this? Try adding a fractional part to at least one of the two numbers. For example:

float a = 3.0/2;

This should do it.
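If it helps to see the rule in isolation: C's / on two integers behaves like Python's floor-division operator for positive operands (a side-by-side sketch, not C itself):

```python
# What C computes for `3 / 2` and `1 / 3` with integer operands:
assert 3 // 2 == 1
assert 1 // 3 == 0

# What you get once one operand has a fractional part, e.g. `3.0 / 2` in C:
assert 3.0 / 2 == 1.5
assert abs(1.0 / 3 - 0.333333) < 1e-6

print("integer vs. float division checks passed")
```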



Answered By - Mario MateaÈ™
Answer Checked By - Clifford M. (PHPFixing Volunteer)

Saturday, August 13, 2022

[FIXED] Why is decimal more precise than double if it has a shorter range? C#

 August 13, 2022     c#, decimal, double, floating-point     No comments   

Issue

I'm reading the C# in a Nutshell book, and it shows a table of the numeric types (image omitted).

I'm having a hard time understanding the table. It says that double takes 64 bits of space and ranges from 10^-324 to 10^308. Decimal takes 128 bits of space, BUT it also says that it ranges from 10^-28 to 10^28. So what I'm understanding here is that decimal takes more space but provides a shorter range? That doesn't make much sense to me, since everyone agrees that decimal should be used when precision is required. Also, when doing a calculation like (1/3)*3, the desired result is 1, but only float and double give me 1; decimal gives me 0.9999... So why is decimal more precise? I don't really understand.


Solution

what I'm understanding here is that decimal takes more space but provides a shorter range?

Correct. It provides higher precision and smaller range. Plainly if you have a limited number of bits, you can increase precision only by decreasing range!

everyone agrees that decimal should be use when precision is required

Since that statement is false -- in particular, I do not agree with it -- any conclusion you draw from it is not sound.

The purpose of using decimal is not higher precision. It is smaller representation error. Higher precision is one way to achieve smaller representation error, but decimal does not achieve its smaller representation error by being higher precision. It achieves its smaller representation error by exactly representing decimal fractions.

Decimal is for those scenarios where the representation error of a decimal fraction must be zero, such as a financial computation.

Also when doing a calculation like = (1/3)*3, the desire result is 1, but only float and double give me 1

You got lucky. There are lots of fractions where the representation error of that computation is non-zero for both floats and doubles.

Let's do a quick check to see how many there are. We'll just make a million rationals and see:

    var q = from x in Enumerable.Range(1, 1000)
            from y in Enumerable.Range(1, 1000)
            where ((double)x)/y*y != x
            select x + " " + y;
    Console.WriteLine(q.Count()); // 101791

Over 10% of all small-number rationals are represented as doubles with sufficiently large representation error that they do not turn back into whole numbers when multiplied by their denominator!
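Python's float is the same IEEE-754 double, so the experiment can be repeated outside C# (assuming identical double rounding, which IEEE 754 arithmetic guarantees, the count matches the C# run):

```python
# Count small-number rationals x/y whose double representation error
# survives multiplying back by the denominator.
count = sum(
    1
    for x in range(1, 1001)
    for y in range(1, 1001)
    if x / y * y != x
)
print(count)   # matches the 101791 reported for the C# query above
```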

If your desire is to do exact arithmetic on arbitrary rationals then neither double nor decimal are the appropriate type to use. Use a big-rational library if you need to exactly represent rationals.

why is decimal more precise?

Decimal is more precise than double because it has more bits of precision.

But again, precision is not actually that relevant. What is relevant is that decimal has smaller representation error than double for many common fractions.

It has smaller representation error than double for representing fractions with a small power of ten in the denominator because it was designed specifically to have zero representation error for all fractions with a small power of ten in the denominator.

That's why it is called "decimal", because it represents fractions with powers of ten. It represents the decimal system, which is the system we commonly use for arithmetic.

Double, in contrast, was explicitly not designed to have small representation error. Double was designed to have the range, precision, representation error and performance that is appropriate for physics computations.

There is no bias towards exact decimal quantities in physics. There is such a bias in finance. Use decimals for finance. Use doubles for physics.



Answered By - Eric Lippert
Answer Checked By - Robin (PHPFixing Admin)

[FIXED] How to get truncated-to-integer division in C#?

 August 13, 2022     .net, c#, decimal, floating-point     No comments   

Issue

The following code keeps producing 5ft 11.8in instead of 4ft 11.8in when I type in 152 cm. I thought it was floating-point inaccuracy at first, so I tried using decimal, but that also gives me the same value. Is there any way to make it more accurate? The number in height before it is divided by 12 is 59.84251....

class Height
{
    private decimal height;
    private string unit;
    public static string ConvertedHeight(decimal height, string unit)
    {
        if (unit == "m")
        {
            height = height * 100 / 2.54m;
            string conversion = (height / 12).ToString("F0") + "ft " + (height % 12).ToString("F1") + "in\n";
            return conversion;
        }
        else if (unit == "cm")
        {
            height = height / 2.54m;
            string conversion = (height / 12).ToString("F0") + "ft " + (height % 12).ToString("F1") + "in\n";
            return conversion;

Solution

You need to cast your feet conversion to int in this case: ToString("F0") rounds to the nearest whole number (4.9868... becomes 5), whereas a cast to int truncates it to 4.

This is the code you need for division by 12.

(int) (height / 12)

Hope this helps!
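The same feet-and-inches split can be sketched in Python with divmod, which makes the truncation explicit (152 cm input, as in the question):

```python
height_in = 152 / 2.54               # 59.84251... inches
feet, inches = divmod(height_in, 12)
print(int(feet), round(inches, 1))   # -> 4 11.8
```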



Answered By - dj079
Answer Checked By - Marilyn (PHPFixing Volunteer)

[FIXED] How to get largest possible precision? (Python - Decimal)

 August 13, 2022     decimal, floating-point, memory, precision, python     No comments   

Issue

I'm using the Decimal class for operations that requires precision.

I would like to use 'largest possible' precision. With this, I mean as precise as the system on which the program runs can handle.

To set a certain precision it's simple:

import decimal
decimal.getcontext().prec = 123 #123 decimal precision

I tried to figure out the maximum precision the 'Decimal' class can compute:

print(decimal.MAX_PREC)
>> 999999999999999999

So I tried to set the precision to the maximum precision (knowing it probably won't work..):

decimal.getcontext().prec = decimal.MAX_PREC

But, of course, this throws a MemoryError (on division).

So my question is: How do I figure out the maximum precision the current system can handle?

Extra info:

import sys
print(sys.maxsize)
>> 9223372036854775807

Solution

From your reply above:

What if I just wanted to find more digits in pi than already found? what if I wanted to test the irrationality of e or mill's constant.

I get it. I really do. My one SO question, several years old, is about arbitrary-precision floating point libraries for Python. If those are the types of numerical representations you want to generate, be prepared for the deep dive. Decimal/FP arithmetic is notoriously tricky in Computer Science.

Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. – @tomscott

I think when others have said it's a "mistake" or "it depends" to wonder what the max precision is for a Python Decimal type on a given platform, they're taking your question more literally than I'm guessing it was intended. You asked about the Python Decimal type, but if you're interested in FP arithmetic for educational purposes -- "to find more digits in pi" -- you're going to need more powerful, more flexible tools than Decimal or float. These built-in Python types don't even come close. Those are good enough for NASA maybe, but they have limits... in fact, the very limits you are asking about.

That's what multiple-precision (or arbitrary-precision) floating point libraries are for: arbitrarily-precise representations. Want to compute pi for the next 20 years? Python's Decimal type won't even get you through the day.

The fact is, multi-precision binary FP arithmetic is still kinda fringe science. For Python, you'll need to install the GNU MPFR library on your Linux box, then you can use the Python library gmpy2 to dive as deep as you like.

Then, the question isn't, "What's the max precision my program can use?"

It's, "How do I write my program so that it'll run until the electricity goes out?"

And that's a whole other problem, but at least it's restricted by your algorithm, not the hardware it runs on.
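To make that concrete with the built-in Decimal: precision is just a knob, and the practical ceiling is memory and time rather than a fixed platform constant (a small demonstration):

```python
import decimal

# Requesting more digits simply makes results longer (and operations slower);
# there is no fixed "max precision the system can handle" short of MAX_PREC.
for prec in (28, 100, 1000):
    decimal.getcontext().prec = prec
    d = decimal.Decimal(2).sqrt()
    print(prec, len(str(d)))   # digit count grows with the requested precision
```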



Answered By - Joseph8th
Answer Checked By - Willingham (PHPFixing Volunteer)

Friday, August 12, 2022

[FIXED] How do I move a decimal point left while maintaining number of digits displayed?

 August 12, 2022     c++, decimal, floating-point, setw     No comments   

Issue

I currently have a function that takes in a vector of structs, including all floats, and should return some values after a simple calculation.

My cout function is simply

void taxPrint(std::vector<TaxPayer> &citizen)
{
    int loops = 0;

    std::cout << "\nTaxes due for this year: \n" << std::endl;

    do
    {
        std::cout << "Tax Payer #" << loops << " : $" << citizen[loops].taxes << std::endl;
        loops++;
    }
    while (loops + 1 <= SIZE);
}

and the resulting output in console is

Tax Payer #0 : $450000
Tax Payer #1 : $210000

That said, I want it to be

Tax Payer #0 : $4500.00
Tax Payer #1 : $2100.00

I've been messing around with setw() and setprecision() but I don't exactly understand how they work.


Solution

std::setw actually has nothing to do with value precision; it sets the field width, and together with std::setfill it is used for padding values, e.g. 001-0123-9124 (padded with 0).

Example: std::cout << std::setfill('0') << std::setw(5) << 5 << std::endl; will print 00005

Here is how to use it using std::fixed and std::setprecision:

void taxPrint(std::vector<TaxPayer> &citizen)
{
    int loops = 0;

    std::cout << "\nTaxes due for this year: \n" << std::endl;

    do
    {
        std::cout << "Tax Payer #" << loops << " : $" << std::setprecision(2) 
                  << std::fixed << citizen[loops].taxes / 100. << std::endl;
        loops++;
    }
    while (loops + 1 <= SIZE);
} // Don't miss this bracket!

Also, look at this question to know more about specifying a fixed precision to a value...



Answered By - Ruks
Answer Checked By - Marilyn (PHPFixing Volunteer)

[FIXED] how to validate that a floating number is not "0"

 August 12, 2022     c#, decimal, floating-point, mvvm, xamarin     No comments   

Issue

I have an <Entry> control where the user is allowed to enter decimal numbers, for example ...

0,2

0,02

5,405

But I do not want to enter a "0" (as a decimal), as follows

0,0

0,00

00,00

The control used in my view is an <Entry>, defined in

MyView.XAML:

 <Entry
          HorizontalOptions="FillAndExpand"    
          Placeholder="Cantidad"
          Keyboard="Numeric"
          MaxLength="5"
          Text="{Binding CantidadEstimado}"></Entry>

To then capture the value with a string type in the following way in my ViewModel

ViewModel.CS:

    string cantidadEstimado;

   public string CantidadEstimado
        {
            get
            {
                return cantidadEstimado;
            }
            set
            {
                if (cantidadEstimado != value)
                {
                    cantidadEstimado = value.setOnlyNumbersDouble();
                    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(CantidadEstimado)));
                }
            }
        }

As seen in my CantidadEstimado property, I call the setOnlyNumbersDouble() method, which allows the user to enter at most a single comma (","). I attach the following method ...

 public static string setOnlyNumbersDouble(this string s)
    {
        string sTemp = "";

        foreach (var item in s)
        {
            if (item == ',')
            {
                if (!sTemp.Contains(","))
                {
                    sTemp += item;
                }
            }
            else
            {
                sTemp += item; 
            }
        }
        return Regex.Replace(sTemp, @"[^0-9,]+", "");
    }

How can I validate that the user does not enter a "0" as a decimal? Can I reuse my setOnlyNumbersDouble() method? Any help would be appreciated.


Solution

You may use regular expressions (this checks that the input consists of nothing but zeros and commas):

bool isZero = Regex.IsMatch(input, "^[0,]+$");

Or

bool isZero = int.Parse(input.Replace(",", "")) == 0;

Instead of trying to force it to be a valid double by removing extra commas and non-numeric characters, try to validate it:

  public static bool IsValidDouble(this string s)
  {
      double d = 0;
      double.TryParse(s, out d);
      return d != 0; // will be false if the result is 0
      // return d > 0; if you don't want negative values
  }
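The same validate-then-check idea can be sketched in Python (illustrative only: the comma decimal separator is swapped for a dot before parsing, and the function name is made up for this sketch):

```python
def is_valid_nonzero(s):
    """Parse a comma-decimal string and check it is a nonzero number."""
    try:
        return float(s.replace(',', '.')) != 0
    except ValueError:
        # Not a parseable number at all
        return False

print(is_valid_nonzero('5,405'))  # True
print(is_valid_nonzero('0,00'))   # False
```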


Answered By - Ashkan Mobayen Khiabani
Answer Checked By - Robin (PHPFixing Admin)

Thursday, August 11, 2022

[FIXED] How *exactly* does C# convert double to decimal?

 August 11, 2022     c#, decimal, floating-point, precision     No comments   

Issue

The following C# code prints 0.1 - why?

var s = "0.1";
var dbl = Double.Parse(s);
var dcml = Convert.ToDecimal(dbl);
Console.WriteLine(dcml.ToString());

Isn't 0.1 unrepresentable in binary? So shouldn't it print 0.100000001490116?


Solution

The value of dbl is precisely 0.1000000000000000055511151231257827021181583404541015625.

That's 0.1 to 17 significant digits.

The documentation for Convert.ToDecimal(Double) states:

The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.

The conversion from Single (aka float) is documented to truncate earlier:

The Decimal value returned by this method contains a maximum of seven significant digits. If the value parameter contains more than seven significant digits, it is rounded using rounding to nearest.

If you call Convert.ToDecimal(Double) with a value initially converted from 0.1f, it will display 0.100000001490116:

double dbl = 0.1f;        
decimal dcml = (decimal) dbl;
Console.WriteLine(dcml);
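The exact binary values quoted above are easy to verify; a quick Python sketch (Python's decimal module exposes the exact value of a double, and struct can round-trip through single precision):

```python
import struct
from decimal import Decimal

# The exact value of the double nearest to 0.1 -- what Double.Parse("0.1") stores
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Single-precision 0.1 (what 0.1f stores), widened back to double
f32 = struct.unpack('f', struct.pack('f', 0.1))[0]
print(f32)  # 0.10000000149011612
```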


Answered By - Jon Skeet
Answer Checked By - Gilberto Lyons (PHPFixing Admin)

[FIXED] How to fix incorrect decimal places of a multiplication result

 August 11, 2022     decimal, floating-point, python, string     No comments   

Issue

I'm having a problem with the following calculation:

>>> print(float('32.31') * 1e9)
32310000000.000004

I need this result to be 32310000000.0 (without the false decimal place). This also occurs when using a float directly:

>>> print(32.31 * 1e9)
32310000000.000004

Is there a way to avoid the false decimal place? Analysing the string and rounding to the number of decimal places is not a preferred solution.

Many thanks in advance.

NOTE: The following works fine:

>>> print(32.32 * 1e9)
32320000000.0

so I'm really happy I found the problem above during testing.

EDIT: Thank you for your quick answers! Sorry, I've missed an important point. The method must also work for when the result is less than one, e.g.:

32.31 * 1e-9

...in this case I cannot use round(32.31 * 1e-9, 1)


Solution

One way to avoid your problem is to use the decimal module, which works in base ten and thus works the way humans would work (if we were much faster).

from decimal import Decimal
value = float(Decimal('32.31') * Decimal(1e9))

This yields the value you want,

32310000000.0

Another way is to use the fractions module, which works with exact values:

from fractions import Fraction
value = float(Fraction('32.31') * Fraction(1e9))

Note that in both of these methods, we must convert 1e9 and not just the decimal value. Using 1e9 as a float converts the intermediate values to float and the approximation problem pops up again. In either method, you could leave off the final conversion to float type and just continue to work with a Decimal or Fraction value. Either of these methods is somewhat slower than using float types: you gain accuracy at the expense of speed. That speed decrease may matter in some situations.


Regarding your edit to your question: using the value 1e-9 in either of my methods will still result in what you want, namely the value 3.231e-08.
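Putting both methods together in one runnable sketch (1e9 is an exact double, so Decimal(1e9) is safe; 1e-9 is not exact in binary, so the string form is used for it here):

```python
from decimal import Decimal
from fractions import Fraction

a = float(Decimal('32.31') * Decimal(1e9))    # decimal module, base ten
b = float(Fraction('32.31') * Fraction(1e9))  # fractions module, exact ratios
c = float(Decimal('32.31') * Decimal('1e-9')) # the small-value case from the edit
print(a, b, c)  # 32310000000.0 32310000000.0 3.231e-08
```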



Answered By - Rory Daulton
Answer Checked By - David Goodson (PHPFixing Volunteer)

[FIXED] How to check if a double has at most n decimal places?

 August 11, 2022     decimal, floating-point, java     No comments   

Issue

Currently i have this method:

static boolean checkDecimalPlaces(double d, int decimalPlaces){
    if (d==0) return true;

    double multiplier = Math.pow(10, decimalPlaces); 
    double check  =  d * multiplier;
    check = Math.round(check);      
    check = check/multiplier; 
    return (d==check);      
}

But this method fails for checkDecimalPlaces(649632196443.4279, 4), probably because I do base 10 math on a base 2 number.

So how can this check be done correctly?

I thought of getting a string representation of the double value and then check that with a regexp - but that felt weird.

EDIT: Thanks for all the answers. There are cases where I really get a double and for those cases I implemented the following:

private static boolean checkDecimalPlaces(double d, int decimalPlaces) {
    if (d == 0) return true;

    final double epsilon = Math.pow(10.0, ((decimalPlaces + 1) * -1));

    double multiplier = Math.pow(10, decimalPlaces);
    double check = d * multiplier;
    long checkLong = (long) Math.abs(check);
    check = checkLong / multiplier;

    double e = Math.abs(d - check);
    return e < epsilon;
}

I changed the round to a truncation. It seems that the computation done in round increases the inaccuracy too much, at least in the failing test case.
As some of you pointed out, if I can get at the 'real' string input I should use BigDecimal to check, and so I have done:

BigDecimal decimal = new BigDecimal(value);
BigDecimal checkDecimal = decimal.movePointRight(decimalPlaces);
return checkDecimal.scale() == 0;

The double value I get comes from the Apache POI API that reads Excel files. I did a few tests and found out that although the API returns double values for numeric cells, I can get an accurate representation when I immediately format that double with DecimalFormat:

DecimalFormat decimalFormat = new DecimalFormat();
decimalFormat.setMaximumIntegerDigits(Integer.MAX_VALUE);
// don't use grouping for numeric-type cells
decimalFormat.setGroupingUsed(false);
decimalFormat.setDecimalFormatSymbols(new DecimalFormatSymbols(Locale.US));
value = decimalFormat.format(numericValue);

This also works for values that can't be represented exactly in binary format.


Solution

The test fails because you have reached the accuracy limit of the binary floating point representation, which is approximately 16 digits for IEEE754 double precision. Multiplying 649632196443.4279 by 10000 will truncate the binary representation, leading to errors when rounding and dividing afterwards, thereby invalidating the result of your function completely.

For more details see http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

A better way would be to check whether the digits beyond the n-th decimal place are below a certain threshold. If d - round(d) is less than some epsilon, the decimal representation of d has no significant decimal places. Similarly, if d * 10^n - round(d * 10^n) is less than epsilon, d can have at most n significant decimal places.
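The threshold idea can be sketched as follows (in Python, for brevity; the function name and tolerance are illustrative). Note the tolerance is scaled with the magnitude of the value, because a plain absolute epsilon fails once the scaled value approaches the 16-digit limit, as in the failing test case:

```python
def has_at_most_n_places(d, n, rel_eps=1e-9):
    """Check whether d has at most n significant decimal places,
    within a relative tolerance (needed because d is binary floating point)."""
    scaled = d * 10 ** n
    # Distance from the nearest integer, compared against a magnitude-scaled epsilon
    return abs(scaled - round(scaled)) < rel_eps * max(1.0, abs(scaled))

print(has_at_most_n_places(1.25, 2))               # True
print(has_at_most_n_places(1.234, 2))              # False
print(has_at_most_n_places(649632196443.4279, 4))  # True
```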

Use Jon Skeet's DoubleConverter to check for the cases where d isn't accurate enough to hold the decimal places you are looking for.



Answered By - David Schmitt
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

[FIXED] What is the point of writing integer in hexadecimal, octal and binary?

 August 11, 2022     decimal, floating-point, int, swift, var     No comments   

Issue

I am well aware that one is able to assign a value to an array or constant in Swift and have those value represented in different formats.

For Integer: One can declare in the formats of decimal, binary, octal or hexadecimal.

For Float or Double: One can declare in the formats of either decimal or hexadecimal and able to make use of the exponent too.

For instance:

var decInt = 17
var binInt = 0b10001
var octInt = 0o21
var hexInt = 0x11

All of the above variables gives the same result which is 17.

But what's the catch? Why bother using those other than decimal?


Solution

Some notations are much easier for people to read, even though the resulting value is the same. Think, for example, of colour notation (hexadecimal) or file-permission notation (octal).
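The equivalence itself is easy to check; Python happens to use the same literal prefixes as Swift (0b, 0o, 0x), so the question's example can be sketched there:

```python
# The same value, 17, written in four different bases.
dec_int = 17
bin_int = 0b10001
oct_int = 0o21
hex_int = 0x11
print(dec_int == bin_int == oct_int == hex_int)  # True
```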



Answered By - CarlosMorente
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

[FIXED] why does sum of decimals still have floating point errors?

 August 11, 2022     decimal, floating-point, python-3.x     No comments   

Issue

I'm using the decimal module to try to avoid floating point errors. The decimal module's documentation says:

Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.

But when I try sum with decimals, I still get those floating point errors.

>>> decimal.Decimal(4.04) + decimal.Decimal(4.04)
Decimal('8.080000000000000071054273576')

Why is this?


Solution

Try putting strings around your float literals, like so:

decimal.Decimal('4.04')+decimal.Decimal('4.04')

In the code in your question, the raw binary (base 2) "float" type is passed to Decimal. When you use strings to represent the number 4.04 for example, Decimal represents '4.04' precisely in base 10.
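The difference is easy to see side by side (a quick sketch):

```python
from decimal import Decimal

# Constructed from a float: Decimal captures the exact binary value of 4.04
print(Decimal(4.04))
# Constructed from strings: Decimal represents 4.04 exactly in base 10
print(Decimal('4.04') + Decimal('4.04'))  # 8.08
```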



Answered By - Jerfov2
Answer Checked By - Cary Denson (PHPFixing Admin)

Wednesday, August 10, 2022

[FIXED] How do I represent the return of a specific probability fraction to be shown as a float but limited to two decimal places?

 August 10, 2022     decimal, floating-point, format, python, python-3.x     No comments   

Issue

So say I have something that looks like this:

def probability_color():
  if color == 'red':
    return float(3/10)

so essentially it should return .30 instead of anything longer. My specific problem includes fractions that aren't as clean as this example so I'm getting very long decimal float values in this particular scenario. Is there a simple solution that would format it rather than using something like round()?


Solution

Just use Python's string formatting to round the value. Here is the improved code:

def probability_color():
  if color == 'red':
    # Format to two decimal places, then convert back to float
    return float('%.2f' % (3/10))

Note that converting back to float drops the trailing zero on display (0.3, not 0.30); keep the formatted string instead if the two-digit display matters.


Answered By - xnarf
Answer Checked By - Pedro (PHPFixing Volunteer)

[FIXED] How to differentiate between a one-decimal float and integer in JavaScript

 August 10, 2022     decimal, floating-point, integer, javascript, types     No comments   

Issue

I'm using a function to to verify whether the number passed as a parameter is a float or an integer in JavaScript.
The method is working for numbers such as '4.34' i.e. with a non-zero decimal but it fails for numbers such as '3.0', returning integer instead of float.
This is the code I have been able to come up with so far

function dataType(x) {
    if (typeof x === 'number') {
        if (Math.round(x) === x) {
            return 'integer';
        }
        return 'float';
    }
}

console.log(dataType(8)); //integer
console.log(dataType(3.01)); //float
console.log(dataType(3.0)); // should return float

I would really appreciate some help on how to do this in JavaScript.
Thanks in advance.

Update: I want console.log(dataType(3.0)); to return float.


Solution

Every number in JS is a float. There is only one number type in JS (Number).

Thus, there's no cross-browser way of guaranteeing a difference between:

3
3.0
3.0000000000000

et cetera.

Even in a modern browser, (3.0000).toString( ) === "3"; //true.

Trying to cast or enforce numeric type safety in JS is rather pointless.
Work on the numbers in the Number format, convert into and out of string, using desired precision, as needed.



Answered By - Norguard
Answer Checked By - Cary Denson (PHPFixing Admin)
Copyright © PHPFixing