

Why 0.1+0.2 != 0.3

  sonic0002      2014-11-19 05:32:46

In programming languages such as JavaScript, C/C++, Java and MATLAB, you will find that you get unexpected results when doing floating point calculations. For example, when calculating 0.1 + 0.2, you will not get 0.3:

> 0.1 + 0.2 == 0.3
false

> 0.1 + 0.2
0.30000000000000004

Don't be surprised by this result; it is a consequence of the IEEE 754 standard. Most decimal fractions cannot be represented exactly under IEEE 754 because:

  • Only a fixed number of bits is allocated for representing the number
  • Decimal fractions like 0.1 have infinitely repeating binary expansions, so they must be rounded to fit

JavaScript uses a 64-bit floating point representation, the same as Java's double. A floating point number consists of three components: a sign bit, an exponent and a mantissa. The number 1/10 can be expressed as 0.1 in decimal, but in binary it is 0.0001100110011001100110011001100110011001100110011001….., an infinitely repeating expansion. Because the mantissa has only 52 bits, the expansion is rounded off starting from bit 53.
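The rounding described above can be observed directly in JavaScript: `Number.prototype.toString(2)` prints the binary expansion of the double that is actually stored.

```javascript
// The repeating pattern 0011 in 1/10's binary expansion is cut off
// at the 52-bit mantissa and rounded, so the stored value is only
// an approximation of 0.1.
console.log((0.1).toString(2));

// Both 0.1 and 0.2 are stored slightly too large, so their sum
// overshoots 0.3 by one unit in the last place.
console.log(0.1 + 0.2);           // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);   // false
```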

You need to be very careful when handling floating point calculations, especially when comparing floating point numbers for equality. How do you compare floating point numbers if you have to? There are two solutions:

1. Check whether the difference of the two numbers falls within a small tolerance

var x = 0.1 + 0.2;
var y = 0.3;
var equal = (Math.abs(x - y) < 0.000001);  // true

2. Use toPrecision or toFixed in JavaScript

(0.1 + 0.2).toPrecision(10) == 0.3
> true

(0.1 + 0.2).toFixed(10) == 0.3
> true
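Note that toPrecision and toFixed both return strings, not numbers; the comparisons above succeed because loose equality (==) converts the string back to a number before comparing. A strict comparison (===) against a number would fail:

```javascript
// toFixed and toPrecision produce strings.
console.log((0.1 + 0.2).toFixed(10));          // "0.3000000000"
console.log((0.1 + 0.2).toFixed(10) == 0.3);   // true  (string coerced to number)
console.log((0.1 + 0.2).toFixed(10) === 0.3);  // false (no coercion)
```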






