And now for something completely different...
Binary Floating Point Comparisons
You never expected that on this blog, did you? :P
(Edit: yes, I'm sure I'm getting this entire discussion rather wrong, but I never claimed to be a software engineer. I just hang out around them all the time and work at a software company, that's all. :P)
The Theory
Computers don't think like you or me. We think in this nifty thing called base 10. Everything is broken down into 10s and multiples of 10. Everything in our system counts up 0-1-2-3-4-5-6-7-8-9, and when it hits 10 it resets back to zero and carries over. That's why when you look at a number your math teacher talked about the "tens place" and the "hundreds place" (remember, a hundred is just 10 x 10 after all). All of this is sensible, given that we have 10 fingers ... humans just automatically assumed 10 was the "cool" number to use as a basis for their math. But it's not necessarily the only number you can use.
You can have base 5. Or base 16. Or base 127. It doesn't matter; they're all valid. But the most interesting one for this discussion is base 2. It's what computers think in. You count 0-1, and then you're already carrying over to the next digit. Ones and zeros. On and off. Black and white.
And it turns out that if you take 1/10 (AKA 0.1) in base 10 and write it in base 2, a computer says 0.000110011001100110011001100110011... with that 0011 pattern repeating on to infinity. O_o Freaky, yes?
The Problem
Numbers can't be represented infinitely in a computer. So, that big, long representation of 0.1 has to get rounded off somewhere. 0.00011001100110011001100110011001100110011001100110011010 is how it's usually done (in double precision floating point). But notice! The last few digits aren't quite in the repeating pattern anymore!
That's because the number was rounded by the computer. And suddenly it's not quite 0.1 anymore. It's actually more like 0.1000000000000000055511151231257827 now. O_o!
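You can see this for yourself with a tiny C program (just a sketch; any C compiler should print roughly the same digits). It asks for way more decimal places than 0.1 can actually hold, so the rounding error at the tail shows up:

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;   /* single precision */
    double d = 0.1;    /* double precision */

    /* Print 20 decimal digits so the rounding error becomes visible. */
    printf("float : %.20f\n", f);   /* roughly 0.10000000149011611938 */
    printf("double: %.20f\n", d);   /* roughly 0.10000000000000000555 */
    return 0;
}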
The Even Bigger Problem
Imagine having a software product that was all about making comparisons between floating point numbers. ... Yeah, suddenly you have to actually care about this. >_<
The Solution
if(fabs(a - b) <= epsilon * fabs(a))
or
if(fabs(a - b) <= epsilon)
where you choose epsilon to be some reasonably small number. (The first version scales the tolerance with the size of a, so it's a relative comparison; the second just uses a fixed, absolute tolerance.)
Meaning, check to see if the difference between the two numbers is really, really small, and if so, just act like they're equal to each other.
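Put together, it looks something like this (a minimal C sketch; nearly_equal is just a name I made up, and the 1e-9 epsilon is only for illustration, real code would pick it to suit the data):

#include <stdio.h>
#include <math.h>

/* Treat a and b as equal if they differ by no more than a relative tolerance. */
int nearly_equal(double a, double b, double epsilon)
{
    return fabs(a - b) <= epsilon * fabs(a);
}

int main(void)
{
    double a = 0.1 + 0.2;   /* not exactly 0.3, thanks to rounding */
    double b = 0.3;

    printf("a == b       : %s\n", (a == b) ? "yes" : "no");                 /* no! */
    printf("nearly equal : %s\n", nearly_equal(a, b, 1e-9) ? "yes" : "no"); /* yes */
    return 0;
}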