I thought I understood floating point problems, but I guess I don't, and now I
have a lot of code that is acting strangely given my understanding. E.g.:

0.2 - 0.1 == 0.1
ans = 1 # GREAT!

1.2 - 1.1 == 0.1
ans = 0 # HUH?

Here's one concrete example from some code I was playing with. All this code
does is rebuild the decimal column using start:increment:end syntax and
compare the old column to the new.

test = [
96.02,3827;
96.03,5341;
96.04,6107;
96.05,7134;
96.06,8706;
96.07,12367;
96.08,13971;
96.09,16900;
96.10,15700;
96.11,18791;
96.12,22168;
96.13,21973;
96.14,21800;
96.15,19626;
96.16,17601;
96.17,16496;
96.18,14921;
96.19,10819;
96.20,9087;
96.21,6056;
96.22,5071;
96.23,4311];
newfirstcol = (min(test(:,1)):0.01:max(test(:,1)))'; # should have the same values as test's 1st column...
sum(ismember(newfirstcol,test(:,1))) # matches only 16 times...

Is there a simple and consistent way to avoid problems like these? I would
have thought that with only 2 decimal places, floating point problems (which
is what I assume this is) wouldn't be an issue. Thanks for the help.

Sent from: http://octave.1599824.n4.nabble.com/Octave-General-f1599825.html

_______________________________________________
Help-octave mailing list
[hidden email]
https://lists.gnu.org/mailman/listinfo/help-octave
On Tue, Oct 24, 2017 at 3:27 PM, AG <[hidden email]> wrote:
> I thought I understood floating point problems but I guess I don't and now I

It really comes down to how numbers are represented in floating point. Effectively they are stored as an exponent of 2 and a mantissa, so that the number being represented is mantissa * 2^exponent. Some numbers look simple in base 10 but need a lot of digits in the mantissa, so when you do additions and subtractions you accumulate error in the trailing binary digits.

Anyway, the way I have usually seen floating point "equality" compared is, instead of a == b, to use abs(a - b) < tol. Unfortunately, that means you need to come up with a tolerance value. I have seen people use a tolerance of "eps", the machine epsilon, which is the distance between the number 1 and the next representable number. This works fine if the numbers are near 1, but fails if they are very large or very small. A simple modification is to use eps(a), or maybe eps(max(a,b)); I think this should work in most cases. There are probably other possibilities, but for most cases abs(a - b) < eps(a) would be a good general replacement.

Bill
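The tolerance comparison Bill describes can be collected into a small helper. The function name and the multiplier on eps are my choices, not from the thread; a bare eps(a) can be too tight when the operands themselves already carry a few rounding steps of error, as 1.2 - 1.1 does.

```
function tf = approxeq (a, b)
  ## Element-wise "equal within tolerance" for floating point values.
  ## The tolerance scales with the magnitude of the operands; the
  ## factor 16 is a judgment call, large enough to absorb a few
  ## rounding steps but far below any 2-decimal data spacing.
  tf = abs (a - b) < 16 * eps (max (abs (a), abs (b)));
endfunction
```

With this, approxeq (1.2 - 1.1, 0.1) returns true even though 1.2 - 1.1 == 0.1 does not.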
In reply to this post by AG
AG,
> -----Original Message-----
> From: Help-octave [mailto:help-octave
>
> I thought I understood floating point problems but I guess I don't and now I
> have a lot of code that is acting strangely given my understanding.
> E.g.:
> 0.2 - 0.1 == 0.1
> ans = 1 # GREAT!
>
> 1.2 - 1.1 == 0.1
> ans = 0 # HUH?
>
> Here's one concrete example from some code I was playing with. All this code
> does is rebuild the decimal column using start:increment:end syntax and
> compare the old column to the new.
>
> test = [
> 96.02,3827;
> ...
> 96.23,4311];
> newfirstcol = (min(test(:,1)):0.01:max(test(:,1)))'; # should have same values as test's 1st
> column...
> sum(ismember(newfirstcol,test(:,1))) # matches only 16 times...
>
> Is there a simple and consistent way to avoid problems like these? I would
> have thought that with only 2 decimal places floating point problems (which is
> what I assume this is) wouldn't be a problem. Thanks for the help.

As I have been taught, you should NEVER use an equality comparison on floating-point numbers. At the least, compare within +/- eps of one of the values. The 2-decimal values you show become infinitely repeating binary fractions when converted. I don't know offhand why some comparisons work and others don't, but if you did the conversion by hand you would probably be enlightened. 0.1 decimal is 0.000110011001100110011... binary, for instance.

Regards,
Allen
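One way to see the repeating-fraction effect Allen describes is to print more digits than Octave's default display shows (the exact digit strings in the comments assume IEEE 754 double precision):

```
printf ("%.20f\n", 0.1)        # 0.10000000000000000555
printf ("%.20f\n", 1.2 - 1.1)  # 0.09999999999999986677
```

The two values differ only out around the 16th significant digit, which is exactly why == fails while an abs(a - b) comparison against a small tolerance succeeds.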
In reply to this post by AG
On Tue, 24 Oct 2017 13:27:17 -0700 (MST)
AG <[hidden email]> wrote:
> I thought I understood floating point problems but I guess I don't
> and now I have a lot of code that is acting strangely given my
> understanding. E.g.:
> 0.2 - 0.1 == 0.1

We are preconditioned to understand base 10. Computers don't work in base 10. But even staying in base 10, we can look at the problem of:

(1 / 3) * 3

In base 10, 1/3 is 0.333333333333333.... If we multiply that by 3, we get 0.99999999999.... We do _NOT_ get 1.0.

If we could choose bases to work in, base 12 might be reasonable; lots of repeating fractions in base 10 become finite fractions in base 12. But computers more or less work in base 2, and converting integer ratios into "real numbers" produces many more endlessly repeating fractions in base 2 than in base 10.

The first thing to consider: is the number we want to work with actually known? The numbers e and pi cannot be represented exactly in a finite number of bits. If the number you want to use cannot be written exactly in some kind of base 2 floating point representation, you will have error. As hinted at above, base 2 leads to many more numbers which cannot be represented exactly in some word size than base 10 does.

In base 10 arithmetic, 1/7 is 0.124857 124857 124857 124857 124857 ...

I like this as an example, as the other 7'ths fall out of the same sequence:

2/7 = 0.24857 124857 124857 124857 ...
3/7 = 0.4857 124857 124857 124857 ...
4/7 = 0.57 124857 124857 124857 ...
5/7 = 0.724867 124857 124857 124857 ...
6/7 = 0.85 124857 124857 124857 ...

Regardless of what precision you use, integers divided by 7 will typically end up with error, as the fractions usually go on forever. You have to round or chop someplace, and that is where your error starts.

If you solve a problem in single precision arithmetic, then solve it in double precision arithmetic, and you get substantially the same solution, it means your answer _MIGHT_ be accurate. With 64 bit processors, adding on quad precision arithmetic and getting the same answer adds further weight to the suggestion that the answer _MIGHT_ be accurate.

Some problems are easy to solve. Some problems are easy to solve by good algorithms. Some problems are difficult to solve by any means available. You need to know something about the problem you are trying to solve. Just writing a bit of code (be it in Octave or anything else) and expecting that you are getting "_THE ANSWER_" is naive.

Way back when, if you were doing linear systems type problems, you could choose from some fast solvers (scaling as N^2) or do things by SVD (scaling as N^3). If N is big, using SVD makes for long computation times, but SVD allows one to remove or reduce the effects of error. Do you want the wrong answer quickly, or a more correct answer more slowly? Being correct enough is another problem.

Gord
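The base-2 analogue of Gordon's (1/3)*3 example is easy to demonstrate in Octave: 0.1 has an infinitely repeating binary expansion, so summing three copies of its rounded representation drifts off 0.3 (the printed digits assume IEEE 754 doubles):

```
disp (0.1 + 0.1 + 0.1 == 0.3)        # prints 0 (false)
printf ("%.17f\n", 0.1 + 0.1 + 0.1)  # 0.30000000000000004
```

The same drift is what makes the colon-generated column in the original post stop matching the literal values partway through.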
In reply to this post by AG
Ozzy, Gordon & Allen, thanks a bunch for taking the time to explain this to me. It was a HUUUGE help.

Given your inputs, I ended up creating a 'rounding' function that uses the idea of a tolerance and cleaving decimal places. As an example, for rounding to two decimals, myRound(num) does something like:

num = num + .001;
roundedNum = floor( num*100 ) / 100;

Everything seems to be working perfectly now. Thanks again for your help and time.
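AG's description above can be collected into a complete function; the body below is my reconstruction around the two lines quoted, not code posted in the thread.

```
function roundedNum = myRound (num)
  ## Snap values to 2 decimal places: nudge past any downward
  ## representation error, then cleave the remaining decimals.
  num = num + 0.001;
  roundedNum = floor (num * 100) / 100;
endfunction
```

Applied to both newfirstcol and test(:,1) before the ismember call, this kind of snapping should make all 22 rows match. Octave's built-in round (num * 100) / 100 is a similar, more conventional idiom.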
In reply to this post by Gordon Haverland
Please forgive me, Gordon, for correcting your typo. Any number that is not an exact multiple of seven, divided by seven, will give the periodic sequence 428571, starting somewhere in that sequence. Your sequences are all based on 248571, and have the 2 and 4 swapped with each other.

Giovanni Ciriani - Windows 10, Octave 4.2.1, configured for x86_64-w64-mingw32
On Wed, 25 Oct 2017 13:33:23 -0700 (MST)
gciriani <[hidden email]> wrote:
> Please forgive me Gordon for correcting your typo. Any number that is
> not an exact multiple of seven, divided by seven will give the
> periodic sequence 428571 starting somewhere in that sequence. Your
> sequences are all based on 248571 and have the 2 and 4 swapped with
> each other.

Oops, right you are. What is in my head is 142857; you point out that I wrote 124857, which is wrong. Thanks.

The first fraction starts 0.14
The second starts 0.28
The third starts 0.42
The fourth starts 0.57
The fifth starts 0.71
The sixth starts 0.85

Gord
Yes, but the repeating decimals are incorrect too. If we use the notation 0.(xxxx) for the repeating digits, we have:

1/7 = 0.(142857)
2/7 = 0.(285714)
3/7 = 0.(428571)
4/7 = 0.(571428)
5/7 = 0.(714285)
6/7 = 0.(857142)

I apologize for straying from the OP.

Giovanni Ciriani - Windows 10, Octave 4.2.1, configured for x86_64-w64-mingw32
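Giovanni's corrected expansions are easy to check from Octave itself by printing a handful of decimals (note that the last printed digit may be rounded up where the next digit of the period is 5 or more):

```
for k = 1:6
  printf ("%d/7 = %.12f\n", k, k/7);
endfor
```

Each line shows the same 142857 cycle entered at a different starting point.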