

Hello all,
I would like to bring the following bug report to your attention :
http://savannah.gnu.org/bugs/?42589

A simple example of the regression described in this report is the
following:
octave:1> (9:10) * .1 - 1
ans =
  -1.0000e-01   2.7756e-17
The second element should be 0.
Rik thinks that this bug might be hard to solve... which is why I'm
bringing this up on this list :)
@++
Julien


On 06/20/2014 04:28 PM, Julien Bect wrote:
> Hello all,
>
> I would like to bring the following bug report to your attention :
>
> http://savannah.gnu.org/bugs/?42589
>
> A simple example of the regression described in this report is the
> following :
>
> octave:1> (9:10) * .1 - 1
> ans =
>
> -1.0000e-01 2.7756e-17
>
> The second element should be 0.
>
> Rik thinks that this bug might be hard to solve... which is why I'm
> bringing this up on this list :)
Please see my comments on the bug tracker.
jwe


John W. Eaton wrote
On 06/20/2014 04:28 PM, Julien Bect wrote:
> Hello all,
>
> I would like to bring the following bug report to your attention :
>
> http://savannah.gnu.org/bugs/?42589
>
> A simple example of the regression described in this report is the
> following :
>
> octave:1> (9:10) * .1 - 1
> ans =
>
> -1.0000e-01 2.7756e-17
>
> The second element should be 0.
>
> Rik thinks that this bug might be hard to solve... which is why I'm
> bringing this up on this list :)
Please see my comments on the bug tracker.
jwe
I have a similar problem with octave 3.8.1 on cygwin.
See below:
octave:50> t=[-2:0.1:0]
t =
Columns 1 through 5:
  -2  -1.9  -1.8  -1.7  -1.6
Columns 6 through 10:
  -1.5  -1.4  -1.3  -1.2  -1.1
Columns 11 through 15:
  -1  -0.9  -0.8  -0.7  -0.6
Columns 16 through 20:
  -0.5  -0.4  -0.3  -0.2  -0.0999999999999999
Column 21:
  0
The range command does not work well with a floating-point step when starting from a negative integer, as above.
I am trying to upgrade to octave 3.8.2 to see if this problem is fixed.
I post it just in case.
Thank you.
jo


Am 23.09.2014 03:27, schrieb s.jo:
> John W. Eaton wrote
>> On 06/20/2014 04:28 PM, Julien Bect wrote:
>>> Hello all,
>>>
>>> I would like to bring the following bug report to your attention :
>>>
>>> http://savannah.gnu.org/bugs/?42589
>>>
>>> A simple example of the regression described in this report is the
>>> following :
>>>
>>> octave:1> (9:10) * .1 - 1
>>> ans =
>>>
>>> -1.0000e-01 2.7756e-17
>>>
>>> The second element should be 0.
>>>
>>> Rik thinks that this bug might be hard to solve... which is why I'm
>>> bringing this up on this list :)
>>
>> Please see my comments on the bug tracker.
>>
>> jwe
>
> I have a similar problem with octave 3.8.1 on cygwin.
> See below:
> octave:50> t=[-2:0.1:0]
> t =
>
> Columns 1 through 5:
>
>   -2  -1.9  -1.8  -1.7  -1.6
>
> Columns 6 through 10:
>
>   -1.5  -1.4  -1.3  -1.2  -1.1
>
> Columns 11 through 15:
>
>   -1  -0.9  -0.8  -0.7  -0.6
>
> Columns 16 through 20:
>
>   -0.5  -0.4  -0.3  -0.2  -0.0999999999999999
>
> Column 21:
>
>   0
>
> The range command does not work well with a floating-point step when
> starting from a negative integer, as above.
>
> I am trying to upgrade to octave 3.8.2 to see if this problem is fixed.
>
> I post it just in case.
The results are perfectly okay. This is how floating point works; see
IEEE 754. The above-mentioned bug report also gives some explanation in
the comments.
The “problem” occurs because .1 is not a binary floating point number,
and the approximation
0.1000000000000000055511151231257827021181583404541015625 is used
instead. 10 * 0.1 suffers (or in this case benefits) from rounding.
In the last example with t=[-2:0.1:0] only the numbers -2, -1.5, -1,
-0.5, and 0 have exactly the value that is displayed. Column 20
additionally shows a propagation of errors. You can check this with the
bit format:
octave:3> format bit
octave:4> -.1
ans = 1011111110111001100110011001100110011001100110011001100110011010
(this is -0.1000000000000000055511151231257827021181583404541015625)
octave:5> -2 + 19 * .1
ans = 1011111110111001100110011001100110011001100110011001100110010000
(this is -0.09999999999999997779553950749686919152736663818359375)
octave:6> t(20)
ans = 1011111110111001100110011001100110011001100110011001100110010000
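The same doubles can be inspected from any IEEE 754 environment. Here is a small Python sketch (Python is used only for illustration; the values are properties of binary64, not of Octave) that reproduces the numbers above:

```python
from decimal import Decimal

# The double nearest to 0.1 is slightly larger than 1/10;
# Decimal(0.1) prints its exact stored value.
exact = Decimal(0.1)
print(exact)
# 0.1000000000000000055511151231257827021181583404541015625

# 10 * 0.1 happens to round back to exactly 1.0 ...
print(10 * 0.1 == 1.0)   # True

# ... but stepping 19 times by 0.1 from -2, as a range might,
# does not land exactly on -0.1:
print(-2 + 19 * 0.1 == -0.1)   # False
```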


Oliver Heimlich wrote
Am 23.09.2014 03:27, schrieb s.jo:
> John W. Eaton wrote
>> On 06/20/2014 04:28 PM, Julien Bect wrote:
>>> Hello all,
>>>
>>> I would like to bring the following bug report to your attention :
>>>
>>> http://savannah.gnu.org/bugs/?42589
>>>
>>> A simple example of the regression described in this report is the
>>> following :
>>>
>>> octave:1> (9:10) * .1 - 1
>>> ans =
>>>
>>> -1.0000e-01 2.7756e-17
>>>
>>> The second element should be 0.
>>>
>>> Rik thinks that this bug might be hard to solve... which is why I'm
>>> bringing this up on this list :)
>>
>> Please see my comments on the bug tracker.
>>
>> jwe
>
> I have a similar problem with octave 3.8.1 on cygwin.
> See below:
> octave:50> t=[-2:0.1:0]
> t =
>
> Columns 1 through 5:
>
>   -2  -1.9  -1.8  -1.7  -1.6
>
> Columns 6 through 10:
>
>   -1.5  -1.4  -1.3  -1.2  -1.1
>
> Columns 11 through 15:
>
>   -1  -0.9  -0.8  -0.7  -0.6
>
> Columns 16 through 20:
>
>   -0.5  -0.4  -0.3  -0.2  -0.0999999999999999
>
> Column 21:
>
>   0
>
> The range command does not work well with a floating-point step when
> starting from a negative integer, as above.
>
> I am trying to upgrade to octave 3.8.2 to see if this problem is fixed.
>
> I post it just in case.
The results are perfectly okay. This is how floating point works; see
IEEE 754. The above-mentioned bug report also gives some explanation in
the comments.
The “problem” occurs because .1 is not a binary floating point number,
and the approximation
0.1000000000000000055511151231257827021181583404541015625 is used
instead. 10 * 0.1 suffers (or in this case benefits) from rounding.
In the last example with t=[-2:0.1:0] only the numbers -2, -1.5, -1,
-0.5, and 0 have exactly the value that is displayed. Column 20
additionally shows a propagation of errors. You can check this with the
bit format:
octave:3> format bit
octave:4> -.1
ans = 1011111110111001100110011001100110011001100110011001100110011010
(this is -0.1000000000000000055511151231257827021181583404541015625)
octave:5> -2 + 19 * .1
ans = 1011111110111001100110011001100110011001100110011001100110010000
(this is -0.09999999999999997779553950749686919152736663818359375)
octave:6> t(20)
ans = 1011111110111001100110011001100110011001100110011001100110010000
Thanks to your comments, I understand what happens in the range operator with floating-point numbers.
I also found that most people use the linspace function to generate floating-point sequences.
I checked and compared the colon operator and the linspace function (octave 3.8.2 on cygwin):
%%
% setup
fp_start=-2;
fp_end=1;
fp_step=0.1;
fp_num=fix(abs(fp_end-fp_start)/fp_step)+1;
% range operator (floating-point)
t1=(fp_start:fp_step:fp_end);
% linspace function
t2=linspace(fp_start,fp_end,fp_num);
% compare
format long e
[t1' t2']
format bit
[t1' t2']
%% results:
ans =
  -2.00000000000000e+00  -2.00000000000000e+00
  -1.90000000000000e+00  -1.90000000000000e+00
  -1.80000000000000e+00  -1.80000000000000e+00
  -1.70000000000000e+00  -1.70000000000000e+00
  -1.60000000000000e+00  -1.60000000000000e+00
  -1.50000000000000e+00  -1.50000000000000e+00
  -1.40000000000000e+00  -1.40000000000000e+00
  -1.30000000000000e+00  -1.30000000000000e+00
  -1.20000000000000e+00  -1.20000000000000e+00
  -1.10000000000000e+00  -1.10000000000000e+00
  -1.00000000000000e+00  -1.00000000000000e+00
  -9.00000000000000e-01  -9.00000000000000e-01
  -8.00000000000000e-01  -8.00000000000000e-01
  -7.00000000000000e-01  -7.00000000000000e-01
  -6.00000000000000e-01  -6.00000000000000e-01
  -5.00000000000000e-01  -5.00000000000000e-01
  -4.00000000000000e-01  -4.00000000000000e-01
  -3.00000000000000e-01  -3.00000000000000e-01
  -2.00000000000000e-01  -2.00000000000000e-01
  -9.99999999999999e-02  -1.00000000000000e-01
   1.11022302462516e-16   0.00000000000000e+00
   1.00000000000000e-01   1.00000000000000e-01
   2.00000000000000e-01   2.00000000000000e-01
   3.00000000000000e-01   3.00000000000000e-01
   4.00000000000000e-01   4.00000000000000e-01
   5.00000000000000e-01   5.00000000000000e-01
   6.00000000000000e-01   6.00000000000000e-01
   7.00000000000000e-01   7.00000000000000e-01
   8.00000000000000e-01   8.00000000000000e-01
   9.00000000000000e-01   9.00000000000000e-01
   1.00000000000000e+00   1.00000000000000e+00
ans =
Column 1:
1100000000000000000000000000000000000000000000000000000000000000
1011111111111110011001100110011001100110011001100110011001100110
1011111111111100110011001100110011001100110011001100110011001101
1011111111111011001100110011001100110011001100110011001100110011
1011111111111001100110011001100110011001100110011001100110011010
1011111111111000000000000000000000000000000000000000000000000000
1011111111110110011001100110011001100110011001100110011001100110
1011111111110100110011001100110011001100110011001100110011001101
1011111111110011001100110011001100110011001100110011001100110011
1011111111110001100110011001100110011001100110011001100110011001
1011111111110000000000000000000000000000000000000000000000000000
1011111111101100110011001100110011001100110011001100110011001100
1011111111101001100110011001100110011001100110011001100110011001
1011111111100110011001100110011001100110011001100110011001100110
1011111111100011001100110011001100110011001100110011001100110010
1011111111011111111111111111111111111111111111111111111111111110
1011111111011001100110011001100110011001100110011001100110011000
1011111111010011001100110011001100110011001100110011001100110010
1011111111001001100110011001100110011001100110011001100110010110
1011111110111001100110011001100110011001100110011001100110010010
0011110010100000000000000000000000000000000000000000000000000000
0011111110111001100110011001100110011001100110011001100110100010
0011111111001001100110011001100110011001100110011001100110011110
0011111111010011001100110011001100110011001100110011001100110110
0011111111011001100110011001100110011001100110011001100110011100
0011111111100000000000000000000000000000000000000000000000000001
0011111111100011001100110011001100110011001100110011001100110100
0011111111100110011001100110011001100110011001100110011001101000
0011111111101001100110011001100110011001100110011001100110011011
0011111111101100110011001100110011001100110011001100110011001110
0011111111110000000000000000000000000000000000000000000000000000
Column 2:
1100000000000000000000000000000000000000000000000000000000000000
1011111111111110011001100110011001100110011001100110011001100110
1011111111111100110011001100110011001100110011001100110011001101
1011111111111011001100110011001100110011001100110011001100110011
1011111111111001100110011001100110011001100110011001100110011010
1011111111111000000000000000000000000000000000000000000000000000
1011111111110110011001100110011001100110011001100110011001100110
1011111111110100110011001100110011001100110011001100110011001101
1011111111110011001100110011001100110011001100110011001100110011
1011111111110001100110011001100110011001100110011001100110011010
1011111111110000000000000000000000000000000000000000000000000000
1011111111101100110011001100110011001100110011001100110011001101
1011111111101001100110011001100110011001100110011001100110011010
1011111111100110011001100110011001100110011001100110011001100110
1011111111100011001100110011001100110011001100110011001100110011
1011111111100000000000000000000000000000000000000000000000000000
1011111111011001100110011001100110011001100110011001100110011010
1011111111010011001100110011001100110011001100110011001100110011
1011111111001001100110011001100110011001100110011001100110011010
1011111110111001100110011001100110011001100110011001100110011010
0000000000000000000000000000000000000000000000000000000000000000
0011111110111001100110011001100110011001100110011001100110011010
0011111111001001100110011001100110011001100110011001100110011010
0011111111010011001100110011001100110011001100110011001100110011
0011111111011001100110011001100110011001100110011001100110011010
0011111111100000000000000000000000000000000000000000000000000000
0011111111100011001100110011001100110011001100110011001100110011
0011111111100110011001100110011001100110011001100110011001100110
0011111111101001100110011001100110011001100110011001100110011010
0011111111101100110011001100110011001100110011001100110011001101
0011111111110000000000000000000000000000000000000000000000000000
The colon operator works well with integers, of course.
With floating-point numbers, it happens to show floating-point representation errors.
Worst of all, the locations of such errors are not consistent or predictable.
Try format long e; (-10:0.1:1)' and (-2:0.1:1)'.
You can see that the errors are close to the 0 side, but the total number and the locations
differ according to the starting number of the range operator.
I think that in most cases people use the range operator with floating-point numbers to simulate their work and share it with other people. Such inconsistency in the floating-point range operator may mislead their results.
Thus, the implementation of the floating-point colon operator should be changed to use the linspace function or an equivalent, I guess.
I agree that it is not a bug in the colon operator. It is natural behavior.
But people do not want to see floating-point numbers' weird behaviors in their applications.
People usually expect equally spaced numbers when they use colon operators.
Matlab does not show such behavior, at least with 'format long e'. I need to check the raw bit format.
I guess that they use linspace or something similar in the colon operator.
-- jo.
ps. see...
octave:171> (-10:0.1:1)'
ans =
  -1.00000000000000e+01
  -9.90000000000000e+00
  -9.80000000000000e+00
  -9.70000000000000e+00
  -9.60000000000000e+00
  -9.50000000000000e+00
  -9.40000000000000e+00
  -9.30000000000000e+00
  -9.20000000000000e+00
  -9.10000000000000e+00
  -9.00000000000000e+00
  -8.90000000000000e+00
  -8.80000000000000e+00
  -8.70000000000000e+00
[[[snip]]]
  -1.60000000000000e+00
  -1.50000000000000e+00
  -1.40000000000000e+00
  -1.30000000000000e+00
  -1.20000000000000e+00
  -1.10000000000000e+00
  -1.00000000000000e+00
  -8.99999999999999e-01
  -7.99999999999999e-01
  -7.00000000000000e-01
  -5.99999999999999e-01
  -4.99999999999999e-01
  -3.99999999999999e-01
  -2.99999999999999e-01
  -1.99999999999999e-01
  -9.99999999999995e-02
   5.55111512312578e-16
   1.00000000000001e-01
   2.00000000000001e-01
   3.00000000000001e-01
   4.00000000000001e-01
   5.00000000000001e-01
   6.00000000000001e-01
   7.00000000000001e-01
   8.00000000000001e-01
   9.00000000000001e-01
   1.00000000000000e+00
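The contrast jo describes between the colon operator and linspace comes down to how each element is produced. A minimal Python sketch of the two strategies (hypothetical helper names; not Octave's actual internals):

```python
def range_by_accumulation(start, step, n):
    """Add the step repeatedly; rounding errors accumulate."""
    out, t = [], start
    for _ in range(n):
        out.append(t)
        t += step
    return out

def range_by_formula(start, stop, n):
    """linspace-style: each element computed independently from the endpoints."""
    step = (stop - start) / (n - 1)
    return [start + i * step for i in range(n)]

acc = range_by_accumulation(-2.0, 0.1, 31)
lin = range_by_formula(-2.0, 1.0, 31)

# The formula-based version hits 0.0 exactly at index 20;
# the accumulated version typically carries a tiny residue there.
print(lin[20])                 # 0.0
print(abs(acc[20]) < 1e-14)    # True (any residue is tiny)
```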


I am going to share the comparison results of the colon operators in octave and matlab.
* octave (3.8.2 on cygwin) vs matlab (R2012b on XP windows)
* test numbers: (-2:0.1:1)
* test methods: colon operator t1=(-2:0.1:1) vs t2=linspace(-2,1,31)
* output format: compare them as hexadecimal of the raw IEEE bits.
[ num2str(t2','%5.2g'), repmat(' ',[numel(t1) 1]), num2hex(t1'), repmat(' ',[numel(t1) 1]), num2hex(t2') ]
* result of octave: [index, colon oper, linspace]
  -2    c000000000000000  c000000000000000
  -1.9  bffe666666666666  bffe666666666666
  -1.8  bffccccccccccccd  bffccccccccccccd
  -1.7  bffb333333333333  bffb333333333333
  -1.6  bff999999999999a  bff999999999999a
  -1.5  bff8000000000000  bff8000000000000
  -1.4  bff6666666666666  bff6666666666666
  -1.3  bff4cccccccccccd  bff4cccccccccccd
  -1.2  bff3333333333333  bff3333333333333
  -1.1  bff1999999999999  bff199999999999a
  -1    bff0000000000000  bff0000000000000
  -0.9  bfeccccccccccccc  bfeccccccccccccd
  -0.8  bfe9999999999999  bfe999999999999a
  -0.7  bfe6666666666666  bfe6666666666666
  -0.6  bfe3333333333332  bfe3333333333333
  -0.5  bfdffffffffffffe  bfe0000000000000
  -0.4  bfd9999999999998  bfd999999999999a
  -0.3  bfd3333333333332  bfd3333333333333
  -0.2  bfc9999999999996  bfc999999999999a
  -0.1  bfb9999999999992  bfb999999999999a
0 3ca0000000000000 0000000000000000
0.1 3fb99999999999a2 3fb999999999999a
0.2 3fc999999999999e 3fc999999999999a
0.3 3fd3333333333336 3fd3333333333333
0.4 3fd999999999999c 3fd999999999999a
0.5 3fe0000000000001 3fe0000000000000
0.6 3fe3333333333334 3fe3333333333333
0.7 3fe6666666666668 3fe6666666666666
0.8 3fe999999999999b 3fe999999999999a
0.9 3fecccccccccccce 3feccccccccccccd
1 3ff0000000000000 3ff0000000000000
* result of matlab:
  -2    c000000000000000  c000000000000000
  -1.9  bffe666666666666  bffe666666666666
  -1.8  bffccccccccccccd  bffccccccccccccd
  -1.7  bffb333333333333  bffb333333333333
  -1.6  bff999999999999a  bff999999999999a
  -1.5  bff8000000000000  bff8000000000000
  -1.4  bff6666666666666  bff6666666666666
  -1.3  bff4cccccccccccc  bff4cccccccccccd
  -1.2  bff3333333333333  bff3333333333333
  -1.1  bff199999999999a  bff199999999999a
  -1    bff0000000000000  bff0000000000000
  -0.9  bfeccccccccccccc  bfeccccccccccccc
  -0.8  bfe9999999999998  bfe999999999999a
  -0.7  bfe6666666666666  bfe6666666666666
  -0.6  bfe3333333333332  bfe3333333333334
  -0.5  bfe0000000000000  bfe0000000000000
  -0.4  bfd999999999999c  bfd9999999999998
  -0.3  bfd3333333333334  bfd3333333333334
  -0.2  bfc99999999999a0  bfc9999999999998
  -0.1  bfb99999999999a0  bfb99999999999a0
0 0000000000000000 0000000000000000
0.1 3fb9999999999998 3fb99999999999a0
0.2 3fc9999999999998 3fc99999999999a0
0.3 3fd3333333333332 3fd3333333333330
0.4 3fd9999999999998 3fd9999999999998
0.5 3fe0000000000000 3fe0000000000000
0.6 3fe3333333333333 3fe3333333333334
0.7 3fe6666666666666 3fe6666666666668
0.8 3fe999999999999a 3fe9999999999998
0.9 3feccccccccccccd 3feccccccccccccc
1 3ff0000000000000 3ff0000000000000
* conclusion:
In matlab, the result of the colon operator is very close to the result of linspace.
Note that all multiples of 0.5 have exact floating point representations (which have compact significands).
However, the result of the colon operator in octave shows the poorest accuracy: the values at -0.5, 0 and 0.5 are not the exact floating point representations.
The colon operation with floating-point numbers needs to be improved, anyhow.
Or, the linspace function is preferable in Octave.
-- jo
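For readers without Matlab at hand, the same raw-bits comparison can be reproduced with Python's struct module, which serves here as a stand-in for num2hex (an illustrative sketch, not part of the original test):

```python
import struct

def num2hex(x):
    """Hex string of the raw big-endian IEEE 754 double bits,
    analogous to Octave/Matlab num2hex."""
    return struct.pack('>d', x).hex()

print(num2hex(-2.0))        # c000000000000000
print(num2hex(0.1))         # 3fb999999999999a
print(num2hex(-0.1))        # bfb999999999999a
# The tiny residue Octave's colon operator left at the "zero" element
# is exactly 2^-53:
print(num2hex(2.0 ** -53))  # 3ca0000000000000
```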


On 09/23/2014 01:12 AM, Oliver Heimlich wrote:
> Am 23.09.2014 03:27, schrieb s.jo:
>> John W. Eaton wrote
>>> On 06/20/2014 04:28 PM, Julien Bect wrote:
>>>> Hello all,
>>>>
>>>> I would like to bring the following bug report to your attention :
>>>>
>>>> http://savannah.gnu.org/bugs/?42589
>>>>
>>>> A simple example of the regression described in this report is the
>>>> following :
>>>>
>>>> octave:1> (9:10) * .1 - 1
>>>> ans =
>>>>
>>>> -1.0000e-01 2.7756e-17
>>>>
>>>> The second element should be 0.
>>>>
>>>> Rik thinks that this bug might be hard to solve... which is why I'm
>>>> bringing this up on this list :)
>>>
>>> Please see my comments on the bug tracker.
>>>
>>> jwe
>>
>> I have a similar problem with octave 3.8.1 on cygwin.
>> See below:
>> octave:50> t=[-2:0.1:0]
>> t =
>>
>> Columns 1 through 5:
>>
>>   -2  -1.9  -1.8  -1.7  -1.6
>>
>> Columns 6 through 10:
>>
>>   -1.5  -1.4  -1.3  -1.2  -1.1
>>
>> Columns 11 through 15:
>>
>>   -1  -0.9  -0.8  -0.7  -0.6
>>
>> Columns 16 through 20:
>>
>>   -0.5  -0.4  -0.3  -0.2  -0.0999999999999999
>>
>> Column 21:
>>
>>   0
>>
>> The range command does not work well with a floating-point step when
>> starting from a negative integer, as above.
>>
>> I am trying to upgrade to octave 3.8.2 to see if this problem is fixed.
>>
>> I post it just in case.
>
> The results are perfectly okay. This is how floating point works, see
> IEEE 754. Above mentioned bug report also gives some explanation in the
> comments.
>
> The “problem” occurs because .1 is not a binary floating point number.
> and the approximation
> 0.1000000000000000055511151231257827021181583404541015625 is used
> instead. 10 * 0.1 suffers (or in this case benefits) from rounding.
>
> In the last example with t=[2:0.1:0] only the numbers 2, 1.5, 1,
> 0.5, and 0 have the exact value that is displayed. Column 20
> additionally yields a propagation of errors. You can check this with the
> bit format:
>
> octave:3> format bit
> octave:4> .1
> ans = 1011111110111001100110011001100110011001100110011001100110011010
> (this is 0.1000000000000000055511151231257827021181583404541015625)
>
> octave:5> 2 + 19 * .1
> ans = 1011111110111001100110011001100110011001100110011001100110010000
> (this is 0.09999999999999997779553950749686919152736663818359375)
>
> octave:6> t(20)
> ans = 1011111110111001100110011001100110011001100110011001100110010000
Thank you for the explanation Oliver.
Would there be any utility in attempting to "integerize" the range, if
possible, and thereby eliminate the accumulation of errors? For
example, [-2:0.1:0] is equivalent to [-20:1:0]*.1, so if the limits turn
out to be factorable, then internally the range could be represented
slightly differently than what the user types.
scale = 1.0;
...
if (range.limits_are_factorable ())
  {
    start /= step;
    scale = step;
  }
...
ridx.xelem (i) = (start + i*step)*scale;
To be factorable doesn't necessarily require that the floating point
step size be an exact number system equivalent, i.e., that .1 equal
exactly 1/10. It just requires the machine arithmetic to be as expected
when producing an operation result, i.e.,
octave-cli:1> rem(-2,.1) == 0
ans = 1
octave-cli:2> rem(0,.1) == 0
ans = 1
octave-cli:3> [-20:1:0]*.1
ans =
Columns 1 through 5:
  -2.00000  -1.90000  -1.80000  -1.70000  -1.60000
Columns 6 through 10:
  -1.50000  -1.40000  -1.30000  -1.20000  -1.10000
Columns 11 through 15:
  -1.00000  -0.90000  -0.80000  -0.70000  -0.60000
Columns 16 through 20:
  -0.50000  -0.40000  -0.30000  -0.20000  -0.10000
Column 21:
   0.00000
Jo, what do you get for the above scripts?
Dan
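Dan's factoring idea can also be sketched outside Octave. The following Python function is a hypothetical illustration of the proposal (the name and the tolerance are my assumptions, not Octave code): when both limits divide evenly by the step, generate an integer range and scale it once, so no rounding error accumulates:

```python
def integerized_range(start, step, stop):
    """Build start:step:stop as (integer range) * step when the limits
    are factorable by the step; a sketch of the proposal above."""
    i0, i1 = round(start / step), round(stop / step)
    # limits are "factorable": start and stop are (within tolerance)
    # integer multiples of the step
    if abs(i0 * step - start) < 1e-12 and abs(i1 * step - stop) < 1e-12:
        return [i * step for i in range(i0, i1 + 1)]
    # fall back to plain element-wise stepping otherwise
    n = int((stop - start) / step) + 1
    return [start + i * step for i in range(n)]

t = integerized_range(-2.0, 0.1, 0.0)
print(t[20] == 0.0)   # True: 0 * 0.1 is exactly zero
print(t[0] == -2.0)   # True: -20 * 0.1 rounds to exactly -2.0
```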


On 09/23/2014 01:47 PM, s.jo wrote:
> I am going to share the comparison results of colon operators in octave and
> matlab.
>
> * octave (3.8.2 on cygwin) vs matlab (R2012b on XP windows)
>
> * test numbers: (-2:0.1:1)
>
> * test methods: colon operator t1=(-2:0.1:1) vs t2=linspace(-2,1,31)
>
> * output format: compare them with hexadecimal of raw IEEE bits.
> [ num2str(t2','%5.2g'), repmat(' ',[numel(t1) 1]), num2hex(t1'), repmat('
> ',[numel(t1) 1]), num2hex(t2') ]
>
> * result of octave: [index, colon oper, linspace ]
>
>
> * result of matlab:
>
>
>
> * conclusion:
> In matlab, the result of colon operator is very close to the result of the
> linspace.
> Note that all multiples of 0.5 have the exact floating point number. (which
> has compact significant digits)
> However, the result of colon operator in octave shows the poorest
> performance: the values of -0.5, 0 and 0.5 do not show the exact floating
> point representations.
>
> The colon operation with floatingpoint numbers needs to improve, anyhow.
> Or, the linspace function is preferable in Octave.
What results are you seeing? I don't see any expected or actual results
in your message.
jwe


Hello Jo, hello Dan,
Am 23.09.2014 17:28, schrieb s.jo:
> %%
> % setup
> fp_start=-2;
> fp_end=1;
> fp_step=0.1;
> fp_num=fix(abs(fp_end-fp_start)/fp_step)+1;
> % range operator (floating-point)
> t1=(fp_start:fp_step:fp_end);
> % linspace function
> t2=linspace(fp_start,fp_end,fp_num);
> % compare
> format long e
> [t1' t2']
…
>
> %% results:
>
> ans =
>
>   -2.00000000000000e+00  -2.00000000000000e+00
>   -1.90000000000000e+00  -1.90000000000000e+00
>   -1.80000000000000e+00  -1.80000000000000e+00
>   -1.70000000000000e+00  -1.70000000000000e+00
>   -1.60000000000000e+00  -1.60000000000000e+00
>   -1.50000000000000e+00  -1.50000000000000e+00
>   -1.40000000000000e+00  -1.40000000000000e+00
>   -1.30000000000000e+00  -1.30000000000000e+00
>   -1.20000000000000e+00  -1.20000000000000e+00
>   -1.10000000000000e+00  -1.10000000000000e+00
>   -1.00000000000000e+00  -1.00000000000000e+00
>   -9.00000000000000e-01  -9.00000000000000e-01
>   -8.00000000000000e-01  -8.00000000000000e-01
>   -7.00000000000000e-01  -7.00000000000000e-01
>   -6.00000000000000e-01  -6.00000000000000e-01
>   -5.00000000000000e-01  -5.00000000000000e-01
>   -4.00000000000000e-01  -4.00000000000000e-01
>   -3.00000000000000e-01  -3.00000000000000e-01
>   -2.00000000000000e-01  -2.00000000000000e-01
>   -9.99999999999999e-02  -1.00000000000000e-01
>    1.11022302462516e-16   0.00000000000000e+00
>    1.00000000000000e-01   1.00000000000000e-01
>    2.00000000000000e-01   2.00000000000000e-01
>    3.00000000000000e-01   3.00000000000000e-01
>    4.00000000000000e-01   4.00000000000000e-01
>    5.00000000000000e-01   5.00000000000000e-01
>    6.00000000000000e-01   6.00000000000000e-01
>    7.00000000000000e-01   7.00000000000000e-01
>    8.00000000000000e-01   8.00000000000000e-01
>    9.00000000000000e-01   9.00000000000000e-01
>    1.00000000000000e+00   1.00000000000000e+00
…
>
> Thus, the implementation of the floating-point colon operator should be
> changed to use the linspace function or an equivalent, I guess.
>
> I agree that it is not a bug in the colon operator. It is natural behavior.
…
>
> Matlab does not show such behavior, at least with 'format long e'. I need to
> check the raw bit format.
> I guess that they use linspace or similar in colon operator.
>
>  jo.
You would have to do some decimal(!) arithmetic on the user's input(!)
to be able to convert the colon operator into an equivalent linspace
function call, as proposed by Daniel (see below).
Remember that the input “0.1” actually takes a different value, and you
tell the colon operator to create a vector of values from -2 to 1 with a
distance of 0.1000…05551115123… each. It is no surprise that you do not
exactly hit some of your expected values.
On the other hand, you give exact integer parameters to the linspace
function. It is no wonder that the linspace function can produce the
most accurate results.
Am 24.09.2014 17:47, schrieb Daniel J Sebald:
> Would there be any utility in attempting to "integerize" the range, if
> possible, and thereby eliminate the accumulation of errors? For
> example, [-2:0.1:0] is equivalent to [-20:1:0]*.1, so if the limits turn
> out to be factorable, then internally the range could be represented
> slightly differently than what the user types.
You try to make the colon operator context sensitive and require the use
of decimal arithmetic. This is not a good idea. It would be a very
special behaviour that does not conform to IEEE 754 and to what an
experienced user would expect. It would create even more unpredictable
cases.
I suggest that you (or any other user) prefer the linspace function
over the colon operator and use the colon operator carefully with the
binary floating point context in mind. For example, instead of 0.1 you
can use the numbers 0.125 (=2^-3) or 0.09375 (=2^-4 + 2^-5), which will
produce far fewer representational errors.
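The suggested steps are dyadic rationals, so every element of such a range is an exact double. A quick Python check (illustrative only; the property holds in any IEEE 754 environment):

```python
# 0.125 = 2**-3 and 0.09375 = 2**-4 + 2**-5 are exact binary doubles,
# so ranges built from them carry no representation error.
assert 0.125 == 2.0 ** -3
assert 0.09375 == 2.0 ** -4 + 2.0 ** -5

# Every element of -2:0.125:0 is an exact multiple of 1/8:
t = [-2.0 + i * 0.125 for i in range(17)]
print(t[16])   # 0.0, exactly
print(t[8])    # -1.0, exactly
```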
Am 23.09.2014 17:28, schrieb s.jo:
> But people do not want to see floating-point numbers' weird behaviors
> in their applications.
> People usually expect equally spaced numbers when they use colon
> operators.
If people use floating-point numbers (and algorithms) they should know
what they are doing. Otherwise they could not reasonably trust the
results of their computations.
The numbers are equally spaced within the accuracy of the floating point
context.
IMHO these problems would be much less of a surprise to many people if
they could see the exact value of their intermediate results by default
(which would also spam the user's console a lot) instead of a rounded
decimal number like -1.9, -1.8, and so on.


On Mon, 2014-09-22 at 18:27 -0700, s.jo wrote:
> John W. Eaton wrote
> > On 06/20/2014 04:28 PM, Julien Bect wrote:
> >> Hello all,
> >>
> >> I would like to bring the following bug report to your attention :
> >>
> >> http://savannah.gnu.org/bugs/?42589
> >>
> >> A simple example of the regression described in this report is the
> >> following :
> >>
> >> octave:1> (9:10) * .1 - 1
> >> ans =
> >>
> >> -1.0000e-01 2.7756e-17
> >>
> >> The second element should be 0.
> >>
> >> Rik thinks that this bug might be hard to solve... which is why I'm
> >> bringing this up on this list :)
> >
> > Please see my comments on the bug tracker.
> >
> > jwe
>
> I have a similar problem with octave 3.8.1 on cygwin.
That's expected. Did you apply the patch from the bug report above?
 Jordi G. H.


On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
> Hello Jo, hello Dan,
> Am 24.09.2014 17:47, schrieb Daniel J Sebald:
>> Would there be any utility in attempting to "integerize" the range, if
>> possible, and thereby eliminate the accumulation of errors? For
>> example, [-2:0.1:0] is equivalent to [-20:1:0]*.1, so if the limits turn
>> out to be factorable, then internally the range could be represented
>> slightly differently than what the user types.
>
> You try to make the colon operator context sensitive and require the use
> of decimal arithmetic. This is not a good idea. It would be a very
> special behaviour that does not conform to IEEE 754 and to what an
> experienced user would expect. It would create even more unpredictable
> cases.
>
> I suggest that you (or any other user) prefer the linspace function
> over the colon operator and use the colon operator carefully with the
> binary floating point context in mind. For example, instead of 0.1 you
> can use the numbers 0.125 (=2^-3) or 0.09375 (=2^-4 + 2^-5), which will
> produce far fewer representational errors.
That's true, but for binary numbers. My point was that no matter the
number representation system, the underlying arithmetic logic unit (ALU)
should have mathematical consistency. That is, if the ALU carries out
an operation, the result should be the equivalent of what is expected in
mathematics, number representation aside. I'm wondering if there is
consistency in hardware architecture. It may not matter that "0.1"
(which actually equals 0.10000000000xxx) doesn't equal 1/10, so long as
the following is true:
octave-cli:1> rem(-2,.1) == 0
ans = 1
octave-cli:2> rem(0,.1) == 0
ans = 1
Try it with some numbers that we know can be represented exactly with
base 10 or base 2:
octave-cli:2> (pi/pi) == 1.0
ans = 1
octave-cli:3> ((50*pi)/pi) == 50.0
ans = 1
octave-cli:4> ((pi*pi)/pi) == pi
ans = 1
Dan


Am 24.09.2014 20:06, schrieb Daniel J Sebald:
> On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
>> Hello Jo, hello Dan,
>
>> Am 24.09.2014 17:47, schrieb Daniel J Sebald:
>>> Would there be any utility to attempting to "integerize" the range, if
>>> possible, and thereby eliminate the accumulation of errors? For
>>> example, [2:0.1:0] is equivalent to [20:1:0]*.1, so if the limits turn
>>> out to be factorable, then internally the range could be represented
>>> slightly different than what the user types.
>>
>> You try to make the colon operator context sensitive and require the
>> use of decimal arithmetic. This is not a good idea. It would be a very
>> special behaviour that does not conform to IEEE 754 and to what an
>> experienced user would expect. It would create even more unpredictable
>> cases.
>>
>> I suggest that you (or any other user) prefer the linspace function
>> over the colon operator and use the colon operator carefully with the
>> binary floating point context in mind. For example, instead of 0.1 you
>> can use the numbers 0.125 (=2^-3) or 0.09375 (=2^-4 + 2^-5), which will
>> produce far fewer representational errors.
>
> That's true, but for binary numbers. My point was that no matter the
> number representation system, the underlying arithmetic logic unit (ALU)
> should have mathematical consistency. That is, if the ALU carries out
> an operation, the result should be the equivalent of what is expected in
> mathematics, number representation aside. I'm wondering if there is
> consistency in hardware architecture. It may not matter that "0.1"
> (which actually equals 0.10000000000xxx) doesn't equal 1/10, so long as
> the following is true:
If you want to have mathematical consistency, which means no difference
between the internal representations of numbers, then (at least) two
problems arise: (1) You want to do decimal arithmetic and not binary
arithmetic. (2) You want to have infinite precision for your
(intermediate) results.
The first is possible; there are decimal arithmetic toolboxes/libraries
out there. They usually are slower than binary arithmetic (factor 4).
The second is a general problem, since you will soon run into trouble
with infinite continued fractions and finite memory boundaries. However,
computer algebra systems can do a good job here.
IEEE 754 is a very popular standard and is implemented both in software
and hardware. As long as you are fine with binary floating point
arithmetic of finite precision (mostly 64 bit) you will see consistency
amongst all standard compliant systems and get decent performance.
> octavecli:1> rem(2,.1) == 0
> ans = 1
See the definition of rem(x,y): x - y .* fix (x ./ y)
The division exactly results in -20. The relative error of 0.1 is too
small and is lost. Then you multiply 0.1 with -20. Again, the result is
rounded and you get exactly what you wanted.
> octave-cli:2> rem(0,.1) == 0
> ans = 1
Any inaccuracy is lost when you divide 0 by anything.
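Oliver's walk-through of rem can be reproduced step by step. Below is a Python sketch (Python shares IEEE 754 doubles with Octave; `octave_rem` is a hypothetical transcription of the rem definition quoted above, not Octave's actual source):

```python
import math

def octave_rem(x, y):
    # rem(x, y) = x - y .* fix(x ./ y); fix() truncates toward zero.
    # Every operation below is a single, correctly rounded double operation.
    q = math.trunc(x / y)   # -2 / 0.1 rounds to exactly -20.0
    return x - y * q        # 0.1 * -20 rounds to exactly -2.0, so rem is 0.0

print(octave_rem(-2, 0.1))  # 0.0
print(octave_rem(0, 0.1))   # 0.0
```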
> Try it with some numbers that we know can be represented exactly with
> base 10 or base 2:
>
> octave-cli:2> (pi/pi) == 1.0
> ans = 1
This is because x/x==1 with any x. I do not have to emphasize that both
πs are equal.
> octave-cli:3> ((50*pi)/pi) == 50.0
> ans = 1
> octave-cli:4> ((pi*pi)/pi) == pi
> ans = 1
Both 50 and the π constant are binary floating point numbers, so the
results may be exact. Additionally, the π constant's very last binary
digits are zero, so there is some protection against errors. Try the
following:
octave:1> x = pi + eps * 2;
octave:2> x * 50 / 50 == x
ans = 0
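The same experiment transcribed into Python doubles (eps here is Octave's eps, i.e. 2^-52; the operation sequence is bit-for-bit the same IEEE 754 arithmetic):

```python
import math, sys

eps = sys.float_info.epsilon      # 2^-52, the same value as Octave's eps
x = math.pi + eps * 2             # one ulp above double(pi)
print(x * 50 / 50 == x)           # False: the multiply rounds the last bit away
```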


On 2014-09-25 07:54, Oliver Heimlich wrote:
> On 24.09.2014 20:06, Daniel J Sebald wrote:
>> On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
[snip]
>
> octave:1> x = pi + eps * 2;
> octave:2> x * 50 / 50 == x
> ans = 0
oh boy
octave:9> 0.1 + 0.2 == 0.3
ans = 0
octave:10> 0.1 + 0.2 - 0.2 == 0.1
ans = 0
octave:11> fprintf("%.24f == %.24f\n", 0.1 + 0.2, 0.3)
0.300000000000000044408921 == 0.299999999999999988897770
So, no surprise here.
You should read this
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
And for the sake of completeness ....
# Matlab
>> 0.1 + 0.2 == 0.3
ans =
0
>> 0.1 + 0.2 - 0.2 == 0.1
ans =
0
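Since this is plain IEEE 754 double arithmetic, any language with doubles reproduces it; a quick cross-check in Python, also for the sake of completeness (nothing Octave- or Matlab-specific is involved):

```python
print(0.1 + 0.2 == 0.3)          # False
print(0.1 + 0.2 - 0.2 == 0.1)    # False
print(repr(0.1 + 0.2))           # 0.30000000000000004
```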

https://github.com/markuman
Jabber: [hidden email]


On 09/25/2014 12:54 AM, Oliver Heimlich wrote:
> On 24.09.2014 20:06, Daniel J Sebald wrote:
>> On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
[snip]
>> That's true, but for binary numbers. My point was that no matter the
>> number representation system, the underlying arithmetic logic unit (ALU)
>> should have mathematical consistency. That is, if the ALU carries out
>> an operation, the result should be the equivalent of what is expected in
>> mathematics, number representation aside. I'm wondering if there is
>> consistency in hardware architecture. It may not matter that "0.1"
>> (which actually equals 0.10000000000xxx) doesn't equal 1/10, so long as
>> the following is true:
>
> If you want to have mathematical consistency, which means no difference
> between the internal representation of numbers, then (at least) two
> problems arise: (1) You want to do decimal arithmetic and not binary
> arithmetic. (2) You want to have infinite precision for your
> (intermediate) results.
I'm not sure (1) is true, or perhaps what you mean by it. If you mean
0.1 = ... + 0*10^-2 + 1*10^-1 + 0*10^0 + ...
I don't see that as a necessary requirement for division to behave
consistently in order to use base-10 format limits and step sizes.
For (2), you might be referring to something that appears in the
reference Markus pointed to. Something like
"The IEEE standard requires that the result of addition, subtraction,
multiplication and division be exactly rounded. That is, the result must
be computed exactly and then rounded to the nearest floatingpoint
number (using round to even)."
Is that correct? I don't think that implies infinite precision. The
Goldberg reference hints at how to use guard bits to carry out the
computation and meet the standard requirement.
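Dan's reading is right: "exactly rounded" means each single operation behaves as if computed exactly and then rounded once, which guard bits make affordable in hardware; no infinite precision is needed at runtime. The property can be verified from outside with exact rationals (a Python sketch; `Fraction` converts a double to its exact rational value):

```python
import math
from fractions import Fraction

a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)        # exact rational sum of the two doubles
rounded = a + b                          # one IEEE 754 addition

# The hardware result is within half an ulp of the exact sum: correct rounding.
assert abs(Fraction(rounded) - exact) <= Fraction(math.ulp(rounded)) / 2
print("addition was correctly rounded")
```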
> IEEE 754 is a very popular standard and implemented both in software and
> hardware. As long as you are fine with binary floating point arithmetic
> of finite precision (mostly 64 bit) you will see consistency amongst
> all standard compliant systems and get a decent performance.
That's good to know.
>> octave-cli:1> rem(-2,.1) == 0
>> ans = 1
>
> See the definition of rem(x,y): x - y .* fix (x ./ y)
>
> The division exactly results in -20. The relative error of 0.1 is too
> small and is lost. Then you multiply 0.1 with -20. Again, the result is
> rounded and you get exactly what you wanted.
Well, it exactly results in whatever the machine representation for 20
is. Nonetheless, I'm starting to get the feeling that division and
multiplication behave a little better under floating point than do
addition and subtraction.
>> octave-cli:2> rem(0,.1) == 0
>> ans = 1
>
> Any inaccuracy is lost when you divide 0 by anything.
>
>> Try it with some numbers that we know can be represented exactly with
>> base 10 or base 2:
>>
>> octave-cli:2> (pi/pi) == 1.0
>> ans = 1
>
> This is because x/x==1 with any x. I do not have to emphasize that both
> πs are equal.
>
>> octave-cli:3> ((50*pi)/pi) == 50.0
>> ans = 1
>> octave-cli:4> ((pi*pi)/pi) == pi
>> ans = 1
>
> Both 50 and the π constant are binary floating point numbers, so the
> results may be exact. Additionally, the π constant's very last binary
> digits are zero, so there is some protection against errors. Try the
> following:
>
> octave:1> x = pi + eps * 2;
> octave:2> x * 50 / 50 == x
> ans = 0
So if the user chooses something like the following:
[pi+eps*2:pi:5*pi+eps]
that can't be "integerized".
The approach I'm pondering is if for
[A:s:B]
A/s is a whole number (by machine representation) and B/s is a whole
number, is [A/s:1:B/s]*s, where the first range is done using integer
ranging, better/worse than [A:s:B] as done by the floatingpoint ranging
algorithm. What are the good points and what are the bad points?
From what you pointed out, we know
octave-cli:34> (2+eps)/0.1 == 20
ans = 1
so if the user were to input
[2+eps:0.1:10+eps]
the algorithm would treat this as factorable and generate
[20:1:100]*0.1. In other words, the user would get the same as if he or
she entered
[2:0.1:10]
Is that a bad result? (Personally, I don't think that is a really bad
result because it is saying the discrepancy in the limit w.r.t. the
number representation is insignificant relative to the step size.) If so, how
about the slightly more stringent requirements:
Given [A:s:B], if
1) A is integer, and
2) A/s is integer
then [A:s:B] can be implemented as
[A/s:1:floor(B/s)] * s
where the range operation is integerbased.
The requirements are more stringent than originally posed, but still
looser than A integer and s integer, which Octave currently recognizes.
Dan
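Dan's proposal above can be sketched directly (a hypothetical helper, not Octave code; the name and fallback behavior are made up for illustration):

```python
import math

def integerized_range(a, s, b):
    """Sketch of the proposed rule: if A/s is a whole number in machine
    arithmetic, generate an integer range and scale it by s once."""
    n0 = a / s
    if n0 == math.floor(n0):                       # condition 2): A/s is integer
        return [i * s for i in range(int(n0), math.floor(b / s) + 1)]
    return None                                    # fall back to the usual range

vals = integerized_range(-2.0, 0.1, 0.0)           # behaves like [-20:1:0]*0.1
print(vals[18])                                    # -0.2 exactly
print(vals[19])                                    # -0.1 exactly, no drift
```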


On 09/25/2014 01:16 AM, Markus wrote:
> On 2014-09-25 07:54, Oliver Heimlich wrote:
>> On 24.09.2014 20:06, Daniel J Sebald wrote:
>>> On 09/24/2014 12:29 PM, Oliver Heimlich wrote:
[snip]
>> This is because x/x==1 with any x. I do not have to emphasize that both
>> πs are equal.
>>
>>> octave-cli:3> ((50*pi)/pi) == 50.0
>>> ans = 1
>>> octave-cli:4> ((pi*pi)/pi) == pi
>>> ans = 1
>>
>> Both 50 and the π constant are binary floating point numbers, so the
>> results may be exact. Additionally, the π constant's very last binary
>> digits are zero, so there is some protection against errors. Try the
>> following:
>>
>> octave:1> x = pi + eps * 2;
>> octave:2> x * 50 / 50 == x
>> ans = 0
>
> oh boy
>
> octave:9> 0.1 + 0.2 == 0.3
> ans = 0
> octave:10> 0.1 + 0.2 - 0.2 == 0.1
> ans = 0
> octave:11> fprintf("%.24f == %.24f\n", 0.1 + 0.2, 0.3)
> 0.300000000000000044408921 == 0.299999999999999988897770
>
>
> So, no surprise here.
> You should read this
> http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
>
>
> And for the sake of completeness ....
>
> # Matlab
>>> 0.1 + 0.2 == 0.3
>
> ans =
>
> 0
>
>>> 0.1 + 0.2 - 0.2 == 0.1
>
> ans =
>
> 0
OK, true. I suppose that explains why the rangegenerating algorithm
accumulates errors when adding the step size so often.
However, the original algorithm I was pondering is comprised of division.
Given [A:s:B], if
1) A is integer, and
2) A/s is integer
then [A:s:B] can be implemented as
[A/s:1:floor(B/s)] * s
where the range operation is integerbased.
Will that eliminate the accumulation of addition errors that Jo pointed
out? Is changing the problem into one of multiplication rather than
addition when possible a good idea? Or is the problem only manifesting
somewhere else?
Dan
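The candidate strategies Dan is weighing can be put side by side (a Python sketch over the same IEEE doubles; an illustration, not the actual Octave or Matlab implementations):

```python
# t = [-2:0.1:0], three ways:
acc = [-2.0]
for _ in range(20):
    acc.append(acc[-1] + 0.1)                 # 1) accumulate the step
mul = [-2.0 + i * 0.1 for i in range(21)]     # 2) base + i*increment (Octave's Range)
fac = [i * 0.1 for i in range(-20, 1)]        # 3) integer range scaled once

print(acc[19], mul[19], fac[19])              # compare the final elements near -0.1
```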


Daniel Sebald wrote
On 09/23/2014 01:12 AM, Oliver Heimlich wrote:
> On 23.09.2014 03:27, s.jo wrote:
>> John W. Eaton wrote
>>> On 06/20/2014 04:28 PM, Julien Bect wrote:
>>>> Hello all,
>>>>
>>>> I would like to bring the following bug report to your attention :
>>>>
>>>> http://savannah.gnu.org/bugs/?42589
>>>>
>>>> A simple example of the regression described in this report is the
>>>> following :
>>>>
>>>> octave:1> (9:10) * .1 - 1
>>>> ans =
>>>>
>>>> -1.0000e-01 2.7756e-17
>>>>
>>>> The second element should be 0.
>>>>
>>>> Rik thinks that this bug might be hard to solve... which is why I'm
>>>> bringing this up on this list :)
>>>
>>> Please see my comments on the bug tracker.
>>>
>>> jwe
>>
>> I have the similar problem with octave 3.8.1 on cygwin.
>> See below:
>> octave:50> t=[-2:0.1:0]
>> t =
>>
>> Columns 1 through 5:
>>
>> -2 -1.9 -1.8
>> -1.7 -1.6
>>
>> Columns 6 through 10:
>>
>> -1.5 -1.4 -1.3
>> -1.2 -1.1
>>
>> Columns 11 through 15:
>>
>> -1 -0.9 -0.8
>> -0.7 -0.6
>>
>> Columns 16 through 20:
>>
>> -0.5 -0.4 -0.3
>> -0.2 -0.0999999999999999
>>
>> Column 21:
>>
>> 0
>>
>> Range command is not good with floating-point step only starting from
>> negative integer as above.
>>
>> I am trying to upgrade octave 3.8.2 to see if this problem is fixed.
>>
>> I post it just in case.
>
> The results are perfectly okay. This is how floating point works, see
> IEEE 754. Above mentioned bug report also gives some explanation in the
> comments.
>
> The “problem” occurs because .1 is not a binary floating point number,
> and the approximation
> 0.1000000000000000055511151231257827021181583404541015625 is used
> instead. 10 * 0.1 suffers (or in this case benefits) from rounding.
>
> In the last example with t=[-2:0.1:0] only the numbers -2, -1.5, -1,
> -0.5, and 0 have the exact value that is displayed. Column 20
> additionally yields a propagation of errors. You can check this with the
> bit format:
>
> octave:3> format bit
> octave:4> -.1
> ans = 1011111110111001100110011001100110011001100110011001100110011010
> (this is -0.1000000000000000055511151231257827021181583404541015625)
>
> octave:5> -2 + 19 * .1
> ans = 1011111110111001100110011001100110011001100110011001100110010000
> (this is -0.09999999999999997779553950749686919152736663818359375)
>
> octave:6> t(20)
> ans = 1011111110111001100110011001100110011001100110011001100110010000
Thank you for the explanation Oliver.
Would there be any utility to attempting to "integerize" the range, if
possible, and thereby eliminate the accumulation of errors? For
example, [-2:0.1:0] is equivalent to [-20:1:0]*.1, so if the limits turn
out to be factorable, then internally the range could be represented
slightly differently than what the user types.
scale = 1.0;
...
if (range.limits_are_factorable ())
  {
    start /= step;
    scale = step;
  }
...
ridx.xelem (i) = (start + i*step)*scale;
To be factorable doesn't necessarily require that the floating point
step size be an exact number system equivalent, i.e., that .1 equal
exactly 1/10. It just requires the machine arithmetic to be as expected
when producing an operation result, i.e.,
octave-cli:1> rem(-2,.1) == 0
ans = 1
octave-cli:2> rem(0,.1) == 0
ans = 1
octave-cli:3> [-20:1:0]*.1
ans =
Columns 1 through 5:
-2.00000 -1.90000 -1.80000 -1.70000 -1.60000
Columns 6 through 10:
-1.50000 -1.40000 -1.30000 -1.20000 -1.10000
Columns 11 through 15:
-1.00000 -0.90000 -0.80000 -0.70000 -0.60000
Columns 16 through 20:
-0.50000 -0.40000 -0.30000 -0.20000 -0.10000
Column 21:
0.00000
Jo, what do you get for the above scripts?
Dan
Dan and others,
Sorry for the late reply.
Your integerized stepping seems to be a decimation process.
You used step-size 1. Any bigger integers such as 10 or 10^17 are allowed in the decimation process.
Comparing the results of colon operators with floating-point numbers,
Matlab seems to use such a decimation process, but Octave does not.
I agree that this decimation feature of the colon operator is far from any IEEE standard.
Also I point out that the results of the linspace function and the colon operator in Matlab are different.
This may imply the decimation processes are different.
By inspection, the linspace result is much closer to the decimal number.
I guess that the decimation process of Matlab's colon operator is simpler for speed.
I rephrase my question:
Does Octave's colon operator with floating-point numbers need to use decimation as Matlab does?
Earlier in this conversation, I thought that compatibility with Matlab was most important for usual Octave users. Now I understand that this may mislead other unknown features.
So far, I learned that an integer range can be generated 'safely' with the colon operator.
Regarding the floating-point number range, we may want to use the function "linspace" for the best decimation.
I don't know who is responsible for the colon operator source code.
But I think that if there is a document for comparing colon operator with linspace function,
that will be helpful.
There is another question: I found that (-20:1:1)*.1 and [-20:1:1]*0.1 have different results.
octave:87> tt=[(-20:1:1)*0.1; [-20:1:1]*0.1]'
tt =
-2 -2
-1.9 -1.9
-1.8 -1.8
-1.7 -1.7
-1.6 -1.6
-1.5 -1.5
-1.4 -1.4
-1.3 -1.3
-1.2 -1.2
-1.1 -1.1
-1 -1
-0.9 -0.9
-0.8 -0.8
-0.7 -0.7
-0.6 -0.6
-0.5 -0.5
-0.4 -0.4
-0.3 -0.3
-0.2 -0.2
-0.0999999999999999 -0.1
-1.11022302462516e-16 0
0.1 0.1
octave:88> [num2str(tt(:,1),'%.42g') repmat(' ',size(tt,1)) num2str(tt(:,2),'%.42g')]
ans =
-2 -2
-1.89999999999999991118215802998747676610947 -1.9000000000000001332267629550187848508358
-1.80000000000000004440892098500626161694527 -1.80000000000000004440892098500626161694527
-1.69999999999999995559107901499373838305473 -1.70000000000000017763568394002504646778107
-1.60000000000000008881784197001252323389053 -1.60000000000000008881784197001252323389053
-1.5 -1.5
-1.39999999999999991118215802998747676610947 -1.4000000000000001332267629550187848508358
-1.30000000000000004440892098500626161694527 -1.30000000000000004440892098500626161694527
-1.19999999999999995559107901499373838305473 -1.20000000000000017763568394002504646778107
-1.0999999999999998667732370449812151491642 -1.10000000000000008881784197001252323389053
-1 -1
-0.899999999999999911182158029987476766109467 -0.900000000000000022204460492503130808472633
-0.7999999999999999333866185224906075745821 -0.800000000000000044408920985006261616945267
-0.699999999999999955591079014993738383054733 -0.7000000000000000666133814775093924254179
-0.5999999999999998667732370449812151491642 -0.600000000000000088817841970012523233890533
-0.499999999999999888977697537484345957636833 -0.5
-0.399999999999999911182158029987476766109467 -0.400000000000000022204460492503130808472633
-0.2999999999999999333866185224906075745821 -0.300000000000000044408920985006261616945267
-0.19999999999999990007992778373591136187315 -0.200000000000000011102230246251565404236317
-0.0999999999999998945288126606101286597549915 -0.100000000000000005551115123125782702118158
-1.1102230246251565404236316680908203125e-16 0
0.100000000000000005551115123125782702118158 0.100000000000000005551115123125782702118158
The range with [...] is better decimated than the case of (...).
More interestingly, if I transpose the range array such as (-20:1:1)'*.1, then I have the same result as [-20:1:1]'*.1.
Is this a bug? Did someone say that there is a patch for this?
I am using Octave 3.8.2 on cygwin.
Thank you guys for many comments.
Best regards,
jo
ps.
I reopened this topic for my curiosity.
It is fun to learn how Octave works and to compare it with Matlab.
You had better email me to see any more comparison results.
Sharing raw data on this conversation looks like spam.


On 09/26/2014 02:43 AM, s.jo wrote:
> Daniel Sebald wrote
[snip]
>> To be factorable doesn't necessarily require that the floating point
>> step size be an exact number system equivalent, i.e., that .1 equal
>> exactly 1/10. It just requires the machine arithmetic to be as expected
>> when producing an operation result, i.e.,
>>
>> octave-cli:1> rem(-2,.1) == 0
>> ans = 1
>> octave-cli:2> rem(0,.1) == 0
>> ans = 1
>> octave-cli:3> [-20:1:0]*.1
>> ans =
>>
>> Columns 1 through 5:
>>
>> -2.00000 -1.90000 -1.80000 -1.70000 -1.60000
>>
>> Columns 6 through 10:
>>
>> -1.50000 -1.40000 -1.30000 -1.20000 -1.10000
>>
>> Columns 11 through 15:
>>
>> -1.00000 -0.90000 -0.80000 -0.70000 -0.60000
>>
>> Columns 16 through 20:
>>
>> -0.50000 -0.40000 -0.30000 -0.20000 -0.10000
>>
>> Column 21:
>>
>> 0.00000
>>
>>
>> Jo, what do you get for the above scripts?
>>
>> Dan
>
> Dan and others,
> Sorry for late reply.
>
> Your integerized stepping seems to be a decimation process.
> You used stepsize 1. Any bigger integers such as 10 or 10^17 are allowed in
> the decimation process.
By decimation, I take it you mean somehow converting the number to true
decimal representation, similar to the decimal arithmetic toolbox Oliver
alluded to. I don't think that's the case.
> Comparing the results of colon operators with floating-point numbers,
> Matlab seems to use such decimation process, but Octave does not.
> I agree that this decimation feature of colon operator is far from any IEEE
> standard.
>
> Also I point out that the results of linspace function and colon operator in
> Matlab are different.
> This may imply decimation processes are different.
> By inspection, the linspace result is much closer to decimal number.
> I guess that the decimation process of Matlab's colon operator is simpler
> for speed.
No, I doubt that. The linspace and colon operator are probably two
slightly different algorithms. For linspace the spacing is exactly an
integer, by definition. That's not necessarily so for the colon
operator where the spacing can result in a fraction of the step size at
the last interval.
Looking at the code a bit, I see that the range (colon) operator does
attempt to use integer multiplication as much as possible, e.g., from
Range::matrix_value:
cache.resize (1, rng_nelem);
double b = rng_base;
double increment = rng_inc;
if (rng_nelem > 0)
  {
    // The first element must always be *exactly* the base.
    // E.g, -0 would otherwise become +0 in the loop (-0 + 0*increment).
    cache(0) = b;
    for (octave_idx_type i = 1; i < rng_nelem; i++)
      cache(i) = b + i * increment;
  }
So, the accumulation of addition errors doesn't seem to be an issue here.
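That loop can be mimicked one-for-one in Python to see exactly where the error enters (same IEEE doubles; the `rng_base`/`rng_inc` names are copied from the snippet above):

```python
rng_base, rng_inc, rng_nelem = -2.0, 0.1, 21   # the t = [-2:0.1:0] case
cache = [0.0] * rng_nelem
cache[0] = rng_base                            # first element is *exactly* the base
for i in range(1, rng_nelem):
    cache[i] = rng_base + i * rng_inc          # one multiply + one add, no accumulation

print(cache[19] == -0.1)                       # False: 19*0.1 is already rounded
print(cache[10] == -1.0)                       # True: 10*0.1 rounds to exactly 1.0
```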
> So far, I learned that integer range can be generated 'safely' with colon
> operator.
> Regarding the floating-point number range, we may want to use the function
> "linspace" for the best decimation.
>
> I don't know who is responsible for the colon operator source code.
> But I think that if there is a document for comparing colon operator with
> linspace function,
> that will be helpful.
>
> There is another question: I found that (-20:1:1)*.1 and [-20:1:1]*0.1 have
> different results.
>
>
> The range with [...] is better decimated than the case of (...).
Interesting. You may have it right there. And in fact, it appears the
conversation in the bug report
https://savannah.gnu.org/bugs/?42589
turns to this issue. John describes how a "Range" class retains the
description of the range rather than converting it to a matrix class.
That actually may be a good idea because it could possibly allow
retaining double precision if needed somehow (e.g., machines where the
single precision is markedly different from "double"). Whereas
converting the range to a matrix might make the result single precision
depending on compiler settings or whatnot. I'm just speaking in the
abstract without any example.
So, in your example, the difference is (-20:1:1) is a Range description
(base, increment, limit), where [-20:1:1] is a matrix_value
implementation of that range (all the actual 22 values).
Let's try to discern which of these might be the more accurate result:
>> t_1 = (-20:1:1)*.1;
>> t_2 = [-20:1:1]*.1;
>> t_f = [-2 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 ...
-1 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0 0.1];
>> t_d = double(zeros(1,22));
>> t_d(1) = -2; t_d(2) = -1.9; t_d(3) = -1.8; t_d(4) = -1.7; t_d(5) = -1.6;
>> t_d(6) = -1.5; t_d(7) = -1.4; t_d(8) = -1.3; t_d(9) = -1.2; t_d(10) = -1.1;
>> t_d(11) = -1; t_d(12) = -0.9; t_d(13) = -0.8; t_d(14) = -0.7; t_d(15) = -0.6;
>> t_d(16) = -0.5; t_d(17) = -0.4; t_d(18) = -0.3; t_d(19) = -0.2; t_d(20) = -0.1;
>> t_d(21) = 0; t_d(22) = 0.1;
>> max(abs(t_d - t_f))
ans = 0
>> norm(abs(t_d - t_f))
ans = 0
>> max(abs(t_d - t_1))
ans = 2.2204e-16
>> norm(abs(t_d - t_1))
ans = 4.3088e-16
>> max(abs(t_d - t_2))
ans = 2.2204e-16
>> norm(abs(t_d - t_2))
ans = 4.7429e-16
>> max(abs(t_1 - t_2))
ans = 2.2204e-16
>> [abs(t_d-t_1)' abs(t_d-t_2)']
ans =
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e-16
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e-16
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e-16
2.2204e-16 0.0000e+00
0.0000e+00 2.2204e-16
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
1.1102e-16 0.0000e+00
2.2204e-16 0.0000e+00
0.0000e+00 1.1102e-16
1.1102e-16 1.1102e-16
0.0000e+00 0.0000e+00
1.1102e-16 0.0000e+00
1.6653e-16 5.5511e-17
5.5511e-17 0.0000e+00
1.3878e-16 0.0000e+00
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
From the norm(abs(t_d - t_f)) = 0 result, I'd say my machine/compiler
is using double precision by default. Also, there is a difference
between (A:1:B)*d and [A:1:B]*d but no implementation seems better than
the other. If anything (A:1:B)*d is marginally better than [A:1:B]*d,
but statistically speaking the norm difference of 4.3405e-17 is probably
not conclusive. However, if I were to guess, I'd say (A:1:B)*d retains
accuracy better because it doesn't cast to an IEEE float representation
and then multiply. The computations may be all done with more bits in
the ALU. But we're talking the very last bit here, I believe.
> More interestingly, if I transpose the range array such as (-20:1:1)'*.1,
> then I have the same result as [-20:1:1]'*.1.
The transpose likely forces the Range description into a matrix_value
implementation.
> Is this a bug? Did someone say that there is a patch for this?
> I am using Octave 3.8.2 on cygwin.
OK, Cygwin. So there's where there could be some differences in
compiler settings that make your computations single precision and
therefore showing larger errors in the arithmetic than what I'm seeing.
Try the script lines above on your machine and see what you get. You
probably should have the latest code with the Range patch applied. In
any case, though, if you can identify a single/double precision issue,
it might suggest that the Octave code isn't retaining precision...or it
could simply be that on your system Matlab is using double precision at
compilation whereas Octave is not.
Dan


Daniel Sebald wrote
On 09/26/2014 02:43 AM, s.jo wrote:
> Daniel Sebald wrote
[snip]
>> To be factorable doesn't necessarily require that the floating point
>> step size be an exact number system equivalent, i.e., that .1 equal
>> exactly 1/10. It just requires the machine arithmetic to be as expected
>> when producing an operation result, i.e.,
>>
>> octavecli:1> rem(2,.1) == 0
>> ans = 1
>> octavecli:2> rem(0,.1) == 0
>> ans = 1
>> octavecli:3> [20:1:0]*.1
>> ans =
>>
>> Columns 1 through 5:
>>
>> 2.00000 1.90000 1.80000 1.70000 1.60000
>>
>> Columns 6 through 10:
>>
>> 1.50000 1.40000 1.30000 1.20000 1.10000
>>
>> Columns 11 through 15:
>>
>> 1.00000 0.90000 0.80000 0.70000 0.60000
>>
>> Columns 16 through 20:
>>
>> 0.50000 0.40000 0.30000 0.20000 0.10000
>>
>> Column 21:
>>
>> 0.00000
>>
>>
>> Jo, what do you get for the above scripts?
>>
>> Dan
>
> Dan and others,
> Sorry for late reply.
>
> Your integerized stepping seems to be a decimation process.
> You used stepsize 1. Any bigger integers such as 10 or 10^17 are allowed in
> the decimation process.
By decimation, I take it you mean somehow converting the number to true
decimal representation, similar to the decimal arithmetic toolbox Oliver
alluded to. I don't think that's the case.
> Comparing the results of colon operators with floatingpoint numbers,
> Matlab seems to use such decimation process, but Octave does not.
> I agree that this decimation feature of colon operator is far from any IEEE
> standard.
>
> Also I point out that the results of linspace function and colon operator in
> Matlab are different.
> This may imply decimation processes are different.
> By inspection, the linspace result is much close to decimal number.
> I guess that the decimation process of Matlab's colon operator is simpler
> for speed.
No, I doubt that. The linspace and colon operator are probably two
slightly different algorithms. For linspace the spacing is exactly an
integer, by definition. That's not necessarily so for the colon
operator where the spacing can result in a fraction of the step size at
the last interval.
Looking at the code a bit, I see that the range (colon) operator does
attempt to use integer multiplication as much as possible, e.g., from
Range::matrix_value:
cache.resize (1, rng_nelem);
double b = rng_base;
double increment = rng_inc;
if (rng_nelem > 0)
{
// The first element must always be *exactly* the base.
// E.g, 0 would otherwise become +0 in the loop (0 +
0*increment).
cache(0) = b;
for (octave_idx_type i = 1; i < rng_nelem; i++)
cache(i) = b + i * increment;
}
So, the accumulation of addition errors doesn't seem to be an issue here.
> So far, I learned that integer range can be generated 'safely' with colon
> operator.
> Regarding to floatingpoint number range, we may want to use the function
> "linspace" for the best decimation.
>
> I don't know who is responsible for the colon operator source code.
> But I think that if there is a document for comparing colon operator with
> linspace function,
> that will be helpful.
>
> There is another question: I found that (20:1:1)*.1 and [20:1:1]*0.1 have
> different results.
>
>
> The range with [...] is better decimated than the case of (...).
Interesting. You may have it right there. And in fact, it appears the
conversation in the bug report
https://savannah.gnu.org/bugs/?42589turns to this issue. John describes how a "Range" class retains the
description of the range rather than converting it to a matrix class.
That actually may be a good idea because it could possibly allow
retaining double precision if needed somehow (e.g., machines where the
single precision is markedly different from "double"). Whereas
converting the range to a matrix might make the result single precision
depending on compiler settings or whatnot. I'm just speaking in the
abstract without any example.
So, in your example, the difference is (20:1:1) is a Range description
(base, increment, limit), where [20:1:1] is a matrix_value
implementation of that range (all the actual 21 values).
Let's try to discern which of these might be the more accurate result:
>> t_1 = (20:1:1)*.1;
>> t_2 = [20:1:1]*.1;
>> t_f = [2 1.9 1.8 1.7 1.6 1.5 1.4 1.3 1.2 1.1 ...
1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0.1];
>> t_d = double(zeros(1,21));
>> t_d(1) = 2; t_d(2) = 1.9; t_d(3) = 1.8; t_d(4) = 1.7; t_d(5) = 1.6;
>> t_d(6) = 1.5; t_d(7) = 1.4; t_d(8) = 1.3; t_d(9) = 1.2; t_d(10) = 1.1;
>> t_d(11) = 1; t_d(12) = 0.9; t_d(13) = 0.8; t_d(14) = 0.7; t_d(15) = 0.6;
>> t_d(16) = 0.5; t_d(17) = 0.4; t_d(18) = 0.3; t_d(19) = 0.2; t_d(20) = 0.1;
>> t_d(21) = 0; t_d(22) = 0.1;
>> max(abs(t_d  t_f))
ans = 0
>> norm(abs(t_d  t_f))
ans = 0
>> max(abs(t_d  t_1))
ans = 2.2204e16
>> norm(abs(t_d  t_1))
ans = 4.3088e16
>> max(abs(t_d  t_2))
ans = 2.2204e16
>> norm(abs(t_d  t_2))
ans = 4.7429e16
>> max(abs(t_1  t_2))
ans = 2.2204e16
>> [abs(t_dt_1)' abs(t_dt_2)']
ans =
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e16
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e16
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
0.0000e+00 2.2204e16
2.2204e16 0.0000e+00
0.0000e+00 2.2204e16
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
1.1102e16 0.0000e+00
2.2204e16 0.0000e+00
0.0000e+00 1.1102e16
1.1102e16 1.1102e16
0.0000e+00 0.0000e+00
1.1102e16 0.0000e+00
1.6653e16 5.5511e17
5.5511e17 0.0000e+00
1.3878e16 0.0000e+00
0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00
From the norm(abs(t_d  t_f)) = 0 result, I'd say my machine/compiler
is using double precision by default. Also, there is a difference
between (A:1:B)*d and [A:1:B]*d but no implementation seems better than
the other. If anything (A:1:B)*d is marginally better than [A:1:B]*d,
but statistically speaking the norm difference of 4.3405e17 is probably
not conclusive. However, if I were to guess, I'd say (A:1:B)*d retains
accuracy better because it doesn't cast to a IEEE float representation
and then multiply. The computations may be all done with more bits in
the ALU. But we're talking the very last bit here, I believe.
> More interestingly, if I transpose the range array such as (20:1:1)'*.1,
> then I have the same result in [20:1:1]'*.1.
The transpose likely forces the Range description into a matrix_value
implementation.
> Is this a bug? Did someone say that there is a patch for this?
> I am using Octave 3.8.2 on cygwin.
OK, Cygwin. So that's where there could be some differences in
compiler settings that make your computations single precision and
therefore show larger errors in the arithmetic than what I'm seeing.
Try the script lines above on your machine and see what you get. You
probably should have the latest code with the Range patch applied. In
any case, though, if you can identify a single/double precision issue,
it might suggest that the Octave code isn't retaining precision...or it
could simply be that on your system Matlab is using double precision at
compilation whereas Octave is not.
Dan
Dear Dan,
Thanks to your comments, I now have a clearer idea of ranges.
Please find my thoughts below:
From https://www.gnu.org/software/octave/doc/interpreter/Ranges.html ,
I can see the difference among the range object, the matrix object, and linspace.
I understand that the transpose operation triggers conversion of a range into a matrix type.
octave:14> (20:-1:-1) - [20:-1:-1]
ans =
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
octave:15> (20:-1:-1)*.1 - [20:-1:-1]*.1
ans =
Columns 1 through 5:
0 2.2204e-16 0 2.2204e-16 0
Columns 6 through 10:
0 2.2204e-16 0 2.2204e-16 2.2204e-16
Columns 11 through 15:
0 1.1102e-16 1.1102e-16 1.1102e-16 2.2204e-16
Columns 16 through 20:
1.1102e-16 1.1102e-16 1.1102e-16 1.1102e-16 1.1102e-16
Columns 21 and 22:
1.1102e-16 0
But I could not follow the above result.
It is fine that the range and matrix types give the same results when displayed as they are.
But in a mixed-type operation with a double, they look different!
There seems to be a bug in the mixed-type operation of a range object with a double number.
I installed Octave 3.8.2 from the cygwin package repository; I did not compile it from source.
I believe that a range mixed with a double in an operation should give the same result as a matrix mixed with a double.
I suspect that floating-point arithmetic rounding modes are selected differently during the compilation of different source files. http://en.wikipedia.org/wiki/IEEE_floating_point#Rounding_rules
I am not very familiar with gcc compile options.
Such small errors could not come from single-precision numbers being used by mistake:
if single-precision arithmetic were involved, the worst-case errors would be much larger than 1e-10.
Anyhow, I agree that my previous conclusion comparing (A:1:B)*d and [A:1:B]*d was NOT correct.
Both expressions have similar accuracy in their floating-point representations.
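The single- versus double-precision question is easy to settle numerically. The following Python sketch (my own names; it simulates IEEE single precision by round-tripping through `struct`) shows the error scale that single precision would introduce on the same 0.1-spaced grid, far above the last-bit differences in the tables above:

```python
import struct

def through_f32(x: float) -> float:
    # round a double to the nearest IEEE single, then widen back to double
    return struct.unpack('f', struct.pack('f', x))[0]

grid = [k * 0.1 for k in range(20, -2, -1)]   # 2.0 down to -0.1, as doubles
worst = max(abs(through_f32(x) - x) for x in grid)
print(worst)  # around 1e-8: a single-precision effect would be unmistakable
```

So errors of a few 1e-16, as observed, are indeed characteristic of double precision.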
There is a particular script that I use when I test ranges:
sum(abs(sin(k*pi))<eps), where k ranges over floating-point numbers.
This script, which counts the zero crossings of sin, is motivated by
http://www.mathworks.co.kr/company/newsletters/articles/the-tetragamma-function-and-numerical-craftsmanship.html
From the result of the script, I can determine whether multiples of pi such as 1.0*pi, 2.0*pi, ..., 1000.0*pi are generated accurately by range operations.
As a result, both (A:1:B)*d and [A:1:B]*d perform worse than linspace in Octave.
However, in Matlab, the results from the range type, the matrix-range type, and linspace all pass the 'abs(sin(k*pi))<eps' test for large k.
See the complete script and their results (comparing Octave and Matlab) as follows:
== sin test script ==
version
% generate ranges :
% 1. range object, 2. matrix object, 3. linspace, 4. perfect decimation
N=1000;
range_test1=(-N:0.1:N);
range_test2=[-N:0.1:N];
range_test3=linspace(-N,N,20*N+1);
range_test4=(str2num(sprintf('%.12f ',-N:0.1:N)));
% count sin(k*pi)==0 : 0 or 1 expected due to floating point representation error
sum(sin(range_test1*pi)==0)
sum(sin(range_test2*pi)==0)
sum(sin(range_test3*pi)==0)
sum(sin(range_test4*pi)==0)
% count sin(k*pi)<eps : what we want is 2*N+1 in case of the best decimation
sum(abs(sin(range_test1*pi))<eps(range_test1*pi))
sum(abs(sin(range_test2*pi))<eps(range_test2*pi))
sum(abs(sin(range_test3*pi))<eps(range_test3*pi))
sum(abs(sin(range_test4*pi))<eps(range_test4*pi))
== Octave 3.8.2 result ==
+ version
ans = 3.8.2
+ ## generate ranges :
+ ## 1. range object, 2. matrix object, 3. linspace, 4. perfect decimation
+ N = 1000;
+ range_test1 = (-N:0.1:N);
+ range_test2 = [-N:0.1:N];
+ range_test3 = linspace (-N, N, 20 * N + 1);
+ range_test4 = (str2num (sprintf ('%.12f ', -N:0.1:N)));
+ ## count sin(k*pi)==0 : 0 or 1 expected due to floating point representation error
+ sum (sin (range_test1 * pi) == 0)
ans = 0
+ sum (sin (range_test2 * pi) == 0)
ans = 0
+ sum (sin (range_test3 * pi) == 0)
ans = 1
+ sum (sin (range_test4 * pi) == 0)
ans = 1
+ ## count sin(k*pi)<eps : what we want is 2*N+1 in case of the best decimation
+ sum (abs (sin (range_test1 * pi)) < eps (range_test1 * pi))
ans = 1055
+ sum (abs (sin (range_test2 * pi)) < eps (range_test2 * pi))
ans = 1168
+ sum (abs (sin (range_test3 * pi)) < eps (range_test3 * pi))
ans = 2001
+ sum (abs (sin (range_test4 * pi)) < eps (range_test4 * pi))
ans = 2001
== Matlab 2012b result ==
version
ans =
8.0.0.783 (R2012b)
% generate ranges :
% 1. range object, 2. matrix object, 3. linspace, 4. perfect decimation
N=1000;
range_test1=(-N:0.1:N);
range_test2=[-N:0.1:N];
range_test3=linspace(-N,N,20*N+1);
range_test4=(str2num(sprintf('%.12f ',-N:0.1:N)));
% count sin(k*pi)==0 : 0 or 1 expected due to floating point representation error
sum(sin(range_test1*pi)==0)
ans =
1
sum(sin(range_test2*pi)==0)
ans =
1
sum(sin(range_test3*pi)==0)
ans =
1
sum(sin(range_test4*pi)==0)
ans =
1
% count sin(k*pi)<eps : what we want is 2*N+1 in case of the best decimation
sum(abs(sin(range_test1*pi))<eps(range_test1*pi))
ans =
2001
sum(abs(sin(range_test2*pi))<eps(range_test2*pi))
ans =
2001
sum(abs(sin(range_test3*pi))<eps(range_test3*pi))
ans =
2001
sum(abs(sin(range_test4*pi))<eps(range_test4*pi))
ans =
2001
== discussion ==
Range objects with floating-point numbers have not received much attention so far.
In Octave, floating-point ranges generated from a range object may show poorer accuracy than
those from the linspace function when evaluating trigonometric functions such as sin, cos, and exp (with complex arguments).
Meanwhile, Matlab shows the same accuracy for the range expressions and the linspace function
when we apply the abs(sin(k*pi))<eps test for large k.
For example, I counted the zero crossings of the sin function over the range -1000*pi to 1000*pi,
with a step size of 0.1*pi. The true count is 2001 points in that range.
Matlab showed the correct count for every range method, but Octave showed the correct count only with the linspace function.
Note that Matlab seems to have no range type.
Speaking of the range object, there seems to be a bug in its mixed-type operation with double-precision numbers in Octave.
I suspect some inconsistent floating-point rounding options during the compilation of different source files.
For numerical applications that make intensive use of trigonometric functions with large angles,
we had better use the linspace function to generate a range of radian angles in Octave.
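The gap between the range-based and linspace-based grids can be reproduced outside Octave. This Python sketch (an analogue of the sin test above, not Octave's implementation; the names are mine) compares multiplying each expanded element by pi against folding pi into the base and increment, which is what a scaled range amounts to:

```python
import math

# "expanded": multiply each element by pi after the grid is built.
# "scaled": fold pi into base and increment, i.e. b + i*inc with b = -1000*pi.
n = 20001
expanded = [(-1000 + i * 0.1) * math.pi for i in range(n)]
b, inc = -1000 * math.pi, 0.1 * math.pi
scaled = [b + i * inc for i in range(n)]

def near_zero_count(xs):
    # count grid points where |sin(x)| is below one ulp of |x|
    return sum(abs(math.sin(x)) < math.ulp(abs(x)) for x in xs)

print(near_zero_count(expanded), near_zero_count(scaled))
# expanded finds all 2001 multiples of pi; scaled misses a sizable fraction
```

Once both the base and the increment are fractional (multiples of pi), the accumulated b + i*inc lands measurably off the true multiples of pi, especially near zero where one ulp is tiny.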

ps. I added more test scripts for the hex forms of the ranges and their mixed-type operations.
We can see how they differ from Matlab's output.
Note that range*pi and matrix_range*pi give the same result in Matlab because Matlab has no range object.
Also note that the linspace functions of Matlab and Octave do not match, even though both pass the abs(sin(k*pi))<eps test.
Octave's linspace is very close to the perfect decimation, but Matlab's is not.
There seems to be some unknown additional rounding process in Matlab.
Speaking of the zero-crossing check of the sin function, I wrote that script to observe the behavior of sin with floating-point numbers. I happened to find that the results vary according to the range method in Octave.
Such errors occur more often near 0. However, Matlab does not show such errors.
This drove me here~
== octave ==
+ ## full comparison of range hex forms
+ h1 = num2hex (range_test1);
+ h2 = num2hex (range_test2);
+ h3 = num2hex (range_test3);
+ h4 = num2hex (range_test4);
+ sum (sum (h1 != h2))
ans = 0
+ sum (sum (h2 != h3))
ans = 25373
+ sum (sum (h3 != h4))
ans = 5
+ sum (sum (h4 != h1))
ans = 25374
+ sum (sum (h1 != h3))
ans = 25373
+ sum (sum (h2 != h4))
ans = 25374
+ ## full comparison of (range*double) hex forms
+ h1 = num2hex (range_test1 * pi);
+ h2 = num2hex (range_test2 * pi);
+ h3 = num2hex (range_test3 * pi);
+ h4 = num2hex (range_test4 * pi);
+ sum (sum (h1 != h2))
ans = 9518
+ sum (sum (h2 != h3))
ans = 15053
+ sum (sum (h3 != h4))
ans = 5
+ sum (sum (h4 != h1))
ans = 17261
+ sum (sum (h1 != h3))
ans = 17266
+ sum (sum (h2 != h4))
ans = 15053
== matlab ==
% full comparison of range hex forms
h1=num2hex(range_test1);
h2=num2hex(range_test2);
h3=num2hex(range_test3);
h4=num2hex(range_test4);
sum(sum(h1~=h2))
ans =
0
sum(sum(h2~=h3))
ans =
6752
sum(sum(h3~=h4))
ans =
10346
sum(sum(h4~=h1))
ans =
7968
sum(sum(h1~=h3))
ans =
6752
sum(sum(h2~=h4))
ans =
7968
% full comparison of (range*double) hex forms
h1=num2hex(range_test1*pi);
h2=num2hex(range_test2*pi);
h3=num2hex(range_test3*pi);
h4=num2hex(range_test4*pi);
sum(sum(h1~=h2))
ans =
0
sum(sum(h2~=h3))
ans =
7275
sum(sum(h3~=h4))
ans =
9932
sum(sum(h4~=h1))
ans =
8684
sum(sum(h1~=h3))
ans =
7275
sum(sum(h2~=h4))
ans =
8684



On 09/28/2014 10:20 PM, s.jo wrote:
> Daniel Sebald wrote
[snip]
> From the result of the script, I can determine whether multiples of pi such as
> 1.0*pi, 2.0*pi, ... 1000.0*pi are well generated by range
> operations.
> As a result, both (A:1:B)*d and [A:1:B]*d perform worse than linspace in Octave.
> However, in Matlab, all results from the range type, matrix-range type and
> linspace pass the 'abs(sin(k*pi))<eps' test for large k.
> See the complete script and their results (comparing Octave and Matlab) as
> follows:
>
> == sin test script ==
> version
> % generate ranges :
> % 1. range object, 2. matrix object, 3. linspace, 4. perfect decimation
> N=1000;
> range_test1=(-N:0.1:N);
> range_test2=[-N:0.1:N];
> range_test3=linspace(-N,N,20*N+1);
Nice coding for this line:
> range_test4=(str2num(sprintf('%.12f ',-N:0.1:N)));
but I think if you change this to '%.1f ', it pretty much retains only
the 1/10 fractional digit, which is all we really want. If there is a
nonzero digit at 10^-12 or something and it gets converted back to a
number, then that is not a correct benchmark.
> % count sin(k*pi)==0 : 0 or 1 expected due to floating point representation
> error
> sum(sin(range_test1*pi)==0)
> sum(sin(range_test2*pi)==0)
> sum(sin(range_test3*pi)==0)
> sum(sin(range_test4*pi)==0)
Could you add a couple more cases here?
range_test5=(-N/0.1:1:N/0.1)*0.1;
range_test6=[-N/0.1:1:N/0.1]*0.1;
> % count sin(k*pi)<eps : what we want is 2*N+1 in case of the best decimation
> sum(abs(sin(range_test1*pi))<eps(range_test1*pi))
> sum(abs(sin(range_test2*pi))<eps(range_test2*pi))
> sum(abs(sin(range_test3*pi))<eps(range_test3*pi))
> sum(abs(sin(range_test4*pi))<eps(range_test4*pi))
and here:
sum(abs(sin(range_test5*pi))<eps(range_test5*pi))
sum(abs(sin(range_test6*pi))<eps(range_test6*pi))
> == Octave 3.8.2 result ==
> + sum (abs (sin (range_test1 * pi))< eps (range_test1 * pi))
> ans = 1055
> + sum (abs (sin (range_test2 * pi))< eps (range_test2 * pi))
> ans = 1168
On my system, the above case is showing 2001, so it is only the first
test1 case out of tolerance for me.
> + sum (abs (sin (range_test3 * pi))< eps (range_test3 * pi))
> ans = 2001
> + sum (abs (sin (range_test4 * pi))< eps (range_test4 * pi))
> ans = 2001
OK, on my system I printed out the matrix_value whenever the conversion
was done from range to Matrix. Here are my results, which should start
to illustrate where the differences are:
octave:20> N=1000;
octave:21> range_test1=(-N:0.1:N);
octave:22> range_test2=[-N:0.1:N];
matrix_value, inc 0.1
octave:23> range_test3=linspace(-N,N,20*N+1);
octave:24> %range_test4=(str2num(sprintf('%.12f ',-N:0.1:N)));
octave:24> range_test4=(str2num(sprintf('%.1f ',-N:0.1:N)));
octave:25> range_test5=(-N/0.1:1:N/0.1)*0.1;
octave:26> range_test6=[-N/0.1:1:N/0.1]*0.1;
matrix_value, inc 1
octave:27> sum(abs(sin(range_test1*pi))<eps(range_test1*pi))
matrix_value, inc 0.314159
matrix_value, inc 0.314159
ans = 1034
octave:28> sum(abs(sin(range_test2*pi))<eps(range_test2*pi))
ans = 2001
octave:29> sum(abs(sin(range_test3*pi))<eps(range_test3*pi))
ans = 2001
octave:30> sum(abs(sin(range_test4*pi))<eps(range_test4*pi))
ans = 2001
octave:31> sum(abs(sin(range_test5*pi))<eps(range_test5*pi))
matrix_value, inc 0.314159
matrix_value, inc 0.314159
ans = 1034
octave:32> sum(abs(sin(range_test6*pi))<eps(range_test6*pi))
ans = 2001
Comments:
1) Note how multiplication of the range class, i.e., (A:s:B), does not
expand the range; it simply scales the range parameters. So (-N:0.1:N)*pi is
actually changing the range to, I believe, (-N*pi:0.1*pi:N*pi).
2) Similarly, in the 5th case, (-N/0.1:1:N/0.1)*0.1 is exactly the
same as (-N:0.1:N), such that (-N/0.1:1:N/0.1)*0.1 times pi is also treated as
(-N*pi:0.1*pi:N*pi). Notice for test1 and test5 that the increment is
0.1*pi = 0.314159 at the point a range-to-matrix_value conversion is done.
3) Tests 1 and 5 don't pass the test because the formula
for (octave_idx_type i = 1; i < rng_nelem; i++)
  cache(i) = b + i * increment;
behaves according to the numerical characteristics of 'b' and of
'increment'. If b is an integer, say b = -1000, then that addition step
may not be so susceptible to numerical effects. However, if b = -1000*pi,
i.e., a fractional number, then the addition can be less accurate.
4) The same can probably be said for increment. If that can be made to
be strictly integer (i.e., fits in the mantissa of the float), then the
multiplication is more accurate. But really, I think the addition is
the more troublesome of the two. So this sort of comes back to
representing the range slightly different as:
Given [A:s:B], if
1) A is an integer, and
2) A/s is an integer,
then [A:s:B] can be implemented as
[A/s : 1 : floor(B/s)] * s
where the range iteration is integer-based.
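This integer-based reformulation can be prototyped quickly. A Python sketch (my own names, double precision as in Octave) compares it against the plain start-plus-increments construction over [-1000:0.1:1000]:

```python
import math

def int_based(a, s, b):
    # [a:s:b] realized as [a/s : 1 : floor(b/s)] * s,
    # assuming a and a/s are integral as in the proposal above
    lo, hi = round(a / s), math.floor(b / s)
    return [k * s for k in range(lo, hi + 1)]

def start_plus_increments(a, s, b):
    # the cache(i) = b + i*increment style of construction
    n = math.floor((b - a) / s)
    return [a + i * s for i in range(n + 1)]

r1 = int_based(-1000, 0.1, 1000)
r2 = start_plus_increments(-1000, 0.1, 1000)
worst = max(abs(x - y) for x, y in zip(r1, r2))
print(len(r1), len(r2), worst)
```

The integer-scaled form does a single rounding per element (k*s), whereas the start-plus-increments form rounds twice (i*s, then the addition), which is where the grids drift apart by a few last bits.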
5) I notice that in linspace() algorithms for the various class objects,
the algorithm is similarly
for (octave_idx_type i = 1; i < n-1; i++)
  retval(i) = x1 + i*delta;
There is a slightly different alternative to this. For example,
retval(i) = (x1*(N-i) + x2*i)/N;
for i equal to 0 through N, where N is the number of intervals. Maybe that
behaves better; I don't know. I will analyze the difference when I get
the chance.
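The two linspace formulas can be compared directly. A Python sketch (my variable names; the formulas are the ones quoted above) over the same [-1000, 1000] grid:

```python
x1, x2, N = -1000.0, 1000.0, 20000   # N = number of intervals
delta = (x2 - x1) / N
incremental = [x1 + i * delta for i in range(N + 1)]            # x1 + i*delta style
weighted = [(x1 * (N - i) + x2 * i) / N for i in range(N + 1)]  # endpoint-weighted
# The weighted form hits both endpoints exactly by construction.
print(weighted[0] == x1, weighted[-1] == x2)  # True True
worst = max(abs(a - b) for a, b in zip(incremental, weighted))
print(worst)
```

The largest disagreements show up near zero, where the incremental form suffers cancellation between x1 and the accumulated i*delta while the weighted form divides an exactly representable integer numerator.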
6) Jo, you are seeing test2 fail. Could you please try test6 on your
system:
octave:26> range_test6=[-N/0.1:1:N/0.1]*0.1;
matrix_value, inc 1
octave:32> sum(abs(sin(range_test6*pi))<eps(range_test6*pi))
ans = 2001
I'm thinking that one might pass on your system because this is the case
where the range is built from an integer start and an integer increment, as
described above.
7) Does all this matter? I guess, simply because we tend to think in
nice even number intervals and nice decimal fractions. But for an
increment which is an arbitrary float, it's probably not that critical.
Dan

