Re: Test failures due to tolerance in fftfilt.m


Daniel Sebald
On 09/06/2012 02:59 PM, Daniel J Sebald wrote:
> I'll toss this one to Ed and Rik, since we were just talking about
> precision issues for svds test failures...
>
> I checked the current state of tests and found this failure:
>
>>>>>> processing
>>>>>> /usr/local/src/octave/octave/octave/scripts/signal/fftfilt.m
> ***** test

There is a bit more to this, and I've put a patch on Savannah:

https://savannah.gnu.org/bugs/index.php?37297

The routine will round the output if the inputs are integers and will
truncate the imaginary component if both inputs are real.  That seems
fair, I suppose.  (I do wonder, though, whether there should be an
option to disable this behavior, since some users might not want it.
Any thoughts, maintainers or OctDev?)  I've extended that concept to
cover the other cases: real*imaginary, imaginary*real, and
imaginary*imaginary.  I don't see why only the real*real case should be
handled...all or nothing, as I see it.  These conditions now have
tests, and there are a couple more tolerance tests for the
imaginary/imaginary scenario, as well as the complex/complex scenario.

By making the integerization (rounding) test more stringent, I
uncovered a bug whereby only the first element of the single-row output
vector was rounded.
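The class of bug described here can be sketched as follows (a Python/NumPy illustration, not the actual fftfilt.m code; the array values are made up):

```python
import numpy as np

# filter output that should be integer-valued, up to roundoff
y = np.array([2.0000000001, 2.9999999998, 4.9999999997])

# buggy pattern: rounding applied only to the first element
y_buggy = y.copy()
y_buggy[0] = np.round(y_buggy[0])

# fixed pattern: round the whole output vector
y_fixed = np.round(y)

print(y_buggy)  # only the first element is an exact integer
print(y_fixed)  # [2. 3. 5.]
```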

Dan

_______________________________________________
Octave-dev mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/octave-dev

eem2314


On Fri, Sep 7, 2012 at 2:12 PM, Daniel J Sebald <[hidden email]> wrote:
[...]

I just ran into the fftfilt test failure again (bugs 37297 & 35959)
and narrowed it down to differences between FFTPACK and fftw3.
Octave built with FFTPACK gets this test error:

!!!!! test failed
assert (fftfilt (b, r * x),r * r * [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],eps) expected
 Columns 1 through 3:
     ...
 maximum absolute error 2.22478e-16 exceeds tolerance 2.22045e-16

Rebuilding with fftw3 makes the error go away.  I then looked at the
errors with fftpack and fftw3, i.e., the difference between the fftfilt
output (a 10-element complex vector) and the expected vector:

                fftpack                                                fftw3
                -------                                                -----
   3.4694469519536142e-18 + 2.2204460492503131e-16i      0.0000000000000000e+00 - 0.0000000000000000e+00i
   1.3877787807814457e-17 + 2.2204460492503131e-16i      0.0000000000000000e+00 - 2.2204460492503131e-16i
   3.1892503067014210e-17 + 2.0395767215548695e-17i      0.0000000000000000e+00 - 0.0000000000000000e+00i
  -1.5476803848138888e-17 - 1.1721501528016046e-17i      0.0000000000000000e+00 - 0.0000000000000000e+00i
  -5.5511151231257827e-17 - 5.2041704279304213e-17i      0.0000000000000000e+00 + 2.7755575615628914e-17i
   0.0000000000000000e+00 - 6.9388939039072284e-17i      0.0000000000000000e+00 + 2.7755575615628914e-17i
  -3.1892503067014198e-17 - 3.5115384015709088e-17i      0.0000000000000000e+00 - 0.0000000000000000e+00i
   1.0999025841583994e-18 + 1.0166004376210030e-17i      0.0000000000000000e+00 + 5.5511151231257827e-17i
  -3.4694469519536142e-18 - 0.0000000000000000e+00i      0.0000000000000000e+00 - 0.0000000000000000e+00i
  -1.3877787807814457e-17 - 0.0000000000000000e+00i      0.0000000000000000e+00 - 5.5511151231257827e-17i

Some things to notice about these:

1) the largest error in both is in the imag part of the 2nd element
   and is exactly eps, i.e. one ulp; no big surprise

2) the fftpack result has more "garbage" numbers, but they are roughly
   the same size as the garbage from fftw3 and all smaller than eps

3) the reason the test fails with fftpack is that it was unlucky enough
   to put a bit of garbage in the real part of the second element,
   which pushed the absolute value of that element slightly above eps.
   Otherwise the two results should be considered equivalent.

4) the fftw3 result passes the test because assert() uses the infinity
   norm; had it used, e.g., the 2-norm, the test would have failed.
   These tests should not depend on which norm is used.
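Points 3 and 4 can be checked numerically from the table above (a NumPy sketch using the quoted error values, not Octave's assert() itself):

```python
import numpy as np

eps = np.finfo(float).eps  # 2.2204460492503131e-16

# point 3: second element of the fftpack error column
e2 = 1.3877787807814457e-17 + 2.2204460492503131e-16j
print(abs(e2))        # ~2.22478e-16, the value in the failure message
print(abs(e2) > eps)  # True: the real-part "garbage" tips it just over eps

# point 4: the fftw3 error column (only the imaginary parts are nonzero)
err = 1j * np.array([0.0, -2.2204460492503131e-16, 0.0, 0.0,
                     2.7755575615628914e-17, 2.7755575615628914e-17,
                     0.0, 5.5511151231257827e-17, 0.0,
                     -5.5511151231257827e-17])
inf_norm = np.max(np.abs(err))  # what assert() effectively compares
two_norm = np.linalg.norm(err)
print(inf_norm <= eps)  # True: the largest entry is exactly one ulp
print(two_norm <= eps)  # False: the 2-norm already exceeds eps
```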

I propose fixing this test by replacing the tolerance eps with something
like 2*eps*norm(z), where z = r*r*[1 1 0 0 0 0 0 0 0 0].  Just
multiplying eps by 2 would fix this particular problem, but tests like
these should always account for the size of the quantities being tested.

I put a modified version of Dan's patch for bug #37297 on the tracker.
In it I added norms to the test tolerances, so for example instead of

assert (y0, y, 55*eps);

I substituted

assert (y0, y, 4*eps*norm(y));

and it passes 10000 runs with both fftpack and fftw3.
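The norm-scaled tolerance pattern looks like this in outline (a NumPy sketch; `y` here is a stand-in vector, not the actual test data from the patch):

```python
import numpy as np

eps = np.finfo(float).eps
y = np.array([3.0, 1.0, 4.0, 1.0, 5.0])     # stand-in expected result
y0 = y + np.array([1, -1, 2, 0, -2]) * eps  # result with a few ulps of error

# fixed absolute tolerance: fragile, ignores the size of y
tol_fixed = 55 * eps

# norm-scaled tolerance: grows with the magnitude of the data
tol_scaled = 4 * eps * np.linalg.norm(y)

# equivalent of assert (y0, y, 4*eps*norm(y)) with an inf-norm comparison
assert np.max(np.abs(y0 - y)) <= tol_scaled
```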

--
Ed Meyer



Daniel Sebald
On 10/09/2012 07:49 PM, Ed Meyer wrote:

> [...]
> 3) the reason the test fails with fftpack is that it was unlucky enough
>     to have put a bit of garbage in the real part of the second element
>     which made the abs of the element slightly larger than eps. Otherwise
>     the two results should be considered equivalent.

Keep in mind that you may have found only the first instance of the
tolerance limit being exceeded; if the example were run for more
trials, worse excursions might result.

I think this was the test where I ran large numbers of trials just to
get an estimate of the probability of exceeding the limit.  It was
surprising at first how large the error could be, but thinking about
it, the FFT has rather extensive computational "mixing", for lack of a
better phrase.


> 4) the fftw3 result passes the test because assert() uses the infinity
>     norm; had it used, e.g. the 2-norm the test would have failed.
>     These tests should not depend on which norm is used.

I'm curious if you ran the test with inf-norm for high numbers of trials.


> I propose fixing this test by replacing the tolerance eps with something
> like 2*eps*norm(z) where z = r*r*[1 1 0 0 0 0 0 0 0 0]. Just multiplying
> eps by 2 would fix this problem but tests like these should always account
> for the size of the things being tested.

I'm fine with that.  Especially in this case, as the FFT has a lot of
computations in it.  However, there were one or two tests using
degenerate inputs where the result should come out exact.


> I put a modified version of Dan's patch for bug #37297 on the tracker.
> In it I added norms to the test tolerances, so for example instead of
>
> assert (y0, y, 55*eps);
>
> I substituted
>
> assert (y0, y, 4*eps*norm(y));
>
> and it passes 10000 passes with both fftpack and fftw3.

In this case, 4*norm(y) is approximately 49.  I had come up with 55 by
trial and error over large numbers of trials.  A scale factor of 50 was
probably still causing tolerance failures when I chose 55, but I
suspect failures are rare enough that hardly anyone who runs the test
will ever see one.  In other words, 49 is in the ballpark of what my
tests found.

Dan


c.-2

On 10 Oct 2012, at 09:16, Daniel J Sebald wrote:

>> I propose fixing this test by replacing the tolerance eps with something
>> like 2*eps*norm(z)

FYI this could be expressed as

2 * eps (z)

from the help text for eps () :

"Given a single argument X, return the distance between X and the next largest value"

c.

eem2314


On Wed, Oct 10, 2012 at 1:11 AM, c. <[hidden email]> wrote:

[...]

Thanks, Carlos, I wasn't aware of this capability.  I thought it was
just what I needed until I tried it on a vector: I expected something
like eps(z) = eps*norm(z), but what I get is eps(z(1)).  Is that the
intended behavior?
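For comparison, NumPy's analogue of a per-element eps is np.spacing, which does operate elementwise (a sketch; this is NumPy rather than Octave, so it only illustrates the difference between an elementwise eps(z) and the scalar eps*norm(z)):

```python
import numpy as np

z = np.array([1.0, 1.0, 0.0, 0.0])

per_element = np.spacing(np.abs(z))                   # one ulp at each element
scalar_tol = np.finfo(float).eps * np.linalg.norm(z)  # eps * norm(z)

print(per_element[0] == np.finfo(float).eps)  # True: the spacing at 1.0 is eps
print(scalar_tol)                             # one number for the whole vector
```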

--
Ed Meyer

