Re: svds test failure


Re: svds test failure

Rik-4
On 07/31/2012 07:02 AM, [hidden email] wrote:
>
> No, you're not.  The issue is that svds has a test that is
> nondeterministic and sometimes the error is higher than the tolerance.  On
> the machine that failed, try running "test svds" a few times and see
> how often it fails.
>
> I don't know if we can make this test deterministic because the
> randomness comes from ARPACK, not from Octave, so I don't know if we
> can seed it.  We could just increase the tolerance.
7/31/12

Melvin,

This issue comes up again and again.  The issue is that ARPACK is an
iterative solver, and success or failure can depend on the initial
starting point relative to the solution.  In order to stop
variability in tests, we initialize Octave's random number generators and
then create an initial seed location (which is now always the same) to pass
to ARPACK.  However, even this hasn't fully stopped all failures.  It is
possible that this is due to small machine-dependent round-off errors which
render even the initial seed location slightly different between machines.

I am willing to try one more time on this issue.  I'm attaching a small
patch for svds.m.  Apply it with 'patch -p1 < svds.patch'.  At a shell do
'./run-octave' and try this code:

for i = 1:300
  bm(i) = test ("svds");
endfor
sum (bm)

If the sum is not exactly 300 then some test runs failed and I will be
completely out of ideas.

--Rik

[Attachment: svds.patch (899 bytes)]

Re: svds test failure

Daniel Sebald
On 07/31/2012 12:47 PM, Rik wrote:

> On 07/31/2012 07:02 AM, [hidden email] wrote:
>>
>> No, you're not.  The issue is that svds has a test that is
>> nondeterministic and sometimes the error is higher than the tolerance.  On
>> the machine that failed, try running "test svds" a few times and see
>> how often it fails.
>>
>> I don't know if we can make this test deterministic because the
>> randomness comes from ARPACK, not from Octave, so I don't know if we
>> can seed it.  We could just increase the tolerance.
> 7/31/12
>
> Melvin,
>
> This issue comes up again and again.  The issue is that ARPACK is an
> iterative solution solver and the success or failure can depend on the
> initial starting point relative to the solution.  In order to stop
> variability in tests, we initialize Octave's random number generators and
> then create an initial seed location (which is now always the same) to pass
> to ARPACK.  However, even this hasn't fully stopped all failures.  It is
> possible that this is due to small machine-dependent round-off errors which
> render even the initial seed location slightly different between machines.

I agree with your assessment.  However, there are six tests in svds.m,
and only two of them use the 'opts' input options.  Whether the tests
pass or fail shouldn't depend on the code having a specific initial
state.

There are some other tests here that look somewhat suspect as well.  For
example:

%!testif HAVE_ARPACK
%! s = svds (speye (10));
%! assert (s, ones (6, 1), 2*eps);

To expect a numerical routine to be accurate within two times the
machine precision is asking for quite a bit.  The tests which have
'opts' passed in are asking for accuracy within 1e-10, which seems more
reasonable.

I realize one would expect the result to be deterministic and not fail
in one case and pass in the other, but who knows how ARPACK is set up?

Melvin, is there a way you could make some simple mod to the svds.m
tests to indicate exactly which test(s) are failing?

Dan

Re: svds test failure

Daniel Sebald
On 07/31/2012 01:25 PM, Daniel J Sebald wrote:

> On 07/31/2012 12:47 PM, Rik wrote:
>> On 07/31/2012 07:02 AM, [hidden email] wrote:
>>>
>>> No, you're not.  The issue is that svds has a test that is
>>> nondeterministic and sometimes the error is higher than the tolerance.  On
>>> the machine that failed, try running "test svds" a few times and see
>>> how often it fails.
>>>
>>> I don't know if we can make this test deterministic because the
>>> randomness comes from ARPACK, not from Octave, so I don't know if we
>>> can seed it.  We could just increase the tolerance.
>> 7/31/12
>>
>> Melvin,
>>
>> This issue comes up again and again. The issue is that ARPACK is an
>> iterative solution solver and the success or failure can depend on the
>> initial starting point relative to the solution. In order to stop
>> variability in tests, we initialize Octave's random number generators and
>> then create an initial seed location (which is now always the same) to
>> pass
>> to ARPACK. However, even this hasn't fully stopped all failures. It is
>> possible that this is due to small machine-dependent round-off errors
>> which
>> render even the initial seed location slightly different between
>> machines.
>
> I agree with your assessment. However, there are six tests in svds.m and
> only two of them use the 'opts' input options. The tests' fail/pass
> shouldn't be dependent upon having a specific initial state for the code.
>
> There are some other tests here that look somewhat suspect as well. For
> example:
>
> %!testif HAVE_ARPACK
> %! s = svds (speye (10));
> %! assert (s, ones (6, 1), 2*eps);
>
> To expect a numerical routine to be accurate within two times the
> machine precision is asking for quite a bit. The tests which have 'opts'
> passed in are asking for accuracy within 1e-10, which seems more
> reasonable.
>
> I realize one would expect the result to be deterministic and not fail
> in one case and pass in the other, but who knows how ARPACK is set up?
>
> Melvin, is there a way you could make some simple mod to the svds.m
> tests to indicate exactly which test(s) is failing.

I think it is the test I noted above that is causing the problem.  Look
at these results:

octave:4> s = svds (speye (10))
s =

    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000

octave:5> s - 1
ans =

    8.8818e-16
    6.6613e-16
    4.4409e-16
    4.4409e-16
    2.2204e-16
    0.0000e+00

octave:6> eps
ans =  2.2204e-16
octave:7> assert (s, ones (6, 1), 2*eps)
error: assert (s,ones (6, 1),2 * eps) expected
    1
    1
    1
    1
    1
    1
but got
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
maximum absolute error 8.88178e-16 exceeds tolerance 4.44089e-16
error: called from:
error:
/usr/local/src/octave/octave-builtin_bin2dec/octave-build1/../octave/scripts/testfun/assert.m
at line 235, column 5
octave:7> s = svds (speye (10))
s =

    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000

octave:8> s - 1
ans =

    6.6613e-16
    4.4409e-16
    2.2204e-16
    0.0000e+00
    0.0000e+00
    0.0000e+00

octave:9>

Two things to note above.  One is that the result isn't accurate to
within 2*eps, so the test fails.  The second is that the results are
*not* deterministic.  Therefore, whether this test passes or fails is
random.  I suggest bumping the tolerance up to 10*eps or maybe
100*eps... or 1e-10 if you like, same as the rest.  It is simply trying
to test that the results are fairly accurate, right?
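For scale, the worst deviation above (8.88178e-16) is exactly 4*eps, so it fails the 2*eps bound but clears 10*eps easily.  In Python/NumPy terms (an illustration only, not part of the test suite):

```python
import numpy as np

eps = np.finfo(float).eps   # 2.220446049250313e-16 for IEEE double
err = 4 * eps               # the 8.88178e-16 deviation observed above

assert err > 2 * eps        # fails the current 2*eps tolerance
assert err <= 10 * eps      # would pass the proposed looser tolerance
assert err <= 1e-10         # and certainly passes 1e-10
```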

There is one test here that has no tolerance, expecting exact accuracy.

%!testif HAVE_ARPACK
%! [u2,s2,v2,flag] = svds (zeros (10), k);
%! assert (u2, eye (10, k));
%! assert (s2, zeros (k));
%! assert (v2, eye (10, 7));

octave:10> [u2,s2,v2,flag] = svds (zeros (10), k);
octave:12> u2
u2 =

Diagonal Matrix

    1   0   0   0   0   0   0
    0   1   0   0   0   0   0
    0   0   1   0   0   0   0
    0   0   0   1   0   0   0
    0   0   0   0   1   0   0
    0   0   0   0   0   1   0
    0   0   0   0   0   0   1
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0

octave:13> u2 - eye (10,k)
ans =

Diagonal Matrix

    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0

octave:14> s2
s2 =

Diagonal Matrix

    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0

octave:15> s2 - zeros (k)
ans =

    0  -0  -0  -0  -0  -0  -0
   -0   0  -0  -0  -0  -0  -0
   -0  -0   0  -0  -0  -0  -0
   -0  -0  -0   0  -0  -0  -0
   -0  -0  -0  -0   0  -0  -0
   -0  -0  -0  -0  -0   0  -0
   -0  -0  -0  -0  -0  -0   0

octave:16> v2
v2 =

Diagonal Matrix

    1   0   0   0   0   0   0
    0   1   0   0   0   0   0
    0   0   1   0   0   0   0
    0   0   0   1   0   0   0
    0   0   0   0   1   0   0
    0   0   0   0   0   1   0
    0   0   0   0   0   0   1
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0

octave:18> v2 - eye (10, 7)
ans =

Diagonal Matrix

    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0
    0   0   0   0   0   0   0

The reason this works out, I suspect, is that ARPACK's internal routine
does no iterations for this case, seeing as everything is zeros.  That, or
it does minimal computation, because for some odd reason the 's2' check
above shows all negative zeros except for positive zeros on the diagonal.
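The all-negative-zero pattern in s2 is harmless, by the way: IEEE 754 negative zero compares equal to positive zero, which is why an exact-equality check against zeros(k) can still pass.  A quick check (Python, purely illustrative):

```python
import math

assert -0.0 == 0.0                        # signed zeros compare equal
assert math.copysign(1.0, -0.0) == -1.0   # but the sign bit really is set
```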

Dan

Re: svds test failure

Daniel Sebald
On 07/31/2012 02:00 PM, Daniel J Sebald wrote:

> Two things to note above. One is that the result isn't accurate to
> within 2*eps and the test fails. The second is that the results are
> *not* deterministic. Therefore, it is random that this test could pass
> or fail. I suggest bumping the tolerance up to 10*eps or maybe
> 100*eps... or 1e-10 if you like, same as the rest. It is simply trying
> to test the results are fairly accurate, right?

And if that change is made, the code feeding in random numbers with
'opts' could probably be removed.  My guess is that test is passing
fine.  But, if you do want to test that 'opts' is in fact working, just
pick some numbers to pass in rather than use the random number generator.

Dan

Re: svds test failure

Rik-4
In reply to this post by Rik-4
On 07/31/2012 11:26 AM, Daniel J Sebald wrote:

> On 07/31/2012 12:47 PM, Rik wrote:
>> On 07/31/2012 07:02 AM, [hidden email] wrote:
>>>
>>> No, you're not.  The issue is that svds has a test that is
>>> nondeterministic and sometimes the error is higher than the tolerance.  On
>>> the machine that failed, try running "test svds" a few times and see
>>> how often it fails.
>>>
>>> I don't know if we can make this test deterministic because the
>>> randomness comes from ARPACK, not from Octave, so I don't know if we
>>> can seed it.  We could just increase the tolerance.
>> 7/31/12
>>
>> Melvin,
>>
>> This issue comes up again and again.  The issue is that ARPACK is an
>> iterative solution solver and the success or failure can depend on the
>> initial starting point relative to the solution.  In order to stop
>> variability in tests, we initialize Octave's random number generators and
>> then create an initial seed location (which is now always the same) to pass
>> to ARPACK.  However, even this hasn't fully stopped all failures.  It is
>> possible that this is due to small machine-dependent round-off errors which
>> render even the initial seed location slightly different between machines.
>
> I agree with your assessment.  However, there are six tests in svds.m
> and only two of them use the 'opts' input options.  The tests' fail/pass
> shouldn't be dependent upon having a specific initial state for the code.
>
> There are some other tests here that look somewhat suspect as well.  For
> example:
>
> %!testif HAVE_ARPACK
> %! s = svds (speye (10));
> %! assert (s, ones (6, 1), 2*eps);
>
> To expect a numerical routine to be accurate within two times the
> machine precision is asking for quite a bit.  The tests which have
> 'opts' passed in are asking for accuracy within 1e-10, which seems more
> reasonable.
The special cases of the identity matrix and a matrix of all zeros are just
that -- special cases.  ARPACK should be able in these instances to arrive
at a nearly exact solution.
>
> I realize one would expect the result to be deterministic and not fail
> in one case and pass in the other, but who knows how ARPACK is set up?
From what I can tell from the API, ARPACK chooses a random initial seed
ONLY if a seed is not provided to it.  Other than this, the algorithm is
deterministic.
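That matches a quick experiment.  ARPACK's Lanczos process is far more sophisticated, but even a toy power iteration (Python, purely illustrative; the helper below is my own, not ARPACK's API) shows how a fixed starting vector makes every run identical:

```python
import numpy as np

def largest_singular_value(A, v0, iters=100):
    """Toy power iteration on A'A, standing in for ARPACK's iterative process."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        w = A.T @ (A @ v)
        v = w / np.linalg.norm(w)
    return np.sqrt(v @ (A.T @ (A @ v)))

A = np.diag(np.arange(1.0, 11.0))    # singular values 1..10
v0 = np.ones(10)                     # fixed starting vector -> deterministic

s1 = largest_singular_value(A, v0)
s2 = largest_singular_value(A, v0)
assert s1 == s2                      # identical runs, bit for bit
assert abs(s1 - 10.0) < 1e-12        # converges to the true value 10
```

With a random v0 instead, each run would start from a different point and converge along a slightly different floating-point path, which is exactly the run-to-run variability being discussed.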
>
> Melvin, is there a way you could make some simple mod to the svds.m
> tests to indicate exactly which test(s) is failing.
The test which fails is recorded in the file test/fntests.log.  Melvin
could check this file and report back which test is showing an issue.  I
assumed it was the same ones we had trouble with before which depend on
ARPACK and UMFPACK.  If it is the test involving speye(), then I agree
that bumping the tolerance up just enough to allow it to pass is a fine
solution.

--Rik

Re: svds test failure

Daniel Sebald
On 07/31/2012 03:04 PM, Rik wrote:
> On 07/31/2012 11:26 AM, Daniel J Sebald wrote:
[snip]

>> %!testif HAVE_ARPACK
>> %! s = svds (speye (10));
>> %! assert (s, ones (6, 1), 2*eps);
>>
>> To expect a numerical routine to be accurate within two times the
>> machine precision is asking for quite a bit.  The tests which have
>> 'opts' passed in are asking for accuracy within 1e-10, which seems more
>> reasonable.
> The special case of the identity matrix and a matrix of all zeros are just
> that -- special cases.  ARPACK should be able in these instances to arrive
> at a nearly exact solution.

The identity matrix probably isn't a special case.  I mean,
theoretically the identity matrix is special, but algorithm-wise ARPACK
probably proceeds the same as for any other matrix until the stopping
criterion is met.

In the case of the zero matrix, I would guess that ARPACK proceeds by
trying to diagonalize some matrix using a rotation of sorts, but because
the sum of any of the terms (even along the diagonal) turns out to be
zero, the algorithm likely stops right away because of a potential
division by zero in the rotation algorithm.


>> I realize one would expect the result to be deterministic and not fail
>> in one case and pass in the other, but who knows how ARPACK is set up?
>
> From what I can tell from the API, ARPACK chooses a random initial seed
> ONLY if a seed is not provided to it.  Other than this, the algorithm is
> deterministic.

That would explain the result from my followup post on the identity
matrix randomness.

Dan

Re: svds test failure

eem2314
In reply to this post by Daniel Sebald


On Tue, Jul 31, 2012 at 12:31 PM, Daniel J Sebald <[hidden email]> wrote:
> On 07/31/2012 02:22 PM, Ed Meyer wrote:
>> As I pointed out in the "make check" thread, the problem is that we
>> should not be using absolute tolerances, because that does not account
>> for the size of the matrix elements.  What we should do is estimate
>> whether the result lies in the interval we would get with round-off
>> perturbations in the matrix.  If it does, that's as good as can be
>> expected.  Eispack used to do something like that with their
>> "performance index", but I think you can get better estimates.
>
> Give us an idea of what code you are suggesting, Ed.  Something like
>
> eps*size(s,1)*size(s,2)
>
> ?  Something concerning the rank of the input matrix?
>
> Dan
>
> PS: Did you mean to CC to the maintainers list?

oops - I did mean to CC
I'm attaching a page from a manual I wrote describing how I judge results.
It's more involved than the old eispack tests and maybe more work than we
want to do.  Much simpler would be to just use the norm of the matrix times
some factor times machine epsilon instead of the absolute tolerances.
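Ed's norm-scaled tolerance can be sketched in NumPy terms (illustration only; the helper name and the factor of 100 are my own choices, not anything from svds.m):

```python
import numpy as np

def relative_tol(A, factor=100.0):
    """Hypothetical helper: tolerance scaled by the Frobenius norm of A."""
    eps = np.finfo(float).eps
    return factor * eps * np.linalg.norm(A, "fro")

# For the 10x10 identity, norm(A,'fro') = sqrt(10), so the tolerance is
# about 7e-14 -- far looser than 2*eps but far tighter than 1e-10.
tol = relative_tol(np.eye(10))
```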

--
Ed Meyer


[Attachment: sens.pdf (156K)]

Re: svds test failure

Daniel Sebald
On 07/31/2012 03:23 PM, Ed Meyer wrote:

> On Tue, Jul 31, 2012 at 12:31 PM, Daniel J Sebald <[hidden email]> wrote:
>> On 07/31/2012 02:22 PM, Ed Meyer wrote:
>>> As I pointed out in the "make check" thread, the problem is that we
>>> should not be using absolute tolerances, because that does not account
>>> for the size of the matrix elements.  What we should do is estimate
>>> whether the result lies in the interval we would get with round-off
>>> perturbations in the matrix.  If it does, that's as good as can be
>>> expected.  Eispack used to do something like that with their
>>> "performance index", but I think you can get better estimates.
>>
>> Give us an idea of what code you are suggesting, Ed.  Something like
>>
>> eps*size(s,1)*size(s,2)
>>
>> ?  Something concerning the rank of the input matrix?
>>
>> Dan
>>
>> PS: Did you mean to CC to the maintainers list?
>
> oops - I did mean to CC
> I'm attaching a page from a manual I wrote describing how I judge
> results.  It's more involved than the old eispack tests and maybe more
> work than we want to do.  Much simpler would be to just use the norm of
> the matrix times some factor times machine epsilon instead of the
> absolute tolerances.

We're talking the linear case here, so what does equation (7) become in
that case?  A'(lambda) should be simply a constant matrix?  Does that
then turn out to be Frobenius norm or something?  So

tol = 100*eps*norm(A, 'fro');

and

%!testif HAVE_ARPACK
%! s = svds (speye (10));
%! assert (s, ones (6, 1), tol);

Something like that?

And what of the degenerate case of

%!testif HAVE_ARPACK
%! [u2,s2,v2,flag] = svds (zeros (10), k);
%! assert (u2, eye (10, k));
%! assert (s2, zeros (k));
%! assert (v2, eye (10, 7));

where the norm is zero?  Just leave as is, because we aren't really
testing the performance in this case so much as the library's ability to
handle the degenerate case?

Dan

Re: svds test failure

eem2314


On Tue, Jul 31, 2012 at 1:41 PM, Daniel J Sebald <[hidden email]> wrote:
> On 07/31/2012 03:23 PM, Ed Meyer wrote:
>> [snip]
>
> We're talking the linear case here, so what does equation (7) become in
> that case?  A'(lambda) should be simply a constant matrix?  Does that
> then turn out to be Frobenius norm or something?

for the linear case A(lambda) = B - lambda*I where B is the matrix whose
svd we want

> So
>
> tol = 100*eps*norm(A, 'fro');
>
> and
>
> %!testif HAVE_ARPACK
> %! s = svds (speye (10));
> %! assert (s, ones (6, 1), tol);
>
> Something like that?
>
> And what of the degenerate case of
>
> %!testif HAVE_ARPACK
> %! [u2,s2,v2,flag] = svds (zeros (10), k);
> %! assert (u2, eye (10, k));
> %! assert (s2, zeros (k));
> %! assert (v2, eye (10, 7));
>
> where the norm is zero?  Just leave as is, because we aren't really
> testing the performance in this case so much as the library's ability to
> handle the degenerate case?
>
> Dan

to handle the case of a matrix with zero (or small) elements we could use

  tol = 100*eps*min (1.0, norm(A,"fro"))

The results from ARPACK should not depend on the starting vector; if they
do, there is probably something wrong with the installation.  Which brings
up another point - computing an svd by solving a (larger) eigenvalue
problem is probably not the best way.  There are sparse svd solvers out
there that might be more efficient and robust.  I'll try a few.

As for test cases, I put together a case that creates randomly sized and
shaped matrices which are very ill-conditioned to test the algorithm; I'll
use it to compare svd solvers.  I was surprised that Arnoldi did not
converge for some test cases.
--
Ed Meyer


Re: svds test failure

Daniel Sebald
On 08/01/2012 11:54 AM, Ed Meyer wrote:

> On Tue, Jul 31, 2012 at 1:41 PM, Daniel J Sebald <[hidden email]> wrote:
>> [snip]
>
> to handle the case of a matrix with zero (or small) elements we could use
>
>    tol = 100*eps*min (1.0, norm(A,"fro"))

You meant "max", correct?


> The results from ARPACK should not depend on the starting vector; if
> they do there is probably
> something wrong with the installation.

I agree.  It is the "something wrong with the installation" part we are
trying to test here.  The accuracy of the ARPACK algorithm is the user's
concern and of course should be evaluated by the user when working with
an application.


>  Which brings up another point -
> computing an svd by solving
> a (larger) eigenvalue problem is probably not the best way.  There are
> sparse svd solvers out
> there that might be more efficient and robust.  I'll try a few.
>
> As for test cases I put together a case to create randomly sized and
> shaped matrices which
> are very ill-conditioned to test the algorithm;  I'll use it to compare
> svd solvers.  I was surprised
> that Arnoldi did not converge for some test cases.

That's interesting.  I looked up the Arnoldi algorithm.  It is meant as a
way to deal with unstable Krylov/power iteration, so perhaps it isn't
surprising that it has issues as well.  A couple of researchers proposed
implicitly restarting the Arnoldi algorithm (IRAM) if it runs too long.
Supposedly this technique is in ARPACK.

Dan

Re: svds test failure

eem2314


On Wed, Aug 1, 2012 at 10:37 AM, Daniel J Sebald <[hidden email]> wrote:

[snip]

>> to handle the case of a matrix with zero (or small) elements we could use
>>
>>    tol = 100*eps*min (1.0, norm(A,"fro"))
>
> You meant "max", correct?

yes, that should be "max"
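With that fix, the guarded tolerance never collapses to zero for a zero (or tiny) matrix.  In NumPy terms (illustrative only; the helper name and the factor of 100 are my own, not anything proposed verbatim in the thread):

```python
import numpy as np

def guarded_tol(A, factor=100.0):
    """Hypothetical helper: norm-scaled tolerance with the max(1, .) floor."""
    eps = np.finfo(float).eps
    return factor * eps * max(1.0, np.linalg.norm(A, "fro"))

# The floor kicks in for the all-zeros matrix, giving a usable 100*eps,
# while larger matrices still get a tolerance scaled by their norm.
assert guarded_tol(np.zeros((10, 10))) == 100 * np.finfo(float).eps
assert guarded_tol(np.eye(10)) > guarded_tol(np.zeros((10, 10)))
```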

--
Ed Meyer