Hi!

Is there a function in Octave or Octave-Forge that gives the density of a
*multivariate* normal distribution?  normal_pdf only gives a
one-dimensional density.  I know it would be trivial to derive it from
there, but nevertheless.

Regards,
Gorazd

--
Gorazd Brumen
Mail: [hidden email]
WWW: http://valjhun.fmf.uni-lj.si/~brumen
PGP: Key at http://pgp.mit.edu, ID BCC93240

-------------------------------------------------------------
Octave is freely available under the terms of the GNU GPL.

Octave's home on the web:  http://www.octave.org
How to fund new projects:  http://www.octave.org/funding.html
Subscription information:  http://www.octave.org/archive.html
-------------------------------------------------------------
The math looks pretty easy to implement:
http://www.itl.nist.gov/div898/handbook/pmc/section5/pmc542.htm

Using Cholesky factorization on the positive definite covariance matrix:

   r = chol(sigma);

where r'*r = sigma.  Being upper triangular, the determinant of r is
trivially the product of the diagonal, and the determinant of sigma is
the square of this:

   det = prod(diag(r))^2;

The formula asks for the square root of the determinant, so no need to
square it.

The exponential argument is

   y = x' inv(sigma) x
     = x' inv(r' * r) x
     = x' inv(r) inv(r') x

I don't know for sure that inv(r') == inv(r)' for r upper triangular.
Numerically it is not exact in octave:

   octave:34> x = triu(rand(10)); norm(inv(x') - inv(x)')
   ans = 3.3466e-14

Assuming that it is, then

   y = (x' / r) * (x'/r)' = sumsq(x'/r)

The interface takes the parameters to the mvn in columns rather than
rows, so we are actually dealing with the transpose:

   y = sumsq(x/r)

and the final result is:

   r = chol(sigma);
   pdf = (2*pi)^(-p/2) * exp(-0.5 * sum(((x-mu)/r).^2,2)) / prod(diag(r));

I've added mvnpdf to octave-forge (see attached).

Note: Google found a cdf here:

http://alex.strashny.org/a/Multivariate-normal-cumulative-distribution-function-(cdf)-in-MATLAB.html

- Paul

On Nov 5, 2005, at 8:18 AM, Gorazd Brumen wrote:

> Hi!
>
> Is there a function in octave or octave forge that gives the density of
> a *multivariate* normal distribution. normal_pdf only gives a
> one-dimensional.
> I know it is a joke to do get it from there, but nevertheless.
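For readers who want to see the whole recipe end to end, here is a hedged sketch of it in Python (pure stdlib, so it runs without Octave).  The function and helper names are my own, not Paul's mvnpdf.m: it Cholesky-factors sigma, solves the triangular system instead of forming an inverse, and uses prod(diag(r)) for sqrt(det(sigma)), exactly as derived above.

```python
import math

def cholesky(a):
    """Upper-triangular r with r'*r = a, for symmetric positive definite a
    (same convention as Octave's chol)."""
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            if i == j:
                if s <= 0.0:
                    raise ValueError("matrix is not positive definite")
                r[i][i] = math.sqrt(s)
            else:
                r[i][j] = s / r[i][i]
    return r

def mvnpdf(x, mu, sigma):
    """Density of N(mu, sigma) at the point x, via the Cholesky factor."""
    p = len(x)
    r = cholesky(sigma)
    # Solve r' z = (x - mu) by forward substitution, so z = inv(r') (x - mu)
    d = [xi - mi for xi, mi in zip(x, mu)]
    z = [0.0] * p
    for i in range(p):
        z[i] = (d[i] - sum(r[k][i] * z[k] for k in range(i))) / r[i][i]
    y = sum(zi * zi for zi in z)      # exponential argument (x-mu)' inv(sigma) (x-mu)
    sqrt_det = 1.0
    for i in range(p):
        sqrt_det *= r[i][i]           # sqrt(det(sigma)) = prod(diag(r))
    return (2.0 * math.pi) ** (-p / 2) * math.exp(-0.5 * y) / sqrt_det
```

Note that z'z = d' inv(r) inv(r') d = d' inv(sigma) d, so no explicit inverse is ever formed, which is the point of the Cholesky route.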
On Sat, 5 Nov 2005, Paul Kienzle wrote:
> The math looks pretty easy to implement:
>
> http://www.itl.nist.gov/div898/handbook/pmc/section5/pmc542.htm
>
> Using Cholesky factorization on the positive definite covariance matrix:

why not this?:

   density = mvnorm_pdf(x, mu, Sigma)

   # first check array sizes, but I'm skipping that

   p = length(x);

   density = (2*pi)^(-p/2) * (1/sqrt(det(Sigma))) * exp(-.5*(x-mu)'*inv(Sigma)*(x-mu));

Of course, the density goes to infinity when Sigma is singular.  Is your
use of chol() just meant to check that the matrix is PD?

Mike
On Nov 5, 2005, at 5:10 PM, Mike Miller wrote:

> On Sat, 5 Nov 2005, Paul Kienzle wrote:
>
>> The math looks pretty easy to implement:
>>
>> http://www.itl.nist.gov/div898/handbook/pmc/section5/pmc542.htm
>>
>> Using Cholesky factorization on the positive definite covariance
>> matrix:
>
> why not this?:
>
>    density = mvnorm_pdf(x, mu, Sigma)
>
>    # first check array sizes, but I'm skipping that
>
>    p = length(x);
>
>    density = (2*pi)^(-p/2) * (1/sqrt(det(Sigma))) * exp(-.5*(x-mu)'*inv(Sigma)*(x-mu));
>
> Of course, the density goes to infinity when Sigma is singular.  Is
> your use of chol() just meant to check that the matrix is PD?

The wikipedia entry on Cholesky decomposition
(http://en.wikipedia.org/wiki/Cholesky_decomposition) suggests it is
faster and more stable than the LU decomposition which would be used to
compute the inverse.  The speed doesn't matter in this case, but
accuracy is always a concern.  The side effect of checking positive
definiteness of sigma is a bonus.

Some numerical tests with, e.g.,

   n=11; x = prolate(n); cn=cond(x), d=norm(x*inv(x)-eye(n)),
   r=chol(x); c=norm(x*inv(r)*inv(r)'-eye(n)),

show that chol() is indeed a little better than inv() for
ill-conditioned positive definite matrices.  The function prolate() is
from Higham's test matrix toolbox.

Similarly for hilb(), though norm(inv(x) - invhilb(n)) and
norm(inv(r)*inv(r)' - invhilb(n)) are both pretty bad.

- Paul
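The same comparison can be reproduced without Higham's toolbox by using a Hilbert matrix, which is also ill-conditioned, symmetric, and positive definite.  The sketch below is my own pure-Python illustration (all helper names are mine): it computes inv(A) both the direct way (Gauss-Jordan elimination, standing in for inv()) and via the Cholesky factor as inv(r)*inv(r)', then compares the residuals A*inv(A) - I.

```python
import math

def hilb(n):
    # Hilbert matrix: classic ill-conditioned SPD test case
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def maxdiff(a, b):
    return max(abs(a[i][j] - b[i][j])
               for i in range(len(a)) for j in range(len(a[0])))

def cholesky(a):
    # Upper-triangular r with r'*r = a
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            r[i][j] = math.sqrt(s) if i == j else s / r[i][i]
    return r

def inv_upper(r):
    # Invert an upper-triangular matrix by back substitution
    n = len(r)
    x = [[0.0] * n for _ in range(n)]
    for j in range(n):
        x[j][j] = 1.0 / r[j][j]
        for i in range(j - 1, -1, -1):
            x[i][j] = -sum(r[i][k] * x[k][j]
                           for k in range(i + 1, j + 1)) / r[i][i]
    return x

def inv_gauss(a):
    # Plain Gauss-Jordan inverse with partial pivoting (the inv() route)
    n = len(a)
    aug = [a[i][:] + eye(n)[i] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda row: abs(aug[row][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        d = aug[c][c]
        aug[c] = [v / d for v in aug[c]]
        for row in range(n):
            if row != c:
                f = aug[row][c]
                aug[row] = [v - f * w for v, w in zip(aug[row], aug[c])]
    return [row[n:] for row in aug]

n = 5
a = hilb(n)
ri = inv_upper(cholesky(a))
inv_via_chol = matmul(ri, transpose(ri))   # inv(A) = inv(r) * inv(r)'
inv_via_lu = inv_gauss(a)
res_chol = maxdiff(matmul(a, inv_via_chol), eye(n))
res_lu = maxdiff(matmul(a, inv_via_lu), eye(n))
```

For hilb(5) the exact inverse is known (invhilb(5) in Octave, whose (1,1) entry is 25), so both routes can be checked against it; for larger n both residuals degrade with the condition number, matching Paul's observation.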
On Sat, 5 Nov 2005, Paul Kienzle wrote:
>> Of course, the density goes to infinity when Sigma is singular.  Is
>> your use of chol() just meant to check that the matrix is PD?
>
> The wikipedia entry on Cholesky decomposition suggests it is faster
> and more stable than the LU decomposition which would be used to
> compute the inverse.  The speed doesn't matter in this case, but
> accuracy is always a concern.  The side effect of checking positive
> definiteness of sigma is a bonus.
>
> Some numerical tests with, e.g.,
>
>    n=11; x = prolate(n); cn=cond(x), d=norm(x*inv(x)-eye(n)),
>    r=chol(x); c=norm(x*inv(r)*inv(r)'-eye(n)),
>
> show that chol() is indeed a little better than inv() for
> ill-conditioned positive definite matrices.  The function prolate() is
> from Higham's test matrix toolbox.
>
> Similarly for hilb(), though norm(inv(x) - invhilb(n)) and
> norm(inv(r)*inv(r)' - invhilb(n)) are both pretty bad.

Interesting.  Thanks for sharing this information.

Mike
> I don't know for sure that inv(r') == inv(r)' for r upper triangular.
Analytically it is, but numerically it need not be, as Paul rightly
showed.  Now, while on this multivariate normal issue, how about
generating multivariate normal random variables, given the mean and the
sigma matrix?  Any available/easily-writable code?

Kapat
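The standard recipe is short enough to sketch here (my own names below, not the mvnrnd.m linked later in the thread): factor sigma = r'*r with r = chol(sigma), draw z ~ N(0, I), and return x = mu + r'*z, which then has covariance r' * I * r = sigma.

```python
import math
import random

def cholesky(a):
    """Upper-triangular r with r'*r = a, for symmetric positive definite a."""
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            r[i][j] = math.sqrt(s) if i == j else s / r[i][i]
    return r

def mvnrnd(mu, sigma, n, rng=random):
    """Draw n samples from N(mu, sigma) as x = mu + r'*z, z ~ N(0, I)."""
    p = len(mu)
    r = cholesky(sigma)
    samples = []
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in range(p)]
        # (r' z)[i] = sum_{k<=i} r[k][i] * z[k], since r is upper triangular
        samples.append([mu[i] + sum(r[k][i] * z[k] for k in range(i + 1))
                        for i in range(p)])
    return samples
```

This only works when sigma is strictly positive definite; the later discussion in the thread is about what to do when chol() fails because sigma is singular.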
On Sat, 5 Nov 2005, Prasenjit Kapat wrote:
>> I don't know for sure that inv(r') == inv(r)' for r upper triangular.
>
> Analytically it is, but numerically need not be, as Paul rightly
> showed.  Now while on this multivariate normal issue, how about
> generating multivariate normal random variables, given the mean and
> the sigma matrix?  Any available/easily-writable code?

I found this recently:

http://www.gatsby.ucl.ac.uk/~iam23/code/mvnrnd.m

It is under the GPL:

http://www.gatsby.ucl.ac.uk/~iam23/code/

Mike
That's great!  Thanks.
> I found this recently:
>
> http://www.gatsby.ucl.ac.uk/~iam23/code/mvnrnd.m
>
> It is under the GPL:
>
> http://www.gatsby.ucl.ac.uk/~iam23/code/
>
> Mike
Thanks. I've included mvnrnd in octave-forge.
Looking at the code, it uses chol(), and if that doesn't work, it uses
eig().

The R manual for the MASS package has this to say about mvrnorm:

   The matrix decomposition is done via eigen; although a Choleski
   decomposition might be faster, the eigendecomposition is stabler.

http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/mvrnorm.html

Using the same test I did earlier on ill-conditioned positive definite
matrices, eig doesn't seem to be a more accurate way to compute matrix
inverses.  Anyone care to comment on this?

- Paul

On Nov 5, 2005, at 8:43 PM, Mike Miller wrote:

> On Sat, 5 Nov 2005, Prasenjit Kapat wrote:
>
>>> I don't know for sure that inv(r') == inv(r)' for r upper triangular.
>>
>> Analytically it is, but numerically need not be, as Paul rightly
>> showed. Now while on this multivariate normal issue, how about
>> generating multivariate normal random variables, given the mean and
>> the sigma matrix ? any available/easily-writable code ?
>
> I found this recently:
>
> http://www.gatsby.ucl.ac.uk/~iam23/code/mvnrnd.m
>
> It is under the GPL:
>
> http://www.gatsby.ucl.ac.uk/~iam23/code/
>
> Mike
On Sat, 5 Nov 2005, Paul Kienzle wrote:
> Thanks. I've included mvnrnd in octave-forge.
>
> Looking at the code it uses chol(), and if that doesn't work, it uses
> eig().
>
> The R manual for the MASS package has this to say about mvrnorm:
>
>    The matrix decomposition is done via eigen; although a Choleski
>    decomposition might be faster, the eigendecomposition is stabler.
>
> http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/mvrnorm.html
>
> Using the same test I did earlier on ill-conditioned positive definite
> matrices, eig doesn't seem to be a more accurate way to compute matrix
> inverses.  Anyone care to comment on this?

I'll just say that I like a system where chol() is tried, and if it
fails, eig() is tried.  Usually chol() will work and it will be fast,
but if a matrix is singular or negative definite, chol() will fail.
When chol() fails, eig() should show a zero eigenvalue, but it might be
estimated as a negative eigenvalue.  You will then see complex
eigenvectors and you will produce complex mv-normal random numbers,
which is not what you want.  Therefore, when the zero eigenvalue is
estimated as negative, you have to take the real part of the square
root of the eigenvalues and of the eigenvector matrix and use only
those real parts.  But when the negative eigenvalue is truly a negative
number, and not just a poorly estimated zero, the function must fail,
but where should we draw the line?

Here is a function my student, Gregg Lind, wrote a few days ago:

   function U=eigensplit(M)
     try
       U = chol(M);
     catch
       [E,Lam] = eig(M);
       U = sqrt(Lam)*E';
       U = real(U);
     end

I think more checking would be wise.  Any opinions on how best to write
such a function?  What would be a good name for it?

I think the kinship package in R (also GPL) has a generalized cholesky
for singular matrices, but it might only work when there is a
correlation of 1.0 off the diagonal.

Thanks again, Paul, for your great work.
Mike
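One way the "more checking" could look, sketched in pure Python under my own naming (this is not Gregg Lind's Octave code): try Cholesky first, and on failure fall back to a symmetric eigendecomposition (a cyclic Jacobi iteration here, adequate for small matrices), clip round-off negative eigenvalues to zero, and fail outright when an eigenvalue is negative beyond a relative tolerance.

```python
import math

def cholesky(a):
    # Upper-triangular r with r'*r = a; raises if a is not positive definite
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            if i == j:
                if s <= 0.0:
                    raise ValueError("not positive definite")
                r[i][i] = math.sqrt(s)
            else:
                r[i][j] = s / r[i][i]
    return r

def jacobi_eig(a, tol=1e-12, max_sweeps=50):
    # Symmetric eigendecomposition a = v * diag(lam) * v' by Jacobi rotations
    n = len(a)
    a = [row[:] for row in a]
    v = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_sweeps):
        off = math.sqrt(sum(a[i][j] ** 2
                            for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-30:
                    continue
                th = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):   # rows p, q of R' * A
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k], a[q][k] = c * apk - s * aqk, s * apk + c * aqk
                for k in range(n):   # columns p, q of A * R
                    akp, akq = a[k][p], a[k][q]
                    a[k][p], a[k][q] = c * akp - s * akq, s * akp + c * akq
                for k in range(n):   # accumulate eigenvectors V = V * R
                    vkp, vkq = v[k][p], v[k][q]
                    v[k][p], v[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    return [a[i][i] for i in range(n)], v

def eigensplit(m, tol=1e-10):
    # Return U with U'*U = m; fail if m is genuinely indefinite
    try:
        return cholesky(m)
    except ValueError:
        lam, v = jacobi_eig(m)
        if min(lam) < -tol * max(max(lam), 1.0):
            raise ValueError("matrix is not positive semi-definite")
        n = len(m)
        # U = sqrt(Lam) * V', with round-off negatives clipped to zero
        # (the analogue of taking real(U) in the Octave version)
        return [[math.sqrt(max(lam[i], 0.0)) * v[k][i] for k in range(n)]
                for i in range(n)]
```

Clipping to zero before taking the square root avoids the complex intermediates entirely, so there is no real() step; the relative tolerance is one possible answer to "where should we draw the line?".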
Hello Paul,

>> I don't know for sure that inv(r') == inv(r)' for r upper triangular.
>> Numerically it is not the case in octave:
>>
>> octave:34> x = triu(rand(10)); norm(inv(x') - inv(x)')
>> ans = 3.3466e-14
>>
>> Assuming that it is, then

I think I have proven that this is the case (at least for upper
triangular matrices), but perhaps it actually holds for all invertible
matrices, not only upper triangular ones.  If you want a proof, I need
a couple of minutes to LaTeX it and I can send it to you, but maybe
this is such an obvious fact that it is in every algebra book (maybe
somebody knows).

Thanks a lot for the function.

Gorazd

--
Gorazd Brumen
Mail: [hidden email]
WWW: http://valjhun.fmf.uni-lj.si/~brumen
PGP: Key at http://pgp.mit.edu, ID BCC93240
On Sun, 6 Nov 2005, Gorazd Brumen wrote:
> Hello Paul,
>
>>> I don't know for sure that inv(r') == inv(r)' for r upper triangular.
>>> Numerically it is not the case in octave:
>>>
>>> octave:34> x = triu(rand(10)); norm(inv(x') - inv(x)')
>>> ans = 3.3466e-14
>>>
>>> Assuming that it is, then
>
> I think I have proven that this is the case (at least for upper
> triangular matrices).
> But perhaps it is the case actually for all invertible matrices, not
> only for upper triangular ones.

When inv(A) exists,

   inv(A') = inv(A)'

because the inverse equals the transpose of the adjoint divided by the
determinant.  The adjoint of the transpose equals the transpose of the
adjoint, and the determinant is the same for A and A'.  The identity
really follows from the fact that the determinant is not affected by
the transpose operation -- the adjoint is a collection of determinants.

I agree that this must be in most matrix algebra books.

Mike
On Sun, 6 Nov 2005, Mike Miller wrote:
> When inv(A) exists, > > inv(A') = inv(A)' > > because the inverse equals the transpose of the adjoint divided by the > determinant. The adjoint of the transpose equals the transpose of the > adjoint and the determinant is the same for A and A'. The identity really > follows from the fact that the determinant is not affected by the > transpose operation -- the adjoint is a collection of determinants. > > I agree that this must be in most matrix algebra books. Yes, that is true in theory. But the error is to assume that all mathematical identities remain intact when done on a finite precision computer. Try it: A= rand(2); inv(A')- inv(A)' ans = 7.1054e-15 -7.1054e-15 8.8818e-16 0.0000e+00 -- Andy Adler <[hidden email]> 1(613)562-5800x6218 ------------------------------------------------------------- Octave is freely available under the terms of the GNU GPL. Octave's home on the web: http://www.octave.org How to fund new projects: http://www.octave.org/funding.html Subscription information: http://www.octave.org/archive.html ------------------------------------------------------------- |
On Sun, 6 Nov 2005, Andy Adler wrote:
> On Sun, 6 Nov 2005, Mike Miller wrote:
>
>> When inv(A) exists,
>>
>>    inv(A') = inv(A)'
>>
>> because the inverse equals the transpose of the adjoint divided by
>> the determinant.
>
> Yes, that is true in theory.  But the error is to assume that all
> mathematical identities remain intact when done on a finite precision
> computer.
>
> Try it:
>
>    A = rand(2); inv(A') - inv(A)'
>    ans =
>
>       7.1054e-15  -7.1054e-15
>       8.8818e-16   0.0000e+00

That wasn't an error that anyone here was making.  The original
question showed an example where there was a small difference.

Related to this, we also were discussing recently the problem with
eigen decomposition of singular matrices.  For example:

   octave:6> [E,L] = eig(ones(4))
   E =

     -3.2026e-01   7.0711e-01   3.8397e-01   5.0000e-01
     -3.2026e-01  -7.0711e-01   3.8397e-01   5.0000e-01
     -2.2276e-01  -5.9234e-17  -8.3689e-01   5.0000e-01
      8.6328e-01  -4.4132e-19   6.8941e-02   5.0000e-01

   L =

     -0.00000   0.00000   0.00000   0.00000
      0.00000   0.00000   0.00000   0.00000
      0.00000   0.00000   0.00000   0.00000
      0.00000   0.00000   0.00000   4.00000

   octave:7> A = E*sqrt(L)
   A =

      0.00000 - 0.00000i   0.00000 + 0.00000i   0.00000 + 0.00000i   1.00000 + 0.00000i
      0.00000 - 0.00000i  -0.00000 + 0.00000i   0.00000 + 0.00000i   1.00000 + 0.00000i
      0.00000 - 0.00000i  -0.00000 + 0.00000i  -0.00000 + 0.00000i   1.00000 + 0.00000i
      0.00000 + 0.00000i  -0.00000 + 0.00000i   0.00000 + 0.00000i   1.00000 + 0.00000i

   octave:8> A*A'
   ans =

      1.0000   1.0000   1.0000   1.0000
      1.0000   1.0000   1.0000   1.0000
      1.0000   1.0000   1.0000   1.0000
      1.0000   1.0000   1.0000   1.0000

The problem here is that numerical imprecision causes A to be complex
when we want it to be real.  If the matrix really is singular (as in
the example above), we can use real() to get a good answer:

   octave:9> B = real(A)
   B =

      0.00000   0.00000   0.00000   1.00000
      0.00000  -0.00000   0.00000   1.00000
      0.00000  -0.00000  -0.00000   1.00000
      0.00000  -0.00000   0.00000   1.00000

   octave:10> B*B'
   ans =

      1   1   1   1
      1   1   1   1
      1   1   1   1
      1   1   1   1

But for a function that always works, we'll have to set some tolerance
limit and have the function fail when the smallest eigenvalue [min(L)
above] is less than some value.  What is a good choice of a value?
Maybe -1.0e-13?  Anyone?  How far off can we be when the precise min(L)
is zero?

Mike
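One answer to the tolerance question is to make the cutoff relative rather than fixed: a backward-stable eigensolver perturbs eigenvalues by roughly n * eps * max(eig), so a zero eigenvalue of a matrix with a large largest eigenvalue can come back far more negative than -1.0e-13.  Here is a small sketch of that rule (the function name and the exact scaling are my own suggestion, not anything from the thread's code):

```python
import sys

def clip_psd_eigs(eigs, n):
    """Given the computed eigenvalues of an n-by-n symmetric matrix, clip
    round-off negatives to zero; fail if an eigenvalue is negative beyond
    a relative tolerance of n * eps * max(eig) -- a scale-aware alternative
    to a fixed cutoff like -1.0e-13."""
    eps = sys.float_info.epsilon
    lam_max = max(eigs)
    tol = n * eps * max(lam_max, 1.0)
    if min(eigs) < -tol:
        raise ValueError("matrix has a genuinely negative eigenvalue")
    return [max(l, 0.0) for l in eigs]
```

With this rule the eig(ones(4)) example above passes (its -0.00000 entries are within n * eps * 4 of zero), while an honestly indefinite matrix is rejected.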
octave:27> [E,L] = eig(ones(4))
E =

  -1.7253e-01   4.6929e-01   7.0711e-01   5.0000e-01
  -1.7253e-01   4.6929e-01  -7.0711e-01   5.0000e-01
  -4.9115e-01  -7.1328e-01  -6.8934e-17   5.0000e-01
   8.3620e-01  -2.2530e-01   9.7203e-18   5.0000e-01

L =

  -0.00000   0.00000   0.00000   0.00000
   0.00000  -0.00000   0.00000   0.00000
   0.00000   0.00000  -0.00000   0.00000
   0.00000   0.00000   0.00000   4.00000

GNU Octave, version 2.1.71 (powerpc-apple-darwin8.1.0).
Why do I not get the same results?

octave:30> det(A)
ans = 0
octave:31> rank(A)
ans = 1

Only look at the non-zero eigenvalue and corresponding eigenvector?

Henry

on 11/6/05 5:16 PM, Mike Miller at [hidden email] wrote:

> On Sun, 6 Nov 2005, Andy Adler wrote:
>
> (snip)
>
> That wasn't an error that anyone here was making.  The original
> question showed an example where there was a small difference.
>
> Related to this, we also were discussing recently the problem with
> eigen decomposition of singular matrices.
> (snip)
Paul Kienzle wrote:
(snip)

> Note: Google found a cdf here:
>
> http://alex.strashny.org/a/Multivariate-normal-cumulative-
> distribution-function-(cdf)-in-MATLAB.html
>
> - Paul

On Monday morning, this was a broken link.  In general, evaluating the
mvn cdf for a p-vector will require Monte Carlo integration or similar
techniques when p is at all large.
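A crude Monte Carlo estimator makes that point concrete (the function names below are my own; this is only an illustration of the idea, not the Genz algorithm, which is far more efficient): sample X ~ N(mu, sigma) via the Cholesky factor and count how often every coordinate falls below its upper limit.

```python
import math
import random

def cholesky(a):
    # Upper-triangular r with r'*r = a
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            r[i][j] = math.sqrt(s) if i == j else s / r[i][i]
    return r

def mvncdf_mc(b, mu, sigma, n=40000, rng=random):
    """Monte Carlo estimate of P(X_1 <= b_1, ..., X_p <= b_p), X ~ N(mu, sigma)."""
    p = len(mu)
    r = cholesky(sigma)
    hits = 0
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in range(p)]
        # x = mu + r' z, computed coordinate by coordinate
        if all(mu[i] + sum(r[k][i] * z[k] for k in range(i + 1)) <= b[i]
               for i in range(p)):
            hits += 1
    return hits / n
```

With n samples the standard error shrinks only as 1/sqrt(n), which is why Genz-style quadrature is preferred once more than a few digits are needed.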
On Mon, 7 Nov 2005, Michael Creel wrote:
> Paul Kienzle wrote:
>
> (snip)
>
>> Note: Google found a cdf here:
>>
>> http://alex.strashny.org/a/Multivariate-normal-cumulative-distribution-function-(cdf)-in-MATLAB.html
>
> On Monday morning, this was a broken link. In general, evaluating the
> mvn cdf for a p-vector will require monte carlo integration or similar
> techniques when p is at all large.

I think the problem might be that there is a space in that URL when
there shouldn't be a space.  Try this:

http://alex.strashny.org/a/Multivariate-normal-cumulative-distribution-function-(cdf)-in-MATLAB.html

That page says the following:

   In statistics, the multivariate normal (mvn) is a popular
   distribution.  Unfortunately, its cumulative distribution function
   (cdf) does not have a closed form.  There are, however, a number of
   algorithms that numerically estimate the value of the cdf.  Here is
   an implementation of one such algorithm in MATLAB [URL below].  For
   the algorithm itself, see

   Genz, A. (1992) Numerical Computation of Multivariate Normal
   Probabilities, J. Comp. Graph. Stat.

The linked URL is this one:

http://alex.strashny.org/b/mvncdf.m

It is under the GPL.  There is also a link to the Genz paper in
postscript format:

http://alex.strashny.org/local/Genz-1992-multivariate-normal-probabilities.ps

Mike
On Sun, 6 Nov 2005, Henry F. Mollet wrote:
> octave:27> [E,L] = eig(ones(4))
> E =
>
>   -1.7253e-01   4.6929e-01   7.0711e-01   5.0000e-01
>   -1.7253e-01   4.6929e-01  -7.0711e-01   5.0000e-01
>   -4.9115e-01  -7.1328e-01  -6.8934e-17   5.0000e-01
>    8.3620e-01  -2.2530e-01   9.7203e-18   5.0000e-01
>
> L =
>
>   -0.00000   0.00000   0.00000   0.00000
>    0.00000  -0.00000   0.00000   0.00000
>    0.00000   0.00000  -0.00000   0.00000
>    0.00000   0.00000   0.00000   4.00000
>
> GNU Octave, version 2.1.71 (powerpc-apple-darwin8.1.0).
> Why do I not get the same results?

(Same results as below.)  Well, I don't really know, but we are both
using Octave 2.1.71, though mine was compiled on Red Hat Linux with a
different BLAS.  The ordering of the first three columns is arbitrary.
We have two columns that are the same to within machine precision.  The
other two columns are more mysterious.  I'll have to guess that with
zero eigenvalues, the eigenvectors are fairly arbitrary, so both
solutions are correct but they've used slightly different algorithms.

> Only look at non-zero eigenvalue and corresponding eigenvector?

The problem is in identifying "non-zero" eigenvalues.  They usually are
not *exactly* zero.

Mike

>> octave:6> [E,L] = eig(ones(4))
>> E =
>>
>>   -3.2026e-01   7.0711e-01   3.8397e-01   5.0000e-01
>>   -3.2026e-01  -7.0711e-01   3.8397e-01   5.0000e-01
>>   -2.2276e-01  -5.9234e-17  -8.3689e-01   5.0000e-01
>>    8.6328e-01  -4.4132e-19   6.8941e-02   5.0000e-01
>>
>> L =
>>
>>   -0.00000   0.00000   0.00000   0.00000
>>    0.00000   0.00000   0.00000   0.00000
>>    0.00000   0.00000   0.00000   0.00000
>>    0.00000   0.00000   0.00000   4.00000
On Nov 7, 2005, at 8:51 AM, Mike Miller wrote:

> I think the problem might be that there is a space in that URL when
> there shouldn't be a space.  Try this:
>
> http://alex.strashny.org/a/Multivariate-normal-cumulative-distribution-function-(cdf)-in-MATLAB.html
>
> (snip)
>
> The linked URL is this one:
>
> http://alex.strashny.org/b/mvncdf.m
>
> It is under the GPL.

The manual for NIST dataplot has some examples:

http://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/mvncdf.htm

and some references:

   "Comparison of Methods for the Computation of Multivariate Normal
   Probabilities", Alan Genz, Computing Science and Statistics, 25,
   1993, pp. 400-405.

   "Numerical Computation of Multivariate Normal Probabilities", Alan
   Genz, Journal of Computational and Graphical Statistics, 1, 1992,
   pp. 141-149.

Plugging the parameters and limits into the mvncdf code above, I did
not get the same values.  I didn't have a chance to figure out where I
went wrong.  Here's the test I used, if anybody wants to correct it:

%!test
%! sigma = [ 1.0      -0.707107 0.0 0.0 0.0;
%!          -0.707107  1.0      0.5 0.5 0.5;
%!           0.0       0.5      1.0 0.5 0.5;
%!           0.0       0.5      0.5 1.0 0.5;
%!           0.0       0.5      0.5 0.5 1.0 ];
%! lo = [0.0 0.0 1.7817 1.4755 -inf];
%! hi = [inf 1.5198 inf inf 1.5949];
%! accuracy = 0.00005;
%! Alo = mvncdf(lo,0,sigma,accuracy);
%! Ahi = mvncdf(hi,0,sigma,accuracy);
%! assert(Ahi-Alo, 0.2865, 0.0005);