[R] how to invert the matrix with quite small eigenvalues

(Ted Harding) Ted.Harding at nessie.mcc.ac.uk
Mon May 30 12:41:33 CEST 2005


On 30-May-05 huang min wrote:
> Maybe I should state more clearly that I define b to get the
> orthogonal matrix bb$vectors.

OK. Certainly bbv <- bb$vectors is close to orthogonal: t(bbv)%*%bbv
differs from the unit matrix only in that the off-diagonal
terms are O(10^(-16)).
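
For readers without the earlier messages, a minimal sketch of that
check, with a made-up symmetric matrix standing in for the b defined
earlier in the thread:

  ## Stand-in for 'b' (any symmetric matrix will do for this check):
  set.seed(1)
  A <- matrix(rnorm(25), 5, 5)
  b <- A + t(A)                       # symmetric 5x5 matrix
  bb  <- eigen(b, symmetric = TRUE)
  bbv <- bb$vectors
  ## For a symmetric matrix the eigenvectors are orthonormal, so this
  ## should be zero up to rounding error (of the order of 1e-16):
  max(abs(t(bbv) %*% bbv - diag(5)))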

> We can also define diag(b) <- diag(b) + 100, which will make the
> eigenvalues of b much bigger, to make sure that the orthogonal
> matrix is reliable.
> 
> My intention is to invert the covariance matrix to perform an
> algorithm which is common in estimating equations like GEE.

Which comes back to my general question: why do you need to ensure
that you get correct results for matrices like these? This matrix
is very nearly singular, and in many contexts you would interpret
it as exactly singular. For instance, if I obtained such a matrix
as the empirical covariance matrix of a sample, or by computation
from the structure of a model (such as a design matrix), I would
not normally want to preserve this very small margin of
non-singularity: I would replace it with the lower-dimensional
singular version, e.g. by decomposing it with eigen() or svd() and
reconstructing it after setting the very small eigenvalues to 0.
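
To illustrate, here is a minimal sketch of that replacement; the
matrix aa is a made-up nearly-singular stand-in, and the cutoff tol
is an assumption you would have to choose for your own application:

  ## Construct a nearly singular symmetric matrix as a stand-in:
  set.seed(1)
  X  <- matrix(rnorm(20), 5, 4)           # at most rank 4
  aa <- tcrossprod(X) + 1e-14 * diag(5)   # 5x5, almost singular
  e   <- eigen(aa, symmetric = TRUE)
  tol <- 1e-8 * max(e$values)             # assumed cutoff; adjust to taste
  d   <- ifelse(e$values > tol, e$values, 0)  # zero out tiny eigenvalues
  aa0 <- e$vectors %*% diag(d) %*% t(e$vectors)
  ## aa0 is now exactly singular (rank 4) but essentially equal to aa:
  max(abs(aa - aa0))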

But then of course there would be no inverse, which might be
required by the further computational expressions you want to
evaluate. However, in that case (depending on your application),
you may be able to proceed satisfactorily by using ginv() (see
package MASS) instead of solve(). But if you take that route,
then you have to bear in mind that a generalised inverse is
not a true inverse (if only because the latter does not exist):
the only property required for G=ginv(M) is that

  M %*% G %*% M = M

If such a matrix G will work for the rest of your calculations,
then you are OK, in particular if what you need is a possible
solution to an under-determined system of linear equations.
If not, then you should seriously consider whether your
computational strategy is suitable for the problem you have
in hand.
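
For concreteness, a minimal sketch with a made-up rank-2 matrix M
(so no true inverse exists):

  library(MASS)
  M <- matrix(c(1, 2, 3,
                2, 4, 6,
                1, 1, 1), 3, 3, byrow = TRUE)  # rank 2: no true inverse
  G <- ginv(M)                    # Moore-Penrose generalised inverse
  max(abs(M %*% G %*% M - M))     # ~0: the defining property above holds
  ## For a consistent system M x = y, G %*% y is one possible solution
  ## (here y is the first column of M, so the system is consistent):
  y <- c(1, 2, 1)
  x <- G %*% y
  max(abs(M %*% x - y))           # ~0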

> [...]
> Another strange point is that my friend used LU decomposition
> in Fortran to compute the inverse of aa for me. He used double
> precision, of course; otherwise there was no inverse matrix in
> Fortran either. He showed that the product of the two matrices
> is almost the identity (the biggest off-diagonal element is
> about 1e-8). I copied his inverse matrix (with 31 digits!) into
> R and read in the aa I sent him (16 digits). The product is
> again not an identity matrix. It is fairly strange! Any
> suggestion?

These phenomena are yet another instance of the fact that at the
margins of computability the results will be anomalous. Your
friend is simply working with a narrower margin, but his result
is still not exact! Not strange at all.
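
As a minimal sketch of the effect, take the classic ill-conditioned
Hilbert matrix as a stand-in for aa:

  ## The 10x10 Hilbert matrix is notoriously ill-conditioned:
  hilbert <- function(n) outer(1:n, 1:n, function(i, j) 1 / (i + j - 1))
  aa <- hilbert(10)
  ai <- solve(aa)                 # succeeds, but with limited accuracy
  ## The residual is far above machine epsilon, and it would differ
  ## again for an inverse computed elsewhere and copied over at
  ## finite precision:
  max(abs(aa %*% ai - diag(10)))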

Best wishes,
Ted.


--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding at nessie.mcc.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 30-May-05                                       Time: 11:41:28
------------------------------ XFMail ------------------------------



