Bagnato, L., Punzo, A. (2021). Unconstrained representation of orthogonal matrices with application to common principal components. Computational Statistics, 36(2), 1177-1195. [doi:10.1007/s00180-020-01041-8] [http://hdl.handle.net/10807/162209]
Unconstrained representation of orthogonal matrices with application to common principal components
Bagnato, Luca;
2021
Abstract
Many statistical problems involve the estimation of a (d×d) orthogonal matrix Q. Such an estimation is often challenging due to the orthonormality constraints on Q. To cope with this problem, we use the well-known PLU decomposition, which factorizes any invertible (d×d) matrix as the product of a (d×d) permutation matrix P, a (d×d) unit lower triangular matrix L, and a (d×d) upper triangular matrix U. Using the QR decomposition, we derive the form of U when the PLU decomposition is applied to Q. We call the result the PLR decomposition; it produces a one-to-one correspondence between Q and the d(d − 1)/2 entries below the diagonal of L, which are advantageously unconstrained real values. Thus, once the decomposition is applied, regardless of the objective function under consideration, we can use any classical unconstrained optimization method to find the minimum (or maximum) of the objective function with respect to L. For illustrative purposes, we apply the PLR decomposition in common principal components analysis (CPCA) for the maximum likelihood estimation of the common orthogonal matrix when a multivariate leptokurtic-normal distribution is assumed in each group. Compared to the commonly used normal distribution, the leptokurtic-normal has an additional parameter governing the excess kurtosis; this makes the estimation of Q in CPCA more robust against mild outliers. The usefulness of the PLR decomposition in leptokurtic-normal CPCA is illustrated by two biometric data analyses.
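As a purely illustrative sketch of the idea described in the abstract (not the authors' implementation), the Python code below maps d(d − 1)/2 unconstrained reals to an orthogonal matrix by filling the below-diagonal entries of a unit lower triangular L and taking the QR decomposition of L, and recovers those free parameters from a given orthogonal Q via the PLU decomposition. The function names, the NumPy/SciPy routines chosen, and the sign convention are assumptions made for this example only.

import numpy as np
from scipy.linalg import lu, qr

def params_to_orthogonal(theta, d, P=None):
    # Unit lower triangular L whose d(d-1)/2 below-diagonal entries
    # are the unconstrained parameters in theta.
    L = np.eye(d)
    L[np.tril_indices(d, k=-1)] = theta
    # QR decomposition of L: L = Q_L R_L, so Q_L = L R_L^{-1} is orthogonal.
    Q_L, R_L = qr(L)
    # Assumed sign convention: force a positive diagonal in R_L so the
    # map from theta to Q_L is single-valued.
    Q_L = Q_L * np.sign(np.diag(R_L))
    return Q_L if P is None else P @ Q_L

def orthogonal_to_params(Q):
    # PLU decomposition Q = P L U; the free parameters are the entries
    # of L below the diagonal.
    P, L, U = lu(Q)
    d = Q.shape[0]
    return P, L[np.tril_indices(d, k=-1)]

# Round trip on a random orthogonal matrix: Q is recovered up to column
# signs, which the paper's PLR formulation pins down explicitly.
rng = np.random.default_rng(0)
Q, _ = qr(rng.standard_normal((4, 4)))
P, theta = orthogonal_to_params(Q)
Q_back = params_to_orthogonal(theta, 4, P=P)

Once Q is parameterized by theta in this way, any standard unconstrained optimizer can be applied to an objective such as the CPCA likelihood by optimizing directly over theta.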