CUR matrix approximation

A CUR matrix approximation is a set of three matrices that, when multiplied together, closely approximate a given matrix.[1][2][3] A CUR approximation can be used in the same way as the low-rank approximation of the singular value decomposition (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than left and right singular vectors):

  • There are methods to calculate it with lower asymptotic time complexity than the SVD.
  • The matrices are more interpretable; the meanings of rows and columns in the decomposed matrix are essentially the same as their meanings in the original matrix.

Formally, a CUR matrix approximation of a matrix A consists of three matrices C, U, and R such that C is made from columns of A, R is made from rows of A, and the product CUR closely approximates A. Usually the CUR is selected to be a rank-k approximation, which means that C contains k columns of A, R contains k rows of A, and U is a k-by-k matrix. There are many possible CUR matrix approximations, even for a given rank.
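As an illustration, the following NumPy sketch builds a CUR approximation from given column and row indices, with U chosen as C⁺AR⁺ (the choice that minimizes the Frobenius-norm error once C and R are fixed). The function name, the random test matrix, and the uniformly random index selection are illustrative only; the methods in the cited papers select columns and rows by more careful sampling (e.g. using leverage scores).

import numpy as np

def cur_approximation(A, col_idx, row_idx):
    """Build a CUR approximation of A from chosen column and row indices.

    Minimal sketch: the index selection is taken as given here, whereas the
    cited methods choose columns and rows by importance sampling.
    """
    C = A[:, col_idx]                       # k columns of A
    R = A[row_idx, :]                       # k rows of A
    # U chosen to minimize ||A - C U R||_F for this fixed C and R
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# Example: rank-3 CUR of a random 50-by-40 matrix with random index choices
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)) @ rng.standard_normal((10, 40))
cols = rng.choice(A.shape[1], size=3, replace=False)
rows = rng.choice(A.shape[0], size=3, replace=False)
C, U, R = cur_approximation(A, cols, rows)
print("approximation error:", np.linalg.norm(A - C @ U @ R))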

The CUR matrix approximation is often used in place of the low-rank approximation of the SVD in principal component analysis (PCA). The CUR is less accurate, but the columns of C and the rows of R are taken directly from A. In PCA, each column of A contains a data sample; thus, the matrix C is made of a subset of data samples. This is much easier to interpret than the SVD's left singular vectors, which represent the data in a rotated space. Similarly, the matrix R is made of a subset of the variables measured for each data sample. This is easier to comprehend than the SVD's right singular vectors, which are another rotation of the data.
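To make the interpretability point concrete, the lines below continue the sketch above (reusing A, cols, and rows, with hypothetical sample and variable names) and report which original samples and variables the factors C and R were built from; the SVD's singular vectors carry no such direct labels.

# Continuing the sketch above: with columns as samples and rows as variables,
# the selected indices name actual samples and variables of the data set,
# rather than rotated combinations of them.
samples = [f"sample_{j}" for j in range(A.shape[1])]
variables = [f"var_{i}" for i in range(A.shape[0])]
print("C is built from samples:  ", [samples[j] for j in cols])
print("R is built from variables:", [variables[i] for i in rows])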

  1. ^ Mahoney, Michael W.; Drineas, Petros. "CUR matrix decompositions for improved data analysis". Retrieved 26 June 2012.
  2. ^ Boutsidis, Christos; Woodruff, David P. (2014). "Optimal CUR matrix decompositions". STOC '14: Proceedings of the forty-sixth annual ACM Symposium on Theory of Computing.
  3. ^ Song, Zhao; Woodruff, David P.; Zhong, Peilin (2017). "Low Rank Approximation with Entrywise L1-Norm Error". STOC '17: Proceedings of the forty-ninth annual ACM Symposium on Theory of Computing. arXiv:1611.00898.
