Computational Complexity of PCA

  • Step 2: Compute the covariance matrix Cv = Xc′ ∗ Xc: The covariance matrix is computed by multiplying the transpose of the mean-centred matrix Xc by Xc itself (up to a 1/(n−1) scaling factor). This is a computationally intensive step because it requires multiplying two matrices which, for large datasets, typically do not fit in memory.
  • Step 3: Compute the eigenvalue decomposition of Cv: Eigenvalues λi and eigenvectors vi satisfy the relation Cv ∗ vi = λi ∗ vi, where λi is the i-th eigenvalue and vi is its corresponding eigenvector. Putting this relation in matrix form: Cv ∗ V = V ∗ Λ, where Λ is a diagonal matrix whose elements are the eigenvalues of the covariance matrix Cv, and V is the matrix whose columns are the corresponding eigenvectors. The eigenvalue decomposition of Cv is therefore Cv = V ∗ Λ ∗ V⁻¹.
  • Step 4: Get the principal components: The principal components of the input matrix X are the eigenvectors associated with the largest eigenvalues. Since the eigenvalues in the diagonal matrix Λ are sorted in decreasing order, the first d columns of V are the principal components of X: V = (v1, v2, …, vd). A minimal code sketch of steps 2–4 follows this list.
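
To make the steps above concrete, here is a minimal NumPy sketch of steps 2–4 (covariance matrix, eigendecomposition, projection onto the leading eigenvectors). The function name and parameters (pca, n_components) are illustrative rather than taken from the original article, and the 1/(n−1) scaling is one common convention.

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA via the covariance matrix and its eigendecomposition."""
    # Mean-centre the data matrix (n samples x p features).
    Xc = X - X.mean(axis=0)

    # Step 2: covariance matrix Cv = Xc' * Xc / (n - 1)  -> O(n * p^2).
    Cv = (Xc.T @ Xc) / (Xc.shape[0] - 1)

    # Step 3: eigendecomposition of Cv -> O(p^3).
    # Cv is symmetric, so eigh returns real eigenvalues (in ascending order).
    eigvals, eigvecs = np.linalg.eigh(Cv)

    # Step 4: sort eigenvalues in decreasing order and keep the first d eigenvectors.
    order = np.argsort(eigvals)[::-1]
    V = eigvecs[:, order[:n_components]]

    # Project the centred data onto the principal components.
    return Xc @ V

# Illustrative usage with random data.
X = np.random.rand(1000, 50)
Z = pca(X, n_components=5)
print(Z.shape)  # (1000, 5)
```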

Time Complexity:

The algorithm has two computationally intensive steps:

  1. Computing the covariance matrix Cv = Xc′ ∗ Xc: for an n×p mean-centred data matrix, this product takes on the order of n·p² operations.
  2. Computing the eigenvalue decomposition of the covariance matrix: for the p×p matrix Cv, this takes on the order of p³ operations.

Overall, this gives a time complexity of roughly O(n·p² + p³). Since, when n < p, the same decomposition can instead be carried out on the n×n Gram matrix Xc ∗ Xc′, the complexity of PCA is often quoted as O(min(p³, n³)) [1].
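As a rough illustration of how these two steps scale, the sketch below times them separately with NumPy. The sizes n and p are arbitrary choices for demonstration, and the absolute timings will depend on the BLAS/LAPACK backend.

```python
import time
import numpy as np

n, p = 5000, 1000                       # sample and feature counts (illustrative only)
X = np.random.rand(n, p)
Xc = X - X.mean(axis=0)

t0 = time.perf_counter()
Cv = (Xc.T @ Xc) / (n - 1)              # covariance matrix: O(n * p^2)
t1 = time.perf_counter()
eigvals, eigvecs = np.linalg.eigh(Cv)   # eigendecomposition: O(p^3)
t2 = time.perf_counter()

print(f"covariance matrix:  {t1 - t0:.3f} s")
print(f"eigendecomposition: {t2 - t1:.3f} s")
```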
Reference:

  1. https://stackoverflow.com/questions/20507646/how-is-the-complexity-of-pca-ominp3-n3
