Sparse PCA
Sparse principal component analysis is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis for the reduction of dimensionality of data by introducing sparsity structures to the input variables.
A particular disadvantage of ordinary PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables.
Contemporary datasets often have the number of input variables ($p$) comparable with or even much larger than the number of samples ($n$). It has been shown that if $p/n$ does not converge to zero, the classical PCA is not consistent. But sparse PCA can retain consistency even if $p \gg n$.
Mathematical formulation
Consider a data matrix, $X$, where each of the $p$ columns represents an input variable, and each of the $n$ rows represents an independent sample from the data population. One assumes each column of $X$ has mean zero; otherwise one can subtract the column-wise mean from each element of $X$. Let $\Sigma = \frac{1}{n-1} X^\top X$ be the empirical covariance matrix of $X$, which has dimension $p \times p$. Given an integer $k$ with $1 \le k \le p$, the sparse PCA problem can be formulated as maximizing the variance along a direction represented by vector $v \in \mathbb{R}^p$ while constraining its cardinality:

$$\max_v \; v^\top \Sigma v \quad \text{subject to} \quad \|v\|_2 = 1, \quad \|v\|_0 \le k. \qquad \text{(Eq. 1)}$$
The first constraint specifies that $v$ is a unit vector. In the second constraint, $\|v\|_0$ represents the $\ell_0$ norm of $v$, which is defined as the number of its non-zero components. So the second constraint specifies that the number of non-zero components in $v$ is less than or equal to $k$, which is typically an integer that is much smaller than the dimension $p$. The optimal value of Eq. 1 is known as the $k$-sparse largest eigenvalue.
If one takes $k = p$, the problem reduces to ordinary PCA, and the optimal value becomes the largest eigenvalue of the covariance matrix $\Sigma$.
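For tiny $p$, Eq. 1 can be solved exactly by enumerating supports: for each candidate support of size $k$, the best unit vector is the leading eigenvector of the corresponding $k \times k$ principal submatrix of $\Sigma$. A minimal sketch in Python (our illustrative code; the function name `k_sparse_largest_eigenvalue` is ours, not standard terminology):

```python
from itertools import combinations
import numpy as np

def k_sparse_largest_eigenvalue(Sigma, k):
    """Exact k-sparse largest eigenvalue by exhaustive support search.

    Only feasible for small p: there are C(p, k) supports to try,
    reflecting the NP-hardness of the general problem.
    """
    p = Sigma.shape[0]
    best_val, best_v = -np.inf, None
    for support in combinations(range(p), k):
        idx = np.array(support)
        sub = Sigma[np.ix_(idx, idx)]   # k x k principal submatrix
        w, U = np.linalg.eigh(sub)      # eigenvalues in ascending order
        if w[-1] > best_val:
            best_val = w[-1]
            best_v = np.zeros(p)
            best_v[idx] = U[:, -1]      # embed eigenvector into R^p
    return best_val, best_v

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
X -= X.mean(axis=0)                     # column-wise centering
Sigma = X.T @ X / (X.shape[0] - 1)

val, v = k_sparse_largest_eigenvalue(Sigma, k=2)
# With k = p the result matches the ordinary largest eigenvalue:
full_val, _ = k_sparse_largest_eigenvalue(Sigma, k=6)
assert np.isclose(full_val, np.linalg.eigvalsh(Sigma)[-1])
```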
After finding the optimal solution $v$, one deflates $\Sigma$ to obtain a new matrix

$$\Sigma_1 = \Sigma - (v^\top \Sigma v) \, v v^\top,$$

and iterates this process to obtain further principal components. However, unlike PCA, sparse PCA cannot guarantee that different principal components are orthogonal. In order to achieve orthogonality, additional constraints must be enforced.
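A minimal sketch of this deflation loop (our illustrative code, reusing the hypothetical `k_sparse_largest_eigenvalue` helper from the sketch above):

```python
def deflate(Sigma, v):
    """Hotelling-style deflation: remove the variance explained by v."""
    return Sigma - (v @ Sigma @ v) * np.outer(v, v)

val1, v1 = k_sparse_largest_eigenvalue(Sigma, k=2)
Sigma1 = deflate(Sigma, v1)
val2, v2 = k_sparse_largest_eigenvalue(Sigma1, k=2)
# Note: v1 and v2 are not guaranteed to be orthogonal.
print(float(v1 @ v2))
```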
The following equivalent definition is in matrix form.
Let $V = v v^\top$ be a $p \times p$ symmetric matrix; one can rewrite the sparse PCA problem as

$$\begin{aligned} \max_V \quad & \operatorname{Tr}(\Sigma V) \\ \text{subject to} \quad & \operatorname{Tr}(V) = 1 \\ & \|V\|_0 \le k^2 \\ & \operatorname{Rank}(V) = 1, \; V \succeq 0. \end{aligned} \qquad \text{(Eq. 2)}$$
$\operatorname{Tr}$ is the matrix trace, and $\|V\|_0$ represents the number of non-zero elements in matrix $V$.
The last line specifies that V has matrix rank one and is positive semidefinite.
Together these conditions mean that one has $V = v v^\top$, so Eq. 2 is equivalent to Eq. 1.
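Concretely, for $V = v v^\top$ one has $\operatorname{Tr}(\Sigma V) = v^\top \Sigma v$ and $\operatorname{Tr}(V) = \|v\|_2^2$, which is what makes the two formulations line up. A quick numerical check (our illustrative numbers):

```python
import numpy as np

# Check the equivalence Tr(Sigma V) = v^T Sigma v for V = v v^T.
v = np.array([0.6, 0.8, 0.0])           # a unit vector, so Tr(V) = 1
Sigma3 = np.array([[2.0, 0.3, 0.0],
                   [0.3, 1.0, 0.0],
                   [0.0, 0.0, 0.5]])
V3 = np.outer(v, v)
assert np.isclose(np.trace(Sigma3 @ V3), v @ Sigma3 @ v)
assert np.isclose(np.trace(V3), 1.0)
```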
Moreover, the rank constraint in this formulation is actually redundant, and therefore sparse PCA can be cast as the following mixed-integer semidefinite program:

$$\begin{aligned} \max_{V,\, z} \quad & \operatorname{Tr}(\Sigma V) \\ \text{subject to} \quad & \operatorname{Tr}(V) = 1, \; V \succeq 0 \\ & |V_{i,j}| \le z_i \quad \forall\, i, j \in \{1, \dots, p\} \\ & \textstyle\sum_{i=1}^p z_i \le k, \quad z \in \{0,1\}^p. \end{aligned}$$
Because of the cardinality constraint, the maximization problem is hard to solve exactly, especially when the dimension $p$ is high. In fact, the sparse PCA problem in Eq. 1 is NP-hard in the strong sense.
Algorithms for Sparse PCA
Several alternative approaches have been proposed, including:
- a regression framework,
- a convex relaxation/semidefinite programming framework,
- a generalized power method framework,
- an alternating maximization framework,
- forward-backward greedy search and exact methods using branch-and-bound techniques,
- a certifiably optimal branch-and-bound approach,
- a Bayesian formulation framework,
- a certifiably optimal mixed-integer semidefinite branch-and-cut approach.
Regression approach via lasso (elastic net)
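In this framework, each principal component is recast as the solution of a regression problem, and an elastic-net (combined $\ell_1$ and $\ell_2$) penalty on the loadings drives many of them exactly to zero. A minimal sketch using scikit-learn, which is listed in the software section below; note that scikit-learn's `SparsePCA` solves a related $\ell_1$-penalized matrix-factorization formulation rather than the exact elastic-net procedure:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X -= X.mean(axis=0)                 # center the data column-wise

# alpha controls the l1 penalty: larger alpha -> sparser loadings
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
spca.fit(X)
print(spca.components_)             # rows are sparse loading vectors
```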
Semidefinite Programming Relaxation
It has been proposed that sparse PCA can be approximated by semidefinite programming. If one drops the rank constraint and relaxes the cardinality constraint by a 1-norm convex constraint, one gets a semidefinite programming relaxation, which can be solved efficiently in polynomial time:

$$\max_V \; \operatorname{Tr}(\Sigma V) \quad \text{subject to} \quad \operatorname{Tr}(V) = 1, \quad \mathbf{1}^\top |V| \mathbf{1} \le k, \quad V \succeq 0.$$

In the second constraint, $\mathbf{1}$ is a $p \times 1$ vector of ones, and $|V|$ is the matrix whose elements are the absolute values of the elements of $V$.
The optimal solution to the relaxed problem is not guaranteed to have rank one. In that case, $V$ can be truncated to retain only the dominant eigenvector.
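A minimal sketch of this relaxation using CVXPY (our illustrative code; the problem sizes and the choice of the SCS solver are assumptions, not part of the original formulation):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X -= X.mean(axis=0)
Sigma = X.T @ X / (X.shape[0] - 1)
p, k = Sigma.shape[0], 3

V = cp.Variable((p, p), symmetric=True)
constraints = [
    V >> 0,                      # positive semidefinite
    cp.trace(V) == 1,
    cp.sum(cp.abs(V)) <= k,      # 1^T |V| 1 <= k relaxes the cardinality constraint
]
prob = cp.Problem(cp.Maximize(cp.trace(Sigma @ V)), constraints)
prob.solve(solver=cp.SCS)

# Truncate to rank one: keep the dominant eigenvector of the solution.
eigvals, eigvecs = np.linalg.eigh(V.value)
v = eigvecs[:, -1]
```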
While the semidefinite program does not scale beyond roughly 300 covariates, it has been shown that a second-order cone relaxation of the semidefinite relaxation is almost as tight and successfully solves problems with thousands of covariates.
Applications
Financial Data Analysis
Suppose ordinary PCA is applied to a dataset where each input variable represents a different asset; it may generate principal components that are weighted combinations of all the assets. In contrast, sparse PCA would produce principal components that are weighted combinations of only a few input assets, so one can easily interpret their meaning. Furthermore, if one uses a trading strategy based on these principal components, fewer assets imply lower transaction costs.

Biology
Consider a dataset where each input variable corresponds to a specific gene. Sparse PCA can produce a principal component that involves only a few genes, so researchers can focus on these specific genes for further analysis.

High-dimensional Hypothesis Testing
Contemporary datasets often have the number of input variables ($p$) comparable with or even much larger than the number of samples ($n$). It has been shown that if $p/n$ does not converge to zero, the classical PCA is not consistent. In other words, if we let $k = p$ in Eq. 1, then the optimal value does not converge to the largest eigenvalue of the data population when the sample size $n \to \infty$, and the optimal solution does not converge to the direction of maximum variance. But sparse PCA can retain consistency even if $p \gg n$.
The $k$-sparse largest eigenvalue can be used to discriminate an isometric model, where every direction has the same variance, from a spiked covariance model in the high-dimensional setting. Consider a hypothesis test where the null hypothesis specifies that data $x$ are generated from a multivariate normal distribution with mean $0$ and covariance equal to the identity matrix, and the alternative hypothesis specifies that data $x$ are generated from a spiked model with signal strength $\theta > 0$:

$$H_0: x \sim N(0, I_p), \qquad H_1: x \sim N(0, I_p + \theta v v^\top),$$

where $v \in \mathbb{R}^p$ has only $k$ non-zero coordinates. The largest $k$-sparse eigenvalue can discriminate the two hypotheses if and only if $\theta \ge \Theta\!\left(\sqrt{\tfrac{k \log p}{n}}\right)$.
Since computing the $k$-sparse eigenvalue is NP-hard, one can approximate it by the optimal value of the semidefinite programming relaxation. In that case, we can discriminate the two hypotheses if $\theta \ge \Theta\!\left(\sqrt{\tfrac{k^2 \log p}{n}}\right)$. The additional $\sqrt{k}$ factor cannot be improved by any other polynomial-time algorithm if the planted clique conjecture holds.
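As an illustration, the following sketch (our code, reusing the hypothetical `k_sparse_largest_eigenvalue` helper from the formulation section) simulates data under both hypotheses and compares the resulting test statistics; the sizes and signal strength are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, theta = 200, 8, 2, 2.0

# H0: isotropic Gaussian data.
X0 = rng.standard_normal((n, p))

# H1: spiked covariance with a k-sparse spike direction v.
v = np.zeros(p)
v[:k] = 1 / np.sqrt(k)
cov = np.eye(p) + theta * np.outer(v, v)
X1 = rng.multivariate_normal(np.zeros(p), cov, size=n)

for X in (X0, X1):
    S = X.T @ X / n
    stat, _ = k_sparse_largest_eigenvalue(S, k)
    print(stat)   # noticeably larger under H1 when theta is large enough
```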
Software/source code
- elasticnet – R package for Sparse Estimation and Sparse PCA using Elastic-Nets
- nsprcomp – R package for sparse and/or non-negative PCA based on thresholded power iterations
- Scikit-learn – Python library for machine learning which contains Sparse PCA and other techniques in the decomposition module.