Inference: Dimension Reduction
Dimension reduction methods search for low-dimensional structure in high-dimensional data. Although unsupervised methods such as principal component analysis (PCA) have been used for over 80 years, new issues in robustness, very high dimensionality (p >> n), nonlinear structure, and so on continue to provide rich statistical and computational problems. Supervised methods such as sliced inverse regression (SIR), which look for low-dimensional associations between predictors and a response, have been developed within the past 35 years, and many useful extensions continue to appear. Performing valid inference that correctly accounts for the impact of dimension reduction, including variable selection, also poses problems that have not been thoroughly explored.
This working group will work on extending unsupervised and supervised methods to interesting new areas such as machine learning, functional data, discrete data, and complex data structures such as networks. We will also work on inferential issues.
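As a concrete point of reference for the supervised side, the following is a minimal sketch of sliced inverse regression. The function name, slicing choices, and toy data are illustrative assumptions, not the group's code:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Minimal sketch of sliced inverse regression (Li, 1991).

    Whitens the predictors, slices the sample by the sorted response,
    and eigen-decomposes the weighted covariance of the slice means.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whitening transform Sigma^{-1/2} from the eigendecomposition of cov(X)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice observations by sorted y; accumulate slice-mean outer products
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original predictor scale
    w, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

# Toy check: when y depends on X only through X[:, 0], the estimated
# direction should concentrate on the first coordinate.
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 4))
y = X[:, 0] + 0.1 * rng.standard_normal(400)
d = sir_directions(X, y).ravel()
```

The number of slices is a tuning choice; SIR is known to be fairly insensitive to it for continuous responses.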
On NMF clustering and K-means clustering by Kim & Park (2008)
Bagging to improve the accuracy of a clustering procedure
Selecting k in an NMF clustering scenario: Brunet et al. (2004), "Metagenes and molecular pattern discovery using matrix factorization"
"When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts?" by Donoho
A unified view of matrix factorization methods
Comparing SVD, PCA, Non-negative matrix factorization, and Diffusion maps
Order determination for dimension reduction using an alternating pattern of spectral variability
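On the rank-selection topic above, Brunet et al. (2004) choose k by the stability of NMF-based clusterings across random restarts, summarized by the cophenetic correlation of a consensus matrix. A minimal sketch, assuming scikit-learn and SciPy and using the dominant-factor ("metagene") assignment as the cluster label:

```python
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage
from scipy.spatial.distance import squareform
from sklearn.decomposition import NMF

def cophenetic_score(X, k, n_runs=10):
    """Consensus-clustering stability of NMF at rank k (after Brunet et al. 2004)."""
    n = X.shape[0]
    consensus = np.zeros((n, n))
    for seed in range(n_runs):
        W = NMF(n_components=k, init="random", random_state=seed,
                max_iter=500).fit_transform(X)
        labels = W.argmax(axis=1)  # cluster = dominant NMF factor
        consensus += labels[:, None] == labels[None, :]
    consensus /= n_runs          # fraction of runs co-clustering each pair
    # Cophenetic correlation between consensus distances and their dendrogram
    dist = squareform(1.0 - consensus, checks=False)
    coph, _ = cophenet(linkage(dist, method="average"), dist)
    return coph

# Hypothetical usage on toy nonnegative data
rng = np.random.default_rng(0)
X = rng.random((20, 6))
c = cophenetic_score(X, 2, n_runs=5)
```

Brunet et al. take the largest k before the cophenetic coefficient drops sharply; a score near 1 indicates that the clustering barely changes across restarts.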
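To make the SVD/PCA/NMF comparison in the list concrete (diffusion maps are omitted here), a minimal sketch using scikit-learn on hypothetical toy data: PCA is computed from the SVD of the centered data and yields orthogonal directions with signed loadings, while NMF constrains both factors to be nonnegative, giving a parts-based decomposition:

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
X = rng.random((30, 8))                      # toy nonnegative data

# PCA: orthogonal directions from the SVD of the centered data matrix
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)

# The same directions obtained from the SVD directly
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)

# NMF: X ~ W @ H with W, H >= 0, so components act as additive "parts"
nmf = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_

# PCA loadings agree with the right singular vectors up to sign
assert np.allclose(np.abs(pca.components_), np.abs(Vt[:3]))
# NMF factors are nonnegative, unlike PCA scores and loadings
assert (W >= 0).all() and (H >= 0).all()
```

The sign ambiguity in the assertion reflects that singular vectors are defined only up to sign; NMF has no such ambiguity but is not unique in general, which is the question the Donoho paper in the list addresses.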