// Derive the dataset, method and metric IDs that actually occur in the
// benchmark results, so that only valid options are offered for filtering.
const result_dataset_ids = new Set(results.map(r => r.dataset_id));
const poss_dataset_ids = dataset_info
  .map(d => d.dataset_id)
  .filter(id => result_dataset_ids.has(id));

const result_method_ids = new Set(results.map(r => r.method_id));
const poss_method_ids = method_info
  .map(d => d.method_id)
  .filter(id => result_method_ids.has(id));

const result_metric_ids = new Set(results.flatMap(r => Object.keys(r.scaled_scores)));
const poss_metric_ids = metric_info
  .map(d => d.metric_id)
  .filter(id => result_metric_ids.has(id));
Dimensionality reduction for visualisation
Reduction of high-dimensional datasets to 2D for visualization & interpretation
4 datasets · 23 methods · 3 control methods · 10 metrics
Dimensionality reduction is one of the key challenges in single-cell data representation. Routine single-cell RNA sequencing (scRNA-seq) experiments measure cells in roughly 20,000-30,000 dimensions (i.e., features, mostly gene transcripts but also other functional RNA species such as lncRNAs). Since its inception, scRNA-seq has grown steadily in the number of cells measured per experiment. Early cutting-edge Smart-seq experiments yielded a few hundred cells at best; now it is not uncommon to see experiments that yield over 100,000 or even more than 1 million cells.
Each feature in a dataset functions as a single dimension. While each of the ~30,000 dimensions measured in each cell contributes to an underlying data structure, the overall structure of the data is challenging to display in few dimensions due to data sparsity and the “curse of dimensionality”: distances in high-dimensional data do not distinguish data points well. Thus, we need to reduce the dimensionality of the data for visualization and interpretation.
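To make the distance-concentration problem concrete, the short sketch below (illustrative only, not part of the benchmark code) draws random points in increasingly many dimensions and shows that the relative contrast between the nearest and farthest pairwise distances shrinks towards zero:

```python
# Illustrative sketch of the "curse of dimensionality": as the number of
# features grows, pairwise Euclidean distances concentrate and lose their
# ability to discriminate between points.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for n_dims in [2, 10, 100, 1000, 30000]:
    X = rng.standard_normal((100, n_dims))    # 100 random "cells"
    d = pdist(X)                              # all pairwise distances
    contrast = (d.max() - d.min()) / d.min()  # relative contrast -> 0
    print(f"{n_dims:>6} dims: (max - min) / min = {contrast:.3f}")
```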
Summary
Results
Results table of the scores per method, dataset and metric (after scaling). The “Overall mean” dataset is the mean value across all datasets.
Dataset info
Mouse hematopoietic stem cell differentiation
1.6k hematopoietic stem and progenitor cells from mouse bone marrow. Sequenced by Smart-seq2. 1920 cells x 43258 features with 3 cell type labels (Nestorowa et al. 2016).
Mouse myeloid lineage differentiation
Myeloid lineage differentiation from mouse blood. Sequenced by SMARTseq in 2016 by Olsson et al. 660 cells x 112815 features with 4 cell type labels (Olsson et al. 2016).
5k Peripheral blood mononuclear cells
5k Peripheral Blood Mononuclear Cells (PBMCs) from a healthy donor. Sequenced on 10X v3 chemistry in July 2019 by 10X Genomics. 5247 cells x 20822 features with no cell type labels (10x Genomics 2019).
Zebrafish
90k cells from zebrafish embryos throughout the first day of development, with and without a knockout of chordin, an important developmental gene. Dimensions: 26022 cells, 25258 genes. 24 cell types (avg. 1084±1156 cells per cell type) (Wagner et al. 2018).
Method info
densMAP (logCP10k)
densMAP is a modification of UMAP that adds an extra cost term in order to preserve information about the relative local density of the data. It is performed on the same inputs as UMAP (Narayan, Berger, and Cho 2021)
densMAP (logCP10k, 1kHVG)
densMAP is a modification of UMAP that adds an extra cost term in order to preserve information about the relative local density of the data. It is performed on the same inputs as UMAP (Narayan, Berger, and Cho 2021)
densMAP PCA (logCP10k)
densMAP is a modification of UMAP that adds an extra cost term in order to preserve information about the relative local density of the data. It is performed on the same inputs as UMAP (Narayan, Berger, and Cho 2021)
densMAP PCA (logCP10k, 1kHVG)
densMAP is a modification of UMAP that adds an extra cost term in order to preserve information about the relative local density of the data. It is performed on the same inputs as UMAP (Narayan, Berger, and Cho 2021)
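As a point of reference, densMAP is exposed through the umap-learn Python package via a single flag; a minimal sketch, assuming `X` is a cells-by-features logCP10k matrix (the variable name is illustrative):

```python
# Minimal densMAP sketch with umap-learn (>= 0.5): densmap=True adds the
# density-preservation term to the standard UMAP objective.
import umap

emb = umap.UMAP(n_components=2, densmap=True, random_state=42).fit_transform(X)
```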
Diffusion maps
Diffusion maps uses an affinity matrix to describe the similarity between data points, which is then transformed into a graph Laplacian. The eigenvalue-weighted eigenvectors of the graph Laplacian are then used to create the embedding. Diffusion maps is calculated on the logCPM expression matrix (Coifman and Lafon 2006)
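A minimal NumPy sketch of this construction is shown below; the Gaussian kernel and the median-based bandwidth heuristic are assumptions for illustration, not the benchmark's exact implementation:

```python
# Sketch of a diffusion map: affinity matrix -> Markov transition matrix ->
# embedding from eigenvalue-weighted eigenvectors. X: cells x features.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_map(X, n_comps=2):
    D = squareform(pdist(X))              # pairwise Euclidean distances
    eps = np.median(D) ** 2               # heuristic kernel bandwidth (assumed)
    K = np.exp(-(D ** 2) / eps)           # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)  # row-normalise to a Markov matrix
    top = np.argsort(-np.linalg.eig(P)[0].real)[1:n_comps + 1]  # skip trivial eigenvector
    evals, evecs = np.linalg.eig(P)
    return evecs.real[:, top] * evals.real[top]  # eigenvalue-weighted coordinates
```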
NeuralEE (CPU) (Default)
NeuralEE is a neural network implementation of elastic embedding. It is a non-linear method that preserves pairwise distances between data points, using a neural network to optimize an objective function that measures the difference between pairwise distances in the original high-dimensional space and in the two-dimensional space. It is computed on two inputs: the package authors’ recommended input of 500 HVGs selected from a logged expression matrix (without sequencing-depth scaling), and the default logCPM matrix with 1000 HVGs (Xiong et al. 2020)
NeuralEE (CPU) (logCP10k, 1kHVG)
NeuralEE is a neural network implementation of elastic embedding. It is a non-linear method that preserves pairwise distances between data points, using a neural network to optimize an objective function that measures the difference between pairwise distances in the original high-dimensional space and in the two-dimensional space. It is computed on two inputs: the package authors’ recommended input of 500 HVGs selected from a logged expression matrix (without sequencing-depth scaling), and the default logCPM matrix with 1000 HVGs (Xiong et al. 2020)
PCA (logCP10k)
PCA or “Principal Component Analysis” is a linear method that finds orthogonal directions in the data that capture the most variance. The first two principal components are chosen as the two-dimensional embedding. PCA is calculated on the logCPM expression matrix with and without selecting 1000 HVGs (Pearson 1901)
PCA (logCP10k, 1kHVG)
PCA or “Principal Component Analysis” is a linear method that finds orthogonal directions in the data that capture the most variance. The first two principal components are chosen as the two-dimensional embedding. PCA is calculated on the logCPM expression matrix with and without selecting 1000 HVGs (Pearson 1901)
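As a concrete sketch, the two-dimensional PCA embedding can be computed with scikit-learn (`logcpm` below stands for the preprocessed cells-by-features matrix and is an assumed variable name):

```python
# PCA to two dimensions; the first two principal components form the embedding.
from sklearn.decomposition import PCA

emb = PCA(n_components=2).fit_transform(logcpm)  # shape: (n_cells, 2)
```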
PHATE (default)
PHATE or “Potential of Heat-diffusion for Affinity-based Transition Embedding” uses the potential of heat diffusion to preserve trajectories in a dataset via a diffusion process. It is an affinity-based method that creates an embedding by finding the dominant eigenvalues of a Markov transition matrix. We evaluate several variants, including the recommended square-root-transformed CPM matrix as input, this input with the gamma parameter set to zero, and the normal logCPM-transformed matrix with and without HVG selection (Moon et al. 2019)
PHATE (logCP10k, 1kHVG)
PHATE or “Potential of Heat-diffusion for Affinity-based Transition Embedding” uses the potential of heat diffusion to preserve trajectories in a dataset via a diffusion process. It is an affinity-based method that creates an embedding by finding the dominant eigenvalues of a Markov transition matrix. We evaluate several variants, including the recommended square-root-transformed CPM matrix as input, this input with the gamma parameter set to zero, and the normal logCPM-transformed matrix with and without HVG selection (Moon et al. 2019)
PHATE (logCP10k)
PHATE or “Potential of Heat-diffusion for Affinity-based Transition Embedding” uses the potential of heat diffusion to preserve trajectories in a dataset via a diffusion process. It is an affinity-based method that creates an embedding by finding the dominant eigenvalues of a Markov transition matrix. We evaluate several variants, including the recommended square-root-transformed CPM matrix as input, this input with the gamma parameter set to zero, and the normal logCPM-transformed matrix with and without HVG selection (Moon et al. 2019)
PHATE (gamma=0)
PHATE or “Potential of Heat-diffusion for Affinity-based Transition Embedding” uses the potential of heat diffusion to preserve trajectories in a dataset via a diffusion process. It is an affinity-based method that creates an embedding by finding the dominant eigenvalues of a Markov transition matrix. We evaluate several variants, including the recommended square-root-transformed CPM matrix as input, this input with the gamma parameter set to zero, and the normal logCPM-transformed matrix with and without HVG selection (Moon et al. 2019)
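Both knobs mentioned above are exposed by the phate Python package; a hedged sketch of two of the evaluated variants, assuming `cpm_sqrt` is the square-root-transformed CPM matrix:

```python
# PHATE on the recommended square-root CPM input, plus the gamma=0 variant.
import phate

emb_default = phate.PHATE(n_components=2).fit_transform(cpm_sqrt)
emb_gamma0 = phate.PHATE(n_components=2, gamma=0).fit_transform(cpm_sqrt)
```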
PyMDE Preserve Distances (logCP10k)
PyMDE is a Python implementation of minimum-distortion embedding. It is a non-linear method that preserves distances between cells or neighborhoods in the high-dimensional space. It is computed with options to preserve distances between cells or neighbourhoods and with the logCPM matrix with and without HVG selection as input (Agrawal, Ali, and Boyd 2021)
PyMDE Preserve Distances (logCP10k, 1kHVG)
PyMDE is a Python implementation of minimum-distortion embedding. It is a non-linear method that preserves distances between cells or neighborhoods in the high-dimensional space. It is computed with options to preserve distances between cells or neighbourhoods and with the logCPM matrix with and without HVG selection as input (Agrawal, Ali, and Boyd 2021)
PyMDE Preserve Neighbors (logCP10k)
PyMDE is a Python implementation of minimum-distortion embedding. It is a non-linear method that preserves distances between cells or neighborhoods in the high-dimensional space. It is computed with options to preserve distances between cells or neighbourhoods and with the logCPM matrix with and without HVG selection as input (Agrawal, Ali, and Boyd 2021)
PyMDE Preserve Neighbors (logCP10k, 1kHVG)
PyMDE is a Python implementation of minimum-distortion embedding. It is a non-linear method that preserves distances between cells or neighborhoods in the high-dimensional space. It is computed with options to preserve distances between cells or neighbourhoods and with the logCPM matrix with and without HVG selection as input (Agrawal, Ali, and Boyd 2021)
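The two variants correspond to the two high-level entry points of the pymde package; a minimal sketch, assuming `X` is the cells-by-features input matrix:

```python
# The two PyMDE problem formulations benchmarked here; .embed() solves the
# minimum-distortion embedding problem and returns the 2-D coordinates.
import pymde

emb_neighbors = pymde.preserve_neighbors(X, embedding_dim=2).embed()
emb_distances = pymde.preserve_distances(X, embedding_dim=2).embed()
```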
t-SNE (logCP10k)
t-SNE or t-distributed Stochastic Neighbor Embedding converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. We use the implementation in the scanpy package with the result of PCA on the logCPM expression matrix (with and without HVG selection) (van der Maaten and Hinton 2008)
t-SNE (logCP10k, 1kHVG)
t-SNE or t-distributed Stochastic Neighbor Embedding converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. We use the implementation in the scanpy package with the result of PCA on the logCPM expression matrix (with and without HVG selection) (van der Maaten and Hinton 2008)
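A sketch of this pipeline with scanpy, assuming `adata` is an AnnData object holding the logCPM expression matrix:

```python
# PCA as pre-processing, then scanpy's t-SNE on the PCA representation.
import scanpy as sc

sc.pp.pca(adata, n_comps=50)        # PCA on the (log-normalised) matrix
sc.tl.tsne(adata, use_rep="X_pca")  # embedding stored in adata.obsm["X_tsne"]
```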
UMAP (logCP10k)
UMAP or Uniform Manifold Approximation and Projection is an algorithm for dimension reduction based on manifold learning techniques and ideas from topological data analysis. We perform UMAP on the logCPM expression matrix before and after HVG selection and with and without PCA as a pre-processing step (McInnes, Healy, and Melville 2018)
UMAP (logCP10k, 1kHVG)
UMAP or Uniform Manifold Approximation and Projection is an algorithm for dimension reduction based on manifold learning techniques and ideas from topological data analysis. We perform UMAP on the logCPM expression matrix before and after HVG selection and with and without PCA as a pre-processing step (McInnes, Healy, and Melville 2018)
UMAP PCA (logCP10k)
UMAP or Uniform Manifold Approximation and Projection is an algorithm for dimension reduction based on manifold learning techniques and ideas from topological data analysis. We perform UMAP on the logCPM expression matrix before and after HVG selection and with and without PCA as a pre-processing step (McInnes, Healy, and Melville 2018)
UMAP PCA (logCP10k, 1kHVG)
UMAP or Uniform Manifold Approximation and Projection is an algorithm for dimension reduction based on manifold learning techniques and ideas from topological data analysis. We perform UMAP on the logCPM expression matrix before and after HVG selection and with and without PCA as a pre-processing step (McInnes, Healy, and Melville 2018)
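A minimal sketch of the direct and PCA-pre-processed variants with the umap-learn package, assuming `X` is the logCP10k matrix:

```python
# UMAP directly on the expression matrix, and on a 50-component PCA of it.
import umap
from sklearn.decomposition import PCA

emb_direct = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
X_pca = PCA(n_components=50).fit_transform(X)
emb_pca = umap.UMAP(n_components=2, random_state=42).fit_transform(X_pca)
```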
Control method info
Random Features
Randomly generated two-dimensional coordinates from a normal distribution (Open Problems for Single Cell Analysis Consortium 2022)
Spectral Features
Use 1000-dimensional diffusion maps as an embedding (Open Problems for Single Cell Analysis Consortium 2022)
True Features
Uses the original feature inputs directly as the ‘embedding’ (Open Problems for Single Cell Analysis Consortium 2022)
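The controls bound the attainable score range; as an illustrative sketch (assuming `X` is the cells-by-features matrix), the negative control is plain Gaussian noise and the positive control passes the input through unchanged:

```python
# Negative and positive controls: random 2-D coordinates vs. the identity map.
import numpy as np

rng = np.random.default_rng(0)
random_emb = rng.standard_normal((X.shape[0], 2))  # "Random Features" control
true_emb = X                                       # "True Features" control
```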
Metric info
continuity
Continuity measures error of hard extrusions based on nearest neighbor coranking (Zhang, Shang, and Zhang 2021).
Density preservation
Similarity between local densities in the high-dimensional data and the reduced data (Narayan, Berger, and Cho 2021).
Distance correlation
Spearman correlation between all pairwise Euclidean distances in the original and dimension-reduced data (Schober, Boer, and Schwarte 2018); a code sketch follows this list.
Distance correlation (spectral)
Spearman correlation between all pairwise diffusion distances in the original and dimension-reduced data (Coifman and Lafon 2006).
local continuity meta criterion
The local continuity meta criterion is the co-KNN size with baseline removal, which favors locality (Zhang, Shang, and Zhang 2021).
global property
The global property metric is a summary of the global co-KNN (Zhang, Shang, and Zhang 2021).
local property
The local property metric is a summary of the local co-KNN (Zhang, Shang, and Zhang 2021).
co-KNN size
co-KNN size counts how many points are in both k-nearest neighbors before and after the dimensionality reduction (Zhang, Shang, and Zhang 2021).
co-KNN AUC
co-KNN AUC is area under the co-KNN curve (Zhang, Shang, and Zhang 2021).
trustworthiness
A measurement of similarity between the rank of each point’s nearest neighbors in the high-dimensional data and the reduced data (Venna and Kaski 2001).
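Two of these metrics have readily available counterparts; as a sketch (assuming `X` is the high-dimensional matrix and `emb` the two-dimensional embedding; any subsampling used for large datasets is omitted):

```python
# Distance correlation: Spearman correlation of all pairwise distances.
# Trustworthiness: rank-based neighbourhood preservation from scikit-learn.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.manifold import trustworthiness

rho, _ = spearmanr(pdist(X), pdist(emb))
tw = trustworthiness(X, emb, n_neighbors=15)
```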
Quality control results
Category | Name | Value | Condition | Severity |
---|---|---|---|---|
Raw results | Dataset 'zebrafish_labs' %missing | 0.60 | pct_missing <= .1 | ✗✗✗ |
Raw results | Metric 'continuity' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Metric 'lcmc' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Metric 'qglobal' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Metric 'qlocal' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Metric 'qnn' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Metric 'qnn_auc' %missing | 0.25 | pct_missing <= .1 | ✗✗ |
Raw results | Method 'densmap_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'densmap_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'densmap_pca_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'densmap_pca_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'diffusion_map' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'neuralee_default' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'neuralee_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pca_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pca_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'phate_default' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'phate_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'phate_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'phate_sqrt' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pymde_distances_log_cp10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pymde_distances_log_cp10k_hvg' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pymde_neighbors_log_cp10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'pymde_neighbors_log_cp10k_hvg' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'random_features' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'spectral_features' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'true_features' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'tsne_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'tsne_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'umap_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'umap_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'umap_pca_logCP10k' %missing | 0.15 | pct_missing <= .1 | ✗ |
Raw results | Method 'umap_pca_logCP10k_1kHVG' %missing | 0.15 | pct_missing <= .1 | ✗ |
Normalisation visualisation
References
10x Genomics. 2019. “5k Peripheral Blood Mononuclear Cells (PBMCs) from a Healthy Donor with a Panel of TotalSeq-b Antibodies (V3 Chemistry).” https://www.10xgenomics.com/resources/datasets/5-k-peripheral-blood-mononuclear-cells-pbm-cs-from-a-healthy-donor-with-cell-surface-proteins-v-3-chemistry-3-1-standard-3-1-0.
Agrawal, Akshay, Alnur Ali, and Stephen Boyd. 2021. “Minimum-Distortion Embedding.” Foundations and Trends in Machine Learning 14 (3): 211–378. https://doi.org/10.1561/2200000090.
Coifman, Ronald R., and Stéphane Lafon. 2006. “Diffusion Maps.” Applied and Computational Harmonic Analysis 21 (1): 5–30. https://doi.org/10.1016/j.acha.2006.04.006.
McInnes, Leland, John Healy, and James Melville. 2018. “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction.” arXiv. https://doi.org/10.48550/arxiv.1802.03426.
Moon, Kevin R., David van Dijk, Zheng Wang, Scott Gigante, Daniel B. Burkhardt, William S. Chen, Kristina Yim, et al. 2019. “Visualizing Structure and Transitions in High-Dimensional Biological Data.” Nature Biotechnology 37 (12): 1482–92. https://doi.org/10.1038/s41587-019-0336-3.
Narayan, Ashwin, Bonnie Berger, and Hyunghoon Cho. 2021. “Assessing Single-Cell Transcriptomic Variability Through Density-Preserving Data Visualization.” Nature Biotechnology 39 (6): 765–74. https://doi.org/10.1038/s41587-020-00801-7.
Nestorowa, Sonia, Fiona K. Hamey, Blanca Pijuan Sala, Evangelia Diamanti, Mairi Shepherd, Elisa Laurenti, Nicola K. Wilson, David G. Kent, and Berthold Göttgens. 2016. “A Single-Cell Resolution Map of Mouse Hematopoietic Stem and Progenitor Cell Differentiation.” Blood 128 (8): e20–31. https://doi.org/10.1182/blood-2016-05-716480.
Olsson, Andre, Meenakshi Venkatasubramanian, Viren K. Chaudhri, Bruce J. Aronow, Nathan Salomonis, Harinder Singh, and H. Leighton Grimes. 2016. “Single-Cell Analysis of Mixed-Lineage States Leading to a Binary Cell Fate Choice.” Nature 537 (7622): 698–702. https://doi.org/10.1038/nature19348.
Open Problems for Single Cell Analysis Consortium. 2022. “Open Problems.” https://openproblems.bio.
Pearson, Karl. 1901. “On Lines and Planes of Closest Fit to Systems of Points in Space.” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2 (11): 559–72. https://doi.org/10.1080/14786440109462720.
Schober, Patrick, Christa Boer, and Lothar A. Schwarte. 2018. “Correlation Coefficients.” Anesthesia & Analgesia 126 (5): 1763–68. https://doi.org/10.1213/ane.0000000000002864.
van der Maaten, Laurens, and Geoffrey Hinton. 2008. “Visualizing Data Using t-SNE.” Journal of Machine Learning Research 9 (86): 2579–2605. http://jmlr.org/papers/v9/vandermaaten08a.html.
Venna, Jarkko, and Samuel Kaski. 2001. “Neighborhood Preservation in Nonlinear Projection Methods: An Experimental Study.” In Artificial Neural Networks - ICANN 2001, 485–91. Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-44668-0_68.
Wagner, Daniel E., Caleb Weinreb, Zach M. Collins, James A. Briggs, Sean G. Megason, and Allon M. Klein. 2018. “Single-Cell Mapping of Gene Expression Landscapes and Lineage in the Zebrafish Embryo.” Science 360 (6392): 981–87. https://doi.org/10.1126/science.aar4362.
Xiong, Jiankang, Fuzhou Gong, Lin Wan, and Liang Ma. 2020. “NeuralEE: A GPU-Accelerated Elastic Embedding Dimensionality Reduction Method for Visualizing Large-Scale scRNA-Seq Data.” Frontiers in Genetics 11. https://doi.org/10.3389/fgene.2020.00786.
Zhang, Yinsheng, Qian Shang, and Guoming Zhang. 2021. “pyDRMetrics - a Python Toolkit for Dimensionality Reduction Quality Assessment.” Heliyon 7 (2): e06199. https://doi.org/10.1016/j.heliyon.2021.e06199.