Label Projection
Automated cell type annotation from rich, labeled reference data
8 datasets · 16 methods · 2 control methods · 3 metrics
Repository · v1.0.0 · MIT
A major challenge for integrating single-cell datasets is creating matching cell type annotations for each cell. One of the most common strategies for annotating cell types is referred to as “cluster-then-annotate”, whereby cells are aggregated into clusters based on feature similarity and then manually characterised based on differential gene expression or previously identified marker genes. Recently, methods have emerged that build on this strategy and annotate cells using known marker genes. However, these strategies are difficult to apply when integrating atlas-scale datasets, as the resulting annotations may not match between datasets.
To ensure that the cell type labels in newly generated datasets match existing reference datasets, some methods align cells to a previously annotated reference dataset and then project labels from the reference to the new dataset.
Here, we compare methods for annotation based on a reference dataset. The datasets consist of two or more samples of single-cell profiles that have been manually annotated with matching labels. These datasets are then split into training and test batches, and the task of each method is to train a cell type classifier on the training set and project those labels onto the test set.
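To make the two splitting strategies used below (“by batch” and “random”) concrete, here is a minimal Python sketch; the variable names, the number of batches, and the 80/20 ratio are illustrative assumptions, not the benchmark's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000
batch = rng.integers(0, 6, n_cells)  # hypothetical experimental batch per cell

# "Random split": hold out a random subset of cells for testing.
is_train_random = rng.random(n_cells) < 0.8

# "Split by batch": hold out entire experimental batches, mimicking the
# realistic setting where the query data comes from unseen experiments.
test_batches = [4, 5]
is_train_batch = ~np.isin(batch, test_batches)
```

Holding out whole batches tests whether a method generalises across technical variation, whereas a random split mixes batches between training and test sets.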
Summary
Results
Results table of the scores per method, dataset, and metric (after scaling). Use the filters to select a custom subset of methods and datasets. The “Overall mean” dataset is the mean value across all datasets.
Dataset info
CeNGEN (split by batch)
100k FACS-isolated C. elegans neurons from 17 experiments sequenced on 10x Genomics. Split into train/test by experimental batch. Dimensions: 100955 cells, 22469 genes. 169 cell types (avg. 597±800 cells per cell type) (Hammarlund et al. 2018).
CeNGEN (random split)
100k FACS-isolated C. elegans neurons from 17 experiments sequenced on 10x Genomics. Split into train/test randomly. Dimensions: 100955 cells, 22469 genes. 169 cell types (avg. 597±800 cells per cell type) (Hammarlund et al. 2018).
Pancreas (by batch)
Human pancreatic islet scRNA-seq data from 6 datasets across technologies (CEL-seq, CEL-seq2, Smart-seq2, inDrop, Fluidigm C1, and SMARTER-seq). Split into train/test by experimental batch. Dimensions: 16382 cells, 18771 genes. 14 cell types (avg. 1170±1703 cells per cell type) (Luecken et al. 2021).
Pancreas (random split)
Human pancreatic islet scRNA-seq data from 6 datasets across technologies (CEL-seq, CEL-seq2, Smart-seq2, inDrop, Fluidigm C1, and SMARTER-seq). Split into train/test randomly. Dimensions: 16382 cells, 18771 genes. 14 cell types (avg. 1170±1703 cells per cell type) (Luecken et al. 2021).
Pancreas (random split with label noise)
Human pancreatic islet scRNA-seq data from 6 datasets across technologies (CEL-seq, CEL-seq2, Smart-seq2, inDrop, Fluidigm C1, and SMARTER-seq). Split into train/test randomly with 20% label noise. Dimensions: 16382 cells, 18771 genes. 14 cell types (avg. 1170±1703 cells per cell type) (Luecken et al. 2021).
Tabula Muris Senis Lung (random split)
All lung cells from Tabula Muris Senis, a 500k cell-atlas from 18 organs and tissues across the mouse lifespan. Split into train/test randomly. Dimensions: 24540 cells, 17985 genes. 39 cell types (avg. 629±999 cells per cell type) (Tabula Muris Consortium 2020).
Zebrafish (by laboratory)
90k cells from zebrafish embryos throughout the first day of development, with and without a knockout of chordin, an important developmental gene. Split into train/test by laboratory. Dimensions: 26022 cells, 25258 genes. 24 cell types (avg. 1084±1156 cells per cell type) (Wagner et al. 2018).
Zebrafish (random split)
90k cells from zebrafish embryos throughout the first day of development, with and without a knockout of chordin, an important developmental gene. Split into train/test randomly. Dimensions: 26022 cells, 25258 genes. 24 cell types (avg. 1084±1156 cells per cell type) (Wagner et al. 2018).
Method info
K-neighbors classifier (log CP10k)
Repository · Source Code · Container · v1.0.0
K-neighbors classifier uses the “k-nearest neighbours” approach, a popular machine learning algorithm for classification and regression tasks. The assumption underlying KNN in this context is that cells with similar gene expression profiles tend to belong to the same cell type. For each unlabelled cell, this method finds the k labelled cells (in this case, 5) with the smallest distance in PCA space, and assigns that cell the most common cell type among its k nearest neighbours (Cover and Hart 1967).
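A minimal sketch of this approach using scikit-learn; the placeholder data and the choice of 50 principal components are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: rows are cells, columns are log-normalised genes.
rng = np.random.default_rng(0)
X_train, X_test = rng.random((500, 2000)), rng.random((200, 2000))
y_train = rng.integers(0, 14, 500)  # integer-coded cell type labels

# Fit PCA on the training cells and embed both splits in the same space.
pca = PCA(n_components=50).fit(X_train)  # number of PCs is an assumption
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(pca.transform(X_train), y_train)
y_pred = knn.predict(pca.transform(X_test))  # majority vote of 5 neighbours
```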
K-neighbors classifier (log scran)
Repository · Source Code · Container · v1.0.0
K-neighbors classifier uses the “k-nearest neighbours” approach, a popular machine learning algorithm for classification and regression tasks. The assumption underlying KNN in this context is that cells with similar gene expression profiles tend to belong to the same cell type. For each unlabelled cell, this method finds the k labelled cells (in this case, 5) with the smallest distance in PCA space, and assigns that cell the most common cell type among its k nearest neighbours (Cover and Hart 1967).
Logistic regression (log CP10k)
Repository · Source Code · Container · v1.0.0
Logistic Regression estimates parameters of a logistic function for multivariate classification tasks. Here, we use 100-dimensional whitened PCA coordinates as independent variables, and the model minimises the cross-entropy loss over all cell type classes (Hosmer Jr, Lemeshow, and Sturdivant 2013).
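A sketch of this setup with scikit-learn; the placeholder data and pipeline are illustrations, not the benchmark's exact code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train, y_train = rng.random((500, 2000)), rng.integers(0, 14, 500)
X_test = rng.random((200, 2000))

# whiten=True rescales each principal component to unit variance,
# matching the "whitened PCA coordinates" described above.
model = make_pipeline(
    PCA(n_components=100, whiten=True),
    LogisticRegression(max_iter=1000),  # minimises multinomial cross-entropy
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```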
Logistic regression (log scran)
Repository · Source Code · Container · v1.0.0
Logistic Regression estimates parameters of a logistic function for multivariate classification tasks. Here, we use 100-dimensional whitened PCA coordinates as independent variables, and the model minimises the cross-entropy loss over all cell type classes (Hosmer Jr, Lemeshow, and Sturdivant 2013).
Majority Vote
Repository · Source Code · Container · v1.0.0
Assigns the most common label in the training data to all cells as the predicted label (Open Problems for Single Cell Analysis Consortium 2022).
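For illustration, a minimal sketch of this baseline with toy labels:

```python
import numpy as np
from collections import Counter

y_train = np.array(["beta", "alpha", "beta", "delta", "beta"])  # toy labels
modal_label = Counter(y_train).most_common(1)[0][0]             # -> "beta"
y_pred = np.full(10, modal_label)  # every test cell receives the modal label
```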
Multilayer perceptron (log CP10k)
Repository · Source Code · Container · v1.0.0
MLP or “Multi-Layer Perceptron” is a type of artificial neural network that consists of multiple layers of interconnected neurons. Each neuron computes a weighted sum of all neurons in the previous layer and transforms it with a nonlinear activation function. The output layer provides the final prediction, and the network weights are updated by gradient descent to minimise the cross-entropy loss. Here, the input data is 100-dimensional whitened PCA coordinates for each cell, and we use two hidden layers of 100 neurons each (Hinton 1989).
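A sketch of this configuration using scikit-learn's MLPClassifier; the placeholder data and training settings are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train, y_train = rng.random((500, 2000)), rng.integers(0, 14, 500)
X_test = rng.random((200, 2000))

model = make_pipeline(
    PCA(n_components=100, whiten=True),
    # two hidden layers of 100 units; cross-entropy is MLPClassifier's loss
    MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500),
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```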
Multilayer perceptron (log scran)
Repository · Source Code · Container · v1.0.0
MLP or “Multi-Layer Perceptron” is a type of artificial neural network that consists of multiple layers of interconnected neurons. Each neuron computes a weighted sum of all neurons in the previous layer and transforms it with a nonlinear activation function. The output layer provides the final prediction, and the network weights are updated by gradient descent to minimise the cross-entropy loss. Here, the input data is 100-dimensional whitened PCA coordinates for each cell, and we use two hidden layers of 100 neurons each (Hinton 1989).
scANVI (All genes)
Repository · Source Code · Container · v1.0.0
scANVI or “single-cell ANnotation using Variational Inference” is a semi-supervised variant of the scVI (Lopez et al. 2018) algorithm. Like scVI, scANVI uses deep neural networks and stochastic optimisation to model uncertainty caused by technical noise and bias in single-cell transcriptomics measurements. However, scANVI also leverages cell type labels in the generative modelling. In this approach, scANVI is used to predict the cell type labels of the unlabelled test data (Xu et al. 2021).
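A rough sketch of how such a prediction could be run with the scvi-tools package; it assumes an AnnData object `adata` holding both splits, with test-cell labels masked by a placeholder category, and all key names are assumptions:

```python
import scvi

# Assumes `adata` contains both splits, with training-cell labels in
# adata.obs["cell_type"] and test cells set to "Unknown" (names illustrative).
scvi.model.SCANVI.setup_anndata(
    adata,
    labels_key="cell_type",
    unlabeled_category="Unknown",
    batch_key="batch",
)
model = scvi.model.SCANVI(adata)
model.train()
adata.obs["predicted"] = model.predict()  # labels for every cell, incl. test
```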
scANVI (Seurat v3 2000 HVG)
Repository · Source Code · Container · v1.0.0
scANVI or “single-cell ANnotation using Variational Inference” is a semi-supervised variant of the scVI (Lopez et al. 2018) algorithm. Like scVI, scANVI uses deep neural networks and stochastic optimisation to model uncertainty caused by technical noise and bias in single-cell transcriptomics measurements. However, scANVI also leverages cell type labels in the generative modelling. In this approach, scANVI is used to predict the cell type labels of the unlabelled test data (Xu et al. 2021).
scArches+scANVI (All genes)
Repository · Source Code · Container · v1.0.0
scArches+scANVI or “Single-cell architecture surgery” is a deep learning method for mapping new datasets onto a pre-existing reference model, using transfer learning and parameter optimisation. It first uses scANVI to build a reference model from the training data, and then applies scArches to map the test data onto the reference model and make predictions (Lotfollahi et al. 2020).
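A rough sketch of the reference-then-surgery workflow with scvi-tools; `adata_ref` and `adata_query` stand in for the training and test data, and all key names and training settings are illustrative assumptions:

```python
import scvi

# Build a scANVI reference model from the labelled training data.
scvi.model.SCANVI.setup_anndata(
    adata_ref,
    labels_key="cell_type",
    unlabeled_category="Unknown",
    batch_key="batch",
)
ref_model = scvi.model.SCANVI(adata_ref)
ref_model.train()

# scArches "surgery": attach the query data to the frozen reference model,
# fine-tune only the newly added weights, then predict query labels.
query_model = scvi.model.SCANVI.load_query_data(adata_query, ref_model)
query_model.train(max_epochs=100, plan_kwargs={"weight_decay": 0.0})
y_pred = query_model.predict()
```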
scArches+scANVI (Seurat v3 2000 HVG)
Repository · Source Code · Container · v1.0.0
scArches+scANVI or “Single-cell architecture surgery” is a deep learning method for mapping new datasets onto a pre-existing reference model, using transfer learning and parameter optimisation. It first uses scANVI to build a reference model from the training data, and then applies scArches to map the test data onto the reference model and make predictions (Lotfollahi et al. 2020).
scArches+scANVI+xgboost (All genes)
Repository · Source Code · Container · v1.0.0
scArches+scANVI or “Single-cell architecture surgery” is a deep learning method for mapping new datasets onto a pre-existing reference model, using transfer learning and parameter optimisation. It first uses scANVI to build a reference model from the training data, and then applies scArches to map the test data onto the reference model and make predictions (Lotfollahi et al. 2020).
scArches+scANVI+xgboost (Seurat v3 2000 HVG)
Repository · Source Code · Container · v1.0.0
scArches+scANVI or “Single-cell architecture surgery” is a deep learning method for mapping new datasets onto a pre-existing reference model, using transfer learning and parameter optimisation. It first uses scANVI to build a reference model from the training data, and then applies scArches to map the test data onto the reference model and make predictions (Lotfollahi et al. 2020).
Seurat reference mapping (SCTransform)
Repository · Source Code · Container · v1.0.0
Seurat reference mapping is a cell type label transfer method provided by the Seurat package. Gene expression counts are first normalised by SCTransform before computing PCA. Seurat then finds mutual nearest neighbours, known as transfer anchors, between the labelled and unlabelled parts of the data in PCA space, and computes each cell’s distance to each of the anchor pairs. Finally, it uses the labelled anchors to predict cell types for unlabelled cells based on these distances (Hao et al. 2021).
XGBoost (log CP10k)
Repository · Source Code · Container · v1.0.0
XGBoost is a gradient-boosted decision tree model that learns an ensemble of trees, each encoding a series of decisions over input features and their values that lead to a prediction, and combines the predictions of all its trees. Here, the input features are normalised gene expression values (Chen and Guestrin 2016).
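A minimal sketch with the xgboost Python package; the expression matrices and labels are placeholders:

```python
import numpy as np
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder

rng = np.random.default_rng(0)
X_train, X_test = rng.random((500, 2000)), rng.random((200, 2000))
y_train = np.array(["alpha", "beta", "delta", "ductal", "acinar"] * 100)

# XGBoost's multi-class interface expects integer class codes 0..n-1.
le = LabelEncoder()
clf = xgb.XGBClassifier()  # gradient-boosted trees with default settings
clf.fit(X_train, le.fit_transform(y_train))
y_pred = le.inverse_transform(clf.predict(X_test))
```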
XGBoost (log scran)
Repository · Source Code · Container · v1.0.0
XGBoost is a gradient-boosted decision tree model that learns an ensemble of trees, each encoding a series of decisions over input features and their values that lead to a prediction, and combines the predictions of all its trees. Here, the input features are normalised gene expression values (Chen and Guestrin 2016).
Control method info
Random Labels
Repository · Source Code · Container · v1.0.0
Random assignment of predicted labels, sampled proportionately to label abundance in the training data (Open Problems for Single Cell Analysis Consortium 2022).
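A minimal sketch of this control; the labels and test size are toy examples:

```python
import numpy as np

rng = np.random.default_rng(0)
y_train = np.array(["beta"] * 60 + ["alpha"] * 30 + ["delta"] * 10)
n_test = 20

labels, counts = np.unique(y_train, return_counts=True)
# Sample predictions with probability equal to each label's training frequency.
y_pred = rng.choice(labels, size=n_test, p=counts / counts.sum())
```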
True Labels
Repository · Source Code · Container · v1.0.0
Perfect assignment: the predicted labels are copied directly from the true test labels (Open Problems for Single Cell Analysis Consortium 2022).
Metric info
Accuracy
The proportion of predicted labels that match the true labels (Grandini, Bagli, and Visani 2020).
F1 score
The F1 score is a weighted average of the precision and recall over all class labels, where each class contributes to the score relative to its frequency in the dataset. It reaches its best value at 1 and its worst at 0 (Grandini, Bagli, and Visani 2020).
Macro F1 score
The macro F1 score is an unweighted F1 score, where each class contributes equally, regardless of its frequency (Grandini, Bagli, and Visani 2020).
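As a sketch, all three metrics can be computed with scikit-learn; the labels below are toy examples:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy true and predicted labels for a handful of cells.
y_true = ["alpha", "beta", "beta", "beta", "delta"]
y_pred = ["alpha", "beta", "alpha", "beta", "delta"]

acc = accuracy_score(y_true, y_pred)                        # fraction correct
f1_weighted = f1_score(y_true, y_pred, average="weighted")  # frequency-weighted
f1_macro = f1_score(y_true, y_pred, average="macro")        # classes weighted equally
```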
Quality control results
✓ All checks succeeded!
Normalisation visualisation
References
Chen, Tianqi, and Carlos Guestrin. 2016. “XGBoost: A Scalable Tree Boosting System.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. https://doi.org/10.1145/2939672.2939785.
Cover, T., and P. Hart. 1967. “Nearest Neighbor Pattern Classification.” IEEE Transactions on Information Theory 13 (1): 21–27. https://doi.org/10.1109/tit.1967.1053964.
Grandini, Margherita, Enrico Bagli, and Giorgio Visani. 2020. “Metrics for Multi-Class Classification: An Overview.” arXiv. https://doi.org/10.48550/arxiv.2008.05756.
Hammarlund, Marc, Oliver Hobert, David M. Miller, and Nenad Sestan. 2018. “The CeNGEN Project: The Complete Gene Expression Map of an Entire Nervous System.” Neuron 99 (3): 430–33. https://doi.org/10.1016/j.neuron.2018.07.042.
Hao, Yuhan, Stephanie Hao, Erica Andersen-Nissen, William M. Mauck, Shiwei Zheng, Andrew Butler, Maddie J. Lee, et al. 2021. “Integrated Analysis of Multimodal Single-Cell Data.” Cell 184 (13): 3573–3587.e29. https://doi.org/10.1016/j.cell.2021.04.048.
Hinton, Geoffrey E. 1989. “Connectionist Learning Procedures.” Artificial Intelligence 40 (1-3): 185–234. https://doi.org/10.1016/0004-3702(89)90049-0.
Hosmer Jr, D. W., S. Lemeshow, and R. X. Sturdivant. 2013. Applied Logistic Regression. Vol. 398. John Wiley & Sons.
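Lopez, Romain, Jeffrey Regier, Michael B. Cole, Michael I. Jordan, and Nir Yosef. 2018. “Deep Generative Modeling for Single-Cell Transcriptomics.” Nature Methods 15 (12): 1053–58. https://doi.org/10.1038/s41592-018-0229-2.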
Lotfollahi, Mohammad, Mohsen Naghipourfar, Malte D. Luecken, Matin Khajavi, Maren Büttner, Ziga Avsec, Alexander V. Misharin, and Fabian J. Theis. 2020. “Query to Reference Single-Cell Integration with Transfer Learning.” bioRxiv. https://doi.org/10.1101/2020.07.16.205997.
Luecken, Malte D., M. Büttner, K. Chaichoompu, A. Danese, M. Interlandi, M. F. Mueller, D. C. Strobl, et al. 2021. “Benchmarking Atlas-Level Data Integration in Single-Cell Genomics.” Nature Methods 19 (1): 41–50. https://doi.org/10.1038/s41592-021-01336-8.
Open Problems for Single Cell Analysis Consortium. 2022. “Open Problems.” https://openproblems.bio.
Tabula Muris Consortium. 2020. “A Single-Cell Transcriptomic Atlas Characterizes Ageing Tissues in the Mouse.” Nature 583 (7817): 590–95. https://doi.org/10.1038/s41586-020-2496-1.
Wagner, Daniel E., Caleb Weinreb, Zach M. Collins, James A. Briggs, Sean G. Megason, and Allon M. Klein. 2018. “Single-Cell Mapping of Gene Expression Landscapes and Lineage in the Zebrafish Embryo.” Science 360 (6392): 981–87. https://doi.org/10.1126/science.aar4362.
Xu, Chenling, Romain Lopez, Edouard Mehlman, Jeffrey Regier, Michael I Jordan, and Nir Yosef. 2021. “Probabilistic Harmonization and Annotation of Single-Cell Transcriptomics Data with Deep Generative Models.” Molecular Systems Biology 17 (1). https://doi.org/10.15252/msb.20209620.