Single-cell transcriptomics
Single-cell transcriptomics examines the gene expression levels of individual cells in a given population by simultaneously measuring the concentration of messenger RNA (mRNA) for hundreds to thousands of genes.
Analysis of these transcriptomic data makes it possible to unravel heterogeneous cell populations, reconstruct cellular developmental trajectories, and model transcriptional dynamics, all of which were previously masked in bulk transcriptome measurements.
Background
Gene expression analysis has become routine through the development of high-throughput RNA sequencing (RNA-seq) and microarrays. RNA analysis that was previously limited to tracing individual transcripts by Northern blots or quantitative PCR is now used frequently to characterize the expression profiles of populations of thousands of cells. The data produced from these bulk-based assays have led to the identification of genes that are differentially expressed in distinct cell populations and to biomarker discovery. These genomic studies are limited in that they provide measurements for whole tissues and, as a result, show an average expression profile for all the constituent cells. In multicellular organisms, different cell types within the same population can have distinct roles and form subpopulations with different transcriptional profiles. Correlations in the gene expression of the subpopulations can often be missed due to the lack of subpopulation identification.
Moreover, bulk assays fail to identify whether a change in the expression profile is due to a change in regulation or a change in composition, in which one cell type arises to dominate the population. Lastly, when examining cellular progression through differentiation, average expression profiles are only able to order cells by time rather than by their stage of development, and are consequently unable to show trends in gene expression levels specific to particular stages.
Recent advances in biotechnology allow the measurement of gene expression in hundreds to thousands of individual cells simultaneously. While these breakthroughs in transcriptomics technologies have enabled the generation of single-cell transcriptomic data, the data produced present new computational and analytical challenges. Techniques used for analysing RNA-seq data from bulk cell populations can be applied to single-cell data, but many new computational approaches have been designed specifically for this data type to facilitate a complete and detailed study of single-cell expression profiles.
Experimental steps
There is currently no standardized technique to generate single-cell data; however, all methods must include cell isolation from the population, lysate formation, amplification through reverse transcription, and quantification of expression levels. Common techniques for measuring expression are quantitative PCR and RNA-seq.
Isolating single cells
There are several methods available to isolate and amplify cells for single-cell analysis. Low throughput techniques can isolate hundreds of cells; they are slow but enable the selection of specific cells. High throughput methods are able to quickly isolate hundreds to tens of thousands of cells. Common techniques include:
- Fluorescence activated cell sorting
- Microfluidic devices
Quantitative PCR (qPCR)
A reference (housekeeping) gene, whose expression level is assumed to remain constant under the experimental conditions, is used for normalisation. The most commonly used housekeeping genes include GAPDH and α-actin, although the reliability of normalisation through this process is questionable, as there is evidence that their expression levels can vary significantly. Fluorescent dyes are used as reporter molecules to detect the PCR product and monitor the progress of the amplification; the increase in fluorescence intensity is proportional to the amplicon concentration. A plot of fluorescence versus cycle number is made, and a threshold fluorescence level is used to find the cycle at which the plot reaches this value. The cycle number at this point is known as the threshold cycle (Ct) and is measured for each gene.
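From the threshold cycles, the relative expression of a target gene is commonly estimated with the 2^(-ΔΔCt) approach. The following is a minimal Python sketch, assuming roughly 100% amplification efficiency (the product doubles each cycle) and using hypothetical Ct values.

```python
# Minimal sketch: relative quantification from qPCR threshold cycles (Ct)
# using the common 2^(-delta delta Ct) approach. Assumes ~100% amplification
# efficiency; all Ct values below are hypothetical.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene in a sample versus a control sample,
    normalised to a housekeeping (reference) gene such as GAPDH."""
    delta_ct_sample = ct_target - ct_reference            # normalise to reference gene
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control   # compare to control sample
    return 2 ** (-delta_delta_ct)

# Example with made-up Ct values for one cell versus a control cell
fold_change = relative_expression(ct_target=24.1, ct_reference=18.0,
                                  ct_target_ctrl=26.3, ct_reference_ctrl=18.2)
print(f"Estimated fold change: {fold_change:.2f}")
```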
Single-cell RNA-seq
The single-cell RNA-seq technique converts a population of RNAs to a library of cDNA fragments. These fragments are sequenced by high-throughput next-generation sequencing techniques, and the reads are mapped back to the reference genome, providing a count of the number of reads associated with each gene.
Normalisation of RNA-seq data accounts for cell-to-cell variation in the efficiencies of cDNA library formation and sequencing. One method relies on the use of extrinsic RNA spike-ins, which are added in equal quantities to each cell lysate and used to normalise read counts by the number of reads mapped to spike-in mRNA.
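As an illustration, a minimal sketch of spike-in-based normalisation is shown below; the count matrices are simulated and all names are hypothetical.

```python
import numpy as np

# Minimal sketch of spike-in normalisation on simulated data.
# counts: genes x cells read counts for endogenous genes
# spike_counts: spike-in transcripts x cells read counts
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(1000, 50)).astype(float)
spike_counts = rng.poisson(50, size=(20, 50)).astype(float)

# Because the same amount of spike-in RNA was added to every cell, differences
# in total spike-in reads reflect technical (capture/sequencing) efficiency.
spike_totals = spike_counts.sum(axis=0)
size_factors = spike_totals / spike_totals.mean()

# Divide each cell's counts by its size factor to make cells comparable.
normalised = counts / size_factors
```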
Another control uses unique molecular identifiers (UMIs), short DNA sequences that are added to each cDNA molecule before amplification and act as a barcode for that molecule. Normalisation is achieved by counting the number of unique UMIs associated with each gene, which accounts for differences in amplification efficiency.
Spike-ins, UMIs, and other approaches have also been combined for more accurate normalisation.
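A minimal sketch of UMI-based molecule counting, using a handful of hypothetical aligned reads, illustrates how PCR duplicates are collapsed.

```python
from collections import defaultdict

# Minimal sketch of UMI counting on hypothetical reads. Each aligned read is
# reduced to a (gene, UMI) pair; duplicate UMIs for the same gene are collapsed,
# so PCR copies of the same original molecule are counted only once.
aligned_reads = [
    ("GeneA", "ACGT"), ("GeneA", "ACGT"),  # PCR duplicates: one molecule
    ("GeneA", "TTGC"),
    ("GeneB", "GGCA"), ("GeneB", "GGCA"), ("GeneB", "CATG"),
]

umis_per_gene = defaultdict(set)
for gene, umi in aligned_reads:
    umis_per_gene[gene].add(umi)

# Molecule counts per gene, independent of amplification efficiency
molecule_counts = {gene: len(umis) for gene, umis in umis_per_gene.items()}
print(molecule_counts)  # {'GeneA': 2, 'GeneB': 2}
```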
Considerations
A problem associated with single-cell data is zero-inflated gene expression distributions, known as technical dropouts, which are common because the low mRNA concentrations of less-expressed genes are not captured in the reverse transcription process. The percentage of mRNA molecules in the cell lysate that are detected is often only 10-20%.
When RNA spike-ins are used for normalisation, the assumption is made that the amplification and sequencing efficiencies for endogenous and spike-in RNA are the same. Evidence suggests that this is not the case, given fundamental differences in size and features, such as the lack of a polyadenylated tail in spike-ins and therefore shorter length. Additionally, normalisation using UMIs assumes the cDNA library is sequenced to saturation, which is not always the case.
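As a simple illustration of technical dropouts, the per-gene dropout rate (the fraction of cells in which a gene has a zero count) can be computed directly from the count matrix; the sketch below uses simulated counts.

```python
import numpy as np

# Minimal sketch: per-gene dropout rate on a simulated genes x cells matrix.
rng = np.random.default_rng(1)
counts = rng.negative_binomial(n=2, p=0.8, size=(1000, 200))  # genes x cells

dropout_rate = (counts == 0).mean(axis=1)   # fraction of zero counts per gene
mean_expression = counts.mean(axis=1)

# Lowly expressed genes typically show higher dropout rates, consistent with
# the technical dropouts described above.
```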
Data analysis
Single-cell data analysis assumes that the input is a matrix of normalised gene expression counts, generated by the approaches outlined above, and can provide insights that are not obtainable from bulk measurements. Three main types of insight are provided:
- Identification and characterization of cell types and their spatial organisation in time
- Inference of gene regulatory networks and their strength across individual cells
- Classification of the stochastic component of transcription
Clustering
Clustering allows for the formation of subgroups in the cell population. Cells can be clustered by their transcriptomic profile in order to analyse the sub-population structure and identify rare cell types or cell subtypes. Alternatively, genes can be clustered by their expression states in order to identify covarying genes. A combination of both clustering approaches, known as biclustering, has been used to simultaneously cluster by genes and cells to find genes that behave similarly within cell clusters. Commonly applied clustering methods include K-means clustering, which forms disjoint groups, and hierarchical clustering, which forms nested partitions.
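A minimal sketch of both approaches on a simulated, log-transformed expression matrix (cells in rows, genes in columns), using scikit-learn and SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

# Minimal sketch: clustering cells by their expression profiles (simulated data).
rng = np.random.default_rng(2)
expression = np.log1p(rng.poisson(3, size=(300, 2000)).astype(float))

# K-means: disjoint groups of cells
kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(expression)

# Hierarchical clustering: nested partitions, cut at a chosen number of clusters
tree = linkage(expression, method="ward")
hier_labels = fcluster(tree, t=4, criterion="maxclust")
```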
Biclustering
Biclustering provides several advantages by improving the resolution of clustering. Genes that are informative only for a subset of cells, and hence are only expressed there, can be identified through biclustering. Moreover, similarly behaving genes that differentiate one cell cluster from another can be identified using this method.
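One possible biclustering approach is spectral co-clustering; the sketch below applies scikit-learn's implementation to a simulated cells × genes count matrix and is only illustrative, not the specific method used in any particular study.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Minimal sketch: biclustering a cells x genes count matrix so that groups of
# cells and groups of genes are found simultaneously (simulated data).
rng = np.random.default_rng(3)
counts = rng.poisson(2, size=(200, 500)) + 1   # non-negative matrix, cells x genes

model = SpectralCoclustering(n_clusters=5, random_state=0).fit(counts)
cell_cluster = model.row_labels_       # bicluster assignment for each cell
gene_cluster = model.column_labels_    # bicluster assignment for each gene
```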
Dimensionality reduction
Dimensionality reduction algorithms such as principal component analysis (PCA) and t-SNE can be used to simplify data for visualisation and pattern detection by transforming cells from a high-dimensional to a lower-dimensional space. The result is a representation in which each cell is a point in a 2-D or 3-D space. Dimensionality reduction is frequently used before clustering, as cells in high dimensions can wrongly appear to be close due to distance metrics behaving non-intuitively.
Principal component analysis
The most frequently used technique is PCA, which identifies the directions of largest variance (principal components) and transforms the data so that the first principal component has the largest possible variance, and each successive principal component in turn has the highest variance possible while remaining orthogonal to the preceding components. The contribution each gene makes to each component is used to infer which genes contribute most to the variance in the population and are involved in differentiating subpopulations.
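A minimal sketch of PCA (with gene loadings) followed by t-SNE on a simulated expression matrix, using scikit-learn; in practice, highly variable genes are usually selected first.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Minimal sketch: PCA on a log-transformed cells x genes matrix (simulated data).
rng = np.random.default_rng(4)
expression = np.log1p(rng.poisson(3, size=(300, 2000)).astype(float))

pca = PCA(n_components=10)
cell_coordinates = pca.fit_transform(expression)     # cells in the reduced space

# Gene loadings: genes with the largest absolute loading on a component
# contribute the most to that axis of variation between subpopulations.
top_genes_pc1 = np.argsort(np.abs(pca.components_[0]))[::-1][:20]

# t-SNE is often run on the PCA coordinates for 2-D visualisation.
embedding = TSNE(n_components=2, random_state=0).fit_transform(cell_coordinates)
```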
Differential expression
Detecting differences in gene expression level between two populations is performed on both single-cell and bulk transcriptomic data. Specialised methods have been designed for single-cell data that take into account single-cell features such as technical dropouts and the shape of the distribution, e.g. bimodal vs. unimodal.
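As an illustration, a simple (non-specialised) per-gene comparison between two groups of cells can be performed with the Wilcoxon rank-sum (Mann-Whitney U) test; dedicated single-cell methods additionally model dropouts and distribution shape. The counts below are simulated.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Minimal sketch: per-gene differential expression between two groups of cells
# with a non-parametric rank test (simulated counts, cells x genes).
rng = np.random.default_rng(8)
group_a = rng.poisson(3, size=(100, 500))
group_b = rng.poisson(4, size=(120, 500))

p_values = np.array([
    mannwhitneyu(group_a[:, g], group_b[:, g], alternative="two-sided").pvalue
    for g in range(group_a.shape[1])
])
```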
Gene ontology enrichment
Gene ontology (GO) terms describe gene functions and the relationships between those functions, grouped into three classes (a sketch of a common enrichment test follows the list below):
- Molecular function
- Cellular component
- Biological process
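Enrichment of a GO term in a gene list (for example, the markers of one cell cluster) is commonly assessed with a hypergeometric test for over-representation; the sketch below uses entirely hypothetical numbers.

```python
from scipy.stats import hypergeom

# Minimal sketch: testing whether a GO term is over-represented in a gene list
# (e.g. genes up-regulated in one cell cluster). All numbers are hypothetical.
N = 20000   # total genes in the background population
K = 300     # background genes annotated with the GO term of interest
n = 150     # genes in the query list (e.g. cluster markers)
k = 12      # query genes annotated with the term

# P(X >= k) under the hypergeometric null of random sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"Enrichment p-value: {p_value:.3g}")
```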
Pseudotemporal ordering
Pseudo-temporal ordering is a technique that aims to infer gene expression dynamics from snapshot single-cell data. The method tries to order the cells in such a way that similar cells are closely positioned to each other. This trajectory of cells can be linear, but can also bifurcate or follow more complex graph structures. The trajectory therefore enables the inference of gene expression dynamics and the ordering of cells by their progression through differentiation or response to external stimuli.
The method relies on the assumptions that the cells follow the same path through the process of interest and that their transcriptional state correlates with their progression. The algorithm can be applied to both mixed populations and temporal samples.
More than 50 methods for pseudo-temporal ordering have been developed, and each has its own requirements for prior information, detectable topologies, and methodology. An example is the Monocle algorithm, which carries out dimensionality reduction of the data, builds a minimum spanning tree using the transformed data, orders cells in pseudotime by following the longest connected path of the tree, and consequently labels cells by type. Another example is DPT (diffusion pseudotime), which uses a diffusion map and a diffusion process.
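The sketch below illustrates the general minimum-spanning-tree idea on simulated data; it is a toy approximation of a Monocle-style ordering, not the actual Monocle implementation.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances

# Minimal sketch: reduce dimensionality, build a minimum spanning tree over
# cells, and order cells by their distance along the tree from one endpoint
# of its longest path (simulated data).
rng = np.random.default_rng(5)
expression = np.log1p(rng.poisson(3, size=(100, 500)).astype(float))

reduced = PCA(n_components=2).fit_transform(expression)
distances = pairwise_distances(reduced)
mst = minimum_spanning_tree(distances)        # sparse tree over cells

# Distances between all pairs of cells measured along the tree
tree_dist = shortest_path(mst, directed=False)

# The longest path (tree diameter) runs between the two most distant cells;
# pseudotime is taken as each cell's tree distance from one of these endpoints.
start, end = np.unravel_index(np.argmax(tree_dist), tree_dist.shape)
pseudotime = tree_dist[start]
ordering = np.argsort(pseudotime)
```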
Network inference
Gene regulatory network inference is a technique that aims to construct a network, represented as a graph, in which the nodes represent genes and the edges indicate co-regulatory interactions. The method relies on the assumption that a strong statistical relationship between the expression of two genes indicates a potential functional relationship. The most commonly used measure of the strength of a statistical relationship is correlation. However, correlation fails to identify non-linear relationships, so mutual information is used as an alternative. Gene clusters linked in a network signify genes that undergo coordinated changes in expression.
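A minimal sketch of correlation-based network inference on simulated data; the correlation threshold used here is arbitrary and would normally be replaced by a significance criterion.

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal sketch: build a co-expression network by thresholding pairwise
# Spearman correlations between genes (simulated cells x genes data).
rng = np.random.default_rng(6)
expression = np.log1p(rng.poisson(3, size=(200, 50)).astype(float))

corr, _ = spearmanr(expression)           # genes x genes correlation matrix
np.fill_diagonal(corr, 0.0)

adjacency = np.abs(corr) > 0.8            # edge where |correlation| is strong
edges = np.argwhere(np.triu(adjacency))   # pairs of putatively co-regulated genes
```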
Integration
Single-cell transcriptomics datasets generated using different experimental protocols and under different experimental conditions often differ in the presence or strength of technical effects and in the types of cells observed, among other factors. This results in strong batch effects that may bias the findings of statistical methods applied across batches, particularly in the presence of confounding.
As a result of these properties of single-cell transcriptomic data, batch correction methods developed for bulk sequencing data have been observed to perform poorly. This led to the development of statistical methods that correct for batch effects while being robust to the properties of single-cell transcriptomic data, in order to integrate data from different sources or experimental batches. Foundational work in this regard was performed by Laleh Haghverdi in formulating the use of mutual nearest neighbours between each batch to define batch-correction vectors. These vectors can be used to merge datasets that each include at least one shared cell type. An orthogonal approach involves the projection of each dataset onto a shared low-dimensional space using canonical correlation analysis. Mutual nearest neighbours and canonical correlation analysis have also been combined to define integration "anchors" comprising reference cells in one dataset, to which query cells in another dataset are normalised.
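The sketch below illustrates only the core mutual-nearest-neighbours idea on simulated low-dimensional data; the published method additionally uses cosine normalisation and locally smoothed correction vectors, which are omitted here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Minimal sketch of the mutual nearest neighbours (MNN) idea for batch
# integration: a cross-batch cell pair is an MNN pair if each cell is among
# the other's k nearest neighbours. Data are simulated.
rng = np.random.default_rng(7)
batch1 = rng.normal(0.0, 1.0, size=(100, 20))   # cells x reduced dimensions
batch2 = rng.normal(0.5, 1.0, size=(120, 20))   # second batch, shifted

k = 10
nn_12 = NearestNeighbors(n_neighbors=k).fit(batch2).kneighbors(batch1, return_distance=False)
nn_21 = NearestNeighbors(n_neighbors=k).fit(batch1).kneighbors(batch2, return_distance=False)

mnn_pairs = [(i, j) for i in range(len(batch1)) for j in nn_12[i] if i in nn_21[j]]

# Batch-correction vectors: differences between paired cells, here simply
# averaged to shift batch2 towards batch1 (a crude stand-in for the smoothed
# per-cell correction used in practice).
if mnn_pairs:
    vectors = np.array([batch1[i] - batch2[j] for i, j in mnn_pairs])
    batch2_corrected = batch2 + vectors.mean(axis=0)
```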