Models

trVAE

class scarches.models.TRVAE(adata: AnnData, condition_key: str | None = None, conditions: list | None = None, hidden_layer_sizes: list = [256, 64], latent_dim: int = 10, dr_rate: float = 0.05, use_mmd: bool = True, mmd_on: str = 'z', mmd_boundary: int | None = None, recon_loss: str | None = 'nb', beta: float = 1, use_bn: bool = False, use_ln: bool = True)[source]

Bases: BaseMixin, SurgeryMixin, CVAELatentsMixin

Model class for scArches. Contains the implementation of the Conditional Variational Auto-encoder (trVAE).

Parameters:
  • adata (AnnData) – Annotated data matrix. Has to be count data for ‘nb’ and ‘zinb’ loss and normalized log-transformed data for ‘mse’ loss.

  • condition_key (String) – column name of conditions in adata.obs data frame.

  • conditions (List) – List of condition names that the data will contain; needed to get the right encoding when the model is used after reloading.

  • hidden_layer_sizes (List) – A list of hidden layer sizes for the encoder network. The decoder network will use the reversed order.

  • latent_dim (Integer) – Bottleneck layer (z) size.

  • dr_rate (Float) – Dropout rate applied to all layers; if `dr_rate`==0 no dropout will be applied.

  • use_mmd (Boolean) – If ‘True’, an additional MMD loss will be calculated on the latent dim ‘z’ or the first decoder layer ‘y’.

  • mmd_on (String) – Choose the layer on which the MMD loss will be calculated if ‘use_mmd=True’: ‘z’ for the latent dim or ‘y’ for the first decoder layer.

  • mmd_boundary (Integer or None) – Choose how many conditions the MMD loss should be calculated on. If ‘None’, MMD will be calculated on all conditions.

  • recon_loss (String) – Definition of Reconstruction-Loss-Method, ‘mse’, ‘nb’ or ‘zinb’.

  • beta (Float) – Scaling Factor for MMD loss

  • use_bn (Boolean) – If True batch normalization will be applied to layers.

  • use_ln (Boolean) – If True layer normalization will be applied to layers.

Methods

get_latent([x, c, mean, mean_var])

Map x into the latent space. This function feeds data through the encoder and returns z for each sample in the data. x – Numpy nd-array of shape [n_obs, input_dim] to be mapped to latent space; if None, self.adata.X is used. c – numpy nd-array of original (unencoded) desired labels for each sample. mean – return mean instead of a random sample from the latent space. mean_var – return mean and variance instead of a random sample if mean=False.

get_y([x, c])

Map x into the y dimension (first decoder layer).

load(dir_path[, adata, map_location])

Instantiate a model from the saved output. dir_path – path to saved outputs. adata – AnnData object; if None, will check for and load the anndata saved with the model. map_location – a function, torch.device, string or a dict specifying how to remap storage locations.

load_query_data(adata, reference_model[, ...])

Transfer Learning function for new data.

save(dir_path[, overwrite, save_anndata])

Save the state of the model. Neither the trainer optimizer state nor the trainer history are saved. dir_path – path to a directory. overwrite – overwrite existing data or not; if False and the directory already exists at dir_path, an error will be raised. save_anndata – if True, also saves the anndata. anndata_write_kwargs – kwargs for the anndata write function.

train([n_epochs, lr, eps])

Train the model.

train(n_epochs: int = 400, lr: float = 0.001, eps: float = 0.01, **kwargs)[source]

Train the model.

Parameters:
  • n_epochs – Number of epochs for training the model.

  • lr – Learning rate for training the model.

  • eps – torch.optim.Adam eps parameter

  • kwargs – kwargs for the TrVAE trainer.
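
A minimal end-to-end sketch (not part of the original docstring): the file names, the ‘batch’ column, and the import aliases are assumptions, and the hyperparameter values are illustrative only.

>>> import scanpy as sc
>>> import scarches as sca
>>> adata = sc.read("reference.h5ad")  # raw counts for the default 'nb' loss
>>> trvae = sca.models.TRVAE(adata, condition_key="batch",
...     conditions=list(adata.obs["batch"].unique()))
>>> trvae.train(n_epochs=400, lr=1e-3)
>>> adata.obsm["X_trVAE"] = trvae.get_latent(mean=True)
>>> query = sc.read("query.h5ad")  # data with unseen conditions
>>> q_model = sca.models.TRVAE.load_query_data(adata=query, reference_model=trvae)
>>> q_model.train(n_epochs=100)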

expiMap

class scarches.models.EXPIMAP(adata: AnnData, condition_key: str | None = None, conditions: list | None = None, hidden_layer_sizes: list = [256, 256], dr_rate: float = 0.05, recon_loss: str = 'nb', use_l_encoder: bool = False, use_bn: bool = False, use_ln: bool = True, mask: ndarray | list | None = None, mask_key: str = 'I', decoder_last_layer: str | None = None, soft_mask: bool = False, n_ext: int = 0, n_ext_m: int = 0, use_hsic: bool = False, hsic_one_vs_all: bool = False, ext_mask: ndarray | list | None = None, soft_ext_mask: bool = False)[source]

Bases: BaseMixin, SurgeryMixin, CVAELatentsMixin

Model class for scArches. Contains the implementation of the Conditional Variational Auto-encoder with an optionally masked linear decoder (expiMap).

Parameters:
  • adata (AnnData) – Annotated data matrix. Has to be count data for ‘nb’ and ‘zinb’ loss and normalized log-transformed data for ‘mse’ loss.

  • condition_key (String) – column name of conditions in adata.obs data frame.

  • conditions (List) – List of condition names that the data will contain; needed to get the right encoding when the model is used after reloading.

  • hidden_layer_sizes (List) – A list of hidden layer sizes for the encoder network. The decoder network will use the reversed order.

  • latent_dim (Integer) – Bottleneck layer (z) size.

  • dr_rate (Float) – Dropout rate applied to all layers; if `dr_rate`==0 no dropout will be applied.

  • recon_loss (String) – Definition of Reconstruction-Loss-Method, ‘mse’ or ‘nb’.

  • use_l_encoder (Boolean) – If True and `decoder_last_layer`=’softmax’, a library size encoder is used.

  • use_bn (Boolean) – If True batch normalization will be applied to layers.

  • use_ln (Boolean) – If True layer normalization will be applied to layers.

  • mask (Array or List) – if not None, an array of 0s and 1s from utils.add_annotations to create VAE with a masked linear decoder.

  • mask_key (String) – A key in adata.varm for the mask if the mask is not provided.

  • decoder_last_layer (String or None) – The last layer of the decoder. Must be ‘softmax’ (default for ‘nb’ loss), ‘identity’ (default for ‘mse’ loss), ‘softplus’, ‘exp’ or ‘relu’.

  • soft_mask (Boolean) – Use the soft mask option. If True, the model will enforce the mask with L1 regularization instead of multiplying the weights of the linear decoder by the binary mask.

  • n_ext (Integer) – Number of unconstrained extension terms. Used for query mapping.

  • n_ext_m (Integer) – Number of constrained extension terms. Used for query mapping.

  • use_hsic (Boolean) – If True, add HSIC regularization for unconstrained extension terms. Used for query mapping.

  • hsic_one_vs_all (Boolean) – If True, calculates the sum of HSIC losses for each unconstrained term vs the other terms. If False, calculates HSIC for all unconstrained terms vs the other terms. Used for query mapping.

  • ext_mask (Array or List) – Mask (similar to the mask argument) for unconstrained extension terms. Used for query mapping.

  • soft_ext_mask (Boolean) – Use the soft mask mode for training with the constrained extension terms. Used for query mapping.
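
A construction sketch (an assumption-laden illustration, not from the original docstring): the binary membership mask is assumed to be stored in adata.varm[‘I’] as a genes-by-terms matrix, e.g. produced by utils.add_annotations, and ‘study’ is an assumed condition column.

>>> import scanpy as sc
>>> import scarches as sca
>>> adata = sc.read("reference.h5ad")  # counts, with adata.varm['I'] holding the mask
>>> expimap = sca.models.EXPIMAP(adata, condition_key="study",
...     conditions=list(adata.obs["study"].unique()),
...     recon_loss="nb", mask_key="I", soft_mask=False)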

Methods

get_latent([x, c, only_active, mean, mean_var])

Map x into the latent space.

get_y([x, c])

Map x into the y dimension (first decoder layer).

latent_directions([method, get_confidence, ...])

Get directions of upregulation for each latent dimension.

latent_enrich(groups[, comparison, ...])

Gene set enrichment test for the latent space.

load(dir_path[, adata, map_location])

Instantiate a model from the saved output. dir_path – path to saved outputs. adata – AnnData object; if None, will check for and load the anndata saved with the model. map_location – a function, torch.device, string or a dict specifying how to remap storage locations.

load_query_data(adata, reference_model[, ...])

Transfer Learning function for new data.

mask_genes([terms])

Return lists of genes belonging to the terms in the mask.

nonzero_terms()

Return indices of active terms.

save(dir_path[, overwrite, save_anndata])

Save the state of the model. Neither the trainer optimizer state nor the trainer history are saved. dir_path – path to a directory. overwrite – overwrite existing data or not; if False and the directory already exists at dir_path, an error will be raised. save_anndata – if True, also saves the anndata. anndata_write_kwargs – kwargs for the anndata write function.

term_genes(term[, terms])

Return the dataframe with genes belonging to the term after training sorted by absolute weights in the decoder.

train([n_epochs, lr, eps, alpha, omega])

Train the model.

update_terms([terms, adata])

Add extension terms' names to the terms.

get_latent(x: ndarray | None = None, c: ndarray | None = None, only_active: bool = False, mean: bool = False, mean_var: bool = False)[source]

Map x into the latent space. This function feeds data through the encoder and returns z for each sample in the data.

Parameters:
  • x – Numpy nd-array to be mapped to latent space. x has to be in shape [n_obs, input_dim]. If None, then self.adata.X is used.

  • c – numpy nd-array of original (unencoded) desired labels for each sample.

  • only_active – Return only the latent variables which correspond to active terms, i.e. terms that were not deactivated by the group lasso regularization.

  • mean – return mean instead of random sample from the latent space

  • mean_var – return mean and variance instead of random sample from the latent space if mean=False.

Returns:

Array containing the latent space encoding of ‘x’.
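
For illustration (assuming a trained model named expimap with its attached adata, as in the construction sketch above), latent term scores can be pulled as follows:

>>> scores = expimap.get_latent(mean=True)                   # one column per term
>>> active = expimap.get_latent(only_active=True, mean=True) # drop deactivated terms
>>> expimap.adata.obsm["X_expimap"] = scores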

latent_directions(method='sum', get_confidence=False, adata=None, key_added='directions')[source]

Get directions of upregulation for each latent dimension. Multiplying this by the raw latent scores ensures positive latent scores correspond to upregulation.

Parameters:
  • method (String) – Method of calculation, it should be ‘sum’ or ‘counts’.

  • get_confidence (Boolean) – Only for method=’counts’. If ‘True’, also calculate confidence of the directions.

  • adata (AnnData) – An AnnData object to store dimensions. If ‘None’, self.adata is used.

  • key_added (String) – key of adata.uns where to put the dimensions.
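
A short sketch (the variable name expimap is assumed from the earlier examples):

>>> expimap.latent_directions(method="sum", key_added="directions")
>>> dirs = expimap.adata.uns["directions"]  # one direction value per latent dimension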

latent_enrich(groups, comparison='rest', n_sample=5000, use_directions=False, directions_key='directions', select_terms=None, adata=None, exact=True, key_added='bf_scores')[source]

Gene set enrichment test for the latent space. Tests the hypothesis that the latent scores for each term in one group (z_1) are bigger than in the other group (z_2).

Puts the results into adata.uns[key_added]. Results are a dictionary with p_h0 – the probability that z_1 > z_2, p_h1 = 1 - p_h0, and bf – Bayes factors equal to log(p_h0/p_h1).

Parameters:
  • groups (String or Dict) – A string with the key in adata.obs to look for categories or a dictionary with categories as keys and lists of cell names as values.

  • comparison (String) – The category name to compare against. If ‘rest’, then compares each category against all others.

  • n_sample (Integer) – Number of random samples to draw for each category.

  • use_directions (Boolean) – If ‘True’, multiplies the latent scores by directions in adata.

  • directions_key (String) – The key in adata.uns for directions.

  • select_terms (Array) – If not ‘None’, then an index of terms to select for the test. Only does the test for these terms.

  • adata (AnnData) – An AnnData object to use. If ‘None’, uses self.adata.

  • exact (Boolean) – Use exact probabilities for comparisons.

  • key_added (String) – key of adata.uns where to put the results of the test.
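
A sketch of an enrichment test between categories of an assumed adata.obs[‘condition’] column, reusing the directions computed above:

>>> expimap.latent_enrich(groups="condition", comparison="rest",
...     use_directions=True, directions_key="directions")
>>> bf = expimap.adata.uns["bf_scores"]  # p_h0, p_h1 and Bayes factors per term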

classmethod load_query_data(adata: AnnData, reference_model: str | TRVAE, freeze: bool = True, freeze_expression: bool = True, unfreeze_ext: bool = True, remove_dropout: bool = True, new_n_ext: int | None = None, new_n_ext_m: int | None = None, new_ext_mask: ndarray | list | None = None, new_soft_ext_mask: bool = False, **kwargs)[source]

Transfer Learning function for new data. Uses old trained model and expands it for new conditions.

Parameters:
  • adata – Query anndata object.

  • reference_model – A model to expand or a path to a model folder.

  • freeze (Boolean) – If ‘True’ freezes every part of the network except the first layers of encoder/decoder.

  • freeze_expression (Boolean) – If ‘True’ freeze every weight in first layers except the condition weights.

  • remove_dropout (Boolean) – If ‘True’ remove Dropout for Transfer Learning.

  • unfreeze_ext (Boolean) – If ‘True’ do not freeze weights for new constrained and unconstrained extension terms.

  • new_n_ext (Integer) – Number of new unconstrained extension terms to add to the reference model. Used for query mapping.

  • new_n_ext_m (Integer) – Number of new constrained extension terms to add to the reference model. Used for query mapping.

  • new_ext_mask (Array or List) – Mask (similar to the mask argument) for new unconstrained extension terms.

  • new_soft_ext_mask (Boolean) – Use the soft mask mode for training with the constrained extension terms.

  • kwargs – kwargs for the initialization of the EXPIMAP class for the query model.

Returns:

New (query) model to train on query data.

Return type:

new_model
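
A query-mapping sketch (query_adata and the trained reference model expimap are assumed); three new unconstrained terms are added to absorb query-specific variation:

>>> q_model = sca.models.EXPIMAP.load_query_data(adata=query_adata,
...     reference_model=expimap, freeze=True, unfreeze_ext=True, new_n_ext=3)
>>> q_model.train(n_epochs=400)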

mask_genes(terms: str | list = 'terms')[source]

Return lists of genes belonging to the terms in the mask.

nonzero_terms()[source]

Return indices of active terms. Active terms are the terms which were not deactivated by the group lasso regularization.

term_genes(term: str | int, terms: str | list = 'terms')[source]

Return the dataframe with genes belonging to the term after training sorted by absolute weights in the decoder.

train(n_epochs: int = 400, lr: float = 0.001, eps: float = 0.01, alpha: float | None = None, omega: Tensor | None = None, **kwargs)[source]

Train the model.

Parameters:
  • n_epochs (Integer) – Number of epochs for training the model.

  • lr (Float) – Learning rate for training the model.

  • eps (Float) – torch.optim.Adam eps parameter

  • alpha_kl (Float) – Multiplies the KL divergence part of the loss. Set to 0.35 by default.

  • alpha_epoch_anneal (Integer or None) – If not ‘None’, the KL loss scaling factor (alpha_kl) will be annealed from 0 to 1 every epoch until the input integer is reached. By default it is set to 130 epochs, or to n_epochs if n_epochs < 130.

  • alpha (Float) – Group Lasso regularization coefficient

  • omega (Tensor or None) – If not ‘None’, vector of coefficients for each group

  • alpha_l1 (Float) – L1 regularization coefficient for the soft mask of reference (old) and new constrained terms. Specifies the strength for deactivating the genes which are not in the corresponding annotation groups in the mask.

  • alpha_l1_epoch_anneal (Integer) – If not ‘None’, the alpha_l1 scaling factor will be annealed from 0 to 1 every ‘alpha_l1_anneal_each’ epochs until the input integer is reached.

  • alpha_l1_anneal_each (Integer) – Anneal alpha_l1 every alpha_l1_anneal_each’th epoch, i.e. for 5 (default) do annealing every 5th epoch.

  • gamma_ext (Float) – L1 regularization coefficient for the new unconstrained terms. Specifies the strength of sparsity enforcement.

  • gamma_epoch_anneal (Integer) – If not ‘None’, the gamma_ext scaling factor will be annealed from 0 to 1 every ‘gamma_anneal_each’ epochs until the input integer is reached.

  • gamma_anneal_each (Integer) – Anneal gamma_ext every gamma_anneal_each’th epoch, i.e. for 5 (default) do annealing every 5th epoch.

  • beta (Float) – HSIC regularization coefficient for the unconstrained terms. Multiplies the HSIC loss terms if not ‘None’.

  • kwargs – kwargs for the expiMap trainer.
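
A training sketch for the reference model; the regularization values are illustrative, not recommendations:

>>> expimap.train(n_epochs=400, lr=1e-3,
...     alpha=0.7,               # group lasso strength on annotated terms (illustrative)
...     alpha_epoch_anneal=100,  # anneal alpha_kl over the first 100 epochs
...     alpha_kl=0.5)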

update_terms(terms: str | list = 'terms', adata=None)[source]

Add extension terms’ names to the terms.

scPoli

class scarches.models.scPoli(adata: AnnData, share_metadata: bool | None = True, obs_metadata: DataFrame | None = None, condition_keys: list | str | None = None, conditions: list | None = None, conditions_combined: list | None = None, inject_condition: list | None = ['encoder', 'decoder'], cell_type_keys: list | str | None = None, cell_types: dict | None = None, unknown_ct_names: list | None = None, labeled_indices: list | None = None, prototypes_labeled: dict | None = None, prototypes_unlabeled: dict | None = None, hidden_layer_sizes: list | None = None, latent_dim: int = 10, embedding_dims: list | int = 10, embedding_max_norm: float = 1.0, dr_rate: float = 0.05, use_mmd: bool = False, mmd_on: str = 'z', mmd_boundary: int | None = None, recon_loss: str | None = 'nb', beta: float = 1, use_bn: bool = False, use_ln: bool = True)[source]

Bases: BaseMixin

Model class for scPoli. Contains the methods and functionality for label transfer and prototype training.

Parameters:
  • adata (AnnData) – Annotated data matrix.

  • share_metadata (Bool) – Whether or not to share metadata associated with samples. The metadata is aggregated using the condition_keys. First element is taken. Consider manually adding an .obs_metadata attribute if you need more flexibility.

  • condition_keys (String) – column name of conditions in adata.obs data frame.

  • conditions (List) – List of condition names that the data will contain; needed to get the right encoding when the model is used after reloading.

  • cell_type_keys (List or str) – List or string of obs columns to use as cell type annotation for prototypes.

  • cell_types (Dictionary) – Dictionary of cell types. Keys are cell types and values are cell_type_keys. Needed for surgery.

  • unknown_ct_names (List) – List of strings with the names of cell clusters to be ignored for prototypes computation.

  • labeled_indices (List) – List of integers with the indices of the labeled cells.

  • prototypes_labeled (Dictionary) – Dictionary with keys mean, cov and the respective mean or covariance matrices for prototypes.

  • prototypes_unlabeled (Dictionary) – Dictionary with keys mean and the respective mean for unlabeled prototypes.

  • hidden_layer_sizes (List) – A list of hidden layer sizes for the encoder network. The decoder network will use the reversed order.

  • latent_dim (Integer) – Bottleneck layer (z) size.

  • embedding_dims (List or Integer) – Conditional embedding size(s).

  • embedding_max_norm – Max norm allowed for conditional embeddings.

  • dr_rate (Float) – Dropout rate applied to all layers; if `dr_rate`==0 no dropout will be applied.

  • use_mmd (Boolean) – If ‘True’, an additional MMD loss will be calculated on the latent dim ‘z’ or the first decoder layer ‘y’.

  • mmd_on (String) – Choose the layer on which the MMD loss will be calculated if ‘use_mmd=True’: ‘z’ for the latent dim or ‘y’ for the first decoder layer.

  • mmd_boundary (Integer or None) – Choose how many conditions the MMD loss should be calculated on. If ‘None’, MMD will be calculated on all conditions.

  • recon_loss (String) – Definition of Reconstruction-Loss-Method, ‘mse’, ‘nb’ or ‘zinb’.

  • beta (Float) – Scaling Factor for MMD loss

  • use_bn (Boolean) – If True batch normalization will be applied to layers.

  • use_ln (Boolean) – If True layer normalization will be applied to layers.
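
A reference-building sketch (the ‘batch’ and ‘cell_type’ columns, the file name, and the hyperparameter values are assumptions):

>>> import scanpy as sc
>>> import scarches as sca
>>> adata = sc.read("reference.h5ad")
>>> ref_model = sca.models.scPoli(adata, condition_keys="batch",
...     cell_type_keys="cell_type", embedding_dims=5, recon_loss="nb")
>>> ref_model.train(n_epochs=50, pretraining_epochs=40, eta=5)
>>> adata.obsm["X_scpoli"] = ref_model.get_latent(adata, mean=True)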

Methods

add_new_cell_type(cell_type_name, obs_key, ...)

Function used to add new annotation for a novel cell type.

classify(adata[, prototype, p, get_prob, ...])

Classifies unlabeled cells using the prototypes obtained during training.

get_conditional_embeddings()

Returns anndata object of the conditional embeddings

get_latent(adata[, mean])

Map x into the latent space.

get_prototypes_info([prototype_set])

Generates anndata file with prototype features and annotations.

load(dir_path[, adata, map_location])

Instantiate a model from the saved output. dir_path – path to saved outputs. adata – AnnData object; if None, will check for and load the anndata saved with the model. map_location – a function, torch.device, string or a dict specifying how to remap storage locations.

load_query_data(adata, reference_model[, ...])

Transfer Learning function for new data.

save(dir_path[, overwrite, save_anndata])

Save the state of the model. Neither the trainer optimizer state nor the trainer history are saved. dir_path – path to a directory. overwrite – overwrite existing data or not; if False and the directory already exists at dir_path, an error will be raised. save_anndata – if True, also saves the anndata. anndata_write_kwargs – kwargs for the anndata write function.

train([n_epochs, pretraining_epochs, eta, ...])

Train the model.

get_recon_loss

shot_surgery

add_new_cell_type(cell_type_name, obs_key, prototypes, x=None, c=None)[source]

Function used to add new annotation for a novel cell type.

Parameters:
  • cell_type_name (str) – Name of the new cell type

  • obs_key (str) – Obs column key to define the hierarchy level of cell type annotation.

  • prototypes (list) – List of indices of the unlabeled prototypes that correspond to the new cell type

  • x (np.ndarray) – Features to be classified. If None the stored model’s adata is used.

  • c (np.ndarray) – Condition vector. If None the stored model’s condition vector is used.

classify(adata, prototype=False, p=2, get_prob=False, log_distance=True, scale_uncertainties=False)[source]

Classifies unlabeled cells using the prototypes obtained during training. Handles the data before calling the model’s classify method.

Parameters:
  • x (np.ndarray) – Features to be classified. If None the stored model’s adata is used.

  • c (Dict or np.ndarray) – Condition vector, or a dictionary when the model is conditioned on multiple batch covariates.

  • prototype (Boolean) – Whether to classify the gene features or the prototypes stored in the model.
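
A label-transfer sketch, assuming query_model was obtained via load_query_data (documented further below) and trained on query_adata:

>>> res = query_model.classify(query_adata, scale_uncertainties=True)
>>> # 'res' holds per-cell predictions and uncertainties; see the scPoli tutorials
>>> # for the exact structure of the returned object.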

get_conditional_embeddings()[source]

Returns anndata object of the conditional embeddings

get_latent(adata, mean: bool = False)[source]

Map x into the latent space. This function feeds data through the encoder and returns z for each sample in the data.

Parameters:
  • x – Numpy nd-array to be mapped to latent space. x has to be in shape [n_obs, input_dim].

  • c – numpy nd-array of original (unencoded) desired labels for each sample.

  • mean – return mean instead of a random sample from the latent space.

Returns:

Array containing the latent space encoding of ‘x’.

get_prototypes_info(prototype_set='labeled')[source]

Generates anndata file with prototype features and annotations.

Parameters:
  • prototype_set (str) – Which set of prototypes to use; ‘labeled’ by default.

get_recon_loss(adata: AnnData, batch_size: int = 128, condition_encoders: dict | None = None, conditions_combined_encoder: dict | None = None)[source]

classmethod load_query_data(adata: AnnData, reference_model: str | SCPOLI, labeled_indices: list | None = None, unknown_ct_names: list | None = None, freeze: bool = True, freeze_expression: bool = True, remove_dropout: bool = True, return_new_conditions: bool = False, map_location=None)[source]

Transfer Learning function for new data. Uses old trained model and expands it for new conditions.

Parameters:
  • adata – Query anndata object.

  • reference_model – SCPOLI model to expand or a path to SCPOLI model folder.

  • labeled_indices (List) – List of integers with the indices of the labeled cells.

  • unknown_ct_names (List) – List of strings with the names of cell clusters to be ignored for prototypes computation.

  • freeze (Boolean) – If ‘True’ freezes every part of the network except the first layers of encoder/decoder.

  • freeze_expression (Boolean) – If ‘True’ freeze every weight in first layers except the condition weights.

  • remove_dropout (Boolean) – If ‘True’ remove Dropout for Transfer Learning.

  • map_location – map_location to remap storage locations (as in ‘.load’) of ‘reference_model’. Only taken into account if ‘reference_model’ is a path to a model on disk.

Returns:

new_model – New SCPOLI model to train on query data.

Return type:

scPoli
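
A surgery sketch: an empty labeled_indices list treats all query cells as unlabeled (variable names and epoch counts are assumptions):

>>> query_model = sca.models.scPoli.load_query_data(adata=query_adata,
...     reference_model=ref_model, labeled_indices=[])
>>> query_model.train(n_epochs=50, pretraining_epochs=40, eta=10)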

classmethod shot_surgery(adata: AnnData, reference_model: str | SCPOLI, labeled_indices: list | None = None, unknown_ct_names: list | None = None, train_epochs: int = 0, batch_size: int = 128, subsample: float = 1.0, force_cuda: bool = True, **kwargs)[source]

train(n_epochs: int = 100, pretraining_epochs=None, eta: float = 1, lr: float = 0.001, eps: float = 0.01, alpha_epoch_anneal=100.0, reload_best: bool = False, prototype_training: bool | None = True, unlabeled_prototype_training: bool | None = True, **kwargs)[source]

Train the model.

Parameters:
  • n_epochs – Number of epochs for training the model.

  • lr – Learning rate for training the model.

  • eps – torch.optim.Adam eps parameter

  • kwargs – kwargs for the scPoli trainer.

scVI

class scarches.models.SCVI(adata: AnnData | None = None, n_hidden: int = 128, n_latent: int = 10, n_layers: int = 1, dropout_rate: float = 0.1, dispersion: Literal['gene', 'gene-batch', 'gene-label', 'gene-cell'] = 'gene', gene_likelihood: Literal['zinb', 'nb', 'poisson'] = 'zinb', latent_distribution: Literal['normal', 'ln'] = 'normal', **kwargs)[source]

Bases: RNASeqMixin, VAEMixin, ArchesMixin, UnsupervisedTrainingMixin, BaseMinifiedModeModelClass

single-cell Variational Inference :cite:p:`Lopez18`.

Parameters:
  • adata – AnnData object that has been registered via setup_anndata(). If None, then the underlying module will not be initialized until training, and a LightningDataModule must be passed in during training (EXPERIMENTAL).

  • n_hidden – Number of nodes per hidden layer.

  • n_latent – Dimensionality of the latent space.

  • n_layers – Number of hidden layers used for encoder and decoder NNs.

  • dropout_rate – Dropout rate for neural networks.

  • dispersion

    One of the following:

    • 'gene' - dispersion parameter of NB is constant per gene across cells

    • 'gene-batch' - dispersion can differ between different batches

    • 'gene-label' - dispersion can differ between different labels

    • 'gene-cell' - dispersion can differ for every gene in every cell

  • gene_likelihood

    One of:

    • 'nb' - Negative binomial distribution

    • 'zinb' - Zero-inflated negative binomial distribution

    • 'poisson' - Poisson distribution

  • latent_distribution

    One of:

    • 'normal' - Normal distribution

    • 'ln' - Logistic normal distribution (Normal(0, I) transformed by softmax)

  • **kwargs – Additional keyword arguments for VAE.

Examples

>>> adata = anndata.read_h5ad(path_to_anndata)
>>> scvi.model.SCVI.setup_anndata(adata, batch_key="batch")
>>> vae = scvi.model.SCVI(adata)
>>> vae.train()
>>> adata.obsm["X_scVI"] = vae.get_latent_representation()
>>> adata.obsm["X_normalized_scVI"] = vae.get_normalized_expression()

Notes

See further usage examples in the following tutorials:

  1. /tutorials/notebooks/quick_start/api_overview

  2. /tutorials/notebooks/scrna/harmonization

  3. /tutorials/notebooks/scrna/scarches_scvi_tools

  4. /tutorials/notebooks/scrna/scvi_in_R

Attributes:
adata

Data attached to model instance.

adata_manager

Manager instance associated with self.adata.

device

The current device that the module’s params are on.

history

Returns computed metrics during training.

is_trained

Whether the model has been trained.

minified_data_type

The type of minified data associated with this model, if applicable.

summary_string

Summary string of the model.

test_indices

Observations that are in test set.

train_indices

Observations that are in train set.

validation_indices

Observations that are in validation set.

Methods

convert_legacy_save(dir_path, output_dir_path)

Converts a legacy saved model (<v0.15.0) to the updated save format.

deregister_manager([adata])

Deregisters the AnnDataManager instance associated with adata.

differential_expression([adata, groupby, ...])

A unified method for differential expression analysis.

get_anndata_manager(adata[, required])

Retrieves the AnnDataManager for a given AnnData object specific to this model instance.

get_elbo([adata, indices, batch_size])

Return the ELBO for the data.

get_feature_correlation_matrix([adata, ...])

Generate gene-gene correlation matrix using scvi uncertainty and expression.

get_from_registry(adata, registry_key)

Returns the object in AnnData associated with the key in the data registry.

get_latent_library_size([adata, indices, ...])

Returns the latent library size for each cell.

get_latent_representation([adata, indices, ...])

Return the latent representation for each cell.

get_likelihood_parameters([adata, indices, ...])

Estimates for the parameters of the likelihood \(p(x \mid z)\).

get_marginal_ll([adata, indices, ...])

Return the marginal LL for the data.

get_normalized_expression([adata, indices, ...])

Returns the normalized (decoded) gene expression.

get_reconstruction_error([adata, indices, ...])

Return the reconstruction error for the data.

load(dir_path[, adata, accelerator, device, ...])

Instantiate a model from the saved output.

load_query_data(adata, reference_model[, ...])

Online update of a reference model with scArches algorithm :cite:p:`Lotfollahi21`.

load_registry(dir_path[, prefix])

Return the full registry saved with the model.

minify_adata([minified_data_type, ...])

Minifies the model's adata.

posterior_predictive_sample([adata, ...])

Generate predictive samples from the posterior predictive distribution.

prepare_query_anndata(adata, reference_model)

Prepare data for query integration.

register_manager(adata_manager)

Registers an AnnDataManager instance with this model class.

save(dir_path[, prefix, overwrite, ...])

Save the state of the model.

setup_anndata(adata[, layer, batch_key, ...])

Sets up the AnnData object for this model.

to_device(device)

Move model to device.

train([max_epochs, accelerator, devices, ...])

Train the model.

view_anndata_setup([adata, ...])

Print summary of the setup for the initial AnnData or a given AnnData object.

view_setup_args(dir_path[, prefix])

Print args used to setup a saved model.

minify_adata(minified_data_type: Literal['latent_posterior_parameters'] = 'latent_posterior_parameters', use_latent_qzm_key: str = 'X_latent_qzm', use_latent_qzv_key: str = 'X_latent_qzv') None[source]

Minifies the model’s adata.

Minifies the adata, and registers new anndata fields: latent qzm, latent qzv, adata uns containing minified-adata type, and library size. This also sets the appropriate property on the module to indicate that the adata is minified.

Parameters:
  • minified_data_type

    How to minify the data. Currently only supports latent_posterior_parameters. If minified_data_type == latent_posterior_parameters:

    • the original count data is removed (adata.X, adata.raw, and any layers)

    • the parameters of the latent representation of the original data are stored

    • everything else is left untouched

  • use_latent_qzm_key – Key to use in adata.obsm where the latent qzm params are stored

  • use_latent_qzv_key – Key to use in adata.obsm where the latent qzv params are stored

Notes

The modification is not done inplace – instead the model is assigned a new (minified) version of the adata.

classmethod setup_anndata(adata: AnnData, layer: str | None = None, batch_key: str | None = None, labels_key: str | None = None, size_factor_key: str | None = None, categorical_covariate_keys: list[str] | None = None, continuous_covariate_keys: list[str] | None = None, **kwargs)[source]

Sets up the AnnData object for this model.

A mapping will be created between data fields used by this model and their respective locations in adata. None of the data in adata are modified. Only adds fields to adata.

Parameters:
  • adata – AnnData object. Rows represent cells, columns represent features.

  • layer – if not None, uses this as the key in adata.layers for raw count data.

  • batch_key – key in adata.obs for batch information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_batch’]. If None, assigns the same batch to all the data.

  • labels_key – key in adata.obs for label information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_labels’]. If None, assigns the same label to all the data.

  • size_factor_key – key in adata.obs for size factor information. Instead of using library size as a size factor, the provided size factor column will be used as offset in the mean of the likelihood. Assumed to be on linear scale.

  • categorical_covariate_keys – keys in adata.obs that correspond to categorical data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

  • continuous_covariate_keys – keys in adata.obs that correspond to continuous data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.
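
A registration sketch; the layer and key names (‘counts’, ‘batch’, ‘donor’, ‘percent_mito’) are assumptions about the AnnData at hand:

>>> sca.models.SCVI.setup_anndata(adata, layer="counts", batch_key="batch",
...     categorical_covariate_keys=["donor"],
...     continuous_covariate_keys=["percent_mito"])
>>> vae = sca.models.SCVI(adata, n_layers=2, n_latent=30, gene_likelihood="nb")
>>> vae.train()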

scANVI

class scarches.models.SCANVI(adata: AnnData, n_hidden: int = 128, n_latent: int = 10, n_layers: int = 1, dropout_rate: float = 0.1, dispersion: Literal['gene', 'gene-batch', 'gene-label', 'gene-cell'] = 'gene', gene_likelihood: Literal['zinb', 'nb', 'poisson'] = 'zinb', linear_classifier: bool = False, **model_kwargs)[source]

Bases: RNASeqMixin, VAEMixin, ArchesMixin, BaseMinifiedModeModelClass

Single-cell annotation using variational inference :cite:p:`Xu21`.

Inspired from M1 + M2 model, as described in (https://arxiv.org/pdf/1406.5298.pdf).

Parameters:
  • adata – AnnData object that has been registered via setup_anndata().

  • n_hidden – Number of nodes per hidden layer.

  • n_latent – Dimensionality of the latent space.

  • n_layers – Number of hidden layers used for encoder and decoder NNs.

  • dropout_rate – Dropout rate for neural networks.

  • dispersion

    One of the following:

    • 'gene' - dispersion parameter of NB is constant per gene across cells

    • 'gene-batch' - dispersion can differ between different batches

    • 'gene-label' - dispersion can differ between different labels

    • 'gene-cell' - dispersion can differ for every gene in every cell

  • gene_likelihood

    One of:

    • 'nb' - Negative binomial distribution

    • 'zinb' - Zero-inflated negative binomial distribution

    • 'poisson' - Poisson distribution

  • linear_classifier – If True, uses a single linear layer for classification instead of a multi-layer perceptron.

  • **model_kwargs – Keyword args for SCANVAE

Examples

>>> adata = anndata.read_h5ad(path_to_anndata)
>>> scvi.model.SCANVI.setup_anndata(adata, batch_key="batch", labels_key="labels")
>>> vae = scvi.model.SCANVI(adata, "Unknown")
>>> vae.train()
>>> adata.obsm["X_scVI"] = vae.get_latent_representation()
>>> adata.obs["pred_label"] = vae.predict()

Notes

See further usage examples in the following tutorials:

  1. /tutorials/notebooks/scrna/harmonization

  2. /tutorials/notebooks/scrna/scarches_scvi_tools

  3. /tutorials/notebooks/scrna/seed_labeling

Attributes:
adata

Data attached to model instance.

adata_manager

Manager instance associated with self.adata.

device

The current device that the module’s params are on.

history

Returns computed metrics during training.

is_trained

Whether the model has been trained.

minified_data_type

The type of minified data associated with this model, if applicable.

summary_string

Summary string of the model.

test_indices

Observations that are in test set.

train_indices

Observations that are in train set.

validation_indices

Observations that are in validation set.

Methods

convert_legacy_save(dir_path, output_dir_path)

Converts a legacy saved model (<v0.15.0) to the updated save format.

deregister_manager([adata])

Deregisters the AnnDataManager instance associated with adata.

differential_expression([adata, groupby, ...])

A unified method for differential expression analysis.

from_scvi_model(scvi_model, unlabeled_category)

Initialize scanVI model with weights from pretrained SCVI model.

get_anndata_manager(adata[, required])

Retrieves the AnnDataManager for a given AnnData object specific to this model instance.

get_elbo([adata, indices, batch_size])

Return the ELBO for the data.

get_feature_correlation_matrix([adata, ...])

Generate gene-gene correlation matrix using scvi uncertainty and expression.

get_from_registry(adata, registry_key)

Returns the object in AnnData associated with the key in the data registry.

get_latent_library_size([adata, indices, ...])

Returns the latent library size for each cell.

get_latent_representation([adata, indices, ...])

Return the latent representation for each cell.

get_likelihood_parameters([adata, indices, ...])

Estimates for the parameters of the likelihood \(p(x \mid z)\).

get_marginal_ll([adata, indices, ...])

Return the marginal LL for the data.

get_normalized_expression([adata, indices, ...])

Returns the normalized (decoded) gene expression.

get_reconstruction_error([adata, indices, ...])

Return the reconstruction error for the data.

load(dir_path[, adata, accelerator, device, ...])

Instantiate a model from the saved output.

load_query_data(adata, reference_model[, ...])

Online update of a reference model with scArches algorithm :cite:p:`Lotfollahi21`.

load_registry(dir_path[, prefix])

Return the full registry saved with the model.

minify_adata([minified_data_type, ...])

Minifies the model's adata.

posterior_predictive_sample([adata, ...])

Generate predictive samples from the posterior predictive distribution.

predict([adata, indices, soft, batch_size, ...])

Return cell label predictions.

prepare_query_anndata(adata, reference_model)

Prepare data for query integration.

register_manager(adata_manager)

Registers an AnnDataManager instance with this model class.

save(dir_path[, prefix, overwrite, ...])

Save the state of the model.

setup_anndata(adata, labels_key, ...[, ...])

Sets up the AnnData object for this model.

to_device(device)

Move model to device.

train([max_epochs, n_samples_per_label, ...])

Train the model.

view_anndata_setup([adata, ...])

Print summary of the setup for the initial AnnData or a given AnnData object.

view_setup_args(dir_path[, prefix])

Print args used to setup a saved model.

classmethod from_scvi_model(scvi_model: SCVI, unlabeled_category: str, labels_key: str | None = None, adata: AnnData | None = None, **scanvi_kwargs)[source]

Initialize scanVI model with weights from pretrained SCVI model.

Parameters:
  • scvi_model – Pretrained scvi model

  • labels_key – key in adata.obs for label information. Label categories can not be different if labels_key was used to setup the SCVI model. If None, uses the labels_key used to setup the SCVI model. If that was also None, an error is raised.

  • unlabeled_category – Value used for unlabeled cells in labels_key used to setup AnnData with scvi.

  • adata – AnnData object that has been registered via setup_anndata().

  • scanvi_kwargs – kwargs for scANVI model
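
A sketch of seeding scANVI from a trained scVI reference (‘cell_type’, ‘Unknown’, and the epoch settings are assumptions):

>>> scanvi = sca.models.SCANVI.from_scvi_model(vae, unlabeled_category="Unknown",
...     labels_key="cell_type")
>>> scanvi.train(max_epochs=20, n_samples_per_label=100)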

minify_adata(minified_data_type: Literal['latent_posterior_parameters'] = 'latent_posterior_parameters', use_latent_qzm_key: str = 'X_latent_qzm', use_latent_qzv_key: str = 'X_latent_qzv')[source]

Minifies the model’s adata.

Minifies the adata, and registers new anndata fields: latent qzm, latent qzv, adata uns containing minified-adata type, and library size. This also sets the appropriate property on the module to indicate that the adata is minified.

Parameters:
  • minified_data_type

    How to minify the data. Currently only supports latent_posterior_parameters. If minified_data_type == latent_posterior_parameters:

    • the original count data is removed (adata.X, adata.raw, and any layers)

    • the parameters of the latent representation of the original data are stored

    • everything else is left untouched

  • use_latent_qzm_key – Key to use in adata.obsm where the latent qzm params are stored

  • use_latent_qzv_key – Key to use in adata.obsm where the latent qzv params are stored

Notes

The modification is not done inplace – instead the model is assigned a new (minified) version of the adata.

predict(adata: AnnData | None = None, indices: Sequence[int] | None = None, soft: bool = False, batch_size: int | None = None, use_posterior_mean: bool = True) np.ndarray | pd.DataFrame[source]

Return cell label predictions.

Parameters:
  • adata – AnnData object that has been registered via setup_anndata().

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • soft – If True, returns per class probabilities

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • use_posterior_mean – If True, uses the mean of the posterior distribution to predict celltype labels. Otherwise, uses a sample from the posterior distribution - this means that the predictions will be stochastic.
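
A usage sketch (the trained model scanvi and the obs column name are assumptions):

>>> adata.obs["C_scANVI"] = scanvi.predict(adata)
>>> probs = scanvi.predict(adata, soft=True)  # per-class probabilities (DataFrame)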

classmethod setup_anndata(adata: AnnData, labels_key: str, unlabeled_category: str, layer: str | None = None, batch_key: str | None = None, size_factor_key: str | None = None, categorical_covariate_keys: list[str] | None = None, continuous_covariate_keys: list[str] | None = None, **kwargs)[source]

Sets up the AnnData object for this model.

A mapping will be created between data fields used by this model and their respective locations in adata. None of the data in adata are modified. Only adds fields to adata.

Parameters:
  • adata – AnnData object. Rows represent cells, columns represent features.

  • labels_key – key in adata.obs for label information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_labels’]. If None, assigns the same label to all the data.

  • unlabeled_category – value in adata.obs[labels_key] that indicates unlabeled observations.

  • layer – if not None, uses this as the key in adata.layers for raw count data.

  • batch_key – key in adata.obs for batch information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_batch’]. If None, assigns the same batch to all the data.

  • size_factor_key – key in adata.obs for size factor information. Instead of using library size as a size factor, the provided size factor column will be used as offset in the mean of the likelihood. Assumed to be on linear scale.

  • categorical_covariate_keys – keys in adata.obs that correspond to categorical data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

  • continuous_covariate_keys – keys in adata.obs that correspond to continuous data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

train(max_epochs: int | None = None, n_samples_per_label: float | None = None, check_val_every_n_epoch: int | None = None, train_size: float = 0.9, validation_size: float | None = None, shuffle_set_split: bool = True, batch_size: int = 128, accelerator: str = 'auto', devices: int | list[int] | str = 'auto', datasplitter_kwargs: dict | None = None, plan_kwargs: dict | None = None, **trainer_kwargs)[source]

Train the model.

Parameters:
  • max_epochs – Number of passes through the dataset for semisupervised training.

  • n_samples_per_label – Number of subsamples for each label class to sample per epoch. By default, there is no label subsampling.

  • check_val_every_n_epoch – Frequency with which metrics are computed on the data for validation set for both the unsupervised and semisupervised trainers. If you’d like a different frequency for the semisupervised trainer, set check_val_every_n_epoch in semisupervised_train_kwargs.

  • train_size – Size of training set in the range [0.0, 1.0].

  • validation_size – Size of the validation set. If None, defaults to 1 - train_size. If train_size + validation_size < 1, the remaining cells belong to a test set.

  • shuffle_set_split – Whether to shuffle indices before splitting. If False, the val, train, and test set are split in the sequential order of the data according to validation_size and train_size percentages.

  • batch_size – Minibatch size to use during training.

  • accelerator – Supports passing different accelerator types (“cpu”, “gpu”, “tpu”, “ipu”, “hpu”, “mps”, “auto”) as well as custom accelerator instances.

  • devices – The devices to use. Can be set to a non-negative index (int or str), a sequence of device indices (list or comma-separated str), the value -1 to indicate all available devices, or “auto” for automatic selection based on the chosen accelerator. If set to “auto” and accelerator is not determined to be “cpu”, then devices will be set to the first available device.

  • datasplitter_kwargs – Additional keyword arguments passed into SemiSupervisedDataSplitter.

  • plan_kwargs – Keyword args for SemiSupervisedTrainingPlan. Keyword arguments passed to train() will overwrite values present in plan_kwargs, when appropriate.

  • **trainer_kwargs – Other keyword args for Trainer.

TotalVI

class scarches.models.TOTALVI(adata: AnnData, n_latent: int = 20, gene_dispersion: Literal['gene', 'gene-batch', 'gene-label', 'gene-cell'] = 'gene', protein_dispersion: Literal['protein', 'protein-batch', 'protein-label'] = 'protein', gene_likelihood: Literal['zinb', 'nb'] = 'nb', latent_distribution: Literal['normal', 'ln'] = 'normal', empirical_protein_background_prior: bool | None = None, override_missing_proteins: bool = False, **model_kwargs)[source]

Bases: RNASeqMixin, VAEMixin, ArchesMixin, BaseModelClass

total Variational Inference :cite:p:`GayosoSteier21`.

Parameters:
  • adata – AnnData object that has been registered via setup_anndata().

  • n_latent – Dimensionality of the latent space.

  • gene_dispersion

    One of the following:

    • 'gene' - genes_dispersion parameter of NB is constant per gene across cells

    • 'gene-batch' - genes_dispersion can differ between different batches

    • 'gene-label' - genes_dispersion can differ between different labels

  • protein_dispersion

    One of the following:

    • 'protein' - protein_dispersion parameter is constant per protein across cells

    • 'protein-batch' - protein_dispersion can differ between different batches NOT TESTED

    • 'protein-label' - protein_dispersion can differ between different labels NOT TESTED

  • gene_likelihood

    One of:

    • 'nb' - Negative binomial distribution

    • 'zinb' - Zero-inflated negative binomial distribution

  • latent_distribution

    One of:

    • 'normal' - Normal distribution

    • 'ln' - Logistic normal distribution (Normal(0, I) transformed by softmax)

  • empirical_protein_background_prior – Set the initialization of protein background prior empirically. This option fits a GMM for each of 100 cells per batch and averages the distributions. Note that even with this option set to True, this only initializes a parameter that is learned during inference. If False, randomly initializes. The default (None), sets this to True if greater than 10 proteins are used.

  • override_missing_proteins – If True, will not treat proteins with all 0 expression in a particular batch as missing.

  • **model_kwargs – Keyword args for TOTALVAE

Examples

>>> adata = anndata.read_h5ad(path_to_anndata)
>>> scvi.model.TOTALVI.setup_anndata(adata, batch_key="batch", protein_expression_obsm_key="protein_expression")
>>> vae = scvi.model.TOTALVI(adata)
>>> vae.train()
>>> adata.obsm["X_totalVI"] = vae.get_latent_representation()

Notes

See further usage examples in the following tutorials:

  1. /tutorials/notebooks/multimodal/totalVI

  2. /tutorials/notebooks/multimodal/cite_scrna_integration_w_totalVI

  3. /tutorials/notebooks/scrna/scarches_scvi_tools

Attributes:
adata

Data attached to model instance.

adata_manager

Manager instance associated with self.adata.

device

The current device that the module’s params are on.

history

Returns computed metrics during training.

is_trained

Whether the model has been trained.

summary_string

Summary string of the model.

test_indices

Observations that are in test set.

train_indices

Observations that are in train set.

validation_indices

Observations that are in validation set.

Methods

convert_legacy_save(dir_path, output_dir_path)

Converts a legacy saved model (<v0.15.0) to the updated save format.

deregister_manager([adata])

Deregisters the AnnDataManager instance associated with adata.

differential_expression([adata, groupby, ...])

A unified method for differential expression analysis.

get_anndata_manager(adata[, required])

Retrieves the AnnDataManager for a given AnnData object specific to this model instance.

get_elbo([adata, indices, batch_size])

Return the ELBO for the data.

get_feature_correlation_matrix([adata, ...])

Generate gene-gene correlation matrix using scvi uncertainty and expression.

get_from_registry(adata, registry_key)

Returns the object in AnnData associated with the key in the data registry.

get_latent_library_size([adata, indices, ...])

Returns the latent library size for each cell.

get_latent_representation([adata, indices, ...])

Return the latent representation for each cell.

get_likelihood_parameters([adata, indices, ...])

Estimates for the parameters of the likelihood \(p(x, y \mid z)\).

get_marginal_ll([adata, indices, ...])

Return the marginal LL for the data.

get_normalized_expression([adata, indices, ...])

Returns the normalized gene expression and protein expression.

get_protein_background_mean(adata, indices, ...)

Get protein background mean.

get_protein_foreground_probability([adata, ...])

Returns the foreground probability for proteins.

get_reconstruction_error([adata, indices, ...])

Return the reconstruction error for the data.

load(dir_path[, adata, accelerator, device, ...])

Instantiate a model from the saved output.

load_query_data(adata, reference_model[, ...])

Online update of a reference model with scArches algorithm :cite:p:`Lotfollahi21`.

load_registry(dir_path[, prefix])

Return the full registry saved with the model.

posterior_predictive_sample([adata, ...])

Generate observation samples from the posterior predictive distribution.

prepare_query_anndata(adata, reference_model)

Prepare data for query integration.

register_manager(adata_manager)

Registers an AnnDataManager instance with this model class.

save(dir_path[, prefix, overwrite, ...])

Save the state of the model.

setup_anndata(adata, protein_expression_obsm_key)

Sets up the AnnData object for this model.

setup_mudata(mdata[, rna_layer, ...])

Sets up the MuData object for this model.

to_device(device)

Move model to device.

train([max_epochs, lr, accelerator, ...])

Trains the model using amortized variational inference.

view_anndata_setup([adata, ...])

Print summary of the setup for the initial AnnData or a given AnnData object.

view_setup_args(dir_path[, prefix])

Print args used to setup a saved model.

differential_expression(adata: AnnData | None = None, groupby: str | None = None, group1: Iterable[str] | None = None, group2: str | None = None, idx1: Sequence[int] | Sequence[bool] | str | None = None, idx2: Sequence[int] | Sequence[bool] | str | None = None, mode: Literal['vanilla', 'change'] = 'change', delta: float = 0.25, batch_size: int | None = None, all_stats: bool = True, batch_correction: bool = False, batchid1: Iterable[str] | None = None, batchid2: Iterable[str] | None = None, fdr_target: float = 0.05, silent: bool = False, protein_prior_count: float = 0.1, scale_protein: bool = False, sample_protein_mixing: bool = False, include_protein_background: bool = False, **kwargs) pd.DataFrame[source]

A unified method for differential expression analysis.

Implements “vanilla” DE :cite:p:`Lopez18` and “change” mode DE :cite:p:`Boyeau19`.

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • groupby – The key of the observations grouping to consider.

  • group1 – Subset of groups, e.g. [‘g1’, ‘g2’, ‘g3’], to which comparison shall be restricted, or all groups in groupby (default).

  • group2 – If None, compare each group in group1 to the union of the rest of the groups in groupby. If a group identifier, compare with respect to this group.

  • idx1 – idx1 and idx2 can be used as an alternative to the AnnData keys. Custom identifier for group1 that can be of three sorts: (1) a boolean mask, (2) indices, or (3) a string. If it is a string, then it will query indices that verify conditions on adata.obs, as described in pandas.DataFrame.query(). If idx1 is not None, this option overrides group1 and group2.

  • idx2 – Custom identifier for group2 that has the same properties as idx1. By default, includes all cells not specified in idx1.

  • mode – Method for differential expression. See user guide for full explanation.

  • delta – specific case of region inducing differential expression. In this case, we suppose that \(R \setminus [-\delta, \delta]\) does not induce differential expression (change model default case).

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • all_stats – Concatenate count statistics (e.g., mean expression group 1) to DE results.

  • batch_correction – Whether to correct for batch effects in DE inference.

  • batchid1 – Subset of categories from batch_key registered in setup_anndata, e.g. [‘batch1’, ‘batch2’, ‘batch3’], for group1. Only used if batch_correction is True, and by default all categories are used.

  • batchid2 – Same as batchid1 for group2. batchid2 must either have null intersection with batchid1, or be exactly equal to batchid1. When the two sets are exactly equal, cells are compared by decoding on the same batch. When sets have null intersection, cells from group1 and group2 are decoded on each group in group1 and group2, respectively.

  • fdr_target – Tag features as DE based on posterior expected false discovery rate.

  • silent – If True, disables the progress bar. Default: False.

  • protein_prior_count – Prior count added to protein expression before LFC computation

  • scale_protein – Force protein values to sum to one in every single cell (post-hoc normalization)

  • sample_protein_mixing – Sample the protein mixture component, i.e., use the parameter to sample a Bernoulli that determines if expression is from foreground/background.

  • include_protein_background – Include the protein background component as part of the protein expression

  • **kwargs – Keyword args for scvi.model.base.DifferentialComputation.get_bayes_factors()

Return type:

Differential expression DataFrame.
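
A sketch of one-vs-rest testing over an assumed adata.obs[‘cell_type’] grouping with a trained model totalvi (variable name assumed):

>>> de_df = totalvi.differential_expression(groupby="cell_type", mode="change",
...     delta=0.25, batch_correction=True, protein_prior_count=0.1)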

get_feature_correlation_matrix(adata=None, indices=None, n_samples: int = 10, batch_size: int = 64, rna_size_factor: int = 1000, transform_batch: Sequence[Number | str] | None = None, correlation_type: Literal['spearman', 'pearson'] = 'spearman', log_transform: bool = False) pd.DataFrame[source]

Generate gene-gene correlation matrix using scvi uncertainty and expression.

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • n_samples – Number of posterior samples to use for estimation.

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • rna_size_factor – size factor for RNA prior to sampling gamma distribution

  • transform_batch

    Batches to condition on. If transform_batch is:

    • None, then real observed batch is used

    • int, then batch transform_batch is used

    • list of int, then values are averaged over provided batches.

  • correlation_type – One of “pearson”, “spearman”.

  • log_transform – Whether to log transform denoised values prior to correlation calculation.

Return type:

Gene-protein-gene-protein correlation matrix

get_latent_library_size(adata: AnnData | None = None, indices: Sequence[int] | None = None, give_mean: bool = True, batch_size: int | None = None) np.ndarray[source]

Returns the latent library size for each cell.

This is denoted as \(\ell_n\) in the totalVI paper.

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • give_mean – Return the mean or a sample from the posterior distribution.

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.
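
Examples

A brief sketch, assuming a trained model named vae:

>>> lib = vae.get_latent_library_size(give_mean=True)
>>> lib.shape  # one latent library-size value per cell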

get_likelihood_parameters(adata: AnnData | None = None, indices: Sequence[int] | None = None, n_samples: int | None = 1, give_mean: bool | None = False, batch_size: int | None = None) dict[str, np.ndarray][source]

Estimates for the parameters of the likelihood \(p(x, y \mid z)\).

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • n_samples – Number of posterior samples to use for estimation.

  • give_mean – Return the expected value of the parameters or a sample from them.

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.
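
Examples

A brief sketch, assuming a trained model named vae; the keys of the returned dictionary depend on the likelihood used:

>>> params = vae.get_likelihood_parameters(n_samples=10, give_mean=True)
>>> sorted(params.keys())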

get_normalized_expression(adata=None, indices=None, n_samples_overall: int | None = None, transform_batch: Sequence[Number | str] | None = None, gene_list: Sequence[str] | None = None, protein_list: Sequence[str] | None = None, library_size: float | Literal['latent'] | None = 1, n_samples: int = 1, sample_protein_mixing: bool = False, scale_protein: bool = False, include_protein_background: bool = False, batch_size: int | None = None, return_mean: bool = True, return_numpy: bool | None = None) tuple[np.ndarray | pd.DataFrame, np.ndarray | pd.DataFrame][source]

Returns the normalized gene expression and protein expression.

This is denoted as \(\rho_n\) in the totalVI paper for genes, and as \((1-\pi_{nt})\alpha_{nt}\beta_{nt}\) for proteins.

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • n_samples_overall – Number of samples to use in total

  • transform_batch

    Batch to condition on. If transform_batch is:

    • None, then real observed batch is used

    • int, then batch transform_batch is used

    • List[int], then average over batches in list

  • gene_list – Return frequencies of expression for a subset of genes. This can save memory when working with large datasets and few genes are of interest.

  • protein_list – Return protein expression for a subset of genes. This can save memory when working with large datasets and few genes are of interest.

  • library_size – Scale the expression frequencies to a common library size. This allows gene expression levels to be interpreted on a common scale of relevant magnitude.

  • n_samples – Number of posterior samples to use for estimation.

  • sample_protein_mixing – Sample the mixing Bernoulli, setting the background component to zero.

  • scale_protein – Make protein expression sum to 1

  • include_protein_background – Include background component for protein expression

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • return_mean – Whether to return the mean of the samples.

  • return_numpy – Return a np.ndarray instead of a pd.DataFrame. Includes gene names as columns. If either n_samples=1 or return_mean=True, defaults to False. Otherwise, it defaults to True.

Returns:

  • gene_normalized_expression – normalized expression for RNA

  • protein_normalized_expression – normalized expression for proteins

  • If n_samples > 1 and return_mean is False, then the shape is (samples, cells, genes).

  • Otherwise, shape is (cells, genes). Return type is pd.DataFrame unless return_numpy is True.
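
Examples

A minimal sketch, assuming a trained model named vae; the batch category and gene names are hypothetical:

>>> rna, protein = vae.get_normalized_expression(
...     n_samples=25,
...     return_mean=True,
...     transform_batch=["batch_0"],  # hypothetical batch category
...     gene_list=["CD3E", "CD19"],   # hypothetical genes of interest
... )
>>> rna.head(), protein.head()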

get_protein_background_mean(adata, indices, batch_size)[source]

Get protein background mean.

get_protein_foreground_probability(adata: AnnData | None = None, indices: Sequence[int] | None = None, transform_batch: Sequence[Number | str] | None = None, protein_list: Sequence[str] | None = None, n_samples: int = 1, batch_size: int | None = None, return_mean: bool = True, return_numpy: bool | None = None)[source]

Returns the foreground probability for proteins.

This is denoted as \((1 - \pi_{nt})\) in the totalVI paper.

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • transform_batch

    Batch to condition on. If transform_batch is:

    • None, then real observed batch is used

    • int, then batch transform_batch is used

    • List[int], then average over batches in list

  • protein_list – Return protein expression for a subset of genes. This can save memory when working with large datasets and few genes are of interest.

  • n_samples – Number of posterior samples to use for estimation.

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • return_mean – Whether to return the mean of the samples.

  • return_numpy – Return a ndarray instead of a DataFrame. DataFrame includes gene names as columns. If either n_samples=1 or return_mean=True, defaults to False. Otherwise, it defaults to True.

Returns:

  • foreground_probability – probability of foreground for each protein

  • If n_samples > 1 and return_mean is False, then the shape is (samples, cells, proteins).

  • Otherwise, shape is (cells, proteins). In this case, return type is DataFrame unless return_numpy is True.
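
Examples

A brief sketch, assuming a trained model named vae:

>>> fg_prob = vae.get_protein_foreground_probability(n_samples=10, return_mean=True)
>>> fg_prob.head()  # one column per protein, values in [0, 1]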

posterior_predictive_sample(adata: AnnData | None = None, indices: Sequence[int] | None = None, n_samples: int = 1, batch_size: int | None = None, gene_list: Sequence[str] | None = None, protein_list: Sequence[str] | None = None) np.ndarray[source]

Generate observation samples from the posterior predictive distribution.

The posterior predictive distribution is written as \(p(\hat{x}, \hat{y} \mid x, y)\).

Parameters:
  • adata – AnnData object with equivalent structure to initial AnnData. If None, defaults to the AnnData object used to initialize the model.

  • indices – Indices of cells in adata to use. If None, all cells are used.

  • n_samples – Number of required samples for each cell

  • batch_size – Minibatch size for data loading into model. Defaults to scvi.settings.batch_size.

  • gene_list – Names of genes of interest

  • protein_list – Names of proteins of interest

Returns:

x_new – tensor with shape (n_cells, n_genes, n_samples)

Return type:

ndarray
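
Examples

A usage sketch, assuming a trained model named vae; the gene and protein names are hypothetical:

>>> samples = vae.posterior_predictive_sample(
...     n_samples=5,
...     gene_list=["CD3E"],              # hypothetical gene
...     protein_list=["CD3_TotalSeqB"],  # hypothetical protein
... )
>>> samples.shape  # (n_cells, n_selected_features, n_samples)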

classmethod setup_anndata(adata: AnnData, protein_expression_obsm_key: str, protein_names_uns_key: str | None = None, batch_key: str | None = None, layer: str | None = None, size_factor_key: str | None = None, categorical_covariate_keys: list[str] | None = None, continuous_covariate_keys: list[str] | None = None, **kwargs)[source]

Sets up the AnnData object for this model.

A mapping will be created between the data fields used by this model and their respective locations in adata. None of the data in adata are modified. Only adds fields to adata.

Parameters:
  • adata – AnnData object. Rows represent cells, columns represent features.

  • protein_expression_obsm_key – key in adata.obsm for protein expression data.

  • protein_names_uns_key – key in adata.uns for protein names. If None, will use the column names of adata.obsm[protein_expression_obsm_key] if it is a DataFrame, else will assign sequential names to proteins.

  • batch_key – key in adata.obs for batch information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_batch’]. If None, assigns the same batch to all the data.

  • layer – if not None, uses this as the key in adata.layers for raw count data.

  • size_factor_key – key in adata.obs for size factor information. Instead of using library size as a size factor, the provided size factor column will be used as offset in the mean of the likelihood. Assumed to be on linear scale.

  • categorical_covariate_keys – keys in adata.obs that correspond to categorical data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

  • continuous_covariate_keys – keys in adata.obs that correspond to continuous data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

Returns:

  • None. Adds the following fields:

  • .uns[‘_scvi’] – scvi setup dictionary

  • .obs[‘_scvi_labels’] – labels encoded as integers

  • .obs[‘_scvi_batch’] – batch encoded as integers
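
Examples

A minimal sketch mirroring the setup_mudata example below, assuming protein counts are stored in adata.obsm under a hypothetical key and that adata.obs has a hypothetical batch column:

>>> scvi.model.TOTALVI.setup_anndata(
...     adata,
...     protein_expression_obsm_key="protein_expression",  # hypothetical .obsm key
...     batch_key="batch",                                  # hypothetical .obs column
... )
>>> vae = scvi.model.TOTALVI(adata)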

classmethod setup_mudata(mdata: MuData, rna_layer: str | None = None, protein_layer: str | None = None, batch_key: str | None = None, size_factor_key: str | None = None, categorical_covariate_keys: list[str] | None = None, continuous_covariate_keys: list[str] | None = None, modalities: dict[str, str] | None = None, **kwargs)[source]

Sets up the MuData object for this model.

A mapping will be created between the data fields used by this model and their respective locations in mdata. None of the data in mdata are modified. Only adds fields to mdata.

Parameters:
  • mdata – MuData object. Rows represent cells, columns represent features.

  • rna_layer – RNA layer key. If None, will use .X of specified modality key.

  • protein_layer – Protein layer key. If None, will use .X of specified modality key.

  • batch_key – key in adata.obs for batch information. Categories will automatically be converted into integer categories and saved to adata.obs[‘_scvi_batch’]. If None, assigns the same batch to all the data.

  • size_factor_key – key in adata.obs for size factor information. Instead of using library size as a size factor, the provided size factor column will be used as offset in the mean of the likelihood. Assumed to be on linear scale.

  • categorical_covariate_keys – keys in adata.obs that correspond to categorical data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

  • continuous_covariate_keys – keys in adata.obs that correspond to continuous data. These covariates can be added in addition to the batch covariate and are also treated as nuisance factors (i.e., the model tries to minimize their effects on the latent space). Thus, these should not be used for biologically-relevant factors that you do _not_ want to correct for.

  • modalities – Dictionary mapping parameters to modalities.

Examples

>>> mdata = muon.read_10x_h5("pbmc_10k_protein_v3_filtered_feature_bc_matrix.h5")
>>> scvi.model.TOTALVI.setup_mudata(mdata, modalities={"rna_layer": "rna", "protein_layer": "prot"})
>>> vae = scvi.model.TOTALVI(mdata)

train(max_epochs: int | None = None, lr: float = 0.004, accelerator: str = 'auto', devices: int | list[int] | str = 'auto', train_size: float = 0.9, validation_size: float | None = None, shuffle_set_split: bool = True, batch_size: int = 256, early_stopping: bool = True, check_val_every_n_epoch: int | None = None, reduce_lr_on_plateau: bool = True, n_steps_kl_warmup: int | None = None, n_epochs_kl_warmup: int | None = None, adversarial_classifier: bool | None = None, datasplitter_kwargs: dict | None = None, plan_kwargs: dict | None = None, **kwargs)[source]

Trains the model using amortized variational inference.

Parameters:
  • max_epochs – Number of passes through the dataset.

  • lr – Learning rate for optimization.

  • accelerator – Supports passing different accelerator types (“cpu”, “gpu”, “tpu”, “ipu”, “hpu”, “mps”, “auto”) as well as custom accelerator instances.

  • devices – The devices to use. Can be set to a non-negative index (int or str), a sequence of device indices (list or comma-separated str), the value -1 to indicate all available devices, or “auto” for automatic selection based on the chosen accelerator. If set to “auto” and accelerator is not determined to be “cpu”, then devices will be set to the first available device.

  • train_size – Size of training set in the range [0.0, 1.0].

  • validation_size – Size of the validation set. If None, defaults to 1 - train_size. If train_size + validation_size < 1, the remaining cells belong to a test set.

  • shuffle_set_split – Whether to shuffle indices before splitting. If False, the val, train, and test set are split in the sequential order of the data according to validation_size and train_size percentages.

  • batch_size – Minibatch size to use during training.

  • early_stopping – Whether to perform early stopping with respect to the validation set.

  • check_val_every_n_epoch – Check val every n train epochs. By default, val is not checked, unless early_stopping is True or reduce_lr_on_plateau is True. If either of the latter conditions are met, val is checked every epoch.

  • reduce_lr_on_plateau – Reduce learning rate on plateau of validation metric (default is ELBO).

  • n_steps_kl_warmup – Number of training steps (minibatches) to scale weight on KL divergences from 0 to 1. Only activated when n_epochs_kl_warmup is set to None. If None, defaults to floor(0.75 * adata.n_obs).

  • n_epochs_kl_warmup – Number of epochs to scale weight on KL divergences from 0 to 1. Overrides n_steps_kl_warmup when both are not None.

  • adversarial_classifier – Whether to use an adversarial classifier in the latent space. This helps mixing when there are missing proteins in any of the batches. Defaults to True if missing proteins are detected.

  • datasplitter_kwargs – Additional keyword arguments passed into DataSplitter.

  • plan_kwargs – Keyword args for AdversarialTrainingPlan. Keyword arguments passed to train() will overwrite values present in plan_kwargs, when appropriate.

  • **kwargs – Other keyword args for Trainer.
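
Examples

A typical training call, sketched for a model named vae set up as above; the epoch count is illustrative:

>>> vae.train(
...     max_epochs=250,  # illustrative; leaving it as None lets the model pick a default
...     lr=4e-3,
...     early_stopping=True,
...     reduce_lr_on_plateau=True,
... )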