mvpy.estimators package#
Submodules#
mvpy.estimators.b2b module#
A collection of estimators for decoding and disentangling features using back-to-back regression.
- class mvpy.estimators.b2b.B2B(alphas: Tensor | ndarray | float | int = 1, **kwargs)#
Bases:
BaseEstimator
Implements a back-to-back regression.
- Parameters:
alphas (Union[torch.Tensor, np.ndarray, float, int], default=1) – The penalties to use for estimation.
fit_intercept (bool, default=True) – Whether to fit an intercept.
normalise (bool, default=True) – Whether to normalise the data.
alpha_per_target (bool, default=False) – Whether to use a different penalty for each target.
normalise_decoder (bool, default=True) – Whether to normalise decoder outputs.
- alphas#
The penalties to use for estimation.
- Type:
Union[torch.Tensor, np.ndarray]
- fit_intercept#
Whether to fit an intercept.
- Type:
bool
- normalise#
Whether to normalise the data.
- Type:
bool
- alpha_per_target#
Whether to use a different penalty for each target.
- Type:
bool
- normalise_decoder#
Whether to normalise decoder outputs.
- Type:
bool
- decoder_#
The decoder.
- Type:
mvpy.estimators.Decoder
- encoder_#
The encoder.
- Type:
mvpy.estimators.Decoder
- scaler_#
The scaler.
- Type:
mvpy.estimators.Scaler
- causal_#
The causal contribution of each feature.
- Type:
Union[torch.Tensor, np.ndarray]
- pattern_#
The decoded patterns.
- Type:
Union[torch.Tensor, np.ndarray]
Notes
The back-to-back estimator is a two-step estimator consisting of a decoder and an encoder. The idea is to first decode all features from the data, and to then regress the true features onto the full set of decoder predictions. Because each true feature is predicted from all decoded features jointly, this yields a disentangled estimate of the causal contribution of each feature.
In practice, this is implemented as:
\[\hat{G} = (Y^T Y + \alpha_Y)^{-1} Y^T X\]
\[\hat{H} = (X^T X + \alpha_X)^{-1} X^T Y \hat{G}\]
where \(\hat{G}\) is the decoder and \(\hat{H}\) is the encoder, and \(\alpha_Y\) and \(\alpha_X\) are regularisation parameters. Note also that, in practice, we take two additional steps:
Firstly, we split the data in half and train the decoder on the first half and the encoder on the second half of the data. This is done to avoid overfitting.
Secondly, we also (offer an option to) normalise the outputs from our decoder. This isn’t technically required, but it can be helpful if you are using, for example, different alpha penalties per target.
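To make these two steps concrete, below is a minimal sketch of the procedure in plain torch, following the equations above with a single fixed penalty a (the split-half training, intercepts and normalisation described above are omitted for brevity; the class itself uses cross-validated ridge estimators):
>>> import torch
>>> Y = torch.normal(0, 1, (100, 60))  # observed data (e.g. channels)
>>> X = torch.normal(0, 1, (100, 3))   # true features
>>> a = 1.0
>>> # step 1 (decoder): predict each feature from the data
>>> G = torch.linalg.solve(Y.T @ Y + a * torch.eye(60), Y.T @ X)
>>> # step 2 (encoder): regress the true features onto all decoded features
>>> H = torch.linalg.solve(X.T @ X + a * torch.eye(3), X.T @ (Y @ G))
>>> H.diagonal()  # analogue of causal_: one disentangled contribution per feature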
For more information on B2B regression, please see [1]_.
References
[1] King, J.R., Charton, F., Lopez-Paz, D., & Oquab, M. (2020). Back-to-back regression: Disentangling the influence of correlated factors from multivariate observations. NeuroImage, 220, 117028. 10.1016/j.neuroimage.2020.117028
Examples
>>> import torch
>>> from mvpy.estimators import B2B
>>> ß = torch.normal(0, 1, (2, 60))
>>> X = torch.normal(0, 1, (100, 2))
>>> y = X @ ß + torch.normal(0, 1, (100, 60))
>>> X, y = y, X
>>> y = torch.cat((y, y.mean(1).unsqueeze(-1) + torch.normal(0, 5, (100, 1))), 1)
>>> b2b = B2B()
>>> b2b.fit(X, y).causal_
tensor([0.4470, 0.4594, 0.0060])
- fit(X, y)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
y (Union[np.ndarray, torch.Tensor]) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predictions.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.classifier module#
A collection of estimators for decoding features using ridge classifiers.
- class mvpy.estimators.classifier.Classifier(alphas: Tensor | ndarray | float | int = 1, method: str = 'OvR', **kwargs)#
Bases:
BaseEstimator
Implements a ridge classifier.
- Parameters:
alphas (Union[torch.Tensor, np.ndarray, float, int], default=1) – The penalties to use for estimation.
method (str, default='OvR') – The method to use for estimation (available: ‘OvR’, ‘OvO’).
fit_intercept (bool, default=True) – Whether to fit an intercept.
normalise (bool, default=True) – Whether to normalise the data.
alpha_per_target (bool, default=False) – Whether to use a different penalty for each target.
- estimators_#
The estimators.
- Type:
List[sklearn.base.BaseEstimator], optional (only for OvO)
- estimator_#
The estimator.
- Type:
sklearn.base.BaseEstimator, optional (only for OvR)
- classes_#
The classes.
- Type:
Dict[int, int]
- intercept_#
The intercepts of the classifiers.
- Type:
Union[np.ndarray, torch.Tensor]
- coef_#
The coefficients of the classifiers.
- Type:
Union[np.ndarray, torch.Tensor]
- pattern_#
The patterns of the classifiers.
- Type:
Union[np.ndarray, torch.Tensor]
Notes
For multi-class classification, the One-vs-Rest strategy is used by default.
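As a rough sketch of what this entails: each class receives its own binary one-vs-rest problem, and the class with the largest decision value wins. With a hypothetical matrix of decision values (one column per class), the prediction step reduces to:
>>> import torch
>>> scores = torch.tensor([[ 0.9, -0.2, -0.5],
...                        [-0.3,  0.1, -0.2]])  # trials x classes
>>> scores.argmax(1)  # predicted class per trial
tensor([0, 1])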
Examples
>>> import torch
>>> from mvpy.estimators import Classifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y = True)
>>> X, y = torch.from_numpy(X).to(torch.float32), torch.from_numpy(y).to(torch.float32)
>>> clf = Classifier(alphas = torch.logspace(-5, 10, 20))
>>> clf.fit(X, y)
>>> clf.predict(X).shape
torch.Size([150])
- fit(X, y)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
y (Union[np.ndarray, torch.Tensor]) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predictions.
- Return type:
Union[np.ndarray, torch.Tensor]
- predict_proba(X)#
Predict class probabilities from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predicted class probabilities.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.covariance module#
A collection of estimators for covariance estimation and pre-whitening of data.
- class mvpy.estimators.covariance.Covariance(method='LedoitWolf', s_min=None, s_max=None)#
Bases:
BaseEstimator
Class for computing covariance, precision and whitening matrices. Note that calling transform on this class will whiten the data.
- Parameters:
method (str, default = 'LedoitWolf') – The method to use for covariance estimation (available: 'Empirical', 'LedoitWolf').
s_min (float, default = None) – The minimum sample to consider in the time dimension.
s_max (float, default = None) – The maximum sample to consider in the time dimension.
- covariance_#
Covariance matrix
- Type:
Union[np.ndarray, torch.Tensor]
- precision_#
Precision matrix (inverse of covariance matrix)
- Type:
Union[np.ndarray, torch.Tensor]
- whitener_#
Whitening matrix
- Type:
Union[np.ndarray, torch.Tensor]
- shrinkage_#
Shrinkage parameter, if used by method.
- Type:
float, optional
Notes
This class assumes features to be the second to last dimension of the data, unless there are only two dimensions (in which case it is assumed to be the last dimension).
Currently, we support the following methods:
- Empirical:
This method simply computes the biased empirical covariance matrix.
- LedoitWolf:
This method computes the Ledoit-Wolf shrinkage estimator as detailed in [1]_.
References
[1] Ledoit, O., & Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88, 365-411. 10.1016/S0047-259X(03)00096-4
Examples
>>> import torch
>>> from mvpy.estimators import Covariance
>>> X = torch.normal(0, 1, (100, 10, 100))
>>> cov = Covariance().fit(X)
>>> cov.covariance_.shape
torch.Size([10, 10])
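As a quick sanity check on the fitted whitener, one can transform the data and inspect the empirical covariance of the result, which should be approximately the identity matrix (a sketch; the exact values depend on the data and on the shrinkage):
>>> X = torch.normal(0, 1, (1000, 10))
>>> W = Covariance().fit_transform(X)
>>> torch.cov(W.T)  # approximately torch.eye(10)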
- clone()#
Obtain a clone of this class.
- Returns:
The cloned object.
- Return type:
Covariance
- fit(X, *args)#
Fit the covariance estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Data to fit the estimator on.
*args (Any) – Additional arguments to pass to the estimator.
- Returns:
self – Fitted covariance estimator.
- Return type:
Covariance
- fit_transform(X, *args)#
Fit the covariance estimator and whiten the data.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Data to fit the estimator on and transform.
*args (Any) – Additional arguments to pass to the estimator.
- Returns:
W – Whitened data.
- Return type:
Union[np.ndarray, torch.Tensor]
- to_numpy()#
Create the numpy estimator. Note that this function cannot be used for conversion.
- Returns:
The numpy estimator.
- Return type:
- to_torch()#
Create the torch estimator. Note that this function cannot be used for conversion.
- Returns:
The torch estimator.
- Return type:
- transform(X, *args)#
Whiten data using the fitted covariance estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Data to transform.
*args (Any) – Additional arguments to pass to the estimator.
- Returns:
W – Whitened data.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.decoder module#
A collection of estimators for decoding features using ridge decoders.
- class mvpy.estimators.decoder.Decoder(alphas: Tensor | ndarray | float | int = 1, **kwargs)#
Bases:
BaseEstimator
Implements a simple linear ridge decoder.
- Parameters:
alphas (Union[torch.Tensor, np.ndarray, float, int], default=1) – The penalties to use for estimation.
fit_intercept (bool, default=True) – Whether to fit an intercept.
normalise (bool, default=True) – Whether to normalise the data.
alpha_per_target (bool, default=False) – Whether to use a different penalty for each target.
- estimator_#
The ridge estimator.
- Type:
mvpy.estimators.RidgeCV
- pattern_#
The decoded pattern.
- Type:
Union[torch.Tensor, np.ndarray]
- coef_#
The coefficients of the decoder.
- Type:
Union[torch.Tensor, np.ndarray]
- intercept_#
The intercepts of the decoder.
- Type:
Union[torch.Tensor, np.ndarray]
- alpha_#
The penalties used for estimation.
- Type:
Union[torch.Tensor, np.ndarray]
Notes
After fitting the decoder, this class will also estimate the decoded patterns. This follows the approach detailed in [4]_. Please also be aware that, while this class supports decoding multiple features at once, these are principally separate regressions whose individual contributions are not disentangled. If you would like to disentangle them, please consider using a back-to-back decoder.
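For reference, the pattern estimation of [4]_ amounts to projecting the decoder weights back through the data covariance, \(A = \Sigma_X W \Sigma_{\hat{s}}^{-1}\). A minimal sketch of that transformation (the class computes this for you and stores the result in pattern_):
>>> import torch
>>> X = torch.normal(0, 1, (100, 60))  # data the decoder was fit on
>>> W = torch.normal(0, 1, (60, 5))    # decoder weights (channels x targets)
>>> S = X @ W                          # latent decoder outputs
>>> pattern = torch.cov(X.T) @ W @ torch.linalg.inv(torch.cov(S.T))
>>> pattern.shape
torch.Size([60, 5])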
References
[4] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110. 10.1016/j.neuroimage.2013.10.067
Examples
>>> import torch
>>> from mvpy.estimators import Decoder
>>> X = torch.normal(0, 1, (100, 5))
>>> ß = torch.normal(0, 1, (5, 60))
>>> y = X @ ß + torch.normal(0, 1, (100, 60))
>>> decoder = Decoder(alphas = torch.logspace(-5, 10, 20)).fit(y, X)
>>> decoder.pattern_.shape
torch.Size([60, 5])
>>> decoder.predict(y).shape
torch.Size([100, 5])
- fit(X, y)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
y (Union[np.ndarray, torch.Tensor]) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predictions.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.encoder module#
A collection of estimators for encoding features using ridge regressions.
- class mvpy.estimators.encoder.Encoder(alphas: Tensor | ndarray | float | int = 1, **kwargs)#
Bases:
BaseEstimator
Implements a simple linear ridge encoder. This class essentially just wraps mvpy.estimators.RidgeCV, exposing this as a more convenient name.
- Parameters:
alphas (Union[torch.Tensor, np.ndarray, float, int], default=1) – The penalties to use for estimation.
kwargs (Any) – Additional arguments.
- alphas#
The penalties to use for estimation.
- Type:
torch.Tensor
- kwargs#
Additional arguments for the estimator.
- Type:
Any
- estimator#
The estimator to use.
- Type:
mvpy.estimators.RidgeCV
- intercept_#
The intercepts of the encoder.
- Type:
torch.Tensor
- coef_#
The coefficients of the encoder.
- Type:
torch.Tensor
Notes
This class is a wrapper around mvpy.estimators.RidgeCV. Really, it exists mostly to make the code more readable.
However, this class may also be used for a temporally expanded encoder. This may be useful in cases where you would like to encode in time, but would like to impose the constraint that alpha should be fit over all time points. To use this class in this manner, simply supply 3D X and y tensors.
Examples
Let’s say we want to do a very simple encoding:
>>> import torch
>>> from mvpy.estimators import Encoder
>>> ß = torch.normal(0, 1, (50,))
>>> X = torch.normal(0, 1, (100, 50))
>>> y = X @ ß
>>> y = y[:,None] + torch.normal(0, 1, (100, 1))
>>> encoder = Encoder().fit(X, y)
>>> encoder.coef_.shape
torch.Size([1, 50])
Next, let’s assume we want to do a temporally expanded encoding instead:
>>> import torch
>>> from mvpy.estimators import Encoder
>>> X = torch.normal(0, 1, (240, 5, 100))
>>> ß = torch.normal(0, 1, (60, 5, 100))
>>> y = torch.stack([torch.stack([X[:,:,i] @ ß[j,:,i] for i in range(X.shape[2])], 0) for j in range(ß.shape[0])], 0).swapaxes(0, 2).swapaxes(1, 2)
>>> y = y + torch.normal(0, 1, y.shape)
>>> encoder = Encoder().fit(X, y)
>>> encoder.coef_.shape
torch.Size([60, 5, 100])
- fit(X, y)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
y (Union[np.ndarray, torch.Tensor]) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predictions.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.ridgecv module#
A collection of estimators for fitting cross-validated ridge regressions.
- class mvpy.estimators.ridgecv.RidgeCV(alphas: ndarray | Tensor | list | float | int = 1, fit_intercept: bool = True, normalise: bool = True, alpha_per_target: bool = False)#
Bases:
BaseEstimator
Implements RidgeCV using torch as our backend.
- Parameters:
alphas (Union[torch.Tensor, np.ndarray, list, float, int], default=1) – Penalties to use for estimation.
fit_intercept (bool, default=True) – Whether to fit an intercept.
normalise (bool, default=True) – Whether to normalise the data.
alpha_per_target (bool, default=False) – Whether to use a different penalty for each target.
- alpha_#
The penalties used for estimation.
- Type:
torch.Tensor
- intercept_#
The intercepts.
- Type:
torch.Tensor
- coef_#
The coefficients.
- Type:
torch.Tensor
Notes
This class owes greatly to J.R. King’s RidgeCV implementation [3]_. If data are supplied as numpy arrays, this class will fall back to sklearn.linear_model.RidgeCV [4]_.
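For intuition, the ridge leave-one-out errors that drive the alpha search can be computed in closed form from a single SVD of the design matrix, which is what makes a cross-validated alpha search cheap. A minimal sketch of that textbook identity (the actual implementation may differ in details such as intercepts and normalisation):
>>> import torch
>>> X, y = torch.normal(0, 1, (240, 5)), torch.normal(0, 1, (240,))
>>> alphas = torch.logspace(-5, 5, 11)
>>> U, s, Vt = torch.linalg.svd(X, full_matrices = False)
>>> Uty = U.T @ y
>>> loo = []
>>> for alpha in alphas:
...     d = s**2 / (s**2 + alpha)                 # per-component shrinkage
...     e = (y - U @ (d * Uty)) / (1 - U**2 @ d)  # closed-form LOO residuals
...     loo.append((e**2).mean())
>>> alphas[torch.stack(loo).argmin()]             # the selected penalty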
References
[3] King, J.R. (2020). torch_ridge. kingjr/torch_ridge
[4] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., … & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.
Examples
>>> import torch
>>> from mvpy.estimators import RidgeCV
>>> ß = torch.normal(0, 1, size = (5,))
>>> X = torch.normal(0, 1, size = (240, 5))
>>> y = ß @ X.T + torch.normal(0, 0.5, size = (X.shape[0],))
>>> model = RidgeCV().fit(X, y)
>>> model.coef_
- fit(X, y)#
Fit the estimator.
- Parameters:
X (torch.Tensor) – The features.
y (torch.Tensor) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (torch.Tensor) – The features.
- Returns:
y – The predictions.
- Return type:
torch.Tensor
mvpy.estimators.rsa module#
A collection of estimators for computing representational similarities.
- class mvpy.estimators.rsa.RSA(grouped=False, estimator=<function euclidean>, n_jobs=None, verbose=False)#
Bases:
BaseEstimator
Implements representational similarity analysis as an estimator. Note that this class expects features to be the second to last dimension.
- Parameters:
grouped (bool, default=False) – Whether to use a grouped RSA (this is required for cross-validated metrics to make sense, irrelevant otherwise).
estimator (callable, default=mv.math.euclidean) – The estimator/metric to use for RDM computation.
n_jobs (int, default=None) – Number of jobs to run in parallel (default = None).
verbose (bool, default=False) – Whether to print progress information.
- rdm_#
The representational (dis)similarity matrix.
- Type:
Union[np.ndarray, torch.Tensor]
- cx_#
The upper triangular indices of the RDM.
- Type:
Union[np.ndarray, torch.Tensor]
- cy_#
The upper triangular indices of the RDM.
- Type:
Union[np.ndarray, torch.Tensor]
- grouped#
Whether the RSA is grouped.
- Type:
bool
- estimator#
The estimator/metric to use for RDM computation.
- Type:
Callable
- n_jobs#
Number of jobs to run in parallel.
- Type:
int
- verbose#
Whether to print progress information.
- Type:
bool
Notes
If you would like to perform, for example, a cross-validated RSA using mvpy.math.cv_euclidean(), you should make sure that the first dimension in your data is trials, whereas the second dimension groups them meaningfully. The resulting RDM will then be computed over groups, with cross-validation over trials.
For more information on representational similarity, please see [2]_.
Examples
Let’s assume we have some data with 100 trials and 5 groups, recording 10 channels over 50 time points:
>>> import torch
>>> from mvpy.math import euclidean, cv_euclidean
>>> from mvpy.estimators import RSA
>>> X = torch.normal(0, 1, (100, 5, 10, 50))
>>> rsa = RSA(estimator = euclidean)
>>> rsa.transform(X).shape
torch.Size([4950, 5, 50])
If we want to compute a cross-validated RSA over the groups instead, we can use:
>>> rsa = RSA(grouped = True, estimator = cv_euclidean)
>>> rsa.transform(X).shape
torch.Size([10, 50])
Finally, if we want to plot the full RDM, we can do:
>>> rdm = torch.zeros((5, 5, 50))
>>> rdm[rsa.cx_, rsa.cy_] = rsa.rdm_
>>> import matplotlib.pyplot as plt
>>> plt.imshow(rdm[...,0], cmap = 'RdBu_r')
Note that if you would like to perform a decoding RSA, you can use an OvR classifier instead. For example, let’s assume we have data from 100 trials, 10 channels and 50 time points, where trials belong to 5 distinct classes:
>>> from mvpy.estimators import Classifier
>>> X = torch.normal(0, 1, (100, 10, 50))
>>> y = torch.randint(0, 5, (100, 50))
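One hedged way to complete such a decoding RSA is to use pairwise decoding accuracy as the dissimilarity, shown here for a single time point (training accuracy is used only for brevity; a cross-validated score would be the principled choice):
>>> from itertools import combinations
>>> t = 0                              # a single time point
>>> rdm = torch.zeros(5, 5)
>>> for a, b in combinations(range(5), 2):
...     mask = (y[:,t] == a) | (y[:,t] == b)
...     Xt, yt = X[mask,:,t], (y[mask,t] == b).to(torch.float32)
...     clf = Classifier().fit(Xt, yt)
...     rdm[a, b] = rdm[b, a] = (clf.predict(Xt) == yt).to(torch.float32).mean()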
References
[2] Kriegeskorte, N. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. 10.3389/neuro.06.004.2008
See also
mvpy.math.euclidean(), mvpy.math.cv_euclidean(), mvpy.math.cosine(), mvpy.math.cosine_d(), mvpy.math.pearsonr(), mvpy.math.pearsonr_d(), mvpy.math.spearmanr(), mvpy.math.spearmanr_d()
- fit(X, *args)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data to compute the RDM for.
args (Any) – Additional arguments
- Return type:
Any
- fit_transform(X, *args)#
Fit the estimator and transform data into representational similarity.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data to compute the RDM for.
args (Any) – Additional arguments
- Returns:
rdm – The representational similarity.
- Return type:
Union[np.ndarray, torch.Tensor]
- full_rdm()#
Obtain the full representational similarity matrix.
- Returns:
rdm – The representational similarity matrix in full.
- Return type:
Union[np.ndarray, torch.Tensor]
- to_numpy()#
Make this estimator use the numpy backend. Note that this method does not support conversion between types.
- Returns:
The estimator.
- Return type:
sklearn.base.BaseEstimator
- to_torch()#
Make this estimator use the torch backend. Note that this method does not support conversion between types.
- Returns:
The estimator.
- Return type:
sklearn.base.BaseEstimator
- transform(X, *args)#
Transform the data into representational similarity.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data to compute the RDM for.
args (Any) – Additional arguments
- Returns:
rdm – The representational similarity.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.scaler module#
A collection of estimators for scaling data.
- class mvpy.estimators.scaler.Scaler(with_mean=True, with_std=True, dims=None)#
Bases:
BaseEstimator
A standard scaler akin to sklearn.preprocessing.StandardScaler. See notes for some differences.
- Parameters:
with_mean (bool, default=True) – If True, center the data before scaling.
with_std (bool, default=True) – If True, scale the data to unit variance.
dims (int, list or tuple of ints, default=None) – The dimensions over which to scale (None for first dimension).
copy (bool, default=False) – If True, the data will be copied.
- shape_#
The shape of the data.
- Type:
tuple
- mean_#
The mean of the data.
- Type:
Union[np.ndarray, torch.Tensor]
- var_#
The variance of the data.
- Type:
Union[np.ndarray, torch.Tensor]
- scale_#
The scale of the data.
- Type:
Union[np.ndarray, torch.Tensor]
Notes
This scaler is analogous to sklearn.preprocessing.StandardScaler, except that it supports n-dimensional arrays and uses a degrees-of-freedom correction when computing variances; unlike sklearn’s scaler, it does not support step-wise (partial) fitting.
By default, this scaler will compute:
\[z = \frac{x - \mu}{\sigma}\]where \(\mu\) is the mean and \(\sigma\) is the standard deviation of the data.
Examples
>>> import torch
>>> from mvpy.estimators import Scaler
>>> X = torch.normal(5, 10, (1000, 5))
>>> print(X.std(0))
tensor([ 9.7033, 10.2510, 10.2483, 10.1274, 10.2013])
>>> scaler = Scaler().fit(X)
>>> X_s = scaler.transform(X)
>>> print(X_s.std(0))
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000])
>>> X_i = scaler.inverse_transform(X_s)
>>> print(X_i.std(0))
tensor([ 9.7033, 10.2510, 10.2483, 10.1274, 10.2013])
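Because the scaler supports n-dimensional arrays, dims can pool the statistics over several dimensions at once. For example, assuming data shaped trials x channels x time and that dims selects the dimensions over which statistics are computed (as the parameter description suggests), each channel can be scaled using statistics pooled over trials and time:
>>> X = torch.normal(5, 10, (100, 10, 50))  # trials x channels x time
>>> X_s = Scaler(dims = (0, 2)).fit_transform(X)
>>> X_s.mean((0, 2))  # per-channel means, approximately zero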
- fit(X, *args, sample_weight=None)#
Fit the scaler.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data.
args (Any) – Additional arguments.
sample_weight (Union[np.ndarray, torch.Tensor], default=None) – The sample weights.
- Return type:
Any
- fit_transform(X, *args, sample_weight=None)#
Fit and transform the data in one step.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data.
args (Any) – Additional arguments.
sample_weight (Union[np.ndarray, torch.Tensor], default=None) – The sample weights.
- Returns:
The transformed data.
- Return type:
Union[np.ndarray, torch.Tensor]
- inverse_transform(X, *args)#
Invert the transform of the data.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data.
args (Any) – Additional arguments.
- Returns:
The inverse transformed data.
- Return type:
Union[np.ndarray, torch.Tensor]
- set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → Scaler#
Configure whether metadata should be requested to be passed to the fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the sample_weight parameter in fit.
- Returns:
self – The updated object.
- Return type:
Scaler
- to_numpy()#
Select the numpy scaler. Note that this cannot be called for conversion.
- Returns:
The numpy scaler.
- Return type:
_Scaler_numpy
- to_torch()#
Select the torch scaler. Note that this cannot be called for conversion.
- Returns:
The torch scaler.
- Return type:
_Scaler_torch
- transform(X, *args)#
Transform the data using the scaler.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The data.
args (Any) – Additional arguments.
- Returns:
The transformed data.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.sliding module#
A collection of estimators that allow for sliding other estimators over a dimension of the data.
- class mvpy.estimators.sliding.Sliding(estimator: Callable | BaseEstimator, dims: int | tuple | list | ndarray | Tensor = -1, n_jobs: int | None = None, top: bool = True, verbose: bool = False)#
Bases:
BaseEstimator
Implements a sliding estimator that allows you to fit estimators iteratively over a set of dimensions.
- Parameters:
estimator (Callable, sklearn.base.BaseEstimator) – Estimator to use.
dims (Union[int, tuple, list, np.ndarray, torch.Tensor], default=-1) – Dimensions to slide over.
n_jobs (Union[int, None], default=None) – Number of jobs to run in parallel.
top (bool, default=True) – Is this a top-level estimator?
verbose (bool, default=False) – Whether to print progress.
- estimators_#
List of fitted estimators.
- Type:
list
Notes
This class generally expects that your input data is of shape (n_trials, […], n_channels, […]). Make sure that your data and dimension selection are appropriate for the estimator you wish to fit. Note also that, when fitting estimators, X and y must have an equal number of dimensions; if they do not, simply pad the smaller one with singleton dimensions (see the sketch after the example below). Finally, be aware that, if you want to use numpy as your backend, you must supply dims as a numpy array.
Examples
>>> import torch
>>> from mvpy.estimators import Sliding, Decoder
>>> X = torch.normal(0, 1, (240, 50, 4, 100))  # trials x searchlights x channels x time
>>> y = torch.normal(0, 1, (240, 1, 5, 100))   # trials x searchlights x outcomes x time
>>> decoder = Decoder(alphas = torch.logspace(-5, 10, 20))
>>> sliding = Sliding(estimator = decoder, dims = (1, 3), n_jobs = 4)  # slide over searchlights and time
>>> sliding.fit(X, y)
>>> patterns = sliding.collect('pattern_')
>>> patterns.shape
torch.Size([50, 100, 4, 5])
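For example, if your targets lack a dimension relative to X (say, y is of shape (240, 5, 100) while X has four dimensions as above), a minimal way to satisfy the equal-dimensions requirement is to insert a singleton axis before fitting:
>>> y = torch.normal(0, 1, (240, 5, 100))
>>> y = y.unsqueeze(1)  # pad with a singleton searchlight dimension
>>> y.shape
torch.Size([240, 1, 5, 100])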
- collect(attr)#
Collect the attribute of the estimators.
- Parameters:
attr (str) – Attribute to collect.
- Returns:
Collected attribute.
- Return type:
Union[np.ndarray, torch.Tensor]
- fit(X, y, *args)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Input data.
y (Union[np.ndarray, torch.Tensor]) – Target data.
*args – Additional arguments.
- fit_transform(X, y, *args)#
Fit and transform the data.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Input data.
y (Union[np.ndarray, torch.Tensor]) – Target data.
*args (Any) – Additional arguments.
- Returns:
Transformed data.
- Return type:
Union[np.ndarray, torch.Tensor]
- predict(X, y=None, *args)#
Predict the targets.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Input data.
y (Union[np.ndarray, torch.Tensor, None], default=None) – Target data.
*args – Additional arguments.
- Returns:
Predicted targets.
- Return type:
Union[np.ndarray, torch.Tensor]
- predict_proba(X, y=None, *args)#
Predict the probabilities.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Input data.
y (Union[np.ndarray, torch.Tensor, None], default=None) – Target data.
*args – Additional arguments.
- Returns:
Predicted probabilities.
- Return type:
Union[np.ndarray, torch.Tensor]
- transform(X, y=None, *args)#
Transform the data.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – Input data.
y (Union[np.ndarray, torch.Tensor, None], default=None) – Target data.
*args (Any) – Additional arguments.
- Returns:
Transformed data.
- Return type:
Union[np.ndarray, torch.Tensor]
mvpy.estimators.timedelayed module#
A collection of estimators for TimeDelayed modeling (mTRF + SR).
- class mvpy.estimators.timedelayed.TimeDelayed(t_min: float, t_max: float, fs: int, alphas: Tensor = tensor([1]), patterns: bool = False, **kwargs)#
Bases:
BaseEstimator
Implements TimeDelayed regression.
- Parameters:
t_min (float) – The minimum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
t_max (float) – The maximum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
fs (int) – The sampling frequency.
alphas (Union[np.ndarray, torch.Tensor], default=torch.tensor([1])) – The penalties to use for estimation.
patterns (bool, default=False) – Should patterns be estimated?
kwargs (Any) – Additional arguments for the estimator.
- alphas#
The penalties to use for estimation.
- Type:
Union[np.ndarray, torch.Tensor]
- kwargs#
Additional arguments.
- Type:
Any
- patterns#
Should patterns be estimated?
- Type:
bool
- t_min#
The minimum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- Type:
float
- t_max#
The maximum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- Type:
float
- fs#
The sampling frequency.
- Type:
int
- window#
The window to use for estimation.
- Type:
Union[np.ndarray, torch.Tensor]
- estimator#
The estimator to use.
- Type:
mvpy.estimators.RidgeCV
- f_#
The number of output features.
- Type:
int
- c_#
The number of input features.
- Type:
int
- w_#
The number of time delays.
- Type:
int
- intercept_#
The intercepts of the estimator.
- Type:
Union[np.ndarray, torch.Tensor]
- coef_#
The coefficients of the estimator.
- Type:
Union[np.ndarray, torch.Tensor]
- pattern_#
The patterns of the estimator.
- Type:
Union[np.ndarray, torch.Tensor]
Notes
This class allows estimation of either multivariate temporal response functions (mTRF) or stimulus reconstruction (SR) models.
mTRFs are estimated as:
\[r(t, n) = \sum_\tau w(\tau, n) s(t - \tau) + \epsilon\]
where \(r(t, n)\) is the reconstructed signal at timepoint \(t\) for channel \(n\), \(s(t)\) is the stimulus at time \(t\), \(w(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\), and \(\epsilon\) is the error.
SR models are estimated as:
\[s(t) = \sum_n \sum_\tau r(t + \tau, n) g(\tau, n)\]
where \(s(t)\) is the reconstructed stimulus at time \(t\), \(r(t + \tau, n)\) is the neural response at time \(t\) lagged by \(\tau\) for channel \(n\), and \(g(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\).
For more information on mTRF or SR models, see [1]_.
Note that for SR models it is recommended to also pass patterns=True to estimate not only the coefficients but also the patterns that were actually used for reconstructing stimuli. For more information, see [2]_.
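Under the hood, both model types reduce to an ordinary ridge problem on time-lagged copies of the predictors. Below is a rough sketch of that delay embedding (a hypothetical helper, not the class’s internal API; note that torch.roll wraps around at the edges, whereas real implementations zero-pad instead):
>>> import torch
>>> def delay_embed(x, lags):
...     # x: (n_times,); returns (n_times, n_lags), one shifted copy per lag
...     return torch.stack([torch.roll(x, int(l)) for l in lags], dim = -1)
>>> x = torch.arange(6.)
>>> delay_embed(x, range(-1, 2)).shape
torch.Size([6, 3])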
References
[1] Crosse, M.J., Di Liberto, G.M., Bednar, A., & Lalor, E.C. (2016). The multivariate temporal response function (mTRF) toolbox: A MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 10, 604. 10.3389/fnhum.2016.00604
[2] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110. 10.1016/j.neuroimage.2013.10.067
Examples
For mTRF estimation, we can do:
>>> import torch
>>> from mvpy.estimators import TimeDelayed
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.normal(0, 1, (100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> trf = TimeDelayed(-2, 2, 1, alphas = 1e-5)
>>> trf.fit(X, y).coef_
tensor([[[0.9290, 1.9101, 2.8802, 1.9790, 0.9453]]])
For stimulus reconstruction, we can do:
>>> import torch
>>> from mvpy.estimators import TimeDelayed
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.arange(50)[None,None,:] * torch.ones((100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> X, y = y, X
>>> sr = TimeDelayed(-2, 2, 1, alphas = 1e-3, patterns = True).fit(X, y)
>>> sr.predict(X).mean(0)[0,:]
tensor([ 1.3591,  1.2549,  1.5662,  2.3544,  3.3440,  4.3683,  5.4097,  6.4418,
         7.4454,  8.4978,  9.5206, 10.5374, 11.5841, 12.6102, 13.6254, 14.6939,
        15.6932, 16.7168, 17.7619, 18.8130, 19.8182, 20.8687, 21.8854, 22.9310,
        23.9270, 24.9808, 26.0085, 27.0347, 28.0728, 29.0828, 30.1400, 31.1452,
        32.1793, 33.2047, 34.2332, 35.2717, 36.2945, 37.3491, 38.3800, 39.3817,
        40.3962, 41.4489, 42.4854, 43.4965, 44.5346, 45.5716, 46.7301, 47.2251,
        48.4449, 48.8793])
- fit(X, y)#
Fit the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
y (Union[np.ndarray, torch.Tensor]) – The targets.
- predict(X)#
Predict from the estimator.
- Parameters:
X (Union[np.ndarray, torch.Tensor]) – The features.
- Returns:
The predictions.
- Return type:
Union[np.ndarray, torch.Tensor]