Changelog

Change tags (adopted from sklearn):

- [Major Feature]: something big that you couldn't do before.
- [Feature]: something that you couldn't do before.
- [Efficiency]: an existing feature now may not require as much computation or memory.
- [Enhancement]: a miscellaneous minor improvement.
- [Fix]: something that previously didn't work as documented, or according to reasonable expectations, should now work.
- [API]: you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
Version 0.5.0

- [Fix] CCA now only accepts integer arguments for n_components, upper bounded by the minimum number of view features. Previously it accepted options that MCCA accepts, which are only valid for more than two views; this also caused errors in cca.get_stats (see the sketch after this list). #279 by Ronan Perry.
- [Efficiency] Removed the package dependency on graspy, now called graspologic, since graspy is no longer being maintained. #306 by Gavin Mischler.
- [API] Due to the removal of the graspy dependency, the mvlearn.embed.Omnibus class has been removed. The same functionality with a similar API can be found in graspologic.embed.OmnibusEmbed.
- [Fix] An issue in CTClassifier where the incorrect samples were being removed from the unlabeled_pool has been fixed. #304 by Gavin Mischler.
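A minimal sketch of the updated CCA interface, assuming the mvlearn.embed import path and the get_stats method named above (exact signatures may differ):

    import numpy as np
    from mvlearn.embed import CCA

    rng = np.random.RandomState(0)
    # Two views of the same 100 samples, with 6 and 4 features.
    Xs = [rng.normal(size=(100, 6)), rng.normal(size=(100, 4))]

    # n_components must now be an integer no larger than the
    # smallest view dimension, here min(6, 4) = 4.
    cca = CCA(n_components=2)
    scores = cca.fit_transform(Xs)  # canonical variates for each view

    # Statistics on the fitted canonical correlations.
    stats = cca.get_stats()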
Version 0.4.1

- [Efficiency] The 'graspy' package was made an optional dependency in order to reduce the base installation overhead. To use the Omnibus() object from mvlearn.embed, see the installation guide. #271 by Ronan Perry.
Version 0.4.0
Updates in this release:
mvlearn.compose

- [Major Feature] Adds an mvlearn.compose module with Merger and Splitter objects to create single views from multiviews and vice versa: ConcatMerger, AverageMerger, and SimpleSplitter (see the sketch after this list). #228, #234 by Pierre Ablin.
- [Major Feature] Adds ViewTransformer to apply a single-view transformer to each view separately. #229 by Pierre Ablin, #263 by Ronan Perry.
- [Major Feature] Adds ViewClassifier to apply a single-view classifier to each view separately. #263 by Ronan Perry.
- [Feature] Switches the random_subspace_method and random_gaussian_projection functions to the sklearn-compliant estimators RandomSubspaceMethod and RandomGaussianProjection. #263 by Ronan Perry.
- [API] The mvlearn.construct module was merged into mvlearn.compose due to overlapping functionality. Any import statements change accordingly. #258 by Ronan Perry.
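A minimal sketch of the new compose objects, assuming the mvlearn.compose import path and scikit-learn-style fit_transform calls (exact constructor arguments may differ from the released API):

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from mvlearn.compose import ConcatMerger, SimpleSplitter, ViewTransformer

    rng = np.random.RandomState(0)
    Xs = [rng.normal(size=(50, 3)), rng.normal(size=(50, 5))]  # two views, 50 samples

    # Merge the two views into a single (50, 8) matrix by concatenation.
    X_single = ConcatMerger().fit_transform(Xs)

    # Split a single matrix back into views of 3 and 5 features.
    Xs_split = SimpleSplitter([3, 5]).fit_transform(X_single)

    # Apply one single-view transformer to each view separately.
    Xs_scaled = ViewTransformer(StandardScaler()).fit_transform(Xs)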
mvlearn.construct¶
[API] The
mvlearn.construct
module was merged intomvlearn.compose
due to overlapping functionality and no longer exists. Any imports statements change accordingly. #258 by Ronan Perry.
mvlearn.decomposition

- [Feature] Adds GroupICA and GroupPCA. #225 by Pierre Ablin and Hugo Richard.
mvlearn.embed

- [Feature] Adds Multiview CCA (MCCA) and Kernel MCCA (KMCCA) for two or more views (see the sketch after this list). #249 by Ronan Perry and Iain Carmichael.
- [Feature] Adds CCA, an MCCA which requires 2 views but has a variety of interpretable statistics. #261 by Ronan Perry.
- [API] Removes KCCA and moves its functionality into KMCCA. #261 by Ronan Perry.
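A minimal sketch of the new MCCA estimator, assuming the mvlearn.embed import path and a scikit-learn-style fit_transform (parameters may differ):

    import numpy as np
    from mvlearn.embed import MCCA

    rng = np.random.RandomState(0)
    # Three views of the same 80 samples; classical two-view CCA cannot handle this.
    Xs = [rng.normal(size=(80, d)) for d in (4, 6, 5)]

    # Embed each view into a shared 3-dimensional canonical space.
    mcca = MCCA(n_components=3)
    scores = mcca.fit_transform(Xs)  # canonical variates, one block per view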
mvlearn.model_selection

- [Major Feature] Adds a model_selection module with multiview cross validation. #234 by Pierre Ablin.
- [Feature] Adds the function model_selection.train_test_split to wrap sklearn's train_test_split for multiview data or items. #174 by Alexander Chang and Gavin Mischler.
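A minimal sketch of the multiview split, assuming train_test_split accepts a list of views plus the usual sklearn keyword arguments (the exact signature may differ):

    import numpy as np
    from mvlearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    Xs = [rng.normal(size=(100, 4)), rng.normal(size=(100, 7))]  # two views
    y = rng.randint(0, 2, size=100)

    # Every view (and y) is split with one consistent set of sample indices.
    Xs_train, Xs_test, y_train, y_test = train_test_split(
        Xs, y, test_size=0.25, random_state=0
    )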
mvlearn.utils

- [Enhancement] Adds a parameter to utils.check_Xs so that the function also returns the dimensions (n_views, n_samples, n_features) of the input dataset. #235 by Pierre Ablin.
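A minimal sketch of the extended check_Xs call; the return_dimensions keyword and the ordering of the returned values are assumptions based on the entry above:

    import numpy as np
    from mvlearn.utils import check_Xs

    Xs = [np.ones((10, 3)), np.ones((10, 5))]

    # Assumed flag: also return the number of views, samples, and
    # per-view features alongside the validated list of views.
    Xs_checked, n_views, n_samples, n_features = check_Xs(
        Xs, return_dimensions=True
    )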
Version 0.3.0

Updates in this release:

- cotraining module changed to semi_supervised.
- factorization module changed to decomposition.
- A new class within the semi_supervised module, CTRegressor, a regression tool for 2-view semi-supervised learning following the co-training framework.
- Three multiview ICA methods added: MultiviewICA, GroupICA, PermICA, with a python-picard dependency.
- Added parallelizability to GCCA using joblib and added a partial_fit function to handle streaming or large data.
- Adds a function (get_stats()) to perform statistical tests within the embed.KCCA class so that canonical correlations and canonical variates can be robustly assessed for significance. See the documentation in Reference for more details.
- Adds the ability to select which views to return from the UCI multiple features dataset loader, datasets.UCI_multifeature.
- API enhancements including base classes for each module and algorithm type, allowing for greater flexibility to extend mvlearn.
- Internals of SplitAE changed to snake case to fit with the rest of the package.
- Fixes a bug which prevented visualize.crossviews_plot from plotting when each view has only a single feature.
- Changes to the mvlearn.datasets.gaussian_mixture.GaussianMixture parameters to better mimic sklearn's datasets.
- Fixes a bug with printing error messages in a few classes.
Patch 0.2.1

- Fixed missing __init__.py file in the ajive_utils submodule.
Version 0.2.0

Updates in this release:

- MVMDS can now also accept distance matrices as input, rather than only views of data with samples and features.
- A new clustering algorithm, CoRegMultiviewSpectralClustering, providing co-regularized multi-view spectral clustering functionality.
- Some attribute names slightly changed for more intuitive use in DCCA, KCCA, MVMDS, and CTClassifier.
- Option to use an Incomplete Cholesky Decomposition method in KCCA to reduce computation time.
- A new module, factorization, containing the AJIVE (angle-based joint and individual variance explained) algorithm.
- Fixed an issue where signal dimensions of noise were dependent in the GaussianMixtures class.
- Added a dependency on joblib to enable a parallel clustering implementation.
- Removed the requirements for torchvision and pillow, since they are only used in tutorials.
Version 0.1.0

We're happy to announce the first major stable version of mvlearn. This version includes multiple new algorithms and utility functions, as well as significant enhancements to the documentation. Here are some highlights of the big updates.
- Deep CCA (DCCA) in the embed module
- Updated KCCA with multiple kernels
- Synthetic multi-view dataset generator class, GaussianMixture, in the datasets module
- A new module, plotting, which includes functions for visualizing multi-view data, such as crossviews_plot and quick_visualize
- More detailed tutorial notebooks for all algorithms
Additionally, mvlearn now makes the torch and tqdm dependencies optional, so users who don't need the DCCA or SplitAE functionality do not have to import such a large package. Note this is only the case for installing with pip; installing from conda includes these dependencies automatically. To install the full version of mvlearn with torch and tqdm from pip, you must include the optional torch in brackets:
pip3 install mvlearn[torch]
or
pip3 install --upgrade mvlearn[torch]
To install without torch
, do:
pip3 install mvlearn
or
pip3 install --upgrade mvlearn