Changelog

Change tags (adopted from sklearn):

  • [Major Feature] : something big that you couldn’t do before.

  • [Feature] : something that you couldn’t do before.

  • [Efficiency] : an existing feature now may not require as much computation or memory.

  • [Enhancement] : a miscellaneous minor improvement.

  • [Fix] : something that previously didn’t work as documented – or according to reasonable expectations – should now work.

  • [API] : you will need to change your code to have the same effect in the future; or a feature will be removed in the future.

Version 0.5.0

  • [Fix] CCA now only accepts integer values for n_components, upper bounded by the minimum number of features across views. Previously it accepted options inherited from MCCA that are only valid for more than two views, which also caused errors in cca.get_stats. #279 by Ronan Perry.

  • [Efficiency] Removed the package dependency on graspy (since renamed graspologic), as graspy is no longer maintained. #306 by Gavin Mischler.

  • [API] Due to the removal of the graspy dependency, the mvlearn.embed.Omnibus class has been removed. The same functionality with a similar API can be found in graspologic.embed.OmnibusEmbed.

  • [Fix] An issue in CTClassifier where the incorrect samples were being removed from the unlabeled_pool has been fixed. #304 by Gavin Mischler.

Version 0.4.1

  • [Efficiency] The 'graspy' package was made an optional dependency in order to reduce the base installation overhead. To use the Omnibus() object from mvlearn.embed, see the installation guide. #271 by Ronan Perry.

Version 0.4.0

Updates in this release:

mvlearn.compose

  • [Major Feature] Adds an mvlearn.compose module with Merger and Splitter objects to create single views from multiviews and vice versa: ConcatMerger, AverageMerger, and SimpleSplitter. #228, #234 by Pierre Ablin.

  • [Major Feature] Adds ViewTransformer to apply a single view transformer to each view separately. #229 by Pierre Ablin, #263 by Ronan Perry.

  • [Major Feature] Adds ViewClassifier to apply a single view classifier to each view separately. #263 by Ronan Perry.

  • [Feature] Switches random_subspace_method and random_gaussian_projection functions to sklearn-compliant estimators RandomSubspaceMethod and RandomGaussianProjection. #263 by Ronan Perry.

  • [API] The mvlearn.construct module was merged into mvlearn.compose due to overlapping functionality. Any import statements change accordingly. #258 by Ronan Perry.
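Conceptually, the new mergers and splitters are inverse operations on a list of view matrices. A minimal numpy sketch of the round trip (illustrating the idea behind ConcatMerger, AverageMerger, and SimpleSplitter, not their actual mvlearn implementations):

```python
import numpy as np

# Two "views" of the same 4 samples: 3 and 2 features respectively.
X1 = np.arange(12.0).reshape(4, 3)
X2 = np.arange(8.0).reshape(4, 2)
Xs = [X1, X2]

# Concatenation-style merge: stack views feature-wise into one matrix.
merged = np.hstack(Xs)  # shape (4, 5)

# Average-style merge needs equal feature counts; here we truncate X1 to match.
averaged = np.mean([X1[:, :2], X2], axis=0)  # shape (4, 2)

# Simple split: recover the views from the merged matrix given feature counts.
n_features = [X.shape[1] for X in Xs]
splits = np.split(merged, np.cumsum(n_features)[:-1], axis=1)

assert all(np.array_equal(a, b) for a, b in zip(splits, Xs))
```

The split is the exact inverse of the concatenating merge, which is what lets single-view estimators be sandwiched between multiview steps.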

mvlearn.construct

  • [API] The mvlearn.construct module was merged into mvlearn.compose due to overlapping functionality and no longer exists. Any import statements change accordingly. #258 by Ronan Perry.

mvlearn.embed

  • [Feature] Adds Multiview CCA (MCCA) and Kernel MCCA (KMCCA) for two or more views. #249 by Ronan Perry and Iain Carmichael.

  • [Feature] Adds CCA, a version of MCCA that requires exactly two views but provides a variety of interpretable statistics. #261 by Ronan Perry.

  • [API] Removes KCCA and moves its functionality into KMCCA. #261 by Ronan Perry.
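For readers new to the method, classical two-view CCA reduces to an SVD after whitening each view: the canonical correlations are the singular values of the product of the whitened bases. A from-scratch numpy sketch of that computation on synthetic data (illustrative only, not the mvlearn CCA/MCCA code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two views sharing a common latent signal in their first column.
n = 500
z = rng.normal(size=(n, 1))
X = np.hstack([z, rng.normal(size=(n, 2))]) + 0.1 * rng.normal(size=(n, 3))
Y = np.hstack([z, rng.normal(size=(n, 2))]) + 0.1 * rng.normal(size=(n, 3))

def cca_correlations(X, Y):
    """Canonical correlations of two views via centering, whitening, and SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def whiten(A):
        # Orthonormal basis for the column space of A (A = U @ diag(s) @ Vt).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U

    Qx, Qy = whiten(X), whiten(Y)
    # Singular values of Qx^T Qy are the canonical correlations, in [0, 1].
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

corrs = cca_correlations(X, Y)
# The shared latent dimension yields one correlation near 1.
```

The remaining correlations come from independent noise and stay well below the leading one.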

mvlearn.utils

  • [Enhancement] Adds a parameter to utils.check_Xs so that the function also returns the dimensions (n_views, n_samples, n_features) of the input dataset. #235 by Pierre Ablin.
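The shape-returning behavior can be pictured with a small stand-alone helper (a hypothetical check_views function mirroring the idea behind utils.check_Xs and its new parameter; the names and signature here are illustrative, not mvlearn's):

```python
import numpy as np

def check_views(Xs, return_dimensions=False):
    """Validate a list of view arrays; optionally return their dimensions.

    Hypothetical sketch of a multiview input check, not mvlearn's code.
    """
    Xs = [np.asarray(X) for X in Xs]
    if len(Xs) == 0:
        raise ValueError("At least one view is required.")
    n_samples = Xs[0].shape[0]
    if any(X.shape[0] != n_samples for X in Xs):
        raise ValueError("All views must have the same number of samples.")
    if return_dimensions:
        n_views = len(Xs)
        n_features = [X.shape[1] for X in Xs]
        return Xs, n_views, n_samples, n_features
    return Xs

Xs, n_views, n_samples, n_features = check_views(
    [np.zeros((10, 3)), np.zeros((10, 5))], return_dimensions=True
)
```

Returning the dimensions alongside the validated views saves each estimator from recomputing them.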

Version 0.3.0

Updates in this release:

  • cotraining module changed to semi_supervised.

  • factorization module changed to decomposition.

  • A new class within the semi_supervised module, CTRegressor, a regression tool for 2-view semi-supervised learning following the cotraining framework.

  • Three multiview ICA methods added, with a python-picard dependency: MultiviewICA, GroupICA, and PermICA.

  • Added parallelizability to GCCA using joblib and added partial_fit function to handle streaming or large data.

  • Adds a function (get_stats()) to perform statistical tests within the embed.KCCA class so that canonical correlations and canonical variates can be robustly assessed for significance. See the documentation in Reference for more details.

  • Adds ability to select which views to return from the UCI multiple features dataset loader, datasets.UCI_multifeature.

  • API enhancements including base classes for each module and algorithm type, allowing for greater flexibility to extend mvlearn.

  • Internals of SplitAE changed to snake case to fit with the rest of the package.

  • Fixes a bug which prevented the visualize.crossviews_plot from plotting when each view only has a single feature.

  • Changes to the mvlearn.datasets.gaussian_mixture.GaussianMixture parameters to better mimic sklearn's datasets.

  • Fixes a bug with printing error messages in a few classes.
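The per-view parallelism added to GCCA follows a generic map-over-views pattern. A stdlib sketch of that pattern (using concurrent.futures in place of joblib purely for illustration; view_svd is a hypothetical stand-in for the per-view work):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def view_svd(X, rank=2):
    """Per-view work: a truncated SVD, the kind of step GCCA can parallelize."""
    U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return U[:, :rank] * s[:rank]

rng = np.random.default_rng(0)
Xs = [rng.normal(size=(50, d)) for d in (4, 6, 8)]

# Map the independent per-view computation across views in parallel.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(view_svd, Xs))
```

Because each view's decomposition is independent, the work parallelizes with no coordination beyond collecting the results.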

Patch 0.2.1

Fixed missing __init__.py file in the ajive_utils submodule.

Version 0.2.0

Updates in this release:

  • MVMDS can now also accept distance matrices as input, rather than only views of data with samples and features

  • A new clustering algorithm, CoRegMultiviewSpectralClustering - co-regularized multi-view spectral clustering functionality

  • Some attribute names slightly changed for more intuitive use in DCCA, KCCA, MVMDS, CTClassifier

  • Option to use an Incomplete Cholesky Decomposition method for KCCA to reduce computation times

  • A new module, factorization, containing the AJIVE algorithm - angle-based joint and individual variance explained

  • Fixed an issue in the GaussianMixtures class where the signal dimensions of the noise were dependent

  • Added a dependency on joblib to enable a parallel clustering implementation

  • Removed the requirements for torchvision and pillow, since they are only used in tutorials
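The Incomplete (pivoted) Cholesky trick approximates an n-by-n kernel matrix K by a tall low-rank factor G with K ≈ G Gᵀ, so downstream KCCA-style computations scale with the rank rather than n. A generic sketch of the decomposition itself (not mvlearn's implementation):

```python
import numpy as np

def incomplete_cholesky(K, rank, tol=1e-10):
    """Pivoted incomplete Cholesky: low-rank G such that K ≈ G @ G.T.

    Generic sketch of the technique referenced for KCCA, not mvlearn's code.
    """
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float)  # residual diagonal of K - G @ G.T
    for j in range(rank):
        i = int(np.argmax(d))      # greedy pivot: largest remaining diagonal
        if d[i] < tol:             # residual negligible; stop early
            return G[:, :j]
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

# Low-rank structure: an RBF kernel on 40 one-dimensional points.
rng = np.random.default_rng(0)
x = rng.normal(size=40)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
G = incomplete_cholesky(K, rank=10)
err = np.linalg.norm(K - G @ G.T) / np.linalg.norm(K)
```

Because RBF kernel eigenvalues decay rapidly, a rank-10 factor already reproduces the 40-by-40 matrix to high relative accuracy.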

Version 0.1.0

We’re happy to announce the first major stable version of mvlearn. This version includes multiple new algorithms, more utility functions, as well as significant enhancements to the documentation. Here are some highlights of the big updates.

  • Deep CCA (DCCA) in the embed module

  • Updated KCCA with multiple kernels

  • Synthetic multi-view dataset generator class, GaussianMixture, in the datasets module

  • A new module, plotting, which includes functions for visualizing multi-view data, such as crossviews_plot and quick_visualize

  • More detailed tutorial notebooks for all algorithms

Additionally, mvlearn now makes the torch and tqdm dependencies optional, so users who don’t need the DCCA or SplitAE functionality do not have to install such large packages. Note this is only the case for installing with pip; installing from conda includes these dependencies automatically. To install the full version of mvlearn with torch and tqdm from pip, include the optional extra torch in brackets:

pip3 install mvlearn[torch]

or

pip3 install --upgrade mvlearn[torch]

To install without torch, do:

pip3 install mvlearn

or

pip3 install --upgrade mvlearn