
skggm : Gaussian graphical models in scikit-learn

In the last decade, learning networks that encode conditional independence relationships has become an important problem in machine learning and statistics. For many important probability distributions, such as multivariate Gaussians, this amounts to estimating inverse covariance matrices. Inverse covariance estimation is now widely used to infer gene regulatory networks in cellular biology and neural interactions in neuroscience.

However, many statistical advances and best practices in fitting such models to data are not yet widely adopted, nor are they available in common Python packages for machine learning. Furthermore, inverse covariance estimation is an active area of research in which algorithms and estimators continue to improve. With skggm we seek to bring these new developments to a wider audience, and also to enable researchers to effectively benchmark their methods in regimes relevant to their applications of interest.

While skggm is currently geared toward "Gaussian graphical models", we hope to eventually evolve it to support "Generalized graphical models".

Inverse Covariance Estimation

Given n independently drawn, p-dimensional Gaussian random samples X with sample covariance S, the maximum likelihood estimate of the inverse covariance matrix \Theta = \Sigma^{-1} can be computed via the graphical lasso, i.e., the program

\hat{\Theta} = \arg\min_{\Theta \succ 0} \; -\log\det\Theta + \mathrm{tr}(S\Theta) + \|\Theta\|_{\Lambda,1}

where \Lambda is a symmetric non-negative weight matrix and

\|\Theta\|_{\Lambda,1} = \sum_{i,j=1}^{p} \Lambda_{ij} |\Theta_{ij}|

is a regularization term that promotes sparsity [Hsieh et al.]. This is a generalization of the scalar \lambda formulation found in [Friedman et al.] and implemented here.
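For concreteness, the objective above can be evaluated directly. Below is a minimal sketch in plain NumPy (not part of skggm; the function name is ours) for a candidate precision matrix Theta, sample covariance S, and weight matrix Lam:

import numpy as np

def graphical_lasso_objective(Theta, S, Lam):
    # Objective: -log det(Theta) + tr(S Theta) + ||Theta||_{Lam,1}
    sign, logdet = np.linalg.slogdet(Theta)  # Theta must be positive definite
    l1_term = np.sum(Lam * np.abs(Theta))    # weighted elementwise l1 penalty
    return -logdet + np.trace(S @ Theta) + l1_term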

In this package we provide a scikit-learn-compatible implementation of the program above and a collection of modern best practices for working with the graphical lasso. To get started, test out

from inverse_covariance import QuicGraphLassoCV

model = QuicGraphLassoCV()
model.fit(X)                   # X is matrix of shape (n_samples, n_features) 

# outputs: model.covariance_, model.precision_, model.lam_

and then head over to examples/estimator_suite.py for other example usage.


This is an ongoing effort. We'd love your feedback on which algorithms and techniques we should include and how you're using the package. We also welcome contributions.

@jasonlaska and @mnarayn


Included in inverse_covariance

  • QuicGraphLasso [doc]

QuicGraphLasso is an implementation of QUIC wrapped as a scikit-learn compatible estimator [Hsieh et al.]. The estimator can be run in default mode for a fixed penalty or in path mode to explore a sequence of penalties efficiently. The penalty lam can be a scalar or a matrix.

    The primary outputs of interest are: covariance_, precision_, and lam_.

The interface largely mirrors the built-in GraphLasso, although some parameter names have been changed (e.g., alpha to lam). Notable advantages of this implementation over GraphLasso are support for a matrix penalization term and speed.
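A minimal sketch of both modes (the mode and path keyword names are assumptions based on the description above, not verified against the current API):

import numpy as np
from inverse_covariance import QuicGraphLasso

# default mode: fixed penalty (lam may also be an (n_features, n_features) matrix)
model = QuicGraphLasso(lam=0.5, mode='default')
model.fit(X)  # X is a matrix of shape (n_samples, n_features)

# path mode: explore a sequence of penalties efficiently
path_model = QuicGraphLasso(lam=1.0, mode='path', path=np.logspace(-1.5, 1.0, 10))
path_model.fit(X)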

  • QuicGraphLassoCV [doc]

    QuicGraphLassoCV is an optimized cross-validation model selection implementation similar to scikit-learn's GraphLassoCV. As with QuicGraphLasso, this implementation also supports matrix penalization.

  • QuicGraphLassoEBIC [doc]

    QuicGraphLassoEBIC is provided as a convenience class to use the Extended Bayesian Information Criteria (EBIC) for model selection [Foygel et al.].
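A minimal sketch (the gamma keyword, which trades off between ordinary BIC at gamma=0 and sparser EBIC selection at larger values, is an assumption):

from inverse_covariance import QuicGraphLassoEBIC

model = QuicGraphLassoEBIC(gamma=0.1)  # gamma=0 reduces EBIC to BIC
model.fit(X)

# outputs: model.covariance_, model.precision_, model.lam_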

  • ModelAverage [doc]

ModelAverage is an ensemble meta-estimator that computes several fits with a user-specified estimator and averages the support of the resulting precision estimates. The result is a proportion_ matrix indicating the sample probability of a non-zero at each index. This is a similar facility to scikit-learn's RandomizedLasso, but for the graph lasso.

    In each trial, this class will:

    1. Draw bootstrap samples by randomly subsampling X.

    2. Draw a random matrix penalty.

    The random penalty can be chosen in a variety of ways, specified by the penalization parameter. This technique is also known as stability selection or random lasso.
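A minimal sketch (the estimator and n_trials keyword names are assumptions; the penalization parameter is described above):

from inverse_covariance import ModelAverage, QuicGraphLasso

model = ModelAverage(
    estimator=QuicGraphLasso(),  # base estimator refit on each trial
    n_trials=100,
    penalization='random',       # draw a random matrix penalty per trial
)
model.fit(X)

# model.proportion_[i, j]: fraction of trials with a non-zero at index (i, j)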

  • AdaptiveGraphLasso [doc]

    AdaptiveGraphLasso performs a two step estimation procedure:

    1. Obtain an initial sparse estimate.

2. Derive a new penalization matrix from the original estimate. We currently provide three methods for this: binary, 1/|coeffs|, and 1/|coeffs|^2. The binary method only requires the initial estimate's support (and so it can be used with ModelAverage above).

    This technique works well to refine the non-zero precision values given a reasonable initial support estimate.
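A minimal sketch of the two-step procedure (the estimator and method keyword names, and the estimator_ attribute, are assumptions):

from inverse_covariance import AdaptiveGraphLasso, QuicGraphLassoCV

model = AdaptiveGraphLasso(
    estimator=QuicGraphLassoCV(),  # step 1: initial sparse estimate
    method='binary',               # step 2: reweight using the estimated support
)
model.fit(X)

# the refitted estimator is exposed via model.estimator_ (assumption)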

  • inverse_covariance.plot_util.trace_plot

    Utility to plot lam_ paths.

  • inverse_covariance.profiling

Submodule that includes profiling.AverageError and profiling.StatisticalPower to compare performance between methods. This is a work in progress; more to come soon!

Installation

Clone this repo and run

python setup.py install

or install via PyPI with

pip install skggm

The package requires that numpy, scipy, and cython are installed independently into your environment first.

If you would like to fork the pyquic bindings directly, use the Makefile provided in inverse_covariance/pyquic:

cd inverse_covariance/pyquic
make

Tests

To run the tests, execute the following lines.

python -m pytest inverse_covariance/tests/
python -m pytest inverse_covariance/profiling/tests

Examples

Usage

In examples/estimator_suite.py we reproduce the plot_sparse_cov example from the scikit-learn documentation for each method provided (however, the variations chosen are not exhaustive).

An example run for n_examples=100 and n_features=20 yielded the following results.

(n_examples, n_features) = (100, 20)

For slightly higher dimensions of n_examples=600 and n_features=120 we obtained:

(n_examples, n_features) = (600, 120)

Plotting the regularization path

We've provided a utility function inverse_covariance.plot_util.trace_plot that can be used to display the coefficients as a function of lam_. This can be used with any estimator that returns a path. The example in examples/trace_plot_example.py yields:

Trace plot
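A minimal sketch of how such a plot might be produced (the trace_plot argument order and the path-mode keywords used here are assumptions):

import numpy as np
from inverse_covariance import QuicGraphLasso
from inverse_covariance.plot_util import trace_plot

path = np.logspace(-1.5, 1.0, 10)
model = QuicGraphLasso(mode='path', path=path)  # one estimate per penalty
model.fit(X)

# display each coefficient as a function of the penalty path (assumed signature)
trace_plot(model.precision_, path)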

Profiling utilities

We've provided some utilities in inverse_covariance.profiling to compare performance across the estimators.

For example, below is the comparison of the average support error between QuicGraphLassoCV and its randomized model average equivalent (the example found in examples/compare_model_selection.py). The support error of QuicGraphLassoCV is dominated by the false-positive rate, which grows substantially as the number of samples grows.

Brain network functional connectivity

In examples/plot_functional_brain_networks.py and the corresponding Jupyter notebook example/ABIDE_Example, we plot the functional connectivity of brain-wide networks learned from observational data (similar to this example).

Specifically, we extract time series from the ABIDE dataset, with nodes defined using regions of interest from the Power-264 atlas (Power, 2011). The image on the left shows the upper triangle of the resulting precision matrix, and the image on the right shows a top-of-brain connectome depicting the functional connectivity between these regions.

References

BIC / EBIC Model Selection

QuicGraphLasso / QuicGraphLassoCV

Adaptive refitting (two-step methods)

Randomized model averaging

Convergence test