# gpytorch-mogp

`gpytorch-mogp` is a Python package that extends GPyTorch to support multiple-output Gaussian processes where the correlation between the outputs is known.

> **Note:** The package is currently in an early stage of development. Expect bugs and breaking changes in future versions. Please create an issue if you encounter any problems or have any suggestions.
## Installation

```sh
pip install gpytorch-mogp
```
> **Note:** If you want to use GPU acceleration, you should manually install PyTorch with CUDA support before running the above command.
If you want to run the examples, you should install the package from source instead. First, clone the repository:

```sh
git clone https://github.com/dnv-opensource/gpytorch-mogp.git
```

Then install the package with the `examples` dependencies:

```sh
pip install .[examples]
```
To run the comparison notebook, you also need to install `rapid-models` from source.

> **Note:** The version of `rapid-models` linked to above is private at the time of writing, so you may not be able to install it.
## Usage

Usage is similar to the official Multitask GP Regression example in GPyTorch.

The package provides a custom `MultiOutputKernel` module that is used similarly to `MultitaskKernel`, and a custom `FixedNoiseMultiOutputGaussianLikelihood` that is used similarly to `MultitaskGaussianLikelihood`. The `MultiOutputKernel` wraps one or more base kernels, producing a joint covariance matrix for the outputs. The `FixedNoiseMultiOutputGaussianLikelihood` allows fixed noise to be added to the joint covariance matrix.
```python
import gpytorch
from gpytorch_mogp import MultiOutputKernel, FixedNoiseMultiOutputGaussianLikelihood


# Define a multi-output GP model
class MultiOutputGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        # Reuse the `MultitaskMean` module from `gpytorch`
        self.mean_module = gpytorch.means.MultitaskMean(gpytorch.means.ConstantMean(), num_tasks=2)
        # Use the custom `MultiOutputKernel` module
        self.covar_module = MultiOutputKernel(gpytorch.kernels.MaternKernel(), num_outputs=2)

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        # Reuse the `MultitaskMultivariateNormal` distribution from `gpytorch`
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)


# Training data
train_x = ...  # (n,) or (n, num_inputs)
train_y = ...  # (n, num_outputs)
train_noise = ...  # (n, num_outputs, num_outputs)

# Initialize the model
likelihood = FixedNoiseMultiOutputGaussianLikelihood(noise=train_noise, num_tasks=2)
model = MultiOutputGPModel(train_x, train_y, likelihood)

# Training
model.train()
...

# Testing
model.eval()
test_x = ...  # (m,) or (m, num_inputs)
test_noise = ...  # (m, num_outputs, num_outputs)
f_preds = model(test_x)
y_preds = likelihood(model(test_x), noise=test_noise)
```
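The elided training step (`...` above) is ordinary exact-GP training. Below is a minimal sketch using stock GPyTorch/PyTorch machinery, reusing the names defined above; the optimizer, learning rate, and iteration count are arbitrary choices for illustration, not package recommendations:

```python
import torch

# Standard exact-GP training loop; nothing here is specific to gpytorch-mogp
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

model.train()
likelihood.train()
for _ in range(100):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)  # negative marginal log-likelihood
    loss.backward()
    optimizer.step()
```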
> **Note:** `MultiOutputKernel` currently uses `num_outputs`, while `MultitaskMean` and `FixedNoiseMultiOutputGaussianLikelihood` use `num_tasks`. In this example, they mean the same thing. However, multi-output and multi-task can have different meanings in other contexts. The reason for this inconsistency is that the `num_tasks` argument comes from `gpytorch`, while the `num_outputs` argument comes from `gpytorch-mogp` (`FixedNoiseMultiOutputGaussianLikelihood` inherits it from `gpytorch`). This inconsistency may be addressed in future versions of the package.
> **Note:** The `MultiOutputKernel` produces an interleaved block diagonal covariance matrix by default, as that is the convention used in `gpytorch`. If you want to produce a non-interleaved block diagonal covariance matrix, you can pass `interleaved=False` to the `MultiOutputKernel` constructor.
>
> `MultitaskMultivariateNormal` expects an interleaved block diagonal covariance matrix by default. If you want to use a non-interleaved block diagonal covariance matrix, you can pass `interleaved=False` to the `MultitaskMultivariateNormal` constructor.
>
> `FixedNoiseMultiOutputGaussianLikelihood` expects the same noise structure regardless of interleaving, but will internally apply interleaving to the noise before adding it to the covariance matrix. Interleaving is applied by default. If you want to avoid this, you can pass `interleaved=False` to the `FixedNoiseMultiOutputGaussianLikelihood` constructor. The likelihood should always use the same interleaving setting as the kernel.
>
> **WARNING:** The `interleaved=False` option is not working as expected at the moment. Avoid using it for now.
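For orientation, the sketch below shows where the `interleaved` flag would be passed in each of the three components, reusing names from the usage example above. Given the warning, treat it as a map of the API surface rather than a recommended configuration:

```python
# WARNING: `interleaved=False` is currently not working as expected (see above).
# All three components must agree on the interleaving setting.
covar_module = MultiOutputKernel(
    gpytorch.kernels.MaternKernel(), num_outputs=2, interleaved=False
)
likelihood = FixedNoiseMultiOutputGaussianLikelihood(
    noise=train_noise, num_tasks=2, interleaved=False
)
# ... and inside `forward`:
# return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x, interleaved=False)
```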
See also the example notebooks/scripts for more usage examples.

See the comparison notebook for a comparison between the `gpytorch-mogp` package and the `rapid-models` package (demonstrating that the two packages produce the same results for one example).
## Known Issues

- Constructing a GP with `interleaved=False` in `MultiOutputKernel`, `MultitaskMultivariateNormal`, and `FixedNoiseMultiOutputGaussianLikelihood` does not work as expected. Avoid using `interleaved=False` for now.
- The output of `MultiOutputKernel` is dense at the moment, as indexing into a block [linear operator](https://github.com/cornellius-gp/linear_operator) is not working as expected.
- Important docstrings are missing.
- The code style is not consistent with the `gpytorch` code style (note that `gpytorch` itself does not have a good and consistent code style).
- There are currently no tests for the package, except for some doctests in the docstrings. The primary goal is to replicate the behavior of `rapid-models`, which is currently checked by comparing the results from one example.
- Most issues are probably unknown at this point. Please create an issue if you encounter any problems ...
## Terminology

- Multi-output GP model: A Gaussian process model that produces multiple outputs for a given input, i.e. a model that predicts a vector-valued output for a given input.
- Multi-output kernel: A kernel that wraps one or more base kernels, producing a joint covariance matrix for the outputs. The joint covariance matrix is typically block diagonal, with each block corresponding to the output of a single base kernel.
- Block matrix: A matrix that is partitioned into blocks. E.g. a block matrix with $2$ blocks per side of size $3$ (blocks $A$, $B$, $C$, $D$) is structured like:

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & b_{11} & b_{12} & b_{13} \\
a_{21} & a_{22} & a_{23} & b_{21} & b_{22} & b_{23} \\
a_{31} & a_{32} & a_{33} & b_{31} & b_{32} & b_{33} \\
c_{11} & c_{12} & c_{13} & d_{11} & d_{12} & d_{13} \\
c_{21} & c_{22} & c_{23} & d_{21} & d_{22} & d_{23} \\
c_{31} & c_{32} & c_{33} & d_{31} & d_{32} & d_{33}
\end{bmatrix}
$$
- Interleaved block matrix: A block matrix where the blocks are interleaved. E.g. an interleaved block matrix with $2$ blocks per side of size $3$ (the same blocks $A$, $B$, $C$, $D$) is structured like:

$$
\begin{bmatrix}
a_{11} & b_{11} & a_{12} & b_{12} & a_{13} & b_{13} \\
c_{11} & d_{11} & c_{12} & d_{12} & c_{13} & d_{13} \\
a_{21} & b_{21} & a_{22} & b_{22} & a_{23} & b_{23} \\
c_{21} & d_{21} & c_{22} & d_{22} & c_{23} & d_{23} \\
a_{31} & b_{31} & a_{32} & b_{32} & a_{33} & b_{33} \\
c_{31} & d_{31} & c_{32} & d_{32} & c_{33} & d_{33}
\end{bmatrix}
$$
- (Non-interleaved) block diagonal covariance matrix: A joint covariance matrix that is block diagonal, with each block corresponding to the output of a single base kernel. E.g. for a multi-output kernel with two base kernels, $K_{\alpha}$ and $K_{\beta}$, the joint covariance matrix is given by:

$$
K = \begin{bmatrix}
K_{\alpha} & 0 \\
0 & K_{\beta}
\end{bmatrix}
$$
- Interleaved block diagonal covariance matrix: Similar to a block diagonal covariance matrix, but with the blocks interleaved, so that rows and columns are ordered by data point first and output second (a small sketch of this permutation follows below):

$$
K = \begin{bmatrix}
K_{\alpha,11} & 0 & K_{\alpha,12} & 0 & \cdots \\
0 & K_{\beta,11} & 0 & K_{\beta,12} & \cdots \\
K_{\alpha,21} & 0 & K_{\alpha,22} & 0 & \cdots \\
0 & K_{\beta,21} & 0 & K_{\beta,22} & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}
$$
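To make the interleaving permutation concrete, here is a small PyTorch sketch (plain `torch` only, not part of the `gpytorch-mogp` API) that builds a non-interleaved block diagonal matrix from two $3 \times 3$ blocks and reorders it into the interleaved layout:

```python
import torch

n, num_outputs = 3, 2

# Two base-kernel covariance blocks (any symmetric PSD matrices would do)
K_alpha = torch.eye(n)
K_beta = 2.0 * torch.eye(n)

# Non-interleaved layout: rows/columns ordered output-major, [[K_alpha, 0], [0, K_beta]]
K_block_diag = torch.block_diag(K_alpha, K_beta)

# Interleaved layout: reorder rows/columns to be data-point-major,
# i.e. map index (output * n + point) -> (point * num_outputs + output)
perm = torch.arange(n * num_outputs).reshape(num_outputs, n).T.flatten()
K_interleaved = K_block_diag[perm][:, perm]

print(K_block_diag)   # diag(1, 1, 1, 2, 2, 2)
print(K_interleaved)  # diag(1, 2, 1, 2, 1, 2)
```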
## Contributing

Contributions are welcome! Please create an issue or a pull request if you have any suggestions or improvements.

To get started contributing, clone the repository:

```sh
git clone https://github.com/dnv-opensource/gpytorch-mogp.git
```

Then install the package in editable mode with all dependencies:

```sh
pip install -e .[all]
```
## License

`gpytorch-mogp` is distributed under the terms of the MIT license.