The current algorithm trains the Gaussian process on (possibly mock) input data and then simply fixes the hyperparameters at their best-fitting values throughout. From a Bayesian point of view this is not strictly correct: the hyperparameters should instead be marginalised over, for example with a sequential Monte Carlo sampler, resulting in a so-called Metropolis-within-Gibbs sampling scheme (see for example here). This will be required for realistic tests of the CP.
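A minimal sketch of the Metropolis-within-Gibbs idea for GP hyperparameters (this is not the repository's implementation; the squared-exponential kernel, the log-space parameterisation, and the implicit flat priors in log space are all assumptions for illustration):

```python
import numpy as np

def log_marginal_likelihood(theta, x, y):
    """GP log marginal likelihood; theta = (log_amp, log_ell, log_noise).
    Squared-exponential kernel is an assumption for this sketch."""
    amp, ell, noise = np.exp(theta)
    d = x[:, None] - x[None, :]
    K = amp**2 * np.exp(-0.5 * (d / ell)**2) + noise**2 * np.eye(len(x))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return -np.inf  # reject numerically unstable proposals
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def metropolis_within_gibbs(x, y, theta0, n_steps=500, step=0.1, seed=0):
    """Sweep over hyperparameters, updating one at a time with a
    Metropolis step conditional on the others (flat prior in log space)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    logp = log_marginal_likelihood(theta, x, y)
    chain = np.empty((n_steps, len(theta)))
    for i in range(n_steps):
        for j in range(len(theta)):  # Gibbs sweep over coordinates
            prop = theta.copy()
            prop[j] += step * rng.normal()  # random-walk proposal
            logp_prop = log_marginal_likelihood(prop, x, y)
            if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis accept
                theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

# Mock data, standing in for the real input data
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=30)
chain = metropolis_within_gibbs(x, y, theta0=[0.0, -1.0, -2.0])
print(chain.mean(axis=0))  # posterior means of the log-hyperparameters
```

Downstream predictions would then average over the chain rather than using a single best-fit point, which is what marginalising the hyperparameters amounts to in practice.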