Hello!
Thank you for the very interesting article on Distill! I am pretty new to the topic, so maybe my question is naive, but I would like to know why I need a pre-selected kernel to calculate Cov(X, X'). Can't I calculate the covariance matrix from the data itself?
I am also a bit uncertain why we need both X and Y for the test and training data, and why they have different dimensions. In a machine learning setting, X and Y come from the same data pool; the test set is just separated out according to some sampling strategy.
There are some machine learning approaches where the covariance matrix is computed directly from the data; PCA comes to mind first. In GPs, however, the covariance matrix serves a slightly different purpose: it puts two points X and X' into context by defining which values are probable at X' when the function takes a certain value at X, and vice versa. A somewhat related approach is taken by Kernel PCA.
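To make the contrast concrete, here is a small sketch of the two views. The inputs, the RBF kernel choice, and the sample count are my own illustrative assumptions, not from the article: the kernel defines a covariance for *any* pair of inputs (including unseen test points), while an empirical covariance, as in PCA, can only be estimated from samples you actually observed.

```python
import numpy as np

# Hypothetical 1-D inputs (illustrative, not from the article).
X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel: k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-sq_dists / (2.0 * length_scale**2))

# GP view: the kernel *defines* Cov(f(x), f(x')) for any pair of inputs,
# with no samples of f needed.
K = rbf_kernel(X, X)

# Empirical view (as in PCA): the covariance is *estimated* from observed
# samples. Here we draw samples from the GP prior and estimate it back.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(len(X)), K, size=2000)
C = np.cov(samples, rowvar=False)

# With enough samples, the empirical estimate C approaches the kernel K.
print(np.max(np.abs(C - K)))
```

The point of the sketch: `K` exists before any data is seen and can be evaluated at brand-new test inputs, which is exactly what GP prediction needs; `C` only covers the points you sampled.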
I agree that the notation of GPs is sometimes confusing in the context of machine learning. We are currently tracking this in issue #41.