Analyze a pair of models together so as to find the places where their predictions diverge. That is where to conduct an experiment, as it will give the biggest differentiator between the models.
Because the advance of science is much more economical when we can explicitly eliminate the most likely alternative theories, and because formulating the alternative theories and deriving their consequences is preeminently a theoretical task, the central gift of the great methodologist is his facility at formulating and deriving the consequences of alternative theories in such a way that the observations can actually be made to decide the question. (Stinchcombe 1968)
Formal models should excel at this task, as they make it possible to explicitly compare the implications of each formalized theory across a range of parameters.
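As a rough illustration, here's a minimal sketch of that divergence-finding step: two hypothetical one-parameter models (f1, f2) are evaluated over a grid of candidate experimental conditions, and the condition with the largest gap between their predictions is picked out. The models, the input range, and the parameter value are all stand-ins.

```python
import numpy as np

def f1(x, a=1.0):
    """Hypothetical model 1: linear response."""
    return a * x

def f2(x, a=1.0):
    """Hypothetical model 2: saturating response."""
    return a * x / (1.0 + x)

# Candidate experimental conditions, and the gap between predictions at each.
xs = np.linspace(0.0, 10.0, 500)
divergence = np.abs(f1(xs) - f2(xs))

# The most informative place to run the experiment is where the gap is largest.
x_star = xs[np.argmax(divergence)]
print(f"Condition of maximum divergence: x = {x_star:.2f}")
```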
Going further, if you have an understanding of the uncertainties in the parameters, then for each model you can derive a distribution of predicted values at any given point in the parameter space. That gives you the likelihood of observing a given value under the assumption that a particular model is correct (and that one of your models is correct). This is a good setup for model selection using MCMC.
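A minimal sketch of that setup, with toy ingredients throughout (the same hypothetical models as above, an assumed Gaussian prior on the parameter, assumed Gaussian measurement noise, and a made-up observation): pushing prior draws through each model gives the predictive distribution at the chosen condition, and averaging the data likelihood over those draws gives a simple Monte Carlo estimate of p(y | model), from which a Bayes factor follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def f1(x, a):
    return a * x                   # hypothetical model 1

def f2(x, a):
    return a * x / (1.0 + x)       # hypothetical model 2

x_star = 5.0                       # chosen experimental condition
sigma = 0.5                        # assumed measurement noise
y_obs = 3.1                        # a made-up observation at x_star

# Parameter uncertainty, expressed as a prior: a ~ N(1, 0.2^2) (assumed).
a_draws = rng.normal(1.0, 0.2, size=10_000)

def marginal_likelihood(model):
    # Predictive distribution at x_star: push prior draws through the model.
    preds = model(x_star, a_draws)
    # p(y_obs | model) = E_a[ N(y_obs | model(x_star, a), sigma^2) ]
    densities = np.exp(-0.5 * ((y_obs - preds) / sigma) ** 2) \
                / (sigma * np.sqrt(2.0 * np.pi))
    return densities.mean()

m1 = marginal_likelihood(f1)
m2 = marginal_likelihood(f2)
print(f"Bayes factor, model 1 vs model 2: {m1 / m2:.3g}")
```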
We could show that by preferentially conducting experiments at the places where the models diverge, you improve the tightness of your parameter estimates in the MCMC...
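One rough way to check that claim numerically, with everything hypothetical (the model, the noise level, the candidate designs): simulate an observation at a design point where the toy models above diverge and at one where they nearly agree, compute the posterior over the parameter on a grid under a flat prior, and compare posterior widths.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, a):
    return a * x / (1.0 + x)       # hypothetical data-generating model

a_true, sigma = 1.0, 0.1           # assumed true parameter and noise level
a_grid = np.linspace(0.0, 2.0, 2001)

def posterior_sd(x_design):
    # Simulate one experiment at this design point.
    y = f(x_design, a_true) + rng.normal(0.0, sigma)
    # Grid posterior over the parameter, flat prior.
    like = np.exp(-0.5 * ((y - f(x_design, a_grid)) / sigma) ** 2)
    post = like / like.sum()
    mean = (a_grid * post).sum()
    return np.sqrt(((a_grid - mean) ** 2 * post).sum())

# x = 10.0 is roughly where the two toy models above diverge most;
# x = 0.1 is where they nearly agree.
print("posterior sd, divergent design:    ", posterior_sd(10.0))
print("posterior sd, non-divergent design:", posterior_sd(0.1))
```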
JamesPHoughton changed the title from "Design of Experiments..." to "Design of Experiments for model selection" on Feb 9, 2016.
Alternatively, we could use some implementation of Reversible Jump Markov Chain Monte Carlo (RJMCMC). I'm not sure there is a good implementation in Python yet, but we could most likely tap the R implementation.
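Until then, here's a minimal NumPy sketch of the degenerate special case where both candidate models share a single parameter of the same dimension, so the reversible-jump acceptance ratio reduces to an ordinary Metropolis ratio over a joint (model index, parameter) state with no Jacobian term. The data, the flat priors, and the tuning constants are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lik(m, a, x, y, sigma=0.5):
    """Gaussian log-likelihood under model index m (0 = linear, 1 = saturating)."""
    pred = a * x if m == 0 else a * x / (1.0 + x)
    return -0.5 * np.sum(((y - pred) / sigma) ** 2)

# Hypothetical data generated from the saturating model (m = 1).
x = np.linspace(0.1, 10.0, 20)
y = 1.0 * x / (1.0 + x) + rng.normal(0.0, 0.5, size=x.size)

m, a = 0, 1.0
counts = [0, 0]
for _ in range(20_000):
    # Within-model move: random-walk Metropolis on the shared parameter.
    a_new = a + rng.normal(0.0, 0.1)
    if np.log(rng.random()) < log_lik(m, a_new, x, y) - log_lik(m, a, x, y):
        a = a_new
    # Between-model move: propose flipping the model index, keeping a fixed.
    # Dimensions match, so no Jacobian / dimension-matching term is needed.
    m_new = 1 - m
    if np.log(rng.random()) < log_lik(m_new, a, x, y) - log_lik(m, a, x, y):
        m = m_new
    counts[m] += 1

print("posterior model probabilities:", np.array(counts) / sum(counts))
```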
If we get a good example here, it would probably also make a good ISDC/SDR paper.