
What else do we need for postprocessing? #68

Open

topepo opened this issue Jan 22, 2025 · 0 comments

topepo (Member) commented Jan 22, 2025

I have a specific argument to make regarding two potential adjustments. However, it would also be good to get a broader set of opinions from others. Maybe @ryantibs and/or @dajmcdon have thoughts.

My thought: three things that we might consider being optional arguments to the tailor (or an individual adjustment): a reference data set, the mold from the workflow, and perhaps the workflow itself.

Why? Two similar calibration tools prompted these ideas. To demonstrate, let's look at what Cubist does to postprocess; this is discussed and illustrated in a blog post. The other tool is discussed in #67 and has requirements similar to those of the Cubist adjustment.

After the supervised model makes a prediction, Cubist finds the new sample's nearest neighbors in the training set. It adjusts the prediction based on the distances to those neighbors and the training set predictions for them.

We don't have to use the training set; it could conceivably be a calibration set. To generalize, I'll call it the reference data set.
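In rough terms, the adjustment looks something like the sketch below. Everything here is illustrative: the helper name, the `outcome` and `.pred` columns in the reference set, and the inverse-distance weighting are assumptions, and the actual Cubist computation differs in its details.

```r
# Illustrative only: a distance-weighted neighbor adjustment in the
# spirit of Cubist's; the exact Cubist formula differs.
adjust_by_neighbors <- function(new_x, new_pred, reference, neighbors = 5) {
  # `new_x` is a named numeric vector of processed predictors for one new
  # sample; `reference` holds the same predictor columns plus
  # (hypothetical) `outcome` and `.pred` columns.
  ref_x <- as.matrix(reference[names(new_x)])
  # Euclidean distances from the new sample to each reference row
  dists <- sqrt(rowSums(sweep(ref_x, 2, new_x)^2))
  nearest <- order(dists)[seq_len(neighbors)]
  # Closer neighbors get larger weights; the offset avoids division by zero
  wts <- 1 / (dists[nearest] + 0.5)
  wts <- wts / sum(wts)
  # Shift the model's prediction by the neighbors' weighted residuals
  new_pred + sum(wts * (reference$outcome[nearest] - reference$.pred[nearest]))
}
```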

To do this with a tailor, we would already have the current prediction from the model (which other postprocessors may have already adjusted) and, if we have prepared properly, perhaps the reference set predictions as well.

To find the neighbors, we will need to process both the reference set and the new predictors in the same way that the data were originally processed for the supervised model. For this, we'd need the mold from the workflow.
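For reference, hardhat already provides this mechanism: `forge()` uses the blueprint stored in a mold to preprocess new data exactly as it was at fit time. A minimal sketch, where `fitted_wf` and `new_data` are placeholders:

```r
library(workflows)
library(hardhat)

# `fitted_wf` is a placeholder for a fitted workflow and `new_data` for
# new predictor data. The mold's blueprint re-applies the preprocessing
# that the supervised model saw when it was fit.
mold <- extract_mold(fitted_wf)
processed <- forge(new_data, blueprint = mold$blueprint)
head(processed$predictors)
```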

When making the tailor, we could specify the number of neighbors and pass both the reference data set and the mold. We could require the reference set data frame to already contain the predictions for the reference set, avoiding the need for the workflow.
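As a concrete sketch of what that interface could look like (`adjust_neighbors()` is hypothetical and does not exist in tailor):

```r
library(tailor)

# Hypothetical adjustment: `adjust_neighbors()` is not a real tailor
# function. `cal_data` stands in for a reference set whose `.pred`
# column already holds the model's predictions for those rows.
post <- tailor() |>
  adjust_neighbors(
    neighbors = 5,
    reference = cal_data
  )
```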

The presence of the workflow is a little dangerous; it would likely include the tailor itself. Apart from the infinite recursion of adding a workflow to a workflow that contains the tailor adjustment, we would want to avoid people accidentally misapplying the workflow. Let's exclude the workflow as an input to a tailor adjustment but keep the idea of supplying a data set of predictors and/or the workflow mold.

Where would we specify the mold or data: in the main tailor() call or in the individual adjustments? The mold is independent of the data set and would not vary from adjustment to adjustment, so my suggestion would be an option to tailor().
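Under that suggestion, the mold would be supplied once and shared by every adjustment. Again, the `mold` argument and `adjust_neighbors()` are hypothetical:

```r
# Hypothetical `mold` argument to tailor(); extract_mold() itself is a
# real extractor from the workflows package.
post <- tailor(mold = extract_mold(fitted_wf)) |>
  adjust_neighbors(neighbors = 5, reference = cal_data)
```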

The data set probably belongs in the adjustments. Unfortunately, multiple data sets could be involved, depending on what is computed after the model prediction (#4 is relevant).
