
Commit

docs: updates re: comments
William Bakst committed Nov 16, 2023
1 parent 3640fea commit ef6a4ef
Showing 7 changed files with 27 additions and 18 deletions.
8 changes: 6 additions & 2 deletions docs/README.md
@@ -2,6 +2,10 @@

A PyTorch implementation of constrained optimization and modeling techniques

+- **Transparent Models**: Glassbox models to provide increased interpretability and insights into your ML models.
+- **Shape Constraints**: Embed domain knowledge directly into the model through feature constraints.
+- **Rate Constraints (Coming soon...)**: Optimize any PyTorch model under a set of constraints on rates (e.g. FPR < 1%). Rates can be calculated both for the entire dataset and for specific slices.
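To make the rate-constraint bullet concrete, here is a plain-Python sketch of how a rate such as FPR can be computed over the full dataset and over a specific slice. All names here are illustrative, not part of the PyTorch Lattice API:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over binary labels and predictions."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

y_true = [0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "b", "b", "b"]  # hypothetical slice labels

overall = false_positive_rate(y_true, y_pred)  # 1 FP out of 4 negatives
slice_b = false_positive_rate(
    [t for t, g in zip(y_true, groups) if g == "b"],
    [p for p, g in zip(y_pred, groups) if g == "b"],
)
```

A rate constraint such as FPR < 1% would then be enforced during optimization on `overall`, `slice_b`, or both.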

---

[![GitHub stars](https://img.shields.io/github/stars/ControlAI/pytorch-lattice.svg)](https://github.com/ControlAI/pytorch-lattice/stargazers)
@@ -73,11 +77,11 @@ pyl.plots.calibrator(clf.model, "thal")

## Contributing

-PyTorch Lattice welcomes contributions from the community! See the [contribution guide](CONTRIBUTING.md) for more information on the development workflow. For bugs and feature requests, visit our [GitHub Issues](https://github.com/ControlAI/pytorch-lattice/issues) and check out our [templates](https://github.com/ControlAI/pytorch-lattice/tree/main/.github/ISSUE_TEMPLATES).
+PyTorch Lattice welcomes contributions from the community! See the [contribution guide](contributing.md) for more information on the development workflow. For bugs and feature requests, visit our [GitHub Issues](https://github.com/ControlAI/pytorch-lattice/issues) and check out our [templates](https://github.com/ControlAI/pytorch-lattice/tree/main/.github/ISSUE_TEMPLATES).

## How To Help

-Any and all help is greatly appreciated! Check out our page on [how you can help](HELP.md).
+Any and all help is greatly appreciated! Check out our page on [how you can help](help.md).

## Roadmap

6 changes: 3 additions & 3 deletions docs/concepts/calibrators.md
@@ -2,8 +2,8 @@

Calibrators are one of the core concepts of the PyTorch Lattice library. The library currently implements two types of calibrators:

-- [`CategoricalCalibrator`](/pytorch-lattice/api/layers/#pytorch_lattice.layers.CategoricalCalibrator): calibrates a categorical value through a mapping from a category to a learned value.
-- [`NumericalCalibrator`](/pytorch-lattice/api/layers/#pytorch_lattice.layers.NumericalCalibrator): calibrates a numerical value through a learned piece-wise linear function.
+- [`CategoricalCalibrator`](../api/layers.md#pytorch_lattice.layers.CategoricalCalibrator): calibrates a categorical value through a mapping from a category to a learned value.
+- [`NumericalCalibrator`](../api/layers.md#pytorch_lattice.layers.NumericalCalibrator): calibrates a numerical value through a learned piece-wise linear function.

Categorical Calibrator | Numerical Calibrator
:------------------------------:|:----------------------------------------:
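The learned piece-wise linear function behind a numerical calibrator can be sketched in a few lines of plain Python. This is an illustration of the concept only, not the library's `NumericalCalibrator` implementation:

```python
def piecewise_linear(x, keypoints, outputs):
    """Map x through learned (keypoint, output) pairs; clamp outside the range."""
    if x <= keypoints[0]:
        return outputs[0]
    if x >= keypoints[-1]:
        return outputs[-1]
    for i in range(len(keypoints) - 1):
        x0, x1 = keypoints[i], keypoints[i + 1]
        y0, y1 = outputs[i], outputs[i + 1]
        if x0 <= x <= x1:
            # linear interpolation between the two surrounding keypoints
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

keypoints = [0.0, 1.0, 2.0]
outputs = [0.0, 0.5, 0.6]  # the values a calibrator would learn from data
piecewise_linear(1.5, keypoints, outputs)  # halfway between 0.5 and 0.6
```

During training, the `outputs` values are the learned parameters; the flat or steep segments they produce are what make the calibrator plots interpretable.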
@@ -17,7 +17,7 @@ There are three primary benefits to using feature calibrators:

- Automated Feature Pre-Processing. Rather than relying on the practitioner to determine how to best transform each feature, feature calibrators learn the best transformations from the data.
- Additional Interpretability. Plotting calibrators as bar/line charts helps visualize how the model is understanding each feature. For example, if two input values for a feature have the same calibrated value, then the model considers those two input values equivalent with respect to the prediction.
-- [Shape Constraints](shape_constraints). Calibrators can be constrained to guarantee certain expected input/output behavior. For example, you might want a monotonicity constraint on a feature for square footage to ensure that increasing square footage always increases predicted price. Or perhaps you want a concavity constraint such that increasing a feature for price first increases and then decreases predicted sales.
+- [Shape Constraints](shape_constraints.md). Calibrators can be constrained to guarantee certain expected input/output behavior. For example, you might want a monotonicity constraint on a feature for square footage to ensure that increasing square footage always increases predicted price. Or perhaps you want a concavity constraint such that increasing a feature for price first increases and then decreases predicted sales.

## Output Calibration

6 changes: 3 additions & 3 deletions docs/concepts/classifier.md
@@ -1,6 +1,6 @@
# Classifier

-The [`Classifier`](/pytorch-lattice/api/classifier) class is a high-level wrapper around the calibrated modeling functionality to make it extremely easy to fit a calibrated model to a classification task. The class uses declarative configuration and automatically handles the data preparation, feature configuration, model creation, and model training necessary for properly training a calibrated model.
+The [`Classifier`](../api/classifier.md) class is a high-level wrapper around the calibrated modeling functionality to make it extremely easy to fit a calibrated model to a classification task. The class uses declarative configuration and automatically handles the data preparation, feature configuration, model creation, and model training necessary for properly training a calibrated model.

## Initialization

@@ -66,7 +66,7 @@ model_config = pyl.model_configs.LinearConfig(use_bias=False)
clf = pyl.Classifier(["list", "of", "features"], model_config)
```

-See [Model Types](model_types.md) for more information on the supported model types and [model_configs](/pytorch-lattice/api/model_configs) for more information on configuring these models in a classifier.
+See [Model Types](model_types.md) for more information on the supported model types and [model_configs](../api/model_configs.md) for more information on configuring these models in a classifier.

## Feature Configuration

@@ -76,7 +76,7 @@ When you first initialize a calibrator, all features will be initialized using d
clf.configure("feature").monotonicity("increasing").num_keypoints(10)
```

-See [feature_configs](/pytorch-lattice/api/feature_config/) for all of the available configuration options.
+See [feature_configs](../api/feature_config.md) for all of the available configuration options.
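The chained `configure(...)` call above follows the fluent-builder pattern: each setter records a value and returns the config object so calls chain. A minimal plain-Python sketch of this pattern, using hypothetical names rather than the library's `FeatureConfig` implementation:

```python
class FeatureConfigSketch:
    """Hypothetical illustration of a chainable feature configuration."""

    def __init__(self, name):
        self.name = name
        self.settings = {}

    def monotonicity(self, direction):
        self.settings["monotonicity"] = direction
        return self  # returning self is what enables chaining

    def num_keypoints(self, n):
        self.settings["num_keypoints"] = n
        return self

cfg = FeatureConfigSketch("feature").monotonicity("increasing").num_keypoints(10)
cfg.settings  # {'monotonicity': 'increasing', 'num_keypoints': 10}
```

The declarative style keeps the configuration readable: each chained call names exactly one property being set.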

## Categorical Features

4 changes: 2 additions & 2 deletions docs/concepts/model_types.md
@@ -2,6 +2,6 @@

The PyTorch Lattice library currently supports two types of calibrated modeling:

-- [`CalibratedLinear`](/pytorch-lattice/api/models/#pytorch_lattice.models.CalibratedLinear): a calibrated linear model combines calibrated features using a standard [linear](/pytorch-lattice/api/layers/#pytorch_lattice.layers.Linear) layer, optionally followed by an output calibrator.
+- [`CalibratedLinear`](../api/models.md#pytorch_lattice.models.CalibratedLinear): a calibrated linear model combines calibrated features using a standard [linear](../api/layers.md#pytorch_lattice.layers.Linear) layer, optionally followed by an output calibrator.

-- [`CalibratedLattice`](/pytorch-lattice/api/models/#pytorch_lattice.models.CalibratedLattice): a calibrated lattice model combines calibrated features using a [lattice](/pytorch-lattice/api/layers/#pytorch_lattice.layers.Lattice) layer, optionally followed by an output calibrator. The lattice layer can learn higher-order feature interactions, which can help increase model flexibility and thereby performance on more complex prediction tasks.
+- [`CalibratedLattice`](../api/models.md#pytorch_lattice.models.CalibratedLattice): a calibrated lattice model combines calibrated features using a [lattice](../api/layers.md#pytorch_lattice.layers.Lattice) layer, optionally followed by an output calibrator. The lattice layer can learn higher-order feature interactions, which can help increase model flexibility and thereby performance on more complex prediction tasks.
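To build intuition for why a lattice layer can capture interactions a linear layer cannot, here is a minimal bilinear-interpolation sketch over a 2x2 grid of learned corner values. This is illustrative plain Python only; the library's `Lattice` layer generalizes the idea to more features and keypoints:

```python
def bilinear_lattice(x, y, grid):
    """x, y in [0, 1]; grid is a 2x2 list of learned corner parameters."""
    (v00, v01), (v10, v11) = grid
    # weight each corner by how close (x, y) is to it
    return (
        v00 * (1 - x) * (1 - y)
        + v01 * (1 - x) * y
        + v10 * x * (1 - y)
        + v11 * x * y
    )

grid = [[0.0, 1.0], [1.0, 0.0]]  # an XOR-like interaction a linear model cannot fit
bilinear_lattice(0.0, 1.0, grid)  # returns 1.0
```

With these corner values, the output is high when exactly one input is high, which no weighted sum of the two inputs alone can reproduce.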
12 changes: 7 additions & 5 deletions docs/concepts/shape_constraints.md
@@ -2,12 +2,14 @@

Shape constraints play a crucial role in making calibrated models interpretable by allowing users to impose specific behavioral rules on their machine learning models. These constraints help to reduce – or even eliminate – the impact of noise and inherent biases contained in the data.

-Monotonicity constraints ensure that the relationship between an input feature and the output prediction consistently increases or decreases. Let's consider our house price prediction task once more. A monotonic constraint on the square footage feature would guarantee that increasing the size of the property increases the predicted price. This makes sense.
+[`Monotonicity`](../api/enums.md#pytorch_lattice.enums.Monotonicity) constraints ensure that the relationship between an input feature and the output prediction consistently increases or decreases. Let's consider our house price prediction task once more. A monotonic constraint on the square footage feature would guarantee that increasing the size of the property increases the predicted price. This makes sense.

-Unimodality constraints create a single peak in the model's output, ensuring that there is only one optimal value for a given input feature. For example, a feature for price used when predicting sales volume may be unimodal since lower prices generally lead to higher sales, but prices that are too low may indicate low quality.
+Unimodality constraints (coming soon) create a single peak in the model's output, ensuring that there is only one optimal value for a given input feature. For example, a price feature used when predicting sales volume may be unimodal since lower prices generally lead to higher sales, but prices that are too low may signal low quality, resulting in one single optimal price.

-Trust constraints define the relative importance of input features depending on other features. For instance, a trust constraint can ensure that a model predicting product sales relies more on the star rating (1-5) when the number of reviews is higher, which forces the model's predictions to better align with real-world expectations and rules.
+Convexity/Concavity constraints (coming soon) ensure that the given feature's value has a convex/concave relationship with the model's output. Looking again at the price feature for predicting sales volume, there may be a range of optimal prices rather than one single optimal price, which would call for a concavity constraint instead.

-Together, these shape constraints help create machine learning models that are both interpretable and trustworthy.
+Trust constraints (coming soon) define the relative importance of input features depending on other features. For instance, a trust constraint can ensure that a model predicting product sales relies more on the star rating (1-5) when the number of reviews is higher, which forces the model's predictions to better align with real-world expectations and rules.

+Dominance constraints (coming soon) encode that a dominant feature is more important than a weak feature. For example, you might want to constrain a model predicting click-through-rate (CTR) for a specific web link to be more sensitive to past CTR for that web link than the average CTR for the whole website.

-The library currently implements the [`Monotonicity`](/pytorch-lattice/api/enums/#pytorch_lattice.enums.Monotonicity) shape constraint, but we are working on releasing additional constraints soon.
+Together, these shape constraints help create machine learning models that are both interpretable and trustworthy.
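One common way to enforce a monotonicity constraint on a piece-wise linear calibrator is to parameterize its outputs as a starting value plus non-negative increments, so the learned function can never decrease. A plain-Python sketch of that idea, not the library's actual constraint implementation:

```python
def monotone_outputs(start, raw_increments):
    """Build non-decreasing calibrator outputs from unconstrained increments."""
    outputs = [start]
    for delta in raw_increments:
        # clamp negative steps to zero so each output >= the previous one
        outputs.append(outputs[-1] + max(delta, 0.0))
    return outputs

monotone_outputs(0.0, [0.25, -0.5, 0.5])  # [0.0, 0.25, 0.25, 0.75]
```

Training then optimizes the raw increments freely while the projection guarantees the monotonic shape on every forward pass.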
2 changes: 2 additions & 0 deletions docs/walkthroughs/uci_adult_income.md
@@ -92,6 +92,8 @@ You can see here how each category for `occupation` gets calibrated before going

Interestingly, plotting the calibrator for `hours_per_week` shows that there's a flat region starting around ~52 hours. This indicates that there is a chance that the `hours_per_week` feature is not actually monotonically increasing, in which case you might consider training a new classifier where you do not constrain this feature.

+![Hours Per Week Calibrator](../img/hours_per_week_calibrator.png)

When setting constraints, there are two things to keep in mind:

1. Do you want to guarantee the constrained behavior regardless of performance? In this case, setting the constraint can make sure that model behavior matches your expectations on unseen examples, which is especially useful when using a model to make decisions.
7 changes: 4 additions & 3 deletions mkdocs.yml
@@ -66,6 +66,7 @@ markdown_extensions:
emoji_generator: !!python/name:material.extensions.emoji.to_svg

plugins:
+- search
- mkdocstrings:
handlers:
python:
@@ -76,9 +77,9 @@ plugins:
nav:
- Get Started:
- Welcome to PyTorch Lattice: "README.md"
-- Why use PyTorch Lattice: "WHY.md"
-- Contributing: "CONTRIBUTING.md"
-- How to help: "HELP.md"
+- Why use PyTorch Lattice: "why.md"
+- Contributing: "contributing.md"
+- How to help: "help.md"
- Concepts:
- Classifier: "concepts/classifier.md"
- Calibrators: "concepts/calibrators.md"
