Commit: Update version to 0.15.0
deliahu committed Mar 25, 2020
1 parent ea94a5c commit 511f123
Showing 25 changed files with 73 additions and 78 deletions.
16 changes: 6 additions & 10 deletions README.md
@@ -4,10 +4,6 @@ Cortex is an open source platform for deploying machine learning models as produ

<br>

- <!-- Delete on release branches -->
- <!-- CORTEX_VERSION_README_MINOR -->
- [install](https://cortex.dev/install) • [tutorial](https://cortex.dev/iris-classifier) • [docs](https://cortex.dev) • [examples](https://github.com/cortexlabs/cortex/tree/0.14/examples) • [we're hiring](https://angel.co/cortex-labs-inc/jobs) • [email us](mailto:[email protected]) • [chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
-
<!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)

@@ -33,7 +29,7 @@ Cortex is designed to be self-hosted on any AWS account. You can spin up a clust
<!-- CORTEX_VERSION_README_MINOR -->
```bash
# install the CLI on your machine
- $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"
+ $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
# ...
```

@@ -140,8 +136,8 @@ The CLI sends configuration and code to the cluster every time you run `cortex d
## Examples of Cortex deployments

<!-- CORTEX_VERSION_README_MINOR x5 -->
- * [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
- * [Image classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
- * [Search completion](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
- * [Text generation](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
- * [Iris classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
+ * [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+ * [Image classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+ * [Search completion](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+ * [Text generation](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+ * [Iris classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
2 changes: 1 addition & 1 deletion build/build-image.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

- CORTEX_VERSION=master
+ CORTEX_VERSION=0.15.0

dir=$1
image=$2
2 changes: 1 addition & 1 deletion build/cli.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

- CORTEX_VERSION=master
+ CORTEX_VERSION=0.15.0

arg1=${1:-""}
upload="false"
2 changes: 1 addition & 1 deletion build/push-image.sh
@@ -17,7 +17,7 @@

set -euo pipefail

- CORTEX_VERSION=master
+ CORTEX_VERSION=0.15.0

image=$1

42 changes: 21 additions & 21 deletions docs/cluster-management/config.md
@@ -42,7 +42,7 @@ instance_volume_size: 50
log_group: cortex

# whether to use spot instances in the cluster (default: false)
- # see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+ # see https://cortex.dev/v/0.15/cluster-management/spot-instances for additional details on spot configuration
spot: false
@@ -53,24 +53,24 @@ You can follow these [instructions](../deployments/system-packages.md) to build
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
# docker image paths
- image_python_serve: cortexlabs/python-serve:master
- image_python_serve_gpu: cortexlabs/python-serve-gpu:master
- image_tf_serve: cortexlabs/tf-serve:master
- image_tf_serve_gpu: cortexlabs/tf-serve-gpu:master
- image_tf_api: cortexlabs/tf-api:master
- image_onnx_serve: cortexlabs/onnx-serve:master
- image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:master
- image_operator: cortexlabs/operator:master
- image_manager: cortexlabs/manager:master
- image_downloader: cortexlabs/downloader:master
- image_request_monitor: cortexlabs/request-monitor:master
- image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
- image_metrics_server: cortexlabs/metrics-server:master
- image_nvidia: cortexlabs/nvidia:master
- image_fluentd: cortexlabs/fluentd:master
- image_statsd: cortexlabs/statsd:master
- image_istio_proxy: cortexlabs/istio-proxy:master
- image_istio_pilot: cortexlabs/istio-pilot:master
- image_istio_citadel: cortexlabs/istio-citadel:master
- image_istio_galley: cortexlabs/istio-galley:master
+ image_python_serve: cortexlabs/python-serve:0.15.0
+ image_python_serve_gpu: cortexlabs/python-serve-gpu:0.15.0
+ image_tf_serve: cortexlabs/tf-serve:0.15.0
+ image_tf_serve_gpu: cortexlabs/tf-serve-gpu:0.15.0
+ image_tf_api: cortexlabs/tf-api:0.15.0
+ image_onnx_serve: cortexlabs/onnx-serve:0.15.0
+ image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:0.15.0
+ image_operator: cortexlabs/operator:0.15.0
+ image_manager: cortexlabs/manager:0.15.0
+ image_downloader: cortexlabs/downloader:0.15.0
+ image_request_monitor: cortexlabs/request-monitor:0.15.0
+ image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.15.0
+ image_metrics_server: cortexlabs/metrics-server:0.15.0
+ image_nvidia: cortexlabs/nvidia:0.15.0
+ image_fluentd: cortexlabs/fluentd:0.15.0
+ image_statsd: cortexlabs/statsd:0.15.0
+ image_istio_proxy: cortexlabs/istio-proxy:0.15.0
+ image_istio_pilot: cortexlabs/istio-pilot:0.15.0
+ image_istio_citadel: cortexlabs/istio-citadel:0.15.0
+ image_istio_galley: cortexlabs/istio-galley:0.15.0
```
4 changes: 2 additions & 2 deletions docs/cluster-management/install.md
@@ -12,7 +12,7 @@ See [cluster configuration](config.md) to learn how you can customize your clust
<!-- CORTEX_VERSION_MINOR -->
```bash
# install the CLI on your machine
- $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+ $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
# ...
```

@@ -38,7 +38,7 @@ your cluster is ready!

```bash
# clone the Cortex repository
- git clone -b master https://github.com/cortexlabs/cortex.git
+ git clone -b 0.15 https://github.com/cortexlabs/cortex.git

# navigate to the TensorFlow iris classification example
cd cortex/examples/tensorflow/iris-classifier
# ...
```
2 changes: 1 addition & 1 deletion docs/cluster-management/update.md
@@ -22,7 +22,7 @@ cortex cluster update
cortex cluster down

# update your CLI
- bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"

# confirm version
cortex version
2 changes: 1 addition & 1 deletion docs/deployments/deployment.md
@@ -67,4 +67,4 @@ deleting my-api
<!-- CORTEX_VERSION_MINOR -->
* [Tutorial](../../examples/sklearn/iris-classifier/README.md) provides a step-by-step walkthrough of deploying an iris classifier API
* [CLI documentation](../cluster-management/cli.md) lists all CLI commands
- * [Examples](https://github.com/cortexlabs/cortex/tree/master/examples) demonstrate how to deploy models from common ML libraries
+ * [Examples](https://github.com/cortexlabs/cortex/tree/0.15/examples) demonstrate how to deploy models from common ML libraries
14 changes: 7 additions & 7 deletions docs/deployments/exporting.md
@@ -11,7 +11,7 @@ Here are examples for some common ML libraries:
The recommended approach is to export your PyTorch model with [torch.save()](https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save). Here is PyTorch's documentation on [saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html).

<!-- CORTEX_VERSION_MINOR -->
- [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) exports its trained model like this:
+ [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.15/examples/pytorch/iris-classifier) exports its trained model like this:

```python
torch.save(model.state_dict(), "weights.pth")
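# (Hedged addition, not part of the original example.) At serving time, the
# predictor typically rebuilds the model class and restores these weights:
#   model = IrisNet()  # hypothetical model class standing in for the example's
#   model.load_state_dict(torch.load("weights.pth"))
#   model.eval()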
```

@@ -22,7 +22,7 @@
It may also be possible to export your PyTorch model into the ONNX format using [torch.onnx.export()](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).

<!-- CORTEX_VERSION_MINOR -->
- For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
+ For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.15/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:

```python
placeholder = torch.randn(1, 4)
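# (Hedged sketch; the rest of this block is collapsed in the diff.) The export
# call itself would trace the model with the placeholder input, roughly:
#   torch.onnx.export(model, placeholder, "model.onnx")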
```

@@ -50,7 +50,7 @@ A TensorFlow `SavedModel` directory should have this structure:

<!-- CORTEX_VERSION_MINOR -->
- Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer):
+ Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/0.15/examples/tensorflow/sentiment-analyzer):

```python
import tensorflow as tf
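# ... (the rest of this block is collapsed in the diff; it builds the model and
# writes a SavedModel directory. As a hedged, generic TF 2.x sketch, not the
# example's exact code:)
#   tf.saved_model.save(model, "export_dir")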
```

@@ -88,14 +88,14 @@ aws s3 cp bert.zip s3://my-bucket/bert.zip

<!-- CORTEX_VERSION_MINOR -->
- [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach and includes a Python notebook demonstrating how it was exported.
+ [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.15/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach and includes a Python notebook demonstrating how it was exported.

### Other model formats

There are other ways to export Keras or TensorFlow models; as long as they can be loaded and used to make predictions in Python, they are supported by Cortex.

<!-- CORTEX_VERSION_MINOR -->
- For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/license-plate-reader) uses this approach.
+ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/0.15/examples/tensorflow/license-plate-reader) uses this approach.
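
As a hedged illustration of such an alternative format (this snippet is not one of the Cortex examples; the model and file name are illustrative), a Keras model saved as a single HDF5 file can be written and reloaded entirely in Python:

```python
from tensorflow import keras

# a minimal stand-in model; any trained Keras model works the same way
model = keras.Sequential(
    [keras.layers.Dense(3, input_shape=(4,), activation="softmax")]
)
model.save("model.h5")  # single-file HDF5 format

# at serving time (e.g., in a Python Predictor's constructor), load it back
loaded = keras.models.load_model("model.h5")
```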

## Scikit-learn

@@ -104,7 +104,7 @@ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https:
Scikit-learn models are typically exported using `pickle`. Here is [Scikit-learn's documentation](https://scikit-learn.org/stable/modules/model_persistence.html).

<!-- CORTEX_VERSION_MINOR -->
- [examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
+ [examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.15/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:

```python
pickle.dump(model, open("model.pkl", "wb"))
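# (Hedged addition.) Loading it back in a predictor is the mirror image:
#   with open("model.pkl", "rb") as f:
#       model = pickle.load(f)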
```

@@ -157,7 +157,7 @@ model.save_model("model.bin")
It is also possible to export an XGBoost model to the ONNX format using [onnxmltools](https://github.com/onnx/onnxmltools).

<!-- CORTEX_VERSION_MINOR -->
- [examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
+ [examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.15/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:

```python
from onnxmltools.convert import convert_xgboost
# ...
```
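
The rest of that block is collapsed in the diff. As a hedged sketch of the usual onnxmltools pattern (the training data, tensor name, shape, and file name are illustrative assumptions, not taken from the example):

```python
import xgboost as xgb
from onnxmltools.convert import convert_xgboost
from onnxmltools.convert.common.data_types import FloatTensorType

# train a tiny stand-in model (4 features, as in the iris example)
model = xgb.XGBClassifier()
model.fit([[5.1, 3.5, 1.4, 0.2], [6.2, 2.8, 4.8, 1.8], [7.2, 3.0, 5.8, 1.6]], [0, 1, 2])

# declare the input tensor's name and shape, then convert and serialize
onnx_model = convert_xgboost(model, initial_types=[("input", FloatTensorType([1, 4]))])
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```
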
20 changes: 10 additions & 10 deletions docs/deployments/predictors.md
@@ -68,10 +68,10 @@ For proper separation of concerns, it is recommended to use the constructor's `c
### Examples

<!-- CORTEX_VERSION_MINOR -->
- Many of the [examples](https://github.com/cortexlabs/cortex/tree/master/examples) use the Python Predictor, including all of the PyTorch examples.
+ Many of the [examples](https://github.com/cortexlabs/cortex/tree/0.15/examples) use the Python Predictor, including all of the PyTorch examples.

<!-- CORTEX_VERSION_MINOR -->
- Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/iris-classifier):
+ Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/iris-classifier):

```python
import re
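# ... (the full predictor is collapsed in the diff; per the interface described
# above, its general shape is:)
#
# class PythonPredictor:
#     def __init__(self, config):
#         ...  # download weights and initialize the model
#
#     def predict(self, payload):
#         ...  # preprocess payload, run inference, postprocess the prediction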
```

@@ -150,7 +150,7 @@ xgboost==0.90

<!-- CORTEX_VERSION_MINOR x2 -->
- The pre-installed system packages are listed in [images/python-serve/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve/Dockerfile) (for CPU) or [images/python-serve-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve-gpu/Dockerfile) (for GPU).
+ The pre-installed system packages are listed in [images/python-serve/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.15/images/python-serve/Dockerfile) (for CPU) or [images/python-serve-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.15/images/python-serve-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).

@@ -183,17 +183,17 @@ class TensorFlowPredictor:

<!-- CORTEX_VERSION_MINOR -->
- Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+ Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.15/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.

### Examples

<!-- CORTEX_VERSION_MINOR -->
- Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow) use the TensorFlow Predictor.
+ Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow) use the TensorFlow Predictor.

<!-- CORTEX_VERSION_MINOR -->
- Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier):
+ Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/iris-classifier):

```python
labels = ["setosa", "versicolor", "virginica"]
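# ... (collapsed in the diff; per the tensorflow_client description above, the
# predictor's general shape is roughly the following -- the "class_ids" key is
# an assumption about this model's output, not documented here:)
#
# class TensorFlowPredictor:
#     def __init__(self, tensorflow_client, config):
#         self.client = tensorflow_client
#
#     def predict(self, payload):
#         prediction = self.client.predict(payload)
#         return labels[int(prediction["class_ids"][0])]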
```

@@ -226,7 +226,7 @@ tensorflow==2.1.0

<!-- CORTEX_VERSION_MINOR -->
- The pre-installed system packages are listed in [images/tf-api/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tf-api/Dockerfile).
+ The pre-installed system packages are listed in [images/tf-api/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.15/images/tf-api/Dockerfile).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).

@@ -259,14 +259,14 @@ class ONNXPredictor:

<!-- CORTEX_VERSION_MINOR -->
- Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+ Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.15/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.

### Examples

<!-- CORTEX_VERSION_MINOR -->
- [examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) uses the ONNX Predictor:
+ [examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.15/examples/xgboost/iris-classifier) uses the ONNX Predictor:

```python
labels = ["setosa", "versicolor", "virginica"]
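# ... (collapsed in the diff; per the onnx_client description above, roughly the
# following -- the payload handling and output indexing are assumptions:)
#
# class ONNXPredictor:
#     def __init__(self, onnx_client, config):
#         self.client = onnx_client
#
#     def predict(self, payload):
#         prediction = self.client.predict(payload)
#         return labels[prediction[0][0]]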
```

@@ -303,6 +303,6 @@ requests==2.22.0

<!-- CORTEX_VERSION_MINOR x2 -->
The pre-installed system packages are listed in [images/onnx-serve/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve/Dockerfile) (for CPU) or [images/onnx-serve-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve-gpu/Dockerfile) (for GPU).
The pre-installed system packages are listed in [images/onnx-serve/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.15/images/onnx-serve/Dockerfile) (for CPU) or [images/onnx-serve-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.15/images/onnx-serve-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).