Update version to 0.1.0
deliahu committed Nov 23, 2021
1 parent bb29f2c commit e108ea0
Showing 6 changed files with 7 additions and 7 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -1131,7 +1131,7 @@ class Handler:
# define any handler methods for HTTP/gRPC workloads here
```

- When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
+ When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/0.1/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.

When multiple models are defined using the Handler's `multi_model_reloading` field, the `model_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
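
The handler pattern described above can be sketched as follows. `FakeModelClient` and the constructor signature shown are illustrative stand-ins invented for this sketch, not Nucleus's actual API:

```python
class FakeModelClient:
    """Stand-in for Nucleus's ModelClient, for illustration only."""

    def __init__(self, models):
        self._models = models

    def get_model(self, model_name=None, model_version="latest"):
        # the real client lazy-loads models by calling the handler's
        # load_model() method; this stub just looks them up directly
        return self._models[model_name or "_default"]


class Handler:
    def __init__(self, config, model_client):
        # save the client as an instance variable, as described above
        self.client = model_client

    def load_model(self, model_path):
        # called by the real ModelClient when a model must be (re)loaded;
        # a real handler would deserialize the artifact at model_path here
        return model_path

    def handle_post(self, payload):
        # preprocessing of the JSON/gRPC payload would go here
        model = self.client.get_model("text-generator")
        # postprocessing of predictions would go here
        return {"model": model, "input": payload}
```

With multiple models, `handle_post` passes the model name to `get_model()` exactly as in `self.client.get_model("text-generator")` above.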

@@ -1305,7 +1305,7 @@ class Handler:
# define any handler methods for HTTP/gRPC workloads here
```

- Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
+ Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/0.1/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.

When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
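
The `tensorflow_client` pattern described above can be sketched as follows. `FakeTensorFlowClient` and the constructor signature shown are illustrative stand-ins invented for this sketch; the real client forwards requests to a TensorFlow Serving container:

```python
class FakeTensorFlowClient:
    """Stand-in for Nucleus's TensorFlowClient, for illustration only."""

    def predict(self, payload, model_name=None, model_version="latest"):
        # the real client sends the request to TensorFlow Serving;
        # this stub returns a dummy result tagged with the model name
        return {"model": model_name or "_default", "prediction": 0.5}


class Handler:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable, as described above
        self.client = tensorflow_client

    def handle_post(self, payload):
        # preprocessing of the JSON payload would go here
        result = self.client.predict(payload, "text-generator")
        # postprocessing of predictions would go here
        return result
```

With multiple models, the model name is passed as the second argument to `predict()`, exactly as in `self.client.predict(payload, "text-generator")` above.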

2 changes: 1 addition & 1 deletion nucleus/templates/handler.Dockerfile
@@ -1,6 +1,6 @@
# to replace when building the dockerfile
FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.1.0

RUN apt-get update -qq && apt-get install -y -q \
build-essential \
2 changes: 1 addition & 1 deletion nucleus/templates/tfs.Dockerfile
@@ -1,2 +1,2 @@
FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.1.0
2 changes: 1 addition & 1 deletion setup.py
@@ -14,7 +14,7 @@

import setuptools

-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.1.0"

with open("requirements.txt") as fp:
install_requires = fp.read()
2 changes: 1 addition & 1 deletion src/cortex/cortex_internal/consts.py
@@ -13,4 +13,4 @@
# limitations under the License.

SINGLE_MODEL_NAME = "_cortex_default"
-MODEL_SERVER_VERSION = "master"
+MODEL_SERVER_VERSION = "0.1.0"
2 changes: 1 addition & 1 deletion src/cortex/setup.py
@@ -17,7 +17,7 @@
import pkg_resources
from setuptools import setup, find_packages

-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.1.0"

with pathlib.Path("cortex_internal.requirements.txt").open() as requirements_txt:
install_requires = [
