added to include mlserver
Rakavitha Kodhandapani authored and Rakavitha Kodhandapani committed Jan 13, 2025
1 parent 25d0c55 commit 16020a6
Showing 8 changed files with 23 additions and 9 deletions.
4 changes: 3 additions & 1 deletion doc/source/analytics/explainers.md
Original file line number Diff line number Diff line change
@@ -45,7 +45,9 @@ For an e2e example, please check AnchorTabular notebook [here](../examples/iris_

## Explain API

**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.

We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.


For the Seldon Protocol an endpoint path will be exposed for:
4 changes: 3 additions & 1 deletion doc/source/graph/protocols.md
@@ -1,6 +1,8 @@
# Protocols

**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.

We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.

Tensorflow protocol is only available in version >=1.1.
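As a concrete illustration of the OIP (also called the V2 protocol), an inference request is a JSON body POSTed to `/v2/models/<name>/infer`. The sketch below builds such a body with the standard library; the helper name and the default input name `input-0` are illustrative assumptions, not part of Seldon's API.

```python
import json

def build_v2_request(rows, name="input-0", datatype="FP32"):
    """Build an Open Inference Protocol (V2) inference request body.

    `rows` is a 2-D list of feature values; the OIP expects a flat
    `data` list plus an explicit `shape`.
    """
    flat = [value for row in rows for value in row]
    return {
        "inputs": [
            {
                "name": name,
                "shape": [len(rows), len(rows[0])],
                "datatype": datatype,
                "data": flat,
            }
        ]
    }

# One Iris-style sample (values chosen only for illustration).
body = build_v2_request([[5.1, 3.5, 1.4, 0.2]])
print(json.dumps(body))
```

The same body works unchanged against any OIP-compliant runtime, which is the portability benefit the note above describes.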

4 changes: 3 additions & 1 deletion doc/source/production/optimization.md
@@ -9,7 +9,9 @@ Using the Seldon python wrapper there are various optimization areas one needs t

### Seldon Protocol Payload Types with REST and gRPC

**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.

We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.


Depending on whether you want to use REST or gRPC and want to send tensor data the format of the request will have a deserialization/serialization cost in the python wrapper. This is investigated in a [python serialization notebook](../examples/python_serialization.html).
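To make the serialization trade-off concrete, the Seldon protocol accepts tensor data in (among others) an `ndarray` form, which nests rows, and a `tensor` form, which sends a flat value list plus a shape. This is a minimal sketch with made-up feature values; the flat `tensor` form is typically cheaper to (de)serialize for large arrays.

```python
import json

# Two equivalent Seldon-protocol REST payloads for the same 1x4 input.
ndarray_payload = {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}
tensor_payload = {
    "data": {
        "tensor": {
            "shape": [1, 4],
            "values": [5.1, 3.5, 1.4, 0.2],
        }
    }
}

print(json.dumps(tensor_payload))
```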
3 changes: 2 additions & 1 deletion doc/source/reference/upgrading.md
@@ -93,8 +93,9 @@ Only the v1 versions of the CRD will be supported moving forward. The v1beta1 ve
### Model Health Checks
**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.
We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
We have updated the health checks done by Seldon for the model nodes in your inference graph. If `executor.fullHealthChecks` is set to true then:
* For Seldon protocol each node will be probed with `/api/v1.0/health/status`.
* For tensorflow just TCP checks will be run on the http endpoint.
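The per-protocol behaviour above can be sketched as a small dispatch table; the function name and the `None`-for-TCP convention are illustrative assumptions, not Seldon code.

```python
def health_probe(protocol):
    """Return the HTTP path probed for a graph node, or None when
    only a TCP connect check is run on the HTTP port."""
    if protocol == "seldon":
        return "/api/v1.0/health/status"
    if protocol == "tensorflow":
        return None  # TCP check only, no HTTP health path
    raise ValueError(f"unknown protocol: {protocol}")

print(health_probe("seldon"))
```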
3 changes: 2 additions & 1 deletion examples/models/lightgbm_custom_server/iris.ipynb
@@ -7,8 +7,9 @@
"source": [
"# Custom LightGBM Prepackaged Model Server\n",
"\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.\n",
"\n",
"We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"In this notebook we create a new custom LIGHTGBM_SERVER prepackaged server with two versions:\n",
" * A Seldon protocol LightGBM model server\n",
4 changes: 3 additions & 1 deletion notebooks/backwards_compatability.ipynb
@@ -13,7 +13,9 @@
" * grpcurl\n",
" * pygmentize\n",
"\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.\n",
"\n",
"We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"## Setup Seldon Core\n",
"\n",
6 changes: 4 additions & 2 deletions notebooks/protocol_examples.ipynb
@@ -15,11 +15,13 @@
" \n",
"## Examples\n",
"\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
" * [Open Inference Protocol or V2 Protocol](#V2-Protocol-Model)\n",
" * [Seldon Protocol](#Seldon-Protocol-Model)\n",
" * [Tensorflow Protocol](#Tensorflow-Protocol-Model)\n",
" \n",
"\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.\n",
"\n",
"We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"## Setup Seldon Core\n",
"\n",
4 changes: 3 additions & 1 deletion notebooks/server_examples.ipynb
@@ -65,7 +65,9 @@
"source": [
"## Serve SKLearn Iris Model\n",
"\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). We strongly encourage customers to use the OIP, which offers seamless integration across diverse model serving runtimes, supports the creation of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"**Note**: Seldon is no longer maintaining the Seldon and TensorFlow protocols. Instead, Seldon is adopting the industry-standard Open Inference Protocol (OIP). As part of this transition, you need to use [MLServer](https://github.com/SeldonIO/MLServer) for model serving in Seldon Core 1.\n",
"\n",
"We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"In order to deploy SKLearn artifacts, we can leverage the [pre-packaged SKLearn inference server](https://docs.seldon.io/projects/seldon-core/en/latest/servers/sklearn.html).\n",
"The exposed API can follow either:\n",
