New introduction to Core 2 #6195
Conversation
just minor comments.
docs-gb/README.md
Outdated
Seldon Core 2 APIs provide a state of the art solution for machine learning inference which
can be run locally on a laptop as well as on Kubernetes for production.

Seldon Core 2 is a source-available framework for deploying and managing machine learning systems at scale. The data-centric approach and modular architecture of Core 2 helps users deploy, manage, and scale their ML - from simple models to complex ML applications. After the models are deployed, Core 2 enables the monitoring and experimentation on those systems in production. With support for a wide range of model types, and design patterns to build around those models, you can standardize ML deployment across a range of use-cases in the cloud or on-premise serving infrastructure of your choice.
Suggested change:
Seldon Core 2 is a source-available framework for deploying and managing machine learning systems at scale. The data-centric approach and modular architecture of Seldon Core 2 help you deploy, manage, and scale your ML - from simple models to complex ML applications. After the models are deployed in Seldon Core 2, you can monitor and run experiments on those systems in production. Seldon Core 2 supports a wide range of model types, and design patterns to build around those models. You can also standardize ML deployment across a range of use-cases in the cloud or on-premise serving infrastructure of your choice.
docs-gb/README.md
Outdated
* Explain individual models and pipelines with state of the art explanation techniques.
* Deploy drift and outlier detectors alongside models.
* Kubernetes Service mesh agnostic - use the service mesh of your choice.

Seldon Core 2 orchestrates and scales machine learning components running as production-grade microservices. These components can be deployed locally or in enterprise-scale kubernetes clusters. The components of your ML system - such as models, processing steps, custom logic, or monitoring methods - are deployed as **Models**, leveraging serving solutions compatible with Core 2 such as MLServer, Alibi, LLM Module, or Triton Inference Server. These serving solutions package the required dependencies and standardize inference using the Open Inference Protocol. This ensures that, regardless of your model types and use-cases, all request and responses follow a unified format. Once models are deployed, they can process REST or gRPC requests for real-time inference.
Suggested change:
Seldon Core 2 orchestrates and scales machine learning components running as production-grade microservices. These components can be deployed locally or in enterprise-scale Kubernetes clusters. The components of your ML system - such as models, processing steps, custom logic, or monitoring methods - are deployed as **Models**, leveraging serving solutions compatible with Seldon Core 2 such as MLServer, Alibi, LLM Module, or Triton Inference Server. These serving solutions package the required dependencies and standardize inference using the Open Inference Protocol. This ensures that, regardless of your model types and use-cases, all requests and responses follow a unified format. After models are deployed, they can process REST or gRPC requests for real-time inference.
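For illustration, declaring one of these components as a **Model** might look like the sketch below. This is an assumption-laden example rather than text from the PR: the resource name and `storageUri` are placeholders, and the exact schema should be checked against the Core 2 reference docs.

```yaml
# Sketch of a Core 2 Model resource; name and storageUri are placeholders.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris-classifier
spec:
  # Location of the trained model artifact (placeholder URI).
  storageUri: "gs://my-bucket/models/iris"
  # Capabilities the scheduler matches against compatible servers (e.g. MLServer).
  requirements:
  - sklearn
```

Once scheduled onto a server, the model accepts Open Inference Protocol requests at a path of the form `/v2/models/<name>/infer`, over REST or gRPC.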
docs-gb/README.md
Outdated
## Core features and comparison to Seldon Core V1 APIs

Machine learning applications are increasingly complex. They’ve evolved from individual models deployed as services, to complex applications that can consist of multiple models, processing steps, custom logic, and asynchronous monitoring components. With Core you can build Pipelines that connect any of these components to make data-centric applications. Core 2 then handles the orchestration and scaling of the underlying components of such an application, and exposes the data streamed through the application in real time using Kafka.
Suggested change:
Machine learning applications are increasingly complex. They’ve evolved from individual models deployed as services, to complex applications that can consist of multiple models, processing steps, custom logic, and asynchronous monitoring components. With Seldon Core 2 you can build Pipelines that connect any of these components to make data-centric applications. Seldon Core 2 orchestrates and scales the underlying components of such an application, and then exposes the data streamed through the application in real time using Kafka.
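As a rough sketch of what such a Pipeline could look like declaratively (the step and model names here are hypothetical, and the schema should be verified against the Core 2 docs):

```yaml
# Sketch of a Core 2 Pipeline chaining two hypothetical Models.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: preprocess-and-predict
spec:
  steps:
  - name: preprocessor        # a Model that transforms raw inputs
  - name: classifier          # a Model consuming the preprocessor's output
    inputs:
    - preprocessor
  output:
    steps:
    - classifier
```

Each step refers to a deployed Model; the data flowing between steps is carried over Kafka topics, which is what makes it observable in real time.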
Our V2 APIs separate out core tasks into separate resources allowing users to get started fast
with deploying a Model and then progressing to more complex Pipelines, Explanations and Experiments.

{% hint style="info" %}
Data-centricity is an approach that places the management, integrity, and flow of data at the core of the machine learning deployment framework.
Suggested change:
Data-centricity is an approach that places management of data, integrity of data, and flow of data at the core of the machine learning deployment framework.
docs-gb/README.md
Outdated
![mms1](images/multimodel1.png)

Lastly, Core 2 provides Experiments as part of its orchestration capabilities, enabling users to implement routing logic like A/B tests or Canary deployments to models or pipelines in production. After experiments are run, you can promote new models or pipelines, or launch new experiments, allowing you to continuously improve the performance of your ML products.
Suggested change:
Lastly, Seldon Core 2 provides Experiments as part of its orchestration capabilities, to implement routing logic such as A/B tests or Canary deployments to models or pipelines in production. After experiments are run, you can promote new models or pipelines, or launch new experiments, so that you can continuously improve the performance of your ML products.
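To make the routing idea concrete, an A/B-style Experiment splitting traffic between two models might be sketched as follows. The model names and weights are illustrative assumptions, not settings from the PR:

```yaml
# Sketch of a Core 2 Experiment splitting traffic 50/50 between two models.
apiVersion: mlops.seldon.io/v1alpha1
kind: Experiment
metadata:
  name: iris-ab-test
spec:
  default: iris-v1        # traffic returns here when the experiment is deleted
  candidates:
  - name: iris-v1
    weight: 50
  - name: iris-v2
    weight: 50
```

Promoting the winner then amounts to pointing the default at the better model and removing the experiment.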
docs-gb/README.md
Outdated
![mms3](images/overcommit.png)

With the modular design of Core 2, users are able to implement cutting-edge methods to save hardware costs:
Suggested change:
The modular design of Seldon Core 2 enables the implementation of cutting-edge methods to drastically reduce hardware expenses:
docs-gb/README.md
Outdated
## Inference Servers

- **Multi-Model serving** consolidates multiple models onto shared inference servers to reduce resource usage.
Suggested change:
- **Multi-Model serving**: deploy multiple models within a single inference server to optimize resource utilization and decrease the number of servers required.
docs-gb/README.md
Outdated
## Inference Servers

- **Multi-Model serving** consolidates multiple models onto shared inference servers to reduce resource usage.
- **Over-commit** automatically relegates models from memory to disk when not in use.
Suggested change:
- **Over-commit**: provision more models than the available memory by dynamically loading and unloading models based on demand, ensuring efficient use of hardware resources.
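A sketch of how these two features surface in practice: several small Models declare a memory footprint, and the scheduler packs them onto a shared server replica. The names, memory figures, and `sklearn` requirement below are illustrative assumptions:

```yaml
# Two small Models that a shared MLServer replica could host together.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: churn-model-a
spec:
  storageUri: "gs://my-bucket/models/churn-a"   # placeholder URI
  requirements:
  - sklearn
  memory: 100Ki   # declared footprint used when packing models onto servers
---
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: churn-model-b
spec:
  storageUri: "gs://my-bucket/models/churn-b"   # placeholder URI
  requirements:
  - sklearn
  memory: 100Ki
```

With over-commit enabled on the server, the combined declared memory of registered models can exceed what is physically available; models not in use are evicted to disk and reloaded on their next request.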
## Service Mesh Agnostic

Core 2 demonstrates the power of a standardized, data-centric approach to MLOps at scale, ensuring that data observability and management are prioritized across every layer of machine learning operations. Furthermore, Core 2 seamlessly integrates into end-to-end MLOps workflows, from CI/CD, managing traffic with the service mesh of your choice, alerting, data visualization, or authentication and authorization.
Suggested change:
Seldon Core 2 demonstrates the power of a standardized, data-centric approach to MLOps at scale, ensuring that data observability and management are prioritized across every layer of machine learning operations. Furthermore, Seldon Core 2 seamlessly integrates into end-to-end MLOps workflows: CI/CD, traffic management with the service mesh of your choice, alerting, data visualization, and authentication and authorization.
docs-gb/README.md
Outdated
- To get Core 2 running, see our installation guide
- Then see our Quickstart and Tutorials
- Join our Slack Community for updates or for answers to any questions
Suggested change:
- Install Seldon Core 2
- Explore the Quickstart and Tutorials
- Join our Slack Community for updates or for answers to any questions
docs-gb/README.md
Outdated
![mesh](images/mesh.png)

## What's Next
Suggested change:
## Next Steps
New introduction page to Core 2.
Still need to add links (especially in What's Next section at the bottom).