MLX supports training models and serving pre-trained models.
Below are the two templates for creating the metadata for a trainable model and a servable model.
The templates describe the metadata spec:
- (Required): Fields that are always required, in all conditions.
- (Required for xxx): Fields that are required when the model is trainable/servable, or when a certain storage type is specified.
- (optional): Fields that can be omitted, but do not use empty strings, since they will overwrite the default values.
Or take a look at the samples used in the Machine Learning Exchange catalog.
Trainable model template:

```yaml
name: <model_name>
model_identifier: <model_id>
description: <model_description>
author:
  name: "IBM CODAIT"
  email: "[email protected]"
framework:
  name: "tensorflow"
  version: "1.5"
  runtimes:
    name: python
    version: "3.5"
license: "Apache 2.0"
domain: "Domain Area"
website: <model_website>      # Can be GitHub link
readme_url: <readme_url>      # Github-flavored markdown, Github raw URL, will be displayed in MLX UI

train:
  trainable: true
  credentials_required: true
  tested_platforms:
    - WatsonML
  model_source:
    initial_model:
      data_store: cloud_training_datastore
      bucket: <data_bucket_name>
      path: model.zip
  model_training_results:
    trained_model:
      data_store: cloud_training_datastore
      bucket: <data_result_name>
  data_source:
    training_data:
      data_store: cloud_training_datastore
      bucket: <data_bucket_name>
      path: aligned
  mount_type: mount_cos
  execution:
    command: ./train-max-model.sh
    compute_configuration:
      name: k80
      nodes: 1
  process:
    - name: training_process
      params:
        staging_dir: training_output/
        trained_model_path: trained_model/tensorflow/checkpoint/

data_stores:
  - name: cloud_training_datastore
    type: s3
    connection:
      endpoint: https://s3.us.cloud-object-storage.appdomain.cloud
```
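The trainable template references a model archive and training data stored in the S3-compatible object store defined under `data_stores`. As a rough sketch of how those artifacts might be staged, the snippet below uploads them with `boto3`; the bucket names, credentials, and local file names are placeholders, not values prescribed by MLX.

```python
# Sketch: stage the artifacts referenced by the trainable spec in the
# S3-compatible object store named under `data_stores`. Bucket names,
# credentials, and local file names are placeholders (assumptions).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="<access_key_id>",          # placeholder credential
    aws_secret_access_key="<secret_access_key>",  # placeholder credential
)

# Initial model code, referenced by train.model_source.initial_model
s3.upload_file("model.zip", "<data_bucket_name>", "model.zip")

# Training data, referenced by train.data_source.training_data (path: aligned)
s3.upload_file("train_data.tar.gz", "<data_bucket_name>", "aligned/train_data.tar.gz")
```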
Servable model template:

```yaml
name: <model_name>
model_identifier: <model_id>
description: <model_description>
framework:
  name: "tensorflow"
  version: "1.8.0"
license: "Apache 2.0"
domain: "Domain Area"
website: <model_website>      # Can be GitHub link
readme_url: <readme_url>      # Github-flavored markdown, Github raw URL, will be displayed in MLX UI

serve:
  servable: true
  tested_platforms:
    - kubernetes
    - knative
  serving_container_image:
    container_image_url: <model_docker_image>
```
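If you prefer to generate the metadata programmatically, a minimal sketch with PyYAML is shown below. The model name, description, and container image URL are illustrative placeholders; only the fields from the servable template above are emitted.

```python
# Sketch: build a minimal servable-model spec and write it out as YAML.
# All values are illustrative placeholders; the keys mirror the servable
# template above.
import yaml

spec = {
    "name": "my-model",                       # placeholder
    "model_identifier": "my-model",           # placeholder
    "description": "Example servable model",  # placeholder
    "framework": {"name": "tensorflow", "version": "1.8.0"},
    "license": "Apache 2.0",
    "domain": "Domain Area",
    "serve": {
        "servable": True,
        "tested_platforms": ["kubernetes", "knative"],
        "serving_container_image": {
            "container_image_url": "docker.io/<org>/<image>:<tag>",  # placeholder
        },
    },
}

with open("my-model.yaml", "w") as f:
    yaml.safe_dump(spec, f, default_flow_style=False, sort_keys=False)
```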
To register a model in the MLX UI:

- Click on the "Models" link in the left-hand navigation panel
- Click on "Register a Model"
- Select a YAML file to be uploaded, or provide a URL to download the YAML file; `.tar.gz` and `.tgz` files containing the compressed `.yaml` specification can also be uploaded (a quick local validation sketch follows this list)
- Enter a name for the model; otherwise the name from the YAML file will be used
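Before uploading, it can help to sanity-check the file locally. The sketch below only verifies the fields marked (Required) in the templates above and that at least one of `train` or `serve` is present; it is not the validation MLX itself performs.

```python
# Sketch: local sanity check of a model metadata file before registering it.
# Checks only the fields marked (Required) in the templates above; this is
# not the server-side validation that MLX performs.
import sys
import yaml

REQUIRED_FIELDS = ["name", "model_identifier", "description", "framework"]


def check_model_spec(path):
    """Return a list of problems found in the model metadata file."""
    with open(path) as f:
        spec = yaml.safe_load(f)
    problems = [f"missing required field: {field}"
                for field in REQUIRED_FIELDS if not spec.get(field)]
    if not (spec.get("train") or spec.get("serve")):
        problems.append("spec defines neither a 'train' nor a 'serve' section")
    serve = spec.get("serve") or {}
    image = (serve.get("serving_container_image") or {}).get("container_image_url")
    if serve and not image:
        problems.append("servable spec is missing serve.serving_container_image.container_image_url")
    return problems


if __name__ == "__main__":
    issues = check_model_spec(sys.argv[1])
    print("\n".join(issues) if issues else "spec looks complete")
```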
Models can be executed based on the metadata specified in the YAML file for a particular function (train or serve):
- Under the "Models" tab, select a model
- Switch to the "LAUNCH" tab
- Optionally, provide a run name
- Click "Submit" to run the pipeline, which will deploy the model (a quick check of the deployed service is sketched below)
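Once the pipeline finishes, you may want to confirm that the deployed service responds. The sketch below assumes the serving container follows the MAX convention of exposing a REST API with a `GET /model/metadata` route; the actual host, port, and routes depend on the container image and on how your cluster exposes the service, so adjust accordingly.

```python
# Sketch: poke a deployed model service to confirm it is up.
# Assumes a MAX-style REST API with a GET /model/metadata route; the host,
# port, and route are assumptions -- check the image's own documentation and
# your cluster's ingress/NodePort for the real endpoint.
import requests

BASE_URL = "http://<cluster-ip>:<node-port>"  # placeholder endpoint

resp = requests.get(f"{BASE_URL}/model/metadata", timeout=10)
resp.raise_for_status()
print(resp.json())
```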