# Models

MLX supports training models and serving pre-trained models.

## Table of Contents

- [Create Model Metadata](#create-model-metadata)
- [Register a Model](#register-a-model)
- [Serve the Model](#serve-the-model)

## Create Model Metadata

Below are two templates for creating the metadata for a trainable model and a servable model.

The templates use the following annotations to describe the metadata spec:

- (Required): Fields that are always required, in all conditions.
- (Required for xxx): Fields that are required when the model is trainable/servable, or when a certain storage type is specified.
- (optional): Fields that can be omitted; however, do not set them to empty strings, since an empty string will overwrite the default value.

Or take a look at the samples used in the Machine Learning Exchange catalog.
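As a sanity check before registering a model, the annotation rules above can be enforced with a short script. The sketch below assumes the YAML has already been parsed into a Python dict (e.g. with PyYAML's `safe_load`); the specific required-field sets are illustrations inferred from the template annotations, not an official MLX schema.

```python
# Sketch: validate a parsed model-metadata dict against the rules above.
# The REQUIRED_* sets are inferred from the template annotations and are
# NOT an official MLX schema -- adjust them to match your catalog.

REQUIRED_ALWAYS = {"name", "model_identifier", "description", "framework"}
REQUIRED_IF_TRAINABLE = {"model_source", "execution"}  # under the `train` section

def validate(meta: dict) -> list:
    """Return a list of human-readable problems (empty means the metadata looks OK)."""
    problems = [f"missing required field: {f}" for f in REQUIRED_ALWAYS - meta.keys()]
    train = meta.get("train", {})
    if train.get("trainable"):
        problems += [f"train.{f} is required for trainable models"
                     for f in REQUIRED_IF_TRAINABLE - train.keys()]
    # Empty strings overwrite defaults, so flag them instead of accepting them.
    problems += [f"field {k!r} is an empty string; omit it instead"
                 for k, v in meta.items() if v == ""]
    return problems

if __name__ == "__main__":
    meta = {"name": "demo", "model_identifier": "demo-id",
            "description": "", "framework": {"name": "tensorflow"}}
    for p in validate(meta):
        print(p)
```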

### Metadata template for a trainable model

```yaml
name: <model_name>
model_identifier: <model_id>
description: <model_description>
author:
  name: "IBM CODAIT"
  email: "[email protected]"
framework:
  name: "tensorflow"
  version: "1.5"
  runtimes:
    name: python
    version: "3.5"

license: "Apache 2.0"
domain: "Domain Area"
website: <model_website> # Can be a GitHub link
readme_url: <readme_url> # GitHub-flavored Markdown, GitHub raw URL; will be displayed in the MLX UI

train:
  trainable: true
  credentials_required: true
  tested_platforms:
    - WatsonML
  model_source:
    initial_model:
      data_store: cloud_training_datastore
      bucket: <data_bucket_name>
      path: model.zip
  model_training_results:
    trained_model:
      data_store: cloud_training_datastore
      bucket: <data_result_name>
  data_source:
    training_data:
      data_store: cloud_training_datastore
      bucket: <data_bucket_name>
      path: aligned
  mount_type: mount_cos
  execution:
    command: ./train-max-model.sh
    compute_configuration:
      name: k80
      nodes: 1
process:
  - name: training_process
    params:
      staging_dir: training_output/
      trained_model_path: trained_model/tensorflow/checkpoint/
data_stores:
  - name: cloud_training_datastore
    type: s3
    connection:
      endpoint: https://s3.us.cloud-object-storage.appdomain.cloud
```
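One easy mistake in the trainable template is referencing a `data_store` name under `train:` that has no matching entry in the top-level `data_stores:` list. A small cross-check, sketched here against the structure shown above (not an official MLX validator), can catch this before submission:

```python
# Sketch: verify that every `data_store:` reference in the train section
# points at a named entry in the top-level `data_stores:` list.
# The structure mirrors the template above; this is not an official MLX tool.

def unresolved_data_stores(meta: dict) -> set:
    defined = {ds.get("name") for ds in meta.get("data_stores", [])}
    referenced = set()
    for section in ("model_source", "model_training_results", "data_source"):
        for entry in meta.get("train", {}).get(section, {}).values():
            referenced.add(entry.get("data_store"))
    return referenced - defined

meta = {
    "train": {
        "model_source": {"initial_model": {"data_store": "cloud_training_datastore"}},
        "data_source": {"training_data": {"data_store": "cloud_training_datastore"}},
    },
    "data_stores": [{"name": "cloud_training_datastore", "type": "s3"}],
}
print(unresolved_data_stores(meta))  # empty set when every reference resolves
```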

### Metadata template for a servable model

```yaml
name: <model_name>
model_identifier: <model_id>
description: <model_description>
framework:
  name: "tensorflow"
  version: "1.8.0"

license: "Apache 2.0"
domain: "Domain Area"
website: <model_website> # Can be a GitHub link
readme_url: <readme_url> # GitHub-flavored Markdown, GitHub raw URL; will be displayed in the MLX UI

serve:
  servable: true
  tested_platforms:
    - kubernetes
    - knative
  serving_container_image:
    container_image_url: <model_docker_image>
```

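Since a single metadata file can describe training, serving, or both, a consumer typically inspects the `train`/`serve` sections to decide which workflows a registered model supports. A minimal sketch, with field names taken from the two templates above:

```python
def capabilities(meta: dict) -> set:
    """Return which workflows a model's metadata enables ('train', 'serve', or both)."""
    caps = set()
    if meta.get("train", {}).get("trainable"):
        caps.add("train")
    if meta.get("serve", {}).get("servable"):
        caps.add("serve")
    return caps

servable_only = {"serve": {"servable": True,
                           "serving_container_image": {"container_image_url": "example/image:latest"}}}
print(capabilities(servable_only))  # {'serve'}
```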
## Register a Model

1. Click on the "Models" link in the left-hand navigation panel
2. Click on "Register a Model"
3. Select a YAML file to upload, or provide a URL from which to download the YAML file
   - `.tar.gz` and `.tgz` files containing the compressed `.yaml` specification can also be uploaded
4. Enter a name for the model; otherwise the name from the YAML file will be used
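Registration can also be scripted against the MLX REST API instead of the UI. Note that the endpoint path used below (`/apis/v1alpha1/models/upload`), the `name` query parameter, and the upload form-field name are assumptions based on common MLX deployments; verify them against your instance's API before relying on this sketch.

```python
# Sketch: build the model-upload URL for the MLX REST API.
# ASSUMPTION: the endpoint path and `name` parameter mirror typical MLX
# deployments (/apis/v1alpha1/models/upload); check your instance.
from urllib.parse import urlencode

def build_upload_url(host: str, model_name: str = "") -> str:
    """Build the model-upload URL; the path is an assumed MLX API convention."""
    url = host.rstrip("/") + "/apis/v1alpha1/models/upload"
    if model_name:  # optional override of the name taken from the YAML file
        url += "?" + urlencode({"name": model_name})
    return url

print(build_upload_url("http://mlx-api.example.com", "my-model"))
# POSTing the .yaml (or .tar.gz/.tgz) file to this URL performs the upload,
# e.g. requests.post(url, files={"uploadfile": open("model.yaml", "rb")})
# -- the form-field name is likewise an assumption; check your MLX instance.
```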

## Serve the Model

A model can easily be deployed and run based on the metadata specified in its YAML file:

1. Under the "Models" tab, select a model
2. Switch to the "LAUNCH" tab
3. Optionally, provide a run name
4. Click "Submit" to run the pipeline, which will deploy the model