Add the quickstart #65

69 changes: 69 additions & 0 deletions content/docs/machine-learning/quickstart.md
---
sidebar_position: 2
title: Get Started with Kubeflow
description: Learn how to get started with Civo's Kubeflow via an example.
---

<head>
<title>Kubeflow quickstart | Civo Documentation</title>
</head>

This quickstart focuses on Pipelines and Notebooks: we walk through a simple example of training and running a model using Kubeflow Pipelines and Notebooks.

## Before you start

Make sure you can log into your Kubeflow cluster by following the instructions at [Logging into your cluster](kubeflow-dashboard.md#logging-into-your-cluster).

Make sure you can create a notebook and log into the notebook by following the instructions at [Creating a Kubeflow Notebook](creating-a-new-kubeflow-notebook.md).

## Run a pipeline

We will first compile a pipeline DSL that trains a model on the MNIST dataset. This pipeline runs the following components: hyperparameter tuning with Katib, creating a new volume for training, running a training job, and finally serving the model with KServe.
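The pipeline definition itself lives in the example script you will clone and run below. Purely as an illustration of the structure such a definition can take, here is a minimal sketch using the KFP v1 SDK; the pipeline name, step names, and container image are placeholder assumptions rather than the contents of the real example script.

```python
# Minimal sketch of an MNIST pipeline definition with the KFP v1 SDK.
# Names and images below are placeholders, not the real mnist-example.py.
import kfp
import kfp.dsl as dsl


@dsl.pipeline(
    name="end-to-end-pipeline",
    description="Tune, train, and serve an MNIST model.",
)
def mnist_pipeline(model_volume_size: str = "1Gi"):
    # 1. Hyperparameter tuning with Katib would normally run first as its own
    #    step (for example, launching a Katib Experiment) before training.

    # 2. Create a new volume to hold the trained model.
    volume_op = dsl.VolumeOp(
        name="create-model-volume",
        resource_name="model",
        size=model_volume_size,
        modes=dsl.VOLUME_MODE_RWO,
    )

    # 3. Run the training job with the volume mounted at /mnt/model.
    train_op = dsl.ContainerOp(
        name="train",
        image="tensorflow/tensorflow:2.11.0",  # placeholder image
        command=["python", "-c"],
        arguments=["print('training step placeholder')"],
        pvolumes={"/mnt/model": volume_op.volume},
    )
    train_op.after(volume_op)

    # 4. A final step would create a KServe InferenceService that serves the
    #    trained model from the volume.


if __name__ == "__main__":
    # Compile the DSL into the YAML file that is uploaded to the Pipelines UI.
    kfp.compiler.Compiler().compile(mnist_pipeline, "mnist-example.yaml")
```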

Now [create a new notebook instance](creating-a-new-kubeflow-notebook.md) and run the following commands:

```bash
git clone                          # clone the example repository
cd                                 # change into the cloned repository
pip install -r requirements.txt   # install the example's Python dependencies
python mnist-example.py           # compile the pipeline into mnist-example.yaml
```

This produces a `mnist-example.yaml` file that we will use to run our pipeline.

We can now open [Pipelines](kubeflow-dashboard.md/) and upload the pipeline configuration we generated in the previous step to define the pipeline. Once you have done so, create a new [experiment](kubeflow-dashboard.md/) and trigger a run of the pipeline by going to the Runs tab, clicking the "Create run" button, and choosing the pipeline you just created.
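If you prefer to work from the notebook rather than the dashboard, the KFP SDK client offers an equivalent path. This is an optional sketch rather than part of the walkthrough above; the pipeline, experiment, and run names are example values.

```python
import kfp

# Inside a Kubeflow notebook the client can usually discover the in-cluster
# Pipelines endpoint and credentials automatically.
client = kfp.Client()

# Upload the compiled pipeline definition from the previous step.
pipeline = client.upload_pipeline("mnist-example.yaml", pipeline_name="mnist-example")

# Create an experiment to group runs, then start a run of the uploaded pipeline.
experiment = client.create_experiment(name="mnist-quickstart")
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name="mnist-example-run",
    pipeline_id=pipeline.id,
)
print(run.id)
```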

This triggers a Kubeflow pipeline run, which you can follow in the Pipelines dashboard. Once the run is complete, the trained model is saved in the `end-to-end-pipeline-{ID}-model` volume that the pipeline created earlier.
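To confirm the volume is there, you can list the persistent volume claims in your profile namespace; the namespace below is a placeholder.

```bash
# List PVCs in your profile namespace; the end-to-end-pipeline-{ID}-model volume should appear.
kubectl get pvc -n my-profile
```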

## Predict using the model

We can now use the model we trained to make predictions. We will use the [KFServing](kubeflow-dashboard.md/) component of Kubeflow to serve our model. Modify the code below as needed (for example, the service name and namespace) and run it in the notebook instance you created to make predictions against the serving endpoint.

```python
import numpy as np
from PIL import Image
import requests

# Name of the InferenceService and the profile namespace it is served in
name = "kfaas-docs"
namespace = "my-profile"

# Fetch a sample handwritten-digit image to send to the model
image_url = "https://raw.githubusercontent.com/kubeflow/katib/master/examples/v1beta1/kubeflow-pipelines/images/9.bmp"
image = Image.open(requests.get(image_url, stream=True).raw)

# Convert to grayscale, resize to 28x28, and reshape to the model's input shape
data = (
    np.array(image.convert("L").resize((28, 28)))
    .astype(np.float32)
    .reshape(-1, 28, 28, 1)
)

# Serialize the array into the JSON payload expected by the predict endpoint
data_formatted = np.array2string(
    data, separator=",", formatter={"float": lambda x: "%.1f" % x}
)
json_request = '{{ "instances" : {} }}'.format(data_formatted)

# In-cluster prediction endpoint for the served model
url = "http://{}-predictor-default.{}.svc.cluster.local/v1/models/{}:predict".format(
    name, namespace, name
)
response = requests.post(url, data=json_request)

print("Prediction for the image")
display(image)  # display() is available when running inside a Jupyter notebook
print(response.json())
```
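A successful request returns a JSON body from the predict endpoint (typically containing a `predictions` field). If you would rather test the endpoint from a notebook terminal, one option is to save the JSON payload to a file and send the same request with curl; the file name here is an assumption.

```bash
# Send the same prediction request with curl from inside the cluster,
# assuming the JSON payload has been written to input.json.
curl -s -d @input.json \
  http://kfaas-docs-predictor-default.my-profile.svc.cluster.local/v1/models/kfaas-docs:predict
```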