Merge pull request #239 from tokk-nv/dev/openwebui
Add more images for Ollama native installer
tokk-nv authored Dec 17, 2024
2 parents 950ed44 + 23ab569 commit 7e866c3
Showing 3 changed files with 20 additions and 0 deletions.
Binary file added docs/images/ollama-official-installer.png
Binary file added docs/images/ollama-usage.png
20 changes: 20 additions & 0 deletions docs/tutorial_ollama.md
@@ -36,10 +36,30 @@ In this tutorial, we introduce two installation methods: (1) the default native

## (1) Native Install

Ollama's official installer already supports Jetson and makes it easy to install a CUDA-enabled build of Ollama.

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

![](./images/ollama-official-installer.png)

It creates a service that runs `ollama serve` on startup, so you can start using the `ollama` command right away.
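
Assuming a systemd-based system (as on standard Jetson images), you can check on that background service with the usual systemd tools; the commands below are a sketch of service management, not output from the installer itself:

```shell
# Check that the ollama systemd service registered by the installer is active
systemctl status ollama

# Follow the server logs (Ctrl-C to stop)
journalctl -u ollama -f
```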

### Example: Ollama usage

```bash
ollama
```

![](./images/ollama-usage.png)

### Example: run a model from the CLI

```bash
ollama run llama3.2:3b
```
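
Besides the CLI, the local `ollama serve` instance also exposes an HTTP API on port 11434. As a minimal sketch (assuming the service from the native install is running, and reusing the `llama3.2:3b` model from the example above), a generate request can be built with only the Python standard library:

```python
import json
import urllib.request

# Request body for Ollama's local HTTP API (default port 11434).
# The model name and prompt here are just examples.
payload = {
    "model": "llama3.2:3b",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the ollama service is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Setting `"stream": False` keeps the sketch simple; with streaming enabled (the default), the server returns one JSON object per generated chunk and the client must read line by line.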

## (2) Docker container for `ollama` using `jetson-containers`

