readme: distributed.yaml -> lora.yaml
KevinMusgrave committed Feb 26, 2024
1 parent 353939e commit 562ef91
Showing 1 changed file with 2 additions and 2 deletions: blog/llm-finetuning-2/README.md
@@ -9,7 +9,7 @@ pip install determined

Finetune with LoRA:
```bash
-det e create distributed.yaml .
+det e create lora.yaml .
```

Finetune with DeepSpeed:
@@ -19,7 +19,7 @@ det e create deepspeed.yaml .

## Configuration

-Change configuration options in `distributed.yaml`. Some important options are:
+Change configuration options in `lora.yaml`. Some important options are:
- `slots_per_trial`: the number of GPUs to use.
- `dataset_subset`: the difficulty subset to train on.
- `per_device_train_batch_size`: the batch size per GPU.
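For orientation, the sketch below shows how the options named in the README's Configuration list typically sit in a Determined experiment config. It is a minimal, hypothetical sketch: only `slots_per_trial`, `dataset_subset`, and `per_device_train_batch_size` come from the README; the experiment name, entrypoint, values, and exact nesting are assumptions and may not match the actual `lora.yaml` in the repository.

```yaml
# Hypothetical sketch of lora.yaml -- not the actual file from the repository.
name: llm-finetuning-lora            # assumed experiment name
entrypoint: python finetune.py       # assumed entrypoint script
resources:
  slots_per_trial: 2                 # number of GPUs to use
hyperparameters:
  dataset_subset: easy               # difficulty subset to train on (assumed nesting/value)
  per_device_train_batch_size: 8     # batch size per GPU (assumed nesting/value)
```

As in the README, the experiment would be launched with `det e create lora.yaml .` from the `blog/llm-finetuning-2` directory.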
