diff --git a/README.md b/README.md
index 5aa7116..e3c3678 100644
--- a/README.md
+++ b/README.md
@@ -73,7 +73,7 @@ Options:
 - `-l, --history-prompt-length ` Length of prompt history to retain for next request batch (default: 10)
 - `-b, --batch-sizes `
-  Batch sizes of increasing order for translation prompt slices in JSON Array (default: `"[10, 100]"`)
+  Batch sizes of increasing order for translation prompt slices in JSON Array (default: `"[10,100]"`)
 
   The number of lines to include in each translation prompt, provided that they are estimated to within the token limit.
   In case of mismatched output line quantities, this number will be decreased step-by-step according to the values in the array, ultimately reaching one.
 
@@ -85,7 +85,11 @@ Options:
 - `--experimental-structured-mode array` Structures the input and output into a plain array format. This option is more concise compared to base mode, though it uses slightly more tokens per batch.
 - `--experimental-structured-mode object` Structures both the input and output into a dynamically generated object schema based on input values. This option is even more concise and uses fewer tokens, but requires smaller batch sizes and can be slow and unreliable. Due to its unreliability, it may lead to more resubmission retries, potentially wasting more tokens in the process.
 - `--experimental-use-full-context`
-  Include the full context of translated data to work well with [prompt caching](https://openai.com/index/api-prompt-caching/). The translated lines per user and assistant message pairs are sliced as defined by `--history-prompt-length`. May risk running into the model's context window limit, typically `128K`, which should be sufficient for most cases.
+  Include the full context of translated data to work well with [prompt caching](https://openai.com/index/api-prompt-caching/).
+
+  The translated lines per user and assistant message pair are sliced as defined by `--history-prompt-length` (by default `--history-prompt-length 10`). It is recommended to set this to the largest batch size (by default `--batch-sizes "[10,100]"`), i.e. `--history-prompt-length 100`.
+
+  Enabling this may risk running into the model's context window limit, typically `128K`, which should be sufficient for most cases.
 - `--log-level ` Log level (default: `debug`, choices: `trace`, `debug`, `info`, `warn`, `error`, `silent`)
 - `--silent`
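
Not part of the patch above: a minimal usage sketch of the flag pairing the added README text recommends. The `chatgpt-subtitle-translator` command name is an assumption; substitute the project's actual CLI entry point. Only flags documented in the diff are used.

```sh
# Illustrative sketch only, not part of the patch.
# The command name "chatgpt-subtitle-translator" is an assumption;
# use the project's actual CLI entry point.
# Pairs --experimental-use-full-context with a history prompt length equal to
# the largest batch size, as the added README text recommends.
chatgpt-subtitle-translator \
  --experimental-use-full-context \
  --batch-sizes "[10,100]" \
  --history-prompt-length 100
```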