Feature: Advanced OpenAI configuration #258

Open
sebiweise wants to merge 5 commits into main
Conversation

@sebiweise (Contributor) commented Oct 24, 2024

In this PR I've added some OpenAI configuration parameters, updated the `.env.example` file, and bumped the Next.js, OpenAI, and Prisma packages.

OpenAI can now be configured via environment variables. The following variables were added:

| Variable | Default | Description |
| --- | --- | --- |
| `OPENAI_API_URL` | `https://api.openai.com/v1` | Defines a custom OpenAI-compatible endpoint. I've tested this with the official OpenAI endpoint, Ollama, and a Cloudflare AI Gateway. |
| `OPENAI_CATEGORY_EXPENSE_MODEL` | `gpt-3.5-turbo` | Overrides the OpenAI model used for expense category determination. |
| `OPENAI_CATEGORY_RECEIPT_EXTRACT_MODEL` | `gpt-4-turbo` | Overrides the OpenAI model used for the receipt extraction feature. |
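
For illustration, a minimal sketch of how these variables could be wired into the OpenAI SDK client (this is not the PR's actual code; `categorizeExpense` and the prompt are made up for the example, and the fallbacks mirror the defaults above):

```ts
import OpenAI from "openai";

// Hypothetical wiring sketch; fallback values mirror the documented defaults.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_API_URL ?? "https://api.openai.com/v1",
});

export async function categorizeExpense(title: string) {
  const completion = await openai.chat.completions.create({
    model: process.env.OPENAI_CATEGORY_EXPENSE_MODEL ?? "gpt-3.5-turbo",
    messages: [
      { role: "user", content: `Determine the expense category for: ${title}` },
    ],
  });
  return completion.choices[0].message.content;
}
```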

Using the Cloudflare AI Gateway helps reduce the number of requests sent to OpenAI, and therefore the cost they generate: identical prompts (for example, the same expense titles) are cached by Cloudflare and reused for subsequent requests.
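
A gateway-backed setup only needs the endpoint swapped, roughly like this (the account and gateway IDs below are placeholders; check the Cloudflare AI Gateway docs for your gateway's exact URL):

```
OPENAI_API_URL="https://gateway.ai.cloudflare.com/v1/<ACCOUNT_ID>/<GATEWAY_ID>/openai"
```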

It is also possible to use Ollama as your AI endpoint. Ollama exposes an OpenAI-compatible API, so the OpenAI SDKs can be used with it (Docs).
Just set your Ollama server URL as OPENAI_API_URL="http://localhost:11434/v1" and define your preferred local model in OPENAI_CATEGORY_EXPENSE_MODEL and OPENAI_CATEGORY_RECEIPT_EXTRACT_MODEL.
OPENAI_API_KEY cannot be null and must be set to some placeholder value; Ollama ignores it. A complete example follows below.
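
Putting the Ollama setup together, a complete configuration could look like this (the API key value is an arbitrary placeholder, since Ollama ignores it):

```
OPENAI_API_URL="http://localhost:11434/v1"
OPENAI_API_KEY="ollama"
OPENAI_CATEGORY_EXPENSE_MODEL="llama3"
OPENAI_CATEGORY_RECEIPT_EXTRACT_MODEL="llama3"
```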

@sebiweise changed the title from Feature: Extend OpenAI configuration to Feature: Advanced OpenAI configuration on Oct 24, 2024
@ChristopherJohnston (Contributor) commented:

See #166

@sebiweise (Contributor, Author) commented:

> See #166

Yeah, as far as I can see, your PR aims to add config options for Ollama rather than change anything OpenAI-related.

@ChristopherJohnston (Contributor) commented:

No worries, I was just linking them since they are related; there may be merge conflicts for one if/when the other is merged.

@sebiweise (Contributor, Author) commented:

> No worries, I was just linking them since they are related; there may be merge conflicts for one if/when the other is merged.

Alright, but I think we can skip the use of fetch and just use my change to also support Ollama. As far as I can tell, just changing the URL and the model will do the trick for Ollama.
Have a look at this Ollama documentation: https://ollama.com/blog/openai-compatibility
Ollama should be OpenAI SDK compatible 🤔
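
For what it's worth, a quick compatibility check along the lines of that blog post could look like this with the official `openai` npm package (the model name and URL are just examples):

```ts
import OpenAI from "openai";

// Point the official OpenAI SDK at a local Ollama server.
const ollama = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // required by the SDK, ignored by Ollama
});

async function main() {
  const res = await ollama.chat.completions.create({
    model: "llama3",
    messages: [{ role: "user", content: "Say hello" }],
  });
  console.log(res.choices[0].message.content);
}

main();
```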

@sebiweise (Contributor, Author) commented:

Yes, I can confirm that you can use my changes to run Ollama local models like this example:

```
OPENAI_API_URL="http://127.0.0.1:11434/v1"
OPENAI_CATEGORY_EXPENSE_MODEL="llama3"
OPENAI_CATEGORY_RECEIPT_EXTRACT_MODEL="llama3"
```

@sebiweise (Contributor, Author) commented:

Updated the PR description to include the Ollama instructions.
