How to Handle Token Limit Exceeded Error in OpenAI API #2897
Unanswered · MuhammedTech asked this question in Q&A · 0 replies
I'm getting an error from the OpenAI API stating that the context length exceeds the model's limit, even though I'm only passing the last four messages to the prompt. I've verified that each interaction uses around 1,056 tokens, yet I still hit the error when sending the prompt to the model, and I'm not sure why I'm exceeding the token limit. Full error message:
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 8452 tokens (8415 in the messages, 37 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
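For reference, this is how I'm measuring and trimming the history before the call, in case the retrieved context injected into the prompt is what pushes the payload over rather than the four chat messages themselves. This is a rough sketch using tiktoken: the +4 per-message overhead is an approximation of the chat format markers (not the exact server-side accounting), and the 7,000-token budget is an arbitrary headroom below the 8,192 limit.

```python
import tiktoken


def num_tokens(text: str, model: str = "gpt-4") -> int:
    """Count tokens with the same tokenizer the model uses."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))


def trim_messages(messages: list[dict], budget: int = 7000, model: str = "gpt-4") -> list[dict]:
    """Keep the most recent messages that fit inside a token budget.

    The +4 per-message overhead is an approximation of the chat
    format markers, not OpenAI's exact accounting.
    """
    kept, used = [], 0
    for message in reversed(messages):
        cost = num_tokens(message["content"], model) + 4
        if used + cost > budget:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
print(sum(num_tokens(m["content"]) + 4 for m in messages))
```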
Here is my code
For the embeddings I am using OpenAI embeddings with chunk size = 1000 and overlap = 200, parsing with LlamaParse and Unstructured for the MarkdownLoader.
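Roughly, the embedding and chunking part looks like this (a sketch only; the class names are LangChain-style assumptions, and the LlamaParse / Unstructured MarkdownLoader step is omitted since I haven't pasted the full pipeline):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Placeholder for the output of the LlamaParse / Unstructured MarkdownLoader step
markdown_text = "..."

# Chunking settings from my setup: chunk size 1000, overlap 200
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(markdown_text)

# Embed the chunks with OpenAI embeddings
embeddings = OpenAIEmbeddings()
vectors = embeddings.embed_documents(chunks)
```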
Any advice or solutions would be greatly appreciated!