```
{'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 20189 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
```
The Writer Agent can't write a long article and also can't receive a long prompt. I think we need to break the writing task into smaller slice tasks: for example, tell GPT there are several source articles to read and ask it to summarize each one; then give all the summaries to GPT and ask it to produce an article outline as a JSON response; finally, send each part of that JSON as a prompt to write the corresponding part of the article.
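A minimal sketch of that slice-task pipeline, assuming the OpenAI Python client (v1) and plain chat calls; helper names like `summarize`, `make_outline`, and `write_section` are illustrative, not existing Writer Agent APIs:

```python
# Sketch of the slice-task idea: summarize each source separately,
# build a JSON outline from the summaries, then write one section per
# outline entry. Assumes OPENAI_API_KEY is set in the environment.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo-16k"  # 16,385-token context window

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def summarize(article: str) -> str:
    # Slice task 1: each source article is summarized on its own,
    # so no single request exceeds the context window.
    return chat(f"Summarize the following article in about 200 words:\n\n{article}")

def make_outline(summaries: list[str]) -> list[dict]:
    # Slice task 2: only the short summaries are sent; the model
    # returns the outline as JSON, one entry per section.
    joined = "\n\n".join(summaries)
    raw = chat(
        "Based on these summaries, produce an article outline as a JSON "
        'array of objects like {"title": ..., "points": [...]}. '
        f"Return only the JSON.\n\n{joined}"
    )
    # json.loads will raise if the model wraps the JSON in prose;
    # a real agent would validate and retry here.
    return json.loads(raw)

def write_section(section: dict) -> str:
    # Slice task 3: each outline entry becomes its own writing prompt.
    return chat(
        f"Write the article section titled '{section['title']}', "
        f"covering these points: {section['points']}"
    )

def write_article(articles: list[str]) -> str:
    summaries = [summarize(a) for a in articles]
    outline = make_outline(summaries)
    return "\n\n".join(write_section(s) for s in outline)
```

Because each request carries only summaries or a single outline entry, every call stays well under the 16,385-token limit no matter how long the source articles are.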
The most important thing is to keep GPT aware of the prompt context the whole time; we need to carry the context from earlier tasks along whenever we send a new prompt.
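Continuing the sketch above (it reuses `chat` and `json` from there), one hypothetical way to carry that context without resending full text is to prepend the outline plus truncated recaps of the sections already written to each new section prompt:

```python
def write_section_with_context(section: dict, outline: list[dict],
                               written_so_far: list[str]) -> str:
    # Variant of write_section: each prompt carries the outline and a
    # short recap of earlier sections, so the model keeps the article's
    # context while the added token cost stays bounded.
    recap = "\n".join(s[:200] for s in written_so_far)  # first ~200 chars each
    return chat(
        f"Article outline: {json.dumps(outline)}\n"
        f"Sections written so far (truncated recaps):\n{recap}\n\n"
        f"Now write the section titled '{section['title']}', "
        f"covering these points: {section['points']}"
    )
```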
Anyway, I'm trying this approach now, though I don't know yet how it will turn out.