Failed to use deepseek #7646

Open
sdw777 opened this issue Jan 9, 2025 · 0 comments
sdw777 commented Jan 9, 2025

2025-01-09 06:28:25,512 - wren-ai-service - INFO - Question Recommendation pipeline is running... (question_recommendation.py:189)
Forcing deployment: {'data': {'deploy': {'status': 'FAILED', 'error': 'Wren AI: Deploy wren AI failed or timeout, hash: ceac115ccc03d255caf6a0574eb5bd954256a5d2'}}}

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.


generate [src.pipelines.generation.question_recommendation.generate()] encountered an error
Node inputs:
{'generator': '<function LitellmLLMProvider.get_generator.<locals...',
'prompt': "<Task finished name='Task-111' coro=<AsyncGraphAda..."}


Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 485, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 688, in acompletion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 666, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 378, in make_openai_chat_completion_request
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 366, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 373, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/langfuse/openai.py", line 744, in _wrap_async
    raise ex
  File "/app/.venv/lib/python3.12/site-packages/langfuse/openai.py", line 700, in _wrap_async
    openai_response = await wrapped(**arg_extractor.get_openai_args())
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1720, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1843, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1537, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1638, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.UnprocessableEntityError: Failed to deserialize the JSON body into the target type: response_format: response_format.type json_schema is unavailable now at line 1 column 37554

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/hamilton/async_driver.py", line 122, in new_fn
    await fn(**fn_kwargs) if asyncio.iscoroutinefunction(fn) else fn(**fn_kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 220, in async_wrapper
    self._handle_exception(observation, e)
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 516, in _handle_exception
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 218, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/pipelines/generation/question_recommendation.py", line 42, in generate
    return await generator(prompt=prompt.get("prompt"))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/providers/llm/litellm.py", line 70, in _run
    completion: Union[ModelResponse] = await acompletion(
                                       ^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1220, in wrapper_async
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1074, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 507, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2148, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 373, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: DeepseekException - Failed to deserialize the JSON body into the target type: response_format: response_format.type json_schema is unavailable now at line 1 column 37554

Oh no an error! Need help with Hamilton?
Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-2niepkra8-DGKGf_tTYhXuJWBTXtIs4g

2025-01-09 06:28:26,059 - wren-ai-service - ERROR - An error occurred during question recommendation generation: litellm.BadRequestError: DeepseekException - Failed to deserialize the JSON body into the target type: response_format: response_format.type json_schema is unavailable now at line 1 column 37554 (question_recommendation.py:60)
INFO: 172.18.0.6:57484 - "GET /v1/question-recommendations/707db24f-d9d0-4257-bb7a-45f6d029e605 HTTP/1.1" 200 OK
