
[Request] Regarding Compatibility of Visual Tree of Thoughts function with OpenWebUI and Litellm #71

Open
Greatz08 opened this issue Oct 30, 2024 · 2 comments
Labels: documentation (Improvements or additions to documentation), question (Further information is requested)

Comments


Greatz08 commented Oct 30, 2024

Reddit Discussion = https://www.reddit.com/r/LocalLLaMA/comments/1fnjnm0/visual_tree_of_thoughts_for_webui/

Code = https://pastebin.com/QuyrcqZC

This is what I found in the Reddit discussion linked above: a patched version of the Open WebUI Visual Tree of Thoughts PIPE function that works against an OpenAI API endpoint. That means any model routed through LiteLLM should work with it too, and it did: several of my LiteLLM-routed proprietary LLMs (Google Gemini family, Anthropic, Groq) were picked up, and the function created clones of the original LiteLLM models with an 'mcts' prefix, as you can see in the images below:

[Screenshots: LiteLLM-routed models listed in Open WebUI alongside their 'mcts'-prefixed clones]
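
For context, "OpenAI-compatible" here just means the pipe can talk to anything that speaks the standard chat-completions API, which is exactly what the LiteLLM proxy exposes. A minimal sketch of that call path, assuming the proxy on its default port 4000 and a hypothetical model alias (adjust both for your own config):

```python
# Minimal sketch: calling a LiteLLM-routed model through its OpenAI-compatible
# endpoint. The base_url, api_key, and model alias below are assumptions for a
# local setup; adjust them to match your own LiteLLM config.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # LiteLLM proxy (default port 4000)
    api_key="sk-anything",                # whatever key your proxy expects
)

resp = client.chat.completions.create(
    model="gemini-1.5-pro",  # hypothetical alias defined in the LiteLLM model_list
    messages=[{"role": "user", "content": "Hello from Open WebUI"}],
)
print(resp.choices[0].message.content)
```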

But when I use those mcts models, they give me this error: 'Depends' object has no attribute 'name'

[Screenshot: the 'Depends' object has no attribute 'name' error shown in the Open WebUI chat]

My guess is that this is related to how the function works; correct me if I am wrong. The function only has a 'name' attribute, which I guess LiteLLM doesn't understand, and that is why the request isn't being processed? So either the function needs to be modified, or LiteLLM would have to be forked and patched. That is as far as my understanding goes, so if it is possible for you to fix it, please guide me and other Open WebUI users so that we can use the Visual Tree of Thoughts function not just with Ollama models but also with the other models LiteLLM supports.
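
For what it's worth, the error string itself points at FastAPI rather than LiteLLM: Depends is FastAPI's dependency placeholder, and it ends up being used as a real value whenever a dependency-injected helper is called directly instead of through an HTTP request. A rough reproduction of that failure mode under that assumption; the names below are illustrative, not Open WebUI's actual code:

```python
# Minimal reproduction of the error class (illustrative names, not Open WebUI's code).
import asyncio
from fastapi import Depends


class User:
    def __init__(self, name: str):
        self.name = name


def get_verified_user() -> User:
    # In a real app this would resolve the authenticated user from the request.
    return User("alice")


async def generate_chat_completion(form_data: dict, user: User = Depends(get_verified_user)):
    # Served through FastAPI, `user` is a resolved User instance.
    return {"model": form_data["model"], "requested_by": user.name}


# A pipe that calls the helper directly bypasses FastAPI's dependency injection,
# so `user` keeps its default value: the Depends(...) placeholder object.
asyncio.run(generate_chat_completion({"model": "mcts/gemini-pro"}))
# -> AttributeError: 'Depends' object has no attribute 'name'
```

If that is what is happening here, the fix would be on the function's side (resolve the user and pass it explicitly, e.g. user=user, when calling the backend helper) rather than anything in LiteLLM.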

By the way, thank you very much for this awesome function. It feels great to see the LLM's thought process laid out this way; it helps us understand a lot more about how it thinks and processes information.

av (Owner) commented Nov 2, 2024

Hey, thanks for reaching out! Always nice to see someone from r/LocalLLaMa here :)

One huge issue with the original mcts was the fact that it only works with the Ollama backend in the WebUI (see here)

Unfortunately, the Ollama and OpenAI apps in the WebUI backend are like twins: almost identical, but still completely separate, so I didn't get around to refactoring the function to support both via some kind of dynamic routing based on the model source
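
For the curious, that dynamic routing would amount to something like the sketch below: dispatch to the Ollama or OpenAI code path depending on where the model comes from. Helper and field names here are placeholders, not the actual WebUI backend API:

```python
# Sketch of the dynamic-routing idea (placeholder names, not the real WebUI API).

async def generate_ollama_chat_completion(form_data: dict, user=None) -> dict:
    ...  # stands in for the WebUI's Ollama code path


async def generate_openai_chat_completion(form_data: dict, user=None) -> dict:
    ...  # stands in for the WebUI's OpenAI code path


async def route_completion(form_data: dict, model: dict, user=None) -> dict:
    # Assumes the model record carries a marker for its source; "owned_by"
    # is an assumption about what that marker is called.
    if model.get("owned_by") == "ollama":
        return await generate_ollama_chat_completion(form_data, user=user)
    return await generate_openai_chat_completion(form_data, user=user)
```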

Workarounds:

  • There's a community-patched version that is switched to the OpenAI backend
  • It's possible to instead use the mcts module from Harbor Boost (see the sketch below)
    • Works with any OpenAI-compatible API
    • Unfortunately, being a downstream service and not a Function, it doesn't have access to some of the fancy features Functions get (full content replacement, status bar), so the presentation is slightly different (linear, append-only), but it'll still have all the ToT charts
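
Since boost is just another OpenAI-compatible API sitting in front of your models, wiring it up means pointing a client (or an extra OpenAI connection in Open WebUI) at its endpoint and picking the module-wrapped model. A hedged sketch only: the address, the placeholder key, and the mcts-prefixed model id are assumptions about a local setup, so check the Harbor Boost docs for the actual values:

```python
# Sketch: calling Harbor Boost's OpenAI-compatible endpoint directly.
# base_url, api_key, and the model id are assumptions; check boost's docs/logs
# for the real address and the model ids it advertises.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8004/v1",  # assumed local boost address
    api_key="sk-boost",                   # placeholder key
)

# Listing models should show the module-wrapped variants (e.g. an mcts-* id)
for m in client.models.list().data:
    print(m.id)

resp = client.chat.completions.create(
    model="mcts-llama3.1:8b",  # assumed id of an mcts-wrapped model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(resp.choices[0].message.content)
```

The same endpoint can also be added in Open WebUI as an additional OpenAI API connection, at which point the wrapped models show up in the model picker.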

av added the documentation and question labels Nov 2, 2024
Greatz08 (Author) commented Nov 5, 2024

@av I have tried the first workaround and it doesn't work, as I described in my long message. At the start I included a Code link that points to the same pastebin you mentioned in your reply :-)). The screenshots I added clearly show the error I get when using that patched version. Could you set up LiteLLM and try it yourself to see whether you hit the same issue in Open WebUI? If you do, could you fix it? If yes, please do guide me, as it would make things much easier.
About the second workaround: can you guide me, in the simplest possible way, on how to set it up locally and then connect it to LiteLLM and Open WebUI to get that functionality?
