This is what I found in the Reddit discussion mentioned above: it makes the OpenWebUI Visual Tree of Thoughts PIPE function compatible with the OpenAI API endpoint, which means we can use any model through LiteLLM and the function should work with those models too. And it did, in part: it created clones of the original LiteLLM models with an 'mcts' prefix, and I could see several of my LiteLLM-routed proprietary LLMs (Google Gemini family models, Anthropic models, Groq models), as you can see in the images below:

But when I use those mcts models, they give me this error: 'Depends' object has no attribute 'name'

My guess is that this is related to how the function works; correct me if I am wrong. The function only has a 'name' attribute, which I suspect LiteLLM does not understand, and that is why it is not processing the request? So either the function needs to be modified, or forking LiteLLM and changing a few things could be the solution. That is as far as my thinking goes, so if it is possible for you to fix this, please guide me and other OpenWebUI users so that we can use the Visual Tree of Thoughts function not just with Ollama models but with all the other models LiteLLM supports.
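For context on that error: in FastAPI, a `Depends(...)` default is only resolved when the endpoint is invoked through the router. If a pipe calls such a function directly from Python without supplying the dependency, the parameter keeps the `Depends` sentinel as its value, and any attribute access like `.name` on it raises exactly this AttributeError. Below is a minimal sketch of that failure mode; the helper name and user fields are hypothetical stand-ins, not OpenWebUI's actual code:

```python
# Minimal reproduction of the failure mode (not OpenWebUI's actual code):
# calling a FastAPI endpoint function directly leaves its Depends() default
# unresolved, so the sentinel object is used as the "user".
import asyncio
from fastapi import Depends

class User:
    # Hypothetical stand-in for the user object OpenWebUI resolves per request.
    name = "demo-user"

def get_current_user() -> User:
    return User()

async def generate_completion(form_data: dict, user=Depends(get_current_user)):
    # Through the router, `user` would be a resolved User instance.
    # Called directly from a pipe, `user` is still the Depends(...) sentinel.
    return f"routing request for {user.name}"

try:
    # Direct call without a user -> AttributeError: 'Depends' object has no attribute 'name'
    asyncio.run(generate_completion({"model": "mcts-gemini-1.5-pro"}))
except AttributeError as exc:
    print(exc)

# Passing the resolved dependency explicitly avoids the error.
print(asyncio.run(generate_completion({"model": "mcts-gemini-1.5-pro"},
                                      user=get_current_user())))
```

If the patched pipe calls one of the WebUI's internal completion helpers without passing a user object along, that would line up with this traceback, but only the maintainer can confirm which call it actually is.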
Btw, thank you very much for this awesome function. It feels great to see the LLM's thought process laid out that way; it helps us understand a lot more about how it thinks and processes information.
Hey, thanks for reaching out! Always nice to see someone from r/LocalLLaMa here :)
One huge issue with the original MCTS function was that it only works with the Ollama backend in the WebUI (see here)
Unfortunately, the Ollama and OpenAI apps in the WebUI backend are like twins: almost identical but still completely separate, so I haven't gotten around to refactoring the function to support both via some kind of dynamic routing based on the model source (a rough sketch of that idea is below)
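To make the routing idea concrete, here's what the dispatch could look like. The backend callables are hypothetical stand-ins, not the WebUI's real helpers (those differ between WebUI versions); only the dispatch-by-model-source shape is the point:

```python
# Rough sketch of dynamic routing by model source. The two backend callables
# are hypothetical stand-ins; only the dispatch logic is illustrative.
import asyncio
from typing import Any, Awaitable, Callable, Dict

async def call_ollama(payload: Dict[str, Any]) -> str:
    return f"ollama backend handled {payload['model']}"

async def call_openai_compatible(payload: Dict[str, Any]) -> str:
    return f"openai-compatible backend handled {payload['model']}"

def pick_backend(model_info: Dict[str, Any]) -> Callable[[Dict[str, Any]], Awaitable[str]]:
    # Assumes the model registry exposes an "owned_by"-style field that tells
    # Ollama-served models apart from OpenAI-style (LiteLLM, etc.) models.
    if model_info.get("owned_by") == "ollama":
        return call_ollama
    return call_openai_compatible

model_info = {"id": "mcts-gemini-1.5-pro", "owned_by": "openai"}
backend = pick_backend(model_info)
print(asyncio.run(backend({"model": model_info["id"], "messages": []})))
```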
Unfortunately, being a downstream service and not a Function, it doesn't have access to some of the fancy features available to Functions (full content replacement, the status bar), so the presentation is slightly different (linear, append-only), but it will still render all the ToT charts
@av I tried the first workaround and it doesn't work, as I mentioned in my long message. At the start I included a code link that redirects to the same Pastebin link you mentioned in your reply :-)). The screenshots I added clearly show the error I faced when using that patched version. Could you set up LiteLLM yourself and check whether you hit the same issue in OpenWebUI? If you do, can you fix it? If so, please share the steps, as it would make things much easier.

About the second workaround: can you explain, in the simplest way, how I can set it up locally and then connect it with LiteLLM and OpenWebUI to get that functionality?
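For what it's worth, one way to sanity-check the LiteLLM half of such a chain is to hit the proxy directly with an OpenAI client before putting anything in front of it. The URL, port, key, and model name below are assumptions based on LiteLLM proxy defaults, not the actual setup of the second workaround:

```python
# Smoke test for the LiteLLM proxy: confirm it answers OpenAI-style chat
# requests before wiring OpenWebUI or any middleware in front of it.
# base_url assumes the proxy's default local port (4000); adjust as needed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # LiteLLM proxy (assumed default)
    api_key="sk-anything",  # any string works unless the proxy has auth configured
)

resp = client.chat.completions.create(
    model="gemini-1.5-pro",  # whichever model_name your LiteLLM config defines
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

If that responds, OpenWebUI (or the downstream service) can be pointed at the same base URL as an OpenAI-compatible connection.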
Reddit Discussion = https://www.reddit.com/r/LocalLLaMA/comments/1fnjnm0/visual_tree_of_thoughts_for_webui/
Code = https://pastebin.com/QuyrcqZC