Great work THUDM team, thanks for this amazing model!
The app and the installers: https://www.patreon.com/posts/120193330
Check the screenshots below to see how to use it.
Currently the app works great with 4-bit quantization and is very fast.
I am looking into lowering VRAM usage even further, e.g. by adding CPU offloading, if possible.
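As a rough illustration of the CPU-offloading idea, here is a minimal sketch of a Hugging Face Accelerate-style `device_map` that keeps most transformer layers on the GPU and offloads the last few to CPU. The layer count and module names (`model.layers.{i}`, `lm_head`, etc.) are hypothetical, not taken from CogVLM2's actual architecture; in practice you would pass such a map (or simply `device_map="auto"` with a `max_memory` budget) to `from_pretrained` together with a 4-bit quantization config.

```python
# Hypothetical sketch of a manual device_map for partial CPU offloading.
# Module names and layer count are illustrative, NOT CogVLM2's real ones.
NUM_LAYERS = 32   # hypothetical total transformer layers
GPU_LAYERS = 28   # how many layers to keep on GPU 0

device_map = {
    "model.embed_tokens": 0,   # embeddings on GPU 0
    "model.norm": "cpu",       # final norm offloaded to CPU
    "lm_head": "cpu",          # output head offloaded to CPU
}
for i in range(NUM_LAYERS):
    # first GPU_LAYERS layers stay on GPU 0, the rest go to CPU
    device_map[f"model.layers.{i}"] = 0 if i < GPU_LAYERS else "cpu"
```

Offloaded layers are streamed through the CPU, so this trades inference speed for lower VRAM usage.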
Previously we were lacking Triton support, but it now works perfectly.
My installer installs into a completely isolated and clean Python 3.10 VENV.
You can see the entire app and installer source code.
If you get a Triton error, make sure to delete your Triton cache after installing the app, e.g.:
C:\Users\Furkan\.triton
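Deleting the cache folder by hand works, but the step above can also be sketched as a small cross-platform snippet. This assumes the cache lives at `.triton` under the user's home directory (as in the Windows path above); stale compiled kernels are then rebuilt on the next run.

```python
# Sketch: remove the Triton kernel cache (e.g. C:\Users\<name>\.triton on
# Windows, ~/.triton on Linux) so kernels are recompiled on the next run.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".triton"  # assumed default cache location
if cache_dir.exists():
    shutil.rmtree(cache_dir)
    print(f"Deleted {cache_dir}")
else:
    print("No Triton cache found")
```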
You can find 19 example captions here, very powerful: https://www.reddit.com/r/SECourses/comments/1i3i53q/most_powerful_vision_model_cogvlm_2_now_works/