Feat: Use the best available model #6
Right ... so we/you have to scale up the builder, which currently has limited resources. I don't have any issues with using a bigger instance. The question is: will using a bigger pre-trained model significantly increase the cold-boot time for the app? 💭 ⏳
I think so. If I just increase the model size, loading will probably take much, much longer. I'll check whether caching is actually working and whether I can get this to work.
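One cheap way to check whether caching is effective is to time the model load on a cold start versus a warm start. A minimal, generic sketch — the `load_model` stand-in is hypothetical, not the app's real loading call:

```python
import time

def timed(label, fn):
    """Run fn() and report its wall-clock time -- compare cold vs. warm loads."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return result, elapsed

# Hypothetical stand-in for the real model-loading call:
def load_model():
    time.sleep(0.1)  # pretend download/deserialisation work
    return "model"

model, cold = timed("cold load", load_model)
# With an effective cache, a second ("warm") load should be much faster
# than the first; with this stand-in both runs take the same time.
model, warm = timed("warm load", load_model)
```

If warm loads are not noticeably faster than cold ones, the cache is likely not being hit.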
Ok. Sounds like you might want to cache the model and any other useful stuff on an attached volume.
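A sketch of what caching on an attached volume could look like. The volume name `model_cache`, mount path `/data/models`, and the `BUMBLEBEE_CACHE_DIR` variable are assumptions (the env var applies if the app uses Bumblebee; other libraries have analogous settings, e.g. `HF_HOME` for Hugging Face):

```sh
# Create a persistent volume in the app's region (size in GB):
fly volumes create model_cache --size 10

# Mount it in fly.toml so the downloaded model survives restarts:
#   [mounts]
#     source = "model_cache"
#     destination = "/data/models"

# Point the model library's cache at the mounted path, e.g. via [env]
# in fly.toml or:
fly secrets set BUMBLEBEE_CACHE_DIR=/data/models
```

With the cache on a volume, only the first boot pays the full download cost.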
Definitely open a forum topic on https://community.fly.io to discuss how best to run this on Fly. |
Just captured the
We need to address
As noted in #5 (comment) the mini model currently deployed returns basic classification. 💭
We want to use the best available pre-trained model to maximise the effectiveness of classification. ✨
The Fly.io instance (https://fly.io/dashboard/dwyl-img-class) is currently:
`shared-cpu-1x@256MB`
When the VM is not in-use it gets paused: https://fly.io/blog/fly-now-with-power-pause/
So we can easily bump this to `2 GB` or `4 GB` of RAM without fear of it costing us a fortune: https://fly.io/docs/about/pricing/#compute
Please:
- Bump the `RAM` size and `CPU` to comfortably match the requirements of the best model.
- Document the steps (`fly` CLI commands) you used to scale up.

Thanks.
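For reference, scaling could look something like the following — a sketch, not the exact commands used; the app name `dwyl-img-class` is inferred from the dashboard URL above:

```sh
# Scale RAM to 2 GB (values are in MB):
fly scale memory 2048 --app dwyl-img-class

# Or move to a bigger VM preset (more CPU) with 4 GB RAM:
fly scale vm shared-cpu-2x --memory 4096 --app dwyl-img-class

# Verify the current VM size:
fly scale show --app dwyl-img-class
```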
Chore