Hi all,
I am in the early stages of an app involving TTS and have been looking for methods to optimize the cost and latency of tortoise inference in prod.
I am curious to know if anyone else has tried to deploy tortoise in prod and what methods you have successfully used.
So far, here is what I have tried: the ultra_fast preset is good enough to produce TTS that is almost at parity with GCP TTS, but at much lower cost. I am still experimenting with the params, but so far I have not found anything that significantly decreases inference latency while maintaining the same quality.
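For context, here is roughly how I am invoking it. This is a minimal sketch using the public tortoise-tts Python API; the voice name ("tom", one of the demo voices bundled with the repo) and the output path are placeholders:

```python
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

# Load the model once at startup; initialization dominates cold-start time,
# so in prod you want to keep this object alive between requests.
tts = TextToSpeech()

# "tom" ships with the repo; swap in your own reference clips for a custom voice.
voice_samples, conditioning_latents = load_voice("tom")

# ultra_fast trades some quality for a large drop in inference latency.
gen = tts.tts_with_preset(
    "Hello, this is a latency test.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="ultra_fast",
)

# Tortoise outputs 24 kHz audio; drop the batch dim before saving.
torchaudio.save("out.wav", gen.squeeze(0).cpu(), 24000)
```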
Curious to hear any suggestions you have, as I am still new to the DL/transformers space.
Thanks,
Faiz
Replies: 1 comment
Hey, thanks for your work on this. Curious to hear how the optimizations went and what speed you're able to run TTS at!