Is your feature request related to a problem?
None.
Describe alternatives you've considered
Stealing a whole batch of tasks at a time, so as to reduce the number of steal operations, might work better. I'm working on a related optimization and am not sure whether this would help, so I'd like to ask in advance whether there have been any related considerations or experiments.
Additional context/screenshots
Batching may increase latency, and the optimal batch size under a latency constraint is hard to find. If a tool were provided to find the optimal batch size, this would indeed be a good optimization.
Finding a suitable batch size is hard, because the tasks themselves vary in size; even if you steal a well-sized batch, there is no guarantee the work finishes evenly.
My thinking is that this closely resembles a stream-versus-batch processing system: the current design can be viewed as streaming, and the proposal turns it into batching. Different production scenarios will surely favor different batch sizes, so the batch size could be made configurable, with a default of _rq.size() / NUM_WORKER.
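To make the proposal concrete, here is a minimal sketch of what a batch steal with that default could look like. This is not the project's actual run-queue API; `RunQueue`, `steal_batch`, and `default_batch` are hypothetical names, and a plain mutex stands in for whatever lock-free structure the real scheduler uses.

```cpp
#include <algorithm>
#include <deque>
#include <mutex>
#include <vector>

// Hypothetical sketch, not the real scheduler: a locked run queue from
// which a thief steals a batch of tasks at once instead of one at a time.
struct RunQueue {
    std::deque<int> tasks;  // task ids stand in for real work items
    std::mutex mu;

    // Default batch size per the comment above: rq.size() / NUM_WORKER,
    // clamped to at least 1 so a non-empty victim always yields a task.
    size_t default_batch(size_t num_worker) {
        std::lock_guard<std::mutex> lock(mu);
        return std::max<size_t>(1, tasks.size() / num_worker);
    }

    // Steal up to `batch` tasks from the front of the victim's queue
    // in one locked operation, amortizing the steal overhead.
    std::vector<int> steal_batch(size_t batch) {
        std::lock_guard<std::mutex> lock(mu);
        size_t n = std::min(batch, tasks.size());
        std::vector<int> stolen(tasks.begin(), tasks.begin() + n);
        tasks.erase(tasks.begin(), tasks.begin() + n);
        return stolen;
    }
};
```

With 8 queued tasks and 4 workers, `default_batch(4)` yields 2, so one steal moves two tasks instead of triggering two separate steals; making `batch` a tunable parameter is then just a matter of overriding that default.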