These tasks are unreasonably slow right now, probably because they transcode 1 segment at a time and our transcoding latency is not much better than realtime (for a 4K video, I transcoded 10 minutes of footage in 8 minutes).
We can improve this by parallelizing the transcoding process, which should be straightforward with Go's concurrency primitives. The harder part is deciding how far to parallelize without hurting the reliability of the transcoding infrastructure (broadcaster, orchestrators, etc.). The Bs might try to use the same O concurrently, which may not work well if we parallelize 5x or so.
So the implementation itself could be simple if it is just transcoding N segments at once, but picking the right value for N may be the complex part.
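For reference, a minimal sketch of what "transcoding N segments at once" could look like on the Go side, using `errgroup` with a concurrency limit. Everything here (`Segment`, `transcodeSegment`, `transcodeAll`) is hypothetical and only illustrates the bounded-parallelism idea, not the actual node code:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// Segment is a placeholder for whatever the node actually passes around.
type Segment struct {
	SeqNo int
	Data  []byte
}

// transcodeSegment stands in for the real call that sends a segment out for
// transcoding (e.g. an upload from B to an O).
func transcodeSegment(ctx context.Context, seg Segment) error {
	// ... real transcoding / upload work would go here ...
	return nil
}

// transcodeAll transcodes segments with at most n in flight at once.
func transcodeAll(ctx context.Context, segments []Segment, n int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(n) // cap concurrency so we don't overload a single orchestrator

	for _, seg := range segments {
		seg := seg // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			if err := transcodeSegment(ctx, seg); err != nil {
				return fmt.Errorf("segment %d: %w", seg.SeqNo, err)
			}
			return nil
		})
	}
	return g.Wait()
}

func main() {
	segs := []Segment{{SeqNo: 0}, {SeqNo: 1}, {SeqNo: 2}, {SeqNo: 3}}
	if err := transcodeAll(context.Background(), segs, 4); err != nil {
		fmt.Println("transcode error:", err)
	}
}
```

The point of the limit is exactly the open question above: N is just a parameter here, and choosing it well depends on how much concurrent load a single O can reliably take.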
I think at the B <> O level we would currently be capped at B uploading N = 4 segments in parallel for a single stream to O, because O uses a buffered channel [1] of size 4 [2] as a queue for incoming segments. There is no special reason for N = 4 - I believe it was just a magic number chosen to cap the size of the channel. Segments should be pulled off the queue for transcoding as soon as they come in, though, so I think there should be a speedup if segments are fed to B in parallel and B then uploads them to O in quick succession. However, I'm not sure whether there is any code at the Mist <> O level that would slow things down, so I think it is worth running an experiment to test all this out.
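For illustration, a minimal, self-contained sketch of how a buffered channel of size 4 behaves as a segment queue: the sender blocks once 4 segments are queued, while the consumer drains them as soon as they arrive. The types and names here are made up and are not the code linked in [1]/[2]:

```go
package main

import "fmt"

type segment struct{ seqNo int }

func transcode(s segment) { fmt.Println("transcoding segment", s.seqNo) }

func main() {
	// Buffered channel of size 4 as the incoming-segment queue: once 4
	// segments are queued, the sender blocks, which is what effectively
	// caps B at ~4 segments in flight per stream to a single O.
	segCh := make(chan segment, 4)

	go func() {
		for i := 0; i < 10; i++ {
			segCh <- segment{seqNo: i}
		}
		close(segCh)
	}()

	// Segments are pulled off the queue and transcoded as soon as they arrive.
	for seg := range segCh {
		transcode(seg)
	}
}
```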
Also, note: there used to be a single-segment-in-flight rule between B & O where B would not send a segment to O if the previous segment was still "in flight" (i.e. a response had not been received from O yet). Now, the rule B uses is: if the previous segment is still in flight, it will only re-use the last used O (i.e. the one the previous segment was sent to) if the previous segment has been in flight for less than the segment duration [3]. Given this rule, if we have four 2-second segments uploaded in parallel, I think all of them would be assigned to the same O.
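A rough sketch of that selection rule, with hypothetical names (`session`, `selectOrchestrator`) that don't mirror the real broadcaster code:

```go
package main

import (
	"fmt"
	"time"
)

// session holds the state the rule above depends on; fields are illustrative.
type session struct {
	lastO       string    // orchestrator the previous segment was sent to
	segInFlight bool      // previous segment still awaiting a response?
	segSentAt   time.Time // when the previous segment was sent
}

// selectOrchestrator re-uses the last O only while the previous segment has
// been in flight for less than one segment duration; otherwise it falls back
// to normal selection via pickNew.
func (s *session) selectOrchestrator(segDur time.Duration, pickNew func() string) string {
	if s.segInFlight && time.Since(s.segSentAt) < segDur {
		return s.lastO
	}
	return pickNew()
}

func main() {
	s := &session{lastO: "O-1", segInFlight: true, segSentAt: time.Now()}
	pickNew := func() string { return "O-2" }

	// With a 2s segment sent moments ago, the previous O is re-used.
	fmt.Println(s.selectOrchestrator(2*time.Second, pickNew)) // prints O-1
}
```

Under this rule, four 2-second segments sent off in quick succession would each still be inside the segment-duration window, so they would all land on the same O, which is the concern noted above.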
cc @thomshutt who may be able to help (or find someone who can) test things at the node level.