The Media over QUIC (MoQ) initiative, led by the Internet Engineering Task Force (IETF), aims to revolutionize live streaming by leveraging the QUIC protocol to achieve low-latency, scalable media transmission. Traditional live-streaming platforms such as Twitch and YouTube use protocols like RTMP for media ingestion and adaptive streaming over HTTP for distribution, which scale well but often result in high latency. Conversely, real-time protocols like RTP provide low latency but are challenging to scale. MoQ seeks to develop a unified protocol stack that addresses both issues by utilizing QUIC features such as improved congestion control and the elimination of transport-level head-of-line blocking.

This thesis aims to implement a prototype live-streaming system based on the MoQ framework, allowing media to be streamed over QUIC from a server to clients with low latency and high scalability. The system will be compared to traditional streaming architectures to demonstrate its advantages in reducing latency and improving performance. The project highlights the potential of MoQ to enhance live-streaming experiences and set a new standard for interactive media applications.
Design documents: Class Diagram | State Machine Diagram | Sequence Diagram | Version History
- Build a client-server app using quic-go
- Extend it to support multiple sessions/clients
- Extend it to communicate using the WebTransport API
- Refine the system architecture design
  - subscription-based communication [streamer, channel, subscriber, channel manager, chat room, message (pending)] (see the entity sketch after this list)
- WebTransport streaming
  - server side
    - video support
    - audio support
  - client side
    - video support
    - audio support
- MOQT adaptation streaming
  - control messages support
    - server-side
    - client-side
      - streamer-app
      - audience-app
  - obj message support
    - streamer-app sending
    - server-side forwarding
    - audience-app receiving
- Testbed setup
  - network setup
  - tc setup
- Automated test
  - streamer-app
  - audience-app
- Rate adaptation (see the adaptation sketch after this list)
  - server-side
    - cwnd_ema-based
    - rtt_ema-based
    - drop-rate-based
    - retransmission-rate-based
  - client-side
    - drop-rate-based
    - delay-rate-based
    - jitter-based
    - buffer-based
- Automated log visualization
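The subscription-based entities listed above can be pictured roughly as follows. This is only an illustrative sketch with hypothetical field names, not the actual definitions in the server code:

```go
package main

import "fmt"

// Message is a single chat message (the item still marked "pending").
type Message struct {
	From, Text string
}

// ChatRoom holds the chat history of one channel.
type ChatRoom struct {
	Messages []Message
}

// Subscriber is one audience member receiving media from a channel.
type Subscriber struct {
	ID string
}

// Channel is opened by a streamer and fans media and chat out to its subscribers.
type Channel struct {
	Name        string
	Streamer    string
	Subscribers map[string]*Subscriber
	Chat        ChatRoom
}

// ChannelManager tracks every live channel on the server.
type ChannelManager struct {
	Channels map[string]*Channel
}

func main() {
	// One streamer opens a channel; one audience member subscribes to it.
	mgr := &ChannelManager{Channels: map[string]*Channel{}}
	ch := &Channel{Name: "demo", Streamer: "streamer-1", Subscribers: map[string]*Subscriber{}}
	ch.Subscribers["aud-1"] = &Subscriber{ID: "aud-1"}
	mgr.Channels[ch.Name] = ch
	fmt.Println(len(mgr.Channels), len(ch.Subscribers))
}
```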
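For the server-side rate adaptation items, the general shape of a cwnd_ema / rtt_ema decision is sketched below. The smoothing factor, thresholds, and track names are illustrative assumptions; the real server would feed in congestion-window and RTT samples from its QUIC connection (e.g. via a connection tracer):

```go
package main

import "fmt"

// ema keeps an exponentially weighted moving average of a metric such as
// the congestion window or the RTT.
type ema struct {
	alpha float64 // smoothing factor; 0.2 is an assumed value
	value float64
	init  bool
}

func (e *ema) update(sample float64) float64 {
	if !e.init {
		e.value, e.init = sample, true
		return e.value
	}
	e.value = e.alpha*sample + (1-e.alpha)*e.value
	return e.value
}

// adapter picks the track an audience member should receive, based on the
// smoothed congestion window and RTT. The thresholds are made up for
// illustration and would need tuning on the testbed.
type adapter struct {
	cwndEMA, rttEMA ema
}

func (a *adapter) pickTrack(cwndBytes, rttMs float64) string {
	cwnd := a.cwndEMA.update(cwndBytes)
	rtt := a.rttEMA.update(rttMs)
	if cwnd > 200_000 && rtt < 100 {
		return "hd"
	}
	return "md"
}

func main() {
	a := &adapter{cwndEMA: ema{alpha: 0.2}, rttEMA: ema{alpha: 0.2}}
	// In the server these samples would come from the QUIC stack on every
	// ACK or on a timer; they are hard-coded here only to exercise the logic.
	for _, s := range [][2]float64{{250_000, 40}, {80_000, 180}, {60_000, 220}} {
		fmt.Println(a.pickTrack(s[0], s[1]))
	}
}
```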
- go: 1.22.2
- node.js: ^20.14.9
- react: ^18.3.1
- npm: 9.2.0
- pipenv: 2023.12.1
- python: 3.12.3
- ffmpeg: 6.1.1
- mkcert: 1.4.4
- Nav to `./utilities` and run the following commands to generate a certificate that trusts all IP addresses used in the testbed:

  ```bash
  mkcert -key-file key.pem -cert-file cert.pem 10.0.1.1 10.0.2.1 10.0.2.2 10.0.4.1 10.0.5.1 10.0.6.1 localhost
  mkcert -install
  ```
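  The generated pair is what the server presents to browsers over QUIC/WebTransport. As a rough, hypothetical sketch of where these files end up (assuming a `webtransport-go`-style server; the endpoint, port, and paths are assumptions, and the actual code in `./server` may wire this differently):

  ```go
  package main

  import (
  	"crypto/tls"
  	"log"
  	"net/http"

  	"github.com/quic-go/quic-go/http3"
  	"github.com/quic-go/webtransport-go"
  )

  func main() {
  	// Paths assume the mkcert command above was run inside ./utilities.
  	cert, err := tls.LoadX509KeyPair("utilities/cert.pem", "utilities/key.pem")
  	if err != nil {
  		log.Fatal(err)
  	}

  	srv := &webtransport.Server{
  		H3: http3.Server{
  			Addr:      ":443",
  			TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
  		},
  	}

  	// A real server would negotiate MoQ sessions here; this sketch only
  	// shows where the certificate and key are plugged in.
  	http.HandleFunc("/moq", func(w http.ResponseWriter, r *http.Request) {
  		if _, err := srv.Upgrade(w, r); err != nil {
  			http.Error(w, err.Error(), http.StatusInternalServerError)
  		}
  	})

  	log.Fatal(srv.ListenAndServe())
  }
  ```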
- Enable WebTransport Developer Mode in Chrome (v126+) for manual testing: `chrome://flags/#webtransport-developer-mode`
- Install Go dependencies in the root dir: `go mod tidy`
- moqtransport modifications:

  - Comment out the `panic(err)` line in the `loop()` function of `local_track.go` in the `moqtransport` package, so that the server keeps running when a subscriber unsubscribes from a track:

    ```go
    func (t *LocalTrack) loop() {
    	defer t.cancelWG.Done()
    	for {
    		select {
    		case <-t.ctx.Done():
    			for _, v := range t.subscribers {
    				v.Close()
    			}
    			return
    		case op := <-t.addSubscriberCh:
    			id := t.nextID.next()
    			t.subscribers[id] = op.subscriber
    			op.resultCh <- id
    		case rem := <-t.removeSubscriberCh:
    			delete(t.subscribers, rem.subscriberID)
    		case object := <-t.objectCh:
    			for _, v := range t.subscribers {
    				if err := v.WriteObject(object); err != nil {
    					// TODO: Notify / remove subscriber?
    					// panic(err) //! commented out for testing purposes
    				}
    			}
    		case t.subscriberCountCh <- len(t.subscribers):
    		}
    	}
    }
    ```
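    Going a step further than silencing the panic, the failing subscriber could be dropped, as the upstream `TODO` already hints. A minimal, untested sketch of how that `case` branch might look instead (same fields as in `loop()` above):

    ```go
    case object := <-t.objectCh:
    	for id, v := range t.subscribers {
    		if err := v.WriteObject(object); err != nil {
    			// Drop only the failing subscriber; the remaining
    			// subscribers keep receiving objects and the server stays up.
    			v.Close()
    			delete(t.subscribers, id)
    		}
    	}
    ```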
  - Comment out this section in `handleSubscribe()` of `session.go` at line 470, so that an audience member can resubscribe to the hd track after having subscribed to it before (hd -> md, md -> hd):

    ```go
    t, ok := s.si.localTracks.get(trackKey{
    	namespace: msg.TrackNamespace,
    	trackname: msg.TrackName,
    })
    if ok {
    	s.subscribeToLocalTrack(sub, t)
    	return
    }
    ```
  - (Congested network) Fix the server crash `panic: too many open streams` in `send_subscription.go` by using `OpenUniStreamSync` instead of `OpenUniStream`:

    ```go
    // send_subscription.go
    func (s *sendSubscription) sendObjectStream(o Object) error {
    	stream, err := s.conn.OpenUniStreamSync(s.ctx) // fix for "panic: too many open streams"
    	if err != nil {
    		return err
    	}
    	os, err := newObjectStream(stream, s.subscribeID, s.trackAlias, o.GroupID, o.ObjectID, o.PublisherPriority)
    	if err != nil {
    		return err
    	}
    	if _, err := os.Write(o.Payload); err != nil {
    		return err
    	}
    	return os.Close()
    }
    ```

    This avoids opening too many streams in a congested network, but many frames then arrive late, which results in a high drop rate at the audience with a syncing threshold of one frame. Parameter tuning is required for better performance.
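    One direction for that tuning: bound how long a frame may wait for stream credit, so stale frames are skipped at the sender instead of being delivered late. A hypothetical sketch (the helper name and the 50 ms budget are illustrative assumptions, not part of moqtransport):

    ```go
    // Requires the context and time imports in send_subscription.go.
    func (s *sendSubscription) sendObjectStreamWithDeadline(o Object) error {
    	// Give up on this frame if no stream can be opened in time.
    	ctx, cancel := context.WithTimeout(s.ctx, 50*time.Millisecond)
    	defer cancel()
    	stream, err := s.conn.OpenUniStreamSync(ctx)
    	if err != nil {
    		// context.DeadlineExceeded: the frame is already late, so drop it
    		// here rather than letting the audience discard it on arrival.
    		return err
    	}
    	os, err := newObjectStream(stream, s.subscribeID, s.trackAlias, o.GroupID, o.ObjectID, o.PublisherPriority)
    	if err != nil {
    		return err
    	}
    	if _, err := os.Write(o.Payload); err != nil {
    		return err
    	}
    	return os.Close()
    }
    ```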
- Run the server in the root dir: `go run ./server/main.go`
- Init & update the submodule in the root dir: `git submodule update --init`
- Prepare the streamer video file: nav to `./client/streamer-app/src/test`, then run:

  ```bash
  chmod +x *.sh
  ./prepare_video_file.sh
  ```

  This downloads a demo video from blender.org and transcodes it into a WebM container with VP8/Opus codecs. Install `ffmpeg` if it is not installed.
- Start the streamer: nav to `./client/streamer-app`, then run:

  ```bash
  npm install
  npm start
  ```
- Start the audience: nav to `./client/audience-app`, then run:

  ```bash
  npm install
  npm start
  ```
- Nav to `./testbed` to set up the network and `tc`:
  - Activate the virtual environment: `pipenv shell`
  - Install dependencies via `pipenv`: `pipenv install`
  - Set up the network: `python3 main.py setup`. If you run into permission issues, try `sudo -E pipenv run python3 main.py setup` to run as root while still using the virtual environment.
  - Set up tc: `python3 main.py tc`, or `sudo -E pipenv run python3 main.py tc` to run as root. Run `python3 main.py -h` for help.
- After the network and tc setup, run the following command in `./testbed/test_iperf3`: `python3 main.py`. Log files are written to `./testbed/test_iperf3/log/`.
- After the network and tc setup, run the following command in `./testbed/test_ping`: `python3 main.py`. Log files are written to `./testbed/test_ping/log/`.
- Build the server in the project root (with all the moqtransport modifications above applied to the Go dependencies on the server machine): `go build -o server_binary server/main.go`. Then run the server in `ns2`: `sudo ip netns exec ns2 ./server_binary`
- Software installation:

  - Install Google Chrome if you haven't already:

    ```bash
    wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    sudo apt install ./google-chrome-stable_current_amd64.deb
    ```

  - Install the chromedriver that matches the installed Google Chrome version, for example:

    ```bash
    wget https://storage.googleapis.com/chrome-for-testing-public/131.0.6778.139/linux64/chromedriver-linux64.zip
    unzip chromedriver-linux64.zip
    sudo mv chromedriver-linux64/chromedriver /usr/bin/chromedriver
    sudo chmod +x /usr/bin/chromedriver
    ```
- Run the streamer-app in `./client/streamer-app` in `ns1`:

  ```bash
  chmod +x src/test/*.sh
  sudo -E ip netns exec ns1 node src/test/webdriver.js
  ```

  `-E`: pass local environment variables to `ns1`.
- Run the audience-app in `./client/audience-app` in `ns4`:

  ```bash
  chmod +x src/test/*.sh
  sudo -E ip netns exec ns4 node src/test/webdriver.js
  ```

  `-E`: pass local environment variables to `ns4`.