TUM-Master-Thesis-MoQ/moq-live-stream

MOQ Live Stream: Low Latency Live-Streaming using Media over QUIC

The Media over QUIC (MoQ) initiative, led by the Internet Engineering Task Force (IETF), aims to revolutionize live streaming by leveraging the QUIC protocol to achieve low latency and scalable media transmission. Traditional live streaming platforms like Twitch and YouTube use protocols such as RTMP for media ingestion and adaptive streaming over HTTP for distribution, which are effective for scaling but often result in high latencies. Conversely, real-time protocols like RTP provide low latency but are challenging to scale. MoQ seeks to develop a unified protocol stack that addresses both issues by utilizing QUIC's advanced features like improved congestion control and elimination of head-of-line blocking.

This thesis aims to implement a prototype live-streaming system based on the MoQ framework, allowing media to be streamed through QUIC from a server to clients with low latency and high scalability. The system will be compared to traditional streaming architectures to demonstrate its advantages in reducing latency and improving performance. This project highlights the potential of MoQ to enhance live streaming experiences, setting a new standard for interactive media applications.

System Architecture

  • Class Diagram (Version History)
  • State Machine Diagram (Version History)
  • Sequence Diagram (Version History)

Testbed Network Setup

  • Testbed network diagram (Version History)

Roadmap

  • Build a client-server app using quic-go
    • Extend it to support multiple sessions/clients
    • Extend it to communicate using WebTransport API
  • Refine system architecture design
    • subscription-based communication [streamer, channel, subscriber, channel manager, chat room, message (pending)]
  • WebTransport streaming
    • server side
      • video support
      • audio support
    • client side
      • video support
      • audio support
  • MOQT adaptation streaming
    • control messages support
      • server-side
      • client-side
        • streamer-app
        • audience-app
    • object message support
      • streamer-app sending
      • server-side forwarding
      • audience-app receiving
  • Testbed setup
    • network setup
    • tc setup
  • Automated test
    • streamer-app
    • audience-app
  • Rate adaptation
    • server-side
      • cwnd_ema-based
      • rtt_ema-based
      • drop-rate-based
      • retransmission-rate-based
    • client-side
      • drop-rate-based
      • delay-rate-based
      • jitter-based
      • buffer-based
  • Automated log visualization
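
The server-side rate adaptation items above (cwnd_ema-based, rtt_ema-based, etc.) can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the smoothing factor, the quality tiers, and the byte thresholds are placeholder assumptions.

```go
package main

import "fmt"

// ema applies one exponentially weighted moving average update:
// the higher alpha is, the faster the estimate tracks new samples.
func ema(prev, sample, alpha float64) float64 {
	return alpha*sample + (1-alpha)*prev
}

// pickQuality maps a smoothed congestion-window estimate (in bytes)
// to a track quality tier. Thresholds are hypothetical placeholders.
func pickQuality(cwndEMA float64) string {
	switch {
	case cwndEMA > 500_000:
		return "hd"
	case cwndEMA > 150_000:
		return "md"
	default:
		return "ld"
	}
}

func main() {
	cwnd := 400_000.0 // initial congestion-window estimate
	for _, sample := range []float64{600_000, 700_000, 650_000} {
		cwnd = ema(cwnd, sample, 0.2)
	}
	fmt.Println(pickQuality(cwnd)) // tier chosen from the smoothed cwnd
}
```

The same EMA-plus-threshold pattern applies to the rtt, drop-rate, and retransmission-rate signals listed above, with the comparison direction reversed for signals where larger means worse.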

Setup & Run

Prerequisites (Minimum Version)

  • go: 1.22.2
  • node.js: ^20.14.9
  • react: ^18.3.1
  • npm: 9.2.0
  • pipenv: 2023.12.1
  • python: 3.12.3
  • ffmpeg: 6.1.1
  • mkcert: 1.4.4

TLS Certificates Setup

  1. Navigate to ./utilities and run the following commands to generate a certificate that trusts all IP addresses used in the testbed:

    mkcert -key-file key.pem -cert-file cert.pem 10.0.1.1 10.0.2.1 10.0.2.2 10.0.4.1 10.0.5.1 10.0.6.1 localhost
    mkcert -install
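
On the Go side, the generated cert.pem and key.pem can be loaded into a TLS config in the standard library's usual way. This is only a minimal sketch, not the project's actual server code; the file paths and the h3 ALPN value (WebTransport runs over HTTP/3) are assumptions.

```go
package main

import (
	"crypto/tls"
	"log"
)

// loadTLSConfig builds a TLS config from the mkcert-generated files.
// The paths assume the certificates were created in ./utilities as above.
func loadTLSConfig(certFile, keyFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		NextProtos:   []string{"h3"}, // ALPN for HTTP/3 / WebTransport
	}, nil
}

func main() {
	cfg, err := loadTLSConfig("utilities/cert.pem", "utilities/key.pem")
	if err != nil {
		log.Printf("could not load certificates: %v", err)
		return
	}
	log.Printf("loaded %d certificate(s)", len(cfg.Certificates))
}
```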

Browser Setup

  1. Enable WebTransport Developer Mode in Chrome (v126+) (for manual testing):

    chrome://flags/#webtransport-developer-mode

Server Setup

  1. Install Go dependencies in the root dir:

    go mod tidy
  2. moqtransport Modifications

    1. Comment out the panic(err) line in the loop() function in local_track.go of the moqtransport package:

      func (t *LocalTrack) loop() {
        defer t.cancelWG.Done()
        for {
          select {
          case <-t.ctx.Done():
            for _, v := range t.subscribers {
              v.Close()
            }
            return
          case op := <-t.addSubscriberCh:
            id := t.nextID.next()
            t.subscribers[id] = op.subscriber
            op.resultCh <- id
          case rem := <-t.removeSubscriberCh:
            delete(t.subscribers, rem.subscriberID)
          case object := <-t.objectCh:
            for _, v := range t.subscribers {
              if err := v.WriteObject(object); err != nil {
                // TODO: Notify / remove subscriber?
                // panic(err) //! comment out for testing purposes
              }
            }
          case t.subscriberCountCh <- len(t.subscribers):
          }
        }
      }

      This allows the server to continue running when a subscriber unsubscribes from a track.

    2. Comment out this section in handleSubscribe() of session.go (around line 470):

      t, ok := s.si.localTracks.get(trackKey{
        namespace: msg.TrackNamespace,
        trackname: msg.TrackName,
      })
      if ok {
        s.subscribeToLocalTrack(sub, t)
        return
      }

      This allows an audience client to resubscribe to the hd track after having subscribed to it before (e.g. switching hd -> md, then md -> hd).

    3. (Congested network) To fix a server crash with "panic: too many open streams" in send_subscription.go, use OpenUniStreamSync instead of OpenUniStream:

      // send_subscription.go
      func (s *sendSubscription) sendObjectStream(o Object) error {
        stream, err := s.conn.OpenUniStreamSync(s.ctx) // fix for "panic: too many open streams"
        if err != nil {
          return err
        }
        os, err := newObjectStream(stream, s.subscribeID, s.trackAlias, o.GroupID, o.ObjectID, o.PublisherPriority)
        if err != nil {
          return err
        }
        if _, err := os.Write(o.Payload); err != nil {
          return err
        }
        return os.Close()
      }

      This avoids opening too many streams in a congested network, but many frames then arrive late, resulting in a high drop rate on the audience side with a syncing threshold of 1 frame. Parameter tuning is required for better performance.

  3. Run the server in root dir:

    go run ./server/main.go

Clients Setup

  • Initialize & update submodules in the root dir:

    git submodule update --init
  • Prepare the streamer video file: navigate to ./client/streamer-app/src/test, then run:

    chmod +x *.sh
    ./prepare_video_file.sh

    This downloads a demo video from blender.org and transcodes it into a WebM container with the VP8/Opus codecs. Install ffmpeg if it is not already installed.

  • Start the streamer: navigate to ./client/streamer-app, then run:

    npm install
    npm start
  • Start the audience: navigate to ./client/audience-app, then run:

    npm install
    npm start

Testbed Run

Network Setup

  1. Navigate to ./testbed to set up the network and tc:

    1. Activate the virtual environment:

      pipenv shell
    2. Install dependencies via pipenv:

      pipenv install
    3. Set up the network:

      python3 main.py setup

      If you run into a permission issue, try sudo -E pipenv run python3 main.py setup to run as root while still using the virtual environment.

    4. Set up tc:

      python3 main.py tc

      Or sudo -E pipenv run python3 main.py tc to run as root. Run python3 main.py -h for help.

iperf3 for Bandwidth Test

After network and tc setup, run the following command in ./testbed/test_iperf3:

python3 main.py

Log files are written to ./testbed/test_iperf3/log/.

ping for Latency Test

After network and tc setup, run the following command in ./testbed/test_ping:

python3 main.py

Log files are written to ./testbed/test_ping/log/.

Run in Testbed Environment

Build and Run Server

  1. Build the server in the project root (with all of the moqtransport modifications above applied to the Go dependencies on the server machine):

    go build -o server_binary server/main.go

    Run server in ns2:

    sudo ip netns exec ns2 ./server_binary

Install and Run WebDriver for Automated Test

  1. Software installation:

    1. Install Google Chrome if you haven't already:

      wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
      sudo apt install ./google-chrome-stable_current_amd64.deb
    2. Install the chromedriver that matches the installed Google Chrome version (from the Chrome for Testing downloads), for example:

      https://storage.googleapis.com/chrome-for-testing-public/131.0.6778.139/linux64/chromedriver-linux64.zip
      unzip chromedriver-linux64.zip
      sudo mv chromedriver-linux64/chromedriver /usr/bin/chromedriver
      sudo chmod +x /usr/bin/chromedriver
  2. Run the streamer-app in ./client/streamer-app in ns1:

    chmod +x src/test/*.sh
    sudo -E ip netns exec ns1 node src/test/webdriver.js

    -E: pass local env variables to ns1.

  3. Run the audience-app in ./client/audience-app in ns4:

    chmod +x src/test/*.sh
    sudo -E ip netns exec ns4 node src/test/webdriver.js

    -E: pass local env variables to ns4.