đź›  Status: Work in progress

The Shaka Packager HTTP upload feature is currently in development. It is on the fast track to a working alpha, so we encourage you to try it out and give us some feedback. However, some details have not been finalized yet, so you can expect changes.

This document describes the current state of the implementation; contributions are always welcome.

The discussion about this feature currently happens at Add HTTP PUT output #149.

HTTP upload¶

Introduction¶

Shaka Packager can upload produced artifacts to an HTTP server using HTTP PUT requests with chunked transfer encoding, which improves live publishing performance when content is not served directly from the packaging output location. libcurl is used as the HTTP client.

The produced artifacts are:

  • HLS playlist files in M3U format, encoded as UTF-8 (.m3u8)

  • Chunked audio segments encoded with AAC (.aac)

  • Chunked video segments encapsulated into the MPEG transport stream container format (.ts)

Getting started¶

To enable the HTTP upload transfer mode, use http: or https: file paths for any output files (e.g. segment_template).

All HTTP requests will be declared as Content-Type: application/octet-stream.
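For illustration, the request the packager emits can be approximated with the curl command line (the target host and path here are hypothetical; they assume a receiver listening on localhost:6767, like the example backends below):

```shell
# Emulate a single segment upload as emitted by the packager:
# HTTP PUT, chunked transfer encoding, octet-stream content type.
echo "dummy segment payload" | curl --request PUT \
    --header "Content-Type: application/octet-stream" \
    --header "Transfer-Encoding: chunked" \
    --data-binary @- \
    http://localhost:6767/hls-live/test-0001.ts
```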

Synopsis¶

Here is a basic example. It is similar to the “live” example and also borrows features from “FFmpeg piping”; see Live and FFmpeg piping.

Define a UNIX pipe to connect ffmpeg with packager:

export PIPE=/tmp/bigbuckbunny.fifo
mkfifo ${PIPE}

Acquire and transcode the RTMP stream:

# Steady
ffmpeg -fflags nobuffer -threads 0 -y \
    -i rtmp://184.72.239.149/vod/mp4:bigbuckbunny_450.mp4 \
    -pix_fmt yuv420p -vcodec libx264 -preset:v superfast -acodec aac \
    -f mpegts pipe: > ${PIPE}

Configure and run packager:

# Define upload URL
export UPLOAD_URL=http://localhost:6767/hls-live

# Go
packager \
    "input=${PIPE},stream=audio,segment_template=${UPLOAD_URL}/bigbuckbunny-audio-aac-\$Number%04d\$.aac,playlist_name=bigbuckbunny-audio.m3u8,hls_group_id=audio" \
    "input=${PIPE},stream=video,segment_template=${UPLOAD_URL}/bigbuckbunny-video-h264-450-\$Number%04d\$.ts,playlist_name=bigbuckbunny-video-450.m3u8" \
    --io_block_size 65536 --fragment_duration 2 --segment_duration 2 \
    --time_shift_buffer_depth 3600 --preserved_segments_outside_live_window 7200 \
    --hls_master_playlist_output "${UPLOAD_URL}/bigbuckbunny.m3u8" \
    --hls_playlist_type LIVE \
    --vmodule=http_file=1

Client Authentication¶

If your server requires client authentication, you can add the following arguments to enable it:

  • --ca_file: (optional) Absolute path to the Certificate Authority file for the server cert. PEM format.

  • --client_cert_file: Absolute path to client certificate file.

  • --client_cert_private_key_file: Absolute path to the private key file.

  • --client_cert_private_key_password: (optional) Password to the private key file.
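For example, a TLS-enabled variant of the invocation above might look like this. The certificate paths, password, and output names are placeholders; note that client authentication only applies to https: upload URLs:

```shell
# Hypothetical certificate material; replace with your own.
export UPLOAD_URL=https://localhost:6767/hls-live

packager \
    "input=${PIPE},stream=video,segment_template=${UPLOAD_URL}/video-\$Number%04d\$.ts,playlist_name=video-450.m3u8" \
    --hls_master_playlist_output "${UPLOAD_URL}/master.m3u8" \
    --hls_playlist_type LIVE \
    --ca_file /etc/ssl/certs/upload-ca.pem \
    --client_cert_file /etc/ssl/certs/upload-client.pem \
    --client_cert_private_key_file /etc/ssl/private/upload-client.key \
    --client_cert_private_key_password hunter2
```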

Backlog¶

Please note that the HTTP upload feature still lacks some features that are probably important for production use. Contributions are welcome!

HTTP DELETE¶

Nothing has been done to support this yet:

Packager supports removing old segments automatically. See preserved_segments_outside_live_window option in DASH options or HLS options for details.
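Once implemented, removal of expired segments would presumably map onto HTTP DELETE requests, something along these lines (the URL is hypothetical; the packager does not issue DELETE requests yet). A receiver such as the Nginx example below could accept them by extending the directive to `dav_methods PUT DELETE;`:

```shell
# Hypothetical: remove a segment that fell out of the live window.
curl --request DELETE \
    http://localhost:6767/hls-live/bigbuckbunny-video-h264-450-0001.ts
```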

Software tests¶

We should do some minimal QA: check whether the test suite breaks and maybe add some tests covering the new code.

Network timeouts¶

libcurl can apply network timeout settings. However, we haven’t addressed this yet.
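For reference, the relevant libcurl timeout knobs have command-line equivalents in curl itself; a future packager option might expose similar semantics (the target URL is hypothetical, the curl flags are real):

```shell
# Real curl flags illustrating the timeout semantics libcurl offers:
#   --connect-timeout: seconds allowed for establishing the connection
#   --max-time:        hard limit for the whole request
echo "payload" | curl --request PUT \
    --connect-timeout 5 \
    --max-time 30 \
    --data-binary @- \
    http://localhost:6767/hls-live/test-0001.ts
```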

Miscellaneous¶

  • Address all things TODO and FIXME

  • Make io_cache_size configurable?

Example Backend¶

HTTP PUT file uploads to Nginx¶

The receiver is based on the native Nginx module “ngx_http_dav_module”. It handles HTTP PUT requests with chunked transfer encoding, as emitted by Shaka Packager.

The configuration is very simple:

server {
    listen 6767 default_server;

    access_log  /dev/stdout combined;
    error_log   /dev/stdout info;

    root /var/spool;
    location ~ ^/hls-live/(.+)$ {

        dav_methods PUT;
        create_full_put_path on;

        proxy_buffering off;
        client_max_body_size 20m;

    }

}

Run Nginx:

nginx -p `pwd` -c nginx.conf -g "daemon off;"
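With Nginx running, the endpoint can be smoke-tested before pointing the packager at it. This assumes the configuration above, which stores uploads under /var/spool/hls-live:

```shell
# Upload a test file with a chunked PUT...
echo "hello" | curl --fail --request PUT \
    --header "Transfer-Encoding: chunked" \
    --data-binary @- \
    http://localhost:6767/hls-live/hello.txt

# ...and verify it arrived on disk.
test -f /var/spool/hls-live/hello.txt
```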

HTTP PUT file uploads to Caddy¶

The receiver is based on the Caddy web server. It handles HTTP PUT requests with chunked transfer encoding, as emitted by Shaka Packager.

Put this configuration into a Caddyfile (the upload directive is provided by a third-party Caddy plugin):

# Bind address
:6767

# Enable logging
log stdout

# Web server root with autoindex
root /var/spool
redir /hls-live {
    if {path} is "/"
}
browse

# Enable upload with HTTP PUT
upload /hls-live {
    to "/var/spool/hls-live"
}

Run Caddy:

caddy -conf Caddyfile

Development and debugging¶

Watch the network:

ngrep -Wbyline -dlo port 6767

Grab and run httpd-reflector.py to use it as a dummy HTTP sink:

# Ready
wget https://gist.githubusercontent.com/amotl/3ed38e461af743aeeade5a5a106c1296/raw/httpd-reflector.py
chmod +x httpd-reflector.py
./httpd-reflector.py --port 6767