Introduction
In the ever-evolving landscape of online video streaming, HLS (HTTP Live Streaming) has emerged as a leading protocol, revolutionizing the way content is delivered over the internet. Let's delve into what HLS is and why it's become a cornerstone of modern streaming technology.
What is HLS?
HLS, or HTTP Live Streaming, is a streaming protocol developed by Apple Inc. as part of its QuickTime, Safari, and iOS software. It enables the delivery of multimedia content, primarily video and audio, over the internet in a highly adaptable and efficient manner.
How Does HLS Work?
HLS breaks down multimedia content into small, manageable chunks, typically lasting a few seconds each. These chunks are encoded at multiple bitrates and resolutions, allowing for adaptive bitrate streaming. A manifest file, known as the M3U8 playlist, provides the client device with information about the available chunks and their characteristics. The client device dynamically selects the appropriate chunks based on network conditions and device capabilities, ensuring a smooth and uninterrupted streaming experience.
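To make this concrete, a master M3U8 playlist for adaptive bitrate streaming might look like the sketch below; each `#EXT-X-STREAM-INF` entry advertises one rendition, and the client switches between them as bandwidth changes. The bitrates, resolutions, and folder names here are hypothetical:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
1080p/index.m3u8
```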
Know About Docker in Depth
Reference (Docker blog): https://teckbakers.hashnode.dev/getting-into-the-world-of-containers
Demo
First, install Docker on your local machine.
Step 1: Create an empty folder for storing all the code.
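On a Unix-like shell this is a one-liner (the folder name `hls-demo` is just an example):

```shell
# create a working directory for the demo and move into it
mkdir -p hls-demo
cd hls-demo
```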
Step 2: Build a Docker image with the help of a Dockerfile.
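As a sketch, a minimal Dockerfile for this pipeline just needs FFmpeg available inside the image; the base image and tag below are assumptions, not necessarily what the repository uses:

```dockerfile
# minimal image with ffmpeg installed
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*
WORKDIR /app
```

From the folder containing this Dockerfile, the image can then be built with `docker build -t hls-demo .` (the tag `hls-demo` is a placeholder).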
Step 3: Check whether the Docker image was created or not (for example, with `docker images`).
Step 4: To make your video available inside the container, mount a volume from the local machine into the Docker container.
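A typical invocation mounts the current folder into the container so that both the input video and the generated `outputs` folder are visible on the host. The image name `hls-demo` and the container path `/app` are assumptions carried over from the Dockerfile sketch:

```
docker run -it --rm -v "$(pwd)":/app hls-demo bash
```

The `-v host:container` flag is what keeps the results on your local machine after the container exits.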
Step 5: Run the command below to start the video processing.
ffmpeg -i sample.mp4 -codec:v libx264 -codec:a aac -hls_time 10 -hls_playlist_type vod -hls_segment_filename outputs/segment%03d.ts -start_number 0 outputs/index.m3u8
Command Breakdown
- `ffmpeg`: the command-line tool used for processing video and audio files.
- `-i sample.mp4`: `-i` specifies the input file; in this case, it is `sample.mp4`.
- `-codec:v libx264`: `-codec:v` specifies the video codec to be used. `libx264` is a widely used encoder for the H.264 format.
- `-codec:a aac`: `-codec:a` specifies the audio codec to be used. AAC (Advanced Audio Coding) is a popular audio codec known for its efficiency and quality.
- `-hls_time 10`: sets the target duration of each segment to 10 seconds. The actual duration might vary slightly to fit the video frames properly.
- `-hls_playlist_type vod`: specifies the type of HLS (HTTP Live Streaming) playlist. `vod` (Video on Demand) indicates that the playlist is for on-demand content rather than live streaming.
- `-hls_segment_filename outputs/segment%03d.ts`: defines the naming pattern for the output segments. The segments will be saved in the `outputs` directory with filenames like `segment000.ts`, `segment001.ts`, and so on (`%03d` is a placeholder for a zero-padded three-digit number).
- `-start_number 0`: sets the starting number for the segment files to 0.
- `outputs/index.m3u8`: specifies the output file for the HLS playlist, which will be named `index.m3u8`.
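The `%03d` pattern is ordinary printf-style formatting, so you can preview the segment names it produces with a quick shell one-liner:

```shell
# zero-padded three-digit segment names, as ffmpeg will generate them
printf 'segment%03d.ts\n' 0 1 2
# prints segment000.ts, segment001.ts, segment002.ts
```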
What the Command Does
- Input file: takes `sample.mp4` as the input file.
- Video and audio encoding: encodes the video using the H.264 codec (`libx264`) and the audio using the AAC codec (`aac`).
- Segment duration: splits the video into segments of approximately 10 seconds each.
- Playlist type: generates an HLS playlist suitable for Video on Demand.
- Segment naming: names the segments sequentially starting from `segment000.ts`.
- Playlist file: outputs an `index.m3u8` playlist file that references all the created segments.
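After the command runs, the generated `index.m3u8` is a plain-text media playlist along these lines; the exact segment durations and count depend on your input file:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.000000,
segment000.ts
#EXTINF:10.000000,
segment001.ts
#EXT-X-ENDLIST
```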
Step 6: After running the command, you will find the output on your local machine in the `outputs` folder.
Step 7: Upload the output to an S3 bucket so that we can use it to serve the video in our application.
Step 8: Check whether the AWS CLI is configured or not.
Step 9: Upload the entire `outputs` folder into the S3 bucket.
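Assuming the AWS CLI is installed, the configuration check and the upload can be done with commands like these; the bucket name `my-hls-bucket` is a placeholder for your own bucket:

```
# verify the CLI has working credentials
aws sts get-caller-identity

# upload the whole outputs folder to the bucket
aws s3 cp outputs/ s3://my-hls-bucket/outputs/ --recursive
```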
Step 10: Copy the S3 URL of the index file and use it in the `index.html` file as the video source.
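Native HLS playback in the browser is only available in Safari; other browsers need a helper library such as hls.js. A minimal `index.html` might look like the sketch below, where the S3 URL is a placeholder you would replace with the one you copied:

```html
<!DOCTYPE html>
<html>
<body>
  <video id="video" controls width="640"></video>
  <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
  <script>
    // placeholder URL: replace with your copied index.m3u8 URL
    const src = "https://my-hls-bucket.s3.amazonaws.com/outputs/index.m3u8";
    const video = document.getElementById("video");
    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
    } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
      // Safari plays HLS natively
      video.src = src;
    }
  </script>
</body>
</html>
```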
Step 11: Copy the IP address and paste it into the browser.
Conclusion
Following this step-by-step guide, you have successfully created an HLS Adaptive Streaming pipeline using Docker and FFmpeg. We built a Docker image, mounted a volume for storing video files, and processed a sample video into HLS format. The segmented video files were then uploaded to an S3 bucket, making them accessible for streaming. Finally, linking the playlist URL in an HTML file allows the processed video to be streamed seamlessly through a browser. This workflow ensures efficient, scalable, and adaptive video streaming for various applications.
For the complete source code, visit the GitHub repository: HLS Adaptive Streaming.