{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# StreamReader Basic Usages\n\n**Author**: [Moto Hira](moto@meta.com)_\n\nThis tutorial shows how to use :py:class:`torchaudio.io.StreamReader` to\nfetch and decode audio/video data and apply preprocessings that\nlibavfilter provides.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
<div class=\"alert alert-info\"><h4>Note</h4><p>This tutorial requires FFmpeg libraries.\n   Please refer to `FFmpeg dependency` for\n   the details.</p></div>
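\n\nAs a quick sanity check (a sketch; ``torchaudio.utils.ffmpeg_utils`` is available in recent torchaudio versions), you can query the versions of the FFmpeg libraries that torchaudio has loaded:\n\n.. code::\n\n   from torchaudio.utils import ffmpeg_utils\n\n   # Prints the versions of the linked FFmpeg libraries (libavutil, libavcodec, ...).\n   # This fails if the FFmpeg integration is not available.\n   print(ffmpeg_utils.get_versions())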
\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n\nStreaming API leverages the powerful I/O features of ffmpeg.\n\nIt can\n - Load audio/video in variety of formats\n - Load audio/video from local/remote source\n - Load audio/video from file-like object\n - Load audio/video from microphone, camera and screen\n - Generate synthetic audio/video signals.\n - Load audio/video chunk by chunk\n - Change the sample rate / frame rate, image size, on-the-fly\n - Apply filters and preprocessings\n\nThe streaming API works in three steps.\n\n1. Open media source (file, device, synthetic pattern generator)\n2. Configure output stream\n3. Stream the media\n\nAt this moment, the features that the ffmpeg integration provides\nare limited to the form of\n\n` -> -> `\n\nIf you have other forms that can be useful to your usecases,\n(such as integration with `torch.Tensor` type)\nplease file a feature request.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import torch\nimport torchaudio\n\nprint(torch.__version__)\nprint(torchaudio.__version__)\n\nimport matplotlib.pyplot as plt\nfrom torchaudio.io import StreamReader\n\nbase_url = \"https://download.pytorch.org/torchaudio/tutorial-assets\"\nAUDIO_URL = f\"{base_url}/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav\"\nVIDEO_URL = f\"{base_url}/stream-api/NASAs_Most_Scientifically_Complex_Space_Observatory_Requires_Precision-MP4.mp4\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Opening the source\n\nThere are mainly three different sources that streaming API can\nhandle. Whichever source is used, the remaining processes\n(configuring the output, applying preprocessing) are same.\n\n1. Common media formats (resource indicator of string type or file-like object)\n2. Audio / Video devices\n3. Synthetic audio / video sources\n\nThe following section covers how to open common media formats.\nFor the other streams, please refer to the\n[StreamReader Advanced Usage](./streamreader_advanced_tutorial.html)_.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The coverage of the supported media (such as containers, codecs and protocols)\n   depends on the FFmpeg libraries found in the system.</p>\n<p>If `StreamReader` raises an error opening a source, please check\n   that the `ffmpeg` command can handle it.</p></div>
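\n\nTo see what the FFmpeg libraries on your system support (a sketch, assuming a torchaudio version that ships ``torchaudio.utils.ffmpeg_utils``), you can list the available demuxers and input protocols:\n\n.. code::\n\n   from torchaudio.utils import ffmpeg_utils\n\n   # Container formats (demuxers) and protocols the linked FFmpeg can read\n   print(ffmpeg_utils.get_demuxers())\n   print(ffmpeg_utils.get_input_protocols())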
\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Local files\n\nTo open a media file, you can simply pass the path of the file to\nthe constructor of `StreamReader`.\n\n.. code::\n\n StreamReader(src=\"audio.wav\")\n\n StreamReader(src=\"audio.mp3\")\n\nThis works for image file, video file and video streams.\n\n.. code::\n\n # Still image\n StreamReader(src=\"image.jpeg\")\n\n # Video file\n StreamReader(src=\"video.mpeg\")\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Network protocols\n\nYou can directly pass a URL as well.\n\n.. code::\n\n # Video on remote server\n StreamReader(src=\"https://example.com/video.mp4\")\n\n # Playlist format\n StreamReader(src=\"https://example.com/playlist.m3u\")\n\n # RTMP\n StreamReader(src=\"rtmp://example.com:1935/live/app\")\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### File-like objects\n\nYou can also pass a file-like object. A file-like object must implement\n``read`` method conforming to :py:attr:`io.RawIOBase.read`.\n\nIf the given file-like object has ``seek`` method, StreamReader uses it\nas well. In this case the ``seek`` method is expected to conform to\n:py:attr:`io.IOBase.seek`.\n\n.. code::\n\n # Open as fileobj with seek support\n with open(\"input.mp4\", \"rb\") as src:\n StreamReader(src=src)\n\nIn case where third-party libraries implement ``seek`` so that it raises\nan error, you can write a wrapper class to mask the ``seek`` method.\n\n.. code::\n\n class UnseekableWrapper:\n def __init__(self, obj):\n self.obj = obj\n\n def read(self, n):\n return self.obj.read(n)\n\n.. code::\n\n import requests\n\n response = requests.get(\"https://example.com/video.mp4\", stream=True)\n s = StreamReader(UnseekableWrapper(response.raw))\n\n.. code::\n\n import boto3\n\n response = boto3.client(\"s3\").get_object(Bucket=\"my_bucket\", Key=\"key\")\n s = StreamReader(UnseekableWrapper(response[\"Body\"]))\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>When using an unseekable file-like object, the source media has to be\n   streamable.\n   For example, a valid MP4-formatted object can have its metadata either\n   at the beginning or at the end of the media data.\n   Those with metadata at the beginning can be opened without the\n   `seek` method, but those with metadata at the end cannot be opened\n   without `seek`.</p></div>
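\n\nIf the source is not streamable and the file-like object does not support ``seek``, one workaround (a minimal sketch, assuming the data fits in memory) is to fetch the whole media first and wrap it in :py:class:`io.BytesIO`, which supports both ``read`` and ``seek``:\n\n.. code::\n\n   import io\n\n   import requests\n\n   # Download the entire file, then hand StreamReader a seekable buffer\n   response = requests.get(\"https://example.com/video.mp4\")\n   s = StreamReader(io.BytesIO(response.content))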
\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Headerless media\n\nIf attempting to load headerless raw data, you can use ``format`` and\n``option`` to specify the format of the data.\n\nSay, you converted an audio file into faw format with ``sox`` command\nas follow;\n\n.. code::\n\n # Headerless, 16-bit signed integer PCM, resampled at 16k Hz.\n $ sox original.wav -r 16000 raw.s2\n\nSuch audio can be opened like following.\n\n.. code::\n\n StreamReader(src=\"raw.s2\", format=\"s16le\", option={\"sample_rate\": \"16000\"})\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Checking the source streams\n\nOnce the media is opened, we can inspect the streams and configure\nthe output streams.\n\nYou can check the number of source streams with\n:py:attr:`~torchaudio.io.StreamReader.num_src_streams`.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The number of streams is NOT the number of channels.\n   Each audio stream can contain an arbitrary number of channels.</p></div>
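\n\nFor example (a sketch, assuming ``stereo.wav`` is a stereo audio file), a single audio stream carries both channels:\n\n.. code::\n\n   streamer = StreamReader(\"stereo.wav\")\n   print(streamer.num_src_streams)  # 1 -- one audio stream\n   # The channel count is part of the stream's metadata\n   print(streamer.get_src_stream_info(0).num_channels)  # 2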
\n\nTo check the metadata of a source stream, you can use the\n:py:meth:`~torchaudio.io.StreamReader.get_src_stream_info`\nmethod and provide the index of the source stream.\n\nThis method returns\n:py:class:`~torchaudio.io.StreamReader.SourceStream`. If a source\nstream is of audio type, then the return type is\n:py:class:`~torchaudio.io.StreamReader.SourceAudioStream`, which is\na subclass of `SourceStream`, with additional audio-specific attributes.\nSimilarly, if a source stream is of video type, then the return type is\n:py:class:`~torchaudio.io.StreamReader.SourceVideoStream`.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For regular audio formats and still image formats, such as `WAV`\nand `JPEG`, the number of source streams is 1.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "streamer = StreamReader(AUDIO_URL)\nprint(\"The number of source streams:\", streamer.num_src_streams)\nprint(streamer.get_src_stream_info(0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Container formats and playlist formats may contain multiple streams\nof different media types.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "src = \"https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_fmp4/master.m3u8\"\nstreamer = StreamReader(src)\nprint(\"The number of source streams:\", streamer.num_src_streams)\nfor i in range(streamer.num_src_streams):\n    print(streamer.get_src_stream_info(i))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configuring output streams\n\nThe streaming API lets you stream data from an arbitrary combination of\nthe input streams. If your application does not need audio or video,\nyou can omit them. If you want to apply different preprocessing\nto the same source stream, you can also duplicate the source stream.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Default streams\n\nWhen there are multiple streams in the source, it is not immediately\nclear which stream should be used.\n\nFFmpeg implements some heuristics to determine the default stream.\nThe resulting stream index is exposed via\n:py:attr:`~torchaudio.io.StreamReader.default_audio_stream` and\n:py:attr:`~torchaudio.io.StreamReader.default_video_stream`.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configuring output streams\n\nOnce you know which source stream you want to use, you can\nconfigure output streams with\n:py:meth:`~torchaudio.io.StreamReader.add_basic_audio_stream` and\n:py:meth:`~torchaudio.io.StreamReader.add_basic_video_stream`.\n\nThese methods provide a simple way to change the basic properties of the\nmedia to match the application's requirements.\n\nThe arguments common to both methods are:\n\n- ``frames_per_chunk``: The maximum number of frames\n  returned at each iteration.\n  For audio, the resulting tensor will have the shape\n  `(frames_per_chunk, num_channels)`.\n  For video, it will be\n  `(frames_per_chunk, num_channels, height, width)`.\n- ``buffer_chunk_size``: The maximum number of chunks to be buffered internally.\n  When the StreamReader has buffered this number of chunks and is asked to pull\n  more frames, it drops the old frames/chunks.\n- ``stream_index``: The index of the source stream.\n- ``decoder``: If provided, override the decoder. Useful when automatic codec\n  detection fails.
\n- ``decoder_option``: The option for the decoder.\n\nFor audio output streams, you can provide the following additional\nparameters to change the audio properties.\n\n- ``format``: By default, StreamReader returns tensors of `float32` dtype,\n  with sample values in the range `[-1, 1]`. Providing the ``format`` argument\n  changes the resulting dtype and value range.\n- ``sample_rate``: When provided, StreamReader resamples the audio on-the-fly.\n\nFor video output streams, the following parameters are available.\n\n- ``format``: Image frame format. By default, StreamReader returns\n  frames as 8-bit, 3-channel tensors, in RGB order.\n- ``frame_rate``: Change the frame rate by dropping or duplicating\n  frames. No interpolation is performed.\n- ``width``, ``height``: Change the image size.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. code::\n\n   streamer = StreamReader(...)\n\n   # Stream audio from default audio source stream\n   # 256 frames at a time, keeping the original sampling rate.\n   streamer.add_basic_audio_stream(\n       frames_per_chunk=256,\n   )\n\n   # Stream audio from source stream `i`,\n   # resampled to 8k Hz, 256 frames at a time.\n   streamer.add_basic_audio_stream(\n       frames_per_chunk=256,\n       stream_index=i,\n       sample_rate=8000,\n   )\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. code::\n\n   # Stream video from default video source stream,\n   # 10 frames at a time, at 30 FPS,\n   # RGB color channels.\n   streamer.add_basic_video_stream(\n       frames_per_chunk=10,\n       frame_rate=30,\n       format=\"rgb24\"\n   )\n\n   # Stream video from source stream `j`,\n   # 10 frames at a time, at 30 FPS,\n   # BGR color channels with rescaling to 128x128.\n   streamer.add_basic_video_stream(\n       frames_per_chunk=10,\n       stream_index=j,\n       frame_rate=30,\n       width=128,\n       height=128,\n       format=\"bgr24\"\n   )\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check the resulting output streams in a similar manner as\nchecking the source streams.\n:py:attr:`~torchaudio.io.StreamReader.num_out_streams` reports\nthe number of configured output streams, and\n:py:meth:`~torchaudio.io.StreamReader.get_out_stream_info`\nfetches the information about the output streams.\n\n.. code::\n\n   for i in range(streamer.num_out_streams):\n       print(streamer.get_out_stream_info(i))\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to remove an output stream, you can do so with the\n:py:meth:`~torchaudio.io.StreamReader.remove_stream` method.\n\n.. code::\n\n   # Removes the first output stream.\n   streamer.remove_stream(0)\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Streaming\n\nTo stream media data, the streamer alternates between fetching and\ndecoding the source data and passing the resulting\naudio / video data to client code.\n\nThere are low-level methods that perform these operations:\n:py:meth:`~torchaudio.io.StreamReader.is_buffer_ready`,\n:py:meth:`~torchaudio.io.StreamReader.process_packet` and\n:py:meth:`~torchaudio.io.StreamReader.pop_chunks`.\n\nIn this tutorial, we will use the high-level API, the iterator protocol.\nIt is as simple as a ``for`` loop.\n\n.. code::
\n\n   streamer = StreamReader(...)\n   streamer.add_basic_audio_stream(...)\n   streamer.add_basic_video_stream(...)\n\n   for chunks in streamer.stream():\n       audio_chunk, video_chunk = chunks\n       ...\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example\n\nLet's take an example video to configure the output streams.\nWe will use the following video.\n\nSource: https://svs.gsfc.nasa.gov/13013 (This video is in the public domain)\n\nCredit: NASA's Goddard Space Flight Center.\n\nNASA's Media Usage Guidelines: https://www.nasa.gov/multimedia/guidelines/index.html\n\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Opening the source media\n\nFirst, let's list the available streams and their properties.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "streamer = StreamReader(VIDEO_URL)\nfor i in range(streamer.num_src_streams):\n    print(streamer.get_src_stream_info(i))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we configure the output streams.\n\n### Configuring output streams\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# fmt: off\n# Audio stream with 8k Hz\nstreamer.add_basic_audio_stream(\n    frames_per_chunk=8000,\n    sample_rate=8000,\n)\n\n# Audio stream with 16k Hz\nstreamer.add_basic_audio_stream(\n    frames_per_chunk=16000,\n    sample_rate=16000,\n)\n\n# Video stream with 960x540 at 1 FPS.\nstreamer.add_basic_video_stream(\n    frames_per_chunk=1,\n    frame_rate=1,\n    width=960,\n    height=540,\n    format=\"rgb24\",\n)\n\n# Video stream with 320x320 (stretched) at 3 FPS, grayscale\nstreamer.add_basic_video_stream(\n    frames_per_chunk=3,\n    frame_rate=3,\n    width=320,\n    height=320,\n    format=\"gray\",\n)\n# fmt: on" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
<div class=\"alert alert-info\"><h4>Note</h4><p>When configuring multiple output streams, in order to keep all\n   streams synced, set parameters so that the ratio between\n   ``frames_per_chunk`` and ``sample_rate`` or ``frame_rate`` is\n   consistent across output streams.</p></div>
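\n\nIn the configuration above, every output stream produces chunks that span one second of media, so the streams stay aligned. A quick check of the ratios:\n\n.. code::\n\n   # frames_per_chunk / rate == chunk duration in seconds\n   assert 8000 / 8000 == 16000 / 16000 == 1 / 1 == 3 / 3 == 1.0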
\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Checking the output streams.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for i in range(streamer.num_out_streams):\n print(streamer.get_out_stream_info(i))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remove the second audio stream.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "streamer.remove_stream(1)\nfor i in range(streamer.num_out_streams):\n print(streamer.get_out_stream_info(i))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Streaming\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Jump to the 10 second point.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "streamer.seek(10.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's finally iterate over the output streams.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_ite = 3\nwaveforms, vids1, vids2 = [], [], []\nfor i, (waveform, vid1, vid2) in enumerate(streamer.stream()):\n waveforms.append(waveform)\n vids1.append(vid1)\n vids2.append(vid2)\n if i + 1 == n_ite:\n break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For audio stream, the chunk Tensor will be the shape of\n`(frames_per_chunk, num_channels)`, and for video stream,\nit is `(frames_per_chunk, num_color_channels, height, width)`.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(waveforms[0].shape)\nprint(vids1[0].shape)\nprint(vids2[0].shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's visualize what we received.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "k = 3\nfig = plt.figure()\ngs = fig.add_gridspec(3, k * n_ite)\nfor i, waveform in enumerate(waveforms):\n ax = fig.add_subplot(gs[0, k * i : k * (i + 1)])\n ax.specgram(waveform[:, 0], Fs=8000)\n ax.set_yticks([])\n ax.set_xticks([])\n ax.set_title(f\"Iteration {i}\")\n if i == 0:\n ax.set_ylabel(\"Stream 0\")\nfor i, vid in enumerate(vids1):\n ax = fig.add_subplot(gs[1, k * i : k * (i + 1)])\n ax.imshow(vid[0].permute(1, 2, 0)) # NCHW->HWC\n ax.set_yticks([])\n ax.set_xticks([])\n if i == 0:\n ax.set_ylabel(\"Stream 1\")\nfor i, vid in enumerate(vids2):\n for j in range(3):\n ax = fig.add_subplot(gs[2, k * i + j : k * i + j + 1])\n ax.imshow(vid[j].permute(1, 2, 0), cmap=\"gray\")\n ax.set_yticks([])\n ax.set_xticks([])\n if i == 0 and j == 0:\n ax.set_ylabel(\"Stream 2\")\nplt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tag: :obj:`torchaudio.io`\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.14" } }, "nbformat": 4, "nbformat_minor": 0 }