{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "mt9dL5dIir8X" }, "source": [ "##### Copyright 2022 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "ufPx7EiCiqgR" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "4StGz9ynOEL6" }, "source": [ "# Load video data" ] }, { "cell_type": "markdown", "metadata": { "id": "KwQtSOz0VrVX" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "F-SqCosJ6-0H" }, "source": [ "This tutorial demonstrates how to load and preprocess [AVI](https://en.wikipedia.org/wiki/Audio_Video_Interleave) video data using the [UCF101 human action dataset](https://www.tensorflow.org/datasets/catalog/ucf101). Once you have preprocessed the data, it can be used for such tasks as video classification/recognition, captioning or clustering. The original dataset contains realistic action videos collected from YouTube with 101 categories, including playing cello, brushing teeth, and applying eye makeup. You will learn how to:\n", "\n", "* Load the data from a zip file.\n", "\n", "* Read sequences of frames out of the video files.\n", "\n", "* Visualize the video data.\n", "\n", "* Wrap the frame-generator [`tf.data.Dataset`](https://www.tensorflow.org/guide/data).\n", "\n", "This video loading and preprocessing tutorial is the first part in a series of TensorFlow video tutorials. Here are the other three tutorials:\n", "\n", "- [Build a 3D CNN model for video classification](https://www.tensorflow.org/tutorials/video/video_classification): Note that this tutorial uses a (2+1)D CNN that decomposes the spatial and temporal aspects of 3D data; if you are using volumetric data such as an MRI scan, consider using a 3D CNN instead of a (2+1)D CNN.\n", "- [MoViNet for streaming action recognition](https://www.tensorflow.org/hub/tutorials/movinet): Get familiar with the MoViNet models that are available on TF Hub.\n", "- [Transfer learning for video classification with MoViNet](https://www.tensorflow.org/tutorials/video/transfer_learning_with_movinet): This tutorial explains how to use a pre-trained video classification model trained on a different dataset with the UCF-101 dataset." ] }, { "cell_type": "markdown", "metadata": { "id": "PnpPjKVD68eH" }, "source": [ "## Setup\n", "\n", "Begin by installing and importing some necessary libraries, including:\n", "[remotezip](https://github.com/gtsystem/python-remotezip) to inspect the contents of a ZIP file, [tqdm](https://github.com/tqdm/tqdm) to use a progress bar, [OpenCV](https://opencv.org/) to process video files, and [`tensorflow_docs`](https://github.com/tensorflow/docs/tree/master/tools/tensorflow_docs) for embedding data in a Jupyter notebook." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SjI3AaaO16bd" }, "outputs": [], "source": [ "# The way this tutorial uses the `TimeDistributed` layer requires TF>=2.10\n", "!pip install -U \"tensorflow>=2.10.0\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "P5SBasQcbwQA" }, "outputs": [], "source": [ "!pip install remotezip tqdm opencv-python\n", "!pip install -q git+https://github.com/tensorflow/docs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9RYQIJ9C6BVH" }, "outputs": [], "source": [ "import tqdm\n", "import random\n", "import pathlib\n", "import itertools\n", "import collections\n", "\n", "import os\n", "import cv2\n", "import numpy as np\n", "import remotezip as rz\n", "\n", "import tensorflow as tf\n", "\n", "# Some modules to display an animation using imageio.\n", "import imageio\n", "from IPython import display\n", "from urllib import request\n", "from tensorflow_docs.vis import embed" ] }, { "cell_type": "markdown", "metadata": { "id": "KbhwWLLM7FXo" }, "source": [ "## Download a subset of the UCF101 dataset\n", "\n", "The [UCF101 dataset](https://www.tensorflow.org/datasets/catalog/ucf101) contains 101 categories of different actions in video, primarily used in action recognition. You will use a subset of these categories in this demo." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gVIgj-jIA8U8" }, "outputs": [], "source": [ "URL = 'https://storage.googleapis.com/thumos14_files/UCF101_videos.zip'" ] }, { "cell_type": "markdown", "metadata": { "id": "2tm8aBzw6Md7" }, "source": [ "The above URL contains a zip file with the UCF 101 dataset. Create a function that uses the `remotezip` library to examine the contents of the zip file in that URL:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lY-x7TaZlK6O" }, "outputs": [], "source": [ "def list_files_from_zip_url(zip_url):\n", " \"\"\" List the files in each class of the dataset given a URL with the zip file.\n", "\n", " Args:\n", " zip_url: A URL from which the files can be extracted from.\n", "\n", " Returns:\n", " List of files in each of the classes.\n", " \"\"\"\n", " files = []\n", " with rz.RemoteZip(zip_url) as zip:\n", " for zip_info in zip.infolist():\n", " files.append(zip_info.filename)\n", " return files" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lYErXAdUr-rk" }, "outputs": [], "source": [ "files = list_files_from_zip_url(URL)\n", "files = [f for f in files if f.endswith('.avi')]\n", "files[:10]" ] }, { "cell_type": "markdown", "metadata": { "id": "rQ4l8D9dFPS7" }, "source": [ "Begin with a few videos and a limited number of classes for training. After running the above code block, notice that the class name is included in the filename of each video.\n", "\n", "Define the `get_class` function that retrieves the class name from a filename. 
{ "cell_type": "markdown", "metadata": { "id": "VxSt5YgSGrWn" }, "source": [ "Once you have the list of files per class, you can choose how many classes you would like to use and how many videos you would like per class in order to create your dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qPdURg74uUTk" }, "outputs": [], "source": [ "NUM_CLASSES = 10\n", "FILES_PER_CLASS = 50" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GUs0xtXsr9i3" }, "outputs": [], "source": [ "files_for_class = get_files_per_class(files)\n", "classes = list(files_for_class.keys())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-YqFARvqwon9" }, "outputs": [], "source": [ "print('Num classes:', len(classes))\n", "print('Num videos for class[0]:', len(files_for_class[classes[0]]))" ] }, { "cell_type": "markdown", "metadata": { "id": "yFAFqKqE92bQ" }, "source": [ "Create a new function called `select_subset_of_classes` that selects a subset of the classes present within the dataset and a particular number of files per class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "O3jek4QimIj-" }, "outputs": [], "source": [ "def select_subset_of_classes(files_for_class, classes, files_per_class):\n", "  \"\"\" Create a dictionary with the class name and a subset of the files in that class.\n", "\n", "    Args:\n", "      files_for_class: Dictionary of class names (key) and files (values).\n", "      classes: List of classes.\n", "      files_per_class: Number of files per class of interest.\n", "\n", "    Returns:\n", "      Dictionary with class as key and list of specified number of video files in that class.\n", "  \"\"\"\n", "  files_subset = dict()\n", "\n", "  for class_name in classes:\n", "    class_files = files_for_class[class_name]\n", "    files_subset[class_name] = class_files[:files_per_class]\n", "\n", "  return files_subset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5cjcz6Gpcb-W" }, "outputs": [], "source": [ "files_subset = select_subset_of_classes(files_for_class, classes[:NUM_CLASSES], FILES_PER_CLASS)\n", "list(files_subset.keys())" ] },
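{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, you can confirm that no class in the subset holds more than `FILES_PER_CLASS` videos:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Each selected class should contribute at most FILES_PER_CLASS files.\n", "print({class_name: len(class_files) for class_name, class_files in files_subset.items()})" ] },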
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AH9sWS_6nRz3" }, "outputs": [], "source": [ "def download_from_zip(zip_url, to_dir, file_names):\n", " \"\"\" Download the contents of the zip file from the zip URL.\n", "\n", " Args:\n", " zip_url: A URL with a zip file containing data.\n", " to_dir: A directory to download data to.\n", " file_names: Names of files to download.\n", " \"\"\"\n", " with rz.RemoteZip(zip_url) as zip:\n", " for fn in tqdm.tqdm(file_names):\n", " class_name = get_class(fn)\n", " zip.extract(fn, str(to_dir / class_name))\n", " unzipped_file = to_dir / class_name / fn\n", "\n", " fn = pathlib.Path(fn).parts[-1]\n", " output_file = to_dir / class_name / fn\n", " unzipped_file.rename(output_file)" ] }, { "cell_type": "markdown", "metadata": { "id": "pejRTChA6mrp" }, "source": [ "The following function returns the remaining data that hasn't already been placed into a subset of data. It allows you to place that remaining data in the next specified subset of data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6ARYc-WLqqNF" }, "outputs": [], "source": [ "def split_class_lists(files_for_class, count):\n", " \"\"\" Returns the list of files belonging to a subset of data as well as the remainder of\n", " files that need to be downloaded.\n", " \n", " Args:\n", " files_for_class: Files belonging to a particular class of data.\n", " count: Number of files to download.\n", "\n", " Returns:\n", " Files belonging to the subset of data and dictionary of the remainder of files that need to be downloaded.\n", " \"\"\"\n", " split_files = []\n", " remainder = {}\n", " for cls in files_for_class:\n", " split_files.extend(files_for_class[cls][:count])\n", " remainder[cls] = files_for_class[cls][count:]\n", " return split_files, remainder" ] }, { "cell_type": "markdown", "metadata": { "id": "LlEQ_I0TLd1X" }, "source": [ "The following `download_ucf_101_subset` function allows you to download a subset of the UCF101 dataset and split it into the training, validation, and test sets. You can specify the number of classes that you would like to use. The `splits` argument allows you to pass in a dictionary in which the key values are the name of subset (example: \"train\") and the number of videos you would like to have per class." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IHH2Y1M06xoz" }, "outputs": [], "source": [ "def download_ucf_101_subset(zip_url, num_classes, splits, download_dir):\n", " \"\"\" Download a subset of the UCF101 dataset and split them into various parts, such as\n", " training, validation, and test.\n", "\n", " Args:\n", " zip_url: A URL with a ZIP file with the data.\n", " num_classes: Number of labels.\n", " splits: Dictionary specifying the training, validation, test, etc. 
{ "cell_type": "markdown", "metadata": { "id": "LlEQ_I0TLd1X" }, "source": [ "The following `download_ucf_101_subset` function allows you to download a subset of the UCF101 dataset and split it into the training, validation, and test sets. You can specify the number of classes that you would like to use. The `splits` argument allows you to pass in a dictionary in which the keys are the names of the subsets (for example: \"train\") and the values are the number of videos you would like per class." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IHH2Y1M06xoz" }, "outputs": [], "source": [ "def download_ucf_101_subset(zip_url, num_classes, splits, download_dir):\n", "  \"\"\" Download a subset of the UCF101 dataset and split them into various parts, such as\n", "    training, validation, and test.\n", "\n", "    Args:\n", "      zip_url: A URL with a ZIP file with the data.\n", "      num_classes: Number of labels.\n", "      splits: Dictionary specifying the data splits (key is the split name, such as\n", "        \"train\"; value is the number of files per class in that split).\n", "      download_dir: Directory to download data to.\n", "\n", "    Returns:\n", "      Mapping of the directories containing the subsections of data.\n", "  \"\"\"\n", "  files = list_files_from_zip_url(zip_url)\n", "  # Keep only entries that include a filename. (Removing items from a list\n", "  # while iterating over it, as with `files.remove`, would skip elements.)\n", "  files = [f for f in files if len(os.path.normpath(f).split(os.sep)) > 2]\n", "\n", "  files_for_class = get_files_per_class(files)\n", "\n", "  classes = list(files_for_class.keys())[:num_classes]\n", "\n", "  for cls in classes:\n", "    random.shuffle(files_for_class[cls])\n", "\n", "  # Only use the number of classes you want in the dictionary\n", "  files_for_class = {x: files_for_class[x] for x in classes}\n", "\n", "  dirs = {}\n", "  for split_name, split_count in splits.items():\n", "    print(split_name, \":\")\n", "    split_dir = download_dir / split_name\n", "    split_files, files_for_class = split_class_lists(files_for_class, split_count)\n", "    download_from_zip(zip_url, split_dir, split_files)\n", "    dirs[split_name] = split_dir\n", "\n", "  return dirs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NuD-xU8Q66Vm" }, "outputs": [], "source": [ "download_dir = pathlib.Path('./UCF101_subset/')\n", "subset_paths = download_ucf_101_subset(URL,\n", "                                       num_classes = NUM_CLASSES,\n", "                                       splits = {\"train\": 30, \"val\": 10, \"test\": 10},\n", "                                       download_dir = download_dir)" ] }, { "cell_type": "markdown", "metadata": { "id": "MBMRm9Ub3Zrk" }, "source": [ "After downloading the data, you should now have a copy of a subset of the UCF101 dataset. Run the following code to print the total number of videos across all your subsets of data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zupvOLYP4D4q" }, "outputs": [], "source": [ "video_count_train = len(list(download_dir.glob('train/*/*.avi')))\n", "video_count_val = len(list(download_dir.glob('val/*/*.avi')))\n", "video_count_test = len(list(download_dir.glob('test/*/*.avi')))\n", "video_total = video_count_train + video_count_val + video_count_test\n", "print(f\"Total videos: {video_total}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "JmJG1SlXiOX8" }, "source": [ "You can also preview the directory of data files now." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "y9be0WlDiNM0" }, "outputs": [], "source": [ "!find ./UCF101_subset" ] }, { "cell_type": "markdown", "metadata": { "id": "U4uslY4dScyu" }, "source": [ "## Create frames from each video file" ] }, { "cell_type": "markdown", "metadata": { "id": "D1vvyT0F7JAZ" }, "source": [ "The `frames_from_video_file` function splits the videos into frames, reads a randomly chosen span of `n_frames` out of a video file, and returns them as a NumPy `array`.\n", "To reduce memory and computation overhead, choose a **small** number of frames. In addition, pick the **same** number of frames from each video, which makes it easier to work on batches of data.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vNBCiV3bMzpD" }, "outputs": [], "source": [ "def format_frames(frame, output_size):\n", "  \"\"\"\n", "    Pad and resize an image from a video.\n", "\n", "    Args:\n", "      frame: Image that needs to be resized and padded.\n", "      output_size: Pixel size of the output frame image.\n", "\n", "    Returns:\n", "      Formatted frame with padding of specified output size.\n", "  \"\"\"\n", "  frame = tf.image.convert_image_dtype(frame, tf.float32)\n", "  frame = tf.image.resize_with_pad(frame, *output_size)\n", "  return frame" ] },
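{ "cell_type": "markdown", "metadata": {}, "source": [ "You can try `format_frames` on a synthetic frame before touching real video data (the dummy array below is only an illustration): `tf.image.convert_image_dtype` rescales `uint8` pixels to `float32` values in `[0, 1]`, and `tf.image.resize_with_pad` letterboxes the input so its aspect ratio is preserved:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Apply `format_frames` to a fake 320x240 frame to confirm the output shape and dtype.\n", "dummy_frame = np.zeros((240, 320, 3), dtype=np.uint8)\n", "formatted = format_frames(dummy_frame, (224, 224))\n", "print(formatted.shape, formatted.dtype)  # (224, 224, 3) float32" ] },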
\n", " output_size: Pixel size of the output frame image.\n", "\n", " Return:\n", " Formatted frame with padding of specified output size.\n", " \"\"\"\n", " frame = tf.image.convert_image_dtype(frame, tf.float32)\n", " frame = tf.image.resize_with_pad(frame, *output_size)\n", " return frame" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9ujLDC9G7JyE" }, "outputs": [], "source": [ "def frames_from_video_file(video_path, n_frames, output_size = (224,224), frame_step = 15):\n", " \"\"\"\n", " Creates frames from each video file present for each category.\n", "\n", " Args:\n", " video_path: File path to the video.\n", " n_frames: Number of frames to be created per video file.\n", " output_size: Pixel size of the output frame image.\n", "\n", " Return:\n", " An NumPy array of frames in the shape of (n_frames, height, width, channels).\n", " \"\"\"\n", " # Read each video frame by frame\n", " result = []\n", " src = cv2.VideoCapture(str(video_path)) \n", "\n", " video_length = src.get(cv2.CAP_PROP_FRAME_COUNT)\n", "\n", " need_length = 1 + (n_frames - 1) * frame_step\n", "\n", " if need_length > video_length:\n", " start = 0\n", " else:\n", " max_start = video_length - need_length\n", " start = random.randint(0, max_start + 1)\n", "\n", " src.set(cv2.CAP_PROP_POS_FRAMES, start)\n", " # ret is a boolean indicating whether read was successful, frame is the image itself\n", " ret, frame = src.read()\n", " result.append(format_frames(frame, output_size))\n", "\n", " for _ in range(n_frames - 1):\n", " for _ in range(frame_step):\n", " ret, frame = src.read()\n", " if ret:\n", " frame = format_frames(frame, output_size)\n", " result.append(frame)\n", " else:\n", " result.append(np.zeros_like(result[0]))\n", " src.release()\n", " result = np.array(result)[..., [2, 1, 0]]\n", "\n", " return result" ] }, { "cell_type": "markdown", "metadata": { "id": "1ENtlwhxwyTe" }, "source": [ "## Visualize video data\n", "\n", "The `frames_from_video_file` function that returns a set of frames as a NumPy array. Try using this function on a new video from [Wikimedia](https://commons.wikimedia.org/wiki/Category:Videos_of_sports){:.external} by Patrick Gillett:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Z2hgSghlykzA" }, "outputs": [], "source": [ "!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xdHvHw3hym-U" }, "outputs": [], "source": [ "video_path = \"End_of_a_jam.ogv\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "u845YODXyqo5" }, "outputs": [], "source": [ "sample_video = frames_from_video_file(video_path, n_frames = 10)\n", "sample_video.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zFHGHiFgGjv2" }, "outputs": [], "source": [ "def to_gif(images):\n", " converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)\n", " imageio.mimsave('./animation.gif', converted_images, fps=10)\n", " return embed.embed_file('./animation.gif')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7hiwUJenEN3p" }, "outputs": [], "source": [ "to_gif(sample_video)" ] }, { "cell_type": "markdown", "metadata": { "id": "3dktTnDVG7xf" }, "source": [ "In addition to examining this video, you can also display the UCF-101 data. 
{ "cell_type": "markdown", "metadata": { "id": "1ENtlwhxwyTe" }, "source": [ "## Visualize video data\n", "\n", "The `frames_from_video_file` function returns a set of frames as a NumPy array. Try using this function on a new video from [Wikimedia](https://commons.wikimedia.org/wiki/Category:Videos_of_sports){:.external} by Patrick Gillett:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Z2hgSghlykzA" }, "outputs": [], "source": [ "!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xdHvHw3hym-U" }, "outputs": [], "source": [ "video_path = \"End_of_a_jam.ogv\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "u845YODXyqo5" }, "outputs": [], "source": [ "sample_video = frames_from_video_file(video_path, n_frames = 10)\n", "sample_video.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zFHGHiFgGjv2" }, "outputs": [], "source": [ "def to_gif(images):\n", "  converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)\n", "  imageio.mimsave('./animation.gif', converted_images, fps=10)\n", "  return embed.embed_file('./animation.gif')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7hiwUJenEN3p" }, "outputs": [], "source": [ "to_gif(sample_video)" ] }, { "cell_type": "markdown", "metadata": { "id": "3dktTnDVG7xf" }, "source": [ "In addition to examining this video, you can also display the UCF-101 data. To do this, run the following code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MghJzJsWme0t" }, "outputs": [], "source": [ "# docs-infra: no-execute\n", "ucf_sample_video = frames_from_video_file(next(subset_paths['train'].glob('*/*.avi')), 50)\n", "to_gif(ucf_sample_video)" ] }, { "cell_type": "markdown", "metadata": { "id": "NlvuC5_E7XrF" }, "source": [ "Next, define the `FrameGenerator` class in order to create an iterable object that can feed data into the TensorFlow data pipeline. The generator (`__call__`) function yields the frame array produced by `frames_from_video_file` together with the integer-encoded label associated with the set of frames." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MVmfLTlw7Ues" }, "outputs": [], "source": [ "class FrameGenerator:\n", "  def __init__(self, path, n_frames, training = False):\n", "    \"\"\" Returns a set of frames with their associated label.\n", "\n", "      Args:\n", "        path: Video file paths.\n", "        n_frames: Number of frames.\n", "        training: Boolean to determine if training dataset is being created.\n", "    \"\"\"\n", "    self.path = path\n", "    self.n_frames = n_frames\n", "    self.training = training\n", "    self.class_names = sorted(set(p.name for p in self.path.iterdir() if p.is_dir()))\n", "    self.class_ids_for_name = dict((name, idx) for idx, name in enumerate(self.class_names))\n", "\n", "  def get_files_and_class_names(self):\n", "    video_paths = list(self.path.glob('*/*.avi'))\n", "    classes = [p.parent.name for p in video_paths]\n", "    return video_paths, classes\n", "\n", "  def __call__(self):\n", "    video_paths, classes = self.get_files_and_class_names()\n", "\n", "    pairs = list(zip(video_paths, classes))\n", "\n", "    if self.training:\n", "      random.shuffle(pairs)\n", "\n", "    for path, name in pairs:\n", "      video_frames = frames_from_video_file(path, self.n_frames)\n", "      label = self.class_ids_for_name[name] # Encode labels\n", "      yield video_frames, label" ] }, { "cell_type": "markdown", "metadata": { "id": "xsvhPIkpzx-r" }, "source": [ "Test out the `FrameGenerator` object before wrapping it as a TensorFlow Dataset object. For the training dataset, ensure you enable training mode so that the data will be shuffled." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "P5jwagZxzxOf" }, "outputs": [], "source": [ "fg = FrameGenerator(subset_paths['train'], 10, training=True)\n", "\n", "frames, label = next(fg())\n", "\n", "print(f\"Shape: {frames.shape}\")\n", "print(f\"Label: {label}\")" ] },
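{ "cell_type": "markdown", "metadata": {}, "source": [ "You can also inspect the label encoding that `FrameGenerator` builds from the class subdirectory names; it maps each class name to the integer label yielded above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The class names are the sorted subdirectory names; labels are their indices.\n", "print(fg.class_names)\n", "print(fg.class_ids_for_name)" ] },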
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3XYVmsgiZsJD" }, "outputs": [], "source": [ "for frames, labels in train_ds.take(10):\n", " print(labels)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Pi8-WkOkEXw5" }, "outputs": [], "source": [ "# Create the validation set\n", "val_ds = tf.data.Dataset.from_generator(FrameGenerator(subset_paths['val'], 10),\n", " output_signature = output_signature)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "V6qXc-6i7eyK" }, "outputs": [], "source": [ "# Print the shapes of the data\n", "train_frames, train_labels = next(iter(train_ds))\n", "print(f'Shape of training set of frames: {train_frames.shape}')\n", "print(f'Shape of training labels: {train_labels.shape}')\n", "\n", "val_frames, val_labels = next(iter(val_ds))\n", "print(f'Shape of validation set of frames: {val_frames.shape}')\n", "print(f'Shape of validation labels: {val_labels.shape}')" ] }, { "cell_type": "markdown", "metadata": { "id": "bIrFpUIxvTLe" }, "source": [ "## Configure the dataset for performance\n", "\n", "Use buffered prefetching such that you can yield data from the disk without having I/O become blocking. Two important functions to use while loading data are:\n", "\n", "* `Dataset.cache`: keeps the sets of frames in memory after they're loaded off the disk during the first epoch. This function ensures that the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.\n", "\n", "* `Dataset.prefetch`: overlaps data preprocessing and model execution while training.\n", "Refer to [Better performance with the `tf.data`](https://www.tensorflow.org/guide/data_performance) for details." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QSxjFtxAvY3_" }, "outputs": [], "source": [ "AUTOTUNE = tf.data.AUTOTUNE\n", "\n", "train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size = AUTOTUNE)\n", "val_ds = val_ds.cache().shuffle(1000).prefetch(buffer_size = AUTOTUNE)" ] }, { "cell_type": "markdown", "metadata": { "id": "VaY-hyr-Fbfr" }, "source": [ "To prepare the data to be fed into the model, use batching as shown below. Notice that when working with video data, such as AVI files, the data should be shaped as a five dimensional object. These dimensions are as follows: `[batch_size, number_of_frames, height, width, channels]`. In comparison, an image would have four dimensions: `[batch_size, height, width, channels]`. 
{ "cell_type": "markdown", "metadata": { "id": "VaY-hyr-Fbfr" }, "source": [ "To prepare the data to be fed into the model, use batching as shown below. Notice that when working with video data, such as AVI files, the data should be shaped as a five-dimensional object. These dimensions are as follows: `[batch_size, number_of_frames, height, width, channels]`. In comparison, an image would have four dimensions: `[batch_size, height, width, channels]`. The image below is an illustration of how the shape of video data is represented.\n", "\n", "![Video data shape](https://www.tensorflow.org/images/tutorials/video/video_data_shape.png)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pp2Qc6XSFmeB" }, "outputs": [], "source": [ "train_ds = train_ds.batch(2)\n", "val_ds = val_ds.batch(2)\n", "\n", "train_frames, train_labels = next(iter(train_ds))\n", "print(f'Shape of training set of frames: {train_frames.shape}')\n", "print(f'Shape of training labels: {train_labels.shape}')\n", "\n", "val_frames, val_labels = next(iter(val_ds))\n", "print(f'Shape of validation set of frames: {val_frames.shape}')\n", "print(f'Shape of validation labels: {val_labels.shape}')" ] }, { "cell_type": "markdown", "metadata": { "id": "hqjXn1FgsMqZ" }, "source": [ "## Next steps\n", "\n", "Now that you have created a TensorFlow `Dataset` of video frames with their labels, you can use it with a deep learning model. The following classification model, which uses a pre-trained [EfficientNet](https://arxiv.org/abs/1905.11946){:.external}, trains to high accuracy in a few minutes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qzqgPBUuForj" }, "outputs": [], "source": [ "net = tf.keras.applications.EfficientNetB0(include_top = False)\n", "net.trainable = False\n", "\n", "model = tf.keras.Sequential([\n", "    tf.keras.layers.Rescaling(scale=255),\n", "    tf.keras.layers.TimeDistributed(net),\n", "    tf.keras.layers.Dense(10),\n", "    tf.keras.layers.GlobalAveragePooling3D()\n", "])\n", "\n", "model.compile(optimizer = 'adam',\n", "              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True),\n", "              metrics=['accuracy'])\n", "\n", "model.fit(train_ds,\n", "          epochs = 10,\n", "          validation_data = val_ds,\n", "          callbacks = tf.keras.callbacks.EarlyStopping(patience = 2, monitor = 'val_loss'))" ] }, { "cell_type": "markdown", "metadata": { "id": "DdJm7ojgGxtT" }, "source": [ "To learn more about working with video data in TensorFlow, check out the following tutorials:\n", "\n", "* [Build a 3D CNN model for video classification](https://www.tensorflow.org/tutorials/video/video_classification)\n", "* [MoViNet for streaming action recognition](https://www.tensorflow.org/hub/tutorials/movinet)\n", "* [Transfer learning for video classification with MoViNet](https://www.tensorflow.org/tutorials/video/transfer_learning_with_movinet)" ] } ], "metadata": { "accelerator": "GPU", "colab": { "name": "video.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }