Technical requirements
Efficient data serving
    Introducing the TensorFlow Data API
        Intuition behind the TensorFlow Data API
            Feeding fast and data-hungry models
            Inspiration from lazy structures
        Structure of TensorFlow data pipelines
            Extract, Transform, Load
            API interface
    Setting up input pipelines
        Extracting (from tensors, text files, TFRecord files, and more)
            From NumPy and TensorFlow data
            From files
            From other inputs (generator, SQL database, range, and others)
        Transforming the samples (parsing, augmenting, and more)
            Parsing images and labels
            Parsing TFRecord files
            Editing samples
        Transforming the datasets (shuffling, zipping, parallelizing, and more)
            Structuring datasets
            Merging datasets
        Loading
    Optimizing and monitoring input pipelines
        Following best practices for optimization
            Parallelizing and prefetching
            Fusing operations
            Passing options to ensure global properties
        Monitoring and reusing datasets
            Aggregating performance statistics
            Caching and reusing datasets
How to deal with data scarcity
    Augmenting datasets
        Overview
            Why augment datasets?
            Considerations
        Augmenting images with TensorFlow
            TensorFlow Image module
            Example – augmenting images for our autonomous driving application
    Rendering synthetic datasets
        Overview
            Rise of 3D databases
            Benefits of synthetic data
        Generating synthetic images from 3D models
            Rendering from 3D models
            Post-processing synthetic images
        Problem – realism gap
    Leveraging domain adaptation and generative models (VAEs and GANs)
        Training models to be robust to domain changes
            Supervised domain adaptation
            Unsupervised domain adaptation
            Domain randomization
        Generating larger or more realistic datasets with VAEs and GANs
            Discriminative versus generative models
            VAEs
            GANs
            Augmenting datasets with conditional GANs
Summary
Questions
Further reading