<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
<url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c1/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 1. Intuition of AI: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>1. Intuition of AI - What is artificial intelligence? - Defining AI - Data is the fuel for AI algorithms - Algorithms are like recipes - Algorithms vs. models - The algorithm as the active solver - The algorithm as the builder - The evolution of AI - Different types of problems - Search problems: Finding a path to a solution - Optimization problems: Finding a good solution - Prediction and classification problems: Learning from patterns in data - Clustering problems: Identifying patterns in data - Deterministic models: Getting the same result each time - Probabilistic models: Getting potentially different results each time - Intuition of AI concepts - Narrow intelligence: Specific-purpose solutions - General intelligence: Humanlike solutions - Super intelligence: The great unknown - Old AI and New AI - Search algorithms - Biology-inspired algorithms - Machine learning algorithms - Deep learning algorithms - Generative models - Large language models - Generative image models - Some uses for AI algorithms - Agriculture: Optimizing plant growth - Banking: Preventing fraudulent transactions - Cybersecurity: Safeguarding email inboxes - Health care: Diagnosing patients - Logistics: Finding the best delivery route - Fitness and health: Optimizing your body - Games: Adapting in complexity - Conclusion - Summary of intuition of 
AI</video:description><video:duration>2062</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c2/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 2. Search fundamentals: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>2. Search fundamentals - What are planning and searching? - Cost of computation: The reason for smart algorithms - Problems that search algorithms can solve - Representing state: Creating a framework to represent problem spaces and solutions - Graphs: Representing search problems and solutions - Representing a graph as a concrete data structure - Trees: Concrete structures used to represent search solutions - Uninformed search: Looking blindly for solutions - BFS: Looking wide before looking deep - Steps of the BFS algorithm - DFS: Looking deep before looking wide - Steps of the DFS algorithm - Python code sample of the DFS algorithm - Use cases for uninformed search algorithms - Optional: More about graph categories - Optional: More ways to represent graphs - Incidence matrix - Adjacency list - Summary of search 
fundamentals</video:description><video:duration>2330</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c3/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 3. Intelligent search: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>3. Intelligent search - Defining heuristics: Designing educated guesses - Informed search: Looking for solutions with guidance - A* search - Steps in A* search - Python code sample for A* search - Use cases for informed search algorithms - Adversarial search: Looking for solutions in a changing environment - Minimax search: Simulate actions and choose the best future - Steps in Minimax search - Python code sample for Minimax search - Alpha-beta pruning: Optimize by exploring the sensible paths only - Steps in Minimax search with alpha-beta pruning - Python code sample for Minimax search with alpha-beta pruning - Use cases for adversarial search algorithms - Summary of intelligent 
search</video:description><video:duration>2004</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c4/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 4. Evolutionary algorithms: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>4. Evolutionary algorithms - What is evolution? - Problems that evolutionary algorithms can solve - Life cycle of genetic algorithms - Encoding the solution spaces - Binary encoding: Representing possible solutions with 0s and 1s - Creating a population of solutions - Measuring fitness of individuals in a population - Python code sample for calculating fitness - Selecting parents based on their fitness - Using population models - Performing roulette-wheel selection - Python code sample for roulette-wheel selection - Reproducing individuals from parents - Single-point crossover: Inheriting one part from each parent - Python code sample for crossover - Two-point crossover: Inheriting more parts from each parent - Uniform crossover: Inheriting many parts from each parent - Bit-string mutation for binary encoding - Python code sample for mutation - Flip-bit mutation for binary encoding - Populating the next generation - Exploration vs. 
exploitation - Stopping conditions - Python code sample for running the genetic algorithm - Configuring the parameters of a genetic algorithm - Use cases for evolutionary algorithms - Summary of evolutionary algorithms</video:description><video:duration>2840</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c5/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 5. Advanced evolutionary approaches: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>5. 
Advanced evolutionary approaches - Evolutionary algorithm life cycle - Alternative selection strategies - Rank selection: Even the playing field - Tournament selection: Let them fight - Elitism selection: Choose only the best - Real-value encoding: Working with real numbers - Real-value encoding at its core - Arithmetic crossover: Reproduce with math - Boundary mutation - Arithmetic mutation - Order encoding: Working with sequences - Importance of the fitness function - Order encoding at its core - Order mutation - Tree encoding: Working with hierarchies - Tree encoding at its core - Tree crossover: Inheriting portions of a tree - Change-node mutation - Common types of evolutionary algorithms - Genetic programming - Evolutionary programming - Glossary of evolutionary-algorithm terms - More use cases for evolutionary algorithms - Summary of advanced evolutionary approaches</video:description><video:duration>1409</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c6/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 6. Swarm intelligence: Ants: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>6. Swarm intelligence: Ants - What is swarm intelligence? - Problems that ACO algorithms can solve - Representing state: What do paths and ants look like? 
- Python code sample for ant data and operations - The ACO algorithm life cycle - Set up pheromones - Python code sample for initializing pheromones - Set up the population of ants - Python code sample for setting up ants - Choose the next visit for each ant - Simulating the “random” nature of ants - Selecting a destination based on a heuristic - Python code sample for ant decision-making - Update the pheromone trails - Updating pheromones due to evaporation - Updating pheromones based on ant tours - Python code sample for updating pheromones - Update the best solution - Python code sample for choosing the best solution - Determine the stopping criteria - Python code sample for running the ACO algorithm - Use cases for ACO algorithms - Summary of swarm intelligence: ants</video:description><video:duration>1882</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c7/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 7. Swarm intelligence: Particles: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>7. Swarm intelligence: Particles - What is particle swarm optimization? - Optimization problems: A slightly more technical perspective - Problems that PSO algorithms can solve - Representing state: What do particles look like? 
- PSO algorithm life cycle - Set up particles - Python code sample for generating a swarm - Calculate the fitness of particles - Python code sample for calculating fitness - Update positions of particles - Updating particle velocity - Updating velocity - Updating position - Python code sample for updating particle velocity - Determine the stopping criteria - Use cases for PSO algorithms - Summary of swarm intelligence: particles</video:description><video:duration>2236</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c8/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 8. Machine learning: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>8. Machine learning - What is machine learning? 
- Problems that machine learning can solve - Supervised learning - Unsupervised learning - Reinforcement learning - Following a machine-learning workflow - Collecting and understanding data - Preparing data: Clean and wrangle - Missing data - Ambiguous values - Encoding categorical data - Testing and training data - Training a model: Predict with linear regression - Fitting a line to the data - Finding the mean of the features - Finding regression lines with the least-squares method - Python code sample for fitting a regression line - Testing the model: Determine the accuracy of the model - Separating training and testing data - Measuring the performance of the line - Improving accuracy - Classification with decision trees - Classification problems: This or that - The basics of decision trees - Training decision trees - Data structures for decision trees - Decision-tree learning life cycle - Python code sample for classification with decision trees - Classifying examples with decision trees - Other popular machine learning algorithms - Use cases for machine learning algorithms - Summary of machine learning</video:description><video:duration>3845</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c9/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 9. Artificial neural networks: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>9. Artificial neural networks - What are artificial neural networks? 
- The Perceptron: A representation of a neuron - Defining ANNs - Python code sample for scaling training and testing data - Python code sample for defining a neural network - Forward propagation: Using a trained ANN - Python code sample for forward propagation - Backpropagation: Training an ANN - Phase A: Setup - Phase B: Forward propagation - Phase C: Training - Python code sample for backpropagation - Options for activation functions - Designing ANNs - Inputs and outputs - Hidden layers and nodes - Weights - Bias - Activation functions - Cost function - Learning rate - Expressing ANNs mathematically - The weighted sum as a dot product - The hidden layer as matrix multiplication - Adding the activation function - The output layer - The final neural network equation - The cost function - Expressing backpropagation mathematically - The Chain Rule: Domino effect - Calculating the gradients: Backward pass - The weight update: Gradient descent - ANN types and use cases - Recurrent neural networks - Convolutional neural networks - Generative adversarial networks - Summary of artificial neural networks</video:description><video:duration>4079</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c10/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 10. Reinforcement learning: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>10. Reinforcement learning - What is reinforcement learning? 
- The inspiration for reinforcement learning - Problems that reinforcement learning can solve - The life cycle of reinforcement learning - Simulation and data: The agent’s environment - Python code sample for simulating the Parking-Lot Problem - Training with the simulation using Q-learning - Initialize - Repeat for n iterations - Python code sample for training using a Q-table - Testing with the simulation and Q-table - Measuring the performance of training - Model-free and model-based learning - Deep learning approaches to reinforcement learning - Training with an ANN - Scalar encoding - One-hot encoding - Calculating loss and backpropagation - Number of episodes - Use cases for reinforcement learning - Robotics - Recommendation engines - Financial trading - Game playing - Summary of reinforcement learning</video:description><video:duration>3332</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c11/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 11. Large language models: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>11. Large language models - What are LLMs? 
- The intuition behind language prediction - Why the sizes of tokens and parameters matter - An LLM training workflow - Prepare the training data - Selecting and collecting data - Volume and quality of tokens - Licensing of content - Cleaning and preprocessing data - Language filtering - Boilerplate stripping - Near-duplicate removal - Safety and compliance filters - Normalization - Data-quality audit - Encoding: From text to numbers - Tokenization - Using characters as tokens - Counting all bigrams - Merging the most frequent pair - Python code sample for a BPE tokenizer - Assigning IDs to the tokens - Vectorization - Python code sample for batching tokens - Designing the architecture - Encoding: Creating trainable embeddings - Sampling a batch of tokens - Creating a trainable embedding matrix - Creating positional encodings - Combining the embedding matrix and positional encodings - Python code sample for creating embeddings - Self-attention: Start training the LLM - Making linear weight matrix projections - Asking every other token - Calculating attention weights - Calculating the weighted sum - Python code sample for self-attention - Using multiple attention heads - Normalizing layers - Python code sample for layer normalization - Decoding: Finding meaning through neural networks - Project up layer - Python code sample for a project up layer - Project down layer - Adding bias - Adding the residual - Layer normalization - Stacking Transformer blocks - Making a prediction - Creating logits - Using softmax - Python code sample for a project down layer - Backpropagation and calculating loss - Calculating cross-entropy loss - Backpropagation - Python code sample for calculating training loss - Control the LLM - Training epochs - Saving checkpoints - Stopping mechanisms - Hyperparameter tuning - Few-shot and zero-shot learning - Refining LLMs with reinforcement learning - Collecting human-feedback data - Training a reward model - Fine-tuning the LLM with 
reinforcement lea…</video:description><video:duration>6742</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url><url><loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/9781633434813VE-gaase_c12/</loc><lastmod>2026-04-27</lastmod><changefreq>weekly</changefreq><priority>0.6</priority><video:video><video:title>Chapter 12. Generative image models: Grokking AI Algorithms, Second Edition, Video Edition</video:title><video:description>12. Generative image models - What are generative image models? - The intuition behind image generation - A generative image model training workflow - Prepare the image training data - Selecting and collecting image data - The crucial role of captions - Licensing of content - Cleaning and preprocessing image data - Embedding: From images to numbers - Image normalization - Forward diffusion - Python code sample for forward diffusion - Timestep embedding - Python code sample for timestep embedding - Text label embedding - Python code sample for text embedding - Design the architecture - Convolutional neural networks - CNN input shape - Filter (AKA kernel) - Convolution operation - Feature map - U-Net - Denoise: From numbers to an image - Encoder: Downsampling layers - First convolutional layer - Python code sample for a convolutional layer - Downsampling - Python code sample for downsampling - Bridge - Bridge convolution - Embedding injection - Python code sample for injecting embeddings - Decoder: Upsampling layers - Upsampling - Skip connection - Python code sample for upsampling - Final convolutions - Python code sample for the skip-connection and final 
convolutions - Train: Calculate loss and backpropagation - Calculating loss - Backpropagation - Generating an image - Python code sample for backpropagation - Starting with a blank canvas (of pure noise) - Denoising the data - Predict the noise - Subtract the noise - Repeat - Python code sample for generating an image - Controlling the diffusion model - Training data composition and diversity - Timesteps and noise schedule - Attention layers and cross-attention injection - Training epochs - Inpainting and outpainting - Low-rank adaptation - High-resolution fixes and upscalers - ControlNets and Image Prompt Adapters - Refining aesthetics with human feedback - Use cases for image generation - Creative ideation and concept art - Commercial design and advertising - Content creation and media - Personalization and photo editing - Summary of generative image models</video:description><video:duration>6006</video:duration><video:thumbnail_loc>https://www.oreilly.com/library/cover/9781633434813VE/</video:thumbnail_loc><video:publication_date>2026-04-07</video:publication_date><video:tag>course</video:tag><video:requires_subscription>yes</video:requires_subscription><video:live>no</video:live><video:gallery_loc>https://www.oreilly.com/videos/grokking-ai-algorithms/9781633434813VE/</video:gallery_loc></video:video></url>
</urlset>
