Hands-On Neuroevolution with Python

Book description

Increase the performance of various neural network architectures using NEAT, HyperNEAT, ES-HyperNEAT, Novelty Search, SAFE, and deep neuroevolution

Key Features

  • Implement neuroevolution algorithms to improve the performance of neural network architectures
  • Understand evolutionary algorithms and neuroevolution methods with real-world examples
  • Learn essential neuroevolution concepts and how they are used in domains including games, robotics, and simulations


Neuroevolution is a form of artificial intelligence learning that uses evolutionary algorithms to simplify the process of solving complex tasks in domains such as games, robotics, and the simulation of natural processes. This book will give you comprehensive insights into essential neuroevolution concepts and equip you with the skills you need to apply neuroevolution-based algorithms to solve practical, real-world problems.

You'll start by learning key neuroevolution concepts and methods, writing Python code as you go. You'll also get hands-on experience with popular Python libraries and work through examples of classical reinforcement learning, path planning for autonomous agents, and developing agents that autonomously play Atari games. Next, you'll learn to solve common and not-so-common challenges in natural computing using neuroevolution-based algorithms. Later, you'll understand how to apply neuroevolution strategies to existing neural network designs to improve training and inference performance. Finally, you'll gain clear insights into the topology of neural networks and how neuroevolution lets you grow complex networks from simple ones.

By the end of this book, you will not only have explored existing neuroevolution-based algorithms, but also have the skills you need to apply them in your research and work assignments.

What you will learn

  • Discover the most popular neuroevolution algorithms – NEAT, HyperNEAT, and ES-HyperNEAT
  • Explore how to implement neuroevolution-based algorithms in Python
  • Get up to speed with advanced visualization tools to examine evolved neural network graphs
  • Understand how to examine the results of experiments and analyze algorithm performance
  • Delve into neuroevolution techniques to improve the performance of existing methods
  • Apply deep neuroevolution to develop agents for playing Atari games

Who this book is for

This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who are looking to implement neuroevolution algorithms from scratch. A working knowledge of the Python programming language and a basic understanding of deep learning and neural networks are required.

Table of contents

  1. Title Page
  2. Copyright and Credits
    1. Hands-On Neuroevolution with Python
  3. Dedication
  4. About Packt
    1. Why subscribe?
  5. Contributors
    1. About the author
    2. About the reviewers
    3. Packt is searching for authors like you
  6. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  7. Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods
  8. Overview of Neuroevolution Methods
    1. Evolutionary algorithms and neuroevolution-based methods
      1. Genetic operators
        1. Mutation operator
        2. Crossover operator
      2. Genome encoding schemes
        1. Direct genome encoding
        2. Indirect genome encoding
      3. Coevolution
      4. Modularity and hierarchy
    2. NEAT algorithm overview
      1. NEAT encoding scheme
      2. Structural mutations
      3. Crossover with an innovation number
      4. Speciation
    3. Hypercube-based NEAT
      1. Compositional Pattern Producing Networks
      2. Substrate configuration
      3. Evolving connective CPPNs and the HyperNEAT algorithm
    4. Evolvable-Substrate HyperNEAT
      1. Information patterns in the hypercube
      2. Quadtree as an effective information extractor
      3. ES-HyperNEAT algorithm
    5. Novelty Search optimization method
      1. Novelty Search and natural evolution
      2. Novelty metric
    6. Summary
    7. Further reading
  9. Python Libraries and Environment Setup
    1. Suitable Python libraries for neuroevolution experiments
      1. NEAT-Python
        1. NEAT-Python usage example
      2. PyTorch NEAT
        1. PyTorch NEAT usage example
      3. MultiNEAT
        1. MultiNEAT usage example
      4. Deep Neuroevolution
      5. Comparing Python neuroevolution libraries
    2. Environment setup
      1. Pipenv
      2. Virtualenv
      3. Anaconda
    3. Summary
  10. Section 2: Applying Neuroevolution Methods to Solve Classic Computer Science Problems
  11. Using NEAT for XOR Solver Optimization
    1. Technical requirements
    2. XOR problem basics
    3. The objective function for the XOR experiment
    4. Hyperparameter selection
      1. NEAT section
      2. DefaultStagnation section
      3. DefaultReproduction section
      4. DefaultSpeciesSet section
      5. DefaultGenome section
      6. XOR experiment hyperparameters
    5. Running the XOR experiment
      1. Environment setup
      2. XOR experiment source code
      3. Running the experiment and analyzing the results
    6. Exercises
    7. Summary
  12. Pole-Balancing Experiments
    1. Technical requirements
    2. The single-pole balancing problem
      1. The equations of motion of the single-pole balancer
      2. State equations and control actions
      3. The interactions between the solver and the simulator
    3. Objective function for a single-pole balancing experiment
      1. Cart-pole apparatus simulation
      2. The simulation cycle
      3. Genome fitness evaluation
    4. The single-pole balancing experiment
      1. Hyperparameter selection
      2. Working environment setup
      3. The experiment runner implementation
        1. Function to evaluate the fitness of all genomes in the population
        2. The experiment runner function
      4. Running the single-pole balancing experiment
    5. Exercises
    6. The double-pole balancing problem
      1. The system state and equations of motion
      2. Reinforcement signal
      3. Initial conditions and state update
      4. Control actions
      5. Interactions between the solver and the simulator
    7. Objective function for a double-pole balancing experiment
    8. Double-pole balancing experiment
      1. Hyperparameter selection
      2. Working environment setup
      3. The experiment runner implementation
      4. Running the double-pole balancing experiment
    9. Exercises
    10. Summary
  13. Autonomous Maze Navigation
    1. Technical requirements
    2. Maze navigation problem
    3. Maze simulation environment
      1. Maze-navigating agent
      2. Maze simulation environment implementation
        1. Sensor data generation
        2. Agent position update
      3. Agents records store
      4. The agent record visualization
    4. Objective function definition using the fitness score
    5. Running the experiment with a simple maze configuration
      1. Hyperparameter selection
      2. Maze configuration file
      3. Working environment setup
      4. The experiment runner implementation
        1. Genome fitness evaluation
      5. Running the simple maze navigation experiment
        1. Agent record visualization
    6. Exercises
    7. Running the experiment with a hard-to-solve maze configuration
      1. Hyperparameter selection
      2. Working environment setup and experiment runner implementation
      3. Running the hard-to-solve maze navigation experiment
    8. Exercises
    9. Summary
  14. Novelty Search Optimization Method
    1. Technical requirements
    2. The NS optimization method
    3. NS implementation basics
      1. NoveltyItem
      2. NoveltyArchive
    4. The fitness function with the novelty score
      1. The novelty score
      2. The novelty metric
      3. Fitness function
        1. The population fitness evaluation function
        2. The individual fitness evaluation function
    5. Experimenting with a simple maze configuration
      1. Hyperparameter selection
      2. Working environment setup
      3. The experiment runner implementation
        1. The trials cycle
        2. The experiment runner function
      4. Running the simple maze navigation experiment with NS optimization
        1. Agent record visualization
      5. Exercise 1
    6. Experimenting with a hard-to-solve maze configuration
      1. Hyperparameter selection and working environment setup
      2. Running the hard-to-solve maze navigation experiment
      3. Exercise 2
    7. Summary
  15. Section 3: Advanced Neuroevolution Methods
  16. Hypercube-Based NEAT for Visual Discrimination
    1. Technical requirements
    2. Indirect encoding of ANNs with CPPNs
      1. CPPN encoding
      2. Hypercube-based NeuroEvolution of Augmenting Topologies
    3. Visual discrimination experiment basics
      1. Objective function definition
    4. Visual discrimination experiment setup
      1. Visual discriminator test environment
        1. Visual field definition
        2. Visual discriminator environment
      2. Experiment runner
        1. The experiment runner function
          1. Initializing the first CPPN genome population
          2. Running the neuroevolution over the specified number of generations
          3. Saving the results of the experiment
        2. The substrate builder function
        3. Fitness evaluation
    5. Visual discrimination experiment
      1. Hyperparameter selection
      2. Working environment setup
      3. Running the visual discrimination experiment
    6. Exercises
    7. Summary
  17. ES-HyperNEAT and the Retina Problem
    1. Technical requirements
    2. Manual versus evolution-based configuration of the topography of neural nodes
    3. Quadtree information extraction and ES-HyperNEAT basics
    4. Modular retina problem basics
      1. Objective function definition
    5. Modular retina experiment setup
      1. The initial substrate configuration
      2. Test environment for the modular retina problem
        1. The visual object definition
        2. The retina environment definition
          1. The function to create a dataset with all the possible visual objects
          2. The function to evaluate the detector ANN against two specific visual objects
      3. Experiment runner
        1. The experiment runner function
        2. The substrate builder function
        3. Fitness evaluation
          1. The eval_genomes function
          2. The eval_individual function
    6. Modular retina experiment
      1. Hyperparameter selection
      2. Working environment setup
      3. Running the modular retina experiment
    7. Exercises
    8. Summary
  18. Co-Evolution and the SAFE Method
    1. Technical requirements
    2. Common co-evolution strategies
    3. SAFE method
    4. Modified maze experiment
      1. The maze-solving agent
      2. The maze environment
      3. Fitness function definition
        1. Fitness function for maze solvers
        2. Fitness function for the objective function candidates
    5. Modified Novelty Search
      1. The _add_novelty_item function
      2. The evaluate_novelty_score function
    6. Modified maze experiment implementation
      1. Creation of co-evolving populations
        1. Creation of the population of the objective function candidates
        2. Creating the population of maze solvers
      2. The fitness evaluation of the co-evolving populations
        1. Fitness evaluation of objective function candidates
          1. The evaluate_obj_functions function implementation
          2. The evaluate_individ_obj_function function implementation
        2. Fitness evaluation of the maze-solver agents
          1. The evaluate_solutions function implementation
          2. The evaluate_individual_solution function implementation
          3. The evaluate_solution_fitness function implementation
      3. The modified maze experiment runner
    7. Modified maze experiment
      1. Hyperparameters for the maze-solver population
      2. Hyperparameters for the objective function candidates population
      3. Working environment setup
      4. Running the modified maze experiment
    8. Exercises
    9. Summary
  19. Deep Neuroevolution
    1. Technical requirements
    2. Deep neuroevolution for deep reinforcement learning
    3. Evolving an agent to play the Frostbite Atari game using deep neuroevolution
      1. The Frostbite Atari game
      2. Game screen mapping into actions
        1. Convolutional layers
        2. The CNN architecture to train the Atari playing agent
      3. The RL training of the game agent
        1. The genome encoding scheme
          1. Genome encoding scheme definition
          2. Genome encoding scheme implementation
        2. The simple genetic algorithm
    4. Training an agent to play the Frostbite game
      1. Atari Learning Environment
        1. The game step function
        2. The game observation function
        3. The reset Atari environment function
      2. RL evaluation on GPU cores
        1. The RLEvalutionWorker class
          1. Creating the network graph
          2. The graph evaluation loop
          3. The asynchronous task runner
        2. The ConcurrentWorkers class
          1. Creating the evaluation workers
          2. Running work tasks and monitoring results
      3. Experiment runner
        1. Experiment configuration file
        2. Experiment runner implementation
    5. Running the Frostbite Atari experiment
      1. Setting up the work environment
      2. Running the experiment
      3. Frostbite visualization
    6. Visual inspector for neuroevolution
      1. Setting up the work environment
      2. Using VINE for experiment visualization
    7. Exercises
    8. Summary
  20. Section 4: Discussion and Concluding Remarks
  21. Best Practices, Tips, and Tricks
    1. Starting with problem analysis
      1. Preprocessing data
        1. Data standardization
        2. Scaling inputs to a range
        3. Data normalization
      2. Understanding the problem domain
      3. Writing good simulators
    2. Selecting the optimal search optimization method
      1. Goal-oriented search optimization
        1. Mean squared error
        2. Euclidean distance
      2. Novelty Search optimization
    3. Advanced visualization
    4. Tuning hyperparameters
    5. Performance metrics
      1. Precision score
      2. Recall score
      3. F1 score
      4. ROC AUC
      5. Accuracy
    6. Python coding tips and tricks
      1. Coding tips and tricks
      2. Working environment and programming tools
    7. Summary
  22. Concluding Remarks
    1. What we learned in this book
      1. Overview of the neuroevolution methods
      2. Python libraries and environment setup
      3. Using NEAT for XOR solver optimization
      4. Pole-balancing experiments
      5. Autonomous maze navigation
      6. Novelty Search optimization method
      7. Hypercube-based NEAT for visual discrimination
      8. ES-HyperNEAT and the retina problem
      9. Co-evolution and the SAFE method
      10. Deep Neuroevolution
    2. Where to go from here
      1. Uber AI Labs
      2. alife.org
      3. Open-ended evolution at Reddit
      4. The NEAT Software Catalog
      5. arXiv.org
      6. The NEAT algorithm paper
    3. Summary
  23. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think

Product information

  • Title: Hands-On Neuroevolution with Python
  • Author(s): Iaroslav Omelianenko
  • Release date: December 2019
  • Publisher(s): Packt Publishing
  • ISBN: 9781838824914