Introduction: The Right Brain in the Right Place (Why We Need Autonomous AI)

Though the problems industry has asked me to solve with autonomous AI are many and varied, they can nonetheless be divided into three clear categories, which I will now explain in detail.

I consulted for a company that uses computer numerical control (CNC) machines to make cell phone cases. Spinning tools cut metal stock into the shape of the phone. After each case is cut, the CNC machine door opens. A robotic arm loads the finished part onto a conveyor, grasps the next part from a fixture, and loads it into the CNC machine to be cut. If the part is not oriented in the fixture at precisely the right angle, or if the arriving case is even slightly wider or narrower than expected, the robot arm will fail to grasp the part or will drop it before it reaches the CNC machine.

The automated system is inflexible. An automated system is one that makes decisions by calculation, search, or lookup. The robot arm controller was programmed by hand to travel from one fixed point to another and perform the task in a very specific way. It succeeds only if the phone case is the perfect width and sits in the fixture at the perfect angle, as depicted in Figure I-1.

This organization needs more flexible and adaptable automation that can control the robot arm to successfully grasp cases of a wide variety of widths from a range of fixture orientations. This is a great application for autonomous AI. Autonomous AI is flexible and adapts to what it perceives. For example, it can practice grasping cases of various widths that sit in the fixture at various angles and learn to succeed in a wider variety of scenarios, as shown in Figure I-2.

Figure I-1. Width and fixture angle variations that might challenge an automated system during a cell phone manufacturing process.
Figure I-2. Width and fixture angle variations that an autonomous AI might learn to adapt to for a cell phone case manufacturing process.
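
To make the idea of "practicing" concrete, here is a minimal sketch, in Python, of how training scenarios for a grasping brain might be randomized. The width and angle ranges are illustrative assumptions, not measurements from a real fixture.

    import random

    def sample_grasp_scenario(rng=random):
        # Each practice episode presents the brain with a case of a
        # different width, sitting in the fixture at a different angle.
        # Ranges are illustrative, not taken from a real process.
        return {
            "case_width_mm": rng.uniform(68.0, 82.0),
            "fixture_angle_deg": rng.uniform(-15.0, 15.0),
        }

    # Generate a batch of practice episodes for training.
    scenarios = [sample_grasp_scenario() for _ in range(1000)]

An automated system is hand-programmed for a single point in this space; the autonomous AI practices across the whole space until it can succeed anywhere in it.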

The Changing World Requires Adapting Skills

An executive from a global steel company asked me to travel to Indiana to examine part of their steelmaking operation, determine where an AI brain could help, and design that brain. We arrived at the steel mill early in the morning and met with the site CTO, who pointed us to the building that housed the process he wanted us to focus on and gave us a bit of direction. Then we put on protective shoes, hard hats, and metal sleeves and went in. The foreman came out to meet us and took us on a tour of a “building” that could fit many tall buildings inside of it and spanned many city blocks (see Figure I-3). This was the last phase of the steelmaking process, where a strip of steel is rolled between what look like two paper towel rolls, sent through a furnace to temper it, and finally passed through a bath of molten zinc to protect it from rust (this is called “galvanizing”).

We talked to the operator in each control room (at steel mills these are called “pulpits”). I interviewed them about how they make decisions to run the machines: what information they use to make each decision and how they operate the machines differently under different scenarios. Then my hosts whisked me away to a research center, where I reported to the chief digital officer (CDO) and a group of researchers my recommendations about which of the mill’s decisions could be improved using AI. I recommended involving AI in galvanization, the last step of the process.

Figure I-3. Photograph of steel mill.

The operators control the coating equipment in real time to make sure that the zinc coating is even and the correct thickness. This job used to be a lot easier when the plant made most of its steel at the same thickness, the same width, and the same coating thickness for the big three US auto manufacturers. Now there are many more customers who ask for many different thicknesses and widths of steel for heating ducts, construction, and all kinds of other things. The operators were having a hard time keeping the coating uniform and the thickness correct across all these variations. Some customers required wide, thin steel with a thin coating; others ordered narrow, thick steel with a thick coating. The world of steel manufacturing had changed, and this company was looking to autonomous AI for solutions.

This company is facing a difficult situation. Their business environments (customers, markets, processes, equipment, and workers) are changing, and they are struggling to adapt their decision-making. Often, their automated systems, which were built to automate repeatable, predictable processes, cannot change their programmed behavior in response to these changing environments. As conditions change, the systems make worse decisions and are sometimes taken out of service altogether because their decisions are no longer relevant or of sufficiently high quality.

Problems Need Solutions, Not AI

Humans and automated systems are reaching the limits of the improvements they can make to industrial processes. Now, enterprises are turning to AI for answers. Unfortunately, much of the discussion about AI treats it either as fiction (overhyped, overpromised capability) or as science fiction (will AI ever reach superintelligence, and if it does, what are the philosophical and ethical implications?). Neither discussion helps organizations improve their operations. What enterprises need instead is a playbook for designing useful AI into autonomous systems where it can make decisions more effectively than humans or automated systems can.

When I first started designing autonomous AI, I pitched “a new form of AI” that was different from other kinds of AI and machine learning. I quickly realized that the companies I consulted for didn’t care about AI. They sought technology that could control and optimize their high-stakes business processes better than their existing solutions. They cared that their operators and automated control systems were effective but struggled to deliver additional process improvement. They understood that control and optimization technology is always evolving and that autonomous AI is simply the next evolution of that technology, with unique differentiating characteristics.

What Can AI Do for Me in Real Life?

The AI Index Report counts over 120,000 AI-related peer-reviewed academic papers published in 2019. More than a few of these papers were highly publicized in the press. Some call this the “research to PR pipeline” because of how companies shuttle research breakthroughs straight from the laboratory into press announcements. While it’s great to have access to cutting-edge research, this pipeline can make it seem as if every new algorithm is ready to solve real-life problems. The challenge is that people and process concerns, combined with the uncertainties of real-life production processes, render practically useless many algorithms that seem very promising in controlled laboratory experiments. Let me give you an example.

A major US rental car company came to us asking whether AI could help them schedule the daily delivery of cars between their locations. Every day, in most major cities, about a dozen drivers shuttle cars from the rental outlets where they were dropped off to the rental locations where they are needed for pickup. A human scheduler plans the routes for each driver to deliver the right vehicles to the right places. Those familiar with the field of operations research, which is very active in solving logistics and delivery problems, might call this the “Vehicle Routing Problem.” Then they might tell you that there are various optimization algorithms that can search for the “optimal” set of routes so that, together, the drivers travel the shortest distance. So what’s the problem? Why would this company be using human schedulers? Don’t they know about Dijkstra’s algorithm for finding routes that travel the shortest total distance? Wait a minute. It’s not that simple.

Dijkstra’s shortest-path algorithm searches a network of possible routes and finds the routes for each driver that minimize total distance traveled. So, if you are in a city where traveling the shortest distance is always the best policy, Dijkstra’s algorithm will give you the best possible answer every time. Here’s the problem: for most metropolitan cities, the determining factor in how long each trip leg takes is traffic, not distance, yet operations research defines the classic vehicle routing problem without considering traffic. There are plenty of situations where the next best stop is not the closest one because of bad traffic conditions, especially during rush hour. Each city has unique traffic patterns, and those patterns vary based on a number of factors. Dijkstra’s algorithm doesn’t consider traffic at all, and it doesn’t change its scheduling behavior based on any of the factors that dictate traffic patterns. So, even if every rental car company knows how to program and utilize Dijkstra’s algorithm, it won’t effectively replace human route schedulers. Dijkstra’s algorithm also has limited ability to train inexperienced schedulers or augment the expertise of experienced schedulers.
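
To see the blind spot concretely, here is a minimal Dijkstra implementation over a toy city graph. The locations and mileages are hypothetical; the key point is that the edge weights are fixed distances, so the algorithm returns the same answer during Wednesday rush hour as it does at 3 a.m.

    import heapq

    def dijkstra(graph, source):
        # Shortest distance from source to every other location.
        # Edge weights are fixed distances: traffic never enters.
        dist = {source: 0.0}
        frontier = [(0.0, source)]
        while frontier:
            d, node = heapq.heappop(frontier)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for neighbor, miles in graph[node]:
                candidate = d + miles
                if candidate < dist.get(neighbor, float("inf")):
                    dist[neighbor] = candidate
                    heapq.heappush(frontier, (candidate, neighbor))
        return dist

    # Hypothetical rental locations with distances in miles.
    city = {
        "airport":  [("downtown", 12.0), ("suburb", 7.5)],
        "downtown": [("airport", 12.0), ("suburb", 9.0)],
        "suburb":   [("airport", 7.5), ("downtown", 9.0)],
    }
    print(dijkstra(city, "airport"))
    # {'airport': 0.0, 'downtown': 12.0, 'suburb': 7.5}

Nothing in this code can express “downtown is 12 miles away but 50 minutes away right now.” To model that, you would need time-dependent travel times, which is exactly what the human schedulers carry in their heads.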

Instead, here’s a brain that might adapt to traffic patterns better than Dijkstra’s algorithm can. This brain could also be used to coach inexperienced schedulers or advise experienced ones. Figure I-4 shows a hypothetical AI brain, not one designed for a real company, but using the techniques in this book you can design similar brains and modify this design for similar applications.

The example brain in Figure I-4 works like a taxi dispatcher. Each time a driver delivers a car, the brain decides which location the driver should deliver their next car to. The goal is to deliver all cars to the locations where they are needed in the least amount of time.

Figure I-4. Example brain for real-time scheduling of rental car deliveries in a major city.

Here’s how to read the brain design diagram. The ovals represent the input and the output of the brain. The brain receives information about traffic, vehicles that need to be delivered, and delivery locations. For example, through its input node the brain might learn that it is Wednesday during the morning rush hour commute, that 5 cars have been delivered so far, and that 98 cars still await delivery for the day. The modules represent skills that the brain learns in order to make scheduling decisions. We design a machine learning module into the brain (represented by a hexagon) to predict the trip length to each possible destination based on traffic patterns for the city. This module works a lot like the algorithms in Google and Apple Maps that predict how long each trip will take. The rectangle represents an AI decision-making module that determines which destination to route the driver to. See “Visual Language of Brain Design” for more details on how we visually represent brain designs. The brain learns to make scheduling decisions that adapt to traffic patterns and create schedules that deliver the daily stable of cars more quickly than Dijkstra’s algorithm would.
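
Here is a minimal sketch, in Python, of how the two modules in Figure I-4 fit together. The names, speeds, and rush-hour multiplier are invented stand-ins; in a real brain, the prediction module would be a machine learning model trained on the city’s historical trip data, and the decision module would be learned through practice.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        destination: str
        distance_miles: float
        cars_needed: int

    def predict_trip_minutes(candidate, weekday, hour):
        # Stand-in for the learned perception module (the hexagon):
        # predicts trip time from traffic patterns. This hardcoded
        # rush-hour multiplier is a placeholder for a trained model.
        rush_hour = weekday < 5 and hour in (7, 8, 9, 16, 17, 18)
        minutes_per_mile = 3.5 if rush_hour else 2.0
        return candidate.distance_miles * minutes_per_mile

    def choose_destination(candidates, weekday, hour):
        # Stand-in for the decision module (the rectangle): dispatch
        # the driver toward the most urgent, fastest-to-reach location.
        open_orders = [c for c in candidates if c.cars_needed > 0]
        return min(open_orders,
                   key=lambda c: predict_trip_minutes(c, weekday, hour) / c.cars_needed)

    fleet = [Candidate("downtown", 12.0, 5), Candidate("suburb", 7.5, 1)]
    print(choose_destination(fleet, weekday=2, hour=8).destination)  # downtown

Because the prediction feeds the decision, improving the traffic model improves every dispatch the brain makes, without rewriting the decision logic.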

This example doesn’t suggest that software algorithms are not useful for solving real-life problems. It’s a warning against picking an algorithm or technique that’s been demonstrated in research from a list and applying it to a real-world problem without considering all the requirements for a solution to that problem. Earlier, I posed the question, “Why hasn’t software solved more problems in manufacturing?” My answer is that if you pick from a “list of software algorithms” without deeply understanding the operations and processes you are trying to improve, you will not solve real-life problems well.

Tip

Instead of simplifying a decision-making process until a particular algorithm can make the decision well, add nuance to your decision-making capabilities until they can solve the realistic problem well.

AI Decision-Making Is Becoming More Autonomous

In his 1970 book The Structure of Scientific Revolutions (University of Chicago Press), Thomas Kuhn describes research breakthroughs as the punctuation between long periods of incremental improvement and experimentation. For example, in 1687 Sir Isaac Newton made an important discovery about gravity. In 1915, Albert Einstein made breakthroughs that provided a more nuanced and accurate picture of gravity. Einstein’s breakthrough doesn’t contradict Newton’s law; it provides a more comprehensive view of how gravity works. In the same way, quantum leaps in autonomous decision-making capability punctuate long periods of incremental improvement and experimentation within established paradigms. Stephen Jay Gould observes the same phenomenon in his discussion of punctuated equilibrium.1 Figure I-5 illustrates how scientific revolutions and mainstream democratization advance science over time.

Figure I-5. Scientific advancement over time: revolutions separated by periods of puzzle solving where mainstreaming and democratization occurs.

Throughout the history of AI and other research technologies, these periods of puzzle solving and incremental change between breakthroughs have produced more nuanced decision-making capabilities that spin off from research and become useful for solving real problems in industry. The jet fighter engine, for example, was developed in German research laboratories during WWII and used for the first time in a production fighter, the Messerschmitt Me 262. War drove the innovation of jet aircraft, which was then mainstreamed into commercial aircraft in the years after WWII.

The same is true for AI and automation-related technologies. Take the expert system, for example: a method for making automated decisions based on human experience. Expert systems were developed during the second major wave of AI research (1975–1982). They are great at capturing existing knowledge about how to perform tasks but proved inflexible and difficult to maintain. At one point, expert systems (which some thought would reach full human-comparable intelligence) comprised much of what was then considered AI research, but by the 1990s they had all but disappeared from AI research efforts. While it’s true that most of the fundamental research questions about expert systems had been answered by this time, expert systems hadn’t delivered the anticipated autonomy. Simultaneously, the period of puzzle solving began to mainstream and democratize expert systems into useful decision-making automation for real systems, while work began on new revolutions that addressed the weaknesses of expert systems for autonomous decision-making.

Expert systems are widely used today in finance and engineering. NASA developed a software language for writing them in 1980. In this book, I’ll show you how to combine expert systems with other AI techniques as we design autonomous AI. The first takeaway here is that research breakthroughs often aren’t ready to add value to production systems and processes until they mature enough to meet the people and process concerns of those who run those processes. The second takeaway is that new revolutions will continue to improve autonomous decision-making.

One of these revolutions is machine learning, which Wikipedia defined at the time of writing this book as the study of computer algorithms that can improve automatically through experience and by the use of data. Machine learning is a powerful paradigm for prediction and prescriptive decision-making, and a fundamental building block of modern autonomous AI that I’ll discuss extensively in this book. It is not, however, a replacement for all preceding decision-making paradigms. Machine learning is to previous decision-making paradigms as Einstein’s theory of relativity is to Newtonian physics: Newtonian physics applies when looking at objects of a certain size traveling at certain speeds, and relativity is a more nuanced paradigm that applies to situations where Newtonian physics doesn’t describe reality well. Discarding previous decision-making paradigms in favor of machine learning leads to a phenomenon that I call data science colonialism.

Beware of Data Science Colonialism

In the same way that software hasn’t solved more problems in manufacturing, the burgeoning field of data science hasn’t produced the anticipated sweeping positive effect on industry either. Data science, again using Wikipedia’s definition, is an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from noisy, structured, and unstructured data, and to apply knowledge and actionable insights from data across a broad range of application domains. The field is on fire, and companies look to data scientists to solve a wide range of problems. Unfortunately, though, as VentureBeat reports, 87% of machine learning models that data scientists create never make it into production. Data science colonialism is the practice of using data alone to dictate how to control a system or optimize a process, without sufficient understanding of the process. It ignores previous experience controlling the process and the engineering experience gained from designing it. Sometimes it even ignores the physical and chemical laws that govern the process. Data science colonialism ignores important details and condescends to experts who should be involved in designing and building autonomous AI.

There’s a flavor of data science that functions a lot like colonialism. Colonialism is a practice in which countries explore or even invade other territories claiming intentions to improve the societies they encounter, usually without consideration for the existing culture and values. A colonialist mindset might ask, “Why would I need to consult a more primitive society about what help they need from me? I should just be telling them what to do!” That’s one of the most egregious perspectives of colonialism: the arrogance of believing you don’t need to learn or consider anything about the people whose culture and way of life you are destroying. Unfortunately, I see a similar perspective among some misguided data science practitioners who don’t see the need to slow down, listen, and learn about a process from people before attempting to design a “superior” solution.

I was at a large nickel mine in Canada consulting with process experts about using AI to control a SAG mill (think of an eight-story-tall cement mixer). I stepped outside to take a phone call, and when I walked back down the hallway, I found one of the data scientists arguing with one of the experts. It wasn’t anything that professionals couldn’t work through together, but the disagreement was about whether we should trust and respect the operators’ existing expertise about how to run the mill.

Another time, I spoke with an executive who had an advanced degree in AI and oversaw process optimization at a manufacturing company. I explained to him that my approach to designing AI for industrial processes relies heavily on existing subject matter expertise about how to control or optimize the system. He thanked me for not being like the many companies that had come in and told him that all they needed was some of his data, and that with it they would build him an AI-based control system. He didn’t believe it was possible to ignore decades of human expertise and come up with a control system that would both function well and address all the people and process concerns related to running expensive, safety-critical equipment. Neither do I.

Multiple companies have recently tried to solve problems in healthcare using machine learning algorithms. Activities like reading radiology scans to check for cancer can be life-saving. However, some firms have exaggerated the capabilities of their algorithms and underemphasized the expertise of medical practitioners, with disappointing results. Perhaps more respect for the experts would lead to better technology and stronger results.

Here’s another example of data science colonialism. Remember the bulldozer AI that I told you about earlier? One of the subject matter experts, a PhD controls engineer named Francisco, thanked me during the AI design process. He had felt condescended to by others who consulted him about AI in the past. What?! Francisco was brilliant in mathematics and control theory—why would anyone condescend to him about AI? The best brain designers are curious and humble, and they resist the temptation to view data science, machine learning, or AI as superseding the value of subject matter expertise.

This is not to say that all data scientists believe their trade is a cure-all; there are data scientists who are curious and practice great empathy. The humility and curiosity to inquire and learn what people already know about making decisions will go a long way when designing autonomous AI.

Tip

Any AI brain that you design to make real decisions for real processes should address the changing world, the changing workforce, and pressing problems.

The Changing Workforce Demands Transferred Skills

When automated systems don’t perform well or make good decisions, factories and processes revert to human control. Humans step in to make high-value decisions for some processes only when the automated systems are making bad decisions, but humans retain complete control of other processes that no one has figured out how to automate well. However, experts are retiring at an alarming rate and taking with them decades of hard-earned knowledge about how to make industrial decisions. After talking to expert after expert and business after business, I realized that people look to AI for answers to their changing workforce because expertise is hard to acquire and equally hard to maintain. To make matters worse, even though expertise is relatively simple to teach, it takes a lot of practice to acquire.

Expertise Is Hard to Acquire

I visited a chemical company that makes plastic film for computer displays and other products on an extruder. An extruder takes raw material (soap, cornmeal for food, or in this case plastic pellets) and heats it in a metal tube with a turning screw inside. The screw forces the material out through a slit to make the plastic film. The film (it looks just like Saran Wrap) then gets stretched in both directions, cooled, and sometimes coated. The control room was filled with computer screens and keyboards for checking measurements and making real-time adjustments. Can you guess how long an operator trains before they can “call the shots” in the control room as a senior operator? Seven years! Many operators put themselves through university chemical engineering programs during this time. It takes a whole lot of practice turning the knobs on a process until you can control it well across different products, varying customer demand, types of plastic, types of coating, and machine wear. And after your experts get really, really good at controlling your process, it’s time for them to retire, and you need a way to pass their expertise on to others who are less experienced.

Expertise Is Hard to Maintain

Navasota, Texas, is a small town about two hours’ drive from Houston. I went there to help a company called NOV Inc. with their machine shop operations. We arrived in a pickup truck to a parking lot full of pickup trucks, and I still felt out of place: I was one of only two people I saw that day who weren’t wearing cowboy boots. Our executive sponsor was a forward-thinking executive named Ashe Menon who wanted to use AI as a training tool. Many are afraid that AI will take away people’s jobs, but he told me the opposite: “I want to be able to hire a 16-year-old high school dropout, sit a brain next to him, and have him succeed as a machine operator.” He wants to augment human machinists with autonomous AI, not replace them.

We sat down in a no-frills industrial conference room over strong coffee, and he introduced me to a machinist named David. I prefer discussing AI in plain language instead of research jargon, so I explained to this 35-year expert machinist that a new form of AI can learn by practicing and getting feedback, just as he has over all these years, and that we can even use his valuable expertise to teach the AI some of the things he already knows so that it gets better faster as it practices.

You see, when David and other expert machinists control the cutting machines (give them instructions about where to move and how fast to spin the cutter), the cutting jobs get done much more quickly and at higher quality than when the engineers use automated software to generate the instructions. David has practiced cutting many different kinds of parts on over 40 different machine makes and models. Some of the machines are new, and some of the machines in the shop are over 20 years old. These machines all behave quite differently while cutting metal, and David learned how to get the best out of each machine by operating it differently.

NOV and many other companies want to capture and codify the best expertise from their seasoned operators, upload this experience into an AI brain, and sit that brain next to less experienced operators to help them get up to speed more quickly and perform more proficiently. This requires interviewing experts to identify the skills and strategies that they practiced in order to succeed at a task. Then you can design an AI that will practice these same skills, get feedback, and likewise learn to succeed at the task.

An executive in the resources industry told me that their 20- and 30-year experts are retiring in large groups, and that it feels like their hard-won, valuable experience about how to best manage the business is walking out the door, never to return. Humans can learn how to control complex equipment that changes in really odd ways, but it takes a lot of practice time to build the nuances into our intuition. Most expert operators tell me that it took years or decades to learn to do their jobs well.

Tip

Designing autonomous AI allows you to package expertise into AI as neat units of skill that can be passed on to other humans, saved for later, combined in new and interesting ways, or used to control processes autonomously.

Expertise Is Simple to Teach, but Requires Practice

Whether you’re playing chess, learning a sport, or controlling a process in a factory, gaining skill requires a lot of practice to understand the nuances of what to do in many complex situations. Expertise is complex and situationally nuanced. Teachers guide this practice in a way that leads to more efficient skill acquisition: coaches and teachers introduce (describe) a skill, then ask students to practice it. They often have an opinionated sequence in which they want skills introduced and practiced. If the lesson plan is good, it accelerates learning, but even the best teaching plan does not take away from the situational improvisation and nuance that the learner develops while practicing (acquiring) the skills.

SCG Chemicals is part of a 100-year-old company that manufactures plastic. For one type of plastic, they invented the process, learned to run the reactors efficiently, and even researched advanced chemistry to simulate the process. The operators practiced controlling the reactors well for all the different plastic products they make and for the catalysts they use to make them. One of the first questions that I asked the experts was “How do you teach new operators this complex skill of controlling reactors?” The answer was concise and easy to understand: there are two primary strategies that they teach every boardman (operators of all genders at SCG are called boardmen).

  1. Add ingredients until the density reaches the target range. Ignore the melting point measurements for the process while you are using this first strategy.

  2. Then, when the density of the plastic is in target range, switch over to the second strategy. While using this strategy, ignore the density and add ingredients until the melting point for the plastic reaches product specification.

Because of the way the chemistry works, if you work the strategies in the prescribed order, both the density and the melting point will turn out right. They invented this process, but even they don’t have all the chemistry that explains why it works this way. It works every time, though, so SCG Chemicals teaches this strategy sequence to its operators.

Do you see how the skills are relatively straightforward for a competent teacher to outline, but how each skill still requires a whole lot of practice to build into your intuition? Each minute, the boardman needs to decide how much of each ingredient (called reagents in chemistry) to add. They will need to add less reagent when the density is close to the target and more reagent when the temperature is higher. How much more depends on which variant of plastic they’re making and which catalyst they’re using to drive the reaction.
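
As a sketch of how such a skill might be expressed in software, here is a toy version of the minute-by-minute dosing decision. The thresholds, gains, and recipe values below are invented for illustration; they are not SCG’s numbers.

    def choose_reagent_dose(density, melt_index, temp_c, recipe):
        # Strategy 1: drive density to its target range; ignore melting point.
        if density < recipe["density_target"]:
            gap = recipe["density_target"] - density
        # Strategy 2: once density is in range, drive the melting point
        # to product spec; ignore density.
        elif melt_index < recipe["melt_spec"]:
            gap = recipe["melt_spec"] - melt_index
        else:
            return 0.0  # both in spec: hold steady
        # Dose more when far from target and when running hot; the gain
        # encodes the plastic variant and catalyst (all values invented).
        return gap * recipe["gain"] * (1.0 + 0.01 * temp_c)

    recipe = {"density_target": 0.95, "melt_spec": 2.0, "gain": 5.0}
    print(choose_reagent_dose(density=0.90, melt_index=1.2, temp_c=80.0, recipe=recipe))

The if/elif ordering is exactly the taught sequence: the second strategy never runs until the first has finished its job.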

The supervising engineer, Pitak, writes customized recipes that boardmen can follow to successfully execute each of the strategies even if they haven’t had enough practice to master the skills yet. The boardman leans on these rigid recipes to help them succeed until they have practiced enough to internalize the nuances and variations.

Even though every boardman knows the two strategies and has a procedure for how to use them, it takes a lot of practice to adapt the strategies to changing process conditions. For example, a boardman might add reagents to the reactor in certain amounts while making one kind of plastic with one type of catalyst, but in slightly different amounts while making a different kind of plastic with a second type of catalyst. So Pitak updates the recipes as conditions change.

This is very similar to what happens while baking (baking is a complex chemical reaction after all). Your father might have taught you to mix the dough while adding the first set of ingredients until it feels sticky and smells like almonds. This is the first strategy. He might have also taught you that, next, you add a different set of ingredients and knead the dough until it’s firm. This is the second strategy. Your father taught you two strategies and how to sequence them, as illustrated in Figure I-6.

Figure I-6. Baking preparation process with two skills used in sequence.

The strategies are pretty easy to teach and understand but take practice to master. That’s what recipes are for. They tell you exactly how much of each ingredient to add during each step of the process and recommend how long to mix and how long to knead. The problem with recipes (for baking, manufacturing plastic, and many other tasks) is that they are rigid. An expert baker knows that if it’s hot and humid outside, you mix for a shorter period of time before you start kneading, the same way that Pitak knows that when it’s hotter and more humid outside, the boardman will need to add more reagents or more catalyst to the reactor. That’s why Pitak updates the recipes for the boardmen to follow as the temperature and humidity change over time. With a lot more practice, bakers and boardmen no longer need the recipes. They create their own recipes on the fly (bakers based on the feel and smell of the dough, boardmen based on the temperature and pressure in the reactor). This is why my mom never uses recipes when she cooks. She started decades ago with a recipe for each dish, but now she adjusts the ingredients to taste as she goes. When she first taught me how to make our family’s chili recipe, I followed the instructions to a T, but now I improvise while making chili just like she does!

Pressing Problems Demand Completely New Skills

Climate change is a pressing societal problem. Many companies have made pledges to take action to slow the effect of climate change. Is there a way that AI can help?

Well, less energy consumption means less need for energy from fossil fuels. Did you know that HVAC (heating, ventilation, and air conditioning) systems account for about 50% of energy usage in buildings? It turns out that this is an opportunity for AI to make a material difference on climate change. Many commercial HVAC systems, like those that cool and heat office buildings, rely on human engineers and operators to tune and control them.

Driving various rooms toward the right temperature while carefully managing energy usage is not as easy or intuitive as it appears. Managing energy consumption for a building or campus adds several layers of variability, such as the controls for cooling towers, water pumps, and chillers. This is further complicated by occupants entering and leaving the building throughout the day; there’s a pattern to it (imagine commute times and traffic conditions), but the patterns are complex to perceive. The price of energy also changes throughout the day: there are peak times when energy is most expensive and off-peak times when it is cheaper. You can recycle indoor air to save the cost of conditioning outside air, but legal standards dictate how much carbon dioxide is allowed in the building, which limits how much air you can recycle. Each layer of complexity makes it harder for a human to understand how each variable will affect the outcome of a control setting.
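
Here is a toy version of one slice of that decision: choosing how much outside air to bring in, trading conditioning cost against the legal CO2 limit. Every constant below is an illustrative assumption, not an engineering value.

    def choose_outside_air_fraction(occupants, outdoor_temp_c, setpoint_c,
                                    price_per_kwh, co2_limit_ppm=1000.0):
        # Try a few fresh-air fractions; keep the cheapest legal one.
        best_fraction, best_cost = None, float("inf")
        for fraction in (0.2, 0.4, 0.6, 0.8, 1.0):
            # Crude ventilation model: CO2 builds up as fresh air drops.
            co2_ppm = 420.0 + occupants * 8.0 / fraction
            if co2_ppm > co2_limit_ppm:
                continue  # would violate the indoor air quality standard
            # Conditioning energy grows with fresh-air volume and with
            # the gap between outdoor temperature and the setpoint.
            kwh = fraction * abs(outdoor_temp_c - setpoint_c) * 0.5
            cost = kwh * price_per_kwh
            if cost < best_cost:
                best_fraction, best_cost = fraction, cost
        return best_fraction

    # Crowded floor on a hot, peak-priced afternoon vs. a quiet,
    # mild, off-peak evening: the best setting changes with conditions.
    print(choose_outside_air_fraction(50, 34.0, 22.0, price_per_kwh=0.30))  # 0.8
    print(choose_outside_air_fraction(5, 20.0, 22.0, price_per_kwh=0.12))   # 0.2

Even this cartoon version shows why the decision is hard for humans: the best setting shifts whenever occupancy, weather, or energy prices shift, and a real building has dozens of interacting settings like this one.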

Microsoft built an autonomous AI to control the HVAC systems on its Redmond West Campus. The campus had automated systems, but those systems could not make supervisory decisions based on occupancy and outdoor temperature in real time. My team worked with mechanical engineers to design a brain to make those decisions, and the new system currently uses about 15% less energy. Two years earlier, Google had successfully tested an AI that reduced the energy its data centers use for cooling by 40%. If you’re wondering why the earlier AI generated more improvement, it’s because data centers are easier to control: they have less influence from outside factors. Commercial buildings have to deal with things like solar irradiation (most of the heat in commercial buildings comes from sun shining in the windows from different angles at different times) and large numbers of people constantly entering and exiting the building.

AI Is a Tool; Use It for Good

Every day I see people debating the ethics and perils of AI on social media. While I agree that ethics are important and that we as a society should be very careful about how we approach AI, the only way to ensure that AI gets used for good is to design and build AI that explicitly does useful, helpful things.

I just finished teaching my first Designing Autonomous AI cohort to underrepresented minority students in New York City with the Urban Arts Partnership. What an amazing experience working with such energetic and talented college students!

As a Black man who works in AI research, I feel the weight of unequal access to advanced technologies like AI every day. If the Fourth Industrial Revolution can endow those who lead it with superpowers, tremendous wealth, and expansive opportunity, then unequal access to AI presents something of a calcifying caste system: 4% of the workforce at Microsoft and at Facebook is Black; 2.5% of the workforce at Google is Black. Less than 20% of all AI professors are women, 18% of major research papers at AI conferences are written by women, and only 15% of Google AI research staff are women.2

Robert J. Shiller, 2013 Nobel laureate in economics, says it well:

You cannot wait until a house burns down to buy fire insurance on it. We cannot wait until there are massive dislocations in our society to prepare for the Fourth Industrial Revolution.

Almost everyone has limited access to advanced technology to some extent, but those who are marginalized in additional ways, such as due to their income, race, and ability, are multiplicatively less likely to experience the benefits of working with autonomous AI.

Starting with the principles and techniques in this book, I intend to further democratize access to decision-making autonomous AI and put it into the hands of the underrepresented and the underprivileged as a means for solving societal problems and economic advancement.

First, imagine an operator at the chemical company that I talked about above (the one with the plastic extruder) not just learning to control the extruder well, but designing and building AI that they will take with them into the control room to help them make decisions. Next, imagine a squad of chess players from inner-city East Oakland, California, all minorities, who learned how to play chess by playing with and against autonomous AI that they designed and taught. We have much work to do to fulfill this vision, but the progress is real, and I invite you to use your skills designing autonomous AI to do good in areas that you are passionate about.

1 Stephen Jay Gould, The Structure of Evolutionary Theory (Belknap Press, 2002).

2 A version of this paragraph was originally published in Cases and Stories of Transformative Action Research: Five Decades of Collaborative Action and Learning by John Bilorusky (Routledge).
