CHAPTER 6: Human in the Loop

In this chapter, we'll continue the discussion of how humans are critical to your AI projects, but through a different lens. It's no longer about how to form a team or deal with difficult personalities; it's about how humans serve as essential checkpoints within the AI process itself, regardless of which processes or tasks are being automated, augmented, or outsourced.

  • We will discuss some of the legal and regulatory work under way to ensure we incorporate AI in an ethical, well-governed way, along with the four layers of responsibility that go with it.
  • We will discuss the six tenets of responsible AI and the risks associated with AI, which largely stem from unintended consequences when good intentions go awry.
  • We will discuss a concept called “human in the loop” that you need to incorporate into your deployments.
  • I will share a personal story that occurred in December 2023 to cement the concept of “human in the loop.”

What Is “Human in the Loop”?

Human in the loop refers to a framework within AI deployments in which human judgment is incorporated into the AI decision-making process. It's a must-have in any deployment.
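To make the idea concrete, here is a minimal sketch of what a human-in-the-loop checkpoint can look like in practice. The names, the refund scenario, and the 90% confidence threshold are all hypothetical illustrations, not anything prescribed in this book: the point is simply that the AI's recommendation does not become an action until a person has had the chance to confirm or override it.

```python
from dataclasses import dataclass

# Hypothetical example values -- tune the threshold for your own use case.
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class Decision:
    label: str         # what the model recommends
    confidence: float  # how sure the model is (0.0 to 1.0)


def ai_recommend(customer_request: str) -> Decision:
    """Stand-in for a real model call; returns a canned recommendation."""
    return Decision(label="approve_refund", confidence=0.72)


def human_review(decision: Decision, customer_request: str) -> str:
    """A person confirms or overrides the AI's recommendation."""
    print(f"Review needed: '{customer_request}' -> {decision.label} "
          f"(confidence {decision.confidence:.0%})")
    answer = input("Accept the AI recommendation? [y/n]: ").strip().lower()
    return decision.label if answer == "y" else "escalate_to_manager"


def handle_request(customer_request: str) -> str:
    decision = ai_recommend(customer_request)
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: the AI acts, but the outcome should still be logged for audit.
        return decision.label
    # Low confidence: a human is the checkpoint before anything happens.
    return human_review(decision, customer_request)


if __name__ == "__main__":
    print(handle_request("I was double-charged for my subscription."))
```

In this sketch, the human checkpoint sits on the path where the model is least sure of itself; in higher-stakes deployments, you might route every decision through review regardless of confidence.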

This may sound boring, but it will be an entertaining chapter once you get to the story I share with you, because it's a doozy. But before we get into a story about “AI gone wrong” that involved me, I think it's only fair to give you a general understanding of what “human in the loop” means and its connection ...
