Chapter 2. People: Humans in the Loop

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Pedro Domingos

Since the inception of AI, there has been a temptation to give AI and ML systems ever more agency. That should not be the goal for organizations deploying ML today; given the steady stream of AI incidents, we firmly believe the technology isn’t mature enough. Instead, the goal should be to keep humans in the loop of ML-based decision making. Human involvement is imperative because, as the quote above highlights, an all-too-common mistake is for firms to assume their responsible ML duties lie solely in technological implementation. This chapter presents many of the human considerations that companies must address when building out their ML infrastructure. We start with organizational culture, then shift the discussion to how practitioners and consumers can get more involved in the inner workings of ML systems. The chapter closes by highlighting recent examples of employee activism and data journalism related to the responsible practice of ML.

Responsible Machine Learning Culture

An organization’s culture is an essential aspect of responsible ML. This section discusses the cultural notions of accountability, dogfooding, effective challenge, and demographic and professional diversity. We’ll also discuss the arguably stale adage, “go fast ...
