Chapter 7. Safe
Chief Data Scientist Juan always enjoyed the weekly all-hands meeting with his team. It was an opportunity to review their work, spitball ideas, and measure their progress against their goals. On the whole, BAM Inc.'s data science operation was first class. Yet, as the company moved rapidly into the AI era, Juan saw a growing need to build his team's capacity to investigate deeply how their AI models performed and, critically, to ensure they were worth using before they were deployed.
As a leader, Juan worked to bring MLOps to their AI processes, and at the weekly meeting he opened the floor for discussion of a newly acquired system that was already entering testing. The data scientists were thrilled to report the consistently high throughput a machine could achieve when run by the new system. It was just what engineering had requested, and on the face of it, they had achieved their goal.
“Is speed the only goal?” Juan asked.
“That's what this system is for,” his deputy confirmed.
“Has the utility function been optimized for anything else?” he asked knowingly.
“Why would it be?”
Juan had seen this misstep before. With so much attention placed on the technical capabilities of the systems they built, deployed, and maintained, the human factor received short shrift. Their new system needed more testing.
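Juan's question points at a general pattern: an objective that scores only one metric will be maximized at the expense of everything it leaves out. A minimal sketch of the idea in Python follows; every name, metric, and weight here is a hypothetical illustration, not BAM Inc.'s actual system:

```python
def speed_only_utility(throughput: float) -> float:
    """Rewards raw machine throughput and nothing else."""
    return throughput

def multi_objective_utility(
    throughput: float,
    incident_risk: float,   # estimated probability of a safety incident
    operator_load: float,   # human workload, normalized to [0, 1]
    risk_weight: float = 100.0,
    load_weight: float = 10.0,
) -> float:
    """Trades throughput off against human-centered safety factors."""
    return throughput - risk_weight * incident_risk - load_weight * operator_load

# An optimizer maximizing speed_only_utility will accept any level of
# incident risk; the multi-objective version makes the trade-off explicit.
print(speed_only_utility(120.0))                 # 120.0
print(multi_objective_utility(120.0, 0.2, 0.5))  # 120 - 20 - 5 = 95.0
```

The weights encode a judgment call about how much throughput the team is willing to give up for safety, which is exactly the kind of human decision the next passage argues cannot be left to the model.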
AI systems are complex mathematical operations that can only do what designers and operators permit them to do. AI safety is a human charge. ...