Chapter 7. From Theory to Practice
Real-world ML projects are rarely straightforward. You don’t always know exactly which fairness metric to implement or how robust model inference needs to be. Creating trustworthy ML systems almost always involves trading off technical considerations against human decisions such as budget constraints, striking a balance between trust and utility, and aligning stakeholders toward a common goal. As an ML expert and practitioner, you are well equipped to handle the technical aspects. When it comes to the human-in-the-loop decisions, you may not be the one making all of them (and perhaps you shouldn’t be). Still, it’s important to have at least a high-level understanding of the concepts involved in both the human and the technical decisions so that you can align trustworthy ML development with the broader organizational picture.
In this chapter, we’ll share tools for implementing the trustworthy ML methods discussed in earlier chapters in messy, production-grade systems. In Part I, we’ll review some additional technical factors you might need to address before pushing a model to production, such as causality, sparsity, and uncertainty. In Part II, we’ll turn to how to collaborate effectively with stakeholders beyond the development team.
Part I: Additional Technical Factors
There are some additional technical considerations you might need to think about while incorporating one or more of the trustworthy ML methods from earlier chapters into your production pipeline. In this part, we look at three of them: causality, sparsity, and uncertainty.
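To make one of these factors concrete before we dig in, here is a minimal sketch (our illustration, not code from the chapters ahead) of surfacing predictive uncertainty: train a small bootstrap ensemble and treat disagreement among its members as a signal to defer a prediction to human review. The synthetic dataset, model choice, and ensemble size below are assumptions made purely for illustration.

```python
# A minimal sketch, not from this book: estimating predictive uncertainty
# with a small bootstrap ensemble. The synthetic dataset, model choice, and
# ensemble size are all illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit several copies of the model, each on a different bootstrap resample.
rng = np.random.default_rng(0)
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X_train), size=len(X_train))  # sample with replacement
    ensemble.append(LogisticRegression(max_iter=1_000).fit(X_train[idx], y_train[idx]))

# Disagreement across the ensemble serves as a rough uncertainty signal:
# a high standard deviation suggests the prediction deserves human review.
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in ensemble])
mean_prob = probs.mean(axis=0)
uncertainty = probs.std(axis=0)

worst = uncertainty.argmax()
print(f"Most uncertain test example: #{worst}, "
      f"p(y=1) = {mean_prob[worst]:.2f} +/- {uncertainty[worst]:.2f}")
```

In practice, you would swap in your own model and calibrate the deferral threshold against your budget for human review, which is exactly the kind of trust-versus-utility trade-off discussed above.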