Chapter 6. AI Is All About Trust
What does it mean to build trustworthy AI solutions and to operate them reliably and legally? Although the industry has invested substantially in AI governance over the past few years, many companies still lack visibility into the risks their AI models pose, and lack control over how to mitigate those risks. This is a serious problem, given the increasingly critical role AI models play in supporting daily decision making, the ramp-up of regulatory frameworks, and the weight of reputational, operational, and financial damage companies face when AI systems malfunction, expose personal data, or encode biases.
By adopting MLOps practices in your company, however, you are taking a step in the right direction. MLOps practices build comprehensive risk mitigation into the AI application life cycle: automated and continuous testing reduces manual errors, while well-documented, reusable components lower the probability of errors and make component updates easier. For example, companies using MLOps practices are starting to document, validate, and audit deployed models to understand how many models are in use, how those models were built, what data they depend on, how personal data is protected, and how the models are governed. This provides risk management teams with an auditable trail to show ...