CHAPTER 27: Assessing + Achieving Trustworthiness
A recent Harvard Business Review article by Shalene Gupta, coauthor of The Power of Trust: How Companies Build It, Lose It, Regain It, recommends four core dimensions to assess the trustworthiness of a company's generative AI efforts: competence, motives, means, and impact.1
- Competence: This means identifying where generative AI should stop and humans should step in. By understanding the strengths and limitations of AI technology, we can make informed decisions about when to leverage its capabilities and when to rely on human expertise.
- Motives: As generative AI continues to advance, compliance standards remain fragmented and regulation uneven across jurisdictions. By embracing ethical practices and demonstrating a commitment to being a good actor in the AI space today, organizations can differentiate themselves while fostering trust among their key audiences.
- Means: What it means to be “fair” remains an open question. As we deliberate how to compensate creators of training data and how AI advances may shift job dynamics, one thing remains clear: trust will be a key factor in this transition. Employees need to trust that their skills are valued and that upskilling opportunities will be provided to help them adapt to changing job requirements.
- Impact: Problems are inevitable, and they can become increasingly challenging to anticipate as situations evolve. By implementing ...