Chapter 4. Adoption Challenges for Next-Generation Analytics

The new world of AI applications promises to make AI available across the organization and for every function, from frontline employees to executives. Before jumping into this AI-powered revolution, however, organizations should consider a few critical issues as they invest. In addition to widely discussed staffing and technology issues, AI presents several less well-known challenges, and these are the focus of this chapter.

Ineffective Data and AI Principles

According to AlgorithmWatch, dozens of organizations, including governments, have published data and AI usage principles.1 These principles attempt to set broad guidelines for the use of data and AI within an organization, and they also signal to the organization’s peers, competitors, employees, and customers that it is considering certain data and AI risks. Unfortunately, AlgorithmWatch also recently published a report stating that only 10 of 160 reviewed sets of principles were enforceable.2 So, if your organization does create data and AI principles, keep in mind that the major pitfall to avoid is ineffectiveness. Incorporating a technological perspective, along with ethical, legal, oversight, and leadership perspectives, into organizational data and AI principles is perhaps the best way to avoid this issue.

Lax Security Practices

In the hype around machine learning and data science, and in the rush to build end-to-end data and AI products (the kind of technology that reaches into data centers, out to the public, and back into data centers), data scientists can be intentionally or accidentally given too many privileges in an IT system. This is both an ethics and a security problem. If the same person can manipulate a database, train a predictive model, and deploy it, they can make that model do whatever they want, and in very subtle ways. (Maybe it’s to give their girlfriend’s mother a giant loan, or to deny loans to people in a politically or socially discriminatory way.) These kinds of insider attacks against AI can cost your organization money, and they are another avenue by which discrimination can enter AI systems. AI practitioners must also recognize more standard data privacy concerns to ensure solid security. Hence, AI systems and the teams working on them should operate under the same, if not stronger, security constraints as the rest of the organization’s systems and staff.
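
To make this separation-of-duties concern concrete, the following is a minimal sketch of the kind of check a deployment pipeline could enforce. Everything here (ModelRecord, approve_deployment, the “alice” and “bob” identities) is a hypothetical illustration rather than a real library API; in practice, organizations typically enforce such constraints through identity and access management or CI/CD tooling.

    from dataclasses import dataclass

    @dataclass
    class ModelRecord:
        model_id: str
        trained_by: str        # identity that built the model
        data_prepared_by: str  # identity that curated the training data
        approved_by: str = ""  # identity that signed off on deployment

    def approve_deployment(record: ModelRecord, reviewer: str) -> ModelRecord:
        """Reject approvals that let one person control the whole pipeline."""
        if reviewer in (record.trained_by, record.data_prepared_by):
            raise PermissionError(
                f"{reviewer} worked on model {record.model_id} upstream "
                "and cannot also approve it for deployment."
            )
        record.approved_by = reviewer
        return record

    record = ModelRecord("credit-scoring-v3", trained_by="alice",
                         data_prepared_by="alice")
    approve_deployment(record, reviewer="bob")  # OK: independent reviewer
    try:
        approve_deployment(record, reviewer="alice")
    except PermissionError as err:
        print(err)  # alice built the model, so she cannot approve it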

Inadequate Human Review

Related to the practice of model risk management, the concept of effective challenge is used to improve AI implementations at large financial services organizations in the US. One interpretation of effective challenge is that, when building AI systems, one of the best ways to ensure good results is to actively challenge and review each step of the development process. Of course, a culture of effective challenge must apply to everyone developing an AI system, even so-called “rock-star” engineers and data scientists. For instance, the Federal Reserve’s well-known SR 11-7 guidance on model risk management makes no exceptions for rock-star data scientists, and there’s probably a good reason for that: rigorous human review of AI systems is one of the best-known methods for mitigating the risks associated with AI. One easy way to start building a culture of effective challenge is to hold mandatory weekly meetings where alternative design and implementation choices for AI systems are put forward, questioned, and discussed.

Downplaying Traditional Domain Expertise

Real-world success in AI often requires input from humans with a deep understanding of the problem domain. Such experts can help with feature selection and engineering and with interpreting AI system outputs, but they can also serve as a basic sanity check on a system’s validity and usefulness. For instance, if you’re developing a medical ML system, you should consult with physicians and other medical professionals. How will generalist data scientists understand the subtlety and complexity inherent in medical data, or in the results of AI systems trained on such data? They might not, and that can lead to AI incidents once the system is deployed. The social sciences deserve a special callout in this regard as well. In a trend sometimes called “tech’s quiet colonization of the social sciences,” technology companies are pursuing AI projects that either replace decisions trained social scientists should make or rely on practices, such as facial recognition for criminal risk assessments, that social scientists have sharply criticized.3

AI Security and Privacy

Like nearly every other powerful commercial technology, AI systems are subject to failures and attacks. These include the kinds of hacks that plague other public-facing IT systems, such as denial-of-service attacks that block services with massive amounts of incoming web traffic, or man-in-the-middle attacks that insert an adversary between an AI service and its consumers. They also include specialized concerns regarding the privacy and security of training and output data, and even highly specialized attacks on the underlying machine learning algorithms.

In terms of data privacy and security, there are traditional concerns about the confidentiality, integrity, and availability of input training data, intermediate data generated by an AI system, and the system’s output data, but there are also increasing legal and regulatory obligations around data privacy. These can include everything from the legal basis for data collection to anonymization requirements, data retention limitations, and alignment with organizational privacy policies. Moreover, security and privacy breaches can trigger breach reporting requirements. Because AI is so hungry for data, all of these can indirectly, or even directly, impact an organization’s use of AI.

For specialized attacks against machine learning algorithms that underpin most of today’s AI systems, organizations should have several known attack vectors on their radar, including:

  • Insider manipulation of training data (i.e., “data poisoning”; illustrated in the sketch after this list)

  • Manipulation of model outcomes by external adversaries (e.g., via carefully crafted adversarial inputs)

  • The theft of intellectual property, like models and data, by external adversaries

  • Trojan horse code or other manipulations buried in complex machine learning software and related artifacts, such as model weights that give a favorable outcome only under certain conditions known to the attacker
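
To make the first vector concrete, the following sketch simulates label-flipping data poisoning on a synthetic dataset. It assumes scikit-learn is installed; the dataset, the logistic regression model, and the targeted “feature 0” slice are illustrative stand-ins, not a reconstruction of any real incident.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for, say, a loan-approval dataset.
    X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # An insider force-approves every training row where feature 0 is high,
    # nudging the model toward favorable outcomes for that slice only.
    poisoned = y_tr.copy()
    poisoned[X_tr[:, 0] > 1.0] = 1
    dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

    # Overall accuracy often shifts only slightly, which is part of
    # what makes poisoning subtle...
    print("overall:        clean %.3f, poisoned %.3f"
          % (clean.score(X_te, y_te), dirty.score(X_te, y_te)))

    # ...but on the targeted slice, the poisoned model behaves differently.
    sl = X_te[:, 0] > 1.0
    print("targeted slice: clean %.3f, poisoned %.3f"
          % (clean.score(X_te[sl], y_te[sl]), dirty.score(X_te[sl], y_te[sl])))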

While basic security practices are an effective shield against some attacks on machine learning, it’s important to address these attack vectors explicitly in updated model risk management and information security policies. Organizations can also leverage security audits, bug bounties, and red-teaming to understand their vulnerabilities and to fortify their defensive measures.
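
As one example of what a red-team exercise might probe, the sketch below imitates the intellectual property theft vector from the list above: an outsider who can only query a prediction endpoint trains a surrogate model that mimics it. It assumes scikit-learn, and the in-process “victim” model is a stand-in for a real deployed API.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    # The "victim" model stands in for a deployed prediction API.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
    victim = RandomForestClassifier(random_state=1).fit(X, y)

    # The attacker sees only predictions for inputs they choose to send.
    rng = np.random.default_rng(1)
    probes = rng.normal(size=(5000, 10))
    surrogate = DecisionTreeClassifier(random_state=1).fit(
        probes, victim.predict(probes))

    # Fidelity on fresh inputs: how closely the stolen copy mimics the victim.
    holdout = rng.normal(size=(2000, 10))
    agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
    print(f"surrogate matches victim on {agreement:.0%} of held-out probes")

If an exercise like this shows that a surrogate can closely mimic a production model from a modest number of queries, countermeasures such as authentication, rate limiting, and monitoring for high-volume or oddly distributed query patterns become easier to justify.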

The future of analytics in the enterprise is bright, but before you begin or scale up your journey to build AI applications, there are clearly some organizational and security issues to consider and mitigate.

1 See AlgorithmWatch’s “AI Ethics Guidelines Global Inventory”.

2 See AlgorithmWatch’s “In the Realm of Paper Tigers: Exploring the Failings of AI Ethics Guidelines” by Leonard Haas and Sebastian Gießler, with additional research by Veronika Thiel, April 28, 2020.

3 See, for example, “To Really ‘Disrupt,’ Tech Needs to Listen to Actual Researchers”, Wired, June 26, 2019; Rumman Chowdhury’s post on Twitter; “AI Researchers Say Scientific Publishers Help Perpetuate Racist Algorithms”, MIT Technology Review, June 23, 2020.
