Chapter 8. Interpretable Models and Explainability Algorithms

Up to this point we have focused our coding efforts primarily on fairness as understood from the perspective of parity and antidiscrimination. Another type of fairness sees us all as potential victims of the threats posed by arbitrary, capricious, or opaque decision making. Protection against such experiences is established in most countries through guarantees of transparency, the rule of law, and due process. Due process, in turn, can be divided into procedural due process (the right to a good decision-making process) and substantive due process (the right to a reasonable decision).

In recent years we have seen that the same concerns voiced for centuries, even millennia, about human decision making in government are also relevant to machine learning. If computational models make decisions that affect us, whether important or relatively trivial, shouldn't we examine them to make sure those decisions make sense and are reached in a sensible way? This touches on a variety of values, all of which relate to a fundamental human need and expectation: the systems we design should make sense.

It also reflects security concerns, much like those that originally motivated the demands for transparency and due process in government. If an arbitrary decision can happen at all, it is a threat to anyone who is subjected ...
