Chapter 9. How Are Your Morals? Ethics in Algorithms and IoT

Editor’s Note: At Strata + Hadoop World in Singapore, in December 2015, Majken Sander (Business Analyst at BusinessAnalyst.dk) and Joerg Blumtritt (CEO at Datarella) examined important questions about the transparency of algorithms, including our ability to change or affect the way an algorithm views us.

The code that turns things into smart things is not objective. Algorithms bear value judgments: decisions about methods, or about the presets of a program's parameters, reflect choices about how to handle a task according to social, cultural, or legal rules, or personal persuasion. In most contexts, these underlying value judgments imposed on users are invisible. How can we direct the moral choices in the algorithms that shape the way we live, work, and play?

As data scientists, we know that behind any software that processes data lies raw data that can be surfaced in one way or another. What we usually do not see are the hidden value judgments driving the decisions about which data to show and how to show it: judgments that someone made on our behalf.
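To make this concrete, here is a minimal, purely hypothetical sketch; the function, its presets, and the sample data are all invented for illustration. It shows how two unremarkable defaults in a reporting function quietly decide which data points count and which are discarded as noise.

```python
from statistics import mean

def summarize_response_times(samples, trim_fraction=0.05):
    """Report a "typical" response time from raw samples.

    Hypothetical example: both presets below are value judgments
    made on the user's behalf.
    """
    ordered = sorted(samples)
    # Value judgment 1: trim_fraction silently discards the slowest
    # share of samples as "outliers", hiding the worst experiences.
    keep = len(ordered) - int(len(ordered) * trim_fraction)
    trimmed = ordered[:keep]
    # Value judgment 2: the mean (rather than the median or a high
    # percentile) decides what counts as "typical".
    return mean(trimmed)

samples = [120, 125, 130, 135, 140, 900]  # milliseconds, invented data
print(summarize_response_times(samples, trim_fraction=0.0))  # ~258.3: the slow case counts
print(summarize_response_times(samples, trim_fraction=0.2))  # 130.0: the slow case vanished
```

A reader of the resulting report sees a single number, not the judgments behind it; whoever chose the trimming threshold and the averaging method decided, on the reader's behalf, which experiences matter.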

Here is an example of the kind of value judgment embedded in algorithms that we will be facing within months, rather than years: self-driving cars. Say you are in a self-driving car, and you are about to be in an accident. You have a choice: will you be hit head-on by a huge truck, or from the side? You would choose sideways, because ...
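The example is cut short here, but it already hints at how such a trade-off would have to be encoded in advance. Below is a minimal sketch, assuming the harm of each option can be scored numerically at all; the function name and every weight in it are invented, and each number is itself a moral choice.

```python
def choose_impact(options):
    """Return the collision option with the lowest harm score.

    Purely hypothetical sketch: the idea of ranking harms numerically,
    and every weight below, are value judgments that someone (the
    manufacturer? a regulator? the owner?) must fix ahead of time.
    """
    # Invented severity scores; none of these come from a real system.
    severity = {
        "head_on_truck": 0.9,  # who decided head-on is worse, and by how much?
        "side_impact": 0.4,
    }
    return min(options, key=lambda option: severity[option])

print(choose_impact(["head_on_truck", "side_impact"]))  # -> side_impact
```

Whoever writes that severity table is making exactly the moral choice this chapter asks about, whether or not the passenger ever sees it.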
