Once you’ve laid the foundations for a responsible AI framework by defining AI principles and applying data ethics to your training data, you’ll want to think about the components that make up that framework. One of these is fairness, the subject of this chapter.
In 2018, Amazon scrapped its AI recruiting tool after discovering it was biased against women. The tool had been trained on résumés submitted to the company over a ten-year period, most of which, unsurprisingly, came from male candidates—a reflection of ...