13. When AI Goes Wrong
While we are all enthralled by how much easier our daily work tasks become when we integrate ChatGPT, we also need to think critically about the damage it can do to us as a society. It is extremely important to note the collateral damage even as we reap the benefits. This is a form of dialectical thinking worth cultivating: holding the harms in view alongside the gains.
In the next section, I'll highlight some cases from the past few years in which AI has caused collateral damage to society.
Times That AI Went Wrong
Bias
In Joy Buolamwini's book,16 she describes a profound realization when the facial recognition software she was working with failed to detect her face. The failure arose from the software's inability to recognize individuals with darker skin tones. Funnily enough, I encountered the reverse situation when I was at Alibaba in China, where turnstiles let you into the building by scanning your face. When I scanned my Asian face, the display would say, "Welcome Sharon." Caucasian colleagues, however, would scan in and be misidentified: the welcome message would show the name of a different Caucasian colleague at the company. AI is biased by its training data. Give it more varied examples of people from a particular group, and it will become more skilled at distinguishing members of that group. As AI consumers, we must be aware of this trait, and critical of the responses that ...