Chapter 6. Putting Safeguards Around AI

“It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.”

– Charles Darwin

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

– Isaac Asimov

THE LAST CHAPTER showed the potential dangers of AI run amok. In April 2023, Geoffrey Hinton, one of the architects of AI as we know it today, left Google so that he could warn about the dangers of AI more freely. Adding his voice to the conversation set off tremors in the debate about where AI was headed. He went so far as to say he regretted his life's work and that bad actors would use AI for bad things, comments that lent new weight to critics of the pace of AI development.1

The technorati have long been aware of AI's potential to remake our world in ways no one can fully anticipate. Meanwhile, the rest of us struggle to keep pace with changes arriving faster than we can absorb and digest them. The fear is that, instead of a nail being tapped gently into a piece of brittle wood, the sudden, hard changes being hammered into our society may break and split us. There are calls for AI innovation to slow down and to proceed more deliberately, with sober thought given to its development. While these calls express a growing sense of unease, they need to be backed up by laws and regulations that provide a framework for that more deliberate approach.

When AI development is unfettered, it could lead ...
