CHAPTER 12
Pumping the Brakes on AI: Regulatory Considerations
While many want fast action, it's hard to regulate a technology that's evolving as quickly as AI. The techno-utopians believe AI will usher in a new age of prosperity. The techno-pessimists believe it will be a damaging, destabilizing force.1 The latter view is increasingly driving calls for more regulation. Figures like Elon Musk have called for a voluntary halt to AI development in light of the risks. An open letter in 2023, signed by hundreds of well-regarded AI experts, tech entrepreneurs, and scientists, called for a temporary pause in the development and testing of AI technologies more powerful than GPT-4 so that the risks could be properly studied. The letter contended that developments are occurring faster than society and regulators can adequately manage.2 But that's not necessarily realistic. Yes, we need stronger rules of the road. Yes, we need both our public and private institutions to adopt and enforce them. Even so, most regulatory attempts may prove fruitless, because the technology may simply progress too quickly.
But guardrails, without going off the rails, will only become more urgently needed. The commonly touted narrative that technology is a force for public harmony may be one of Silicon Valley's greatest marketing tools. That narrative also obscures the threats of concentrating power and subjecting it to leaders' personal impulses. In the United States, Congress's inability to pass broad regulations on AI has only led ...