4 ARTIFICIAL INTELLIGENCE, VALUE ALIGNMENT, AND THE CONTROL PROBLEM
4.1 Averting a Nuclear War
4.1 On September 26, 1983, a nuclear attack early‐warning system in the Soviet Union mistakenly indicated that a nuclear missile had been launched from the United States. The warning system indicated that the missile was heading toward the Soviet Union and that the first missile had been followed by five more. According to Soviet military protocol, it was the job of the duty officer of the command center to report any such indications from the satellite warning system to officers higher up the chain of command, who could then quickly prepare for war. The responsible lieutenant colonel of the air defense forces, Stanislav Petrov, decided to disobey his orders, however, because he deemed this to be a false alarm. If he had relayed the message generated by the missile detection system in accordance with protocol, this might have triggered a large‐scale retaliatory nuclear attack by the Soviet Union on the United States. A nuclear war might have broken out. Because he refused to convey the warning created by the faulty detection system, Stanislav Petrov is thought to have helped prevent a nuclear war, with all of the devastation it might have caused to large parts of the world.1
4.2 That was a real‐world example of how a technology almost caused an enormous problem: a nuclear war. Academic discussions about risks related to advanced technologies also contain ...