Four short links: 5 December 2016

Self-Driving Open Source, Self-Programming Software, Genetic Engineering, and The Off-Switch Game

By Nat Torkington
December 5, 2016
  1. Comma.ai’s Open Pilot — open source autopilot software, plus hardware designs to retrofit supported cars. Released by comma.ai after U.S. regulators challenged its planned Comma One product. The 3D-printed-gun strategy of self-driving cars? See Washington Post for context. Hotz says it is an open source alternative to Tesla’s Autopilot, which is considered semi-autonomous. When a user switches it on, the car goes into autopilot mode: the driver can take their hands off the wheel and the gas pedal, and the car will stay in its lane and brake on its own. Currently, the software works only with certain Hondas and Acuras.
  2. REX: A Development Platform and Online Learning Approach for Runtime Emergent Software Systems — Using an emergent web server as a case study, we show how software can be autonomously self-assembled from discovered parts, and continually optimized over time (by using alternative parts) as it is subjected to different deployment conditions. Our system begins with no knowledge that it is specifically assembling a web server, nor with knowledge of the deployment conditions that may occur at runtime.
  3. Future of Genetic Engineering (YouTube) — George Church blows minds.
  4. The Off-Switch Game (PDF) — Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R’s off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H’s actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.
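The REX abstract's "continually optimized over time (by using alternative parts)" idea can be sketched as a simple online learning loop over candidate component assemblies. This is a toy epsilon-greedy bandit, not REX's actual algorithm or API; all names here are illustrative:

```python
import random

random.seed(0)  # reproducible for this sketch

class AssemblyOptimizer:
    """Toy online learner that picks among alternative component
    assemblies and converges on the one with the best observed
    reward (e.g. lower response time). Illustrative only."""

    def __init__(self, assemblies, epsilon=0.1):
        self.assemblies = list(assemblies)
        self.epsilon = epsilon                             # exploration rate
        self.counts = {a: 0 for a in self.assemblies}      # times each was tried
        self.values = {a: 0.0 for a in self.assemblies}    # running mean reward

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known assembly.
        if random.random() < self.epsilon:
            return random.choice(self.assemblies)
        return max(self.assemblies, key=lambda a: self.values[a])

    def record(self, assembly, reward):
        # Incremental mean update after observing one request's reward.
        self.counts[assembly] += 1
        n = self.counts[assembly]
        self.values[assembly] += (reward - self.values[assembly]) / n

# Usage: simulate two hypothetical web-server assemblies under one workload.
opt = AssemblyOptimizer(["cache+compress", "no-cache"], epsilon=0.2)
for _ in range(1000):
    a = opt.choose()
    # Pretend "cache+compress" performs better under this workload.
    reward = random.gauss(1.0 if a == "cache+compress" else 0.5, 0.1)
    opt.record(a, reward)
```

Under a different workload (a different reward signal), the same loop would drift toward whichever assembly now performs best, which is the "optimized as it is subjected to different deployment conditions" behavior the abstract describes.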
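The off-switch argument can be made concrete with a tiny numerical sketch (my own toy version, not the paper's actual formulation): R holds a belief distribution over the utility U of its proposed action. If R disables the switch, it acts no matter what and gets E[U]; if it keeps the switch and a rational H (who knows U) only lets it proceed when U > 0, R gets E[max(U, 0)], which is never worse:

```python
import random

random.seed(1)

# R is uncertain about the utility U of its proposed action.
# Model that uncertainty as samples from R's belief distribution.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# If R disables the off switch, it acts regardless: value is E[U].
disable_switch = sum(samples) / len(samples)

# If R keeps the switch and defers to a rational H who observes U,
# H lets the action proceed only when U > 0: value is E[max(U, 0)].
keep_switch = sum(max(u, 0.0) for u in samples) / len(samples)

print(f"disable: {disable_switch:+.3f}  keep: {keep_switch:+.3f}")
# Keeping the switch is (weakly) better whenever R is genuinely
# uncertain, since max(u, 0) >= u for every sample.
```

This also mirrors the paper's caveat: if R were certain about U (a degenerate belief), max(U, 0) and U coincide whenever U > 0, and the advantage of deferring to H disappears.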
Post topics: Four Short Links
Share: