Four short links: 15 June 2017

Positive Design Fiction, Gray Failure, OMGLOLWTF Blockchain, and AI Negotiations

By Nat Torkington
June 15, 2017
  1. Various Sci Fi Projects Allegedly Creating a Better Future (Bruce Sterling) — Sterling has written for many “imagine a better future” projects that run counter to what seems to be a world lurching toward dystopia. The “better future” thing is jam-tomorrow and jam-yesterday talk, so it tends to become the enemy of jam today. You’re better off reading history and realizing that public aspirations that do seem great, and that even meet with tremendous innovative success, can change the tenor of society and easily become curses a generation later. Not because they were ever bad ideas or bad things to aspire to or do, but because that’s the nature of historical causality. Tomorrow composts today. (via Cory Doctorow)
  2. Gray Failure (PDF) — component failures, whose manifestations are fairly subtle and thus defy quick and definitive detection. Examples of gray failure are severe performance degradation, random packet loss, flaky I/O, memory thrashing, capacity pressure, and non-fatal exceptions. […] Our first-hand experience with production cloud systems reveals that gray failure is behind most cloud incidents. (via Adrian Colyer)
  3. Daisy: A Private Blockchain Where Blocks Are SQLite Databases, in Go — as one Hacker News commenter described it: Everything about this feels like the most terrible idea ever, but in such a fascinating way. It’s beautiful.
  4. Facebook’s Negotiating AIs — The FAIR researchers’ key technical innovation in building such long-term planning dialog agents is an idea called dialog rollouts: build a tree of possible conversation paths, and pick the one that has the greatest chance of success by simulating all those possible conversations. There were cases where agents initially feigned interest in a valueless item, only to later “compromise” by conceding it—an effective negotiating tactic that people use regularly. This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals.
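The gray-failure pattern from the second link is easy to reproduce in miniature: a binary liveness probe reports a component healthy while tail latency tells a very different story. A minimal Python sketch, where the probe, the latency distribution, and the 10% slow-request rate are all invented for illustration:

```python
import random

random.seed(0)

# A "gray" component: the liveness probe always succeeds, but real
# requests intermittently see severe latency (simulated below).
def liveness_probe():
    return True  # binary up/down check: reports "healthy"

def request_latency_ms():
    # Roughly 10% of requests are pathologically slow -- a gray failure.
    return 5000 if random.random() < 0.10 else 20

latencies = sorted(request_latency_ms() for _ in range(1000))
p99 = latencies[int(0.99 * len(latencies))]

print("probe says healthy:", liveness_probe())  # True
print("p99 latency (ms):", p99)                 # high -- clients suffer
```

The up/down probe never notices the degradation; only a percentile view of what clients actually experience reveals it, which is why the paper argues these failures defy quick and definitive detection.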
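The dialog-rollouts idea in the last link can be sketched in a few lines: enumerate candidate proposals, simulate the other side’s response with a model, and keep the proposal whose simulated outcome scores best. This toy sketch is not FAIR’s implementation — it uses a hand-written opponent rule where the real agents learn one, and the items and valuations are made up:

```python
import itertools

ITEMS = {"book": 2, "hat": 1, "balls": 3}      # item -> quantity on the table
MY_VALUES = {"book": 1, "hat": 4, "balls": 1}  # our private valuation per item

def my_score(split):
    """Value to us of keeping `split` (item -> count we keep)."""
    return sum(MY_VALUES[i] * n for i, n in split.items())

def opponent_accepts(split):
    """Crude stand-in opponent model: accepts if it keeps at least half the items."""
    kept_by_them = sum(ITEMS[i] - n for i, n in split.items())
    return kept_by_them * 2 >= sum(ITEMS.values())

def best_proposal():
    """Roll out every possible proposal; keep the best one the opponent accepts."""
    best, best_value = None, -1
    counts = [range(q + 1) for q in ITEMS.values()]
    for combo in itertools.product(*counts):
        split = dict(zip(ITEMS, combo))
        if opponent_accepts(split) and my_score(split) > best_value:
            best, best_value = split, my_score(split)
    return best, best_value

proposal, value = best_proposal()
print("open with:", proposal, "worth", value, "to us")
```

Even this brute-force version exhibits the essential behavior: the agent concedes low-value items (here, a book and a ball) to secure the item it actually wants, because the rollout scores that outcome highest.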