On Computational Thinking, Inferential Thinking, and Data Science — That classical perspectives from these fields are not adequate to address emerging problems in data science is apparent from their sharply divergent nature at an elementary level—in computer science, the growth of the number of data points is a source of “complexity” that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of “simplicity” in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as “runtime” in core statistical theory, and the lack of a role for statistical concepts such as “risk” in core computational theory. Slides from a similar version of this talk are available.
Beyond the Rhetoric of Algorithmic Solutionism (danah boyd) — If you ever hear that implementing algorithmic decision-making tools to enable social services or other high-stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should. Points to Automating Inequality, a new book due out next week: Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. (via BoingBoing)
Worm Brain in a Lego Body — Researchers put a simulated worm brain into a Lego robot (possible because the C. elegans connectome is fully mapped). The robot has rough equivalents of the worm’s limited body parts—a sonar sensor that acts as a nose, and motors that replace the worm’s motor neurons on each side of its body. Amazingly, without any instructions being programmed into the robot, the virtual C. elegans brain controlled and moved it.
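The core idea — a fixed wiring diagram driving behavior with no task-specific program — can be sketched in a few lines. Everything below is illustrative: the neuron names, weights, and threshold are invented for the sketch, not the real C. elegans connectome (which has ~302 neurons and thousands of synapses).

```python
from collections import defaultdict

# Toy "connectome": (pre, post, weight) synapses, listed in feed-forward
# order so a single pass propagates activity from sensor to motors.
# These names and weights are hypothetical, not real worm wiring.
SYNAPSES = [
    ("NOSE", "INTER1", 1.0),      # sensory neuron -> interneuron
    ("INTER1", "MOTOR_L", 0.6),   # excitatory drive to the left motor
    ("INTER1", "MOTOR_R", -0.4),  # inhibitory link biases a turn away
]
THRESHOLD = 0.5  # a neuron must reach this activation to fire

def step(stimulus):
    """One propagation step: stimulate the sensor, let neurons above
    threshold fire along their synapses, and return the resulting
    drive to the left and right motors."""
    activation = defaultdict(float)
    activation["NOSE"] = stimulus
    for pre, post, weight in SYNAPSES:
        if activation[pre] >= THRESHOLD:
            activation[post] += weight * activation[pre]
    return activation["MOTOR_L"], activation["MOTOR_R"]
```

The point of the sketch is that `step` contains no behavioral logic at all — turning away from an obstacle emerges purely from the weights in the wiring table, which is the same trick the Lego robot plays at full scale.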