Apple Watch and the skin as interface
The success of Apple’s watch, and of wearables in general, may depend on brain plasticity.
Recently, to much fanfare, Apple launched a watch. Reviews were mixed. And the watch may thrive — after all, once upon a time, nobody knew they needed a tablet or an iPod. But at the same time, today’s tech consumer is markedly different from those at the dawn of the Web, and the watch faces a different market altogether.
One of the more positive reviews came from tech columnist Farhad Manjoo. In it, he argued that we’ll eventually give in to wearables for a variety of reasons.
“It was only on Day 4 that I began appreciating the ways in which the elegant $650 computer on my wrist was more than just another screen,” he wrote. “By notifying me of digital events as soon as they happened, and letting me act on them instantly, without having to fumble for my phone, the Watch became something like a natural extension of my body — a direct link, in a way that I’ve never felt before, from the digital world to my brain.”
On-body messaging and brain plasticity
Manjoo uses the term “on-body messaging” to describe the variety of specific vibrations the watch emits, and how quickly he came to accept them as second nature. The success of Apple’s watch, and of wearables in general, may be due to this brain plasticity.
For example, there’s a belt you can wear, ringed with pads, called the Sensebridge Northpaw. The north-facing pad on the belt vibrates, helping you to get your bearings. Quinn Norton wrote a post about the experience of trying one on, and users report never getting lost after a few days of wearing it. They also report disorientation when they remove it. Our brains are plastic, and they make surprisingly short work of turning the belt’s feedback into a new sense.
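As a sketch of the idea, here’s how a belt like the Northpaw might decide which pad to buzz; the function name, pad layout, and numbering are my own assumptions for illustration, not Sensebridge’s actual firmware:

```python
def north_pad(heading_deg: float, num_pads: int = 8) -> int:
    """Return the index of the pad that should vibrate.

    heading_deg: the wearer's compass heading in degrees (0 = facing north).
    Pads are numbered clockwise around the belt, starting at the
    wearer's front (pad 0). North sits at minus-heading relative to
    the wearer, so we invert the heading and snap to the nearest pad.
    """
    pad_width = 360 / num_pads
    # Angle of north relative to the wearer's front, normalized to [0, 360).
    relative = (-heading_deg) % 360
    return round(relative / pad_width) % num_pads

# Facing north, the front pad (0) buzzes; facing east (90 degrees),
# north is to the wearer's left, so pad 6 on an 8-pad belt buzzes.
```

The point of the exercise is how little information the belt actually sends — one active pad out of eight — and yet, after a few days, the brain fuses that trickle into something that feels like a sense of direction.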
Adapting to new information is what brains do best. Radically transformative plasticity happens when a brain input is altered drastically — motor cortex remapping when fingers fuse together, compensating for lost limbs, and so on. But our brains are adapting all the time, constantly on the verge of chaos.
Synthesizing the world around us
Much of what your brain does is synthesis — creating entirely new, synesthesia-like responses in the brain that don’t exist in the real world. Our brains process what they can, and the world we perceive is a construct. Our senses aren’t great; our brain makes them so, and in doing so, makes a lot of stuff up.
“Consider that even your cell phone camera has better resolution than [your eyes]. So, how can it be that you have such a rich and detailed perception of the world, when in fact your visual system’s resolution is equivalent to a cheap digital camera?” ask neuroscientists Stephen L. Macknik and Susana Martinez-Conde, the authors of Sleights of Mind. “The short answer is that the richness of your visual experience is an illusion created by the filling-in processes of your brain.”
Overloading our senses
Some senses, like sight and touch, can be augmented: we can have several wearables, each with its own patch of skin; we can have heads-up displays projected onto our retina.
Other senses aren’t as adept at input overload. In the Music/Data report I’ve been writing, one of the “Turing problems” of what several folks have named — and I’m going to start calling — Music Science is that we can’t quickly scan songs.
If you go to an art gallery, your eyes can saccade across many images to find the one you like; with music, it takes around five seconds to decide you hate a song, and 25 seconds to decide you like it, as Google’s Douglas Eck explained to me. And of course, you can’t listen to two songs at once (okay, you can, thanks to Girl Talk, but you get my meaning). Video has the same real-time bottleneck as audio. You can at least consume lectures at 1.5x speed without losing much fidelity of experience, but the same isn’t true of aesthetic works like songs or art films.
The Sensebridge Northpaw and the Apple Watch are good examples of augmenting perception, co-opting bundles of nerves to send new kinds of information to our brains. Frankly, I’m way more excited about Magic Leap’s retinal projection and DARPA’s cortical modem because skin patches are a scarce, messy resource.
From one-way to two-way communication
Belts, watches, and heads-up displays are all one-way inputs from the world into the human, akin to broadcast back in the day. The next next thing is going to be two-way interfaces, just as the interactive Web supplanted broadcast.
Here, the future is still murky: upright display interaction suffers from gorilla-arm problems, and — cyborgian portable keyboards and weird bone-conduction mics aside — it’s the human’s control of the world that’s the even bigger challenge.
Is it time for my implant?
There are 13 pairs of nerves (counting the recently discovered terminal nerve) going into my brain right now. If I don’t want to overload my optic nerve, clutter up patches of my skin, or make any of the other nerves do double duty, is it time for an implant?
The notion of adding new, fundamental senses is fraught with peril and ethics:
- From whom do I buy my implant, and where is it legal?
- What will it do to my brain once it’s installed?
- If I miss my medical payments, can I pay it off by watching ads?
Perhaps I should be wary of giving control of my nervous system to technology, and just be happy repurposing patches of my skin, upgrading my input bandwidth the way I once upgraded a modem. If so, then maybe that’s why the Apple Watch will catch on. But it’s just a baby step toward physical augmentation.
When we create not only new senses, but also new brain areas for motor control, we’ll become genuinely new beings. And that will redefine consciousness and completely alter the species.
Thanks to Mike Loukides, Courtney Nash, Meghan Athavale, Nat Torkington, Simon St. Laurent, Marc Hedlund, Andy Oram, Roger Magoulas, and Jon Bruner for feedback on this post. Many of the good ideas here are theirs; I take full responsibility for quoting them out of context or choosing the wrong bits.