Winner of the Top Innovator Award at AI NY 2018: temi

The personal robot temi refactors the robotic human behaviors we encounter in the “iPhone Slump,” moving them back to actual robots.

By Paco Nathan
May 24, 2018
The temi personal robot at the AI Conference in New York (source: O'Reilly)

When I was a child, I viewed as a child—viewed a lot of science fiction, that is. We were promised a future with amazing robots, and as a child that struck me as a completely awesome possibility. Fully embodied robots with which we could talk, reason, argue, and possibly even trade jokes. Robots sophisticated enough to understand emotion. How cool would that be? Rosie in The Jetsons. The Class B-9-M-3 General Utility Non-Theorizing Environmental Control Robot from Lost in Space. The Maschinenmensch from Metropolis. Bishop 341-B in Aliens. Replicants!

That was a long time ago. Along the way, we got some amazing science-fiction-ish tech marvels. For example, Steve Jobs’ “god phone,” which reeducated more than 2 billion people worldwide on how to communicate effectively, or something. I only met Steve once, and he’s been gone for several years now. Even so, I encounter his ghost every day: myriads of people slumped over, absorbed in swiping their smartphones, unknowingly mimicking Jobs’ edgy indifference to the world around them as they exercise their primary means of “communicating” with others. Yeah, I do it now, too.


What about the AI we’d been promised by futuristic stories? That raptured off to ephemeral clouds. Machine learning, disembodied. More closely resembling the “bodiless exultation of cyberspace” described in William Gibson’s novel Neuromancer. Heartless and sometimes horribly biased algorithms, carefully cordoned behind layers of firewalls. Secretly curated as “disruptive accelerators of synergies” by product managers hellbent on their drive toward GA. Digital innovation hubs of collaborative social multidisciplinary ecosystem working groups! Gobbledygook exhaust from a strange new species of corporation racing toward trillion-dollar valuations. Nothing even vaguely close to the cuddly likeability of Rosie the robot, zooming brightly on her caster wheels with antennae blaring.

That’s probably why I felt so captivated by the AI NY ’18 keynote “Autonomy and human-AI interaction,” by Professor Manuela Veloso at CMU. Her team has developed CoBots—short for Collaborative Robots—which are capable of seeing, planning, and moving. One catch: based on CMU’s concept of “symbiotic autonomy,” those robots need help from humans. Often. For example, any time a CoBot needs to move between floors in the building, someone must help call an elevator. Because, so far, CoBots lack arms. Sarah Connor can sleep soundly. From the CoBot researchers:

Our CoBot robots follow a novel symbiotic autonomy, in which the robots are aware of their perceptual, physical, and reasoning limitations and proactively ask for help from humans, for example for object manipulation actions.

Through the CoBots, Veloso’s team is researching how humans can interact with AI. CoBots understand their own limitations, expressing need and vulnerability—two words about human-like characteristics rarely overheard in Silicon Valley, outside of VC strategy meetings. How refreshing! CMU’s outcomes may put a ding in the universe.
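
To make “symbiotic autonomy” concrete, here is a minimal sketch of the idea, assuming a toy robot that keeps an explicit model of its own capabilities. It is an illustration of the concept only, not CMU’s actual CoBot software:

```kotlin
// A toy model of symbiotic autonomy: the robot knows which actions it can
// perform, and proactively asks a human for help with everything else.
// Purely illustrative; not CMU's CoBot codebase.

enum class Action { NAVIGATE, CALL_ELEVATOR, PICK_UP_OBJECT }

class CoBotSketch(private val capabilities: Set<Action>) {
    fun perform(action: Action) {
        if (action in capabilities) {
            println("Executing $action autonomously")
        } else {
            // The robot is aware of its physical limitations (no arms),
            // so it requests help instead of failing silently.
            println("I can't do $action myself. Could you help me, please?")
        }
    }
}

fun main() {
    val cobot = CoBotSketch(capabilities = setOf(Action.NAVIGATE))
    cobot.perform(Action.NAVIGATE)      // handled autonomously
    cobot.perform(Action.CALL_ELEVATOR) // no arms: ask a passerby for help
}
```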

Over in the expo hall of AI NY ’18—or rather, all about the expo hall—I encountered another handsomely affective embodiment: temi the personal robot. Winner of the Top Innovator Award at AI NY ’18. Judges from our program committee evaluated the AI start-ups participating in this award contest based on:

  • overall market potential
  • value proposition: disruptive potential in industry and society
  • stage of development and time to market

I watched carefully as Danny Isserles, head of temi HQ in NYC, summoned temi the robot. Most immediately, the head tracking stood out: temi tilted its “face” upward to track Danny. Practically speaking, temi adjusted its tablet screen so that Danny’s face would be centered in the video camera perspective—but that seemed exactly like temi was glancing up to look at Danny. “Because temi cares,” was my immediate impression. Rosie would’ve been proud.
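
Behavior like that head tracking is often just a small closed control loop: detect the face, measure how far it sits from the center of the camera frame, and tilt in proportion to the error. Here’s a toy sketch of that loop; the FaceDetector and TiltMotor interfaces are invented stand-ins, since temi’s internals aren’t public:

```kotlin
// A toy proportional controller for keeping a face centered in the frame.
// FaceDetector and TiltMotor are hypothetical interfaces, not temi's API.

data class Face(val centerY: Double)                // vertical position, pixels

interface FaceDetector { fun detect(): Face? }      // tracked face, if any
interface TiltMotor { fun nudge(degrees: Double) }  // small relative tilt

class FaceFollower(
    private val detector: FaceDetector,
    private val motor: TiltMotor,
    private val frameHeight: Double = 720.0,
    private val gain: Double = 0.05                 // degrees per pixel of error
) {
    // One control step: tilt so the face sits at the vertical frame center.
    fun step() {
        val face = detector.detect() ?: return      // nobody in view: hold still
        val error = frameHeight / 2 - face.centerY  // positive when face is high
        motor.nudge(gain * error)                   // proportional correction
    }
}
```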

From that point, Danny began putting temi through its paces: making a video call, transcribing a conversation, playing music, suggesting restaurants in a particular area, etc.

At first, some people might only notice the parts: roughly speaking, part Alexa, part iPad, part high-end boombox, all rolling atop a Roomba and wired together with some software. However, that misses the bigger picture: temi refactors those aforementioned robotic human behaviors we encounter in the “iPhone Slump,” moving them back to actual robots. When you get home from work, ditch your smartphone atop temi’s charging deck, then video chat with the people you’re close to who aren’t physically nearby while you do other things: fix dinner, play guitar, throw pottery, whatever. Hell, go play guitar with them. Because temi can follow you, it keeps you in the video chat with loved ones while you’re living your life, not stuck slumping over some smartphone. This is particularly engaging for families separated by distance. Kiddos can talk with their parents or grandparents far more naturally over video.

While the company behind temi spent nine years developing robots for the DoD (e.g., med-evac bots), the origin story for temi happened closer to home. The company’s founder was visiting an elderly relative who wanted to serve him tea: walking slowly, hands shaking, so focused on the serving tray and hot liquid that she nearly stumbled and fell. It turns out that older folks fall in their own homes most often while trying to carry things. Unfortunately, I had a parent in the ER recently for that very reason. Now we have a personal robot, temi, that can carry things, follow people, and do much more.

temi retails for $1,400, which seems remarkable given that it costs so much less than the laptop on which I’m typing. The designers of temi decided that, except for its tablet, they needed to build every component themselves, including 16 sensors spanning lidar, laser distance measurement, multiple cameras, and more. Their software is based on Android apps, with an SDK in the works for release soon so that people can customize temi, plug in other software services, and build extensions for fully embodied connections.
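
Since the SDK hadn’t shipped at the time of writing, any code here is speculative. Still, a hypothetical sketch suggests the kind of Android-style skill such an SDK could enable; every name below (PersonalRobot, eveningRoutine, and the method signatures) is invented for illustration and is not temi’s actual API:

```kotlin
// Hypothetical sketch only: these names are invented, since temi's SDK
// had not been released when this was written.

interface PersonalRobot {
    fun speak(utterance: String)           // text-to-speech on the robot
    fun follow(personName: String)         // keep a person framed in the camera
    fun startVideoCall(contact: String)    // hands-free video chat
}

// A small "skill" composed from such primitives, for the scenario above:
fun eveningRoutine(robot: PersonalRobot) {
    robot.follow("me")                     // temi trails you around the kitchen
    robot.startVideoCall("grandparents")   // family chat while you fix dinner
    robot.speak("Call connected.")
}
```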

My one chance encounter with Steve Jobs came when he was blocking the only path to the restroom at a Palo Alto coffeehouse we both favored. I asked politely, somewhat urgently. He glanced up from his smartphone distractedly, mumbled “Oh, sorry,” then moved aside. Pretty sure that temi would’ve moved graciously without even needing to be asked. And served my chai latte at exactly 165 degrees.

h/t @wu_ming_80, @randerzander, @FloWi
