Brad Knox on creating a strong illusion of life

The O'Reilly Radar Podcast: Imbuing robots with magic, eschewing deception in AI, and problematic assumptions of human-taught reinforcement learning.

By Jenn Webb
December 15, 2016
Reflection (source: Pixabay)

In this episode, I sit down with Brad Knox, founder and CEO of Emoters, a startup building a product called bots_alive—animal-like robots that have a strong illusion of life. We chat about the approach the company is taking, why robots or agents that pass themselves off as human without any transparency should be illegal, and some challenges and applications of reinforcement learning and interactive machine learning.

Here are some highlights from our conversation:



Creating a strong illusion of life

I’ve been working on a startup company, Emoters. We’re releasing a product called bots_alive, hopefully in January, through Kickstarter. Our big vision there is to create simple, animal-like robots that have a strong illusion of life. This immediate product is going to be a really nice first step in that direction. … If we can create something that feels natural, that feels like having a simple pet—maybe not, for a while, anything like a dog or cat, but something like an iguana or a hamster—where you can observe it and interact with it, I think that would be really valuable to people.

The way we’re creating that is going back to research I did when I was at MIT with Cynthia Breazeal and a master’s student, Sam Spaulding—machine learning from demonstration on human-improvised puppetry. Our hypothesis for this product is that if you create an artificially intelligent character using current methods, you sit back and think, ‘Well, in this situation, the character should do this.’ For example, a traditional AI character designer might write the rule for an animal-like robot that if a person moves his or her hand quickly, the robot should be scared and run away.

That results in some fairly interesting characters, but our hypothesis is that we’ll get much more authentic behaviors, something that really feels real, if we first allow a person to control the character through a lot of interactions. Then, take the records and the logs of those interactions, and learn a model of the person. As long as that model has good fidelity—it doesn’t have to be perfect, but it captures the puppeteer with pretty good fidelity—and the puppeteer is actually creating something that would be fun to observe or interact with, then we’re in a really good position. … It’s hard to sit back and write down on paper why humans do the things we do, but what we do in various contexts is going to be in the data. Hopefully, we’ll be able to learn that from human demonstration and really imbue these robots with some magic.
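
To make the idea concrete, here is a minimal, hypothetical sketch of learning a character "policy" from puppeteer logs. The feature names, log format, and choice of classifier are illustrative assumptions, not the actual bots_alive pipeline.

```python
# Sketch: learn a character policy from logged puppeteer demonstrations.
# Features, log format, and classifier are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each logged frame: sensed state features and the action the puppeteer chose.
# Hypothetical features: [distance_to_hand_cm, hand_speed_cm_s, robot_speed_cm_s]
ACTIONS = ["idle", "approach", "retreat"]

demo_states = np.array([
    [80.0,  2.0, 0.0],   # hand far, moving slowly   -> puppeteer idled
    [40.0,  3.0, 1.0],   # hand nearer, slow         -> puppeteer approached
    [25.0, 45.0, 2.0],   # hand close, moving fast   -> puppeteer retreated
    [60.0, 50.0, 0.5],   # hand far, moving fast     -> puppeteer retreated
    [30.0,  4.0, 1.5],   # hand close, slow          -> puppeteer approached
])
demo_actions = np.array([0, 1, 2, 2, 1])  # indices into ACTIONS

# Fit a model of the puppeteer: state -> action. It only needs decent fidelity,
# not perfection, to reproduce the feel of the improvised behavior.
policy = RandomForestClassifier(n_estimators=50, random_state=0)
policy.fit(demo_states, demo_actions)

# At run time, the robot senses its state and acts as the puppeteer likely would.
current_state = np.array([[35.0, 40.0, 1.0]])
print(ACTIONS[policy.predict(current_state)[0]])
```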

A better model for tugging at emotions

The reason I wrote that tweet [Should a robot or agent that widely passes for human be illegal? I think so.] is that if a robot or an agent—you could think of an agent as anything that senses the state of its environment, whether it’s a robot or something like a chatbot, just something you’re interacting with—if it can pass as human and it doesn’t give some signal or flag that says, ‘Hey, even if I appear human, I’m not actually human,’ that really opens the door to deception and manipulation. For people who are familiar with the Turing Test—which is by far the most well-known test for successful artificial intelligence—the issue I have with it is that, ultimately, it is about deceiving people, about them not being able to tell the difference between an artificially intelligent entity and a human.

For me, one real issue is that, as much as I’m generally a believer in capitalism, I think there’s room for abuse by commercial companies. For instance, it’s hard enough when you’re walking down the street and a person tries to get your attention to buy something or donate to some cause. Part of that is because it’s a person and you don’t want to be rude. When we create a large number—eventually, inexpensive fleets—of human-like or pass-for-human robots that can also pull on your emotions in a way that helps some company, I think the negative side is realized at that point.

How is that not a contradiction [to our company’s mission to create a strong illusion of life]? The way I see illusion of life (and the way we’re doing it at bots_alive) is very comparable to cartoons or animation in general. When you watch a cartoon, you know that it’s fake. You know that it’s a rendering, or a drawing, or a series of drawings with some voice-over. Nonetheless, if you’re like most people, you feel and experience these characters in the cartoon or the animation. … I think that’s a better model, where we know it’s not real but we can still feel that it’s real to the extent that we want to. Then, we have a way of turning it off and we’re not completely emotionally beholden to these entities.

Problematic assumptions of human-taught reinforcement learning

I was interested in the idea of human training of robots in an animal training way. Connecting that to reinforcement learning, the research question we posed was: instead of the reward function being coded by an expert in reinforcement learning, what happens if we instead give buttons or some interface to a person who knows nothing about computer science, nothing about AI, nothing about machine learning, and that person gives the reward and punishment signals to an agent or a robot? Then, what algorithmic changes do we need to make the system learn what the human is teaching the agent to do?
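
As a toy illustration of the interface idea, the sketch below has a person at the keyboard supply the reward and punishment signals in place of a coded reward function. The key bindings, update rule, and tiny ring-world environment are all assumptions for illustration, not the specific algorithm studied in the paper.

```python
# Sketch: a human, not a coded reward function, supplies the reward signal.
# Key bindings, update rule, and toy environment are illustrative assumptions.
import random
from collections import defaultdict

actions = ["left", "right"]
H = defaultdict(float)   # estimate of the human's feedback for each (state, action)
alpha = 0.2              # learning rate
state = 0

for step in range(20):
    # Pick the action the human has rewarded most in this state (random tiebreak).
    action = max(actions, key=lambda a: (H[(state, a)], random.random()))
    print(f"state={state}, agent does: {action}")

    # Instead of computing reward(state, action), ask the person at the keyboard.
    key = input("press + to reward, - to punish, Enter for no feedback: ").strip()
    reward = {"+": 1.0, "-": -1.0}.get(key, 0.0)

    # Nudge the estimate of human feedback toward the signal just received.
    H[(state, action)] += alpha * (reward - H[(state, action)])

    # Move around a 5-state ring world.
    state = (state + (1 if action == "right" else -1)) % 5
```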

If it had turned out that the people in the study had not violated any of the assumptions of reinforcement learning when we actually did the experiments, I think it wouldn’t have ended up being an interesting direction of research. But this paper dives into the ways that people did violate, deeply violate, the assumptions of reinforcement learning.

One emphasis of the paper is that people tend to have a bias toward giving positive rewards. A large percentage of the trainers in our experiments gave more positive rewards than punishment—or, in reinforcement learning terms, ‘negative rewards.’

The way reinforcement learning is set up, a lot of tasks are what we call ‘episodic’—roughly, that means that when the task is completed, the agent can’t get further reward. Its life is essentially over, but not in a negative way.

When we had people sit down and give reward and punishment signals to an agent trying to get out of a maze, they would give a positive reward for getting closer to the goal, but then this agent would learn, correctly (at least by the assumptions of reinforcement learning), that if it got to the goal, (1) it would get no further reward, and (2) if it stayed in the world that it’s in, it would get a net positive reward. The weird consequence is that the agent learns that it should never go to the goal, even though that’s exactly what these rewards are supposed to be teaching it.
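
A back-of-the-envelope calculation shows why this follows from standard discounted-return reasoning. The reward value and discount factor below are illustrative assumptions, not numbers from the study.

```python
# Sketch of the episodic trap: with mostly-positive human rewards,
# never finishing the task can be worth more than finishing it.
gamma = 0.95     # discount factor (illustrative)
r_step = 1.0     # humans tend to reward progress positively (illustrative)

# Option A: head straight to the goal in 3 steps; the episode then ends,
# so no further reward is possible.
return_finish = sum(r_step * gamma**t for t in range(3))

# Option B: wander near the goal forever, collecting small positive rewards.
# Discounted geometric series: r_step / (1 - gamma).
return_loiter = r_step / (1 - gamma)

print(f"finish the maze: {return_finish:.2f}")   # ~2.85
print(f"never finish:    {return_loiter:.2f}")   # 20.00 -> avoiding the goal wins
```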

In this paper, we discussed that problem and showed the empirical evidence for it. Basically, the assumptions that reinforcement learning typically makes are really problematic when you’re letting a human give the reward.
