Building structures with robot swarms
The future of construction will be inspired by social insects.
Construction is one of the biggest and most notable production activities people engage in—a trillion-dollar industry in the U.S. alone—and yet, it’s one in which almost no automation is used. (Contrast that with manufacturing, which has been transformed completely by automation—and is, in fact, in the middle of another such transformation.) That lack of automation is beginning to change, with advances in robotics starting to make it possible for robots to take over many dangerous, dirty, and dull tasks in a variety of new domains.
As these technologies improve, they’ll help us build in settings where humans can’t easily build without great risk or expense—think about emergency structures in disaster scenarios like raising dikes to hold back floodwaters, underwater structures like tidal energy farms or research stations, or first-stage habitats on the Moon or Mars to await the later arrival of astronauts. One day, we’ll be able to simply direct robots to a location and a goal, and watch these structures take shape without our intervention.
Most of the work in robotics focuses on single, sophisticated robots. This is the way we’re used to thinking—humans are exceptionally capable, and if we’re creating artificial workers to handle the kinds of tasks we do, it’s natural to try to design them to follow the same model.
But while single, sophisticated workers are intuitively appealing, nature often takes another approach. Think about an ant colony. There may be millions of insects at work simultaneously, accomplishing a great deal in a short time. None has a critical role; if some wander off or are eaten by predators, the rest can continue and get the job done. There's no leader giving instructions who could become a single point of failure, or who would need to stop and make new plans if the size of the workforce changes; the group self-adjusts to changes in numbers or conditions. What if we could create artificial systems with these kinds of advantages of parallelism, robustness, and scalability? This is the promise of swarm robotics.
Social insects provide inspiration for construction as well. While termites in North America are best known for destroying buildings, species on other continents build—creating massive, complicated mounds that can exceed 40 feet in height. These structures are created by armies of millions of tiny, blind workers, less than a centimeter long, with no information about the state of the mound or the activities of the others beyond what they can sense directly themselves. These systems provide a proof of principle, demonstrating that large numbers of limited agents without central control can together build large-scale, complex structures.
Anthills built to order
Inspired by these termites, we asked: how could we harness this kind of power? How could we design, build, and program teams of independent robots that could create structures for us? The long-term goal from a scientific standpoint is to understand how the low-level actions of independent agents are connected to the high-level emergent outcomes of the collective; from an engineering standpoint, it's to one day be able to take a pile of robots, give them a supply of building material and a picture of what you want them to build, and walk away with a guarantee that they'll build exactly that, without needing to be involved in the details of how they do it.
The TERMES project at Harvard is one step toward these long-term goals. We set out to create a system that runs the gamut from theory to practice, with formal analytic guarantees that the collective outcome would match the desired result, and with a physical system demonstrating the feasibility in real life. The robots are independent, relying only on information they can perceive with their own few on-board sensors, without any centralized control or external information; a user can give the system a blueprint of a structure he or she wants it to build, and the robots will build exactly that structure, without producing some other result or getting stuck along the way.
Creating such a system poses a variety of challenges in theory as well as implementation. The same properties that make swarms potentially so powerful also make them potentially very hard to deal with.
- Robots are independent, with access only to local information; there needs to be a way of ensuring that separated robots won’t be working in ways that conflict with each other.
- The number of robots is typically unknown in advance and may change during a project, and the order and timing of their actions isn’t predictable; the algorithms they’re programmed with need to be able to handle this kind of variability.
- Simpler robots are more limited in what each can achieve on its own, while building and operating large numbers of robots brings challenges that don't occur with more traditional single-robot systems.
- Perhaps most fundamentally, this is a case of trying to design an emergent system. The hallmark of complex systems of independent agents is interesting collective behavior that emerges from their joint actions. We don’t have a way in general of predicting what the collective behavior will be, given a set of low-level agent rules; often the best we can do is to implement the rules in an experiment and see what happens. The inverse problem, of finding a set of low-level rules to produce a particular collective outcome, is in general even more challenging.
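The "forward problem" mentioned above, implementing a set of low-level rules and seeing what collective behavior results, can be sketched in a few lines. The rule below is purely hypothetical and much simpler than anything in TERMES: on a one-dimensional strip, an agent may deposit a block at an empty site only if an adjacent site is already occupied. Each agent uses only local information, yet a global property emerges.

```python
import random

WIDTH = 21

def may_deposit(occupied, i):
    # Hypothetical local rule: deposit at an empty site only if an
    # adjacent site already holds a block. The agent uses purely local
    # information; nothing tells it the global shape being formed.
    if occupied[i]:
        return False
    left = occupied[i - 1] if i > 0 else False
    right = occupied[i + 1] if i + 1 < WIDTH else False
    return left or right

def simulate(n_actions, seed=0):
    # Forward problem: run the rule many times and observe the outcome.
    rng = random.Random(seed)
    occupied = [False] * WIDTH
    occupied[WIDTH // 2] = True  # a single seed block
    for _ in range(n_actions):
        i = rng.randrange(WIDTH)  # some agent visits a random site
        if may_deposit(occupied, i):
            occupied[i] = True
    return occupied
```

Running this shows an emergent invariant: the occupied sites always form one contiguous segment around the seed, no matter how many agents act or in what order. The inverse problem is the hard direction: given instead a *target* shape, choosing local rules that are guaranteed to produce it.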
Our design philosophy in the TERMES project involved several points:
Simplicity: We tried to keep the hardware relatively simple. Each robot has only three motors and a handful of sensors (six infrared transceivers, four pushbuttons, four ultrasound emitter/receivers, and a tilt sensor).
Coordination via stigmergy and social convention: We decided not to have robots communicate directly. Instead, they coordinate their activities by manipulating and sensing their shared environment—an idea called stigmergy, traditionally inspired by termites. The presence or absence of building material provides a cue that robots can use to help them decide what to do, and as they add more material, that affects the future actions of others as well as themselves. The knowledge that all robots are following the same rules makes it possible for locally available information to provide implicit information about structure properties beyond sensing range.
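The essence of stigmergy is that the structure itself is the only communication channel. The toy sketch below (hypothetical, far simpler than the real TERMES controllers) makes that concrete: any number of interleaved "robots" fill in a blueprint, and a block already present acts as a cue left by some earlier visitor that says "this site is done," with no messages exchanged.

```python
import random

def act(env, blueprint, pos):
    # A robot senses only the site it is standing at. A block already
    # present is a stigmergic cue left by an earlier robot (or itself);
    # depositing one changes what every later visitor decides to do.
    if blueprint[pos] and not env[pos]:
        env[pos] = True   # deposit a block
        return True
    return False          # nothing to do here; move on

def build(blueprint, n_steps=10_000, seed=1):
    # Robots interleave in arbitrary order, with no direct messages,
    # yet the shared environment converges on the blueprint.
    rng = random.Random(seed)
    env = [False] * len(blueprint)
    for _ in range(n_steps):
        act(env, blueprint, rng.randrange(len(blueprint)))
        if env == blueprint:
            break
    return env
```

Because every robot follows the same rules, the local cue carries implicit global information: seeing a block at a site tells a robot not just "occupied," but "already handled by someone following the same program I am."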
Co-design: Designing all parts of the system together was critical for being able to demonstrate nontrivial results in the real world. Designing the algorithms and hardware together made it possible to ensure both that the algorithms relied on realistic assumptions and that the hardware provided necessary capabilities. Designing the robots and building material together made it possible for the former to take advantage of the latter in ways that made many things easier, as per the next point.
Mechanical intelligence: If hardware is designed in such a way that the physics of the world takes care of some things, then that’s less that needs to be handled explicitly by the controller. For instance, the building blocks the robots used incorporated triangular notches in their upper edges. These played multiple roles: (1) They provided a step helping robots climb higher. (2) They matched a projecting feature in the lower edge, helping building blocks self-align and freeing robots from the responsibility of precise manipulation. (3) They were angled in such a way that if a robot were drifting off course as it climbed, it would be physically pushed back on center.
Reliability via recovery: Rather than trying to prevent robots from ever making any errors, we instead took the approach of allowing small errors to occur, and giving robots enough sensory feedback to detect and recover from them. This approach was very successful in preventing larger errors (like falling off the structure) that could be much more difficult to recover from. In practice, a robot makes frequent small errors (slipping while climbing, taking several tries to pick up or put down a block, and so on) but still achieves success in the larger goal of autonomously building a structure nearly 20 times its own volume.
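The detect-and-retry pattern behind this design point can be sketched as a simple control loop. The failure probability and sensor here are stand-ins, not measurements from the real robots: an unreliable action is attempted, a (simulated) sensor reports whether it succeeded, and the robot simply retries small failures rather than trying to rule them out.

```python
import random

def try_grab(rng, p_success=0.7):
    # Stand-in for an unreliable physical action (a gripper that may
    # slip); the success rate here is an arbitrary illustration.
    return rng.random() < p_success

def grab_with_recovery(rng, max_tries=20):
    # Rather than preventing every small error, detect failures via
    # sensor feedback (e.g., a pushbutton confirming the block is
    # seated) and retry. Returns the number of attempts needed.
    for attempt in range(1, max_tries + 1):
        if try_grab(rng):
            return attempt
        # small error detected; back off and try again
    raise RuntimeError("persistent failure: escalate to higher-level recovery")
```

The design choice is that recovery from small, frequent errors is cheap, while the large errors it prevents (falling off the structure) may be unrecoverable; the loop bounds retries so a truly stuck robot can still escalate.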
Controlling a colony: Ensuring the collective outcome
Programming the swarm provided challenges (in addition to those outlined above) because of the limited sensor range coupled with the long-term consequences of adding building blocks. Careless building could lead to problems—it’s very easy for a robot to put blocks in places that interfere with later actions (building a cliff too high to climb or descend, for instance, or creating obstacles that keep robots from accessing other building sites), causing the building process to stall. Finding appropriate places to add blocks depends not only on what the robots are trying to build, and on what blocks are already present nearby, but also on where blocks have been added elsewhere in the workspace—and other robots can be active far beyond where one robot can perceive what they’re doing.
To address these challenges, we put restrictions on how robots are allowed to move and build, giving them two kinds of rules. One amounts to a set of traffic laws, specific to a particular structure the system is tasked with building—essentially a set of one-way signs, governing how robots are allowed to move through the workspace. The other is a set of safety checks, identical for any structure the system might work on, specifying conditions on the local configuration of building material that must be satisfied in order for adding a block to be permitted.
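To give a flavor of what a structure-independent safety check can look like, here is a hedged, hypothetical example in the spirit of those rules; the actual published TERMES conditions are different and more subtle. The idea is a condition on the local block configuration that prevents one specific failure mode named above: cliffs too tall to climb or descend.

```python
def safe_to_add(heights, r, c):
    # Hypothetical safety check (not the actual TERMES conditions):
    # permit adding a block at (r, c) only if the new column height
    # would differ by at most one block from each neighboring column,
    # so no robot ever faces a step it cannot climb or descend.
    new_height = heights[r][c] + 1
    rows, cols = len(heights), len(heights[0])
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            if abs(new_height - heights[nr][nc]) > 1:
                return False
    return True
```

A check like this needs only locally sensed heights, so it fits robots with no global information; the structure-specific traffic laws then handle the global concerns, such as keeping robots from sealing off routes to sites that still need blocks.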
Together, these rules make it possible to prove that an arbitrary number of independent robots limited to local information will correctly produce a requested structure.
Our robots are inspired by termites in a high-level way. They’re independent, and limited to local information they can learn from their own few sensors; they communicate indirectly, by manipulating a shared environment; they build structures much larger than themselves, climbing over what’s already been built to reach otherwise unreachable places. However, the details of what the robots do are very different from the details of what termites do. One reason for this is that they have very different goals. Termites aren’t trying to build a particular mound with a specific architecture, but rather any mound that works for the colony, that’s suited to the details of the place where it’s located. By contrast, human construction projects typically have the goal of producing a specific structure corresponding to a predesigned blueprint. For relevance to human construction goals, we wanted our robots to be able to produce a specific result matching a provided design.
However, some applications may not call for a blueprint in advance. For instance, an extraterrestrial exploration system may need to be able to build a bridge to cross a chasm, to reach unexplored terrain on the other side; the important thing is the function, supporting the crossing to the other side, not the details of how the bridge looks or where specific elements are placed. In ongoing work, we’re studying how robots could build such structures, adapting them to the settings they’re built in, and coordinating their activity by reference to the changing forces on a structure as it’s built. We’re also returning to the termite colonies that inspired us from the start, trying to unravel the details of how the insects behave and how that connects to the architectures of the mounds they build. There’s a tremendous amount yet to be learned about these insect societies, and we’ve only scratched the surface of what they have to teach us.