Chapter 5. Engineering Cooperation

Promising to work with peers is what we call cooperation. Promise Theory exposes cooperation as an exercise in information sharing. If we assess other agents’ promises to be trustworthy, we can rely on them and use them as a basis for our own behaviour. This is true for humans, and it is true for technological proxies.

Cooperation is about coordinating the behaviours of different autonomous agents, using promises to signal intent. We see the world as more than a collection of parts, and dare to design and construct larger, intentional systems that behave in beneficial ways.

Engineering Autonomous Agents

There are many studies of cooperation in the scientific literature, from differing viewpoints. To make a claim of engineering, we propose to think in terms of tools and a reasonably precise language of technical meanings. This helps minimize uncertainty when interpreting agents’ promises and behaviours. Cooperation involves a few themes:

  • Promises and intentions (the causes)

  • Agreement

  • Cooperation

  • Equilibration

  • Behaviour and state (the outcomes)

  • Emergence

In this chapter, I want to paint a picture of cooperation as a process just like any other described in natural science, by atomizing agency into abstract agents, and then binding them back together, in a documentable way, through the promises they advertise. In this way, we develop a chemistry of intent.
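As a minimal sketch of this atomize-and-bind view, the following toy model (illustrative only; the class and function names are my own, not from the book) represents autonomous agents that may only promise their own behaviour. Following the usual Promise Theory convention, a (+) promise offers something and a (−) promise accepts or uses it; a cooperative binding forms only where an offer meets a matching acceptance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    promiser: str   # the agent making the promise (about its own behaviour only)
    promisee: str   # the agent the promise is directed at
    body: str       # what is promised, e.g. "serve-http"
    polarity: str   # "+" to offer, "-" to accept/use

def bindings(promises):
    """Return (body, giver, user) triples where an offer meets an acceptance."""
    offers = {(p.body, p.promiser, p.promisee)
              for p in promises if p.polarity == "+"}
    matched = []
    for p in promises:
        # A (-) promise binds when the counterpart's (+) offer points back at us.
        if p.polarity == "-" and (p.body, p.promisee, p.promiser) in offers:
            matched.append((p.body, p.promisee, p.promiser))
    return matched

world = [
    Promise("server", "client", "serve-http", "+"),  # server offers a service
    Promise("client", "server", "serve-http", "-"),  # client promises to use it
    Promise("server", "other", "serve-http", "+"),   # offer with no matching use
]
print(bindings(world))  # only the matched (+)/(-) pair cooperates
```

Note that the unmatched offer creates no binding: intentions advertised to the world produce cooperation only when another autonomous agent promises, of its own accord, to accept them.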

Promisees, Stakeholders, and Trading Promises

So far, ...
