Preface
In a sense, distributed computing has been with us since the beginning of computer technology. A conventional computer can be thought of as “internally distributed,” in the sense that separate, distinct devices within the computer are responsible for certain well-defined tasks (arithmetic/logic operations, operand stack storage, short- and long-term data storage). These “distributed” devices are interconnected by communication pathways that carry information (register values, data) and messages (assembler instructions, microcode instructions). The sources and destinations of these pathways are literally hardwired, and the “protocols” they use to carry information and messages are rigidly defined and highly specific. Reorganizing the distribution scheme of these devices involves chip fabrication, soldering, and the recoding of microprograms, or the construction of a completely new computer. This inflexibility offers benefits, however, in terms of processing speed and information-transfer latency. The functions expected of a computing device at this level are well defined and bounded (perform arithmetic and logic operations on data, and store the results), so the architecture of the device can and should be highly optimized for these tasks.
The history of truly distributed computing begins with the first day that someone tapped a mainframe operator on the shoulder and asked, “Hey, is there any way we can both use that?” Display terminals, with ...