Figure 1-1 shows a dumb terminal that displays output from a remote, monolithic process running on a gigantic mainframe or a powerful UNIX server. Note that the dumb terminal does no work except to serve as a pair of remote eyeglasses into the process running on the mainframe, connected through some type of direct link.
In the 1950s and 1960s, large enterprises typically leased a mainframe so that many employees could share it for routine tasks. Because these machines were so few (and so expensive), engineers devised ways to use mainframes remotely. One such idea was the dumb terminal. The communication technique behind a dumb terminal was extremely simple: engineers literally ran a wire from the terminal to the host mainframe. Because of this direct link, a dumb terminal never had to determine where to send its data.
People used dumb terminals to access the mainframe from a distance. In a sense, the mainframe moved a bit closer to the users, and since moving closer brought comfort, the users asked for more. Thus, dumb terminals quickly led to the birth of smarter terminals. These more intelligent terminals often contained specialized circuitry, firmware, and possibly a communications protocol designed specifically for collaborating with their host computer. In essence, these intelligent terminals were the first glimmerings of distributed processing. The only things these smart terminals lacked to be stand-alone personal computers (PCs) were a local disk and an operating system.
As PCs came into the picture, a new idea emerged: terminal emulation. Using simple terminal emulation software, hundreds of users could remotely log on to a powerful mainframe or UNIX server. Unlike dumb terminals, terminal emulators communicated with their hosts in a more complex way: users could configure an emulator to use a particular communications protocol. This flexibility came at the price of complexity. For example, the slightest protocol incompatibility would almost certainly prevent an emulator from communicating with its host, as the sketch below suggests.
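As a rough illustration, a terminal emulator boils down to something like the following sketch: resolve a configured host, connect to it, forward the user's input, and display whatever comes back. This is only a minimal sketch, assuming a POSIX system and a plain TCP connection on the Telnet port; the host name mainframe.example.com and the LOGON line are hypothetical, and error handling is kept minimal. Real emulators of the era spoke negotiated protocols such as Telnet or 3270 data streams, which is precisely where the configuration and incompatibility headaches arose.

```cpp
// Minimal "terminal emulator" sketch: connect to a host over TCP,
// send one line of input, and print the host's reply.
// Assumes a POSIX system; mainframe.example.com is a hypothetical host.
#include <cstdio>
#include <cstring>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    addrinfo hints{}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;   // plain TCP; no real terminal protocol

    // Resolve the host name -- the emulator must be told where the host is,
    // unlike a dumb terminal, which is hard-wired to it.
    if (getaddrinfo("mainframe.example.com", "23", &hints, &res) != 0) {
        fprintf(stderr, "cannot resolve host\n");
        return 1;
    }

    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) != 0) {
        fprintf(stderr, "cannot connect\n");
        return 1;
    }
    freeaddrinfo(res);

    // Forward a line of keystrokes to the host and echo its reply -- the
    // whole job of a terminal. If the two ends disagree on the protocol
    // spoken over this socket, this is where communication breaks down.
    const char *line = "LOGON USER01\r\n";   // hypothetical login command
    write(sock, line, strlen(line));

    char buf[512];
    ssize_t n = read(sock, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(sock);
    return 0;
}
```

Note how the emulator, unlike a hard-wired dumb terminal, must be configured with the host's address and must agree with the host on the protocol it speaks over the connection.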
For some time, terminals were great because they allowed users to control their business processing remotely. However, as PCs became increasingly powerful, cheaper, and more popular, software developers realized that this method of remote computing was wasteful, since it left the power of the PC untapped. In other words, some of the work the server did for two thousand users could have been done instead on each of those two thousand separate PCs. This was one of the reasons for the birth of client/server computing.