Putting dhttp in Perspective
Of course, there are other, less exotic ways to move data. A typical office PC running Windows can easily be a peer file server. A central replication service could simply pull entire database files from a group of machines, sort out their differences, and put them back. The file-oriented approach sacrifices granularity, though. When you grab whole files, you can't ask for just yesterday's records. More generally, it sacrifices all the benefits that flow from a true distributed computing model.

A distributed HTTP service is, I'm arguing, as fundamental to a web-centric model of computing as is a browser. Think about what Windows brought to the table back when DOS reigned: large memory, a GUI, device independence. Once developers could assume these services were available—wherever their code ran—there was no looking back. The dhttp model brings a new core service to the table—a lightweight, scripted, HTTP-aware service that connects local and remote applications and services to resources on the local machine.
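The granularity argument is easy to make concrete. Here is a minimal sketch, in Python rather than dhttp's own scripting environment, of what such a lightweight peer service might look like: the endpoint name, record layout, and query parameter are all hypothetical, but the point stands—a replication client asks for just the rows it needs instead of pulling the whole file.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for a data store on the peer machine (hypothetical records).
RECORDS = [
    {"id": 1, "date": "1999-06-01", "note": "old"},
    {"id": 2, "date": "1999-06-02", "note": "yesterday"},
    {"id": 3, "date": "1999-06-02", "note": "also yesterday"},
]

class PeerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /records?date=YYYY-MM-DD returns only the matching rows --
        # the granularity that whole-file replication can't offer.
        url = urlparse(self.path)
        if url.path != "/records":
            self.send_error(404)
            return
        wanted = parse_qs(url.query).get("date", [None])[0]
        rows = [r for r in RECORDS if wanted is None or r["date"] == wanted]
        body = json.dumps(rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PeerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "replication client" fetches only yesterday's records over HTTP.
with urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/records?date=1999-06-02"
) as resp:
    rows = json.loads(resp.read())

print(len(rows))  # only the matching records travel over the wire
server.shutdown()
```

Only the two matching records cross the network here; the file-sharing approach would have moved the entire store for the same question.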
That dhttp can serve files is, as we’ve seen, the least interesting of its capabilities. Products such as Microsoft’s Personal Web Server haven’t caught on widely as desktop-based applications, because they don’t really solve a new problem on the LAN. What made first-generation web servers interesting was the way they made the whole Internet a giant LAN. With peer file sharing, I can already grant you access to files on my disk, so in that respect ...