A pause to refresh the Web?

After years of racing, the Web may finally be slowing to breathe.

By Simon St. Laurent
December 13, 2015
Chesterfield four seater sofa (source: Sebastiaan ter Burg via Flickr)

The Web has (mostly) enjoyed a wild ride for the last decade. Coining the word “Ajax” showed the world how much more the Web could do, and developers have run with it since across many devices and contexts. We now have running jokes about days since the last JavaScript framework, as a language that was once a minor player has taken over ever-larger swathes of the computing world.

This giant burst came after a few years of calm that let the Web community sort out its foundations. After the dot-com bust, the Web stopped being a field everyone had to know about. The Browser Wars, when Netscape and Microsoft and others had competed by adding new features to HTML, came to a temporary end as Internet Explorer dominated and stagnation set in. In those years of quiet, web development communities found their technical, architectural, and political values. Ajax emerged from those conversations, as did RESTful models for protocols and shared Web Standards as a concrete goal. Development communities were able to filter the old and set a course that sustained the Web for a decade as interest returned.

While developers certainly complained about the stagnation phase, the last few years of Web technology remind me too much of the Browser Wars period. This time it’s not the browser vendors – though happily there are many of them – creating most of the ruckus. This time the questions come more from the ways we use the Web, the ways we expect to be able to use the Web, and tensions between technical possibilities and dreams of gold.

Much of the burst in Web activity was driven by JavaScript. Developers found the good parts in a much-maligned language. Google popularized JavaScript-driven approaches with Google Maps, Gmail, and more, while vastly improving its V8 JavaScript engine. Other browser vendors raced to keep up, and JavaScript performance improved dramatically. Flash developers migrated to the Web as Apple barred Flash from iOS devices, and brought programming-centric approaches to what had previously been a document-centric model. HTML5 might have sounded like a revision of HTML, but it was a minor revision of the document parts of HTML combined with a massive expansion in JavaScript APIs supporting new features.

All those new features and approaches were great, but also overwhelming. Libraries and then frameworks spread rapidly as developers tried to simplify their work and reduce the mental overload. JavaScript’s incredible ability to fill gaps and to make the browser model conform to programmer expectations meant that the world got a lot of JavaScript. Node brought JavaScript to the server and extended its reach well beyond the browser. Even packaging and distributing JavaScript meant making a lot of choices. As Matt Gaunt put it, “all of the Web tooling ecosystem is Game of Thrones.” Finding the best choices for a particular situation in this landscape is difficult.

The community seems to be recognizing the costs of its magnificent explosion. Nicholas Zakas writes of “Innovation fatigue”. I enjoyed seeing the headline “JavaScript Tooling Settles to Merely Chaotic” in ThoughtWorks’ November 2015 Technology Radar. The last year of conversation has driven home repeatedly that a large part of the popularity of the React framework comes from its doing much less than the “superheroic” Angular and similar full MVC frameworks offer. While workflows and packaging remain intricate, the growing dominance of npm (though Bower’s not dead yet!) means that a one-stop shop for JavaScript libraries is finally emerging.

That may not help enough, though, because technical complexity is only one part of the problem. As happened in the dot-com boom, grasping efforts to monetize the Web threaten its long-term viability. This time even advertisers acknowledge that advertising has become a curse, impairing performance and driving away customers. This time, though, the Web has more (and better networked) competition from a few kinds of walled gardens.

  • Native mobile apps keep promising a better world, and customers (at least in the United States) are listening. That’s led at least some groups of Web developers and vendors to think that we need to make the Web more like native. (In many ways this echoes ’90s conversations grousing about why the Web didn’t have the same powers as desktop applications.)
  • Facebook is on the Web but not of the Web. While even Facebook’s native applications tend to use a lot of Web technologies, it delivers more to its users by delivering less: an environment where it controls the view.

While the maintainers of the walled gardens can mostly control the amount and kind of advertising degrading their users’ experience, the Web has become pretty overgrown. Since the Web runs on an open model, users have been able to choose to block advertising with a variety of tools on the desktop. Apple’s recent move to allow its users to block ads and more raises tough questions about the future of the Web. The Accelerated Mobile Pages (AMP) project applies walled garden-like assumptions to a reinvented Web, one of those assumptions being to lock out all JavaScript except its own. The increased visibility of blocking also means that mostly positive articles about turning off JavaScript are appearing in famously tech-friendly places like Wired.

2014’s calls to extend the Web forward are starting to face competition from people calling to slow down. What had seemed like a Web politics conversation reads very differently, though, in a world where people are willing to turn off JavaScript.

It’s not the end of the Web, by any means. The Web is not JavaScript and JavaScript is not the Web. Web developers have even had approaches for dealing with this situation – progressive enhancement – for more than a decade. Developers who work primarily on sites have dealt with these problems almost as long as the Web has existed, and those techniques also work for applications.
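
To make that concrete, here is a minimal progressive-enhancement sketch (mine, not the article’s; the data-enhance attribute, the #results element, and the search endpoint are all illustrative). The form is ordinary HTML that submits and reloads the page on its own; the script only takes over when it is actually present and the browser supports what it needs:

```javascript
// Plain HTML assumed on the page (illustrative markup):
//   <form data-enhance="search" action="/search" method="get">
//     <input name="q"><button>Search</button>
//   </form>
//   <div id="results"></div>
document.addEventListener('DOMContentLoaded', function () {
  var form = document.querySelector('form[data-enhance="search"]');
  var results = document.querySelector('#results');
  if (!form || !results || !window.fetch) {
    return; // no script or no fetch support: the form still submits normally
  }
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    var query = encodeURIComponent(form.elements.q.value);
    fetch(form.action + '?q=' + query)
      .then(function (response) { return response.text(); })
      .then(function (html) { results.innerHTML = html; })
      .catch(function () { form.submit(); }); // on failure, fall back to a full page load
  });
});
```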

Programmers needn’t fear that they will have to throw out all of their existing code, either. It turns out that recent developments in JavaScript application architecture (isomorphic or universal JavaScript) give that approach even more power. A client isn’t running JavaScript? Run the JavaScript that would have generated their page on the server, and send the client the result. It takes more than flipping a switch, and may require substantial change to many kinds of interfaces (especially maps). As usual, though, the Web offers more than one way to solve a problem.
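
As a rough sketch of that approach, with no particular framework assumed (the renderPage function, the port, and the markup below are all hypothetical), one shared render function can serve both sides:

```javascript
var http = require('http');

// Shared rendering code; in practice this would live in a module that both the
// server and the browser bundle load. (renderPage is a hypothetical example.)
function renderPage(items) {
  return '<ul>' + items.map(function (item) {
    return '<li>' + item + '</li>';
  }).join('') + '</ul>';
}

// Server side: send fully rendered HTML, so the page works with JavaScript off.
http.createServer(function (req, res) {
  var html = renderPage(['first item', 'second item', 'third item']);
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<!doctype html><title>Demo</title><div id="app">' + html + '</div>');
}).listen(3000);

// Client side (the same renderPage, loaded via a bundler) would do roughly:
//   document.querySelector('#app').innerHTML = renderPage(freshItems);
```

Frameworks automate the same division of labor; React’s ReactDOMServer.renderToString, for example, performs the server half for React components.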

Slowing down a bit, especially if we can use that time to re-examine the architecture of the Web, feels like a huge win to me. Developers have outpaced even the much faster computers, networks, and browsers that emerged in the past decade.

Recognizing that and adjusting to it may well hurt for a little while, but it doesn’t mean stopping the Web or even standards development. Specs for new situations will continue to appear. Development communities focused on improving the Web still have lots of room to run. I suspect that even Alex Russell’s call for Progressive Apps (article or video), applications that can shift context from a browser tab to a device, remains possible in this slower-moving world. The barrier there isn’t so much the Web technologies as the keepers of the walled gardens he wants to invade.
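
For readers who haven’t seen what those Progressive Apps involve in practice, the key ingredient is a service worker registered alongside a web app manifest. The sketch below is purely illustrative (the file names and cached paths are placeholders), but the pieces it uses (registration, install and fetch events, and Cache storage) are the ones Russell’s proposal builds on:

```javascript
// sw.js -- a minimal service worker of the kind such apps rely on.
// A page opts in with something like:
//   <link rel="manifest" href="/manifest.json">
//   <script>
//     if ('serviceWorker' in navigator) navigator.serviceWorker.register('/sw.js');
//   </script>

// Pre-cache an "app shell" when the worker is installed.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('shell-v1').then(function (cache) {
      return cache.addAll(['/', '/app.js', '/styles.css']);
    })
  );
});

// Answer requests from the cache first, falling back to the network, so the
// app still opens from the home screen when the connection is poor or absent.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```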
