Chapter 8. Next Steps
In the mid-1990s, just as the World Wide Web was gaining popularity, I was sure that the Internet would become a powerful force in our lives. But I didn’t have a clue that services such as Google would emerge, or that weblogs and other personal media would play such a transformative role in my chosen craft.
I didn’t anticipate online experiments such as Feed, the pioneering but now defunct online magazine that had an edginess bloggers later incorporated, or group-edited sites such as Kuro5hin, where the audience writes and ranks the stories and then adds context and ideas as they discuss them. I didn’t imagine that blogs and other tools would come along to make writing on the Web almost as easy as reading from it. So I won’t try to predict the shape of the news business and how it will be practiced a decade from now. But even if we can’t make specific predictions, we can look forward and make some safe assumptions about the architecture and technology of tomorrow’s news, and then consider what they suggest.
My assumptions rest on two guiding principles. The first is a belief in basic journalistic values, including accuracy, fairness, and ethical standards. The second is rooted in the very nature of technology: it’s relentless and unstoppable.
Only one thing is certain: we’ll all be astounded by what’s to come.
Laws and Other Codes
As we’ve already established, the mass media in the latter part of the 20th century was organized, for the most part, along a fairly simple, top-down framework. Editors and reporters inside big companies decided which stories to cover. They received information from a variety—but not too big a variety—of mostly official and sometimes unofficial sources. Editors massaged what reporters wrote, and the results were printed in newspapers and magazines or broadcast on radio and television. Alternatives did exist, particularly when desktop publishing came on the scene. But the conversational aspect of the news we’ve been discussing in this book hadn’t arrived.
Technology and an increasing dissatisfaction with mass media have created the conditions for a new framework. To understand this, we must first understand the technology and the trends underlying the collision of journalism and technology. These trends take the shape of laws, not the kind enacted by governments but the kind imagined by scientists and acute observers of society.
The first law is named after Gordon Moore, cofounder of computer chip maker Intel. More than any other, Moore’s Law is the key to understanding today’s reality and tomorrow’s possibilities.
Moore’s Law says that the density of transistors on a given piece of silicon will double every 18 to 24 months. It’s been true since Moore came up with the notion in the 1960s, and the pace of improvement looks set to continue for some time to come. There’s no historical equivalent for this kind of change; humans are fortunate to do anything twice as fast or twice as well even once, much less double that improvement again and again. Moore’s Law is about exponential change: it doesn’t take long before you’ve increased power a thousandfold. [218]
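The arithmetic is easy to check. Here’s a minimal sketch in Python (assuming the slower, 24-month doubling pace) of how quickly those doublings compound:

def density_multiplier(years, months_per_doubling=24):
    """How many times denser chips get after `years` of Moore's Law."""
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

# Ten doublings take 20 years at this pace and yield roughly a
# thousandfold improvement; twenty doublings yield a millionfold.
print(round(density_multiplier(20)))  # 1024
print(round(density_multiplier(40)))  # 1048576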
As engineers shrink millions of transistors onto tiny chips, they can embed enormous calculating power—something akin to intelligence—into almost every electronic device we use. You and I use many computers each day: microprocessors, and their embedded cousins called microcontrollers, are in computers, handheld devices, alarm clocks, coffee makers, home thermostats, wristwatches, and automobiles. Most of these devices contain vastly more processing power than early mainframe computers.
Not only are we embedding brains into everything we touch, but we’re adding memory to everything, too. The manufacturers of computer memory chips and disk drives are improving their products at an even faster pace than Moore’s Law. And now, with modern communications—wired and wireless—we’re connecting devices that are more and more powerful.
Grassroots journalism feeds on all these innovations. Devices for collecting, working with, and distributing data are becoming smaller and more powerful every year. People are figuring out how to put them to work in ways professional journalists are only beginning to catch on to, such as collaborative news sites where readers write and edit the stories, and post newsy pictures from their camera phones.
Moore himself has been somewhat surprised at how long Silicon Valley’s engineers have kept his law not just alive, but vibrant. “It went further than I ever could have imagined,” he told me in 2001.
Next, consider Metcalfe’s Law, named after Bob Metcalfe, inventor of the Ethernet networking standard now ubiquitous in personal computers. [219] Essentially, Metcalfe’s Law says that the value of a communication network grows as the square of the number of nodes, or end-point connections. That is, take the number of nodes and multiply it by itself.
The canonical example of Metcalfe’s Law is the growth of fax machines. If there’s only one fax machine in the world, it’s not good for much. But the minute someone else gets a fax machine, both can be used, and real value is created. The more people with fax machines, the more value there is in the network—a utility that greatly exceeds the raw numbers—because each individual user has many more people to whom he can send faxes. [220]
Each new Internet-connected computer is a node. So, increasingly, is each new mobile phone that can send and retrieve Internet data. And in a few years, it’s probable that most of the smarter devices made possible by Moore’s Law—everything from refrigerators to cars to computers—will be nodes. When billions or even trillions of people and things are connected, the value of the network will transcend calculation.
Finally, we have Reed’s Law, named after David Reed, about whom I’ll talk more in Chapter 11. Reed noticed that when people go online, they don’t only conduct one-to-one communications, as they would with a telephone or fax machine. They conduct many-to-many, or few-to-few, communications.
According to Reed’s Law, groups themselves are nodes. The value of networks in that context, he asserts, grows as the factorial of the number of groups. Factorial means that you take the number of groups, and every integer less than that number all the way down to one, and multiply all of those numbers together. For example, 8 factorial is 1 times 2 times 3 times 4 times 5 times 6 times 7 times 8, or 40,320. The number of group nodes factorial is a very, very, very big number. [221]
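To make the growth rates concrete, here’s a small Python illustration comparing a plain linear network, Metcalfe’s squared value, and Reed’s factorial value as described above. The node counts are arbitrary:

import math

# Three rough measures of a network's value with n nodes (or groups):
# linear (a broadcast network), Metcalfe's n squared (every pair can
# connect), and Reed's factorial of n (groups as nodes, as above).
for n in (2, 4, 8, 16):
    print(n, n * n, math.factorial(n))

# At n=16: linear is 16, Metcalfe's Law gives 256, and Reed's
# formulation gives 20,922,789,888,000.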
Obviously, Metcalfe’s Law and Reed’s Law are as much opinions as anything else. But they make sense intuitively, and more and more they make sense in a practical way: the more the Net grows, the more valuable and powerful it becomes. [222]
All of these trends, applied to communications in general, add up to an even more “radical democratization of access to the means of production and distribution,” Howard Rheingold told me.
The people who’ll invent tomorrow’s media are not in my age bracket. They are just growing up now. In a decade, Rheingold observed: “The 15-year-olds today in Seoul and Helsinki, who are already adept at mobilizing media to their end, will be 25. And what they carry in their pockets will be thousands of times more powerful than what they have today.”
What does this mean for news and journalism? As the technologies of creation and communication grow more powerful and become smaller, and ultimately become part of the fabric of life, we’ll have vastly more raw data. And we’ll need tools—and humans—to help us make sense of it all.
Creating the News
There’s no longer any doubt that personal publishing of various stripes is becoming a major trend. The Pew Internet & American Life Project found that in mid-2003, slightly less than half of adult Internet users had used the Net to “publish their thoughts, respond to others, post pictures, share files and otherwise contribute to the explosion of content available online.” [223] If you added in the under-18 population, no doubt the numbers would rise significantly. While much of what counted as publishing on the Net consisted of trading files, leading some doubters to downplay the survey, the bottom line was that an enormous and growing cadre of content creators existed, and some of them were creating news.
The tools of creation are now everywhere, and they’re getting better. Musicians can get the near-equivalent of a big recording studio in a package costing only a couple of thousand dollars, or considerably less if they’re willing to make some compromises. Digital video is becoming so cheap that anyone with the requisite talent can make a feature film for a fraction of what it once cost. The notion of writing on the Web is expanding to include all kinds of media, and there’s little to stop it.
The Web can’t compete today—and may not compete in our lifetimes—with live television for big-event coverage. The architecture just doesn’t permit it. But for just about everything else, it’s ideal. Adam Curry, who became prominent as a VJ on MTV and has since been exploring the blogosphere and even newer media, [224] envisions “Personal TV Networks” that use the Net in a more appropriate way to deliver video content. In an introduction to a session at a 2004 blogging conference, [225] he described it this way:
Since the invention of the video tape recorder, most content delivered via television is created offline and prepared well in advance of its broadcast slot. In many cases a program will have to be cleared through the legal department and be reviewed for network “policies.” And so the program sits in a queue, waiting to be distributed. During this time the program could be distributed by bike messengers and still arrive on time when you would normally turn on your set as directed by TV Guide. Or . . . it could be distributed via the Internet. Since big files take a long time to download, a day’s worth of downloading should be time enough. The download can take place at night, when usage of your network and pc is low and, most importantly, you aren’t waiting for it. It’ll “just be there” in the morning. [226]
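Curry’s overnight-delivery idea is simple to sketch. Here is a minimal, hypothetical version in Python: a script, run nightly by a scheduler, that fetches any video files listed as enclosures in an RSS feed. The feed URL and download folder are placeholders, not a real service:

import os
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/videofeed.xml"  # hypothetical feed
DOWNLOAD_DIR = "downloads"

# Fetch the feed and download each <enclosure> while we sleep; by
# morning the programs are "just there," as Curry describes.
os.makedirs(DOWNLOAD_DIR, exist_ok=True)
feed = ET.parse(urllib.request.urlopen(FEED_URL))
for enclosure in feed.iter("enclosure"):
    url = enclosure.get("url")
    filename = os.path.join(DOWNLOAD_DIR, url.rsplit("/", 1)[-1])
    if not os.path.exists(filename):  # skip files we already have
        urllib.request.urlretrieve(url, filename)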
Hundreds of millions of people in the U.S. and abroad are using camera phones (soon to be video-camera phones) and SMS to share information. Soon, said Larry Larsen, multimedia editor at the Poynter Institute, location will be one of the data points. For example, he told me that if he’s house-hunting, he should be able to visit a location and ask his Treo handheld for all relevant news stories within a two-mile radius. “If the bulk of that includes violent crimes,” he wrote me, “I’m out of there.” [227]
But how easy will it be to use the tools of creation? Blogs set an early standard, but they’re still relatively crude instruments. You still need to know some HTML to make a blog work. In the future, tools need to be drop-dead simple, or the promise of grassroots journalism won’t be kept.
The reporter of the future—amateur or professional—will be equipped with an amazing toolkit. But reporting is more than collecting facts, or raw data. Rheingold’s smart mobs are morphing into a news team of unparalleled reach. Is there depth to match?
In Snow Crash, [228] a 1992 novel of a post-apocalyptic American future, Neal Stephenson offered an image that has stuck with me.
Gargoyles represent the embarrassing side of the Central Intelligence Corporation. Instead of using laptops, they wear their computers on their bodies, broken up into separate modules that hang on the waist, on the back, on the headset. They serve as human surveillance devices, recording everything that happens around them. Nothing looks stupider; these getups are the modern-day equivalent of the slide-rule scabbard or the calculator pouch on the belt, marking the user as belonging to a class that is at once above and far below human society.
The gargoyles in the novel aren’t journalists in Stephenson’s vision. They’re more like human personal assistants, with a dual role: recording what’s going on in the environment and then interacting with the network by looking up someone’s face or biography from the Net, for example. In a sense, the gargoyles are web-cams with brains.
“Journalists are supposed to filter information, not just be web-cams,” Stephenson told me. There’s too little respect for the journalistic function when people see it as “a primitive substitute for having web-cams everywhere. No one has time to sift through all that crap.”
The sifting process will be handled by both people and machines. The role of the journalist will surely change, but it will not go away. The role of automated tools, meanwhile, will grow.
Sorting It Out
The ability to get the news you want is the hallmark of a networked world. People can create their own news reports from a variety of sources, not just the ones in their hometowns, which typically have been dominated by a monopoly local newspaper and television stations that would have to dig deeper to be shallow.
Creating our own news reports is still a largely haphazard affair. The sheer volume of information deters all but the most dedicated news hunters and gatherers. But the tools are improving fast, and it won’t be long before people will be able to pick and choose in a far more organized way than they do today. New kinds of Big Media are emerging in this category, including Google, Microsoft, and Yahoo!. But the opportunity for small media is enormous, too.
I’ve been a fan of Google News [229] since it launched in “beta” form (it was still beta as I wrote this) in early 2002. The brainchild of Krishna Bharat, it has become a popular, and I’d argue essential, part of the web news infrastructure. The search engine “crawls” various news sites—designated by humans—and then machines take over to display all kinds of headlines on a variety of subjects from politics to business to sports to entertainment and so on. The display is calculated to resemble a newspaper. It’s an effective glimpse into what’s big news on the Web right now, or at least what editors think is big.
A user who wants to be better informed on a particular topic can use Google News to drill deeper, which may be the most important aspect of the site. One click and the user gets a list, sorted by what Google estimates is relevant or by date, of all stories on a given topic. There’s a great deal of repetition, but it can be eye-opening to see how different media organizations cover the same issue, or what different angles they choose to highlight.
A useful element of Google News is called Google Alerts, a service that lets users create keyword searches, the results of which are sent by email on a regular basis. But as of early 2004, the service didn’t let you read the alerts in RSS (the syndication format I discussed in Chapter 2 and will look at again below), a serious drawback.
Another Google News drawback, as of this writing, was a refusal to acknowledge news content from the sphere of grassroots journalism. For example, only a few blogs are considered worthy. This underestimates the value of the best blogs. Bharat told me the site has one basic rule: news requires editors, and Google News is displaying what editors think is important at any given moment. He saw the site as “complementary” to what newspapers do, but this seemed to understate its potential. Of course, it would not exist without the actual news reporting and editing from elsewhere. But it has the potential to turn into the virtual front page for the rest of us.
Microsoft, racing to catch Google in the search-engine wars, has long been established in the news business. MSNBC, the company’s partnership with General Electric’s NBC News unit, is a classic news site—big, heavy, rich with content. It’s innovative in how it provides multimedia news. Now Microsoft is making Google-like experiments in news, too, with its “Newsbot,” [230] the early tests of which closely resemble Google News.
More interesting, by the sound of it, is an upcoming Microsoft product called NewsJunkie, which is due to be released later in 2004. As Kristie Heim reported in the San Jose Mercury News on March 24, 2004, it is being designed to keep track of what readers have already seen, but with refinements. “It reorganizes news stories to rank those with the most new information at the top and push those with repetitive information to the bottom, or filter them out entirely,” she wrote.
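One naive way to get that effect, my own illustration rather than Microsoft’s actual method, is to score each incoming story by the fraction of its words the reader hasn’t already seen:

# Score stories by novelty: repetitive stories sink, fresh ones rise.
# (An illustration only; NewsJunkie's real algorithm is more refined.)
def rank_by_novelty(stories):
    seen = set()
    ranked = []
    for story in stories:  # stories arrive in reading order
        words = set(story.lower().split())
        novelty = len(words - seen) / len(words) if words else 0.0
        ranked.append((novelty, story))
        seen |= words
    return [s for _, s in sorted(ranked, reverse=True)]

print(rank_by_novelty([
    "Quake hits city overnight",
    "Earthquake strikes city overnight, no injuries reported",
    "City cleans up after overnight quake",
]))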
In looking at the major web companies’ moves, I’ve been most impressed with Yahoo!’s direction. The MyYahoo! page has been more customizable than any of the other major sites, letting the user create a highly tailored news report. In early 2004, Yahoo! folded RSS into the service, letting users select feeds from weblogs and other sites and add them to the MyYahoo! news page. [231] It’s the best blend yet of old and new.
Syndication Takes Off
Let’s revisit RSS. You’ll recall that RSS is a file generated automatically by weblog and web site software, and increasingly by other applications, that describes the site’s content for the purpose of syndication.
Here’s an example. A typical blog consists of a homepage with several postings. Each posting consists of a headline and some text. The RSS “feed,” as it’s known, is a file containing a list of the headlines and some or all of the text from the postings. In other words, RSS describes the structure and some of the content of a particular page.
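To make that concrete, here is a minimal sketch in Python that parses a tiny, made-up RSS 2.0 feed and pulls out each posting’s headline and text, which is essentially what a newsreader does with a real feed:

import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed: one channel with two postings.
FEED = """
<rss version="2.0"><channel>
<title>Example Weblog</title>
<item><title>First headline</title><description>Some text.</description></item>
<item><title>Second headline</title><description>More text.</description></item>
</channel></rss>
"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    print(item.findtext("title"), "|", item.findtext("description"))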
RSS feeds can be read by “aggregators” or “newsreaders,” software that allows individuals to collect news from many different sites into one screenful of information instead of having to surf from one page to another. Today, RSS readers are fairly primitive, but that will change in coming years.
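Even so, a bare-bones aggregator takes only a few lines. This sketch assumes the freely available Universal Feed Parser library (feedparser); the feed addresses are placeholders for a personal reading list:

import feedparser  # Mark Pilgrim's Universal Feed Parser

FEEDS = [  # placeholders for the feeds you actually read
    "http://example.com/gillmor/rss.xml",
    "http://example.org/another-blog/index.xml",
]

# Pull every posting from every feed into one combined list, newest
# first: one screenful instead of a morning of page-to-page surfing.
entries = []
for url in FEEDS:
    entries.extend(feedparser.parse(url).entries)
entries.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
for e in entries[:20]:
    print(e.get("title"), "|", e.get("link"))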
Some of the most exciting new work surrounding RSS is coming from fledgling companies such as Feedster, which mines RSS data and keeps track of bloggers’ mentions of products, among other things. The inherent possibilities seem nearly endless, including the ability to follow conversations in much more detailed ways. As I was finishing this book, Microsoft quietly let it be known that it was planning “Blogbot,” a search tool that sounded very much like Feedster and Technorati. Surprisingly, Google, which owns Blogger, a company that makes blogging software, hadn’t done any of this.
The technologists looking at this field see rich lodes in RSS and other data created on blogs and web sites. Mountains of data are being created every day by RSS feeds and other structured information, and smart entrepreneurs and researchers are creating tools that I believe will become an integral part of tomorrow’s news architecture.
The World Live Web
Dave Sifry, a serial entrepreneur, started Technorati in 2002. By April 2004, he was tracking more than two million blogs, with thousands coming online every day. Though many people abandon their blogs, the trend line is growing fast.
Technorati’s tools are basically semi-canned queries that go into a giant, constantly updated database that Sifry likens to a just-in-time search engine. The service helps people search or browse for interesting or popular weblogs, breaking news, and hot topics of conversation. It also lets users rank people and their blogs and blog topics not just by popularity—the number of blogs linking to something—but by weighted popularity, determined by the popularity of the linking blogs. You can also see not just the most popular blogs, but the fastest-rising ones. My blog had about 2,100 incoming links the last time I checked. If I get 100 more, that’s gratifying but not, relatively speaking, a huge change. But if someone who has a dozen incoming links today gets six more, that’s an enormous relative change, and Technorati will probably flag it. Think of this as a “buzzmeter” for determining how fast a blogger—or a blogger’s specific posting—is rising or cooling off.
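The arithmetic behind that buzzmeter is simple, as this toy calculation using the numbers above shows:

def relative_buzz(existing_links, new_links):
    """Growth rate of incoming links: the buzz, not the raw count."""
    return new_links / existing_links

print(relative_buzz(2100, 100))  # established blog: ~0.05, or 5% growth
print(relative_buzz(12, 6))      # small blog: 0.5, a 50% jump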
The idea behind Technorati might be called the Google Hypothesis: link structure matters. Knowing who is linking to whom can take a seemingly random collection of weblogs and extract a highly structured set of information. This information can then be filtered in a variety of ways. The original Technorati application was the “Link Cosmos”—what Sifry called “an annotated listing of all weblog sources pointing to a site [blog] in recent time.” Type in the URL of a weblog (or an individual posting), and the engine shows a list of weblogs pointing to that URL, sorted by time of linking or by “authority”—the “most popular” linking weblog is ranked first. Searching on any linking weblog will show its Cosmos as well, and so on. (Imagine what this would look like displayed graphically as a web of links. Inevitably, someone will offer such a tool.)
In addition to the Cosmos, the Technorati data can also be expressed as ordered lists. The Top 100 list, for example, shows the hundred most popular sites on the Web (whether weblogs or web sites such as Slashdot), based on the number of outgoing links from blogs. Though Technorati’s algorithms are simpler than Google’s, Technorati can offer the blogging community what Google offers news junkies with the Google News site: timeliness. Because the weblog world moves so fast, it’s helpful to know when something was posted. Google looks at links and documents to get its PageRank, Sifry explained, but Technorati adds two things: time of posting and the fact that with blogs, the postings are typically more personal than institutional. Combine all of this, he said, and you end up with a “World Live Web,” a subset of the World Wide Web that gets at the actual conversation.
As of March 2004, Technorati’s services included NewsTalk (“News items people are talking about”), BookTalk (“The books people are talking about”), and Current Events (“Conversations going on around current events”). For serious news users, these were invaluable additions.
But these are only the start of something much more interesting. The Web transcends mere links. Machines are talking to each other on our behalf.
Probing APIs and Web Services
Few users of Technorati know, and fewer care, about something called the Technorati API. API stands for “application programming interface,” a term used by tech people to explain how to hook one piece of software to another. In effect, APIs are standards created to help ensure that one product can interoperate with another. Think of the phone jack in your wall as an API that allows you to connect your phone to the phone network. Anyone can make an RJ-11 plug, connecting to a wire that runs between your phone and the wall.
Software development relies on APIs. Operating systems have them so that independent programmers can create applications, such as word processors, that use the underlying features of the system. Programmers don’t have to reinvent the proverbial wheel each time they write software, and the APIs help ensure a vibrant ecosystem on whatever platform they’re building for. Technorati is one of a growing number of web companies, including Google and Amazon, to create and publish APIs for its software. Most blogging software also has APIs.
With these and other APIs, programmers are using a technology called “web services” to further change the basic rules of the information game. According to programmer and blogger Erik Benson, [232] “A web service is basically a system that lets web sites talk to each other, sharing information between each other without the intervention of pesky humans.” In a sense, humans have used the Web this way for years: type a query into Google, or buy a book on Amazon, and you’re using a web service.
When Google [233] and Amazon, [234] and Technorati [235] (among others) offer APIs into their data, they’re not offering us the entire database the way the U.S. government does with, for example, census data, much of which can be downloaded and massaged at will. They’re offering a way to get specific information out of the databases in a structured way. But their willingness to do this means we can build, using web services, entirely new kinds of queries—and learn new things—with just a little bit of expertise. This may be beyond you and me, but programmers have already created some useful applications using APIs and web services, such as “Amazon Light,” [236] which uses the Amazon API to turn the retailer’s site into something more closely resembling a search engine. Another extraordinarily interesting application is Valdis Krebs’ analysis of people who buy books about politics with a right or left slant, and how little overlap there is between the two groups of buyers. [237]
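Mechanically, calling such an API often amounts to fetching a URL and parsing the structured reply. This Python sketch queries a hypothetical REST-style service; the endpoint, parameters, and tag names are invented for illustration, since each real API defines its own:

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# A made-up query: ask a service which weblogs link to a given URL.
# Real APIs (Google's, Amazon's, Technorati's) document their own
# endpoints, parameters, and response formats.
def who_links_to(target_url):
    query = urllib.parse.urlencode({"url": target_url, "key": "YOUR-KEY"})
    reply = urllib.request.urlopen("http://api.example.com/cosmos?" + query)
    tree = ET.parse(reply)
    return [item.findtext("weblog") for item in tree.iter("item")]

print(who_links_to("http://weblog.example.net/"))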
Web services get even more interesting when you consider how we might wire them together to create new kinds of applications. Long before Technorati started watching conversations about books, Benson had created AllConsuming, [238] which combined four web services to watch and highlight the books bloggers were discussing. I’m also fascinated by GoogObits, [239] which takes newspaper obituaries and essays and then augments them with Google searches.
These technologies will be part of future news dissemination systems. They’ll help us do something essential: keep better track of conversations. For example, I would like to be able to track news of innovative applications for my Treo smartphone. The news includes conversations among people I respect, not just standard journalists. If someone in the group I trust posts an item about the Treo, I want to know about it, of course. But I also want to know what others in that group—and people they designate as trustworthy or well-informed—are saying about this news. I want software that tracks not just the top-level item, which in this case could be a news story or blog posting or SMS response, but how the conversation then takes shape about the item across a variety of media. Now imagine having the same ability to track conversations about local, national, or international issues. Today, this is impossible except in a laborious and time-wasting way. Web services will eventually make it possible. [240]
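A first approximation is already possible with RSS alone. This sketch polls a hand-picked list of trusted feeds (the addresses are placeholders) and flags postings that mention a topic; what it can’t yet do is follow the conversation as it branches across media:

import feedparser

TRUSTED_FEEDS = [  # placeholders: feeds from people I trust on this topic
    "http://example.com/mobile-expert/rss.xml",
    "http://example.org/treo-fan/index.xml",
]
TOPIC = "treo"

# Flag any posting from a trusted source that mentions the topic.
for url in TRUSTED_FEEDS:
    for entry in feedparser.parse(url).entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if TOPIC in text:
            print(entry.get("title"), "->", entry.get("link"))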
Okay, but Whose “Information” Do You Trust?
Among the missing components in this hierarchy is a way to evaluate a person’s reputation beyond the crude systems in place today. A reliable reputation system would allow us to verify people and judge the veracity of the things they say based, in part, on what people we trust say about them. In a sense, Google is already a reputation system: Google my name and you’ll discover a lot about me, including where I work, what I’ve written, and a lot about what I think about various issues—and what some other people think of me (not all flattering by any means). Technorati is also this type of system: the more people linking to you, the more “authority” you have. But it’s important to note that the majority of blogs tracked by Technorati have nobody linking to them. This doesn’t mean the blogs lack value, because there are people close to the bloggers who trust them. No matter who you are, you probably know something about a topic that’s worth paying attention to. [241]
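A crude version of such weighting is easy to express: count not just who links to you but how much authority the linkers themselves carry. This toy calculation, loosely in the spirit of Technorati’s weighted popularity and Google’s PageRank but nobody’s actual algorithm, iterates a score over a tiny hypothetical link graph:

# Toy reputation: authority = your own linkers plus a damped share of
# each linker's authority, iterated until the scores settle.
links = {  # hypothetical graph: who links to whom
    "alice": ["bob", "carol"],  # alice links to bob and carol
    "bob": ["carol"],
    "carol": [],
}

scores = {blog: 1.0 for blog in links}
for _ in range(20):  # iterate toward a fixed point
    new = {}
    for blog in links:
        linkers = [b for b, outs in links.items() if blog in outs]
        new[blog] = 1.0 + 0.5 * sum(scores[b] for b in linkers)
    scores = new
print(scores)  # carol, linked by both others, scores highest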
Someday, a person who is interested in news about the local school system, which rarely rates more than a brief item in the newspaper except to cover some extraordinary event, will be able to get a far more detailed view of that vital public body. Any topic you can name will be more easily tracked this way. Just in the political sphere, the range will go beyond school governance to city councils to state and federal government to international affairs. Now multiply the potential throughout other fields of interest, professional and otherwise. And when audio and video become an integral part of these conversations—it’s already starting to happen as developers connect disparate media applications—the conversations will only deepen.
The tools are being built now. Look on the accompanying web site for this book, where we will maintain a comprehensive list along with links to the toolmakers.
Dinosaurs and Dangers
The technology tells us we’re heading in one direction, but the law and cultural norms will have something to say about the process.
The media of the late 20th century was largely the province of big corporations. All else being equal, it might be headed toward extinction. But all is not equal in the halls of power and influence. If today’s Big Media is a dinosaur, it won’t die off quietly. It will, with government’s help, try to control new media rather than see its business models eroded by it.
Meanwhile, one of the valuable artifacts of modern journalism is a commitment—however poorly kept at times—to integrity. The growth of grassroots journalism has been accompanied by serious ethical issues, including questions of veracity and outright deception. Are traditional values compatible with this new medium? The question of integrity and the struggle for control are potentially deadly flies in the ointment of tomorrow’s media. We’ll look at them closely in the next several chapters.
Endnotes
[218] Moore’s original paper on the subject is on Intel’s web site at: ftp://download.intel.com/research/silicon/moorespaper.pdf.
[219] In this 2003 CNET interview, Metcalfe talks about the genesis and future of Ethernet: http://news.com.com/2008-1082-1008450.html.
[220] As Hal Varian and Carl Shapiro noted in their important 1999 book, Information Rules (Harvard Business School Press), Metcalfe’s Law relies on what economists call “network externalities.” This is the notion that the larger the network, the more attractive it will be to users in most cases—and the harder it will be for a new entrant in the market to get people to switch.
[221] David Reed’s own explanation of his “law” is on his site: http://www.reed.com/Papers/GFN/reedslaw.html.
[222] I’m particularly indebted to Howard Rheingold for his observations, in conversations and his writing, which have helped clarify my own understanding of the power of these various laws.
[223] Pew report on online content production: http://www.pewinternet.org/reports/toc.asp?Report=113.
[224] Adam Curry: http://live.curry.com.
[225] Curry’s BloggerCon session introduction: http://blogs.law.harvard.edu/bloggerCon/2004/04/09#a1119.
[226] Andrew Grumet has been experimenting with video as RSS “enclosures,” delivered to a desktop (or other device) as needed. See http://blogs.law.harvard.edu/tech/bitTorrent for more information.
[227] Advertisers saw this potential long ago. In Hong Kong in 2000, a friend showed me a mobile phone that let him know if a nearby store was having a sale.
[228] Bantam, 1992.
[229] Google News: http://news.google.com.
[230] Microsoft Newsbot: http://newsbot.msn.com.
[231] MyYahoo! RSS: http://add.my.yahoo.com/rss/.
[232] Erik Benson’s blog: http://erikbenson.com.
[233] Google’s API: http://www.google.com/apis/.
[234] Amazon’s Web Services: http://www.amazon.com/gp/aws/landing.html/102-2039287-6152169.
[235] Technorati Developers Center: http://www.technorati.com/developers/index.html.
[236] Amazon Light: http://www.kokogiak.com/amazon.
[237] Valdis Krebs’ political book-buying analysis: http://www.orgnet.com/divided.html.
[238] AllConsuming: http://www.allconsuming.com.
[239] GoogObits: http://www.googobits.com.
[240] In April 2004, Technorati launched a preliminary version of a service that went part of the way toward making the conversation visible. It let a weblogger automatically show a link to Technorati’s index of all the blogs that had linked to a specific posting. It was launched first on BoingBoing and became an instant hit.
[241] As David Weinberger says, updating the Andy Warhol aphorism: “In the future everyone will be famous for fifteen people.”