Media in the age of algorithms

Fake news and bad sites trying to game the system are an industry-wide problem — companies should share data and best practices in the effort to combat it.

By Tim O’Reilly
November 16, 2016
The first prototype de Havilland DH106 Comet at Hatfield, 1949. (source: Imperial War Museums on Wikimedia Commons)

Since the U.S. election, there’s been a lot of finger pointing, and many of those fingers are pointing at Facebook, arguing that its newsfeed algorithms played a major role in spreading misinformation and magnifying polarization. Some of the articles are thoughtful in their criticism, others thoughtful in their defense of Facebook, while others are full of the very misinformation and polarization that they hope will get them to the top of everyone’s newsfeed. But all of them seem to me to make a fundamental error in how they are thinking about media in the age of algorithms.

Consider Jessica Lessin’s argument in The Information:

I am deeply, deeply worried about the calls I am hearing, from journalists and friends, for Facebook to intervene and accept responsibility for ensuring citizens are well-informed and getting a balanced perspective.

…Facebook promoting trustworthiness sounds great. Who isn’t in favor of accepting responsibility and ferreting out misinformation? But major moves on Facebook’s part to mediate good information from bad information would put the company in the impossible position of having to determine ‘truth,’ which seems far more objective than it really is. Moreover, it would be bad for society.

My response: Facebook crossed this river long ago. Once they got into the business of curating the newsfeed rather than simply treating it as a timeline, they put themselves in the position of mediating what people are going to see. They became a gatekeeper and a guide. This is not an impossible position. It’s their job. So, they’d better make a priority of being good at it.

But those who argue strongly for Facebook’s responsibility to weed out the good from the bad also get it wrong. For example, on Vox, Timothy B. Lee wrote:

A big issue here is about the way Facebook has staffed its editorial efforts. In a traditional news organization, experienced editorial staff occupy senior roles. In contrast, Facebook has relegated the few editorial decisions it has made to junior staffers. For example, until earlier this year Facebook had a team of 15 to 18 independent contractors who were in charge of writing headlines for Facebook’s ‘trending news’ box.

When Facebook faced accusations that these staffers were suppressing conservative stories, Facebook panicked and laid all of them off, running the trending stories box as an automated feature instead. But that hasn’t worked so well either, as fake news keeps popping up in the trending news box.

The problem here wasn’t that Facebook was employing human editors to evaluate stories and write headlines. The problem was that Facebook’s leadership didn’t treat this as an important part of Facebook’s operations.

If Facebook had an experienced, senior editorial team in place, there’s a lot it could do to steer users toward high-quality, deeply reported news stories and away from superficial, sensationalistic, or outright inaccurate ones.

Lee is right to say that curating the news feed isn’t a job for junior staffers and independent contractors. But he’s wrong that it’s a job for “an experienced, senior editorial team.” It’s a job for the brightest minds on Facebook’s algorithm team!

And Lee is wrong to say that the problem wasn’t that Facebook was employing human editors to evaluate stories and write headlines. That was precisely the problem.

Like drivers following a GPS over a bridge that no longer exists, both Jessica Lessin and Timothy Lee are operating from an out-of-date map of the world. In that old map, algorithms are overseen by humans who intervene in specific cases to compensate for the algorithms’ mistakes. As Lessin rightly notes, this is a very slippery slope.

Lessin says:

We shouldn’t let Facebook off the hook for every problem it creates or exacerbates. But we can’t hold it responsible for each of them either. We’re witnessing the effects of a world where the internet has driven the cost of saying whatever you want to whomever you want to zero, as Sam often says. This is an irreversible trend no company can stop, nor should we want them to.

But there is a good existence proof for another approach, one that Facebook has worked long and hard to emulate.

Google has long demonstrated that you can help guide people to better results without preventing anyone’s free speech. Like Facebook, they are faced every day with determining which of a thousand competing voices deserve to be at the top of the list. The original insight on which Google was founded, that a link is a vote and that links from reputable, long-established sources are worth more than others, was their initial tool for separating the wheat from the chaff. But over the years, they developed hundreds if not thousands of signals that help to determine which links are the most valuable.
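
To make that founding idea concrete, here is a minimal sketch of link-as-vote scoring in the spirit of PageRank. The function name, damping factor, iteration count, and example pages are illustrative assumptions rather than anything Google has published; the point is simply that a page’s weight comes from the weight of the pages that vote for it.

```python
# Toy link-as-vote scoring in the spirit of PageRank. The damping factor,
# iteration count, and example pages are illustrative assumptions, not
# Google's production ranking.

def link_vote_scores(links, iterations=20, damping=0.85):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    score = {p: 1.0 / len(pages) for p in pages}

    for _ in range(iterations):
        new_score = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * score[page] / len(targets)  # split this page's vote
            for target in targets:
                new_score[target] += share
        score = new_score
    return score

# A link from a well-linked (reputable) page lifts its target more than
# a link from an obscure one.
web = {
    "reputable.example": ["story-a", "story-b"],
    "obscure.example": ["story-c"],
    "story-a": ["reputable.example"],
    "story-b": ["reputable.example"],
    "story-c": [],
}
print(sorted(link_vote_scores(web).items(), key=lambda kv: -kv[1]))
```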

For the better part of two decades, Google has worked tirelessly to thread the needle between algorithmically curating a firehose of content that anyone can create and simply picking winners and losers. And here’s the key point: they do this without making judgments about the actual content of the page. The “truth signal” is in the metadata, not the data.

Anyone who wants to understand how a 21st century company tackles the problem of editorial curation in a world of infinite information and limited attention would do well to study the history and best practices of Google’s search quality team, which are well documented and widely shared. Here’s one video from Matt Cutts, former head of the Google web spam team. What Google teaches us is that improving the algorithms to deliver better results is a constant battle because there are always those who are trying to game the system. But they also teach us that the right answer is not to make manual interventions to remove specific results.

Google and Facebook constantly devise and test new algorithms. Yes, there is human judgment involved. But it’s judgment applied to the design of a system, not to a specific result. Designing an effective algorithm for search or the newsfeed has more in common with designing an airplane so it flies, or designing a new airplane so that it can fly faster than the old one, than with deciding where that airplane flies.

Improving the “truth value” of articles doesn’t depend on manual interventions to weed out bad results, as commentators on both sides of this issue seem to think, but on discovering signals that cause good results to float to the top.
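
As a rough illustration of that distinction, consider a toy ranker in which every item is scored by a weighted combination of signals. The signal names and weights below are invented for this sketch, not Facebook’s or Google’s; the “bad” item is never deleted by hand, it simply scores too low to reach the top of the feed.

```python
# Toy ranking by a weighted combination of signals. The signal names and
# weights are invented for this sketch; nothing is removed manually, but an
# item with strong negative signals scores too low to surface.

SIGNAL_WEIGHTS = {
    "source_reputation": 3.0,     # metadata about the publisher
    "link_authority": 2.0,        # who links to or shares the item
    "community_flags": -4.0,      # reader reports of misinformation
    "predicted_engagement": 1.0,  # clicks, shares, comments
}

def rank(items):
    """items: list of dicts, each with a 'signals' dict of values in 0..1."""
    def score(item):
        return sum(weight * item["signals"].get(name, 0.0)
                   for name, weight in SIGNAL_WEIGHTS.items())
    return sorted(items, key=score, reverse=True)

stories = [
    {"title": "Deeply reported story",
     "signals": {"source_reputation": 0.9, "link_authority": 0.8,
                 "community_flags": 0.0, "predicted_engagement": 0.4}},
    {"title": "Viral hoax",
     "signals": {"source_reputation": 0.1, "link_authority": 0.2,
                 "community_flags": 0.9, "predicted_engagement": 0.95}},
]
for story in rank(stories):
    print(story["title"])
```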

The question is how to determine what “good results” means.

In the case of making an airplane fly, the goals are simple—stay aloft, go faster, use less fuel—and design changes can be rigorously tested against the desired outcome. There are many analogous problems in search—finding the best price, or the most authoritative source of information on a topic, or a particular document—and many that are far less rigorous. And when users get right to what they want, they are happy, and so, generally, are advertisers. Unfortunately, unlike search, where users’ desire to find an answer and get on with their lives is generally aligned with “give them the best results,” Facebook’s prioritization of “engagement” may be leading the company in the wrong direction. What is best for Facebook’s revenue may not be best for users.

Even in the case of physical systems like aerodynamics and flight engineering, there are often hidden assumptions to be tested and corrected. In one famous example that determined the future of the aerospace industry, a radically new understanding of how to deal with metal fatigue was needed. As described by University of Texas professor Michael Marder:

Britain was set to dominate the jet age. In 1952, the de Havilland Comet began commercial service, triumphantly connecting London with the farthest reaches of the Empire. The jet plane was years ahead of any competitor, gorgeous to look at, and set new standards for comfort and quiet in the air. Then things went horribly wrong.

In 1953 a Comet fell out of the sky, and the crash was attributed to bad weather and pilot error. …In 1954, a second Comet fell out of clear skies near Rome. The fleet was grounded for two months while repairs were made. Flights then resumed with the declaration, ‘Although no definite reason for the accident has been established, modifications are being embodied to cover every possibility that imagination has suggested as a likely cause of the disaster. When these modifications are completed and have been satisfactorily flight tested, the Board sees no reason why passenger services should not be resumed.’ Four days after these words were written, a third Comet fell into the sea out of clear skies near Naples, and the fleet was grounded again indefinitely.

…As the Comet accident report was being released in 1955, a little-known military contractor in the northwest corner of the United States was completing its prototype for a civilian jet airplane. Boeing had had little success with civilian craft in the past. The company knew that cracks had brought down the Comet, and they had better understand them before they brought down the Boeing 707.

Boeing brought in a researcher for the summer, Paul Paris, a mechanical engineer who had just finished a Master’s degree and was pursuing graduate studies at Lehigh University. …The view of fracture Paris brought to Boeing was dramatically different from the one that had guided construction of the Comet. Cracks were the centerpiece of the investigation. They could not be eliminated. They were everywhere, permeating the structure, too small to be seen. The structure could not be made perfect, it was inherently flawed, and the goal of engineering design was not to certify the airframe free of cracks but to make it tolerate them. [Emphasis mine.]

The essence of algorithm design is not to eliminate all error, but to make results robust in the face of error. Where de Havilland tried in vain to engineer a plane whose materials were strong enough to resist all cracks and fatigue, Boeing realized that the right approach was to engineer a design that allowed cracks, but kept them from propagating so far that they led to catastrophic failure. That is also Facebook’s challenge.

Facebook’s comment in response to Timothy Lee suggests that they understand the challenge they face:

We value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. In News Feed, we use various signals based on community feedback to determine which posts are likely to contain inaccurate information, and reduce their distribution. In Trending, we look at a variety of signals to help make sure the topics being shown are reflective of real-world events and take additional steps to prevent false or misleading content from appearing.

Despite these efforts, we understand there’s so much more we need to do, and that is why it’s important that we keep improving our ability to detect misinformation. We’re committed to continuing to work on this issue and improve the experiences on our platform.

The key question to ask is not whether Facebook should be curating the news feed, but how. They clearly have a lot of work to do. I believe they are taking the problem very seriously. I hope they make breakthroughs that don’t force them to choose between their business model and giving better results to their users. If they don’t, I fear that despite their good intentions, the business model will win. Their goal is to find a way for the plane to fly faster, yet still fly safely.

The bright side: searching through the possibility space for the intersection of truth and engagement could lead Facebook to some remarkable discoveries. Pushing for what is hard makes you better.

But the answer is not for Facebook to put journalists to work weeding out the good from the bad. It is to understand, in the same way that they’ve so successfully divined the features that lead to higher engagement, how to build algorithms that take into account “truth” as well as popularity.

And they need to be asking themselves whether they are de Havilland or Boeing.

Update

I sent a copy of this article to Matt Cutts, former head of the web spam team at Google. I thought his experience with Google’s 2011 Panda search algorithm update, which was introduced in response to the rise of low-quality “content farms,” was very relevant to today’s discussion. Just as there are articles today about Facebook failing, there were articles then about Google failing. (I was quoted in one of them, saying that Google was “losing some kind of war with spammers.”) Like Facebook today, Google was already wrestling with these concerns, but the public feedback still played an important role in pushing Google to take the problem seriously, despite the financial consequences to the company.

Matt wrote back:

For Google, the growth of content farms and low-quality sites threatened users’ trust in Google’s search results. When external commentary started to mirror our own internal discussions and concerns, that was a real wake-up call. The Panda algorithm was Google’s response, and it sought to reward higher-quality sites and to encourage a healthier web ecosystem.

In my personal opinion, I see a pretty direct analogy between the Panda algorithm and what Facebook is going through now. It seems like Facebook’s touchstones have been connecting people and engagement. But you get what you measure, and the dark side of engagement might produce shady stories, hoaxes, incorrect information, or polarizing memes as an unintended consequence.

With Panda, Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers. Facebook is a different company, but I’ll be interested to see how they tackle some of these recent issues.

As noted above, the problem of fake news and bad sites trying to game the system is a constant battle. Google, too, is struggling with a wave of fake news sites right now. See for example the false Google news result shown in this tweet from Aza Raskin. (Thanks, Jim Stogdill, for pointing it out. Isaiah Saxon has also sent me a couple of examples.) This is an industry-wide problem, not just a problem for Facebook, and companies should share data and best practices in the effort to combat it.

Second update

I thought it might be worthwhile to collect links I’ve come across that relate to algorithmic analysis of content about the election, some of which might be useful to people wrestling with this problem:

The Wall Street Journal’s Blue Feed/Red Feed points out how content diversity (rather than truth or falsity) may be a key vector for addressing the problem.

There is also an argument that one of the problems with the Facebook algorithm is that it suppresses content diversity, along with a fairly technical paper from 2009 that explains how freshness and diversity have been tackled in a completely different context: recommendation algorithms on ecommerce sites.
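
For readers who want a feel for what diversity-aware ranking looks like in practice, here is a toy greedy re-ranker loosely in the spirit of maximal marginal relevance; the topics, scores, and trade-off weight are invented for illustration and are not drawn from the paper mentioned above.

```python
# Toy greedy re-ranking for content diversity, loosely in the spirit of
# maximal marginal relevance. Topic labels, relevance scores, and the
# trade-off weight are invented for illustration.

def diversify(candidates, k=5, diversity_weight=0.5):
    """candidates: list of dicts with 'score' (relevance) and 'topic'."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def adjusted(item):
            # Penalize items whose topic is already well represented.
            seen = sum(1 for s in selected if s["topic"] == item["topic"])
            return item["score"] - diversity_weight * seen
        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return selected

feed = [
    {"title": "Partisan take 1", "topic": "politics", "score": 0.90},
    {"title": "Partisan take 2", "topic": "politics", "score": 0.88},
    {"title": "Local news story", "topic": "local", "score": 0.70},
    {"title": "Science explainer", "topic": "science", "score": 0.65},
    {"title": "Partisan take 3", "topic": "politics", "score": 0.60},
]
for item in diversify(feed, k=3):
    print(item["title"])
```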
