9

“A HOT TEMPER LEAPS O’ER A COLD DECREE”

I spoke in early 2017 at a gathering of ministers from the Organisation for Economic Co-operation and Development (OECD) and G20 nations to discuss the digital future. One of the German ministers confidently asserted over lunch, “The only reason that Uber is successful is because it doesn’t have to follow the rules.” Fortunately, I was not the one who had to ask the obvious question. One of the OECD officials asked, “Have you ever ridden in an Uber?” “No,” he admitted, “I have my own car and driver.”

Of course, if you’ve ever used a service like Uber or Lyft, you know that the experience is far better than it is with taxis in most jurisdictions. The drivers are polite and friendly; they all use Google Maps or Waze to find the most efficient way to their destination; while there is no meter, you can get an estimate of the fare in advance and a detailed electronic receipt within seconds of finishing the trip; and you never have to fumble for cash or a credit card when you want to pay. But most important, you have a car on call to pick you up wherever you are, just like that German minister, except at a fraction of the price he pays.

Over the years, I’ve had similarly frustrating conversations with others charged with regulating or litigating a new technology. For example, during the controversy about Google Book Search back in 2005, I was asked to debate a lawyer for the Authors Guild, which had sued Google for scanning books in order to create a searchable index of their content. Only snippets of the content were shown in the book search index, just like the snippets of text from websites that show up in the normal Google index. The actual content could be viewed only with the permission of the publisher, with the exception of books that were known to be in the public domain.

“Scanning the books means they are making an unauthorized copy,” she said. “They are stealing our content!” When I tried to explain that making a copy was an essential step in creating a search engine, and that Google Book Search worked exactly the same way as web search, it gradually dawned on me that she had no idea how Google Search worked. “Have you ever used Google?” I asked. “No,” she said, adding (I kid you not), “but people in my office have.”

The unintended consequences of simply trying to apply old rules and classifications in the face of a radically different model highlight the need for deeper understanding of technology on the part of regulators, and for fresh thinking on the part of both regulators and the companies they seek to regulate. Silicon Valley companies intent on “disruption” often see regulation as the enemy. They rail against regulations, or just ignore them. “A hot temper leaps o’er a cold decree,” as Shakespeare’s Portia put it in The Merchant of Venice.

Regulation is also the bête noire of today’s politics. “We have too much of it,” one side says; “We need more of it,” says the other. Perhaps the real problem is that we just have the wrong kind: a mountain of paper rules, inefficient processes, and little ability to adjust the rules or the processes when we discover the inevitable unintended consequences.

RETHINKING REGULATION

Consider, for a moment, regulation in a broader context. Your car’s electronics regulate the fuel-air mix in the engine to find an optimal balance of fuel efficiency and minimal emissions. An airplane’s autopilot regulates the countless factors required to keep that plane aloft and heading in the right direction. Credit card companies monitor and regulate charges to detect fraud and keep you under your credit limit. Doctors regulate the dosage of the medicine they give us, sometimes loosely, sometimes with exquisite care, as with the chemotherapy required to kill cancer cells while keeping normal cells alive, or with the anesthesia that keeps a patient unconscious during surgery while keeping vital processes going. Internet service providers and corporate mail systems regulate the mail that reaches their customers, filtering out spam and malware to the best of their ability. Search engines and social media sites regulate the results and advertisements they serve up, doing their best to give us more of what we want to see.

What do all these forms of regulation have in common?

1. A clear understanding of the desired outcome.

2. Real-time measurement to determine if that outcome is being achieved.

3. Algorithms (i.e., a set of rules) that make continuous adjustments to achieve the outcome.

4. Periodic, deeper analysis of whether the algorithms themselves are correct and performing as expected.
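Expressed in code, these four elements are simply a feedback loop. Here is a minimal sketch in Python; the function names and the thermostat-style example are invented purely for illustration:

```python
def regulate(measure, adjust, target, tolerance):
    """One pass of a closed-loop regulator: measure the outcome,
    compare it to the desired target, and adjust when it drifts.
    (Step 4, auditing the rule itself, happens offline.)"""
    outcome = measure()            # 2. real-time measurement
    error = outcome - target       # 1. a clearly defined desired outcome
    if abs(error) > tolerance:     # 3. rules that continuously adjust
        adjust(error)
    return outcome

# Toy usage: nudge a thermostat-like reading toward a 70-degree setpoint.
reading = [72.0]
regulate(measure=lambda: reading[0],
         adjust=lambda err: reading.__setitem__(0, reading[0] - 0.5 * err),
         target=70.0, tolerance=0.5)
print(reading[0])  # 71.0 -- one adjustment closer to the target
```

Everything from a car’s fuel injection to a spam filter is a variation on this loop; what differs is how the outcome is measured and how quickly the adjustments are made.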

There are a few cases—all too few—in which governments and quasi-governmental agencies regulate using processes similar to those outlined above. For example, central banks regulate the money supply in an attempt to manage interest rates, inflation, and the overall state of the economy. They have a target, which they try to reach by periodic small adjustments to the rules. Contrast this with the normal regulatory model, which focuses on the rules rather than the outcomes. How often have we faced rules that simply no longer make sense? How often do we see evidence that the rules are actually achieving the desired outcome?

The laws of the United States, and of most other countries, have grown mind-bogglingly complex. The Affordable Care Act was nearly two thousand pages long. By contrast, the Federal-Aid Highway Act of 1956, which led to the creation of the US Interstate Highway System, the largest public works project in history, was twenty-nine pages. The Glass-Steagall Act of 1933, which regulated banks after the Great Depression, was thirty-seven pages long. Its dismantling contributed to the 2008 financial crisis; the regulatory response this time, the Dodd-Frank Act of 2010, runs to 848 pages and calls for more than 400 additional rulemakings, in total adding up to as much as 30,000 pages of regulations.

Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly and clearly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated set of tools designed to achieve the outcomes specified in the laws.

Increasingly, in today’s world, this kind of responsive regulation is more than a metaphor. New financial instruments are invented every day and implemented by algorithms that trade at electronic speed. How can these instruments be regulated except by programs and algorithms that track and manage them in their native element in much the same way that Google’s search quality algorithms, Google’s “regulations,” manage the constant attempts of spammers to game the system? There are those who say that government should just stay out of regulating many areas, and let “the market” sort things out. But in the absence of proactive management, bad actors rush in to take advantage of the vacuum. Just as companies like Google, Facebook, Apple, Amazon, and Microsoft build regulatory mechanisms to manage their platforms, government exists as a platform to ensure the success of our society, and that platform needs to be well regulated.

As the near collapse of the world economy in 2008 demonstrated, regulatory agencies haven’t been able to keep up with the constant “innovations” of a financial sector pursuing profit without regard to the consequences. There are some promising signs, though. For example, in the wake of Ponzi schemes like those of Bernie Madoff and Allen Stanford, the SEC instituted algorithmic models that flag for investigation hedge funds whose results meaningfully outperform those of peers using the same stated investment methods. But once a fund is flagged, enforcement still goes into a long loop of investigation and negotiation, with problems dealt with on a haphazard, case-by-case basis. By contrast, when Google discovers that a new kind of spam is damaging search results, they can quickly change the rules to limit the effect of those bad actors. And those rules are automatically executed by the system in pursuit of its agreed-on fitness function.
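The SEC’s actual models are not public, but the shape of such a flag is simple enough to sketch. Everything below, from the fund names to the threshold, is hypothetical:

```python
from statistics import mean, stdev

def flag_outperformers(returns_by_fund, z_threshold=3.0):
    """Flag funds whose reported returns sit implausibly far above
    peers claiming the same strategy -- candidates for investigation,
    not proof of wrongdoing."""
    flagged = []
    for fund, r in returns_by_fund.items():
        # Compare each fund against all the others, so one extreme
        # outlier can't hide by inflating the group's own statistics.
        peers = [v for f, v in returns_by_fund.items() if f != fund]
        mu, sigma = mean(peers), stdev(peers)
        if sigma > 0 and (r - mu) / sigma > z_threshold:
            flagged.append(fund)
    return flagged

# Peers averaging ~6% a year while one fund reports a steady 18%:
reported = {"fund_a": 0.06, "fund_b": 0.05, "fund_c": 0.07,
            "fund_d": 0.06, "fund_x": 0.18}
print(flag_outperformers(reported))  # ['fund_x']
```

The flag itself is the easy part; the point is that what happens after the flag is raised remains slow and manual.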

We need to find more ways to make the consequences of bad action systemic, part of a high-velocity workflow akin to the way that Internet companies use DevOps to streamline and accelerate their internal business processes. This isn’t to say that we should throw out the concept of “due process” that is at the core of the Fifth Amendment, just that in many cases that process can be sped up enormously, and made fairer and clearer at the same time.

There are some important lessons from technology platforms. Despite the enormous complexity of the algorithmic systems used to manage platforms like Google, Facebook, and Uber, the fitness function of those algorithms is usually simple: Does the user find this information relevant, as evidenced by their propensity to click on it, and then go away? Does this user find this content engaging, as evidenced by their willingness to keep clicking on the next story? Is the user being picked up within three minutes? Does the driver have a rating above 4.5 stars?
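To make “fitness function” concrete, here is a toy version of the ride-hailing test just described; the field names and sample data are invented:

```python
def meets_fitness(ride):
    """Hypothetical per-ride fitness test in the spirit of the text:
    picked up within three minutes, by a driver rated above 4.5."""
    return ride["wait_minutes"] <= 3.0 and ride["driver_rating"] > 4.5

rides = [{"wait_minutes": 2.4, "driver_rating": 4.8},
         {"wait_minutes": 6.1, "driver_rating": 4.9}]
# Share of rides meeting the target: the number the platform tunes toward.
print(sum(meets_fitness(r) for r in rides) / len(rides))  # 0.5
```

However elaborate the machinery underneath, this is the kind of simple, measurable number the platform continuously optimizes.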

Outside regulators should focus on defining the desired outcome, and measuring whether or not it has been achieved. They should also diagnose the delta between the intended outcomes and the fitness function of the algorithms being used by those they aim to regulate. That is, are the participants incented to achieve the stated goal of the regulation, or are they incented to try to thwart it? The best regulations encourage the regulated party to take on the problem themselves. This is not “self-regulation” in the sense that government simply trusts the market to do the right thing. Instead, it is a matter of creating the right incentives. For example, the Fair Credit Billing Act of 1974 made consumers responsible for only $50 of any fraudulent credit card charges, making it in the industry’s own self-interest to police fraud aggressively.

Diego Molano Vega, the former minister of information technologies and communications in Colombia, told me how he’d used a similar approach to solve the chronic problem of dropped telephone calls by replacing a regime of fines and three-year-long investigations with a simple rule that telecom providers had to reimburse customers for the cost of every dropped call. After a year and $33 million in refunds, the problem was solved.

And of course, this is ultimately how Google regulated the problem of content farms, which produced content specifically designed to fool the search algorithms but that provided little value to users. Google didn’t assess penalties. They didn’t set detailed rules for what kind of content sites could publish. But by demoting these sites in the search results, they created consequences that led the bad actors to either improve their content or go out of business.

Andrew Haldane, the executive director for financial stability at the Bank of England, made a compelling case for simplicity in regulation in a 2012 talk to the Kansas City Federal Reserve called “The Dog and the Frisbee.” He pointed out that while precisely modeling the flight of a Frisbee and running to catch it requires complex equations, simple heuristics mean that even a dog can do it. He traces the failures of financial regulation that led to the 2008 crisis in large part to the growing complexity of the regulations themselves, which made them almost impossible to administer. The more complex the regulations, the less likely they are to succeed, and the more fragile they are in the face of changing conditions.

The modernization of how data is reported to both the government and the market is an important way of improving regulatory outcomes. When reporting is on paper or in opaque digital forms like PDF, or released only quarterly, it is much less useful. When data is provided in reusable digital formats, the private sector can aid in ferreting out problems as well as building new services that provide consumer and citizen value. There’s an entirely new field of regulatory technology, or RegTech, that uses software tools and open data for regulatory monitoring, reporting, and compliance.

Data-driven regulatory systems need not be as complex as those used by Google or credit card companies. The point is to measure the outcome, and to put any adverse consequences of divergence from the intended outcome on the appropriate parties. Too often, incentives and outcomes are not aligned. For example, government grants mobile phone carriers exclusive licenses to spectrum with the goal of creating reliable and universal access, yet spectrum licenses are auctioned off to the highest bidder. Is this approach giving the right outcome? The quality of mobile services in the United States would suggest otherwise. What if, instead, spectrum licenses were granted based on promises of maximum coverage? Much as Minister Molano Vega did for phone service in Colombia, rebates to customers for failures to live up to coverage promises could potentially create a much more self-regulating system.

THE ROLE OF SENSORS IN FUTURE REGULATION

Increasingly, our interactions with businesses, government, and the built environment are becoming digital, and thus amenable to creative forms of measurement, and ultimately to responsive regulation. For example, cameras mounted over highly trafficked intersections now routinely issue fines to motorists who run red lights or make illegal turns. With the rise of GPS, we are heading for a future where speeding motorists are no longer pulled over by police officers who happen to spot them, but instead are automatically ticketed whenever they exceed the speed limit.

We can also imagine a future in which that speed limit is automatically adjusted based on the amount of traffic, weather conditions, and other variable conditions that make a higher or lower speed more appropriate than the static limit that is posted today. The endgame might be a future of autonomous vehicles that are able to travel faster because they are connected in an invisible web, a traffic regulatory system that keeps us safer than today’s speed limits. Speed might be less important than the quality of the algorithm driving the car, the fact that the car has been updated to the latest version, and that it is equipped with adequate sensors. The goal, after all, is not to have cars go more slowly than they might otherwise, but to make our roads safe.

Congestion pricing on tolls, designed to reduce traffic to city centers, is another example. Smart parking meters have similar capabilities—parking can cost more at peak times, less off-peak, just like plane tickets or hotel rooms. But perhaps more important, smart parking meters can report whether they are occupied or not, and eventually give guidance to drivers and car navigation systems, reducing the amount of time spent circling aimlessly looking for a parking space.

As we move to a future with more electric vehicles, there are proposals to replace the gasoline taxes that currently fund road maintenance with fees based on miles driven—reported, of course, once again by GPS. Companies like Metromile already offer to base your insurance rates on how often and how fast you drive. It is only a small step further to do the same for taxes.

THE SURVEILLANCE SOCIETY

Living in a world of pervasive connected sensors calls into question our assumptions of privacy and other basic freedoms, but we are well on our way toward that world purely through commercial efforts. We are already being tracked by every site we visit on the Internet, through every credit card charge we make, through every set of maps and directions we follow, and by an increasing number of public and private surveillance cameras. Ultimately, science fiction writer David Brin got it right in his prescient 1998 nonfiction book, The Transparent Society. In an age of ubiquitous commercial surveillance that is intrinsic to the ability of companies to deliver on the services we ask for, the kind of privacy we enjoyed in the past is dead. Brin argues that the only way to respond is to make the surveillance two-way through transparency. To the Roman poet Juvenal’s question “Who will watch the watchers?” (“Quis custodiet ipsos custodes?”), Brin answers, “All of us.”

Security and privacy expert Bruce Schneier offers an important caveat to the transparent society, though, especially with regard to the collection of data by government. When there is a vast imbalance of power, transparency alone is not enough. “This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs,” he writes. “It’s not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible—when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power [of government], and is generally bad.”

We clearly need new norms about how data can be used both by private actors and by government. I love what Gibu Thomas, now the head of global commerce at PepsiCo, had to say when he was the head of digital innovation at Walmart. “The value equation has to be there. If we save them money or remind them of something they might need, no one says, ‘Wait, how did you get that data?’ or ‘Why are you using that data?’ They say, ‘Thank you!’ I think we all know where the creep factor comes in, intuitively.”

This notion of “the creep factor” should be central to the future of privacy regulation. When companies use our data for our benefit, we know it and we are grateful for it. We happily give up our location data to Google so they can give us directions, or to Yelp or Foursquare so they can help us find the best place to eat nearby. We don’t even mind when they keep that data if it helps them make better recommendations in the future. Sure, Google, I’d love it if you could do a better job predicting how long it will take me to get to work at rush hour. And yes, I don’t mind that you are using my search and browsing habits to give me better search results. In fact, I’d complain if someone took away that data and I suddenly found that my search results weren’t as good as they used to be.

But we also know when companies use our data against us, or sell it on to people who do not have our best interests in mind. If I don’t have equal access to the best prices on an online site because the site has determined that I have either the capacity or willingness to pay more, my data is being used unfairly against me. In one notable case, Orbitz was steering Mac users to higher-priced hotels than they offered to PC users. This is data used for “redlining,” so called because of the old practice of drawing a red line on the map to demarcate geographies where loans or insurance would be denied or made more costly because of location (often as a proxy for a racial profile). Political microtargeting with customized, misleading messages based on data profiling also definitely fails the creep factor test.

These people are privacy bullies, who take advantage of a power imbalance to peer into details of our private lives that have no bearing on the services from which that data was originally collected. Government regulation of privacy should focus on the privacy bullies, not on the routine possession and use of data to serve customers.

Regulators have to understand the fair boundary of the data transaction between the consumer and the service provider. It seems to me that insurance companies would be quite within their rights to offer lower rates to people who agree to drive responsibly, and to verify the consumer’s claims of how many miles they drive annually or whether they keep to the speed limit. But if my insurance rates suddenly spike because of data about formerly private legal behavior, like the risk profile of where I work or drive for personal reasons, I have reason to feel that my data is being used unfairly against me.

The right way to deal with data redlining is not to prohibit the collection of data, as so many privacy advocates seem to urge, but rather, to prohibit its misuse once companies have that data. As David Brin once said to me, “It is intrinsically impossible to know if someone does not have information about you. It is much easier to tell if they do something to you.”

Regulators should consider the possible harms to the people whose data is being collected, and work to eliminate those harms, rather than limiting the collection of the data itself. When people are denied health coverage because of preexisting conditions, that is their data being used against them; this harm was restricted by the Affordable Care Act. By contrast, the privacy rules in HIPAA, the 1996 Health Insurance Portability and Accountability Act, which put overly strict safeguards around the data itself rather than its use, have had a chilling effect on many kinds of medical research, as well as on patients’ access to their own data.

As was done with credit card fraud, regulators should look to create incentives for companies themselves to practice the right behavior. For example, liability for misuse of data sold on to third parties would discourage sale of that data. A related approach is shown by legal regimes such as that controlling insider trading: If you have material nonpublic information obtained from insiders, you can’t trade on that knowledge, while knowledge gained by public means is fair game.

Data aggregators, who collect data not to provide services directly to consumers but to sell to other businesses, should come in for particular scrutiny, since the data transaction between the consumer and the service provider has been erased, and it is far more likely that the data is being used not for the benefit of the consumer who originally provided it but for the benefit of the purchaser.

Disclosure and consent as currently practiced are extraordinarily weak regulatory tools. They allow providers to cloak malicious intent in complex legal language that is rarely read, and, if read, impossible to understand. Machine-readable disclosures similar to those designed by Creative Commons for expressing copyright intent would be a good step forward in building privacy-compliant services. A Creative Commons license allows those publishing content to express their intent clearly and simply, ranging from the “All Rights Reserved” of traditional copyright to a license like CC BY-NC-ND (which requires attribution, but allows the content to be shared freely for noncommercial purposes, and does not allow derivative works). Through a mix of four or five carefully crafted assertions, which are designed to be both machine and human readable, Creative Commons allows users of a photo-sharing site like Flickr or a video-sharing site like YouTube to search only for content matching certain licenses. An equivalent framework for privacy would be very helpful.
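No such privacy standard exists yet, but a sketch suggests how little machinery it would require. The assertion names below are invented, not drawn from any real specification:

```python
# A hypothetical Creative-Commons-style privacy "license": a few
# machine-readable assertions a user could attach to their data.
my_terms = {
    "use_to_deliver_service": True,
    "retain_for_personalization": True,
    "use_for_ad_targeting": False,
    "sell_to_third_parties": False,
    "share_with_researchers": True,
}

def use_permitted(terms, proposed_use):
    """A service checks a proposed use against the user's assertions;
    anything not explicitly granted is denied."""
    return terms.get(proposed_use, False)

print(use_permitted(my_terms, "sell_to_third_parties"))       # False
print(use_permitted(my_terms, "retain_for_personalization"))  # True
```

As with Creative Commons, the hard part is not the code but agreeing on a small, standard vocabulary of assertions that both humans and machines can read.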

During the Obama administration, there was a concerted effort toward what is called “Smart Disclosure,” defined as “the timely release of complex information and data in standardized, machine readable formats in ways that enable consumers to make informed decisions.” New technology like the blockchain can also encode contracts and rules, creating new kinds of “smart contracts.” A smart contracts approach to data privacy could be very powerful. Rather than using brute force “Do Not Track” tools in their browser, users could provide nuanced limits to the use of their data. Unlike paper disclosures, digital privacy contracts could be enforceable and trackable.

As we face increasingly automated systems for enforcing rules, though, it is essential that it be possible to understand the criteria for a decision. In a future of what some call “algocracy”—rule by algorithm—where algorithms are increasingly used to make real-world decisions, from who gets a mortgage and who doesn’t, to how to allocate organs made available for donation, to who gets out of jail and who doesn’t, concern for fairness demands that we have some window into the decision-making process.

If, like me, you’ve ever been caught going through a red light by an automated traffic camera, you know that algorithmic enforcement can seem quite fair. I was presented with a time-stamped image of my car entering the intersection after the light had changed. No argument.

Law professor Tal Zarsky, writing on the ethics of data mining and algorithmic decision making, argues that even when software makes a decision based on thousands of variables, and the most that the algorithm creator could say is “this is what the algorithm found based on previous cases,” there is a requirement for interpretability. If we value our human freedom, it must be possible to explain why an individual was singled out to receive differentiated treatment based on the algorithm.

As we head into the age of increasingly advanced machine learning, though, this may be more and more difficult to do. If we are not explicit about what regulatory regime—inherited or, optimally, to be devised—shall apply, expect lawsuits down the line.

REGULATION MEETS REPUTATION

It is said that “that government is best which governs least.” Unfortunately, evidence shows that this isn’t true. Without the rule of law, capricious power sets the rules, usually to the benefit of a powerful few. What people really mean by “governs least” is that the rules are aligned with their interests. In an economy tuned to the interests of the few, the rules are often unfair to the rest. An economy tuned to the interests of the majority may seem unfair to some, but John Rawls’s “veil of ignorance”—the idea that the best rules for a political or economic order are those that would be chosen by people who had no prior knowledge of their place in that order—is a convincing argument that that government is best that governs for most.

That, as it turns out, is also the lesson of technology platforms. As we saw with TCP/IP, the rules should ideally be intrinsic to the design of the platform, not something added to it. But as long as the rules, however complex, are aligned with the simple interests of the participants, as is the case with Google’s quest for relevance, regulation becomes largely invisible. Things just appear to work.

Reputation systems are one way that regulation is built into the design of online platforms. Amazon has consumer ratings for every one of millions of products, helping consumers make informed decisions about which products to buy. Sites like Yelp and Foursquare provide extensive consumer reviews of restaurants; those that provide poor food or service are flagged by unhappy customers, while those that excel are praised. TripAdvisor and other similar sites have had a similar effect in helping travelers discover the best places to stay in remote places around the world. These reviews help the sites to algorithmically rank the products or services that users are most likely to be satisfied with.

eBay, which grew out of Pierre Omidyar’s quest to create a perfect marketplace, was a pioneer in reputation systems. eBay was faced with enormous challenges. Unlike Amazon, which began by selling products from familiar vendors, and was therefore just an online version of something familiar—a bookstore—eBay was the online version of a worldwide garage sale or swap meet, where the trust that is engendered by existing brands is absent.

In their paper “Trust Among Strangers in Internet Transactions: Empirical Analysis of eBay’s Reputation System,” economists Paul Resnick and Richard Zeckhauser point out that customers of an online auction site can’t inspect the goods and make their own determination as to their quality; they rarely have repeated interactions with the same seller; and they can’t learn about the seller from friends or neighbors. Especially in the early days, photographs and descriptions were often unprofessional, and little or nothing was known about the sellers. Not only was there risk that items were not as shown, or might be counterfeit, but there was a risk that they might never be delivered. And in 1995, when eBay and Amazon were established, using a credit card on the Internet was itself widely considered an unacceptable risk.

So, in addition to building a network of buyers and sellers, eBay had to build mechanisms for helping buyers and sellers to trust one another. The eBay reputation system, in which customers rated vendors and vendors rated customers, was one of their answers. It was widely emulated.

David Lang summarized the Internet’s journey toward trust in a Medium post about the success of education crowdfunding site DonorsChoose. He points out that traditional charities typically give funds only to established nonprofits, in large chunks, usually with a great degree of oversight. By contrast, DonorsChoose allows individual teachers to advertise classroom needs, which can be met by either individuals or institutions. Describing other examples where technology has enabled trust, Lang wrote: “The novelty isn’t the financial transaction—room renting, car sharing and art patronage has been around for centuries—the novelty is rather in the level of trust we’re willing to extend to strangers because the apps and algorithms provide a filter.”

As the battles of companies like Uber, Lyft, and Airbnb with regulators demonstrate, though, the journey toward trust requires more than just getting consumers on board. Logan Green told me that Lyft’s original approval for peer-to-peer car-hire services from the California Public Utilities Commission was based on the argument that they could use technology to provide many of the same benefits as traditional taxi regulation. Passenger safety was of paramount importance to the CPUC. One key regulator, a former military officer known simply as “the General,” reportedly said, “Nobody dies on my watch!” Logan said that his team was able to persuade the CPUC that the tracking of the ride via GPS, the reputation system, and careful vetting of the drivers were an effective way of meeting their mutual goals. “Safety is the most important thing to our users too,” Logan told me. “So we said, ‘Let’s nail it!’”

But in many jurisdictions, reputation systems and traditional regulations are still on a collision course. Ostensibly, taxis are regulated to protect the quality and safety of the consumer experience, as well as to ensure that there is an optimal number of vehicles providing service at the time they are needed. In practice, most of us know that these regulations do a poor job of ensuring quality or availability. A strong argument can be made that the reputation system used by Uber and Lyft, by which passengers are required to rate their drivers after each ride, does a better job of weeding out bad actors. Certainly, I’ve had taxi drivers who would never have been able to offer a ride again if it were as easy to file a taxi complaint as it is to give a one-star rating.

However, this has not stopped opponents of the new services from claiming that the drivers provided by Uber and Lyft have been insufficiently vetted. While all of the new services perform driver background checks before drivers are allowed to offer rides, opponents argue that the checks are not strenuous enough because they don’t require fingerprinting and FBI criminal background checks, an onerous and time-consuming step that, from the point of view of Uber and Lyft, is undesirable because it would limit the participation of the part-time and occasional drivers who provide a majority of the service on these new platforms. Uber and Lyft feel so strongly about this issue that they actually pulled their services from the city of Austin after it required fingerprinting and full FBI checks. Both companies claim that the background checks they perform, using a third-party service, actually provide better data on drivers.

In any event, it turns out that existing regulations for licensing drivers provided two intertwined functions: ensuring the quality of drivers and, for a number of reasons, limiting the supply. According to Steven Hill, author of Raw Deal, a critical book about Uber, the first “taxi” regulations were promulgated in 1635 by King Charles I of England, who ordered that all vehicles on the streets of London needed to be licensed “to restrain the multitude and promiscuous use of coaches.” The same thing happened in the United States during the Great Depression. People were desperate for work, and cars-for-hire clogged the streets. A US Department of Transportation report later described the situation in 1933: “The excess supply of taxis led to fare wars, extortion and a lack of insurance and financial responsibility among operators and drivers. Public officials and the press in cities across the country cried out for public control over the taxi industry.” As a result, cities imposed limits on the number of taxis using a “medallion” system. They awarded only a limited number of licenses to commercial drivers, and issued regulations on fares, insurance, vehicle safety inspections, and driver background checks.

This brief history illuminates how easy it is to mix up means and ends. If the problem is framed as “the multitude and promiscuous use of coaches,” as King Charles I put it, limiting the number of licensed coaches looks functionally equivalent to the actual objective, which is eliminating congestion and pollution. (In 1635, horse manure was the equivalent of twentieth-century smog.) If, as that DOT report described, the excess of supply in 1933 led to fare wars where no driver could make a decent living, thus leading to a decline in safety and lack of insurance on the part of drivers, the one-time solution, limiting the number of drivers and subjecting them to mandatory inspections, becomes a goal in and of itself. But to echo the refrain in Stephen King’s Dark Tower, “The world has moved on,” and perhaps there are now better solutions.

While there are still risks of bad drivers, and critics have made the most of crimes committed by Uber drivers, the fact that every Uber ride is tracked in real time, with the exact time, location, route, and the identity of both the driver and the passenger known, makes an Uber or Lyft ride inherently safer than a taxi ride. And the use of post-ride ratings by both passenger and driver helps, over time, to weed bad actors out of the system. Hal Varian put this in the broader context of how computer-mediated transactions change the regulatory game. “The entire transaction is monitored. If something goes wrong with the transaction, you can use the computerized record to find what went wrong.”

And as to congestion, while the current algorithm is optimized to create shorter wait times, there is no reason it couldn’t take into account other factors that improve customer satisfaction and lower cost, such as the impact of too many drivers on congestion and wait time. Algorithmic dispatch and routing is in its early stages; to think otherwise is to believe that the evolution of Google Search ended in 1998 with the invention of PageRank. For this multi-factor optimization to work, though, Uber and Lyft have to make a deep commitment to evolving their algorithms to take into account all of the stakeholders in their marketplace. It is not clear that they are doing so.

Understanding the differences between means and ends is a good way to help untangle the regulatory disagreements between the TNCs (transportation network companies) and taxi and limousine regulators. Both parties want enough safe, qualified drivers available to meet the needs of any passenger who wants a ride, but not so many drivers that drivers don’t make enough money to keep up their cars and give good service. The regulators believe that the best way to achieve these objectives is to limit the number of drivers, and to certify those drivers in advance by issuing special business licenses. Uber and Lyft believe that their computer-mediated marketplace achieves the same goals more effectively. Surely it should be possible to evaluate the success or failure of these alternative approaches using data.

As discussed in Chapter 7, there is a profound cultural and experiential divide here between Silicon Valley companies and government that is part of the problem. In Silicon Valley, every new app or service starts out as an experiment. From the very first day a company is funded by venture capitalists, or launches without funding, its success is dependent on achieving key metrics such as user adoption, usage, or engagement. Because the service is online, this feedback comes in near-real time. In the language of Eric Ries’s popular Lean Startup methodology, the first version is referred to as “minimum viable product (MVP),” defined as “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” The goal of every entrepreneur is to grow that MVP incrementally till it finds “product-market fit,” resulting in explosive growth.

This mindset is taught to every entrepreneur. Once an app or service is launched, new features are added and tested incrementally. Not only is the usage of the features measured, and those that are not adopted by users silently dropped or rethought, but different versions of each feature—the placement or size of a button, messaging, or graphic—are tested against random samples of users to see which version works better. Feedback loops are tight, and central to the success of the service.

By contrast, despite the changes beginning under the Obama administration that were described in Chapter 7, lawmakers and government regulators are accustomed to considering a topic, taking input from stakeholders in public meetings (and too often, in private meetings with lobbyists), making a considered decision, and then sticking with it. Measurement of the outcome, if it happens at all, perhaps comes in the form of an academic study years after the event, with no clear feedback into the policy-making process. I once came across a multimillion-dollar project to build a job search engine for veterans that had managed to reach only a few hundred users but was about to have its contract renewed. I asked a senior government official who had overseen the project whether they ever did the math to understand what their cost was for each user. “That would be a good idea,” she said. A good idea? Any Silicon Valley entrepreneur who couldn’t answer that question would be laughed out of the room. Tom Loosemore, the former chief operating officer of the UK Government Digital Service, speaking at the 2015 Code for America Summit, noted that the typical government regulatory framework represents “500 pages of untested assumptions.”

Government technology procurement processes echo this same approach. A massive specification is written, encapsulating everyone’s best thinking, and spelling out every detail of the implementation so that it can be put out to bid. The product typically takes years to develop, and the first time its assumptions are tested is when it is launched. (Note that while this may sound similar to the Amazon “working backwards” approach, it is actually very different. Amazon asks those tasked with doing the work to imagine the intended user experience, not to specify all of the implementation details in advance. As they build the actual product or service, they continue to learn and refine their ideas.)

Now, to be fair, many (though far from all) of the things that government regulates have far higher stakes than a consumer app. “Move fast and break things,” Mark Zuckerberg’s famous admonition to his developers at Facebook, hardly applies to the design of bridges, air traffic control, the safety of the food supply, or many of the other things that government regulates. Government also must be inclusive, serving all residents of the country, not just a highly targeted set of users. Nonetheless, there is a great deal for government to learn from the iterative development processes of modern digital organizations.

“Regulatory capture,” the process by which the industries a regulation is meant to govern come to shape and manage it for their own benefit, accelerates the confusion. I once had a conversation with former Speaker of the House Nancy Pelosi about a piece of legislation (the Stop Online Piracy Act, or SOPA). I told her, based on data from my company’s publishing business, that online piracy was less of a problem than proponents of the bill were claiming. She didn’t ask to look at my data, nor did she counter that proponents of the bill had offered different data. She said, “Well, we have to balance the interests of Silicon Valley against the interests of Hollywood.”

I was shocked. It is as if Google’s Search Quality team had sat down with representatives of search spammers and agreed to set aside a third of the top results in order to preserve their business model. In my mind, the job of our representatives is not to balance the interests of various lobbying groups, but to gather data and make an informed decision on behalf of the public.

I’m not saying that Silicon Valley always gets it right—it certainly doesn’t get it right the first time—and government doesn’t always get it wrong. While government is often too responsive to lobbyists, its fundamental goal is to look out for the interests of the public, including populations that would otherwise be ignored.

Getting extremely specific about the objectives of any regulation allows for a franker, more productive discussion. Both sides can debate the correct objectives. And when they have come to an agreement, they can start to look at alternative ways to achieve them, as well as how to measure whether or not they have succeeded. They should also define a process for modifying the regulation in response to what is learned through that measurement. And there must be a mechanism for resolving conflicts between overlapping regulations. If it is a complex regulation, this process should be followed for each subcomponent. The lessons about modularity from Jeff Bezos’s platform memo are surprisingly relevant to the design of regulations as well as to platforms and modern technology organizations.

In this regard, I was heartened by the National Highway Traffic Safety Administration’s 2016 guidance on regulation for self-driving cars. It lays out a clear set of objectives, organized in such a way that they can be measured. It breaks up its guidance by what it calls Operational Design Domain (ODD)—a set of constraints for which competency needs to be demonstrated: roadway types, geographical location, speed range, lighting conditions for operation (day and/or night), weather conditions, and other operational domain constraints. It highlights the need for measurement: “Tests should be developed and conducted that can evaluate (through a combination of simulation, test track or roadways) and validate that the Highly Automated Vehicle system can operate safely with respect to the defined ODD and has the capability to fall back to a minimal risk condition when needed.”

When you focus on outcomes rather than rules, you can see that there are multiple ways to achieve comparable outcomes, and sometimes new ways that provide better outcomes. Which approach is best should be informed by data.

Unfortunately, it isn’t just government that is unwilling or unable to put its data on the table. Companies like Uber, Lyft, and Airbnb jealously guard much of their data for fear that it will give away trade secrets or relative marketplace traction to competitors. Instead, they should open up more data to academics as well as to regulators trying to understand the impact of on-demand transportation on cities. Nick Grossman, who leads Union Square Ventures’ efforts on public policy, regulatory, and civic issues, argues that open data may be the solution to Uber’s many debates with regulators. He makes the case that “regulators need to accept a new model where they focus less on making it hard for people to get started.” Relaxing licensing requirements and increasing the freedom to operate means more people can participate, and companies can experiment more freely. But “in exchange for that freedom to operate,” Nick continues, “companies will need to share data with regulators—un-massaged, and in real time, just like their users do with them. AND, will need to accept that that data may result in forms of accountability.”

Open data could help lay to rest other persistent questions about Uber’s market-based approach. For example, Uber claims that lowering prices does not affect driver income, but drivers say that they have to work longer in order to make the same amount, and that too many drivers are increasing their wait times between pickups. This shouldn’t be a matter of claim and counterclaim, because the answer to that question is to be found in data Uber has on its servers. Open data is a great way for everyone to better understand how well the system is working. Open data would also help cities to understand the impact of on-demand car services on overall congestion, and make it much easier to evaluate Airbnb’s impact on housing availability and affordability. It is a shame that cities and platform companies are not working together more proactively, using data to craft better outcomes for both sides.

WORKERS IN A WORLD OF CONTINUOUS PARTIAL EMPLOYMENT

There is no better demonstration of how outdated maps shape public policy, labor advocacy, and the economy than in the debate over whether Uber and Lyft drivers (and workers for other on-demand startups) should be classified as “independent contractors” or “employees.” In the world of US employment law, an independent contractor is a skilled professional who provides his or her services to multiple customers as a sole proprietor or small business. An employee provides services to a single company in exchange for a paycheck. Most on-demand workers seem to fall into neither of these two classes.

Labor advocates point out that the new on-demand jobs have no guaranteed wages, and hold them in stark contrast to the steady jobs of the 1950s and 1960s manufacturing economy that we now look back to as a golden age of the middle class. Yet if we are going to get the future right, we have to start with an accurate picture of the present, and understand why those jobs are growing increasingly rare. Outsourcing is the new corporate norm. That goes way beyond offshoring to low-wage countries. Even for service jobs within the United States, companies use “outsourcing” to pay workers less and provide fewer benefits. Think your hotel housekeeper works for Hyatt or Westin? Chances are good they work for Hospitality Staffing Solutions. Think those Amazon warehouse workers who pack your holiday gifts work for Amazon? Think again. It’s likely Integrity Staffing Solutions. This allows companies to pay rich benefits and wages to a core of highly valued workers, while treating others as disposable components. Perhaps most perniciously, many of the low-wage jobs on offer today not only fail to pay a living wage, but they provide only part-time work.

Which of these scenarios sounds more labor friendly?

Our workers are employees. We used to hire them for eight-hour shifts. But we are now much smarter and are able to lower our labor costs by keeping a large pool of part-time workers, predicting peak demand, and scheduling workers in short shifts. Because demand fluctuates, we keep workers on call, and only pay them if they are actually needed. What’s more, our smart scheduling software makes it possible to make sure that no worker gets more than 29 hours, to avoid triggering the need for expensive full-time benefits.

or

Our workers are independent contractors. We provide them tools to understand when and where there is demand for their services, and when there aren’t enough of them to meet demand, we charge customers more, increasing worker earnings until supply and demand are in balance. We don’t pay them a salary, or by the hour. We take a cut of the money they earn. They can work as much or as little as they want until they meet their income goals. They are competing with other workers, but we do as much as possible to maximize the size of the market for their services.

The first of these scenarios summarizes what it’s like to work for an employer like Walmart, McDonald’s, the Gap, or even a progressive low-wage employer like Starbucks. Complaints from workers include lack of control over schedule even in case of emergencies, short notice of when they are expected to work, unreasonable schedules known as “clopens” (for example, the same worker being required to close the store at 11 p.m. and open it again at 4 a.m. the next day—a practice that Starbucks only banned in mid-2014, and that is still in place at many retailers and fast-food outlets), “not enough hours,” and a host of other labor woes.

The second scenario summarizes the labor practices of Uber and Lyft. Talk to many drivers, as I have, and they tell you that they mostly love the freedom the job provides to set their own schedule, and to work as little or as much as they want. This is borne out by a study of Uber drivers by economists Alan Krueger of Princeton and Jonathan Hall, now an economist at Uber. Fifty-one percent of Uber drivers work fewer than 15 hours a week, to generate supplemental income. Others report working until they reach their target income. Seventy-three percent say they would rather have “a job where you choose your own schedule and be your own boss” than “a steady 9-to-5 job with some benefits and a set salary.”

Managing a company with workers who are bound to no schedule but simply turn on an app when they want to work and who compete with other workers for whatever jobs are available requires a powerful set of algorithms to make sure that the supply of workers and customers is in dynamic balance.

Traditional companies have also always had a need to manage uneven labor demand. In the past, they did this by retaining a stable core of full-time workers to meet base demand and a small group of part-time contingent workers or subcontractors to meet peak demand. But in today’s world, this has given way to a kind of continuous partial employment for most low-wage workers at large companies, where workplace scheduling software from vendors like ADP, Oracle, Kronos, Reflexis, and SAP lets retailers and fast-food companies build larger-than-needed on-demand labor pools to meet peak demand, and then parcel out the work in short shifts and in such a way that no one gets full-time hours. This design pattern has become the dominant strategy for managing low-wage workers in America. According to a management survey by Susan Lambert of the University of Chicago, by 2010, 62% of retail jobs were part-time and two-thirds of retail managers preferred to maintain a large part-time workforce rather than to increase hours for individual workers.

The advent of scheduling software enabled this trend. As Esther Kaplan of the Investigative Fund describes it in her Harper’s article “The Spy Who Fired Me,”

In August 2013, less than two weeks after the teen-fashion chain Forever 21 began using Kronos, hundreds of full-time workers were notified that they’d be switched to part-time and that their health benefits would be terminated. Something similar happened last year at Century 21, the high-fashion retailer in New York. . . . Within the space of a day, Colleen Gibson’s regular schedule went up in smoke. She’d been selling watches from seven in the morning to three-thirty in the afternoon to accommodate evening classes, but when that availability was punched in to Kronos, the system no longer recognized her as full-time. Now she was getting no more than twenty-five hours a week, and her shifts were erratic. “They said if you want full hours, you have to say you’re flexible,” she told me.

That is, both traditional companies and “on demand” companies use apps and algorithms to manage workers. But there’s an important difference. Companies using the top-down scheduling approach adopted by traditional low-wage employers have used technology to amplify and enable all the worst features of the current system: shift assignment with minimal affordances for worker input, and limiting employees to part-time work to avoid triggering expensive health benefits. Cost optimization for the company, not benefit to the customer or the employee, is the guiding principle for the algorithm.

By contrast, Uber and Lyft expose data to the workers, not just the managers, letting them know about the timing and location of demand, and letting them choose when and how much they want to work. This gives the worker agency, and uses market mechanisms to get more workers available at periods of peak demand or at times or places where capacity is not normally available.

When you are drawing a map of new technologies, it’s essential to use the right starting point. Much analysis of the on-demand or “gig” economy has focused too narrowly on Silicon Valley without including the broader labor economy. Once you start drawing a map of “workers managed by algorithm” and “no guarantee of employment,” you come up with a very different sense of the world.

Why do we regulate labor? In an interview with Lauren Smiley, Tom Perez, secretary of labor during the Obama administration, highlighted that the most important issue is whether or not workers make a living wage. The Department of Labor’s Wage and Hour Division head David Weil put it succinctly: “We have to go always back to first principles of who are we trying to protect and how the people emerging in these new jobs fall on that spectrum.”

At first blush, it would seem that being an employee has many benefits. But there is a huge gulf between the benefits often provided to full-time employees and those provided to part-time employees. And that has led to what I call “the 29-hour loophole.” Unscrupulous managers can set the business rules for automated scheduling software to make sure that no worker gets more than 29 hours in a given week. Because employment law allows different classes of benefits for part-time and full-time workers, with the threshold at 30 hours per week, this loophole allows core staff at the company to be given generous benefits while the low-wage contingent workers get the bare-bones version. Once you realize this, you understand the potentially damaging effect of current labor regulations not just for new Silicon Valley companies but also for their workers. Turn on-demand workers from 1099 contractors into W2 employees, and the most likely outcome is that they go from having the opportunity to work as much as they like for a platform like Uber or TaskRabbit to being kept from working more than 29 hours a week. This is in fact exactly what happened when Instacart converted some of its on-demand workers to employees: they became part-time employees.
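The mechanics of the loophole are worth seeing, because the cap is literally a one-line business rule in the scheduler. Here is a deliberately simplified sketch; real workforce-management systems are far more elaborate, but the incentive is the same:

```python
FULL_TIME_THRESHOLD = 30  # hours/week at which full-time benefits kick in
WEEKLY_CAP = 29           # the "business rule": keep everyone just under it

def assign_shifts(shift_lengths, workers):
    """Greedy scheduler over a deliberately large part-time pool:
    fill each shift with the first worker who stays under the cap."""
    hours = {w: 0 for w in workers}
    schedule = []
    for length in shift_lengths:
        for w in workers:
            if hours[w] + length <= WEEKLY_CAP:
                hours[w] += length
                schedule.append((w, length))
                break
    return schedule, hours

# Sixty hours of work (15 four-hour shifts) spread across three workers:
_, hours = assign_shifts([4] * 15, ["ana", "bo", "cam"])
assert all(h < FULL_TIME_THRESHOLD for h in hours.values())
print(hours)  # {'ana': 28, 'bo': 28, 'cam': 4} -- no one reaches 30
```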

(Even before the advent of computerized shift-scheduling software, companies played shell games with employee pay and benefits. I remember student protests at Harvard that my daughter was part of in 2000 focused on the unfair treatment of janitors and other maintenance personnel. “You’re not a full-time employee and aren’t eligible for full-time benefits,” janitors were told. “You don’t work 40 hours for Harvard University. You work 20 hours for Harvard College, and 20 hours for the Harvard Law School.”)

Perhaps as pernicious as the 29-hour cap itself, the capricious schedules handed out by traditional low-wage employers and the lack of visibility into future working hours mean that workers can’t effectively schedule hours for a second job. They can’t plan their lives, their childcare, or a short vacation, or even know whether they will be able to be present for their children’s birthdays. By contrast, workers for on-demand services can work as many hours as they like—many report working until they reach their desired income for the week, rather than some set number of hours—and, equally important, they work when they want. Many report that the flexibility to take time off to deal with childcare, health issues, or legal issues is the most important part of what they like about the job.

It is essential to look through the labels—employee and independent contractor—and examine the underlying reality that they point to. So often, we live in the world of labels and associated value judgments and assumptions, and forget to reduce our intellectual equation to the common denominators. As Alfred Korzybski so memorably wrote, we must remember that “the map is not the territory.”

When you put yourself into the mapmaker’s seat, rather than simply taking the existing map as an accurate reflection of an unchanging reality, you begin to see new possibilities. The rules that we follow as a society must be updated when the underlying conditions change. The distinction between employees and subcontractors doesn’t really make sense in the on-demand model, which extends subcontractor-like freedoms to workers who come and go at their own option, and in which employee-based overtime rules would prohibit workers from maximizing their income.

Professor Andrei Hagiu, writing in Harvard Business Review, and venture capitalist Simon Rothman, writing on Medium, both argue that we need to develop a new classification for workers—we might call them “dependent contractors.” This new classification might allow some of the freedoms of independent contractors, while adding some of the protections afforded to employees. Nick Hanauer and David Rolf go further, arguing that just as technology allows us to deploy workers without the overhead of traditional command-and-control employment techniques, it also could let us provide traditional benefits to part-time workers. There is no reason that we couldn’t aggregate the total amount worked across a number of employers, and ask each of them to contribute proportionally to a worker’s account. Hanauer and Rolf call this a “Shared Security Account” in conscious echo of the safety net of a Social Security account.

A similar policy proposal for portable benefits comes from Steven Hill at New America. Hanauer, Rolf, and Hill all suggest that we decouple benefits like workers’ compensation, employer contributions to Social Security and Medicare taxes, and holiday, sick, and vacation pay from employers, and instead associate them with the employee, erasing much of the distinction between the 1099 independent contractor and the W2 employee. Given today’s technology, this is a solvable problem. It would be entirely possible to allocate benefits across multiple employers. It shouldn’t matter if I work 29 hours for McDonald’s and 11 for Burger King, if both are required to contribute pro rata to my benefits.
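The arithmetic is as simple as it sounds. As a sketch, with a contribution rate invented purely for illustration:

```python
def portable_contributions(hours_by_employer, hourly_benefit_rate=2.00):
    """Pro-rata 'Shared Security Account' math: each employer funds
    benefits in proportion to the hours it actually used.
    (The $2/hour rate is invented for illustration.)"""
    return {employer: round(hours * hourly_benefit_rate, 2)
            for employer, hours in hours_by_employer.items()}

# A 40-hour week split across two fast-food employers:
week = {"mcdonalds": 29, "burger_king": 11}
print(portable_contributions(week))
# {'mcdonalds': 58.0, 'burger_king': 22.0} -- a full week's benefits,
# funded proportionally by both employers
```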

However, none of these proposals have solved the deeper dynamics that drive companies to use the 29-hour loophole. It isn’t basic payroll taxes that lead companies to want to have two classes of workers. It is healthcare to start with (a single-payer system would solve that problem, as well as many others), but also other “Cadillac” benefits that companies wish to lavish on their most prized workers but not on everyone. More powerfully, it is the notion that workers are just a cost to be eliminated rather than an asset to be developed. Ultimately, the segregation of workers into privileged and unprivileged classes, and the moral and financial calculus that drives that segregation, has to stop. Over time, we will realize that this is an existential imperative for our economy, not just a moral imperative.

It will take much deeper thinking (and forceful and focused activism) to come up with the right incentives for companies to understand and embrace the value of taking care of all their workers on an equal footing. Zeynep Ton’s Good Jobs Strategy is a good place to start. Ton outlines the common principles that make companies as diverse as Costco and Google great places to work. As Harvard Business School lecturer and former CEO of Stop & Shop José Alvarez writes, “Zeynep Ton has proven what great leaders know instinctively—an engaged, well-paid workforce that is treated with dignity and respect creates outsized returns for investors. She demonstrates that the race to the bottom in retail employment doesn’t have to be the only game being played.” Economists have long recognized this phenomenon. They call wages higher than the lowest that the market would otherwise offer “efficiency wages.” That is, they represent the wage premium that an employer pays for reduced turnover, higher employee quality, lower training costs, and many other significant benefits.

In Chapters 11 and 12, we’ll look at the key drivers of the race to the bottom in wages, and why we need to rewrite the rules of business. But even without radically changing the game, businesses can gain enormous tactical advantage by better understanding how to improve the algorithms they use to manage their workers, and by providing workers with better tools to manage their time, connect with customers, and do all of the other things they do to deliver improved service.

Algorithmic, market-based solutions to wages in on-demand labor markets provide a potentially interesting alternative to minimum-wage mandates as a way to increase worker incomes. Rather than cracking down on the new online gig economy businesses to make them more like twentieth-century businesses, regulators should be asking traditional low-wage employers to provide greater marketplace liquidity via data sharing. The skills required to work at McDonald’s and Burger King are not that dissimilar; ditto Starbucks and Peet’s, Walmart and Target, or the AT&T and Verizon stores. Letting workers swap shifts or work on demand at competing employers would obviously require some changes to management infrastructure, training, and data sharing between employers. But given that most scheduling is handled by standard software platforms, and that payroll is also handled by large outsourcers, many of whom provide services to the same competing employers, this seems like an intriguingly solvable problem.

The algorithm is the new shift boss. What regulators and politicians should be paying attention to is the fitness function driving the algorithm, and whether the resulting business rules increase or decrease the opportunities offered to workers, or whether they are simply designed to increase corporate profits.

In the next two chapters, we’ll look at how the same flawed fitness function is driving media and finance, and how the speed and scale of digital platforms is algorithmically amplifying that flaw.
