Ethical Social Media: Oxymoron or Attainable Goal?
It's Time to Look More Closely at Regulation
Humans have wrestled with ethics for millennia. Each generation spawns a fresh batch of ethical dilemmas and then wonders how to deal with them.
For this generation, social media has generated a vast set of new ethical challenges, which is unsurprising when you consider the degree of its influence. Social media has been linked to health risks in individuals and political violence in societies. Despite growing awareness of its potential for causing harm, social media has received what amounts to a free pass on unethical behavior.
Minerva Tantoco, who served as New York City’s first chief technology officer, suggests that “technology exceptionalism” is the root cause. Unlike the rapacious robber barons of the Gilded Age, today’s tech moguls were viewed initially as eccentric geeks who enjoyed inventing cool new products. Social media was perceived as a harmless timewaster, rather than as a carefully designed tool for relentless commerce and psychological manipulation.
“The idea of treating social media differently came about because the individuals who started it weren’t from traditional media companies,” Tantoco says. “Over time, however, the distinction between social media and traditional media has blurred, and perhaps the time has come for social media to be subject to the same rules and codes that apply to broadcasters, news outlets and advertisers. Which means that social media would be held accountable for content that causes harm or violates existing laws.”
Ethical standards that were developed for print, radio, television, and telecommunications during the 20th century could be applied to social media. “We would start with existing norms and codes for media generally and test whether these existing frameworks and laws would apply to social media,” Tantoco says.
Taking existing norms and applying them, with modifications, to novel situations is a time-honored practice. “When e-commerce web sites first started, it was unclear if state sales taxes would apply to purchases,” Tantoco says. “It turned out that online sales were not exempt from sales taxes and that rules that had been developed for mail-order sites decades earlier could be fairly applied to e-commerce.”
Learning from AI
Christine Chambers Goodman, a professor at Pepperdine University’s Caruso School of Law, has written extensively on the topic of artificial intelligence and its impact on society. She sees potential in applying AI guidelines to social media, and she cited the European Commission’s High-Level Expert Group on Artificial Intelligence’s seven key ethical requirements for trustworthy AI:1
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
The commission’s proposed requirements for AI would be a good starting point for conversations about ethical social media. Ideally, basic ethical components would be designed into social media platforms before they are built. Software engineers should be trained to recognize their own biases and learn specific techniques for writing code that is inherently fair and non-discriminatory.
“It starts with that first requirement of human agency and oversight,” Goodman says. If ethical standards are “paramount” during the design phase of a platform, “then I see some room for optimism.”
Colleges and universities also can play important roles in training a new generation of ethical software engineers by requiring students to take classes in ethics, she says.
Economic Fairness and Equity
Social media companies are private business entities, even when they are publicly held. But the social media phenomenon has become so thoroughly woven into the fabric of our daily lives that many people now regard it as a public utility such as gas, electricity, and water. In a remarkably brief span of time, social media has become an institution, and generally speaking, we expect our institutions to behave fairly and equitably. Clearly, however, the social media giants see no reason to share the economic benefits of their success with anyone except their shareholders.
“The large social media companies make hundreds of billions of dollars from advertising revenue and share almost none of it with their users,” says Greg Fell, CEO of Display Social, a platform that shares up to 50 percent of its advertising revenue with content creators who post on its site.
Historically, content creators have been paid for their work. Imagine if CBS had told Lucille Ball and Desi Arnaz that they wouldn’t be paid for creating episodes of “I Love Lucy,” but that instead they would be allowed to sell “I Love Lucy” coffee mugs and T-shirts. If the original TV networks had operated like social media corporations, there never would have been a Golden Age of Television.
Most societies reward creators, artists, entertainers, athletes, and influencers for their contributions. Why does social media get to play by a different set of rules?
“Economic fairness should be part of the social media ethos. People should be rewarded financially for posting on social media, instead of being exploited by business models that are unfair and unethical,” Fell says.
From Fell’s perspective, the exploitive and unfair economic practices of the large social media companies represent short-term thinking. “Ultimately, they will burn out their audiences and implode. Meantime, they are causing harm. That’s the problem with unethical behavior—in the long run, it’s self-destructive and self-defeating.”
Transforming Attention into Revenue
Virtually all of the large social media platforms rely on some form of advertising to generate revenue. Their business models are exceedingly simple: they attract the attention of users and then sell that attention to advertisers. In crude terms, they’re selling your eyeballs to the highest bidder.
As a result, their only real interest is attracting attention. The more attention they attract, the more money they make. Their algorithms are brilliantly designed to catch and hold your attention by serving up content that will trigger dopamine rushes in your brain. Dopamine isn’t a cause of addiction, but it plays a role in addictive behaviors. So, is it fair to say that social media is intentionally addictive? Maybe.
“For many social media companies, addictive behavior (as in people consuming more than they intend to and regretting it afterwards) is the point,” says Esther Dyson, an author, philanthropist, and investor focused on health, open government, digital technology, biotechnology, and aerospace. “Cigarettes, drugs, and gambling are all premised on the model that too much is never enough. And from the point of view of many investors, sustainable profits are not enough. They want exits. Indeed, the goal of these investors is creating ever-growing legions of addicts. That starts with generating and keeping attention.”
As it happens, misinformation is highly attractive to many users. It’s a digital version of potato chips—you can’t eat just one. The algorithms figure this out quickly and feed users a steady supply of misinformation to hold their attention.
In an advertising-driven business model, attention equals dollars. With the help of machine learning and sophisticated algorithms, social media has effectively monetized misinformation, creating a vicious, addictive cycle that seems increasingly difficult to stop.
Social media has staked its fortunes on a business model that is deeply unethical and seems destined to fail in the long term. But could the industry survive, at least in the short term, with a business model that hews more closely to ethical norms?
Greg Fell doesn’t believe that ethical guidelines will slow the industry’s growth or reduce its profitability. “People expect fairness. They want to be treated as human beings, not as products,” he says. “You can build fairness into a platform if you make it part of your goal from the start. But it shouldn’t be an afterthought.”
Slowing the Spread of False Narratives
In addition to implementing structural design elements that would make it easier for people to recognize misinformation and false narratives, social media companies could partner with the public sector to promote media literacy. Renée DiResta is the technical research manager at Stanford Internet Observatory, a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. She investigates the spread of narratives across social and traditional media networks.
“I think we need better ways for teaching people to distinguish between rhetoric and reality,” DiResta says, noting that tropes such as “dead people are voting” are commonly repeated and reused from one election cycle to the next, even when they are provably false. These kinds of tropes are the “building blocks” of misinformation campaigns designed to undermine confidence in elections, she says.
“If we can help people recognize the elements of false narratives, maybe they will build up an immunity to them,” DiResta says.
It’s Not Too Late to Stop the Train
The phenomenon we recognize today as “social media” only began taking shape in the late 1990s and early 2000s. It is barely two decades old, which makes it far too young to have developed iron-clad traditions. It is an immature field by any measure, and it’s not too late to alter its course.
Moreover, social media’s business model is not terribly complicated, and it’s easy to envision a variety of other models that might be equally or even more profitable, and represent far less of a threat to society. Newer platforms such as Substack, Patreon, OnlyFans, Buy Me a Coffee, and Display Social are opening the door to a creator-centric social media industry that isn’t fueled primarily by advertising dollars.
“Social media has its positives, and it isn’t all doom and gloom, but it certainly isn’t perfect and resolving some of these issues could ensure these applications are the fun and happy escape they need to be,” says Ella Chambers, UX designer and creator of the UK-based Ethical Social Media Project. “The majority of social media is okay.”
That said, some of the problems created by social media are far from trivial. “My research led me to conclude that the rise of social media has brought the downfall of many users’ mental health,” Chambers says. A recent series of investigative articles in the Wall Street Journal casts a harsh spotlight on the mental health risks of social media, especially to teenage girls. Facebook has issued a rebuttal3 to the WSJ, but it’s not likely to persuade critics that social media is some kind of wonderful playground for kids and teens.
Creating a practical framework of ethical guidelines would be a positive step forward. Ideally, the framework would evolve into a set of common practices and processes for ensuring fairness, diversity, inclusion, equity, safety, accuracy, accountability, and transparency in social media.
Chinese officials recently unveiled a comprehensive draft of proposed rules governing the use of recommendation algorithms in China.2 One of the proposed regulations would require algorithm providers to “respect social morality and ethics, abide by business ethics and professional ethics, and follow the principles of fairness, openness, transparency, scientific rationality, and honesty.”
Another proposed regulation would provide users with “convenient options to turn off algorithm recommendation services” and enable users to select, modify or delete user tags. And another proposed rule would restrict service providers from using algorithms “to falsely register accounts … manipulate user accounts, or falsely like, comment, forward, or navigate through web pages to implement traffic fraud or traffic hijacking …”
Eloy Sasot, group chief data and analytics officer at Richemont, the Switzerland-based luxury goods holding company, agrees that regulations are necessary. “And the regulations also should be managed with extreme care. When you add rules to an already complex system, there can be unintended consequences, both at the AI-solution level and the macro-economic level,” he says.
For instance, small companies, which have limited resources, may be less able to counter negative business impacts created by regulations targeting large companies. “So, in effect, regulations, if not carefully supervised, might result in a landscape that is less competitive and more monopolistic, with unintended consequences for end consumers whom the regulations were designed to protect,” he explains.
A Technology Problem, or a People Problem?
Casey Fiesler is an assistant professor in the Department of Information Science at the University of Colorado Boulder. She researches and teaches in the areas of technology ethics, internet law and policy, and online communities.
“I do not think that social media—or more broadly, online communities—are inherently harmful,” says Fiesler. “In fact, online communities have also done incredible good, especially in terms of social support and activism.”
But the harm caused by unfettered use of social media “often impacts marginalized and vulnerable users disproportionately,” she notes. Ethical social media platforms would consider those effects and work proactively to reduce or eliminate hate speech, trolling, defamation, cyberbullying, swatting, doxing, impersonation, and the intentional spread of false narratives.
“I consider myself an optimist who thinks that it is very important to think like a pessimist. And we should critique technology like social media because it has so much potential for good, and if we want to see those benefits, then we need to push for it to be better,” Fiesler says.
Ultimately, the future of ethical social media may depend more on the behaviors of people than on advances in technology.
“It’s not the medium that’s unethical—it’s the business people controlling it,” Dyson observes. “Talking about social media ethics is like talking about telephone ethics. It really depends on the people involved, not the platform.”
From Dyson’s point of view, the quest for ethical social media represents a fundamental challenge for society. “Are parents teaching their children to behave ethically? Are parents serving as role models for ethical behavior? We talk a lot about training AI, but are we training our children to think long-term, or just to seek short-term relief? Addiction is not about pleasure; it’s about relief from discomfort, from anxiety, from uncertainty, from a sense that we have no future,” she adds. “I personally think we’re just being blind to the consequences of short-term thinking. Silicon Valley is addicted to profits and exponential growth. But we need to start thinking about what we’re creating for the long term.”