
Slow Down and Break Less: Taking Cultural Impacts Seriously

Christine Rosen

The longer we live with the large Internet platforms, the more clearly we see their cultural impacts—for better and for worse. We need to take these impacts seriously, because a platform’s architecture can profoundly shape its users’ habits and ideas. Recent regulatory proposals try to grapple with these problems, but there are limits to what legal mandates can achieve. Instead, we should approach the platforms’ cultural impacts the way we approach other companies’ environmental impacts: by requiring the companies to think seriously about those impacts, so that they—and we—can try to minimize the harms while promoting the benefits.

Since the founding of large Internet platforms like Facebook, Google, and Twitter, competing narratives have developed about their cultural impact. The optimistic narrative, which until recently held sway, claims that just as we have done after the introduction of disruptive technologies in earlier eras (the printing press, the telegraph, the radio, the television), we will eventually tame the beast that is the Internet and reach a form of beneficial equilibrium in our use of it. After all, this argument goes, the notion that the Internet is unique in the hazards it presents—such as spreading misinformation or dangerous ideas—is belied by centuries of history.[i]

This narrative is the one that forward-looking, innovative Silicon Valley developers and entrepreneurs (and investors) often promote. It’s one that free-market-oriented lawmakers (as well as those who receive generous campaign contributions from Silicon Valley) find amenable. And it’s the one that, as a culture, we want to believe is true because it justifies our enthusiastic use of these platforms and comports with an idealized story Americans like to tell about the liberating influence of technological progress.

But in recent years, after a series of privacy scandals and a presidential election that produced charges of foreign election tampering aided and abetted by these platforms, a competing narrative has gained traction: The scale of these platforms, the speed with which we have adopted them, and the behavioral adaptations we have made in order to use them add up to something new—and something potentially damaging.

As early Internet enthusiast Douglas Rushkoff argued in a recent mea culpa, “Computers were the tools that would upscale humanity. But maybe humanity simply wasn’t developed enough to handle the abilities potentiated by digital technologies distributed so widely and rapidly. At least not the sector of humanity that ended up being responsible for developing this stuff.”[ii]

The acceleration is undeniable: Google, founded in 1998, is now a verb and the dominant search platform. Facebook, founded in 2004, now has more than 2.5 billion users worldwide. Nearly everyone in the U.S. uses a cellphone or smartphone and most people under the age of thirty spend hours weekly on social media.[iii] Compared to earlier eras, our rate of technology adoption is astonishing in its speed and saturation.[iv]

In turn, these platforms prize speed, growth, and innovation over all other values. Consider Facebook (whose early mantra was “Move fast and break things”). Although its rapid growth through the acquisition of platforms like WhatsApp and Instagram and its consolidation of its messaging services has recently sparked serious antitrust concerns in the U.S., the company is pushing ahead with ambitious plans to develop its own global digital currency; to sell a new device called Portal that would put a microphone and camera in every home that doesn’t already have a smart speaker such as Amazon’s Alexa-enabled Echo; and to create a global “Supreme Court” to regulate the content that appears across many Facebook-controlled platforms.

These are not modest goals, nor are they all necessarily harmful, but they do raise the question of how we should measure their cultural impact when the speed with which we are encouraged to embrace these new tools leaves little time for reflection.

Thus far we have measured the power of these platforms by the level of responsibility we do or do not assign when something goes wrong on them, such as when YouTube (owned by Google) was fined $170 million for violating the Children’s Online Privacy Protection Act (COPPA), the 1998 federal statute that requires parental consent before companies can gather data about children.

Although everyone knows what YouTube is, few Americans realized that Google was allowing companies to illegally gather data about children and target them with personalized advertisements based on the videos they watched. The scale of this illegality was vast: More than 75 percent of 8- to 12-year-olds watch YouTube.[v] When these platforms make mistakes, or, as is too often the case, deliberately test the limits of the law and social boundaries to see what regulators and the public will tolerate, they are making large-scale errors that affect hundreds of millions of people.

The question is whether the people affected by them (which is to say, nearly all of us who use these platforms daily) understand them as temporary but necessary mistakes on the way to a more culturally enriching digital future, or as a sign that these platforms are not functioning in a way that promotes the common good of the culture that embraces them.

Assessing Impacts

How should we assess the cultural impact of these platforms?

First, it’s useful to recall some of the cultural institutions and functions that have migrated to online platforms from the offline world and attempt to measure some of the consequences of that migration.

In the past twenty years a great deal of commercial activity has migrated online. Many brick-and-mortar stores and malls have gone out of business as Americans embraced the convenience of online shopping over in-person purchases. Similarly, digital streaming services have utterly transformed the television, movie, and music industries in a short span of time. In both cases the migration of these activities online offered consumers many more options than in previous eras, but also a great deal more surveillance of their individual habits and tastes.

As advertising revenue shifted from print publications to the Internet, it forced an often-painful financial reckoning for many traditional publishing institutions. When the free online classified advertising website Craigslist debuted in 1995, for example, enthusiasts hailed it as an example of the nimble, digital future we could all look forward to embracing as well as an example of ethically-motivated capitalism: the site’s icon was a purple peace symbol and its motto was “people helping people.” Now we know that Craigslist was largely, albeit unintentionally, responsible for the death of many local newspapers that relied on paid classified advertising to stay solvent.

As well, many of the local, small-scale institutions that used to be a part of daily life for many Americans and that encouraged regular face-to-face interaction—the bowling league, local service clubs like the Elks Lodge, and churches and synagogues—have seen their membership decline or disappear as more civic activities moved online. Social organizing is now largely mediated through platforms like Facebook and Twitter, and signing an online petition or tweeting one’s support for a cause is so much easier than the logistically more challenging task of showing up in person to volunteer that the practice has spawned its own phrase: “hashtag activism.” Thanks to these online platforms, we can now satisfy the admirable human need to give back with a lot more convenience and speed, but it’s not clear that this has made for a more empathetic or charitable culture.

Second, the cultural power of these platforms should be measured by the power they exercise over their users and whether that power is legitimate. How much power are users themselves able to exercise to opt out of certain forms of surveillance of their online activity or to recover from mistakes when so much of what they do is transparently public? When the Internet and large technology platforms were young, we could reassure ourselves that a distinction still existed between the “online” world and the “real world.” That distinction no longer exists, since what happens online has repercussions that are often immediate and harsh for life offline. There are now too many examples to cite of people tweeting something inappropriate and getting fired for doing so.

Given the power and ubiquity of smartphones, there is effectively no such thing as “offline” for most people any longer. Even if you aren’t actively reading or responding to something online, the phone in your pocket is transmitting information about you to countless entities while you walk down the street. This kind of tracking and data-gathering isn’t neutral; look no further than China’s social credit system, which is the logical, albeit dystopian, conclusion of this kind of constant, granular tracking.[vi] Is America’s seemingly benign but ubiquitous consumer surveillance a net gain for freedom and autonomy?

Similarly, assessing the cultural impact of these platforms requires asking what kinds of engineering and design choices their creators are making. Do those choices benefit the platforms while exposing users to unnecessary risks? Can the platforms be redesigned in ways that maximize civic engagement without contributing to polarization?

Platforms like Facebook put user-experience (UX) design at the center of all of their decision-making, with the goal of keeping people on the platform as long as possible by exploiting what we know about human behavior. For Facebook users, this means their activity on the site makes them part of the large-scale (and usually secret) behavioral experiments the company regularly performs to better hone its targeted advertising. In 2014, for example, Facebook was forced to apologize when it emerged that the company had been deliberately seeding users’ News Feeds with positive or negative stories to see if there was such a thing as “emotional contagion” on the platform.[vii] In a culture that values autonomy and informed decision-making, should these powerful platforms be required to offer more thorough disclosures of their design choices and their manipulation of their users?

Third, and perhaps most important, we can measure the cultural power of these platforms by assessing the habits of mind they foster in their users and the effects those habits of mind have had on civic and cultural life. How has our use of these platforms changed our cultural understanding of privacy and surveillance? What do we now accept as reasonable behavior for ourselves as well as our public figures (including politicians and celebrities) now that social media platforms have become ubiquitous? How have powerful platforms altered our understanding or tolerance of harassment or hate speech or violence? How have these platforms promoted or undermined civic norms for debate and democratic disagreement?

In other words: What have these platforms already habituated us to accept that a healthier society might not otherwise choose to endorse?

One dramatic change wrought by these platforms’ constant surveillance of our behavior has been the transformation of daily life for more and more people into a kind of public performance. Opportunities for such performance didn’t exist at this scale in previous eras; now that they are common and designed to be incredibly rewarding, they have habituated users to an expectation of transparency about one’s private life that would have been unimaginable even fifty years ago.

This has changed our understanding of privacy, our expectations for affirmation and attention, and our sense of what is and is not authentic. As Cornell communications researcher Brooke Erin Duffy and University of Pennsylvania digital culture researcher Emily Hund have argued, successful “influencers” on platforms like Instagram practice new forms of authenticity that are rooted in the reality of constant performance — what other writers have called “curated imperfection” and “aspirational ordinariness” that, taken together, form a new kind of performative authenticity — and a new normal for a growing number of people.[viii]

Writing New Rules?

The question of what can or should be done about these platforms’ power often devolves into a debate about whether or not they should be more heavily regulated by the government.

Much good research has explored which regulatory agencies and frameworks could most reasonably tame platforms like Facebook, as well as the effectiveness (or lack thereof) of punitive measures like those the European Union has taken to protect the individual privacy of these platforms’ users.

These rule-creation efforts are ongoing, and in many cases laudable (although not all of them are entirely compatible with the U.S. Constitution). They exist alongside chronic concerns about regulatory capture given these platforms’ power and their budget for lobbying lawmakers to achieve their aims.

A recent Echelon Insights poll found that Sen. Josh Hawley’s proposal to regulate social media for political bias “is favored 48-21%, with similar margins among Republicans, Democrats, and Independents” and that 59% of voters “have significant concerns about bias by social media platforms.”[ix] Although the public has recently grown uneasy about these platforms’ power, there is not yet firm consensus on how best to tame them.

Both critics and supporters of the legislation Hawley is proposing can agree that he is attempting to shift the terms of the debate about these platforms, a shift that has implications for the platforms’ cultural impact. Hawley frequently invokes the language of values in his legislation, calling on technology companies to act in “good faith,” with “honest belief and purpose,” and according to “fair dealing standards” in his proposed reform of Section 230, for example.[x]

This and other legislation Sen. Hawley has proposed, such as his SMART Act, which seeks to prohibit features like infinite scrolling and autoplay that arguably contribute to addictive behavior, have met with strong criticism for how intrusively they would dictate design choices to private businesses.[xi] There’s an argument to be made that the federal government does not need to act as either moral tutor or scold to Americans about their technology habits (any more than it does about their other habits), and bills like the SMART Act, however well-intentioned, seem more likely to benefit tort lawyers than smartphone addicts.

But Hawley is no twenty-first-century Comstock. He is at least making an effort to frame this challenge in a forceful and necessary way: as a technical problem with moral and civic implications, in contrast to an anti-regulatory technology industry that wants to portray itself and its products as the technical solutions to everything, regardless of morals and heedless of civics.

Platforms have, for good and for ill, scaled up and monetized the most intimate aspects of life (friendship, dating, one’s personal taste in music and literature and food), all while privileging the quantity of our engagement over the quality of the time we spend. There is room in our national debate over technology companies’ power for lawmakers and civic leaders to explore how we might slow or reverse this trend, and to examine what role, if any, the government should play in that process.

Currently, the debate about platforms is siloed between people concerned with increasing or avoiding regulation on the one hand, and those concerned with the cultural and social impact of these platforms on the other. We need more fruitful interactions between these two worlds.

For example, both groups might benefit from an approach akin to the one the federal government took with genetic engineering, cloning, and other new biotechnologies: federally sponsored commissions that explored the ethical, legal, and cultural ramifications of these powerful new technologies and made recommendations to lawmakers and industry.

The Limits of Limits

Ultimately, debates over regulation elide an important distinction: rule-creation will not necessarily lead to the creation of new and healthier cultural norms.

Consider the framing device for much of our discussion of the cultural impact of these platforms: what they do to children. We are currently raising the first generation of human beings for whom large parts of their childhood (and, in all likelihood, their later adult lives) will be available as data to be mined for insight into their habits; who will have their preferences sold back to them as advertisements throughout their lives; and whose mistakes are forever preserved for any future employer or future spouse to see. From a cultural standpoint, these platforms’ capture of younger generations’ attention is nearly complete.

And although age-limit restrictions such as those outlined by COPPA exist, they are easily evaded. So, too, are warnings buried in the Terms of Service, which are rarely read or heeded—just ask any teenager with a “finsta” (Fake Instagram) account. Even existing legal protections about adult content online are no barrier to exploitation, as a 37-year-old mother who recently spent a week posing as an 11-year-old on Instagram found out. She was inundated with messages from pedophiles asking her to send explicit pictures of herself and even to meet them.[xii]

However well-intentioned, such rules rarely succeed in transforming norms.

Nor have efforts to better educate the public about online risks been effective. The push for “digital literacy” that marked the early years of the Internet did not yield a population of well-informed users. Instead, the fakery itself has become more sophisticated: fake news and uncannily realistic deepfake videos; A.I.-generated chatbots that mimic real people; and endless fake followers (easily purchased by people who think their cultural power can be measured by their popularity on social media).

Efforts by legislators to regulate the platforms are not likely to have more success. When Sen. Elizabeth Warren placed a false ad on Facebook in October 2019 to make a point about the platform’s unwillingness to regulate fake news, she noted, “Facebook has incredible power to affect elections and our national debate.” But her concerns about that power were notably partisan: “Facebook already helped elect Donald Trump once. Now, they’re deliberately allowing a candidate to intentionally lie to the American people. It’s time to hold Mark Zuckerberg accountable.”[xiii]

But Facebook also “helped” elect Barack Obama twice, and one of the company’s founders, Chris Hughes, even ran the Obama campaign's digital strategy. Is it wise to assume that a platform like Facebook can be regulated into behaving as a neutral arbiter of information when its own size and power have become a polarizing election-year issue?

Even if these platforms make a good-faith effort to regulate content, they face the same challenge as government regulators when it comes to establishing consistent, acceptable norms for online behavior. When companies like Google hired a small group of people to monitor YouTube to keep it free from expressions of violence, depravity, and the like, the monitors couldn’t keep up with the amount of content. The monitors also had trouble handling the psychological burden of what they saw; many ended up suffering post-traumatic stress.[xiv] These large platforms have engineered a system whereby some of the most horrible things humans do can be shared so widely and instantaneously that even human moderators can’t prevent or stop their spread.

Nor have the A.I.-driven detection systems that companies like Facebook have promised as a solution been entirely effective. In a recent call with reporters, Facebook chief operating officer Sheryl Sandberg boasted about how much more hate speech the company now tracks and takes down from the platform using A.I.: 95 percent of it, compared to just 24 percent a few years ago. “If you look at how we do our jobs and compare it to four years ago—Mark, myself, all of our senior leaders...we all spend a lot more of our time on the protection of the community than we did,” Sandberg told the Wall Street Journal.[xv]

But that boast unwittingly revealed something else about Facebook: a shift in what the platform believes it should be protecting. “Protection of the community,” in Sandberg’s rendering, means protecting Facebook’s users from seeing something offensive, not protecting their ability to express themselves on the platform. As well, the company’s decision to outsource the dirty work of content moderation to A.I. (and to contractors) rather than deal with it in-house suggests that its focus on “protecting” users is somewhat disingenuous.

Whether Facebook and other platforms rely on A.I. or outside contractors, content moderation is the digital equivalent of the work of the nameless “night soil” workers of previous eras, who every evening collected and disposed of the human excrement produced by household privies and latrines before the spread of indoor plumbing. Their job was unenviable but crucial to maintaining public health, not unlike the technology platforms’ responsibility for content moderation now. And yet these companies too often treat this work as a public relations problem rather than a structural necessity. As Paul Barrett, author of a NYU Stern Center for Business and Human Rights report on Facebook’s content moderation policies, told reporters, “I think there’s this psychological impulse to push away this unsavory but absolutely necessary function of running a global media platform.”[xvi] We are only beginning to see the downstream effects of this practice.

From Consumer Welfare to Cultural Impact

These platforms have a great deal of cultural power; until now, they have assumed that such power automatically granted them legitimacy in the eyes of the public, and so they have usually asked forgiveness for their mistakes rather than permission before pushing cultural boundaries, particularly with regard to privacy.

Although voluntary industry efforts lack the force of law, they aren’t without value. Platform companies could practice greater transparency about the internal policy-making procedures and design choices they make. They could consider adopting international standards for speech and content moderation.[xvii] They could accept rather than smother competition and innovation along these lines, such as Solid, the decentralized-web project created by World Wide Web inventor Tim Berners-Lee, who hopes to return greater ownership of data to individuals rather than to large platform companies.

The government could, of course, use antitrust law and other powers to regulate platforms, but this carries the risk of regulatory overreach and stifling innovation.

During the first six months of 2020, as the world was ravaged by the coronavirus pandemic, and the United States witnessed months of protest and civil unrest after the killing of George Floyd in Minneapolis, the benefits and dangers of these platforms’ cultural power (and the challenges of taming them via regulation) were thrown into high relief.

Internet platforms offered anxious Americans a daily tally of new virus cases and horrifying death tolls, but also information about public health guidelines and the location of testing facilities. Conspiracy-minded Facebook or YouTube or Twitter users could easily find misinformation to justify their fears; but they could just as easily click on links to the Centers for Disease Control’s recommendations about social distancing and proper hygiene, or to their local jurisdictions’ mandates for pandemic-related lockdowns. The creation of contact-tracing apps and the use of smartphone data as tools for tracking the infected also revived crucial debates about privacy. Such discussions serve as a reminder of Americans’ cultural discomfort, even during times of crisis, with the granular level of tracking and surveillance that citizens of other nations, like South Korea, embrace as a matter of course.

Similarly, as the video footage of George Floyd’s deadly arrest ricocheted around social media, hashtag activism quickly transformed into large-scale, real-world protest, with social media platforms serving as both organizational tools and real-time clearinghouses of information for demonstrators. These same platforms broadcast the evidence of cities looted and burned, peaceful protests that turned destructive, and, tragically, violence and death. As the Black Lives Matter movement gained visibility, platforms also provided a place where the movement’s more radical goals could be questioned, and its supporters could make their demands public as well as debate tactics.

If our platforms act at least in part as barometers of the public mood, then they were effective in showing us human nature in all its forms during a time of great cultural upheaval. It was also a reminder that, despite their creators’ claims, these platforms are not and have never really been neutral. Intentionally or not, by design and expansive reach they succeed in guiding cultural debates in certain directions. As the recent past reveals, they have, time and again, proven effective at inflaming passions; whether they are equally useful at regulating the extreme effects of such passions remains to be seen.

The inconsistent application of rules regarding speech and harassment on platforms such as Facebook and Twitter will likely remain a chronic challenge for the future, as will debates over whether inciting violence online (or chasing social media fame by broadcasting acts of violence) should be regulated as speech or something else, given its instantaneous, global reach.

Ultimately, neither voluntary industry standards nor government-enforced regulation will succeed in doing much to limit platforms’ cultural reach, as recent events have reminded us. No amount of regulation or tech industry promise-making can limit individuals’ right to give away their time and attention and information to these platforms, nor should it.

But that doesn’t mean there is nothing to be done. Antitrust arguments often rely on “consumer welfare” as the standard by which we measure harm; as users of these platforms, we might want to rethink their impact on us from a “cultural welfare” standard.

This would not look like any of the proposals being bandied about for Section 230 reform, or the technical arguments for or against regulation of the technology industry. Rather, it would more closely resemble the environmental impact statements required before alterations to our physical environment. Like those, cultural impact statements could attempt to forecast the potential challenges posed by a policy (or algorithm, or new device, or update) and identify any mitigating steps that could be taken in advance to forestall them. It’s not as if we aren’t aware that these devices and software shape our everyday behavior. As technologist (and thoughtful critic) Jaron Lanier has noted, “We’re headed toward a world where people are being partially programmed by algorithms. A lot of what you’ll do during the day will be designed and set up and contextualized and motivated by algorithms.”[xviii]

We ask private businesses to think through the impact of their actions on our physical environment; why shouldn’t we ask them to do the same for our digital world (which can also affect the physical world, in the form of violence and disinformation)?

This is in marked contrast to the viewpoint shift now being advocated by many technology companies. When Nick Clegg, Facebook’s vice president of global affairs and communications, urged lawmakers in 2019 to “reconfigure old concepts,” including their understanding of data, and to “relinquish themselves of the idea that [using data] is the same as using finite resources in finite, one-off ways,” he was not arguing for more thoughtful assessments of our common cultural welfare. Rather, his reasoning was that if data is not, like oil, a finite thing, it will prove more challenging to regulate how it is harvested and used. As well, even this reconfiguring, if expressed as new regulation, is only a means to an end, and the end is not thriving communities but image management: “Mr. Clegg said new rules imposed by democratic governments could help the social media sector to restore its image, which has been hurt by scandals,” the Wall Street Journal noted.[xix]

Technology scholar Robert R. Johnson has argued, “As technologies emerge and are incorporated into a cultural context they alter not just the immediate activity for which they were designed but also have ‘ripple effects’ that shape culture in defining ways.” That shaping, in turn, shapes us: our behavior, our tolerance for certain ideas and for others, our understanding of how our world should work.

We are seeing those first ripple effects on our culture, and not all of them, such as political polarization, have been beneficial. Blunting the negative cultural effects while continuing to nurture the positive ones requires an embrace of virtues that have fallen out of popularity in a world that, thanks to these powerful platforms, expects instant gratification and constant entertainment: prudence, temperance, and self-governance, to name a few. But as the public’s concern about the power of these platforms grows, and as questions about their impact on us increase, such old-fashioned habits of mind might end up having a surprisingly modern relevance.


[i] Mackay, Charles, Extraordinary Popular Delusions and the Madness of Crowds (1841).

[ii] Rushkoff, Douglas, “Was Humanity Simply Not Ready for the Internet?,” Medium, November 14, 2019. https://medium.com/team-human/was-humanity-simply-not-ready-for-the-internet-968ff564653c.

[iii] “Mobile Fact Sheet,” Pew Research Center, June 12, 2019. https://www.pewresearch.org/internet/fact-sheet/mobile.

[iv] Ritchie, Hannah and Max Roser, “Technology Adoption,” Our World in Data, 2017. https://ourworldindata.org/technology-adoption.

[v] Rideout, Victoria and Michael B. Robb, “The Common Sense Census: Media Use by Tweens and Teens,” Common Sense, 2019. https://www.commonsensemedia.org/sites/default/files/uploads/research/2019-census-8-to-18-full-report-updated.pdf.

[vi] Wang, Maya, “China’s Chilling ‘Social Credit’ Blacklist,” Human Rights Watch, December 12, 2017. https://www.hrw.org/news/2017/12/12/chinas-chilling-social-credit-blacklist#. See also https://theweek.com/articles/752442/chinas-black-mirror-moment.

[vii] Kramer, Adam D.I., Jamie E. Guillory, Jeffrey T. Hancock, “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences of the United States of America, June 17, 2014. https://www.pnas.org/content/pnas/111/24/8788.full.pdf.

[viii] Duffy, Brooke Erin and Emily Hund, “Gendered Visibility on Social Media: Navigating Instagram’s Authenticity Bind,” International Journal of Communication, 2019. https://ijoc.org/index.php/ijoc/article/view/11729/2821.

[ix] “Big Tech and the 2020 Policy Debate,” Echelon Insights, June 2019. http://echeloninsights.com/wp-content/uploads/Omnibus-June-2019_Policies.pdf.

[x] S.3983 Limiting Section 230 Immunity to Good Samaritans Act, 116th Congress. https://www.congress.gov/bill/116th-congress/senate-bill/3983?q=%7B%22search%22%3A%5B%22limiting+section+230+immunity%22%5D%7D&s=1&r=1.

[xi] S.2314 SMART Act, 116th Congress (2019-2020). https://www.congress.gov/bill/116th-congress/senate-bill/2314.

[xii] Ryan, Sloane and Roo Powell, “I’m a 37-Year-Old Mom & I Spent Seven Days Online as an 11-Year-Old Girl. Here’s What I Learned.” Medium, December 13, 2019. https://medium.com/@sloane_ryan/im-a-37-year-old-mom-i-spent-seven-days-online-as-an-11-year-old-girl-here-s-what-i-learned-9825e81c8e7d.

[xiii] Warren, Elizabeth (@ewarren), Twitter post, October 12, 2019. https://twitter.com/ewarren/status/1181348866744098816?lang=en.

[xiv] Newton, Casey, “The Terror Queue,” The Verge, December 16, 2019. https://www.theverge.com/2019/12/16/21021005/google-youtube-moderators-ptsd-accenture-violent-disturbing-content-interviews-video.

[xv] Stoll, John D., “Sheryl Sandberg and Facebook’s Billion-Dollar Balancing Act,” Wall Street Journal, August 21, 2020. https://www.wsj.com/articles/sheryl-sandberg-and-facebooks-billion-dollar-balancing-act-11598023448.

[xvi] Oremus, Will, “How Facebook Can Prevent Its Next Deadly ‘Operational Mistake,’” OneZero, September 5, 2019. https://onezero.medium.com/how-facebook-can-prevent-its-next-deadly-operational-mistake-58c1d0addafc.

[xvii] Aswad, Evelyn, “The Future of Freedom of Expression Online,” 17 Duke Law & Technology Review 26 (2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3250950.

[xviii] Jaron Lanier quoted in The Tech & Design Issue, New York Times Magazine, November 17, 2019, p. 88.

[xix] Schechner, Sam and Valentina Pop, “Facebook Gives Warning on Value of Data,” Wall Street Journal, December 3, 2019, pg. B3.
