Protecting Children from Social Media

Chris Griswold

Spring 2022

On September 19, 1874, somewhere between 20 and 40 girls died when the Granite Mill cotton facility in Fall River, Massachusetts, burned down. The girls, most of whom were younger than 14 and some of whom were as young as five, died from suffocation, burning, or throwing themselves out of the mill's windows in desperation. Many more were injured.

That young children were working under dangerous conditions in a cotton mill was not unusual. In 1870, there were about 765,000 American workers between the ages of 10 and 15. Some ardent reformers objected, but the mainstream view held that child labor was inevitable, economically and socially valuable, and none of the government's business to regulate.

The subject matter of child-welfare debates changes over time as technological progress presents new challenges, but the contours remain eerily fixed. On one side stands the well-being of children; on the other, a range of economic interests and ideological commitments that insist on deference. Child labor's prevalence in the 19th century was a consequence of that era's industrial revolution, which obviated traditional agriculture's demand for size and muscle. Much smaller bodies could operate industrial machines — an innovation that afforded a new opportunity for converting childhood into economic surplus.

The information revolution of our own century has produced the most dramatic economic and social transformation since industrialization, precipitating its own set of unanticipated and unacceptable harms to children. As with the fight against child labor, those harms will continue to manifest until policymakers act.

The debate today focuses on children's lives online, and their use of social media in particular. The great majority of American teenagers use social media, and they are experiencing an unprecedented mental- and emotional-health crisis as a result. Rates of depression, self-harm, and suicide have risen rapidly for teenagers in the last two decades. To cite one devastating example highlighted by New York University psychologist Jonathan Haidt, hospital-admission rates for self-harm by girls ages 10 to 14 doubled between 2010 and 2014. And while correlation is not causation, as the lobbyists forever remind us, a growing body of research indicates that the most plausible explanation is, in large part, the migration of teenage social life to social media.

Luring children onto social media — and keeping them there — is a top priority for online platforms. This is because, like all social-media users, children are not so much the customer as they are the labor. The platforms induce them to produce the content that engages other children and that, in the cycle of virtual affirmation these companies deliberately engineer, drives them to keep on engaging and producing. The sale of this captured attention to advertisers is big business — the industry's advertising revenue was projected to reach over $56 billion in 2021. Each platform's success relies on attracting and retaining a critical mass of such producers. When it comes to social media's economic imperatives, nothing could be a more vital strategic priority than recruiting and retaining the youngest users.

Today's digital world is not merely a product we use, but an alternative virtual space we increasingly inhabit — an environment where Silicon Valley wishes us to, as the New Testament puts it, live and move and have our being. Facebook's public-relations maneuver to rebrand itself as "Meta" and transform concern over its current platform into enthusiasm for a more immersive and ubiquitous "metaverse" should have the opposite effect, deepening that concern rather than dispelling it. To fulfill our society's commitment to providing environments safe for children, these new spaces must be appropriately governed, which requires going beyond the cramped confines of Washington's usual Big Tech debates.

Policymakers will need to resist the temptation to consider technology policy primarily through the lens of what will maximize economic dynamism or best promote innovation. These are not the concerns that we should allow to dominate, nor are they typically ones we consider, when confronting issues of child safety.

DEFINING THE HARM

Internal corporate research by Facebook's Instagram, first made public by the Wall Street Journal, suggests the obvious motivation for social media's relentless focus on children: the economic imperative of maximizing market share. "Why do we care about tweens?" the company asked rhetorically. "They are a valuable but untapped audience." "Our ultimate goal," it added, "is messaging primacy with U.S. tweens, which may also lead to winning with teens." At a Senate subcommittee hearing in December 2021, Instagram chief Adam Mosseri refused to foreclose the possibility of a version of Instagram targeted at children between the ages of 10 and 12.

Social-media executives see no moral dimension or limiting principle to their ambitions. As Facebook spokesperson Dani Lever put it, "[w]e have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible."

This rush to maximize profit leaves in its wake a range of emotional- and mental-health harms to the children whose attention these platforms so badly covet. The leaked Instagram research adds color to what is now a well-known picture: Using Instagram makes body-image challenges worse for one in three teen girls and social-comparison struggles worse for one in five. Across teen boys and girls, one in five report feeling worse about themselves.

The researchers asked a group of teenagers who had experienced one of several mental- or emotional-health challenges in the past month whether a given issue had started "on Instagram." An astounding 42% reported their feelings of not having enough money started on the platform (perhaps an indication of the commercial impulses animating what kids are shown on social media). Similarly, 41% reported that their feelings of not being attractive began on Instagram, 39% said the same regarding pressure to have a perfect image, and one in four indicated as much regarding their sense of not being good enough. A tenth reported that their depression began on Instagram, 9% that their desire to self-harm did, and 6% that their desire to kill themselves did. In commentary attempting to mitigate these statistics, Facebook clarified an error in a related PowerPoint slide: "The estimates for 'wanted to hurt themselves' and 'wanted to kill themselves' should be flipped."

A major vector along which this harm travels is social pressure — a natural teenage phenomenon that young people are now facing on an unprecedented scale. "Teens blame Instagram for increases in the rates of anxiety and depression among teens," read the headline of one slide, which reported that "unprompted and consistent across all groups," teens believe that "constant comparison" driven by social media "is 'the reason' why there are higher levels of anxiety and depression in young people." Teens who are already struggling with these issues are especially vulnerable.

The fact that teens are themselves "acutely aware that Instagram can be bad for their mental health" does not lead them to disengage. Instead, they feel "compelled to spend time on the app," and for good reason: It is increasingly where their social lives take place. Facebook's research describes what many parents already intuit: that social media isn't simply a product children use or consume, but a digital environment they inhabit, which they struggle to distinguish from the real world. The "boundary between social media and IRL [in real life] is blurred; feelings and boundaries cross between the two."

The addictive, reality-distorting quality of social media leads to harms beyond digitally amplified social pressure: The platforms also make children more vulnerable to traditional dangers, including sexual exploitation. Not only have they opened new avenues for child traffickers and adult predators, they also facilitate the rising trend of children sharing explicit images of themselves, often after being coerced into doing so. Other children frequently re-share such images without the subject's consent. According to one 2020 study, almost one in five children between the ages of nine and 12 have seen such explicit, non-consensually re-shared images of other children. Greater social-media usage correlated with a notably higher likelihood of exposure.

Social media also provides avenues for undeniably harmful content to reach children directly, tailored to capture their interest and designed to encourage obsession. In December 2021, the Wall Street Journal reported on a particularly tragic example: the way TikTok drives extreme weight-loss content to young girls, contributing to the rise of eating disorders in children. As one dietician who treats such disorders bluntly concluded: "The algorithm is just too freaking strong." Other unhealthy childhood behaviors, like bullying, are likewise amplified in both scope and intensity. And as with everything online, children's virtual interactions create a durable record, a permanence plainly not suited to children making mistakes as they find their way in the world, and one that can cause further damage both in the moment and later in life.

Some officials suggest that this growing body of evidence does not justify serious policy action, since causation between social-media use and emotional harm has not been definitively proven. This view fails to consider where the burden of proof lies in matters of child welfare. Policymaking always happens under conditions of uncertainty; the job of policymakers is to make the best judgments they can based on the evidence they have. This responsibility is all the more urgent where the safety of children is concerned. The practical standard for taking action to protect children cannot possibly be scientific proof, or some statistical regression with a suitable p-value; it must be whether the benefits of acting are likely to outweigh the costs.

Haidt makes a compelling case that, by this standard, the debate should be over. He traces the explosion of anxiety, depression, and self-harm in teenage girls since the early 2010s, highlighting its concurrence with the proliferation of teenage social-media use during the same period. He notes that while rates of teenage anxiety and depression have always fluctuated over time, the current spike is exceptionally stark. "Something terrible," he observes, "has happened to Gen Z."

This concurrence points squarely to social media as the problem. The available evidence implicates specific platforms (Instagram) and specific victims (teenage girls). Alternative explanations that have been suggested, such as the financial crisis, school shootings, or frightening events like 9/11, fit neither the data nor common sense.

We should expect that social-media platforms will be all too happy to embrace an emphasis on "science" and follow along in pursuit of "proof," knowing that it is unlikely to emerge and that they can always outspend good-faith researchers to muddy the results. This is the same playbook Big Tobacco followed last century. As Harvard University's Allan Brandt explains, when the correlation between smoking and lung cancer could no longer be plausibly denied, the industry began "declar[ing] the positive value of scientific skepticism of science itself." It called for more scientific research, which it controlled, thus creating a cloud of unprovability that delayed serious policy action for years.

Political leaders have rightly taken to describing Big Social Media by analogy to Big Tobacco — two industries that have aggressively marketed addictive products to children while knowing such products to be harmful. The analogy should not be extended to tolerance of action-delaying tactics.

PARENTS IN CRISIS

Roughly 72% of America's teenagers use Instagram, over half use Facebook, and 45% report being online "almost constantly." For hours every day, millions of American children enter the virtual care of a Silicon Valley interested in monetizing them. Unlike, say, the school setting, or the home environment, or the company of friends, social media exists not to nurture their growth, but to maximize shareholder return. It's no wonder Facebook's own research reports that teens feel abandoned, believing they must cope with the harms of social media alone despite wishing someone would help them. In the final economic analysis, social media is not for them.

Parents are unable to give their children this wanted, and needed, help on their own. In Facebook's own leaked words, "parents can't understand and don't know how to help...social media has fundamentally changed the landscape of adolescence." A 2020 Pew Research Center survey offers similar insight, finding that two-thirds of parents believe parenting is harder than it was 20 years ago; they cite "technology" and "social media" as the top reasons why. Respondents reported concerns that echo Facebook's internal research, saying things like, "parents don't know how to handle the technology" and "I think that social media is harmful to kids and their self-concepts and self-images....Parenting through all of the obstacles is really challenging."

This inability is not for lack of parental concern or effort. In 2020, the Lurie Children's Hospital of Chicago surveyed parents and found that over two-thirds believe that their teenage child is addicted to social media and that it disrupts their child's capacity to socialize normally. Over half believe their teen has "an unhealthy desire for attention/approval via social media." In response, 80% of parents report setting rules around phone and social-media use, and for the most part believe themselves successful in enforcing those rules.

But the rules available to parents are limited in crucial ways. In the Lurie survey, parents' top strategy was to limit the time their children spent on social media or their phone, followed by limits on where and when they can access it. These may be useful measures for addressing concerns where the duration and timing of social-media use are the problem — say, in children not getting enough exercise, or being disconnected at the dinner table. But they represent roughly as effective a solution as locking the family liquor cabinet while the store down the street is handing out free bottles. Parents simply cannot compete with social media's pervasiveness. Nor do they have the power to alter dangers inherent in social media's design, which may harm their children even with reduced use.

Parents also overestimate their own ability to control what their children access, and the one nuclear option — forbidding social-media use entirely — is both technologically and socially untenable for most. Social media is here to stay, and fighting a rearguard, Luddite rebellion of withdrawal is unlikely to succeed in individual families. It is certainly doomed to fail as a society-wide strategy: Teenage social life in America now happens largely on social media, and expecting that families can or will opt out on their own in game-changing numbers is not realistic. Even engaged parents find themselves easily outmatched by the power of this network effect: They rightly fear excluding children from the social world of their peers, and are unlikely to succeed if they try. This remains true even if a handful of families act in coordination with one another.

The various policy proposals focused on educating parents so they can better control social-media use miss the point: Parents, right alongside their children, are the ones who need help. Leaders must pursue measures that help parents, rather than blaming and further burdening them. "The legitimate object of government," as Abraham Lincoln once put it, "is to do for a community of people, whatever they need to have done, but can not do, at all, or can not, so well do, for themselves — in their separate, and individual capacities." Protecting children from the dangers of social media fits this description.

A MODEST AGENDA

The abolition of child labor proved shockingly difficult over many decades. More than a hundred years passed between the National Trades' Union making the first formal call to institute a minimum age requirement for factory workers in 1836 and the passage of the Fair Labor Standards Act in 1938. And early efforts met with fierce resistance. Federal limits on child labor were called a violation of parents' rights, a denial of the alleged benefits of labor for children, and flatly unconstitutional. At the turn of the century, popular efforts to regulate child labor were buried by congressional Republican leadership loath to impinge on the prerogatives of private enterprise and Southern Democrats reluctant to harm regional industrial interests dependent on child labor. Successive presidents of both parties opposed federal intervention and denied their authority to act on the matter. Protecting children in the wake of technological revolution, it turned out, was a much heavier lift for policymakers than it should have been.

That mistake does not have to be repeated. American society agrees to supervise the places children inhabit, protect them from environments they should not encounter, regulate the products they use, and screen the content they consume. In most any elementary school in the United States, an adult cannot enter without proof of identity and a legitimate purpose for being there. In most any sex shop in the country, people will (or should) be turned away if they cannot prove they are of age. Businesses are not permitted to sell tobacco, alcohol, pornography, dangerous fireworks, handguns, and numerous other goods and services to minors. Notwithstanding the First Amendment, the Federal Communications Commission (FCC) governs what kind of content can be broadcast via public airwaves to address the compelling public interest in protecting children from harm generated by a medium that is especially accessible.

Social media presents a bizarre exception to this norm. Perhaps nowhere in American society are children as unsupervised and unprotected, and with so much scope to do and be harmed, as in social media's virtual environment. The dangers children face when abandoned so completely in this ungoverned space are profound. It is not only reasonable, but urgent, that policymakers set sensible parameters to govern how children encounter social media — and how Big Tech encounters them.

Protecting children online will, at least in many cases, require first knowing who is a child, without violating the privacy of either children or adults along the way. By law, social-media companies are supposed to bar users under 13 who lack verifiable parental consent — but they do an abysmal job of it. About 40% of American children under 13 use Instagram daily; even more use Facebook. A third of children ages seven through nine use social media. If the age of digital consent is raised to a more sensible threshold like 16 or 18, and new rules are put in place relative to that age, this dilemma will only become more pronounced.

Age verification has long been the Achilles' heel of online child-protection efforts. Privacy advocates rightly stress that many mechanisms for verifying age online weaken the anonymity and privacy that public policy should properly protect. Giving social-media platforms and the corporations that convert their data into advertising sales additional user information hardly seems constructive, even if it is to verify age.

Many countries are struggling with this dilemma, and most of the methods of age verification that online services currently implement — and that third-party companies seeking to fill the market gap increasingly offer — pose problems. Uploading a driver's license, inputting credit-card information, allowing facial-recognition technology to scan photos, or requiring users to participate in a video call not only present privacy concerns, but are also cumbersome. In countries that require such checks before permitting access to certain services or material, many users simply decide it's too much trouble to bother. Methods that ping public records to verify age can work to some extent, but they risk failing to capture all users equally well. More fundamentally, these approaches privatize what is likely to become — or indeed, already is — an essential social function. All of them reduce privacy and sacrifice anonymity.

Social-media platforms have at times suggested that the matter be left to them (or, in more savvy recent days, to a standard-setting industry body), and of late have touted the ability of their internal artificial intelligence to identify young users so that platforms can self-police. It is certainly true that social media is improving its ability to identify users' ages based on their online activity. But amassing and analyzing data in that way is closer to the problem than the solution. Moreover, relying primarily on an internal mechanism would kneecap new market entrants that start with less data than existing firms, as well as any business models looking to minimize data collection and preserve user privacy. Proposals for a properly constituted and truly independent body to offer informed perspectives on best practices deserve consideration, but determining how best to supervise age-appropriate access to dangerous spaces is not a decision best left to the financially invested purveyors of those spaces.

If public policy requires proof of age to access a certain environment, it must also provide an effective mechanism for doing so. Just as prior eras demanded the development of laminated identification cards, which must still be shown to enter certain establishments and conduct certain transactions in the real world, our latest technological revolution presents the need for a reliable, anonymous, private yet publicly provided method for demonstrating age to enter certain virtual environments. Such a method should allow users to verify anonymously whether they are above a given age without revealing anything further about themselves and their online activity, either to social-media platforms or to the government.

Administratively, the government is best suited to provide verification of the information it creates and maintains — such as the birth dates public entities certify and the Social Security numbers they issue. Public services already put this information to widely accepted, non-controversial use — as with the federal E-Verify system, or the Social Security Administration's (SSA) number-verification service, which employers routinely use to confirm that new hires have a valid identity. It would not constitute an engineering marvel to create a simple public system that allows Americans to use these data to anonymously prove whether they exceed a given age.

One possibility would be for the SSA to offer a service through which an American could type his Social Security number into a secure federal website and receive a temporary, anonymized code via email or text, much like the two-factor authentication methods already in widespread use. Providing this code to an online platform could allow it to confirm instantly with the SSA whether the user exceeds a certain age without further personal data reaching the platform or the government. Whatever the technological details, in an age when Mark Zuckerberg's Meta strives to move all American life into virtual spaces, a secure, anonymous way to verify age will only become more urgent.
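To make the proposal concrete, what follows is a minimal sketch, in Python, of how such an anonymous age-attestation flow might work. It is purely illustrative: the issuing agency, the token format, and every function name here are assumptions offered for the sake of the example, not a description of any existing government service or platform interface.

# A hypothetical, simplified age-attestation flow. All names are illustrative
# assumptions, not a real SSA or platform API.

import secrets
import time

# --- Issuer side (e.g., a government service): hands out short-lived, anonymous codes ---

_TOKENS = {}  # token -> (expiry timestamp, meets age threshold?)

def issue_age_token(birth_year: int, threshold_age: int, ttl_seconds: int = 600) -> str:
    """Issue an opaque code attesting only that the holder meets an age threshold."""
    meets_threshold = (time.gmtime().tm_year - birth_year) >= threshold_age
    token = secrets.token_urlsafe(16)  # random and unguessable; carries no identity
    _TOKENS[token] = (time.time() + ttl_seconds, meets_threshold)
    return token

# --- Platform side: redeems the code and learns only a yes-or-no answer ---

def redeem_age_token(token: str) -> bool:
    """Confirm the threshold with the issuer; codes are single-use and expire."""
    expiry, meets_threshold = _TOKENS.pop(token, (0.0, False))
    return meets_threshold and time.time() < expiry

# Usage: a 13-year-old requests a code and hands it to a platform requiring age 16.
code = issue_age_token(birth_year=2009, threshold_age=16)
print(redeem_age_token(code))  # False: under the threshold, and nothing else is revealed
print(redeem_age_token(code))  # False: the code cannot be replayed

The design point is that the platform learns nothing beyond a yes-or-no answer about an age threshold; because the code is random, short-lived, and single-use, it links to neither the user's identity nor the user's activity on the platform.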

Congress has already begun to feel this urgency. Senators Richard Blumenthal and Marsha Blackburn have introduced the Kids Online Safety Act that, among other measures, would require the federal government to explore the technological potential of on-device age-verification systems.

In addition to figuring out how to verify age in the social-media environment without sacrificing privacy, we also need to change the commercial incentives that drive Big Tech's social-media design choices and business strategies. An obvious first step would be to ban targeted advertising aimed at children. Social-media companies need to hook users on their products so they can generate revenue by advertising to them or selling their data. If policy were to prohibit platforms from mining children's data to target them individually, one element of their interest in children might diminish.

This is not a new idea; it has already garnered support from key legislators like Democratic senator Ed Markey and Republican senators Bill Cassidy and Josh Hawley, as well as from President Joe Biden, who used his 2022 State of the Union address to insist the United States "ban targeted advertising to children" as a matter of bipartisan priority. It is also eminently achievable — in fact, Google has already taken some voluntary steps in this direction. Anyone inclined to believe voluntary steps are sufficient, however, should take note of Facebook's behavior. In the summer of 2021, the company announced efforts to limit how third-party advertisers could target children across its platforms, including Instagram. At the same time, it continued to employ such practices itself. Within months, watchful members of Congress were irately demanding to know how Facebook accounted for the "inconsistency between the company's promises and its practices."

Even with perfect compliance, banning ads targeting children is no silver bullet. As an initial step, it would communicate that American society insists on limits to how much children's attention can be manipulated for profit. However, the platforms would retain a strong incentive to engage children by showing them content — and inducing them to create content — in hopes of retaining them as profitable users later on. Both the consumption and production of social-media content, therefore, require limits.

As for the more dangerous content children encounter on social media — namely child sexual-abuse material and content that promotes or glorifies illicit-drug use, eating disorders, self-harm, and the like — the most straightforward way to address this is to punish it when it happens. American society has long accepted that certain content ought not to be shown to children via the public airwaves. Congress need not dictate how to prevent the display of harmful content to children; it simply needs to establish clear and economically meaningful consequences for failure to prevent it. Slapping Big Tech companies with per-instance fines represents a promising option: Place a price on harmful content, and Silicon Valley will channel its much-trumpeted innovative capacity into solving the problem. Imposing a legal duty of care on social-media platforms to protect children, as the Kids Online Safety Act could do, would only further sharpen their focus on solutions.

Policymakers might begin with the FCC's definition of obscene content to demarcate what types of content social-media platforms may not display to children without hefty per-instance fines. This would not only impose costs on displaying harmful content, it might also prompt a useful reconsideration of existing jurisprudence. In FCC v. Pacifica Foundation, the Supreme Court permitted the protection of children from certain content in public broadcasts in part because such broadcasts are "uniquely pervasive." Conversely, in Reno v. ACLU, the Court struck down certain provisions of the 1996 Communications Decency Act (CDA), which attempted to prohibit the display of certain harmful content to children online, in part because the internet was not seen as meeting this threshold. More than 25 years later, legislators might reasonably prompt the Court to consider whether the internet, and social media in particular, is now pervasive in the relevant sense.

In an opinion concurring in part and dissenting in part, Justice Sandra Day O'Connor, joined by Chief Justice William Rehnquist, viewed the CDA as a kind of zoning law intended to distinguish between adult-only and child-safe spaces online. The justices saw this intent as legitimate and constitutionally sound, so long as such efforts did not impinge on adults' right to access the regulated material. In their view, the CDA failed to meet this threshold not only because it was imprecise in its language, but because the necessary technology for effective "internet zoning" did not yet exist. A better-crafted law, enacted in a more technologically advanced moment, might meet a different constitutional fate.

Finally, policymakers should address directly the design choices that lead to the mental- and emotional-health harms that the leaked Instagram research so vividly describes and that common sense suggests lie at the heart of the problem. Public or pseudo-public image sharing is the primary mechanism that enables social media's hyper-accelerated social pressures. This is inherent to the design of the platforms and cannot be engineered away through strengthened parental-consent mechanisms, greater corporate transparency, or smarter algorithms. Again, Jonathan Haidt puts it clearly: "The toxicity comes from the very nature of a platform that girls use to post photographs of themselves and await the public judgments of others."

The most straightforward way to address this harm is to disrupt the process that enables it. Prohibiting social-media platforms from publicly or pseudo-publicly displaying images uploaded by children in feeds, stories, and other broadly accessible social spaces would go a long way toward accomplishing the goal. Making it harder for images uploaded by children to reach widespread dissemination would also help to address other risks of social-media life, including the creation of a permanent public record and the various channels for sexual exploitation that social media enables. Rather than accept at face value Silicon Valley's claims of inevitable progress, policymakers should ask what social value accrues from commercial entities widely sharing images of children, especially when those children have been induced to upload such images by engineered social pressure and addictive compulsion.

In its rush to head off a public-relations nightmare, Instagram announced in 2021 that all new accounts for users under 16 would default to a "private" setting, meaning only users approved by the account holder can see what the user posts. But any child can switch that setting from private to public with a mere click. After highlighting this measure's technical faults, Senator Blackburn asked the head of Instagram an insightful question: "[H]ave you considered turning off the public option altogether for minor accounts?" This is the right instinct, because it thinks beyond how social media is used to how it is designed. Blackburn's and Blumenthal's subsequent legislation proposes baked-in safety tools for minors and controls for parents. Policymakers should continue to ask how else social media's design can be constrained to protect children.

Such measures would be milder versions of the step many Silicon Valley leaders take in their private lives. The phenomenon of Big Tech figures prohibiting their own children from using their creations has been widely reported; many use their wealth to send their children to expensive schools in which such technology is forbidden. Their rationale is not hard to identify. Facebook's first president, Sean Parker, made it clear enough in 2017 when he noted that the company's founding goal was to "consume as much of your time and conscious attention as possible" through the creation of "a social-validation feedback loop" that exploits "a vulnerability in human psychology." As he added candidly, "God only knows what it's doing to our children's brains." Former Facebook vice president for user growth Chamath Palihapitiya put it even more bluntly in 2017: "The short-term, dopamine-driven feedback loops that we have created are destroying how society works," leading him to proclaim that his own children are "not allowed to use this shit."

Of course, social media is not all bad, even for children; it's fair to acknowledge the various benefits that social media affords young people. Facebook's leaked research shows that the element of social-media use teens report as having the most positive effect on their mental health is following comedy accounts, which they report make them feel better. The second-most positive element is conversations the platform allows them to have — it offers them a way to connect with family and friends. Children also use social media to access essential information or conduct urgent outreach — to find out how to escape an abusive situation, for example, or to reach out to the police or a social worker.

Enjoying uplifting content, engaging with friends and family, and accessing urgent information and assistance are important ways children can use social media. Policymakers need not interfere with them. The relevant question is not whether social media has benefits, but whether it also causes harms, and what measures might protect against those harms while preserving the benefits. Curtailing the ability of Big Tech to display the pictures young people post on social media to the broader public passes this test. The problem isn't the use of technology itself, but the radical transformation of childhood into an online experience mediated by commercial interests and predicated on broadcasting one's life while asking the world to judge it.

HISTORY REPEATING

In the national shock that followed the Granite Mill fire, reform discussions abounded. As historian Marjorie Wood notes, most amounted to debates over how best to spit-shine the status quo by tweaking safety standards for the factories in which children worked. But one Massachusetts resident, writing anonymously to the Boston Globe a few days after the fire, wondered why the status quo had to be accepted as the inevitable future.

The unknown resident suggested that perhaps the problem wasn't inadequate safety measures at all, but rather that "the fourth and fifth stories of a cotton mill are full of children." In contrast to the newspaper editors and politicians carefully debating the finer points of fire safety, the writer floated a simple, almost embarrassingly obvious idea: "Take the children out of the mills." It took a long time, but this was the obvious solution that America eventually — and rightly — agreed upon.

Our most recent technological upheaval has again confronted America with new avenues for child exploitation. As in that earlier era, there will be fights about what action is feasible and appropriate in response. These are fights worth having, sooner rather than later.

Parents have primary responsibility for raising and protecting their children. But within the digital sphere their children now occupy, they cannot do it alone. Light-touch efforts at providing them additional options and information are insufficient. Policymakers will need aggressive tools that will discomfit sophisticated opinion in Washington and the Bay Area but make sense to American parents — parents who are increasingly worried for their children's safety online and frightened by their own powerlessness in the face of Silicon Valley's influence over their children's emotional and social development.

In 1906, while defending his widely popular bill restricting child labor, Indiana Republican senator Albert Beveridge laid out what was at stake: "We cannot permit any man or corporation to stunt the bodies, minds, and souls of American children," he declared. "We cannot thus wreck the future of the American republic." He argued in vain: His bill was quashed by a Congress fearful of action and acquiescent to the prevailing commercial interests.

Once again, policymakers face choices about whether and how they will protect children from new forms of harm emerging from technological change. They should make a better choice this time.

Chris Griswold is the policy director at American Compass. 

