Courts Are Letting Social-Media Platforms Get Away with Manipulating Children


Published September 24, 2024

National Review Online

Earlier this month, Judge Robert Shelby of the U.S. District Court for the District of Utah declared unconstitutional the Utah Minor Protection in Social Media Act, which the state legislature passed this spring. We believe the ruling is an act of judicial activism, one resting on reasoning that is flawed both legally and practically.

You don’t need social scientists to tell you something is wrong. Parents can look around and see for themselves: Kids aren’t talking to each other or the adults in their lives. Instead, they are staring down at screens, lost in their own virtual worlds. This behavior is not accidental; nor are the deleterious effects that social scientists such as Jonathan Haidt, author of The Anxious Generation, and Jean Twenge have documented. Social media have conducted a radical experiment on our kids. Algorithms and addictive design features determine what our kids see and learn, with no parental involvement or oversight. The result has been major spikes in the rates of teen anxiety, depression, suicide, and self-harm.

Many states, Utah among them, have stepped up to help parents free their children from the addictive design and harmful effects of social media.

Utah’s law would have required platforms to determine the age of their users. For minor users, the platforms would then have been required to enable certain protective default privacy settings and to disable addictive features such as autoplay, infinite scroll, and push notifications. Each of these is a reasonable regulation that would help make social media less addictive for kids and protect them from dangerous online predators.

So reasonable, in fact, that Meta just announced it has decided on its own to do some of the very things Utah’s law was trying to compel, such as determining the age of users and automatically defaulting minors’ accounts to certain privacy settings. Utah’s law is hardly burdensome on platforms, nor is it unworkable.

However, the district court in Utah struck it down.

As an initial matter, Judge Shelby ruled that when social-media companies transmit, feature, or promote children’s speech, it becomes the companies’ own speech, protectable by the First Amendment. In other words, the platforms express themselves in constitutionally protected ways by using our children. It seems unlikely that James Madison thought the First Amendment protected social-media companies’ right to use our children’s posts and personal, often intimate, expressions for their own profit.

The judge, not Madison, got it wrong. Infinite scroll, autoplay, and push notifications are not a platform’s speech; they are product-design elements. Shelby wrote that Utah’s law “restricts social media companies’ abilities to collage user-generated speech into their ‘own distinctive compilation[s] of expression.’” But these companies are not expressing themselves or conveying their own viewpoints by assembling a never-ending content feed. What particular message does an infinite scroll send? What are the companies saying by having one video play automatically after another? Autoplay is a feature, not a message.

Shelby also ruled that the Utah law faced “strict scrutiny,” the highest degree of constitutional review, because the law imposes content-based restrictions on speech. In other words, he found the law targeted certain types of content.

That is a strange determination. The Utah law targets ways of presenting content, such as autoplay, not specific ideas or topics. It is a classic “time, place, and manner” restriction and should therefore have received more deferential judicial review.

Shelby went on to say that the law targeted a particular type of content, namely the “‘social’ subject matter of the material these platforms disseminate.” This is absurd. Social media convey communications about all kinds of content: religion, politics, sports, art, and how the neighbors’ kid did at the state soccer tournament. All communication is inherently social.

Applying strict scrutiny, Shelby then found that Utah had no compelling interest to justify the law. He waved away the mountain of social-science research documenting the harms social media are causing. One wonders what sort of harm Utah’s children would need to endure to meet the judge’s threshold for justified legislation.

Finally, the judge found that the law is not “narrowly tailored” because Utah has not shown that existing parental controls are an inadequate alternative. Shelby wrote, “Other methods exist to advance the goal of protecting children on the internet, including parental controls and web filtering technology.”

It seems that Shelby is not aware of how parental controls work, or rather, how they don’t. While other methods, including parental controls, do indeed exist, they are woefully inadequate to the problem. No control or filter can mitigate the design features of these platforms, such as aggressive algorithms, recommendations, and autoplay, that are inherently harmful to children’s developing brains. Social-media platforms do not grant third-party controls or filters access to their apps. The most that any external control software can do is set time limits, and even those limits are not entirely effective, since kids find workarounds, toggling a VPN on and off, switching airplane mode on and off, and using other such hacks to reset the limits. As a result, parents cannot meaningfully monitor what their children are doing on the platforms or filter out dangerous content within the apps. Parents need help from lawmakers.

America’s Founders did not intend for the First Amendment to be weaponized against parents in their fundamental role as protectors of their children. Nor did they intend that it shield huge tech platforms’ use of America’s kids for their own “expression.” Social media have created a generation in pain, and that is a genuine civilizational crisis. We need common sense and constitutional faithfulness from our judges to right these wrongs.


Clare Morell is a fellow at the Ethics and Public Policy Center, where she directs EPPC’s Technology and Human Flourishing Project. Prior to joining EPPC, Ms. Morell worked in both the White House Counsel’s Office and the Department of Justice, as well as in the private and non-profit sectors.
