Published February 17, 2022
With Congress unlikely to pass meaningful Big Tech reform anytime soon, and no consensus on how to reform Section 230 of the Communications Decency Act or tackle antitrust reform, any solutions to the harms of Big Tech for the foreseeable future are going to come from the states. I would thus like to offer four possible approaches states can take to address one of the main threats posed by Big Tech companies to families: the harm to children, particularly through the distribution of obscene and dangerous content.
1. Taxes on commercial websites that are conditioned on effective measures to protect children, particularly from obscene and harmful content.
As Adam Candeub and I have written, states could use their power of taxation to encourage websites and social media platforms to adopt child-friendly policies. As a general rule, states have been hesitant to tax social media firms. A properly written state tax, however, could avoid the constitutional and legal obstacles of the Internet Tax Freedom Act (ITFA). Significantly, states could impose the tax obligation only on sites or platforms that have not put effective measures in place to protect children from obscene and harmful content.1
The constitutional power of states to impose taxes is limited by requirements from the Complete Auto Transit case.2 However, several types of taxes could satisfy these requirements. A state could impose a tax on revenue associated with advertisements read within or aimed at the particular state, or a state could impose a per subscriber tax. Or a state could charge major interactive service providers a quarterly fee on their active in-state users.
One last obstacle to overcome is that ITFA prohibits states from taxing “internet access,” which encompasses most online platforms. Fortunately, there is an exemption: ITFA expressly permits state taxes that fund universal service programs. Provided a state has such a program—the overwhelming majority do—taxes on social media sites could be directed to these programs, which expand broadband access and other communications services.
Most importantly, the tax should be made conditional upon a firm’s adoption of an effective system to protect minors from obscene and harmful content. A firm with an effective system in place would be exempt from the tax, creating a strong incentive for companies to put robust protections for children in place.
2. Unfair or Deceptive Trade Practices Actions (“Little FTC Acts”)
One way to address Big Tech abuses would be for states to bring unfair and deceptive trade practice actions (an approach articulated elsewhere by Adam Candeub) to hold platforms accountable for abusively marketing to children. Under this approach, state attorneys general or agencies would take action against platforms’ unfair or deceptive trade practices using their states’ “Little FTC Acts.”
The Federal Trade Commission Act prohibits “unfair or deceptive acts or practices in or affecting commerce.”3 More than 40 states have laws that mirror the FTC Act’s protections, the so-called “Little FTC Acts.” These acts allow the states, like the federal government, to take specific action to protect children. We know that social media and internet platforms target and market to children. Their enticement of children to engage on social media through marketing, without parental approval, could be deemed an unfair trade practice under current “Little FTC Acts.” Alternatively, a state legislature could amend its law to make explicit that it is an unfair trade practice for social media companies to form contracts with minors without verifiable parental consent.
States could thus use their Little FTC Acts, perhaps with some amendments, to prohibit tech platforms from abusively marketing their goods to children.
3. Enact broadband bills that require companies to add a default pornography filter to their internet services (or prioritize companies that do).
States could regulate pornography at the Internet Service Provider level by passing a law requiring ISPs in their state to provide, by default, a version of the Internet filtered of indecent content, while allowing adult users to opt in to an unfiltered version. The ISP would be required to set up an opt-in system using a ratings scheme that offers differently filtered versions of the Internet, with the pornography-free setting as the default and the unfiltered version available only to users 18 and older.
Last legislative session in Texas, House Bill (HB) 5 created a Broadband Expansion Office tasked with awarding federal money to contractors who agree to expand internet access to underserved areas. An amendment offered by Rep. Jeff Cason (R-Bedford) would have prioritized contracts for companies that agree to add a default pornography filter to their services: “The office shall … prioritize an applicant that the broadband provided by the applicant will maintain a program to, by default, block access to pornographic or other obscene materials.” While the final version did not contain this amendment, it serves as a possible model for other states to use or for Texas to try to pass again.
4. Pass age-verification laws
Finally, state legislatures could pass laws requiring interactive computer services that are in the business of creating or hosting obscene content, or other content harmful to minors, to implement age-verification measures on their platforms or websites to ensure minors cannot access them. Such verification measures could include adult identification numbers, credit card numbers, bank account payments, driver’s licenses, or other identification mechanisms. The law could also impose a civil penalty for any violation, with each day of a violation constituting a separate violation. It could further include a private cause of action, or perhaps a class action, as an enforcement mechanism, allowing parents, for example, to sue for damages for the exposure of their children to dangerous material.
This approach does present some constitutional risks. In Ashcroft v. American Civil Liberties Union (2004), the Supreme Court blocked enforcement of a similar federal age-verification requirement for internet sites on the grounds that filtering was a more effective and less restrictive means than age verification. However, given that filters in their current forms have not been especially effective at limiting the availability of pornography, and given the growing acceptance of paywalls and other forms of restricted access, courts may be willing to revisit this conclusion.
All four of these approaches offer states a viable path forward for taking effective action now to hold Big Tech accountable for its egregious harms against children and families.
Clare Morell is a policy analyst at the Ethics and Public Policy Center, where she works on the Big Tech Project. She worked in the White House Counsel’s Office and the Justice Department during the Trump administration.
1. The landmark Supreme Court case, South Dakota v. Wayfair, Inc., 138 S. Ct. 2080, 585 U.S. ___ (2018), eliminated the requirement that an entity must have a physical presence in the state as a condition to require such entity to remit taxes on the sale of goods and services sold therein.
2. Complete Auto Transit, Inc. v. Brady, 430 U.S. 274 (1977). The case holds that “state taxes will be sustained so long as they (1) apply to an activity with a substantial nexus with the taxing State, (2) are fairly apportioned, (3) do not discriminate against interstate commerce, and (4) are fairly related to the services the State provides.”
3. 15 U.S.C. § 45(a)(1).