Keeping Kids Safe Is Good but Policy Proposals Are Still Bad for Speech, Innovation, and Kids’ Safety

David Inserra, Jennifer Huddleston, and Christopher Gardner

When it comes to kids online, policymakers often have the best of intentions. Even if they radically disagree on what’s good for children, it is fair to say that most policymakers want to do something they believe will help and protect kids. 

But good intentions are not enough to make good policy. The House Energy and Commerce Committee recently passed a package of bills centered around the Kids Internet and Digital Safety Act (KIDS Act) and the App Store Accountability Act (ASAA). Sadly, these bills fail to keep kids safe and threaten online speech for everyone. 

KIDS Act

The KIDS Act combines several youth online safety-focused bills into a single proposal to create new requirements for how platforms treat children. However, these proposals would have a far more sweeping impact, reaching all users rather than just minors.

Proponents of policy action often claim that both parents and the law support this approach. The reality is far more nuanced. The recent polling that advocates point to shows overwhelming support for age verification only in relation to accessing sexually explicit content. But the proposed policies would require age verification far more broadly for content or entire categories of websites, like social media. Again, we can learn from what has happened elsewhere that broad categories will be interpreted broadly. For example, in the UK, the “harmful to minors” standard resulted in the removal of content on the wars in Gaza and Ukraine, as well as age-gating on Spotify.

While some proponents point to similar state laws, many of these are currently enjoined due to their impact on speech. In FSC v. Paxton, the Supreme Court held that an age-verification requirement for pornography is subject to intermediate scrutiny, given its precedent on obscenity and children. This holding was premised on the Court’s existing jurisprudence regarding obscenity, not on age verification for speech more generally. More recently, while the Supreme Court declined to reinstate a preliminary injunction against a broad Mississippi law requiring age verification for social media, Justice Kavanaugh noted, “In short, under this Court’s case law as it currently stands, the Mississippi law is likely unconstitutional.” As Rep. Lori Trahan noted during the markup, laws that are struck down protect no one. Existing jurisprudence makes that fate highly likely for all but the most narrowly tailored age-verification laws.

But the KIDS Act has several elements that merit consideration beyond its age verification provisions.

For example, within the KIDS Act is a section known as the Kids Online Safety Act (KOSA) that also requires all social platforms of all sizes to create and enforce policies against violent and harassing speech, sexual, exploitative, and financially deceptive content, and content around drugs, tobacco, alcohol, and gambling. The main issue with this provision is that it raises significant First Amendment concerns by mandating that platforms adopt certain content policies. 

Congress could not, for example, force newspapers to take down op-eds that are deemed too violent for kids or purge stories involving alcohol or drug sales for the sake of the children. Similar bills by the states that have mandated policies against hate speech, for example, have been struck down for the same reason. And, putting aside the constitutional issues, in practice, this provision will result in companies dramatically overremoving content from their platforms to avoid legal liability. Such collateral censorship will be even more injurious to the online speech of adults and children. 

KOSA also requires a series of new design features that platforms must implement for minors using their platforms: limits on who can contact them, limits on “compulsive” features, content controls and defaults for minors, various parental controls, and more. These sections are framed as mere design requirements, but they are fundamentally based on the types of content that policymakers are worried about. As David has written before, no policymakers would be demanding these changes if kids were primarily using social media to talk about gardening or to watch educational videos, because such content is considered benign or even virtuous. 

Throughout the bill markup, members of the committee repeatedly noted that their concerns stemmed from the content’s harmful or immoral nature. Chairman Guthrie himself opened the hearing by arguing that “algorithms amplify addictive, harmful content.” And when Congress tries to regulate “harmful content,” aka speech, the First Amendment will rightly be a barrier to government censorship.

The Safe Messaging for Kids Act, included in the broader KIDS Act, would place limits on the kinds of direct messaging that platforms can offer to minors. But if an adult doesn’t wish to provide identification to verify their age, they, too, will have their messaging tools limited. Unfortunately, this section includes a potential threat to virtual private network (VPN) usage by requiring messaging services to take action to prevent circumvention of these limits. 

As with messaging apps, the KIDS Act also considers video games in the Safer GAMING Act. The bill requires games to have controls and limits over communications that happen within the game and default to the most restrictive settings. This means adults who don’t want to verify their age by providing photo IDs, biometrics, financial records, or similar proof would be locked into the “kid safe” settings, effectively making them unable to communicate with others they are playing games with. Multiplayer gaming is an incredibly common feature of modern video games and can even be useful to build community and connection. While Rep. Kean noted that it was “alarming” that “70% of teens report playing online games with strangers at least weekly,” that statement actually shows a lack of understanding of modern video games. Most avid gamers regularly join together with players from across the internet to defeat a game boss, do battle with other players, or build an online world. Whether it is trash-talking your opponents or coordinating with members of your team, playing and communicating with others is a major part of most games and is even seen in the growing e‑sports industry. 

Of course, given growing concerns about young people and AI, chatbots are also part of the KIDS Act. The SAFE BOTs portion of the proposal adds several new requirements for AI chatbots when communicating with minors. They must not claim to be licensed professionals (unless they are), must regularly disclose that they are an AI rather than a person, must provide suicide and crisis hotline information, and must advise users to take a break after three hours of use. Chatbot providers are also required to develop and enforce reasonable policies to prevent their bots from promoting gambling or illegal substances or providing minors access to sexual material.

While the proposed disclosure requirements are fairly simple to implement, the bill’s requirements for chatbot providers introduce significant enforcement uncertainty and may run afoul of the First Amendment. As with KOSA, forcing platforms to maintain certain policies regarding certain types of content can constitute a restriction on protected speech. This encourages companies to over-moderate on topics where chatbots could be useful, including helping a struggling teen or adult. Rather than attempting to carefully tune models to handle and support complex conversations with those who are in crisis, chatbot providers may simply choose to avoid liability and shut down on someone with nowhere else to turn, making a bad situation worse. Many companies will simply choose not to take the risk of engaging, leaving minors and adults who could benefit from chatbot tutors or other positive interactions worse off.

There are other provisions as well. Notably, the KIDS Act concludes with a series of studies on various online harms, as well as several education and partnership provisions. If conducted fairly and rigorously, the studies could yield real knowledge about this topic. The education and partnership efforts may also help parents, kids, and other private sector actors tangibly navigate and prevent online harms. If only policymakers had focused more on education and looked for ways to leverage private sector and civil society expertise, then kids could learn how best to navigate online spaces, and parents could be empowered. Strikingly, there is also nothing in these bills to find and punish the actual criminals and predators who are doing real harm to children. 

It is also worth noting that across these proposals, there was consistent language that preempted some significant degree of state legislation. The bill also limited enforcement of these provisions primarily to the Federal Trade Commission and, secondarily, to state attorneys general, rather than including a private right of action for individuals to sue. These issues have come up in a variety of contexts before, including data privacy and AI. While there are, of course, specific nuances to the debate in the context of youth online safety legislation, the same general discussion about the potential impact of the cost of litigation on small businesses, the disruption of a state patchwork, and the potential abuse of power by the administrative state without clear guardrails remains similar.

App Store Accountability Act 

The ASAA is an age-verification bill that requires users of app stores to verify their age before purchasing apps. While proponents of age verification measures promote this bill as being simpler and safer than age verification requirements for each website or app, this approach still raises the same basic data privacy and speech concerns. Age verification is fundamentally flawed because it chills anonymous speech, weakens users’ privacy, and incentivizes users to circumvent the law, including by potentially turning to risky tools or services. This does not change depending on the level at which the verification occurs. At the heart of this legislation are provisions requiring all users to be treated and restricted as children unless they are verified as otherwise. This raises significant questions about speech and privacy for all users, as well as for the kids and teens such laws intend to protect.

Anonymous and pseudonymous speech have long defined human expression. Whistleblowers, government critics, journalists, and activists, individuals in vulnerable or dangerous situations, or just those who are very private or security-conscious, all benefit from anonymous speech. But if your ability to speak on a social media or direct messaging app requires you to first identify yourself as someone over the age of 18, then your speech really isn’t anonymous. Your identity is one data breach or hack away from being made public. And since everyone will be required to age-verify to use any app, that’s a lot of identity information in the hands of app stores, making them a prime target for hackers. 

A prominent example of such vulnerability was raised by Rep. Ocasio-Cortez, who cited the recent breach of online communication platform Discord that revealed the personal information of about 70,000 users. Moving this verification to be per app store rather than per website does not change the reality that this same information is being collected and shared with those apps. 

As a result, those who want to use an app to speak may be wary of doing so if their identity can later be unmasked, chilling anonymous speech. Those in potentially dangerous situations, such as an individual escaping an abusive relationship or a whistleblower, may not want to use these tools because they don’t want to (or in some cases cannot) provide their identity information. Others may want to use an app regarding potentially sensitive topics like sexuality or mental health, but are afraid that information could be weaponized were it to be exposed. 

Even when “successful,” age verification can actually make those whom it is attempting to protect less safe. As we’ve seen in places like Australia and the UK, kids and teens may turn to darker corners of the web with fewer parental controls or use less-than-reputable VPNs to access the desired entertainment or information. Users may end up less safe and secure as a result of age verification.

Moving verification to the app store level also creates new problems. First, it adds complexity to what happens when multiple users share one store on a gaming device or tablet. It also raises questions about what responsibilities fall on general-purpose apps, such as a weather app, a budgeting app, or even a calculator, which may now face new expectations to know when a user is underage and take additional steps. And more generally, it restricts access to every single app covering a vast range of topics and purposes, not just those that might face similar restrictions in the offline world, like gambling or pornography. 

As mentioned earlier, the Supreme Court upheld age verification only for pornography under intermediate scrutiny, citing its precedents on children and obscenity. App store age verification is far broader and more far-reaching in its impact. It is akin to a law restricting every American’s access to all brick-and-mortar stores in the same shopping center as a liquor store unless they show sufficient evidence to identify themselves, all to prevent minors from potentially buying something dangerous. And restricting app stores is even more constitutionally problematic because it not only restricts access to goods, but also to many apps that serve as media and speech tools, directly infringing on the expressive rights of all Americans.

Age verification or estimation requirements also put app stores in an extremely tenuous position and could lock out competition. To achieve an acceptable standard of cybersecurity, platforms will have to invest significant time and money to ensure that all transactions are cryptographically protected. Smaller platforms will not be able to make this investment effectively, forcing them to rely on larger, more established tech companies. This would result in a highly centralized set of verification systems. And because these systems will be handling millions of individuals’ sensitive personal information, they are effectively gold mines for cybercriminals.

A Mistaken Approach

The approach to online safety taken by the ASAA and the KIDS Act is deeply flawed, even if well-meaning. These bills burden vast swaths of online speech and activity for adults and minors alike, threaten online privacy, weaken innovation, and fail to do enough to empower parents and children to face the modern world. Sadly, only a few members of the committee discussed the glaring constitutional challenges of these bills.

An additional bill, Sammy’s Law, is likewise aimed at improving online safety but is more specifically focused on messaging access; it will be discussed in a separate piece. Meanwhile, a bill making numerous changes to the existing Children’s Online Privacy Protection Act was tabled after a different version passed on the Senate floor. Again, this is a distinct approach that will be discussed separately.

While the committee may feel that their good intentions are enough, the many problems with the legislation make it unfit for the challenges at hand.