GUARD Act Puts Policymakers, Not Parents, in Charge of Kids’ AI Use

Jennifer Huddleston and Juan Londoño

The Senate Judiciary Committee is considering the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. Introduced by Senator Josh Hawley (R‑MO), the bill would restrict or even prohibit minors’ access to various artificial intelligence (AI) products. Like age-verification proposals for online services, this bill is unlikely to survive constitutional scrutiny. But beyond its constitutional defects, Sen. Hawley’s approach endangers users’ privacy, limits parental rights, and locks minors out of beneficial uses of AI.

AI tools, including chatbots, can benefit young people in many ways: serving as online tutors, helping them practice a foreign language, or developing an array of skills. AI tools have also become ubiquitous in many products, doing everything from providing tech support to helping customize a burrito (and perhaps being able to write code in the process). A February 2026 survey by the Pew Research Center found that over half of US teens use chatbots for help with schoolwork. The GUARD Act would prevent those under 18 from accessing any of these products.

Among the most concerning aspects of the bill is its lack of a parental consent option that would allow a child to use these products. Parents would no longer have the choice of when or how their child might learn to use this technology. Not only does this approach limit parental choice, but it defaults to the most restrictive option: no access at all. Restricting parental choice in this manner fails to consider both the unique values of every family and the potential for AI chatbots to improve the lives of many young people, including those with disabilities such as autism.

Age verification laws don’t just impact parents and children; they impact all users of a technology. Verifying age requires adult users to share sensitive information, such as a government ID or biometric data. This is effectively not only age verification but identity verification, raising significant concerns about its impact on anonymous speech. The eradication of online anonymity undermines core values that have long been understood as necessary for a free society. Furthermore, such a precedent could be exploited by governments to target individuals or censor access to information.

Increasingly, we have seen evidence that the privacy risks associated with age verification mandates are very real. The enactment of age verification mandates in Europe, for example, has exposed European users to data breaches in which their personal identification or biometric data has fallen into the hands of bad actors. In October 2025, the IDs of 70,000 users of the gaming-focused communications platform Discord were leaked after Discord’s age verification provider, 5CA, experienced a data breach.

Similarly, when the European Union released its proprietary age-verification mobile app in April 2026, reports indicated that cybersecurity experts breached it within two minutes. By forcing all users to submit valuable personal data, regulators create additional risk vectors for users of all ages.

Perhaps the good news is that the GUARD Act would almost certainly be found unconstitutional. The bill’s presumptions about the harms of AI are not sufficiently proven, nor do they establish an adequate government interest. The Supreme Court has struck down similar attempts to restrict access to violent video games and the early internet on First Amendment grounds. The legislation is also broad in the types of technology it covers and places significant burdens on all users, further emphasizing the potential speech impacts that the courts would consider.

Different families may have different views on when a child should or shouldn’t access any technology. That decision appropriately belongs with parents, not policymakers. The proposal also misses a key point: broadly denying minors access to all sorts of technology is simply bad policy, and it typically backfires. Cutting kids off from these resources deprives them of the benefits these services offer while leaving them less prepared to use AI tools in the future, whether in the workplace or in further education. Nor does it provide adequate options for families who want to embrace these technologies or have specific needs for them.

The debate about kids, teens, and technology is ongoing, but the GUARD Act is a solution that would sacrifice an array of freedoms while making no one safer.