AI is new — the laws that govern it don’t have to be

On Monday, Virginia Governor Glenn Youngkin vetoed House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act. The bill would have set up a broad legal framework for AI, imposing restrictions on its development and its expressive outputs that would have put the law on a direct collision course with the First Amendment.
The veto is the latest in a series of setbacks for a multistate movement to regulate AI development that originated with a working group assembled last year. In February, that group broke down, further signaling upheaval in a once-ascendant regulatory push.
At the same time, another movement has gained steam. A number of states are turning to old laws, including those prohibiting fraud, forgery, discrimination, and defamation, which have long addressed the same purported harms now attributed to AI when those harms arose from older technologies.
Gov. Youngkin’s HB 2094 veto statement echoed the notion that existing laws may suffice, stating, “There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.” FIRE has pointed to the capacity of current law in previous statements, part of a number of AI-related interventions we’ve made as the technology has come to dominate state legislative agendas, including in states like Virginia.
The simple idea that current laws may be sufficient to deal with AI initially eluded many lawmakers. Now it is quickly becoming common sense in a growing number of states.
While existing laws may not always be applied prudently, the emerging trend away from hasty lawmaking and toward more deliberation bodes well for the intertwined future of AI and free speech.
The regulatory landscape
AI offers the promise of a new era of knowledge generation and expression, and these regulatory developments come at a critical juncture as the technology continues to advance toward that vision. Companies are updating their models at a breakneck pace, epitomized by OpenAI’s popular new image generation tool.
Public and political interest, fueled by fascination and fear, may thus continue to intensify over the next two years — a period during which AI, still emerging from its nascent stage, will remain acutely vulnerable to threats of new regulation. Mercatus Center Research Fellow and leading AI policy analyst Dean W. Ball has hypothesized that 2025 and 2026 could represent the last two years to enact the laws that will be in place before AI systems with “qualitatively transformative capabilities” are released.
With AI’s rapid development and deployment as the backdrop, states have rushed to propose new legal frameworks, hoping to align AI’s coming takeoff with state policy objectives. Last year saw the introduction of around 700 bills related to AI, covering everything from “deepfakes” to the use of AI in elections. This year, that number is already approaching 900.
Texas's TRAIGA, the Texas Responsible Artificial Intelligence Governance Act, has been the highest-profile example from this year’s wave of restrictive AI bills. Sponsored by Republican State Rep. Giovanni Capriglione, TRAIGA has been one of several “algorithmic discrimination” bills that would impose liability on developers, deployers, and often distributors of AI systems that may introduce a risk of “algorithmic discrimination.”
Other examples include the recently vetoed HB 2094 in Virginia, Assembly Bill A768 in New York, and Legislative Bill 642 in Nebraska. While the bills have several problems, most concerning is their inclusion of a “reasonable care” negligence standard that would hold AI developers and users liable if there is a greater than 50% chance they could have “reasonably” prevented discrimination.
Such liability provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. The “chill” of these kinds of provisions threatens a broad array of important applications.
In Connecticut, for instance, children’s hospitals have warned that the vagueness and breadth of such regulations could limit health care providers’ ability to use AI to improve cancer screenings. These bills also compel regular risk reports on the models’ expressive outputs, similar to requirements that a federal court held unconstitutional under the First Amendment in other contexts last year.
So far, only Colorado has enacted such a law. Its implementation, spearheaded by the statutorily authorized Colorado Artificial Intelligence Impact Task Force, won’t assuage any skeptics. Even Gov. Jared Polis, who conceived the task force and signed the bill, has said it deviates from standard anti-discrimination laws “by regulating the results of AI system use, regardless of intent,” and has encouraged the legislature to “reexamine the concept” as the law is finalized.
With a mandate to resolve this and other points of tension, the task force has come up almost empty-handed. In its report last month, it reached consensus on only “minor … changes,” while remaining deadlocked on substantive areas such as the law’s reasonable-care language, which parallels TRAIGA’s.
The sponsors of TRAIGA reached a similar impasse as it came under intense political scrutiny. Rep. Capriglione responded earlier this month by dropping TRAIGA in favor of a new bill, HB 149. Among HB 149’s provisions, many of which run headlong into protected expression, is language providing that “an artificial intelligence system shall not be developed or deployed in a manner that intentionally results in political viewpoint discrimination” or that “intentionally infringes upon a person’s freedom of association or ability to freely express the person’s beliefs or opinions.”
But this new language overlooks a landmark Supreme Court ruling from just last year, which found that Texas and Florida laws imposing similar prohibitions on political viewpoint discrimination by social media platforms raised significant First Amendment concerns.
A more modest alternative
An approach different from that taken in Colorado and Texas appears to be taking root in Connecticut. Last year, Gov. Ned Lamont signaled he would veto Connecticut Senate Bill 2, a bill similar to the law Colorado passed. In reflecting on his reservations, he noted, “You got to know what you’re regulating and be very strict about it. If it’s, ‘I don’t like algorithms that create biased responses,’ that can go any of a million different ways.”
At a press conference at the time of the bill’s consideration, his office suggested existing Connecticut anti-discrimination laws could already apply to AI use in relevant areas like housing, employment, and banking.
Yale School of Management scholars Jeffrey Sonnenfeld and Stephen Henriques expanded on the idea, noting that Connecticut’s Unfair Trade Practices Act would seem to cover major AI developers and small “deployers” alike. They argue that, rather than new legislation, the preferable route would be for the state attorney general to clarify how existing laws can remedy the harms to consumers that sparked Senate Bill 2 in the first place.
Connecticut isn’t alone. In California, which often sets the standard for tech law in the United States, two bills — AB 2930, focusing on liability for algorithmic discrimination in the same manner as the Colorado and Texas bills, and SB 1047, focusing on liability for “hazardous capabilities” — both failed. Gov. Gavin Newsom, echoing Lamont, stressed in his veto statement for SB 1047, “Adaptability is critical as we race to regulate a technology still in its infancy.”
California’s attorney general followed up by issuing extensive guidance on how existing California laws — such as the Unruh Civil Rights Act, California Fair Employment and Housing Act, and California Consumer Credit Reporting Agencies Act — already provide consumer protections for issues that many worry AI will exacerbate, such as consumer deception and unlawful discrimination.
New Jersey, Oregon, and Massachusetts have offered similar guidance, with Massachusetts Attorney General Andrea Joy Campbell noting, “Existing state laws and regulations apply to this emerging technology to the same extent as they apply to any other product or application.” And in Texas, where HB 149 still sits in the legislature, Attorney General Ken Paxton is currently reaching settlements in cases about the misuse of AI products in violation of existing consumer protection law.
Addressing problems
The application of existing laws, to be sure, must comport with the First Amendment’s broad protections. Not all conceivable applications will be constitutional. But the core principle remains: states that are hitting the brakes and reflecting on the tools already available give AI developers and users the benefit of operating within established, predictable legal frameworks.
And if enforcement of existing laws runs afoul of the First Amendment, there is an ample body of legal precedent to provide guidance. Some might argue that AI poses different questions from prior technology covered by existing laws, but it departs in neither essence nor purpose. Properly understood, AI is a communicative tool used to convey ideas, like the typewriter and the computer before it.
If there are perceived gaps in existing laws as AI and its uses evolve, legislatures may try targeted fixes. Utah, for example, passed a statute last year clarifying that generative AI cannot serve as a defense to violations of state tort law: a party cannot claim immunity from liability simply because an AI system “made the violative statement” or “undertook the violative act.”
Rather than introducing entirely new layers of liability, this provision clarifies accountability under existing statutes.
Other ideas floated include “regulatory sandboxes,” voluntary arrangements in which private firms test applications of AI technology in collaboration with the state in exchange for certain regulatory mitigation. The aim is to offer a learning environment where policymakers can study how law and AI interact over time, with emerging issues addressed by a regulatory scalpel rather than a hatchet.
This reflects an important point. The trajectory of AI is largely unknowable, as is how rules imposed now will affect this early-stage technology down the line. Well-meaning laws to prevent discrimination this year could preclude broad swathes of significant expressive activity in coming years.
FIRE does not endorse any particular course of action, but this is perhaps the most compelling reason lawmakers should consider the more restrained approach outlined above. Attempting to solve all theoretical problems of AI before the contours of those problems become clear is not only impractical, but risks stifling innovation and expression in ways that may be difficult to reverse. History also teaches that many of the initial worries will never materialize.
As President Calvin Coolidge observed, “If you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you and you have to battle with only one of them.” We can address those that do materialize in a targeted manner as the full scope of the problems becomes clear.
The wisest course of action may be patience. Let existing laws do their job and avoid premature restrictions. Like weary parents, lawmakers should take a breath — and maybe a vacation — while giving AI time to grow up a little.