Recent surveys show that legal teams plan to spend big on GenAI next year, even as many jurisdictions (notably the EU) pursue expansive and still-forming regulatory plans for artificial intelligence. Below, we outline some already-enacted laws to keep an eye on as you map out your AI adoption strategy.
As we’ve seen before, the European Union (EU) is ahead of other jurisdictions when it comes to regulation, and the EU Artificial Intelligence Act is the one regulation to overrule them all. The EU AI Act is in force across all EU member states and will shape the governance of AI in the EU and beyond. It sets forth a legal framework for developing, marketing, and using AI systems and establishes four "risk tiers" for AI software. It also makes explicitly clear that both providers and users of AI systems are responsible for how those systems are developed and used, so you can't play the "ChatGPT made me do it" card if an AI agent you bought harms your customers.
If you plan to use AI for certain law enforcement, employment, or educational activities, you're automatically in the high-risk tier. AI also can't be used to criminally profile individual users or the EU public at large. ("Profile" in this context means using AI to predict an individual's propensity for future criminal behavior.) And finally, content that constitutes a deepfake must be accompanied by a disclosure that it was created using AI. A deepfake is defined as "images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media."
We'll see how these broad definitions are interpreted in court, so this is the primary regulation to watch.
Colorado, Tennessee & Utah
Other than broad executive orders, the US federal government has little to no substantive AI regulation as of yet. However, it is worth noting that the majority of enacted and proposed federal and state-level AI legislation in the US regulates government use of artificial intelligence, particularly to prevent profiling. Three states (Colorado, Tennessee, and Utah) have enacted notable laws governing AI use in the private sector.
Colorado's Consumer Protections for Artificial Intelligence Act requires that anyone deploying an AI system that interacts with consumers disclose that fact wherever it would not be obvious to a reasonable person that they are interacting with an AI system. (Bots can't pretend to be human; deepfakes have to be labeled as deepfakes.) Deployers of high-risk AI systems must also perform an impact assessment.
The Utah Artificial Intelligence Policy Act applies to companies subject to Utah's consumer protection laws and requires transparency and accountability in the use of generative AI applications and AI agents. Put simply, the deployer of an AI agent is liable for the agent's violation of applicable consumer laws in the same way it would be if a human employee committed the violation. If an AI agent acting as a customer service rep hallucinates a sale price or return policy, you may be bound by that statement.
The Tennessee Ensuring Likeness Voice and Image Security Act ("ELVIS Act") makes it a civil offense to create a deepfake of a performer without the performer's or their estate's explicit consent.
And these are just the major laws already on the books. Stay tuned for part two, where we discuss AI laws under consideration that you'll want to stay on top of. If you want to adopt an AI contract analysis tool to help you stay compliant with these regulations, contact LinkSquares today.