AI Regulation Is Coming — Here's What It Actually Means
The EU AI Act is live. US regulation is brewing. Here's what builders and users need to know.
Mike Smith
@MikeSmithShow
What's Already Law
The EU AI Act is the most comprehensive AI regulation globally. It categorizes AI systems by risk level and imposes requirements accordingly. High-risk systems (hiring, credit scoring, law enforcement) face strict obligations. General-purpose AI models face transparency requirements.
The practical impact for US companies: if you serve EU customers, you're subject to the AI Act. The extraterritorial reach means most global companies need to comply.
What's Coming in the US
US regulation is fragmented — state-level AI bills are multiplying while federal legislation stalls. California, Colorado, and New York have the most aggressive AI-specific regulations. Federal guidance from the White House Executive Order sets direction but lacks enforcement teeth.
The likely trajectory: state patchwork forces federal action to create consistency. Expect comprehensive federal AI regulation within 2-3 years, probably modeled on a lighter version of the EU approach.
Impact on Builders
If you're building AI products, start thinking about:
- Transparency: can you explain how your AI makes decisions?
- Data governance: where does the training data come from?
- Testing and evaluation: can you demonstrate your AI works as intended?
- Human oversight: is there a human in the loop for consequential decisions?
These aren't just compliance requirements — they're good engineering practices. Companies that build with these principles from the start will have an easier time when regulation arrives.
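One concrete engineering habit these principles suggest is an audit trail for consequential model decisions. The sketch below is a minimal, hypothetical illustration (the `DecisionRecord` structure, field names, and the hiring example are my own assumptions, not anything mandated by the EU AI Act): each decision is logged with its inputs, rationale, and data sources, and high-risk calls are flagged for human review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one consequential AI decision.
# Fields map loosely to the practices above: rationale -> transparency,
# data_sources -> data governance, needs_human_review -> human oversight.
@dataclass
class DecisionRecord:
    model_version: str
    inputs: dict
    output: str
    rationale: str
    data_sources: list = field(default_factory=list)
    needs_human_review: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list[DecisionRecord] = []

def record_decision(record: DecisionRecord) -> DecisionRecord:
    """Append the decision to an auditable log and return it."""
    AUDIT_LOG.append(record)
    return record

# Example: a hiring-screen decision (a high-risk domain under the AI Act)
# gets logged and routed for human review rather than auto-finalized.
rec = record_decision(DecisionRecord(
    model_version="screener-v2",
    inputs={"years_experience": 4},
    output="advance",
    rationale="meets minimum experience threshold",
    data_sources=["applicant_form"],
    needs_human_review=True,
))
```

The point is not this particular schema but the discipline: if every decision already carries its inputs, rationale, and provenance, producing the documentation a regulator asks for becomes an export, not a rebuild.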
Impact on Users
For AI users — which is increasingly everyone — regulation means more transparency about when you're interacting with AI, more control over how your data is used for training, and more accountability when AI systems cause harm.
The practical impact will be subtle at first: more disclosure notices, more consent forms, and AI-generated content being labeled. Over time, it may limit certain AI applications in sensitive domains.
My Take
Some regulation is necessary and good. Unconstrained AI in hiring, credit, and law enforcement creates real harm. The question is whether regulators can write rules that prevent harm without killing innovation.
The EU approach is too heavy-handed for my taste — it imposes compliance costs that favor large incumbents over startups. A lighter US approach that focuses on outcomes (preventing harm) rather than process (how you build) would be better for innovation while still protecting people.