If AI is rewriting the world, AI regulation is the fine print we all forgot to read.
For the better part of a decade, artificial intelligence has moved faster than the laws trying to contain it. What started as a wave of curiosity in academic and research circles has surged into a core driver of industry transformation - from how doctors diagnose rare conditions to how kids are taught to read. But the more consequential the applications, the louder the calls have become for AI regulation.
We are now in the middle of a global reckoning. Regulation is no longer a distant conversation about the future of AI. It is unfolding in real time - in policy documents, courtrooms, boardrooms, and public discourse - with direct implications for builders, businesses and their users.
The European Union: Moving AI regulation from principles to practice
The European Union has taken the lead with the world’s first comprehensive AI law: the AI Act, officially adopted in 2024. The Act introduces a risk-based framework - categorizing AI systems by the potential harm they pose, from minimal to unacceptable. Systems deemed high-risk (such as those used in medical diagnostics, HR screening or public infrastructure) will be subject to strict requirements around data quality, transparency, human oversight and accountability.
Beyond compliance checklists, the AI Act signals a new regulatory posture: proactive, enforceable and deeply attuned to fundamental rights. Violations of certain provisions can result in fines of up to 7% of a company’s global annual turnover.
What’s emerging from the EU is not just a set of rules, but a model. Already, jurisdictions from Brazil to Canada are referencing elements of the AI Act in their drafts. For any company operating internationally - or hoping to - this piece of AI regulation is not just European business. It’s a global precedent.
The United States of America: Fragmented AI regulation, but warming up
The United States has taken a more decentralized route, with multiple agencies and states stepping in where federal action has lagged. The Food and Drug Administration (FDA) has developed frameworks for AI/ML-enabled medical devices. The Federal Trade Commission (FTC) has warned companies against deceptive or biased AI use, citing its authority under consumer protection law. And the Office of Science and Technology Policy (OSTP) released a Blueprint for an AI Bill of Rights, outlining principles around AI safety, privacy and algorithmic fairness - though without legal force.
At the state level, California and New York are exploring their own legislative pathways, especially concerning employment and education.
But things are shifting. In late 2023, President Biden issued an Executive Order on “Safe, Secure and Trustworthy AI”, directing agencies to set new standards for model testing, red-teaming and data governance. While still piecemeal, the momentum toward a more cohesive national strategy is unmistakable.
Global view: Convergence or chaos?
Globally, the picture is uneven. China has implemented specific laws governing recommendation algorithms and generative content, with a focus on political and social stability. The UK has opted for a “pro-innovation” approach, tasking existing regulators (like the Information Commissioner’s Office and Competition and Markets Authority) with overseeing AI based on their existing mandates.
At the intergovernmental level, efforts like the OECD AI Principles and the G7’s Hiroshima Process are trying to create soft alignment, encouraging transparency, human oversight and shared safety practices without imposing binding obligations.
"Policy gives us the frame - but what defines the picture is how intentionally we build within it. That’s where long-term advantage lies," says Rishabh Sood, Founder, GoML.
In other words, there’s growing alignment in spirit - but fragmentation in form. For companies operating across borders, this creates operational friction and compliance ambiguity. One practical solution gaining traction: voluntary adherence to the strictest jurisdiction (often the EU) as a ‘de facto’ global standard.
AI liability: When things go wrong
Liability in AI is a regulatory puzzle still being pieced together. If an AI misdiagnoses a patient, who’s at fault - the hospital, the software vendor or the underlying model provider?
In the EU, liability frameworks are being updated to clarify responsibility, with proposals that would shift part of the burden of proof onto providers in high-risk cases. The US legal system, meanwhile, is seeing early tests of AI-related torts. One emerging case involved an AI chatbot that gave unsafe medical advice during a test scenario; another, a more high-profile lawsuit, accused a generative AI system of defamation for producing false information about a real individual.
"Every engineering decision is now a governance decision. If we design with alignment in mind from the ground up, compliance becomes a byproduct - not a burden," says Prashanna Rao, VP of Engineering, GoML.
Healthcare, insurance and finance are under the microscope. These industries deal with high stakes and historically heavy regulation - so AI tools used in these contexts are likely to face greater scrutiny and demand stronger explainability and audit trails.
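To illustrate what an audit trail can look like in practice, here is a minimal sketch of an append-only record for a single AI-assisted decision. The field names, model identifier, and storage approach are placeholders for illustration, not a prescribed schema or any particular vendor's API.

```python
# Illustrative audit-trail entry for one AI-assisted decision.
# Field names, model IDs, and the storage step are hypothetical placeholders.
import json
import uuid
from datetime import datetime, timezone
from typing import Optional


def log_decision(model_id: str, input_ref: str, output: str,
                 confidence: float, reviewer: Optional[str] = None) -> dict:
    """Build an audit record capturing who/what produced a decision and when."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,        # which model and version produced the output
        "input_ref": input_ref,      # pointer to the stored input, not the raw data
        "output": output,
        "confidence": confidence,
        "human_reviewer": reviewer,  # None if no human was in the loop
    }
    # In practice this would be written to append-only, access-controlled storage.
    print(json.dumps(record))
    return record


log_decision("claims-model-v2", "case/8821", "Refer for manual assessment",
             0.82, reviewer="j.doe")
```

The point is less the format than the discipline: every consequential output is traceable to a model version, an input, and (where applicable) a human reviewer.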
IP and copyright: A legal minefield
One of the most publicly visible flashpoints has been copyright. Major publishers and creators have filed lawsuits against AI companies, arguing that large language models have been trained on copyrighted material without permission or compensation.
A US court has ruled that AI-generated images cannot be copyrighted unless meaningful human authorship is involved - a decision that shook the generative art world. At the same time, courts in Canada and Germany are hearing cases on the legitimacy of training data scraping.
There is no consensus yet on what constitutes “fair use” in the context of AI. Until clearer norms emerge, this remains a legally grey - and risky - zone for developers and deployers alike.
The industry’s push for AI self-regulation
Faced with looming government oversight, many in the AI industry have rallied behind voluntary frameworks. Initiatives like the Frontier Model Forum and the Partnership on AI have issued best practices on AI safety, red-teaming, bias mitigation and model disclosures.
Open-source communities are also advancing their own standards - for example, documentation practices like “Model Cards” for models and “Datasheets for Datasets” for training data.
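As a rough sketch of what that documentation can look like when treated as structured metadata shipped alongside a release, consider the example below. The fields, names, and values are illustrative assumptions, not an official Model Cards schema.

```python
# Illustrative only: a minimal "model card" captured as structured metadata.
# Field names and example values are hypothetical, not a standardized schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)


card = ModelCard(
    model_name="triage-assistant",  # hypothetical model
    version="0.3.1",
    intended_use="Drafting triage notes for clinician review",
    out_of_scope_uses=["Unsupervised diagnosis", "Emergency decision-making"],
    training_data_summary="De-identified clinical notes, 2015-2022 (licensed)",
    evaluation_metrics={"accuracy": 0.91, "false_negative_rate": 0.04},
    known_limitations=["Underperforms on pediatric cases", "English-only"],
)

# Publish the card next to the model artifact so auditors and downstream
# users can see scope, data provenance, and known gaps at a glance.
print(json.dumps(asdict(card), indent=2))
```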
At the same time, self-regulation has its limits. As Sam Altman of OpenAI put it, “We don’t want to be the ones making the rules that apply to us.” His statement reflects a growing realization: voluntary principles help, but legal accountability is inevitable.
What should companies do about AI regulation?
Navigating AI regulation isn’t about waiting for laws to settle. It’s about building systems and strategies that anticipate them. And that means:
- Auditing AI systems for bias, explainability and risk exposure
- Implementing AI guardrails in your systems (see the sketch after this list)
- Mapping operations to multiple jurisdictions' compliance requirements
- Embedding human-in-the-loop and red-teaming mechanisms early
- Engaging with evolving standards bodies and policy forums
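To make the guardrail and human-in-the-loop items above more concrete, here is a minimal sketch of a wrapper around a model call: a pre-check on the prompt and a post-check that routes policy-flagged or low-confidence outputs to human review. The function names, blocked topics, and threshold are assumptions for illustration, not a specific framework’s API or any jurisdiction’s requirement.

```python
# Illustrative guardrail + human-in-the-loop wrapper around a model call.
# `call_model`, the blocked-topic list, and the confidence threshold are
# hypothetical placeholders, not a vendor API or a regulatory rule.
from dataclasses import dataclass

BLOCKED_TOPICS = ("self-harm", "medication dosage")  # assumed policy terms
CONFIDENCE_THRESHOLD = 0.75                          # assumed threshold


@dataclass
class Review:
    answer: str
    needs_human_review: bool
    reason: str = ""


def call_model(prompt: str) -> tuple[str, float]:
    """Placeholder for the real model call; returns (answer, confidence)."""
    return f"Draft answer to: {prompt}", 0.6


def guarded_answer(prompt: str) -> Review:
    # Pre-check: reroute prompts that touch blocked topics before the model runs.
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return Review("", True, f"Blocked topic: {topic}")

    answer, confidence = call_model(prompt)

    # Post-check: low-confidence outputs go to a human instead of the end user.
    if confidence < CONFIDENCE_THRESHOLD:
        return Review(answer, True, "Low confidence; escalate to reviewer")

    return Review(answer, False)


if __name__ == "__main__":
    result = guarded_answer("Summarize this discharge note")
    print(result)  # flagged for review here because of the stub's low confidence
```

The design choice that matters is the escalation path: the system fails toward a human, not toward the user, whenever a check is tripped.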
At GoML, we see this moment not just as a compliance challenge, but as a design opportunity. Regulations are drawing clearer lines - but the quality of what gets built inside those lines still depends on us.
The future of AI will belong to those who not only meet the bar but raise it responsibly.