Why the EU Made a Law About AI
What Can Go Wrong With AI
Before we look at the law itself, let's understand why it exists. Here are real things that have happened:
Amazon built an AI hiring tool that discriminated against women. Why? Because it was trained on 10 years of resumes — and historically, more men had been hired. The AI learned that being male was a positive signal. Amazon scrapped the tool.
China has rolled out social credit initiatives in which algorithmic systems rate citizens based on their behavior — paying bills on time, jaywalking, social media posts — and then restrict freedoms based on the score. Low score? No train tickets.
Facial recognition systems have been shown to misidentify people with darker skin at rates 10 to 100 times higher than people with lighter skin. This has led to wrongful arrests.
Deepfake technology has been used to impersonate politicians and spread misinformation. Predictive policing algorithms have led to over-policing of minority neighborhoods, creating a feedback loop of discrimination.
The EU's Track Record
The EU has regulated technology before, and the results set the global standard.
In 2018, the General Data Protection Regulation (GDPR) became enforceable. It gave people control over their personal data. Initially, companies complained it was too strict. Today, countries around the world have copied it. The GDPR is considered the gold standard for data protection.
Now the EU is doing the same thing with AI. The EU AI Act was adopted on June 13, 2024, and its main obligations apply from August 2, 2026. Just like with the GDPR, the EU aims to set the global standard — protecting people's rights while giving companies clear rules to follow.
The pattern is the same: protect citizens, create clear rules, enforce with serious fines. Companies that complained about GDPR eventually benefited from having clear rules. The same will likely happen with the AI Act.
What Is the EU AI Act?
Here are the basics:
Full name: Regulation (EU) 2024/1689 — commonly called the "Artificial Intelligence Act" or "AI Act". It was adopted on June 13, 2024, published in the Official Journal of the EU on July 12, 2024, and entered into force on August 1, 2024.
The main obligations take effect on August 2, 2026. The regulation has 113 articles plus 13 annexes. It is directly applicable in all 27 EU member states, meaning no national implementation is needed: unlike EU directives, which each country must transpose into its own national law, a regulation applies identically everywhere.
Critically, it applies to anyone placing AI on the EU market or using AI in the EU, regardless of where the company is based. A US company selling AI software to European customers must comply. A Chinese company whose AI is used by EU citizens must comply.
Who Does This Apply To?
If you are reading this, it probably applies to you. The EU AI Act applies to:
Companies that develop AI systems (called "providers" in the Act) — anywhere in the world, as long as the AI is used in the EU. Companies that use AI systems in their operations (called "deployers") — if they operate in the EU. Companies that import AI into the EU. Companies that distribute AI systems.
There are a few exceptions: military and national defense uses, purely scientific research, and personal non-professional use. So a student using ChatGPT for homework is not covered. But a company using ChatGPT to handle customer service is.
The key point is that both the maker of the AI and the user of the AI have obligations. Building AI creates responsibilities. Using AI also creates responsibilities.
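The applicability rules above can be sketched as a toy decision function. This is only an illustration of the logic described in this section, not legal advice; the role names, field names, and exemption labels are ours, not the Act's terminology.

```python
# Toy sketch of the EU AI Act applicability rules described above.
# All names here are illustrative, not legal categories.
from dataclasses import dataclass

# Exceptions named in the Act's scope rules (simplified labels)
EXEMPT_PURPOSES = {"military", "scientific_research", "personal_use"}
# Roles the Act covers: providers, deployers, importers, distributors
COVERED_ROLES = {"provider", "deployer", "importer", "distributor"}

@dataclass
class AiActivity:
    role: str           # e.g. "provider", "deployer", "importer"
    in_eu_market: bool  # is the AI placed on the EU market or used in the EU?
    purpose: str        # e.g. "commercial", "military", "personal_use"

def ai_act_applies(activity: AiActivity) -> bool:
    """Rough first-pass check; real compliance needs legal review."""
    if activity.purpose in EXEMPT_PURPOSES:
        return False
    return activity.role in COVERED_ROLES and activity.in_eu_market

# A US company selling AI software to EU customers: covered.
us_vendor = AiActivity(role="provider", in_eu_market=True, purpose="commercial")
# A student using ChatGPT for homework: personal, non-professional use.
student = AiActivity(role="deployer", in_eu_market=True, purpose="personal_use")
```

Note how location of the company never enters the function: only whether the AI reaches the EU market matters, which is exactly the extraterritorial reach described earlier.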
Interactive Exercise
Does the EU AI Act apply in each of these situations? Click to reveal.