Is My AI System High-Risk Under the EU AI Act? Here's How to Actually Tell
Stop guessing whether your AI is high-risk. This is the same framework regulators use, broken down without the legalese. Eight steps, five minutes.
Three weeks ago I was on a call with a CTO at a Berlin-based HR tech company. They had built an AI tool that ranks job candidates based on CVs and cover letters. Their position was that it wasn't high-risk because, in their words, "we just provide rankings, the recruiter still makes the final decision."
They were wrong. Their system is unambiguously high-risk under the EU AI Act, and they had nine months to fix that before the August 2, 2026 enforcement deadline.
This kind of misclassification is everywhere. Companies look at the regulation, find a clause that seems to let them off the hook, and miss the line two paragraphs later that locks them right back in. Or they assume the rules don't apply to them because they're "just B2B" or "not consumer-facing" or "just using a third-party API."
The fines for getting this wrong go up to €35 million or 7% of global revenue. Whichever is higher.
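To make "whichever is higher" concrete: the €35 million / 7% figures are the Act's top penalty tier (most other violations cap at €15 million / 3%, more on that in the FAQ below). The arithmetic, as a quick sketch:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    # Top-tier cap under the AI Act: the higher of EUR 35M
    # or 7% of worldwide annual revenue.
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# Past roughly EUR 500M in revenue, the percentage takes over:
print(max_fine_eur(400_000_000))    # 35000000 (the fixed floor)
print(max_fine_eur(2_000_000_000))  # 140000000.0
```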
So let me walk you through the actual framework. The same one regulators use. No legalese, no fluff, just the eight questions that determine whether your AI is high-risk.
What "high-risk" actually means
The EU AI Act sorts every AI system into one of four buckets:
Prohibited. Banned outright. You can't operate it in the EU, full stop. This includes things like social scoring, untargeted facial recognition scraping, and emotion recognition in workplaces. Most companies don't have to worry about this category.
High-risk. Permitted, but with serious documentation and oversight obligations: nine specific requirements spread across Articles 9 through 15, plus registration (Article 49) and post-market monitoring (Article 72). We'll come back to what those actually mean.
Limited-risk. Permitted, with transparency obligations. Mostly applies to chatbots and AI-generated content. The bar is low: tell users they're interacting with AI, label generated content, and you're done.
Minimal-risk. No specific obligations. The vast majority of AI systems land here. Spam filters, recommendation engines, video game NPCs.
The classification you land in determines whether you spend two weeks adding disclosure notices or three months building a full compliance documentation system. The stakes are real, which is why getting it right matters.
Step 1: Does the regulation even apply to you?
Before you classify anything, make sure the EU AI Act applies to your situation. It does if any of these are true:
- Your AI system is placed on the market or used in the EU
- The output of your AI is used in the EU (even if your system is hosted elsewhere)
- You as a provider or deployer are established in the EU
A US company with European customers is covered. A Canadian SaaS whose AI output is used by European customers is covered. A French company using a third-party AI tool internally is covered as a deployer.
Two narrow exceptions: AI systems built exclusively for military or national security purposes, and AI systems used only for pre-deployment scientific research.
If neither exception applies, you're in scope. Move on.
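If it helps to see the test as logic, here's a minimal sketch (the parameter names are mine, not terms of art from the Act):

```python
def ai_act_applies(
    placed_on_eu_market: bool,
    output_used_in_eu: bool,
    established_in_eu: bool,
    military_or_national_security_only: bool = False,
    pre_deployment_research_only: bool = False,
) -> bool:
    # The two narrow exceptions trump everything else.
    if military_or_national_security_only or pre_deployment_research_only:
        return False
    # Any one connection to the EU is enough to be in scope.
    return placed_on_eu_market or output_used_in_eu or established_in_eu

# The US company with European customers from above:
ai_act_applies(placed_on_eu_market=True, output_used_in_eu=False, established_in_eu=False)  # True
```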
Step 2: Make sure you're not doing something that's outright banned
Article 5 of the regulation lists things that are illegal regardless of how well-documented they are. The full list:
- Subliminal or manipulative AI that materially distorts behavior and causes harm
- Exploiting vulnerabilities based on age, disability, or socioeconomic status
- Social scoring by public or private actors
- Predictive policing based purely on profiling
- Untargeted scraping of facial images for biometric databases
- Emotion recognition in workplace or education settings
- Biometric categorization that infers race, political opinions, religion, sexual orientation, or trade union membership
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
If your AI does any of these, you can't operate it in the EU. No documentation will fix it.
For most companies reading this, the answer is "obviously not." But if you're working with biometrics, behavioral analysis, or anything involving public spaces, read this list carefully.
Step 3: Is your AI a safety component of a regulated product?
There are two paths to high-risk classification. The first is the easy one to identify: if your AI is a safety component of a product that's already covered by EU product safety legislation, it's high-risk by default.
This includes things like:
- Cars, trucks, and motorcycles
- Medical devices and diagnostics
- Industrial machinery
- Toys
- Aviation, marine, and rail equipment
- Pressure equipment, lifts, gas appliances
The phrase "safety component" matters. AI that controls lane-keeping in a car is a safety component. AI in the car's infotainment system is not. The test is whether the AI's failure could cause physical harm.
If your AI is a safety component of one of these regulated products, congratulations, you're high-risk. Skip ahead to Step 7. If not, keep going.
Step 4: Does your AI fall into one of the eight high-risk areas?
The second and more common path to high-risk classification is via Annex III. This is a list of eight areas where AI is automatically considered high-risk. Most SaaS companies hit this list, not the safety-component one.
Here are the eight areas, with examples of AI systems that fall into each:
Biometrics. Remote biometric identification, biometric categorization, emotion recognition.
Critical infrastructure. AI as a safety component in road traffic management, water supply, energy, or digital infrastructure. Worth noting: this is the "safety component" version again. AI that monitors a data center for capacity planning is not high-risk. AI that autonomously controls cooling systems where failure could cause damage might be.
Education and vocational training. AI that determines admission, evaluates learning outcomes, assesses educational level, or detects cheating during exams.
Employment and worker management. This is huge. AI for recruitment, candidate screening, candidate ranking, performance evaluation, task allocation, promotion decisions, or termination decisions. If your AI touches anything in the employee lifecycle, you're probably here.
Access to essential services. AI used to evaluate eligibility for public benefits, credit scoring by financial institutions, risk assessment in life and health insurance, or emergency services dispatch.
Law enforcement. AI for risk assessment, polygraphs, evaluating evidence reliability, or profiling individuals during investigations.
Migration, asylum, and border control. AI for security risk assessment, application evaluation, or identification of individuals.
Administration of justice and democratic processes. AI assisting judges in interpreting facts and law, or AI used to influence elections or referendums (excluding tools whose output people aren't directly exposed to, such as campaign logistics software).
If your AI doesn't fall into any of these eight areas, it's probably not high-risk. Skip to Step 7.
If it does fall into one, don't panic yet. There's an exemption that might apply. Or might not. This is where most classification mistakes happen.
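One practical aside before we get to that exemption: if you're classifying a whole portfolio of systems, it helps to encode the eight areas as data and check each system against them. A rough sketch (the labels and examples are my shorthand, not official Annex III headings):

```python
ANNEX_III_AREAS = {
    "biometrics": {"remote identification", "biometric categorization", "emotion recognition"},
    "critical infrastructure": {"traffic management", "water", "energy", "digital infrastructure"},
    "education": {"admissions", "evaluating outcomes", "exam proctoring"},
    "employment": {"recruitment", "screening", "ranking", "performance evaluation", "termination"},
    "essential services": {"public benefits", "credit scoring", "insurance risk", "emergency dispatch"},
    "law enforcement": {"risk assessment", "evidence evaluation", "investigative profiling"},
    "migration and border": {"security risk assessment", "application evaluation"},
    "justice and democracy": {"assisting judges", "influencing elections"},
}

def annex_iii_matches(use_cases: set[str]) -> list[str]:
    # Return every Annex III area a system's use cases touch.
    return [area for area, examples in ANNEX_III_AREAS.items() if use_cases & examples]

annex_iii_matches({"ranking", "screening"})  # ['employment']
```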
Step 5: The "I'm not really high-risk" exemption
Article 6(3) gives you a way out. Even if your AI falls under Annex III, you might be exempt if your system "does not pose a significant risk of harm." Specifically, you qualify for the exemption if your AI does at least one of these:
- Performs a narrow procedural task (like converting unstructured data into structured data)
- Improves the result of a previously completed human activity (like polishing text a human already wrote)
- Detects decision-making patterns without replacing human assessment (like flagging inconsistencies for human review)
- Performs a preparatory task before an Annex III assessment (like pre-sorting documents)
This sounds great. A lot of AI systems would seem to qualify. The CV-ranking AI from my earlier story? It "just performs a preparatory task before the recruiter makes a decision." Sounds like criterion four.
Except.
Step 6: The trap that catches almost everyone
Here's the line in Article 6(3) that companies miss:
"Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."
In plain English: even if you meet the exemption criteria above, you're still high-risk if your AI does profiling.
So what counts as profiling? GDPR Article 4(4) defines it. Profiling is any automated processing of personal data that evaluates personal aspects of an individual, especially their work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movements.
That definition is much broader than most people realize. If you're using AI to predict, analyze, or evaluate anything about a specific person, that's profiling.
Going back to the HR tech example. Their AI ranks candidates based on suitability for a role. Suitability is a prediction about future work performance. That's profiling. Doesn't matter if a human makes the final decision. Doesn't matter that the AI is "just preparatory." The profiling clause kicks in, the exemption disappears, and the system is high-risk.
The same logic applies to:
- Credit scoring AI that estimates default risk
- Insurance pricing AI that estimates claim likelihood
- Churn prediction AI that flags at-risk customers
- Health monitoring AI that flags patients needing intervention
- Performance management AI that evaluates employee productivity
If your AI does any of this, even as a "preparatory" or "advisory" tool, you're high-risk. There's no path to exemption.
This is the single most common classification mistake I see. Companies stop reading at the exemption criteria, miss the profiling clause, and convince themselves they're not high-risk. They are.
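Expressed as logic, the ordering is the whole point: you check profiling first, and only then look at the exemption criteria. A sketch of my reading of Article 6(3), not legal advice:

```python
def annex_iii_high_risk(
    performs_profiling: bool,
    narrow_procedural_task: bool = False,
    improves_completed_human_activity: bool = False,
    detects_patterns_without_replacing_humans: bool = False,
    preparatory_task_only: bool = False,
) -> bool:
    # The profiling clause overrides every exemption criterion.
    if performs_profiling:
        return True
    # Otherwise, meeting any one Article 6(3) criterion exempts the system.
    exempt = (
        narrow_procedural_task
        or improves_completed_human_activity
        or detects_patterns_without_replacing_humans
        or preparatory_task_only
    )
    return not exempt

# The CV-ranking tool: "just preparatory", but it profiles candidates.
annex_iii_high_risk(performs_profiling=True, preparatory_task_only=True)  # True
```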
Step 7: What being high-risk actually means
If you're high-risk, you have nine specific obligations under the regulation. Here's the short version:
- Risk management system (Article 9). Document how you identify, evaluate, and mitigate risks throughout the system's lifecycle.
- Data governance (Article 10). Show that your training, validation, and testing data meets quality standards, including bias detection.
- Technical documentation (Article 11). A detailed document covering system design, capabilities, limitations, training methodology, and risk mitigation. This is the longest single thing you'll produce.
- Record-keeping (Article 12). Automatic logging of system events and decisions, retained appropriately and accessible for audit.
- Transparency (Article 13). Users need to understand how the system works, its limitations, and how to interpret outputs.
- Human oversight (Article 14). Humans must be able to monitor, intervene in, or override AI decisions. The system needs to be designed for this.
- Accuracy, robustness, cybersecurity (Article 15). Performance has to be consistent, and the system has to be resilient to errors and attacks.
- Post-market monitoring (Article 72). Ongoing monitoring of how the system performs after deployment.
- Registration in the EU database (Article 49). Before placing a high-risk system on the market, providers must register it in the EU database.
These aren't best-effort. They're documented, auditable requirements. If a regulator shows up and asks for your risk management plan, "we have one in our heads" doesn't work.
Most companies need 4-12 weeks to produce this documentation properly, depending on complexity and how many AI systems they have. Starting in May means comfortably hitting the August 2 deadline. Starting in July means scrambling.
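One way to keep the workload visible is to treat the nine obligations as a checklist and measure the gap. A hypothetical structure (the keys are my own labels):

```python
HIGH_RISK_OBLIGATIONS = {
    "risk management system": "Article 9",
    "data governance": "Article 10",
    "technical documentation": "Article 11",
    "record-keeping": "Article 12",
    "transparency": "Article 13",
    "human oversight": "Article 14",
    "accuracy, robustness, cybersecurity": "Article 15",
    "EU database registration": "Article 49",
    "post-market monitoring": "Article 72",
}

def remaining_work(done: set[str]) -> dict[str, str]:
    # Everything not yet documented is a gap to close before August 2, 2026.
    return {k: v for k, v in HIGH_RISK_OBLIGATIONS.items() if k not in done}

remaining_work({"transparency", "human oversight"})  # seven obligations left
```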
Step 8: What if you're not high-risk?
Two possibilities here.
Limited-risk. You're here if your AI interacts with users (chatbots, voice assistants) or generates synthetic content (text, images, audio, video). The obligation is transparency. Tell users they're interacting with AI. Label AI-generated content. Mark deepfakes. This is typically a few days of work, not weeks.
Minimal-risk. No specific obligations under the AI Act. The vast majority of AI systems land here.
But even if you're minimal-risk, document the classification. Investors and customers increasingly ask about AI Act compliance during due diligence. Having a documented classification report shows you did the work, even when no formal documentation was required.
The classification framework, condensed
To classify your AI system, walk through these questions in order:
1. Does the regulation apply to you (EU market involvement)? If no, stop here.
2. Is your AI doing anything prohibited under Article 5? If yes, you can't operate it.
3. Is your AI a safety component of a regulated product? If yes, you're high-risk.
4. Does your AI fall under any of the eight Annex III areas? If no, jump to question 7.
5. Does your AI perform profiling? If yes, you're high-risk regardless of other factors.
6. Does your AI meet at least one Article 6(3) exemption criterion (and not perform profiling)? If yes, you're not high-risk.
7. Does your AI interact with users or generate content? If yes, you're limited-risk.
8. None of the above? You're minimal-risk.
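And here's the whole walk as one function. Treat it as a sketch, not a legal instrument: every input is a judgment call you make against the definitions above, and the names are mine.

```python
from enum import Enum

class RiskTier(Enum):
    OUT_OF_SCOPE = "out of scope"
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

def classify(
    in_eu_scope: bool,                # Question 1
    prohibited_practice: bool,        # Question 2
    safety_component: bool,           # Question 3
    annex_iii_area: bool,             # Question 4
    performs_profiling: bool,         # Question 5
    meets_exemption_criterion: bool,  # Question 6
    user_facing_or_generative: bool,  # Question 7
) -> RiskTier:
    if not in_eu_scope:
        return RiskTier.OUT_OF_SCOPE
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if safety_component:
        return RiskTier.HIGH
    if annex_iii_area:
        if performs_profiling:            # Overrides any exemption.
            return RiskTier.HIGH
        if not meets_exemption_criterion:
            return RiskTier.HIGH
    if user_facing_or_generative:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL               # Question 8
```

Run the CV-ranking example through it: in scope, not prohibited, not a safety component, Annex III (employment), profiling. It returns HIGH, exactly where Step 6 put it.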
This is the same framework ActScope's classifier walks you through. Each step builds on the previous one, and skipping any step leads to misclassification.
The mistakes I see most often
A few patterns from classifying hundreds of systems:
"It's just advisory." Doesn't matter. If your AI influences a decision in employment, credit, insurance, or any other Annex III area, it's high-risk. The "human in the loop" doesn't exempt you, especially when profiling is involved.
"It's only used internally." The regulation applies the same way to internal use cases. Your internal HR tool that screens candidates is high-risk even though no candidate ever sees it directly.
"We're using a third-party API, so it's not our problem." It is your problem. If you're using AI under your authority, you're a deployer with obligations. If you're significantly modifying or rebranding the AI, you might also become a provider.
"We're a small company, so the regulation doesn't really apply to us." There's no general SME exemption. The same obligations apply at any company size, though SMEs get some procedural relief.
"Our AI doesn't make decisions, it just provides scores or rankings." If those scores or rankings are about specific individuals and their personal characteristics, that's profiling, and profiling makes you high-risk.
What to actually do next
If you're high-risk, start your documentation work now. Don't wait. The volume of work is real, and trying to do it in July with the deadline at your throat is not a position you want to be in.
If you're limited-risk, allocate a week or two to implement transparency notices and content labeling. This is doable but shouldn't be put off.
If you're minimal-risk, document the classification. Save it. Done.
If you're not sure which category you're in, don't guess. Use a structured framework. ActScope's free classifier walks through these eight steps in five minutes and produces a documented classification report. No signup needed.
The August 2, 2026 deadline is real. The companies that handle this calmly are starting now. The ones that are still telling themselves "it'll probably be fine" will spend July either paying consultants double-rate or accepting compliance risk.
Don't be either of those companies.
Common questions
Does the EU AI Act apply to small companies? Yes. There's no SME-wide exemption. Some procedural relief exists (simplified registration, longer timelines for certain obligations), but the core requirements apply at any size.
What's a "provider" vs a "deployer"? A provider develops or significantly adapts an AI system. A deployer uses an AI system under their authority. Most obligations sit with providers, but deployers have responsibilities too, especially for high-risk systems.
Can I just buy a "compliant" AI tool to avoid the obligations? Partly. Buying high-risk AI from a compliant provider means they handle their obligations. But you as the deployer still have obligations: human oversight, record-keeping, informing affected individuals.
What happens if I get the classification wrong? Fines up to €15 million or 3% of global turnover for most violations. €35 million or 7% for prohibited practices and data governance failures. Beyond fines: market access can be suspended, and personal liability for executives is on the rise.
Do I need a lawyer to classify my AI? Not for clear cases. The framework is well-defined and most situations are straightforward. For ambiguous cases (especially around the profiling clause, the Article 6(3) exemption, or critical infrastructure), legal counsel is worth the investment.
Stop guessing
Run your AI system through the classifier.
Five minutes. Eight questions. A documented classification report you can save, share, and act on.
Try the classifier