AI Risk Classification: Which Compliance Rules Apply to Your Agent?
One of the most confusing parts of AI compliance is figuring out which rules actually apply to you.
The EU AI Act has a 4-tier risk classification system. GDPR has different requirements depending on whether you're processing "special category" data. US state regulations have jurisdiction-specific triggers. Industry regulations (HIPAA, FINRA, FERPA) add another layer.
The result: It's not immediately obvious what you need to comply with.
Here's a practical guide to risk classification and requirement mapping. By the end of this post, you'll know exactly which regulations apply to your AI agent and what you need to do about it.
The EU AI Act: 4-tier risk system
The EU AI Act classifies AI systems into four risk categories. Your obligations depend entirely on which category your system falls into.
Tier 1: Unacceptable Risk (Banned)
These AI systems are prohibited in the EU. If you're building one of these, you cannot deploy it—period.
Banned systems include:
- Social scoring by governments (think China's social credit system)
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions that require prior authorization)
- Subliminal manipulation that causes harm
- Exploitation of vulnerable groups (children, disabled individuals)
Developer impact: If you're building consumer or B2B AI, you're probably not in this category. But if you're building surveillance tech, emotion detection for children, or behavioral manipulation tools—stop now. These are flatly illegal in the EU.
Tier 2: High-Risk (Heavy Requirements)
High-risk AI systems are allowed, but you must meet extensive compliance requirements before deployment.
High-risk systems include:
1. AI used in critical infrastructure
- Traffic management systems
- Water, gas, electricity supply control
- Examples: Predictive maintenance AI for power grids
2. Educational or vocational training
- Systems that determine access to education
- Exam scoring algorithms
- Examples: Automated essay grading, college admissions AI
3. Employment and worker management
- Recruitment tools (resume screening, interview bots)
- Performance evaluation systems
- Task allocation algorithms
- Examples: HireVue-style interview analysis, warehouse productivity tracking
4. Access to essential services
- Credit scoring
- Insurance risk assessment
- Public benefits eligibility
- Examples: Loan approval AI, health insurance underwriting
5. Law enforcement
- Predictive policing
- Evidence evaluation
- Crime risk assessment
- Examples: Recidivism prediction models
6. Border control and migration
- Visa application processing
- Risk assessment for travelers
- Examples: Automated visa screening
7. Justice and democratic processes
- Legal research tools used in court
- Examples: AI that assists judges in sentencing
What high-risk classification means:
If your AI system falls into any of these categories, you must:
- Conduct conformity assessment (third-party audit in some cases)
- Maintain technical documentation (data sources, model architecture, training methodology)
- Implement human oversight (humans must be able to override AI decisions)
- Ensure accuracy, robustness, cybersecurity (ongoing monitoring and testing)
- Register in EU database (public registry of high-risk AI systems)
- Provide transparency (users must know they're interacting with AI)
Example: Recruitment AI
If you're building AI that screens resumes or ranks candidates, you're in the high-risk category. You need:
- Documentation of training data (where did the resumes come from? what's the demographic distribution?)
- Bias testing (does the model discriminate based on protected characteristics?)
- Human review process (HR must be able to override AI decisions)
- Explainability (candidates have the right to know why they were rejected)
Developer checklist:
- [ ] Document your training data sources
- [ ] Test for bias across protected groups
- [ ] Add human override capability
- [ ] Log all decisions for audit trail
- [ ] Prepare for third-party conformity assessment
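To make the audit-trail and human-override items concrete, here's a minimal sketch in TypeScript. The types and function names are ours for illustration, not from any regulation or SDK; the point is that every AI decision is logged, and a human override is recorded alongside (not instead of) the original model output.

```typescript
type ScreeningDecision = {
  candidateId: string;
  aiScore: number;                      // raw model output, 0 to 1
  recommendation: "advance" | "reject";
  decidedBy: "ai" | "human";
  reviewerId?: string;                  // set when a human overrides
  timestamp: string;                    // ISO 8601, for the audit trail
};

const auditLog: ScreeningDecision[] = [];

// Every AI decision is logged before anything acts on it.
function recordAiDecision(candidateId: string, aiScore: number): ScreeningDecision {
  const decision: ScreeningDecision = {
    candidateId,
    aiScore,
    recommendation: aiScore >= 0.5 ? "advance" : "reject",
    decidedBy: "ai",
    timestamp: new Date().toISOString(),
  };
  auditLog.push(decision);
  return decision;
}

// A human override is logged as a new entry; the original AI entry stays,
// so auditors can see both the model's output and the final human call.
function overrideDecision(
  original: ScreeningDecision,
  reviewerId: string,
  finalCall: "advance" | "reject"
): ScreeningDecision {
  const reviewed: ScreeningDecision = {
    ...original,
    recommendation: finalCall,
    decidedBy: "human",
    reviewerId,
    timestamp: new Date().toISOString(),
  };
  auditLog.push(reviewed);
  return reviewed;
}
```

Keeping the AI entry and the override as separate log records is deliberate: a conformity assessment will want to see how often humans actually disagree with the model.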
Tier 3: Limited Risk (Transparency Requirements)
Most commercial AI agents fall into this category. You're allowed to deploy without heavy restrictions, but you must meet transparency obligations.
Limited-risk systems include:
- Chatbots and conversational AI
- Content generation tools (marketing copy, blog posts)
- Emotion recognition systems (note: emotion recognition in workplaces and schools is banned outright)
- Deepfake generators
- AI that categorizes people based on biometrics
Requirements:
- Disclose that users are interacting with AI. Users must know they're talking to a bot, not a human. This applies to customer support chatbots, AI sales agents, and virtual assistants.
Implementation:

```html
<div class="ai-disclosure">
  🤖 You're chatting with an AI assistant.
  <a href="/about-our-ai">Learn more</a>
</div>
```
- Label AI-generated content. If your AI creates text, images, audio, or video, it must be marked as AI-generated.
Example: If your AI writes marketing emails, add a footer:
This email was generated with assistance from AI.
- Warn users about deepfakes. If your AI manipulates images or video, you must make this clear.
Developer checklist:
- [ ] Add visible AI disclosure to user-facing interfaces
- [ ] Label all AI-generated content
- [ ] Implement watermarking for AI-generated images/video (if applicable)
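The labeling item can be a one-line helper in your send path. A minimal sketch (the `labelAiContent` helper and disclosure string are illustrative, not from any framework):

```typescript
type GeneratedContent = {
  body: string;
  aiGenerated: boolean;
};

const AI_DISCLOSURE = "This email was generated with assistance from AI.";

// Append the disclosure footer to AI-generated text; leave human-written
// content untouched.
function labelAiContent(content: GeneratedContent): string {
  if (!content.aiGenerated) return content.body;
  return `${content.body}\n\n---\n${AI_DISCLOSURE}`;
}
```

Putting the label in one shared helper (rather than in each template) makes it much harder for an unlabeled message to slip out.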
Tier 4: Minimal Risk (No Specific Requirements)
Most AI systems fall here. These are low-stakes applications with minimal regulatory burden.
Examples:
- AI-powered spam filters
- Content recommendation algorithms (Netflix, Spotify)
- Inventory management AI
- Predictive analytics for internal business operations
Requirements: None specific to the EU AI Act. However, GDPR still applies if you're processing personal data.
GDPR: What data are you processing?
GDPR obligations depend on what kind of data your AI processes, not just what your AI does.
Regular personal data
Examples: Names, email addresses, IP addresses, device IDs, job titles
Requirements:
- Legal basis for processing (consent, contract, legitimate interest)
- Transparency (tell users what you're collecting and why)
- User rights (access, deletion, portability)
- Data minimization (only collect what you need)
Common violations:
- Collecting more data than necessary (e.g., sending entire user profiles to LLM prompts)
- Vague privacy policies ("we may share data with partners")
- No documented legal basis
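The first violation above (shipping whole profiles into prompts) has a simple structural fix: allow-list the fields the prompt actually needs before every LLM call. A sketch, with example field names:

```typescript
type UserProfile = {
  name: string;
  email: string;
  dateOfBirth: string;
  purchaseHistory: string[];
  supportQuestion: string;
};

// Data minimization: the support prompt gets only what it needs.
// Email, date of birth, and purchase history never leave your system.
function minimizeForPrompt(profile: UserProfile): { firstName: string; question: string } {
  return {
    firstName: profile.name.split(" ")[0], // first name is enough for a greeting
    question: profile.supportQuestion,
  };
}
```

An allow-list (build the minimal object) is safer than a deny-list (delete sensitive fields), because new profile fields default to excluded rather than leaked.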
Special category data (extra protections)
Examples: Health data, biometric data, racial/ethnic origin, political opinions, religious beliefs, sexual orientation
Requirements:
- Explicit consent (a pre-ticked box doesn't count: users must take a clear, affirmative action for the specific purpose)
- OR specific legal exemptions (e.g., medical necessity, vital interests)
- Extra security measures
- Data Protection Impact Assessment (DPIA) required
Common pitfall: Using facial recognition or voice analysis? That's biometric data—special category. You need explicit consent and a DPIA.
Example: Health coaching AI
If your AI gives fitness or diet advice and processes health data (weight, blood pressure, medical conditions), you're handling special category data.
You need:
- Explicit, informed consent from users
- DPIA documenting risks and mitigations
- Extra security controls (encryption, access restrictions)
- Clear explanation of how AI uses health data
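A consent record for health data can be a small, explicit structure. The shape below is illustrative, not a legal template; the point is capturing what was consented to, when, and whether it was later withdrawn (withdrawal must be as easy as giving consent):

```typescript
type HealthDataConsent = {
  userId: string;
  purpose: string;          // e.g. "personalized diet advice"
  dataCategories: string[]; // e.g. ["weight", "blood_pressure"]
  consentedAt: string;      // ISO timestamp of the affirmative action
  withdrawnAt?: string;     // set when the user withdraws consent
};

// Check consent before touching a specific category of health data.
function hasValidConsent(record: HealthDataConsent, category: string): boolean {
  return record.withdrawnAt === undefined &&
    record.dataCategories.includes(category);
}
```

Checking per category matters: consent to process weight for diet advice is not consent to process blood pressure for a new feature.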
Automated decision-making (GDPR Article 22)
If your AI makes decisions that have "legal or similarly significant effects" on users, GDPR gives users the right to:
- Know the decision is automated
- Receive an explanation of the logic involved
- Contest the decision
- Request human review
Examples of "significant effects":
- Credit approval/denial
- Insurance pricing
- Job application rejection
- Content moderation bans
What you need:
- Explicit consent OR contract/legal obligation as basis
- Explanation of how decisions are made
- Human review process
- Right to appeal
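Those four items can travel with the decision itself. A sketch of an Article 22-style response shape (the score threshold and appeal endpoint are hypothetical; the structure is what matters):

```typescript
type AutomatedDecision = {
  outcome: "approved" | "denied";
  automated: true;        // users must know the decision is automated
  mainFactors: string[];  // plain-language explanation of the logic
  appealUrl: string;      // route to contest / request human review
};

// Bundle the outcome with its explanation and appeal path, so no caller
// can surface the decision without them.
function explainCreditDecision(score: number, factors: string[]): AutomatedDecision {
  return {
    outcome: score >= 650 ? "approved" : "denied", // illustrative cutoff
    automated: true,
    mainFactors: factors,
    appealUrl: "/decisions/appeal", // hypothetical endpoint
  };
}
```

Making the explanation and appeal path required fields of the response type means the compiler, not a code review, enforces that they're always present.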
US State Regulations: Where are your users?
US privacy law is fragmented. Different states have different rules.
California (CPRA)
If you have California users, CPRA applies.
Key requirements:
- Automated decision-making disclosure: If your AI makes decisions about users, you must tell them the logic involved and allow them to opt out
- Data sale opt-out: If you "sell" user data (including sharing with third-party LLMs for training), you must provide "Do Not Sell My Info" link
- Sensitive data limits: Extra protections for precise geolocation, biometric data, health info
Triggers (any one of these):
- Annual gross revenue over $25 million
- You process data of 100,000+ California residents or households
- You derive 50%+ of revenue from selling or sharing personal data
Colorado (Colorado AI Act, effective February 2026)
Key requirements:
- AI impact assessments: If your AI makes consequential decisions (education, employment, housing, credit, healthcare), you must document risks
- Transparency: Users must be told about automated decision-making
- Right to appeal: Users can contest AI decisions
Trigger: You process data of 100,000+ Colorado residents
Virginia
Similar to Colorado—transparency requirements for automated decision-making, with appeal rights.
Industry-Specific Requirements
Depending on your industry, you face additional regulations.
Healthcare (HIPAA + FDA)
If you're building medical AI:
- HIPAA applies if you handle Protected Health Information (PHI)
- FDA oversight if your AI qualifies as a "medical device" (diagnostic tools, treatment recommendations)
Requirements:
- Business Associate Agreement (BAA) with any vendor processing PHI
- PHI encryption and access controls
- Audit logging
- FDA premarket approval (for high-risk medical devices)
Example: An AI that interprets X-rays is a medical device—FDA approval required. An AI that schedules appointments isn't—just HIPAA.
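Here's what minimal PHI access logging might look like. The entry shape is ours; HIPAA specifies the goal (know who accessed what PHI, when, and why), not the schema:

```typescript
type PhiAccessEntry = {
  actorId: string;     // user or service that touched the record
  patientId: string;
  action: "read" | "write";
  purpose: string;     // e.g. "appointment scheduling"
  at: string;          // ISO timestamp
};

const phiAuditLog: PhiAccessEntry[] = [];

// Record every PHI access; the timestamp is added here so callers can't
// forget or falsify it.
function logPhiAccess(entry: Omit<PhiAccessEntry, "at">): void {
  phiAuditLog.push({ ...entry, at: new Date().toISOString() });
}
```

In production this log would go to append-only storage, not an in-memory array, so entries can't be silently edited after the fact.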
Finance (Fair Lending + Explainability)
If you're building credit or lending AI:
- Equal Credit Opportunity Act (ECOA) prohibits discrimination
- Fair Credit Reporting Act (FCRA) requires adverse action notices
- Dodd-Frank Act mandates model risk management
Requirements:
- Bias testing (disparate impact analysis)
- Explainable AI (must be able to tell applicants why they were denied)
- Model validation and ongoing monitoring
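The standard screening heuristic for disparate impact is the four-fifths rule: if a group's selection rate falls below 80% of the highest group's rate, that's a red flag. A sketch (this is a screening heuristic, not a legal determination):

```typescript
// Fraction of applicants from a group who were selected (e.g. approved).
function selectionRate(selected: number, total: number): number {
  return total === 0 ? 0 : selected / total;
}

// Ratio of a group's selection rate to the reference (highest-rate) group.
function adverseImpactRatio(groupRate: number, referenceRate: number): number {
  return referenceRate === 0 ? 1 : groupRate / referenceRate;
}

// Four-fifths rule: a ratio below 0.8 warrants investigation.
function flagsDisparateImpact(groupRate: number, referenceRate: number): boolean {
  return adverseImpactRatio(groupRate, referenceRate) < 0.8;
}
```

For example, a 30% approval rate for one group against 50% for another gives a ratio of 0.6, well under the 0.8 threshold, so the model should be investigated before deployment.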
Education (FERPA)
If you're building educational AI:
- FERPA protects student records
- Parental consent required for data collection from minors
Requirements:
- Schools must have direct control over student data
- No selling student data
- Parental access to student records
Decision Tree: What applies to you?
Use this to quickly determine your compliance obligations:
Step 1: Where are your users?
- EU? → EU AI Act + GDPR apply
- California? → CPRA applies
- Colorado/Virginia? → State regulations apply
- Other US states? → Monitor emerging laws
Step 2: What does your AI do?
- Makes hiring/credit/healthcare decisions? → High-risk AI (heavy requirements)
- Chatbot/content generation? → Limited-risk AI (transparency requirements)
- Internal analytics/recommendations? → Minimal-risk AI (GDPR only)
Step 3: What data do you process?
- Special category data (health, biometric)? → Explicit consent + DPIA required
- Regular personal data? → Legal basis + transparency required
- No personal data? → Minimal GDPR obligations
Step 4: What's your industry?
- Healthcare? → HIPAA + possibly FDA
- Finance? → Fair lending + explainability
- Education? → FERPA
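The four steps above can be sketched as a single lookup. Framework names and categories mirror this post; real scoping needs legal review, so treat this as a starting checklist generator, not a determination:

```typescript
type Scope = {
  jurisdictions: ("EU" | "US-CA" | "US-CO" | "US-VA")[];
  useCase: "consequential-decisions" | "chatbot-or-content" | "internal-analytics";
  dataType: "special-category" | "personal" | "none";
  industry?: "healthcare" | "finance" | "education";
};

function applicableFrameworks(scope: Scope): string[] {
  const frameworks: string[] = [];
  // Step 1: where are your users?
  if (scope.jurisdictions.includes("EU")) {
    frameworks.push("GDPR");
    // Step 2: what does your AI do? (sets the EU AI Act risk tier)
    frameworks.push(
      scope.useCase === "consequential-decisions" ? "EU AI Act (high-risk)" :
      scope.useCase === "chatbot-or-content" ? "EU AI Act (limited-risk)" :
      "EU AI Act (minimal-risk)"
    );
  }
  if (scope.jurisdictions.includes("US-CA")) frameworks.push("CPRA");
  if (scope.jurisdictions.includes("US-CO")) frameworks.push("Colorado AI Act");
  if (scope.jurisdictions.includes("US-VA")) frameworks.push("Virginia VCDPA");
  // Step 3: what data do you process?
  if (scope.dataType === "special-category") frameworks.push("DPIA + explicit consent");
  // Step 4: what's your industry?
  if (scope.industry === "healthcare") frameworks.push("HIPAA");
  if (scope.industry === "finance") frameworks.push("ECOA/FCRA");
  if (scope.industry === "education") frameworks.push("FERPA");
  return frameworks;
}
```

Running it for an EU health-coaching app with consequential decisions returns GDPR, the high-risk AI Act tier, the DPIA requirement, and HIPAA-style obligations in one list.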
How Compliable helps with multi-framework compliance
Tracking requirements across EU AI Act, GDPR, CCPA, HIPAA, and state laws is complex. Compliable checks your AI-generated content against all relevant frameworks in a single API call.
How it works:
```typescript
const result = await compliable.check.create({
  framework: 'gdpr', // or 'eu-ai-act', 'ccpa', 'hipaa'
  content: aiGeneratedContent,
  scope: {
    jurisdiction: 'EU', // or 'US-CA', 'US-CO'
    industry: 'healthcare' // optional: triggers industry-specific checks
  }
});

if (!result.isCompliant) {
  console.log('Violations:', result.violations);
  // Each violation includes:
  // - regulation (GDPR, EU AI Act, CCPA, etc.)
  // - article/section
  // - severity
  // - description
  // - suggested fix
}
```
Why this matters:
- One API covers multiple frameworks
- Jurisdiction-aware (checks only relevant regulations)
- Industry-specific rules included
- Structured output with article references
Start with 500 free checks/month →
Conclusion
Not all AI systems face the same requirements. Your compliance obligations depend on:
- What your AI does (high-risk vs. limited-risk)
- What data it processes (special category vs. regular)
- Where your users are (EU, California, Colorado, etc.)
- Your industry (healthcare, finance, education)
The key is knowing where you fall in these classifications. Once you know that, the requirements become clear.
Map your risk level. Document your data flows. Implement the appropriate controls. And automate compliance validation so you're not guessing.
If you're not sure which regulations apply to you, start by auditing your data flows and user base. That will tell you 80% of what you need to know.