EU AI Act 2026 — What Every Business Owner Needs to Know

Mateusz Sawka · 14 min read

August 2, 2026 marks a major milestone: the core provisions of the EU AI Act — the world's first comprehensive AI regulation — come into full effect. For most business owners, this sounds like a distant problem. "AI regulations? That's for Microsoft, not my 30-person company."

Not quite. The AI Act applies to everyone who uses AI systems in the European Union — not just those who build them. If your company uses ChatGPT for customer service, an AI tool for recruitment, or automated credit scoring — the AI Act applies to you.

I've implemented AI in more than eight companies, and I see how few business owners are aware of this. That's why I wrote this guide: in plain language, without legal jargon, and with concrete steps.

AI Act Timeline — What's Already in Effect, What's Coming

The AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legislation regulating artificial intelligence. It was adopted in June 2024, entered into force on August 1, 2024, and applies in stages:

Already in effect (since February 2, 2025)

  • Prohibited AI practices — AI systems that manipulate behavior, exploit vulnerabilities (age, disability), social scoring, emotion recognition in workplaces and schools (with exceptions)
  • AI literacy obligation — companies must ensure employees using AI have an adequate level of AI competence (Art. 4)

From August 2, 2025

  • Obligations for general-purpose AI models (GPAI) — applies to providers like OpenAI, Anthropic, Google. They must provide technical documentation, copyright policy, and training data summaries

From August 2, 2026 (key date)

  • Full obligations for high-risk AI systems — if your company deploys or uses AI in high-risk areas, you must meet detailed requirements
  • Risk management system, documentation, human oversight, cybersecurity

From August 2, 2027

  • Remaining high-risk AI systems listed in Annex I (e.g., AI in medical devices, vehicles)

Decision Tree for SMBs — Does the AI Act Apply to Your Business?

Before you panic, let's check (you'll find a short code sketch of the full decision tree after the questions):

Question 1: Does your company build AI systems?

  • YES → You're a "provider." The AI Act applies to you fully.
  • NO → Go to question 2.

Question 2: Does your company use AI systems?

  • YES → You're a "deployer." You have fewer obligations, but you have them.
  • NO → The AI Act probably doesn't apply to you (but read on — you may not realize you're using AI).

Question 3: In what area do you use AI?

  • Recruitment and workforce management → High risk
  • Credit scoring → High risk
  • Insurance (pricing, risk assessment) → High risk
  • Education (grading, admissions) → High risk
  • Critical infrastructure → High risk
  • Customer service, marketing, sales → Limited or minimal risk
  • Internal tools (summaries, email writing) → Minimal risk

Question 4: Does your AI interact directly with people (chatbot, voicebot)?

  • YES → You must inform them they're talking to AI (transparency obligation)
  • NO → Fewer obligations
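
If it helps to see the four questions as one piece of logic, here's a minimal sketch of the decision tree in Python. The area names and messages mirror the lists above; treat it as an illustration, not legal advice:

  # Rough sketch of the decision tree above. Illustration, not legal advice.
  HIGH_RISK_AREAS = {"recruitment", "credit scoring", "insurance",
                     "education", "critical infrastructure"}

  def ai_act_quick_check(builds_ai: bool, uses_ai: bool,
                         area: str, talks_to_people: bool) -> list[str]:
      # Q1/Q2: provider vs. deployer vs. out of scope
      if builds_ai:
          notes = ["Provider: the AI Act applies to you fully."]
      elif uses_ai:
          notes = ["Deployer: fewer obligations, but some still apply."]
      else:
          return ["The AI Act probably doesn't apply to you."]
      # Q3: risk category depends on the area of use
      if area in HIGH_RISK_AREAS:
          notes.append(f"High risk ({area}): detailed requirements apply.")
      else:
          notes.append(f"Likely limited or minimal risk ({area}).")
      # Q4: transparency when the AI interacts with people directly
      if talks_to_people:
          notes.append("Transparency: tell people they're talking to AI.")
      return notes

  print(ai_act_quick_check(False, True, "recruitment", True))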

Risk Categories — What They Mean in Practice

Unacceptable Risk (Prohibited)

These AI systems cannot be used in the EU:

  • Behavioral manipulation (e.g., AI deliberately exploiting addictions)
  • Social scoring (rating people based on social behavior or personal characteristics)
  • Real-time facial recognition in publicly accessible spaces (with narrow law-enforcement exceptions)
  • Emotion recognition in workplaces and schools

For SMBs: You probably aren't building these systems. But note — if you use AI to monitor employee emotions in the office, that's prohibited.

High Risk (Detailed Requirements)

AI systems in these areas require full compliance:

  • Recruitment: AI scanning CVs, evaluating candidates, automated screening
  • HR: AI monitoring employee performance, making promotion decisions
  • Finance: Credit scoring, insurance risk assessment
  • Education: AI grading students, making admissions decisions

What you must do:

  1. Risk management system (documentation, testing, monitoring)
  2. Training data of appropriate quality
  3. Technical documentation
  4. Transparency — information for users
  5. Human oversight — a person must be able to intervene
  6. Cybersecurity measures

Limited Risk (Transparency Obligation)

  • AI chatbots — you must inform the person they're talking to AI
  • AI-generated content — you must label it (deepfakes, synthetic images)
  • Emotion recognition systems (where permitted) — disclosure required

For SMBs: If you have a chatbot on your website — add a notice: "You're chatting with an AI assistant." That's it.
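
If your chatbot is custom-built, the notice can be as simple as prefixing the first message the bot sends. A minimal sketch in Python (the greeting text is just an example):

  AI_DISCLOSURE = "You're chatting with an AI assistant."

  def opening_message(greeting: str) -> str:
      # Prepend the transparency notice to the chatbot's first message.
      return f"{AI_DISCLOSURE} {greeting}"

  print(opening_message("How can I help you today?"))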

Minimal Risk (No Additional Requirements)

  • ChatGPT for writing emails
  • AI for document summaries
  • AI translation tools
  • AI in marketing (content generation, data analysis)

For SMBs: Most AI applications in small businesses fall here. You don't need to do anything special, aside from the general Art. 4 requirement (AI competence).

Article 4 — The Obligation That Applies to EVERY Company

This article is the most frequently overlooked, yet it's crucial:

"Providers and deployers of AI systems shall take measures to ensure, to the best extent possible, a sufficient level of AI literacy of their staff." (Art. 4)

What this means in practice:

  • Every company using AI must train its employees
  • Training should be role-appropriate — different knowledge for managers vs. operators
  • You must document it

What to do:

  1. Conduct AI training for your team (even a 2-hour session)
  2. Document who was trained and in what scope (a minimal log sketch follows this list)
  3. Create internal AI usage guidelines
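
For point 2, the documentation doesn't need to be elaborate. Here's a minimal sketch of a training log in Python; the file name, participant, and session details are made up for illustration:

  import csv

  # Append one row per participant per session: date, name, role, scope.
  with open("ai_training_log.csv", "a", newline="") as f:
      csv.writer(f).writerow(
          ["2026-04-15", "J. Kowalski", "manager",
           "2h session: safe AI use, data hygiene, internal AI guidelines"])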

This is something I help with — I run AI training for companies in a 6-session mentoring format, tailored to each company's specifics.

Practical Compliance Steps — A Plan for SMBs

Step 1: AI Inventory (Week 1)

List every AI tool your company uses (a structured sketch of such an inventory follows this list):

  • Subscriptions (ChatGPT Team, Copilot, Gemini)
  • Website chatbots
  • Automations (Make.com, Zapier, n8n with AI components)
  • AI in CRM/ERP
  • Any other tools using AI
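
Here's what such an inventory can look like as structured data, in a minimal Python sketch (tool names, owners, and fields are examples; adapt them to your stack):

  # Minimal AI inventory: one record per tool. All fields are suggestions.
  ai_inventory = [
      {"tool": "ChatGPT Team", "use_case": "email drafting, summaries",
       "area": "internal tools", "customer_facing": False, "owner": "Operations"},
      {"tool": "website chatbot", "use_case": "customer support",
       "area": "customer service", "customer_facing": True, "owner": "Marketing"},
  ]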

Step 2: Categorize Risk (Week 2)

For each tool on your list, determine the risk category (a code sketch of this mapping follows the list):

  • Minimal → No additional action required
  • Limited → Add AI disclosure (transparency)
  • High → You need a compliance plan
  • Unacceptable → Stop using immediately
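
Continuing the Step 1 sketch, the categorization can start as a simple lookup from area of use to risk category and required action. The mapping mirrors the lists in this article and is a simplification, not legal advice:

  # Simplified area -> risk -> action mapping, mirroring the article's lists.
  RISK_BY_AREA = {
      "recruitment": "high", "credit scoring": "high", "insurance": "high",
      "education": "high", "critical infrastructure": "high",
      "customer service": "limited", "marketing": "limited", "sales": "limited",
      "internal tools": "minimal",
  }
  ACTION_BY_RISK = {
      "minimal": "No additional action required",
      "limited": "Add AI disclosure (transparency)",
      "high": "Build a compliance plan",
  }

  # Records shaped like the Step 1 inventory sketch:
  ai_inventory = [{"tool": "ChatGPT Team", "area": "internal tools"},
                  {"tool": "website chatbot", "area": "customer service"}]
  for item in ai_inventory:
      risk = RISK_BY_AREA.get(item["area"], "minimal")
      print(f'{item["tool"]}: {risk} risk -> {ACTION_BY_RISK[risk]}')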

Step 3: Ensure Transparency (Week 3)

  • Add "Powered by AI" notices wherever customers interact with AI
  • Label AI-generated content (if published as company content)
  • Update your privacy policy to mention AI usage

Step 4: Train Your Team (Week 4)

  • Conduct Art. 4 training (AI competence)
  • Create a "Rules for Using AI in Our Company" document
  • Document the training (dates, participants, scope)

Step 5: Monitor and Update (Ongoing)

  • Review your AI tools list quarterly
  • Track AI Act updates (e.g., European Commission guidelines)
  • Update training as new use cases emerge

How an AI Audit Prepares You for the AI Act

A professional AI audit — like the ones I conduct for businesses — naturally prepares you for AI Act requirements:

  • Inventory — the audit maps all AI systems in the company
  • Risk categorization — we assess which applications are high-risk
  • Processes — we document how AI is used in workflows
  • Competence — we evaluate team readiness (the "Team" dimension)
  • Action plan — the audit concludes with recommendations that include compliance

This isn't coincidental. An AI readiness audit and AI Act preparation are two perspectives on the same problem: "How to responsibly and effectively use AI in business?"

FAQ — Most Common Questions

Will there be penalties for non-compliance? Yes. Up to 35 million EUR or 7% of global annual turnover, whichever is higher, for the most serious violations (prohibited practices). Up to 15 million EUR or 3% for most other violations. For SMEs, each cap is the lower of the two amounts (for example, 3% of a 2 million EUR turnover is 60,000 EUR), but that can still hurt.

Does a small business (10 employees) need to worry? If you use AI for customer service — add AI disclosures. Train your team (Art. 4). That's the minimum. You don't need to hire an AI lawyer.

Does using ChatGPT for writing emails require compliance? That's minimal risk — no special action required. But Art. 4 still applies — employees should know how to use AI responsibly (e.g., don't paste customer personal data into ChatGPT).

Do I need to inform customers that I use AI? If customers directly interact with AI (chatbot, voicebot) — yes. If AI supports internal processes (writing emails, data analysis) — there's no such obligation, unless AI makes decisions that affect the customer.

Who enforces the AI Act? Each EU member state must designate a supervisory authority. The specifics vary by country and are still being finalized in some jurisdictions.

Don't Wait Until August 2026

Counting down to August 2, 2026, you have about 4 months left. That's enough time to:

  1. Inventory your AI tools
  2. Categorize risk
  3. Train your team
  4. Implement transparency measures

The worst strategy is ignoring the topic and hoping "it doesn't apply to us." It does — the question is only to what extent.

If you want to check where your business stands on AI Act compliance, start with a free conversation. In 30 minutes, we'll assess which requirements apply to you and what you need to do.

Book a free consultation
