You thought GDPR was complicated. The EU AI Act just arrived — and unlike with GDPR, most marketing agencies don't even know whether it applies to them. It does. If you use AI tools to process data about EU consumers, the EU AI Act applies to you. Full enforcement for the systems most commonly used in marketing began in August 2025.
This isn’t a legal brief. It’s a plain-English picture of where marketing agencies actually sit under this regulation, what the high-risk categories mean in practice, which AI uses are outright prohibited, and what you can do right now to reduce exposure — without a compliance lawyer on retainer.
The EU AI Act: What Kind of Regulation Is This?
The EU AI Act (Regulation (EU) 2024/1689) is a risk-based framework. It doesn't regulate AI as a technology. It regulates AI applications based on the risk they pose to individuals. The riskier the use, the heavier the obligations. Some uses are banned outright. Others require documentation, transparency disclosures, and human oversight systems. Low-risk uses — which cover most marketing AI — carry lighter requirements but are not obligation-free.
The enforcement timeline matters. The Act entered into force in August 2024. Prohibited AI practices have been banned since February 2025. Rules for general-purpose AI models — the category that covers ChatGPT, Claude, and Gemini — applied from August 2025. Full enforcement for high-risk AI systems follows in August 2026. Fines for the most serious violations reach up to €35 million or 7% of global annual turnover, whichever is higher; standard non-compliance carries up to €15 million or 3% (EU AI Act, Article 99).
GDPR is still there too. The EU AI Act doesn’t replace GDPR — it sits on top of it. If your AI use involves personal data about EU individuals, both frameworks apply simultaneously. The European Data Protection Board published its AI privacy guidance in March 2025 specifically to address how the two overlap. For marketing agencies, that dual burden is the real compliance challenge.
Are You a Provider or a Deployer?
The EU AI Act draws a sharp distinction between providers (who build AI systems) and deployers (who use them in a professional context). Most marketing agencies are deployers — you’re using ChatGPT, Claude, Midjourney, or similar tools for client work, not training your own models.
Being a deployer doesn't exempt you. Deployers of high-risk AI systems must implement human oversight measures, keep usage logs for post-market monitoring, and ensure their staff receive appropriate AI literacy training; certain deployers, including those using high-risk AI for creditworthiness or insurance decisions, must also conduct a fundamental rights impact assessment (EU AI Act, Articles 26, 27 and 4). The provider — OpenAI, Anthropic, Google — handles its obligations. You handle yours as the deployer.
You may be interested in: Why Your Marketing Data Shouldn’t Go to ChatGPT
The distinction that trips agencies up: using a compliant AI tool doesn’t mean your use of it is compliant. The tool can be fully EU AI Act-approved and your specific application can still violate the regulation, depending on what you’re doing with it.
What Is High-Risk AI — and Does Marketing Qualify?
High-risk AI under the EU AI Act is defined in Annex III of the regulation. The categories that most directly intersect with marketing work are: AI used to evaluate or score individuals, AI used in employment and worker management contexts, and AI used in systems that influence access to essential services.
For most campaign execution work — generating copy, resizing images, summarising reports — you’re operating in the limited-risk category. Transparency obligations apply (you must disclose AI-generated content in certain contexts) but the heavier compliance machinery doesn’t.
The grey zone is behavioural profiling and targeting. AI systems that build individual profiles to predict behaviour, segment audiences at the individual level, or make automated decisions that significantly affect individuals start moving toward high-risk territory — particularly if the output influences credit, insurance, or employment-adjacent decisions. Most programmatic targeting AI sits below this threshold. Agentic AI that makes autonomous decisions about individual users may not.
Deloitte predicts that half of companies using generative AI will have launched agentic AI pilots or proofs of concept by 2027. As agencies move from AI-assisted work to AI-autonomous work — where the model takes actions rather than drafts suggestions — the risk classification of those systems rises with the autonomy.
The Prohibited Practices: What’s Already Illegal
These aren’t grey areas. The following AI applications have been prohibited under the EU AI Act since February 2025, regardless of who is doing them:
- Subliminal manipulation: AI systems that influence behaviour through techniques operating below conscious awareness. This directly intersects with certain personalisation approaches that exploit cognitive biases without the individual’s awareness.
- Exploitation of vulnerabilities: AI that targets individuals based on age, disability, or social and economic situation in ways that distort their behaviour to their detriment.
- Social scoring: evaluating or classifying people based on social behaviour or personal characteristics in ways that lead to detrimental treatment. The final text covers private actors as well as public authorities, so it is relevant for any agency working on scoring-adjacent or public sector projects.
- Real-time remote biometric identification in public spaces: prohibited for law enforcement purposes, with narrow exceptions. Any retail or physical-space marketing technology involving facial recognition or similar sits close to this line and deserves scrutiny.
- Emotion inference in workplace and education contexts: AI that infers emotional states of individuals in professional or educational settings.
The subliminal manipulation prohibition is the one most likely to create unexpected exposure for marketing agencies. AI-powered personalisation that targets psychological vulnerabilities — dark patterns, urgency exploitation, anxiety-driven messaging calibrated by AI — is now legally prohibited for EU-facing campaigns, not merely ethically questionable.
What General-Purpose AI Models Mean for Agency Work
ChatGPT, Claude, Gemini, and similar tools are classified as General-Purpose AI (GPAI) models under the EU AI Act. Rules for these models applied from August 2025. Providers of GPAI models with systemic risk — determined by a training compute threshold, currently 10^25 floating-point operations — face additional obligations including adversarial testing and incident reporting.
For agencies using these tools: your obligation is to use them in compliant ways. Transparency is the first requirement — if content is AI-generated and could be mistaken for human-produced content in a context where that distinction matters, disclosure is required. The second is data handling: what you feed into a GPAI model is subject to both GDPR and the AI Act simultaneously.
Here’s where local LLM inference changes the compliance picture. A model running entirely on your own hardware — a Mac Mini running Ollama, a Mac Studio serving your team — processes data in your environment, under your control, with no transmission to a GPAI provider. The GPAI model rules apply to the provider, not to you. The GDPR transfer risk disappears. The transparency requirements still apply to your outputs, but the data sovereignty exposure is eliminated.
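To make that concrete, here is a minimal sketch of local inference, assuming Ollama is running on the machine with a model such as llama3 already pulled. The endpoint and payload follow Ollama's documented /api/generate interface; the helper function and prompt are our own illustration.

```python
import requests

# The prompt never leaves this machine: Ollama serves the model locally
# on port 11434 (its default), so no data reaches a GPAI provider.
OLLAMA_URL = "http://localhost:11434/api/generate"

def summarise_locally(text: str, model: str = "llama3") -> str:
    """Send a summarisation prompt to a locally hosted model."""
    payload = {
        "model": model,
        "prompt": f"Summarise this campaign report in three bullet points:\n\n{text}",
        "stream": False,  # return one complete response instead of a token stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(summarise_locally("Q3 spend was €42k across three channels..."))
```

The same pattern works for any Ollama-served model; the only thing that changes between a Mac Mini and a Mac Studio deployment is the hardware behind the port.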
You may be interested in: GDPR Article 25 and Local AI: Why On-Premise LLM Inference Is Privacy by Design
What to Do Right Now: A Practical Starting Point
You don’t need a compliance lawyer to take the first steps. You need an honest inventory of how your agency uses AI, and a clear picture of which uses sit where on the risk spectrum.
Start with three questions:
- Which AI tools does your agency use, and what data flows into them? Any tool that receives personal data about EU individuals is in scope for both GDPR and the AI Act.
- What are those tools being used for? Generating copy is low-risk. Profiling individuals or making automated decisions about them is not.
- Can you evidence human oversight? For anything above low-risk, you need to show that a human reviews and can override AI outputs before they affect individuals.
The documentation baseline for agencies using AI in client work: a register of AI systems used, the purpose of each use, the data involved, and the oversight mechanism. This doesn’t require enterprise compliance software. It requires an honest spreadsheet and a policy that your team actually follows.
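As an illustration, here is a minimal Python sketch of such a register. The field names are our own, not prescribed by the Act; the point is simply to keep tool, purpose, data, risk tier, and oversight in one auditable artefact.

```python
from dataclasses import dataclass, asdict
import csv

# Illustrative register entry. The fields mirror the documentation
# baseline described above: which tool, for what purpose, with what
# data, at what risk tier, under whose oversight.
@dataclass
class AIUseRecord:
    tool: str           # e.g. "Claude", "Midjourney", local Ollama model
    purpose: str        # what the tool is actually used for
    personal_data: str  # what EU personal data (if any) flows in
    risk_tier: str      # "minimal", "limited", or "high"
    oversight: str      # who reviews/overrides outputs before release

register = [
    AIUseRecord("Claude", "Draft campaign copy", "None", "limited",
                "Account lead reviews all copy before client delivery"),
    AIUseRecord("Local Ollama (llama3)", "Summarise first-party analytics",
                "Pseudonymised purchase history", "limited",
                "Analyst signs off on every summary"),
]

# Export as CSV so the register survives tool changes and audits.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(register[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in register)
```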
For agencies managing WooCommerce or WordPress-based client tracking — where first-party data feeds into AI analysis tools — the data pipeline itself becomes a compliance asset. Server-side tracking via Transmute Engine™ keeps client data on a first-party server (your subdomain, not a third-party domain) before it routes anywhere. That first-party architecture reduces GDPR Article 46 transfer exposure and provides a defensible data processing chain when the regulator asks where the data went.
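To illustrate the architecture (a generic, hypothetical sketch, not Transmute Engine's actual API), a first-party collector is conceptually an endpoint on your own subdomain that minimises the payload before anything is stored or routed onward:

```python
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

# Minimisation at the edge: only whitelisted, non-identifying fields
# survive, so the payload that gets stored or routed is already lean.
ALLOWED_FIELDS = {"event", "page", "order_value"}

@app.post("/collect")
def collect():
    raw = request.get_json(silent=True) or {}
    event = {k: raw[k] for k in ALLOWED_FIELDS if k in raw}
    with open("events.ndjson", "a") as f:  # your storage, your jurisdiction
        f.write(json.dumps(event) + "\n")
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)  # served behind your own subdomain in production
```

Because the endpoint lives on your subdomain and the whitelist runs before storage, you can show a regulator exactly what was collected and where it went.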
Key Takeaways
- The EU AI Act applies to marketing agencies as deployers — using ChatGPT or Claude for client work puts you in scope from August 2025 onwards.
- Prohibited practices are already in force since February 2025 — subliminal manipulation and vulnerability exploitation via AI are illegal for EU-facing campaigns, not just ethically discouraged.
- Most campaign AI is low-risk — copy generation, image creation, report summarisation carry transparency obligations but not the full high-risk compliance burden.
- Behavioural profiling and agentic AI push toward high-risk — the more autonomous and individually targeted the AI decision, the higher the classification.
- Local LLM inference reduces exposure — on-premise models eliminate GPAI provider dependencies, GDPR transfer risk, and data sovereignty concerns in a single architectural decision.
Frequently Asked Questions
Does the EU AI Act apply to marketing agencies using tools like ChatGPT and Claude?
Yes. Marketing agencies using AI tools for client work are classified as deployers under the EU AI Act. Rules for general-purpose AI models like ChatGPT and Claude applied from August 2025. Deployer obligations include human oversight for high-risk uses, AI literacy training for relevant staff, and transparency disclosures for AI-generated content. Using a compliant tool doesn't mean your use of it is automatically compliant.
What counts as high-risk AI under the EU AI Act?
High-risk AI is defined in Annex III of the regulation. For marketing, the main risk categories involve AI that evaluates or scores individuals, AI used in employment contexts, and AI influencing access to essential services. Standard campaign execution AI — copy generation, image creation, analytics summarisation — is generally low-risk. AI that builds individual behavioural profiles or makes autonomous decisions about specific individuals starts moving toward high-risk classification.
Does running a local LLM exempt an agency from the EU AI Act?
Local LLM inference is not exempt from the EU AI Act, but it substantially reduces exposure. Running an open-weight model on your own hardware for internal use means you're not subject to GPAI provider rules; those apply to the model developer, not to you as a self-hosted operator (though substantially modifying a model and placing it on the market can shift provider obligations onto you). GDPR transfer risks disappear because no data leaves your infrastructure. Transparency obligations for AI-generated outputs still apply. The compliance burden is lighter, and the data sovereignty position is stronger.
What documentation does an agency actually need?
A practical baseline: a register of AI systems your agency uses, the purpose of each use, what personal data flows into each system, and the human oversight mechanism for each. For high-risk AI uses, a fundamental rights impact assessment may be required. For any AI-generated content, transparency disclosures must be in place. This doesn't require enterprise compliance software — an honest audit of your AI stack and a documented policy your team follows is the starting point.
Which AI practices are already prohibited?
Since February 2025: AI that manipulates individuals through subliminal techniques below conscious awareness, AI that exploits vulnerabilities based on age, disability, or economic situation to distort behaviour, and AI-based real-time biometric identification in public spaces. For marketing agencies, the subliminal manipulation prohibition is the most directly relevant — AI-powered personalisation that targets psychological vulnerabilities in ways individuals cannot perceive or consciously resist is now prohibited for EU-facing campaigns.
The EU AI Act isn’t waiting for agencies to catch up. The clock is already running. Seresa builds first-party data infrastructure that gives you a defensible data chain — before the regulator asks for one.
