Written by Marina Brocca
Index
- The story that illustrates everything: a real experiment shared on LinkedIn
- Current status of the AI Act
- The “digital Chernobyls” that have already happened (and justify this protocol)
- AI Protocol: the 7 mandatory phases before automating with AI
- PHASE 1 – Classification of the use case
- PHASE 2 – Data evaluation and architecture (DPIA + IA DPIA)
- PHASE 3 – Contractual shielding (Legal Pack)
- PHASE 4 – Mandatory transparency (NOW applicable)
- PHASE 5 – Human supervision and technical safeguards
- PHASE 6 – Quarterly continuous evaluation
- PHASE 7 – Internal corporate governance
- My recommendation for compliance tools
- Checklist: The 7 mandatory phases before automating with AI
- Conclusion: AI won’t replace you… but it can ruin you if there is no governance
For today’s article, we have the collaboration of Marina Brocca, an expert in GDPR and legal marketing, who has prepared this complete guide to safely integrating AI in companies and marketing departments: without legal risks, without reputational problems, and without “digital Chernobyls.”
How much content about AI automation and integration have you already read or heard? A huge amount, I am sure, just like me. But there is one detail: none of it explains how to do it without risks, without gaps, and without sanctions. And believe me, those sanctions will come, and they will be scandalous.
If your company already uses chatbots, AI-generated copy, advanced personalization, smart price adjustment, automated campaigns, or complex integrations in n8n, Make, or Zapier… this guide is not an option: it is your only legal and strategic shield.
Today, artificial intelligence can generate double-digit margins for any company, but it can also destroy its reputation in less than 7 seconds. The difference between a company that bills €5,000 and another that charges €12,000 for its governance boils down to one word: shielding.
This guide is not a simple post. It is a high-value professional protocol, ready to implement in your company. A framework that you can establish as a standard of excellence and for which you can charge an AI Responsibility Fee of between €18,000 and €45,000.
The story that illustrates everything: a real experiment shared on LinkedIn
Recently, a professional in the sector shared a very revealing experiment on LinkedIn. He had configured a seemingly simple flow in n8n: based on CRM data (name, email, location, company…), an AI agent generated a personalized email for each contact. The idea was good: each person would get genuinely personalized treatment without the team investing hours reviewing hundreds of messages.
He launched the flow: 750 emails in queue, 40 minutes of execution, everything seemed perfect.
Until the replies started coming in.
In less than 2% of cases, the AI hallucinated: it invented personal information about the recipient that never appeared in the CRM. Small but completely false things: home cities, non-existent job titles, random hobbies… personalizations that looked human but were not. “Small delusions,” he called them.
The professional discovered it through the replies of the recipients themselves—some with humor, others with surprise—pointing out the “creative touches” that the automation had decided to add on its own.
The reflection he shared was blunt: “This anecdote left me thinking about the real risks of automating with AI without guardrails appropriate to the risk of the process.”
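To ground the lesson, here is a minimal guardrail sketch in Python. Everything in it is illustrative: the CRM record, the template, and the generate_body() stub standing in for the real LLM call. The idea is that the model writes only a generic paragraph, while every personal fact is substituted by code from verified CRM fields, so it simply cannot invent a home city or a hobby.

```python
from string import Template

# Hypothetical CRM record; field names are illustrative.
record = {"first_name": "Ana", "city": "Madrid", "company": "Acme"}

TEMPLATE = Template("Hi $first_name, we noticed $company is based in $city. $body")

def generate_body(record: dict) -> str:
    # Call your LLM here, instructing it to write ONE generic paragraph
    # and to mention no personal facts at all. (Stubbed for this sketch.)
    return "We'd love to show you how teams like yours automate reporting."

def build_email(record: dict) -> str:
    body = generate_body(record)
    # Guardrail: verified CRM fields are substituted by code, never written
    # by the model, so it cannot hallucinate a city or a job title.
    return TEMPLATE.substitute({**record, "body": body})

print(build_email(record))
```

Had the flow in the anecdote used this pattern, a hallucinated hobby could never have reached a recipient, because the model never gets to write personal facts at all.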
Current status of the AI Act
Regulation is no longer a future promise: it is in force right now and directly affects every action you implement with AI. Ignoring it is no longer an option; every decision, automation, or integration must consider compliance. Let’s see how this translates into specific obligations, key dates, and real risks for your company:
- Prohibited since February 2, 2025: unmarked deepfakes, social scoring, and subliminal manipulation.
- GPAI Models (OpenAI, Google, Anthropic, Meta): transparency obligations since August 2, 2025.
- Limited risk (chatbots, synthetic content): mandatory transparency NOW.
- High risk: mandatory documentation since August 2026 and full application in August 2027.
- Sanctions: up to €35M or 7% of global turnover. Inspections are already underway in Spain, Italy, and France.
Spain already has open inspections. The law is real and it is being enforced.
If you want a summary of the new regulation, I recommend this post, which I have written to be clear and easy to follow.
The “digital Chernobyls” that have already happened (and justify this protocol)
Google Gemini 2024–2025
Google suspended its Gemini AI feature for generating images of people after users reported that it generated historically incorrect representations, such as Chinese Nazis or Black Vikings, by overrepresenting ethnic minorities.
Lesson: the AI’s output is your responsibility.
Air Canada 2024–2025
Its chatbot invented a refund policy. The court ordered it to pay.
Lesson: you are legally responsible for your chatbot’s hallucinations.
Willy Wonka Glasgow 2024
They promoted an event with AI-generated images that didn’t exist. Massive complaints.
Lesson: the misleading use of AI generates lawsuits and damages the company’s reputation.
AI Protocol: the 7 mandatory phases before automating with AI
Forget about “let’s try it out” or “we’ll adjust as we go.”
That is how companies end up in press headlines or in AEPD sanction files.
Companies that bill more, retain enterprise clients, and sleep soundly always follow these seven phases, in this exact order.
They are not optional. They are your legal shield, your competitive advantage, and your new source of margin.
If you skip even a single phase, you are playing Russian roulette with your company’s, or your client’s, reputation and money.

Here are the 7 phases, explained clearly and with everything you need to implement them tomorrow:
PHASE 1 – Classification of the use case
Before implementing any AI, classify the use according to legal and user rights risk (see the sketch after this list):
- Content generation: minimal/limited risk. Requires clear labeling as “Generated by AI.”
- Customer service or lead generation chatbots: limited risk. It is mandatory to inform the user that they are interacting with an AI and maintain conversation logs.
- User segmentation with sensitive data: high risk. Requires a Data Protection Impact Assessment (DPIA) and EU registration starting in 2026.
- Dynamic pricing or algorithms affecting user economics: high risk. Any decision affecting their economic rights requires strict controls.
- Mass SEO content: limited risk, though it could be high if it impacts users. Requires human supervision.
- Synthetic voice or deepfakes: prohibited unless there is visible marking and explicit consent.
- Automation with n8n or other orchestration systems: minimal/limited risk if they only execute technical tasks. However, they move to high risk when they trigger actions that make automated decisions with legal or economic effects on the user.
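To make the classification operational, here is a minimal sketch; the use-case keys and their tiers are illustrative labels for the list above, not an official taxonomy. The point is that the deployment pipeline refuses prohibited uses outright and flags high-risk ones for their extra obligations.

```python
from enum import Enum

class Risk(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Illustrative mapping of the use cases above to AI Act tiers.
USE_CASE_RISK = {
    "content_generation": Risk.MINIMAL,
    "customer_chatbot": Risk.LIMITED,
    "sensitive_segmentation": Risk.HIGH,
    "dynamic_pricing": Risk.HIGH,
    "mass_seo_content": Risk.LIMITED,
    "unmarked_deepfake": Risk.PROHIBITED,
}

def gate(use_case: str) -> None:
    """Refuse to deploy anything whose obligations are not yet met."""
    risk = USE_CASE_RISK[use_case]
    if risk is Risk.PROHIBITED:
        raise RuntimeError(f"{use_case}: prohibited under the AI Act")
    if risk is Risk.HIGH:
        print(f"{use_case}: DPIA + EU registration required before launch")

gate("dynamic_pricing")  # prints the high-risk warning
```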
PHASE 2 – Data evaluation and architecture (DPIA + AI DPIA)
(The moment you decide if the project is legal or a time bomb)
Why do you have to answer these 7 questions IN WRITING before touching a single piece of your customers’ data?
Because if tomorrow the AEPD or a client asks you “where is my data and who has seen it?”, the answer “uh… I think everything is fine” costs you between €20,000 and €35 million in fines and, of course, a client relationship broken forever.
The 7 key questions
1. What exact data will enter the AI?
Literal list: name, email, purchase history, location, income, etc.
If you don’t know precisely, you are already violating the minimization principle of the GDPR.
2. Are they really necessary for what we want to do?
If you can do the copy or segmentation without the ID number or salary, remove them (see the minimization sketch after this list).
You avoid being classified as high risk and reduce the potential fine.
3. Is it transferred outside the European Economic Area (EEA)?
The USA, India, Singapore, etc. are obviously located outside the EEA.
If yes: you need extra measures (see question 7).
4. Does the provider use that data to train its models?
Quick answer: OpenAI (free and standard plans) YES by default.
OpenAI Enterprise, Claude Enterprise, Gemini Enterprise, Mistral, Groq, Cohere: NO.
If the provider trains with customer data without permission, it is a serious violation of the GDPR + AI Act.
5. Is there a signed DPA (Data Processing Agreement) with ALL subprocessors?
It’s not just OpenAI. Also with Pinecone, Supabase, ElevenLabs, etc.
Without a DPA, remember, you and the client are jointly liable.
6. Is sensitive data automatically deleted after use?
Example: the prompt with personal data should disappear in under 30 days (ideally under 24 h).
Most Enterprise plans do this by default.
7. Has a Transfer Impact Assessment (TIA) been performed?
Mandatory whenever data leaves the EEA (question 3). Document it and keep the PDF with the rest of the project file.
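Question 2 in code: a minimal data-minimization sketch, assuming a hypothetical allowlist of the only fields this particular use case needs. Everything else (ID numbers, salaries, emails) is stripped before the prompt ever reaches the provider.

```python
# Hypothetical allowlist: only the fields this specific use case needs.
ALLOWED_FIELDS = {"first_name", "company", "industry"}

def minimize(crm_record: dict) -> dict:
    """Drop everything the prompt does not strictly need (GDPR minimization)."""
    return {k: v for k, v in crm_record.items() if k in ALLOWED_FIELDS}

record = {"first_name": "Ana", "email": "ana@example.com",
          "salary": 48000, "company": "Acme", "industry": "retail"}
print(minimize(record))  # email and salary never reach the provider
```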
How to document it in 5 minutes
Create a one-page table per project in Notion, Google Docs, or Excel and fill it out like this:
1. What exact data is used?
- Answer: Email, purchase history, location
- Supporting document: Screenshot of prompt / code
2. Is it necessary for the purpose?
- Answer: No, it can be anonymized
- Supporting document: Written justification
3. Is there a transfer outside the EEA?
- Answer: Yes (USA)
- Supporting document: List of providers
4. Are models trained with this data?
- Answer: No (Model: Claude Enterprise)
- Supporting document: Screenshot of contract or website
5. Has a DPA been signed with all providers?
- Answer: Yes, date: 03/12/2025
- Supporting document: Link to signed PDF
6. Is there automatic data deletion?
- Answer: Yes, in 24 hours
- Supporting document: Provider settings
7. Has a TIA (Transfer Impact Assessment) been performed?
- Answer: Yes, date: 03/15/2025
- Supporting document: TIA PDF
Save that table signed by the AI Manager and the client.
That is your life insurance and the simplest and most effective way to cover yourself in case something goes wrong.
You can also use an automated tool that does 90% for you. With DataGrail, Ethyca, or Osano, you connect your tools and in 24 hours it generates the complete data map plus the answers to the 7 questions.
Result: in one afternoon you have the perfect legal document for any enterprise client.
PHASE 3 – Contractual shielding (Legal Pack)
Every AI provider must sign:
- A full Data Processing Agreement (GDPR DPA).
- An AI-specific addendum, including:
- Absolute prohibition of using customer data to train models.
- Breach notification in less than 24h.
- Joint liability for damages.
- Right to annual audit.
- Model Card (a standardized document describing how a specific AI model works, its limitations, biases, performance, and recommended use cases) or System Card of the model (a broader and more detailed version than the Model Card; includes not only the model but the entire system surrounding it: infrastructure, safeguards, moderation policies, etc.). Google uses it for Gemini and it is the most demanding standard today.
- Written commitment to comply with the AI Act.
PHASE 4 – Mandatory transparency (NOW applicable)
It’s not optional. It’s not “best practice.” It’s the law.
If you don’t warn the user that they are interacting with an AI, you are violating the AI Act and exposing yourself directly to fines of up to €15 million or 3% of global turnover. And worst of all: if it’s an implementation you’ve done for another company, the final client (the data controller) pays the fine, but the client will claim it from you with interest because you were the one who proposed and implemented the solution.
Specific and exact examples you have to apply:
Email marketing / newsletters / automation:
Mandatory text (in the subject line or in the first paragraph/footer clearly visible):
“This message was written with the help of artificial intelligence and reviewed by our team.”
Chatbots and assistants on web / WhatsApp / Instagram
The first message the user sees must clearly say:
“Hi, I’m an artificial intelligence assistant. I can make mistakes. Everything I say will be reviewed by a human if you need it.”
If you don’t include this, you are violating Article 50(1) of the AI Act as of now (I have yet to find websites that actually do this).
Images and videos generated or retouched with AI
You have two legal options (choose one, but never none):
- Include a visible watermark (bottom right or left corner): “AI-generated image” or the “AI” symbol (see the sketch below).
- Use C2PA metadata (the official standard already used by Adobe, Leica, Microsoft, Google, etc.)
This embeds the provenance information in the file itself, where it is far harder to strip out.
Example: any Instagram carousel with Midjourney images must include a mandatory watermark.
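For the watermark option, a minimal sketch with the Pillow imaging library (the file names are hypothetical; embedding real C2PA metadata requires a dedicated C2PA SDK instead):

```python
from PIL import Image, ImageDraw  # pip install Pillow

def watermark(path_in: str, path_out: str,
              label: str = "AI-generated image") -> None:
    """Stamp the mandatory label in the bottom-right corner."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    draw.text((w - draw.textlength(label) - 10, h - 24), label, fill="white")
    img.save(path_out)

watermark("creative.png", "creative_marked.png")
```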
Paid ads on Meta, Google Ads, TikTok Ads, LinkedIn Ads
Since January 2025, all platforms require marking the ad if it contains:
- Synthetic image.
- Video with deepfake or synthetic voice.
- AI-generated text (in some cases).
On Meta and Google you have to check the box: “This ad contains altered or AI-generated content.”
If you need templates to adapt privacy policies, clauses for chatbots, launch campaigns, promotions with AI images, etc., I have developed AI Web Adaptation Template KITS.
PHASE 5 – Human supervision and technical safeguards
(The phase that prevents your company from being the next viral case for the AI saying nonsense or causing harm).
A clear explanation, without technical jargon, as if you were telling a client or a manager who doesn’t know technology: this phase is the safety net.
Here you put real controls so the AI never acts alone and uncontrollably.
1. Gradual deployment (don’t release it all at once).
Never activate AI for all users on day one. Do it in four safe steps: first to 0.5% of users (almost no one notices, but you see if something fails).
If it goes well after 3-7 days, move to 5%.
If it stays perfect, expand it to 25%.
Only when you’ve gone several weeks without problems, deploy it for everyone.
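A minimal sketch of that ramp as deterministic, percentage-based bucketing (user IDs are assumed to be stable strings): hashing keeps each user in the same cohort for the entire 0.5% → 5% → 25% → 100% rollout.

```python
import hashlib

ROLLOUT_PERCENT = 0.5  # raise to 5, then 25, then 100 as the weeks pass

def in_rollout(user_id: str, percent: float) -> bool:
    """Deterministic bucketing: the same user always gets the same answer,
    so the cohort stays stable while you watch for failures."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return bucket < percent * 100  # `percent`% of the 10,000 buckets

if in_rollout("user-42", ROLLOUT_PERCENT):
    pass  # serve the AI flow
else:
    pass  # serve the existing, non-AI flow
```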
2. Mandatory human review.
Remember the example I gave at the beginning: there should always be a person reviewing the important stuff. AI lacks judgment. You don’t.
Human review is 100% mandatory for:
- All text published or sent to the final client (ads, emails, posts, landing pages).
- Any segmentation or campaign moving more than €50,000 in budget.
- Any price that changes automatically (dynamic pricing).
- Any delicate chatbot response (refunds, complaints, personal data).
Easy rule to remember:
“If it can cost money, reputation, or a lawsuit, a human looks at it before it goes out.”
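The rule, turned into a sketch: a hypothetical gate (field names, threshold, and trigger words are all illustrative) that routes any action touching money, prices, or complaints to a human queue before anything goes out.

```python
REVIEW_TRIGGERS = ("refund", "complaint", "passport")  # illustrative words
BUDGET_THRESHOLD_EUR = 50_000  # the campaign-budget rule above

def needs_human_review(action: dict) -> bool:
    """True if the rule of thumb says a human must look first."""
    if action.get("budget_eur", 0) > BUDGET_THRESHOLD_EUR:
        return True
    if action.get("changes_price", False):  # dynamic pricing
        return True
    text = action.get("text", "").lower()
    return any(trigger in text for trigger in REVIEW_TRIGGERS)

print(needs_human_review({"text": "Your refund has been approved."}))  # True
```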
3. Kill-switch: the emergency red button.
Two types, both essential:
- Manual kill-switch: a button on a dashboard (or even a physical one) that stops the ENTIRE AI system in 3 seconds. You or the client press it and it’s over.
- Automatic kill-switch: the tool autonomously blocks any response if the model says “I’m not sure” or confidence is lower than 85%.
Example: the AI is going to send an email saying “your order arrives tomorrow,” but it would actually arrive in 15 days. The system automatically blocks and moves to human review.
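A minimal sketch of the automatic variant. One loud assumption: LLM APIs do not return a single confidence score out of the box, so confidence here stands for whatever your pipeline produces (a separate classifier, a log-probability heuristic, etc.).

```python
CONFIDENCE_FLOOR = 0.85  # the 85% rule above
UNSURE_MARKERS = ("i'm not sure", "i am not sure", "i cannot confirm")

def auto_killswitch(response: str, confidence: float) -> str | None:
    """Return the response if safe; None sends it to human review."""
    if confidence < CONFIDENCE_FLOOR:
        return None
    if any(marker in response.lower() for marker in UNSURE_MARKERS):
        return None
    return response

# The delivery-date example: low confidence, so nothing reaches the customer.
assert auto_killswitch("Your order arrives tomorrow.", 0.41) is None
```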
4. Logs of everything the AI does (keep the audit trail for 24 months)
You have to be able to prove what the company asked, what the AI replied, and what was sent to the final client. For that, use a simple logging tool (I recommend some at the end of the post).
These tools automatically save everything and allow you to search in seconds: “What did the AI tell customer X on March 12?”. It is how you demonstrate diligence and provide evidence if there is a complaint or an AEPD inspection.
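Before choosing a tool, it helps to see what such a log must capture. A minimal sketch as an append-only JSONL file (a managed tracing tool like the Langfuse option recommended below does the same thing, searchably and at scale):

```python
import json, time, uuid

AUDIT_LOG = "ai_audit_log.jsonl"  # retain for 24 months, per the rule above

def log_interaction(customer_id: str, prompt: str,
                    response: str, sent: bool) -> None:
    """One line per interaction: who asked what, what the AI replied,
    and whether it actually went out to the customer."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "customer_id": customer_id,
        "prompt": prompt,
        "response": response,
        "sent_to_customer": sent,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```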
PHASE 6 – Quarterly continuous evaluation
Every 90 days, audit the AI:
- Review 100 random outputs.
- Bias test (Holistic AI or Fairlearn; see the sketch below).
- Prepare an executive report for the client.
- Monitor model drift (changes in algorithm behavior).
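For the bias test, a minimal sketch with Fairlearn. The records and the “gender” attribute are fabricated placeholders; plug in your real sample of 100 outputs, their decisions, and whichever sensitive attribute you need to test.

```python
import random
from fairlearn.metrics import demographic_parity_difference  # pip install fairlearn

# Placeholder data standing in for your real output store.
all_outputs = [{"approved": random.random() > 0.5,
                "gender": random.choice(["f", "m"])} for _ in range(1000)]
sample = random.sample(all_outputs, 100)  # the 100 random outputs to review

y_pred = [o["approved"] for o in sample]   # the model's decision
sensitive = [o["gender"] for o in sample]  # attribute to test for bias

# Demographic parity ignores y_true, but the API requires it. A value near 0
# means similar approval rates across groups; investigate any large gap.
gap = demographic_parity_difference(y_true=y_pred, y_pred=y_pred,
                                    sensitive_features=sensitive)
print(f"Demographic parity difference: {gap:.3f}")
```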
PHASE 7 – Internal corporate governance
(The intangible asset that turns compliance into a margin-generating machine).
This is the phase that separates companies that “use AI” from those that make money protecting the client while using AI. Without internal governance, everything before (classification, contracts, guardrails…) collapses as soon as a junior copy-pastes a prompt into ChatGPT with sensitive data.
The six mandatory elements that every serious company should have implemented before December 31, 2025:
1. Internal Responsible AI Use Policy (2025 version)
A document signed by management. It includes:
- List of allowed and prohibited models (e.g., GPT-4o Enterprise yes, free ChatGPT NO).
- Golden rule: “No sensitive data leaves the company without DPA + non-training clause.”
- Absolute prohibition of uploading sensitive data to public models.
- Procedure for approving new AI providers/tools.
- Internal sanctions (from a warning up to dismissal).
2. Internal Registry of Use Cases and Risk Classification
- A living table (Notion, Airtable, or Google Sheets) with one row per project/client, containing:
- Client.
- Use case (chatbot, AI copy, segmentation, etc.)
- AI Act classification (minimal/limited/high).
- Models used.
- Date of last audit.
- Project manager.
This is your life insurance in case of an AEPD inspection or a lawsuit.
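One registry row as a typed structure, with hypothetical values; whether the table lives in Notion or in code, the point is that every project has exactly these fields filled in.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UseCaseRecord:
    """One row of the registry; fields mirror the list above."""
    client: str
    use_case: str        # chatbot, AI copy, segmentation, ...
    ai_act_risk: str     # minimal / limited / high
    models: list[str]
    last_audit: date
    project_manager: str

row = UseCaseRecord("Acme", "customer_chatbot", "limited",
                    ["Claude Enterprise"], date(2025, 3, 12), "A. García")
```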
3. Mandatory annual training (minimum 8 certified hours)
Mandatory 2025 syllabus:
- GDPR + AI Act applied to marketing.
- Real risks and Chernobyl cases.
- Safe use of prompts and tools.
- How to detect hallucinations and biases.
- Emergency protocol if the AI says something illegal or toxic.
- Final certification with exam.
4. Official appointment of AI Manager (AI Officer)
It doesn’t have to be a new person (it can be the CTO, DPO, or a senior operations manager), but it must exist in writing.
Functions:
- Approve or veto any new AI tool.
- Review the Use Case Registry quarterly.
- Lead the AI Committee.
- Be the spokesperson with clients on compliance issues.
5. Monthly AI Committee (30–60 minutes)
- Mandatory attendance: management + AI Manager + legal/operations.
- Fixed agenda: review of new AI projects.
- Incidents from the last month.
- Provider and tool updates.
- Compliance metrics.
6. AI Responsibility Fee – The bonus that skyrockets margin
Companies that have these 5 previous points implemented can (and should) charge a surcharge of 15–25% on any project that includes AI. How you sell it to the client (a phrase that converts 100%):
“Due to the implementation of the AI Protocol and complete AI governance, we apply an AI Responsibility Fee of X%. This surcharge covers risk classification, contractual shielding, human supervision, quarterly audits, and the peace of mind that you will never have a fine or scandal due to our AI.”
Real 2025 data (from 27 Spanish and Latin American companies that apply it):
- More than an 18.4% increase in average net margin on AI projects.
- Sales closing with enterprise clients up 41%.
- Client retention of 96% (because no one else offers this level of protection).
My recommendation for compliance tools
In an increasingly regulated and digitized environment, the adoption of compliance tools is not optional: it is key to ensuring that our operations with data and AI systems comply with GDPR, the AI Act, and technological governance best practices.
I have selected these tools that will allow you to automate, audit, and control different critical aspects, reducing legal and reputational risks, and facilitating transparency and trust in your campaigns.
Each type of tool serves a specific purpose within the compliance ecosystem:
- CMP: Usercentrics Cookiebot (undoubtedly the best for Consent Mode v2). It guarantees that consent collection is transparent and in accordance with cookie and privacy regulations. Plus, it’s very easy to implement.
- Traceability: Langfuse (free and open source). It allows you to log and audit how data and AI models are used, essential for complying with documentation and accountability obligations.
- Guardrails: Lakera Gandalf or Guardrails AI. These tools establish technical limits and controls to prevent misuse or unexpected AI results.
- Governance: Holistic AI (the most complete enterprise option). I love this tool because it centralizes the supervision, auditing, and risk management of AI and data systems.
- Data mapping: DataGrail / Ethyca. These tools make it easier to know what data is collected, how it’s used, and where it’s stored, ensuring compliance with user rights and legal obligations.
Checklist: The 7 mandatory phases before automating with AI
I summarize the Legal and Strategic Shielding Protocol that separates companies that bill more from those that take unnecessary risks. You can extract it and follow it step by step.
Phase 1: Use case classification
- Goal and Golden Rule: Determine the level of legal and user rights risk (AI Act).
- Key Documentation: Use case registry (Classification: Minimal/Limited/High).
Phase 2: Data and architecture evaluation
- Goal and Golden Rule: Answer the 7 key questions in writing (DPIA/data minimization) before using any data.
- Key Documentation: 7-Question Table (Traceability, minimization, TIA, DPA).
Phase 3: Contractual shielding (Legal Pack)
- Goal and Golden Rule: Ensure AI providers are prohibited from using your data for training and that they assume liability.
- Key Documentation: Full DPA, AI-specific addendum, Model Card or System Card.
Phase 4: Mandatory transparency
- Goal and Golden Rule: Inform the user clearly and visibly that they are interacting with AI (mandatory NOW by AI Act).
- Key Documentation: Warning clauses in Chatbots and Emails, Watermark/C2PA metadata in images.
Phase 5: Human supervision and safeguards
- Goal and Golden Rule: Implement the technical safety net so AI never acts uncontrollably.
- Key Documentation: Gradual deployment (0.5% → 100%), Kill-switch (manual/automatic), Activity logs (keep the audit trail for 24 months).
Phase 6: Quarterly Continuous Evaluation
- Goal and Golden Rule: Audit model performance and biases periodically to prevent drift.
- Key Documentation: Quarterly executive report, bias test, review of 100 random outputs.
Phase 7: Internal corporate governance
- Goal and Golden Rule: Turn compliance into a margin-generating machine and shield the company from within.
- Key Documentation: Internal Responsible Use Policy, Appointment of AI Officer, AI Responsibility Fee.
Conclusion: AI won’t replace you… but it can ruin you if there is no governance
Marketing is no longer defined just by creativity, SEO, or flashy campaigns. Today, true competitive advantage lies in trust, compliance, security, and traceability.
Companies that integrate AI in a serious, transparent, and controlled way:
- Increase margins without taking unnecessary risks.
- Compete at the level of large enterprise clients.
- Prevent legal problems and costly lawsuits.
- Consolidate themselves as market leaders, trustworthy and professional.
The real difference is not in using AI, but in working with guarantees: having a solid, replicable, and profitable framework.
Those who implement it correctly stop being “a company that uses AI” and become “the company that protects the client while others cause disasters.”
Because in a market saturated with empty promises, security and trust are the assets that truly generate lasting value.