Google, Gemini and Consent: Why This Matters
Over the past months, Google has quietly rolled out new AI functionality — including Gemini — across its ecosystem: Gmail, Docs, Drive and more.
What concerns me is not the existence of AI itself.
It’s how it was enabled.
Through product updates, AI features were activated by default, requiring users to manually opt out by changing multiple settings. No clear, explicit consent was requested beforehand.
In other words:
users had to revoke access — rather than grant it.
For many people, this meant discovering Google AI embedded throughout their accounts, analysing content unless they actively disabled it. I did so immediately, because I do not consent to:
automated AI processing of my private communications
broad data collection justified as “product improvement”
cookies and tracking mechanisms beyond what is strictly necessary
This issue is not hypothetical. It is currently at the centre of legal actions in the United States and Europe, where the core question is simple:
Can a company enable AI access to personal data by default — and call silence “consent”?
From a user perspective, this matters deeply.
Consent should be explicit, informed and current — not inherited from decades-old terms of service, nor assumed from inactivity after an update.
I am not anti-technology.
I am pro-choice, pro-privacy and pro-transparency.
If AI is going to touch personal data, the burden should not be on users to hunt through settings to protect themselves. The burden should be on companies to ask — clearly, plainly, and in advance.
Anything else erodes trust.