Joan D. Mulder, mr.

I’m a legally trained consultant specializing in digital governance and AI & data risk.

I help organizations understand how technology decisions translate into legal exposure, contractual risk, and accountability gaps.

My strength lies in asking the right structural questions — before issues escalate into disputes or compliance problems.

I work remotely with international teams on legal research, contract strategy, and governance frameworks around AI and data use.

AI

That small lens icon on images. Harmless convenience?

With one click, an image is analyzed.

Objects identified.

Text extracted.

Context interpreted.

Convenient. Efficient.


But it normalizes something: that images are routinely analyzed by AI.

A photo often contains more than we realize: faces, locations, documents, interiors, behavioral context.

The question is not whether visual search is useful. The question is: when does image analysis become the default layer of our online environment?

And how transparent are we about what happens to that data afterwards?

Technology rarely shifts abruptly. It simply becomes normal.

🔶️

Additional Layer: From Interaction to Infrastructure

What begins as a feature gradually becomes infrastructure.

When visual analysis is embedded by default, several structural questions arise:

Is image processing limited to immediate functionality, or does it feed broader machine learning systems?

Is visual interaction treated as behavioral data?

Are images transient inputs, or long-term assets within AI ecosystems?

Does user awareness meaningfully match technical reality?

The normalization of image analysis changes the baseline of digital interaction.

It subtly shifts control from user perception to system capability.

This is not about suspicion. It is about proportionality.

As analytical capacity increases, so should clarity around:

• purpose limitation

• data retention

• secondary use

• model training inputs

• oversight mechanisms


Convenience scales quickly. 

Accountability must scale with it.

The lens icon is small.

But the governance questions it raises are structural.

Google Lens

Recently, a small lens symbol has begun appearing on images across the internet.

With a single click, the image is analyzed.

This functionality — powered by Google Lens — enables object recognition, text extraction, product comparison, and contextual identification.

It is efficient.

It is technically impressive.

It is seamless.

But it also represents something larger.

The Normalization of Image Analysis

When image analysis becomes embedded as a default layer of online interaction, a structural shift occurs.

An image is rarely “just” an image. It may contain:

Faces

Addresses or street signs

License plates

Documents

Interior details

Behavioral context

With one click, that visual information enters an AI analysis environment.

The question is not whether visual search is useful. It clearly is.

The question is what happens next.

Is the image used solely to generate immediate search results?

Is it retained?

Is it used to improve AI models?

Is usage behavior linked to broader profiling?

From Feature to Infrastructure

Technological shifts rarely happen abruptly.

They become normalized through convenience.

A tool introduced as a search enhancement can, over time, become a structural component of data ecosystems.

This is not an argument against innovation.

It is an invitation to consider governance.

When image analysis becomes frictionless, transparency and purpose limitation become more important — not less.

Efficiency does not eliminate the need for clarity.

It increases it.

The lens icon may be small.

The infrastructure behind it is not.

Joan D. Mulder

When Banks Claim Your Digital Identity — Under the Banner of Security

More and more banks now ask for additional digital data: selfies, video ID, device information, location data, and behavioral signals.

The explanation is always the same:

“For your security.”

“To prevent data breaches.”

It sounds reasonable.

But something fundamental is shifting.

Banks are quietly moving from financial service providers to custodians of your digital identity.

Where you once were simply a customer, you are now becoming a data profile: how you type, where you log in from, which device you use, how your face moves during verification. This is known as behavioral biometrics. Officially it exists to prevent fraud. In practice, it creates an increasingly detailed record of your existence.

Not because individual institutions are malicious — but because the system itself is evolving this way.

Here’s the paradox: the more data gets centralized, the greater the damage when it leaks.

You can change a password. You can replace a passport. But you cannot replace your face. You cannot reset your nervous system. You cannot regenerate your behavioral patterns.

Regulation is often cited as justification. Yes, banks are required to identify customers. But regulation calls for verification — not maximal data extraction. That distinction matters.

What we’re seeing instead is a gradual normalization of expansion: extra checks, “smart security,” default opt-ins. No public debate. No clear moment of consent. Just small screens with large consequences.

We’ve already watched this pattern unfold across tech platforms, including AI rollouts by companies like Meta Platforms — features appearing quietly, framed as convenience, powered by continuous data capture.

This isn’t primarily a technological issue. It’s a question of ownership. Who owns your digital shadow?

Privacy is not a luxury. It is sovereignty. And sovereignty rarely disappears overnight. It erodes through forms. Through updates. Through policies labeled “security.”

My personal boundary is simple: I accept financial services. I accept reasonable identification. But I do not accept an open-ended license on my digital existence.

Maybe it’s time we start asking a different question: not what is allowed —but what is proportional?

When banks claim your digital identity — under the banner of security

More and more often, banks ask for additional digital data: selfies, video ID, device information, location, and behavioral data. The explanation is always the same: for your security, to prevent data breaches. But something fundamental is shifting here.

Banks are moving from financial service provider to custodian of your digital identity.

Where you were once simply a customer, you are now becoming a data profile: how you type, where you log in, which device you use, how your face moves during verification. This is called behavioral biometrics. Officially intended for fraud prevention. In reality, it produces an ever-richer record of your existence.

Major players such as ING Group and Rabobank are following this international trend — not out of malice, but because the system is built this way.

The bitter irony: the more data is centralized, the greater the damage when it leaks. You can change a password. You can apply for a new passport. But you cannot apply for a new face. Or a new nervous system.

Legislation is often cited as justification. But it requires identification — not maximal data collection. That distinction is essential.

As with the AI features from Meta Platforms, this happens step by step: extra checks, smart security, checkboxes switched on by default. No debate. No major announcement. Just small screens with large consequences.

The real question is not technological. It is existential: who owns your digital shadow?

Privacy is not a luxury. It is sovereignty. And it rarely disappears with a bang — but through forms, updates, and so-called security measures.

My boundary is simple: I accept services. I accept reasonable identification. But no open-ended license on my digital existence.

Maybe it is time we ask a different question: not what is allowed, but what is proportional?

Data farming: when convenience quietly becomes ownership

Recently, a new icon appeared on my WhatsApp home screen.

No announcement. No explicit consent. Just… there.

An AI assistant from Meta Platforms.

It looks harmless: an extra button, a smart helper.

But it points to something much bigger: data farming.

Data farming isn’t just about collecting information.

It’s about turning our conversations, searches, preferences, and behaviors into commodities.

Not once.

Continuously.

And usually quietly.

We live in a time where:

– features are added “automatically”

– opting out is often harder than opting in

– updates happen server-side, outside your personal settings

– AI tools appear without clear explanations of what is actually being stored

That’s not a conspiracy theory.

That’s the business model.

Platforms like WhatsApp and LinkedIn don’t primarily exist to help us — they exist to collect, analyze, and monetize data.

The real question isn’t:

“Is AI useful?”

The real question is:

who owns the context of your life?

Privacy isn’t a luxury.

It’s autonomy.

And autonomy rarely disappears with a bang.

It fades through small icons.

Through defaults.

Through silent updates.

My personal rule is simple:

If something appears without my explicit consent,

then it doesn’t belong to me.

Maybe it’s time we all take a closer look at what we use “for free” —

and what we quietly give in return.

Remote Legal Support (Research & Structuring)

CORE PAGE

Joan D. Mulder, lawyer

Role

Remote Legal Support — Research & Structuring

I support international teams and professionals by structuring, summarising and researching legal and regulatory information, enabling clear and informed decision-making.

I do not provide legal advice and do not act as a legal representative. My work is strictly supportive and informational.


What I Do

  • Structure and summarise legal documents and contracts
  • Identify key issues and factual risk areas
  • Conduct legal and regulatory desk research
  • Translate complex legal information into clear, decision-ready documents

What I Do Not Do (Scope)

  • No legal advice
  • No binding legal interpretation
  • No representation or procedural actions
  • No claims of local legal qualification

All deliverables are intended to support internal or external decision-making.


Example Engagements

1. Short Assignment — Hourly

Contract or Document Review (Quick Scan)

Scope

  • Review and structure provided document(s)
  • Summarise key provisions
  • Identify factual attention points and risk areas

Deliverable

  • 2–3 page structured summary
  • Clear bullet-point overview

Time & Terms

  • 2–4 hours
  • Hourly rate: €45–€65 (excl. VAT)
  • Delivery within 1–2 business days

2. Defined Project — Fixed Fee

Research Memo / Decision Support Document

Scope

  • Define research question
  • Conduct desk research (legislation, guidance, public sources)
  • Structure findings into a clear memo

Deliverable

  • 3–5 page research memo
  • Findings overview
  • Source references
  • Key considerations for next steps

Time & Terms

  • 8–15 hours
  • Fixed fee: €500–€900 (excl. VAT)
  • Turnaround: approximately one week

Working Method

  • Clear scope and time agreement upfront
  • One delivery moment
  • Additional questions or extensions are treated as a new assignment

Practical

  • Fully remote
  • English or Dutch output
  • Suitable for international contexts
  • Rates excl. VAT

🌐

A short assignment can be used to assess fit before engaging in a larger project.