Ancestry.com | DNA | Big Pharma

 My father had an unusual hobby.

He was a chartered accountant by profession, but in his spare time he reconstructed our family tree. Not with software, but in archives. Hours behind microfiche readers, scrolling through handwritten church registers, municipal records and fragile documents that were sometimes centuries old.

It required patience, discipline and a certain fascination with detail.

Genealogy today looks very different.

Millions of people explore their ancestry through consumer DNA tests and large online genealogy databases. With a saliva sample and a few clicks, people can discover relatives across continents and reconstruct family histories that once required years of archival work.

It is fascinating technology.

But it also raises an interesting privacy question.

Genetic information is fundamentally different from other personal data. It does not only describe you as an individual. It also reveals information about your parents, siblings, children and future generations.

And in today’s data economy, genetic data has extraordinary value.

Large aggregated DNA databases are extremely valuable for medical research and pharmaceutical development. Patterns in genetic data help researchers identify disease mechanisms, discover drug targets and understand why certain treatments work for some people but not for others.

In other words: the data generated through consumer DNA testing contributes to one of the most valuable biological datasets ever created.

None of this is inherently negative. Medical progress depends on data and research.

But it does raise a simple and important question about awareness.

When people take a DNA test to explore their ancestry, do they fully realize that they are also contributing to a rapidly growing genetic data ecosystem with scientific and commercial value?

Family history once required archives, patience and microfiche.

Today it also requires something else: understanding the value of our data.

Why some people are cautious about DNA tests

With DNA tests from companies such as Ancestry and 23andMe, people hand over a piece of their genetic information. That has benefits, but also a few points worth thinking about.

1. Privacy of your DNA

Your DNA is the most personal data in existence.

Companies store it in their databases. Some people find that sensitive, because:

data can be retained for years

it is sometimes used for scientific research

databases can, in theory, be hacked

2. Police use (in some countries)

In a few cases, DNA databases have been used in criminal investigations.

This happens mainly through other sites such as GEDmatch, where users can voluntarily upload DNA data.

3. Unexpected family discoveries

DNA tests can reveal things families did not know, for example:

an unknown half-brother or half-sister

a different biological father

an adoption history

For some people, that can be emotional or complicated.

4. You also share your family's DNA

When you upload your DNA, you indirectly reveal information about your:

parents

siblings

children
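The scale of this indirect sharing can be made concrete. The sketch below is illustrative only: it uses widely cited textbook averages for the fraction of autosomal DNA shared between relatives, not any testing company's actual figures, and the function name is my own.

```python
# Illustrative sketch: approximate average fraction of autosomal DNA
# shared with relatives (textbook averages; real values vary per pair).
EXPECTED_SHARED = {
    "identical twin": 1.00,
    "parent / child": 0.50,
    "full sibling": 0.50,   # average over many sibling pairs
    "half sibling": 0.25,
    "grandparent / grandchild": 0.25,
    "aunt / uncle": 0.25,
    "first cousin": 0.125,
}

def substantially_exposed(min_share=0.25):
    """Relatives for whom a single test indirectly reveals a large genome fraction."""
    return [rel for rel, share in EXPECTED_SHARED.items() if share >= min_share]

print(substantially_exposed())
```

One saliva sample therefore says something meaningful about at least three generations of relatives, none of whom consented to the test.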


Large DNA databases are extremely valuable to pharmaceutical companies. By analyzing millions of genetic profiles, they can:

link genetic variants to diseases

discover new drug targets

understand why some people do or do not respond to a medicine

Companies such as 23andMe, for example, have had research deals with pharmaceutical firms such as GlaxoSmithKline.

The data used for this is usually anonymized and aggregated for research.

In most countries (certainly in Europe under the General Data Protection Regulation), companies may not simply target individuals on the basis of their genetic data.

But there are realistic risks and points of debate:

genetic data could, in theory, affect insurance or risk profiles

companies can use aggregated genetic trends for market strategies

large DNA databases accelerate commercial drug development

The real debate

The core question that is often asked is:

When people take a DNA test for genealogy, do they realize that their genetic data may also contribute to an enormous commercial research database?

Genetic data is extremely valuable for research and pharma. Some researchers even call it the oil of the 21st century, only biological.

Joan D. Mulder, legal professional

I’m a legally trained consultant specializing in digital governance and AI & data risk.

I help organizations understand how technology decisions translate into legal exposure, contractual risk, and accountability gaps.

My strength lies in asking the right structural questions — before issues escalate into disputes or compliance problems.

I work remotely with international teams on legal research, contract strategy, and governance frameworks around AI and data use.

AI

 That small lens icon on images. Harmless convenience?

With one click, an image is analyzed.

Objects identified.

Text extracted.

Context interpreted.

Convenient. Efficient.


But it normalizes something: that images are routinely analyzed by AI.

A photo often contains more than we realize: faces, locations, documents, interiors, behavioral context.

The question is not whether visual search is useful. The question is: when does image analysis become the default layer of our online environment?

And how transparent are we about what happens to that data afterwards?

Technology rarely shifts abruptly. It simply becomes normal.

🔶️

Additional Layer: From Interaction to Infrastructure

What begins as a feature gradually becomes infrastructure.

When visual analysis is embedded by default, several structural questions arise:

Is image processing limited to immediate functionality, or does it feed broader machine learning systems?

Is visual interaction treated as behavioral data?

Are images transient inputs, or long-term assets within AI ecosystems?

Does user awareness meaningfully match technical reality?

The normalization of image analysis changes the baseline of digital interaction.

It subtly shifts control from user perception to system capability.

This is not about suspicion. It is about proportionality.

As analytical capacity increases, so should clarity around:

• purpose limitation

• data retention

• secondary use

• model training inputs

• oversight mechanisms


Convenience scales quickly. 

Accountability must scale with it.

The lens icon is small.

But the governance questions it raises are structural.

Google Lens

Recently, a small lens symbol has begun appearing on images across the internet.

With a single click, the image is analyzed.

This functionality — powered by Google Lens — enables object recognition, text extraction, product comparison, and contextual identification.

It is efficient.

It is technically impressive.

It is seamless.

But it also represents something larger.

The Normalization of Image Analysis

When image analysis becomes embedded as a default layer of online interaction, a structural shift occurs.

An image is rarely “just” an image. It may contain:

Faces

Addresses or street signs

License plates

Documents

Interior details

Behavioral context

With one click, that visual information enters an AI analysis environment.

The question is not whether visual search is useful. It clearly is.

The question is what happens next.

Is the image used solely to generate immediate search results?

Is it retained?

Is it used to improve AI models?

Is usage behavior linked to broader profiling?

From Feature to Infrastructure

Technological shifts rarely happen abruptly.

They become normalized through convenience.

A tool introduced as a search enhancement can, over time, become a structural component of data ecosystems.

This is not an argument against innovation.

It is an invitation to consider governance.

When image analysis becomes frictionless, transparency and purpose limitation become more important — not less.

Efficiency does not eliminate the need for clarity.

It increases it.

The lens icon may be small.

The infrastructure behind it is not.

Joan D. Mulder

When Banks Claim Your Digital Identity — Under the Banner of Security

 More and more banks now ask for additional digital data: selfies, video ID, device information, location data, and behavioral signals.

The explanation is always the same:

“For your security.”

“To prevent data breaches.”

It sounds reasonable.

But something fundamental is shifting.

Banks are quietly moving from financial service providers to custodians of your digital identity.

Where you once were simply a customer, you are now becoming a data profile: how you type, where you log in from, which device you use, how your face moves during verification. This is known as behavioral biometrics. Officially it exists to prevent fraud. In practice, it creates an increasingly detailed record of your existence.
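To make "behavioral biometrics" less abstract, here is a minimal, hypothetical sketch of how a typing-rhythm profile could be built and compared. Every function name, value, and threshold is my own illustrative assumption, not any bank's actual implementation.

```python
# Hypothetical sketch of typing-rhythm profiling ("behavioral biometrics").
# Events are (key, timestamp_ms) pairs; all numbers are illustrative.

def flight_times(events):
    """Intervals (ms) between consecutive keystrokes."""
    return [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]

def profile(events):
    """Summarize a typing sample as (mean, std dev) of its intervals."""
    gaps = flight_times(events)
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return mean, var ** 0.5

def matches(stored, sample, tolerance_ms=40):
    """Crude check: does a new sample's rhythm resemble the stored profile?"""
    return abs(stored[0] - profile(sample)[0]) <= tolerance_ms

# A profile "enrolled" from an earlier session...
enrolled = profile([("p", 0), ("a", 120), ("s", 230), ("s", 360)])
# ...compared against a fresh login attempt with a similar rhythm.
attempt = [("p", 0), ("a", 130), ("s", 250), ("s", 370)]
print(matches(enrolled, attempt))  # prints True
```

Even this toy version shows the point: the profile is derived from how you behave, not from anything you consciously submit, and it keeps accumulating with every session.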

Not because individual institutions are malicious — but because the system itself is evolving this way.

Here’s the paradox: the more data gets centralized, the greater the damage when it leaks.

You can change a password. You can replace a passport. But you cannot replace your face. You cannot reset your nervous system. You cannot regenerate your behavioral patterns.

Regulation is often cited as justification. Yes, banks are required to identify customers. But regulation calls for verification — not maximal data extraction. That distinction matters.

What we’re seeing instead is a gradual normalization of expansion: extra checks, “smart security,” default opt-ins. No public debate. No clear moment of consent. Just small screens with large consequences.

We’ve already watched this pattern unfold across tech platforms, including AI rollouts by companies like Meta Platforms — features appearing quietly, framed as convenience, powered by continuous data capture.

This isn’t primarily a technological issue. It’s a question of ownership. Who owns your digital shadow?

Privacy is not a luxury. It is sovereignty. And sovereignty rarely disappears overnight. It erodes through forms. Through updates. Through policies labeled “security.”

My personal boundary is simple: I accept financial services. I accept reasonable identification. But I do not accept an open-ended license on my digital existence.

Maybe it’s time we start asking a different question: not what is allowed —but what is proportional?

When banks claim your digital identity, under the banner of security

Banks increasingly ask for extra digital data: selfies, video ID, device information, location and behavioral data. The explanation is always the same: for your security, to prevent data breaches. But something fundamental is shifting here.

Banks are moving from financial service providers to custodians of your digital identity.

Where you were once simply a customer, you are now becoming a data profile: how you type, where you log in from, which device you use, how your face moves during verification. This is called behavioral biometrics. Officially it exists for fraud prevention. In reality, an ever richer record of your existence takes shape.

Major players such as ING Group and Rabobank follow this international trend too, not out of malice, but because the system is designed this way.

The bitter part is: the more data is centralized, the greater the damage when it leaks. You can change a password. You can apply for a new passport. But you cannot apply for a new face. Or a new nervous system.

Legislation is often cited as justification. But it requires identification, not maximal data collection. That distinction is essential.

As with the AI features from Meta Platforms, this happens step by step: extra checks, smart security, checkboxes ticked by default. No debate. No big announcement. Just small screens with big consequences.

The real question is not technological. It is existential: who owns your digital shadow?

Privacy is not a luxury. It is sovereignty. And it rarely vanishes with a bang, but through forms, updates and so-called security measures.

My boundary is simple: I accept services. I accept reasonable identification. But no open-ended license on my digital existence.

Maybe it is time we asked again: not what is allowed, but what is proportional?

Data farming: when convenience quietly becomes ownership

Recently, a new icon appeared on my WhatsApp home screen.

No announcement. No explicit consent. Just… there.

An AI assistant from Meta Platforms.

It looks harmless: an extra button, a smart helper.

But it points to something much bigger: data farming.

Data farming isn’t just about collecting information.

It’s about turning our conversations, searches, preferences, and behaviors into commodities.

Not once.

Continuously.

And usually quietly.

We live in a time where:

– features are added “automatically”

– opting out is often harder than opting in

– updates happen server-side, outside your personal settings

– AI tools appear without clear explanations of what is actually being stored

That’s not a conspiracy theory.

That’s the business model.

Platforms like WhatsApp and LinkedIn don’t primarily exist to help us — they exist to collect, analyze, and monetize data.

The real question isn’t:

“Is AI useful?”

The real question is:

who owns the context of your life?

Privacy isn’t a luxury.

It’s autonomy.

And autonomy rarely disappears with a bang.

It fades through small icons.

Through defaults.

Through silent updates.

My personal rule is simple:

If something appears without my explicit consent,

then it doesn’t belong to me.

Maybe it’s time we all take a closer look at what we use “for free” —

and what we quietly give in return.