(margin*notes) ^squared

Displaced and Datafied

Biometrics and the contested politics of humanitarian tech

Michael L. Bąk
Aug 03, 2025
If you’ve flown domestically through a Thai airport lately, you’ve probably been nudged to voluntarily hand over your face scan in exchange for a “smoother” boarding process. I decline. (And for now, at least, that’s still allowed.)

At the UNESCO Global Forum on the Ethics of AI in Bangkok back in June, I avoided the badge scanners tracking participants as they entered sessions. I slipped in through a side door. Why should someone log the topics I care about?

These are intentional choices over which I still have agency. After all, where does all that data go? Who gets to see it and analyse it? To what end? What’s it combined with?

That’s how I opened a recent talk at Chiang Mai University, part of a seminar on biometric digital IDs for refugees organized with Goldsmiths, University of London. (You can find out more about this project, Reimagining Digital Identity: Practices among Karen refugees in Thailand, at https://www.redid.net/.)

And then, just two nights ago, after I had started work on this article, I joined a call with Southeast Asian tech and human rights advocates. We listened to Dr. Jean Linis-Dinco, a Filipina recognised by Women in AI Ethics as among the global Top 100 Women in Artificial Intelligence Ethics in 2022. She warned against the normalisation of biometric scanning, in which refugees, dissenters, the incarcerated, and the indigenous are almost always the first to be digitally datafied.

Such populations often lack the visibility or leverage to challenge these practices, making them convenient test beds for normalizing surveillance with very little resistance.

Meanwhile, the prevailing narrative pushed by tech companies frames these technologies as somehow neutral and apolitical. They are usually positioned benevolently as tools for things like ensuring child safety, preventing fraud, or strengthening national security.

But wait a sec: these technologies aren’t just tools.

While we may argue the merits of whether or not digital IDs are solutions searching for a problem, we must agree that these technologies are deeply political instruments. And they are imposed on vulnerable people who don’t have meaningful ways to say no. As Dr. Linis-Dinco reminded us, when you call these systems tools, you strip away the politics and power embedded in how they’re built, where they’re deployed, and who profits from them. In other words, like all political things, these tools carry an implanted hierarchy of priorities and values. Affirming their political nature brings those priorities into focus and makes them visible for democratic debate.

As you likely already gathered from my previous writing, I resist the temptation to fall in line with the view that technology is inevitable. For instance, it is not inevitable that some countries and entities get to push for innovation while other places – say, conflict zones and refugee camps – become the labs, the spaces for experimentation and exploitation. That’s not inevitability; it’s conscious choice-making.

Technology’s trajectory is shaped by public policy, financial incentives, and social decisions. And by activism, democratic claims, and much else besides. These trajectories are as political as they are technical. A new asset class, if you will.

When it comes to biometric digital ID systems — especially when imposed on vulnerable populations — we must resist narratives that present these new tools as somehow apolitical, inevitable, or inherently beneficial. When we do so, we deny developers and deployers the ability to walk away from the consequences of what they create.

Wait, let me rephrase that: we deny developers and deployers the ability to disregard, from the outset, the consequences for which they should be responsible.

So, if we accept these technological tools are political, we have to ask political questions:

  • Whose interests do they serve?

  • Who has power?

  • Who gets to exercise control over the data, insights and implications?

  • And who bears the risks?

I’m no luddite. I use tech all the time. I’ve worked for aid agencies. I’ve worked in refugee and displaced persons camps. I’ve worked in big tech. I know the systems from the inside. And I know how easily people can be – and are – reduced to data points, users and data subjects, especially when they look, love, eat, believe and live differently than we do. I’ve also seen how so-called “good ideas” go unchallenged. Tech adopted as a bureaucratic duty.

But people aren’t data. They are citizens, political beings. And if we apply a citizenship lens – rather than a consumer or compliance lens – then our tech solutions need to hold up to the scrutiny demanded of human dignity and agency, not just operational efficiency.

At their core, these tech tools, these political tools, are about power.

As Petra Molnar reminds us in her book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, this space where tech impacts the lives of people on the move, the most vulnerable among us, is largely unregulated, with weak oversight and governance mechanisms. Government border agencies and thousands of private companies all engage in and justify technological experiments (or research, or optimisation, or…) in migration management because people on the move have historically been, and continue to be, treated as expendable. [1]

Please keep this political frame in mind as I cover just three of the key topics that I spoke to at the Chiang Mai seminar: Consent, Purpose, and Policy.

Consent:

Let’s start with the idea of informed consent. In short, without an individual’s genuine understanding, and without real (equitable) alternatives, we cannot assume consent is informed or given freely.

Refugees are routinely “asked” to consent to providing iris scans, fingerprints, and facial biodata. These biometrics are often surrendered without knowing who will store the data, how it might be used in the future, or what could happen if it’s breached or misused. Although people assume their data is held by the NGO or UN agency ostensibly helping them, most of the time it is stored with private companies with opaque links to governments, security services or tech firms. Sometimes with border enforcement agencies like ICE or Frontex. Or with companies with questionable, extremist credentials, like Palantir or Clearview AI.

So what kind of choice is that?

Imagine this: You’re invited onto a roller coaster, a contraption you’ve never seen before. Maybe you’ve never even heard of a coaster before. It's confusing. You have no idea how fast it goes. Naively, you might not even be afraid — unless someone tells you about the loops, the drops, the batwings, the barrel rolls. You might ask: is there a gentler ride? Can I walk away?

Now imagine being told: No ride, no dinner. No ride, no medical care. No ride, no fuel. What would that feel like to you? (Coercive, possibly?)

Now imagine trying to explain that roller coaster to someone in an indigenous or underrepresented language that doesn’t even have words for “loop” or “corkscrew” or “helix.” Then swap the roller coaster analogy for the tech reality, in which people are asked to navigate systems built around terms like “cybersecurity,” “data governance,” “surveillance,” “cloud” and “privacy.”

When those words don’t translate clearly – or at all – it becomes pretty hard to pretend that consent in digital ID systems is anything but hollow, especially for the world’s most vulnerable populations. Weak or uninformed consent, without the choice of equitable alternatives, can’t be considered consent. It’s more like a mandated SOP that fails to consider the political, social and economic implications for people.

And frankly, it’s not good enough. Because when things go wrong – and they will go wrong, whether through a data breach or the unintentional merging of datasets that were never meant to be combined – it’s the most vulnerable who bear the consequences. The insights produced may be flawed, invasive, or harmful, and the damage will land squarely on those least equipped to contest it.

As one of the conveners of the Chiang Mai seminar noted in her book Technocolonialism, a rigid digital ID system in Kenya created lifelong consequences. In the late 1980s, thousands of Somali refugees arrived there. Extremely vulnerable themselves, ethnic Somali Kenyans had their children fingerprinted and registered as refugees in order to access food and healthcare. Decades later, their children were denied Kenyan national ID cards because their fingerprints were in the refugee database that ended up in government hands. That data, passed from agency to agency, left around 40,000 people effectively stateless. [1]

I think it's pretty hard to trust in a system when consent is so weak. And when trust is weakened, it's pretty hard to uncritically accept the purpose behind the consent demands.

Purpose:

Digital ID systems are sold as making things more efficient and combatting fraud. But faster for whom? Fraud-proof against whom?

If this is about a refugee mother taking an extra kilo of rice, I’m not much interested. Because refugee camps often have internal self-regulating mechanisms. It would be tough to get away with self-serving fraud for any length of time when you’re surrounded by people suffering as much as you. Real fraud – the kind that harms people – tends to happen upstream: in procurement deals, inflated contracts, security forces and opaque supply chains.

Some place the blame on government donors for pushing these tech-based systems to address their need to ferret out fraud (real or imagined). Maybe. But agencies and NGOs also have more power and agency than they realize: to demand that donors slow down, to question the premise, to say no. To refuse to contribute to the normalisation of using military-grade technology to control how much food or fuel a refugee mother receives.

When faced with this fraud dread, these are the questions I think we need to ask:

  • What’s the marginal efficiency gain of a high-tech ID system over some other solution?

  • Are those gains worth the surveillance risks, both current and future, imposed on real human beings?

I’d argue it would be quite sensible to adopt a digital-age version of Occam’s Razor when deploying frontier technologies — especially on vulnerable populations like refugees, or in contexts where surveillance creep is a risk. It might go something like this:

Among competing solutions, the one that requires the least complex human rights safeguards and the least mitigations of socio-economic and political harms to people should be preferred.

In other words: the simplest solution isn’t just the most efficient, it’s the one least likely to violate rights.

If we applied this new Occam’s Razor, it would force us to keep asking ourselves what we aren’t talking about, uncovering more and more of the potential harms, both present and future.

Consider transgender refugees. Facial recognition systems regularly misgender or flag non-binary people as suspicious. These are not harmless glitches. They can trigger bullying, exclusion, even violence if the data and insights land in the wrong hands. The harm follows them across checkpoints, schools, clinics, employment and financial systems.

We need to be concerned about what gets linked to all this data extracted from the vulnerable bodies of refugees. Not only because their human dignity and worth demand it, but also because what’s tested on them eventually gets deployed on us as well.

Skeptical? Consider that right now in Australia the prominent hardware chain Bunnings is seeking changes to privacy laws so that it can once again use facial recognition systems to mitigate shoplifting. (It had already deployed the technology in its stores but was required to pause after the Australian Information Commissioner determined the rollout violated privacy laws.)

So maybe this biometric data somehow gets linked to things like your social media network. Your ride-hailing trips. Your purchase history. Stores and venues you visit. Your political activism. Now combine all that with a facial recognition database used by law enforcement or border patrol. That’s no longer just bits of data; it becomes a fairly sophisticated, structured surveillance infrastructure with more and more power – often in the hands of private actors – to exert control.

And here’s the kicker: data doesn’t die.

You can’t un-share it. You can’t burn it. Data sticks around—ready to be reused, misused, or misinterpreted. Forever.

And when that data is extracted from our bodies at moments of deep physical, emotional, or financial vulnerability, the risks compound. Later combinations of that data — across systems and platforms — can produce harmful, inaccurate, even discriminatory insights. These “insights” can then fuel AI systems making decisions about everything from access to healthcare and loans, to immigration status, job offers, or surveillance flags.

Policy:

Given the lasting harms that can arise from the permanence of digital data — and the capitalist drive to endlessly extract, combine and recombine it in search of profitable insights and uses — we cannot afford to outsource frontier tech governance (AI being just one part of it) to developers or private deployers.

It demands an informed public sector, strong legal infrastructure, and political leadership grounded in accountability — with inclusive oversight led not just by experts, but by citizens, too.

In the U.S., just in the last year, we’ve seen political shifts open the door to database combinations that would have been unthinkable even a year ago: IRS records, social protection services usage, even visa applicants’ social media histories — all now potentially fair game for surveillance and immigration control.

This isn’t theoretical; it’s happening right now.

Civil society, community groups, and independent researchers must be part of tech governance processes. Those who make the tech, peddle the tech, and optimise the tech cannot regulate themselves. As citizens, we can demand much broader consideration of harms. For example, not stopping at “data breaches” as the core harm, but digging deeper to understand that the danger lies in data combinations and sharing – especially when AI models generate insights that are speculative, biased, or even fabricated.

We need governments with in-house expertise, knowledge and cutting-edge skills because we can’t defer these responsibilities to private actors. Too many tech deployments are driven by corporations (with incentives that are decidedly not in the public interest, especially when their targets are the vulnerable, poor, and foreign) precisely because public officials don’t have the capacity to evaluate them. The result? Private companies define not only what gets built and the narratives around its utility, but also how it’s used, stored, and procured. [2]

Importantly, don’t forget that the history of innovation is littered with examples of vulnerable groups being used as test subjects:

  • The US government’s STI study of Guatemalans in the 1940s. [3]

  • The Tuskegee Syphilis Study on African American men from 1932 to 1972. [4]

  • The Pfizer Trovan study in Nigeria in 1996. [5]

  • Facebook’s Emotional Contagion Study in 2014. [6]

  • All the AI-enabled surveillance tech tested and used upon occupied Palestinians. [7]

  • Rohingya biometric data shared with Myanmar (the government they were fleeing). [8] [9]

  • Border zones.

We can’t forget that what’s labeled innovative and championed as technological progress is often built on the backs of people least able to opt out or push back.

The so-called voiceless.

A Final Word

Most of you reading this post likely aren’t refugees and probably haven’t experienced the pain of displacement. You’re likely reading this on a mobile phone, tablet, or laptop computer. Refugees’ experiences in camps can seem a world away – out of sight, possibly out of mind.
