(margin*notes) ^squared

Citizen, Consumer, Person, User

Language matters in talking about technology, power, and responsibility

Michael L. Bąk
Jul 23, 2025

A subscriber to this newsletter dropped me a note recently, pointing to how I write about people “…as citizens, not consumers.”

She wrote: “I liked that part — the focus on citizens.”

I’m really glad she picked up on this point in particular. This language is essential to how I think about digital technology and the responsibilities of those who build, sell, and deploy it, as well as those tasked with regulating it. After all, the “tech” often looms over our lives, like storm clouds or sunshine, affecting nearly every aspect of how we live, whether we want it to or not.

From MAUs to Missing the Point

When I worked on Facebook’s policy team (later Meta), everyone from the CEO on down referred to people on the platform as “users”. It was shorthand that we all defaulted to out of habit. It rolled off the tongue and was built into the statistics that defined the company’s success: DAUs and MAUs — Daily and Monthly Active Users.

To be honest, I was never entirely comfortable with the term user. I trace that discomfort back to nearly two decades spent working in international development, where the focus was on real, living people – building communities through social, physical, and institutional infrastructure designed to support their human flourishing.

So at Facebook, I felt guilty when user slipped out.

Instead, I’d try to say things like “people who use Facebook” or “people on Facebook.” But those felt comparatively clunky and still carried a subtle but persistent tinge of consumerism: use, engagement, metrics, scale. They failed to capture the deeper sense of agency I see as fundamental to us as human beings.

It didn’t capture that the people on the other end of “the product” weren’t just clicking and scrolling and posting. They were living, hurting, speaking, learning, connecting. And sometimes being victimised, even dying.

The impacts of these digital platforms extended far beyond their “user base.” People who had never once logged on to a platform still felt the tech’s effects: weakened democratic institutions, disinformation, harassment, and even ethnic cleansing.

User didn’t come close to capturing the weight of that.

It wasn’t until I left big tech and began working with the Forum on Information and Democracy that I started to use – and embrace – a different term: “citizen”.

But let me walk through how I ultimately decided on this word.

Language Is Not Neutral

“All human beings are born free and equal in dignity and rights.” That’s the first line of the Universal Declaration of Human Rights. Citizenship is built on personhood. And personhood means more than participation in a market. It means belonging to and participating in shaping society. It also means being accountable to each other.

I think the language we use to describe people in relation to technology reveals assumptions about power, agency, and responsibility. A consumer receives a service. A user engages with a product. Whereas a “person” has agency. And a “citizen” participates in shaping the systems that govern them.

That distinction may seem philosophical, but it is also very, very political. When we treat people as citizens — not just consumers or users — we affirm their rights to demand fairness, transparency, and justice. Not just better products.

In the context of frontier technologies like AI and algorithmic platforms, this distinction is really foundational:

As citizens, we can make claims: for fairness, for transparency, for justice.

As consumers, we are expected only to choose, use, or exit.

Big tech’s language isn’t accidental. In my view, users and consumers are depoliticizing terms. Their use strips away not only human agency but also the social and moral dimensions of the technology, making it easier to ignore externalities and harder to build accountability.

This vocabulary turns citizens into data points and growth targets, while making it easier to evangelise a sense of technological inevitability. By changing the terminology we use, we foreground citizens’ agency in shaping the trajectory of that technology in ways our communities and societies want. Deliberative democracy requires empowered citizens – not users, not consumers.

So when we talk about users, we need to ask: Whose language is this? Whose interests does it serve? Whose power is it stripping away? Whose power is it serving?

Terms of Citizenship

I acknowledge that focussing on the term citizen is a political act. It insists that technology is not a product to be scaled at any cost, especially when the ultimate cost is borne by people in places like Myanmar and Ethiopia. By centering the term citizen we show that technology is a force that must be governed in the public interest, for the public interest.

Citizenship demands this. It shifts the framing from some kind of voluntary participation in a neoliberal, capitalist marketplace to the fact that we are all born in dignity and rights. In the digital age, our demands for democratic oversight and accountability of technology are a birthright. Especially now that frontier technology shapes so much of our lives, it must be answerable to the people it affects: to the citizens who often have no practical choice but to engage with its products.

Citizenship asserts that we are not merely participating in a market, but inhabiting a society. In a region as diverse and dynamic as mine here in Southeast Asia, this means recognizing the political and social agency of people navigating authoritarianism, inequality, and digital extraction.

I insist on language that reflects that agency.

Responsibility Has a Face

And then there’s the other side of the coin. We often talk about corporations as if they are some kind of self-contained machine — abstract actors that do things. “Facebook did this. Amazon did that. Google did this other thing. Oh, and X…”

But corporations are made up of people – real flesh and blood people! Real people write the code, choose the business model, set the risk tolerance. Real people are the ones making what some say are the impossible tradeoffs.

And those people? They’re citizens, too. Not above or outside society, but part of it. When decisions cause harm, that harm doesn’t emanate from a “platform” or some intangible “technology.” It comes from choices made by people serving as executives, engineers, designers, policy leads and PR spinners. By human beings. Persons.

Treating companies as faceless entities and citizens as users in a marketplace are abstractions that allow moral responsibility to dissipate. The company pays a fine. No one is held accountable. And the cycle continues.

The result is a situation where the people with the most ability to effect change are often the least personally exposed to the consequences of their actions. And while we can’t move mountains by more intentionally using the terms citizen and person, we just might be able to diffuse some of this corporate power and better concentrate responsibility where it belongs.

Think of it this way: imagine if a company internalised the reality that citizens’ lives are impacted by the mere existence of its products, whether they use them or not. Maybe the people inside those companies would be a little less inclined to make deals with authoritarian regimes, enable censorship of protected speech, or contribute to surveillance so as to make more profit – all while claiming some sort of neutrality.

But, really, there is no neutrality. These kinds of policy decisions are not made by “an algorithm.” They are made by people, themselves citizens, too.

It’s people helping people or people hurting people. Never is it just companies helping users or just companies hurting users. People.

And So…

In my view, ultimately, the words we choose shape the world we want.

If we refer to people as users or consumers, we frame the challenges around technology as a market problem that we can solve with tweaks, upgrades, user options, or simply exiting from the product. If, however, we refer to people as citizens and persons, we insist on democratic safeguards, public interest solutions, deliberation, and accountability.

Recent Nobel Laureates Daron Acemoglu and Simon Johnson remind us that we can shape the arc of technological innovation through the social, political, and economic decisions we make because “the power to persuade is no more preordained than is history; we can also refashion whose opinions are valued and listened to and who sets the agenda.”

When we refer to the beneficiaries of technology (or, in some cases, its casualties) as consumers or users, we lose the democratic agency we all possess as the people and citizens whose opinions are valued and listened to in setting the agenda for safety, governance, and regulation.
