(margin*notes) ^squared
Blinded by the Rights

From Tech and Personal Freedoms to Defending Collective Resiliencies

Michael L. Bąk
Nov 10, 2025

We’ve all been there — nodding along at conferences and seminars where digital rights are framed in familiar terms: privacy, freedom of expression, access to remedy. These are essential. But something’s missing.

In our rush to defend individual entitlements, have we been blinded by the rights?

By focusing so intently on the personal — on what I can say, what you can do — we’ve lost sight of the public infrastructures and systems that hold and lift those rights up: functioning institutions, open discourse, ecological balance, social trust and shared truth.

Digital technologies are transforming how individuals experience harm, and they’re reshaping the underlying conditions that make rights possible in the first place.

By only protecting the person, we may have left the public unguarded.

To be fair, it took millennia of human existence for those rights to finally be codified in international law. After two devastating world wars, societies across Europe, Japan, the Americas, and former colonies were left grappling with the trauma of violence, authoritarianism, and state-sponsored abuse. In response, the nations of the world came together to define and defend a core set of human rights, grounded not in citizenship or status, but in our shared humanity.

As digital technologies accelerated in the 21st century, governments began reinforcing human rights frameworks in national and regional law — think GDPR, privacy and cybersecurity laws. It looked like a win-win: policymakers committed to protecting our rights in the digital age, and tech companies proclaimed these values were “in the DNA” of their digital products and business practices.

Freedom of expression. Check.

Freedom of thought. Check.

Freedom of association. Check.

All very good.

But in all the enthusiasm to focus on individual rights — and the smooth corporate rhetoric that accompanied it — did we miss something crucial?

By centering personal entitlements, early on we largely overlooked the structural effects of the technologies themselves. We missed the forest for the trees. And in doing so, we failed to fully imagine how these powerful new systems — and the surveillance capitalism that fuels them — would erode the public goods, institutions, and collective dignity that make rights enforceable in the first place.

I’ve been thinking about this lately: how the narratives that shaped early digital governance — largely written in and by the Global North — missed the mark. Perhaps the Global Majority has always been more attuned to the collective and structural costs of frontier tech. Perhaps histories of colonial extraction and top-down development make them more alert to systems that concentrate power, extract value and profit, and eschew accountability.

What if the Global Majority had been more deeply involved in shaping the trajectory of these technologies from the start? Not as dependent recipients. Not as afterthoughts. But as leaders of global digital governance.

What if?

As we approach governance of AI and frontier tech today, we have another chance. Another chance to find out how the expertise and lived experience of the Global Majority can provide teachable moments and leadership for policymakers and policy influencers around the world, especially in the north. Yes, leadership from the Global Majority.

So I ask: by focusing almost entirely on individual rights, might global tech governance have missed the forest for the trees?


Considering Rights in the Age of AI

There’s a category of rights that rarely gets airtime in Northern policy discussions: solidarity rights — sometimes called third-generation rights. These are not about individual entitlements, but about collective well-being. They include the right to development, peace, a healthy environment, education, healthcare, and the right to share in scientific progress. And perhaps most urgently today: the emerging right to information integrity.

Skeptical that the North struggles with this concept? Just remember how Hillary Clinton was pilloried in American conservative media for saying “It takes a village to raise a child” — a sentiment that’s second nature to much of the world. The backlash was more than political; it reflected a deep American discomfort with collective responsibility. We see that same discomfort with climate justice.

Individual rights are well established in international law — and some collective rights are, too. But the capitalist North has long prioritised the former, marginalising community-based rights in both discourse and regulation.

Solidarity rights do have a basis in hard law, and increasingly they’re being expanded through soft law — in UN resolutions, declarations, and normative frameworks built on international cooperation. That doesn’t make them less important. In fact, in the context of AI and frontier technologies, they may be more urgent than ever.

These solidarity rights fundamentally ask us to think about justice, fairness, and dignity not only in terms of individual entitlements, but in terms of what holds us together and makes our communities resilient — institutions, trust, ecological balance, information ecosystems, and social cohesion.

They force us to reconsider our priorities — not just how we regulate technology, but why. Should growth be the goal? Or should we centre human flourishing at the heart of innovation?

Blinded by the Rights

In much of the Global North, digital governance has generally focused narrowly on shielding individuals from personal harms: protecting privacy, ensuring access, preserving expression. That work matters, but it is incomplete.

Our failure to adequately govern that new American “innovation” of social media was a warning. While we sought to safeguard individual rights (likely over-indexing on the American First Amendment view of the world), we overlooked how these same platforms were degrading the structural foundations of democracy: public discourse, pluralism, and civic trust.

Governments stood by as the extractive logic of surveillance capitalism — led by American tech bros — undermined our societies, all so they could grow rich. As societies, we paid a heavy price: polarisation, and disinformation that eroded truth, corroded elections, and fractured public trust.

We underestimated how fast bad actors could exploit attention markets or collapse consensus reality. Worse, we overlooked how the platforms themselves were built to do just that: optimise outrage, monetise emotion, and profit from division. All in order to make lots of money, very fast.

We defended individual privacy but failed to forcefully defend public goods.

We protected the right to speak — but not the people harmed by what was said. Vulnerable communities were left exposed to hate speech, harassment, even genocide.

Meanwhile, many countries in the Global South — particularly those long engaged in climate justice and development debates — have consistently emphasised collective rights and global equity. Their views recognise that harm is not just personal, but structural, systemic, and often generational.

Might it be that their approaches are better suited to grappling with the community-level impacts of AI: on democratic institutions, on shared knowledge, on societal resilience?

And so… what?

We need to expand our perspectives. That means treating AI and frontier tech not just as a technical, economic, or regulatory issue, but fundamentally as a human rights issue, with consequences for both individuals and communities.

The good news: the tools already exist. What’s needed is the political will to apply them properly across government, civil society and business.

For Governments

Governments must move beyond privacy regulation and start governing for collective rights. That means ensuring transparency in public-sector AI systems — especially in high-stakes areas like immigration, policing, social protection, and healthcare. These systems clearly affect individuals, but they also shape entire communities.

States must also fund independent research and impact assessments that examine not only individual outcomes (e.g., biased immigration decisions) but also community-level effects (e.g., discrimination against entire groups and the erosion of social trust).

International frameworks already affirm this duty: the UN Guiding Principles on Business and Human Rights (UNGPs), UNESCO’s Recommendation on the Ethics of AI, and the OECD AI Principles all recognise the state’s role in preventing tech-related rights harms. But states can’t do that without updated regulatory approaches that consider community-level harms, independent oversight, and a commitment to safeguarding not only data, but democracy and public life.

For Civil Society

We citizens now live in algorithmic environments where visibility, credibility, and participation are shaped by forces we don’t control. Surveillance, facial recognition, disinformation, and opaque moderation systems all constrain our civic spaces, mediate our information, and limit freedom of association.

Yet these same technologies offer new tools for advocacy, movement-building, and accountability. Civil society must therefore build capacity in algorithmic governance, demand meaningful seats at the tech policy table, and insist that states and tech companies respect emerging community rights — not just formal individual ones.

CSOs have a crucial role in defending solidarity rights, loudly if necessary, especially when government institutions and business enterprises fail to do so.

For Businesses

Under the UN Guiding Principles on Business and Human Rights (UNGPs), companies — including tech giants — have a responsibility to respect all human rights, not only the ones enforced through litigation (i.e. what they can get sued for). That means conducting meaningful due diligence on how their AI systems affect not only users, but communities, institutions, and public trust.

Too often, corporate risk assessments focus on user-level harms. But the deeper risks of frontier tech are broader and structural: weakening of epistemic integrity, manipulation of public discourse, unequal access to resources and information. These are community-level harms — and businesses must account for them.

Don’t be fooled. Big tech’s focus on the individual is by design, and it feels good to us as individuals. It’s strategic. Why? By narrowing the considerations, companies avoid scrutiny of the business models that drive harm — surveillance capitalism, attention economies, and algorithmic extraction. To take collective harms seriously would mean confronting the true cost of their own success.

As AI continues to evolve, so too must our understanding of harm — and of rights. And businesses must not be exempt; they must be part of this reckoning.

The Bottom Line
