Hear Me Roar: AI Governance from the Margins
What to do when you feel shut out of the AI conversation — but still affected by every part of it.
This first post of the new year is a perfect moment to take a deep breath and reflect not just on what’s ahead in 2026, but on how we keep showing up for the work of building better digital futures, for all people, everywhere.
I am so grateful to all of you who’ve joined me on this journey: sharing hopes, fears, ideas, and hard questions about the frontiers of technology. For many of us, it can feel like shouting from the sidelines; structural power really does leave many of us on the margins. But this vantage point isn’t a weakness. It is where much of the most honest thinking happens.
In fact, today’s piece was sparked by one of you reading this newsletter — an old friend from my democratic governance days — who asked in response to my year-end essay:
“Any thoughts on what those who feel unheard and unseen can do to contribute to the ethical AI governance effort?”
I read it and immediately knew I had the topic for my next article. Her question isn’t just about AI or tech policy; it’s about power, voice, and agency. It points straight at the fatigue many of us feel in the face of sweeping changes that seem to happen to us, not with us, and certainly not always for our benefit.
It’s also a timely question. We are in a moment where the language of “responsible AI” and “AI for good” is everywhere — and yet meaningful participation in shaping these systems remains out of reach for most. So what can we do when we feel marginalised or disempowered? What does it mean to push back, even without institutional clout or insider access?
It can often feel like there’s no seat at the table for people from the margins. But even if we are denied the ability to bring our own chairs to that table, there are still ways to build pressure outside the room.
Here are some pathways you can consider for participating in the ethical governance of AI from the margins.
→ Normalise Uncomfortable Words
We live in an era dominated by surveillance capitalism, where our data is extracted, monetised, and used to optimise behaviour. Simply talking about this openly can be a radical act. Normalise naming it: surveillance capitalism. Push back when it’s hand-waved away as the cost of convenience, or no big deal, or overblown rhetoric from Luddites.
Call out technocolonialism: the imposition of extractive technologies on communities without consent or consultation. Whether it’s AI systems deployed across the Global South or predictive policing in over-surveilled communities, we need to shine a light on the pattern: power consolidates, voices disappear, and agency weakens. Don’t be shy. Say it.
Talk about the legacy of colonialism and how it plays out in today’s technology governance spaces, which are dominated by Northern (primarily private) interests. Discuss the similarities between the Facebooks, Amazons, and Googles of today and the British and Dutch East India Companies of the past. The impacts on us today are profound.
→ Create Friction
Big Tech wants a seamless, optimised life — but it’s often in the friction of life that we find serendipity, humanity, and challenge. Don’t let every analog moment get optimised out.
Choose the in-person meetup over the app.
Have slow conversations that meander instead of quick text messages that lose so much.
Make room for the unmeasurable.
These are small rebellions against empires of tech — but they add up and matter.
→ Show Up Where Decisions Are Being Made and Priorities Discussed
Even without formal power, anyone can show up where it counts.
Zoning meetings: Data centers are exploding globally, often with enormous environmental impact and little local benefit. Say “no”, or at least, “not like this.” Ask tough questions and demand hard answers. And dig deeply to uncover the big tech firms behind some of these infrastructure projects and the shell companies they hide behind to avoid scrutiny.
Public comment periods: Governments often invite feedback on AI-related regulation. These are underused. Civil society groups sometimes publish templates that can help you weigh in. Step on the scale.
Town halls: Ask your legislators, city officials, or other civil servants about AI use in public services, about campaign donations from tech interests, and about their stance on algorithmic accountability, surveillance, and tech-enabled genocide. Be polite, but persistent.
→ Pressure the Powerful — Even as an Individual
Buy one share in a Big Tech company. It’s often enough to attend shareholder meetings, ask questions, and vote. A single voice — if informed and coordinated — can start something.
Ask hard questions at conferences, public talks, panels. Don’t accept the “I’ll have to look into that” or “we’ll circle back” brush-off from anyone. Make the discomfort visible and press on.
Write, speak, share, whether through op-eds, social posts, or holiday dinner conversations. Your lived experience is valid data. Use it.
→ Build or Join Something Analog
Those who design systems of disempowerment often do so precisely to make people feel isolated. But you’re not alone, and it can be genuinely empowering to connect with others:
Join tech accountability groups.
Help local advocacy organisations integrate digital justice into their work.
Start a community teach-in group, like a book club — but for demystifying AI, surveillance, and digital rights.
→ Dissect the Narrative
“AI for Good” often sounds wonderful, but we have to ask: good for whom?
These initiatives are frequently PR-driven, masking extractive incentives behind a veneer of benevolence. When you hear vague claims about “ethics panels” or “responsible AI,” dig deeper. Who benefits? Who gets harmed? Who profits from the behavioural insights extracted?
→ Use the Law
In many countries, Freedom of Information laws allow you to ask how tech is used in public services, from welfare algorithms to policing tools. If you suspect harmful AI use, file a request (NGOs and journalists can often help). You might uncover more than you expect.
→ Support What Builds the Future We Want
Not all tech is extractive. Civic tech and public interest AI efforts are building tools with democratic oversight, community governance, and user agency in mind.
Recommend these tools to your local government, libraries, or schools.
Share them widely. Visibility will help build legitimacy.
Parting Thoughts
These are reasonably actionable steps we can take when we don’t individually hold power and influence; when we don’t have institutional backing or the luxury of being paid to attend conferences and seminars, let alone to be featured speakers at them.
But when we come together, our actions compound and the impact grows exponentially. Call it a formula: (the margins)².
Imagine if thousands of us interrogated Silicon Valley’s “AI for Good” initiatives with the same rigor they apply to their own market forecasts and policy narratives.
Imagine if we made it impossible for companies to wrap exploitative practices in feel-good narratives without facing substantive scrutiny.
At the end of the day, we can’t afford to feel helpless, and we certainly can’t remain uninspired. That’s what private interests (and authoritarian actors) are counting on. They’d prefer us quiet, disconnected, and confused: all the easier for us to accept their narratives.