What If We Took the Mic away from Big Tech?

Imagining new narratives shaped by leaders from the margins

Michael L. Bąk
Aug 23, 2025

Earlier this week I was on a video call with an informal group of cool, responsible AI folks from across the world, and the conversation really got me thinking. To be honest, I originally started writing this as my next Unbounded: 2 Mins Edgewise note, but it quickly took on a life of its own and deserved its own longer-form post. (Instead, my 2 Mins note this week looked at the two faces of “AI for Good”.)

So, this group… We came together to consider what we think really matters in ensuring citizens benefit from responsible AI systems. It was a diverse group of experts in venture capital, community mobilising, corporate AI ethics, digital sociology, organisational development, international development, and law, among other fields — each bringing different angles and experiences to the call.

After an hour of banging around “responsible AI” and where the world’s at with it, as we were trying to summarise, something hit me. I clicked the raise-hand icon on the screen:

“The biggest pressing issue is that we aren’t talking about what we should be talking about! In other words, we are not the ones setting the narrative, driving the discourse, setting the agenda. The important stuff is getting crowded out.”

And by “we”, I mean everyone who is not big tech.

And that very well may be the biggest issue we are facing with respect to responsible AI.

Despite all the work happening in communities, civil society, independent academia, and among people-centred innovators — Big Tech has already captured the narrative. Unrestrained American Big Tech, to be more accurate. And they are barreling forward with it.

We need to take the Mic away.

They’ve monopolised policymakers’ attention. They’ve framed the terms of debate and positioned their vision of AI as inevitable. And they’ve poured millions and millions of dollars into this. Money extracted from global majority markets – billions each year from places like Thailand, Vietnam, and Indonesia (with little, if any, tax paid to the countries from which it was extracted).

With their money, they create the access, the headlines, the high-level speaking slots, the flagship launch events. They hype their tools to the public — and scare policymakers into staying out of the way. They insist that too much regulation of this amazing technology is a threat to “innovation” and to “growth” and to “the future.”

What they really mean is: a threat to their dominance, power, and wealth.

The companies are selling the world the dream on which they are so desperate to cash in (I mean, those nine-figure USD salaries and bonuses Meta’s offering have to be paid somehow!). So these “investments” in countries and people – honestly, meager in comparison to those salaries – aim to dampen the damage their tech is causing, or may cause. Think of them as bright, shiny objects meant to distract. And as “insurance policies” for future PR or policy fires.

Just in the past week and a half, Reuters reported that Meta allowed its AI chatbots to engage in sensual conversations with children and offer false medical advice — the result of safety guidelines signed off on by company leadership, including the company’s top ethicist.

Another report revealed how Meta trained its AI chatbots without even the most basic guardrails — including in the case of a retired man who died en route to meet what he believed was a real woman, but was in fact one of Meta’s fictional bots. This is the future being pitched to us as human connection. Meta’s CEO Mark Zuckerberg, who dreams of populating our feeds with anthropomorphized AI companions, believes people have fewer real-life friendships than they’d like. Is he REALLY looking at loneliness as a market opportunity?

In a recent podcast, he mused that bots “probably” won’t replace human relationships, though they might complement them, once the stigma fades. “Over time, we’ll find the vocabulary as a society to be able to articulate why that is valuable,” he predicted. Valuable to whom, exactly? And at what cost?

And then we learned from the New York Times about a woman who took her own life after chatting with a ChatGPT-based “therapist.”

I’m certain this is not the future we want for human flourishing.

Meanwhile, and despite these deadly and other flaws, OpenAI has offered access to its models across the entire U.S. federal workforce — 3 million workers — for $1. One dollar to buy its way into the workforce and “help shape how AI is used.” And now Anthropic has upped the ante, offering Claude to all three branches of the federal government, including the legislative and judicial branches… for one dollar.

No public debate. Very weak governance and ethics safeguards. Just unchecked influence, packaged to look like generosity, injected into the most sensitive areas of the U.S. government. With technology that still has deadly serious problems, especially when processing sensitive work.

These firms dominate global forums — APEC, the World Economic Forum at Davos, World Bank meetings, the Munich Security Conference, and AI summits. They pay to play, and play to win. Dissenting voices? Their exclusion or sidelining is negotiated as part of the pay-to-play package, an assurance to garner executive participation. An annoying civic leader? No worries, put her on a different panel or none at all. Risk of challenging questions? Not a problem, we’ll limit question time or curate the questions ourselves — softballs from the moderator.

We are watching them turbocharge the hype with their narratives. Narratives based on corporate self-interest.

And it’s working. CEOs now scramble to mention “AI” on earnings calls and talk up efficiency gains, or risk a stock dip. Policymakers are being told that this is all inevitable — TO. GET. ON. BOARD. NOW. There is no alternative.

But what if there were? What if there were an alternative?

What if we — the rest of us — reclaimed power to set the narrative and drive the discourse? To shape the technology we want?

If we were in charge of the agenda, we’d be having very different conversations.

Policymakers wouldn’t just be hearing about “AI innovation” and “efficiencies” and “growth” and… and… and — they’d be digging in their heels to understand and address the impacts on diverse communities, digital and labor rights, and technocolonialism. About gendered surveillance. About the toll on biodiversity and the environment from the direct and indirect costs of AI, its data centers, and its compute needs. About political surveillance and suppression. About technology used in the service of ethnic cleansing and genocide. About the harms of outsourcing intimacy and trust to algorithms. About hyperscaling. About greed.

We’d be elevating the voices of indigenous leaders, women and girls, LGBTQ+ communities, the Deaf, people with disabilities, feminists, people living with HIV — people whose lives and knowledge systems have long been pushed to the margins. People leading the conversations would not look like the men doing so today. They would look like us.

We’d be warning policymakers not to accept “free” tools from powerful players without understanding what’s being extracted in return. We’d ask what kind of world we’re building when AI companies quietly hire anti-diversity extremists, cut DEI programs, and market “AI for good” while weakening ethical oversight.

We’d talk about TAXING BIG TECH.

We’d remind them that the “cloud” pollutes. That generative AI doesn’t train itself — it relies on invisible workers in the global majority, paid pennies to label data. We’d remind them that these tools don’t just shape markets — they shape minds, cultures, and societies. That they are political, not just another commodity to be traded. And that they require a much higher duty of care and much more inclusive governance.

If we had control of the narrative, we’d demand a public-interest approach to tech — not one driven by shareholders or Silicon Valley egos.

So to policymakers — especially those in the global majority:
Yes, hear the tech companies out. But do not take their word as sacred. Listen to others, too. Pay attention to corporate actions that may be at odds with what they tell you to your face or promote in advertising in your countries.

Your duty is to your people: their dignity, their futures, their rights.

Lean more heavily on the wisdom of your community leaders, civil society, and the people living the impacts of AI every day. Civic tech. Human rights advocates. Community development leaders. Let them help YOU set an agenda that gets you what your country deserves.
