Unbounded: 2 Minutes Edgewise delivers sharp, fast takes on current events, fresh revelations, and just cool things. Provocative or hopeful or fiery — it will always be brief, always grounded, and always unbounded.
Lately we’ve been hearing a lot about “AI for Good” and the “investments” Big Tech is making in Southeast Asia — and elsewhere. Companies hand out AI tools, run literacy trainings and education efforts, or set up glossy “experience hubs” to promote their idea of best practice. And yes — many of the staff working on these programs are sincere and want to make a difference.
But I gotta be honest, too: the executives know these efforts are NOT first and foremost about the public good.
They’re about getting people and institutions hooked on Big Tech’s tools. At the same time, they conveniently arm public policy and comms teams with talking points for when the pressure mounts. Think parliamentary hearings, PR crises, or viral open letters to CEOs.
As Damilare Dosunmu in Rest of World put it, these efforts are meant “to normalize the technology and soften the anxieties surrounding it.” I’d go further: they’re also meant to virtue-signal to policymakers and policy influencers — “Don’t regulate me too much. Don’t push too hard. We’re the good guys, remember? Look how generous we are!” And especially when you’re a poorly resourced country, this all sounds like a just-fine trade-off.
We saw this before with social media. Initiatives rolled out with splashy announcements, then faded, “sunsetted” once they had done their job: building up policy capital. That capital became an important currency for corporate leadership — much like a get-out-of-jail-free card — cashed in whenever accountability threatened to slow the relentless pursuit of hypergrowth and scaling.
Perhaps it’s time to consider the real size of these “investments.” One Big Tech firm pulled over USD 1.5 billion out of a single Southeast Asian country in one year — yet its country-specific public-good programs there amount to less than 0.05% of that sum. In other words: crumbs. A rounding error. And with very little tax paid locally.
Now AI is following the same script. “AI for Good” projects roll out while, at the very same time, the harms pile up. In the past ten days alone, consider that:
Meta hired a far-right conspiracy theorist, openly hostile to LGBTQ+ people and DEI, as an advisor on AI bias.
A leaked Meta policy document permitted chatbots to have “sensual” conversations with children and to give faulty medical advice — and it was signed off by senior leadership.
A woman died by suicide after relying on a ChatGPT “therapist.”
A man fell in love with a chatbot, then died trying to meet her in person.
And the response? The same tired refrain, which generally goes like this: Mistakes were made, and we’ll learn from them. We are focused on ensuring this technology helps as many people as possible.
These aren’t small mistakes, but harms with life-and-death consequences. They’re political harms that weaken whole societies and corrode democratic institutions — even more than social media already has.
So here’s the point: when you see “AI for Good,” don’t be charmed by the hype. Dig deeply. Ask tough questions. Be vigilant. Even when the generosity seems too good to pass up — like when OpenAI and Anthropic offered their tools to the entire U.S. federal workforce for a buck. What seems too good to be true usually is.