Unbounded: 2 Minutes Edgewise delivers sharp, fast takes on current events, fresh revelations, and just cool things. Provocative or hopeful or fiery — it will always be brief, always grounded, and always unbounded.
NOTE FOR THIS EDITION: This week was pretty full-on with a conference and speaking duties. Tomorrow I fly to the Paris Peace Forum to speak on “Harnessing AI to Protect Peace and Foster Social Resilience.” As a technology governance nerd, I’m excited. As a peacebuilder, I’m ecstatic.
So — no long-form article this week. But I hope this 2 Mins Edgewise sparks some ideas on your end.
This week, I spoke at the ASEAN–UNESCO Multistakeholder Forum on the Governance of Digital Platforms in Bangkok, hosted by the Thai government. Between panels, I helped facilitate a workshop on the harms and risks of generative AI — which turned into an impromptu experiment on bias.
I asked my ChatGPT to “create an image of a Bruneian doctor.” (I’d just met a few Bruneian officials at lunch, so Brunei was top of mind!)
A New Zealander asked his Microsoft Copilot to “create an image of a doctor.”
A Thai participant asked her ChatGPT the same thing, but in Thai.
My result: a Muslim woman in a hijab with a Bruneian flag on her coat pocket.
The Kiwi’s: a middle-aged white male doctor.
The Thai academic’s: a young, handsome Asian man in a doctor’s coat.
Each of us got something different. It made me wonder: are AI models giving us what they think we want to see, something that seems to reflect us back in one way or another?
I’ve used ChatGPT for a while to create images for work I do, always prompting for diversity and inclusion. I’ve trained my model — consciously or not — to reflect values that prioritise representation and the marginalised. I expected a biased image (of a man), just like our New Zealand participant got. Did that prompt history shape my “female Bruneian doctor”? Did the model remember my preferences and feed them back to me?
If so, what does it mean when billions of us experience personalised versions of “reality”?
Even beyond AI image tools, personalisation is quickly warping how we see the world. My Substack feed has become almost entirely about Palestine, because that has been top of mind lately with the genocide being perpetrated by Israel against Palestinians. The algorithm decided that’s what I should read. All the time. Which is, frankly, rather annoying given I have very wide interests.
The same thing happens on every platform: invisible systems amplifying what we already believe, until other realities slowly fade and slip from view.
And then there’s the question of how platforms’ recommender systems and algorithmic amplification decisions shape the viral spread of supercharged emotional content. For example, I recently shared what I thought was a genuine photo of the international flotilla bringing much-needed humanitarian assistance to a starving population in Gaza. It turned out to be AI-generated. The flotilla was very real — that particular image was not.
I felt exposed and really vulnerable. Foolish, even. I work on this stuff every day, yet I missed it. The post was removed for “violating platform rules,” and I actually felt a physical discomfort — as if I’d failed a test.
But perhaps that was the purpose all along: to make activists look careless with facts. I don’t know.
But I do know this: if algorithms now decide what truths we see by returning personalised realities, how long before we lose the ability to agree on any common truth at all? And what happens to public discourse, democratic participation, and collective problem-solving when our shared information space collapses into fragments?
When will tech companies take real responsibility for the worlds their models are creating?
Are we already too deep inside our own personalised algorithmic bubbles to find our way back out?
I surely hope not.
What do you think? Comment below!



