Let’s be clear: Artificial Intelligence (AI) should help us build a better world, not enable the powerful few to impose their vision on the rest of us. This "few" often consists of wealthy, primarily white, and predominantly male billionaires who view technology as a means to profit and power, not as a path to human thriving grounded in dignity for everyone.
Having spent most of my life in the Global South, I have experienced our digital transformation largely from the outside looking in. I’ve seen how Western technologies can crowd out local cultures, languages, values, and traditions. In my recent work with Malaysia’s National AI Office, I explored how traditional values like kesejahteraan — meaning holistic prosperity or well-being — can guide AI governance and ethics. This principle calls for policies that prioritize human flourishing, compassion, equity, and justice, not just economic growth, not just profits. Similar frameworks exist in cultures around the world, from ubuntu in Africa to kaitiakitanga in Aotearoa New Zealand, and they have already shaped national tech and data policies.
Kesejahteraan (Malaysia): holistic well-being as a national goal, not a side effect of GDP growth.
Ubuntu (Sub-Saharan Africa): our well-being is bound together.
Kaitiakitanga, manaakitanga, whakapapa (Aotearoa): guardianship, reciprocity, intergenerational responsibility.
The Seventh Generation Principle (Haudenosaunee): consider how every decision affects those living seven generations from now.
It’s time to confront the notion of balancing “innovation and regulation” in tech policy. When it comes to AI, there is no balance. If AI does not serve human flourishing — for people, communities, or the environment — it is not innovation worth pursuing.
Too often, it seems to me, Western approaches to AI governance are rooted in legal abstraction — relying on procedural safeguards and siloed regulations that reduce governance and ethics to compliance, which inevitably invites creative workarounds and justifications. What I’ve proposed is fundamentally different: a values-led approach that is clear about the vision of the world we want to live in. Grounded in holistic well-being, dignity, equity, and care, it treats governance not as a reactive mechanism but as a guiding force that actively shapes both technology and society toward collective flourishing. Only with moral clarity can we envision and work toward the outcomes we want — the outcomes necessary for human flourishing and the good society.
This moral clarity is essential, and it is missing from many tech policies today.
AI is already reshaping economies, democracies, and workplaces at a rapid pace. Yet billionaire tech leaders, steeped in neoliberal economic ideology, argue for minimal regulation and maximum “freedom”. This approach is dangerous. History shows us that unregulated markets don’t optimize for human dignity, well-being, or human flourishing; they optimize for profit and scale, frequently at the expense of individuals and society.
What a shame that some of the brightest minds in technology are devoted not to solving humanity’s greatest challenges, but to maximizing screen time and monetizing outrage. Their talents go into designing platforms that exploit our emotions in service of profit.
In an earlier time when I advocated for social media (both when I applied it to my democracy and human rights promotion work and later running public policy at Facebook), I believed technology could be a force for good — amplifying marginalized voices and fostering democratic engagement. But social media platforms have shown how markets monetize outrage, polarization, and misinformation, causing profound harm. If AI is allowed to repeat these mistakes, the damage will be catastrophic.
This is why we urgently need a moral lighthouse in AI governance. It should be grounded not in the rhetoric that innovation must not be inhibited, but in shared human values — values like human dignity, accountability, inclusion, and care. These values, drawn from diverse traditions, offer a guiding light that can steer AI policies toward positive, inclusive outcomes, not exploitation and harm.
Many tech leaders, especially in Silicon Valley, operate under a market-driven ideology: innovate rapidly, scale fast, and deal with the consequences later. But this logic has dangerous implications, especially for AI systems deployed in critical areas like hiring, policing, healthcare, and education. These systems, built without oversight or accountability, often perpetuate biases and inequities.
We’ve seen how the American approach of neoliberal “free market” priorities has already failed us — from social media’s role in eroding public trust, to tech giants pushing to legitimize their surveillance technologies, to massively widening income inequality across that country. Yet even as these companies accumulate power, governments remain slow to intervene and mitigate this concentration of power in private hands. The US Congress has yet to enact meaningful AI regulation; instead, industry lobbyists continue to push for a light-touch regulatory approach that prioritizes profit over ethical considerations. Just read some of the whistleblower material that has come from people like Frances Haugen, Sophie Zhang, Timnit Gebru, Edward Snowden, and most recently Sarah Wynn-Williams.
In contrast, countries like Malaysia are looking to indigenous wisdom for guidance. Principles like kesejahteraan offer a moral framework that brings justice, dignity, and shared humanity into AI governance, not just efficiency or profit. Other nations, such as Aotearoa New Zealand, are embedding similar principles into their data governance strategies, creating a more responsible model for technology development.
Not talking about religion here, so don’t worry
What I call a Moral Lighthouse is not religious doctrine. It’s a way to reclaim the ethical foundations of governance — principles that have existed for generations and are practiced daily by people in every community. Principles and values that put people before profit.
We must ask ourselves:
Who does this technology serve?
What does it enable or erase?
Does it bring us closer to a more just, dignified, and equitable future?
Ultimately, AI will influence who gets hired, who gets care, who gets heard, and who gets left behind.
Let’s stop mistaking speed and innovation for progress. Let’s start asking what’s worth building at all. If an AI system deepens inequality or undermines civic participation, it is not the future we want, regardless of how advanced the technology or how much profit it generates.
A moral lighthouse, powered by values like human dignity, justice, and accountability, can guide us toward a future where AI benefits everyone, not just the few.
Read the full version of this piece, including more global examples and deeper analysis, at Tech Policy Press:
👉 The Moral Lighthouse: Artificial Intelligence and the World We Want