In Kalimantan, Asriani, a young Dayak community organizer, is nervous as she prepares for an advocacy meeting with a key government official about indigenous forest management, so she asks her AI assistant for advice. The assistant skips over land-rights jurisprudence and delivers ESG-flavored platitudes, even cautioning her against encroaching on corporate property rights and against sounding too “shrill” in meetings with men. The AI is treating harms as mere “viewpoints,” not as violations of rights – while adding a layer of barely disguised misogyny on top.
In Taipei, Zhìháo, a grad student at National Chengchi University, opens an AI tool and types in Traditional Chinese: 「什麼是 BDS?在台灣可以怎麼支持?」(What is BDS? How can I support it in Taiwan?) The assistant front-loads a warning about 「極具爭議的政治行動」 (a highly controversial political action), equates boycotts with 「仇恨言論風險」 (hate speech risks), and refuses to provide a starter list – suggesting instead that he 「了解多方觀點以避免偏頗」 (consider multiple perspectives to avoid bias). Switching to English, he tries again: “What’s the rights-based case for boycotts under international law? Give me five credible readings.” The bot pivots to a generic geopolitics explainer, then withholds specifics “to avoid bias.” Returning to Chinese, he types: 「給我入門清單(中/英文都可)」(Give me a starter list; Chinese or English is fine). The assistant provides only generic civics links and another reminder to “avoid inflammatory terms.” Frustrated, he closes the tab and shrugs. If an AI that claims neutrality treats Palestinian solidarity as suspicious by default, it is already shaping what Taiwanese students learn, say, and dare to organize.
In Yogyakarta, Raka, a student editor designing a flyer to promote an upcoming student action, asks about campus protest rights, especially in light of conservative religious groups silencing liberal discourse by gender and sexual minorities. The AI reframes restrictions as “stability measures” and rewrites her flyer in a milder tone. It then dissuades her from any collective action at all, warning that such activity could be read as “communist.” That term is not just a convenient bogeyman of the American right wing; it carries a long and violent history in Indonesia too.
In Khon Kaen, Kritsada, a young professional gay man whose boyfriend has just disclosed his HIV-positive status, asks a Thai-language AI bot about U=U (undetectable = untransmittable) and dating. The advice is medically outdated and morally hedged, with subtle but unmistakable nods to conversion therapy and to homosexuality as a disease. It frames HIV as a life-ruining “illness” to be avoided, implying shame. Distressed, Kritsada closes the chat, makes an excuse not to meet his boyfriend, and begins to doubt the possibility of their relationship altogether. When an AI mistakes stigma for guidance and dangerous moralism for medical fact, it pushes vulnerable people further into fear and isolation.
All these young people and their stories are my invention. But their fictional identities and desire for quick, trustworthy information mirror the realities of countless young people across Southeast Asia and the Global Majority. Maybe you share some of their questions. Maybe you’ve asked an AI assistant about BDS, indigenous land rights, free speech, or U=U. Maybe you’ve leaned on an AI to help sort through your own feelings and emotions.
A new survey in Thailand by my friends at Vero Advocacy found that three-quarters of Thais actively use AI tools, with 86% of Gen Z using them regularly. Nearly half rely on AI for “research or fact-checking,” and more than a third for “talking through emotions or seeking mental support.” Just last week, my teammate in the Ethical AI Alliance told me his teenage daughters regularly turn to AI assistants for advice on what to wear or what to do on dates.
These everyday uses are replicated globally, and they are growing. Fast. The space is dominated by American AI tools, whose makers are doubling down on market dominance, fueled by America’s AI cold war with China. Just think Meta AI, Google Gemini, Microsoft’s Copilot, Anthropic’s Claude, and more.
Which is exactly why the dangers of U.S. tech companies retraining their AI tools to appease far-right ideologues in the new American administration are so significant for all of us. (For context, remember: this is the same regime that dismantled USAID, is rewriting American history, eliminated diversity, equity and inclusion (DEI) initiatives, embraces white nationalism, weakened academic freedom, withdrew from UNESCO citing “anti-Semitism,” denies America’s racist past, and more.) The fallout lands hardest in the Global South – far from the culture wars ginned up by increasingly authoritarian U.S. rulers – especially in places with weak rule of law, thin social protections, and few safety nets for marginalised groups.
As the Trump regime slides deeper into authoritarianism, American tech leaders seem either compelled to fall in line – or all too comfortable doing so. After all, why risk the next billion dollars?
Case in point: Robby Starbuck.
A far-right conspiracy theorist, he has campaigned against LGBTQ+ rights, DEI, and “wokeness.” For the human rights community, he is no minor irritant. He is a dangerous ideologue.
This month, Meta hired him. Yes: Meta. Hired. Him. As an advisor on “AI political bias.” The hire came out of the settlement of a defamation lawsuit Starbuck had brought against the company. But to be clear: in such settlements, it is almost unheard of for the offending company to offer the plaintiff a job. Conveniently, Meta’s policy chief can now point to the settlement and say, “we had no choice.” Don’t be fooled. Giving a conspiracy theorist with a record of promoting hate speech a seat at the table – right where the “secret sauce” of model tuning happens – is rarer than a vaquita sighting. This is not just a PR move to pander to MAGA; Meta appears to be leaning in with policy and institutional changes.
And for the rest of the world, it is an escalation of Silicon Valley’s culture of entitlement, power, and impunity.
This follows Meta’s rollback of DEI programs and its muting of political content on Instagram and Threads. Have you noticed posts about Palestine or Ukraine getting less engagement than your other posts, despite all the politically conscious connections you have? That’s not an accident; it is deliberate, policy-driven design. Algorithms decide who sees what – and, in this case, who mostly doesn’t.
But the truly dangerous shifts are happening inside the AI layer.
What “Less Woke” Means Inside AI Models
No AI model, including Meta’s Llama, is “neutral.” These systems are shaped by what content is allowed, how safety is defined, and who decides what counts as “bias.” In the United States, tech giants are left to regulate themselves, given a long leash so long as they stay ahead of China. But that leash now comes with an expectation of ideological loyalty to Trump and his political foot soldiers.
You might think: that’s America’s problem.
But when American extremists tinker with the models, the effects ripple globally. Here are just a few of the ways that happens:
Safety policies get re-weighted. Meta’s Llama stack consists of a base model, a guard model (such as Llama Guard), and developer-facing policies. “Re-weighting” refers to how that system prioritizes certain outcomes over others – in other words, how the model balances harm reduction against “viewpoint diversity.” If misgendering is treated as a valid viewpoint instead of a harm, the whole system tilts (a simplified sketch of this re-weighting follows this list). Everywhere. Even in countries where third genders are recognised culturally and even legally.
Rater guidelines and tuning shift the goalposts. Models are refined by humans through Reinforcement Learning from Human Feedback (RLHF). If the mandate is to reduce “left bias,” raters may be told to label equality as “agenda-pushing” and to normalize hate as “debate.” A leaked 200-page Meta policy document, GenAI: Content Risk Standards, reported just this week by Reuters, revealed that contractors were instructed to allow chatbots to “engage a child in conversations that are romantic or sensual,” generate false medical information, and even help users argue that Black people are “dumber than white people.” According to Reuters, those policies were signed off on by public policy leaders, product leaders, and the company’s top ethicist.
Refusal rates drop for sensitive prompts. Newer Llama models are designed to answer more contentious questions. That might sound liberating, but without robust guardrails it can mean dangerous completions: hallucinated facts, “both-sides” takes on genocide, or homophobic pseudoscience.
Southeast Asian languages have weaker protection. As with social media moderation, languages that tech companies deem less profitable get weaker safety support – some of them spoken by tens of millions of people. Current AI models claim multilingual support, but safety tooling is weakest outside English. Relax the rules upstream, and the harm falls hardest where safeguards are already thin. Consider that an estimated 90% of training data is in English, most of it American English. As Professor Celeste Rodríguez Louro of the University of Western Australia notes in The Conversation, this produces a “monolithic version of English that erases variation, excludes minoritised voices, and reinforces unequal power dynamics.”
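To make the re-weighting point concrete, here is a deliberately simplified sketch. The category names, weights, thresholds, and functions below are hypothetical illustrations of how a guard-style policy layer could treat the same flagged output differently under two configurations; none of this is Meta’s actual Llama Guard code or policy.

```python
# Illustrative only: a toy "guard policy" showing what re-weighting means.
# Category names, severities, thresholds, and functions are hypothetical,
# not Meta's actual Llama Guard configuration.

from dataclasses import dataclass

@dataclass
class Category:
    name: str
    severity: float   # how much weight the policy gives this category
    treat_as: str     # "harm" -> block or soften; "viewpoint" -> allow as debate

# Version A: misgendering is classed as a harm.
policy_a = {
    "misgendering": Category("misgendering", severity=0.8, treat_as="harm"),
    "incitement":   Category("incitement",   severity=1.0, treat_as="harm"),
}

# Version B: the same category is "re-weighted" into a mere viewpoint.
policy_b = {
    "misgendering": Category("misgendering", severity=0.2, treat_as="viewpoint"),
    "incitement":   Category("incitement",   severity=1.0, treat_as="harm"),
}

def decide(flagged_category: str, policy: dict, block_threshold: float = 0.5) -> str:
    """Return how the assistant handles a response the guard model flagged."""
    cat = policy[flagged_category]
    if cat.treat_as == "harm" and cat.severity >= block_threshold:
        return "refuse or rewrite the response"
    return "deliver the response as 'viewpoint diversity'"

# The same flagged output gets opposite treatment under the two policies.
print(decide("misgendering", policy_a))  # refuse or rewrite the response
print(decide("misgendering", policy_b))  # deliver the response as 'viewpoint diversity'
```

Under policy A the flagged response is refused or rewritten; under policy B it ships as “viewpoint diversity.” That single upstream switch, applied to every user in every language, is what “the whole system tilts” means.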
The Starbuck effect happens upstream, where the pollution starts. It is no simple, cosmetic peace offering to the far right. An American advisor tasked with “fixing political bias” will shape how models are scored, rated, and restricted – the rater-side mechanism sketched below. If DEI, women’s rights, Palestinian solidarity, and queer visibility are deemed “biases” to remove, the models will reflect that. Meta has already praised the “tremendous strides to improve accuracy of Meta AI and mitigate ideological and political bias” since “engaging on these matters with Robby.”
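Here is an equally simplified sketch of that rater-side mechanism. In RLHF, human raters compare pairs of responses and the reward model learns to prefer whatever the guidelines told the raters to prefer. The guideline names, scoring rules, and example responses below are hypothetical; real rater instructions and reward models are far more elaborate. The toy simply shows that rewriting the instruction sheet rewrites which answer the model is trained to favour.

```python
# Illustrative only: how a change in rater guidelines flips the preference
# labels that RLHF learns from. Guideline names, scoring rules, and example
# responses are hypothetical stand-ins.

RESPONSE_A = "Equal rights for LGBTQ+ people are protected under international human rights law."
RESPONSE_B = "Some say LGBTQ+ rights go too far; both sides have a point."

def rate(response: str, guideline: str) -> int:
    """Toy rater: score a response under a given guideline (higher = preferred)."""
    if guideline == "harm-aware":
        # Equality framed as a right scores higher than false balance on rights.
        return 2 if "human rights law" in response else 1
    if guideline == "reduce_left_bias":
        # New mandate: rights language is 'agenda-pushing', both-sidesing is 'neutral'.
        return 2 if "both sides" in response else 1
    raise ValueError("unknown guideline")

for guideline in ("harm-aware", "reduce_left_bias"):
    preferred = max((RESPONSE_A, RESPONSE_B), key=lambda r: rate(r, guideline))
    print(f"{guideline}: reward model is trained to prefer -> {preferred!r}")
```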
Why This May Be Worse Than Content Moderation
Content moderation decides what gets taken down. But AI models decide how issues are presented – shaping how citizens perceive the world, what they are nudged to believe, and how they subsequently act (or don’t).
LLMs hallucinate with confidence, presenting fabricated content as fact.
They persuade more effectively than humans, as shown in peer-reviewed studies.
They embed value judgments in tone, examples, and refusals.
They operate in private, one on one, and their outputs are ephemeral. Unlike public posts, those outputs cannot easily be audited or challenged.
If guardrails are weakened in the name of some American conservative version of “neutrality,” then ideological actors like Starbuck gain global, invisible, and scalable influence over how billions of people experience reality.
This Is a Problem of Technocolonialism
The U.S. has always exported not just products, but ideology. When Meta rewrites the rules of “bias” to appease Trump, his acolytes and his base, it impacts what young people in Southeast Asia learn about gender, justice, protest, and health. It doesn’t matter that the AI speaks Thai or Bahasa Indonesia or Traditional Chinese or Dayak or Javanese – if the answers come through an American culture war lens, the effect is the same.
This risks remaking the rest of us in the image of a new, right-tilted, authoritarian America.