The Unbearable Lightness of Flippancy
Meta and the Strange Casualness of “Superintelligence for Everyone”
There’s something particularly jarring about how casually big tech executives now talk about superintelligence, as if they’re hawking some fashionable new deconstructed mango sticky rice. Mark Zuckerberg’s July 2025 “personal superintelligence” manifesto has certainly trickled down his chain of command, including – dangerously – to those whose job is to shape the thinking of regulators, elected officials and government agencies. They’re using touchy-feely narratives to pitch their latest product while diminishing the risks. Sound familiar?
The possibility that superintelligence could pose an existential risk to humanity has for too long been dismissed as speculative philosophy or the neo-Luddite anxieties of fringe technophobes. Increasingly, however, Very Serious People are treating the issue with the seriousness it deserves, warning that policymakers need to pay urgent attention to the profound risks that superintelligence — whether ostensibly a “good” or “bad” superAI — may introduce into human society.
But now, companies like Meta — among the most powerful behavioural influence and advertising enterprises ever created — are transforming the pursuit of “personal superintelligence” into a formal corporate mission. Packaged in the language of freedom, empowerment and personal choice, it is being sold as the next inevitable chapter in Inevitable Technological Progress, rather than as a profound civilisational gamble that may warrant democratic scrutiny and restraint.
Danger, Will Robinson! Danger!
Companies like Meta are not just exploring the idea; they are aggressively declaring their intention to build it and – more strikingly – to place it in the hands of absolutely everyone. That means good people, bad people, irresponsible people and evil people. All with a personal superintelligence.
Nothing at all could go wrong, right?
“What, exactly, will AIs want? The answer is complicated. Not complicated in the sense that we can tell you but it’ll take a while; complicated in the sense that it’s chaotic and unpredictable. But one that is predictable is that AI companies won’t get what they trained for. They’ll get AIs that want weird and surprising stuff instead.”
Eliezer Yudkowsky & Nate Soares: If Anyone Builds It, Everyone Dies (p. 58)
Executives post LinkedIn updates about how thrilled they are to be bringing this capability to every human on the planet. The tone is confident, nonchalant, as if this were simply the next platform shift rather than something qualitatively different – something that carries existential risk.
Nick Bostrom, one of the foundational thinkers in the field of AI existential risk, defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” That is not merely a better chatbot or a more capable search engine, nor a system that excels only at chess or language. It implies systems that surpass human beings not only technically, but strategically, socially and scientifically, with extraordinary powers of persuasion and social reasoning.
The casualness with which some executives now speak about distributing such capability widely is therefore difficult to reconcile with the magnitude of what is actually being proposed. The awesome power that superintelligence promises brings awesome risk. How can we take companies seriously when they so flippantly present superintelligence as just the next generation of more powerful AI assistants, a new product layer or the next cool thing to pick up at your favourite electronics shop, when the academic literature treats it as something qualitatively different and potentially uncontrollable?
I also find myself wondering how deeply many of the people promoting this vision have actually wrestled with its implications.
Do they genuinely understand the scale of what they are advocating for, or has the language of inevitability and progress simply become internalised corporate doctrine, where questioning the mission is both culturally discouraged and financially disincentivised? At what point does skepticism become professionally inconvenient inside organisations where the personal incentives — financial, cultural and reputational — all reward optimism and forward momentum? In environments like these, pushing back against accelerating progress can quickly begin to look like heresy. And, perhaps more importantly, a threat to compensation packages and appreciating stock portfolios. And a job.
These are, after all, the same companies that helped produce many of the pathologies of social media, spent time and treasure selling the public (and governments) on the hollow promise of the Metaverse and are now asking us to trust that they will navigate superintelligence responsibly. That is a remarkable amount of faith to ask of the public, particularly given the track record.
The time has come to see skepticism not as cynicism, but as a civic obligation. And that skepticism must come not only from those outside these companies, but from those within them as well.
Alignment and Other Small Existential Problems
Lately, I’ve been considering the existential risks associated with superintelligence and what they may mean for policymakers around the world – especially those on the receiving end of unconstrained Northern technologies. Two of the leading voices on this are Eliezer Yudkowsky and Nate Soares. The warning they raise in their new book is chilling. And it should force every policymaker to treat whatever big tech tries to sell them with a hefty dose of skepticism.
What stands out most to me are not just the arguments people like Yudkowsky and Soares make, but how completely their arguments diverge from the framing we’re getting from companies like Meta. In Meta’s telling, superintelligence sits at the end of some continuous, inevitable arc — computers to internet to mobile to AI, and onward. In Yudkowsky and Soares’ telling, it represents a discontinuity: a threshold we do not understand, and one we may not get to cross twice. That gap is a serious issue because it reflects fundamentally different assumptions about how the world works once systems become more capable than we humans are.
At the centre of this story is the alignment problem, which is often described in industry circles (especially among those getting rich off the tech) as a difficult but ultimately solvable engineering challenge. Given enough time, enough talent and enough iteration, their thinking goes, we will figure out how to make these systems behave in accordance with human values – to do only what we want them to do. They don’t even countenance the possibility of errors fatal to humanity.
The problem is that this perspective is overly optimistic in a way that matters enormously. As Yudkowsky and Soares argue in their recent book, we do not currently know how to specify goals for an ever-growing superintelligent system in a way that reliably produces outcomes we actually want, especially under conditions we cannot fully anticipate.
That uncertainty would be concerning on its own, but it becomes more so when paired with the idea of wide deployment. Meta’s language around “putting superintelligence in everyone’s hands” borrows from the well-worn narrative of democratising technology, a narrative that has historically been associated with real gains in access and empowerment. But the analogy does not hold. Technologies we’ve democratised in the past can be recalled, regulated or incrementally improved after deployment; a superintelligent system that is constantly growing and evolving, if misaligned, may not afford us those same opportunities.
No rollback. No reboot. No second try.
Yudkowsky and Soares make the point even more bluntly:
“Most everyone who’s building AIs, however, seems to be operating as if the alignment problem doesn’t exist — as if the preferences that AIs wind up with will be exactly what they train into them.”
Their argument is unsettling precisely because it shifts the focus away from cartoonishly evil executives and toward something potentially much more dangerous: unconstrained institutional overconfidence in systems that may ultimately exceed human understanding and control.
The Irresponsible Business of Acceleration
What makes this even more troubling is that the push toward superintelligence is absolutely not happening in a context of careful deliberation. Rather, it’s happening within an American capitalist market structure that rewards speed (and profit) above almost everything else. If one company slows down to resolve safety concerns, another can move ahead and capture the upside (and the profit). If one leadership team expresses doubt, another can step in with confidence and attract the capital, the talent and the narrative advantage. The result is a set of incentives that powerfully pushes all actors toward moving faster than they otherwise might. And definitely faster than they should.
This is not a side effect, and these are not public-interest incentives. It is the American capitalist system working as designed.
And it raises uncomfortable questions about whether those incentives are compatible with the level of caution required by a technology that even its builders do not fully understand. It is one thing to move quickly in markets where failures are recoverable or localised, even when they produce profound harm, as Facebook’s role in Myanmar tragically demonstrated. It is another thing altogether to accelerate aggressively in a domain where the downside, however uncertain, could be systemic, irreversible and civilisational in scale.
The more serious the potential consequences, the less reassuring it is to hear language like Meta’s, which suggests that urgency is itself a virtuous aspect of the mission.
There is also, unavoidably, a question of credibility. The same company now positioning itself as the trusted steward of superintelligence is still grappling with the downstream effects of social media — systems that were, at least in principle, far simpler to understand and control. Issues like polarisation, misinformation and engagement-driven design were not some freak outcomes; they emerged from the core dynamics of the platforms themselves. Of course, that history does not disqualify companies like Meta from working on advanced AI, but it does colour and deeply complicate the case for trusting them to get something far more consequential right on the first, and perhaps only, attempt.
This is not some abstract concern, but rather one based squarely on a track record of breaking things, people and institutions.
The Power Problem
Which leads to a deeper issue I’ve been thinking about: even if we assume that superintelligence can eventually be built safely, we still have to contend with the concentration of power it implies. Systems of that capability would not be neutral, equitable tools; they would shape economies, influence political systems, erode cultures and alter the structures of our decision-making itself. In that context, it is reasonable to ask whether we are comfortable with a world in which a small number of executives — billionaires like Zuckerberg and Musk and Bezos who have serious credibility problems — hold disproportionate influence and power over how such systems are designed and deployed, and for whose interests.
Those who flippantly pitch a superintelligence for everyone would like us to believe that it is purely a technical problem. But it is not. It is a power problem. A governance problem, too. Worse still, these Very Serious Issues are currently being shaped more by corporate ambition than by inclusive, meaningful public debate.
The Politics of “Personal Superintelligence”
This ideological dimension becomes especially visible in Zuckerberg’s attempt to distinguish Meta’s vision of “personal superintelligence” from what he characterises as competing approaches within the industry. In his manifesto, he writes:
“This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output.”
The language here is striking. Not merely because it caricatures alternative approaches to AI governance, but because it casually asserts a very specific ideology while presenting it as common sense. Zuckerberg frames the future as a binary choice between individual aspiration and dependency, between entrepreneurial freedom and passive social decline. In this narrative, any model that places collective coordination, redistribution or public stewardship at the centre of technological transformation is implicitly reduced to a future of stagnation and people “living on the dole.”
This is Silicon Valley libertarianism masquerading as technological inevitability.
More importantly, this narrative narrows the range of futures we are ostensibly allowed to imagine. Questions about labour displacement, economic restructuring, public ownership, social safety nets and democratic oversight are transformed into cultural signals of weakness rather than legitimate political questions worthy of serious debate. The effect is subtle but powerful: it positions Meta’s preferred model of technologically accelerated individualism as not merely one possible future among many, but as the morally superior one.
But who exactly decided that?
Why should a small group of unelected billionaires and corporate executives get to define the ideological terms through which the rest of humanity is expected to understand superintelligence and its consequences?
To be very clear, these are not simply technical questions. They are political, economic and civilisational ones. Yet increasingly, the discourse around them is emerging not from democratic institutions or meaningful public deliberation, but from corporate manifestos written by executives with enormous financial interests in accelerating the technology as quickly as possible.
We Need Humility
What I find most irksome, however, is the tone of these narratives. When the potential downside of a technology includes the possibility — however contested — of existential risk, the language surrounding it should reflect a corresponding level of seriousness and humility. And yet much of the public messaging is aspirational, confident and notably light on expressions of uncertainty, never mind humility. Because of their power and wealth, how companies like Meta communicate on this shapes how the broader conversation unfolds and which concerns are treated as reasonable.
You don’t have to accept the most extreme conclusions of Very Serious People like Yudkowsky and Soares to feel that something is really off here. You only have to grant that the downside risk might be larger than what we are used to dealing with, and that our current institutional and market incentives may not be well-suited to managing it.
From there, the question becomes harder to avoid.
Not whether we can build superintelligence, but whether racing to do so — under these conditions, with these incentives and led by these private actors — is a risk we actually understand.
And, more importantly, whether it is a risk we should be taking at all.