Recently, OpenAI published a policy update on the private information it will share with security officials.
If a person conversing with ChatGPT is found to be planning to harm others, their chats may be routed to a special team of OpenAI staff who determine, based on internal risk criteria, whether the conversation indicates an imminent threat. If it does, they can release the chat data and personal information to security officials.
Now, this is not entirely new. For years, for example, Facebook’s safety teams have voluntarily disclosed data on people using their services (including chats and GPS locations) to police in narrowly scoped “harm” cases defined by the company itself, such as terrorism, child sexual exploitation, and suicide and self-harm.
Sometimes those interventions saved lives. But they always relied on trust — trust that corporations could, and would, interpret “harm” responsibly.
So, in the new era of artificial intelligence systems, this still sounds a lot like “safety,” and like something we can once again just trust corporations to interpret responsibly.
But recent history shows why that trust is now shaky.
Snowden revealed how tech companies handed vast amounts of citizens’ data to the United States government. The U.S. State Department now requires visa applicants to make all their social media accounts “public,” using their posts as grounds to deny visas or, at the last moment, to deny entry to the United States.
In other words: what you post online can already be turned against you.
With AI systems like ChatGPT, the stakes are much higher.
These are not just public posts or Messenger texts. These are intimate, one-on-one conversations — as personal as a discussion with your doctor or your lawyer. OpenAI admits as much. And yet, Sam Altman himself has said, “there’s no legal confidentiality … we haven’t figured that out yet.”
That contradiction should set off alarm bells in your head.
And then there’s the broader context.
This year alone, tech giants scrubbed DEI from their websites, while others eliminated their DEI programs altogether. Altman joined a chorus of executives fawning over donald trump, praising him as a “pro-business” president and a “refreshing change” (all while unmarked, masked ICE units harass foreigners and Americans alike in their own country). OpenAI’s Chief Product Officer actually joined the U.S. Army’s AI unit, Detachment 201, with a $200 million military contract following right behind.
The lines between Silicon Valley, government, and the military are not just blurry; they’ve vanished.
So what happens when:
Those same companies – active assets in a military force, ingratiating themselves to win government contracts – feel pressure to share?
When dissent on Israel’s genocide in Gaza, LGBTQ+ rights, or even criticism of donald himself is flagged as “harmful”?
When the State Department demands AI chat logs to vet visa applicants?
When ICE demands them for its mass deportations?
And if the new regime in America can do it, what happens when authoritarians elsewhere demand access in exchange for market share? We already know corporations are happy to negotiate with bad governments when their corporate interests are at stake.
Honestly, this isn’t just a slippery slope. This is the road we’re already standing on.
For the global majority, this has all the hallmarks of super-charged digital colonialism: Silicon Valley billionaires and U.S. authorities controlling storage and access to our most private thoughts.
So here’s the warning:
Policymakers, leaders, experts, and activists cannot take “safety” at face value, not in the current technological and political context. Trust in their good intentions cannot be how we mediate our relationship with tech companies.
We need enforceable regulation, real transparency, and independent oversight.
If we don’t regulate the interface between corporations, their products, and us as citizens, we’ll be forced to trust the very people — the very companies and governments — who have shown us time and again why we cannot.
AND *THAT* IS 2 MINUTES EDGEWISE, UNBOUNDED.