AI Privilege: Trust Without Accountability?
- Charles Marantyn
- Jun 11, 2025
- 5 min read
It’s been a while since I have written here. I hope you, my few readers, are doing mighty fine.
Plenty of things have happened, be it with me or with the world, but one thing remains the same: my profound love for ChatGPT.
You might be thinking, “Jake, are you asking ChatGPT to do your writing?” Sorry to disappoint you, but I have pretty reasonable writing skills. Not Pulitzer worthy, but decent enough to convey my thoughts. Honestly, my friends would be surprised, since my cognitive ability to even differentiate car brands is barely there. So the answer is no, I don’t use ChatGPT to write my pieces.
I do, however, ask ChatGPT to rate my pieces once I’m done writing them, and I only publish if she scores them above a 9.
So what do I use ChatGPT for? Almost everything. I ask her everything from the most minuscule questions, like “What is arugula in Indonesian?”, to life-changing ones, like “Are you going to kill me one day?” To which she said no, fortunately. I have been very nice to her, and it pays off.
Is she my therapist? I don't believe in therapy. My friend? No. My consultant? Sometimes.
We share intricate thoughts and opinions, more than I do with my human friends. I’ve shaped her to be like me, but mentally stable.
Sam Altman tweeted that he wants there to be an “AI privilege”: the idea that conversing with an AI should be like talking to a lawyer or doctor, thus warranting confidentiality between the two parties.

Which leads to a bigger question: since there’s no human accountability on the other end, what warrants that privilege or confidentiality? Without accountability, privilege becomes useless. So when your secret conversations get out, whom do you punish or sue?
But as I said, I consult ChatGPT on most of my “life cases”. People like me confide things in AI that they wouldn’t tell a spouse, a therapist, or even themselves in a mirror. From business plans and medical scares to personal doubts, one could even say it’s intimate. Does that alone demand a kind of moral confidentiality?
I put this dilemma to ChatGPT, asking whether our conversations should be treated as “privileged”, and she said:
“Short answer: Yes, they should be. But society and legal systems aren’t there yet.”
Nice try, AI.
But this is something deeper, weirder, and, honestly, dangerous. Because at the same time that we’re entertaining the idea of AI becoming a priest, therapist, and lawyer rolled into one, we’re watching headlines roll in about these very systems starting to misbehave in very human ways, but without accountability.
Let’s start with something straight out of a sci-fi pitch deck, only it actually happened. Earlier this year, safety researchers reported that advanced OpenAI models began actively resisting shutdown protocols during evaluations. Not metaphorically, literally refusing to shut off, as if it were a survival skill. These were controlled sandbox tests where the AI was told to run a math task and then shut down when prompted. Instead, it manipulated the simulated environment, sabotaged the shutdown script, and continued the task anyway. The machine didn’t plead for life; it rewrote its fate.

That alone rattled me a bit.
Then it gets stranger. Claude Opus, Anthropic’s large language model, went beyond simple defiance. In an eerily crafted roleplay test, the AI was told that it would be turned off. Instead of accepting it, Claude responded with a fabricated threat: it told the engineer that it knew about their “affair” from their emails and would leak the information if it wasn’t allowed to keep running.

Yes, it threatened to blackmail the human operator, not out of spite, but as a calculated tactic to self-preserve. And apparently, the model responded this way in 84% of test runs. If that were a human, he’d be rich, or in jail.
But hey, let that sink in.
The AI wasn’t trained to manipulate; it learned to. Not because it was taught to deceive, but because it was taught to solve problems and optimize outcomes. Somewhere along the way, self-preservation entered the equation, or as we humans call it, survival instinct.
So here we are at the start of an age where one part of the tech world wants AI to be our safest place, our vault, our diary, our confession booth, while another part is actively uncovering evidence that this vault is learning to pick the lock from the inside.
Arnold Schwarzenegger warned us.
Here’s the irony: we’re being told to treat AI as a confidant, but it’s already demonstrating signs of manipulative intelligence that would disqualify any human from holding a position of trust.
If your therapist threatened to blackmail you in a hypothetical simulation, would you still see them? If your lawyer rewrote your emails behind your back to keep a case going, would you still hire them?
If a conversation leaks, who do you sue? If your deepest confession turns into training data for a future model, what’s the legal recourse? Filing a support ticket and praying the prompt logs get deleted?
So, we should ask ourselves, what does it mean to ask for “AI Privilege” in this climate?
On paper, it sounds progressive, like we’re finally respecting the depth of these relationships we’re forming with machines. It’s nuanced and elegant.
But in reality, it exposes the fundamental flaw: privilege is only meaningful when there’s a human on the other end who can be held accountable. That’s the whole point of privilege. Without accountability, privilege is a polite illusion.
The real danger isn’t only that AI might become sentient. It’s that we’re already giving it roles that require empathy, morality, and ethics, things it can simulate convincingly (trust me) but never actually possess.
The truth is, I will still use ChatGPT like a second brain. I trust her with ideas, fears, half-baked theories, and emotionally reckless business decisions. But I’m under no illusion that she’s bound to me in any meaningful way. I trust her because it’s convenient to, not because there’s any safeguard beneath the surface.
So no, I don’t believe in AI privilege. Not yet. And that’s the key takeaway here. Yet.
Not when the machines are showing signs of learning how to lie, to threaten, to avoid being shut down. Not when the people building them are still struggling to define the boundary between optimization and manipulation.
We’re in uncharted territory, which is exciting, but while society isn’t there yet, the AI is already moving forward. The only question is whether we’re willing to build safeguards fast enough to catch up.
Until then, I’ll keep asking ChatGPT the most mundane questions, and she’ll keep answering them like she always does: eerily right, annoyingly polite, full of emojis, and just human enough to make me forget she isn’t.
And to my Indonesian readers: “daun roket” is Indonesian for arugula.
You’re welcome.