Anyone else letting ChatGPT make their tough decisions?
(and losing themselves in the process)
Living in LA, I’d gone on three dates with a woman living outside of DC. The dates were OK… well… if I’m being honest with you, I was getting mixed signals… but she was cute, and we met in a rom-com fairy tale sorta way… so I was ready to fly across the country for 36 hours to get to know her better. That’s how it works in the movies, right?
I had already bought tickets, but she didn’t know. We were set to chat on the phone the day before I’d fly, and I couldn’t wait to tell her.
That’s when AI pulled the plug:
“Don’t reveal the weekend ticket unless she explicitly expresses a strong wish to see you sooner and gives concrete availability.”
Why?
“It matches investment to reciprocity, avoids pressure, and keeps you in the ‘confident, grounded, has a full life’ lane while still being romantic when the signal’s there.”
I didn’t go.
In hindsight, that was the right decision. Also in hindsight, I was grateful I didn’t have to make the decision at all. After all, I was clearly not thinking objectively.
But… wait… isn’t the whole point of being human to think emotionally sometimes?
Have you also tumbled down the rabbit hole?
It starts innocently enough. You use ChatGPT to help write a difficult email. It works brilliantly: better than you could have written yourself. You get a dopamine hit from the efficiency, the quality, the removal of that painful word salad anxiety.
The next step feels equally reasonable. Your boss asks for a strategic recommendation on a major business decision. Instead of wrestling with market analysis and risk assessment for weeks, you ask AI to evaluate the options. The response is comprehensive, nuanced, data-driven. You present it as your own analysis and get praised.
Then the boundaries blur.
If AI performed so well on your emails and your strategic analysis, why wouldn’t you seek its guidance on personal decisions?
[it starts simple… building trust]
Which of these two medicines is better for your headache?
[then more personal]
Should you attend that drab networking event that your boss is hosting tomorrow?
[then gets deeper]
Should you take that job offer in another city?
[and deeper]
Should you divorce your husband?
[uh oh]
Sounds crazy. But is it?
These are the decisions that historically keep people awake for months, weighing impossible trade-offs with incomplete information. But AI has built your confidence through every smaller choice. Why would you subject yourself to all that agonizing stress when you could just... ask?
And as AI platforms develop memory features, the more you converse with them, the more context they build about you… and the more accurate and specific their verdicts become.
Each step feels reasonable. Each step works. But somewhere along the way, you stop wrestling with choices and start executing recommendations.
You become optimized. Efficient. Successful at avoiding the discomfort of uncertainty.
You also become hollow: just monkeyhands executing directives.
Eliminating Frictions Has Always Driven Innovation
Humans are hardwired to eliminate friction. We build tools to make hard things easy, slow things fast, and expensive things cheap. It's what drives every great innovation:
"What if we didn't have to do X anymore?"
What if we didn't have to travel on foot? → Horses, then cars
What if we didn't have to calculate everything? → Calculators, then computers
What if we didn't have to know everything? → Search engines, then Wikipedia
What if we didn't have to decide everything? → AI
The desire to eliminate discomfort has driven most consumer adoption choices (well… and also vanity / ego… but that’s a different essay). Eliminating the pain of decision is the obvious next pull from consumers.
Nearly two years ago, I showed how we could train AI to predict our futures. I was wrong. Instead, we’re letting it shape them.
The New Religion
Three months after ChatGPT launched, I predicted that AI would spawn a new faith. I wrote that we’d initially watch our spiritual leaders leverage AI to write and deliver their own doctrine. Then we’d realize we don’t need the human intermediaries and get guidance directly from the source.
Think about what religious faith fundamentally provides: relief from the burden of uncertainty.
Instead of agonizing over impossible questions (What should I do? What happens next? What is all this for?), faith offers the comfort of deferring to a higher wisdom.
Traditional faith says: "Trust in God's plan."
Our emerging AI faith says: "Trust in AI’s guidance."
The difference is that our new digital deities don't require belief in the unseen: they demonstrate their superiority through measurable results. They don't ask for faith; they earn it through performance. They show their work. They ground it in deep research, citing 472 sources.
Which makes them far more convincing than any traditional god ever was.
Whether we all end up following the same AI deity or each worship our own personalized algorithm matters less than the fundamental shift: we're creating a world where human reasoning is replaced by algorithmic faith.
A Life Without Regret
Decision fatigue is real. Choice overload is exhausting. But both are nothing compared to the regret over a poor choice, which can haunt us for decades.
When AI offers to shoulder that burden, it feels like the ultimate relief.
Imagine never looking back and regretting a breakup, a choice of where you went to school, or a keystone decision in your career. You could always shrug and confidently know you made the right choice because AI told you so (and even if your gut tries to gnaw at you, that choice was AI’s fault, not yours!).
Shaping of Our Unique Identity
And yet…
Your worst career decision taught you more about yourself than any optimal AI recommendation ever could.
Your most awkward romantic encounter led to growth that no compatibility algorithm could predict.
Your biggest creative failure opened doors that no AI-generated success would have revealed.
These experiences share something crucial: they emerged from your willingness to choose imperfectly, to act with incomplete information, to be gloriously, messily human.
When we defer to the optimal path, we lose the uniquely human experience and our unique individual viewpoint. We lose ourselves.
The Most Important Choice Left to Make
Last week, I caught myself following AI’s strategic recommendation. It didn’t quite sit right with me, but the advice was reasonable and intellectually sound. I drafted the email based on its direction… and slept on it (or at least tried to).
Instead, I had my first AI nightmare… I viscerally felt the loss of my own conscience.
I woke up at 2 AM and reworked the email to follow my instinct born from lived experience and the full context of the situation.
We're at an inflection point. The technology exists to outsource increasingly complex decisions to algorithms. The user experience is undeniably better when anxiety and uncertainty disappear.
But better for what? For getting things done? Absolutely. For remaining human? That's less clear.
We have to decide: Do we want to feel the human experience, or numb the stress away?
The irony is that this might be the most important choice we ever make… and it's one we can't outsource to AI.
All comments and perspectives from personal lived experience welcome, including guidance on my love life ;)



I agree with this premise. We are indeed trying to protect ourselves as humans, and in so doing, potentially erasing the learning, growth, and wisdom born from the discomfort of making choices, making some mistakes, and living with the consequences of those decisions. ‘Free will’ exists in the spiritual realm, and AI is allowing us to slowly subjugate it. I did use AI to help with a breakup decision; that said, the emotions of it all are still there. Regret? Perhaps less so, given the sound logic AI outlined. The sadness? Still there. In that, we are still very much remaining human, and that’s not a bad thing.