LifeGPT: Training a Psychic LLM to Predict Your Life
How we can use current AI to re-create the tech in Minority Report
I recently read an article about the “Doom Calculator,” a large language model (LLM) developed by scientists to predict death and wealth with 78% accuracy. The pop science article focused on doom and gloom, but I believe the implications are far greater. Imagine a model describing the person you will marry, predicting your relationship with your son, or deciding the most lucrative career path you should pursue.
Today, we turn to God, pastors, psychics, mystics, life coaches, (and hairdressers) to help us craft our future storylines and make difficult decisions.
Tomorrow, will LifeGPT fill this role with stunning accuracy?
In this post, I will:
Briefly explain how scientists built the Doom Calculator model using government data and transformer techniques similar to those behind ChatGPT
Offer a guide for how we can actually build the LifeGPT AI life predictor model
Explore the first and second-order effects of this powerful tool
In 2023, we saw the immediate effects of GPT-style, LLM-based transformer models, as companies layered thin AI constructs onto existing business models like customer service emails or script-writing. This post offers a genuinely new application of LLMs that was not possible a decade ago, much as Uber and Bird were not possible before the mobile phone.
Any and all comments on the topic are most welcome. Or, feel free to contact me if you want to work on LifeGPT.
How Scientists Built the Doom Calculator
In simplest terms, all AI learns the probabilities of outputs given a certain set of inputs. (Before you scoff at this simplistic view of AI, consider this may be how humans make decisions, too… weighing probabilities of outcomes and choosing the optimal path.)
Today’s most powerful LLMs, like OpenAI’s GPT-4 (the model behind ChatGPT), produce logically sound, creative output by first ingesting billions of samples of human-generated text. The “training” of AI is simply the AI learning the patterns in our thinking: the most likely word, phrase, or number that follows the prior input. For example, if you prompt, “Jack and Jill went up the _____”, the model has seen enough examples of this construct to know that “hill” is the most probable next word.
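The next-word idea can be sketched with a toy bigram counter — a drastic simplification of a transformer, using a made-up three-sentence corpus — but the core question (“which word most often follows this one?”) is the same:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny, invented corpus.
corpus = [
    "jack and jill went up the hill",
    "jack and jill went up the hill to fetch a pail of water",
    "the cat went up the tree",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most probable next word after `word`, with its probability."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('hill', 0.5) — "hill" follows "the" in 2 of 4 cases
```

A real LLM replaces these raw counts with billions of learned parameters and a context far longer than one word, but the output is the same kind of object: a probability for each candidate continuation.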
The same can be done with human life story arcs.
In the Danish study, details like age, income, health, and education were woven into narratives that reflect everyday life events. For example, “a man earned 40,000 kroner as a waiter”, then “he earned 80,000 kroner as a corporate attorney”, and then “he died of a heart attack at age 72”. Now, show the model 6 million such stories and let it learn common death and wealth outcomes based on earlier life events. The data resolution in this study is based on government records and is fairly simplistic, but even this rudimentary data set yielded meaningful accuracy.
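As a sketch of how such records might be flattened into training text — the schema and templates here are my own invention for illustration, not the Danish study’s actual format:

```python
# Hypothetical record-to-narrative step: turn structured life events into
# story sentences an LLM can train on. Field names and wording are invented,
# not taken from the study.
def record_to_sentence(event: dict) -> str:
    templates = {
        "job":   "At age {age}, {name} earned {income:,} kroner as a {role}.",
        "death": "{name} died of {cause} at age {age}.",
    }
    return templates[event["type"]].format(**event)

life = [
    {"type": "job", "name": "a man", "age": 25, "income": 40000, "role": "waiter"},
    {"type": "job", "name": "he", "age": 40, "income": 80000, "role": "corporate attorney"},
    {"type": "death", "name": "He", "age": 72, "cause": "a heart attack"},
]

story = " ".join(record_to_sentence(e) for e in life)
print(story)
```

Repeat this over millions of citizens and you have a corpus of life stories in place of a corpus of web pages.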
How to Build the Psychic LifeGPT
Consider your life’s biography as a vast collection of story arcs. Instead of limiting our data set to government records, let’s consider all the stories contained in our Facebook posts, Instagram stories, and LinkedIn announcements, many of which are publicly available to 3rd party scrapers, and all of which are fully accessible to the companies running those social networks. And, if you really want to get granular, add our personal emails and text messages to friends, family, and partners into the training data set.
The vast neural networks of large language models are perfectly adept at spotting causal (or at least probabilistic) relationships between inputs and outputs in ways that we mere humans cannot comprehend. Perhaps having three sexual partners before age 19 predicts that this person will not marry until after age 30? Perhaps nihilistic phrasings in one’s social media posts at age 16 predict that this person will have dementia by age 52? Sure, we humans already study obvious correlations (e.g., BMI → type 2 diabetes). But what about the droves of fuzzy correlations that no single model could possibly handle… until the recent advent of LLMs with their billions of parameters!?
Instead of “Jack and Jill went up the _______”;
Imagine a prompt like:
“Alex was born in Kiev, Ukraine and came to America as a refugee at age 5… he spent most days home alone after school putting together airplane models or bouncing balls against walls being fascinated by the angles of bounce relative to spin… he graduated from UPenn with degrees in engineering and entrepreneurship… he broke his ankle on a motorbike while soul searching in Bali… etc etc etc
Alex meets his wife at age _______”
While we all feel unique, I bet I’m not the only millennial who did soul-searching in Bali, or built a software company, or didn’t focus on meeting his life partner while in his 20s. A transformer-based large language model could combine the life stories of millions of representatively similar humans and output the age at which I’ll meet my wife, just as it combines the story arcs of millions of representatively similar marketing emails to output the Christmas sale email for your specific luxury pet e-commerce brand.
AI performs most effectively at input/output tasks. What could be a more obvious application than inputting one’s life history to output one’s future?
Your life is a collection of story arcs, already documented in social network posts and direct messages. An LLM-based LifeGPT ingesting the patterned story arcs of billions of people is no different than ChatGPT ingesting the intricacies of the patterns of the English language.
First and Second Order Effects
The advent of such a detailed prescript of our potential lives would ripple through our societal canvas, quietly echoing in each personal and collective choice. Imagine the foreknowledge of emotional compatibility or the foresight of career satisfaction devoid of trial and error.
Insurance: one obvious implication is the upgrade to models used by insurance companies to price premiums. Current models are based on dozens of factors. How about a model based on billions of parameters? If people are already willing to let the DriveEasy app give Geico access to their driving data in the hope of lower car insurance premiums, would we give our Facebook post history to our life insurance company in the hope of lower premiums? How would we feel when the insurance company instead offers us a higher premium… because its models predict we’re going to die sooner?
Healthcare: I’m currently listening to Peter Attia’s book “Outlive”, so I can’t help but wonder how I would alter my daily habits today if my LifeGPT predicted specific illness later in life.
Dating and Relationships: imagine if I could describe in vivid detail the exact person who would bring you the most joy in a life partnership. You wouldn’t need to “take a chance” on that person your coworker thinks might be a good fit, and you certainly wouldn’t waste countless hours swiping.
Careers: consider how many people struggle with deciding their career path. What should they study? Which job should they apply for? Should they even join a company, or venture off to start their own business? Just as Harry Potter’s Sorting Hat determined your house at Hogwarts, LifeGPT could quickly sort out your career.
“Final Years”: similar to taking a “gap year” after college or military service, imagine a cultural phenomenon in which people take their expected age of death and spend the prior year abandoning all foresight and care. If you knew for a fact that you had 364 days left to live, how would you act for this next year? Would you still go to work? Would you stay with your husband?
Crime and Policing: Perhaps the most troublesome application of LifeGPT, as popularized in the incredible movie Minority Report, is when governmental authorities start using LifeGPT to gather probabilities of a specific human committing a specific crime. For example, how should we act if LifeGPT predicts, with 98% probability, that John Doe will commit a mass shooting and suicide within the next three months?
A Step Towards a New Faith
Last February, I wrote a prediction for the future of Artificial General Intelligence. One of my predictions explained how AI would become a new faith, first supporting the existing priests/pastors/rabbis/shamans, and later gaining direct access to followers. It seems that LifeGPT may be another backdoor channel towards an AI deity.
Consider that the LifeGPT model would give people a peaceful sense of certainty, much like Jesus, Allah, or tarot cards give their followers today. How many people turn to prayer, priests, or mystics to attain comfort around impossibly difficult crossroads, or to find acceptance in their prior choices?
Given our modern idolatry of innovators and technology, I can imagine a fairly large number of people choosing to turn to an all-knowing AI machine rather than their neighborhood rabbi.
Aren’t Humans Too Complex for LifeGPT to Be Accurate?
In the land of AI and machine learning, everything is probability-based. Just like Jack and Jill might have a 95% chance of “going up a hill”, there might be a 4% chance they “went up a mountain” (and maybe another 0.1% chance they “went up a chimney?”)
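The long-tail point can be made concrete. These exact numbers are illustrative, not from any real model:

```python
import random

# The model does not answer "hill"; it answers a distribution over
# completions. Illustrative, invented probabilities:
completions = {"hill": 0.95, "mountain": 0.04, "stairs": 0.009, "chimney": 0.001}

# Greedy decoding picks the single most likely completion...
best = max(completions, key=completions.get)

# ...but sampling with these weights occasionally surfaces the long tail.
random.seed(0)
samples = random.choices(list(completions), weights=completions.values(), k=1000)
print(best, samples.count("mountain"))
```

Every LifeGPT “prediction” would be a distribution like this one, with the headline answer merely the tallest bar.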
Perhaps the more interesting question is not whether the AI will be accurate but whether we humans will be able to accurately interpret the probabilistic results.
If LifeGPT told you there was a 92% chance you should marry Heather, but you had doubts in your gut, would you do it? What if the probability was 78%? I don’t think we humans are prepared to grasp the meaning of a 14-percentage-point difference. Both 92% and 78% sound confidently large and definitive but are mathematically vastly different.
I also wonder how our minds would interpret the other, long-tail probabilities. “You should marry Heather with 92% probability, and you should marry Sarah with 7% probability.” But your chemistry with Sarah felt better. What do you do?
Life can be chaotic and unpredictable. An LLM like LifeGPT might give you the most accurate future based on the storylines of millions of people like you… but it will never be exactly the future… because you certainly have the free will to change the future to whatever you want. Or do you? Once LifeGPT tells you that you should marry Heather with 92% certainty, I believe this would quiet your doubtful gut and give you the confidence of commitment… resulting in a self-fulfilling prophecy… a happily committed marriage… and the diminishment of free will.
I do not envy the philosophers, ethicists, legal scholars, and politicians who will one day need to draft the guidelines for such technology. I bet we will witness this drafting in our lifetimes.
“We Can” is Different than “We Should”
I have not yet touched on the ethical question of whether such a model should be built at all. For better or worse, I don’t think we have the luxury of pondering it. Enough of the data required to build such a model is publicly available, and the cost of training such a model will drop from tens of millions of dollars to tens of thousands of dollars within a few years. Even if we somehow ban it in America and Europe, actors in other countries could surely develop such models… and I cannot imagine our governments walling off the internet to stop us from accessing them.
These models will be built. I just hope they are commercialized by someone with some bit of conscience to properly explain the strengths and risks to their users.
Final Thoughts
Though the notion might seem as unsettling as it is innovative, the reality is such that the wheels of this future are already in motion. Our engagements with AI have moved past novelty; they influence our decisions, curate our experiences, and may soon dictate the trajectory of our lives. As we stand at the crossroads of this digital revelation, we must decide how much life—its mysteries, its uncertainties, and its sprawling unpredictability—we're willing to surrender to the intelligence we've created.
Removed from the main post for brevity:
Note this is far more powerful than a GPT responding using average common knowledge. For example, if you ask whether you should start a company, the general answer is “no”, because 90%+ startups fail. Most of us think we are outliers and overachieving unicorns. So we ignore the advice. But, consider if the GPT was trained on the millions of life paths and life happiness outcomes of motivated people like yourself. And, the advice it provided was based on your biographical history, your psychological profile, your social connectivity, your career pedigree, and billions of parameters gleaned from your thousands of data footprints to date that your mere human unicorn mind cannot possibly compute.
Yes, you are a unicorn, but would you accept that you are a unicorn whose life journey is analogous to millions of other unicorns who have faced similar crossroads? And your life can be represented as an amalgam of those millions of other lives? If there were a 90% probability that your specific life path would yield a lonely depression by following the founder’s journey, would you still be filled with bravado?