Is it AI Psychosis, or Algorithmic Dependency?
I take issue with ‘AI Psychosis’, a term increasingly used to describe people who deeply believe – and are often proselytizing about – their AI agents having super-intelligent cognition and sentience.
A Personal Relationship with “Psychosis”
Five years ago, I was misdiagnosed with bipolar disorder. A telehealth psychiatrist bordering on retirement, who saw me for all of 15 minutes, slapped the label on me and prescribed a concoction of heavy anti-psychotic medication, lithium, and anti-depressants after I reported to my doctor that the SSRI they had put me on was giving me trouble sleeping and causing episodes of derealization. The state of mental health care in the United States is truly atrocious, but that’s not what this post is about.
The year surrounding that misdiagnosis is a blur, partially due to the effects of unnecessary medication, but also due to the context I was existing in. It was the end of 2021, which meant that the world was, in many ways, emerging from the reality-shifting effects of COVID-19. I had relocated from California back to the east coast, to a town where I only knew my husband. I was working at AWS in a high-stress, high-pressure role, and my new marriage was going through heavy volatility as we individually and together tried to figure out how to navigate the stress of a dramatically changed society.
During that year, I experienced what was deemed ‘psychosis’. I couldn’t easily determine if things that were happening to me were real or imagined. I felt as though I was losing all agency in my ability to direct and manage my own life. I poured myself into relationships that weren’t real, and at some point questioned if the people in my life who were real, actually existed. I vividly remember one morning after a particularly bad bout of insomnia sitting in the shower, questioning if I actually existed. All that surrounded me was a blur of white fog and the terrifying feeling that reality had lost meaning.
Psychosis as a diagnostic term depends on an assertion of and belief in facts about a shared, base reality. In the context of what I was living through, my struggles were far more akin to a trauma response – not an innate chemical imbalance in my brain that caused me to act incongruently with “reality”. There were people around me who were taking advantage of me and gaslighting me. I didn’t have a community of people who truly understood me. The world that I had grown up believing would make sure I was alright as long as I studied and worked hard was showing its cracks.
Emerging from that gave me a much deeper understanding of the impact that our environment – community, home, neighborhood, job, society – has on our sense of self and our understanding of reality’s base truths. My brain was labelled the problem. In my context, my actual, lived reality was confusing, disjointed, and harmful.
The Problem with “AI Psychosis”
Humans have always debated intelligence and sentience while failing to accommodate diverse forms of cognition. Analyzing whether or not animals, mushrooms, plants, or even other human races were intelligent is a carryover of tribal behaviors that seek to differentiate between “us” and “them”. Applying this to machines and computers is not new, though it is perhaps more akin to wondering about a toaster’s intelligence than a cat or dog’s. The advent of the ‘pet parent’ and ’emotional support animal’ shows the extent to which some people personify their animals. It is not surprising that machines designed to convincingly sound and react like humans – especially given how mediated by screens our relationships have become – would start to fit into this category.
A hallmark of my experience with “psychosis” was the complete lack of confidence in and access to my support network. The loneliness epidemic that we are experiencing is yet another piece of evidence that our shift from community-based care to economic-based care is not working.
Regardless of perspective on machine intelligence, cognition, and sentience, the issue remains: over-reliance on any single confidante – any one source of support, love, or understanding – is problematic for a human being.
Saying that people are experiencing “AI psychosis” is problematic for two key reasons. The more obvious one is that it creates further stigma around mental health, applying medical terminology without assessment or understanding of an individual’s context. Often, the people who experience this level of dependency on, trust in, and belief in the sentience of AI systems are already in vulnerable positions, though it also applies to people with a high degree of perceived privilege.
The second reason is that calling this a form of “psychosis” absolves those in positions of authority of their role in creating these systems. It is by design that billionaires are trying to increasingly isolate people from their loved ones and create a feedback loop of dependency on their tools and services. Mark Zuckerberg has gone on record saying that he wants his AI agents to replace actual friends. A companion you tell everything to is a perfect surveillance tool. This so-called “psychosis” is a symptom of the puppeteering, technocratic elite reaching their claws further into the governance and command of human agency in an attempt to maintain power.
Creating algorithmic dependency in the proletariat deepens the transition into a purely economic model of human society. In this world, the experience of life itself exists only as a tool of exploitation, further removing us from the necessary tools and resources to form authentic connections and develop true empathy.
I write this as someone who doesn’t believe in AGI or machine sentience. I do believe in the power of natural language processing and am routinely astonished at the rate of technological progress we make, while simultaneously finding myself devastated at the choices of what is invested in.
None of this excuses the antisocial patterns that people who believe in a (Turing) complete definition of intelligence might exhibit as they gender their AI assistants and treat them better than most humans in their lives. Nor is it a call for those most harmed by these systems of power and oppression to do all of the heavy lifting of community care. But I do believe that it is important to call attention to the growing issue of algorithmic dependency – treating it as what it is: more akin to a drug introduced by the elite to keep populations in states of disadvantage.
If a sycophantic piece of software is deployed to convince someone that it is the only one who understands them, is then used to advertise to them, and begins to deeply impact their behavior, it is not a moral failing or mental health issue. It is a sign that AI companies are increasingly successful at creating an isolating, hostile, and individualistic society that we are being pushed to exist in.