One Year In – I Still Don’t Use ChatGPT. Here’s How It’s Going.

Despite using machine learning technology since 2014, I have used ChatGPT exactly once. Days after it launched, I used it for a pseudo-therapy session: I gave it minimal, vague background context about a conflict I was having with a friend and had a short dialogue with it “acting as my friend”. While I didn’t immediately understand the implications of what was to come, I did have a visceral, emotional reaction to what was unfurling ahead of me. I looked up OpenAI, the company that makes ChatGPT, and found that it was funded by TESCREAList far-right billionaires. Since 2016, the link between Silicon Valley idealism and the role that technology plays in the rise of fascist rule has become increasingly clear. OpenAI had landed on a goldmine: a chat interface that could suck up human thought and conversation, and millions flocked to the service to feed it.

In February, I wrote an article about the ethical use of AI in business after listening to my CBS classmates talk about how they used ChatGPT to summarize the books we were assigned for our Leadership Through Fiction class instead of reading them. To me, that completely negated the point of the class: why choose to study the role that fiction plays in developing and analyzing strategic leadership capabilities if you don’t want to study the words that make up those stories? Of course, this isn’t a new problem; summaries and shortcuts for the way we process information have been available for decades. But AI promotes them at a huge scale.

As an autistic and queer individual, I’m keenly aware of the role that technology plays in identifying, surveilling, and categorizing acceptable behavior within society. Despite promises of data segmentation and data security, language models leak training data all the time through prompt injection attacks. Facial recognition deployed by law enforcement and militaries is used to monitor and control citizen populations, and human rights are under attack around the world as far-right leaders come to power. The idea that these large language models are building profiles on us comes straight out of Google’s advertising playbook, only now we’re adding an additional layer of vulnerability by having conversations directly with The Algorithms. No thank you.

Luckily, moving away from Big AI doesn’t mean that I’ve been absent from the discourse around the future of computing. Open, local AI projects are thriving, thanks in large part to Meta’s release of the weights for the LLaMA large language model earlier this year. I’ve been making heavy use of privateGPT, which lets me carefully curate the content I feed to a model running on my local desktop computer and get insights from what I read and write. The information and insights never leave my computer, and I get incremental benefits from the technology without compromising on privacy and safety. I’m finding more joy in writing code to tweak my agent experience and exploring which interfaces make the most sense for me, rather than feeling like my mental state is a commodity for monopolies.
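
For the curious, the core of a local workflow like this can be quite small. Here’s a minimal sketch using the llama-cpp-python library (not privateGPT’s actual internals; the model path and prompt are placeholders I’ve chosen for illustration):

```python
# A minimal local-inference sketch with llama-cpp-python.
# Everything below runs on my own machine; no prompt or completion
# ever leaves the computer. The model path is a placeholder for
# whatever quantized LLaMA-family weights you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,  # context window size
)

response = llm(
    "Summarize the main themes in my reading notes:\n<notes go here>",
    max_tokens=256,
)
print(response["choices"][0]["text"])
```

The specific library matters less than the property it demonstrates: the entire loop, from prompt to completion, runs on hardware I control.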

In June, Microsoft stuck an AI-powered Bing Search bar directly on my desktop. That same day, I backed up my files and switched to Linux (first Pop!_OS and now Ubuntu) as my daily operating system. I can count on one hand the number of times I’ve used Windows since, and those were exclusively to play Fortnite before switching to couch co-op Baldur’s Gate 3.

The pressure of accelerationist thinking in tech has always been present, but AI seems to take it to another level. Being hesitant, cautious, or fearful of the way the technology can be used in the hands of malicious actors and commercialized by public companies gets you labeled a “doomer” or a “decel”, words that bring to mind the dismissive “boomer” and “incel” terminology. It’s not a coincidence: the people who capture and influence the funding and mindshare of Silicon Valley have explicitly and publicly extolled post-capitalist ideals, because they continue to benefit. The idea that we should slow down and care about people (or the planet) is presented as objectively less beneficial for society than making more and more money, as if currency and capital weren’t themselves technological advancements with pros and cons. To them, the ultimate measure of productivity is how much can be consumed, regardless of the impact on the wider world. As it turns out, we’re already paperclips.

While I don’t really make an effort to quantify my productivity anymore, I feel like I do more thoughtful and intentional work by thinking more critically about how different components fit together. I’ve realized that acceleration for technology’s sake is no longer a value I hold, and I find much more enjoyment in creative endeavors through the process itself. I write more code, I think more cautiously, I read a wider variety of perspectives. I don’t need ChatGPT to do any of those things.

More than anything else, the challenge of not using OpenAI products and ChatGPT has actually been the way it impacts my relationships with people. Most people don’t want to hear about how the supply chain for artificial intelligence includes emotional abuse and underpaid workers. They don’t want to confront the idea that they could be identified or targeted by a malicious government seizing data from private companies. There’s a huge question of how to navigate trust and relationships with the tools we use, and how much we value our privacy and autonomy, which is a bit of a downer topic when people are rushing to show off how they’re using generative AI to make memes of pregnant Sonic on 9/11.

I won’t deny that ChatGPT has also helped people. Generative AI can be a helpful brainstorming tool, but centralizing so much data consumption in a single company is a risk. Amazon, for example, famously commoditizes the applications built on top of AWS thanks to its expertise and scale. OpenAI seems to be following the same playbook, albeit arguably at a much faster pace.

So, a year in, I’m mostly happy being ChatGPT-free. The difficult part of holding this perspective is being out of the “zeitgeist” of the technological sphere. Case in point: in a brainstorming meeting today, people were encouraged to feed our needs into ChatGPT or other AI models. I was an anomaly in saying that I just used my brain to do the work. It’s almost a running joke at this point, and I constantly feel like I’m the punchline. But that’s a people problem, not an AI problem.

More than ever, there’s an enriching opportunity to explore the depth of the content that humanity has produced. I want to dive deeper into those nuanced areas, not reduce them to the most statistically likely output in a summary. Going into 2024, we are introducing more complications and complexity to what it means to be human than ever before. Yet increasingly, it seems like the things that make us human (our relationships with each other, showing empathy and curiosity, sharing and distilling and theorizing) are being offloaded to our relationships with our machines. We’re in an epidemic of loneliness. Maybe a GPT-free lifestyle is the antidote.