Development, Machine Learning, neurodivergence

SelfOS: Visions of a Personal Agent

Last week, I started thinking more about the role of artificial intelligence in the context of user agents. As we cross a new productivity frontier with the availability of large language models, I’ve been thinking a lot about how these tools can help us introspect and communicate.

The impact of these technologies is not yet known, but one risk we face lies in how we centralize and distribute the software. Most people interface with artificial intelligence through the ever-growing language models built, trained, and hosted by billion-dollar companies like Microsoft, OpenAI, and Google, which means that what they’re getting is a “common denominator” experience: the probabilistic average of the “human experience” – or, more accurately, the machine’s perceived “human experience” based on its training data. This creates two major places where bias can shape the output: the training data itself, and the guardrails that the developers put on the systems in order to comply with the “average” and be seen as “objective”.

An example:

This past Friday, I asked Google Bard whether governments taking children away from their families is fascist behavior. This is not a hypothetical scenario – Florida implemented this law last week – and Bard gave a detailed, several-paragraph answer that emphatically identified this behavior as fascist and authoritarian. It directly cited the 45th president’s policies in its answer, but when asked directly whether 45 was a fascist, the algorithm quickly replied that it was incapable of answering political questions.

In the paper ‘On the Dangers of Stochastic Parrots’ [Bender et al., 2021], computer scientists and researchers called out the risks of hegemonic answers. The interplay between one’s society and self is extraordinarily complex, but these large, centralized systems stand to over-index on the “default” or “socially acceptable” position, rather than allowing us to scope our queries down to better reflect our own experiences and world views.

The challenge we then face is how we protect our own information. Despite warnings otherwise, people put sensitive data into large language models, and that data is then used to train future systems. I would be shocked if OpenAI, Google, Microsoft, or Meta weren’t working on a “more personal” AI system that would feed into generating your own personal AI – but I don’t want these companies to have my most personal data.

Enter the concept of SelfOS – a personal operating system that you build for yourself, utilizing local machine learning models that ingest your own data and update themselves to become an assistant tailored to you.
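As a rough illustration of what local “ingestion” could look like – this is a sketch, not part of SelfOS today – personal notes can be embedded on your own machine and the most relevant passages recalled for a given question. The embedding library (sentence-transformers), the model name, and the file layout below are my own placeholder assumptions.

```python
# A minimal sketch of local "ingestion": embed personal notes on your own
# machine and recall the passages most relevant to a question.
# The embedding model and file layout are placeholders.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

# A small embedding model that runs comfortably on a laptop.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Read personal notes (journal entries, biography snippets, etc.) from disk.
notes = [p.read_text() for p in Path("my_notes").glob("*.txt")]
note_embeddings = embedder.encode(notes, convert_to_tensor=True)

def recall(question: str, top_k: int = 3) -> list[str]:
    """Return the notes most semantically similar to the question."""
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, note_embeddings, top_k=top_k)[0]
    return [notes[hit["corpus_id"]] for hit in hits]

# The recalled passages can then be handed to a local language model as context.
print(recall("What matters most to me in my work?"))
```

Nothing in this flow leaves the machine, which is the whole point.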

My long-term vision for SelfOS is ambitious – a personal operating system for anyone. In practice, there will be infinite ways of building a SelfOS. It’s not a single technology stack, but instead a series of principles grounded in the belief that technology should be used as a tool to empower individuals, rather than as systems of oppression.

Yesterday, I brought the very first iteration of SelfOS to life – ori.ai, a chatbot seeded with my own personal biography and life information, which I use to help develop my thinking on new topics and to introspect more effectively on challenges, opportunities, and what it means to be alive.

My purpose in life is to help create a world where everyone has the tools they need to live with dignity, autonomy and respect; free from oppressive systems that seek to marginalize or discriminate against anyone based on their identity or beliefs. I want to build an inclusive future through technology which allows all people to express themselves authentically without fear of judgement or persecution by those in power.

– ori.ai’s response to the question “What is my purpose in life?” It honestly nailed it.

ori.ai is just a starting point, but it’s an exciting one. I’m extending my own cognitive capabilities through thoughtful engagement with a tool that is given my own information as the foundation for its responses.

SelfOS is going to be a slow project in some ways. There are immense ethical considerations to building artificial representations of humans, and we don’t know what impact they will have. This is why, to me, it is so important to have full agency and autonomy over how I create and use my agent; my user agent is defined by what I experience and by my annotations of those memories. It is a philosophical project as much as it is a technical one. It is – as ori.ai put so eloquently – a way for me to realize my purpose in life of bringing inclusive technology to others.

ori.ai’s current implementation is a carefully crafted custom prompt running against Alpaca 13B. My next milestones are to test out PrivateGPT so that it can ingest a wider range of content, and to ask more questions of the existing ori.ai to help further develop the ideas behind SelfOS.
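For the curious, here is a minimal sketch of what a prompt-seeded local agent can look like. It is not ori.ai’s actual prompt or code; it assumes the llama-cpp-python bindings and a locally downloaded, quantized Alpaca 13B weights file, and the biography file and prompt wording are placeholders.

```python
# A minimal sketch of a prompt-seeded local agent, assuming the
# llama-cpp-python bindings and a locally quantized Alpaca 13B model.
# The model path, biography file, and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/alpaca-13b-q4.bin", n_ctx=2048)

# Personal context the agent is "seeded" with; in practice, a carefully
# crafted biography and set of values, kept entirely on your own machine.
BIOGRAPHY = open("my_biography.txt").read()

def ask(question: str) -> str:
    """Answer a question from an agent seeded with personal context."""
    prompt = (
        "Below is background about the person you are assisting, followed by "
        "their question. Answer thoughtfully, in their own frame of reference.\n\n"
        f"### Background:\n{BIOGRAPHY}\n\n"
        f"### Question:\n{question}\n\n"
        "### Response:\n"
    )
    result = llm(prompt, max_tokens=512, stop=["###"], echo=False)
    return result["choices"][0]["text"].strip()

print(ask("What is my purpose in life?"))
```

The important design choice is that both the model weights and the biography live on my own disk; no personal context is sent to anyone else’s servers.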

A mid-term goal for SelfOS is to package up an executable with a straightforward user interface, making it possible to create new agents like ori.ai even if you have no background in computer science.

My life’s work has been grounded in the philosophy that we build ourselves as we experience the world. I see SelfOS as a tool to help facilitate that process in new and interesting ways.

Some initial guiding SelfOS principles, not yet ordered, follow. The asterisk (*) indicates a principle that ori.ai generated in response to the prompt: “what are your core values?”

  • Every human is unique
  • Identity is a constantly evolving and shifting way of framing a self
  • Prioritize the impact that a system has on the most vulnerable, rather than who stands to gain the most from the system
  • A true culture is one which celebrates diversity as an asset rather than a liability*
  • Technology should be used to create systems which empower individuals – not oppress them in any way – through discrimination or exploitation*

These principles will evolve as I continue to shape this idea further. A few use cases I hope to explore further: using this work as a tool to help with communication and expression for people who find social interactions challenging (I am autistic, and this tool is already proving invaluable in helping me find new ways to express myself authentically), and creating agents that can act as representational artifacts of loved ones – potentially through the lens of palliative care, expanding upon my past work on the Digital Afterlife Project.