As I’m writing this, I’m wearing a green t-shirt with a giant eyeball over my rapidly growing stomach. It’s Halloween, and I’ve decided to dress up as Mike Wazowski – it feels like I’m all stomach these days, so it felt appropriate. My partner dressed up as Boo. Halloween is an especially interesting time of year to reflect on identity and persona: it’s a holiday that encourages people to step into a different character and
I’ve written a bit about my interest in using local artificial intelligence for memory recall, and this week I finally made some progress on a project to start turning some of my earlier thinking into an actual part of my workflow. Memory Cache is a project that allows you to save a webpage as a PDF document, which is then used to augment context for a local instance of privateGPT.
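For the curious, here’s a rough sketch of what the capture step could look like, under a few assumptions: it uses headless Chromium’s built-in print-to-PDF mode to save a page, and it assumes a privateGPT setup that ingests files from a source_documents folder via an ingest script. The folder name and ingest command are assumptions for illustration, not a description of how Memory Cache itself is built.

```python
# Rough sketch of the capture step: print a webpage to PDF with headless
# Chromium, then drop it where a local privateGPT instance looks for documents
# to ingest. The folder name and the ingest command are assumptions.
import pathlib
import subprocess

SOURCE_DOCS = pathlib.Path("privateGPT/source_documents")  # assumed ingest folder


def save_page_as_pdf(url: str, out_name: str) -> pathlib.Path:
    """Use headless Chromium to render a page and print it to a PDF file."""
    SOURCE_DOCS.mkdir(parents=True, exist_ok=True)
    out_path = SOURCE_DOCS / out_name
    subprocess.run(
        ["chromium", "--headless", "--disable-gpu",
         f"--print-to-pdf={out_path}", url],
        check=True,
    )
    return out_path


if __name__ == "__main__":
    save_page_as_pdf("https://example.com/some-article", "some-article.pdf")
    # Re-run privateGPT's ingest step (e.g. `python ingest.py`) so the new PDF
    # becomes part of the local context it can draw on.
```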
Back in 2015, I bought the domain ‘livi.link’ to use as a shortlink domain. Why? It’s freaking adorable, and it meant that I could leave links around the internet that would be immediately identified as mine. I set up a bit.ly account with Twitter authentication, and used “livi links” for years. Then, I deleted my Twitter account, and forgot to detach it from my bit.ly account. Oops. Yesterday, I wanted to make a short link
I joke sometimes that my entire career to date has been about Learning How to Human – that I was drawn to social VR and metaverse platforms because my neurodivergent self wanted to experience a taste of a world that I could understand, navigate, and flourish within. As it turns out, there’s a ton of overlap in the product domains of AI and the metaverse, because while the core enabling technologies and their interaction modes look quite different from one another, the entire premise of the advancements and opportunities is grounded in emergent behaviors of computers simulating people and reality.
I ran a Not-Scientific-Experiment using everyone’s favorite liar, Google Bard, to get an example of what “re-projection” for AI responses might look like in a very basic form. While the Bad Experiment above doesn’t showcase the full potential of re-projecting algorithmic responses, it hints at something more to be uncovered. What if we built a dedicated AI application that was intentionally crafted to respond with not one answer, but with many, each response filtered through prompts and datasets that reflected a specific lived perspective?
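To make the idea a little more concrete, here’s a minimal sketch of what such an application might do: pose the same question once per perspective prompt and return the whole set of answers rather than a single one. The perspective names and the llm_generate callable are placeholders I’ve invented for illustration, not a real API.

```python
# Sketch of "re-projection": instead of one answer, ask the model to answer
# the same question once per perspective framing and return the full set.
# `llm_generate` is a stand-in for whatever local or hosted model you call.
from typing import Callable

PERSPECTIVES = {
    "urban_planner": "Answer as an urban planner focused on accessibility.",
    "rural_resident": "Answer as a long-time resident of a rural farming town.",
    "teenager": "Answer as a sixteen-year-old describing their daily experience.",
}


def reproject(question: str, llm_generate: Callable[[str], str]) -> dict[str, str]:
    """Return one response per lived perspective rather than a single 'neutral' answer."""
    return {
        name: llm_generate(f"{framing}\n\nQuestion: {question}")
        for name, framing in PERSPECTIVES.items()
    }
```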
I can understand the appeal of language models. Language – the act and structure of communicating the cognitive processes I undergo on a day-to-day basis – is observable, whereas memory is not. Over the past several months, on the glass whiteboard in my office, I’ve been working through the development of an architecture that may someday allow me to digitize my memory in a more complete way.
Because foundation models are used to build many other models that are adapted to new, more specific tasks, it can be hard to evaluate models consistently. The one-model-many-models paradigm attempts to study the interpretability of foundation models by looking for similarities and differences between the foundation model and its downstream models, to try and understand which behaviors likely emerged from the foundation model itself and which come from the derivative models.
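Here’s a toy sketch of how that comparison might be wired up, assuming you can query both the foundation model and its derivatives with the same probes. The generate callables are placeholders, and literal string comparison is only a crude stand-in for real behavioral analysis.

```python
# Illustrative sketch of the one-model-many-models idea: run the same probes
# through a foundation model and its downstream variants, then separate
# behaviors shared with the base (likely inherited) from behaviors unique to
# a derivative. The callables here are placeholders, not a real API.
from typing import Callable


def attribute_behaviors(
    probes: list[str],
    base: Callable[[str], str],
    derivatives: dict[str, Callable[[str], str]],
) -> dict[str, dict[str, str]]:
    report: dict[str, dict[str, str]] = {}
    for probe in probes:
        base_out = base(probe)
        report[probe] = {"base": base_out}
        for name, model in derivatives.items():
            out = model(probe)
            # Crude proxy: identical output suggests the behavior came from the
            # foundation model; divergence points at the adaptation step.
            report[probe][name] = "inherited" if out == base_out else f"diverged: {out}"
    return report
```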
Today I read about the five stages of foundation model development. The paper breaks foundation model development down into these stages in order to specify the unique challenges and ethical considerations at each step of the process. The five stages are: data creation, data curation, training, adaptation, and deployment. Having this vocabulary for explaining the process of building AI models is a helpful way to emphasize the different challenges builders face at each step.
While LLAMA 2 is certainly interesting, and more openly licensed than some other AI language models, it’s definitely not open source. Open source is a term defined by a non-profit called the Open Source Initiative. The OSI explicitly calls out that code being open is not, on its own, sufficient for something to be called open source. The actual definition of open source includes provisions that must be true of the software’s license, and LLAMA 2’s “permissive” license doesn’t meet them.
I turned to Bard to help brainstorm ideas for how I can replace the OpenAI Completions API in a project that a former colleague worked on, and it helpfully recommended three alternatives. It turns out that the Bard API doesn’t exist – at least, it doesn’t exist yet.