Author: Liv

Machine Learning, Random Thoughts

Developing Artificial Intelligence while Developing Human Intelligence

I joke sometimes that my entire career to date has been about Learning How to Human – that I was drawn to social VR and metaverse platforms because my neurodivergent self wanted to experience a taste of a world that I could understand, navigate, and flourish within. As it turns out, there’s a ton of overlap in the product domains of AI and metaverse, because while the core enabling technologies and their interaction modes look quite different from one another, the entire premise of both fields’ advancements and opportunities is grounded in emergent behaviors of computers simulating people and reality.

Read more
Uncategorized

Algorithmic Response Re-projection

I ran a Not-Scientific-Experiment using everyone’s favorite liar, Google Bard, to get an example of what “re-projection” for AI responses might look like in a very basic form. While the Bad Experiment above doesn’t showcase the full potential of re-projecting algorithmic responses, it hints at something more to be uncovered. What if we built a dedicated AI application that was intentionally crafted to respond with not one answer, but with many, each response filtered through prompts and datasets that reflected a specific lived perspective?
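The many-responses idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the experiment from the post: the `generate` function is a stub standing in for a real language-model call, and the perspective names are invented for the example.

```python
# A minimal sketch of "re-projection": one question, many answers, each
# filtered through a framing prompt that reflects a specific lived perspective.
# The model call is stubbed out; a real app would query an LLM API here.

PERSPECTIVES = {
    "economist": "Answer from the perspective of a labor economist.",
    "artist": "Answer from the perspective of a working artist.",
    "accessibility": "Answer from the perspective of a disability advocate.",
}

def generate(prompt: str) -> str:
    """Stub for a language-model call."""
    return f"[model response to: {prompt}]"

def reproject(question: str) -> dict[str, str]:
    """Return one response per perspective instead of a single 'average' answer."""
    return {
        name: generate(f"{framing}\n\nQuestion: {question}")
        for name, framing in PERSPECTIVES.items()
    }

answers = reproject("How will AI change creative work?")
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

The key design choice is that the application returns the whole dictionary rather than collapsing the perspectives into one response.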

Read more
Communication, Machine Learning, Philosophy, Random Thoughts

The Metaphor of a Large Memory Model (LMM)

I can understand the appeal of language models. Language – the act and structure of communicating the cognitive processes I undergo on a day-to-day basis – is observable, whereas memory is not. Over the past several months, on the glass whiteboard in my office, I’ve been working through the development of an architecture that may someday allow me to digitize my memory in a more complete way.

Read more
Foundation Models, Machine Learning

TIL: The One-Model-Many-Models Paradigm

Because foundation models are used to build many other models that are trained for new, more specific tasks, it can be hard to evaluate models consistently. The one-model-many-models paradigm attempts to study the interpretability of foundation models by looking for similarities and differences between a foundation model and its downstream models, to understand which behaviors likely emerged from the foundation model itself and which come from the derivative models.
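The comparison described above can be sketched as follows. This is an illustrative toy, not the paper's method: the three model functions are stubs with hard-coded behavior, and the attribution rule (behaviors shared by all derivatives are credited to the base model) is a deliberate simplification.

```python
# Toy sketch of the one-model-many-models idea: probe the foundation model
# and its downstream (fine-tuned) models with the same input, and attribute
# behaviors shared by every model to the foundation model itself.

def foundation_model(prompt: str) -> str:
    return "shared-behavior" if "probe" in prompt else "base-specific"

def downstream_a(prompt: str) -> str:
    return "shared-behavior" if "probe" in prompt else "task-a-specific"

def downstream_b(prompt: str) -> str:
    return "shared-behavior" if "probe" in prompt else "task-b-specific"

def attribute_behavior(prompt: str) -> str:
    """Label a behavior as inherited if every derivative agrees with the base model."""
    base = foundation_model(prompt)
    derived = [downstream_a(prompt), downstream_b(prompt)]
    if all(out == base for out in derived):
        return "likely emergent from the foundation model"
    return "likely introduced by downstream adaptation"

print(attribute_behavior("probe: translate this"))
print(attribute_behavior("novel input"))
```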

Read more
Foundation Models, Machine Learning

TIL: The CRFM’s Five Stages for Evaluating Foundation Models

Today I read about the five stages of foundation model development. The paper breaks foundation models down into these stages in order to specify the unique challenges and ethical considerations at each step of the process. The five stages are: data creation, data curation, training, adaptation, and deployment. Having this vocabulary for explaining the process of building AI models is a helpful way to emphasize the different challenges that face builders at each step.

Read more
Development, Machine Learning, Tech Policy

No, Llama 2 is not actually open source

While Llama 2 is certainly interesting, and more openly licensed than some other AI language models, it’s definitely not open source. Open source is a term defined by a non-profit called the Open Source Initiative. The OSI explicitly calls out that merely making code available is not sufficient for something to be called open source; the actual definition includes provisions that must hold true of the software’s licensing. Llama 2’s “permissive” license doesn’t meet them.

Read more
Business & Leadership, Communication, Machine Learning, Philosophy, Random Thoughts

Resistance to Change is Often a Lack of Clarity

I’m on exchange at London Business School this week to study Strategic Innovation. Today, we covered a lot of ground, starting with why it is challenging for established organizations to truly innovate, as well as the individual thought patterns that make it hard for us to think “outside of the box”. And speaking of thinking outside the box, we also touched on communication (and why it’s so freaking hard to do it well). As it turns out, resistance to change is often a lack of clarity more than an actual resistance to trying something new, and the ambiguity that begets creative thinking – and subsequently, innovation – often stems from misalignment between the organizational and individual levels.

Read more
Communication, Machine Learning, Spatial Computing

Multidimensional Computing Accessibility in the Age of XR and AI

Today, I’m sharing the slides that I prepared for my talk at the 2023 XR Access Symposium. In building this presentation, I had a few goals – the first was to establish my own new paradigm for talking about XR and AI. I found that “multidimensional computing” encompassed both of those technologies nicely, especially when we think through the vast amount of information that is built into each of them. Is it a bit wordy, as far as terminology goes? Absolutely, and frankly, I love it even more for that.

Read more