Uncategorized

Algorithmic Response Re-projection

I ran a Not-Scientific-Experiment using everyone’s favorite liar, Google Bard, to get an example of what “re-projection” for AI responses might look like in a very basic form. While the Bad Experiment above doesn’t showcase the full potential of re-projecting algorithmic responses, it hints at something more to be uncovered. What if we built a dedicated AI application that was intentionally crafted to respond with not one answer, but with many, each response filtered through prompts and datasets that reflected a specific lived perspective?
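The many-answers idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not the experiment from the post: `PERSPECTIVES` and `generate` are stand-ins I'm inventing here, where `generate` could be any chat-model call.

```python
# A minimal sketch of "re-projection": instead of returning one answer,
# ask the same question once per perspective, each time framed by a
# prompt written from that lived perspective. `generate` is a stand-in
# for any model call (hypothetical, not a real API).

PERSPECTIVES = {
    "urban renter": "Answer as a renter living in a dense city.",
    "rural farmer": "Answer as a farmer in a rural community.",
    "wheelchair user": "Answer as a person who uses a wheelchair daily.",
}

def reproject(question, generate):
    """Return one response per perspective rather than a single answer."""
    return {
        name: generate(f"{framing}\n\nQuestion: {question}")
        for name, framing in PERSPECTIVES.items()
    }

# Example with a stub generator that just echoes the framing line:
answers = reproject(
    "Should the city add more bike lanes?",
    lambda prompt: prompt.splitlines()[0],
)
for name, answer in answers.items():
    print(f"[{name}] {answer}")
```

Swapping in different prompt sets (or different fine-tuned models) per perspective is the obvious next step the post gestures at.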

Read more
Communication, Machine Learning, Philosophy, Random Thoughts

The Metaphor of a Large Memory Model (LMM)

I can understand the appeal of language models. Language – the act and structure of communicating the cognitive processes I undergo on a day-to-day basis – is observable, whereas memory is not. Over the past several months, I’ve been working through the development of an architecture, sketched on the glass whiteboard in my office, that may someday allow me to digitize my memory in a more complete way.

Read more
Foundation Models, Machine Learning

TIL: The One-Model-Many-Models Paradigm

Because foundation models are used to build many other models, each trained for new, more specific tasks, it can be hard to evaluate models consistently. The one-model-many-models paradigm studies the interpretability of foundation models by looking for similarities and differences between a foundation model and its downstream models, to understand which behaviors likely emerged from the foundation model itself and which come from the derivative models.

Read more
Foundation Models, Machine Learning

TIL: The CRFM’s Five Stages for Evaluating Foundation Models

Today I read about the five stages of foundation model development. The paper breaks foundation model development down into these stages in order to specify the unique challenges and ethical considerations at each step of the process. The five stages are: data creation, data curation, training, adaptation, and deployment. Having this vocabulary for explaining the process of building AI models is a helpful way to emphasize the different challenges that face builders at each step.

Read more
Development, Machine Learning, Tech Policy

No, Llama 2 is not actually open source

While Llama 2 is certainly interesting, and more openly licensed than some other AI language models, it’s definitely not open source. “Open source” is a term defined by a non-profit called the Open Source Initiative. The OSI explicitly calls out that code being openly available is not sufficient for something to be called open source. The actual definition of open source includes provisions that must hold for the software’s licensing, and Llama 2’s “permissive” license doesn’t meet them.

Read more
Business & Leadership, Communication, Machine Learning, Philosophy, Random Thoughts

Resistance to Change is Often a Lack of Clarity

I’m on exchange at London Business School this week to study Strategic Innovation. Today, we covered a lot of ground, starting with why it is challenging for established organizations to truly innovate, as well as the individual thought patterns that challenge us in thinking “outside of the box”. And speaking of thinking outside the box, we also touched on communication (and why it’s so freaking hard to do it well). As it turns out, resistance to change is often a lack of clarity more than an actual resistance to trying something new, and the ambiguity that begets creative thinking – and subsequently, innovation – often comes from conflicts between organizational and individual alignment.

Read more
Communication, Machine Learning, Spatial Computing

Multidimensional Computing Accessibility in the Age of XR and AI

Today, I’m sharing the slides that I prepared for my talk at the 2023 XR Access Symposium. In building this presentation, I had a few goals – the first was to establish my own new paradigm for talking about XR and AI. I found that “multidimensional computing” encompassed both of those characteristics nicely, especially when we think through the vast amount of information that is built into each of those types of technology. Is it a bit wordy, as far as terminology goes? Absolutely, and frankly, I love it even more for that.

Read more
Machine Learning, Random Thoughts, Tech Policy

Google Bard: OpenAI’s Exclusivity Deal with Microsoft may Violate Anti-Trust Law in US

I decided to ask both ChatGPT and Google Bard to provide three arguments for why this particular exclusivity deal was in violation of the Sherman Act.

Completely in line with my expectations, ChatGPT immediately announced that it was unable to answer such a question, though it happily explained some generics about the Sherman Act. Bard, on the other hand, gave a seemingly quite convincing breakdown of the reasons for and against an anti-competitive ruling.

Read more