Machine Learning

Business & Leadership, Machine Learning, Random Thoughts

I Finally Signed Up for ChatGPT – Here’s Why

I’m no stranger to being a contrarian – so when ChatGPT exploded onto the scene a few years back, I resisted the urge to start using it for everything and instead decided to go as long as possible without it. At the one-year mark, I wrote about how my non-use of ChatGPT was going. The TL;DR – the ethical concerns related to large-scale, hosted LLMs around copyright, labor in training, climate, and monopolization/market capture led

Read more
Development, Machine Learning

TIL: Writing a Custom AdBlocker Ultimate Filter to hide LinkedIn’s AI Feature Upsell

On the list of “things that annoy me about AI”, upselling AI features as part of a service is close to the top. I’ve recently noticed that LinkedIn has been shoving an AI icon and related “coaching prompts” into my feed, and my workaround of hiding every post that had the icons on it wasn’t actually filtering things the way I wanted. Today I spent a few minutes learning
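The post itself walks through the details, but for context: AdBlocker Ultimate accepts custom filters in the common Adblock Plus syntax, where an element-hiding rule pairs a domain with a CSS selector. A minimal sketch looks like this – the class name below is a hypothetical stand-in, not LinkedIn’s actual markup:

```
! Hide a hypothetical AI-upsell element on LinkedIn
! (the selector is illustrative; inspect the page to find the real one)
linkedin.com##.ai-coaching-prompt
```

Lines starting with `!` are comments, and `domain##selector` hides every element matching the selector on that domain.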

Read more
Machine Learning, Philosophy, Random Thoughts

One Year In – I Still Don’t Use ChatGPT. Here’s How it’s Going.

As an autistic and queer individual, the role that technology plays in identifying, surveilling, and categorizing acceptable behavior within society is not lost on me. Despite using machine learning technology since 2014, I have used ChatGPT one time. The challenge that has come from not using OpenAI products and ChatGPT has actually been in the way that it impacts my relationship with people more than anything else.

Read more
Business & Leadership, Machine Learning, Philosophy, Random Thoughts

Eigenvectors as a Representation of Nuance? (Or, How I’m Re-Visiting all of my Freshman Year Math Classes To Explain my Brain) – Pt. 1

One decision that many organizations may be making right now is how to develop a corporate policy about artificial intelligence. Could, perhaps, an eigenvalue be calculated against a matrix of perspectives within an organization, to represent a new form of communicating the nuance and fluid nature of these complex, multi-cellular entities in which we house business endeavors? To evaluate this idea, I took a small (9-person) survey of team members and asked them to share their perspectives on AI innovation.
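As a rough illustration of the idea (not the method from the post – the matrix and numbers here are made up), one could build a pairwise agreement matrix from survey responses and extract its dominant eigenvector as a candidate “summary direction” of the team’s perspectives:

```python
# Hypothetical sketch: build a symmetric agreement matrix from survey
# respondents, then use power iteration (plain stdlib Python) to find the
# dominant eigenvector. All values below are invented for illustration.

def dominant_eigenvector(matrix, iterations=100):
    """Power iteration: repeatedly multiply a vector by the matrix and
    normalize it; for a symmetric matrix with a unique largest eigenvalue
    this converges to the corresponding eigenvector."""
    n = len(matrix)
    vec = [1.0] * n
    for _ in range(iterations):
        nxt = [sum(matrix[i][j] * vec[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in nxt) ** 0.5
        vec = [x / norm for x in nxt]
    return vec

# Toy 3x3 agreement matrix: respondents 1 and 2 mostly agree with each
# other, respondent 3 is an outlier. Diagonal is self-agreement (1.0).
agreement = [
    [1.0, 0.8, 0.2],
    [0.8, 1.0, 0.3],
    [0.2, 0.3, 1.0],
]
weights = dominant_eigenvector(agreement)
```

The resulting unit vector weights each respondent by how central their view is to the group’s overall pattern of agreement – one possible reading of “nuance as an eigenvector.”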

Read more
Development, Machine Learning, Philosophy, Random Thoughts

Being a Pregnant Developer in the Age of AI is Weird

As I’m writing this, I’m wearing a green t-shirt with a giant eyeball over my rapidly growing stomach. It’s Halloween, and I’ve decided to dress up as Mike Wazowski – it feels like I’m all stomach these days, so it felt appropriate. My partner dressed up as Boo. Halloween is an especially interesting time of year to reflect on identity and persona: it’s a holiday that encourages people to step into a different character and

Read more
Machine Learning, Random Thoughts

Developing Artificial Intelligence while Developing Human Intelligence

I joke sometimes that my entire career to date has been about Learning How to Human – that I was drawn to social VR and metaverse platforms because my neurodivergent self wanted to experience a taste of a world that I could understand, navigate, and flourish within. As it turns out, there’s a ton of overlap in the product domains of AI and metaverse, because while the core enabling technologies and their interaction modes look quite different from one another, the entire premise of the advancements and opportunities is grounded in emergent behaviors of computers simulating people and reality.

Read more
Communication, Machine Learning, Philosophy, Random Thoughts

The Metaphor of a Large Memory Model (LMM)

I can understand the appeal of language models. Language – the act and structure of communicating the cognitive processes I undergo on a day-to-day basis – is observable, whereas memory is not. Over the past several months, I’ve been working through the development of an architecture that may someday allow me to digitize my memory in a more complete way on the glass whiteboard in my office.

Read more
Foundation Models, Machine Learning

TIL: The One-Model-Many-Models Paradigm

Because foundation models are used to build many other models that are trained for new, more specific tasks, it can be hard to evaluate models consistently. The one-model-many-models paradigm attempts to study interpretability of foundation models by looking for similarities and differences across the foundation model and its downstream models, to understand which behaviors were likely emergent from the foundation model itself and which come from the derivative models.

Read more
Foundation Models, Machine Learning

TIL: The CRFM’s Five Stages for Evaluating Foundation Models

Today I read about the five stages of foundation model development. The paper breaks foundation models down into these stages in order to specify the unique challenges and ethical considerations at each step of the process. The five stages are: data creation, data curation, training, adaptation, and deployment. Having this vocabulary for explaining the process of building AI models is a helpful way to emphasize the different challenges that face builders at each step.

Read more
Development, Machine Learning, Tech Policy

No, Llama 2 is not actually open source

While Llama 2 is certainly interesting, and more openly licensed than some other AI language models, it’s definitely not open source. Open source is a term that is defined by a non-profit called the Open Source Initiative. The OSI explicitly calls out that code being open is not, on its own, sufficient for something to be called open source. The actual definition of open source includes provisions that must be true of the software’s licensing, and Llama 2’s “permissive” license doesn’t meet that definition.

Read more