TIL: The CRFM’s Five Stages for Evaluating Foundation Models
I’m reading the Center for Research on Foundation Models (CRFM) paper “On the Opportunities and Risks of Foundation Models”, a 200+ page paper from 2022 that defines and explores the ins and outs of foundation models. “Foundation model” is a term chosen to describe large AI models that are trained to be general-purpose task doers and are subsequently adapted, often by fine-tuning, to become good at specific tasks.
This is a dense paper packed with good information, and it’s quickly becoming my #1 recommended read for anyone who wants to get into the weeds of AI model research. Foundation models (not to be confused with the recently coined marketing term “frontier models”, which as far as I can tell means nothing other than wanting to own a new word) make up the majority of today’s hype-driven development in the world of machine learning, and this paper covers them in a way that is both relevant and helpful.
Today I read about the five stages of foundation model development. The paper breaks the development process down into these stages in order to pinpoint the unique challenges and ethical considerations at each step. The situation is much more nuanced than asking whether a given model is (or, more likely, is not) ethical.
The five stages are:
- Data creation
- Data curation
- Training
- Adaptation (illustrated in the sketch after this list)
- Deployment
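The paper doesn’t prescribe any particular tooling, but to make the adaptation stage concrete, here’s a minimal sketch of fine-tuning a pretrained model for a specific task using the Hugging Face transformers and datasets libraries. The model name, dataset, and hyperparameters are illustrative choices of mine, not from the paper.

```python
# A minimal sketch of the "adaptation" stage: take a general-purpose
# pretrained model and fine-tune it on task-specific data.
# Assumes the Hugging Face transformers and datasets libraries; the model,
# dataset, and hyperparameters are illustrative, not from the paper.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "distilbert-base-uncased"  # output of the earlier training stage
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model, num_labels=2  # adds a fresh task-specific classification head
)

# Task-specific data (its creation and curation are the earlier stages).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2_000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()                       # adaptation: update the pretrained weights
trainer.save_model("adapted-model")   # hand off to the deployment stage
```

The other stages bracket this snippet: the data it trains on had to be created and curated, the pretrained weights come out of the training stage, and the saved model still has to be deployed and monitored.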
The paper goes on to talk about how humans sit at both ends of this process: all data is ultimately attributable to a human source or cause, and humans are also the ones who use, and are impacted by, these systems once they’re deployed.
Having this vocabulary for the process of building AI models is a helpful way to emphasize the distinct challenges builders face at each step, and to clarify the techniques, mitigations, and harms that uniquely apply to each stage.