Mapping the Self: Data, Introspection, and the Future of Human-Centered Knowledge
Our digital lives are increasingly noisy, shaped by the gigabytes of information we consume every day. The bits and bytes that make their way into our psychology via messages, articles, videos, shorts, and podcasts are algorithmically curated to feel relevant. The platforms that serve this information use our attention to personalize recommendations and advertisements, leaving us in a swirling vortex of information. And yet, within this landscape, our software can become truly individualized – if we thoughtfully design emerging applications around user privacy, customization, and co-constructed features.
Noise is not inherently bad, especially in a computational sense. Noise creates the conditions for randomness, and in human creativity, randomness is thought to be a key ingredient of novel innovation. Neurologically, one way this manifests is in the formation of new pathways, as disparate ideas are suddenly joined by fresh synaptic connections. The noise we’re exposed to allows us to have new ideas, which in turn prompts us to seek out novel information – and the creative cycle continues.
Data is the lifeblood of artificial intelligence. The machine learning algorithms that power our ChatGPTs, Copilots, and Geminis learn patterns from internet-scale pools of content, yielding probabilistic generators that interpret human language almost as readily as a 1 or a 0.
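To make ‘probabilistic generator’ concrete, here is a minimal sketch in Python. Everything in it – the toy vocabulary, the probabilities, the function names – is invented for illustration; a real model derives its distribution from billions of learned parameters:

```python
import random

def next_token_distribution(context: str) -> dict[str, float]:
    """Stand-in for a trained model's output layer (a softmax over the vocabulary)."""
    return {"data": 0.45, "noise": 0.30, "signal": 0.20, "silence": 0.05}

def sample_next_token(context: str, temperature: float = 1.0) -> str:
    dist = next_token_distribution(context)
    # Temperature reshapes the distribution: lower values make the choice
    # more deterministic, higher values make it noisier and more surprising.
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights, k=1)[0]

print(sample_next_token("Our digital lives are shaped by", temperature=0.7))
```

The temperature parameter is where the noise from earlier re-enters the picture: turn it up and the generator’s output becomes more random – and, occasionally, more surprising.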
Whether we realize it or not, we all generate data. Our browsing habits and preferences online create personal data. Our health records held by our doctors, our LinkedIn profiles, our old Facebook messages – all of them store information about us that can reveal more than we might expect. We already let others make decisions about us based on our data, but what if we had better visibility into that data as a tool to understand ourselves?
Today, most social media algorithms are designed to maximize attention. User interfaces prioritize easy, repeatable interactions that keep users engaged. Some apps use controversial dark patterns that manufacture urgency or anxiety, nudging users into actions they might not have taken if the interface were more transparent or respectful of their intent. But these defaults can be challenged, especially if we make introspection an explicit design goal of this emerging generation of interfaces.
Reclaiming authorship and ownership of our knowledge and attention is a necessary form of agency for the next generation of computing. We can defend our thinking by treating knowledge not merely as something to be developed or managed, but as a tool that shapes and leads us through the world. Information access – through AI agents or otherwise – augments and reflects back the concepts we form of our environment, and lets us craft the narratives we hold as individuals and societies.
In her book How Emotions Are Made, neuroscientist Lisa Feldman Barrett explores the relationship between external stimuli and the concepts we form about feeling and emotion. Barrett’s research explains how our brains map the information we perceive onto physical sensations within our bodies, and how repeated exposure to similar experiences creates neural pathways that solidify into a concept. This process does not happen just once: the brain continually rewires itself in response to new information and ideas. This neuroplasticity is how we can ‘change our minds’ when exposed to new information, whether that information is about something within us or about our world.
This applies to the way we think about ourselves, too. In our youth, we are given ‘data’ about ourselves by our families, teachers, and friends. We build an initial mental model of the world and, as we grow, try to fit new information into the beliefs we have already constructed. Techniques like meditation, therapy, and journaling all offer insight into how we understand ourselves (and our relationship to the world), but until now we have had limited tools for capturing and reflecting on how our minds change over time. What might it look like to design systems that use our own data as a catalyst for reflection? There is an opportunity to shift our relationship with data away from simply consuming it and toward something more deliberate: metabolizing it, as sketched below.
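As a thought experiment, such a system need not be complicated. The sketch below is a hypothetical Python example: it compares two journal entries written years apart using simple word sets (a real tool might use embeddings or a language model) and surfaces which themes persisted, faded, or emerged.

```python
import re

# Hypothetical sketch: compare journal entries written years apart and
# surface which themes persisted, faded, or emerged. Word sets keep the
# idea self-contained; a real tool would use richer representations.

STOPWORDS = {"the", "a", "an", "and", "or", "i", "to", "of", "in",
             "my", "it", "is", "that", "about", "whether", "now", "more"}

def themes(entry: str) -> set[str]:
    words = re.findall(r"[a-z']+", entry.lower())
    return {w for w in words if w not in STOPWORDS}

def reflect(entry_then: str, entry_now: str) -> None:
    then, now = themes(entry_then), themes(entry_now)
    print("Persisted:", sorted(then & now))
    print("Faded:    ", sorted(then - now))
    print("Emerged:  ", sorted(now - then))

reflect(
    "I worry about work and whether my ideas matter.",
    "Work feels steadier now; I think more about community and rest.",
)
```

Even this toy comparison turns stored text into a prompt for reflection – the entries stop being an archive and start acting as a mirror.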
Within our personal data landscapes, we are confronted with a challenging truth of our times: there is no ‘base’ reality that everyone experiences. Each person’s experience, and the context they bring with them, reflects a unique web of interdependent people, cultures, experiences, and institutions that they are – or have been – a part of. Modern technological tools must embrace this multiplicity, not flatten it.
Generative AI offers new capabilities for translating and reshaping content online. The advent of ‘liquid content’ – where images, text, and audio shift seamlessly from one format to another – lets people further shape their own experiences. Consider Google’s NotebookLM, an AI tool that transforms documents into study guides and podcasts, or ElevenReader, which turns any written document into an audiobook. Liquid content promises a remarkable level of adaptability for our information. But throughout all of this, we need to return to the imperative of data sovereignty. In a world where content can be reshaped with a few words of instruction, true ownership means ensuring that communities and individuals own not just their data, but their narratives and algorithms.
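One way to picture liquid content is as a registry of transforms over the same underlying material. The Python below is entirely hypothetical – products like NotebookLM and ElevenReader presumably put generative models behind transformations of this shape, whereas here each transform is a stub:

```python
from typing import Callable

# Hypothetical sketch: liquid content as a registry of format transforms.
# Each transform here is a stub standing in for a generative model call.

Transform = Callable[[str], str]
TRANSFORMS: dict[tuple[str, str], Transform] = {}

def register(src: str, dst: str):
    def wrap(fn: Transform) -> Transform:
        TRANSFORMS[(src, dst)] = fn
        return fn
    return wrap

@register("article", "study_guide")
def to_study_guide(text: str) -> str:
    return f"Key points from: {text[:40]}..."

@register("article", "podcast_script")
def to_podcast_script(text: str) -> str:
    return f"Host A: Today we're discussing {text[:40]}..."

def reshape(content: str, src: str, dst: str) -> str:
    return TRANSFORMS[(src, dst)](content)

print(reshape("Mapping the self through data and introspection", "article", "podcast_script"))
```

The design choice worth noticing is that the content and its presentations are decoupled – which is exactly where the sovereignty question lands: whoever controls the transforms shapes the narrative.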
Imagine a world where all of your personal data lives in your own vault, with seamless protections and sharing, alongside a context-aware personal agent that can answer not just logistical questions about your calendar, but also help you understand how you have grown over time, or point out differences between how you act at home and at work based on your Facebook and LinkedIn histories. Agent-mediated content is the direction the web is headed, but it will be a battle to ensure that ownership of these experiences stays in the hands of those whose data is feeding the machine.
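Here is a minimal sketch of what such a vault’s access model might look like, using only the Python standard library; the record scopes, grant mechanism, and agent names are all hypothetical, meant only to show that data sovereignty is, at bottom, an access-control design choice:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    scope: str    # e.g. "health", "social.linkedin", "calendar"
    content: str

@dataclass
class Vault:
    records: list[Record] = field(default_factory=list)
    grants: dict[str, set[str]] = field(default_factory=dict)  # agent -> granted scopes

    def add(self, scope: str, content: str) -> None:
        self.records.append(Record(scope, content))

    def grant(self, agent: str, scope: str) -> None:
        self.grants.setdefault(agent, set()).add(scope)

    def revoke(self, agent: str, scope: str) -> None:
        self.grants.get(agent, set()).discard(scope)

    def read(self, agent: str, scope: str) -> list[str]:
        # An agent sees only what the owner has explicitly granted.
        if scope not in self.grants.get(agent, set()):
            raise PermissionError(f"{agent} has no grant for {scope!r}")
        return [r.content for r in self.records if r.scope == scope]

vault = Vault()
vault.add("calendar", "Dentist, Tuesday 9am")
vault.add("social.linkedin", "Posted an update about a new project")
vault.grant("scheduling-agent", "calendar")
print(vault.read("scheduling-agent", "calendar"))  # allowed: ['Dentist, Tuesday 9am']
vault.revoke("scheduling-agent", "calendar")       # access ends when the owner says so
```

The essential property is that grants flow outward from the owner and can be revoked at any time – the agent works for you, not the other way around.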
Imagine what becomes possible when we treat the fabric of our lives – currently scattered across discrete applications and platforms – as the rich, interconnected tapestry that makes up our social groups. Mapping ourselves, across all our contexts, could unlock better collaboration, trust, and collective intelligence. These tools could help us move beyond simply working better toward new ways of supporting authentic belonging.
Changing who we are into who we can be will not happen simply because new technologies emerge. At the end of the day, we as humans are responsible for how we treat each other, how we respect one another, and how we cultivate communities of mutual care. Mapping ourselves through data analysis and introspective tools can help guide us, but it is ultimately up to us to use the technology available to facilitate the change we want to see.