Claude Code is a Floor Bed
My son, who just turned two years old, was always a wakeful baby – and now, he’s a wakeful toddler. He’s been in a Montessori-style floor bed since he was about 15 months old. On paper, the idea of a floor bed was extremely promising for our family. It gave us a comfortable place to lie down during his night wake-ups (I can count on one hand the number of times I’ve had a full night’s sleep since he was born). We also loved the general principles behind Montessori: giving our son the opportunity to develop life skills at his own pace and the freedom to explore his surroundings.
I’ve recently been more exposed to Claude Code through both my teammates at work – we use it heavily in our product development process – and now through my husband, who is using it to build a personally tailored web app that combines all of the sources of data that he’s accumulated over the past 15 or so years. Subsequently, I’ve decided: Claude Code is a floor bed.
Like a floor bed, Claude Code gives you more freedom than the traditional software development process would have allowed. Toddlers who stay in a crib until the recommended age of three or so are constrained to a set of known conditions. They have guard rails keeping them physically contained and preventing them from doing anything completely wild, because they’re generally not considered ready or equipped to handle the full process of getting in and out of bed on their own. Claude Code essentially takes those guard rails away for people building software.
With a floor bed, toddlers are given free roam of their room. Now, we’ve done a lot of work to make sure my son’s room is completely toddler-proofed and safe for him to be in, but as he’s gotten older, it’s become a lot more evident to us why more people don’t use floor beds.
Toddlers don’t understand invisible boundaries. They especially don’t understand invisible boundaries when they don’t want to go to sleep. Instead, they’re awake and they don’t understand why they’re being confined to their bedroom. My toddler, every night, puts on a show of refusing to stay in his bed for anywhere from 15 minutes, if we’re lucky, to 45 minutes or longer. And he is definitely not falling asleep. Instead, he wants to explore every option available to him. He can open and close the blinds as many times as he wants. He can get in and out of bed, slide up and down the bed, roll all over the bed, kick the walls, get out of the bed and kick his door. Try the lock; take his plushies over to the door and have them try the lock.
He doesn’t know any better. We should have known better. We should have known better than to give him full access to a space and activities that he doesn’t fully understand how to manage himself. Instead, the idea of giving him autonomy to make his own mistakes has resulted in a situation where he is, indeed, very autonomous, very independent, and also very, very angry about having to go to bed every night.
Working with Claude Code seems to have parallel challenges. We’re seeing more and more reports of software vulnerabilities that happen because people are using AI tools to create code they don’t really understand, because they don’t know how to operate within the boundaries of safe development. While these tools are getting better – like our parenting skills, we hope – what we’re seeing is that we may have opened the floodgates to too much capability a little too soon.
There’s nothing dangerous about my son’s floor bed setup. There is, however, something about it that chips away at a little bit of our sanity every night. I can’t help but wonder whether there are similar risks to using Claude Code as well. Recently, my husband pointed out how tempting it is, every day when he’s working on his projects, to toggle on the overage button rather than wait for his tokens to reset, so he can continue to explore. Silicon Valley has found a way to turn its dopamine extraction machine inward, now targeting the very developers who used to target the rest of us.
AI agents make the illusion of productivity and value creation more tangible than ever before. Compute is now currency; true value is separated even further from the means of labor and production.
Perhaps it isn’t an entirely fair comparison to make. My husband’s brain is more developed than my toddler’s. He can at least understand a little more about the complexities and trade-offs of using these tools versus doing the work directly himself. There’s also something really powerful to be said about what tools like Claude enable people to do when they can’t give their ideas their full-time attention. My husband is a trained and practiced software developer, so he does actually understand the output, and can guard and constrain the system appropriately. But often, the people using these tools don’t have that knowledge – they’re listening to the machines, and research is showing that we’re not checking them as much as we should be. We’ve gotten used to the illusion of capability in these coworker agents, and as a result, we’re trusting them more and more.
Given all of that, I still can’t help but think there are probably some lessons to be learned.
It turns out there’s a very good reason why pediatricians don’t recommend giving your toddler as much freedom as a floor bed offers. It’s a lot of additional cognitive overhead to manage, the benefits are unproven, and it ends up creating significant extra work in the long run for everyone else involved.
At the end of the day, there are likely not going to be any long-term harmful effects for my toddler, who will someday learn how to sleep through the night: that sleep is good, actually, and that climbing into bed does not need to be followed by climbing out of bed and banging on the door three or four times before we can actually go to sleep.
I’m not sure the same is true for tools like Claude. In some ways, the tools are unlocking near-superhuman capabilities for some – or at least they’re claiming to – but other effects of over-reliance on these tools are also starting to become known. People are increasingly finding themselves tired from the work of directing and managing agents and computer programs. It is unclear whether all of this is resulting in better technological outcomes – not to mention the broader challenges that society faces with these tools. What happens if we start to lose fundamental knowledge of the underlying information that these systems are drawing from?
There’s also a societal component to it all. One of the reasons why my husband and I were so drawn to the idea of a Montessori bed, and giving our son a lot of autonomy and agency, is that we live in a highly individualistic society. In American culture, he’s going to be expected to be able to be independent pretty early on in his life. If we still lived in a society where raising children was a collective effort, perhaps bedtime would be less of a struggle.
AI agents make it extremely easy for us to continue taking a hyper-individualistic approach to the way we operate our society. We are now able to build completely tailored software for ourselves instead of being in community or building software that serves many other people, and we are able to output so much more information on a daily basis than we could even a year or two ago. That mindset shift – normalizing that level of capability – will also have effects on our psyche and the way we’re developing. It has an effect on our mental health, just as a child who refuses to sleep has a challenging effect on a parent’s health. Are we really capable of understanding the spaces we’re in, the invisible boundaries around us, and how to choose what’s best for us, with all of these tools and agents now available to us?
I decided to try out Claude for myself. Or at least, I would have – if I had been able to get a functioning account set up. I first tried to sign up with my organizational SSO account, and my account was immediately (without even getting through the login) suspended for suspected TOS violations. My appeal has gone ignored for the last week. I tried a personal email account. After three failed login attempts with no error messages, I was finally able to select a Pro plan and enter my credit card details. The website told me the transaction failed, but I received an invoice by email. There is virtually no option to contact support unless you are able to log in (which I still can’t). The CEO of Anthropic brags about how the system was written by agents, with little human intervention and oversight. Frankly, it shows.
So maybe Claude isn’t a floor bed. Maybe Claude is Kool-Aid.