Epistemic status: more or less certain.

Why write this post? Why not.

Note: I wrote this post and only after finishing it did it become clear that it’s essentially an extension of something I wrote a week ago. If this resonates, you might want to read “When in doubt, do” alongside it.


I’m often surprised at the rewards of doing the basic things consistently, month after month, year after year.

Take investing, which is the industry I work in. You’d be surprised how successful you can be just by doing the basics: buying low-cost index mutual funds, diversifying across asset classes and geographies, consistently increasing the amount you invest, and staying disciplined. Four steps. Nothing more.

Same goes for health. Eating well, getting decent physical activity, and taking care of your mental health will help you avoid most of the pitfalls that afflict people.

Same goes for your brain. One of the easiest ways to not be a complete moron in life is to read five to ten pages a day and maybe one or two articles. That alone puts most average people into the leagues of normie superintelligence.

And even if you don’t know where to start, you can reason by exclusion. For health, just list the most harmful things and you’ll automatically end up doing the right stuff. Same for investing: list the most spectacularly dumb things people do with their money. And luckily, we live in the age of LLMs. Just ask: what are the things I should never do with X? Health, investing, life in general. You’ll end up knowing.


The trigger for this post was this conversation between Simon Willison, who writes one of the best tech blogs around and is a big inspiration for the link blogging I do here, and Lenny Rachitsky. It’s brilliant and I’d recommend it to anyone, technical or not, because the takeaways are universal.

Starting around the 20-minute mark, Simon gets into how LLMs have changed his life as a programmer. He has 25 years of experience as a software engineer, and after AI coding agents got genuinely good, all his prior assumptions about how long it takes to build software became essentially useless. Models can now produce really, really good work, fully tested, in a matter of hours.

At the risk of Lenny suing me, here’s the standout part of the conversation for me:

Simon: I want to stand in defense of software engineers for a bit. On the one hand, these things can write code that used to be our thing. But I’m finding that using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems, and by 11 a.m. I’m wiped out for the day. There is a limit on human cognition — how much you can hold in your head at one time, even if you’re not reviewing everything they’re doing. It’s very easy to pop that stack at the moment.

There’s a sort of personal skill we have to learn, which is finding our new limits. What is a responsible way to not burn out? I’ve talked to a lot of people who are losing sleep because they think their agents could be doing work for them, so they stay up an extra half hour setting off a bunch of tasks and wake up at 4 a.m. That’s obviously unsustainable. The agents only really got good in the past four or five months, so we’re all still learning what that looks like. But there’s an element of gambling and addiction to how we’re using some of these tools.

To stand in defense of software engineers: I get great results out of these things because they are amplifiers of existing skills and experience. I have 25 years of pre-AI experience which I can now amplify. I can talk to the agent at a very high level, use sophisticated engineering language that I’ve mastered over the years, and we can collaborate incredibly effectively. I can look at a problem and say, this is a one-sentence prompt and I know it’ll find and fix that bug — as opposed to this other problem, which is a different beast entirely.

The flip side is that my 25 years of instinct about how long it takes to build something is completely gone. I’d look at a problem and say, this will take two weeks, it’s not worth it. Now it might take 20 minutes, because all the crafty coding work the estimate was based on is now being handled by AI.

So I constantly throw tasks at AI that I don’t think it’ll be able to do, because every now and then it does. And when it doesn’t, you learn — okay, this model still can’t do this particular thing. But when it does something the previous models couldn’t, that’s actually cutting-edge AI research. You can be the first person in the world to spot that AI can now do X, just because you kept a backlog of interesting tasks and kept trying.

One of the more memorable things he says: using coding agents well is taking every inch of his 25 years of experience, and it’s mentally exhausting. He can fire up four agents in parallel working on four different problems, and by 11 a.m. he’s wiped out for the day.

It’s kind of crazy. And as an aside, it’s wild that we live in times where all we have to do is wave our verbal magic wand and these magical, ethereal entities just turn imagination into reality.

Simon also talks about how he thinks about himself in this new age. He’s having more fun now than ever. For years his New Year’s resolution was “do less.” This year’s is “do more.”

I’m ripping through all the side projects I’d wanted to do for years that were held back by my lack of technical knowledge. Doing them in days and weeks. Running countless experiments, building small things, seeing if they work, running them by people.

But the thing that stuck with me most was Simon’s mental models around how to think about yourself in the age of AI. He says he’s become more ambitious since LLMs came along, and that people should lean into these tools and figure out how they can make them better. The worries about skill atrophy are real, but being thoughtful about how you apply the technology and thinking about how it amplifies what you already know is the right frame.

I completely agree. Not being ambitious in the age of LLMs is a sin.

Simon: My New Year’s resolution this year was the opposite of every previous year. Every year I’d tell myself: this year I’m going to focus more, take on less. This year my ambition was to take on more and be more ambitious. We’ve got these tools — bring it all in, try to do everything.

Lenny asked how it was going.

Simon: Fun. I’m enjoying myself. I’ll probably get to the end of the year and realize the most important things I should have been focusing on didn’t get done, but that’s the case even in years when focus is the ambition.

But the real banger, the part that triggered this post, was Simon saying that because things are changing so fast, the only universal skill is being able to roll with the changes. Any mental model you’ve built about what LLMs can do won’t survive the next model release.

Simon: I think trying to apply these tools to self-improvement is a really useful habit to develop, because honestly everything is changing so fast right now. The only universal skill is being able to roll with the changes. That’s the thing we all need.

Weirdly, the term that comes up most in conversations about how to be great with AI is agency. Human beings have agency — we use it to decide what problems to take on and where to go. Agents, I’d argue, have no agency at all. The one thing AI can never have is agency, because it doesn’t have human motivations. Sure, you can tell it to make more money or whatever, but it’s never going to decide on its own what makes sense to act on next. So invest in your own agency. Invest in how you use this technology to get better at what you do and to do new things.

A classic example: Boris Cherny, the creator of Claude Code, has said that what used to be clever prompt engineering and scaffolding in a previous model is now just default behavior in the latest one. People go to great lengths to wrangle and steer these models, and with each upgrade, all that cleverness becomes unnecessary. Whatever intuitions I’d built about how models worked are now more or less useless.


I related deeply to the experimentation Simon described. I’m not a programmer, and I shouldn’t be putting myself in the same sentence as Simon Willison, but I’ve been running every little experiment I’ve wanted to run, and I’m kind of unhinged about it. I don’t care how long it takes or whether it’s a waste of time. I’m just leaning into my curiosity and all those little prompts in my brain that say, hey, let’s try this, it’ll be a billion-dollar startup.

This isn’t a new obsession for me. I wrote about it recently — the idea that when in doubt, you should just do. This post is more or less the extended argument for why.

Action expands the surface area of possibilities and outcomes. Chance is the thing that finds you when you expand that surface.

I can now whip up five prototypes in ten minutes and get a quick sense of whether they work, whether they’re worth pursuing, or whether they’re useless. If they’re useless, I kill them. All I lose is a couple of hours. What I gain is a sense of the patterns: what works, what doesn’t, what’s worth doing and what isn’t.

But while Simon’s advice about rolling with changes is genuinely great, I don’t think most people will actually do it. Because most people, given a choice between doing something and not doing something, almost always choose not doing something.

This is also a biological reality. Our bodies and brains aim for homeostasis, a golden mean where everything is chugging along without being too volatile. We like the moderate middle. We want a life where nothing too bad or too good happens.

And this gets us in trouble when it comes to intellectual growth and careers. If you don’t experiment, you don’t know the full range of what’s possible. It sounds obvious, but you’d be surprised at how many people don’t seem to grasp it. We are all essentially blind people groping in the dark. All we know is what has worked before and where we currently stand. The future is inherently unknowable. Until we fuck around, we can’t imagine the possibilities.

Despite this being obvious, I’m always struck by how many smart people, people who can take shots, who have the privilege and the wherewithal to try new things, just stick to what they know. Stay on whatever comfortable path they’re already on.

I try not to be judgmental about it. I force myself to think about why the smartest, most gifted people are like this. I don’t have a great answer. But what feels abundantly clear to me, after having lived almost a third of my life, is that the payoff to experimentation is remarkable, an order of magnitude higher than whatever you invest in time, effort, and money.

It’s like buying a stock for ten rupees and watching it go to fifty crore.

In most professional settings, the payoffs to experimenting are infinitely higher. If you’re lucky enough to be somewhere that your personal interests align with your professional ones, whatever you experiment with personally will pay off professionally. And if you’re in a setting where experimentation is encouraged, you’d be a fool not to take advantage of it.

A random conversation, a random tool you try, might end up becoming something a lot of people use, or in rare cases, a new product altogether. Claude Code itself, which seems to be all the rage among both technical and non-technical people, was the result of exactly this kind of wild experiment. Boris Cherny started it as a side project in September 2024 with no idea it would become what it did. The story gets better: even Dario Amodei, Anthropic’s CEO, was baffled by the internal adoption. He apparently asked Cherny whether he was forcing engineers to use it, because everyone seemed to be on it. All Cherny had done was give people access.


So why don’t people experiment? Let me put on my shitty psychologist hat for a second.

Some people genuinely don’t have the luxury. Their life situations are hard, and they can’t afford to introduce more uncertainty. That’s a real constraint. But setting aside that extreme, I think most people are fundamentally afraid of the unknown.

We are biological organisms hardwired to crave certainty. Which is why we see penises and horses and oddly shaped bums when we stare at clouds. Given a choice between certainty and uncertainty, we almost always choose certainty. There’s a famous UCL study where participants had to guess which rocks in a computer game had snakes under them, and when there was a snake, they got a mild electric shock. What the researchers found was that a 50% chance of getting shocked was far more stressful than knowing for certain you would be shocked. Uncertainty, it turns out, is harder on us than guaranteed pain.

Then there’s imposter syndrome and underconfidence, something I still struggle with, and it doesn’t go away.

A lot of people also can’t think in terms of exponential payoffs. They see the cost of experimentation and not the potential reward. The cost feels like a loss, so they don’t take the shot. And the cost could be money, time, effort. Given a choice between scrolling reels and picking up a book, it’s not even a competition for most of us.

And then there are people in professional environments where experimentation isn’t just discouraged, it’s forbidden. Companies that want employees to do the thing they’re asked and go home. No pirates, no explorers.

But maybe one of the biggest reasons people don’t experiment is that they simply don’t give themselves permission. Even capable people have this weird blind spot when it comes to deciding whether it’s worth trying something new. They miscalculate the payoff and don’t take the shot.

And there’s also the fact that most of us either underestimate or overestimate our own abilities. I’ve rarely met someone who correctly estimates what they can do. Because of that, we either take too many shots or too few. I don’t think this ever gets fully fixed, no matter how much introspection you do. I certainly haven’t managed it.

And this applies more than anything else to AI tools specifically. A lot of people relate to LLMs primarily from a place of threat — this is the thing that will take my job. And from that place of threat, they either refuse to use them altogether, or they convince themselves that using these tools means handing over their training data to the very thing that will eventually replace them. Which is, I’m sorry, a spectacularly idiotic way to think about it. If the thing is coming for your job regardless, the only sensible response is to get so good at using it that you become the last person standing.


All that said, life becomes infinitely more fun when you try new things, take random shots, and allow life to sweep you off your chosen course and carry you somewhere you couldn’t have imagined. A lot of the most wonderful things that happen in life are downstream of walking off the beaten path.

Being open to experimentation, allowing serendipity and randomness and occasional chaos to kick you in the mental groin, makes life more fun.

And please don’t misread this as coming from someone who has any of this figured out. God knows how many times I’ve chosen Netflix over doing something new. God knows how many times I’ve been afraid to experiment. I’m as flawed as the next person. This is more a note to self than me confidently puffing my chest out and masturbating and spewing my half-baked thoughts on these digital pages.

What do we have to lose?

And if fun isn’t reason enough, there’s also the very real possibility that we could all end up part of what people are calling the permanent underclass. My personal theory is there’s a non-trivial probability that AI will keep getting better and better.

And think about it this way: if that’s true, if AI does automate away most jobs and normies like us get consigned to the permanent underclass, wouldn’t that be all the more reason to experiment like crazy right now? To front-run that depressing reality? To throw enough things at the wall that one of them becomes a billion-dollar startup, which you can then quickly sell to someone, preferably a rich idiot, make all the money, and swim in a pool full of dollar bills like that meme? Wouldn’t you want to do that?

And if AI does get so horny from ingesting all the weird porn that’s the last thing left on the internet that it turns us all into its own on-demand pleasure providers, shouldn’t that be all the more reason to fuck around and try random things while we still can? And here’s the other thing: our soon-to-be AI oligarchs, Sam Altman and Dario, have generously offered, out of some temporary and presumably fleeting benevolence before they start ruling over us, to subsidize our token consumption. Whether at work or on a personal side project, you should be burning tokens like there’s no tomorrow and you’re about to die from insert-reason-here. The tools are cheap, the window is open, and the permanent underclass isn’t here…yet. Fuck about!

Also, the simplest way to get ahead in life is by taking more shots than most people. For most people, that number is basically zero. Even doing one or two more things than most people puts you surprisingly far ahead. I hate framing it as you versus other people, but you do compete, especially professionally. And if you can take shots that others won’t, and you can stomach the uncertainty and stay open to feedback from reality as you go, you’ll end up in places other people simply can’t follow.

I can already sense the tension in all of this. On one hand I’m saying doing the basics consistently will take you remarkably far. On the other, I’m saying the only universal skill is rolling with the punches. Those might seem fundamentally contradictory. They’re not, to my mind. Being able to try new things, take shots, and experiment is itself a basic. It’s as foundational an ability as it gets.

More importantly, we live in a golden age. We have all the tools to run every experiment we ever wanted to run, build everything we ever wanted to build.

We are here for a brief, fleeting fart on this planet. For as long as we’re here, about as long as a smelly fart lingers, we might as well have fun.

What’s the whole point of not doing things?

Let’s just fuck around and see where it goes.

Or go back to swiping reels, you soon-to-be disgusting, poor, and homeless person.