The Barefoot Engineer

Recently, a colleague mentioned with pride that he’d shipped a large feature without using LLMs. My initial reaction was that writing code without LLMs is like running a 5K barefoot. Sure, it’s doable. But why make it harder than it has to be? Still, I was curious about the source of that pride. Was it fear of obsolescence? Or something deeper about identity and craft? After all, this same colleague relies on AI tools for other tasks.

The answer lies in the deep connection between our tools and our identity. Letting go of familiar tools feels like letting go of part of ourselves. As Karl Weick noted in his study of firefighters, “dropping one’s tools is a proxy for unlearning, for adaptation, for flexibility.” Firefighters have died in wildfires holding onto heavy equipment they could have dropped to run faster. For many engineers, code defines their craft. Being asked to code differently can feel like being asked to give up who they are.

As AI reshapes software engineering, those who lean on core principles and evolve their tools will thrive. Those who resist change risk irrelevance.

We’ve seen this before. In the 1950s, many engineers resisted compilers, insisting their hand-crafted assembly was superior. Early compilers outperformed many of them, though not all. Over time, compilers outpaced the majority and became essential tools in every engineer’s toolbox. Compilers didn’t change the fundamentals of programming. They changed how those fundamentals were applied. AI tools are following the same path: first doubted, now transforming how we work.

Principles Stay, Tools Evolve

While AI changes the tools we use, the principles that guide good engineering remain the same. Principles are about how we approach problems. We still decompose complex systems into manageable pieces. We still follow the UNIX philosophy of doing one thing well. We still design clean, interoperable interfaces. These practices are timeless. They mattered when writing assembly. They matter when building distributed systems or orchestrating LLM agents.

Tools, on the other hand, are force multipliers. They help us work faster. They automate repetitive tasks. They reduce human error. But they don’t replace creativity, judgment, or understanding. A new IDE or an AI assistant may change the mechanics of coding. But it can’t replace reasoning about systems, anticipating problems, or structuring solutions carefully. Knowing when and how to use each tool is what separates good engineers from those who struggle with outdated habits.

Carpenters didn’t abandon hammers when nail guns arrived. Engineers who grasp core principles won’t lose their craft when they adopt new tools. Principles stay, tools evolve. Mastering both is the foundation of a modern engineering toolbox, one ready for AI and whatever comes next.

Three Mindsets, One Engineer

Modern engineering demands three distinct mindsets. In his Software Is Changing (Again) lecture at YC AI Startup School 2025, Andrej Karpathy introduces Software 3.0, a new way of writing programs. To put it in context:

  • Software 1.0: Classical programming with explicit instructions and deterministic outputs.
  • Software 2.0: Neural networks that learn patterns and behaviors from data, producing probabilistic outputs.
  • Software 3.0: AI tools like LLMs, where programs are expressed as natural language instructions or prompts.

Each paradigm builds on the previous one rather than replacing it. Classic software remains essential for deterministic systems. ML-based approaches excel at pattern recognition and probabilistic tasks. LLMs and AI tools shine when problems are exploratory, language-heavy, or too large to handle manually. Problem-solving spans all three, depending on the domain and context. Great engineers understand all three. They know when each is the right fit. They also know how to apply core principles through all three.

Consider building a content moderation system. You’d use Software 1.0 for the deterministic pieces, like the API layer and business rules. Software 2.0 powers spam detection with a neural network trained on millions of examples. Software 3.0 handles the ambiguous cases. It evaluates context to determine whether content is satire or harassment. It explains to users why their posts were removed. It adapts rules based on cultural nuances that rigid systems miss. Three paradigms, one system. Each doing what it does best.

That’s the big picture. But what does it mean for how we actually work?

What Actually Changes

Before coding agents like Claude Code, many of us were skeptical about AI tools in daily engineering. Now, it’s hard to imagine working without them. The biggest shift is how AI changes the balance of our work. We spend less time wrestling with boilerplate or unfamiliar codebases. We spend more time designing systems, reasoning about trade-offs, and tackling problems that actually matter.

Take understanding a new codebase, for example. What used to take hours of tracing modules, following function chains, and guessing at design intent can now be accelerated. AI tools summarize code. They explain dependencies. They answer “why” questions about past decisions. Engineers get up to speed faster and contribute meaningfully.

Debugging has changed too. Rather than staring at cryptic stack traces, AI assistants can suggest likely causes. They propose targeted fixes. They even offer hypotheses that might not occur to you. Of course, they aren’t perfect. Sometimes they confidently lead us down dead ends. But they free mental energy for the tricky, high-impact parts that demand human judgment.

AI also reduces repetitive work. From generating CRUD APIs to configuring infrastructure, LLMs handle the scaffolding. Engineers can focus on architecture, abstraction, and creative problem-solving.

Finally, AI changes how we learn. It’s like having an endlessly patient tutor at your side. Even the most patient tutor can get things wrong or make things up. Critical thinking remains essential. Still, picking up a new language, library, or algorithm becomes genuinely more enjoyable. Engineers can experiment, learn by doing, and iterate faster than ever.

The Choice

The AI wave feels like a wildfire sweeping across software engineering. Just as firefighters drop heavy equipment to survive, we face a choice: cling to familiar tools or let them go and adapt. Wildfires destroy, but also clear old growth and make room for new ecosystems. AI frees us from repetitive tasks, creating space to focus on problems that challenge us, excite us, and actually move the needle.

Here’s the uncomfortable truth: if your value as an engineer comes from writing code quickly or memorizing language specifics, AI is a threat. But if your value comes from understanding systems, making architectural decisions, and solving complex problems, AI is a gift. It handles the tedious so you can focus on the meaningful.

This isn’t just about AI. Engineers have faced similar moments before: the arrival of compilers, IDEs, cloud infrastructure. Each time, some confused their tools with their identity. Each time, those who adapted solved harder problems than ever before. The lesson is the same: principles stay, tools evolve. Those who embrace both push the boundaries of what’s possible.

Don’t be the engineer who ran barefoot to prove a point. Be the one who put on better shoes and ran further.