The Builders Who Figure This Out First Will Be Impossible to Catch. Why You Need an Identity Shift.

23.9K views January 23, 2026

My site: https://natebjones.com
Full Story w/ 30-Page Builder's Guide: https://natesnewsletter.substack.com/p/6-practices-for-when-the-models-got?r=1z4sm5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
_______________________
What's really happening with AI productivity in 2026? The common story is that better prompting is the answer — but the reality is more complicated.

In this video, I share the inside scoop on why the bottleneck has shifted from capability to cognitive architecture:

• Why adopting an engineering manager mindset changes everything
• How killing the contribution badge unlocks real velocity
• What strategic deep diving looks like across technical and non-technical work
• Why experience cannot be compressed at the speed you can build

We solved the wrong problem. For two years we optimized for prompting and tool selection. Those remain foundational — but they're no longer sufficient. The people who distinguish themselves now have the same toolset as everyone else. What separates them is systems thinking and the ability to move fluidly between altitudes of abstraction.

For builders at every level, the forcing function is here — but partnership with AI requires knowing what actually matters about what you're building.

Chapters
00:00 We solved the wrong problem
01:23 The bottleneck shifted and we don't have the mental tools
03:33 Practice 1: Adopt the engineering manager mindset
05:57 The transition feels like loss but it's leverage
07:00 Practice 2: Kill the contribution badge
08:23 Our habits didn't update as models improved
09:30 Practice 3: Develop strategic deep diving capabilities
10:49 Worst vibe coders vs worst traditional developers
12:39 Why the terminal works: constrained and unambiguous
14:00 Practice 4: Create temporal separation
15:30 Practice 5: Two kinds of architecture
17:03 Quality without a name remains human work
17:54 Practice 6: Experience is not compressible
20:00 The 2026 builders operating system

Subscribe for daily AI strategy and news.
For deeper playbooks and analysis: https://natesnewsletter.substack.com/

0:00 We solved the wrong problem. For 2
0:02 years, we've been optimizing for
0:04 capability in our AI usage. It made
0:07 sense at the time. It still makes sense.
0:09 It remains a foundational toolkit. It's
0:12 like learning how to use hand tools:
0:14 better prompting, smarter tool
0:15 selection. We treated AI fluency as a
0:18 skill pack we all needed to acquire. And
0:21 it worked. The capability ceiling has
0:23 lifted. The average engineer can now
0:25 produce a lot more code. And AI is
0:28 taking over non-technical disciplines
0:30 like crazy because people are learning
0:32 how to prompt better, learning how to
0:34 use tools like Claude Cowork. And yet,
0:37 and yet we still feel like we are
0:40 behind. We still feel like the AI
0:43 revolution is catching up with everyone
0:45 else faster than it's catching up with
0:47 us. And that is me saying that after
0:49 talking to workers and after talking to
0:51 leaders, everyone thinks if they look
0:54 over the shoulder of the person next to
0:55 them, they're going to see a secret hint
0:58 they've missed. This is the secret,
1:00 guys. The models are getting smarter.
1:02 And we now have to learn a new way to
1:06 think sensibly about how to use these 10x
1:10 and 100x capable models. And we don't
1:12 have the mental tools to do it. And the
1:14 conventional assumptions around prompt
1:17 better are necessary, but they're not
1:20 sufficient. The shift is that the
1:22 bottleneck has moved and we're all
1:24 really frustrated by it because there
1:27 isn't an easy, quick band-aid
1:30 fix. There is not a magic prompt for
1:32 this one. But you can understand where
1:34 the bottleneck moved and you can learn
1:36 to engage with it. You can get the
1:39 mental equipment you need, the software
1:41 upgrade for your brain that you need to
1:43 improve how you work with agents because
1:46 I am convinced that the bottleneck in
1:49 AI, the thing that is making us
1:51 frustrated even though we've developed
1:53 so many of these basic skills, is
1:56 happening because the bottleneck has
1:57 shifted away from our ability to learn a
2:01 skill, a capability like "I need to
2:03 learn how Claude Code works" or "gosh, I've
2:06 got to learn how NotebookLM works." Now
2:08 it's shifted toward something that is
2:11 harder to grasp. It's shifted toward our
2:13 cognitive architecture, systems
2:15 thinking. The people who distinguish
2:17 themselves often in 2026 have the same
2:21 core tool set as the people who don't.
2:24 They use Claude Code, they use Claude
2:27 Cowork, they use Gemini Nano Banana,
2:29 they use NotebookLM. It's not a
2:32 different tool set anymore. It's not
2:33 even necessarily incredibly better
2:35 prompting skills.
2:37 The bottleneck is moving from capability
2:42 to systems thinking and cognitive
2:44 architecture. That is what distinguishes
2:46 people who are able to make the most of
2:48 the AI systems. And almost nobody has
2:50 thought this one through and really
2:52 articulated it. And I want to lay it out
2:54 for you as honestly as I can. These are
2:56 the practices that I am seeing top 1%
3:00 builders adopt, builders able to build on their
3:03 foundation skills. And I am not
3:05 denigrating prompting here. Prompting is
3:07 a critical baseline skill. I talk about
3:09 it a lot. I'm going to keep talking
3:10 about it in 2026. You got to have it.
3:13 But it's not enough. It's not
3:15 sufficient. You also have to upgrade
3:17 your thinking. Practice number one:
3:20 people who are 100x builders, who can
3:23 take advantage of where these systems
3:25 actually are in 2026, they are adopting
3:28 the mindset of an engineering manager.
3:31 Not metaphorically, operationally. And
3:34 so when you think about what an
3:37 engineering manager does, think about it
3:40 as I am responsible for the overall
3:43 quality of what is coming out of my
3:45 team. I'm responsible for their ability
3:47 to ship. I'm responsible for their
3:49 well-being and their ability to
3:51 successfully coordinate. That's not just
3:54 word games. That is how you start to
3:56 allocate your attention when you work
3:59 with teams of agents. In this sense,
4:01 you're responsible for defining a team
4:04 environment for agents that is actually
4:06 successful. You're responsible for
4:08 understanding that you have to give an
4:09 agent a very clear set of guardrails, a
4:11 very clear endpoint, a very clear
4:13 mission to accomplish, and a clear
4:15 definition of done. And you have to be
4:16 able to replicate that. And I have
4:18 different videos that talk about how you
4:19 do that in much more detail. But in this
4:21 situation, what I want you to focus on
4:24 is that instead of managing a team of
4:28 humans who have their own judgment,
4:30 their own context, their own memory,
4:32 you're managing agents. They are
4:34 tireless. They are prone to confident
4:36 incorrectness. You have to have a
4:39 different discipline. And even though
4:41 the discipline is different because
4:43 you're managing agents, not people, the
4:45 analogy holds. You're still responsible
4:48 for throughput. You're responsible for
4:49 setting direction. You're responsible
4:51 for output. And I think the hardest part
4:54 is letting go of your identity around how
4:56 you get there. If you have built your
4:58 career on being the person who writes
5:01 the code or the person who writes the
5:03 perfect product requirement document as
5:04 a PM, the person who is able to do the
5:07 individual work at a high craft level,
5:09 this is a moment of grief because the
5:13 transition feels like a loss and it is a
5:15 loss. Something is changing here, but
5:18 it's also leverage. You are gaining
5:21 leverage to get an unprecedented amount
5:24 of work done if you are willing to
5:27 change your mindset. And so I'm sharing
5:30 the engineering manager analogy because
5:32 I think it's accessible, but I want you
5:34 to not hear that and think, oh, that's
5:36 for engineers. It's for everybody at all
5:39 levels. Below is a sketch of the kind of agent brief I mean.
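To make that concrete, here's a minimal sketch of an agent brief. It's not from the video; the project, paths, and commands are hypothetical placeholders. The point is capturing the guardrails, mission, definition of done, and endpoint in one replicable document.

```markdown
# Agent brief (hypothetical sketch)

## Mission
Fix the rounding error in cart totals when discount codes stack.

## Guardrails
- Touch only files under `src/checkout/`; leave the payments module alone.
- No new dependencies without flagging them for human review.

## Definition of done
- All existing tests pass, plus a new regression test for stacked discounts.
- A short written summary of what changed and why.

## Endpoint
Stop and report back once the test suite passes. Do not deploy.
```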
5:41 And so is practice number two: kill the contribution badge. This is a
5:44 legacy behavior that is just costing all
5:47 of us. We have this instinct to bring
5:51 something comprehensive before we engage
5:54 with AI. And I've seen it in myself.
5:56 I've seen it in others. We want to feel
5:58 like we made a contribution to the
6:00 process. So we feel a degree of
6:02 ownership in the outcome. And so we want
6:04 to say, "Let me do some pre-thinking.
6:05 Let me get really organized and then let
6:07 me go to AI." This runs against a lot of
6:10 our professional instincts. But with AI,
6:14 this is often backwards, especially if
6:17 you are working with a model like Claude
6:20 that works well with progressive intent
6:24 discovery. The models are often better
6:27 at handling unstructured input than you
6:29 might expect. And so your comprehensive
6:31 effort to think is often just premature
6:33 structure and noise. And there's a
6:37 little bit of a technical reason here
6:38 for this. This useful pre-thinking was
6:43 something that we legitimately valued in
6:45 the first half of 2025 because models
6:48 weren't there yet. But part of the whole
6:50 lesson of this is we need to change the
6:52 way we think because models keep getting
6:54 better. They have gotten better,
6:57 but our habits didn't update, and so
7:00 many of us still think we need to do
7:02 that pre-work. Now, if you are working
7:05 on a complex technical task and you're
7:08 working with a tool like Codex that
7:09 values a very clear spec, sure, you need
7:12 to take the time to develop that spec
7:14 before kicking off a long-running agentic
7:16 build pattern. But that's not most of us
7:19 and that's not most of the time. And so
7:21 the larger pattern here is that builders
7:24 who succeed are not married to their
7:28 own personal sense of how much pre-work
7:31 they did for AI to make it valuable.
7:34 They are willing to roll with the fact
7:36 that the models are getting better and
7:38 they're willing to bring less structured
7:40 information to the table so that the
7:43 model can work more productively off an
7:46 earlier starting point and they can get
7:47 overall velocity gains. Practice number
7:49 three, develop strategic deep diving
7:52 capabilities. This is what distinguishes
7:55 the best builders from the people who
7:57 are wasting tokens. You have to have the
8:00 ability to deliberately change your
8:03 altitude. So the discourse around AI
8:05 coding has been stuck in sort of a
8:07 binary. Either you understand every
8:09 single line like traditional development
8:12 or you accept code you don't understand
8:14 like vibe coding. And neither one is
8:16 necessarily productive, because one of
8:18 the things that distinguishes good
8:21 builders is that they
8:24 have a fingertip feel for what matters
8:27 about the product they're building and
8:28 where it goes in front of customers. And
8:31 so they can ladder down and they can say,
8:33 "Look, this particular checkout experience
8:36 is broken, and I'm going to ladder into
8:37 the code and dig in as far as I need to
8:39 go to understand why it's broken." But
8:41 then they can ladder back out into a
8:43 higher abstraction and ask: what is the
8:45 agentic prompting pattern that is
8:48 producing this issue? Think of it like
8:50 flying a plane, right? Cruising altitude
8:52 has been really efficient for most
8:54 builders for most of the time. If you
8:57 were a product manager, one of the
8:58 distinguishing features is you were able
9:00 to cruise at a higher altitude.
9:02 Frontline engineers cruised at a lower
9:04 altitude. Everyone pretty much kept to
9:06 their lane. Now we've got turbulence in
9:08 the air. You have to be able to descend
9:11 to see the terrain and the specific
9:13 pieces of the code. You have to be able
9:15 to ascend to a much higher level of
9:17 abstraction because you're now thinking
9:19 about the correct abstractions for like
9:21 running dozens of agents which we've
9:23 never had to do. And so we just have to
9:26 have the ability to pull that plane up
9:28 and down to pull our mental model up and
9:30 down. And that is one of the hardest
9:33 things for us to learn because it's
9:35 literally training our brain to think
9:37 differently. And the worst vibe coders
9:39 stay permanently high level, right?
9:40 They'll ship features. They'll never
9:42 understand what they built and they will
9:44 create what Addy Osmani calls
9:46 archaeological programming, something
9:48 future developers will have to excavate.
9:51 And it creates experiential debt because
9:53 at the end of the day, they don't really
9:56 know deeply the experience that they are
9:59 creating because they vibe coded it so
10:01 fast. I love that we can go faster with
10:04 vibe coding. I've taught courses on it. I'm
10:06 not against it at all. The speed is
10:07 great, but you have to understand what
10:10 experience you're creating to do a good
10:12 job with that tool. It's like a
10:13 blowtorch. You can use it badly or you
10:15 can use it well. Either way, it lights
10:17 things on fire fast. Now, the worst
10:19 traditional developers have different
10:21 weaknesses. They stay permanently really
10:23 low level. They insist on understanding
10:25 every line and their throughput has hit
10:26 a ceiling and it's not going anywhere.
10:28 The best builders these days, and they're
10:30 not just engineers, guys, the best
10:33 builders move fluidly. And I would
10:35 challenge you that this is not just an
10:37 engineering thing. The best
10:39 non-technical people who are using AI to
10:41 produce artifacts like proposals, like
10:44 spreadsheets also move fluidly. They
10:46 also translate from low-level work, where
10:50 they get hands-on in the spreadsheet, to
10:52 high-level work, where they think about what
10:54 are the data patterns I need to present
10:56 so that this particular task goes more
10:59 smoothly next time. This pattern scales.
11:02 It's not just for engineers. You have to
11:04 let agents handle initial implementation
11:06 and you have to be able to examine
11:08 critical paths. Both of those are true
11:10 in 2026. Cal Newport's analysis of why
11:13 agents work is really interesting here.
11:15 The terminal is a constrained and
11:18 text-based environment where feedback is
11:20 immediate and unambiguous. That is part
11:22 of why coding has been so fast for AI
11:25 agents. Your deep dive should target
11:28 that level of clarity. If you drop down
11:30 in a non-technical role,
11:32 spreadsheets or reports or what have
11:34 you, you should seek to verify reality
11:37 against your expectation. You should
11:38 check that it really works the way you
11:40 think. Below is a tiny sketch of that kind of check.
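Here's a minimal sketch of what that verification can look like in a non-technical role. The file name, column names, and totals are hypothetical; the pattern is checking the artifact against numbers you computed independently, so feedback is as immediate and unambiguous as the terminal's.

```python
# Hypothetical example: an agent generated "q1_report.csv" for you.
# Before trusting it, verify reality against your expectation.
import csv

EXPECTED_REGIONS = {"North", "South", "East", "West"}  # what you know is true

with open("q1_report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Check 1: every region you expect actually appears, and nothing extra.
regions = {row["region"] for row in rows}
assert regions == EXPECTED_REGIONS, f"Missing or extra regions: {regions ^ EXPECTED_REGIONS}"

# Check 2: the revenue column really sums to the grand total you computed yourself.
total = sum(float(row["revenue"]) for row in rows)
assert abs(total - 1_250_000.00) < 0.01, f"Revenue total is off: {total}"

print("Report matches expectations.")
```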
11:43 All right, practice number four. Take some time. You can call it create
11:45 temporal separation if you want, but
11:47 just take some time. It sounds like
11:49 productivity advice. It's actually
11:51 advice for cognitive architecture and
11:53 handling AI agents. And so I want you to
11:56 think about it as if you are building a
12:00 system and you have a tremendous amount
12:02 of firepower at your disposal to deploy
12:04 that system and it will go very fast.
12:07 Your primary job is actually
12:09 correctness. It's to deploy it well. You
12:12 need to be in both a flow state, which
12:15 you should be if you're building with
12:16 agents: things scroll by, features
12:19 ship, you're building and switching
12:21 context. People describe this when
12:22 they're managing multiple agents, where
12:24 they're just cycling back. And I've seen
12:25 it myself in Claude Code or Claude
12:27 Cowork, where you're just going back and
12:28 there's always an update and hours slip
12:30 by quickly and you're just cycling
12:32 between tabs.
12:34 That's only one part of it. As much as
12:37 you need that to be productive in
12:38 execution mode,
12:41 you need to take time
12:44 to go back and review the work in a more
12:47 meditative state of mind. Your brain is
12:49 a really high-quality asset in 2026 and
12:52 it needs both modes. It needs both the
12:55 build mode where you're like
12:57 coordinating all those agents and it
12:58 needs the reflect mode. It needs the
13:01 step back mode because otherwise you're
13:04 not going to really have genuine
13:06 leverage because you won't get to
13:07 reflect and say, "Well, what kinds of
13:09 prompts worked well this week? Which
13:11 agents got stuck and why? Where did I
13:13 waste time on problems I could have
13:15 caught earlier?" If you can't answer
13:17 those questions, you're not learning
13:19 from all of that building. You need the
13:22 distance. You need literally different
13:24 brain chemistry. You need to reflect.
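One lightweight way to run that reflect mode is a standing weekly review. The questions below come straight from this section; the template format is just one suggestion, not something prescribed in the video.

```markdown
## Weekly agent review (suggested template)

- Which prompts worked well this week, and why?
- Which agents got stuck, and why?
- Where did I waste time on problems I could have caught earlier?
- One change to make to my briefs or guardrails next week:
```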
13:27 It's not overhead. It's the difference
13:29 between getting faster and getting
13:31 better. Practice five. Look at the two
13:33 different kinds of architecture. So,
13:35 there's a conversation that keeps
13:37 surfacing between builders about two
13:39 different kinds of architecture and how
13:40 we build. The first kind is sort of a
13:42 civil engineering pattern, right? The
13:44 stuff you would put in a CLAUDE.md
13:46 file or an AGENTS.md file, where you
13:49 write things like: this is how problems
13:51 should be solved consistently across my
13:52 codebase, these are my special rules, these are
13:55 my code standards, etc. A sketch of that kind of file is below.
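Here's a minimal sketch of that first kind of architecture file. The specific rules and the `npm test` command are hypothetical examples, not prescriptions; yours will reflect your own codebase and standards.

```markdown
# CLAUDE.md (hypothetical sketch)

## Conventions
- Money values are integer cents, never floats.
- Every new endpoint gets an integration test before merge.
- Errors are surfaced and logged with request IDs, never swallowed.

## Working in this repo
- Run `npm test` before declaring any task done.
- Prefer small, reviewable diffs over sweeping rewrites.
```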
13:58 The second kind is what Christopher Alexander called
14:00 quality without a name. It's the thing
14:02 that makes some towns feel better than
14:05 others. Some products feel more coherent
14:07 than others. Why we go on vacation to
14:10 Paris and not to Cincinnati. It's not
14:12 just the food. It's a philosophy of
14:15 living embedded in the design. And
14:18 there's something here around scaling
14:22 taste that is an unsolved challenge in
14:25 2026. And the distinction really matters
14:28 because most of the discourse around AI
14:30 development and how we should scale
14:31 ourselves in the age of AI just puts
14:34 these two architectures together. People
14:36 assume that if you can get agents to
14:38 follow your conventions and you write
14:39 down your rules, they're going to just
14:41 magically produce coherent products.
14:43 They don't. Not yet. Now, you absolutely
14:46 need the first architecture. I'm not
14:48 trying to make you judge between the
14:49 two. You need good patterns where agents
14:52 excel. And guess what, non-technical
14:54 folks, that goes for you, too. You have
14:56 to have the ability to write down rules
14:59 that capture what best practice looks
15:00 like. And it's even harder for all of us
15:02 on the non-technical side because those
15:04 rules are not as crystal clear as the ones
15:07 engineers have. But the second
15:09 architecture remains human work. The
15:11 idea of taste, of coherence, of vision, of
15:14 quality without a name, of what Steve
15:16 Jobs had when he looked at the iPhone:
15:19 that remains human work. And this is
15:21 where cognitive architecture has a real
15:24 bottleneck because you can delegate the
15:26 technical patterns. It is hard to
15:29 delegate the judgment about what makes
15:31 something intuitively feel right. When
15:34 it's the quality without a name, it's
15:37 your job. And that is part of why I'm
15:39 advocating that you take time to
15:41 reflect, that you take time to go down
15:43 and look in the details of what you're
15:45 building because we are desperate for
15:48 that kind of coherent product in the AI
15:51 building world. We don't have a lot of
15:53 it. We have a lot of "go fast." Practice
15:56 number six: accept that your experience
15:59 is not compressible. This is a really
16:02 counterintuitive insight that a lot of
16:04 vibe coding Discords keep missing. You
16:07 cannot immediately
16:09 speedrun experience. You can speedrun
16:12 software development now. And the
16:14 standard critique of vibe coding is that
16:16 that doesn't work well because of
16:17 technical debt and bugs and so on. But
16:19 you can vibe code really well now. You
16:21 have minimal bugs if you do it right and
16:23 set it up against evals, and you can set
16:24 multi-agent systems going. A tiny sketch of
16:26 what I mean by an eval is below. That's not the real issue.
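For anyone who hasn't set up evals before, here is a minimal sketch of the idea. The `apply_discount` function and its module are invented for illustration; the pattern is a set of golden cases, with answers you worked out yourself, run against every build.

```python
# Hypothetical sketch: the smallest useful eval for a vibe-coded feature.
from myapp.pricing import apply_discount  # hypothetical module from your own project

# Golden cases: inputs you understand deeply, with answers you computed yourself.
CASES = [
    ({"subtotal": 100.0, "codes": ["SAVE10"]}, 90.0),
    ({"subtotal": 100.0, "codes": ["SAVE10", "SAVE10"]}, 90.0),  # no double-stacking
    ({"subtotal": 0.0, "codes": ["SAVE10"]}, 0.0),
]

failures = 0
for inputs, expected in CASES:
    got = apply_discount(**inputs)
    if abs(got - expected) > 0.01:
        failures += 1
        print(f"FAIL {inputs}: expected {expected}, got {got}")

print(f"{len(CASES) - failures}/{len(CASES)} eval cases passed")
```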
16:30 The real issue is that you need to be deeply familiar with what
16:33 you're working on and recognize that
16:35 that experience takes a degree of time
16:38 and that that matters more than most of
16:41 us would like to admit because the
16:43 vision you have for your product has to
16:46 be stable. It has to be a long-term vision
16:50 that expresses how you want things to
16:54 work differently. And by the way, we're
16:55 all in the product business now. So if
16:57 you're looking at this and saying, "I'm
16:58 not a product manager. I'm not an
16:59 engineer. I don't have to do this." No,
17:01 you do. Because if we're building with
17:04 agents in marketing, in creative, in
17:07 customer success, we're all in the
17:09 product business. I'm sorry. These are
17:10 products we are constructing because we
17:12 have agents that are building stuff for
17:13 us. And you have to have the larger
17:16 vision of what it means and why it
17:18 matters, and the experience, the
17:21 knowledge of the thing, that you need to
17:24 do that well. You cannot speedrun experience at the
17:27 speed at which you can build stuff. And
17:29 the builders I know who are genuinely
17:31 thriving find ways to preserve an
17:35 experiential loop while still capturing
17:37 the benefits of AI. You can call this
17:40 talking to customers. That's a part of
17:41 it. You can still ship really fast. Good
17:44 teams still do that. But you have to let
17:48 your understanding develop through
17:49 iteration and be in touch with reality
17:52 and your customers. You cannot just
17:54 iterate through prompting. One of the
17:56 bigger underlying shifts that all of
17:58 this is sort of building around is that
18:01 we are moving from a world where
18:03 prompting was a one-way street and we
18:06 were giving the agent stuff to do and it
18:08 was our capability that was holding
18:09 things back to a world where it's a
18:12 two-way street. The system sometimes
18:14 invites us to level up our
18:16 conversational intent and our prompts.
18:18 Have you ever been at a point where your
18:21 AI asks you a question? I have. It
18:24 happens out of the blue and it's usually
18:25 a really good question. It asks even
18:28 better questions if you invite it to.
18:30 And so, yes, you still need to craft
18:32 good prompts, but your job isn't just
18:35 prompting at this point. It's actually
18:37 to understand at a deep human level what
18:39 really matters about what you're doing
18:41 at work and to be open to unfolding that
18:45 thing that matters as you go into the
18:47 process of building with AI. And that
18:49 sounds really woo-woo, but there's a
18:51 very practical edge to it because the
18:54 stable altitude of abstraction that
18:56 allowed us to do our jobs in the same
18:58 way that we've already done for a long
19:00 time, as I've said, that's been
19:02 disrupted. And so because that altitude
19:05 is disrupted, we need to double down on
19:09 partnership with the AI and commitment
19:12 to building what really matters.
19:14 Basically, the only thing that is going
19:16 to hold in 2026 for us if we want to
19:19 work with AI, if we want to scale
19:20 ourselves, which we're going to have to
19:22 do because that's the forcing function
19:24 in the system right now (AI is coming
19:25 whether we like it or not), the only
19:27 thing that's going to hold is
19:29 understanding what matters about our
19:30 work, why we're building something at a
19:32 deep level and insisting on that coming
19:35 through. Even as agentic systems keep
19:38 getting better, even as the AI we work
19:40 with gets 10 or 100 times smarter, we
19:42 won't get lost if we know what we want
19:45 to build. And the thing that's
19:47 underneath all of this is that you can
19:49 get into a space where you have much
19:51 more of a partner dynamic with AI in
19:53 2026 than you did in 2025. But you
19:56 cannot yield the sense of what matters
20:01 about what you're building. And the best
20:04 builders, whether they are building with
20:06 literal code or building with
20:08 spreadsheets or building with their
20:10 customer service agents, understand
20:11 that. And that is the 2026 builder
20:14 operating system.