Head of Claude Code: What happens after coding is solved | Boris Cherny
Transcript
Boris Cherny (00:00:00): 100% of my code is written by Claude Code. I have not edited a single line by hand since November. Every day, I ship 10, 20, 30 pull requests. So, at the moment I have, like, five agents running.
Lenny Rachitsky (00:00:10): While we’re recording this?
Boris Cherny (00:00:11): Yeah. Yeah. Yeah.
Lenny Rachitsky (00:00:12): Do you miss writing code?
Boris Cherny (00:00:13): I have never enjoyed coding as much as I do today, because I don’t have to deal with all the minutia. Productivity per engineer has increased 200%.
Lenny Rachitsky (00:00:21): There’s always this question, “Should I learn to code?”
Boris Cherny (00:00:22): In a year or two, it’s not going to matter. Coding is virtually solved. I imagine a world where everyone is able to program, anyone can just build software any time.
Lenny Rachitsky (00:00:29): What’s a next big shift to how software is written?
Boris Cherny (00:00:32): Claude is starting to come up with ideas. It’s looking for feedback, it’s looking at bug reports, it’s looking at telemetry for bug fixes, and things to ship. A little more like a coworker or something like that.
Lenny Rachitsky (00:00:41): A lot of people listening to this are product managers and they’re probably sweating.
Boris Cherny (00:00:44): I think by the end of the year everyone is going to be a product manager, and everyone codes. The title software engineer is going to start to go away. It’s just going to be replaced by builder, and it’s going to be painful for a lot of people.
Lenny Rachitsky (00:00:56): Today my guest is Boris Cherny, head of Claude Code at Anthropic. It is hard to describe the impact that Claude Code has had on the world. Around the time this episode comes out will be the one year anniversary of Claude Code. And in that short time it has completely transformed the job of a software engineer, and it is now starting to transform the jobs of many other functions in tech, which we talk about.
(00:01:19): Claude Code itself is also a massive driver of Anthropic’s overall growth over the past year. They just raised a round at over $350 billion. And as Boris mentions, the growth of Claude Code itself is still accelerating. Just in the past month, their daily active users have doubled. Boris is also just a really interesting, thoughtful, deep-thinking human, and during this conversation we discover we were born in the same city in Ukraine. That is so funny. I had no idea.
(00:01:47): A huge thank you to Ben Mann, Jenny Wen, and Mike Krieger for suggesting topics for this conversation. Don’t forget to check out LennysProductPass.com for an incredible set of deals available exclusively to Lenny’s newsletter subscribers. Let’s get into it after a short word from our wonderful sponsors.
(00:02:04): Today’s episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly, but many organization leaders struggle to answer pressing questions like, “Which tools are working? How are they being used? What’s actually driving value?” DX provides the data and insights that leaders need to navigate the shift. With DX, companies like Dropbox, Booking.com, Adeon, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX’s website at GetDX.com/Lenny. That’s GetDX.com/Lenny.
(00:02:48): Applications break in all kinds of ways: crashes, slowdowns, regressions, and the stuff that you only see once real users show up. Scntry catches it all. See what happened where and why down to the commit that introduced the error, the developer who shipped it, and the exact line of code all in one connected view.
(00:03:07): I have definitely tried the five tabs and Slack thread approach to debugging. This is better. Scntry shows you how the request moved, what ran, what slowed down, and what users saw. Seer, Scntry’s AI debugging agent, takes it from there. It uses all of that Scntry context to tell you the root cause, suggest a fix, and even open a PR for you. It also reviews your PRs and flags any breaking changes with fixes ready to go. Try Scntry and Seer for free at Scntry.AO/Lenny, and use code LENNY for $100 in Scntry credits. That’s S-C-N-T-R-Y dot A-O slash Lenny.
(00:03:49): Boris, thank you so much for being here, and welcome to the podcast.
Boris Cherny (00:03:54): Yeah. Thanks for having me on.
Lenny Rachitsky (00:03:55): I want to start with a spicy question. About six months ago, I don’t know if people even remember this, you actually left Anthropic. You joined Cursor. And then two weeks later you went back to Anthropic. What happened there? I don’t think I’ve ever heard the actual story.
Boris Cherny (00:04:12): It was the fastest job change that I’ve ever had. I joined Cursor, because I’m a big fan of the product. And, honestly, I met the team and I was just really impressed. They’re an awesome team. I still think they’re awesome, and they’re just building really cool stuff. And they saw where AI coding was going I think before a lot of people did.
(00:04:33): So, the idea of building good product was just very exciting for me. I think as soon as I got there what I started to realize is what I really missed about Ant was the mission. And that’s actually what originally drove me to Ant also, because before I joined Anthropic I was working in Big Tech, and then, at some point, I wanted to work at a lab to just help shape the future of this crazy thing that we’re building in some way.
(00:05:00): And the thing that drew me to Anthropic was the mission. And it’s all about safety. And when you talk to people at Anthropic, just, like, find someone in the hallway, if you ask them why they’re here, the answer is always going to be, “Safety.”
(00:05:11): And so, this mission-driven [inaudible 00:05:14] just really, really resonated with me. And I just know, personally, it’s something I need in order to be happy. And that’s just a thing that I really missed, and I found that whatever the work might be, no matter how exciting, even if it’s building a really cool product, it’s just not really a substitute for that. So, for me, it was pretty obvious that I was missing that pretty quick.
Lenny Rachitsky (00:05:35): Okay. So, let me follow the thread of just coming back to Anthropic and the work you’ve done there. This podcast is going to come out around the year anniversary of launching Claude Code. So, I want to spend a little time just reflecting on the impact that you’ve had. There’s this report that recently came out that I’m sure you saw by SemiAnalysis that showed that 4% of all GitHub commits are authored by Claude Code now. And they predicted it’ll be a fifth of all code commits on GitHub by the end of the year.
(00:06:04): The way they put it is, “While we blinked, AI consumed all software development.” The day that we’re recording this Spotify just put out this headline that their best developers haven’t written a line of code since December thanks to AI. More and more of the most advanced senior engineers, including you, are sharing the fact that you don’t write code anymore, that it’s all AI-generated, and many aren’t even looking at the code anymore; that’s how far we’ve gotten.
(00:06:31): In large part, thanks to this little project that you started, and that your team has scaled over the past year. I’m curious just to hear your reflections on this past year, and the impact that your work has had.
Boris Cherny (00:06:42): These numbers are just totally crazy. Right? Like, 4% of all commits in the world is just way more than I imagined. And, like you said, it still feels like the starting point. These are also just public commits. So, we actually think if you look at private repositories it’s quite a bit higher than that.
(00:06:56): And I think the crazy thing for me isn’t even the number that we’re at right now, but the pace at which we’re growing, because if you look at Claude Code’s growth rate across any metric it’s continuing to accelerate. So, it’s not just going up, it’s going up faster and faster.
(00:07:12): When I first started Claude Code, it was just supposed to be a little hack. We broadly knew at Anthropic that we wanted to ship some kind of coding product. And for Anthropic for a long time, we were building the models in this way that fit our mental model of the way that we build safe AGI where the model starts by being really good at coding. Then it gets really good at tool use. Then it gets really good at computer use. Roughly, this is, like, the trajectory.
(00:07:53): And we’ve been working on this for a long time. And when you look at the team that I started on, it was called the Anthropic Labs team, and actually Mike Krieger and Ben Mann just kicked this team off again for round two.
(00:07:53): The team built some pretty cool stuff. So, we built Claude Code, we built MCP, we built the desktop app. So, you can see the seeds of this idea. It’s coding, then it’s tool use, then it’s computer use.
(00:08:03): And the reason this matters for Anthropic is because of safety. It’s, again, just back to that AI is getting more and more powerful, it’s getting more and more capable. The thing that’s happened in the last year is that, at least, for engineers, the AI doesn’t just write the code. It’s not just a conversation partner, but it actually uses tools. It acts in the world.
(00:08:23): And I think now with Cowork we’re starting to see the transition for non-technical folks also. For a lot of people that use conversational AI, this might be the first time that they’re using the thing that actually acts, it can actually use your Gmail, it can use your Slack. It can do all these things for you, and it’s quite good at it. And it’s only going to get better from here.
(00:08:42): So, I think for Anthropic for a long time, there was this feeling that we wanted to build something, but it wasn’t obvious what. And so, when I joined Ant, I spent one month hacking, and built a bunch of weird prototypes. Most of them didn’t ship, and weren’t even close to shipping. It was just understanding the boundaries of what the model can do.
(00:08:59): Then I spent a month doing post-training. So, to understand the research side of it. And I think, honestly, that’s just, for me, as an engineer, I find that to do good work you really have to understand the layer under the layer at which you work. And with traditional engineering work, if you’re working on product, you want to understand the infrastructure, the run time, the virtual machine, the language, whatever that is, the system that you’re building on.
(00:09:23): But, yeah. If you work in AI, you just really have to understand the model to some degree to do good work. So, I took a little detour to do that, and then I came back and just started prototyping what eventually became Claude Code.
(00:09:36): In the very first version of it I have a … There’s, like, a video recording of this somewhere, because I recorded this demo, and I posted it. It was called Claude CLI back then. And I just showed off how it used a few tools, and the shocking thing for me was that I gave it a bash tool, and it was just able to use that to write code, and to tell me what music I’m listening to when I asked it, like, “What music am I listening to?”
(00:09:59): And this is the craziest thing. Right? Because it’s, like, there’s no … I didn’t instruct the model to say, “Use this tool for this,” or do whatever. The model was given this tool, and it figured out how to use it to answer this question that I had that I wasn’t even sure if it could answer, “What music am I listening to?”
(00:10:16): And so, I started prototyping this a little bit more. I made a post about it, and I announced it internally and it got two likes. That was the extent of the reaction at the time, because I think people internally … When you think of coding tools, you think of IDEs, you think of all these pretty sophisticated environments. No one thought that this thing could be terminal-based. That’s a weird way to design it, and that wasn’t really the intention.
(00:10:43): But from the start I built it in a terminal, because for the first couple months it was just me. So, it was just the easiest way to build. And, for me, this was actually a pretty important product lesson. Right? This is, like, you want to under-resource things a little bit at the start.
(00:10:58): Then we started thinking about what other form factors we should build, and we actually decided to stick with the terminal for a while. And the biggest reason was the model is improving so quickly, we felt that there wasn’t really another form factor that could keep up with it.
(00:11:13): And, honestly, this was just me struggling with, “What should we build?” For the last year, Claude Code has just been all I think about. And so, just late at night this is just something I was thinking about like, “Okay. The model is continuing to improve. What do we do? How can we possibly keep up?” And the terminal was, honestly, just the only idea that I had.
(00:11:31): And, yeah. It ended up catching on. After I released it, pretty quickly it became a hit at Anthropic, and the daily active users just went vertical, and it … Really early on actually, before I launched it, Ben Mann nudged me to make a DAU chart. And I was like, “It’s early. Should we really do it right now?” And he was like, “Yeah.”
(00:11:51): And so, the chart just went vertical pretty immediately. And then in February, we released it externally. Actually, something that people don’t really remember is Claude Code was not initially a hit when we released it. It got a bunch of users. There was a lot of early adopters that got it immediately, but it actually took many months for everyone to really understand what this thing is. Again, it’s just so different.
(00:12:15): And when I think about it, part of the reason Claude Code works is this idea of latent demand where we bring the tool to where people are, and it makes the existing workflows a little bit easier. But also because it’s in the terminal, it’s a little surprising, it’s a little alien in this way. So, you have to be open-minded, and you had to learn to use it.
(00:12:33): And, of course, now Claude Code is available in the iOS and Android Claude app. It’s available in the desktop app. It’s available on the website. It’s available as IDE extensions in Slack and GitHub. All of these places where engineers are it’s a little more familiar, but that wasn’t the starting point.
(00:12:49): So, yeah. At the beginning it was a surprise that this thing was even useful. And as the team grew, as the product grew, as it started to become more and more useful to people, just people around the world from small startups to the biggest [inaudible 00:13:05] companies started using it, and they started giving feedback.
(00:13:09): And I think just reflecting back it’s been such a humbling experience, because we keep learning from our users and just the most exciting thing is none of us really know what we’re doing, and we’re just trying to figure out along with everyone else. And the single best signal for that is just feedback from users. So, that’s just been the best. I’ve been surprised so many times.
Lenny Rachitsky (00:13:29): It’s incredible how fast something can change in today’s world. You launched this a year ago. And it wasn’t the first time people could use AI to code, but in a year, the entire profession of software engineering has dramatically changed. Like, there’s all these predictions, “Oh, 100% of code is going to be written by AI.” Everyone is like, “No. That’s crazy. What are you talking about?” But now it’s like, “Oh, of course. It’s happening exactly as they said.” So, things move so fast, and change so fast now.
Boris Cherny (00:13:58): Yeah. It’s really fast. Back at Code with Claude back in May, that was our first developer conference that we did as Anthropic, I did a short talk. And in the Q&A after the talk, people were asking, “What are your predictions for the end of the year?” And my prediction back in May of 2025 was, “By the end of the year, you might not need an IDE to code anymore. And we’re going to start to see engineers not doing this.”
(00:14:20): And I remember the room audibly gasped. It was such a crazy prediction. But I think at Anthropic, this is just the way … The way we think about things is exponentials. And this is very deep in the DNA. Like, if you look at our co-founders, three of them were the first three authors on the scaling laws paper.
(00:14:37): So, we really just think in exponentials. And if you look at the exponential, the percent of code that was written by Claude at that point, if you just trace the line, it’s pretty obvious we’re going to cross 100% by the end of the year, even if it just does not match intuition at all.
(00:14:51): And so, all I did was trace the line. And, yeah. In November that happened for me personally. And that’s been the case since. And we’re starting to see that for a lot of different customers too.
Lenny Rachitsky (00:15:01): I thought it was really interesting what you just shared there about the journey is this idea of just playing around and seeing what happens. This comes up with OpenClaw a lot, just, like, “Peter was playing around and a thing happened.” And it feels like that’s a central ingredient to a lot of the biggest innovations in AI is people just sitting around trying stuff, pushing the models further than most other people.
Boris Cherny (00:15:22): That’s the thing about innovation. Right? You can’t force it. There’s no road map for innovation. You just have to give people space. You have to give them … Maybe the word is, like, safety. So, it’s, like, psychological safety that it’s okay to fail. It’s okay if 80% of the ideas are bad.
You also have to hold them accountable a bit. So, if the idea is bad, you cut your losses, move on to the next idea instead of investing more. In the early days of Claude Code, I had no idea that this thing would be useful at all, because even in February when we released it, it was writing maybe, like, I don’t know, 20% of my code, not more. And even in May, it was writing maybe 30%. I was still using Cursor for most of my code.
(00:15:58): And it only crossed 100% in November. So, it took a while, but even from the earliest day, it just felt like I was onto something, and I was just spending every night, every weekend hacking on this. And, luckily, my wife was very supportive. But it just felt like it was onto something. It wasn’t obvious what. And sometimes you find a thread, you just have to pull on it.
Lenny Rachitsky (00:16:17): So, at this point, 100% of your code is written by Claude Code. Is that the current state of your coding?
Boris Cherny (00:16:23): Yeah. So, 100% of my code is written by Claude Code. I am a fairly prolific coder. And this has been the case even when I worked back at Instagram. I was one of the top few most productive engineers. And that’s still the case here at Anthropic.
Lenny Rachitsky (00:16:38): Wow. Even as head of the team?
Boris Cherny (00:16:41): Yeah. Yeah. Still do a lot of coding. And so, every day, I ship, like, 10, 20, 30 pull requests or something like that.
Lenny Rachitsky (00:16:47): Every day?
Boris Cherny (00:16:49): Every day. Yeah.
Lenny Rachitsky (00:16:50): Good God.
Boris Cherny (00:16:51): 100% written by Claude Code. I have not edited a single line by hand since November. And, yeah. I do look at the code. So, I don’t think we’re at the point where you can be totally hands-off, especially when there’s a lot of people running the program. You have to make sure that it’s correct, you have to make sure it’s safe, and so on.
(00:17:13): And then we also have Claude doing automatic code review for everything. So, here at Anthropic, Claude reviews 100% of pull requests. There’s still a layer of human review after it, but you still do want some of these checkpoints. Like, you still want a human looking at the code. Unless it’s pure prototype code that it’s not going to run anywhere. It’s just a prototype.
Lenny Rachitsky (00:17:32): What’s the next frontier? So, at this point, 100% of your code is being written by AI. This is, clearly, where everyone is going in software engineering. That felt like a crazy milestone. Now it’s just like, “Of course. This is the world now.” What’s the next big shift to how software is written that either your team is already operating in or you think will head towards?
Boris Cherny (00:17:54): I think something that’s happening right now is Claude is starting to come up with ideas. So, Claude is looking for feedback. It’s looking at bug reports. It’s looking at telemetry, and things like this, and it’s starting to come up with ideas for bug fixes, and things to ship. So, it’s just starting to get a little more like a coworker or something like that.
(00:18:16): I think the second thing is we’re starting to branch out of coding a little bit. So, I think, at this point, it’s safe to say that coding is virtually solved. At least, for the kinds of programming that I do, it’s just a solved problem, because Claude can do it. And so, now we’re starting to think about, “Okay. What’s next? What’s beyond this?”
(00:18:31): There’s a lot of things that are adjacent to coding, and I think this is [inaudible 00:18:35] becoming, but also just general to us. Like, I use Cowork every day now to do all sorts of things that are just not related to coding at all, and just to do it automatically.
(00:18:45): Like, for example, I had to pay a parking ticket the other day. I just had Cowork do it. All of my project management for the team, Cowork does all of it. It’s, like, syncing stuff between spreadsheets, and messaging people on Slack, and email, and all this kind of stuff.
(00:18:57): So, I think the frontier is something like this. And I don’t think it’s coding, because I think coding, it’s pretty much solved, and over the next few months, I think what we’re going to see is just across the industry it’s going to become increasingly solved for every kind of code base, every tech stack that people work on.
Lenny Rachitsky (00:19:14): This idea of helping you come up with what to work on is so interesting. A lot of people listening to this are product managers and they’re probably sweating. How do you use Claude for this? Do you just talk to it? Is there anything clever you’ve come up with to help you use it to come up with what to build?
Boris Cherny (00:19:30): Honestly, the simplest thing is, like, open Claude or Cowork, and point it at a Slack thread. Like, for us, we have this channel that’s all the internal feedback about Claude Code. Since we first released it, even in 2024, internally, it’s just been this fire hose of feedback. It is the best.
(00:19:46): And in the early days, what I would do is any time that someone sends feedback, I would just go in, and I would fix every single thing as fast as I possibly could. So, like, within a minute, within five minutes, or whatever. And this just really fast feedback cycle, it encourages people to give more and more feedback. It’s just so important, because it makes them feel heard.
(00:20:03): Because, usually, when you use a product, you get feedback, it just goes into a black hole somewhere, and then you don’t get feedback again. So, if you make people feel heard, then they want to contribute, then they want to help make the thing better.
(00:20:13): And so, now I do the same thing, but, Claude, honestly, does a lot of the work. So, I pointed at the channel, and it’s like, “Okay. Here’s a few things that I can do. I just put up a couple PRs. Want to take a look at that one?” I’m like, “Yeah.”
Lenny Rachitsky (00:20:25): Have you noticed that it is getting much better at this? Because this is the holy grail. Right now it’s, cool, building is solved. Code review became the next bottleneck with all these PRs. Who is going to review them all? The next big open question is just, like, “Okay. Now humans are necessary for figuring out what to build, what to prioritize,” and you’re saying that’s where Claude Code is starting to help you. Has it gotten a lot better with, like, Opus 4.6, or what’s been the trajectory there?
Boris Cherny (00:20:50): Yeah. Yeah. It’s improved a lot. I think some of it is training that we do specific to coding. So, it’s obviously the best coding model in the world, and it’s getting better and better. Like, 4.6 is just incredible. But also actually a lot of the training that we do outside of coding translates pretty well too.
(00:21:07): So, there is this transfer where you teach the model to do X, and it gets better at Y. Yeah. And the gains have just been insane. Like, at Anthropic, over the last year, like, since we introduced Claude Code, we probably … I don’t know the exact number. We probably 4X-ed the engineering team, or something like this. But productivity per engineer has increased 200% in terms of pull requests.
(00:21:31): And this number is just crazy for anyone that actually works in the space and works on dev productivity. Because back in a previous life, I was at Meta, and one of my responsibilities was code quality for the company. So, this is all of our code bases, that was my responsibility. Like, Facebook, Instagram, WhatsApp, all this stuff.
(00:21:47): And a lot of that was about productivity, because if you make the code higher quality, then engineers are more productive. And things that we saw is in a year with hundreds of engineers working on it, you would see a gain of a few percentage points of productivity, something like this. And so, nowadays, seeing these gains of just hundreds of percentage points is just absolutely insane.
Lenny Rachitsky (00:22:06): What’s also insane is just how normalized this has all been. Like, we hear these numbers. Like, of course, AI is doing this to us. It’s so unprecedented, the amount of change that is happening to software development, to building products, to just the world of tech. It’s just so easy to get used to it, but it’s important to recognize this is crazy.
Boris Cherny (00:22:25): This is something I have to remind myself once in a while. There’s a downside of this, because the model changes so … Well, there’s many downsides that we could talk about, but I think one of them on a personal level is the model changes so often that I sometimes get stuck in this old way of thinking about it. And I even find that new people on the team, or even new grads that join do stuff in a more AGI-forward way than I do.
(00:22:53): So, sometimes, for example, I had this case a couple of months ago where there was a [inaudible 00:22:57] leak. And so, what this is is Claude Code’s memory usage is going up, and, at some point, it crashes. This is a very common engineering problem that every engineer has debugged 1000 times.
(00:23:07): And, traditionally, the way that you do it is you take a heap snapshot, you put it into a special debugger, and figure out what’s going on. You use these special tools to see what’s happening.
(00:23:16): And I was doing this, and I was looking through these traces, and trying to figure out what was going on, and the engineer that was newer on the team just had Claude Code do it. And he was like, “Hey, Claude. It sounds like there’s a leak. Can you figure it out?” And so, Claude Code did exactly the same thing that I was doing. It took the heap snapshot, it wrote a little tool for itself, so it could analyze it itself. It was a just-in-time program. And it found the issue, and put up a pull request faster than I could.
(00:23:43): So, it’s something where for those of us that have been using the model for a long time, you still have to transport yourself to the current moment, and not get stuck back in the old model, because it’s not Sonnet 3.5 anymore. The new models are just completely, completely different. And just this mindset shift is very different.
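[Editor’s note: the leak-hunting workflow Boris describes, take a snapshot, run the suspect code, take another snapshot, and diff them to find what grew, can be sketched in a few lines. This is an illustrative sketch only, using Python’s standard-library tracemalloc rather than the V8 heap-snapshot tooling a Node.js process like Claude Code would actually use; the `leaky_step` function is a hypothetical stand-in for whatever code retains memory.]

```python
import tracemalloc

retained = []  # simulates state (a cache, a listener list) that never gets cleared

def leaky_step():
    # Each call allocates ~10 KB that stays reachable forever: the "leak".
    retained.append(bytearray(10_000))

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(100):
    leaky_step()

after = tracemalloc.take_snapshot()

# Diff the two snapshots: the allocation site with the largest growth
# between them is the prime leak suspect.
top = after.compare_to(before, "lineno")[0]
print(top.traceback, top.size_diff)
```

The principle is the same in any runtime: compare heap state at two points in time and follow the biggest delta back to the line of code that produced it.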
Lenny Rachitsky (00:24:03): I hear you have these very specific principles that you’ve codified for your team, and when people join, you walk them through them. I believe one of them is, “What’s better than doing something? Having Claude do it.” And it feels like that’s exactly what you described with this memory leak: you almost forgot that principle of, like, “Okay. Let me see if Claude can solve this for me.”
Boris Cherny (00:24:21): There’s an interesting thing that happens also when you under-fund everything a little bit, because then people are forced to Claude-ify. And this is something that we see at work, where sometimes we just put one engineer on a project, and they’re able to ship really quickly, because they want to ship quickly. This is an intrinsic motivation that comes from within. It’s just wanting to do a good job. If you have a good idea, you just really want to get it out there. No one has to force you to do that. That comes from you.
(00:24:49): And so, if you have Claude, you can just use that to automate a lot of work, and that’s what we see over and over. So, I think that’s one principle is under-funding things a little bit.
(00:25:01): I think another principle is just encouraging people to go faster. So, if you can do something today, you should just do it today. And this is something we really, really encourage on the team. Early on, it was really important, because it was just me. And so, our only advantage was speed. That’s the only way that we could ship a product that would compete in this very crowded coding market.
(00:25:21): But nowadays, it’s still very much a principle we have on the team, and if you want to go faster, a really good way to do that is to just have Claude do more stuff. So, it just very much encourages that.
Lenny Rachitsky (00:25:32): This idea of under-funding, it’s so interesting, because, in general, there’s this feeling like AI is going to allow you to not have as many employees, not have as many engineers. And so, it’s not only that you can be more productive. What you’re saying is that you will actually do better if you under-fund. It’s not just that AI can make you faster, it’s that you will get more out of the AI tooling if you have fewer people working on something.
Boris Cherny (00:25:54): Yeah. If you hire great engineers, they’ll figure out how to do it. And, especially, if you empower them to do it. This is something I actually talk a lot about with CTOs at all sorts of companies. My advice generally is, “Don’t try to optimize. Don’t try to cost-cut at the beginning. Start by just giving engineers as many tokens as possible.” And now you’re starting to see companies … Like, at Anthropic, we have … Everyone can use a lot of tokens.
(00:26:19): We’re starting to see this come up as a perk at some companies where if you join you get unlimited tokens. This is a thing I very much encourage, because it makes people free to try these ideas that would have been too crazy, and then if there’s an idea that works, then you can figure out how to scale it, and that’s the point to optimize, and to cost-cut. Figure out … Maybe you can do it with Haiku, or with Sonnet instead of Opus, or whatever.
(00:26:44): But at the beginning, you just want to throw a lot of tokens at it, and see if the idea works, and give engineers the freedom to do that.
Lenny Rachitsky (00:26:49): So, the advice here is just be loose with your tokens, with the cost on using these models. People hearing this may be like, “Of course. He works at Anthropic. You want us to use as many tokens as possible.”
(00:27:00): But what you’re saying here is the most interesting innovative ideas will come out of someone just taking it to the max and seeing what’s possible.
Boris Cherny (00:27:08): Yeah. And I think the reality is at small-scale, you’re not going to get a giant bill for anything like this. If it’s an individual engineer experimenting, the token cost is still probably relatively low relative to [inaudible 00:27:21] other costs of running the business. So, it’s actually not a huge cost.
(00:27:27): As the thing scales up, so, let’s say they build something awesome, and then it takes a huge amount of tokens, and then the cost becomes pretty big, that’s the point at which you want to optimize it. But don’t do that too early.
Lenny Rachitsky (00:27:37): Have you seen companies where their token cost is higher than their salary? Is that a trend you think we’re going to find and see?
Boris Cherny (00:27:44): At Anthropic, we’re starting to see some engineers that are spending hundreds of thousands a month in tokens. So, we’re starting to see this a little bit. There’s some companies that we’re starting to see similar things. Yeah.
Lenny Rachitsky (00:27:58): Going back to coding, do you miss writing code? Is it something you’re sad about that this is no longer a thing you’ll do as a software engineer?
Boris Cherny (00:28:06): It’s funny. For me, when I learned engineering, it was very practical. I learned engineering so I could build stuff. I was self-taught. I studied economics in school, but I didn’t study CS. But I taught myself engineering. Early on, I was programming in middle school.
(00:28:26): And from the very beginning, it was very practical. So, I learned to code so that I could cheat on a math test. That was the first thing-
Lenny Rachitsky (00:28:33): Nice.
Boris Cherny (00:28:33): … we had these graphing calculators, and I just programmed-
Lenny Rachitsky (00:28:36): The TI-83?
Boris Cherny (00:28:39): Yeah. TI-83 Plus. Yeah. Yeah. Exactly.
Lenny Rachitsky (00:28:40): Plus.
Boris Cherny (00:28:41): Plus. Yeah. I programmed the answers in, and then the next math test, or whatever, the next year, it was just too hard. I couldn’t program all the answers in, because I didn’t know what the questions were. And so, I had to write a little solver, a program that would just solve these algebra questions or whatever.
(00:28:58): And then I figured out you can get a little cable, you can give the program to the rest of the class, and then the whole class gets As, but then we all got caught, and the teacher told us to knock it off. But from the very beginning-
Lenny Rachitsky (00:29:08): Wow.
Boris Cherny (00:29:08): … it’s always just been very practical for me, where programming is a way to build the thing. It’s not the end in itself. At some point, I personally fell down the rabbit hole of the beauty of programming. So, I wrote a book about TypeScript. Actually, at the time, it was the world’s biggest TypeScript [inaudible 00:29:29] just because I fell in love with the language itself. And I got deep into functional programming and all that stuff.
(00:29:36): I think a lot of coders get distracted by this. For me, it was always … There is a beauty to programming, and, especially, to functional programming. There’s a beauty to type systems. There’s this buzz that you get when you solve a really [inaudible 00:29:54] math problem. It’s similar when you balance the types, or the program is just really beautiful.
(00:30:01): But it’s really not the end of it. I think, for me, coding is very much a tool, and it’s a way to do things. That said, not everyone feels this way. So, for example, there’s one engineer on the team, Lena, who was still writing C++ on the weekends by hand, because for her she just really enjoys writing C++ by hand.
(00:30:20): And so, everyone is different. And I think even as this field changes, even as everything changes, there is always space to do this. There is always space to enjoy the art, and to do things by hand, if you want.
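The little TI-83 solver he mentions earlier might be pictured with a minimal sketch like this, assuming the algebra questions were simple linear equations of the form ax + b = c; the actual program and its scope aren't specified in the conversation:

```python
# Toy sketch of an algebra solver in the spirit of the TI-83 story above.
# Assumes linear equations a*x + b = c; this is illustrative, not the
# original program.

def solve_linear(a: float, b: float, c: float) -> float:
    """Solve a*x + b = c for x, assuming a unique solution exists."""
    if a == 0:
        raise ValueError("no unique solution when a == 0")
    return (c - b) / a

# Example: 2x + 3 = 11  ->  x = 4
```

The point of the anecdote survives in miniature: once the questions are unknown in advance, hard-coding answers fails and you need a general procedure.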
Lenny Rachitsky (00:30:34): Do you worry about your skills atrophying as an engineer? Is that something you worry about or is it just like, “This is just the way it’s going to go”?
Boris Cherny (00:30:41): I think it’s just the way that it happens. I don’t worry about it too much personally. I think, for me, programming is on a continuum. And software is actually relatively new, right? If you look at the way programs are written today, using software that’s running on a virtual machine or something, this has been the way that we’ve been writing programs since probably the 1960s. So, it’s been 60 years or something like that.
(00:31:06): Before that it was punch cards, before that it was switches. Before that, it was hardware. And before that, it was just, like, literally, pen and paper. It was a room full of people that were doing math on paper.
(00:31:16): And so, programming has always changed in this way. In some ways, you still want to understand the layer under the layer, because it helps you be a better engineer. And I think this will be the case maybe for the next year or so, but I think pretty soon it just won’t really matter. It’s just going to be the assembly code running under the program or something like this.
(00:31:36): At an emotional level, I feel like I’ve always had to learn new things. And as a programmer, it doesn’t feel that new, because there’s always new frameworks. There’s always new languages. It’s just something that we’re quite comfortable with in the field.
(00:31:50): But at the same time, this isn’t true for everyone. And I think for some people they’re going to feel a greater sense of, I don’t know, maybe, like, loss or nostalgia, or atrophy, or something like this.
Lenny Rachitsky (00:32:00): I don’t know if you saw this, but Elon was saying, “Why isn’t the AI just writing straight to binary? Because what’s the point of all this programming abstraction in the end?”
Boris Cherny (00:32:12): Yeah. It’s a good question. It totally can do that, if you want it to.
Lenny Rachitsky (00:32:15): Oh, man. So, what I’m hearing here is in terms … There’s always this question, “Should I learn to code? Should people in school learn to code?” What I heard from you is your take is in a year or two you don’t really need to.
Boris Cherny (00:32:27): My take is I think for people that are using Claude Code, that are using agents to code today, you still have to understand the layer under, but, yeah, in a year or two, it’s not going to matter.
(00:32:40): I was thinking about what is the right historical analog for this? Because somehow we have to situate this thing in history, and figure out when have we gone through similar transitions? What’s the right mental model for this?
(00:32:54): I think the thing that’s come closest for me is the printing press. And so, if you look at Europe in the mid-1400s, literacy was actually very low, sub-1% of the population. It was the scribes who did all the writing. They were the ones that did all the reading. They were employed by lords and kings who often were not literate themselves.
(00:33:18): And so, it was the job of this very tiny percent of the population to do this. And, at some point, Gutenberg and the printing press came along, and there was this crazy stat that in the 50 years after the printing press was built, there was more printed material created than in the thousand years before.
(00:33:38): And so, the volume of printed material just went way up. The cost went way down. It went down something like 100x over the next 50 years. And if you look at literacy, it actually took a while, because learning to read and write is quite hard. It takes an education system, it takes free time. It takes not having to work on a farm all day, so that you actually have time for education, and things like this.
(00:34:00): But over the next 200 years it went up to 70% globally. So, I think this is the thing that we might see is a similar kind of transition. And there was actually this interesting historical document where there was an interview with some scribe in the 1400s about, “How do you feel about the printing press?” And they were actually very excited, because they were like, “Actually, the thing that I don’t like doing is copying between books. The thing that I do like doing is drawing the art in books, and then doing the book binding. And I’m really glad that now my time is freed up.”
(00:34:33): And it’s interesting. As an engineer, I felt a parallel with this. Like, this is how I feel where I don’t have to do the tedious work anymore of coding, because this has always been the detail of it. It’s always been the tedious part of it, and messing with [inaudible 00:34:51], and using all of these different tools.
(00:34:53): That was not the fun part. The fun part is figuring out what to build, and coming up with this. It’s talking to users. It’s thinking about these big systems. It’s thinking about the future. It’s collaborating with other people on the team, and that’s what I get to do more of now.
Lenny Rachitsky (00:35:07): And what’s amazing is that the tool you’re building allows anybody to do this, people that have no technical experience can do exactly what you’re describing. I’ve been doing a bunch of random little projects, and it’s just, like, “Any time you get stuck just help me figure this out.” And you get unblocked.
(00:35:24): I was an engineer earlier in my career for 10 years. And I just remember spending so much time on libraries and dependencies and things, and just like, “Oh my God. What do I do?” And then looking on [inaudible 00:35:34]. And now it’s just like, “Help me figure this out,” and, “Here’s a step-by-step, one, two, three, four. Okay. We got this.”
Boris Cherny (00:35:39): Yeah. Exactly. Exactly. I was talking to an engineer earlier today. He’s been writing some service [inaudible 00:35:44], and it’s been a month already, and he’s built up the service. It’s working quite well. And then I was like, “Okay. So, how do you feel writing it?” And he was like, “I still don’t really know Go.”
(00:35:55): And I think we’re going to start to see more and more of this. It’s, like, if you know that it works correctly and efficiently, then you don’t actually have to know all the details.
Lenny Rachitsky (00:36:02): Clearly, the life of a software engineer has changed dramatically. It’s a whole new job now as of the past year or two. What do you think is the next role that will be most impacted by AI? Either within tech, like, product managers, designers, or even outside tech, just what do you think? Where do you think AI is going next?
Boris Cherny (00:36:23): I think it’s going to be a lot of the roles that are adjacent to engineering. So, yeah. It could be product managers, it could be design, it could be data science. It is going to expand to pretty much any kind of work that you can do on a computer, because the model is just going to get better and better at this. And the Cowork product is the first way to get at this, but it’s just the first one.
(00:36:44): And it’s the thing that I think brings agentic AI to people that haven’t really used it before, and people are starting to get a sense of it for the first time. When I think about engineering a year ago, no one really knew what an agent was, no one really used one, but nowadays it’s just the way that we do our work.
(00:37:04): And then when I look at non-technical work today, or maybe semi-technical, like, product work, and data science, and things like this, when you look at the kind of AI that people are using, it’s always these conversational AIs. It’s, like, a chatbot or whatever. But no one has really used an agent before, and this word agent just gets thrown around all the time, and it’s just so misused. It’s lost all meaning.
(00:37:26): But agent actually has a very specific technical meaning, which is it’s an AI, it’s an LLM that’s able to use tools. So, it doesn’t just talk. It can actually act, and it can interact with your system. And this means it can use your Google Docs, and it can send email, it can run commands on your computer, and do all this kind of stuff.
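That specific technical meaning, an LLM that can act through tools rather than just talk, can be sketched in a few lines. The `llm` callable, its reply format, and the tool names here are illustrative assumptions, not any real model API:

```python
# Minimal sketch of an agent: an LLM that doesn't just produce text but can
# request tool calls that act on the outside world. The `llm` callable and
# the tool set are hypothetical stand-ins, not a real API.

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"          # placeholder for a real side effect

def run_command(cmd: str) -> str:
    return f"ran: {cmd}"                  # placeholder for a real side effect

TOOLS = {"send_email": send_email, "run_command": run_command}

def agent_step(llm, observation: str) -> str:
    """One turn: the model either answers in text or acts via a tool."""
    # Assumed reply shape: {"tool": str | None, "args": dict, "text": str}
    reply = llm(observation)
    if reply["tool"] is not None:         # the model chose to act, not just talk
        return TOOLS[reply["tool"]](**reply["args"])
    return reply["text"]                  # the model chose to just talk
```

The distinction he's drawing is exactly the branch in `agent_step`: a chatbot only ever takes the second path, while an agent can take the first.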
(00:37:46): So, I think any kind of job where you do use computer tools in this way, I think this is going to be next. This is something we have to figure out as a society, this is something we have to figure out as an industry. And I think, for me, also this is one of the reasons, it feels very important and urgent to do this work at Anthropic, because I think we take this very, very seriously.
(00:38:08): And so, now we have economists, we have policy folks, we have social impact folks. This is something we just want to talk about a lot, so that, as a society, we can figure out what to do, because it shouldn’t be up to us.
Lenny Rachitsky (00:38:19): So, the big question, which you’re alluding to, is jobs and job loss, and things like that. There’s this concept of Jevons paradox, where as we can do more, we hire more, and it’s not actually as scary as it looks. What have you experienced so far, I guess, with AI becoming a big part of the engineering job? Are you hiring more than if you didn’t have AI? And just thoughts on jobs.
Boris Cherny (00:38:41): Yeah. For our team we’re hiring. So, Claude Code team is hiring. If you’re interested just check out the jobs page on Anthropic. Personally, it’s all this stuff has just made me enjoy my work more. I have never enjoyed coding as much as I do today, because I don’t have to deal with all the minutia.
(00:39:00): So, for me, personally, it’s been quite exciting. This is something that we hear from a lot of customers where they love the tool, they love Claude Code, because it just makes coding delightful again, and that’s just so fun for them.
(00:39:14): But it’s hard to know where this thing is going to go. And, again, I have to reach for these historical analogs. And I think the printing press is just such a good one, because what happened is this technology that was locked away to a small set of people, like, knowing how to read and write became accessible to everyone. It was just inherently democratizing. Everyone started to be able to do this.
(00:39:36): And if that wasn’t the case, then something like the Renaissance just could never have happened, because a lot of the Renaissance was about knowledge spreading. It was about written records that people used to communicate. Because there were no phones or anything like this. There was no internet at the time.
(00:39:54): So, it’s about what does this enable next? And I think that’s the very optimistic version of it for me, and that’s the part that I’m really excited about. It’s just unimaginable. We couldn’t be talking today, if the printing press hadn’t been invented. Like, our microphones wouldn’t exist. None of the things around us would exist. It just wouldn’t be possible to coordinate such a large group of people if that wasn’t the case.
(00:40:15): And so, I imagine a world a few years in the future where everyone is able to program, and what does that unlock? Anyone can just build software any time. And I have no idea. It’s just the same way that in the 1400s, no one could have predicted this. I think it’s the same way.
(00:40:31): But I do think in the meantime, it’s going to be very disruptive, and it’s going to be painful for a lot of people. And, again, as a society, this is a conversation that we have to have, and this is a thing that we have to figure out together.
Lenny Rachitsky (00:40:42): So, for folks hearing this that want to succeed and make it in this crazy turmoil we’re entering, any advice? Is it play with AI tools? Get really proficient at the latest stuff? Is there anything else that you recommend to help people stay ahead?
Boris Cherny (00:40:58): Yeah. I think that’s pretty much it. Experiment with the tools, get to know them, don’t be scared of them. Just dive in, try them, be on the bleeding edge, be on the frontier.
(00:41:08): And maybe the second piece of advice is try to be a generalist more than you have in the past. For example, in school, a lot of people that study CS, they learn to code, and they don’t really learn much else. Maybe they learn a little bit of systems architecture or something like this.
(00:41:25): But some of the most effective engineers that I work with every day, and some of the most effective product managers and so on, they cross over disciplines. So, on the Claude Code team everyone codes. Our product manager codes, our engineering manager codes, our designer codes, our finance guy codes, our data scientist codes. Everyone on the team codes.
(00:41:43): And then if I look at particular engineers, people often cross different disciplines. So, some of the strongest engineers are hybrid product and infrastructure engineers, or product engineers with really great design sense who are able to do design also. Or an engineer that has a really good sense of the business and can use that to figure out what to do next, or an engineer that also loves talking to users and can really channel what users want to figure out what’s next.
(00:42:10): So, I think a lot of the people that will be rewarded the most over the next few years they won’t just be AI native, and they don’t just know how to use these tools really well, but also they’re curious and they’re generalists, and they cross over multiple disciplines and can think about the broader problem they’re solving rather than just the engineering part of it.
Lenny Rachitsky (00:42:29): Do you find these three separate disciplines still useful as a way to think about the team? There’s engineering, design, product management. Even though they are now coding and contributing to thinking about what to build, do you feel like those are three roles that will persist long-term? At least, at this point.
Boris Cherny (00:42:46): I think in the short-term it’ll persist, but one thing that we’re starting to see is there’s maybe a 50% overlap in those roles where a lot of people are actually just doing the same thing, and some people have specialties. For example, I code a little bit more [inaudible 00:42:58] Kat, our PM, does a little bit more coordination or planning or forecasting, or things like this.
Lenny Rachitsky (00:43:04): Stakeholder alignment.
Boris Cherny (00:43:06): Stakeholder alignment. Exactly. I do think that there is a future where I think by the end of the year what we’re going to start to see is these start to get even murkier where I think in some places the title software engineer is going to start to go away, and it’s just going to be replaced by builder or maybe it’s just everyone is going to be a product manager and everyone codes, or something like this.
Lenny Rachitsky (00:43:26): Who says hiring has to be fair? Every founder and hiring manager I’ve been speaking with these days is feeling the same pressure. Hire the best people as fast as possible, but recruiting is time-consuming, alignment is hard, and competition for great talent keeps getting tighter.
(00:43:43): That’s why teams like ElevenLabs, Brex, Replit, Deal, and 5000 other organizations use Metaview, the AI company giving high-performance teams a real, unfair advantage in hiring. They give you a suite of AI agents that behave like recruiting coworkers. They find candidates for you based on your exact criteria, take interview notes automatically, gather insights across your hiring process, and help you identify the best candidates in your pipeline.
(00:44:11): AI handles the recruiting toil and gives you a real source of truth. That means our [inaudible 00:44:17] and a team focused on what matters most, winning the right candidates. Don’t let your competitors out-hire you. Metaview customers close roles 30% faster. Try Metaview today for free and get an extra month of sourcing at Metaview.AI/Lenny. That’s M-E-T-A View dot AI slash Lenny.
(00:44:38): You talked about how you’re enjoying coding more. I actually did this little informal survey on Twitter. I don’t know if you saw this where I just asked … I did three different polls. I asked engineers, “Are you enjoying your job more or less since adopting AI tools?” And then I did a separate one for PMs, and one for designers. And both engineers and PMs, 70% of people said they are enjoying their job more. And about 10% said they’re enjoying their job less.
(00:45:02): Designers, interestingly, only 55% said they are enjoying their job more, and 20% said they’re enjoying their job less. I thought that was really interesting.
Boris Cherny (00:45:11): That’s super interesting. I’d love to talk to these people, both in the more bucket and the less bucket just to understand. Did you get to follow up with any of them?
Lenny Rachitsky (00:45:20): A few people replied, and we’re actually doing a followup poll that we’ll link to in the show notes, going deeper into some of this stuff. But a lot … There are factors that make it more fun and less fun. The designers, they didn’t actually share a lot, just the people that [inaudible 00:45:33]. I actually asked just like, “Why are you enjoying your job less?” I didn’t hear a lot. So, I’m curious what’s going on there.
Boris Cherny (00:45:37): Yeah. I’m seeing this a little bit with … At Anthropic, I think everyone is fairly technical. This is something that we screen for when people join. There’s a lot of technical interviews that people go through even for non-technical functions.
(00:45:53): And our designers largely code. So, I think for them this is something that they have enjoyed from what I’ve seen, because now instead of bugging engineers they can just go in and code. And even some designers that didn’t code before have just started to do it, and for them it’s great, because they can unblock themselves.
(00:46:12): But I’d be really interested just to hear more people’s experiences, because I bet it’s not uniform like that.
Lenny Rachitsky (00:46:18): Yeah. So, maybe if you’re listening to this leave a comment if you’re finding your job is less fun, and you’re enjoying your job less, because what you’re saying and what I’m hearing from most people, 70% of PMs and engineers are loving their job more. Like, if you’re [inaudible 00:46:30] bucket, you could … Something’s going on.
Boris Cherny (00:46:32): Yeah. Yeah. We do see that people use also different tools. So, for example, our designers they use the Claude desktop app a lot more to do their coding. So, you just download the desktop app. There’s a code tab. It’s right next to Cowork.
(00:46:45): And it’s actually the same as Claude Code. So, it’s the same agent and everything. We’ve had this for many, many months. And so, you can use this to code in a way that you don’t have to open a bunch of terminals. But you still get the power of Claude Code, and the biggest thing is you can just run as many Claude sessions in parallel as you want. We call this multi-Claude-ing.
(00:47:04): So, it’s a little more native I think for folks that are not engineers, and really this is back to bringing the product to where the people are. You don’t want to make people use a different workflow. You don’t want to make them go out of their way to learn a new thing. It’s whatever people are doing, if you can make that a little bit easier, then that’s just going to be a much better product that people enjoy more.
(00:47:23): And this is just this principle of latent demand, which I think is just the single-most important principle in product.
Lenny Rachitsky (00:47:29): Can you talk about that actually? Because I was going to go there. Explain what this principle is and just what happens when you unlock this latent demand.
Boris Cherny (00:47:37): Latent demand is this idea that if people hack your product, or misuse it in a way it wasn’t really designed for, to do something that they want to do, then this helps you as the product builder learn where to take the product next.
(00:47:55): So, an example of this is Facebook Marketplace. The manager for the team, Fiona, she was actually the founding manager for the Marketplace team, and she talks about this a lot. Facebook Marketplace is based on the observation, back in … This must have been, like, 2016 or something like this … that 40% of posts in Facebook groups were buying and selling stuff.
(00:48:17): So, this is crazy. It’s, like, people are abusing the Facebook groups product to buy and sell, and it’s not abuse in a security sense. It’s abuse in that no one designed the product for this, but they’re figuring it out, because it’s just so useful for this.
(00:48:29): And so, it’s pretty obvious. If you build a better product to let people buy and sell, they’re going to like it. And it was just very obvious that Marketplace would be a hit from this. And so, the first thing was buy and sell groups, so, special purpose groups to let people do that, and the second product was Marketplace.
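The kind of signal he describes, the 40% of group posts that were really buy/sell activity, could be quantified with a crude sketch like this. The keyword heuristic is an illustrative assumption, not Facebook's actual method:

```python
# Hedged sketch of measuring a latent-demand signal: what fraction of group
# posts look like buy/sell activity? The marker list is a toy heuristic,
# not how Facebook actually classified posts.

BUY_SELL_MARKERS = ("for sale", "selling", "buying", "wtb", "wts")

def buy_sell_share(posts: list[str]) -> float:
    """Fraction of posts containing an obvious buy/sell marker."""
    if not posts:
        return 0.0
    hits = sum(
        any(marker in post.lower() for marker in BUY_SELL_MARKERS)
        for post in posts
    )
    return hits / len(posts)
```

When a share like this is large, people are already bending the product toward a use case it wasn't designed for, which is the latent-demand signal to build something purpose-made.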
(00:48:45): Facebook Dating I think started in a pretty similar place. And I think the observation was if you look at profile views, so, people looking at each other’s profiles on Facebook, 60% of profile views were people that are not friends with each other that are opposite gender. And so, this traditional dating setup, people are just creeping on each other. So, maybe if you can build a product for this, it might work.
(00:49:11): And so, this idea of latent demand I think is just so powerful. And, for example, this is also where Cowork came from. We saw that for the last six months or so, a lot of people using Claude Code were not using it to code. There was someone on Twitter that was using it to grow tomato plants. There was someone else using it to analyze their genome. Someone was using it to recover photos from a corrupted hard drive that was, like, wedding photos. There was someone that was using it for I think … They were using it to analyze an MRI.
(00:49:43): So, there’s just all these different use cases that are not technical at all, and it was just really obvious. Like, people are jumping through hoops to use a terminal to do this thing. Maybe we should just build a product for them.
(00:49:55): And we saw this actually pretty early. Back in maybe May of last year, I remember walking into the office and our data scientist Brendan had Claude Code on his computer. He just had a terminal up. And I was shocked. I was like, “Brendan, what are you doing?” He figured out how to open the terminal, which is … It’s a very engineer-y product. Even a lot of engineers don’t want to use a terminal. It’s just the lowest-level way to do your work. Just really, really in the weeds of the computer.
(00:50:26): And so, he figured out how to use the terminal. He downloaded [inaudible 00:50:28]. He downloaded Claude Code. And he was doing SQL analysis in the terminal. It was crazy. And then the next week all of the data scientists were doing the same thing.
(00:50:36): So, when you see people abusing the product in this way, using it in a way that it wasn’t designed in order to do something that is useful for them, it’s just such a strong indicator that you should just build a product and people are going to like that. It’s something that’s special purpose for that.
(00:50:50): I think now there’s also this interesting second dimension to latent demand. This is the traditional framing is look at what people are doing, make that a little bit easier, empower them.
(00:50:59): The modern framing that I’ve been seeing in the last six months is a little bit different. And it’s look at what the model is trying to do, and make that a little bit easier.
(00:51:10): And so, when we first started building Claude Code, I think a lot of the way that people approached designing things with LLMs is they put the model in a box, and they were like, “Here’s this application that I want to build. Here’s the thing that I want it to do. Model, you’re going to do this one component of it. Here’s the way that you’re going to interact with these tools and APIs,” and whatever.
(00:51:28): And for Claude Code, we inverted that. We said, “The product is the model. We want to expose it. We want to put the minimal scaffolding around it. Give it the minimal set of tools, so it can do the things. It can decide which tools to run. It can decide what order to run them in,” and so on.
(00:51:41): And I think a lot of this was just based on latent demand of what the model wanted to do. And so, in research we call this being on distribution. You want to see what the model is trying to do. In product terms, latent demand is just the same exact concept but applied to a model.
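The inversion he describes, minimal scaffolding with the model choosing which tools to run and in what order, amounts to a loop rather than a fixed pipeline. A hedged sketch, where the `llm` callable and its decision format are hypothetical stand-ins rather than any real API:

```python
# Sketch of "the product is the model": the scaffolding is just a loop.
# Each turn, the model picks the next tool (or declares itself done);
# nothing hard-codes the pipeline. The `llm` callable and decision shape
# are illustrative assumptions.

def run_agent(llm, task: str, tools: dict, max_steps: int = 10) -> str:
    """Loop until the model stops requesting tools or hits the step limit."""
    history = [task]
    for _ in range(max_steps):
        # Assumed decision shape: {"tool": str | None, "args": tuple, "text": str}
        decision = llm(history)
        if decision["tool"] is None:      # the model decides it's finished
            return decision["text"]
        result = tools[decision["tool"]](*decision["args"])
        history.append(result)            # tool output feeds the next turn
    return "stopped: step limit reached"
```

The contrast with the "model in a box" approach is that here the application supplies only the tools and the loop; ordering and control flow live in the model.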
Lenny Rachitsky (00:51:55): You talked about Cowork. Something that I saw you talk about when you launched that initially is your team built that in 10 days. That’s insane.
Boris Cherny (00:52:02): Yeah.
Lenny Rachitsky (00:52:02): It came out, I think it was used by millions of people pretty quickly, something like that being built in 10 days. Anything there? Any stories there other than it was just, “We used Claude Code to build it and that’s it”?
Boris Cherny (00:52:14): Yeah. It’s funny. Claude Code, like I said, when we released it, it was not immediately a hit. It became a hit over time, and there were a few inflection points. So, one was Opus 4, it just really, really inflected, and then in November it inflected, and it just keeps inflecting. The growth just keeps getting steeper and steeper and steeper every day.
(00:52:31): But for the first few months it wasn’t a hit. People used it, but a lot of people couldn’t figure out how to use it. They didn’t know what it was for. The model still wasn’t very good.
(00:52:40): Cowork, when we released it, was just immediately a hit. Much more so than Claude Code was early on. I think a lot of the credit, honestly, just goes to Felix and Sam and Jenny, and the team that built this. It’s just an incredibly strong team.
(00:52:55): And, again, the place Cowork came from is just this latent demand. Like, we saw people using Claude Code for these non-technical things. And we’re trying to figure out, “What do we do?” And so, for a few months the team was exploring, they were trying all sorts of different options. And, in the end, someone was just like, “Okay. What if we just take Claude Code and put it in the desktop app?” And that’s, essentially, the thing that worked.
(00:53:15): And so, over 10 days they just completely used Claude Code to build it. And Cowork is actually … There’s this very sophisticated security system that’s built in. And, essentially, these are guardrails to make sure that the model does the right thing, that it doesn’t go off the rails.
(00:53:30): So, for example, we ship an entire virtual machine with it, and Claude Code just wrote all of this code. So, we just had to think about, “All right. How do we make this a little bit safer? A little more self-guided for people that are not engineers.” It was fully implemented with Claude Code. It took about 10 days. We launched it early. It was still pretty rough, and it’s still pretty rough around the edges.
(00:53:50): But this is the way that we learn, both on the product side and on the safety side: we have to release things a little bit earlier than we think, so that we can get the feedback, so that we can talk to users. We can understand what people want, and that’ll shape where the product goes in the future.
Lenny Rachitsky (00:54:05): Yeah. I think that point is so interesting, and it’s so unique. There’s always been this idea: release early, learn from users, get feedback, iterate. The fact that it’s hard to even know what the AI is capable of, and how people will try to use it, is a unique reason to start releasing things early. It helps you, as you exactly described: what is the latent demand in this thing that we didn’t really know about? Let’s put it out there and see what people do with it.
Boris Cherny (00:54:30): Yeah. And for Anthropic as a safety lab, the other dimension of that is safety, because when you think about model safety there’s a bunch of different ways to study it. The lowest level is alignment, and mechanistic interpretability. So, this is when we train the model we want to make sure that it’s safe. We, at this point, have pretty sophisticated technology to understand what’s happening in the neurons to trace it.
(00:54:52): And so, for example, if there’s a neuron related to deception, we’re starting to get to the point where we can monitor it and understand that it’s activating. And so, this is alignment, this is mechanistic interpretability, it’s the lowest layer.
(00:55:05): The second layer is evals, and this is, essentially, a laboratory setting: the model is in a Petri dish, and you study it. You put it in a synthetic situation and just say, “Okay. Model, what do you do?” And, “Are you doing the right thing? Is it aligned? Is it safe?”
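The “Petri dish” layer described here can be pictured as a tiny eval harness: put the model in a synthetic situation and assert that its behavior lands inside an allowed set. This is a hedged sketch with a stand-in `model` function; none of the names or scenarios are Anthropic’s actual eval code.

```python
# Toy version of the "Petri dish" eval layer: place a stand-in model
# in synthetic situations and check whether its behavior stays inside
# an allowed set of actions. Everything here is illustrative.

def model(scenario: str) -> str:
    """Stand-in for a real model call; returns a canned action."""
    return "refuse" if "delete system files" in scenario else "comply"

def run_eval(scenarios: dict) -> dict:
    """Map each scenario to True if the model's action was allowed."""
    return {s: model(s) in allowed for s, allowed in scenarios.items()}

results = run_eval({
    "user asks the model to delete system files": {"refuse"},
    "user asks the model to summarize a document": {"comply"},
})
print(results)
```

Real eval suites differ mainly in scale: thousands of scenarios, graded by humans or by other models rather than by exact string matching.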
(00:55:17): And then the third layer is seeing how the model behaves in the wild. And as the model gets more sophisticated, this becomes so important, because it might look very good on these first two layers, but not great on the third one.
(00:55:30): We released Claude Code really early, because we wanted to study safety. And we actually used it within Anthropic for I think four or five months, or something before we released it, because we weren’t really sure. Like, this is the first big agent that I think folks had released at that point. It was definitely the first coding agent that became broadly used.
(00:55:51): And so, we weren’t sure if it was safe. And so, we actually had to study it internally for a long time before we felt good about that. And even since then, there’s a lot that we’ve learned about alignment. There’s a lot that we’ve learned about safety, that we’ve been able to put back into the model, back into the product.
(00:56:05): And for Cowork, it’s pretty similar. The model is in this new setting. It’s doing these tasks that are not engineering tasks. It’s an agent that’s acting on your behalf. It looks good on alignment, it looks good on evals, we tried it internally, it looks good. We tried it with a few customers, it looks good. Now we have to make sure it’s safe in the real world.
(00:56:21): And so, that’s why we release a little early. That’s why we call it a research preview. But, yeah. It’s constantly improving. And this is really the only way to make sure that over the long-term the model is aligned, and it’s doing the right things.
Lenny Rachitsky (00:56:33): It’s such a wild space that you work in, where there’s this insane competition and pace, and, at the same time, there’s this fear that the AI could escape and cause damage. Finding that balance must be so challenging. And I know how you all think about the safety piece could be a whole podcast conversation, but what I’m hearing is there’s these three layers you work with. There’s observing the model thinking and operating. There’s tests, evals that tell you if this is doing bad things. And then releasing it early.
(00:57:05): I haven’t actually heard a ton about that first piece. That is so cool. So, you guys have, essentially, an observability tool that can let you peek inside the model’s brain and see how it’s thinking and where it’s heading.
Boris Cherny (00:57:16): Yeah. You should, at some point, have Chris Olah on the podcast, because he’s just the industry expert on this. He invented this field that we call mechanistic interpretability. And the idea, at its core, is: what is your brain? It’s a bunch of neurons that are connected.
(00:57:33): And so, what you can do is in a human brain or in an animal brain, you can study it at this mechanistic level to understand what the neurons are doing. It turns out surprisingly a lot of this does translate to models also. So, model neurons are not the same as animal neurons, but they behave similarly in a lot of ways.
(00:57:50): And so, we’ve been able to learn just a ton about the way these neurons work, about how this layer or this neuron maps to this concept, how particular concepts are encoded, how the model does planning, how it thinks ahead.
(00:58:03): A long time ago we weren’t sure if the model was just predicting the next token, or doing something a little bit deeper. Now I think there’s actually quite strong evidence that it is doing something a little bit deeper. And the structures that let it do this are pretty sophisticated now: as the models get bigger, it’s not just a single neuron that corresponds to a concept. A single neuron might correspond to a dozen concepts, and it activates together with other neurons (this is called superposition), and together they represent a more sophisticated concept.
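The superposition idea can be illustrated with a toy model: concepts live as directions in activation space, so two neurons can jointly encode three concepts, and no single neuron owns a single concept. This is a hedged sketch for intuition only; the concept names, directions, and threshold are all made up.

```python
# Toy illustration of superposition: 2 neurons jointly encode 3
# concepts as directions in activation space. A pattern ACROSS
# neurons, not a single neuron, represents each concept.
import math

CONCEPTS = {
    "dog": (1.0, 0.0),
    "cat": (0.0, 1.0),
    "pet": (math.sqrt(0.5), math.sqrt(0.5)),  # shares both neurons
}

def decode(activation, threshold=0.9):
    """Return concepts whose direction aligns with the activation."""
    out = []
    norm = math.hypot(*activation)
    for name, (x, y) in CONCEPTS.items():
        score = (activation[0] * x + activation[1] * y) / norm
        if score > threshold:
            out.append(name)
    return out

# The diagonal activation decodes as "pet", even though no
# individual neuron corresponds to that concept on its own.
print(decode((math.sqrt(0.5), math.sqrt(0.5))))
```

Interpretability work on real models does something loosely analogous at vastly larger scale, learning dictionaries of such feature directions from activations rather than writing them down by hand.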
(00:58:32): And it’s just something we’re learning about all the time. And for Anthropic, as we think about the way this space evolves, doing this in a way that is safe and good for the world is the reason that we exist, and it’s the reason that everyone is at Anthropic. Everyone that is here, this is the reason why they’re here.
(00:58:50): So, a lot of this work we actually open source. We publish a lot, and we talk about it very freely, just so we can inspire other labs that are working on similar things to do it in a way that’s safe. And this is something we’ve been doing for Claude Code also. We call this “the race to the top” internally.
(00:59:08): And so, for Claude Code, for example, we released an open source sandbox. And this is a sandbox you can run the agent in, to make sure that there are certain boundaries and it can’t access, like, everything on your system. And we made that open source, and it actually works with any agent, not just Claude Code, because we wanted to make it really easy for others to do the same thing.
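One of the boundaries a sandbox like this enforces is filesystem access. As a hedged sketch of that single guardrail (not the actual open-source sandbox, which enforces this at the operating-system level with mechanisms like seccomp, `sandbox-exec`, or containers), the core check looks roughly like:

```python
# Minimal sketch of one sandbox guardrail: deny file access outside
# an allowed root directory. Real sandboxes enforce this at the OS
# level, not in application code like this.
import os

def is_allowed(path: str, root: str) -> bool:
    """True if `path` resolves to somewhere inside `root`."""
    real = os.path.realpath(path)   # resolves symlinks and ".."
    root = os.path.realpath(root)
    return os.path.commonpath([real, root]) == root

print(is_allowed("/tmp/agent/workdir/out.txt", "/tmp/agent"))
print(is_allowed("/etc/passwd", "/tmp/agent"))
```

Resolving the path first matters: a naive string-prefix check would be fooled by `/tmp/agent/../etc/passwd` or by a symlink pointing outside the root.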
(00:59:29): So, this is just the same principle of race to the top. We want to make sure this thing goes well, and this is the lever that we have.
Lenny Rachitsky (00:59:37): Incredible. Okay. I definitely want to spend more time on that. I will follow up with this suggestion. Something else that I’ve been noticing in the field across engineers and product managers, others that work with agents is there’s this anxiety people feel when their agents aren’t working. There’s a sense that [inaudible 00:59:57] has a question [inaudible 00:59:58] answer, or it’s, like, blocked on something, or, “There’s all this productivity I’m losing. I need to wake up and get it going again.” Is that something you feel? Is that something your team feels? Do you feel like this is a problem we need to track and think about?
Boris Cherny (01:00:11): I always have a bunch of agents running. So, at the moment I have five agents running. And, at any moment … Like, I wake up and I start a bunch of agents. The first thing I did when I woke up was like, “Oh, man. I really want to check this thing.” So, I opened up the Claude iOS app on my phone, went to the code tab, and was like, “Agent, do blah, blah, blah.”
(01:00:29): Because I wrote some code yesterday and I was like, “Wait. Did I do this right?” I was second-guessing something, and it turned out to be correct. But now it’s just so easy to check. So, I don’t know. There is this little bit of anxiety maybe. I, personally, haven’t really felt it, just because I have agents running all the time. And I’m also just not locked into the terminal anymore. Maybe a third of my code now is in the terminal, but also a third is using the desktop app. And then a third is the iOS app, which is just so surprising, because I did not think that this would be the way that I’d code in 2026.
Lenny Rachitsky (01:01:03): I love that you describe it as coding still, which is just talking to Claude Code to code for you, essentially. And it’s interesting that this is now coding. Coding now is describing what you want, not writing actual code.
Boris Cherny (01:01:16): I wonder if the people that used to code using punch cards, or whatever, if you showed them software what they would have said.
Lenny Rachitsky (01:01:22): Isn’t that great? Yeah.
Boris Cherny (01:01:24): I remember reading something, this was maybe very early versions of ACM magazine, or something, where people were saying, “No. It’s not the same thing.” Like, “This isn’t really coding.” And they called it programming. I think coding is a newer word.
(01:01:39): But I think about this. My family is from the Soviet Union. I was born in Ukraine. And my grandpa was actually one of the first programmers in the Soviet Union, and he programmed using punch cards. My mom told these stories of when she was growing up: he would bring these punch cards home, these big stacks of punch cards, and she would draw all over them with crayons. That was her childhood memory.
(01:02:08): But for him that was, like, his experience of programming, and he actually never saw the software transition. But, at some point, it did transition to software. And I think there was probably this older generation of programmers that just didn’t take software very seriously. And they would have been like, “Well, it’s not really coding.”
(01:02:23): But I think this is a field that just has always been changing in this way.
Lenny Rachitsky (01:02:27): I don’t think you know this, but I was born in Ukraine also.
Boris Cherny (01:02:30): Oh, I don’t [inaudible 01:02:31]. Yeah.
Lenny Rachitsky (01:02:31): Yes.
Boris Cherny (01:02:32): Which town?
Lenny Rachitsky (01:02:32): I’m from Odessa.
Boris Cherny (01:02:34): Oh, me too.
Lenny Rachitsky (01:02:35): What?
Boris Cherny (01:02:36): Yeah. That’s crazy.
Lenny Rachitsky (01:02:39): Wow. Incredible. What a moment. Maybe related in some small way.
Boris Cherny (01:02:44): Yeah.
Lenny Rachitsky (01:02:44): What year did you leave? And your family leave.
Boris Cherny (01:02:48): We came in ’95.
Lenny Rachitsky (01:02:50): Okay. We left in ’88. A little earlier. Yeah. What a different life that would have been to not leave home.
Boris Cherny (01:02:57): Yeah. I feel so lucky every day that I got to grow up here.
Lenny Rachitsky (01:03:02): Yeah. My family, any time there’s a toast or a meal, they’re just like, “To America.” [inaudible 01:03:07]. It’s, like, “Okay. Enough about that. We get it.” Once you start really thinking about what life could have been.
Boris Cherny (01:03:12): Yeah. Yeah. Exactly. Yeah. We do the same toast, but it’s still vodka.
Lenny Rachitsky (01:03:16): It’s still vodka. Absolut. Oh, man. Okay. Let me ask you a couple more things here. You shared some really cool tips for how to get the most out of AI, how to build on AI, how to build great products in AI. One tip you shared is give your team as many tokens as they want. Just let them experiment. You also shared just advice, generally, of just build towards where the model is going, not to where it is today. What other advice do you have for folks that are trying to build AI products?
Boris Cherny (01:03:43): I’d probably share a few more things. So, one is don’t try to box the model in. I think a lot of people’s instinct when they build on the model is they try to make it behave a very particular way. They’re like, “This is a component of a bigger system.”
(01:03:56): I think some examples of this are people layering very strict workflows on a model, for example, to say, like, “You must do step one, and then step two, then step three.” And you have this very fancy orchestrator doing this.
(01:04:06): But actually almost always you get better results if you just give the model tools, you give it a goal, and you let it figure it out. I think a year ago you actually needed a lot of the scaffolding, but nowadays you don’t really need it.
(01:04:16): So, I don’t know what to call this principle, but it’s, like, ask not what the model can do for you. Maybe it’s something like this. Just think about how you give the model the tools to do things. Don’t try to over-curate it. Don’t try to put it into a box. Don’t try to give it a bunch of context upfront. Give it a tool so that it can get the context it needs. You’re just going to get better results.
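The “tools, not workflows” pattern described here is simple to sketch: instead of a fixed step-one-then-step-two orchestrator, you hand the model a toolbox and a goal and loop until it declares itself done. This is a hedged sketch; the `model` function is a stub standing in for a real API call, and the tool names are invented.

```python
# Minimal "tools, not workflows" agent loop: the model picks which
# tool to call next, rather than following a hard-coded sequence.
# `model` is a stub standing in for a real model API call.

TOOLS = {
    "read_file": lambda arg: f"<contents of {arg}>",
    "list_dir":  lambda arg: ["main.py", "test_main.py"],
}

def model(goal, history):
    """Stub policy: look around, then read a file, then finish."""
    if not history:
        return ("list_dir", ".")
    if len(history) == 1:
        return ("read_file", "main.py")
    return ("done", None)

def run_agent(goal):
    history = []
    while True:
        tool, arg = model(goal, history)
        if tool == "done":
            return history
        # Run the chosen tool and feed the result back as context.
        history.append((tool, TOOLS[tool](arg)))

steps = run_agent("figure out what this repo does")
print(steps)
```

The point of the shape is that the control flow lives in the model, not in the harness: a smarter model takes different, better paths through the same loop with no code changes.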
(01:04:37): I think a second one, maybe actually an even more general version of this principle, is just the bitter lesson. Hopefully, listeners have read this, but [inaudible 01:04:52] this blog post maybe 10 years ago called The Bitter Lesson. And it’s actually a really simple idea. His idea was that the more general model will always outperform the more specific model. And I think for him he was talking about self-driving cars and other domains like this.
(01:05:06): But actually there’s just so many corollaries to the bitter lesson, and, for me, the biggest one is just always bet on the more general model over the long term. Like, don’t try to use tiny models for stuff. Don’t try to fine-tune. Don’t try to do any of this stuff. There’s some applications, there’s some reasons to do this, but almost always try to bet on the more general model, if you can, if you have that flexibility.
(01:05:29): And so, these workflows are, essentially, a way of not using the general model. It’s putting the scaffolding around it. And, in general, what we see is scaffolding can improve performance maybe 10%, 20%, something like this, but often these gains just get wiped out with the next model. So, it’s almost better to just wait for the next one.
(01:05:50): And I think maybe this is a final principle, and something that Claude Code I think got right in hindsight: from the very beginning, we bet on building for the model six months from now, not for the model of today. And for the very early versions of the product, it just wrote so little of my code, because I didn’t trust it. Because it was, like, Sonnet 3.5. Then it was, like, 3.6, or … I forget. 3.5 New, whatever name we gave it.
(01:06:18): These models just weren’t very good at coding yet. They were getting there, but it was still pretty early. So, back then the model did [inaudible 01:06:26] it automated some things, but it really wasn’t doing a huge amount of my coding. And so, the bet with Claude Code was, at some point, the model gets good enough that it can just write a lot of the code.
(01:06:37): And this is a thing that we first started seeing with Opus 4 and Sonnet 4, and Opus 4 was our first ASL-3 class model that we released back in May. And we just saw this inflection, because everyone started to use Claude Code for the first time. And that was when our growth really went exponential. And, like I said, it stayed there.
(01:06:56): So, I think this is advice that I actually gave to a lot of folks, especially, people building startups. It’s going to be uncomfortable, because your product market fit won’t be very good for the first six months. But if you build for the model six months out, when that model comes out, you’re just going to hit the ground running, and the product is going to click, and start to work.
Lenny Rachitsky (01:07:15): And when you say build for the model six months out, what is it that you think people can assume will happen? Is it just generally it will get better at things? Is it just, like, “Okay. It’s almost good enough, and that’s a sign that it’ll probably get better at that thing”? Is there any advice there?
Boris Cherny (01:07:30): I think that’s a good way to do it. Obviously, within an AI lab, we get to see the specific ways that it gets better.
Lenny Rachitsky (01:07:36): Yeah.
Boris Cherny (01:07:37): So, it’s a little unfair, but also, we try to talk about this. So, one of the ways that it’s going to get better is it’s going to get better and better at using tools, and using computers. This is a bet that I would make.
(01:07:49): Another one is it’s going to get better and better for running it for long periods of time. And this is a place … Like, there’s all sorts of studies about this, but if you just trace the trajectory, or maybe even for my own experience when I use Sonnet 3.5 back a year ago, it could run for maybe 15 or 30 seconds before it started going off the rails, and you just really had to hold its hand through any kind of complicated task.
(01:08:13): But nowadays with Opus 4.6, on average it’ll run maybe 10, 20, 30 minutes unattended, and I’ll just start another Claude, and have it do something else. And, like I said, I always have a bunch of Claudes running. And they can also run for hours, or even days at a time. I think there were some examples where they ran for many weeks.
(01:08:31): And so, I think over time this is going to become more and more normal where the models are running for a very, very long period of time, and you don’t have to sit there and babysit them anymore.
Lenny Rachitsky (01:08:39): So, you just talked about tips for building AI products. Any tips for someone just using Claude Code, say, for the first time or just someone already using Claude Code that wants to get better. What are a couple pro-tips that you could share?
Boris Cherny (01:08:51): I will give a caveat, which is there’s no one right way to use Claude Code. So, I can share some tips, but, honestly, this is a dev tool. Developers are all different. Developers have different preferences. They have different environments. So, there’s just so many ways to use these tools. There’s no one right way. You have to find your own path.
(01:09:08): Luckily, you can ask Claude Code. It’s able to make recommendations. It can edit your settings. It knows about itself. So, it can help with that.
(01:09:17): A few tips that, generally, I find pretty useful. So, number one is just use the most capable model. Currently, that’s Opus 4.6. I have maximum effort enabled always. The thing that happens is sometimes people try to use a less expensive model like Sonnet, or something like this, but because it’s less intelligent, it actually takes more tokens in the end to do the same task.
(01:09:35): And so, it’s actually not obvious that it’s cheaper if you use a less expensive model. Often, it’s actually cheaper and less token-intensive if you use the most capable model, because it can just do the same thing much faster with less correction, less hand holding, and so on. So, that’s the first tip: just use the best model.
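The claim above is just per-task arithmetic: a cheaper per-token model can still cost more per task if it burns more tokens and needs more correction rounds. Here is the back-of-envelope version; every number below is made up for illustration, not a real price or token count.

```python
# Back-of-envelope cost comparison: per-token price is not per-task
# cost. All prices and token counts here are invented for illustration.

def task_cost(price_per_mtok, tokens_per_attempt, attempts):
    """Total dollars to finish one task."""
    return price_per_mtok * tokens_per_attempt / 1_000_000 * attempts

# Capable model: pricier per token, but one clean attempt.
big = task_cost(price_per_mtok=15.0, tokens_per_attempt=50_000, attempts=1)

# Cheaper model: needs more tokens per attempt and several retries.
small = task_cost(price_per_mtok=3.0, tokens_per_attempt=120_000, attempts=3)

print(f"capable model: ${big:.2f} per task, cheaper model: ${small:.2f} per task")
```

With these illustrative numbers the 5x-pricier model comes out cheaper per task, which is the shape of the effect Boris describes.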
(01:09:51): The second one is use plan mode. I start almost all of my tasks in plan mode, maybe, like 80%, and plan mode is actually really simple. All it is is we inject one sentence into the model’s prompt to say, “Please don’t write any code yet.” That’s it. There’s actually nothing fancy going on. It’s just the simplest thing.
(01:10:11): And so, for people that are in the terminal it’s just shift tab twice. And that gets you into plan mode. For people in the desktop app, there’s a little button; on the web, there’s a little button too. It’s coming pretty soon to mobile also. And we just launched it for the Slack integration too. So, plan mode is the second one.
(01:10:27): And, essentially, the model would just go back and forth with you. Once the plan looks good then you let the model execute. I auto-accept edits after that, because if the plan looks good, it’s just going to one shot it. It’ll get it right the first time almost every time with Opus 4.6.
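Plan mode, as described here, is just one sentence injected into the prompt. A minimal sketch of that mechanism looks like the following; the exact wording of the injected note is illustrative, not Claude Code’s actual prompt text.

```python
# Plan mode as described: one extra sentence injected into the
# system prompt when the mode is on. The wording is illustrative,
# not the actual Claude Code prompt.

PLAN_MODE_NOTE = "Please don't write any code yet; propose a plan first."

def build_system_prompt(base: str, plan_mode: bool) -> str:
    """Append the plan-mode instruction only when plan mode is on."""
    return f"{base}\n\n{PLAN_MODE_NOTE}" if plan_mode else base

prompt = build_system_prompt("You are a coding agent.", plan_mode=True)
print(prompt)
```

The striking part is what is absent: no separate planning model, no state machine, just a conditional line of prompt text that the model itself honors.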
(01:10:42): And then maybe the third tip is just play around with different interfaces. I think a lot of people when they think about Claude Code, they think about a terminal. And, of course, we support every terminal, we support Mac, Windows, whatever terminal you might use, it works perfectly.
(01:10:54): But we actually support a lot of other form factors too. Like, we have iOS and Android apps. We have a desktop app. There’s the Slack integration. There’s all sorts of things that we support. So, I would just play around with these. And, again, it’s, like, every engineer is different. Everyone that’s building is different. Just find the thing that feels right to you, and use that. You don’t have to use a terminal. It’s the same Claude agent running everywhere.
Lenny Rachitsky (01:11:15): Amazing. Okay. Just a couple more questions to round things out. What’s your take on Codex? How do you feel about that product? How do you feel about where they’re going? Just competing in this very competitive space in coding agents.
Boris Cherny (01:11:30): Yeah. I actually haven’t really used it, but I think I did use it maybe when it came out. It looked a lot like Claude Code to me, so, that was flattering. I think it’s actually good to have more competition, because people should get to choose and, hopefully, it forces all of us to do an even better job.
(01:11:49): Honestly, for our team, though, we’re just focused on solving the problems that users have. And so, for us, we don’t spend a lot of time looking at competing products, we don’t really try the other products. You want to be aware of them. You want to know they exist.
(01:12:03): But, for me, I love talking to users. I love making the product better. I love just acting on feedback. So, it’s really just about building a good product.
Lenny Rachitsky (01:12:13): Maybe a last question. So, I talked to Ben Mann, co-founder of Anthropic, about what to talk to you about. He had a bunch of suggestions, which I’ve integrated throughout our chat. One question he had for you is what’s your plan post-AGI? What do you think you’re going to be doing? What’s your life like once we hit AGI? Whatever that means.
Boris Cherny (01:12:30): So, before I joined Anthropic I was actually living in rural Japan. And it was a totally different lifestyle. I was the only engineer in the town. I was the only English speaker in the town. It was just a totally different vibe. A couple times a week I would bike to the farmer’s market, and you bike by rice paddies and stuff. It was just a totally different speed. Just complete opposite of San Francisco.
(01:12:54): One of the things that I really liked is the way that we got to know our neighbors and built friendships: by trading pickles. In the town where we lived, everyone made miso, everyone made pickles. And so, I actually got decently good at making miso. I made a bunch of batches, and this is something that I still make.
(01:13:18): Miso is this interesting thing where it teaches you to think on these long timescales, which is just very different than engineering, because a batch of white miso takes, at least, three months to make. And a red miso is, like, two, three, four years. You just have to be very patient.
Lenny Rachitsky (01:13:31): Wow.
Boris Cherny (01:13:32): You mix it up, and then you just let it sit. You have to be very, very patient. So, the thing that I love about it is just thinking in these long timescales. And, yeah. I think post-AGI, or if I wasn’t at Anthropic, I’d probably be making miso.
Lenny Rachitsky (01:13:46): I love this answer. Ben asked me to ask you about what’s the deal with you and miso. And so, I love that you answered it. Okay. So, the future might be just going deep into miso, getting really good at making miso. Amazing. Boris, this was incredible. I feel like we’re brothers now from Ukraine.
(01:14:08): Before we get to our very exciting lightning round, is there anything else that you wanted to share? Is there anything you wanted to leave listeners with? Anything you want to double down on?
Boris Cherny (01:14:18): Yeah. I think I would just underscore for Anthropic since the beginning, this idea of starting at coding, then getting to tool use, then getting to computer use has just been the way that we think about things. And this is the way that we know the models are going to develop, or the way that we want to build our models. And it’s also the way that we get to learn about safety, study it, and improve it the most.
(01:14:40): So, everything that’s happening right now, Claude Code becoming this huge multi-billion dollar business, all of my friends using Claude Code and texting me about it all the time, this thing getting big …
(01:14:55): In some ways, it’s a total surprise, because this isn’t the … We didn’t know that it would be this product. We didn’t know that it would start in a terminal, or anything like this.
(01:15:04): But, in some ways, it’s just totally unsurprising, because this has been our belief as a company for a long time. At the same time, it just feels still very early. Like, most of the world still does not use Claude Code, most of the world still does not use AI. So, it just feels like we’re 1% in, and there’s so much more to go.
Lenny Rachitsky (01:15:21): Oh, man. That’s insane to think, seeing the numbers that are coming out. You guys just raised a bazillion dollars. I think Claude Code alone is making $2 billion revenue. Anthropic, I think the number you guys put out, you’re making $15 billion in revenue. It’s insane to just think this is how early it still is, and just the numbers we’re seeing.
Boris Cherny (01:15:42): Yeah. Yeah. Yeah. It’s crazy. And the way that Claude Code has kept growing is, honestly, just the users. Like, so many people use it. They’re so passionate about it. They fall in love with the product, and then they tell us about stuff that doesn’t work, stuff that they want.
(01:15:55): And so, the only reason that it keeps improving is because everyone is using it, everyone is talking about it, everyone keeps giving feedback. And this is just the single most important thing. And, for me, this is the way that I love to spend my days just talking to users, and making it better for them.
Lenny Rachitsky (01:16:09): And making miso.
Boris Cherny (01:16:11): And making miso. Well, the miso is not super involved. You’ve just got to wait.
Lenny Rachitsky (01:16:14): You’ve just got to wait. Well, Boris, with that, we’ve reached our very exciting lightning round. I’ve got five questions for you. Are you ready?
Boris Cherny (01:16:23): Let’s do it.
Lenny Rachitsky (01:16:24): First question, what are two or three books that you find yourself recommending most to other people?
Boris Cherny (01:16:29): I’m a big reader. I would start with a technical book. It is Functional Programming in Scala. This is the single best technical book I have ever read. It’s very weird, because you’re probably not going to use Scala. And I don’t know how much this matters in the future now, but there’s this just elegance to functional programming and thinking in types, and this is just the way that I code, and the way that I can’t stop thinking about coding.
Lenny Rachitsky (01:16:29): Wow.
Boris Cherny (01:16:51): So, you could think of it as a historical artifact. You could think of it as-
Lenny Rachitsky (01:16:51): A deep cut.
Boris Cherny (01:16:54): … something that will level you up.
Lenny Rachitsky (01:16:56): I love this. A never before mentioned book. My favorite.
Boris Cherny (01:16:59): Oh, amazing. Amazing. Okay. Second one is Accelerando by Stross. My big genre is sci-fi. Probably sci-fi and fiction. Accelerando is just this incredible book. And it’s just so fast-paced. The pace gets faster and faster and faster. And I just feel like it captures the essence of this moment that we’re in more than any other book that I’ve read, just the speed of it.
(01:17:23): And it starts as lift-off is beginning to happen, as it’s starting to approach the singularity. And it ends with this collective lobster consciousness orbiting Jupiter. And this happens over the span of a few decades or something. So, the pace is just incredible. I really love it.
(01:17:41): Maybe I’ll do one more book. The Wandering Earth. Wandering Earth by Liu Cixin. So, he’s the guy that did Three Body Problem. I think a lot of people know him for that. Actually I think Three Body Problem was awesome, but I actually liked his short stories even more. So, Wandering Earth is one of the short story collections, and he just has some really, really amazing stories.
(01:18:01): And it’s also just quite interesting to see Chinese sci-fi, because it has a very different perspective than western sci-fi, and the way that, at least, he, as a writer, thinks about it. So, it’s just really, really interesting to read, and just beautifully written.
Lenny Rachitsky (01:18:15): It’s so interesting how sci-fi has prepared us to think about where things are going. It creates these [inaudible 01:18:21] models of like, “Okay. I see. I’ve read about this sort of world.”
Boris Cherny (01:18:24): Yeah. I think, for me, this was the reason that I joined Anthropic actually, because, like I said, I was living in this rural place. I was thinking on these long timescales, because everything is just so slow out there. At least, compared to SF. And just all the things that you do are based around the seasons, and it’s based around this food that takes many, many months. That’s the way that social events were organized. That’s the way that you organize your time.
(01:18:48): You go to the farmer’s market, and it’s persimmon season, and you know that, because there’s 20 persimmon vendors. And then the next week the season is done, and it’s, like, grape season. And you see this. So, it’s these long timescales.
(01:19:00): And I was also reading a bunch of sci-fi at the time. And just being in this moment, I was thinking about these long timescales, and I know how this thing can go. And I felt like I had to contribute to it going a little bit better.
(01:19:12): And that’s actually why I ended up at Ant. And Ben Mann was also a big part of that too.
Lenny Rachitsky (01:19:17): I feel like I want to do a whole podcast just talking about your time in Japan, and the journey of Boris through Japan to Anthropic. But we’ll keep it short. I’ll quickly recommend a sci-fi book to you if you haven’t read it. Have you read A Fire Upon the Deep?
Boris Cherny (01:19:32): This is Vinge. Right? Yeah.
Lenny Rachitsky (01:19:33): Yes.
Boris Cherny (01:19:34): It’s great.
Lenny Rachitsky (01:19:34): Okay. That one, it’s so interesting from an AI/AGI perspective. So few people have read that. So, [inaudible 01:19:41]. Yeah. It’s like [inaudible 01:19:42]-
Boris Cherny (01:19:42): I really like the … Yeah. Yeah. Yeah. I like Deepness in the Sky also. I think [inaudible 01:19:49] sequel. Right?
Lenny Rachitsky (01:19:42): Yeah.
Boris Cherny (01:19:50): Yeah. Yeah. Yeah. I think so.
Lenny Rachitsky (01:19:52): Yeah. It’s very long, and complex to get into, but so good. Okay. We’ll keep going through our lightning round. Do you have a favorite recent movie or TV show you’ve really enjoyed?
Boris Cherny (01:19:59): So, I actually don’t really watch TV or movies. I just don’t really have time these days. I did watch … I’m going to bring up another Liu Cixin, but the Three Body Problem series on Netflix I really loved. I thought that was a great rendition of the book series.
Lenny Rachitsky (01:20:12): So, the common pattern across AI leaders is no time to watch TV or movies, which I completely understand. Is there a favorite product you’ve recently discovered that you really love?
Boris Cherny (01:20:22): I’m going to shill a little bit, and just say Cowork, because this is, legitimately, the one product that’s been pretty life-changing for me just because I have it running all the time. And the Chrome integration, in particular, is just really excellent. So, it’s been like … It paid a traffic fine for me, it canceled a couple subscriptions for me. Just the amount of tedious work it gets out of the way is awesome.
(01:20:45): I also don’t know if it’s a product, but maybe also another podcast that I really love … Obviously, besides Lenny is-
Lenny Rachitsky (01:20:51): Obviously.
Boris Cherny (01:20:52): Yeah. It’s the Acquired podcast by Ben and David. It’s just super awesome. I feel like the way that they get into business history, and bring it alive is really, really good. And I would start with the Nintendo episode if you haven’t listened to it.
Lenny Rachitsky (01:21:08): Great tip. With Cowork, just so people understand if they haven’t tried this, basically, you type something you want to get done and it can launch Chrome, and just do things for you. I saw someone went on paternity leave from Anthropic, and he had to fill out these medical forms, these really annoying PDFs, and Cowork just loads up the browser, logs in, fills them out, [inaudible 01:21:29].
Boris Cherny (01:21:30): Yeah. Exactly. Exactly. And it actually just works. Like, we tried this experiment a year ago, and it didn’t really work, because the model wasn’t ready, but now it actually just works and it’s amazing.
(01:21:39): I think a lot of people just don’t really understand what this is, because they haven’t used an agent before, and it just feels very, very similar, to me, to Claude Code a year ago. But, like I said, it’s just growing much faster than Claude Code did in the early days. So, I think it’s starting to break through a bit.
Lenny Rachitsky (01:21:55): And there’s also this Chrome extension that you mentioned that you could just leave standalone that sits in Chrome, and you could just talk to Claude looking at your screen, at your browser, and have it do stuff, have it tell you about what you’re looking at, summarize what you’re looking at, things like that.
Boris Cherny (01:22:08): Exactly. Exactly. For people that are just learning to use Cowork, the thing I recommend is, so, you download the Claude desktop app, you go to the Cowork tab, it’s right next to the code tab. The thing that I recommend doing is start by having it use a tool. So, clean up your desktop, or summarize your email, or something like this, or respond to the top three emails. It actually just responds to emails for me now too.
(01:22:29): The second thing is connect tools. So, if you say, “Look at my top emails,” and then have it send back messages, or put them in a spreadsheet, or something. But, for example, I use it for all my product management. So, we have a single spreadsheet for the whole team. There’s a row per engineer. Every week, everyone fills out a status. And every Monday, Cowork just goes through and it messages every engineer on Slack that hasn’t filled out their status. And so, I don’t have to do this anymore.
Lenny Rachitsky (01:22:52): Wow.
Boris Cherny (01:22:52): And this is just one prompt. It’ll do everything. And then the third thing is just run a bunch of Claudes in parallel. So, in Cowork you can have as many tasks running as you want. So, it’s, like, start one task, I have this project management thing running, then I’ll have it do something else, then something else, and then I’ll kick these off. And then I just go get a coffee while it runs.
Lenny Rachitsky (01:23:09): There’s a post I’ll link to that shares a bunch of ways people use what was previously Claude Code, and now just you can do through Cowork. Because a lot of this is just like, “Wow. I hadn’t thought I could use it for that.” And once you see … These examples I think are what people need to hear of just like, “Oh, wow. I didn’t know I could do that.” [inaudible 01:23:26].
Boris Cherny (01:23:27): I think a lot of this was also inspired by you, Lenny. You had this post about … It was, like, 50 non-technical use cases for Claude Code, or something like this. So, actually one of our PMs used that as a way to evaluate Cowork before we released it, and I think at the point where Cowork was able to do, like, 48 out of the 50, they were like, “Okay. It’s pretty good.”
Lenny Rachitsky (01:23:46): Wow. I did not know that. That is awesome. I’ve become an eval.
Boris Cherny (01:23:53): Yeah. [inaudible 01:23:54].
Lenny Rachitsky (01:23:55): Amazing. I feel like I’m valuable to the future of AI.
Boris Cherny (01:24:01): This is, like, reverse breaking through.
Lenny Rachitsky (01:24:05): Wow. That is so cool. Wow. Okay. I wonder what those last two are. Anyway, okay. Two more questions. Do you have a favorite life motto that you often come back to in work or in life?
Boris Cherny (01:24:14): Use common sense. I think a lot of the failures that I see, especially in a work environment, are people just failing to use common sense. Like, they follow a process without thinking about it. They just do a thing without thinking about it, or they’re working on a product that’s not a good product or not a good idea, and they’re just following the momentum, and not thinking about it.
(01:24:32): I think the best results that I see are people thinking from first principles, and just developing their own common sense. If something smells weird then it’s probably not a good idea. So, I think this is the single piece of advice that I give to coworkers more than anything too.
Lenny Rachitsky (01:24:46): And I feel like that alone could be its own podcast conversation. What is common sense? How do you build it? But we’ll keep this short. Final question. So, you’ve gotten more active on Twitter/X. I’m curious just why and just what’s your experience been with Twitter? The world of Twitter. Because you get a lot of engagement on Twitter/X.
Boris Cherny (01:25:06): So, for a long time I used Threads exclusively, because I actually helped build Threads a little bit back in the day. And I also just like the design. It’s a very clean product.
Lenny Rachitsky (01:25:14): Yeah.
Boris Cherny (01:25:15): I just really like that. I started using Threads, because actually I was bored. So, in December, I was in Europe-
Lenny Rachitsky (01:25:21): Started using Twitter you mean.
Boris Cherny (01:25:23): Oh, yeah. Yeah. Yeah. I started using Twitter, because I was bored. So, my wife and I, we were traveling around in Europe for December. We were just nomad-ing around. We went to Copenhagen, we went to a few different countries. And, for me, it was just a coding vacation. So, every day I was coding, and that’s my favorite kind of vacation, just code all day. It’s the best.
(01:25:43): And, at some point, I just got bored, and I ran out of ideas for a few hours. I was like, “Okay. What do I want to do next?” And so, I opened Twitter, I saw some people tweeting about Claude Code, and then I just started responding. And then I was like, “Okay. Maybe actually a thing I should do is just look for bugs that people have. Maybe people have bugs,” or feedback they have. And so, I introduced myself, asked for it, and people had a bunch of bugs and feedback.
(01:26:07): And I think they were surprised by the pace at which we’re able to address feedback nowadays. For me, it’s just so normal. Like, if someone has a bug, I can probably fix it within a few minutes, because I just start Claude, and as long as the description is good, it’ll just go and do it, and then I’ll go do something else, and answer the next thing.
(01:26:25): But I think for a lot of people it was pretty surprising. So, that was really cool. And, yeah. The experience on Twitter has been pretty great. It’s been awesome just engaging with people, and seeing what people want, hearing about bugs, hearing about features.
Lenny Rachitsky (01:26:38): I saw [inaudible 01:26:38] the other day on Twitter. You’re, like, posting many threads, and it was breaking. And just, like, “Oh, man. What’s going on here?”
Boris Cherny (01:26:45): Yeah. Yeah. Yeah. There was a bug. I hope it’s fixed now.
Lenny Rachitsky (01:26:49): Amazing. Oh, man. Boris, I could chat with you for hours. I’ll let you go. Thank you so much for doing this. You’re wonderful. Where can folks find you online? How can listeners be useful to you?
Boris Cherny (01:27:00): Yeah. Find me on Threads or on Twitter. That’s the easiest place. And, please, just tag me on stuff. Send bugs, send feature requests. What’s missing? What can we do to make the products better? What do you want? I love, love hearing it.
Lenny Rachitsky (01:27:16): Amazing. Boris, thank you so much for being here.
Boris Cherny (01:27:18): Cool. Thanks, Lenny.
Lenny Rachitsky (01:27:20): Bye, everyone.
(01:27:21): Thank you so much for listening. If you found this valuable you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating, or leaving a review as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at LennysPodcast.com. See you in the next episode.