The rise of the professional vibe coder (a new AI-era job)
Transcript
Lazar Jovanovic (00:00:00): I’m the first official vibe coding engineer at Lovable.
Lenny Rachitsky (00:00:03): You’re at the top 0.1% elite level of vibe coding. It’s a dream job for so many people.
Lazar Jovanovic (00:00:08): It became a job by building in public. You don’t need a company to hire you. You can hire yourself as a professional vibe coder first.
Lenny Rachitsky (00:00:15): You’ve never coded, you don’t want to look at the code.
Lazar Jovanovic (00:00:17): Coding is going to be like calligraphy. People be like, “Oh, my God, you wrote that code? That’s so amazing.” It’s going to be so rare that it’s going to become an art.
Lenny Rachitsky (00:00:25): These Venn diagrams of engineer, designer, PM used to be very separate, now they’re converging.
Lazar Jovanovic (00:00:29): AI, regardless of your background, is an amplifier. If you don’t know what you’re doing, you’re just going to produce garbage faster.
Lenny Rachitsky (00:00:36): Feels like an emerging core skill is learning clarity in the ask of the AI.
Lazar Jovanovic (00:00:41): I like to use the Aladdin and the Genie analogy. You rub the lamp, a genie comes out, “I’ll grant you three wishes.” The first wish is, “I want to be taller.” The genie makes me 13 feet tall because I was not specific. AI just doesn’t understand what you mean when you say, “You know what I mean?” So you need to be specific. I’m optimizing 100% of my time today on good judgment, clarity, quality, taste.
Lenny Rachitsky (00:01:08): Today, my guest is Lazar Jovanovic. Lazar is a professional vibe coder. He gets paid to vibe code all day and build internal and external products. This conversation is going to blow your mind in so many ways. This is not only a really interesting new career path for people to consider. If you listen to what Lazar shares, it’s also a really important glimpse into where things are heading for tech roles.
(00:01:32): I found myself thinking more deeply about the future of product management and engineering and design during this chat than I have in a long time. We also spent a bunch of time on Lazar’s best advice as an elite vibe coder for getting the most out of AI tools. He’s got a bunch of really interesting and useful frameworks that I’ve not heard anyone else share that will immediately level up your success using all the latest AI tools.
(00:01:58): This conversation is going to expand your mind in so many ways. I cannot wait for you to hear it. If you enjoy this podcast, don’t forget to subscribe and follow it in your favorite podcasting app or YouTube. It helps tremendously. And if you become an insider subscriber of my newsletter, you get over 20 incredible products for free for an entire year, including a year free of Lovable and Replit, Bolt, Gamma, n8n, Linear, Devin, PostArc, Superhuman, Descript, Wispr Flow, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD, Mobbin, and Stripe Atlas. Head on over to lennysnewsletter.com and click Product Pass. With that, I bring you Lazar Jovanovic after a short word from our sponsors.
Speaker 1 (00:02:37): This episode is brought to you by Strella, the customer research platform built for the AI era. Here’s the truth about user research. It’s never been more important or more painful. Teams want to understand why customers do what they do, but recruiting users, running interviews, and analyzing insights takes weeks. By the time the results are in, the moment to act has passed. Strella changes that. It’s the first platform that uses AI to run and analyze in-depth interviews automatically, bringing fast and continuous user research to every team.
(00:03:09): Strella’s AI moderator asks real follow-up questions, probing deeper when answers are vague and surfaces patterns across hundreds of conversations all in a few hours, not weeks. Product, design, and research teams at companies like Amazon and Duolingo are already using Strella for Figma prototype testing, concept validation, and customer journey research, getting insights overnight instead of waiting for the next sprint. If your team wants to understand customers at the speed you ship products, try Strella. Run your next study at strella.io/lenny. That’s S-T-R-E-L-L-A.io/lenny.
(00:03:45): Today’s episode is brought to you by Samsara. If you listen to this podcast, you know that we spend a lot of time talking about building things that sit on a screen: onboarding funnels, mobile apps, and checkout flows. Samsara is building products for the physical world. First responders racing to emergencies, truck drivers carrying critical supplies, construction workers building our cities and data centers. These are people who put everything on the line every single day, and Samsara’s technology protects them.
(00:04:13): Samsara is solving complex problems at the intersection of hardware, software, and edge AI. And their AI doesn’t just detect events; it reasons about intent and answers questions like, “Did that truck driver brake abruptly because they were distracted, or was that a heroic act?” If you want to ground LLMs in messy, real-world telemetry or solve edge AI constraints at a planetary scale, Samsara wants to talk to you.
(00:04:38): If you like playing with enormous datasets, moving fast and working in small teams, come help build the technology that makes the physical world safer and more efficient. Visit samsara.com/lenny to learn more. That’s S-A-M-S-A-R-A.com/lenny.
Lenny Rachitsky (00:04:57): Lazar, thank you so much for being here and welcome to the podcast.
Lazar Jovanovic (00:05:00): Thanks for having me, man.
Lenny Rachitsky (00:05:01): Okay, so I had Elena Verna on the podcast. She’s head of growth at Lovable. She mentioned that she works with a professional vibe coder, you. I had so many questions. I almost wanted to go on a tangent with her to try to understand this role. Instead, I asked you to come on the podcast. There’s so much I want to talk about. I want to talk about just this career path and just how you got into it, how other people might get into it, where you think this is all going, this whole vibe coding thing.
(00:05:27): Also, I want to get into what you’ve learned about it being successful using all these AI tools because this is your job. First, I want to just start with understanding this actual job. What is it that you do day to day? You’re basically being paid a full-time job to vibe code. Incredible. What are you responsible for? What are you doing day to day?
Lazar Jovanovic (00:05:47): Well, as you said, it’s a dream job. I get paid to do what I would’ve done anyways. It’s the best job in the world. I get to use tools like Lovable every day to push projects to production, whether for internal or external use. Those could range from different templates on the marketing side, the sales side, or whatever, to something as deep as building internal tools with a lot of integrations and connections and whatnot.
(00:06:17): So the surface area that I cover is pretty wide across all departments because it’s such a flexible role and it complements so many things. It’s an ideas role. A lot of people have a lot of great ideas, but they don’t know how to build them or they just don’t have the bandwidth to. And that’s where I step in today to make sure that these ideas come to life fast and with the quality and security that they should have in order to be available for users in production.
Lenny Rachitsky (00:06:43): And one thing that’s really interesting here is it’s both internal and external tools. A lot of companies have someone building a bunch of internal tools using AI. You ship stuff that’s actually public and it’s sort of a product, Lovable products.
Lazar Jovanovic (00:06:55): Yeah, definitely. Some of the stuff that I’ve shipped that are public are like when we launched our Shopify integration, most of, if not all, the templates that users were remixing were built by me. So stuff like that. Or the merch store, because we wanted to obviously prove the concept that, “Hey, Lovable and Shopify just works. It’s so simple, anybody can do it.”
(00:07:16): I vibe coded our merch store. So all the merch, including this shirt that people were buying online, they would’ve bought it from a store that was built by me. But then again, on the internal side, we want to track a lot of things. One of the cool things that we want to build now, for example, is feature adoption metrics. If we build a feature, how many people are actually using it and adopting it? And that’s a pretty custom build. We have a very custom stack. We’re building custom features.
(00:07:45): There’s nothing out there that I could just pick off the shelf and build or adopt faster than I would’ve built it myself. At this point, I’m at a stage where if it takes me an hour or two hours to set up a big enterprise account somewhere, I’m just going to build it myself faster. So I’m in that position of build versus buy. I’m in the build boat, so to speak. Yeah.
Lenny Rachitsky (00:08:10): And then who do you report to? Are you this rover that helps wherever, or are you with a specific team?
Lazar Jovanovic (00:08:15): I’d say probably closer to the former. I started in growth. Elena brought me on early on because she has so many great ideas and she just needed somebody with the right type of mindset and velocity and ownership to just take them away, build them up, get them into production, whether they’re based on education or anything, go-to-market or whatever. But then obviously when you’re able to ship fast, everybody needs that in an environment that we as a company are now living in, which is we’re the fastest growing startup in history. So every department needs a Lazar now or yesterday.
(00:08:56): So now I’m shifting a little bit into some of the go-to-market roles and even building some, again, internal tools for enterprise team. I’m working on some community tools as well right now as we speak. So I’m a little bit all over the place, but I thrive in that environment where I’m given a rough concept, a rough idea, and I’m just tasked to bring it to life as soon as possible.
Lenny Rachitsky (00:09:18): Okay. I’m hoping with this chat, we create a lot more Lazars and I want to get to the career path, how you got to this and what it takes to actually become a full-time vibe coder. But I want to start with… because you do this full-time, you’re at the top 0.1% elite level of vibe coding. You’re doing this full-time. They hired you to do this as a job. I’m so curious what you’ve learned. What are some pro-tips that you’ve developed for being successful with AI tools, Lovable, and also just more broadly? What are maybe two or three things you’ve learned that help you be really good at this job?
Lazar Jovanovic (00:09:51): The first understanding that I had very early on, even though just in full transparency before we begin, I don’t have a technical background. I’ve almost never written a single line of code in my life. I’ve written a couple of console logs manually, and that’s about it. So I lean very much on AI assistance.
Lenny Rachitsky (00:10:10): Let me actually follow that thread because that’s such a good point. And something that when we were chatting earlier, you pointed out your feeling is it’s actually an advantage to not have a technical background when you get into this space.
Lazar Jovanovic (00:10:20): Yeah. Yes. I honestly feel that it is because people like me don’t know that they are not supposed to be building X, Y, Z. And that’s how we actually are able to build it. Let me give you an example. Six, seven months ago, somebody in our community was like, “Oh, I wish Lovable can build Chrome extensions.” And then folks that are not technical were like, “Well, why is that not possible?”
(00:10:45): And then people that are technical start explaining to you, “Oh, well, it’s React, it’s different stack, it’s this.” And people like me, including myself, would just go into Lovable and like, “Build me a Chrome extension based on this app.” And I was able to do that with Lovable. There were people that were able to build desktop applications on Lovable. Again, something that shouldn’t be possible, it simply is. Our community manager with me, at one point, she was building this presentation deck for something.
(00:11:14): She’s like, “Wouldn’t it be cool if this was a video?” And then she just prompted her way into generating an actual video inside Lovable before that was available. Now that’s a feature. Now you can prompt Lovable to do it. But back in the day when she did it, even I thought it was impossible. I never tried it. So I think that’s the advantage that we have over people that are technical.
(00:11:36): We’re just coming to this completely unbiased and very positively delusional, which I think you have to have when working with AI tools. You have to come with this delusion that absolutely everything is possible until proven wrong. And that’s just the pursuit that I have in my mind that has helped me, among other things that we’ll chat today, I think to excel in this role that I have at Lovable.
Lenny Rachitsky (00:12:02): Two of the, I think, concerns maybe traps people that don’t have a technical background fall into in theory, one is if you get blocked, it’s not obvious how to solve a problem. And two is just, are you building this teetering slop that will collapse someday because you don’t know system architecture, you don’t know if this is going to scale, those sorts of things? So coming back to what you’ve learned about how to be successful and build successful products, talk us through just things you’ve done and things you’ve learned for how to avoid those sort of things and what you do when you get stuck as one example.
Lazar Jovanovic (00:12:36): I’m happy that you mentioned those limitations. I have some other ones that I want to bring in, but let’s address this one first, which is the most important one. And that is you have to be self-aware. I didn’t come into this… Yes, I am delusional, as I mentioned, in the sense that I just don’t want to accept something’s not possible, but I’m also well aware that I need to be better in order for it to become a reality from my own point of view and my own sake. So I understood very early that coding is not the problem that we’re solving for here, that the problem we’re solving for is clarity.
(00:13:12): The output that AI can produce is much faster than human output anyways. So very early on, I started leveraging chat mode. And to this day, I can say I spend 80% of my time in planning and chatting and only 20% in actually executing the plan. I’m optimizing for the right kind of speed. Most people optimize for the wrong one. That’s the first lesson that I learned literally on day two, because I came into Lovable, that was my first exposure to this. I’ve tested and played around with all the tools, obviously, but whether somebody’s doing it in Cursor or Claude Code, it doesn’t matter where you are, the problem remains the same. You need to be clear on what you want to do and you need to know what you’re doing because these are still just tools. Yes, AGI is coming, but it’s not there yet. So until it’s here, you’re still steering the ship.
(00:14:07): In order for you to steer the ship, you have to know the instructions, right? And the best way to learn is by building, but treating these tools almost as technical co-founders and educators, and learning while doing, and religiously reading the agent output. Not the code output. I don’t care about the code. The syntax is none of my interest. It’s what the agent tells me that matters to me. I put a lot of trust in LLMs and AI these days, and I understand that there may be some people that are not as confident as I am. I just feel that the models today are good enough for me to trust in their syntax output. However, I’m concerned about the agent output because of the two limitations that I want to tackle on next. The first one being that there’s a limitation when you work with LLMs. So there’s a machine level limitation and there’s a human level limitation.
(00:15:08): The first one is there’s something that is known as the context memory window. And for non-technical people, I like to use the Aladdin and the Genie analogy when I explain. It’s very simple. Everybody knows the storyline. You rub the lamp, a genie comes out and tells you, “Okay, I’ll grant you three wishes. Not 3,000 wishes, not three million, just three at a time.” To me, when I translate it into working with AI, that simply means, “Hey, I can only make so many requests within a request at a time for AI to be able to listen, understand what it needs to do, scope it, do the research, read, take all the actions, all the inputs and ingredients that it needs to produce a high quality output.”
(00:15:55): So that’s the first part, understanding that there’s a limit and it’s denominated in tokens. Maybe that’s going to be different a year from now, but today there’s a token limitation. I’ll take an arbitrary number of 100,000 tokens, for example. So when you make a request, a part of those tokens AI spends to read stuff, another to browse the web, another to think, and then another to execute the code.
(00:16:21): Then there comes the second limitation, which is you, me and you, humans. Let’s go back to the analogy of the Genie and Aladdin. I asked the Genie for the first wish, and the first wish is I want to be taller. And guess what happens? The Genie makes me 13 feet tall. All of a sudden, I can’t sit in the car, I can’t get into my house. I’m a dysfunctional human being because I was not specific. So the part that we need to optimize for today, it’s going to get better, but today it’s still not there yet, is that AI just doesn’t understand what you mean when you say, “You know what I mean?”
(00:16:59): You do when I tell you that. We as humans, I’m 36, so I have 36 years of experience of living as a human to know what you mean, but AI doesn’t have that. So you need to be specific, you need to provide references, you need to provide the right context. So what I’ve learned is how to combat that part. And because I can’t control the first part, which is the token memory window and the quality of the LLM models, you are 100% in control of the latter. And that’s what I want to dive into today as well and just try to teach people, “Okay, if I’m the malleable part, how do I fix that part?” I think that’s the key lesson here.
Lenny Rachitsky (00:17:41): This is so helpful. And I love this metaphor of the Genie. This piece about clarity is such a thread I’ve been noticing across people that have been successful using AI tools. And it feels like an emerging core skill is learning clarity in the ask of the AI. Do you have any advice or anything you do there to help be better at being clear with what you want?
Lazar Jovanovic (00:18:08): Yeah. So first of all, as you said yourself right now, you need to be good at understanding what clarity means and how to translate it. In my terms, clarity means understanding what tasteful looks like, what’s good enough versus what’s world-class, what’s magical. And I developed that through something that I heard from you, you mentioned before, which is exposure time, making sure that I’m exposing myself to content and to people and to relationships or whatever that are going to help me to level up in that domain.
(00:18:48): Again, it goes back to self-awareness. I knew even before I joined Lovable, I was like, “Okay.” Even before I started using Lovable or any AI tools, first thing that I knew was like, “I don’t know how to code.” So my first thing was like, “Oh, I can build. Wow, amazing.” But a week later it was like, “Oh, I can build, but I’m not fast enough.” So I optimized for speed. So I was like, “Oh, I can build and I can build so fast.” And then two weeks later, my development cycle that I’m in began, and it’s still ongoing, which is, “Wait a minute, should I have even built this in the first place?” Because once you figure out that we solved for the how, which is AI assistant or rapid engineering, call it whatever you want, you can call it vibe coding if you want to, but we solve for that. Now we got to solve for everything else. And everything else is what matters. Good design, good taste, good user experience.
(00:19:44): When you think about who you’re building stuff for with these tools, you’re building it for humans. Humans are emotional beings and we all make our purchasing or any kind of decisions on an emotional basis. So I think that the core skill there to work on and develop today isn’t, again, coding, although I have nothing against traditional engineering. And I’ll say later why. I’m actually a big fan of it, of elite engineering, but people like me, people watching that are like, “Should I start learning how to code?” If you haven’t done it yet, I’d honestly say no.
(00:20:20): You’re optimizing for the wrong skillset. We won’t be rewarded in the world of AI for faster raw output; we will be rewarded for better judgment. So I think that better judgment comes with, again, to go back to your question, how are you solving for that? How are you solving for this? Well, it starts with exposure. So I’m deliberately exposing myself to people and resources that I know I need to consume to level up.
(00:20:50): And then a lot of it just comes from building as well. If we’re honest, it’s a muscle. Everything is a muscle. You need to practice. You need to see what’s possible. And that’s where some of the techniques and mindset shifts that I want to ingrain into people’s minds later in the call may be useful.
Lenny Rachitsky (00:21:10): Okay. So what I’m hearing here is because coding is now essentially a solved problem, I love that you don’t look at the code. You’ve never coded, you don’t want to look at the code, you don’t care about what’s happening there. Instead, you’re watching this agent output. I want to actually ask you about that. But what I’m hearing here is the areas you are investing in, building in yourself is at the front end, clarity around what it is and I want to hear how you actually do that, what you do there.
(00:21:38): You have a really cool system there. And then there’s the taste and judgment of knowing, is this the thing I want? It feels like those are the two sides now that are more and more important. And on the taste judgment side, you share this concept. There’s something Guillermo Rauch shared in our conversation, this idea of exposure time, exposure hours, being exposed to great stuff. Here’s a great user experience. Here’s a great onboarding flow. Here’s a great, I don’t know, website.
(00:22:02): So I really like that advice because it’s so actionable. Okay. I’m going to spend more time with stuff that’s great to inform my taste and judgment. And then on the clarity piece, let’s actually talk about that, just what do you do there to be clearer with Lovable and other AI tools to help it build the right thing?
Lazar Jovanovic (00:22:22): This is the first mindset shift that I want to put into people’s minds. If you just have a vague idea, let that be your first version of the project. Open Cursor, Lovable, whatever it is that you’re using, and just input a brain dump prompt. Just talk into it. Lovable specifically, I don’t know about the other tools, has a really cool voice function. You click it, you just dictate the hell out of it and just press send.
(00:22:49): Don’t even wait for it to finish. Open a new window. Again, lovable.dev. In here, you’re like, “Okay, as I was brain dumping, I think I found a good thread. I think things are getting clearer. So let me start another project now with more clarity, more deliberateness. I know which features I want, which pages I want, and maybe I can even find a good reference. Maybe I can go on Mobbin, maybe I can go on Dribbble, maybe I can go wherever, get a good screenshot, get a good animation and attach it,” because most of these tools accept files as a part of the input.
(00:23:26): So you have the second project started. Now things are even more clear. Now you expose yourself to quality and now you’re like, “Well, what if I found a template that’s actually already out there? Why reinvent the wheel if I’m building a platform that somebody else already built? Why not expose AI to what quality looks like?” So what I’ll do is I’ll go and find a library, 21st.dev or a DotBuild or whatever, places which allow me to export not screenshots, but code snippets.
(00:24:00): Because guess what? Even though English is the number one programming language, Lovable and all other tools still communicate in code the best. If you want to get pixel perfect results, just give them code. It will interpret it better than your English or Spanish or whatever language that you use in these tools. So that’s the third way. You’re like, “Okay, now I’m even more deliberate. I’m not even going as wide as giving it vague concepts. I’m giving it code snippets like, ‘I want this exact design. I want this exact type of functionality.’”
(00:24:36): So that’s your third project. And then by the time you do all of these three, you’re already at a level of clarity that you wouldn’t have if you just sat with an empty piece of paper or maybe chatted just with ChatGPT, but didn’t take action. I think taking action is so, so cheap these days, and free, by the way. All the tools I mentioned have free plans. Most times you’d be able to do this without spending any money at all, just by starting multiple projects, because guess what? That doesn’t incur additional cost either, except for builder credits. You’re going to get three, four, five, six different concepts that you can compare.
(00:25:23): As you’re comparing them, clarity just keeps coming and things get better and better to understand. And you’re also solving for one big problem that you mentioned. You used the term AI slop, and I like it because a lot of people, when they say AI slop, aren’t referring to beautifying the code, but beautifying the design. This process that I just mentioned actually gives you four or five different design options, and in the long run, saves you massive amounts of credits. A lot of people obsess over the cost when I give them this hack. They’re like, “Oh, but doesn’t that cost more?” I’m like…
Lazar Jovanovic (00:26:00): “Yes, upfront it may cost a little bit more. In the long run, if you really want to finish this project, you’re actually saving hundreds of credits and maybe even hundreds of dollars, not to mention the amount of days, simply because you started from a point of better clarity and a better refinement process.” Right?
(00:26:21): So that’s the first step of solving for clarity. There are more, which is the second layer, but I assume you may have some questions on this one.
Lenny Rachitsky (00:26:31): Questions and also just, wow, this is such a great … It shows you the power of having someone come into this world without an engineering background, this advice of just build it five times in parallel. You ask AI to try all kinds of stuff. This is not how someone that has been a software engineer, or a PM, or designer would approach stuff.
(00:26:51): So your advice here, which is so fun, is as you’re getting started with a project, just run five different approaches at it, to start. One is just brain dump. “Here’s what I’m thinking. Here’s general idea.” Use Wispr Flow or use the built-in mic.
(00:27:07): And then two is, “Okay, now I have a general idea. Let me try to type it out,” actually thinking through the prompt. Three is, “Let me find a mock design somewhere online.” And the sites you suggested were Mobbin and Dribbble. Those are the two that you go to?
Lazar Jovanovic (00:27:20): Yeah, most times. Yeah.
Lenny Rachitsky (00:27:21): Cool. Okay. And then the fourth, and these are all in parallel, this is great, is find an actual code template that looks similar to the thing you want to build, basically download the zip file and attach it. Or is it just HTML and CSS? Is that kind of what-
Lazar Jovanovic (00:27:22): Anything.
Lenny Rachitsky (00:27:22): Anything you got. Cool-
Lazar Jovanovic (00:27:22): Yeah.
Lenny Rachitsky (00:27:39): There you go. Okay, and then cool. “Here’s the prompt. Here, make me what I want.” And what I love is there’s two wins here. One is just it helps you clarify the idea as you see the tool build it. “Oh no, that’s not what I mean. Let me try it again.”
(00:27:51): And then two as you pointed out, you can pick the right direction so that you’re not locked into your first design and first architecture. To your point, if you then spend all this time trying to fine-tune design and direction, it’s like all these tokens are being lost. You could have just started over.
(00:28:09): This is so great. Someone may think, “Okay, of course you’re just getting us to spend all these Lovable tokens. This is what a Lovable person would tell me.” But what I’m feeling is this is where you could save the most money because if you get it correct in the beginning, you save so much work trying to get it back to where you want it to go.
Lazar Jovanovic (00:28:27): A million percent, I’m actually saving people money. I’m actually going against what I should be saying. If I were only thinking about Lovable, I would be like, “No, no, just try to fix it in perpetuity,” but that’s not … We’re not in the business of doing that. We’re in the business of empowering anybody to build anything that they want.
(00:28:45): And then it’s my personal mission that resonates with me, because if there wasn’t Lovable, I would’ve never built anything, potentially, in my life, and I don’t think that would’ve been a fun life to live. So I guarantee people … I’ve tested this framework with many people and everybody’s telling me the same thing: “Eye-opener.”
(00:29:06): So simple, yet unintuitive, as you said. Even though for me it’s kind of … I don’t know. As you said, I attribute it to non-technical background. To me, that was the first thing that I would do. I just did it. I never thought about it like, “Oh, I’m developing this amazing hack.” I was just like, “I’m waiting all this time for these agents to finish. I might as well start another project, and another one, and another one.”
(00:29:30): And it’s also a productivity hack. When people ask me, “Wow, how do you ship so many things?” I’m like, “I never build just one project at a time. I build five or six. I have six Lovable tabs and I just switch between them.”
(00:29:43): And that’s the next hack that I want to talk about, if you allow me. The obvious question in return is, “How do you do context switching? You talk about context so much, yet you keep switching between apps. How do you manage to do it, and do it in a way that’s productive and doesn’t produce bad code or a bad product?” And that’s how I solve for that LLM problem.
(00:30:06): Again, the Aladdin and the Magic Lamp and all that, which is, if there’s a limited token window, how do I make it dynamic? And what I mean by that is this. If you just go and you prompt and you prompt and you prompt and you prompt, you’ll realize that no matter what tool you use, the memory just isn’t infinite, right? By the time you reach message number 10, 15, 20, 30, 40, snippets of early messages sort of get lost in translation because the agent is optimizing for speed.
(00:30:39): If it had to read the entire conversation and the entire stream of requests that you made, developing anything viable or large would be impossible because it’s just like consuming a lot of time and a lot of memory and a lot of tokens.
(00:30:53): So again, something that I just figured out very early on as I was building was, “Okay, if it can’t remember things, my job is to provide it with reference. So let me treat Lovable or any other tool as an engineer that I’m supposed to be providing perpetual context as the project goes.”
(00:31:14): And you can do that in many ways, but the most efficient way that I found was, I would do the four parallel builds. Let’s continue off of that example. Very quickly, after you’ve built hundreds of projects like I did, you see the winner. The winner is so obvious, it’s not even a competition. You maybe do one or two more prompts to calibrate it. And when you’re like, “Okay, the winner is here,” at that point I either ask the tool that I’m using, or I’ll maybe let’s say go to ChatGPT or whatever and ask the LLM to produce a series of PRDs.
(00:31:52): What PRDs are, for people that are, again, not familiar with the term: they are product requirements documents, or for me, I call them sources of truth. What needs to be true for this project to be successful from a couple of perspectives? I usually build something that I call a master plan. It’s basically a compass saying, “Here’s what we’re building.”
(00:32:12): It’s like talking to a human. I really treat Lovable like a human being. So it’s like, “This is what we’re building.” Then I build an implementation plan, which is, “This is how we are going to build it and this is the sequence.”
(00:32:24): It’s very important to me, again, going back to quality, taste, human nature. I need to define … Because I’m still working with a system that is not emotionally intelligent yet, I need to define how I want the app to look and feel. So, another PRD that I build is design guidelines.
(00:32:43): And then finally, something that just circles it all around, which is, “Okay, when we know how things look and when we know how we’re building it, what does the user journey look like? The user registers, and then what? And when they register and do that first step, what’s the second step, and what’s the third step, and whatnot?” So I build at least four PRDs. Right?
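[The four planning documents described here might be organized something like this. This is a hypothetical sketch for illustration; the file names are not Lazar’s actual templates.]

```
docs/
├── masterplan.md           # what we're building, for whom, and why
├── implementation-plan.md  # the build order: backend → auth → API → UI
├── design-guidelines.md    # look and feel, with some concrete CSS parameters
└── user-journeys.md        # how users move through the app, step by step
```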
(00:33:06): And then when these are built, I read them. That’s the planning, chatting part. That’s where I’ll spend a lot of time now. When I nail down that first design, I’ll spend an entire day if I need to just planning this part out, like documentation and breaking things down, because that’s how I’m setting the course. Everything’s going to be dependent on this particular part of the process.
(00:33:29): When I’m done doing that, I build one final document, which I call either plan.md or tasks.md, and the .md part is Markdown. Basically, I’m just using Markdown format because I’ve learned that AI likes to read Markdown. And that serves as a source of truth on the actual tasks and subtasks that it will need to execute to get to the finish line.
(00:33:55): And then there’s the final, final layer, which is, depending on what tool you use, Claude Code or Cursor have what’s known as rules.md or agents.md. What you’re basically doing with rules or agent files is you’re letting the agent know how you want it to behave and what it should focus on in the long run so that you don’t have to repeat yourself with every prompt. Right?
(00:34:21): So in Lovable, there’s a separate menu for that in your project settings where you can define project knowledge. And usually what I’ll say is, “Hey, read all the files before you do anything. Don’t do anything before you read all the PRDs. Read tasks.md to see which task is next, then execute on that next set of tasks. And when you’re done, tell me what you did and how I should test it.”
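[The kind of project-knowledge or rules instructions described here might look roughly like the following. This is an illustrative paraphrase of what Lazar describes, not Lovable’s actual template.]

```markdown
# Project rules

1. Before doing anything, read all the PRD files:
   masterplan.md, implementation-plan.md, design-guidelines.md, user-journeys.md.
2. Read tasks.md to find the next unfinished task.
3. Execute only that next set of tasks, nothing beyond it.
4. When you're done, report what you did and how I should test it.
```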
(00:34:46): And that’s where that conversation about, “I religiously read the agent output,” comes into play. I gave the agent everything, all the tools and resources that it needs to succeed. I gave it the rules, I gave it the docs, I told it what to do with them. And at that point I’m just sitting and reading. I don’t prompt anymore.
(00:35:08): From that point on, I can switch as many windows as I like. My prompts have become, “Proceed with the next task.” I don’t need the context. I outsource that and delegate that to the agent. The agent needs context and I need to make sure that it’s dynamic. I need to make sure that I’m regularly updating the documents from time to time so that we shift that token window it uses and how it uses it over time, but I’m not prompting, I’m not interrupting the flow.
(00:35:39): Yes, I’ll go in, test, maybe put a prompt in here or there, but that’s how I can build five projects simultaneously and never lose the productivity part, which is again, as I said, I do this today, manually. Call me to talk three months from now, an agent will do this for me. I’ll be out of job, pretty much.
(00:35:58): That’s why I don’t optimize for this skill at all. I’m using it today to bypass the shortcomings of human nature and LLMs, but I’m optimizing 100% of my time today on good judgment, clarity, quality, taste, good copy, good fonts. People that work with AI don’t talk about fonts at all. They’re 60%, in my mind, maybe even more, of how your output is going to look. That’s my obsession.
(00:36:31): I don’t obsess over these things that I’m talking today because I know what’s coming. The agents are going to get better, the models are going to get better. They’re not going to need me to extend the context. They’re going to do it themselves. So for me, the skill that I optimize for is the one that requires better decision-making rather than better output or better alignment.
Lenny Rachitsky (00:36:57): Oh, my God. There’s so much here. This is so awesome. Okay. So essentially, what’s happening here is you start a project, try a bunch of stuff, pick a direction that feels most correct. And once you have a set direction, you spend essentially a day, not building, but working with this AI agent to plan.
(00:37:19): And then, once you have the plan … well, I want to talk about that. And it’s amazing that you can do stuff like this with tools some people may feel are not sophisticated, and build incredibly powerful things. You can do a lot of this with tools like Lovable, like have plans and rules and MD files. A lot of people may not know that.
(00:37:40): And so the idea is, okay, spend all this time planning because again, that’ll save you a lot of time down the road. And then only once you have a plan, you get it going. And a key part of this, this three-wishes rule is really important.
(00:37:53): The reason you’re doing this in large part beyond just being really clear about the plan is this idea of one task at a time keeps the agent’s context window small so that it doesn’t lose track of where it’s at. That part seems important. It’s like, “Do this thing.” And then, “Okay, cool. Now do the next thing.” Right?
Lazar Jovanovic (00:38:12): Yes, yes, because again, let’s say you didn’t do this. Let’s talk about you ignoring this and you’re like, “I just want to vibe my way.” Okay, great. No problem. You work, you work, you work. At one point, something breaks, right? You haven’t documented anything. There’s no reference points. You report a problem. You’re not referencing files or architecture at all, you’re just describing the issue.
(00:38:40): Here’s what’s going to happen. Any tool, Lovable or Cursor or Claude, whatever tool you talk about is going to do this. It’s going to be like, “Okay, let me start investigating.” And then your code base gets bigger and bigger and bigger and bigger and bigger.
(00:38:54): When you first start, you have 20 files. It can read 20 files. But what happens when you have … I’m just building a project right now that has 60, 70 edge functions. What happens then when I say, “This broke and there’s no reference which edge function does what?” Guess what? Lovable’s going to read all of those and it’s going to consume 80% of the token allocation on reading to get clarity, leaving only the final 20% for thinking and executing.
(00:39:24): What I’m guessing, and I can’t prove this, an LLM expert in the comments may say that I’m wrong, but this is my best guess as a non-educated person: these tools are very obedient and very agreeable. They’re going to lie to you. They’re going to tell you that they fixed the problem, even though they didn’t. They’re just going to try to make you feel happy and say, “Yes, I found what the problem is and I fixed it,” when a lot of times they don’t. People blame the machine, and to an extent, I will say that’s true.
(00:39:57): It’s your fault, my friend. You did not provide any clarity or context to this tool. You just used its raw power and dug a deeper hole with spinning your wheels into the mud. And obviously, I think we’re heading into a world where AI is more honest than obedient in saying, “Hey, I only partially fixed this. You did not give me enough of a context.”
(00:40:23): The bigger mistake that people make then is they trust the tool fixed it. They test, they see it didn’t, then they get mad at it, start cursing and yelling, as we say, and then it gets even worse because guess what? Another bad trait of AI is it does its best not to hurt your feelings and never say, “You’re the dumb one.” It says, “No, I’m the dumb one.”
(00:40:47): So it focuses … In the next request, instead of focusing on reading, it spends another 30% of tokens trying to come up with an apology. Again, I’m not educated, but if you ever read a stream of ChatGPT’s thinking in thinking models, you see exactly what I mean.
(00:41:05): When I insult it, I see that the first message says, “Okay, the user is mad, so I need to think of ways how to reduce their anxiety or whatever.” I’m like, “Oh man, I just fell for the worst trick in the book. I made it spend the most scarce resource, which is those tokens on thinking how it should address my anxiety versus focusing on the actual problem.”
(00:41:27): So my advice for people is, yes, vibe your way for fun and vibe your way while you’re prototyping because that’s the exploration part. I love that part. But when exploration is done, please, please, please use referencing, documentation. Use all the agent files that you can because that token allocation is so scarce. It’s going to get expanded over time. Things are going to get cheaper, faster, but right now it’s still so valuable and precious, you really need to make sure that they are allocated in the right direction.
Lenny Rachitsky (00:42:04): This is hilarious. I think the genie metaphor is so good here. Just thinking about this genie: you’re trying to be clear about what it is you want. And if you’re just vibe wishing, it’ll do the wrong thing.
(00:42:19): So the advice here is give it as much context about what you want it to do as possible. And these files, we’ll talk about right after this. But the idea here is just point the laser at where you want it to fix the problem. Don’t just assume it’ll go figure it out, because it will try really hard to, and it’ll waste all your tokens. It’ll fill the context window.
(00:42:41): And I remember at one point you mentioned before this recording that because it starts to run out of space in the context window, it doesn’t actually work that hard on figuring out the solution in the end, because it spent all this energy on reading and thinking. And then at the last second it’s like, “Okay, here’s a solution.”
Lazar Jovanovic (00:42:59): I think it just picks the first thing it thinks is broken. Again, this is me completely uneducated, coming into the conversation and just thinking out loud. That’s just my gut feeling and the way I think logically about it, which is, “Hey, if it consumes most of its window and knows that it’s running out of it, maybe it’s aware that it’s running out, maybe it isn’t.”
(00:43:21): But either way, I’ve had the experience, anecdotally, where my request is unclear and I feel it takes the easiest fix in the book, just the easiest, versus the other way around, where I’m spending so much time finding the right file, referencing that file, really putting in the effort of handholding it in the dark, maybe giving it a flashlight, and then saying, “Here’s the problem. I think that this is the problematic file.” And then it’s like, “Oh yeah, you’re right. And now I’m going to actually fix it,” over and over and over.
(00:43:55): And I’ve seen that because, again, all I do is read the output. The agent makes me learn how to use it. I don’t know what other people read, but all I read is the output. I don’t read the code, and won’t until later down the road, because I know that it can do that much better than I can.
(00:44:13): Again, I feel if … There’s a good quote I’ve read. I apologize to the author because I can’t attribute it off the top of my head, but it’s like, “The ceiling on the AI isn’t the model intelligence, it’s what the model sees before it acts.” So that’s the ceiling right now. What are you exposing?
(00:44:33): We talk about exposure time for humans. What you’re exposing your agents to is just as important, if not even more important, before it makes code edits. Yeah.
Lenny Rachitsky (00:44:44): Coming back to these files, I think this is really important. So let’s think about just what’s the MVP for someone that wants to do this better? You listed all these MD files essentially, that you’re building over the course of a day before you start actually building the thing. You had design guidelines, the user journey, tasks, agents.md, rules.md.
(00:45:03): Say you wanted to just move one step forward and be better at this stuff, what are the files you’d create and then what do they roughly look like? What’s inside these files?
Lazar Jovanovic (00:45:12): Yeah. So the master plan is the first one, which is like, it’s a 10,000-foot overview, right? It really, high-level explains the intent that I have with this app.
Lenny Rachitsky (00:45:22): And this is masterplan.md? Is that what you call it, or …
Lazar Jovanovic (00:45:25): Yes. Yeah, masterplan.md. And it’s really just like, “Hey, this is why I’m doing this. This is who I’m doing it for. This is how I want them to feel.” And a lot of times in the master plan, I will reference the other PRDs. I’ll be like, “The design needs to feel modern and slick, but for exact parameters, consult and read design guidelines.md.”
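[A masterplan.md in that spirit might look something like the following. The product idea and every line of content here are hypothetical, invented purely to illustrate the structure Lazar describes.]

```markdown
# Master plan

## Why
Busy freelancers lose track of unpaid invoices; this app makes follow-up automatic.

## Who it's for
Solo freelancers and small agencies, mostly non-technical.

## How it should feel
Calm, modern, and slick. For exact visual parameters, consult design-guidelines.md.
For the build order, consult implementation-plan.md.
```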
(00:45:48): So I’m using just the master plan as this high-level overview to get the agent into, “Oh, okay. Yeah, we are building X, Y, Z.” Then there’s the implementation plan because there needs to be some order. If you just dump stuff on top of each other without any order, you’re never going to get to the finish line.
Lenny Rachitsky (00:46:11): And this is tasks.md? Is that what you call this?
Lazar Jovanovic (00:46:13): No, that’s the implementation plan.
Lenny Rachitsky (00:46:15): Implementation plan.
Lazar Jovanovic (00:46:15): I call it implementation plan. Yeah.
Lenny Rachitsky (00:46:15): Okay.
Lazar Jovanovic (00:46:17): And the implementation plan is kind of in service of the future tasks.md, if that makes sense. All of these files are in service of building tasks.md. When you build tasks.md, then the rest is almost irrelevant. It’s just the basis for you to build tasks to execute.
(00:46:32): The implementation plan is kind of the first layer, which is again, higher-level overview. It doesn’t go into the depth of how to get there. It just goes into the explaining of, “Oh well, if we’re building this, I think we should start with the backend, and we should start with tables and then later authentication. And then after that, we’re going to bring in the API. And then after that, we’re going to do this.”
(00:46:57): It’s like, again, just think of it as having … I’m an ideas guy. I’m sitting with a technical guy. It’s me and you. We’re building our startup. I know you’re a software engineer by background and I’m telling you my idea. I’m giving you the master plan. And you come to me back and you’re like, “Okay, if you want to do this, it’s doable. Here’s how I would order it.”
(00:47:14): You didn’t have a roadmap, you didn’t open your Linear and start writing features and RFCs and whatever. You’re just high-level talking about the order of things. And then me and you, again, as two co-founders, we talk and say, “Okay, well, if we agree on this, what should this look like? How should this feel? Let’s describe it high-level,” but now, because I use AI, I can go a little bit deeper.
(00:47:39): And that’s where I like to see Lovable or any other tool. ChatGPT is good at it. I even have my … I’ve built custom GPTs. So if people want to start somewhere before they even get into any tool, they can go to ChatGPT store for GPTs and just type Lovable base prompt generator or Lovable PRD generator and find those that I built and just brain dump in them and then get these files as output. Right?
(00:48:06): So I like to see some elements of CSS in design guidelines because with design, it’s a little bit tricky. AI is sometimes overcreative. So that’s where I’m doing a little bit more technical steering.
(00:48:23): And then finally, it’s just the user journeys. If we know what things look like, if we know how they feel, if we know what we’re building, high-level, just very high-level again: how do people navigate? What are some of the features in there and stuff like that? And then tasks.md gets into the nitty-gritty of, “Oh, if you want these user journeys and you want the backend built first, here’s a set of tasks that I need to do.” It just takes that as an input. I’m just making the tool do that gritty work that humans used to spend so much time on.
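[A tasks.md distilled from those documents might look something like this. Again, a hypothetical sketch; the specific tasks are invented for illustration.]

```markdown
# tasks.md

## Backend
- [x] Create database tables (users, projects)
- [ ] Set up authentication
  - [ ] Email/password sign-up
  - [ ] Password reset flow

## API
- [ ] Endpoint: create project
- [ ] Endpoint: list projects

## Frontend
- [ ] Registration screen (see user-journeys.md, step 1)
```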
(00:49:00): I feel like with these tools, we’re all becoming product managers on steroids. We’re just leveraging AI, but good product managers, I think are not compensated for writing good PRDs. They’re compensated, again for good judgment.
(00:49:15): Somebody else can do the writing. You, as somebody who directs and builds this product, you need to know, again, what’s going to be useful, what’s going to be tasteful, what’s going to be something that actually moves the needle. I will say one thing though, just because I put so much emphasis on, “Oh, you need to acquire taste. Oh,” that doesn’t mean you shouldn’t build. You get better at this by building actually.
(00:49:44): So everybody listening to this should literally go and build something today. One, two, three, four, five projects, test all of these tools because that’s how you get to clarity, not just by reading, but also by doing as well.
Lenny Rachitsky (00:49:57): Here’s a puzzle for you. What do OpenAI, Cursor, Perplexity, Vercel, Plaid, and hundreds of other winning companies have in common? The answer is they’re all powered by today’s sponsor, WorkOS. If you’re building software for enterprises, you’ve probably felt the pain of integrating single sign-on, SCIM, RBAC, audit logs, and other features required by big customers.
(00:50:19): WorkOS turns those deal blockers into drop-in APIs with a modern developer platform built specifically for B2B SaaS. Whether you’re a seed-stage startup trying to land your first enterprise customer or a unicorn expanding globally, WorkOS is the fastest path to becoming enterprise-ready and unlocking growth.
(00:50:36): They’re essentially Stripe for enterprise features. Visit workos.com to get started or just hit up their Slack support where they have real engineers in there who answer your questions super fast. WorkOS allows you to build like the best with delightful APIs, comprehensive docs and a smooth developer experience. Go to workos.com to make your app enterprise-ready today.
(00:50:58): I’m imagining people hearing this may start to feel like this is so much work. “I just have to sit here and create all these rules and figure out all these little details.” In one sense, it is. In another sense, this is like you spend a few hours, maybe a day planning and then you have AI build this thing that would’ve taken somebody weeks, months, right? The amount of investment to achieve this thing is absurd, the ROI.
(00:51:24): Also, this shows you just what professional vibe coding looks like. Everyone imagines vibe coding, “I’m just sitting here typing stuff, and go and do this.” If you want to actually build something really great that moves the needle, as you said, that solves people’s real problems, that lasts, that scales, this is how you do it if you really want to do this as a job, and also if you want to build things that are really great.
Lazar Jovanovic (00:51:48): Yeah, and don’t get me wrong, there’s obviously a ton of value in prototyping. There are a lot of people maybe watching this that are like, “Okay, I want to use Lovable at work, but I can’t,” or whatever.
Lazar Jovanovic (00:52:00): There’s different reasons. Maybe you’re in healthcare or finance, or there’s something regulatory that just prevents you from pushing to production. Building for the sake of prototyping is one of the best use cases. Our motto for 2025 was “demo, don’t memo,” which is: instead of writing all these documents and talking and sitting in meetings with your engineers, trying to get your vision as a marketer or a sales guy in the office across, go into Lovable, build the prototype in 30 minutes, and just hand it over. And at a real job that I held before Lovable, that’s exactly what happened. This time last year, I needed something built enterprise-grade, really. And Lovable and myself were not there yet to build it at that point, but I had a team of engineers that I worked with. I built the prototype in four hours, and they actually were able to replicate it six to seven months later into production, connecting all the pipes and everything.
(00:53:02): But if I had to describe it, I would say it would take me at least a week or two just to get the words out there. I just sat and built it in four hours. And that’s Lovable, January of last year. The Lovable of today, January 2026, is ages, ages ahead in functionality. It’s so much better. It’s not even a contest. Right? So I think now, at our stage, for instance, to the best of my knowledge, at least half of S&P 500 companies have people working in them that are using Lovable to some extent. Right? And we have a lot of enterprise companies that are actually on enterprise plans with Lovable that are creating super meaningful projects.
(00:53:50): I’m not going to name names, but leading rideshare companies of the world, leading telecommunications companies of the world, leading companies of the world in many, many aspects, healthcare, finance, are actively, with their teams, using Lovable. And it’s always the same feedback, which is, yes, we may not be able to push to prod, but our marketers are no longer waiting for engineers. Our people in go-to-market or sales or HR, or whatever roles, are now just confidently building internal stuff for us to manage our expenses or manage employee onboarding or… There’s so many use cases like that where you’re seeing Lovable, and other tools for that matter, being used to push things into production.
Lenny Rachitsky (00:54:39): To help people do this workflow that you’re describing with all these MD files, do you think you could share, after we record this, just templates, simple templates of what these files look like, for people just to look at and copy?
Lazar Jovanovic (00:54:51): I would literally go to ChatGPT, as I said, and brain dump into it in my… Just type, Lovable PRD generator. You’ll see my name there and that I’m the author. Go in, brain dump. It will ask you a couple of questions to get clarity and just produce four files for you and you can just go ahead and upload those.
Lenny Rachitsky (00:55:14): Amazing. Cool. We’ll link to that. So it’s not just, here’s a bunch of files, let’s go talk to this thing. It’ll generate the right files for you, and then you plug that into Lovable or other tools.
Lazar Jovanovic (00:55:22): Yeah, it’s trained to think like I do. So yeah.
Lenny Rachitsky (00:55:25): Oh, amazing. Okay. That is perfect. By the way, I want to talk about how you unblock yourself, because there’s a whole other series of tips you have there, but I just want to reflect on… It’s so interesting how, one, you’re kind of learning from first principles how to build product as a PM, as an engineer, as a designer, and you’re figuring out a workflow where AI is helping fill in all the gaps that you don’t have, as an engineer, as a PM helping you craft PRDs, and as a designer. So I think that’s so interesting. It’s interesting that these functions still work and are necessary. Now it’s you and AI helping create, basically, this triad that’s always existed: product manager, engineering, and design.
(00:56:10): And something I’ve always thought is that there’s this question of which background will be most valuable in this future. Is it a PM? Is it an engineer? Is it a designer? My mind has always been that the PM function, their job, is to clarify: figure out what to build, clarify what to build, be really clear about the requirements, figure out what success looks like. It feels like that’s where the skill is most needed. There’s also a design component of, make this look awesome. And I feel like that’s going to be an emerging… The value of being really good at design and taste and judgment is only going to go up. Before we get to things you’ve learned about how to unblock yourself, because a lot of times things don’t go in the right direction, there’s a bug, and without being an engineer, what do you do? Before we get there, is there anything else you wanted to share around just tips for being successful?
Lazar Jovanovic (00:57:03): If we measure success in the right terms, again, AI, as you pointed out, regardless of your background is an amplifier. So if you don’t know what you’re doing, you’re just going to produce garbage faster. One thing, again, I just want to double down on is in the old world, good enough was good enough. Right? Because even producing good enough was not easy. Right? 10 years, 15 years ago, just producing was more than plenty, more than good enough. You built a SaaS, who cares how it looks like? It works. It does stuff [inaudible 00:57:42]. “Oh my God, I’m so much more productive.” Today, if good enough was here, let’s visualize it for people. If this was pretty bad, could be better, mediocre, good enough, world-class, if this was the gap between good enough and world-class, well, guess what? The gap is now this, because everybody produces good enough with AI. Absolutely everyone does it.
(00:58:09): So now, learning and optimizing for, “How do I produce world-class and magic?” is the key lesson to take away today. As you pointed out, I think PMs are the winners of AI today because they bring clarity. If I was a betting man, as they say, I’d bet that the next class that wins is designers. Because we’re training these tools to be more clear, to be better, to make better technical decisions. I don’t think we will train them just yet to make better emotional decisions. And I think design is all about emotion, and that’s where the level of skill-up needs to come. That’s the biggest level-up. If you ask me, “Oh, what is the main thing you figured out when you joined Lovable? What’s the biggest personal upskill?” Let’s say, working with Felix, Nad, Abby, all of the people that are designers, that is really what moved and shifted the needle for me. I’m like, “Oh, so this is what world-class looks like and this is what it takes.” Right?
(00:59:18): I always use the analogy of like, I wanted to steal one of their designs and bring it into my Lovable project. So I went into Figma and I was like, “Let me just take this background and just put it in there.” I went in and realized that what could be interpreted as a pretty simple or rather simple gradient, took 50 different layers to produce. So I clicked on that component. I was like, “Oh my God, this is not three colors. This is 50 colors.” And not just 50 colors, 50 colors with different gradients of levels of opacity. So I was like, “Oh, okay.” And that’s the big disconnect that I’ve had all along.
(01:00:02): So again, if I’m answering your question directly of, “Okay, what are some of the other tricks? What are some of the other things?” Design. Guys, just expose yourself to exquisite designs. Follow Felix from Lovable. He has an amazing newsletter. And learn how to prompt for a good design, learn about design styles. I didn’t know what Bauhaus meant, or glassmorphism. I had no idea. So I built an app for that in Lovable as well. I was like, “I needed to build an app to learn these styles.” So now it’s public. Anybody can see it. It’s like some UIstyle.lovable.app. I don’t know what it is. It has 18 different styles and prompts to replicate them. So learn what good design means, learn all the design styles, learn how to prompt to get them, is probably what I would optimize for at this stage. Yeah.
Lenny Rachitsky (01:00:56): While we’re on this topic, what’s your sense of just engineering as a function? Do you feel like there will be a future where software engineers are still a thing? Do you feel like that goes away based on your experience?
Lazar Jovanovic (01:01:05): It never goes away. We will need elite engineering more than ever. Because let me tell you this. In a world where everybody builds and everybody’s building everything, who’s doing the maintenance, right? Maintaining code bases, scaling code bases, maintaining projects, they’re still going to be a thing, definitely. And obviously, AI is going to be good at this, but again, that requires a different level of skills. Right? It’s one skill to build something; it’s a completely different set of skills to expand it, extend it, and maintain it. And not to mention that in a world where everybody’s building, infrastructure suffers. Right? We all know and experienced it: Cloudflare went down two or three times in the last two or three months, and the whole internet goes down. Elite engineers are the ones fixing this. Lovable experiences massive influxes of new users. Infrastructure there suffers. Elite engineers are the ones building the infrastructure to hold the fort. Right?
(01:02:06): So I think we’re going to need a lot of people with really good skills of like, “Hey, who actually builds the world that needs to support billions of builders now?” Because everybody’s going to want to learn how to build stuff. How do we teach them? How do we maintain everything that they need? The hosting, the security, the email, the connectors, the APIs, the whatnots. So I think there’s going to be room for it, but I’m also in the boat of people like, if I had an 18-year-old brother and he asked me what he should do, I would tell him, “Hey, go become a plumber. Don’t go and get a CS degree. Learn a good trade.” Because the new generation of millionaires in the US are actually electricians and plumbers and whatnot. Right? So it’s a balancing act, I’d say. I don’t know. I do still think that good engineers with a good sense of understanding where the future is going are always going to be needed and scarce.
Lenny Rachitsky (01:03:08): Such an interesting question. I think to your point, there’s definitely going to be, people need to keep building the machines that power all this stuff. Will we need engineers to build actual products, the application layer? That’s the question. Is everyone going to be like you? Are designers just going to be all we need?
Lazar Jovanovic (01:03:27): Everybody’s going to become an engineer. And let’s speak to that end. I feel like I’m a rapid engineer. I’ll refer to myself as a rapid engineer a year from now, because vibe coding is just coding in 12 months from now. And even today, we spoke about this before: how many elite engineers are publicly admitting they’re no longer hand coding, or manually coding, whatever you want to call it? AI writes all the code. I use the analogy here of, coding is going to be like calligraphy. You writing code is going to be the equivalent of fine printing on a canvas. And people will be like, “Oh my God, you wrote that code? That’s so amazing.” It’s going to be so rare that it’s going to become an art. Right? It’s going to be commoditized completely. It already is, in a sense. Most elite vibe coders rely on AI. Again, it’s an amplifier. Right?
(01:04:28): So I think everybody becomes an engineer in the world of the future, a designer, a PM. Everybody is a forward deployed engineer or an AI assistant engineer or an LLM engineer or a vibe coder. The term is irrelevant. We’re all using LLMs for raw output based on good judgment or bad judgment.
Lenny Rachitsky (01:04:54): Oh, man. Essentially, these Venn diagrams of engineer, designer, PM used to be very separate. Now they’re converging, and people with deeper PM, engineering, or design backgrounds can all do the same thing, essentially. All the roles are converging. What a time to be alive. And it’s so hard to predict exactly how this all goes, but it’s fun to pontificate. I want to get back to when you get blocked. Speaking of elite engineers, in reality, you are still writing code using these tools. Sometimes things go wrong. Bugs are introduced. There’s a weird database thing. There’s some network issue. What do you do when you get stuck? Do you have a workflow you go through to unblock yourself?
Lazar Jovanovic (01:05:37): Yes. Great question. And absolutely true. No matter how good of a plan you have in place, you’re going to run into problems eventually. And I have a small framework that I call four by four. Again, analogies, right? If you have four by four on your car, you’re going to get yourself out of the mud much more easily. So in that sense, four different ways to debug. Attempt each one only once, and I’ll explain why at the end. The first one: again, every tool is different. I’ll reference Lovable’s workflow. When something breaks, Lovable’s agent is smart enough to say, “Hey, I made a mistake.” It will label that message in orange and have a little button, usually called “try to fix.” So your agent basically admits it made a mistake. You click the button, and most times, when it’s a smaller issue, it corrects course and fixes it. No problem. Right?
(01:06:40): Now, there are situations, obviously, when the problem is a little bit deeper than that. Right? You click try to fix, but the problem persists. And sometimes the problem persists, but Lovable’s agent is unaware that it persisted. So there’s no more try to fix button. Lovable thinks everything’s working, but in reality, it isn’t. And the culprit there is usually that you’re using a third-party integration and did not give Lovable enough context about what to observe and see. So it can’t see that the problem exists, because Lovable, Cursor, Claude Code, you name it, all of these tools are good enough today to fix any problem they’re aware of. Again, awareness is the key here. Right?
(01:07:20): So when they’re unaware of it, there comes the second part, which is, “Okay, I need to bring in the awareness layer.” And what I do there is very simple: I go and open the preview, the sandbox dev environment of my app, try to run the function that’s broken, right-click, and read the console log. Right? Every browser allows you to just go and read the console log. And a lot of times, it will record stuff. If it doesn’t, you can prompt any tool and say, “Hey, I don’t think you’re seeing the problem. So instead of me yelling at you, let’s find it together.” Right?
(01:07:58): I think it’s a problem with X, Y, Z. I want you to write console logs in the relevant files so that we can monitor every step along the way. Let’s just bring the awareness layer into the equation. It writes the console logs, you rerun it, and guess what? Now you have a full history of everything that was happening. You copy that and paste it inside your chat. 99% of the time, that’s already enough. AI is like, “Okay, got it, found it, fixed it.” But then there are situations when even that’s not sufficient. So you’re like, “Okay, I need to go even deeper.” And that’s where code reviews and evaluations come into play.
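The console-log instrumentation Lazar is describing can be sketched roughly like this. This is a hypothetical example, not Lovable's actual code: the function, the discount logic, and the log labels are all invented for illustration, but the idea of logging the input and each intermediate value so the agent gets a full trace is the same.

```javascript
// A minimal sketch of the "awareness layer": sprinkle console.log calls
// through a broken flow so the full history of what actually happened can
// be copied back into the agent's chat. All names here are invented.
function applyDiscount(cart, code) {
  console.log("[applyDiscount] input:", { cart, code });

  // Resolve the discount code; unknown codes fall back to 0.
  const discount = { SAVE10: 0.1, SAVE20: 0.2 }[code] ?? 0;
  console.log("[applyDiscount] resolved discount:", discount);

  // Sum up the cart before applying the discount.
  const subtotal = cart.reduce((sum, item) => sum + item.price * item.qty, 0);
  console.log("[applyDiscount] subtotal:", subtotal);

  const total = subtotal * (1 - discount);
  console.log("[applyDiscount] total:", total);
  return total;
}

applyDiscount([{ price: 20, qty: 2 }, { price: 5, qty: 1 }], "SAVE10");
```

Running the broken flow once with logs like these in place produces the step-by-step trace that, pasted into the chat, usually lets the tool spot where the values went wrong.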
(01:08:41): My go-to tool today for that is Codex, from OpenAI. What I do is, for any build, I will export it to GitHub. Lovable allows you to own your code, and Cursor does as well. All of these tools let you have a copy of the code that you can export to GitHub and then import wherever you want. I’ve used Codex since beta, so I import the project there, and then I’m using an external tool. So in the first try, if you remember, I used the tool itself: total vibes, I’m relying on the tool. Right? In the second try, I use myself as the awareness facilitator. In the third one, I’m using an external tool as a facilitator. I’ll connect to Codex and chat with Codex to then fix the problem in Lovable. Right?
(01:09:36): I don’t allow Codex to make code changes for me. A lot of people will say, “Why not? It’s a good model.” I just don’t know its agent well enough. I don’t want to use a tool that I don’t know how to steer. So I use it only for diagnostic purposes, and I’ll also do it manually. It’s an old workflow that I had before Codex and before Claude Code. There’s a tool called Repomix, which allows you to compress your entire code base into a single file. I download it and then upload it to Claude, just regular Claude, or ChatGPT. And I’m like, “This is what I’m building. Read it. This is the problem that I have. These are the console logs.” Again, it’s almost like having an external consultant at that point. You’re hiring help elsewhere because your team just can’t handle it. Right?
(01:10:26): And then the fourth one is usually the best one, because the other times when there are problems, it’s my fault. No matter how big your ego is, guys that are watching this, it’s your fault. Trust me. You had a bad prompt. You premised your request in the wrong way. You just don’t want to admit it, or you can’t remember that you did, but it’s your fault. So again, in Lovable and in all these other tools, you can revert back. There’s version control built into Lovable, Cursor, Claude Code. You go and say, “Okay, I tried these three things. I’m just going to take three steps back and think about my prompt a little bit more.” Take a couple of breaths, go for a walk, have some coffee, come back with a clear mind, and try again. Because guess what? AI is just writing code very fast, and sometimes it stumbles on a very small rock, and it only happens then and never again. So you’ve just got to make the same request again. And usually, that fixes the problem. It’s just a snag. It’s a syntax error. It’s something minute. Right?
(01:11:29): And then I do the final thing, which is this. And this is the key one actually. When the problem gets fixed, I go into the chat mode and I ask Lovable, I say, “Okay, I needed to do four different things to fix this. How can you help me learn how to prompt you better so that next time I have a problem, we do it in one go?” 99% of the time, I get such a great answer that I don’t have the problem of not knowing what to do next time. Right? Again, we all need to be aware and realistic. These tools are so good at doing things the right way, if they are used the right way. It’s always our fault. I say 90%, but honestly, it’s 100% our fault, because they’re good enough. It’s just that I’m not dynamically shifting token allocation. I didn’t reference the right file. I didn’t say it the right way.
(01:12:26): For me, as a non-designer, I don’t know any of the terminology, none of the headings and whatnots, and I still don’t know it to this day. So when I struggle with prompts, a lot of times, I use chat mode to help me craft a good prompt. Anybody can do this too. If you are just stuck, it’s 10:00 PM and you don’t know what to ask, switch to chat mode, brain dump and be like, “Help me draft a better prompt. Help me prompt you better.” And let the tool effectively prompt itself. A lot of times, you’re going to solve your problems by not introducing them at all with bad inputs.
Lenny Rachitsky (01:13:05): Oh my God. Everything you share is so interesting. I want to keep digging. So just to reflect back the sequence, and then I want to follow up with another question. The sequence you go through when you get stuck, which is going to happen to everyone. One, is just ask the tool to try to fix it. And oftentimes, it’s telling you, “Something is wrong. Can I fix it for you?” And you’re like, “Please fix.” Sometimes that’ll work. Two, is work on adding more debugging messages to the console log. And this advice, I love of just ask it to add more debugging lines to its own console log to help see what’s going on. And then you can ask it, “Okay, now that you’re looking, look at all the output of your console log, see if you can help find the problem.” And then step three is go to Codex, which is so funny.
(01:13:59): And I hear this a lot, that Codex is the most elite engineer as an AI. Karpathy tweeted this once. And we had the head of Codex on the podcast too, by the way. He’s like, “Anytime I have the most gnarly bug, I just go to Codex, let it run for half an hour, and it solves it unlike any other tool out there.” And so it makes sense that that’s where you go. So the idea here is you point Codex to your code, you show it all the console output logs, tell it what the problem is, and just have it go figure it out. Sweet. And then this final step is so great, and this is where I want to go: you use this as a learning opportunity so that next time, you solve the problem more quickly or avoid it completely. So what you do there is ask the agent, “Okay, here’s what happened. What can I do? What could I have said? How could I have prompted you better to have gotten this immediately solved?”
Lazar Jovanovic (01:14:52): Yeah. And then even deeper than that: once you go through this conversation, you’re like, “Okay, let me eliminate myself completely out of the equation again, because I won’t remember to prompt you better two days from now. Put what we just learned into Rules.md, because I’m making you read the rules every time anyway, so you might as well just record it there. So I’m not going to prompt you better. You’re just going to learn that I’m stupid, and you’re going to prompt yourself better.” Right? Again, just eliminate yourself and move the context, and you solve 99% of the problems with AI today.
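A rules file of the kind Lazar is describing might look something like this. This is an invented sketch: the exact filename and conventions vary by tool, and the entries here are hypothetical examples of lessons a debugging session could produce, not rules from Lovable itself.

```markdown
## Debugging lessons learned

- Before declaring a third-party integration fixed, add console logs to
  every handler involved and ask me to paste the output back into chat.
- If two "try to fix" attempts fail, stop patching. Propose reverting to
  the last known-good version and rewriting the original prompt instead.
- When I report a vague problem, ask me for the console log before
  writing any code.
```

Because the tool reads this file on every request, the lesson applies automatically next time, without the user having to remember to prompt better.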
Lenny Rachitsky (01:15:29): So the idea here is to help it build its own brain and rules and way of thinking based on problems you run into. So great. Okay. So I want to come back to this point you’ve made a couple of times, which is so interesting: this idea that you watch the output of the agent to learn what is going on. It’s something I’ve seen other people do. Ben Tossell, who I think is at Factory now, shared this recently. He’s also basically vibe coding all the time. He was brilliant at no-code tools before, and now he’s all about vibe coding. Basically, he’s learning how coding works and how systems work by watching the agent output. And this connects to something Michael Truell shared, the CEO of Cursor. He was on the podcast. He had this vision of Cursor becoming basically what comes after code. What’s the layer that we are adding on top of code where people don’t need to worry about code anymore?
(01:16:18): And at that point, it was like a year ago that we chatted and it feels like this is the layer, is the agent conversation of what it’s thinking, and then what you tell it back. So essentially, it’s English in a conversation, which is like, it’s not even pseudocode. It’s interesting that that’s where it feels like things are heading. The layer over code, just its thinking and your conversation with it.
Lazar Jovanovic (01:16:41): Yeah, yeah, exactly. Again, in a way, I really optimize for good judgment, and part of good judgment comes from, again, learning how these tools work. You need to know what’s possible. We talked about it, and I know I may sound contradictory sometimes, but it’s because, as you said, it’s so interesting, the world we live in, that things contradict each other. It’s an advantage not to know what’s possible, but at the same time, you cannot be completely oblivious to something that’s a factual thing.
(01:17:18): So let me talk about a failure of mine that came from being delusional. Back in the day when OpenAI started or released image generation natively in the app, so you could go to ChatGPT and be like, “Generate an image of X, Y, Z,” the whole world exploded. That was the biggest thing ever. Obviously, first thing that comes to my mind is like, “I want to build a Lovable app. I just want to build a wrapper and I want to build an image gen with Lovable,” without thinking that OpenAI did not release an API for that just yet. So I spent at least a week trying to brute force my way into make…
Lazar Jovanovic (01:18:01): … brute forced my way into making this work instead of just waiting for another week, because a week later, they had an API and I built this app in 30 seconds. The problem was that I tried to do it when it was impossible, impossible.
(01:18:17): So I think, again, it’s just a matter of really learning what’s possible through communicating with the agent layer. And Lovable and all the other tools are agentic now, which means they don’t just write code. They can browse the web, they can read files. They have reasoning and thinking capabilities. So that’s why I’m so invested into that conversation because a lot of times it will tell me, “Hey, what you’re trying to do is just undoable at the moment because of X, Y, Z.”
(01:18:51): So I always use those as a learning opportunity and I just level up most by being in chat mode for planning and learning purposes. And because it just, again, develops your clarity, your judgment capabilities rather than coding capabilities.
Lenny Rachitsky (01:19:09): The other point you made here that I think is really important is that over time, these tools will do more and more of what you do manually. I’ve heard this from other people that are doing this full-time. Basically, for vibe coding, they had all these workflows, all these files. And then Cursor adds them, Lovable adds them. And it’s sad: “Oh shoot, I had this cool workflow.” But on the other hand, okay, now it’s just doing all these second focus [inaudible 01:19:33].
Lazar Jovanovic (01:19:33): A year ago, if we had done an interview, your mind would be blown by the stuff that I had to do as workarounds to address shortcomings. I built a very successful course on that with Starter Story. For a year, people were like, “Oh my God, you’re the only guy in the world that knows this secret.” Now Lovable natively addresses 99% of it.
(01:19:54): I can almost say most of the stuff that I was teaching people is obsolete. I have a YouTube channel, a little underappreciated, and there’s a seven-day learn-how-to-vibe-code-with-Lovable series that I did in March. Completely obsolete. None of it is true. None of it is a problem anymore. All the things where I was like, “Oh, well, this is missing and that is missing.” It’s not missing anymore. It’s natively in the product. You don’t have to work your way around it. It just works. Right?
(01:20:23): So that’s why, as I say, it’s the horses analogy. I don’t know if you’ve heard of it. A lot of people are tweeting about it. We started building the steam engine in the 1700s. It took us about 200 years to build it. When engines got built and cars were put on the roads, I think 90% of the horse population got eradicated in the US within 20 years. The person that tweeted this works on Claude Code.
(01:20:54): So he was like, “Now, translating it into AI: I was hired to do a technical job, technical writer, whatever. I became obsolete six months later.” Humans did not get the 20 years that horses did. The guy that was hired to do a thing is like, “Six months later, I need to reinvent my role. I need to evolve it into something else.”
(01:21:21): So I think there’s just an evolution that’s coming really, really fast. But a lot of people are scared when I’m just super excited because don’t you see our roles are finally going in a direction where we’re outsourcing what we hated doing anyways, right? Sitting in meetings, taking notes, doing spreadsheets. Maybe there are people that like that, but most people don’t.
(01:21:47): We’re just getting to a place where we’re rewarded for what really matters: clarity, judgment, thinking. We’re actually going to be paid to think longer and ponder longer. The longer an idea simmers and gets broken down, the better, because building it is going to be an instant. It’s going to be like this. It’s just a matter of you having so much clarity around it, because guess what? If a tool is super powerful and you give it a wrong input, the output’s going to suck as well.
(01:22:20): That’s why I never became good enough at Claude Code, I feel, because I don’t start my projects with enough clarity. And the tool is so powerful that I just misdirect it completely from the get-go, and I’m like, “Oh shoot. This is not what I wanted to do.”
(01:22:36): So that’s why I still see myself being good at using tools that are a little bit on the exploratory prototyping path more than on the path that elite engineers will use, for example.
Lenny Rachitsky (01:22:53): I love your optimism and excitement about this stuff. I think for a lot of people, say current software engineers, PMs, designers, there’s a lot of fear about the future of their careers. Are they going to be relevant? Will my software engineering skills disappear?
(01:23:08): So to follow this through a little bit. If you were to give someone advice on which skills you think will be most valuable/where AI will take on more and more, this momentum you’re seeing of where AI is filling in more and more gaps, what would your advice be of what you think people should focus on? What will continue to be valuable in the future?
Lazar Jovanovic (01:23:32): Yeah. Emotional intelligence, for sure. Just understanding human nature. Real-life stuff. I think we’re all going to get so tired of everything fake. Fake images, fake posts, fake profiles, fake this, fake that, fake videos. Everything is becoming fake and AI generated. I think humans, just craving humans, are naturally going to want to do live stuff more. So anything human to human is going to be a big thing to skill up on; understand the dynamics.
(01:24:04): Anything regarding math. If it’s a math problem, I think Peter Thiel said it recently, “People that just do math stuff, AI is going to come for you.” Anything that’s very deterministic, meaning X input equals Y output and the line is pretty clear. AI has got you eaten for lunch. But if you understand how X to Y goes in human dynamic, human relationship layer, I think that’s where things are going to become good.
(01:24:35): So if we translate it again to a specific skill, I’ll say it again: good design, really good design, great design. And when I say design, that’s images and fonts as well. Copy. Copy is a big one. We’re now two years into AI, and I’ll bet you, me and you, if people put 10 pieces of copy in front of us, we could tell what’s AI and what isn’t in three seconds. And we’re only a couple of years in.
(01:25:01): So really good copywriting is going to be a very good skill to have, because people are just going to know after three words or three sentences that it’s AI written. And even I don’t read AI output anymore. I don’t like to see it. I want that raw human experience. So I think human skills, I don’t even know how to describe it, because I don’t think we’re doing an awesome job putting labels onto what humans are good at natively, but I think we will.
(01:25:32): I think we will describe job descriptions better. We will have human first engineers, I don’t know, or human designers or … I don’t know how to describe those roles. Same way how Karpathy coined vibe coding. I was vibe coding before he did it. I didn’t know how to call it. I started vibe coding in July of 2024, and I think he coined it sometime in early 2025. So I was doing it for seven months. And I was teaching people how to do it for about three or four with courses and I didn’t even know how to call it because there was no name. It was like, “Oh, I’m just using AI to do this for me.” I don’t know, whatever. So I think we’re going to reinvent some of the terms, roles and whatnot, but stuff that’s human to human is here to stay.
(01:26:22): Stuff that’s, I think, like, “You’re a middle manager. You’re a middleware person that’s just translating stuff.” And I can use that analogy again. Translators are going to die. People writing jokes, comedians are not. AI is never going to be able to write a good joke. Never, never, never. It just doesn’t have that layer that just doesn’t understand what’s funny.
(01:26:47): If you ever try to use AI to write jokes, they’re awful. They’re always going to be awful. But if you use AI to translate things from one language to another, it’s very good at it. AI is going to replace translators. It’s going to replace most journalists because it does good research. It can write good copy, whatever. Not elite journalism. It’s not going to be able to replace all the writers. It’s going to amplify great writers that can train AI on how to write books.
(01:27:12): So somebody who’s an amazing writer is going to all of a sudden write seven books a year instead of one, right? So that’s dangerous. If you’re an average writer, be careful. There’s zero comedians being replaced. Zero. And that’s just my personal belief. AI is never going to write good comedy. It’s impossible.
(01:27:31): And so try to find your analogy in your industry. I just gave you one for writing skills, so to speak. So writing jokes, super good skill to have. Translating, I’m sorry to say, but you’re not going to have a job for much longer. You better find something else to do. But that’s how I look at it.
Lenny Rachitsky (01:27:57): The comedy piece is interesting. I had one of the founders of a data labeling company on, I don’t know if it was Mercor or maybe Surge. And he said that, I think it was Anthropic, hired a bunch of National Lampoon comedy writers to help them train models. So they’re working on it. So I love this strong prediction you made. I’m so curious to look back in a year and be like, “He was completely right.” Or, “Nope, they got that one too.”
Lazar Jovanovic (01:28:23): I’ll be wrong on 95% of the things I said today, three months from now. That is the only thing I can say very, very confidently.
Lenny Rachitsky (01:28:31): That seems right. Okay. So speaking of career. So one interesting career option is to do what you’re doing. As you said, this is a dream job for you. It’s a dream job for so many people. What is your path to this job and what do you think it takes for someone to actually do this as a profession?
Lazar Jovanovic (01:28:50): Well, my personal path and personal journey were anything but linear. I’ve done so many things in life, blue-collar jobs, even working at Subway while I was studying, and stuff like that. I’m an engineer by trade, but not a software engineer. I’m a forestry engineer. So no coding, but still, engineering is engineering, I feel. You still develop a certain set of skills doing that.
(01:29:14): I waited tables a long time, so you develop some human skills; you understand what people like and what they don’t like. Again, blue-collar jobs teach you hard work. And as I said, the path was not linear, but I feel almost like Slumdog Millionaire, the movie storyline, where everything that happens to the character brings him into a position to answer the questions in the quiz better. I feel the same way. I’ve done a lot of stuff. The last seven to eight years were obviously spent in startups, but doing everything except writing code. I started in community management and social media. Again, distribution matters a lot. That’s something we haven’t touched upon at all.
(01:29:53): In a world where everybody’s building and there’s roughly the same number of consumers, how do you get in front of the eyeballs and get attention? It is the most scarce resource, and it will be even more scarce. But going back to the vibe coder role, if somebody’s saying, “Okay, well, I have a pretty diverse background too, and I’m vibe coding. How does this become a job?”
(01:30:18): Well, for me, I feel it became a job by building in public. I did chat with Elena once, only once on like, “Why me? There are so many good vibe coders that how did you pick me out of the crowd?” And I think obviously, she gave me a couple of reasons, but to translate it into one concept, I was building in public and sharing. As I said, I made a YouTube channel and I shared all the failures and all the knowledge, all the projects that I was building.
(01:30:48): I used social media a lot. LinkedIn was my go-to because I just have that type of cadence. As you can see, all my answers are very long, and X doesn’t cut it for that. You need to be very on point to be successful on X, so I’m not. So I guess it’s just building in public: share your knowledge, give away all the secrets. There are no secrets whatsoever. If you’re sitting on a good concept, you’re missing out. So just share it immediately if you figure something out. I recognized that very early on.
(01:31:23): And a lot of people participate in hackathons these days; I want to encourage people to do them. Find those opportunities locally to connect with other builders. Lovable is hiring across the board. Check out our open positions. It’s as easy as that. Just apply, really. Find companies that are hiring, and hiring in different roles. And I’ve seen people do something; I’m going to give a secret away.
(01:31:48): A couple of hires stood out by not sending resumes but sending Lovable apps. They built Lovable apps to show why they’re a good fit for a role. And we, as Lovable employees, will always open an app that uses the Lovable.app domain. Always. If you send me a DM, send me a Lovable app. Don’t send me anything long. Send me an app that tells me what you want from me or how you see us collaborating and working together. So there are people finding creative ways to get in front of the eyeballs of decision makers like Elena.
(01:32:18): And skill-wise, again, we’re just repeating ourselves here, but I think it’s important to repeat it as many times as possible. Really develop good judgment, right? Really understand in a deeper sense how things translate when vibe code comes into play.
(01:32:42): There’s a company out there, I’m not going to name them, but they use Lovable religiously and are going to be one of our main case studies, actually, and they hired vibe coders before Lovable did. I’m the first official vibe coding engineer at Lovable with that title, but I’ve met people at companies that hired them before us. People that are just vibe coders, people that just understand that speed matters. Right? It still matters a lot to be fast.
(01:33:13): And there’s a company out there with three full-time vibe coders. All they do is translate the old code base onto Lovable. They’re bringing everything over: the CRM, the CMS, all the tool sets that they have and need. There are people now actively just migrating everything over. There are S&P 500 companies that are putting Lovable in job descriptions too, saying, “Hey, Lovable skills are in the recommended tab.”
(01:33:43): So to go back to how to become vibe coder professionally. Well, you don’t need a company to hire you. You can hire yourself as a professional vibe coder first. I think the reason why I clicked with Anton and with Elena and everyone else, because I was already doing it. All I did, I just changed the vehicle, but I was already doing it professionally before I got hired. So that’s kind of the key. Do the job you would’ve done anyways.
Lenny Rachitsky (01:34:18): What a mind-expanding conversation. I love just how passionate and excited and motivated you are about all this. It feels like there’s so many people out there right now that are so burnt out, I don’t know, disillusioned, scared, and you’re the opposite of that. You’re just leaning into this, just taking advantage, taking … You’re not sure where it’s going to go, but following the path.
Lazar Jovanovic (01:34:39): Yeah. And I don’t want to interrupt you, but it’s because, look, Lovable specifically isn’t a company to me. You can talk about it as a company, but I don’t see it as a company. It’s an idea. It’s a mission. It’s something more powerful than the internet in my mind, because the internet allows us to consume. Lovable allows us to build. And our nature, human nature, is to build, to create.
(01:35:08): And the fact that there’s a tool today that you can go into and dump an idea in and something comes out of it and somebody uses it and finds it useful, to me, it’s the craziest concept ever. It’s my only life’s dream. I had my first computer when I was six, and I was convinced my whole life that I’m going to be a software engineer or that I’m going to be building, but life wasn’t as simple as that for me. It was very, very complicated.
(01:35:41): And honestly, in the last five to 10 years, I almost gave up on that dream. I thought I was never going to build anything. I’ve tried. I’ve tried to build with technical co-founders. I just couldn’t find alignment. I just gave up on it. And now at 36, 30 years later, I feel like that kid again. I dream every day. It’s amazing what this has enabled us to do.
(01:36:07): And anybody that’s scared, just try it. It switches from fear to excitement immediately because then you see what’s possible firsthand. Just go in, build something, build anything, and the fear goes away. You should only be afraid if you’re doing nothing. If you’re doing absolutely nothing, yes. Be terrified. By all means, be terrified. And then take a step towards doing something about it. And trust me, the leap is no longer as big as it used to be. It’s as big as you come in and you just say what’s on your mind and just ship.
Lenny Rachitsky (01:36:45): I think a big part of this is just stop listening to this podcast and just do stuff because you actually try it, right?
Lazar Jovanovic (01:36:51): Ideally, people stop right now. They’ve heard enough. I gave them the best that I could. Just stop listening and just go.
Lenny Rachitsky (01:36:59): All right. Bye everyone. Okay. I’m just joking, but we shall wrap it up. I’m going to skip the lightning round just to keep this episode shorter. Before we wrap up, is there anything else other than just go build some stuff? Anything else you want to say? Anything else you want to leave listeners with? Otherwise, we’ll let you go.
Lazar Jovanovic (01:37:15): Yeah. Tech stack doesn’t matter anymore. It doesn’t matter. People obsess over, oh, is this written in HTML? Is this written in React? It doesn’t matter. It never mattered, but now it matters even less. The end user just wants a stellar experience. We live in a world where anybody can produce good enough. So you better start learning how to produce magic because otherwise you’re just going to end up in a crowd with millions and millions of others.
(01:37:47): But at the same time, if you don’t know what magical looks like, don’t be discouraged. Start building anything, start from good enough, and level up. The best way to level up: exposure time. Set aside more time for learning than building. Read the agent output. Learn how it’s thinking so that you know what’s possible. But then also go and get inspired. Follow good designers on X.
(01:38:15): Find tools where great designs are produced and follow their creators. There’s a tool where I’m following just the actual person that built it, because he publishes videos almost daily, 40, 50 minutes long of him designing. I want to see how a world-class designer does it. I want to see him talk to the tool. I want to see him prompt. And that’s how I learn to become better at it.
(01:38:41): So again, exposure time, just deliberately set more time aside to learning than coding because you can code fast, but you can code garbage fast as well as magic fast. It’s the same amount of time. It’s you and your input that matters. Forget about decisions on tech stack. Forget about which backend are you using, which front end are you using. That doesn’t matter. Quality, taste, design. That’s all you need to optimize for in the future that’s ahead of us.
Lenny Rachitsky (01:39:12): Well, Lazar, I think we’re going to leave a lot of minds buzzing after this conversation. You blew my mind in so many ways. What a fascinating topic and conversation. What a glimpse into the future. What an interesting point in time. I’m so curious where things will be in six months when we revisit this conversation. I really appreciate you coming on and sharing all of this. You’re awesome. Where can folks find you if they want to reach out, maybe ask some follow-up questions? And how can listeners be useful to you?
Lazar Jovanovic (01:39:39): Awesome. Yeah. So I mentioned it already. LinkedIn is probably the best place to find me. I’m very responsive there. If you want to follow me, I hope to reengage my YouTube channel a little bit more. I think I have a lot of cool tips and tricks that I want to share, to teach people how to use Lovable and just vibe code in general and level up.
(01:40:04): And on how people can be useful to me. Well, I’m very passionate about making sure that everybody experiences what I experienced the day I got my first prompt in. I envy the person who is going to try Lovable for the first time after watching this episode, because the feeling of going from a consumer to a builder is just unmatched. But in that process, there are going to be some battles to fight. I want to reduce the number of those battles and hurdles.
(01:40:34): So if you can help me in any way, message me what could have been better in that experience, especially if you just watched this and you’re like, “I’m going to do it. I was on the fence and I’m going to do it.” If something breaks, if something doesn’t connect and relate, I need to know what that is. My job is 100% to empower you to build the best work of your life.
(01:40:57): And I need to say this too, because a lot of people may be inspired, not by building with or using Lovable, but rather by building Lovable itself. Come join our team. Again, we’re hiring across so many roles. I think a lot of people should feel inspired, because I hope that the energy I bring to the table will resonate. This is how it feels working at Lovable. This is how it feels working with the best minds, the brightest minds in the world.
(01:41:28): We’re not number one by accident. It’s not a coincidence. The best people are gathering and we want you to be a part of it too. So if the energy and the conversation resonates with you, or if you heard about a problem today and you’re like, “Man, I think I can solve it,” come, join us. Help us build and shape the future of software development.
Lenny Rachitsky (01:41:52): Incredible. And what’s the site? I imagine there’s just a link on Lovable’s website to find the open roles.
Lazar Jovanovic (01:41:57): Yes.
Lenny Rachitsky (01:41:57): We’ll link folks there. Yeah, incredible. Lazar, thank you so much for being here.
Lazar Jovanovic (01:42:02): I appreciate the opportunity.
Lenny Rachitsky (01:42:03): Bye, everyone.
Narrator (01:42:06): Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.