All Episodes

The real AI revolution isn’t software. It’s farms, mines, and trucks. | Qasar Younis

Qasar Younis
March 8, 2026 1:24:24 28,190 views

Transcript

Lenny Rachitsky (00:00:00): You decided to join Twitter recently, put out your first tweet. Marc Andreessen quote tweeted it and said, “This is the best AI CEO nobody knows.”

Qasar Younis (00:00:08): Our best work is done alone and quietly. Every minute you’re writing something for public consumption, you’re not focusing your very limited time that you have on your customers and your product.

Lenny Rachitsky (00:00:16): You’re building a lot of the future that we’re going to be living in. What does the next couple years look like?

Qasar Younis (00:00:21): Us solving some of these impossible problems like cancer are directly going to be related to this AI boom. Net suffering in humanity overall should go down significantly.

Lenny Rachitsky (00:00:32): A thread that has emerged on this podcast is that AI is coming just in time to save us.

Qasar Younis (00:00:36): The real impact of AI in the next 5 to 10 years really is going to be in farming, mining, construction. These industries, they need autonomy and it couldn’t come soon enough. If you look at farmers, the average age of a farmer is in their late 50s. What does that mean in 10 years from now?

Lenny Rachitsky (00:00:53): There’s a lot of anxiety about what AI is going to do to the world.

Qasar Younis (00:00:56): The core root of fear is misunderstanding. If you at home are very anxious about AI, the best thing that you can do is spend time to understand and you will quickly see the limitations. Get to know it, then actively make the technology be used for good.

Lenny Rachitsky (00:01:15): Today, my guest is Qasar Younis, co-founder and CEO of Applied Intuition. You’ve probably never heard of Qasar or Applied Intuition. This is the most important under the radar AI company and CEO that I’ve ever come across. It’s a $15 billion company that has been growing quietly over the last decade. What they do is they add AI to vehicles like cars, tractors, planes, submarines, mining rigs, and a lot more. 18 out of the top 20 automakers are customers, as well as the biggest global construction, mining, and trucking companies. Also the Department of Defense. They’re basically Waymo or Tesla, but without the hardware.

(00:01:53): Qasar himself was born on a farm in Pakistan, grew up in Detroit, started his career as an engineer at GM, and then at Bosch. He then went on to start a couple companies before starting Applied Intuition. I love everything about this episode and I am so excited to bring it to you. Don’t forget to check out lennysproductpass.com for an incredible set of deals available exclusively to Lenny’s newsletter subscribers. Let’s get into it after a short word from our wonderful sponsors.

(00:02:19): This episode is brought to you by Omni. Many product teams today are in the process of debating how to ship AI analytics. The hard part is obvious. Having an LLM guess at SQL in production is a huge mess and just a bad idea. Omni takes a different approach. They have a semantic layer built in so that when you embed their analytics, the AI actually knows your business definitions, not just your raw tables. You can test queries, validate the reasoning, and lock down permissions before anything hits production. If you want AI analytics in your product without building the whole stack from scratch, check out omni.co/lenny for a free three-week trial. Companies like Perplexity, DBT, and BuzzFeed use Omni to ship analytics their customers can trust. That’s O-M-N-I.co/lenny.

(00:03:08): My podcast guests and I love talking about craft, and taste, and agency, and product market fit. You know what we don’t love talking about? SOC 2. That’s where Vanta comes in. Vanta helps companies of all sizes get compliant fast and stay that way with industry-leading AI, automation, and continuous monitoring. Whether you’re a startup tackling your first SOC 2, or ISO 27001, or an enterprise managing vendor risk, Vanta’s trust management platform makes it quicker, easier, and more scalable. Vanta also helps you complete security questionnaires up to five times faster so that you can win bigger deals sooner. The result? According to a recent IDC study, Vanta customers slash over $500,000 a year in costs and are three times more productive. Establishing trust isn’t optional. Vanta makes it automatic. Get $1,000 off at vanta.com/lenny.

(00:03:59): Qasar, thank you so much for being here. Welcome to the podcast.

Qasar Younis (00:04:09): Thanks for having me.

Lenny Rachitsky (00:04:10): You’re basically building a lot of the future that we’re going to be living in and people may not even realize this. And there’s two sides to this. On the one side, let me ask you this question. If things go really well, what does the next couple years look like for people with the emergence of AI with physical AI? What’s a vision of the future?

Qasar Younis (00:04:28): Let me take the broader AI point and then the more specific one on physical AI. Macro, think about this like the Industrial Revolution. So if you are sitting, let’s say in the late 1800s, we can focus on a lot of bad things that happen because of the Industrial Revolution. You have child labor and you have monopolies emerging and you have abuse of… Wars end up happening. But the present is also almost unimaginable without the values and without the kind of benefits we got from the Industrial Revolution, which is broader access to healthcare like we’ve never seen before. Access to goods, material goods, things we take for granted, like heating and cooling your home. There’s this great YouTube channel that focuses on POW letters from Germans who are seeing America in the early ’40s, and they’re writing letters back to Germany about what they’re seeing as they’re basically prisoners of war.

(00:05:34): And they’re kind of blown away that the towns that they roll by in these trains as they’re going to their POW camps are all lit up or that there’s cars everywhere. 80% of German towns in World War II did not have electricity. And that’s kind of a mind bending kind of thing because we just assume all this stuff, all this technology is equally distributed. So the positive version is these things that we, let’s say folks who are wealthy or folks who have access to technology, these things everybody has access to. The fact like simply having somebody who’s a coach to you and having that coach very specifically to you, not a generic ChatGPT that’s giving fairly generic answers, that’s a very powerful thing. I think us solving some of these impossible problems like cancer are directly going to be related to this AI boom. So I think net suffering in humanity, I think just like the Industrial Revolution overall should go down and should go down significantly.

(00:06:38): And I’m a fundamental optimist in that view that technology will bring that positivity. In physical AI specifically, again, when you have things like your own car and you have your limbs, and you have your senses, and you can drive, you take these kinds of things for granted. You jump in your car and you go to the store. For somebody who maybe is disabled or somebody who doesn’t have the money to afford a vehicle, access to mobility that’s nearly free or is free is a big deal. And that simple example of making self-driving cars free for everybody and how that would change the planet: you live in Rwanda and you are two hours from the nearest hospital, that matters in a very, very real way. And so I think a lot of, let’s say, the negativity around AI comes from people who frankly speaking are living in a very, very good existence.

And when you live on the other edge of society, yes, and I’m not like some naive person who thinks that there’s no downsides of technology, we can discuss that, but I just see there’s a lot more positive. So when you ask that question, what’s the next… Forget 3 to 5 years, what’s the next 20 years? These things that we take for granted that are bad suddenly are not there. And I think certain diseases, certain inaccessibility to basic services suddenly start going away. One last example of that is just the fact that you can message people basically for free. For people old enough to remember, this was not the norm. We came from Pakistan. We couldn’t even communicate back to Pakistan because long distance was so expensive, and so it was handwritten letters.

(00:08:27): Today, you can basically contact anybody on the planet basically for free. There’s obvious downsides for that, but there’s lots of upsides for that, which is being in touch with people that you care about and you love basically for free. And so I think AI has the ability to bring this abundance to many, many more people at a near free cost.

Lenny Rachitsky (00:08:50): On the flip side of this, as you pointed out, there’s a lot of anxiety about what AI is going to do to the world, to jobs, robots. There are these videos coming out of China with these robots with nunchucks, like the stock market is-

Qasar Younis (00:09:05): You know what? I feel the nunchuck union is up in arms with them.

Lenny Rachitsky (00:09:10): How dare they? Yeah. But it’s scary and the market’s reacting more and more to just like, “Oh, wow, these companies are maybe not going to survive long-term.” Again, being at the center of this and building a lot of the stuff that’ll get us there, how do you envision the next couple of years playing out? Are you optimistic? What keeps you optimistic? Any advice to people to help them stay calm through this period?

Qasar Younis (00:09:37): So there are two separate things: anxiety around a technical shift, and then public investors reacting to specific stocks they’ve held. We have to separate those things. So let’s talk about them separately. On the first one, the core root of fear is misunderstanding. And I think if you at home are very anxious about the impact of AI in some variant on your own job, the best thing that you can do is spend time to understand it and you will quickly see the limitations. There’s some great videos on YouTube, which are trying to get Gemini to understand what a cup is by just holding it upside down and then they do it with ChatGPT. So it’s like if the revolution is coming, the AI overlords have to first understand the top and bottom of a cup. And so you realize that you can see the video of nunchuck wielding humanoids, which are pre-programmed and that costs $15 million to do that video.

(00:10:41): Yeah, that is true. It’s not fake. I’m not implying it’s fake, but it’s also not what your brain kind of fills in the gaps. You see nunchuck robots and you just feel like, well, the gap, these are sentient beings that are at their own volition going, rather than it’s a bunch of motors that have been programmed to do a certain thing. If you really want to be impressed, you go to a car factory and we’ve been doing that for 25 years. We have very, very advanced robots moving extremely fast to build things. And why we don’t have anxiety about the car factory, but we have anxiety about the nunchuck robots is because the human being doesn’t… Like that gap, we understand the gap of a welding robot. You see, “Oh, okay, that’s a robot. It’s been programmed to make this weld.”

(00:11:26): But we don’t know the technology. We as in just as an individual human being living in the world. You don’t know how that robot was made to do that nunchuck thing. And so you substitute that with anxiety and fear. And so I would really implore you to learn more about the technology and you start seeing the edges. Now, does that take away from the most fundamental thing that you’re getting at, the string that you’re pulling at? Which is like, is society going to be fundamentally harmed and is this net bad for society? I think in any technical shift, the emergence of WhatsApp, just as an example, there are people who are damaged by that, literally companies that go away, but also humans who are damaged by the advent of that technology. And so I think as members of society and as leaders in society, we can kind of move that funnel in whichever way technology first… Remove the word AI.

AI is such an emotional word, because it’s wrapped in these things you don’t know. And so that fear then kind of deforms. So let’s just say technology. So I think it’s up to us to recognize this technology can be used for good and technology can be used for bad. And I think that’s where really the focus is. So get to know it and then actively make the technology be used for good as a participant, whether you’re a founder or all the way to an individual employee or citizen of a large company. Then on the second part of the question about public investors and stuff, I don’t have any particular research on this, but this is my guess as to what’s actually happened. Beyond being an engineer, which is my core identity, for lack of a better word, I also did an MBA at Harvard.

And so that was the first time that this, let’s say… I didn’t come from a very, very wealthy upbringing. And this was the first time, when I went to Harvard, I saw that world, that world of people having private jets and stuff. It was a really eye-opening experience for me. But the real world I was exposed to was high finance and how high finance works. And you might think, as I did from far away, that folks at hedge funds or at large public equities funds are extremely nuanced and thoughtful. And they are on whiteboards with extremely deep and maybe even theoretical math to figure out, should they buy or sell Figma? And that’s not actually how it works.

Really what these folks do, and in this specific case, I think what’s happening is they buy and sell stock. They are smart people and they do work hard. It’s not to take that away, but they don’t have the fundamental edge that you would assume somebody who sits in a skyscraper in New York has. And by the way, that’s why retail investors have become such an active and significant part of the market. So those folks have gone to AI consultants and have gone to people who are literally developers at these firms, and they’ll do something like, “Hey, why don’t you build me this app in a week?”

And then this consultancy will come back with an app which kind of looks like maybe a Figma or some other web app, and the hedge fund manager is like, “Well…” And then if the company was sitting there, they would say, “No, no, this just looks like my app, but this is actually not my app. It’s not as deep. It doesn’t have all these things. There are integrations with all these other systems.” But the public investor is like, “Yeah, but it only took a few weeks or a month to build this. It took you 500 engineers a couple of years. This AI thing could be real, and the things I’m reading on X about just vibe coding your way to replace billion dollar companies, that might be the case,” and the market immediately prices in that risk. And that’s where that selloff comes from.

(00:15:31): That doesn’t necessarily mean all of those. Just within the last 24 hours, I had a… I can’t say his name, but a very, let’s say, calibrated investor who said, “This is the time to buy because these companies are not actually going away.” And so I think those are two… Anxiety within society and the selloff are two very different things. They’re motivated by different things. They’re part of the larger AI narrative, but I wouldn’t conflate those two things. It’s not that the hedge fund investor is like, “I’m worried about society, sell ServiceNow.” It’s different than that, or at least that’s my impression.

Lenny Rachitsky (00:16:09): This is the alpha we’ve been talking about, time to buy. This is not investment advice. Well, that’s really good advice.

Qasar Younis (00:16:15): I think the real advice is to fight fear. And I feel that anxiety, especially when I go to Michigan and outside of people in the Silicon Valley bubble, it’s like just try to learn a little bit about the technology that you’re afraid of and you’ll start seeing some of the edges.

Lenny Rachitsky (00:16:32): I love your point about how self-driving cars are essentially robots. We don’t call them that, but they’re robots.

Qasar Younis (00:16:37): Absolutely.

Lenny Rachitsky (00:16:38): And you see a nunchuck-wielding robot, a self-driving car doing bad things could be very dangerous already. And so that’s a really good reframe that if you just think of it’s just another robot and it’s been really good for us.

Qasar Younis (00:16:50): And by the way, the self-driving thing, as an example, whichever way you slice the statistics that are available from self-driving companies, they’re supremely, supremely safer than human drivers. And I do believe in 20 or 30 years, not that much longer, we’ll look back and it’s kind of like how we think about child labor. Post-Industrial Revolution, that was a normal thing. You would send kids who are in middle school to go work. It happens in third world countries today. There isn’t a lot of emotion behind it. It is not considered to be exploitative because you have no choice.

(00:17:27): And I think we’ll look back in 25, 30 years, we’re like, people were just tired, under the influence, extremely stressed, going through a traumatic life experience, and then they jump into a car like that. It is crazy. And just for everyone, everyone should really emotionally think about it. Just in the United States, over 30,000 people will die in the next year from these accidents. The old Stalin line, it’s like one death is a tragedy, a million is a statistic, and we just let the statistic kind of go over our head like, “Oh, it’s 30,000 people.” But if you ever have talked to a family of somebody who went through a tragedy like a car accident, it’s unbelievable. And suddenly, all the fear of AI robots goes away and you really see that human impact and you realize actually us driving doesn’t make sense. And it’s not for any other reason than literally people die.

Lenny Rachitsky (00:18:32): I’ve become a huge… I have a Tesla and I’ve just used self-driving all the time now. Just like a few months ago it got very good and it used to be nerve-wracking and now it’s like, “Wow, this is much better than I am.”

Qasar Younis (00:18:44): And you’re not doing driving as a job. Imagine if you’re a commercial truck driver or you work in a mine or you work… There, a little bit of intelligence, a helping hand in that very dangerous task, it’s incredible. And I think there’s something about the human brain where when you bring up that reality of self-driving trucks, immediately people are like, “Well, what about the trucking jobs?” Now, needless to say, we don’t have enough people who want to do that job. So leave that fact to the side. The fact to really focus on is that people die from trucking accidents. We can’t throw out the baby with the bathwater.

And so I implore everybody who thinks about AI broadly and physical AI specifically to always recognize that your monkey brain is programmed by hundreds of thousands of years of living out in the wild and being in the cave, so that when you hear the rustle in the bush, you think it’s a snake, because that’s how our ancestors were programmed. So now when something new enters our psyche, your view isn’t, well, if mines became autonomous, wouldn’t that lose jobs? It’s like, those are awful jobs that people die in. And the best evidence is that people don’t want to work in them. That’s the best evidence. Nobody’s clamoring to go work in a mine in a remote area. And so intelligence can help make that reality much, much better.

Lenny Rachitsky (00:20:24): People are seeing AI advance in all these different ways on the software side and they see all these models being released. It’s writing 100% of people’s code now. What’s really cool about you is you see the hardware side of this, and I think one of the biggest changes to our lives will probably be robots walking around doing things for us. Do you have a sense of just how close we are to having robots around us day-to-day?

Qasar Younis (00:20:45): So I would think about this, the framing here matters, again on a spectrum. So there are robots around us like Roombas that clean your carpet while you’re sleeping. There’s a robot around you when you make a coffee; that’s an automated machine that is taking an input and doing a bunch of things based on what you need. So what you’re really talking about is how fast we can go up that spectrum to where you have a robot that can take on lots of tasks with little guidance. And the way that I would think about this is, let’s say this podcast is happening not in 2026, but 2006, and you’re asking me the same question about mobile and you say, “Well, mobile is coming.” This is, remember, pre-iPhone, which comes out in ’07; everyone has got those flip phones. So we have mobile. It’s not like a completely… So we have some robots around us already, but it’s like, okay, so when are we going to… And you asked me in 2006, “When are we going to get that Star Trek phone that can do everything?”

And I think at that time, I would say, because I don’t even know the iPhone is coming a year later, I would say, “Well, Lenny, I don’t know. Maybe it’s one to five years.” And then, not even five years later, Uber, WhatsApp, Instagram, Snapchat are all products being consumed by many, many, many millions of people. So what happens when you think about sitting in 2006, and why can’t your brain figure out that Instagram is coming? Instagram is very hard to even conceive without phones that have an app store and cameras on both sides being generally available, such that lots of people have them. And the fact that people are comfortable being on social networks. In 2006, that’s still an early thing. This is pre-Twitter, and Facebook is not that big, and MySpace is, but it’s not the same type of private kind of communication.

(00:22:37): And so the point I’m making is I think it can come pretty fast, but the way and the form factor it will come in is hard to pick, just like it’s hard to figure out Instagram’s going to happen, because the intelligence in that particular type of hardware, which will be generally available, that’s a keyword, generally available, is really going to impact the use cases. So I think the most obvious use cases that will come early are going to be use cases where you get the most amount of bang for buck. And the bang for buck is a car that drives itself, or a mining vehicle which is now intelligent. And the reason is all that, let’s say, engineering required to make this giant machine that moves dirt has already been done. It’s been done over the last 50, 60 years. So then you’re just putting a little bit of intelligence into it and leveraging everything else that the companies and people have developed.

And I’m not just talking my own book. I mean, we’re a physical AI company. I continue to believe our brain emotionally loves the humanoid concept because we’re monkeys. But actually, more pragmatically, it’s just putting intelligence into things that already exist all around us. And then once that happens, new applications will emerge, which I think we’ll start seeing in five to seven years. So let’s just move forward five to seven years and see what reality exists. And then maybe we can try to jump into the future from there. I think generally speaking, every single car company on the planet right now is working on a product that’s like a Tesla FSD product, every single car company, without exception. Many, many companies are working on versions of that that become fully autonomous with a cheap sensor suite.

(00:24:31): So the fundamental difference, just to simplify it all, the Tesla approach versus the Waymo approach, just to really keep it simple, is the Waymo approach is lots of sensors and lots of compute and maps. And the Tesla version is very few sensors, no high-fidelity maps, just generalizing here, and cheaper compute, for lack of a better word. And the Tesla version of a product, which in the industry is called an L2++ product, is going to be available everywhere because it’s literally cheaper and it doesn’t require HD maps. The Waymo product functions better in a geographically constrained area. So you fast forward five years, both of these types of technologies will be much more ubiquitous. L2++ and L4 will be much more ubiquitous, not only in the Bay Area or in parts of China, but really globally. There are companies working on this globally.

(00:25:26): So now, I don’t know if you remember, but NAV systems used to be a big deal in cars. You would pay thousands of dollars, and NAV systems were kind of the thing that everybody wanted. We’re at that moment for L2++ systems, where people are willing to pay thousands of dollars for a semi-automated vehicle. It will not be long. You’re already seeing this happen in China, where downward pricing pressure means that autonomous product, for lack of a better word, will become close to free.

(00:25:53): So now you fast-forward five to seven years and every car has some level of autonomy. So now you have to mentally live in that reality that everybody who’s buying a car, they just get FSD with it. Now you start seeing a different world because now the average person isn’t wondering whether self-driving is in the car. They use it all the time. They don’t wonder about NAV systems. And what happened with NAV systems is CarPlay emerges and Android Auto emerges, and it’s very natural. People are like, “Oh, I have my phone. I just plug it in.” And it wasn’t a big revolution, but the CarPlay and Android Auto revolution is actually huge. It brings free navigation and free applications to your car and it’s fairly ubiquitous. And so I think the next thing that happens in five to seven years is that full autonomy becomes the thing that everyone expects.

(00:26:38): And so I think all of that, and you will see a clear decrease in injuries and death because of that, because you have some intelligence helping you drive. Now, again, I’m using the consumer vehicle analogies just so people can understand it, but this is the same in construction. It’s the same in mining. It’s the same in defense. It’s in every one of these verticals. There’s these big physical machines that humans are interacting with. Teaming up with that machine is the future. The productivity unlock from just you looking at a machine, not like a sentient being, but almost like a physical agent of something you’re trying to accomplish, unlocks things that I think are very hard to think about. So I love things like Moltbook and I love, let’s say, the OpenClaw revolution that’s happening, for lack of a better word, but I think the big impact, that’s still such a small part of society.

My barometer of impact is like you go to the Detroit airport and you sit at a gate and you look around and you’re like, how many people here are using OpenClaw? And the answer is, you might be the only person who knows what that is. Whereas everybody else, they’re living their lives there. And so to them, actually the impact of AI is going to be in this physical world. I see you also have a… Yeah, there you go.

Lenny Rachitsky (00:28:00): Check it out. I’m a convert, OpenClaw.

Qasar Younis (00:28:03): There you go. Perfect. Perfect.

Lenny Rachitsky (00:28:04): Peter’s coming on the pod soon, so I got some [inaudible 00:28:07].

Qasar Younis (00:28:07): Yeah. So I think the real impact of AI in the next 5 to 10 years really is going to be in farming, in mining, in construction and self-driving trucks. That’s where you’re going to have a real impact. Though I think… I mean, I love this stuff that’s happening on these platforms, but it’s still segregated to frankly developers and a very, very small part of society.

Lenny Rachitsky (00:28:33): I wasn’t planning to spend so much time here, but this is extremely interesting. And I think it’s important for people to hear from folks like you about where things are heading, because as I said, everyone’s just like, “What is happening? What is going to be my future?” The jobs piece is really interesting. And a thread that has emerged on this podcast recently is that people are afraid AI will take their jobs, but in reality, AI is coming just in time to save us because populations are declining, people are aging, and we need something to help us there. I know this is something you… And this is something Marc talked about and you’re really close with him. Help us feel better about just how AI isn’t going to take our jobs and actually going to save us.

Qasar Younis (00:29:13): Yeah. I think honestly speaking, these industries need autonomy and it couldn’t come soon enough, frankly speaking. It’s not like people are fighting for those trucking jobs. If you look at farmers, the average age of a farmer is in their late 50s, 58 or so. What does that mean 10 years from now? That means many of those farmers are going to be retiring if they’re not already retired, and in 20 years we have an even bigger problem. Every vertical, by the way, is like that. And my hypothesis here, sometimes people say McDonald’s can’t hire or the mine and local quarry can’t hire, and where are all the people? The people are still here. I think the trade-off is just not worth it anymore. In the 1980s and the 1990s, doing the long-haul trucking job was worth what the family had to sacrifice, the father not being there for days and weeks on end.

(00:30:15): And today, that same working class family can make that decision and say, “You know what? I will drive for Uber or DoorDash and I’m willing to do that because I can turn that app off and pick up my kid and I prioritize that.” That is where I think this kind of intelligence kind of revolution in the real world is really, I think, is going to fill those gaps in rather than an entire industry is suddenly gone and it’s just automated.

(00:30:44): I don’t believe that future, mainly because the reality of actually replacing an entire industry with robots is still too complex. One day it will happen, but it’s not happening anytime soon. But the entire society will be different by that point. And I think, again, use the Industrial Revolution as a good version of that. The earlier question, if I’m somebody who is not in the AI ecosystem and I have this anxiety, how would I deal with it? Reading history books is a great way to really understand how society deals with this. And there’s a lot of literature, because the Industrial Revolution doesn’t happen like the dawn of Christianity, where not many people are writing and not many people are reading. Lots of people are writing, lots of people are reading in the last 150 years. And you can read both the people who are impacted by the Industrial Revolution and the people who are benefiting from it. And writ large, it’s a very positive experience.

And that doesn’t mean… Again, there are downsides. We should mitigate the downsides. But the thing that we can’t do, and this is maybe specifically as America or society, as the global population as a whole, there’s this impetus to say, “We got to pump the brakes on…” Again, don’t say AI, say technology, pump the brakes on technology. The issue then is the American economy really ends up stuttering, and that impacts the lowest end of the labor market way more than anybody else. And so in the attempt to help the people who are the most marginalized, we actually hurt them the most. And the statistics between Europe and America are pretty explicit. In the last decade, the American economy has grown at a much higher pace. And that growth hasn’t come from Detroit, Michigan. That growth has come from Mountain View and it’s come from Sunnyvale. It’s come from the Bay Area, which is another way of saying it’s because of new frontier technologies.

(00:32:49): So putting brakes on frontier technologies because we’re afraid of unintended consequences will actually have real intended consequences on the people we’re trying to help the most. And the reality is very, very fundamental: a future that does not take care of the average worker and the average person in America will have much bigger problems. So we need a solution that takes that into account, but that solution isn’t just pump the brakes, AI is bad, or frontier technology is bad, or technology’s bad, or whatever thing that you don’t like. I think that’ll have really, really bad consequences.

Lenny Rachitsky (00:33:27): One of the reasons that we don’t pump the brakes is just fear of China and competition with China, the nunchuck robots being a recent example of like, “Oh shit.” I know you have a contrarian take on just how much of a threat China is and how they’re approaching things.

Qasar Younis (00:33:43): The summary version of this is: we recently read, as a company, this book, House of Huawei, which is just a really interesting book. And Huawei is a really amazing company for the reason that it makes great technology. But of the couple hundred thousand people that work at Huawei, about a quarter are members of the Communist Party. And Huawei’s goal is not to grow profits or shareholder value. It’s a private company, but it’s really an extension of the state. Literally, the name Huawei means China’s ambition. So imagine if you had a company called MAGA, and half the company, or a quarter of the company, was a certain political party, and they said, “Our goal isn’t to make profits or to… Our goal is just the expansion of…” It’s not even a company anymore. It’s something else. And so I think we incorrectly, when we speak specifically of Americans, we think about China and we impart our understanding of markets and companies onto China.

(00:34:46): So we think Huawei, since they make phones, must be just like Apple. It’s like, no, no, no, no. Actually, that’s not like Apple at all. So the first thing I would implore everybody who thinks about China, especially with anxiety in America: you’re not comparing companies to companies. This is not apples to apples. This is very, very different. So imagine, instead of thinking OpenAI is competing against DeepSeek, you say OpenAI is competing against the Chinese government. Instead of Apple competing against Huawei, Apple is competing against the Chinese government. And you can even remove the word Chinese. Government is the best word to define what this organization is, but it’s not a for-profit, privately-owned, independent group of people who are working on projects together to build products for the market. So that’s the first very important thing. You cannot treat China like another America, or another Europe, or another whatever.

(00:35:37): Number two is, if your goal isn’t to make profits, you can do incredible research and it can be extremely compelling. But like we’ve seen, if the system is not sustainable, that’s also not a company. Let me give a very stark example of that. Chinese EVs are really lauded as being this exceptionally interesting product, and you constantly get this stream of, I would say, fairly shallow analysis which says, look how good China is, look how bad Munich, Detroit, Tokyo, Seoul, or the other epicenters for automotive globally are. There is a Chinese-EV-like company in America. It’s called Rivian. It makes great products, but it loses a lot of money making those products. And therefore, the company is not very highly valued. If you listed the top 50 or top 100 companies in the Bay Area, I’m not sure Rivian would even make that list.

(00:36:34): And it’s not that the products are bad or the people at Rivian are incompetent or not working hard. It’s just that the EV business in automotive is a tough business. So how can we hold these realities? We say, look how amazing these Chinese EV companies are and look how bad the home team is. It’s just because the home team is being assessed for being a business. It has to make profits. And because it doesn’t, it gets hammered by public investors. The other thing is not even a company. Now, if we did it apples to apples, and said America just has to build great EVs, meaning Tesla and everybody else combined, and we don’t care about profits, I think America would field some very good products, and there would be wow products. So the comparisons are really, really off, and I think that creates a misunderstanding.

(00:37:21): I think then maybe the most philosophical question, can China succeed and does that mean America has to fail or vice versa? If you believe in open and free markets, you believe everybody can succeed in those markets. And that’s been proven for over 100 years. And I think what we’re experiencing right now is how does China play in that ecosystem? Because I said open and free markets and those are not open and free markets. But that doesn’t necessarily mean that you have to have an antagonistic relationship. It certainly doesn’t mean that China’s incompetent, and it certainly doesn’t mean that it doesn’t warrant our attention and our kind of, let’s say, focus, but it’s also not a one-to-one comparison. I think we should be very careful in implying it’s a one-to-one comparison.

(00:38:06): And by the way, that five-minute explanation is never going to get to the average person sitting at an airport in Detroit, Michigan waiting for their flight. All they consume is China bad. It’s not like that. It’s not that simple. It’s way more nuanced.

Lenny Rachitsky (00:38:23): This episode is brought to you by Lovable. Not only are they the fastest growing company in history, I use it regularly and I could not recommend it more highly. If you’ve ever had an idea for an app but didn’t know where to start, Lovable is for you. Lovable lets you build working apps and websites by simply chatting with AI. Then you can customize it, add automations, and deploy it to a live domain. It’s perfect for marketers spinning up tools, product managers prototyping new ideas, and founders launching their next business. Unlike no-code tools, Lovable isn’t about static pages. It builds full apps with real functionality, and it’s fast. What used to take weeks, months, or years, you can now do over a weekend. So if you’ve been sitting on an idea, now is the time to bring it to life. Get started for free at lovable.dev. That’s lovable.dev.

(00:39:14): So you decided to join Twitter recently, put out your first tweet. Your first tweet was just like, “Hello, I’m going to start tweeting.” That tweet got two million views. Elon replied to you. Marc Andreessen quote tweeted it and said, “This is the best AI CEO nobody knows, follow for free alpha.” Elad Gil, famed investor, describes you as the most successful, most quiet company in AI. And to me, this is really interesting because most founders are told, “Build in public, build a following, be loud, get out there, talk all the time about what you’re doing.” You did the opposite: stay under the radar, stay quiet, build, build, build, and then decide later, “Okay, now it’s time to talk about our story.” So I think this counter-narrative is really interesting, and I think it will inspire a lot of founders to not feel like they have to do this. What was your philosophy of staying quiet and then starting to be public?

Qasar Younis (00:40:06): Yeah, it’s a great point. So number one, it was intentional. And if it were up to me, we would do that forever. We’re very much inspired by folks more like a Berkshire Hathaway and less like, let’s say, a Silicon Valley darling. I’ll tell you why I changed my views, but first, before some founders go and take that advice immediately without really thinking about it: I can do that because I’m known in the ecosystem. I know these folks personally, and so I don’t need to have a brand out there that’s getting Elad to remember me and think about me. In my first two companies, we were a lot less known; that was before I really came to YC. All of our company values can be reduced to these two words: radical pragmatism. So before you take the advice, make sure it applies to your situation.

(00:41:07): One of the reasons: Naval, who’s one of our investors and a friend, says fame itself is a tool, and it’s powerful. If you don’t have a network and you can get a following, that’s a fantastic way to recruit people to your company, to recruit investors to your mission, and then of course the customers. But for us, that wasn’t a hard requirement 10-plus years ago. The other thing is, the old saying about life is that you do things and then you rationalize the things that you do. So fundamentally, my co-founder Peter Ludwig and I don’t get a lot of emotional satisfaction out of doing very public things. And if I were really to play armchair psychologist and try to get to the root of why, beyond the rational view, which is focus on your customers, focus on the product.

(00:42:07): Every minute you’re doing a podcast, every minute you’re doing an X post, every minute you’re writing something for public consumption, you’re not focusing your very limited time on your customers and your product. And ultimately, that’s the only thing that’s going to produce and yield results. But the reality of the situation today in 2026 is, even for a company like ours that’s known, or somebody like me who’s known in the ecosystem, you still want to get that broader message out. And that’s what I talk a little bit about on X. So it is definitely contrarian, but it’s not just contrarian for contrarian’s sake. It plays into a little bit of our own psychology. And then, just to finish that thought: I’m an immigrant. I came to the US from Pakistan when I was a kid, have a little bit of a weird name. I grew up in Detroit, and Warren, Michigan specifically, for all those at home.

(00:43:06): And when you feel that you’re a little bit on the edge of society, or you’re not maybe in the mainstream, and this resonates with some people, not as much with everybody, you feel very skeptical of the mainstream because you’ve been on the outside for so long. And I think you can trace a bunch of founders’ psychology to this feeling of being an outcast, actually. And so then you find yourself in a situation where you’re, like, the COO of YC. And the narrative of I’m an outsider is like, I don’t know if there’s anything more inside than being the YC COO. So that reconciliation, over my career, has also had to happen, which is maybe just kind of a weird thing.

(00:43:52): And so when I talked to Marc Andreessen, who really pushed me to go online, or Elad, or whoever it is, their view is: leave your baggage and your trauma in the background and let’s think more pragmatically. And the pragmatic thing here is, whether I like to do these types of things or not, fundamentally it helps get the message out. And the message can be something very small and myopic, like what’s happening in physical AI as machines become intelligent, or much larger, which is what’s happening in society through this fundamental change that we’re going through.

(00:44:28): I’ve had the rare privilege, or the experience, of seeing the full economic spectrum. I’ve really seen the extreme ends of both sides, and truly, I really mean that. And so somebody like Marc, who’s close to our company, says, “Well, those are some ideas that are worth getting out there, beyond just promoting your company or something like that.” And that I can get behind, which is the debate and discussion about ideas and what’s happening to our society because of these technical changes. And so here I am.

Lenny Rachitsky (00:45:08): Amazing. Okay. So there’s a few threads I want to follow there. One is you were, as you said, COO at Y Combinator; you saw a lot of startups up close. This is your third startup of your own. Something I hear you talk about is that successful companies almost always show traction very early. A lot of founders hear that and are like, “No, just keep fighting, and maybe we’ll be the next Figma or Notion four years in. We’ll figure it out.” What’s your experience there, and what’s your advice to founders who aren’t seeing traction early?

Qasar Younis (00:45:38): Nuances. If I were starting another company, I’d call it Nuance. So I think what you’re saying is correct. I continue to believe that. Good companies tend to have traction fairly early and then just sustain it for a decade-plus. To the founder that’s toiling: let’s say you’re listening and you’re about two years into your company, and you’re maybe having a tough time raising money and building that first product that consumers or businesses really love, either through retention or dollars. Two years is the difficult time. The heuristic I would use is: if the information I’m getting from the market is not pointing me toward a more and more specific path, I would consider resetting. And what I mean by reset is, oftentimes, and this is wearing my YC hat, having seen hundreds and thousands of companies, it’s the co-founding team, literally the foundation upon which the house is built, that’s not correct.

(00:46:36): It’s like, imagine you built this house, and every time you put down a cup of water, it slides off the table and falls on the ground, and you keep adjusting the table. Maybe the foundation is actually wrong. The whole house is off kilter. And the foundation might not only be your co-founders. It could be the market that you’re in. It could be the phase of life that you’re in and the amount of effort that you’re willing to put into that thing in order to make it successful. There are a bunch of reasons that a company can fail, and you have to be able to somehow say, “I don’t know what the reason is. I’m just going to have to have a hard reset here.” One thing I would tell founders, and Applied is creating a founder class in itself, people who’ve worked at Applied Intuition are now starting their own companies.

(00:47:16): We have a thousand-plus engineers, and over time they’re starting their own firms. And what I say to all of them is: the first time you do a startup, just imagine the first three years are a zero. Rid yourself of the expectation that it’s going to be successful, and remember that you’re a craftsperson. If this were a woodworking podcast and you said the first table that you built was wobbly, you wouldn’t say, “Well, go work at Crate & Barrel.” You’d say, “Oh, that’s the first table. We’re going to keep at it.”

(00:47:53): Being a founder is its own muscle, and you want to exercise that muscle. But a lot of founders, especially early in their founding career, put such incredible pressure on themselves to make it great out of the gate that they actually miss the thing you’re getting in that first round, which is learning and building that muscle. And then the second, third time. I think it’s not random that my third company is the most successful company. You see that more often than not. There are funds which are almost exclusively focused on multi-time founders for this reason.

Lenny Rachitsky (00:48:26): What I love about that advice is often the best ideas come from when you have low expectations, you’re just playing around, you’re just tinkering, you’re not like, I’m going to build the next great, I don’t know, Google, it’s just you having fun. And that’s how I found this world that I’m in right now, this path. And OpenClaw is a good example of that.

Qasar Younis (00:48:46): I think the reason that advice is so difficult is, if you hear this and you’re in the proverbial war, you’re like, “What the hell are these people talking about, having fun?” This is hard. And so you have to hold these contrasting, conflicting views in your head, which is: it’s deeply, very, very important and you should give it your all, and it’s also not that important. And that’s a really hard thing to reconcile and keep in balance.

Lenny Rachitsky (00:49:16): And the way that you approached this company where you stayed quiet, that I think helps a lot where you’re not-

Qasar Younis (00:49:21): Absolutely. Even at YC when I became COO, I told Sam Altman, who was the president, “Let’s not announce this for a year, because if the partners don’t want me to be COO, it’s not a successful thing. That way I don’t have the pressure of the public scrutiny of, why were you COO only for six months, or something like that.” And I think you have to be very honest with yourself as a founder and as a human being that those things matter. What people think about you matters, and it impacts you. And having this spotlight on you. I always say it’s very easy to pivot before you raise money and before you have employees. Nobody cares. The moment you raise money, and more importantly, the moment you hire employees, employees joined a very specific mission. And you go and you walk into the office and there’s 10 of them, and you say, “Guys, turns out this is wrong. We’re going on a different mission.”

(00:50:16): Imagine if this were war. It’s like, “What the hell? We’re attacking that hill, and now we just say that hill’s not important.” How do you know the next hill is important? And you as a leader lose a lot of credibility. And it’s not only the superficialness of being a credible leader, it’s the practical nature of: when you’re very, very public, the startup becomes your identity, and then suddenly you’re having to reconcile that actually that thing is not correct. So we have these core values in the company, and early on, I used to have this line which says, “Our best work is done alone and quietly.” And I deeply believe that. So founders, I would think of it that way, but for pragmatic reasons. It’s not just because it’s cool to be under the radar. It just allows you to maybe work in a bit more peace.

Lenny Rachitsky (00:51:09): I love these core values you’ve shared so far. The last one, the best work is done alone and quietly. I’m so on board with that. Radical pragmatism is the other one you shared earlier. Are there a couple more there? These are gems.

Qasar Younis (00:51:20): Yeah. Those are, I would say, the meta values. We have very specific, let’s say, operating principles. And this is as real and tactical as advice I can give to founders: come up with your values when you’re getting a little bit of traction. And the reason I say that is it’s early enough where you… And the way you come up with the values is not like, what values should we have, as philosophers? No, no. You should figure out why you are being successful. Literally write down the 5 to 10 things that are the reasons you are being successful, and those become your values. And so we did that. Our first one was speed above everything. And it was us being fast. The second one is never disappoint the customer. Technical mastery, high output matters. All the way down to ones that are not obvious, like laugh a lot.

(00:52:10): That’s been a core value from the beginning of the company’s history. When you’re working on intense things, if you don’t have the ability to keep grounded and have perspective… Laughter and humor is also a way to give subtle feedback with a slightly different taste than “this sucks.” You can say, “It’s not the best.” And that is slightly… And so you’re really creating the framework in which people are learning how to behave with each other within the company. And so today the values really serve us as guiding principles. Peter and I do new team meetings every week, where we meet all the new team members, and we’re almost always just talking about the values in some level of detail and depth. Yeah.

(00:52:51): Another value, half the work is follow-up. Just taking notes and following up. That is the business. It’s not more complex than that.

Lenny Rachitsky (00:52:59): Laugh a lot is my new favorite company value.

Qasar Younis (00:53:03): Yeah.

Lenny Rachitsky (00:53:03): Sounds like a wonderful place to work. And then on the last piece there, there’s this book that just came out by Stripe Press about maintenance and how valuable and underappreciated the maintenance part of work is.

Qasar Younis (00:53:13): Absolutely. Absolutely. Yeah. I think if there’s a takeaway you get from, let’s say, a bit of my philosophy on where we started the conversation around why… Being promotional has all these negative connotations to it, so I’m careful using that word, but why not be promotional? It’s because there are costs to everything. And so if you can focus on the craft, and making the product really, really good, and really listening to your customers, you have a much higher likelihood of success. And then you can always go and scale from there. A part of that is the thing you’re talking about. Maintenance, or another version of… My roots are in automotive engineering, and automotive engineering is actually an exercise in quality.

(00:54:06): You’re building these very complex machines at scale. People talk about rockets being really, really complex. You only have to send up a rocket, even at the highest rate, once every couple of days. You’re making a car every 30 seconds, and you have to make it extremely cheap, and it’s globally competitive. So you really get into the nuance and minutiae of how a factory runs. And a factory is about safety and maintenance. There are not a lot of complex things. When you break down what being operationally strong is, operationally strong is keeping an eye on a handful of things and making sure you’re doing them really, really well. And I’m one of those believers in the adage, “A man who cannot command himself is not fit to command others.” And that maintenance aspect is a part of that. If you maintain yourself and your own work, you maintain your team, you maintain the company, the products almost come out of all of that system.

(00:55:05): And I think a lot of founders don’t think about their company as a system, or almost as a machine. But I would implore you to do that, because then you really focus on the craft of the machine, building the machine, making it more hygienic, and making it more well-tuned. Just like you’ll meet people who really love cars and really obsess about the maintenance of cars. They will detail underneath the driver’s seat. As somebody who details my own cars: nobody’s going to look at that, but it’s under that same ethos of really caring a lot about the craft. And frankly speaking, since you have a limited amount of time, it’s hard to really care about X and also make sure your company’s hygienic. And there are different reasons at different points of your company that you should do different things, but that’s kind of a little bit of the ethos.

Lenny Rachitsky (00:55:56): I love how it keeps coming back to just staying quiet, just working on the thing and not talking about it. Your last point there makes me think of The Score Takes Care of Itself. Classic.

Qasar Younis (00:56:07): Yeah. Yeah. So Joe Montana is actually one of our investors. And our Series D post was titled “The valuation takes care of itself.” We very much fall into that category. And sometimes people will come to our office and they’ll say, “Oh, it’s such a clean office. You guys must have this giant cleaning staff.” And actually, we clean our office. Just like in Japanese schools, as I mentioned, I lived in Japan, the students clean their own schools. We have a cleaning zen every week, and everyone cleans the area around them. And I think it’s important that… There’s something about this ethos of also not getting so wrapped up in your own narrative of, I’m a Stanford software engineer and I do AI. It’s like, clean up your desk. So there are some basic things like that. And I don’t know what that larger philosophy is, but it is a philosophy that we kind of drive towards.

(00:57:05): And I think our claim to fame, which is kind of a crazy reality, is we’ve never spent any money we’ve ever raised in the history of the company, which almost sounds like it’s made up, since the company’s almost 10 years old with 1,000-plus engineers. We’re a functioning business without using the capital that we’ve raised. And I think it’s somehow connected to us cleaning the office. I don’t know how, but it’s like-

Lenny Rachitsky (00:57:30): You’re saving all these cleaning costs. It all makes sense.

Qasar Younis (00:57:34): We still have people clean, but our employees are also aware of their surroundings. And I think there’s a direct line between be quiet and alone, clean your desk, and well-written software. And I don’t know what that thing is, but it all falls in the same arc.

Lenny Rachitsky (00:57:52): I know you also have a no-shoes policy for that same reason, to get things going.

Qasar Younis (00:57:55): Yeah. Yeah. And it’s also influenced by Japan. I worked there and we had a similar office setup. The other way to think about this, maybe, and again, I’m just trying to impart everything I’ve known to founders, because I feel like that information is so limited and everyone’s kind of trying to make it up, frankly speaking.

Lenny Rachitsky (00:58:14): This is the alpha.

Qasar Younis (00:58:15): Yeah, yeah, yeah, exactly. I would implore you as a founder to really try to take the best of Japan, the best of Germany, the best of China, the best of Detroit, the best of Silicon Valley. And I think sometimes people take that Steve Jobs line and really deform it, where they say great artists steal. The less magnanimous version of what he’s really talking about is: be humble, learn from everything around you as a leader, and be well-rounded. I think reading should be… There’s a Charlie Munger line where he says, “I’ve never met anybody very, very successful who doesn’t read all the time.” And I very much fall into that category as well. And so if you unpack why that is, why does reading a physical book make you a better founder? To answer that question the most direct way: my ethos of reading is read old books. Don’t read anything new. Read old books, because time has filtered out a lot of the noise. So you get a lot of signal.

(00:59:25): And in your life, in the best case scenario, maybe you’ll read 1,000 books. You’re probably going to read 50 to 100 books, which is kind of crazy, for the average person. So you’re not going to read many. So don’t read low quality content. There are true pillars of human ideas out there. You consume those ideas, and then it’s up to you to interpret how those ideas reflect upon the business that you are leading or the technology that you’re developing. I absolutely believe reading a book like The Autobiography of Malcolm X will make you a better founder. And again, it’s like the whole cleaning zen all the way to clean code. It’s not directly one-to-one related.

(01:00:10): I think we always want these very simple if-then statements. But being a well-rounded founder, where you understand the society around you and the history around you, somehow makes you build a better product. And I don’t know how or why, but I think it absolutely is true. I do see a connection there. And people like Charlie Munger, who is obviously not an AI founder, also believed that. And I think there’s some pattern there.

Lenny Rachitsky (01:00:37): It’s interesting, this is the same. It’s a metaphor for LLMs. You feed them all this data, and somehow they become almost conscious. How does that happen? No one fully knows. It’s so interesting how similar you are to Marc Andreessen in your way of thinking and the way you consume content.

Qasar Younis (01:00:52): We’re both bald.

Lenny Rachitsky (01:00:54): There’s a thread here of just here’s important ingredients to being really successful.

Qasar Younis (01:00:59): Yeah. I mean, Marc, we’re fortunate enough to choose our investors, and that’s a true privilege. I didn’t have that in my first company. We spent years and didn’t raise a dollar. So I certainly appreciate it. But if I’ve ever had a mentor, Marc would fall into that category. And so I knew him before Applied, and we debated and talked a lot. And I think Marc is also like that. He really consumes content outside of this little industry that we’re in. And I think it actually makes him a better investor.

Lenny Rachitsky (01:01:34): Yeah. Well, point people to your website. You have a list of the books that you recommend and love, and it’s very long and very not what you often see. I can’t help but just ask, are there a few books that have most influenced your thinking, most influenced your life?

Qasar Younis (01:01:46): Yeah, I’ve been thoughtful about that list. And the reason I use books like The Autobiography of Malcolm X as an example is I know that’s not on the top of the list. Everyone’s going to… If I say High Output Management, classic Andy Grove, you guys know that. So it partly is-

Lenny Rachitsky (01:02:03): Yeah, the number one most mentioned book here.

Qasar Younis (01:02:05): Yeah, yeah, exactly. It’s partly the theatrics of entertaining you, but also giving you new information as a listener. The books I’m currently reading, and this is kind of a random slew of books: I’m reading the vibe coding book that came out, our whole company’s reading that, which is a new book and goes against the grain of my heuristic. But The Emperor of All Maladies, the cancer book, is a fantastic book. I’m almost done with it. It changes the way I think. You read that and it changes the way… And I think that’s the ultimate test. When a piece of material changes your existing framing on life, that’s good. In the LLM use case, this is somehow related, in the sense that diverse data makes your understanding of the world more rich and nuanced, and therefore better.

(01:03:02): But yeah, I’m always inspired to give more wacky examples rather than the obvious ones. But among the obvious ones, and I really am not being theatrical, I think Sam Walton’s book, Made in America, is an unbelievable book. It’s very, very good. He wrote it on his deathbed. My American Journey, Colin Powell’s book, is also very good. It’s not on my website, but it’s really good. I’m somebody who tries to connect some of these dots, from us being cave people to now living in Silicon Valley working as a venture-backed AI company founder. And so books like Guns, Germs, and Steel are really at the top of that list. Fantastic, fantastic book. Or Collapse, by the same author.

(01:03:50): So yeah, my point to founders is: read this stuff. You can still go to physical bookstores. Read the stuff that is both old and well-regarded and that you know nothing about. When I’m trying to find the next book to read, I remember the way I picked up SPQR, the book on Roman history. I was like, “I don’t actually know a lot about Roman history. I know the high level stuff.” So think about all the ideas in the universe, from philosophy, to history, to Jainism, to the rise of Japan as a feudal state, areas you don’t really know, and then just find the best book in that space. And I think you just start filling in the blanks. And so often that’s kind of the way that I grok the ecosystem: what don’t I know anything about? Let me find the best piece of material on that. And yeah, it works well.

Lenny Rachitsky (01:04:46): I like that. So Marc’s philosophy is this barbell strategy of only today’s news, like X, and books from 10 years ago. I love that you’re almost upside down to that, just only [inaudible 01:04:57].

Qasar Younis (01:04:57): Yeah. Marc was a heavy influence in me getting on X.

Lenny Rachitsky (01:05:02): All right, so now that’s the beginning.

Qasar Younis (01:05:07): He’s propagated that view. And the thing that I really do agree with him on is, as our company becomes a larger, more influential, and more impactful company in society, it is my responsibility as a co-founder and CEO of the company to propagate my ideas, first to our AI founder community, then to the larger technology leadership, and then to the world at large. And so that’s part of it. And in that way, Marc has really taken it far. You think about VCs: not long ago, you would never even know their names. They’re like PE guys or hedge fund managers; you can’t even think of names. There are just blobs of ominous-sounding names, Obsidian Corporation or something like that.

(01:05:55): And it’s only A16Z and a couple of other folks, John Doerr, who really created the, “Hey, I’m going to be the individual investor and I’m going to propagate a certain set of ideas.” And that’s going to create gravity within Silicon Valley to influence founders to make certain types of companies. And then of course they invest in those.

Lenny Rachitsky (01:06:15): On this thread about reading to find areas you disagree with and haven’t thought about, I know one of your approaches to management, and one that may be of value, is to encourage your leaders to listen to naysayers so as not to create this positive reinforcement cycle. Talk about why that’s so important and how you operationalize it.

Qasar Younis (01:06:35): So imagine I’m not the founder of the company and Peter’s not the co-founder. It’s just a generic company. The ideal situation for a generic company is one where you can put in lots of different competing ideas, and the culture is one where you’ll shake those ideas out. There’s no emotion in it. Whoever brings the idea, the best idea wins. So why can’t companies do that? Frankly speaking, a lot of times it’s the founders. Founders are told by popular media, and by the way human beings experience life and our tribal outlook, that you have to have this hard view, and that if everyone’s not following you, then maybe you’re a weak leader or something like that. And we just don’t believe in that philosophy. A more tactical way of saying it is that we take inputs from the environment: our customers, specifically, our employees, our competitors, our investors, and what’s happening in society writ large, and that impacts our strategy.

(01:07:42): And I think it’s one of the reasons we’ve been very, very successful. We’re not so arrogant as to think we just have the answers because we had the ambition to start a company. And I think that permeates into a very specific culture. The culture we’ve built is also not contrarian for contrarianism’s sake. One view I do have is that emotions are generally not helpful in making rational decisions; they’re almost the opposite. And sometimes passion and leadership are supposed to be magnanimous or emotional. We just don’t believe that. That’s, I think, a bit more of our Midwest roots showing for Peter and me. So it’s not that we disagree with everything in the room. What we specifically say is: speak up, speak up.

(01:08:39): Everyone has to speak up, because that one person with their one experience, because they worked at Zoox, or Waymo, or Tesla, or a Chinese company, whatever it is, that one idea they have in their head when the debate is happening about what we should do in space, literally the space of space, something we maybe don’t know much about. That one person’s one idea, they have to feel comfortable sharing it. Even if they’re the most junior person, or they feel they didn’t get their way in the last debate, or whatever anxiety they might have, they have to share that opinion: guys, this is actually the right idea, or this is the wrong idea. And if you can create that environment, the best idea wins. Gandhi has this line: truth is what stands the test of time.

(01:09:27): And I think “truth seeking” has become a little bit of a meme in the Bay Area as a culture, but it kind of is like that. We’re trying to find the best idea. Maybe truth is the wrong word; maybe it’s the best idea. Find the best idea and then go full bore behind the best idea. Let’s use a counterfactual: why do companies fail when they have great talent and seemingly all the same components that an Applied Intuition has? It’s because maybe the best ideas are not being surfaced, and certainly maybe they’re not actually being adopted. Or, more often than not, when I think about companies that have been very successful, they have momentum going in a specific direction, and that wall of sound overwhelms any new sound that’s emerging, which is, “Hey, the market’s changing, the market’s changing.” You just can’t even hear it, because there’s all this momentum going in a particular direction.

(01:10:23): A good example, and I had front-row seats for this when I worked at Google, was the era when Facebook was emerging. And people don’t remember Google in the late ’00s and early teens. Google wasn’t just a company. It was the apex predator of Silicon Valley. Apple, the MacBook Air had just come out, and Steve Jobs was heading in the right direction, but it was nothing like Apple is today. At Amazon, AWS was still a young thing. Nvidia was teetering on bankruptcy. All these giant companies that you think of: Microsoft was run by Ballmer, Twitter was a small thing. But Google was already this larger-than-life, number one company. Everybody wanted to work at Google, and there were not many companies with that stature.

(01:11:12): And then in the periphery, this little company Facebook starts emerging. And Google, which has the best engineers on the planet and is making a billion in cash flow a month, tries to fight this little company. I remember Facebook at that time maybe had 1,000 people, and Google was 15X, 20X the size with a lot of cash flow. And why couldn’t Google fight Facebook? Because Google is not Facebook. It’s like the Confucian saying: how does a gorilla learn how to fly? By not being a gorilla. The way Google would’ve won the social media wars is by being a social media company, and it’s just fundamentally not one. And so this happens in companies all the time: you’re just going in one direction with momentum, consciously or unconsciously, because that’s where all the employees are, that’s what the culture is, that’s what they work on. And then something changes in the market and you just can’t even move there.

(01:12:08): And I think that can also happen, surprisingly, at really small companies, where founders have a view and that view is the view it’s going to be. And actually that can be just 10 degrees off from the correct path, and the whole company’s led astray. They were in the right market. They might’ve even been solving the right problem, but they were just a little off. We’re so scared of failing and so scared of losing that I will humble myself and listen to other people, and they say, “Hey, we’re five degrees off course here.” And it’s like, okay, let’s fix the course. And once that becomes your culture, it’s really hard to lose, because everybody’s not about fulfilling a preset path. They’re just about finding how to win.

Lenny Rachitsky (01:12:53): This is exactly what I wanted to ask about. Everyone listening to this is either like, “Oh yeah, we’re very open-minded. We’re absolutely going to listen to everyone’s opinions and rationally decide the right path,” when in practice…

Qasar Younis (01:13:05): Almost never happens.

Lenny Rachitsky (01:13:06): Right. Or they’re just like, “We know we’re not good at this. We’re just too nice to each other.” How do you do this at a company that isn’t good at this? Does it have to be the CEO, top down, in your experience? Does it have to be part of the culture? How do you operationalize it at a company that’s not like yours?

Qasar Younis (01:13:24): The middle way is typically the right way. And it’s hard to find the middle way because these are conflicting ideas. The guardrails, or the posts, you just set: one side is, we’re just going to go; the other side is, we’re maybe almost too unsure. And somehow, once you do have that debate, you have to then confidently walk down that path. And again, this is conflicting. I just said, be humble enough to listen to what’s going on. But then once that decision is made, you have to be decisive. In our values, that first value, the speed value, the wording is specifically “move fast, move safe.” We assess our managers on adherence to those values. Literally, we compensate and promote against those values. They’re not just abstract values. So the behavior we’re actually looking at under speed is decisiveness.

(01:14:19): So we’re setting up a system that is looking at these conflicting things. One is be open, and the other one is make decisions quickly, and you have to hold those in tension. This is why you, as the co-founder or founder of the company, get paid the big bucks. You’ve got to do that. You have to know when to bluff and when to hold them and when to fold them, as I say. So at some point, and that point sometimes comes faster than you think, you will not get any more information and you have to make a decision. So you’re walking this very, very thin line.

Lenny Rachitsky (01:14:56): Your point about emotions was extremely interesting, and I want to make sure people don’t take away the wrong lesson here. What I actually found really helpful, which I think is aligned with what you were suggesting, is taking emotions out of it. The way I’ve used this in my work is: when you have to make a hard decision, pretend nobody’s feelings would be hurt and emotions are not involved. What would you do if nobody cared, if they were like, “Totally great”? What would you do in that world? And that tells you, okay, that’s actually the right thing to do. And then it’s, okay, how do I help people feel okay about this? How do I deal with the downsides of this path?

Qasar Younis (01:15:32): Yeah, I think that’s the obvious version of it. Maybe another version of thinking about what an emotion is, let’s take another route. You, as the leader, or as the engineer who is getting a direction, already have some preset view that this is my idea. That’s an emotional construct, and it’s around ownership and the feeling of ownership. So yeah, I really fall into that category. Maybe most fundamentally, what is an emotion? An emotion is a set of reactions, a framework that’s been imparted in your brain through life experiences. And those life experiences may not have been optimized for you to make a decision in a product review. So the more you can pull that away… A good heuristic would be: the same decision being made by multiple people in the company gets the same result.

(01:16:33): So you’re removing a little bit of that filter. You can almost think of that emotion as a filter. I like to have the raw image come through, the raw decision come through, so we can consistently classify it again and again. Not to get too abstract, but I don’t know if that makes sense.

Lenny Rachitsky (01:16:50): Yeah, it makes sense. And I have one last question, but there’s an interesting trend I’ve noticed with people talking about AGI. The missing piece I’ve been hearing about more and more is that emotions are potentially what create consciousness. Michael Pollan has a new book out about consciousness, and his take is that it’s not just more intelligence; it’s actually emotions that led to consciousness.

Qasar Younis (01:17:10): I think it’s, let’s say, underestimating how complex human thought is to think that it’s just the inputs and outputs, or let’s say the association, for lack of a better word, of ideas, facts, words, letters. It’s not just associations, and creativity is a little bit of that as well. The old saying is that technical mastery is mastering the complex, and I think computers do that really well, and creativity is mastering the simple. I’m sure I’m going to eat my words on this when AI is the best artist in three years.

Lenny Rachitsky (01:17:53): That’s beautiful. That’s beautiful.

Qasar Younis (01:17:55): Yeah, yeah. And this again goes back to my philosophy: consume broad inputs, but then try to remove that filter, see things as honestly as you possibly can, and create a culture in the company that is similar and doesn’t put any weight on who the idea came from or where it came from. But then, ultimately, as a leader, you decide. And by the way, you’ve got to be right. That’s the other thing I think a lot of founders… we don’t emphasize enough. Founders love to take credit for things. It’s just human nature. Everybody does.

(01:18:33): But the reality is you have to be right. It’s not enough to just start a company. It’s not enough to have this vision of the world. You have to be right. And the evidence is whether the company is a sustainable, standalone business, because we’re talking specifically about venture-backed AI companies in Silicon Valley. I should have said this at the beginning: all of my advice is specifically for that narrow group, founders of venture-backed AI companies in the Bay Area.

Lenny Rachitsky (01:19:03): Speaking of that, last question. I’ve been wanting to get to this because it’s an interesting, spicy take that you have, and I know you have to run after this. You have this view that a lot of CEOs in Silicon Valley don’t actually have great taste. I’m excited to hear about your experience there and just what you think.

Qasar Younis (01:19:18): Yeah. I also want to be careful not to imply that I do; I fall into that group. I think it is true, for a couple of reasons, both taste in the most artistic sense and taste in running a company, in what the HR policy for point X should be. A lot of it is that I think they’re just not exposed to a lot of interesting, good things. And that’s been a theme in this whole conversation: just get more and more exposure. It’s very unfortunate when I meet somebody who, and I’m not thinking of anyone in particular, so if this is you and you’re one of my friends, I apologize, I’m not talking about you, but it’s like: you grow up in Cupertino, you go to Berkeley, and the first thing you do when you come out of school is you start a company, and then that’s all you do for 20… You’ve never even been an employee.

(01:20:14): And here’s why I think that’s so important. I spent over a decade working in truly large organizations, more than 100,000 employees, like a General Motors or a Bosch. And when you’re in the back alleys of an organization like that, the bowels of those organizations, you learn how bad it is to be an employee. The bureaucracy above you, leadership that doesn’t know what’s going on in the industry, your antiquated tools, all this stuff. Why that’s so important to experience as an individual is that when you become a leader, you’re making policies and creating culture, and you have to keep that in mind. And a bunch of us founders just never had, frankly, the fortune of being at the bottom of the totem pole. And that’s just one version of how to… It doesn’t obviously seem like consuming the photos of Bresson or Picasso or whoever it might be, but it’s something similar.

(01:21:15): There’s something similar about… you can sometimes meet founders, and maybe a good heuristic here is that there are some founders who would be good at lots and lots of things, not just being the founder of an AI company in the Bay Area. And there’s something about taste there, because really what you’re talking about is understanding humans and understanding life, and then being able to discern, with some judgment, what is good and what’s not good. That’s really what we’re talking about. And so if your life experience is very narrow, you could still be good, or have the ability to discern what’s good and what’s not good. But I think there’s something about, if you’ve backpacked for a few years around the world, I somehow believe that’s going to make you a better founder.

(01:22:02): It’s like, I don’t know how I can… There’s no peer-reviewed research that I can point to that says that. So I think that’s what I’m getting at. There is some developing of taste. Yeah.

Lenny Rachitsky (01:22:14): Well, I feel like we have helped people build their taste, feed their model with more insights and different perspectives in this conversation. I feel like we could chat for hours, but I know you got to run.

Qasar Younis (01:22:25): Yeah, I’m not sure if there are any real takeaways other than that.

Lenny Rachitsky (01:22:28): Okay, zero.

Qasar Younis (01:22:30): We really went everywhere. I’m sorry if you had a particular line of questions you wanted to go down.

Lenny Rachitsky (01:22:34): We went in all the perfect directions.

Qasar Younis (01:22:36): Okay, good. Good, good.

Lenny Rachitsky (01:22:37): Qasar, thank you so much for doing this. Thank you so much for being here. Final question: where can folks find you online? How can listeners be useful to you?

Qasar Younis (01:22:45): That’s a great question. I love to hear about books that I don’t know. So that’s always good. Some of my favorite books have been just randomly recommended to me, so I’ll take that. Of course, I consume research as well, and so if there’s something particularly novel going on, obviously all the mainstream stuff we as a company, and I as an individual, are going to consume, but we’re always looking for things that are a bit off the beaten path. And then if you have a particular opinion about our domain specifically, physical AI and how AI is going to impact mines, farms, construction sites, robotaxis, all of that stuff, I’m always interested to hear new opinions on that, or even old opinions with a different viewpoint.

(01:23:34): So yeah, you can find me online. Of course, I’m always around as well. I’m always open to that feedback.

Lenny Rachitsky (01:23:42): And you’re on Twitter now. There you go.

Qasar Younis (01:23:43): Yeah, exactly. Yeah. Follow me there. Yeah, exactly. That’s the answer, follow me there.

Lenny Rachitsky (01:23:44): That’s the call to action.

Qasar Younis (01:23:51): I’m channeling my inner Marc.

Lenny Rachitsky (01:23:53): There you go. Qasar, thank you so much for doing this and for being here.

Qasar Younis (01:23:57): Yeah, thanks for having me. It was a lot of fun.

Lenny Rachitsky (01:23:59): Bye, everyone.

(01:24:00): Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.
