JD Vance’s AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant



JD Vance’s AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant Podcast Episode Description

(0:00) The Besties intro Naval Ravikant!

(9:07) Naval reflects on his thoughtful tweets and reputation

(14:17) Unique views on parenting

(23:20) Sacks joins to talk AI: JD Vance’s speech in Paris, Techno-Optimists vs Doomers

(1:11:06) Tariffs and the US economic experiment

(1:21:15) Thomson Reuters wins first major AI copyright decision on behalf of rights holders

(1:35:35) Chamath’s dinner with Bryan Johnson, sleep hacks

(1:45:09) Tulsi Gabbard, RFK Jr. confirmed

Follow Naval:

https://x.com/naval

Follow the besties:

https://x.com/chamath

https://x.com/Jason

https://x.com/DavidSacks

https://x.com/friedberg

Follow on X:

https://x.com/theallinpod

Follow on Instagram:

https://www.instagram.com/theallinpod

Follow on TikTok:

https://www.tiktok.com/@theallinpod

Follow on LinkedIn:

https://www.linkedin.com/company/allinpod

Intro Music Credit:

https://rb.gy/tppkzl

https://x.com/yung_spielburg

Intro Video Credit:

https://x.com/TheZachEffect

Referenced in the show:

https://x.com/naval/status/1002103360646823936

https://x.com/CollinRugg/status/1889349078657716680

https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence

https://www.cnn.com/2021/06/09/politics/kamala-harris-foreign-trip/index.html

https://www.cnbc.com/2025/02/11/anduril-to-take-over-microsofts-22-billion-us-army-headset-program.html

https://x.com/JDVance/status/1889640434793910659

https://www.youtube.com/watch?v=QCNYhuISzxg

https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit

https://admin.bakerlaw.com/wp-content/uploads/2023/11/ECF-1-Complaint.pdf

https://www.youtube.com/watch?v=7xTGNNLPyMI

https://polymarket.com/event/which-trump-picks-will-be-confirmed?tid=1739471077488


JD Vance’s AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant Podcast Episode Summary

This podcast episode delves into the evolving landscape of podcasting and communication, emphasizing the role of AI in enhancing accessibility and engagement. Speaker 1 expresses a preference for platforms like AirChat and Clubhouse over traditional podcasts, citing a desire for genuine, dynamic conversations rather than repetitive interviews. The discussion highlights the potential of AI tools, such as asynchronous transcripts and translation, to democratize podcasting and make it more inclusive.

A significant portion of the episode is dedicated to parenting philosophies, referencing a recent podcast with Tim Ferriss. The conversation underscores the importance of teaching children decision-making skills and the value of understanding cause and effect, particularly in teenagers. The speakers advocate for encouraging children to develop their own plans to solve problems, fostering independence and critical thinking.

The episode also touches on personal well-being, with a notable mention of a dinner conversation with Nikesh, CEO of Palo Alto Networks, who emphasizes the critical role of sleep in maintaining health and productivity. This insight is distilled into a broader message about prioritizing rest as a foundational element of personal and professional success.

Recurring themes include the importance of authentic communication, the integration of AI in everyday life, and the holistic approach to personal development, encompassing relationships, decision-making, and self-care. The episode concludes with a reflection on the engaging nature of the conversation, highlighting the value of diverse perspectives and the potential for podcasts to facilitate meaningful dialogue.


JD Vance’s AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant Podcast Episode Transcript (Unedited)

Speaker: 0
00:00

Great job, Naval. You rocked it.

Speaker: 1
00:01

Maybe I should have said this on air, but that was literally the most fun podcast I’ve ever recorded.

Speaker: 0
00:06

Oh, that’s on air. Cut that in.

Speaker: 2
00:08

Yeah. Put it in the show. Put it in the show.

Speaker: 1
00:09

I had my theory on why you were number one, but now I have the realization.

Speaker: 0
00:13

What’s the actual reason? You know us for a long

Speaker: 3
00:14

time.

Speaker: 4
00:14

Yeah. What was your theory? What’s the reality?

Speaker: 1
00:16

My theory was that my problem with going on podcasts is usually the person I'm talking to is not that interesting. They're just asking the same questions, and they're dialing it in, and they're not that interested. It's not like we're having a peer-level actual conversation.

Speaker: 1
00:28

So that’s why I wanted to do AirChat and Clubhouse and things like that because you can actually have a conversation.

Speaker: 0
00:33

Oh, I see.

Speaker: 1
00:33

Right? And what you guys have very uniquely is four people, you know, of whom at least three are intelligent. I’m kidding.

Speaker: 0
00:43

I think you're saying that Sacks isn't here. I did. Yeah. What? Like, Sacks isn't even hearing you say that before? That is so cold. Fast.

Speaker: 1
00:51

Right? At least three are intelligent, and all of you get along, and you can have an ongoing conversation. That's a very high hit rate. Normally, in a podcast, you only get one interesting person, and now you've got three, maybe four. Right? Okay. So that to me was why I

Speaker: 0
01:06

Who are

Speaker: 3
01:08

you talking to?

Speaker: 0
01:08

Four. To his number.

Speaker: 1
01:09

We don't know. The child will remain mysterious forever. Of the four, right, the problem is, like, if you get people together to talk, two is a good conversation. Three, possibly; four is the max. That's why the dinner table at a restaurant is a four top. Right? You don't do five or six because then it splits into multiple conversations.

Speaker: 1
01:23

So you had four people who were capable of talking. Right? So you had four people who were capable of talking. Right? That I thought was a secret, but there's another secret. The other secret is you guys are having fun. You're talking over each other.

Speaker: 1
01:35

You’re making fun of each other. You’re actually having fun. So that’s why I’m saying this is the most fun podcast I’ve ever been on.

Speaker: 3
01:42

That’s Awesome.

Speaker: 1
01:42

That’s why you’ll be

Speaker: 0
01:43

so nice.

Speaker: 4
01:43

Welcome back, Naval. Thanks, man.

Speaker: 0
01:45

Welcome back. Yes. Absolutely. Yeah. He had

Speaker: 1
01:48

fun, guys. One

Speaker: 0
01:48

idiot and three smart guys. I can't even believe you'd say that about Sacks.

Speaker: 4
01:55

He’s not even here to defend himself. Sorry, David.

Speaker: 0
02:15

All right, everybody. Welcome back to the number one podcast in the world. We’re really excited today. Back again. Your sultan of science, David Freeberg. What do you got going on there, Freeberg? What’s in the background? Everybody wants to know.

Speaker: 1
02:30

I must say.

Speaker: 0
02:31

No. I’m not

Speaker: 4
02:32

Yeah, I used to play a lot of a game called SimEarth on my Macintosh LC way back in the day.

Speaker: 0
02:38

That tracks.

Speaker: 3
02:39

Yeah.

Speaker: 0
02:40

That tracks. And, of course, with us again, your chairman

Speaker: 4
02:44

What games did you play growing up, J Cal? Actually, I'm kinda curious. Did you ever play video games?

Speaker: 0
02:48

Sai, Andrea, Allison, Susan. I mean, it was like a lot of cute girls. I was out dating girls, Freeberg.

Speaker: 2
02:58

Yeah.

Speaker: 0
02:58

I was not on my Apple II playing Civilization.

Speaker: 4
03:02

Let me find one of those pictures. Woah.

Speaker: 0
03:04

Woah. Don't get me in trouble, man. Yeah. The eighties were good to me in Brooklyn.

Speaker: 2
03:08

Rejection, the video game. Yes.

Speaker: 0
03:11

You have three lives. Rejected. Rejected. It’s a numbers game, Chamath. As you know, as you well know, it is a numbers game.

Speaker: 2
03:19

Yeah.

Speaker: 4
03:19

Nick, go ahead. Pull up, pull up Rico Suave here.

Speaker: 0
03:22

Oh, no. What is this one?

Speaker: 5
03:23

Oh, instead

Speaker: 0
03:24

of playing

Speaker: 1
03:24

video, here

Speaker: 4
03:25

I am.

Speaker: 0
03:25

No. That’s the eighties. That’s fat blink video.

Speaker: 3
03:28

Here I am. No. That's in the eighties. That's Fat J Cal. That's Fat J Cal.

Speaker: 2
03:29

That's Fat J Cal.

Speaker: 0
03:30

Nick, how about he’s out your uncle.

Speaker: 4
03:32

Yeah. Here he is, that’s slang.

Speaker: 0
03:34

How about your uncle with the thin J Cal? That's correct. And weightlifting. Both of us.

Speaker: 2
03:44

Beef jerky.

Speaker: 4
03:46

Ladies and gentlemen.

Speaker: 0
03:46

Go find my Leonardo DiCaprio picture, please, and replace my fat J Cal picture with that. Thank you. Oh, god. I was fat. Man, plus forty pounds is a lot heavier

Speaker: 1
03:57

than I am.

Speaker: 4
03:57

It’s no joke.

Speaker: 2
03:58

No joke.

Speaker: 0
03:59

40 pounds is a lot. No joke. There's so many great emo photos of me.

Speaker: 2
04:03

I’m proud

Speaker: 0
04:04

of you. Thank you, my man. Thank you,

Speaker: 2
04:06

my man.

Speaker: 0
04:07

If you want a good photo Can

Speaker: 2
04:08

you get through the intros, please, so we can start? Come on quick.

Speaker: 0
04:11

How you doing, brother? How you doing, chairman dictator? Good.

Speaker: 3
04:14

Good. Good. Good.

Speaker: 0
04:14

Good. Good. Good. Good. Good. Good.

Speaker: 3
04:16

Good. Good. Good. Good. Good. Good. Good.

Speaker: 1
04:16

Good. Good.

Speaker: 0
04:16

Good. Good. Good. Good. Of AngelList, the zen-like mage of the early stage. He has such a way with words. He's the Socrates of nerds. Please welcome my guy, Namaste Naval. How you doing? The intros are back.

Speaker: 1
04:39

That is the best intro I've ever gotten. I didn't think you could do that. That was amazing. That's your superpower right there. Lock it in. Quit venture capital. Just do that.

Speaker: 0
04:48

Absolutely. That’s actually you know what? Interestingly, that’s

Speaker: 1
04:51

the number one podcast in the world, like someone said.

Speaker: 0
04:54

I mean, that's what I'm manifesting. It's getting close. We've been in the top 10. So, I mean, the weekends are good for All-In.

Speaker: 1
05:00

This this one will hit number one. This one will go viral.

Speaker: 0
05:02

Think it could. If you have some really great pithy insights, we might go right to the top.

Speaker: 3
05:08

Top. If you have a new audience

Speaker: 1
05:08

I just gotta do a Speak Ai, and it’ll go viral.

Speaker: 0
05:11

Oh, no. No. Oh, no. Are you gonna send us your heart?

Speaker: 1
05:15

My heart goes out to you.

Speaker: 0
05:16

I my heart I I end here at the heart. I don’t send it out.

Speaker: 3
05:21

I keep

Speaker: 0
05:22

it right here. I put both hands on the heart and

Speaker: 2
05:24

I hold it

Speaker: 0
05:25

nice and steady. I hold it in. It's sending out to you, but just not explicitly. Alright. For those of you who don't know, Naval was an entrepreneur. He kicked a bit of ass. He got his ass kicked, and then he started Venture Hacks. And he started emailing folks and saying, you know, twenty years ago, maybe fifteen, here are some deals in Silicon Valley. He went around.

Speaker: 0
05:49

He started writing $50K, $100K checks. He hit a bunch of home runs, and he turned Venture Hacks into AngelList. And then he has invested in a ton of great startups. Maybe give us some of the greatest hits there, Naval.

Speaker: 1
06:03

Yeah. Twitter, Uber, Notion, bunch of others, Postmates, Udemy, a lot of unicorns,

Speaker: 4
06:10

bunch more coming.

Speaker: 1
06:11

I don't know. It's actually a lot of deals at this point, but, honestly, I'm not necessarily proud of being an investor. Investor to me is a side job. It's a hobby. So I do startups.

Speaker: 2
06:21

How how do you define yourself?

Speaker: 1
06:24

I don't. I mean, I guess these days, I would say more like building things. You know, every so-called career is an evolution. Right? Yeah. And all of you guys are independent, and you kinda do what you're most interested in. Right? That's the point of making money. So you can just do what you want. So these days, I'm really into building and crafting products. So I built one recently called AirChat. It kinda didn't work.

Speaker: 1
06:46

I'm still proud of what I built and got to work with an incredible team. And now I'm building a new product. This time, I'm going into hardware. Oh. And I'm just building something that I really want. It's not quite a product yet. Yourself, Naval? Partially. I bring investors along. Last time, they got their money back.

Speaker: 1
07:03

Previous times, they’ve made money. Next time, hopefully, they’ll make a lot of money. It’s good to bring your friends along.

Speaker: 2
07:09

I'll be honest. I love that you said, I love the product, but it didn't work. Not enough people say that.

Speaker: 1
07:14

Yeah. No. I I built a product that I loved, that I was proud of, but it didn’t catch fire. And it was a social product, so it had to catch fire for it to work. So I found the team great homes. They all got paid. The investors that I brought in got their money back. And I learned a ton which I’m leveraging into the new thing. But the new thing is much harder.

Speaker: 1
07:32

The new thing is hardware and software and

Speaker: 2
07:34

Now what did you what did you learn building in 2024 and 2025 that you didn’t know maybe before then?

Speaker: 1
07:40

The main thing was actually just the craft of pixel-by-pixel designing a software product and launching it. I guess the main thing I took away that was a learning was that I really enjoyed building products and that I wanted to build something even harder and something even more real.

Speaker: 1
08:00

And I think, like, a lot of us, I'm inspired by Elon and, you know, all the incredible work he's done. So I don't wanna build things that are easy. I wanna build things that are hard and interesting. And I wanna take on more technical risk and less market risk. This is the classic VC learning.

Speaker: 1
08:14

Right? Which is you wanna build something that, if people get it, if you can deliver it, you know people will want it. And it's just hard to build, as opposed to you build it and you don't know if they want it. So that's a learning.

Speaker: 0
08:27

AirChat was a lot of fun. For those of you who don’t know, it was kind of like a social media network where you could ask a question and then people could respond, and it was like an audio based Twitter. Would you say that was the the best way to describe it?

Speaker: 1
08:40

Audio Twitter, asynchronous AI transcripts, and all kinds of AI to make it easier for you, translation. Really good way to try to make podcasting-type conversations more accessible to everybody. Because, honestly, one of the reasons I don't go on podcasts, I don't like being intermediated, so to speak. Right?

Speaker: 1
08:59

Where you sit there and someone interviews you and then you go back and forth and you go through the same old things. I just wanna talk to people. I want real relationships, kinda like you guys have running here.

Speaker: 2
09:07

Naval, what happened when you went through that phase, there was a period where it just seemed like something had gone on in your life and you just knew the answers. You were just so grounded. It’s not to say that you’re not grounded now, but you’re you’re less active posting and writing.

Speaker: 2
09:23

But there was this period where I think all of us were like, alright, what does Naval think?

Speaker: 1
09:27

Oh, really? Oh, okay. That’s news to me.

Speaker: 2
09:30

I would say it would be like the late teens, the early twenties. Jason, you can correct me if I’m getting the dates wrong, but it’s in that moment where like these Navalisms and this sort of philosophy really started to I think people had a tremendous respect for how you were thinking about things.

Speaker: 2
09:44

I'm just curious, like... what, were you going through something in that moment? Or, like

Speaker: 1
09:48

Oh, yeah. Yeah. Yeah. Yeah. That's right. No. Very insightful. Yeah. So I've been on Twitter since 2007 because I was an early investor, but I never really tweeted. I didn't get featured. I had no audience. I was just doing the usual techie guy thing talking to each other. And then I started AngelList in 2010. The original thing about matching investors to startups didn't scale.

Speaker: 1
10:08

And it was just an email list that exploded early on, but then just didn't scale, so we didn't have a business. And I was trying to figure out the business, and at the same time, I got a letter from the Securities and Exchange Commission saying, oh, you're acting as a broker-dealer.

Speaker: 1
10:22

And I’m like, well, I’m not making any money. I’m not I’m just making intros. I’m not taking anything. It’s just a public service. But even then, they were coming after me. So I wasn’t and I’d raised a bunch of money from investors.

Speaker: 1
10:32

So I was in a very high-stress period of my life. Now looking back, it's almost comical that I was stressed over it. But at the time, it all felt very real. The weight of everything was on my shoulders, expectations, people, money, regulators. And I eventually went to DC and got the law changed to legalize what we do, which ironically enabled a whole bunch of other things like ICOs and incubators and so on, demo days.

Speaker: 1
10:54

But in that process, I was in a very high-stress period of my life, and I just started tweeting whatever I was going through, whatever realizations I was having. It's only in stress that you sort of are forced to grow. And so whatever internal growth I was going through, I just started tweeting it, not thinking much of it.

Speaker: 1
11:10

And it was a mix of... there are three things that I kind of always keep running through. One is I love science. You know, I'm an amateur, love physics. Let's just leave it at that. I love reading a lot of philosophy and thinking deeply about it. And I like making money. Right? Truth, love, and money. That's my joke on my Twitter bio. Those are the three things that I keep coming back to.

Speaker: 1
11:33

And so I just started tweeting about all of them. And I think before that, the expectation was that someone like me should just be talking about money. Stay in your lane. And people have been playing it very safe. And so I think the combination of the three sort of caught people’s attentions because every person thinks about everything. We don’t just stay in our lane in real life.

Speaker: 1
11:54

We’re dealing with our relationships. We’re dealing with our relationship with the universe. We’re dealing with what we know to be true and, you know, with science and how we make decisions and how we figure things out. And we’re also dealing with the practical everyday material things of how to deal with our spouses or girlfriends or wives or husbands and how to make money and how to deal with our children.

Speaker: 1
12:14

So I’m just tweeting about everything. I just got interested in everything. I’m tweeting about it, and and a lot of it, my best stuff was just notes to self. It’s like, hey. Don’t forget this.

Speaker: 2
12:22

Don't forget that. Remember that one? How to Get Rich. That was like one of the first threads,

Speaker: 0
12:27

and that was super, super viral. Man. Super banger.

Speaker: 2
12:30

Yeah.

Speaker: 0
12:30

Yeah. Yeah.

Speaker: 1
12:31

I think that is still the most viral thread ever on Twitter. I like timeless things. I like philosophy. I like things that still apply in the future. I like compound interest, if you will, in ideas. Obviously, recently, X has become so addictive that we're all checking it every day. And Elon's built the perfect For You feed.

Speaker: 1
12:51

He’s built TikTok for nerds, and we’re all on it. But normally, I try to ignore the news. Obviously, last year, things got real. We all had to pay a lot of attention to the news. But I just like to tweet timeless things. I I don’t know. I mean, people pay attention.

Speaker: 1
13:04

Sometimes they like what I write. Sometimes they go nonlinear on me. But, yeah, the How to Get Rich tweetstorm was a big one.

Speaker: 2
13:10

Is it problematic when people now meet you? Because the hype versus the reality, there's, like, it's discordant now, because people, if they absorb this content, they expect to see some crazy levitating. Yeah. Floating in the air.

Speaker: 0
13:24

You know what I mean? Yes.

Speaker: 1
13:25

Yeah. Like many of you, I have stopped drinking, but I used to, like, have the occasional glass of wine. And there was a moment there where I went and met with an Information reporter back when I used to meet the reporters. And she said, where are we gonna meet? So I said, oh, let's meet at the wine merchant, and we'll have a glass of wine.

Speaker: 1
13:41

She’s like, what you drink? Like, it was like a big deal for

Speaker: 3
13:44

I don’t know.

Speaker: 2
13:45

I’m so disappointed.

Speaker: 1
13:47

I was like, I’m an entrepreneur. Most of them are alcoholics or psychedelics or

Speaker: 0
13:51

Yeah. For

Speaker: 1
13:52

sure. Whatever it takes to medicate. Yeah. Yeah. In a

Speaker: 4
13:55

hot tub. Yeah. Right.

Speaker: 1
13:57

Yeah. When they say I'm on therapy, you know what that's code for? Yeah. So, yes, it is highly

Speaker: 0
14:03

disappointing. Listen.

Speaker: 1
14:04

Yeah. I'm almost reminded of that line in The Matrix where that agent is about to, like, shoot one of the Matrix characters and says, only human. Right? So that's what I wanted to say to everybody, like, only human.

Speaker: 2
14:15

Yeah. Yeah. Yeah.

Speaker: 0
14:17

You did a podcast recently with Tim Ferriss on parenting. This was out there. I love this. I bought the book from this guy.

Speaker: 1
14:25

Yeah.

Speaker: 0
14:27

Just give a a brief overview of this philosophy of parenting.

Speaker: 2
14:30

Oh, I didn't listen to this after I saw this. Tell us what

Speaker: 0
14:33

is your... You're gonna love this. This spoke to me, but it was a little crazy.

Speaker: 1
14:37

Yeah. So I'm a big fan of David Deutsch. David Deutsch, I think, is basically the smartest living human. He's a scientist who pioneered, yeah, quantum computation. And he's written a couple of great books, but it's about the intersection of the greatest theories that we have today, the theories with the most reach.

Speaker: 1
14:52

And those are epistemology, the theory of knowledge, evolution, quantum physics, and computation.

Speaker: 0
14:58

This is The Beginning of Infinity guy. That's the book. Infinity is the second book. Reference. Yeah.

Speaker: 1
15:03

Correct. Yes. The Fabric of Reality is another book. I spend a fair bit of time with him, done some podcasts with him, hired and worked with people around him. And I'm just really impressed because it's like the framework that's made me smarter, I feel like. Because we're all fighting aging.

Speaker: 1
15:16

Our brains are getting slower, and we’re always trying to have better ideas. So as you age, you should have wisdom. That’s your substitute for the raw horsepower of intelligence going down. And so scientific wisdom, I take from David. Not take, but, you know, I learned from David.

Speaker: 1
15:30

And one of the things that he pioneered is called Taking Children Seriously. And it's this idea that you should take your children seriously like adults. You should always give them the same freedom that you would give an adult. If you wouldn't speak that way with your spouse, if you wouldn't force your spouse to do something, don't force a child to do something.

Speaker: 1
15:47

And it’s only through the latent threat of physical violence. Hey, I can control you, I can make you go to your room, I can take your dinner away or whatever, that you intimidate children. And it resonated with me because I grew up very, very free. My father wasn’t around when I was young.

Speaker: 1
16:05

My mother didn’t have the bandwidth to watch us all the time. She had other things to do and so I kind of was making my own decisions from an extremely young age. From the age of ai, nobody was telling me what to do and from the age of nine I was telling everybody what to do. So I’m used to that.

Speaker: 1
16:19

And I've been homeschooling my own kids, so the philosophy resonated, and I found this guy Aaron Stupple on AirChat, and he was an incredible expositor of the philosophy. He lives his life with it 99% as extreme as one can go. So his kids can eat all the ice cream they want and all the Snickers bars they want, they can play on the iPad all they want, they don't have to go to school if they don't feel like it, they dress how they wanna, they don't have to do anything they don't want to do, everything is a discussion, negotiation, explanation just like you would with a roommate or an adult living in your house.

Speaker: 1
16:50

And it’s kind of insane and extreme. But I live my own home life in that arc, in that direction. And I’m a very free person. I don’t have an office to go to. I try really not to maintain a calendar. If I can’t remember it, I don’t wanna do it.

Speaker: 1
17:05

I don’t send my kids to school. I really try not to coerce them. And so, obviously, that’s an extreme model, but I was Sorry. Sorry. Sorry.

Speaker: 2
17:14

Hold on a second. Sorry.

Speaker: 1
17:15

Yeah. Yeah.

Speaker: 2
17:16

Your kids if they if they were, like, I want Haagen Dazs and it’s 9PM, you’re, like, okay.

Speaker: 1
17:24

Two nights ago, I did this. I ordered the Haagen Dazs. It wasn't Haagen Dazs, it was a different brand. I ordered it.

Speaker: 2
17:29

I just wanna go through a couple of examples of what it's like.

Speaker: 1
17:30

We do, actually. Ice cream at 9PM, and we all need that. Sounds so good.

Speaker: 2
17:33

They’re like, dad, I want And they’re happy.

Speaker: 1
17:35

They’re happy kids.

Speaker: 2
17:36

I wanna be on my iPad. I'm playing Fortnite. Leave me alone. I'll go to sleep when I want. You're like, okay.

Speaker: 1
17:42

My oldest probably plays iPad nine hours a day.

Speaker: 2
17:45

Okay. So then your other kid pees in their pants because they’re too lazy to walk to the bathroom.

Speaker: 1
17:50

They don’t do that because they don’t, like, pee in their pants. No.

Speaker: 2
17:52

No. I understand. But I'm just saying, like, there's a spectrum of all of these things. Right? Yeah. And your point of view is a hundred percent of it is allowed, and you have no judgment.

Speaker: 1
18:01

No. That’s not where I am.

Speaker: 0
18:02

Okay. That’s

Speaker: 1
18:03

where... that's where Aaron is. My rules are a little different. My rules are they gotta do one hour of math or programming plus two hours of reading every single day. And the moment they've done that, they're free creatures. And everything else is a negotiation where I have to persuade them. It's a persuasion, I should say, not even a negotiation.

Speaker: 1
18:21

And even the hour of math and two hours of reading, really, you get fifteen to thirty minutes of math, maybe an hour if you’re lucky, and you get half an hour to two hours of reading if you’re lucky.

Speaker: 2
18:30

What do you think the long-term consequences of that are? And then also, what are the long-term consequences, let's say, on health if they're making decisions you know are just not good, like the ice cream thing at 9PM? How do you manage that in your mind?

Speaker: 1
18:45

I think whatever age you're at, whatever part you're at in life, you're still always struggling with your own habits. I think all of us, for example, still eat food and feel guilty or wanna eat something that we shouldn't be eating, and we're still always evolving our diets, and kids are the same.

Speaker: 1
18:59

So my oldest is already... he passed on the ice cream last time and he said, I wanna eat healthier, because finally I managed to get through to him and persuade him that he should be healthier. My younger kids will eat it, but they'll eat a limited amount. My middle kid will sometimes eat

Speaker: 2
19:12

some amount of calories. So if they say something, you'll enable it, but then you'll guide them. You'll be like, hey. Listen. Like, this is not the choice I would make. I don't think... but if you want it, I allow it.

Speaker: 1
19:21

Yeah. I’ll try it, but you also have to be careful, but you don’t want to intimidate them and you don’t want to be so overbearing that then they just view dad as, like, controlling.

Speaker: 3
19:29

So

Speaker: 2
19:30

find this so fascinating. And so what do you think happens to these kids at the like, I’m sure you have a vision of what they’ll be like when they’re fully formed adults. Like, what is that vision?

Speaker: 1
19:39

I I try not to. They’re gonna be who they’re gonna be. This is kinda how I grew up. I kinda did what I wanted.

Speaker: 4
19:45

I I would rather

Speaker: 1
19:47

I would rather they have agency than turn out exactly the way I want. Because agency is the hardest thing. Right? Having control over your own life, making your own decisions.

Speaker: 3
19:58

And I

Speaker: 1
19:58

don’t want them to be happy. I have a very happy household.

Speaker: 2
20:01

What is the Plato, what’s Plato’s goal? Eudaimonia? Right? Like Eudaimonia?

Speaker: 1
20:05

Yeah. The happy Yeah. Aristotle.

Speaker: 2
20:06

Or, like, the fulfillment, this concept, is that what you want for them?

Speaker: 1
20:11

I don’t really want anything for them. I just want them to be free and their best selves.

Speaker: 2
20:18

God.

Speaker: 1
20:18

I want them

Speaker: 0
20:20

Ai was worrying about details. He’s got, like, 17 kids now. I don’t

Speaker: 3
20:24

know if

Speaker: 1
20:24

you know, but Chamath just got, like, a whole bunch of things again. But

Speaker: 3
20:27

I love

Speaker: 0
20:27

this interview because the guy made a really interesting point, which was they're gonna have to make these decisions at some point. They're gonna have to learn the pros and cons, the upside, the downside to all these things, eating, like, and the quicker you get them to have agency to make these decisions for themselves with knowledge to ask questions, the more secure they will be.

Speaker: 0
20:50

I found it a fascinating discussion. I like cause and effect, especially in teenagers now that I have a teenager. It’s really good for them to learn, hey, you know, if you don’t do your homework, you have a problem and then you gotta solve that problem. How are we gonna solve that problem? So I like to present it as, what’s your plan?

Speaker: 0
21:07

Anytime they have a problem, eight-year-old kids, 15-year-old kids, I just say, what's your plan to solve this? And then I like to hear their plan, and let me know if you wanna brainstorm it. But I thought it was a very interesting, super interesting discussion.

Speaker: 1
21:20

I would say overall, my kids are very happy. The household is very happy. Everybody gets along. Everybody loves each other. Yeah. Some of them are way ahead of their peers. Nobody's behind in anything that matters. Nobody seems unhealthy in any obvious way. No one has aberrant eating habits.

Speaker: 1
21:38

I haven’t even found an really an aberrant behavior that’s out of line. So it’s all good.

Speaker: 0
21:44

Self-correcting. It's like... So I worry, I worry

Speaker: 4
21:46

a lot about this, like, iPad situation. I see my kids on an iPad, and it's almost like, unless they're doing an interactive project. If they end up watching

Speaker: 1
21:57

Says says the guy who has a video game theme Yeah.

Speaker: 4
21:59

In the background. Ai. Right? Who probably

Speaker: 1
22:01

and who probably grew up playing video games nonstop and probably spends nine hours a day on his screen just called a phone. So

Speaker: 0
22:08

Yeah. Yeah.

Speaker: 1
22:09

It’s the same thing, man.

Speaker: 4
22:10

Well, I mean, I feel like... but do they watch shows, though?

Speaker: 3
22:14

No. No.

Speaker: 1
22:15

There’s a

Speaker: 3
22:15

there’s a

Speaker: 1
22:15

there’s a hypocrisy to picking up your phone and then saying to your kid, no. You can’t use your iPad. I grew up playing video games nonstop and video games when I was older, and I was an avid gamer until just a few years ago. Well, no. I mean,

Speaker: 0
22:27

I’m not I’m not I’m

Speaker: 4
22:27

not criticizing the the iPad. I was obviously on a computer since I was four years old, so I totally get it. And I think the question for me is, like but I didn’t have the ability to play a thirty minute show and then play the next thirty minute show and the next thirty minute show and then sit there for two hours and just have a show playing the whole time.

Speaker: 4
22:46

I was Yeah. You know, interacting on the computer and doing stuff and building stuff Yeah. Which was a little different for me from a use case perspective.

Speaker: 1
22:54

We did used to control their YouTube access, although now we don't do that. The only thing I asked them is that they put on captions when they're watching YouTube, so that helps their reading. Mhmm. They learn to read that way.

Speaker: 0
23:06

Good tip. Yeah. I like that one.

Speaker: 1
23:07

I will say that one of my kids is really into YouTube. The other two are not. Like, they just got over it. And to the extent that they use YouTube, it's mostly because they're looking up videos on their favorite games. They wanna know how to be better at a game.

Speaker: 0
23:20

Alright. Let's keep moving through this docket. We have David Sacks with us here. So, David, give us your philosophy of parenting. Okay. Next item on the docket. Let's go.

Speaker: 5
23:30

So that's some real issues. Is the show not a

Speaker: 4
23:34

parenting show?

Speaker: 5
23:35

A parenting show.

Speaker: 2
23:35

Or Yeah.

Speaker: 0
23:36

I asked David, hey. What’s your parenting philosophy? He said, oh, I set up their trust four years ago. So his son, he’s good. Trust is set up.

Speaker: 2
23:44

Everything’s good. His

Speaker: 0
23:44

parent lost train

Speaker: 3
23:45

for a second.

Speaker: 2
23:46

GRAT. Check.

Speaker: 0
23:48

Crap. You're all set, guys. Let me know how it works out. Alright. Speaking of working out, we've got a vice president who isn't cuckoo for Cocoa Puffs and who actually understands what AI is. JD Vance gave a great speech. I watched it myself. He talked about AI in Paris. This was on Tuesday at the AI Action Summit, whatever that is.

Speaker: 0
24:12

And he gave a fifteen-minute banger of a speech. He talked about overregulating AI and America's intention to dominate this. And we happen to have with us, Naval, the czar, the czar of AI. So before I go into all the details about the speech, I don't wanna steal your thunder.

Speaker: 0
24:29

Sacks, this this, speech had a lot of, verbiage, a lot of ideas that I’ve heard before that maybe we’ve all talked about. Maybe tell us a little bit about how this all came together and how proud you are. I mean, gosh, having a vice president who understands AI is just it’s mind blowing. He could speak on a topic that’s topical credibly.

Speaker: 0
24:53

This was an awesome moment for America, I think.

Speaker: 5
24:55

What are you implying there, J Cal?

Speaker: 0
24:57

I’m implying you might have workshopped it with him. No. Or that he’s smart. Both of those things.

Speaker: 5
25:02

The vice president wrote the speech or at least directed all of it. So the ideas came from him. I'm not gonna take any credit whatsoever for this.

Speaker: 0
25:10

Okay. Well, it was on point. Maybe you could

Speaker: 5
25:12

talk about it.

Speaker: 0
25:12

I agree

Speaker: 5
25:12

it was on point. I think it was a very well crafted and well delivered speech.

Speaker: 0
25:16

He made four main points about the Trump administration’s approach to AI. He’s gonna ensure, this is point one, that American AI continues to be the gold standard. Fantastic check. Two, he says that the administration understands that excessive regulation could kill AI just as it’s taking off.

Speaker: 2
25:33

And he did this in front

Speaker: 0
25:34

of all the EU elites who love regulation, did it on their home court. And then he said, number three, AI must remain free from ideological bias as we’ve talked about here on this program. And then number four, the White House, he said, will, quote, maintain a pro worker growth path for AI so that it can be a potent tool for job creation in The US.

Speaker: 0
25:57

So what are your thoughts on the four major bullet points in his speech here in Paris?

Speaker: 5
26:03

Well, I think that the vice president, you knew he was gonna deliver an important speech as soon as he got up there and said that I’m here to talk about not AI safety, but AI opportunity. And to understand what a bracing statement that was and really almost like a shot across the bow, you have to understand the history and context of these events.

Speaker: 5
26:25

For the last couple of years, the last couple of these events have been exclusively focused on AI safety. The last in person event was in The UK at Bletchley Park, and the whole conference was devoted to AI safety. Similarly, the European AI regulation obviously is completely preoccupied with safety and trying to regulate away safety risks before they happen.

Speaker: 5
26:46

Similarly, you had the Biden EO, which was based around safety, and then you have just the whole media coverage around AI, which is preoccupied with all the risks from AI. So to have the vice president get up there and say right off the bat that there are other things to talk about.

Speaker: 5
27:03

In respect to AI besides safety risks, that actually there are huge opportunities there, was a breath of fresh air and, like I said, kind of a shot across the bow. And, yeah, you could almost see some of the Eurocrats. They needed their fainting couches after that.

Speaker: 0
27:20

Eurocrats.

Speaker: 5
27:22

Trudeau looks like his dog just died. So I think that was just a really important statement right off the bat to set the context for the speech, which is AI is a huge opportunity for all of us because, really, that point just has not been made enough. And it’s true. There are risks, but when you look at the media coverage and when you look at the dialogue that the regulators have had around this, they never talk about the opportunities.

Speaker: 5
27:45

It's always just around the risk. So I think that was a very important corrective. And then like you said, he went on to say that The United States has to win this AI race. We wanna be the gold standard. We wanna dominate.

Speaker: 0
27:57

That was my favorite part.

Speaker: 5
27:58

Yeah. And by the way, that language about dominating AI and winning the global race, that is in President Trump's executive order from week one. So this is very much elaborating on the official policy of this administration. And the vice president then went on to say, he specified, how we would do that. Right? We have to win some of these key building block technologies.

Speaker: 5
28:18

We wanna win in chips. We wanna win in AI models. We wanna win in applications. He said we need to build. We need to unlock energy for these companies.

Speaker: 5
28:26

And then most of all, we just need to be supportive towards them as opposed to regulating them to death. And he had a lot to say about the risk of overregulation, how often it's big companies that want regulation. He warned about regulatory capture, which our friend Bill Gurley would like.

Speaker: 5
28:43

And he said that, so basically, having less regulation can actually be more fair, can create a more level playing field for small companies as well as big companies. And then he he said to the Europeans that we want you to be partners with us. We wanna lead the world, but we want you to be our our partners and benefit from this technology that we’re gonna take the lead in creating, but you also have to be a good partner to us.

Speaker: 5
29:07

And then he specifically called out the overregulation that Europeans have been engaged in. He mentioned the Digital Services Act, which has acted as, like, a speed trap for American companies. It’s American companies who’ve been overregulated and fined by these European regulations because the truth of the matter is that it’s American technology companies that are winning the race.

Speaker: 5
29:29

And so when Europe passes these onerous regulations, they fall most of all on American companies, and he's basically saying we need you to rebalance and correct this because it's not fair and it's not smart policy, and it's not gonna help us collectively win this AI race.

Speaker: 5
29:45

And that kind of brings me to the last point. I don't think he mentioned China by name, but, clearly, he talked about adversarial countries who are using AI to control their populations, to engage in censorship and thought control. And he basically painted a picture where it's like, yeah, you could go work with them or you could work with us.

Speaker: 5
30:04

And we’re we have hundreds of years of shared history together. We believe in things like free speech, hopefully, and we want you to work with us. But if you are gonna work with us, then you have to cooperate and we have to create a reasonable regulatory regime.

Speaker: 0
30:20

Naval, did you see the, speech and your thoughts just generally on JD Vance and and having somebody like this, you know, representing us and wanting to win? Yeah. American exceptionalism.

Speaker: 1
30:31

Yeah, very impressive. I thought he was polite, optimistic, and just very forward-looking. Just it's what you would expect an entrepreneur or a smart investor to say. So I was very impressed. I think the idea that America should win, great. I think that we should not regulate, I also agree with. I'm not an AI doomer.

Speaker: 1
30:49

I don't think AI is gonna end the world. That's a separate conversation. But there's this religion that comes along in many faces, which is that, oh, climate change is gonna end the world. AI is gonna end the world. An asteroid is gonna end the world. COVID-19 is gonna end the world. And it just has a way of fixating your attention, right? It captures everybody's attention at once.

Speaker: 1
31:06

It’s a very seductive thing. And I think in the case of AI, it’s really been overplayed by incentive bias, you know, motivated reasoning by the companies who are ahead, and they wanna pull up the ladder behind them. I I think they genuinely believe it. I think they genuinely believe that there’s safety risk, but I think they’re motivated to believe in those safety risks, and then they pass that along.

Speaker: 1
31:24

But it’s kind of a weird position because they have to say, oh, it’s so dangerous that you shouldn’t just let open source go at it, and you should let just a few of us work with you on it. But it’s not so dangerous that a private company can’t own the whole thing.

Speaker: 3
31:38

Right.

Speaker: 1
31:38

Yeah. Because if it was truly the Manhattan Project, if they were building nuclear weapons, you wouldn't want one company to own that. No. Sam Altman famously said that AI will capture the light cone of all future value. In other words, like, all value ever created at the speed of light from here will be captured by AI.

Speaker: 1
31:53

So if that’s true, then I think open source AI really matters and little tech AI really matters. The problem is that the nature of training these models is highly centralized. They benefit from supercomputer clustered compute, so it’s not clear how any decentralized model can compete.

Speaker: 1
32:09

So to me, the real issue boils down to how do you push AI forward while not having just a very small number of players control the entire thing. And we thought we had that solution with the original OpenAI, which was a nonprofit. It was supposed to do it for humanity. But now, because they wanna incentivize the team and they wanna raise money, they have to privatize at least a part of it.

Speaker: 1
32:30

Although it's not clear to me why they need to privatize the whole thing, like, why do you need to buy out the nonprofit portion? You could leave a nonprofit portion and you could have the private portion for the incentives. But I think that the real challenge is how do you keep AI from naturally centralizing, because all the economics and the technology underneath are centralizing in nature.

Speaker: 1
32:50

If you really think you’re gonna create God, do you wanna put God on a leash with one entity controlling God? That to me is the real fear. Not I’m not scared of AI. I’m scared of what a very small number of people who control AI do to the rest of us for our own good because that’s how it always works.

Speaker: 0
33:06

So well said. Probably should go with the Greek model, having many gods and heroes as well. Freeberg, you heard the JD Vance speech, I assume. What are your thoughts on overregulation and, maybe to Naval's point, one person owning this versus open source?

Speaker: 4
33:23

I think that there's this kind of big division of social balance right now on what I would call techno-optimism and techno-pessimism. Generally, people sort of fall into one of those two camps. Generally speaking, techno-optimists, I would say, are folks that believe that accelerating outcomes with AI, with automation, with bioengineering, manufacturing, semiconductors, quantum computing, nuclear energy, etcetera, will usher in this era of abundance by creating leverage, which is what technology gives us.

Speaker: 4
33:57

Technology will make things cheaper and it will be deflationary and it will give everyone more. So it creates abundance. The challenge is that people who already have a lot worry more about the exposure to the downside than they desire the upside. And so, you know, the techno pessimists are generally, like the EU and large parts, frankly, of The United States are worried about the loss of x, the loss of jobs, the loss of this, the loss of that.

Speaker: 4
34:27

Whereas countries like China and India are more excited about the opportunity to create wealth, the opportunity to create leverage, the opportunity to create abundance for their people. You know, GDP per capita in your in the EU, sixty thousand dollars a year. GDP per capita in The United States, like, 82,000.

Speaker: 4
34:44

But GDP per capita in India is 2,500 and China is 12,600. There’s a greater incentive in those countries, to manifest upside than there is for The United States and the EU who are more worried about manifesting downside. And so it is a very difficult kind of social battle that’s underway.

Speaker: 4
35:04

I do think, like, over time, those governments and those countries and those social systems that embrace these technologies are gonna become more capitalist. And they’re gonna require less government control and intervention in job creation, the economy, payments to people, and so on.

Speaker: 4
35:21

And the countries that are more techno pessimistic are unfortunately gonna find themselves asking for greater government control, government intervention in markets, governments creating jobs, government making payments to people, governments effectively running the economy.

Speaker: 4
35:34

My personal view, obviously, is that I’m a very strong advocate for technology acceleration. Because I think in nearly every case in human history, when a new technology has emerged, we’ve largely found ourselves assuming that the technology works in the framework of today or of yesteryear.

Speaker: 4
35:51

The automobile came along and no one envisioned that everyone in The United States would own an automobile. And therefore, you would need to create all of these new industries like mechanics and car dealerships, roads, all the people servicing and building roads, and all the other industry that emerged.

Speaker: 0
36:07

The motels.

Speaker: 4
36:08

And it’s it’s very hard for us to sit here today and say, okay. AI is gonna destroy jobs. What’s it gonna create and be right? I think we’re very likely gonna be wrong whatever estimations we we give. The area that I think is most underestimated is large technical projects that seem technically infeasible today that AI can unlock. For example, habitation in in the oceans.

Speaker: 4
36:29

Like, it’s very difficult for us to envision, like, creating cities underwater and creating cities in the oceans or creating cities on the moon or creating cities on Mars or finding new places to live. Those are, like, technically, people might argue, oh, that sounds stupid. I don’t wanna go do that.

Speaker: 4
36:43

But at the end of the day, like, human civilization will drive us to wanna do that. But those technically are very hard to pull off today, but AI can unlock a new set of industries to enable those transitions. So I think we really get it wrong when we try and assume the technology as a transplant for last year or last century.

Speaker: 4
37:00

And then we kind of become techno pessimists because we’re worried about losing what we have.

Speaker: 0
37:03

Are you a techno-pessimist or are you an optimist? Because you bring up the downside an awful lot here on the program, but you are working every day in a very optimistic way to breed, you know, better strawberries and potatoes for folks. So you're a little bit of

Speaker: 4
37:17

a No. I have no techno pessimism whatsoever. I try and point out why the other side is acting the way they are.

Speaker: 0
37:22

Got it. Okay. Putting it in full context.

Speaker: 3
37:24

And what

Speaker: 4
37:24

I’m trying to highlight is I think that that framework is wrong. I think that that framework of trying to transplant new technology

Speaker: 3
37:30

Sure.

Speaker: 4
37:31

Onto the old way of things operating is the wrong way to think about it. And it creates this, you know, because of this manifestation about worrying about downside, it creates this fear that creates regulation like we see in the EU. And as a result, China's GDP will scale while the EU's will stagnate if that's where they go. That's my assessment or my opinion on what will happen.

Speaker: 0
37:49

And, Chamath, you wanna wrap this up for us? What are your thoughts on JD?

Speaker: 2
37:52

I'll give you two. Okay. The first is I would say, this is a really interesting moment where I would call this the tale of two vice presidents. Very early in the Biden administration, Kamala was dispatched on an equally important topic at that time, which was illegal immigration. And she went to Mexico and Guatemala.

Speaker: 2
38:12

And so you actually have a really interesting AB test here. You have both vice presidents dealing with what were in that moment, incredibly important issues. And I think that JD was focused. He was precise. He was ambitious.

Speaker: 2
38:30

And even the part of the press that was very supportive of Kamala couldn't find a lot of very positive things to say about her. And the feedback was it was meandering. She was ducking questions. She didn't answer the questions that she was asked very well. And it's so interesting because it's a bit of a microcosm of what happened over these next four years and her campaign, honestly, which... you could have taken that window of that feedback and, unfortunately for her, it just continued to be very consistent.

Speaker: 2
39:02

So that was one observation I had, because I heard him give the speech. I heard her, and I had this kind of moment where I was like, wow, two totally different people. The second is on the substance of what JD said. I said this on Tucker and I'll just simplify all of this into a very basic framework, which is if you want a country to thrive, it needs to have economic supremacy and it needs to have military supremacy.

Speaker: 2
39:30

In the absence of those two things, societies crumble. And the only thing that underpins those two things is technological supremacy. And we see this today. So on Thursday, what happened with Microsoft? They had a $24,000,000,000 contract with the United States Army to deliver some whiz-bang thing, and they realized that they couldn't deliver it. And so what did they do?

Speaker: 2
39:55

They went to Anduril. Now why did they go to Anduril? Because Anduril has the technological supremacy to actually execute. A few weeks ago, we saw some attempts at technological supremacy from the Chinese with respect to DeepSeek. So I think that this is a very simple existential battle.

Speaker: 2
40:12

Those who can harness and govern the things that are technologically superior will win and it will drive economic vibrancy and military supremacy, which then creates safe, strong societies. That’s it. So from that perspective, JD nailed it. He saw the forest from the trees.

Speaker: 2
40:34

He said exactly what I think needed to be said and put folks on notice that you’re either on the ship or you’re off the ship. And I think that that was really good.

Speaker: 0
40:43

Yeah. And there was, like, a little secondary conversation that emerged, Sacks, that I would love to engage you with if you're willing, which is this civil war, quote, unquote, between maybe MAGA 1.0, MAGA 2.0, techie converts in the MAGA party like ourselves, and maybe the core MAGA folks.

Speaker: 0
41:06

We’re gonna pull up the tweet here in JD’s own words, and he’s been engaging people in his own words. It’s very clear that he’s writing these tweets, a distinct difference between other politicians and this administration, and they just tell you what they think. Here it is. I’ll try and write something to address this in detail. This is JD Vance’s tweet. But I think this civil war is overstated.

Speaker: 0
41:29

Though, yes, there are some real divergences between the populists, I would describe that as MAGA, and the techies. But briefly, in general, I dislike substituting American labor for cheap labor. My views on immigration and offshoring flow from this. I like growth and productivity gains and this informs my view on tech and regulation when it comes to AI specifically.

Speaker: 0
41:49

The risks are, number one, overstated, to your point, Naval, or, two, difficult to avoid. One of my many real concerns, for instance, is about consumer fraud. That’s a valid reason to worry about safety. But the other problem is much worse if a peer nation is six months ahead of the US on AI.

Speaker: 0
42:08

Again, I’ll try and say more. And this is JD going right at, I think, one of the more controversial topics, Sachs, that the administration is dealing with and has dealt with when it comes to immigration and tech, because these two things are dovetailing with each other. If we lose millions of driver jobs, which we will in the next ten years, just like we lost millions of cashier jobs, well, that’s gonna impact how our nation and many of the voters look at the border and immigration.

Speaker: 0
42:38

We might not be able to let as many people immigrate here if we’re losing millions of jobs to AI and self driving cars. What are your thoughts on him engaging this directly, Sachs?

Speaker: 5
42:49

Well, the first point he’s making there is about wage pressure, right, which is when you throw open our borders or you throw open American markets to products that can be made in foreign countries by much cheaper labor that’s not held to the same standards, the same minimum wage or the same union rules or the same safety standards that American labor is and has a huge cost advantage, then you’re creating wage pressure for American workers. And he’s opposed to that.

Speaker: 5
43:13

And I think that is an important point, because I think the way that the media or neoliberals like to portray this argument is that somehow MAGA’s resistance to unlimited immigration is somehow based on xenophobia or something like that. No. It’s based on bread-and-butter kitchen-table issues, which is if you have this ridiculous open-border policy, it’s inevitably gonna create a lot of wage pressure for people at the bottom of the pyramid.

Speaker: 5
43:40

So I think JD is making that argument, but, and this is point two, he’s saying I’m not against productivity growth. So technology is good because it enables all of our workers to improve their productivity, and that should result in better wages because workers can produce more.

Speaker: 5
43:57

The value of their labor goes up if they have more tools to be productive. So there’s no contradiction there, and I think he’s explaining why there isn’t a contradiction. A point I would add, he doesn’t make this point in that tweet, but I would add, is that one of the problems that we’ve had over the last, I don’t know, thirty years is that we have had tremendous productivity growth in the US, but labor has not been able to capture it.

Speaker: 5
44:19

All that benefit has basically gone to capital or to companies, and I think a big part of the reason why is because we’ve had this largely unrestricted immigration policy. So I think if you were to tamp down on immigration, if you were to stop the illegal immigration, then labor might be able to capture more of the benefits of productivity growth, and that would be a good thing.

Speaker: 5
44:39

It’d be a more equitable distribution of the gains from productivity and from technology and that, I think, would help tamp down this growing conflict that you see between technologists and the rest of the country or certainly the heartland of the country.

Speaker: 0
44:57

Naval, this is... okay. You wanna add anything else, David? Okay.

Speaker: 5
45:00

Well, I think just the final point he makes in that tweet is that he talks about how we live in a world in which there are other countries that are competitive, and specifically he doesn’t mention China, but he says we have a peer competitor. And it’s gonna be a much worse world if they end up being six months ahead of us on AI rather than six months behind.

Speaker: 5
45:19

That is a really important point to keep in mind. I think that the whole Paris AI summit took place against the backdrop of this recognition, because just a few weeks ago we had DeepSeek, and it’s really clear that China is not a year behind us. They’re hot on our heels, only maybe months behind us.

Speaker: 5
45:36

And so if we hobble ourselves with unnecessary regulations, if we make it more difficult for our AI companies to compete, that doesn’t mean that China’s gonna follow suit and copy us. They’re gonna take advantage of that fact and they’re gonna win.

Speaker: 0
45:48

Okay, Naval. This seems to be one of the main issues of our time. Four of the five people on this podcast right now are immigrants. So we have this amazing tradition in America. This is a country built by immigrants for immigrants. Do you think that should change now in the face of job destruction, which I know you’ve been tracking, self-driving, pretty acutely? We both have an interest in it.

Speaker: 0
46:11

I think, over the years, you know, what’s the solution here if we’re gonna see a bunch of job displacement, which will happen for certain jobs? We all kinda know that. Should we shut the border and not let the next Naval, Chamath, Sachs, and Friedberg into the country?

Speaker: 1
46:28

Well, let me declare my biases upfront. I’m a first-generation immigrant. I moved here when I was nine years old. Rather, my parents did, and then I became a naturalized citizen. So, obviously, I’m in favor of some level of immigration. That said, I’m assimilated. I consider myself an American first and foremost. I bleed red, white, and blue.

Speaker: 1
46:46

I believe in the Bill of Rights and the Constitution, First and Second and Fourth and all the proper amendments. I get up there every July 4, and I deliberately defend the Second Amendment on Twitter, at which point half my followers go bananas. You know? Because I’m not supposed to. I’m supposed to be a good immigrant, right, and carry the usual set of coherent leftist policies, globalist policies.

Speaker: 1
47:10

So I think that legal, high-skill immigration with room and time for assimilation makes sense. You wanna have a brain drain of the best and brightest coming to the freest country in the world to build technology and to help civilization move forward. And, you know, as Chamath was saying, economic power and military power is downstream of technology. In fact, even culture is downstream of technology.

Speaker: 1
47:38

Look at what the birth control pill did, for example, to culture, or what the automobile did to culture, or what radio and television did to culture, and then the Internet. So technology drives everything. And if you look at wealth, wealth is a set of physical transformations that you can effect, and that’s a combination of capital and knowledge.

Speaker: 1
47:54

And the bigger input to that is knowledge. And so the US has become the home of knowledge creation thanks to bringing in the best and brightest. You could even argue DeepSeek, part of the reason why we lost that is because a bunch of those kids, they studied in the US, but then we sent them back home.

Speaker: 1
48:08

Right.

Speaker: 0
48:08

So I think you absolutely-- Is that actually accurate, that they were-- Yeah. Yeah. Yeah. Some-- A few of them. Really? Oh my god. Naval, like, exhibit A. Wow. Yeah. So I--

Speaker: 1
48:17

think you absolutely have to split skilled, assimilated immigration, which is a small set, and it has to be both. They have to both be skilled and they have to become Americans. That oath is not meaningless. Right? It has to mean something. So skilled, assimilated immigration, you have to separate that from just open borders, whoever can wander in, just come on in. That latter part makes no sense.

Speaker: 5
48:38

If the Biden administration had only been letting in people with 150 IQs, we wouldn’t have this debate right

Speaker: 1
48:44

now. Absolutely. Yeah.

Speaker: 5
48:45

The reason why we’re having this debate is because they just opened the border and let millions and millions of people in.

Speaker: 1
48:50

It was to their advantage to conflate legal and illegal immigration. So every time you would be like, well, we can’t just open the borders, they’d just say, well, what about Elon? What about this? And they would just parade-- If

Speaker: 5
49:00

they were just letting in the Elons and the Jensens and

Speaker: 0
49:04

Friedbergs.

Speaker: 5
49:05

We wouldn’t be having the same conversation today.

Speaker: 2
49:07

The correlation between open borders and wage suppression is irrefutable. We know that data. And I think that the Democrats, for whatever logic, committed an incredible error in basically undermining their core cohort. I wanna go back to what you said, because I think it’s super important. There is a new political calculus on the field, and I agree with you.

Speaker: 2
49:34

I think that the three cohorts of the future are the asset-light working and middle class, that’s cohort number one. There are probably 100 to 150 million of those folks. Then there are patriotic business owners, and then there’s leaders in innovation. Those are the three.

Speaker: 2
49:56

And I think that what MAGA gets right is they found the middle ground that intersects those three cohorts of people. And so every time you see this sort of left versus right dichotomy, it’s totally miscast. And it sounds discordant to so many of us because that’s not how any of us identify. Right?

Speaker: 2
50:15

And I think that that’s a very important observation, because the policies that we adopt will need to reflect those three cohorts. What is the common ground amongst those three? And on that point, Naval is right. There’s not a lot that those three would say is wrong with a very targeted form of extremely useful legal immigration of very, very, very smart people who agree to assimilate and be a part of America.

Speaker: 2
50:39

I mean, I’m so glad you said it the way you said it. Like, I remember growing up where my parents would try to pretend that they were in Sri Lanka, and sometimes I would get so frustrated. I’m like, if you wanna be in Sri Lanka, go back to Sri Lanka. I wanna be Canadian, because it was easier for me to make friends.

Speaker: 2
50:58

It was easier for me to have a life. I was trying my best. I wanted to be Canadian. And then when I moved to the United States twenty-five years ago, I wanted to be American. Mhmm.

Speaker: 2
51:07

And I feel that I’m American now, and I’m proud to be an American. And I think that’s what you want. You want people that embrace it. Doesn’t mean that we can’t dress up in a shalwar kameez every now and then. But the point is, like, what do you believe? And where is your loyalty?

Speaker: 0
51:22

Friedberg, we used to have this concept of a melting pot, of this assimilation, and that was a good thing. Then it became cultural appropriation. We kind of made a right turn here. Where do you stand on this recruiting the best and brightest and forcing them to assimilate, making sure that they’re down with Jason? Like, I could find that people

Speaker: 2
51:41

are gonna be here.

Speaker: 0
51:43

Yeah. Let me restate that. Fine. I reject the premise

Speaker: 5
51:46

of this whole conversation.

Speaker: 0
51:48

Wait. Wait. Hold on.

Speaker: 5
51:49

Look. I’m a first-generation American who moved here when I was five and became a citizen when I was 10. And, yes, I’m fully American, and that’s the only country I have any loyalty to. But the premise that I reject here is that somehow an AI conversation leads to an immigration conversation because millions of jobs are gonna be lost.

Speaker: 5
52:09

We don’t know that.

Speaker: 2
52:10

That’s also true. I agree with that.

Speaker: 5
52:12

A huge assumption

Speaker: 1
52:13

I completely agree

Speaker: 5
52:13

with that. It feeds into the doomerism that AI is gonna wipe out millions of jobs. That is not--

Speaker: 4
52:17

And I

Speaker: 5
52:18

think-- I think it’s gonna be great. Where’s the evidence? I think there’s gonna be more jobs created than any jobs in the industry that have been lost to AI. Let’s be real. We’ve had AI for two and a half years, and I think it’s great. But so far, it’s a better search engine, and it helps high school kids cheat on their essays. I mean, come on.

Speaker: 0
52:32

You don’t believe that self-driving is coming? Hold on a second. Sachs, you don’t believe that millions

Speaker: 1
52:38

of-- But hold on. Those driver jobs weren’t even there ten years ago. Uber came along and created all these driver jobs. DoorDash created all these driver jobs. So, okay, what technology does-- yes, technology destroys jobs, but it replaces them with opportunities that are even better.

Speaker: 1
52:52

And then either you can go capture that opportunity yourself or an entrepreneur will come along and create something that allows you to capture those opportunities. AI is a productivity tool. It increases the productivity of a worker. It allows them to do more creative work and less repetitive work.

Speaker: 1
53:06

As such, it makes them more valuable. Yes. There is some retraining involved, but not a lot. These are natural language computers. You can talk to them in plain English, and they talk back to you in plain English. But I think, David is absolutely right.

Speaker: 1
53:18

I think we will see job creation by AI that will be as fast or faster than job destruction. You saw this even with the Internet. Like, YouTube came along. Look at all these YouTube streamers and influencers. That didn’t use to be a job. New jobs, really opportunities, because job is the wrong word.

Speaker: 1
53:32

Job implies someone else has to give it to me and sort of like they’re handed out in a zero-sum game. Forget all that. It’s opportunities. After COVID, look at how many people are making money by working from home in mysterious little ways on the Internet that you can’t even quite grasp.

Speaker: 5
53:49

Here’s the way I categorize it. Okay? Whenever you have a new technology, you get productivity gains. You get some job disruption, meaning that part of your job may go away, but then you get other parts that are new and hopefully more elevated, you know, more interesting.

Speaker: 5
54:04

And then there is some job loss. I just think that the third category will follow the historical trend, which is that the first two categories are always bigger, and you end up with more net productivity and more net wealth creation. And we’ve seen no evidence to date that that’s not gonna be the case. Now it’s true that AI is about to get more powerful.

Speaker: 5
54:22

You’re gonna see a whole new wave of what are called agents this year. Agentic products are able to do more for you, but there’s no evidence yet that those things are gonna be completely unsupervised and replace people’s jobs. So, you know, I think that we have to see how this technology evolves, and I think one of the mistakes of, let’s call it, the European approach is assuming that you can predict the future with perfect accuracy, or such good accuracy that you can create regulations today that are gonna avoid all these risks in the future.

Speaker: 5
54:52

And we just don’t know enough yet to be able to do that. That’s a false level of certainty.

Speaker: 2
54:57

I agree with you. And the companies that are promulgating that view are, as Naval said, those that have an economic vested interest in at least convincing the next incremental investor that this could be true, because they want to make the claim that all the money should go to them so they can hoover up all the economic gains.

Speaker: 2
55:15

And that is the part of the cycle we’re in. So if you actually stratify these reactions, there are the small startup companies in AI that believe there’s a productivity leap to be had and that there’s gonna be prosperity, everybody on the sidelines watching, and then a few companies that have an extremely vested interest in being a gatekeeper, because they need to raise the next 30 or 40 billion dollars, trying to convince people that that’s true.

Speaker: 2
55:40

And if you view it through that lens, you’re right, Sacks. We have not accomplished anything yet that proves that this is gonna be cataclysmically bad. And if anything, right now, history would tell you it’s probably gonna be like the past, which is generally productive and accretive to society.

Speaker: 5
55:55

Yeah. And just to bring it back to JD’s speech, which is where we started, I think it was a quintessentially American speech in the sense that he said we should be optimistic about the opportunities here, which I think is basically right. And we wanna lead. We want to take advantage of this. We don’t wanna hobble it. We don’t even fully know what it’s gonna be yet.

Speaker: 5
56:18

We are gonna center workers. We wanna be pro worker. And I think that if there are downsides for workers, then we can mitigate those things in the future. But it’s too early to say that we know what the program should be. It’s more about a statement of values at this point.

Speaker: 0
56:33

Do you think it’s too early, Friedberg, given Optimus and all these robots being created, what we’re seeing in self-driving? You’ve talked about the ramp-up with Waymo. To actually say we will not see millions of jobs and millions of people get displaced from those jobs? What do you think, Friedberg?

Speaker: 0
56:52

I’m curious your thoughts because that is the counterargument.

Speaker: 4
56:55

My experience in the workplace is that AI tools that are doing things that an analyst or knowledge worker was doing with many hours in the past are allowing them to do something in minutes. That doesn’t mean that they spend the rest of the day doing nothing. What’s great for our business, and for other businesses like ours that can leverage AI tools, is that those individuals can now do more.

Speaker: 4
57:21

And so our throughput, our productivity as an organization has gone up, and we can now create more things faster. So whatever the product is that my company makes, we can now make more things more quickly. We can do more development

Speaker: 3
57:34

plus cost.

Speaker: 0
57:35

On the ground. Correct? Out of hollow?

Speaker: 4
57:36

And I’m seeing it on the ground. And I don’t think that this, like, extrapolation of how bad AI will be for jobs is the right framing as much as it is about an acceleration of productivity. And this is why I go back to the point about GDP per capita and GDP growth. Countries, societies, areas that are interested, or industries that are interested, in accelerating output, in accelerating productivity, the ability to make stuff and sell stuff, are going to rapidly embrace these tools because it allows them to do more with less.

Speaker: 4
58:06

And I think that’s what I really see on the ground. And then the second point I’ll make is the one that I mentioned earlier, and I’ll wrap up with a third point, which is I think we’re underestimating, drastically, dramatically, the new industries that will emerge. There is going to be so much new that we are not really thinking deeply about right now that we could do a whole other two-hour brainstorming session on what AI unlocks in terms of large-scale projects that are traditionally, or typically, or today held back because of the constraints on the technical feasibility of these projects.

Speaker: 4
58:39

And that ranges from accelerating new semiconductor technology to quantum computing to energy systems to transportation to habitation, etcetera, etcetera. There’s all sorts of transformations in every industry that are possible as these tools come online, and that will spawn insane new industries.

Speaker: 4
58:56

The most important point is the third one, which is we don’t know the overlap of job loss and job creation, if there is one, and so the rate at which these new technologies impact and create new markets. But I think Naval is right. I think that what happens in capitalism and in free societies is that capital and people rush to fill the hole of new opportunities that emerge because of AI, and that those grow more quickly than the old bubbles deflate.

Speaker: 4
59:21

So if there’s a deflationary effect in terms of job need in other industries, I think that the loss will happen slower than the rush to take advantage of creating new things will happen on the other side. So my bet is probably on the order of, I think, new things will be created faster than old things will be lost.

Speaker: 0
59:37

I think And,

Speaker: 1
59:38

actually, as a quick side note to that, the fastest way to help somebody get a job right now, if you know somebody in the market who’s looking for a job, the best thing you can do is say, hey. Go download the AI tools and just start talking to them. Just start using them in any way.

Speaker: 1
59:52

And then you can walk into any employer in almost any field and say, hey. I understand AI. And they’ll hire you in

Speaker: 3
59:57

the cloud.

Speaker: 0
59:58

Naval, you and I watched this happen. We had a front-row seat to it back in the day when you were doing Venture Hacks and I was doing Open Angel Forum. We had to, like, fight to find five or 10 companies a month. Then the cost of running these companies went down. They went down massively, from $5 million to start a company, to $2 million, then to $250K, then to $100K.

Speaker: 0
01:00:20

I think what we’re seeing is, like, three things concurrently. You’re gonna see all these jobs go away for automation, self driving cars, cashiers, etcetera. But we’re gonna also see static team size at places like Google. They’re just not hiring because they’re just having the existing bloated employee base learn the tools.

Speaker: 0
01:00:36

But I don’t know if you’re seeing this. The number of startups able to get a product to market with two or three people and get to a million in revenue is booming. What are you seeing in the startup landscape?

Speaker: 1
01:00:48

Definitely what you’re saying, in that there’s leverage. But at the same time, I think the more interesting part is that new startups are enabled that could not exist otherwise. My last startup, AirChat, could not have existed without AI because we needed the transcription and translation.

Speaker: 1
01:01:02

Even the current thing I’m working on, it’s not an AI company, but it cannot exist without AI. It is relying on AI. Even at AngelList, we’re significantly adopting AI. Like, everywhere you turn, it’s more opportunity, more opportunity, more opportunity. And people like to go on Twitter, or the artist formerly known as Twitter.

Speaker: 1
01:01:20

And, basically, they like to exaggerate. Like, oh my god, we’ve hit AGI. Oh my god, I just replaced all my mid-level engineers. Oh my god.

Speaker: 1
01:01:28

I’ve stopped hiring. To me, that’s, like, moronic. The two valid ones are the one-man entrepreneur shows, where there’s, like, one guy or one gal, and they’re, like, scaling up like crazy thanks to AI.

Speaker: 0
01:01:38

Shout out.

Speaker: 1
01:01:39

Or there are people who are embracing AI and being like, I need to hire. And I need to hire anyone who can even spell AI. Like, anyone who’s even used AI. Just come on in. Come on in. Again, I would say the easiest way to see that AI is not taking jobs but creating opportunities is go brush up on your AI. Learn a little bit.

Speaker: 1
01:01:57

Watch a few videos. Use the AI. Tinker with it, and then go reapply for that job that rejected you and watch how they pull you in.

Speaker: 5
01:02:04

In 2023, an economist named Richard Baldwin said AI won’t take your job. It’s someone using AI that will take your job, because they know how to use it better than you. And that’s kind of become a meme, and you see it floating around X. But I think there’s a lot of truth in that. You know, as long as you remain adaptive and you keep learning and you learn how to take advantage of these tools, you should do better.

Speaker: 5
01:02:24

And if you wall yourself off from the technology and don’t take advantage of it, that’s when you put yourself at risk.

Speaker: 1
01:02:29

Another way to think about it is these are natural language computers. So everyone who’s intimidated by computers before should no longer be intimidated. You don’t need to program anymore in some esoteric language or learn some obscure mathematics to be able to use these. You can just talk to them and they talk back to you. That’s magic.

Speaker: 0
01:02:46

The new programming language is English. Chamath, do you want to wrap us up here on this opportunity slash displacement slash chaos?

Speaker: 2
01:02:54

I was gonna say this before, but I’m pretty unconvinced anymore that you should bother even learning many of the hard sciences and maths that we used to learn as underpinnings. Like, I used to believe that the right thing to do was for everybody to go into engineering. I’m not necessarily as convinced as I used to be, because I used to say, well, that’s great first-principles thinking, etcetera, etcetera.

Speaker: 2
01:03:19

And you’re gonna get trained in a toolkit that will scale. And I’m not sure that that’s true. I think, like, you can use these agents and you can use deep research, and all of a sudden they replace a lot of that skill. So what’s left over? It’s creativity, it’s judgment, it’s history, it’s psychology, it’s all of these other sort of, like, softer skills. Leadership, communication.

Speaker: 2
01:03:40

Skills that allow you to manipulate these models in constructive ways, because when you think of, like, the prompt engineering that gets you to great answers, it’s actually just thinking in totally different, orthogonal ways and non-linearly. So that’s my last thought, which is it does open up the aperture. Meaning, for every smart mathematical genius, there’s many, many, many other people who have high EQ.

Speaker: 2
01:04:01

And all of a sudden, this tool actually takes the skill away from the person with just the high IQ and says, if you have these other skills now, you can compete with me equally. And I think that that’s liberating for a lot of people.

Speaker: 0
01:04:15

I’m in the camp of more opportunity. You know, I got to watch the movie industry a whole bunch when the digital cameras came out and more people started making movies, more people started making independent film shorts, and then of course the YouTube revolution. People started making videos on YouTube, or podcasts like this.

Speaker: 0
01:04:32

And if you look at what happened with, like, the special effects industry as well, we need far fewer people to make a Star Wars movie, to make a Star Wars series, to make a Marvel series. As we’ve seen, now we can get The Mandalorian, Ahsoka, and all these other series with smaller numbers of people, and they look better than, obviously, the original Star Wars series or even the prequels.

Speaker: 0
01:04:54

So there’s gonna be so many more opportunities. We’re now making more TV shows, more series, everything we wanted to see of every little character. That’s the same thing that’s happening in startups. I can’t believe that there is an app now, Naval, called Slopes just for skiing.

Speaker: 0
01:05:11

And there are 20 really good apps for just meditation, and there are 10 really good ones just for fasting. Like, we’re going down this long tail of opportunity, and there’ll be plenty of $1 million to $10 million businesses for us, you know, if people learn to use these tools.

Speaker: 5
01:05:27

I love how that’s the thing that tips you over.

Speaker: 0
01:05:31

Which one?

Speaker: 3
01:05:33

The You

Speaker: 5
01:05:33

get an extra Marvel movie or an extra Star Wars show, so that tips you over.

Speaker: 0
01:05:38

I think for a lot of people, it feels great that

Speaker: 5
01:05:41

AI may take over the world, but I’m gonna get an extra Star Wars movie. So I’m cool

Speaker: 1
01:05:45

with it.

Speaker: 0
01:05:45

I mean, are you not entertained?

Speaker: 5
01:05:47

One final point on this is-- Yeah. Look. I mean, given the choice between the two categories of techno-optimists and techno-pessimists, I’m definitely in the optimist camp, and I think we should be. But I think there’s actually a third category that I would submit, which is techno-realist, which is: technology is gonna happen.

Speaker: 5
01:06:06

Trying to stop it is like ordering the tides to stop. If we don’t do it, somebody else will. China’s gonna do it or somebody else will do it. And it’s better for us to be in control of the technology, to be the leader, rather than passively waiting for it to happen to us. And I just think that’s always true.

Speaker: 5
01:06:24

It’s better for businesses to be proactive and take the lead, disrupt themselves instead of waiting for someone else to do it, and I think it’s better for countries. And I think you did see this theme a little bit. I mean, these are my own views. I don’t wanna ascribe them to the vice president, but you did see, I think, a hint of the techno realism idea in his speech and in his tweet, which is, look, AI is gonna happen.

Speaker: 5
01:06:48

We might as well be the leader. If we don’t, we could lose in a key category that has implications for national security, for our economy, for many things. So that’s just not a world we wanna live in. So I think a lot of this debate is sort of academic because whether you’re an optimist or pessimist, just sort of glass half empty, half full, the question’s just, is it gonna happen or not?

Speaker: 5
01:07:12

And I think the answer is yes, so then we wanna control it. This is-- you know, let’s just boil it down. There’s not a tremendous amount of choice in this, I think.

Speaker: 1
01:07:19

I would agree heavily with one point, and I would just tweak another. The point I would agree with is that it’s gonna happen anyway, and that’s what DeepSeek proved. You can turn off the flow of chips to them, and you can turn off the flow of talent. What do they do? They just get more efficient, and they open-source the model when our guys were staying closed source for safety reasons.

Speaker: 5
01:07:39

And yeah. Exactly. And I think DeepSeek

Speaker: 0
01:07:42

For the safety of their equity.

Speaker: 5
01:07:44

Yeah. DeepSeek exploded the fallacy that the US has a monopoly in this category, and that somehow, therefore, we can slow down the train and that we have total control over the train. And I think what DeepSeek showed us: no. If we slow down the train, they’re just gonna win.

Speaker: 1
01:07:59

Yeah. The part where I’d try to tweak a little bit is the idea that we are gonna win, and by we, you mean America. The problem is that the best way to win is to be as open, as distributed, as innovative as possible. If this all ends up in the control of one company, they’re actually gonna be slower to innovate than if there’s a dynamic ecosystem, which, by its nature, will be open.

Speaker: 1
01:08:20

It will leak to China. It will leak to India. But these things have powerful network effects. We know this about technology. Almost all technology has network effects underneath.

Speaker: 1
01:08:29

And so even if you are open, you’re still gonna win, and you’re still gonna control

Speaker: 5
01:08:34

all of the-- The Internet. That was all true for the Internet. Right? The Internet’s an open technology. It’s based on-- But who, but who are the dominant companies? All the dominant companies are US companies because they were

Speaker: 1
01:08:43

in the lead.

Speaker: 4
01:08:44

Exactly right. Exactly right. Because we embraced the open Internet. We embraced the open Internet. China, that was different.

Speaker: 5
01:08:48

So there will be benefits for all of humanity, and I think the vice president’s speech was really clear that, look. We want you guys to be on board. We wanna be good partners. However, there are definitely gonna be winners economically, militarily. And in order to be one of those winners, you have to be a leader.

Speaker: 0
01:09:04

Who’s gonna get to AGI first, Naval? Is it gonna be open source? Who’s gonna win? Is it gonna be open source or closed source? Who’s gonna win the day? If we’re sitting here five, ten years from now, and we’re looking at the top three language models, which

Speaker: 4
01:09:16

I’m gonna get in trouble

Speaker: 1
01:09:17

for this, but I don’t think we know how to build AGI. But that’s a much longer discussion. AGI aside--

Speaker: 0
01:09:21

Who’s gonna have the best model five years from now?

Speaker: 2
01:09:23

Hold on. I 100% agree with you.

Speaker: 1
01:09:25

I just think it’s a different thing. But what we’re building are these incredible natural language computers. And actually, David, in a very pithy way, nailed the two big use cases. It’s search and it’s homework. It’s paperwork. It’s really paperwork. And a lot of these jobs that we’re talking about disappearing are actually paperwork jobs. They’re paperwork shuffling. These are made-up jobs.

Speaker: 1
01:09:44

Like, the federal government, as we’re finding out through DOGE, you know, a third of it is, like, people digging holes with spoons and another third are filling them back up.

Speaker: 5
01:09:51

They’re filling out paperwork and then burying it in a mine shaft.

Speaker: 4
01:09:54

They’re burying it

Speaker: 1
01:09:54

in a mine shaft. No, Iron Mountain. Yeah. So I think a lot of these made-up jobs-- And

Speaker: 5
01:09:58

then they’re

Speaker: 1
01:09:58

gonna go

Speaker: 5
01:09:59

down the mine shaft to get the paperwork when someone retires and bring it up.

Speaker: 0
01:10:01

You know what? I’m gonna get them some thumb drives. We can increase the throughput of the elevator with some thumb drives. It would be incredible.

Speaker: 1
01:10:07

What we find out is the DMV has been running the government for the last seventy years. It’s been compounding. They’ve been compounding this whole time. That’s really what’s going on. The DMV is in charge.

Speaker: 5
01:10:16

I mean-- Yeah. If the world ends in nuclear war, God forbid, the only thing that’ll be left will be the cockroaches and then a bunch of, like, government documents.

Speaker: 0
01:10:25

TPS reports.

Speaker: 5
01:10:26

TPS reports down in a mineshaft.

Speaker: 0
01:10:28

Basically. Yeah. Let’s take a moment, everybody, to thank our czar. We miss him. We wish he could be here for the whole show. Thank you, czar.

Speaker: 2
01:10:40

Thank you to the czar. We miss

Speaker: 5
01:10:41

you.

Speaker: 0
01:10:42

We miss you. We miss you, little buddy. I wish we could talk about Ukraine, but we’re not allowed. Get back to work. We’ll talk about it another time. We’ll have coffee. Bye. I’ll see you in the commissary. Thanks for the invite.

Speaker: 3
01:10:52

Bye. Cheers.

Speaker: 0
01:10:54

Man, I’m so excited, Naval. Sacks invited me to go to the military base. I’m gonna be in the commissary.

Speaker: 1
01:11:00

Oh, you

Speaker: 4
01:11:00

didn’t, J-Cal. You invited yourself. Be honest.

Speaker: 0
01:11:02

I did. Yes. I did. I put it on his calendar.

Speaker: 1
01:11:05

To keep the conversation moving, let me segue a point

Speaker: 3
01:11:07

that

Speaker: 1
01:11:07

came up that was really important into tariffs. And the point is, even though the Internet was open, the US won a lot of the Internet. A lot of US companies won the Internet. Yep. And they won that because we got there the firstest with the mostest, as they say in the military.

Speaker: 1
01:11:24

And that matters because a lot of technology businesses have scale economies and network effects underneath, even basic brand-based network effects. If you go back to the late nineties, early 2000s, very few people would have predicted that we would have ended up with Amazon basically owning all of e-commerce.

Speaker: 1
01:11:40

You would have thought it would have been perfect competition and very spread out. And that applies to how we ended up with Uber as basically the one taxi service, or we ended up with Airbnb, one Airbnb. It’s just network effects. Network effects.

Speaker: 1
01:11:53

Network effects rule the world around me. But when it comes to tariffs and when it comes to trade, we act like network effects don’t exist. The classic Ricardian comparative advantage dogma says that you should produce what you’re best at. I produce what I’m best at, and we trade.

Speaker: 1
01:12:07

And then even if you wanna charge me more for it, if you wanna impose tariffs for me to ship to you, I should still keep tariffs down because I’m better off. You’re just selling me stuff cheaply. Great. Or if you wanna subsidize your guys, great. You’re selling me stuff cheaply.

Speaker: 1
01:12:19

The problem is that is not how most modern businesses work. Most modern businesses have network effects. As a simple thought experiment, suppose that we have two countries. Right? I’m China. You’re the US. I start out by subsidizing all of my companies and industries that have network effects. So I’ll subsidize TikTok.

Speaker: 1
01:12:38

I’ll ban your social media, but I’ll push mine. I will subsidize my semiconductors, which do tend to have winner-take-all in certain categories, or I’ll subsidize my drones. And then BYD. Exactly. BYD, self-driving, whatever.

Speaker: 1
01:12:52

And then when I win, I own the whole market, and I can raise prices. And if you try to start up a competitor, then it’s too late. I’ve got network effects. Or if I’ve got scale economies, I can lower my price to zero, crash you out of business. No one in their right mind will invest, and I’ll raise prices right back up.

Speaker: 1
01:13:07

So you have to understand that certain industries have hysteresis, or they have network effects, or they have economies of scale. And these are all the interesting ones. These are all the high-margin businesses. So in those, if somebody is subsidizing, or they’re raising tariffs against you to protect their industries and let them develop, you do have to do something.

Speaker: 1
01:13:25

You can’t just completely back down.
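Naval’s thought experiment here is essentially about path dependence: in a winner-take-all market with network effects, a temporary subsidy early on can decide the permanent winner. Below is a minimal toy sketch of that dynamic, with made-up numbers and a simple Metcalfe-style rule; it is an editorial illustration, not a model discussed on the show.

```python
# Toy sketch (not the speakers' model): subsidies in a market with network
# effects. Each platform's pull on new users is proportional to the square
# of its existing user base (a Metcalfe-style assumption).

def simulate(subsidy_rounds: int, rounds: int = 30, new_users: int = 100):
    """Two platforms split each round's new users in proportion to
    user_base**2; platform A gets a temporary boost early on."""
    a, b = 10.0, 10.0                                 # evenly matched at the start
    for r in range(rounds):
        boost = 50.0 if r < subsidy_rounds else 0.0   # subsidy ends after a few rounds
        pull_a, pull_b = (a + boost) ** 2, b ** 2
        share_a = pull_a / (pull_a + pull_b)
        a += new_users * share_a
        b += new_users * (1 - share_a)
    return a, b

for s in (0, 3, 6):
    a, b = simulate(subsidy_rounds=s)
    print(f"subsidy for {s} rounds -> A: {a:,.0f} users, B: {b:,.0f} users")
```

With no subsidy the two platforms split the market evenly; even a brief early subsidy tips essentially all later growth to the subsidized side, and the lead persists after the subsidy ends, which is the dynamic Naval argues trade policy has to account for.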

Speaker: 2
01:13:28

What are your

Speaker: 0
01:13:29

thoughts, Chamath, about tariffs and network effects? It does seem like we do wanna have redundancy in the supply chain, so there are some exceptions here. Any thoughts on how this might play out? Because, yeah, Trump brings up tariffs every forty-eight hours and then it doesn’t seem like any of them land.

Speaker: 0
01:13:46

So I don’t know. I’m still on my seventy-two-hour Trump rule, which is: whatever he says, wait seventy-two hours and then maybe see if it actually comes to pass. Where do you stand on all these tariffs and tariff talk?

Speaker: 2
01:13:58

Well, I think the tariffs will be a plug. Are they coming? Absolutely. The quantum of them? I don’t know. And I think that the way that you can figure out how extreme it will be, it’ll be based on what the legislative plan is for the budget. So there’s two paths right now. Path one, which I think is a little bit more likely, is that they’re gonna pass a slimmed-down plan in the Senate just on border security and military spending, and then they’ll kick the can down the road for probably another three or four months on the budget.

Speaker: 2
01:14:32

Plan two is this one big beautiful bill that’s working its way through the House. And there, they’re proposing trillions of dollars of cuts. In that mode, you’re going to need to raise revenues somehow, especially if you’re giving away tax breaks, and the only way to do that is probably through tariffs, or one way to do it is through tariffs.

Speaker: 2
01:14:51

My honest opinion, Jason, is that I think we’re in a very complicated moment. I think the Senate plan is actually, on the margins, more likely and better, and the reason is because I think that Trump is better off getting the next sixty to ninety days of data. I mean, we’re in a real pickle here. We have persistent inflation. We have a broken Fed. They’re totally asleep at the switch.

Speaker: 2
01:15:17

And the thing that Yellen and Biden did, which in hindsight now was extremely dangerous, is they issued so much short-term paper that, in totality, we have $10 trillion we need to finance in the next six to nine months. So it could be the case that we have rates that are, like, five, five and a quarter, five and a half percent.

Speaker: 2
01:15:42

I mean, that’s extremely bad at the same time as inflation, at the same time as delinquencies are ticking up. So I think tariffs are probably going to happen,

Speaker: 3
01:15:57

but

Speaker: 2
01:15:57

I think that Trump will have the most flexibility if he has time to see what the actual economic conditions will be, which will be more clear in three, four, five months. And so I almost think this big beautiful bill is counterproductive, because I’m not sure we’re gonna have all the data we need to get it right.
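A rough back-of-envelope on the refinancing point Chamath raises above, using the figures he cites (roughly $10 trillion of maturing short-term paper and rates around five to five and a half percent). This is an editorial illustration only; the inputs are assumptions taken from his remarks, not numbers computed on the show.

```python
# Back-of-envelope: annual interest cost of rolling ~$10T of short-term
# Treasury paper at different rates (figures taken from the discussion above).
principal = 10e12                      # ~$10 trillion of maturing paper
for rate in (0.0500, 0.0525, 0.0550):
    annual_interest = principal * rate
    print(f"at {rate:.2%}: ~${annual_interest / 1e9:,.0f}B per year in interest")
# Each quarter-point on $10T is ~$25B a year, which is why the timing of the
# refinancing against the rate environment matters so much in this argument.
```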

Speaker: 0
01:16:18

Friedberg, any thoughts on these tariffs? You’ve been involved in the global marketplace, especially when it comes to produce and wheat and all this corn and everything. What do you think the dynamic here is gonna be, or is it saber rattling and a tool for Trump?

Speaker: 4
01:16:35

The biggest buyer of US ag exports is China.

Speaker: 0
01:16:38

China.

Speaker: 4
01:16:38

Ag exports are a major revenue source, major income source, and a major part of the economy for a large number of states. And so there will be, as there was in the first Trump presidency, very likely, very large transfer payments made to farmers because China is very likely gonna tariff imports or stop making import purchases altogether, which is what happened during the first presidency.

Speaker: 4
01:17:04

When they did that, the federal government, I believe, had transfer payments of north of $20 billion to farmers. This is a not negligible sum, and it’s a not negligible economic effect, because there’s then a rippling effect throughout the ag economy. So I think that’s one key thing that I’ve heard folks talk about: the activity that’s gonna be needed to support the farm economy as the US’s biggest ag customer disappears.

Speaker: 4
01:17:31

In the early twentieth century, we didn’t have an income tax, and the federal revenue was almost entirely dependent on tariffs. When tariffs were cut, there was an expectation that there would be a decline in federal government revenue, but what actually happened is volume went up.

Speaker: 4
01:17:45

So lower tariffs actually increased trade, increased the size of the economy. So this is where a lot of economists take their basis: hey, guys, if we do these tariffs, it’s actually gonna shrink the economy. It’s gonna cause a reduction in trade. The counterbalancing effect is one that has not been tested in economics, right, which is: what’s gonna happen if simultaneously we reduce the income tax and reduce the corporate income tax and basically increase capital flows through reduced taxation while doing the tariff implementation at the same time?

Speaker: 4
01:18:17

So it’s a grand economic experiment, and I think we’ll learn a lot about what’s gonna happen here as this all moves forward. I do think, ultimately, many of these countries are gonna capitulate to some degree, and we’re gonna end up with some negotiated settlement that’s gonna hopefully not be too short term impactful on the economies and the people and the jobs that are dependent on trade.

Speaker: 2
01:18:35

The economy feels like it’s in a very precarious place.

Speaker: 1
01:18:39

It does to asset holders.

Speaker: 0
01:18:41

Yeah. But to asset holders.

Speaker: 1
01:18:42

And, obviously, they’ve left it in a bad place in the last administration, and we shut down the entire country for a year over COVID. And the bill for that has come due, and that’s reflected in inflation.

Speaker: 0
01:18:51

I think there are

Speaker: 1
01:18:51

a couple other points on tariffs. First is it’s not just about money. It’s also about making sure we have a functional middle class with good jobs. Because if you have a non-tariff world, maybe all the gains go to an upper class and an underclass, and then you can’t have a functioning democracy when the average person is on one of those two extremes.

Speaker: 1
01:19:09

So I think that’s one issue. Another is strategic industries. If you look at it today, probably the largest defense contractor in the world is DJI. They got all the drones. Even in Ukraine, both sides are getting all their drone parts from DJI. Now they’re getting it through different supply chains and so on.

Speaker: 1
01:19:25

But Ukrainian drones and Russian drones, the vast majority of them are coming through China, through DJI. And we don’t have that industry. If we have a kinetic conflict right now and we don’t have a good drone supply chain internally in the US, we’re probably gonna lose, because those things are autonomous bullets.

Speaker: 1
01:19:41

That’s the future of all warfare. We’re buying F-35s, and the Chinese are building swarms of gnats. Scale. At scale. So we do have to re-onshore those critical supply chains. And what is a drone supply chain? It’s not just-- there’s not a thing called a drone.

Speaker: 1
01:19:55

It’s like motors and semiconductors-- Yeah.

Speaker: 0
01:19:57

It’s a lot of pieces.

Speaker: 1
01:19:58

Optics and lasers and just everything across the board. So I think there are other good arguments for at least reshoring some of these industries. We need them. And the United States is very lucky in that it’s very autarkic. We have all the resources. We have all the supplies.

Speaker: 1
01:20:12

We can be upstream of everybody with all the energy. To the extent we’re importing any energy, that is a choice we made. That is not because we lack the energy. Right. Yeah. That’s right.

Speaker: 1
01:20:24

Because with all the oil resources and the natural gas and fracking, and with all the work we’ve done in nuclear fission and small reactors, we should absolutely be energy independent.

Speaker: 0
01:20:34

Running the table on it. We should have a massive surplus. And, hey, you know, if you’re worried about, you know, a couple million DoorDash and Uber drivers losing their jobs to automation, like, hey, there’s gonna be factories to build these parts for these drones that we’re gonna need.

Speaker: 0
01:20:50

So there’s a lot of opportunity, I guess, for people to--

Speaker: 1
01:20:53

And there is a difference between different kinds of jobs. Those kinds of jobs are better jobs, building difficult things at scale, physically, that we need for both national security and for innovation. Those are better jobs than, you know, paperwork, writing essays for other people to read. Yeah. Or even driving cars.

Speaker: 0
01:21:12

Alright. Listen. I wanna get to two more stories here. We have a really interesting copyright story that I wanted to touch on. Thomson Reuters just won the first major US AI copyright case, and fair use played a major role in this decision. This has huge implications for AI companies here in the United States. Obviously, OpenAI and the New York Times, Getty Images versus Stability.

Speaker: 0
01:21:36

We’ve talked about these, but it’s been a little while, because the legal system takes a little bit of time and these are very complicated cases, as we’ve talked about. Thomson Reuters owns Westlaw. Now, if you don’t know that, it’s kind of like LexisNexis. It’s one of the legal databases out there that lawyers use to find cases, etcetera.

Speaker: 0
01:21:55

And they have a paid product with summaries and analysis of legal decisions. Back in 2020, this is two years before ChatGPT, Reuters sued a legal research competitor called Ross for copyright infringement. Ross had created an AI-powered legal search engine. Sounds great.

Speaker: 0
01:22:11

But Ross had asked Westlaw if they could pay for a license to its content for training. Westlaw said no. This all went back and forth, and then Ross signed a similar deal with a company called LegalEase. The problem is LegalEase’s database was just copied and pasted from a bunch of Westlaw answers.

Speaker: 0
01:22:27

So Reuters, Westlaw, sued Ross in 2020, accusing the company of being vicariously liable for LegalEase’s direct infringement. Super important point. Anyway, the judge originally favored Ross on fair use. This week, the judge reversed this ruling and found Ross liable, noting that, after further review, fair use does not apply in this case.

Speaker: 0
01:22:47

This is the first major win, and we debated this. So here’s a clip. You know, you heard it here first on the All-In pod. What I would say is, you know, when you look at that fair use doctrine, I’ve got a lot of experience with it. The fourth factor of the test, I’m sure you’re well aware of this, is the effect of the use on the potential market and the value of the work.

Speaker: 0
01:23:07

If you look at the lawsuits that are starting to emerge, it is Getty’s right to then make derivative products based on their images. I think we would all agree. Stable Diffusion, when they use these open web crawlers, that is no excuse, using an open web crawler to avoid getting a license from the original owner of that content.

Speaker: 0
01:23:24

Just because you can technically do it doesn’t mean you’re allowed to do it. In fact, the open web projects that provide these say explicitly, we do not give you the right to use this. You have to then go read the copyright rules on each of those websites. And on top of that, if somebody were to steal the copyrights of other people and put it on the open web, which is happening all day long, if you’re building a derivative work like this, you still need to go get it.

Speaker: 0
01:23:47

So it’s no excuse that I took some site in Russia that did a bunch of copyright violations and then I indexed them for my training model. So I think this is going to result--

Speaker: 4
01:23:55

Hey, Bert.

Speaker: 2
01:23:56

Hey, Bert. Can you shoot me in the face and let me know when this segment’s going on?

Speaker: 0
01:24:01

Oh, great. I feel the same way right now. Exactly.

Speaker: 4
01:24:06

I know. Me too. Yeah. Okay. Good segment. Let’s move on.

Speaker: 0
01:24:10

Well, since these guys don’t give a-- about copyright holders, Naval, what do you think? You know, I’m so glad you’re here, Naval, to actually talk about the topics these two other guys wouldn’t have been engaged with. Okay.

Speaker: 1
01:24:21

I’m gonna go out on even a thinner limb and say I largely agree with you. I think it’s a bit rich to crawl the open web, hoover up all the data, offer direct substitution for a lot of use cases (because, you know, now you start and end with the AI models; it’s not even like you link out like Google did), and then you just close off the models for safety reasons.

Speaker: 1
01:24:36

I think if you trained on the open web, your model should be open source.

Speaker: 0
01:24:40

Yeah, absolutely. So that would be a fix. I have a prediction here. I think this is all gonna wind up like the Napster/Spotify case. For people who don’t know, Spotify pays, I think, 65¢ on the dollar to the original underwriters of that content, the music industry. And they figured out a way to make a business, and Napster is roadkill.

Speaker: 0
01:25:01

I think that there is a non-zero chance, like it might be five or 10%, that OpenAI is gonna lose the New York Times lawsuit, and they’re gonna lose it hard, and there could be injunctions. And I think the settlement might be that these language models, especially the closed ones, are gonna have to pay some percentage of their revenue in a negotiated settlement, half, two-thirds, to the content holders.

Speaker: 0
01:25:26

And this could make the content industry have a massive, massive uplift and a massive resurgence.

Speaker: 2
01:25:35

I think that the problem-- there’s an example on the other side of this, which is that there’s a company that provides technical support for Oracle, a third-party company. And Oracle has tried umpteen times to sue them into oblivion using copyright infringement as part of the justification.

Speaker: 2
01:25:51

And it’s been a pall over the stock for a long time. The company’s name is Rimini Street. Don’t ask me why it’s on my radar, but I’ve just been looking at it. And they lost this huge lawsuit, Oracle won, and then it went to appellate court, and then it was all vacated. Why am I bringing this up?

Speaker: 2
01:26:09

I think that the legal community has absolutely no idea how these models work, because you can find one case that goes one way and one case that goes the other. And what I would say should become standard reading for anybody bringing any of these lawsuits: there’s an incredible video that Andrej just dropped, where he does, like, this deep dive into LLMs and he explains ChatGPT from the ground up.

Speaker: 2
01:26:36

It’s on YouTube. It’s three hours. It’s excellent. And it’s very difficult to watch that and not get to the same conclusion that you guys did. I’ll just leave it at that. I tend to agree with this.

Speaker: 1
01:26:49

There’s also a good old video by Ilya Sutskever, who was, I believe, the founding chief scientist or CTO of OpenAI. And he talks about how these large language models are basically extreme compressors. And he models them entirely as their ability to compress. And they’re lossy compression. Exactly.

Speaker: 1
01:27:07

Lossy compression.
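To make the "extreme compressor" framing concrete, here is a rough order-of-magnitude sketch. The model size and corpus size are illustrative assumptions on my part, not figures from the episode; the point is only the shape of the ratio.

```python
# Order-of-magnitude sketch of the "LLMs as lossy compressors" framing.
# All numbers below are illustrative assumptions, not from the episode.
params = 70e9              # a 70B-parameter model
bytes_per_param = 2        # 16-bit weights
model_bytes = params * bytes_per_param

tokens = 10e12             # ~10 trillion training tokens
bytes_per_token = 4        # rough average for English text
corpus_bytes = tokens * bytes_per_token

print(f"weights: ~{model_bytes / 1e9:.0f} GB")
print(f"corpus:  ~{corpus_bytes / 1e12:.0f} TB")
print(f"ratio:   ~{corpus_bytes / model_bytes:.0f}x")
# The weights are hundreds of times smaller than the training text, so
# whatever the model retains is necessarily a lossy summary, not a verbatim copy.
```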

Speaker: 2
01:27:09

Exactly. Yeah. Exactly. So--

Speaker: 1
01:27:10

and Google got sued for fair use back in the day, but the way they managed to get past the argument was they were always linking back to you. They showed you

Speaker: 0
01:27:17

the little bit of information. Traffic.

Speaker: 1
01:27:19

They sent you the traffic.

Speaker: 2
01:27:20

This is lossy compression. It is absolutely-- I’m now on your page. I hate to say this, Jason. I agree with you. You were right. You were right.

Speaker: 0
01:27:35

That’s all I wanted to hear. All these years, That’s

Speaker: 3
01:27:38

all I wanted to hear. All these years. That’s all I wanted to hear. That’s all

Speaker: 0
01:27:39

I wanted to hear. That’s all I

Speaker: 3
01:27:39

wanted to hear. That’s all I wanted

Speaker: 1
01:27:40

to hear. That’s all I

Speaker: 3
01:27:40

wanted to hear.

Speaker: 1
01:27:40

That’s all I wanted to hear. All these years. That’s all I wanted to hear.

Speaker: 2
01:27:43

That’s all I wanted to hear. That’s all

Speaker: 0
01:27:46

I wanted to hear. That-- Rupert Murdoch said we should hold the line with Google and not allow them to index our content without a license. And Google navigated it successfully, and he wasn’t able to get them to stop. I think what’s happened now is that the New York Times remembers that.

Speaker: 0
01:28:09

They all remember losing their content and these snippets and the OneBox to Google, and they couldn’t get that genie back in the bottle. I think the New York Times realizes this is their payday. I think the New York Times will make more money from licenses from LLMs than they will make from advertising or subscriptions, eventually. This will renew the model.

Speaker: 1
01:28:32

Almost. I think the New York Times content is worthless to an LLM, but that’s a different story. I think they’re

Speaker: 0
01:28:37

actually about cutting the print. Political reason, whatever.

Speaker: 3
01:28:39

But I

Speaker: 0
01:28:40

can tell you, as a user, I loved the Wirecutter. I think you knew Brian and everybody over at the Wirecutter. That was such a--

Speaker: 1
01:28:47

Enough. Yeah. Wirecutter.

Speaker: 0
01:28:48

What a great product. I used to pay for the New York Times. I no longer pay for the New York Times. My main reason was I would go to the Wirecutter

Speaker: 3
01:28:54

Yeah.

Speaker: 0
01:28:54

And I would just buy whatever they told me to buy. Now I go to ChatGPT, which I pay for, and ChatGPT tells me what to buy based on the Wirecutter. I’m already paying for that, so I stopped paying for the Times.

Speaker: 4
01:29:08

I philosophically disagree with all of your nonsense on this topic. All three of you are wrong, and I’ll tell you why. Number one, if information is out on the open Internet, Mhmm, I believe it’s accessible and it’s viewable, and I view an LLM or a web crawler as basically being a human that’s reading and can store information in its brain if it’s out there in the open. If it’s behind a paywall, a hundred percent.

Speaker: 4
01:29:32

If it’s behind some protected password

Speaker: 1
01:29:34

Wait. Wait.

Speaker: 4
01:29:35

Wait. Wait. David.

Speaker: 1
01:29:36

In that case, can a Google crawler just crawl a site and serve it on Google? Why can’t they do that?

Speaker: 4
01:29:42

So here’s the fair use. The fair use is you cannot copy, you cannot repeat the content. You cannot take the content and repeat it.

Speaker: 1
01:29:49

That is how the law is currently written. But now, what I have is a tool that can remix it with 50 other pieces of similar content, and I can change the words slightly and maybe even translate it into a different language. So where does it stop?

Speaker: 4
01:30:01

Do you know the musical artist Girl Talk? We should have done a Girl Talk track

Speaker: 0
01:30:05

here. But he’s got... Weirdo. Musical taste is good. Oh, good. Here we go.

Speaker: 4
01:30:10

He basically takes small samples of popular tracks, and he got sued for the same problem. There was another guy named White Panda, I believe, who had the same problem. Ed Sheeran got sued for this.

Speaker: 1
01:30:21

Yeah. But there are entire sites, like Stack Overflow and WikiHow, that have basically disappeared now, because you can just swallow them all up and spit it all back out in ChatGPT with slight changes. So, yep. I think that the first step for

Speaker: 3
01:30:33

success how

Speaker: 4
01:30:33

much of a slight change is exactly the right question.

Speaker: 1
01:30:35

Yeah. Yeah. I like

Speaker: 4
01:30:36

the question.

Speaker: 1
01:30:36

Yeah. So that’s the question. And it actually falls out of the AGI question: are these things actually intelligent, and are they learning, or are they compressing and regurgitating? That’s the question.

Speaker: 4
01:30:45

I wonder this about humans, and that’s why I bring up White Panda and Girl Talk in audio, but also visual art. There were always artists, even in classical music. I don’t know if you guys are classical music people. But, right, there’s a demonstration of how, you know, one composer learned from the next, and you can actually frame the music as standing on the shoulders of the prior.

Speaker: 4
01:31:06

And the same is true in almost all art forms and almost all human knowledge, and maybe AI.

Speaker: 1
01:31:11

It’s very hard to figure that out.

Speaker: 4
01:31:13

Yeah. That’s exactly right.

Speaker: 3
01:31:14

That’s a

Speaker: 1
01:31:15

hard thing. Very hard to figure that out, which is why I come back to: there are only stable solutions to this. And it’s gonna happen anyway. If we don’t crawl it, the Chinese will crawl it. Right? DeepSeek proved that. So there’s only one of two stable solutions. Either you pay the copyright holders, which I actually think doesn’t work, and the reason is because someone in China will crawl it and they’ll just dump the weights. Right?

Speaker: 1
01:31:34

So they can just crawl and then dump the compressed weights. Or if you crawl, make it open. At least contribute something back to open source. Right? You crawled open data, contribute back to open source. And the people who don’t wanna be crawled, they’re gonna have to go to huge lengths to protect their data.

Speaker: 1
01:31:50

Now everybody knows to protect the data.

Speaker: 0
01:31:53

Yeah. Well, the last thing that’s happening here: I have a book out from Harper Business on the shelf behind me, and I’m getting 2,500 smackaroos for the next three years from Microsoft indexing it. So they’re going out and they’re licensing stuff. And they’re paying $2,500.

Speaker: 4
01:32:12

So you’re both

Speaker: 0
01:32:12

Literally, I’m getting $2,500 for three years, a bunch of Harper

Speaker: 4
01:32:16

To go into an LLM.

Speaker: 0
01:32:18

To go into Microsoft specifically. And you know what? I’m gonna sign it, I think, because I just wanna set the precedent. Maybe next time it’s 10,000. Maybe next time it’s 250. I don’t care. I just wanna see people get paid for their content, and I’m just hoping that Sam Altman loses this lawsuit and they get an injunction against it.

Speaker: 0
01:32:35

Hey, well, just because he’s just such a weasel in terms of, like, making... Stop. ...OpenAI into a closed thing.

Speaker: 1
01:32:42

I mean,

Speaker: 0
01:32:43

I like Sam personally. Stop. But I think what he did was, like, the super weasel move of all time for his own personal benefit. And this whole thing, like, oh, I have no equity. I get health care.

Speaker: 3
01:32:53

He does it for the

Speaker: 2
01:32:53

And now I get 10%. Bro, he does it. But what was the statement? He does it for the... I do it for the joy,

Speaker: 0
01:33:00

the happiness. The benefits. I don’t know, the benefits. I think he got health care.

Speaker: 1
01:33:04

I think in OpenAI’s defense, they do need to raise a lot of money and they gotta incent their employees, but that doesn’t mean they need to take over the whole thing. The nonprofit portion can still stay the nonprofit portion, get the lion’s share of the benefits, and stay in charge, and then he can have an incentive package and employees can have an incentive package

Speaker: 0
01:33:21

make sure they get a percentage of the revenue. Just give them, like, 10%

Speaker: 4
01:33:25

of the revenue goes to the team. Bought out

Speaker: 1
01:33:27

right now for $40 billion, and then the whole thing disappears into a closed system. That part makes no sense to me.

Speaker: 0
01:33:32

That’s called a shell game and a scam.

Speaker: 1
01:33:34

Yeah. I think Sam and his team would do better to leave the nonprofit part alone, leave an actual independent nonprofit board in charge, and then have a strong incentive plan and a strong fundraising plan for the investors and the employees. So I think this is workable. It’s just trying to grab it all.

Speaker: 1
01:33:50

It just seems way off, especially when it was built on open algorithms from Google, open data from the West, and on nonprofit funding from Elon and others.

Speaker: 0
01:33:58

I mean, what a great proposal that we just workshopped here. What if they just... what do they make, $6 billion a year? Just take 10% of it, $600 million every year, and that goes into a

Speaker: 2
01:34:09

bonus. They’re losing money, Jason, so they have to

Speaker: 0
01:34:11

Okay. Eventually No.

Speaker: 1
01:34:12

But even equity. They could give equity to the people building it, but they could still leave it in the control of the nonprofit. I just don’t understand this conversion. I mean, there was a board coup. Right? The board tried to fire Sam, and Sam took over the board.

Speaker: 1
01:34:25

Now it’s his handpicked board. So it also looks like self-dealing. Right? And, yeah, they’ll get an independent valuation, but we all know that game. You hire a valuation expert who’s gonna say what you want them to say, and they’ll check

Speaker: 0
01:34:34

the box. But yeah.

Speaker: 1
01:34:35

Yeah. If they’re gonna capture the light cone of all future value or build superintelligence, you know, we know that it’s worth a lot more. That’s why Elon just bid a hundred billion.

Speaker: 2
01:34:42

Exactly. You’re saying the things that the regulators and the legal community actually have no insight into, because they’ll see a fairness opinion and they think, oh, it says fairness and opinion, two words side by side, it must be fair. And they don’t know how all of this stuff is gamed. So, yeah.

Speaker: 0
01:34:58

Yeah. Oh man, I’ve got stories about 409As that would... Exactly.

Speaker: 2
01:35:02

Oh, yeah. 409As are gamed. Yeah. These fairness opinions are gamed. But the reality is I don’t think the legal and the judicial community has any idea.

Speaker: 0
01:35:13

I mean, imagine if a founder you invested in, this is just a total imaginary situation, Naval, had, like, a great term sheet at some incredible dollar amount, didn’t take it, ran the valuation down to, like, under a million, gave themselves a bunch of shares, and then took it three months later.

Speaker: 3
01:35:29

Well, I

Speaker: 0
01:35:29

don’t know. What would that be called? What does it

Speaker: 4
01:35:32

call them? Yeah.

Speaker: 0
01:35:33

Securities fraud? Can we wrap up? Yeah. Let’s wrap on your story.

Speaker: 2
01:35:35

I had an interesting... Nick will show you the photo. I had an interesting dinner on Monday with Bryan Johnson, the Don’t Die guy. He came over to my house.

Speaker: 0
01:35:43

How’s his erection doing overnight?

Speaker: 2
01:35:45

What we talked about is that he’s got three hours a night of nighttime erections. Wow. Look at this. By the way, first of all, I’ll tell you. I think

Speaker: 0
01:35:54

that family. Hey. I...

Speaker: 2
01:35:55

think that he’s Coon. Coon. Coon.

Speaker: 1
01:35:57

Wait. Which one of those is giving him the erection?

Speaker: 2
01:35:59

No. No. No. He measures his nighttime erections.

Speaker: 0
01:36:02

Who’s giving him the erection? Oh, but he

Speaker: 2
01:36:04

But he said that when he started... so, by the way, he said he was 43 when he started this thing. He was basically clinically obese. Yeah. And in these next four years he has become a specimen. He now has three hours a night of nighttime erections, but that’s not the interesting thing.

Speaker: 2
01:36:20

At the end of this dinner... by the way, his skin is incredible. I was not sure, because when you see the pictures, you know, but his skin in real life is like a porcelain doll’s. Both my wife and I were like, we’ve never seen skin like this, and it’s incredibly soft.

Speaker: 0
01:36:35

Wait. Wait. Wait. Wait. Wait. Woah. Woah. Woah. How do you know his skin is soft?

Speaker: 2
01:36:38

You know, you brush your hand against his forearm or whatever, you know, he gives a hug at the end of the night. I’m telling you, the skin... So

Speaker: 0
01:36:44

it is a supple skin?

Speaker: 2
01:36:46

Bro, it’s the softest skin I’ve ever touched in my life. Anyways, that’s not the point. It was a really fascinating dinner. He walked through his whole protocol. Mhmm. But at the end of it, I think it was Nikesh, the CEO of Palo Alto Networks, who was just like, give me the top three things. Top three.

Speaker: 2
01:37:03

And of the top three things, what I’ll boil it down to is the top one thing, which is like 80% of the 80%. It’s all about sleep.

Speaker: 0
01:37:14

I was about to say sleep.

Speaker: 2
01:37:15

And he walked through his nighttime routine and it’s incredible and it’s straightforward. It’s really simple. It’s like how you do a wind down. Anyways, I have tried to

Speaker: 0
01:37:24

Explain the wind down briefly.

Speaker: 2
01:37:26

Let’s just say that, because Bryan goes to bed much earlier. So our normal time, let’s just say, you know, ten, ten thirty. So my time, I try to go to bed by 10:30. He’s like, you need to be in bed. You need to, first of all, stop eating three or four hours before. Right? Okay. And I do that. I eat at 6:30, so I have about three hours.

Speaker: 2
01:37:42

You’re in bed by 9:30 or ten. You deal with the self-talk, right? Like, okay, here’s the active mind telling you all the things you have to fix in the morning. Talk it out, put it in its place, say I’m gonna deal with this in the morning.

Speaker: 0
01:37:56

Write it down in a journal, you’re saying? You know,

Speaker: 2
01:37:57

just do that.

Speaker: 0
01:37:58

Whatever you do

Speaker: 2
01:37:58

so that you put it away. You cannot be on your phone.

Speaker: 0
01:38:02

That’s gotta be in a different room.

Speaker: 2
01:38:03

Or you just gotta be able to shut it down and then read a book, so that you’re actually just engaged in something. And he said that he typically falls asleep within three to four minutes of getting into bed and starting his

Speaker: 0
01:38:16

journey. What?

Speaker: 2
01:38:18

I tried it, so I’ve been doing it since I had dinner with him on Monday. Last night, I fell asleep within fifteen minutes. The hardest part for me is to put the phone away. I can’t do it.

Speaker: 0
01:38:28

Of course. Of course. What about you, Naval? What’s your wind-down?

Speaker: 1
01:38:32

So I know Bryan pretty well, actually. And I joke that I’m married to the female Bryan Johnson, because my wife has some of his routines. But she’s the natural version, no supplements, and she’s intense. And I think when Bryan saw my sleep score from my Eight Sleep, he was shocked.

Speaker: 1
01:38:51

He was just like, you’re gonna die. He’s like, you’re literally gonna die.

Speaker: 3
01:38:54

What are you

Speaker: 1
01:38:54

At, like, 70, 80? No. It’s terrible. It’s awful. But... Tell me

Speaker: 0
01:38:58

the truth. What’s your number? What’s your number?

Speaker: 1
01:39:00

It’s, like, in the thirties, forties. But, yeah, it’s also because I don’t sleep much. I only sleep a few hours at night, and I also move around a lot in the bed and so on. But it’s fine. I never have trouble falling asleep. But I would say that, yes, his skin care routine is amazing. His diet is incredible. He is a genuine character.

Speaker: 1
01:39:17

I do think a lot of what he’s saying, aside from the supplements (I’m not a big believer in supplements), does work. Yeah. I don’t know if it’s necessarily gonna slow down your aging, but you’ll look good and you’ll feel good. Yeah. Sleep is the number one thing.

Speaker: 1
01:39:29

In terms of falling asleep, I don’t think it’s really about whether you look at your phone or not, believe it or not. I think it’s about what you’re doing on your phone. If you’re doing anything that is cognitively stressful or getting your mind to spin, then yes.

Speaker: 4
01:39:41

So you think, like, you can

Speaker: 2
01:39:43

scroll TikTok and fall asleep is

Speaker: 1
01:39:46

fine. Anything that’s entertaining or that is, like... you could read a book, right, on your Kindle or on your iPad, and I think you’d be fine falling asleep. Or you can listen to, like, some meditation video or some spiritual teacher or something, and that’ll actually help you fall asleep.

Speaker: 1
01:40:00

But if you’re on X or if you’re checking your email, then heck yeah, that’s gonna keep you up. So my hack for sleep is a little different. I normally fall asleep within minutes. And the way I do it is... you all have a meditation routine. You have a set time?

Speaker: 2
01:40:14

You have a set time

Speaker: 1
01:40:15

now? No. No. I sleep whenever I feel like it. Usually around one in the morning, two in the morning.

Speaker: 2
01:40:19

God damn. I’m in bed by ten. Yeah. I need to sleep.

Speaker: 1
01:40:21

I’m an owl. But if you wanna fall asleep, the hack I found is: everybody has tried some kind of a meditation routine. Just sit in bed and meditate. And your mind will hate meditation so much that if you force it to choose between the fork of meditation and sleeping, you will fall asleep. Every time.

Speaker: 1
01:40:35

Well, okay. So if you don’t sleep, you’ll

Speaker: 2
01:40:38

end up

Speaker: 1
01:40:39

meditating, which is

Speaker: 2
01:40:40

great too.

Speaker: 1
01:40:40

So, I just like the meditation.

Speaker: 0
01:40:43

You do the body scan. The coda to this story was

Speaker: 2
01:40:47

a friend of mine came to see me from the UAE, and he was here on Tuesday, and I was telling him about the dinner with Bryan. And he told me this story because he’s friends with Khabib, the UFC fighter. And he says, you know, when Khabib goes to his house, he eats anything and everything. Fried food, pizzas, whatever. But he trains consistently.

Speaker: 2
01:41:07

And my friend Adala says, how are you able to do that? And how does it not affect your physiology? He goes, I’ve learned since I was a kid. I sleep three hours after I train in the morning, and I sleep ten hours at night, and I’ve done it since I was, like, 12 or 13 years old.

Speaker: 0
01:41:23

That’s a lot of sleep.

Speaker: 2
01:41:24

It’s a lot of sleep.

Speaker: 0
01:41:26

And, you know, the direct correlation for me is if I do something cognitively taxing, you know, big heavy-duty conversations or whatever. So no heavy conversations at the end of the night. No existential conversations in the night. And then if I go rucking and I have the... you know, on the ranch, I put on a 35-pound weight vest.

Speaker: 0
01:41:45

So I walk... You do that

Speaker: 2
01:41:46

at night before you go to bed?

Speaker: 0
01:41:47

No. No. No. I do it anytime during the day, typically in the morning or the afternoon. But the one-to-two-mile ruck with the 35 pounds, whatever it is, just tires my whole body out, so that when I do lie down

Speaker: 2
01:41:59

Is that why you don’t prepare for the pod? You know,

Speaker: 0
01:42:04

I mean, this pod is a top 10 pod in the world, Chamath. Do you think it’s an accident?

Speaker: 2
01:42:09

Freeberg, what’s your sleep routine? Can you just go to bed? Do you just go get a post ride?

Speaker: 0
01:42:14

Warm bath, and I send J Cal a picture of my feet.

Speaker: 4
01:42:18

I’ll wait till J Cal’s done. But I do take a nice warm bath.

Speaker: 0
01:42:22

Nailed it.

Speaker: 1
01:42:23

But you

Speaker: 2
01:42:24

do you do it every night, a warm bath?

Speaker: 4
01:42:26

I do a yeah. I do a warm bath every night.

Speaker: 0
01:42:28

With candles too.

Speaker: 2
01:42:29

And do you do it right before you go to bed?

Speaker: 4
01:42:31

Yeah. I usually do it after I put the kids down, and I’ll basically start to wind down for bed. I do watch TV sometimes, but I do have the problem and the mistake of looking at my phone probably for too long before I turn the lights off.

Speaker: 2
01:42:43

So do you have a consistent time where you go to bed or no?

Speaker: 4
01:42:48

Usually, eleven to midnight, and then up at 6:30.

Speaker: 2
01:42:54

Man, I need eight hours. Otherwise, I’m a mess. I go to bed

Speaker: 0
01:42:57

at 6:30. So I hit between six and seven consistently. I try to go to bed in that eleven to 1 AM window and get up in the seven to eight window.

Speaker: 4
01:43:04

My problem is if I have work to do, I’ll get on the computer or my laptop. And then when I start that later in my evening routine, I can’t stop. And then all of a sudden, it’s, like, three in the morning, and I’m like, oh, no. What did I just do? And then I still have to get up at 6:30. So that does happen to me.

Speaker: 1
01:43:20

So last night was unusual for me, but it was kinda funny anyway. I thought, oh, I should go to bed early because I’m on All-In. Yeah. But I ended up eating ice cream with the kids late.

Speaker: 0
01:43:30

Wait. What was the brand? You said you went for another brand. I wanna know the brand.

Speaker: 1
01:43:34

I think it’s Van Leeuwen or something like that.

Speaker: 0
01:43:35

I think it’s Van Leeuwen or something

Speaker: 3
01:43:35

Van Leeuwen.

Speaker: 2
01:43:36

Van Leeuwen. Of course. New York and Brooklyn

Speaker: 3
01:43:39

is good.

Speaker: 1
01:43:39

That is good. The holiday cookies and cream. Oh, my god. So good.

Speaker: 0
01:43:42

Yeah. It’s so good. Van Leeuwen. So much

Speaker: 1
01:43:44

good to know. Anyway, then I was like, I probably ate too much to go to bed, so I better work out. So I did a kettlebell workout.

Speaker: 4
01:43:52

You sound like Chamath.

Speaker: 2
01:43:53

What did you say?

Speaker: 1
01:43:55

I know. I have eight kettlebells right here, right?

Speaker: 0
01:43:57

That’s my question. Yeah. Freeberg, this is called working out, Freeberg.

Speaker: 1
01:44:01

Yeah. That’s what you’re seeing here. And then, as I’m doing my kettlebell suitcase carry, I was texting with an entrepreneur friend. So you can tell how intense my workout was. And he’s in Singapore, so it was in the middle of the night for me and early for him. Well, I knew I had to go to bed. I was like, okay. Now I gotta get to bed. How do I get to bed?

Speaker: 1
01:44:19

My body is all amped up. I’ve got food in my stomach. I just did the kettlebells. My brain is all amped up, and the All-In podcast is tomorrow. And what time is it? It’s 1:30 in the morning.

Speaker: 1
01:44:31

I better get to bed. Uh-huh. So I put on, like, one of those little spiritual videos to calm me down. And then I got in bed, and I was like, there’s no way I’m falling asleep. And I started meditating, and five minutes later, I was asleep.

Speaker: 0
01:44:45

You know, actually, the Dalai Lama, on his YouTube channel, he’s got these great, like, two-hour discussions. You get about twenty, thirty minutes into that, you will fall asleep.

Speaker: 4
01:44:54

Well, yeah. But my learning is... Yeah. Watch any lecture from the SSENSE

Speaker: 1
01:44:59

center. Exactly. And my lesson is, my learning is that the mind will do anything to avoid meditation.

Speaker: 0
01:45:06

Yeah. Yes.

Speaker: 4
01:45:07

By the way, did you guys see, just before we wrap, did you see all the confirmations? RFK Jr. confirmed. Brooke Rollins confirmed. By the way, if you look at Polymarket, Polymarket had it all right a couple weeks ago. Like, I was tracking Polymarket.

Speaker: 1
01:45:19

There was a moment where Tulsi fell to, like, 56%. There was a moment where RFK fell to 75%, but then they bounced back and it was done.

Speaker: 4
01:45:27

You could’ve You

Speaker: 0
01:45:28

could’ve bet that, man. You could’ve made money.

Speaker: 1
01:45:29

Yeah. Polymarket had it.

Speaker: 4
01:45:30

And the media was like, no way he’s getting confirmed. Yeah. This is not gonna happen. But Polymarket knows. It’s so interesting.

Speaker: 1
01:45:37

Well, I saw a very insightful tweet, and I forget who wrote it, so I’m sorry I can’t give credit. But the guy basically said, look, Trump has a narrow majority in the House and the Senate. Yeah. And he can get everything he wants as long as the Republicans stay in line. So all the pressure and all the anger that the MAGA movement is directing against the left is pointless. It’s all about keeping the right wing in line.

Speaker: 1
01:46:03

So it’s all the people saying to the senators, hey, I’m gonna primary you. It’s Nicole Shanahan saying I’m gonna primary you. It’s Scott Presler saying I’m moving to your district. That’s the stuff that’s moving the needle and causing the confirmations to go through. That’s how you get Kash Patel.

Speaker: 1
01:46:17

That’s how you get Tulsi Gabbard at DNI. That’s how you get RFK.

Speaker: 0
01:46:20

Do you think any of these, do you think any of them are too spicy for your taste? Or do you just like the whole burn-it-down, put-in-the-crazy-outsiders approach, and

Speaker: 1
01:46:29

It’s a

Speaker: 4
01:46:30

bad characterization. That’s not

Speaker: 0
01:46:32

a fair characterization. I mean, it’s like,

Speaker: 1
01:46:34

I never thought I’d see it. But I think between Elon and Sacks and people like that, we actually have builders and doers and financially intelligent people and economically intelligent people in charge. And, you know, despite all the craziness, Elon’s not doing this for the money. He’s doing it because he thinks it’s the right thing to do. Of course.

Speaker: 1
01:46:50

And having

Speaker: 2
01:46:51

he moved into the Roosevelt.

Speaker: 1
01:46:53

And I think for many of us, I had bought into the great-forces-of-history mindset where it’s just like, okay, it’s inevitable. This is what’s happening. Government always gets bigger, always gets slower.

Speaker: 4
01:47:02

Me too.

Speaker: 1
01:47:03

Me too. We just have to try and get stuff built before they just shut everything down and we turn into Europe. But the thing that happened then was, you know, Caesar crossed the Rubicon. The great man theory of history played out, and we’re living in that time. And it’s an inspiration to all of us, despite Sam Altman and Elon’s current fighting.

Speaker: 1
01:47:20

I know Sam was inspired by Elon at one point, and I think all of us are inspired by Elon. I mean, the guy can be the top Diablo player and do Doge and run SpaceX and Tesla and Boring and Neuralink. I mean, it’s incredibly impressive. That’s why I’m doing a hardware company now. It makes me wanna do something useful with my life. You know?

Speaker: 1
01:47:38

Elon always makes me question, am I doing something useful enough with my life? It’s why I don’t wanna be an investor. You know, Peter Thiel, ironically, he’s an investor, but he’s inspirational in that way too, because he’s like, the future doesn’t just happen. You have to go make it.

Speaker: 1
01:47:52

So, you know, we get to go make the future, and I’m just glad that Elon and Doge and others are making

Speaker: 0
01:47:56

the future. Wait, where? What do we got going on here?

Speaker: 1
01:47:59

No. I’m gonna be able to

Speaker: 2
01:48:00

get on the all

Speaker: 1
01:48:00

in podcast in a couple of months. But it’s really hard. It’s really difficult. I’m not sure I can pull it off. So let me try. Let me just make sure

Speaker: 3
01:48:06

Is it bio? Is

Speaker: 0
01:48:07

it drone related? Is it self driving related?

Speaker: 1
01:48:09

Drones are cool, but no, it’s not. Maybe everyone on this

Speaker: 4
01:48:12

podcast should be an angel investor.

Speaker: 0
01:48:14

Oh, yeah. So let’s

Speaker: 1
01:48:14

do a syndicate. Yeah.

Speaker: 0
01:48:15

Let’s do a syndicate. No no

Speaker: 4
01:48:17

syndicate, Jason. Just our money.

Speaker: 1
01:48:19

What are

Speaker: 0
01:48:19

you talking about? You know how I learned about syndicates? It was Naval. The first syndicate I ever did on AngelList, I think, is still the biggest. I don’t know, 5%, and Naval’s my partner on this, for Calm.com.

Speaker: 1
01:48:31

I think you’ll love what I’m working on if I pull it off. I think you guys will love it. I’d love to show you demos.

Speaker: 0
01:48:36

Send the check. Get that big cherry chip Van Leeuwen.

Speaker: 2
01:48:39

I love you guys.

Speaker: 3
01:48:40

What have

Speaker: 0
01:48:40

we learned?

Speaker: 2
01:48:41

I gotta go.

Speaker: 1
01:48:42

Okay.

Speaker: 2
01:48:42

Big shout out to Bobby and to Tulsi. That’s a huge, huge win for America.

Speaker: 0
01:48:46

I’m stoked about both of them.

Speaker: 1
01:48:48

Yeah. Congratulations.

Speaker: 0
01:48:50

I love

Speaker: 2
01:48:51

me some

Speaker: 0
01:48:52

Bobby Kennedy. Let’s get Bobby Kennedy back on the pod. Let’s get Bobby... hey, Bobby, come back on the pod. For the czar, David Sacks, your Sultan of Science, David Friedberg, the chairman dictator, Chamath Palihapitiya, and namaste, Naval. I am the world’s greatest moderator.

Speaker: 0
01:49:14

I’ll see you next time on the All-In pod. Namaste, bitches. Bye-bye. Later, guys.

Speaker: 5
01:49:27

And it said, we open sourced it to the fans, and they’ve just gone crazy with it.

Speaker: 2
01:49:31

Bobby West. Sweet of kin

Speaker: 0
01:49:37

wah.

Speaker: 2
01:49:40

Besties are gone.

Speaker: 1
01:49:43

That’s finding out a dog taking

Speaker: 5
01:49:51

We should all just get a room and just have one big huge orgy, because they’re all just useless. It’s like this, like, sexual tension that we just need to release somehow.

Speaker: 0
01:49:59

Let your feet be. Let your

Speaker: 1
01:50:01

feet be.

Speaker: 0
01:50:02

Let your feet. Be. We need to get mercy’s arm back ai.
