#2311 – Jeremie & Edouard Harris

Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, a company dedicated to promoting the responsible development and adoption of artificial intelligence. https://superintelligence.gladstone.ai/ Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcribe, Translate, Analyze & Share

Join 170,000+ incredible people and teams saving 80% and more of their time and money. Rated 4.9 on G2 with the best AI video-to-text converter and AI audio-to-text converter, AI translation and analysis support for 100+ languages and dozens of file formats across audio, video and text.

Start your 7-day trial with 30 minutes of free transcription & AI analysis!

More Affordable
1 %+
Transcription Accuracy
1 %+
Time & Cost Savings
1 %+
Supported Languages
1 +

You can listen to the #2311 – Jeremie & Edouard Harris using Speak’s shareable media player:

#2311 – Jeremie & Edouard Harris Podcast Episode Description

Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, a company dedicated to promoting the responsible development and adoption of artificial intelligence.

https://superintelligence.gladstone.ai/

Learn more about your ad choices. Visit podcastchoices.com/adchoices
This interactive media player was created automatically by Speak. Want to generate intelligent media players yourself? Sign up for Speak!

#2311 – Jeremie & Edouard Harris Podcast Episode Top Keywords

#2311 - Jeremie & Edouard Harris Word Cloud

#2311 – Jeremie & Edouard Harris Podcast Episode Summary

In this episode of the Joe Rogan podcast, the discussion revolves around the complexities and challenges of AI development, national security, and the geopolitical landscape. The conversation highlights the current administration’s bold approach to tariffs and onshoring, emphasizing the need for transformative strategies in the face of emerging technologies. The speakers delve into the intricacies of building secure data sets without hindering progress, suggesting an 80/20 approach to balance security and efficiency.

A significant portion of the discussion focuses on the challenges of AI development, particularly the need for refactoring and streamlining processes to enhance efficiency. The speakers also touch on the importance of collaboration and communication between different sectors, including government, academia, and industry, to address security concerns and foster innovation.

The episode features insights into the geopolitical implications of AI, with a focus on the U.S.-China dynamic. The speakers discuss the challenges of export controls and the need for strategic diplomacy to address concerns about loss of control over AI technologies. They emphasize the importance of understanding the structural advantages and weaknesses of different systems to remain competitive.

Actionable insights include the need for imposing time limits on litigation processes to expedite decision-making and the importance of aligning security measures with innovation goals. The speakers also stress the significance of engaging with diverse perspectives and fostering open communication to address complex challenges.

Overall, the episode underscores the urgency of addressing AI-related challenges through strategic collaboration, innovative thinking, and proactive measures to ensure security and competitiveness in a rapidly evolving technological landscape.

This summary was created automatically by Speak. Want to transcribe, analyze and summarize yourself? Sign up for Speak!

#2311 – Jeremie & Edouard Harris Podcast Episode Transcript (Unedited)

Speaker: 0
00:01

Joe Rogan podcast. Check it out. The Joe Rogan experience. Ai my day Joe Rogan podcast by night, all day.

Speaker: 1
00:12

Alright. So if there’s a doomsday clock for AI and where where we’re fucked, what what what time is it? If midnight is, we’re fucked.

Speaker: 2
00:21

We’re getting right into it. You’re you’re

Speaker: 0
00:22

not even gonna ask us what we had for breakfast. No. No. No. No. No. No. No. Jesus Christ.

Speaker: 2
00:27

Let’s get freaked out.

Speaker: 0
00:29

Well, okay. So so there’s one, without speaking to, like, the fucking doomsday dimension right out the gate, there’s a question about, like, where are we at in terms of AI capabilities right now, and what do those timelines look like? Right. There’s a bunch of disagreement.

Speaker: 0
00:42

One of the most concrete pieces of evidence that we have recently came out of a a lab, an an AI kind of evaluation lab called METER. And they put together this this test. Basically, it’s ai, you ask the question, pick a task that takes a certain amount of time, like an hour.

Speaker: 0
00:59

It takes ai a human a certain amount of tyler. And then, see, like, how likely, the best AI system is to solve for that task. Then ai a longer task. See, like, a ten hour task. Can it do that one?

Speaker: 0
01:11

And so ai now, what they’re finding is, when it comes to AI research itself, so basically, like, automate the work of an AI researcher, you’re hitting 50% success rates for these AI systems for tasks that take an hour long. And that is doubling every right now, it’s, like, every four months.

Speaker: 2
01:27

So, like, you you had tasks that you could do, you know, a person does in five minutes, like, you know, ordering an Uber Eats or, like, something that takes, like, fifteen minutes, like, maybe booking a flight or something like that. And it’s a question of, like, how much can these AI agents do? Right? Like, from five minutes to fifteen minutes to thirty minutes.

Speaker: 2
01:44

And in some of these spaces, like, research software engineering.

Speaker: 1
01:48

Mhmm.

Speaker: 2
01:49

And it’s getting further and further and further and doubling, it looks like, every four months. Sai it’s ai

Speaker: 0
01:55

If you if you extrapolate that, you basically get to task to take a month to complete. Like, by 2027, task that take an AI researcher a month to complete these systems will be completing with, like, a 50% success rate.

Speaker: 2
02:07

So you’ll be able to have an AI on your show and ask it what the doomsday clock is like by then.

Speaker: 1
02:13

Ai won’t laugh.

Speaker: 2
02:16

It’s gonna be worth it. It’ll have a terrible sense of humor about it.

Speaker: 0
02:19

Just make sure you ask you what it had for breakfast before you starting.

Speaker: 1
02:23

Yeah. What about quantum computing getting involved in AI?

Speaker: 0
02:27

It’s so yeah. Honestly, I don’t think it’s if you think that you’re gonna hit, human level AI capabilities across the board, say, twenty twenty seven, 20 20 eight, which when you talk to some of these the people in the labs themselves, that’s the timelines they’re looking at.

Speaker: 0
02:41

They’re not confident. They’re not sure, but that seems pretty plausible. If that happens, really, there’s no way we’re gonna have quantum computing that’s gonna be giving enough of a bump to these techniques. You’re gonna have standard classical computing. One way to think about this is that the data centers that are being built today are being thought of literally as the data centers that are going to house, like, the artificial brain that powers superintelligence, human level AI when it’s built in, like, 2027, something like that.

Speaker: 1
03:08

So how would how knowledgeable are you when it comes to quantum computing?

Speaker: 0
03:14

So a little bit. I mean, I like, I did my, my grad studies in, like, the foundations of quantum mechanics.

Speaker: 1
03:21

Oh, great.

Speaker: 0
03:22

Yeah. Well, it was a mistake, but I appreciate it

Speaker: 2
03:24

for the person

Speaker: 1
03:25

Why was it a mistake?

Speaker: 0
03:26

You know, so academia is, like, kind of funny thing. It’s really bad culture. It it teaches you some really terrible habits. So, basically, my entire life after academia and Ed’s too Yeah. Was unlearning these, like, terrible habits of it’s it’s all zero sum, basically. It’s not like when you’re working in startups.

Speaker: 0
03:45

It’s not like, you know, when you’re working in tech where, you know, you build something and something somebody else builds something that’s meh, and you can team up and just, like, make something amazing. It’s always wars over who gets credit, who gets their name on the paper.

Speaker: 0
03:57

Did you cite this fucking stupid paper from two years ago because the author has an ego and you gotta be on I was literally, at one point, the I I’m not I’m not gonna give you any details here, but, like, there was a collaboration that we ran, with, like, this anyway, fairly well known ai.

Speaker: 0
04:15

And my supervisor had me, like, write the emails that he would send from his account sai that he was seen as, like, the guy who is, like, interacting with this bigwig. That kind of thing is, like, doesn’t tend to happen in startups, at least not in the same way.

Speaker: 1
04:30

Because everybody So he wanted credit for the like, he wanted to seem like he was the genius who was facilitating this?

Speaker: 2
04:36

For sounding smart on email. Oh. Right?

Speaker: 0
04:38

But but that that that happens everywhere. Dude, yeah.

Speaker: 2
04:41

And the reason it happens is that these guys who are, like, professors or even not even professors just like your post doctoral guy who’s, like, supervising you, they can write your letters of reference and control your career after that that last

Speaker: 0
04:55

year by the ball.

Speaker: 2
04:55

They can do whatever. And so what Jer did

Speaker: 1
04:58

It’s ai a movie.

Speaker: 0
04:59

Totally. It’s gross.

Speaker: 2
05:00

It’s gross.

Speaker: 1
05:00

It’s gross. It’s gross. It’s gross.

Speaker: 0
05:01

It’s gross. It’s gross. It’s gross. It’s gross. It’s gross. It’s gross. It’s a gross movie.

Speaker: 1
05:02

It’s a gross boss in a movie that wants to take credit for your work.

Speaker: 2
05:05

And it’s real. It’s rampant. And the way to escape it is to basically just be ai, fuck this. I’m gonna go do my own thing. So Jer dropped out of grad school to come start a company. And and I mean, honestly, even that, it took it took me it took both of us, like, a few years to, like, unfuck our brains and unlearn the bad habits we learned.

Speaker: 2
05:24

It was really only a few years later that we started, like, really really getting a good, like, getting a good flow going.

Speaker: 0
05:31

You’re also you’re ai disconnected from the, like, base reality when you’re in the the ai tower. Right? Right. If you’re there’s something beautiful about and and this is why we spent all our time in startups. But there there’s something really beautiful about, like, it’s just a bunch of assholes, us, and, like, no money sana nothing in a world of, like, potential customers.

Speaker: 0
05:50

And it’s, like, you actually it’s not that different from, like, stand up comedy in a way. Like, your product is, can I get the laugh? Right? Like, something like that. And it’s unforgiving. If you fuck up, it’s ai silence in the room. It’s the same thing with startups.

Speaker: 0
06:02

Like, the space of product that actually works is so narrow, and you gotta obsess over what people actually sana. And it’s so easy to fool yourself into thinking that you’ve got something that’s really good because your friends and family are ai, oh, no, sweetie. You’re doing a great job. Like, what a wonderful life. I I would totally use it. I totally see that all that stuff. Right?

Speaker: 0
06:20

And that’s I I love that because it forces you to change.

Speaker: 1
06:24

Mhmm. Yeah. The the whole indoctrination thing in academia is so bizarre because of these these, like, hierarchies of powerful people. And the just the idea that you have to work for someone someday and they have to take credit by being the the person on the email, that that will haunt me for days.

Speaker: 2
06:45

Oh, it’s ai.

Speaker: 1
06:46

I’ll be thinking about that for days now. I fucking can’t stand people like that. It drives me nuts.

Speaker: 0
06:51

One big consequence is it’s really hard to tell who the people are who are creating value in that space too.

Speaker: 1
06:55

Right? Of course. Sure. Because this is it’s just like television. One of the things about television shows is, sai I’ll give you an an example. A very good friend of mine who’s a very famous comedian had this show and his agent sai, we’re gonna attach these producers. It’ll help get it made.

Speaker: 1
07:12

And he goes, well, what are they gonna do? He goes, they’re not gonna do anything. It’s just be in name. He goes, but they’re gonna get credit. He goes, yeah. He goes, fuck that. He goes, no. No. Listen. Listen. This is better for the show. It’ll help the show.

Speaker: 1
07:23

Give meh but then they’ll have excuse me. They’ll have a piece of the show. He’s ai, yes. Yes. But it like, it’s a matter of whether the show gets successful or not, and this is a good thing to do. And he’s like, what are you talking about?

Speaker: 1
07:33

But it was a conflict of interest because this guy was meh the agent was representing these other people.

Speaker: 0
07:39

Oh, yeah.

Speaker: 1
07:39

But this is completely common. Yeah. So there’s these executive producers that are on shows that have zero to do with it.

Speaker: 0
07:47

It’s so many Yeah.

Speaker: 2
07:48

So many industries are like this. And that’s why we got into startups. It’s it’s literally, like, you and the world. Right? Yeah. It’s, like, in a way, like stand up comedy, like Jer said.

Speaker: 1
07:57

Or like podcasting.

Speaker: 2
07:58

Or like podcasting, where your enemy isn’t actually hate. It’s indifference. Like, most of the stuff you do, especially when you’re getting started, like, why would anyone, like, give a shit about you? They’re just not gonna pay attention.

Speaker: 1
08:08

Yeah. That’s not even your enemy. You know, that’s just all potential.

Speaker: 2
08:12

Mhmm.

Speaker: 1
08:12

That’s all that is, you know. Ai, your enemy is within you. It’s ai, figure out a way to make whatever you’re doing good enough that you don’t have to think about it not being valuable.

Speaker: 0
08:20

It it’s it’s meditative. Like, there’s no way for it not to be to be in some way a reflection of of, like, yourself. You know, you’re you’re kind of, like, in this battle with you trying to convince yourself that you’re great. So the ego wants to grow, and then you’re constantly trying to compress it and compress it.

Speaker: 0
08:34

And if there’s not that outside force, your ego will expand to fill whatever volume is given to it. Like, if you have money, if you have fame, if everything’s given, and you don’t make contact with the unforgiving on a regular basis, like, yeah, you know, you’re gonna end up you’re gonna end up doing that to yourself and and You could.

Speaker: 1
08:48

Yeah. It’s possible to avoid, but you have to have strategies.

Speaker: 0
08:52

Yeah. You have to be intentional about it. Yeah.

Speaker: 1
08:53

The best strategy is jujitsu.

Speaker: 0
08:56

Yeah. It’s it’s Mark Zuckerberg

Speaker: 1
08:58

is a different person now.

Speaker: 2
09:00

Yeah. Yeah. You can see it. You can see it.

Speaker: 1
09:02

Yeah. Well, it’s a really good thing for people that have too much power because you just get strangled all the time.

Speaker: 0
09:08

Yeah.

Speaker: 1
09:08

And then you just get your arms bent sideways and you and after a while, you’re like, okay. This is reality. This is reality. This it’s social hierarchy thing that I’ve created is just nonsense. It’s just smoke and mirrors. And they know it is, which is why they so rapidly enforce these hierarchies.

Speaker: 1
09:24

The best, like, sir and ma’am and all that kind of shit. That’s what that is. Like, you you don’t feel like you really have respect unless you say that. These poor kids that have to go from college where they’re talking to these dipshit professors out into the world and operating under these same rules that they’ve been, like, forced and indoctrinated to.

Speaker: 1
09:43

It’s God, to just make it on your own.

Speaker: 0
09:46

It’s amazing what you can get used to, though. And and, like, the Sana, you’re mentioning the producer thing. That is literally also a thing that happens in academia. So you’ll have these conversations where it’s, like, alright. Well, this paper is fucking garbage or something. But we we wanna get it in a paper, in a journal.

Speaker: 0
10:00

And so let’s see if we can get, like, a famous guy on the list of authors sai that when it gets reviewed, people go, like, oh, mister so and so. Okay. Like and that literally happens. Like, we you know.

Speaker: 2
10:11

The funny thing is, like, the hissy fits over this are like, the stakes are so brutally low. At least with your producer example, like, someone stands to make a lot of money. With this, it’s ai you get maybe, like, an assistant professorship out of

Speaker: 0
10:25

it at

Speaker: 2
10:25

best vatsal, like, $40 a year and you’re it’s just ai what this is it’s just

Speaker: 1
10:31

Ai the producers, it is money, but I don’t even think they notice the money anymore. I think a big part because all those guys are really, really rich already. I think, you know, if you’re a big time TV producer, you’re really rich. I think the big thing is being thought of as a genius who’s always connected to successful projects.

Speaker: 0
10:47

Right.

Speaker: 1
10:48

Yeah. That’s what they really like.

Speaker: 0
10:49

That’s that is, like, always gonna be a thing. Right?

Speaker: 1
10:51

It wasn’t one producer. It was, like, a couple. So there’s gonna be a couple different people that were on this thing that had zero to do with it. It was all written by a stand up comedian. His friends all helped him. They all put it together. And then he was ai, no. He he sana firing his agent over it.

Speaker: 0
11:07

Oh, shit.

Speaker: 2
11:07

Oh, geez. Good for him. I mean, yeah. Like, get

Speaker: 1
11:09

the fuck out of here.

Speaker: 2
11:11

At a

Speaker: 0
11:11

certain point for the producers too, it’s kinda like ai have people approaching you for help on projects that look nothing like projects you’ve actually done.

Speaker: 1
11:17

Uh-huh.

Speaker: 0
11:18

So I feel like it just it just adds noise to your your universe. Like, if you’re actually trying to build cool shit, you know what I mean?

Speaker: 1
11:23

Like Some people just wanna be busy. They just want more things happening and they think more is better. More is not better. Because more is energy that takes away from the better, whatever the important shit is.

Speaker: 0
11:33

Yeah. The focus.

Speaker: 1
11:34

You only have so much time until AI takes over. Then you’ll have all the time in the world because no one will be employed and everything will be automated. We’ll all be on universal basic income.

Speaker: 0
11:43

And that’s it. That’s sai show.

Speaker: 2
11:44

That’s The end.

Speaker: 1
11:46

That’s a sitcom.

Speaker: 2
11:48

That’s a sitcom.

Speaker: 1
11:48

A bunch of poor people existing on $250 a week.

Speaker: 0
11:51

Oh, I would watch that.

Speaker: 1
11:53

Yeah. Because because the government just gives everybody that. That’s what you live off of.

Speaker: 0
11:56

Like, weird shit is cheap. Like, the the stuff that’s, like, all, like well, the stuff you can get from chat bots and AI agents is cheap, but, like, food is super expensive or something. Yeah.

Speaker: 1
12:06

Organic food is gonna be you’re gonna have to kill people for it.

Speaker: 2
12:09

Well, you you will eat people. It will be like a Soylent world.

Speaker: 1
12:11

Right.

Speaker: 2
12:11

Hey, so really bad. Hey, so really bad.

Speaker: 1
12:13

Green. No.

Speaker: 0
12:14

Nothing’s more free range than people, though.

Speaker: 1
12:16

That’s true. Depends on what they’re eating, though. It’s just like animals. Uh-huh. You know? You know when I eat a bear that’s been eating saloni. They taste like shit.

Speaker: 0
12:23

Yeah. I know. I didn’t know that.

Speaker: 2
12:24

Yeah. I’ve been eating my bear wrong this entire time.

Speaker: 1
12:30

Alright. So back to the quantum thing. So quantum computing is infinitely more powerful than standard computing. Would it make sense then that if quantum computing can run a large language model that it would reach a level of intelligence that’s just preposterous?

Speaker: 0
12:47

Sai, yeah. One way to think of it is, like, there are problems that quantum computers can solve way way way way better than classical computers. And so, like, the numbers get absurd pretty quickly. It’s, like, problems that a classical computer couldn’t solve if it had the entire lifetime of the universe to solve it.

Speaker: 2
13:02

Right.

Speaker: 0
13:02

A quantum computer right in, like, thirty seconds. Boom. But but the flip side, like, there are problems that quantum computers just, like, can’t help us accelerate. The the kinds of, like, one classic problem that quantum computers help with is this thing called, like, the traveling salesman, paradox, or problem where, you you know, you have, like, a bunch of different locations that a salesman needs to hit, and what’s the best path to hit them most efficiently.

Speaker: 0
13:25

Ai, like, kind of a classic problem if you’re going around different places and have to make stops. There there are a lot of different problems that have the right shape for that. A lot of quantum machine learning, which is a field, is focused on how do we take standard AI problems, like, AI, workloads that we sana run, and, like, massage them into a shape that gives us a quantum advantage.

Speaker: 0
13:45

And that’s it’s a nascent field. There’s a lot going on there. I would speak, like, my personal expectation is that we just build the human level AI and very quickly after that super intelligence, without ever having to factor in quantum. But it’s a

Speaker: 1
14:01

Could you define that for people? What’s the difference between human level AI and superintelligence?

Speaker: 2
14:06

Yeah. So yeah. Human level AI ai, like, Ai. You you can imagine, like, it’s AI that is as smart as you are in, let’s say, all the things you could do on a computer. So, you know, you can yeah. You can order food on a computer, but you can also write software on a computer. You can also email people and pay them to do shit on a computer. You can also trade stocks on a computer.

Speaker: 2
14:27

So, it’s ai as smart as a smart person for that. Superintelligence people have various definitions and they’re all kinds of like honestly hissy fits about ai different definitions. Generally speaking, it’s something that’s like very significantly smarter than the smartest human. And so you think about it.

Speaker: 2
14:44

It’s it’s kinda like, it’s it’s as smart as much smarter than you as you might be smarter than a toddler. And you think about that and you think about, like, the, you know, how how do you how would a how would a toddler control you? It’s kinda hard. Like, you can you can outthink a toddler pretty meh, like, any day of the week.

Speaker: 2
15:05

And so, superintelligence gets us at these levels where you can potentially do things that are completely different. And basically, you know, new scientific theories and we last time we talked about, you know, new new stable forms of matter that were being discovered by these kind of narrow systems.

Speaker: 2
15:22

But now you’re talking about a system that is, like, has that intuition combined with the ability to talk to you as a human and to just have really good, like, rapport with you, but can also do meh. It can also write code. It can also, like, solve quantum mechanics and has that all kinda wrapped up in the same package.

Speaker: 2
15:42

One one

Speaker: 0
15:43

of the things too vatsal, ai definition, if you build a human level AI, one of the things it must be able to do as well as humans is AI research itself Yeah. Or at least the parts of AI research that you can do in just, like, software, like, you know, in in by coding or whatever these these systems are designed to do.

Speaker: 0
15:59

And so so one implication of that is you now have automated AI researchers. And if you have automated AI researchers, that means you have AI systems that can automate the development of the next level of their own capabilities. Right. And now Right. You’re getting into that whole, you know, singularity thing where it’s an exponential that just builds on itself and builds on itself, which is kinda why, you know, a lot of people argue that, like, if you build human level AI, super intelligence can’t be that far away.

Speaker: 0
16:28

You basically unlocked everything.

Speaker: 1
16:30

And we kind of have gotten very close. Right? Like, it’s it’s past the the Fermi not the Fermi paradox. The, what what is it?

Speaker: 0
16:39

Oh, yeah. Yeah. The the Goddamn it.

Speaker: 1
16:41

We ai just talking about him the other day.

Speaker: 0
16:42

Yeah. The test. The,

Speaker: 2
16:43

oh, the Turing test

Speaker: 0
16:44

or anything?

Speaker: 1
16:44

Turing test. Thank you. We’re just talking about how horrible what happened to him was, you know, they, chemically penetrated him because he was gay. Yeah. Horrific. Winds up killing himself. The the the guy who figures out what’s the test to figure out whether or not AI ai become sentient.

Speaker: 1
17:00

And ai the way, he does this in, like, what, nineteen fifties? Oh, yeah.

Speaker: 0
17:02

Yeah. Saloni Turing is a, like, he the guy was a beast. Ai?

Speaker: 1
17:05

How did he think that through? He invented computers.

Speaker: 2
17:09

He invented basically the concept that underlies all computers. Like, he was, like, an absolute beast. He was a code breaker that he he broke the Sai codes. Right? And the He

Speaker: 0
17:18

also wasn’t even the first person to come up with this idea of machines, building machines, and there being implications ai human disempowerment. So So if you go back to I think it was, like, the late eighteen hundreds, and I I don’t remember the guy’s name, but Ai sort of, like, came up with this he was observing the industrial revolution and mechanization of labor and kind of starting to see more and more, like, if you zoom out, it’s almost like you have an a humans or an ant saloni, and the artifacts that that colony is producing that are really interesting are these machines.

Speaker: 0
17:46

Yeah.

Speaker: 2
17:46

You know,

Speaker: 0
17:46

you kind of, like, look at the surface of the Earth as it’s, like, gradually increasingly mechanized thing. And it’s not super clear if you zoom out enough, like, what is actually running the show here? Right. Like, you’ve got human servicing machines, humans looking to improve the capability of these machines at this frantic pace. Like, they’re not even in control of what they’re doing.

Speaker: 0
18:04

Are we Economic forces are pushing.

Speaker: 2
18:06

Are we the servant of the master, right, at a certain point? Like, yeah. Yeah. And the whole thing is, like, especially with a competition that’s going on, between the ai, which is kind of in general, you’re at a point where, like, do the CEOs of the labs, like, they’re they’re these big figureheads, they they go on interviews.

Speaker: 2
18:22

They talk about what they’re doing and stuff. Do they really have control over the any part of the system?

Speaker: 0
18:28

Oy. The the economy is in this, like, almost convulsive fit. Right? Like, you can almost feel, like, it’s like it’s hurling out AGI. Mhmm. Yeah. And Yeah. Ai like, for as one ai of, I guess, data point here, like, all these laps, OpenAI, Ai, Google, every year they’re spending, like, an aircraft carrier worth of capital, individually, each of them, just to build bigger data centers, to house more AI chips, to train bigger, more powerful models.

Speaker: 0
18:55

And that’s, like so so we’re actually getting to the point where if you look at on a a power consumption basis, like, we’re getting to, you know, two, three, four, five percent of US power production, if you project out into late, twenty twenties,

Speaker: 2
19:11

kind of 2026, ’20 ‘7.

Speaker: 0
19:12

You’re talking about Not for double digit though.

Speaker: 2
19:14

Not for double digit. Yeah. But for single digit. Yeah. You’re talking, like, that’s a few gigawatts. So one gigawatt.

Speaker: 0
19:19

So so not for single digit. It’s in the the, like, for for 2027, you’re looking at, like, you know, in the point five ish percent. But it’s, like, it’s a big fucking ram, like, you’re talking about gigawatts and gigawatts. One gigawatt is a million homes. So So you’re seeing ai one data center in 2027 is easily gonna break a gig. There’s gonna be multiple like that.

Speaker: 0
19:37

And sai, it’s like a thousand sorry, a million home city metropolis really that is just dedicated to training like one fucking model. Like, that that’s what this is. Again, if you zoom out of planet Earth, you can interpret it as, like, this like, all these humans frantically running around like ants just, like, building this, like, artificial brain.

Speaker: 0
19:56

It’s, like, one

Speaker: 2
19:56

take on ai assembling itself on the face of the planet.

Speaker: 1
20:00

Marshall McLuhan in, like, 1963 or something like that said, human beings are the sex organs of the machine world.

Speaker: 0
20:07

Oh, god.

Speaker: 2
20:08

That hits that hits different today.

Speaker: 1
20:10

Yeah. It does. It does. I’ve always said that if we were aliens or if aliens came here and studied us, they’d be ai, what is the dominant species on the planet doing? Well, it’s making better things. Yeah. That’s all it does.

Speaker: 2
20:22

It’s but

Speaker: 1
20:22

the whole thing is dedicated to making better things. And all of its instincts including materialism, including status, keeping up with the Joneses, all that stuff is tied to newer better stuff. You don’t want old shit. Yeah. You want new stuff. You don’t want an iPhone 12. You know, what arya you doing, you loser?

Speaker: 1
20:38

You know, like, we you need newer better stuff and they convince people, especially in the realm of, like, consumer electronics. Most people are buying things they absolutely don’t need. The vast majority of the spending on new phones is completely unnecessary.

Speaker: 0
20:55

Yeah.

Speaker: 2
20:55

But I I just need that extra, like, that extra, like, fourth camera though. Yeah. I ai like Sai my life isn’t complete.

Speaker: 1
21:01

I run, one of my phones is an iPhone 11, and I’m purposely not switching it just to see if I notice it. I fucking never notice anything. Ai watched YouTube on it. I text people. It’s all the same. I go online. It works. It’s all the same.

Speaker: 0
21:16

Probably the biggest thing there is gonna be the security ai, which,

Speaker: 1
21:20

No. They update the security. It’s all software.

Speaker: 0
21:22

But I mean, if your if your phone gets old enough, I mean, like, at a certain point

Speaker: 2
21:25

When they stop updating it? Yeah.

Speaker: 1
21:27

But, ai, iPhone one, you know, China’s watching all your dick pics.

Speaker: 0
21:30

Oh, dude. I mean, Salt Typhoon, they’re watching all our dick pics. Yeah. They’re definitely seeing ai.

Speaker: 1
21:35

What’s Salt Typhoon?

Speaker: 0
21:36

So Salt oh, sorry. Yeah. Yeah. So it’s this big, Chinese cyber attack actually starts to get us to, to kind of the the broader

Speaker: 1
21:42

great name, by the way. Salt Typhoon? Fuck yeah, guys.

Speaker: 0
21:45

I really wish I could name it.

Speaker: 2
21:46

They have the they have the coolest names for their cyber operations. As meant to destroy.

Speaker: 1
21:50

Salt Typhoon

Speaker: 0
21:51

sana pretty slick. You know what? It’s it’s kinda like when, when people go out and do, like, ai a an awful thing, ai, a school shooting or something, and then, like, oh, let’s talk about you know, if you give it a cool name, like, now the Chinese are definitely gonna do it again.

Speaker: 0
22:02

Anyway

Speaker: 1
22:03

because they have a cool name?

Speaker: 0
22:04

Yeah. That’s definitely a bad Salt typhoon.

Speaker: 2
22:05

Salt typhoon. Pretty dope.

Speaker: 0
22:07

Yeah. But it’s this thing where basically so so there was in the, the three g ai of protocol that was set up years ago, law enforcement agencies included back doors intentionally to be able to access comms, you know, theoretically, if they got a warrant and so on. And, well, you introduce a backdoor. You have adversaries ai China who are wicked good at cyber. They’re gonna find and exploit those backdoors.

Speaker: 0
22:31

And now, basically, they’re they’re sitting there and they had been for some people think, like, maybe a year or two before it was really discovered. And just a couple months ago, they kinda go, like, oh, cool. Like, we got fucking, like, China all up in our shit.

Speaker: 2
22:42

And this is, like, this is, like, flip a switch for them and, like, you turn off the power of water to a state or, like, you fucking yeah.

Speaker: 0
22:50

Well, sorry. This is sorry. Salt Ai, though, is about, just, sitting on the the, like, basically telecoms now.

Speaker: 2
22:56

Well, that’s the telecom one. Yeah. Ai.

Speaker: 0
22:57

It’s not the but but, yeah. I mean, that that’s another thing.

Speaker: 2
22:59

There’s another there’s another thing where they’re doing that too.

Speaker: 0
23:02

Yeah. And and so this is kind of where what what we’ve been looking into over the last year is this question of how what is if you’re gonna make, like, a a Manhattan project for super intelligence. Right? Which is that’s I mean, that’s what we’re texting about, like, way back and Right. And then, actually, funnily enough, we we shifted ai our date for security reasons.

Speaker: 0
23:20

But, if you’re gonna do a Manhattan project for for super intelligence, what does that have to look like? What does the security game have to look like to actually make it sai that China is not all all up in your shit? Like, today, it is extremely clear that at the world’s top AI labs, like, all that shit is being stolen.

Speaker: 0
23:40

Like, there there is not a single lab right now that isn’t being spied on successfully based on everything we’ve seen, by the Chinese.

Speaker: 2
23:47

Can I

Speaker: 1
23:48

ask you this? Yeah. Are we spying on the Chinese as well?

Speaker: 0
23:50

That’s a big problem. Do you wanna

Speaker: 2
23:53

We’re we’re we’re I mean, we’re definitely we’re definitely doing some stuff. But in terms of the the relative balance between the two, we’re not where we need to be.

Speaker: 1
24:03

They spy on us better than we spy on them? Is that what

Speaker: 2
24:05

you’re saying? Because, like, we because, like, they build all our shit. They’ve spilled all our shit.

Speaker: 1
24:09

Well, that was the Huawei situation. Right?

Speaker: 0
24:10

Yeah. And and it’s also the oh ai god. It’s the like, if you look at the power grid, So, this is now now public, but if you look at, like, transformer substations. So these are the essentially, anyway, they’re they’re a crucial part of the electrical grid. And there’s really, ai, basically, all of them have components that are made in China.

Speaker: 0
24:29

China’s known to have planted backdoors, ai, Trojans into those substations to fuck with our grid. The thing is, when you see a saloni typhoon, when you see, like, big Chinese sai arya big Russian cyberattack, you’re not seeing their best. These countries do not go and show you, like, their best cards out the gate.

Speaker: 0
24:46

You you show the bare minimum that you can without tipping your hand at the actual exquisite capability capabilities you have. Like, we’ve the the way that one of the, the the people kind of who’s who’s been walking us through all this, really well explained it is, like, the philosophy is you want to learn without teaching.

Speaker: 0
25:06

Right? You wanna use what is the lowest level capability that has the effect I’m after. And that’s sai is.

Speaker: 2
25:10

I’ll give an example. Like, I’ll I’ll tell you a story that’s that’s kinda ai it’s it’s a public story and it’s from a long time ai, but it kinda gives a flavor of, like, how far these countries will actually go when they’re playing the game for fucking real. So it’s 1945. America and The Soviet Union are ai best pals because they’ve just defeated the Nazis. Right?

Speaker: 2
25:31

To celebrate that victory and the coming new world order that’s gonna be great for everybody, the children of the Soviet Union give sai a gift to the American ambassador in Moscow this beautifully carved wooden seal of The United States Of America. Beautiful thing. Ambassador is thrilled with it. He hangs it up on behind his desk in his private office.

Speaker: 2
25:56

You can see where I’m going with this probably but Oh, yeah. Yeah. Seven years later, 1952, finally a curse to us ai, let’s take it down and actually examine this. So they dig into it and they find this incredible contraption in it called a cavity resonator. And this device doesn’t have a power source, doesn’t have a battery, which means when you’re sweeping the office for bugs, you’re not sana find it.

Speaker: 2
26:22

What it does instead is it’s ai. That’s it. That’s it.

Speaker: 0
26:26

It’s it’s The thing. They call it

Speaker: 2
26:28

the thing.

Speaker: 0
26:28

They call

Speaker: 2
26:28

it the thing. And what this cavity resonator does is it’s basically designed to reflect radio radiation back to a receiver to listen to all the noises and conversations and talking in the ambassador’s private office. And so

Speaker: 1
26:43

How’s it doing it without a power source?

Speaker: 2
26:45

So that’s what they do. So the Soviets, for seven years, parked a van across the street from the embassy, had a giant fucking microwave antenna aimed right at the ambassador’s office, and were ai zapping it and and looking back at the reflection and literally listening to every single thing he was saying.

Speaker: 2
27:04

And the best part was when the embassy staff was ai, we’re gonna go and, like, sweep the office for bugs periodically, they’d be like, hey, mister ambassador, we’re about to sweep your office for bugs. And the ambassador was like, cool. Please proceed and go and sweep my office for bugs.

Speaker: 2
27:20

And the KGB dudes in the van were ai,

Speaker: 0
27:22

just turn it off.

Speaker: 2
27:23

Sounds like they’re gonna sweep the office for bugs. Let’s turn off our giant microwave antenna. And they kept at it for seven years.

Speaker: 0
27:29

It was only ever discovered because there was this, like, British, radio operator who was just, you know, doing his thing, changing his ai. He’s like, oh, shit. Like, is that the ambassador of our stock? Randomly. So so the thing is oh,

Speaker: 2
27:39

and actually sorry. One other thing about that, if you heard that story and you’re ai thinking to yourself, hang on a saloni. They were shooting, like, microwaves at our ambassador twenty four seven for seven years. Woah. Doesn’t that seem like it might, like, ai his genitals or something? Yeah.

Speaker: 2
27:57

Or something like ai? You’re

Speaker: 1
27:58

supposed to have a lead vest.

Speaker: 2
27:59

And the answer is

Speaker: 0
28:01

jock. Yes.

Speaker: 1
28:02

Yes.

Speaker: 2
28:03

Yes. And this is something that came up in our investigation just from every single person who was, like, who was filling us in and who who dialed in and knows what’s up. They’re ai, look. So you got to understand, like, our adversaries, if if they need to, like, give you cancer in order to rip

Speaker: 0
28:21

your shit

Speaker: 2
28:22

off of your laptop, they’re gonna give you some cancer. Did he meh cancer? I don’t know specifically about the ambassador, but, like

Speaker: 0
28:29

It’s also that’s also so

Speaker: 2
28:33

we’re we’re ai with what

Speaker: 0
28:34

we can say. There’s there’s actually people that you talk to later that, can go in in more detail here. But, older technology like that, like, kind of lower powered, so you’re you’re you’re less likely to to look at that. Nowadays, we live in a different world. The guy that invented that microphone invented his his last name is theremin.

Speaker: 0
28:46

Ai

Speaker: 2
28:50

invented this instrument called the Theremin, which is a fucking really interesting thing

Speaker: 1
28:54

that Oh, he’s just moving his hands?

Speaker: 2
28:56

Yeah. Your hands control it, waving over this. What? It’s a fucking wild instrument. Have you

Speaker: 1
29:00

ever seen this before, Jamie?

Speaker: 2
29:01

Yeah. I saw Juicy J Prep play in it yesterday on Instagram. He’s, like, practicing. It’s sai fucking cool

Speaker: 1
29:07

ass thing. He’s also pretty

Speaker: 0
29:08

good at it too. That’s how it is.

Speaker: 2
29:11

There’s with two two con both hands are controlling it by moving in and out and speak, x, ai, z. I don’t I honestly don’t really know how

Speaker: 1
29:19

the fuck it works, but Wow. I’ve seen it. Wow. That is wild.

Speaker: 0
29:23

It’s it’s also a lot harder to do than it seems. So American the Americans tried to replicate this for years and years and years without without really succeeding. And, anyway, that’s all ai of part

Speaker: 1
29:32

I have a friend who used to work for intelligence agency, and he was working in Russia. And the fact they found that the building was bugged

Speaker: 2
29:40

with these super sophisticated bugs

Speaker: 1
29:40

that operate their power came from the ai, like, I have a mechanical watch on, so when when I move my watch, it

Speaker: 0
29:57

Powers up.

Speaker: 1
29:57

It powers up the spring and it keeps the watch. That’s an automatic that’s how an automatic mechanical watch works. They figured out a way to just ai the subtle swaying of the building in the ai, that was what was powering this listening device.

Speaker: 0
30:10

So this is the thing. Right? Like, the

Speaker: 1
30:12

I mean, what the

Speaker: 2
30:13

fuck? Well, and it was the the things that nation states

Speaker: 1
30:15

What’s up, Jamie?

Speaker: 2
30:16

Cool. Google says that’s that’s what was powering this thing, the great seal bug, which Really? That’s the thing.

Speaker: 1
30:22

So There’s another one? No. Oh, this is

Speaker: 0
30:24

so you can actually see in that video. I think there was a YouTube yeah.

Speaker: 1
30:27

So Same ai of thing, Jimmy?

Speaker: 2
30:28

Ai. I got I was just I typed in Russia ai bug building sai. The thing is what pops up.

Speaker: 1
30:35

The thing.

Speaker: 0
30:35

Which is

Speaker: 2
30:36

what we were just talking about. About.

Speaker: 1
30:37

Oh, that thing. It it so that’s powered the same way? Oh, Ai I I I don’t know. Ai don’t think

Speaker: 0
30:42

I I think it was powered by radio frequency, emission. So Ai there meh be another thing related to it. Not not sure, but yeah.

Speaker: 1
30:51

Maybe maybe Google’s a little confused. Alright. Maybe it’s the word sway is what’s throwing it off.

Speaker: 0
30:56

But it’s, nobody it’s it’s a great catch. And the only reason we even know that too is that the when the U twos were flying over Russia, they had a U two that got shot down in 1960. The Russians go like, oh, like, freaking Americans, like, spying on us. What the fuck? Ai thought we were buddies or what? Well, it’s sixties, ai I didn’t think that.

Speaker: 0
31:12

But, and then the Americans are ai, okay, bitch. Look at this. And they brought out the the seal, and that’s how it became public. It was basically ai the response to the Russian saying, like, you know,

Speaker: 1
31:23

Wow. Yeah. Yeah. They’re all dirty. Oh, yeah. Everyone’s spying on everybody.

Speaker: 0
31:28

That’s the thing. And

Speaker: 1
31:29

I think they probably all have some sort of UFO technology.

Speaker: 0
31:33

We need to talk about that point.

Speaker: 2
31:34

We we we turn off our mics and,

Speaker: 1
31:36

I’m 99% sure a lot of that shit

Speaker: 0
31:39

is ours. You need to talk to some of the

Speaker: 1
31:41

I’ve been talking to people.

Speaker: 0
31:42

Oh oh, I’m Sai Yeah. I’ve yeah. I’ve been

Speaker: 1
31:44

talking to a lot of people.

Speaker: 0
31:45

There there’s there might be some other people that you’d be interested in I would

Speaker: 1
31:49

very much be interested. Here’s the problem. Some of the people I’m talking to, I’m positive, were they’re talking to me to give me bullshit. Ai because I’m

Speaker: 0
31:59

I’m Are you on your list?

Speaker: 1
32:00

Are you, like No. You guys aren’t on the list. But there’s certain people I’m ai, okay. Maybe most of this is true, but some of it’s not on purpose. There’s there’s that. Ai I guarantee you, I know Sai talk to people that don’t tell me the truth.

Speaker: 0
32:12

Yeah. Yeah. It’s it’s interesting problem in, like, all intel. Right? Because there’s always the mix of incentives is so fucked. Like, the the adversary is trying to add noise into the system. You’ve got you’ve got pockets of people within the government that have different incentives from other pockets.

Speaker: 1
32:25

And then you have top secret clearance and all sorts of other things that are going on. Yeah. One one guy that texted me is, like, the guy telling you that the they aren’t real is literally involved in these meetings. So stop. Just stop listening down.

Speaker: 2
32:38

It’s ai one of the one of the one of the techniques, right, is ai, is actually to inject so much noise that you don’t know what’s what and you can’t follow. So this actually, this this happened in, in in the COVID thing. Right? The lab leak versus the natural, like, wet market thing. Yeah.

Speaker: 2
32:55

So I remember there was a there was a debate that, that happened about what was the origin of COVID. This was, like, a few years ago. It was, like, an eighteen or twenty hour long YouTube debate, just, like, punishingly long. And it was, like, there was a hundred thousand dollar bet either way on who would win, and it was, like, lab leak versus wet market.

Speaker: 2
33:16

And at the end of the eighteen hours, the conclusion was ai one of them won, but the conclusion was ai, it’s basically fifty fifty between them. And then I meh, like, hearing that and talking to some folks and being like, hang on a second. So you gotta believe that whether it came from a lab or whether it came from a wet market, one of the top three priorities of the CCP from a propaganda standpoint is, like, don’t get fucking blamed for COVID.

Speaker: 2
33:41

And that means they’re putting, like, one to ten billion dollars and some of their best people on a global propaganda effort to cover up evidence and confuse and blah blah blah. You really think that you that your 50%, ai, your that confusion isn’t coming from that incredibly resourced effort. Like, they know what they’re doing.

Speaker: 1
34:04

Particularly when different biologists and virologists who weren’t attached to anything

Speaker: 0
34:10

Yeah. Were

Speaker: 1
34:10

talking about, like, the the cleavage points and this different aspects of the virus that appeared to be genetically manipulated. Yeah. The fact that there was only one spillover event, not multiple ones, none of it made any sense. All of it seemed like some sort of a genetically engineered virus.

Speaker: 1
34:28

It seemed like gain of function research. And Yeah. And they they they your early emails were talking about that.

Speaker: 0
34:34

But even And

Speaker: 1
34:34

then everybody changed their opinion.

Speaker: 0
34:36

And even the taboo, right, against talking about it through that lens?

Speaker: 1
34:40

Oh, yeah. Total propaganda. It’s racist. Yeah. Which is crazy because nobody thought the Spanish flu was racist sana it didn’t even really come from Spain.

Speaker: 0
34:47

Yeah. That’s true. Yeah.

Speaker: 1
34:48

Came from Kentucky. Ai didn’t know that. Yeah. I think it was Kentucky or Virginia. Where where did the Spanish flu or originate from? But nobody got mad.

Speaker: 2
34:56

Well, that’s because that’s because the that’s because the state of Kentucky has an incredibly sophisticated propaganda machine.

Speaker: 0
35:02

Well And, pinned

Speaker: 2
35:04

it on

Speaker: 1
35:04

Staten Island. Kentucky. It was but it was I I think it was it was an agricultural thing. Kansas.

Speaker: 0
35:11

Kansas. Kansas.

Speaker: 1
35:11

Thank you.

Speaker: 0
35:12

Yeah. Goddamn Vatsal. You know? I’ve always I’ve always said that. I’ve always I know you were saying

Speaker: 1
35:17

in The United States h one n one strain that had genes of avian origin. By the way, this is people always talk about the Spanish flu. If it was around today, they would just everybody would just get antibiotics and we’d be fine.

Speaker: 2
35:26

Mhmm.

Speaker: 1
35:27

So this this whole mass die off of people.

Speaker: 2
35:29

It would be ai the Latinx flu and, we would be The Latinx flu? The Latinx flu. That was what

Speaker: 1
35:35

it would be. But that one didn’t stick at all.

Speaker: 2
35:37

It didn’t stick at all. Vatsal.

Speaker: 1
35:38

No. There’s a lot of people, like, claiming they never used it, and ai pull up old videos of them. Yeah. Like, that’s a dumb one. Like, it’s literally a gendered language, you fucking idiots. Yeah.

Speaker: 2
35:47

Like, you can’t just do that. That’s true.

Speaker: 0
35:49

It went on for a while though. Like, it it Sure.

Speaker: 1
35:52

Everything goes on for

Speaker: 2
35:53

a while.

Speaker: 1
35:53

Yeah. So think about how long they did lobotomies. They did lobotomies for fifty fucking years. Probably, like, hey, maybe we should stop doing this.

Speaker: 0
36:03

It was ai the same attitude that got, that got Turing chemically castrated. Right? I mean, it was like,

Speaker: 2
36:08

hey, let’s just

Speaker: 0
36:09

get in there and Yeah. Fuck around a bit and see

Speaker: 1
36:11

Well, this is before they had SSRIs and all sorts of other interventions.

Speaker: 0
36:14

But Yeah.

Speaker: 1
36:15

When what was the year lobotomies? It’s I believe it stopped in ’67. Was it fifty years?

Speaker: 2
36:20

I think you said ’70 last ai, and that was correct when I pulled it up.

Speaker: 1
36:24

Seventy years?

Speaker: 2
36:25

Nineteen seventy. Oh, I

Speaker: 1
36:26

think it was ’67.

Speaker: 2
36:27

I like

Speaker: 0
36:27

how this has come up so many times that Jamie’s like, I think last time you said it

Speaker: 2
36:30

was It

Speaker: 1
36:32

comes up all the time because it’s one of those things that’s like you can’t just trust the medical establishment.

Speaker: 2
36:37

Officially ’67, it says maybe one more in ‘7 oh, he

Speaker: 1
36:40

died in ’70 Oh, ai died in ’72. When did they start doing it? When the I think they arya in the ’30 or the twenties rather.

Speaker: 0
36:48

That’s pretty ai. You know, the first

Speaker: 2
36:49

the first time I was guy who did a lobotomy. Yeah. It says ’24

Speaker: 1
36:53

Freeman ai, watch DC Direct Labs. Thirty five, they tried it first.

Speaker: 2
36:58

A leutonomy.

Speaker: 1
37:00

A leutonomy. Imagine that, they just scramble your fucking brains.

Speaker: 0
37:03

But doesn’t it make you feel better to call it a leucotomy though? Because it sounds a lot more professional.

Speaker: 1
37:08

No. Lobotomy, leucotomy? It sound leucotomy sounds gross. Sounds like loogie. Like, you’re looking at a leucotomy. Like, lobotomy. Boy, Topeka, Kansas. Also Kansas. All all roads point to Kansas.

Speaker: 0
37:21

That’s the problem.

Speaker: 1
37:22

That’s what ai. When everything’s flat, you just lose your fucking marbles. You go crazy.

Speaker: 0
37:25

That’s the main issue with

Speaker: 2
37:26

that ai. Yeah.

Speaker: 1
37:27

So they did this for so long.

Speaker: 2
37:29

Somebody won a Nobel Prize for lobotomy. Wonderful. Imagine imagine being that person. Meh. Seriously. You’re kinda like, you know, you you you don’t wanna display it up in your shelf. It’s just

Speaker: 1
37:40

like Ai just a good indicator. It’s ai it should let you know that oftentimes sai is incorrect and that oftentimes, you know, unfortunately, people have a history of doing things and then they have to justify that they’ve done these things. Yeah. And they, you

Speaker: 0
37:55

know But now there’s also there’s so much more tooling too. Right? If you’re a nation state and you wanna fuck with people and inject narratives into the the ecosystem. Right? Like, the the whole idea of autonomous AI agents too, like, having these basically, like, Twitter bots or or whatever bots.

Speaker: 0
38:09

Like, a lot of one thing we’ve been we’ve been thinking about, Toucan, on the side is, like, the idea of, you know, audience capture. Right? Do you have, like ai, big people with high profiles and kind of gradually steering them

Speaker: 1
38:22

towards

Speaker: 0
38:22

a position by creating bots that, like, through comments, through ai votes, you know.

Speaker: 1
38:27

100%. It’s it’s absolutely real.

Speaker: 2
38:30

Yeah. And a couple of the the big, like, a couple of big accounts on x, like, that that we’re in touch with have sort of said, like, yeah. Especially in the last two years, it’s actually become hard, like, especially the thoughtful ones. Right? It’s become hard to, like, stay sana, not not on x, but, like, across social media, on on all the platforms.

Speaker: 2
38:49

And that is around when, you know, it became possible to have AIs that can speak like people, you know, 90%, ninety five % of the time. And so you’ve you have to imagine that, yeah, adversaries are are using this and doing this and pushing the frontier. No. Like,

Speaker: 1
39:05

there’s they’d be fooled if they did it

Speaker: 0
39:08

a %. You have

Speaker: 1
39:09

to do it because for sure we’re doing that.

Speaker: 0
39:11

And this is one of the things where, you know, like, it used to be so OpenAI actually used to do this assessment of their, AI models as part of their their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector. They pulled that out recently, which they’ve is kind of like Why? You can argue that it makes sense.

Speaker: 0
39:33

I I actually think it it’s, it’s somewhat concerning, because one of the things you might worry about is if these systems, sometimes they get trained through what’s called reinforcement learning. Potentially, you could imagine training these to be super persuasive by having them interact with real people and convince them practice at convincing them to do specific things.

Speaker: 0
39:50

If that if you get to that point, you know, these these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they sana, including pushing legislative agendas, vote, like Anyone

Speaker: 2
40:04

help them help them prep for meetings with, the Hill, the administration, whatever? And, like, how should I, like, convince this person to do that? Like Yeah.

Speaker: 1
40:13

Well, they’ll do that with text messages. Make it more businesslike. Yep. Make it friendlier. Make it more jovial.

Speaker: 0
40:19

But this is, like, the same optimization pressure that keeps you on TikTok. That same, like, addiction. Imagine that applied to, like, persuading you of some, like, some fact. Right?

Speaker: 1
40:28

Yeah.

Speaker: 0
40:28

That’s ai a

Speaker: 2
40:29

On the other hand, maybe a few months from now, we’re all just gonna be very, very convinced that it was all ai. It’s no big deal.

Speaker: 1
40:36

Yeah. Maybe they they’ll get so good that it’ll make sense to you. Maybe they’ll just be right.

Speaker: 0
40:44

Yeah. That’s ai works.

Speaker: 1
40:46

Yeah. It’s it’s a confusing time period, you know. We’ve we’ve talked about this ad nauseam, but it it bears repeating. This former FBI, analyst who investigated Twitter before Elon bought it said that he thinks it’s about 80% bots.

Speaker: 0
40:59

Yeah. Yeah. 80%. That’s that’s one of the reasons why the bot purge, like, when when Elon acquired it and and started working on it is is so important. Like, there needs to be the challenge is, like, detecting these things is so hard. Right?

Speaker: 2
41:10

So hard. Increasingly. Ai, more and more, they can hide, like, basically perfectly. Like, how do you tell the difference between a cutting edge AI bot and a human just from the You can’t because they actually can’t.

Speaker: 1
41:23

Generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it’s real. Yep. Especially now, AI images are insanely good now.

Speaker: 0
41:32

They really are.

Speaker: 1
41:33

Yeah. It’s crazy. And and if you have a person, you could just you could take a photo of a person and manipulate it in any way you’d ai. And then now this is your new guy. You could do it instantaneously. And then this guy has a bunch of opinions on things and seems to seems always aligned with the Democratic Party. But whatever, he’s a good ai.

Speaker: 0
41:50

He’s a

Speaker: 1
41:50

family man. Look, he’s out of his barbecue. He’s not even a fucking human being. And people are arguing with this bot, like, back and forth. And you’ll see it on any social issue. You see with Gaza and Palestine. You see it with abortion. You see it with religious freedoms.

Speaker: 0
42:05

Yeah.

Speaker: 1
42:06

You just see these bots. You see these arguments and, you know, you see, like, various levels. Position and then you see a more reasonable centrist position. But essentially, what they’re doing is they’re they’re consistently moving what’s okay Yes. Further and further in a certain direction.

Speaker: 2
42:23

And in fact, it’s it’s it’s it’s both directions. Like, it’s like Right. You know how when you’re trying to, like, you you you ai to capsize a boat or something, you’re you’re, like, fucking with your buddy at on the lake or something. So you you push on one ai, then you push on the other side Yeah. And you push until eventually it capsizes.

Speaker: 2
42:40

This is kind of, like, our electoral process is already naturally like this. Right? We we go, like, we have a party in power for a while then, like, they they get, you know, they basically get, like, you get tired of them and these you switch. And that’s ai of the natural way how democracy works or in a republic. But the way that adversaries think about this is they’re ai, perfect.

Speaker: 2
42:59

This swing back and forth, all we have to do is, like, when it’s on this way, we push and push and push and push until it goes more extreme. And then there’s a reaction to it. Right? And that’s swinging back, and we push and push and push on the other side until eventually something breaks.

Speaker: 2
43:13

And that’s a risk. Yikes. Yeah.

Speaker: 0
43:15

It’s it’s also, like, you know, the the organizations that are doing this, like, we already know this is part of Russia’s MO, China’s MO, because back when it was easier to detect, we already could see them doing this shit. So there is this website called, this person does not exist.

Speaker: 0
43:29

Ai still exist really now, but it’s kind of you you

Speaker: 2
43:32

kind of Super seeded.

Speaker: 0
43:33

Yeah. But you would ai every time you refresh this this website, you would see a different, like, human face that was AI generated. And, what the Russian Internet Research Agency would do Yeah. Yes. Exactly. What what all these these, and it’s it’s actually yeah. I don’t think they’ve really upgraded it.

Speaker: 2
43:48

Sai don’t know. Yeah.

Speaker: 0
43:49

But, or yeah. Style

Speaker: 2
43:50

That’s fake?

Speaker: 1
43:51

Wow. They’re

Speaker: 2
43:52

so good. This is old. This is, like, years old. Years old.

Speaker: 0
43:56

And you could actually detect these things pretty ai. Like, you might remember the the whole thing about, AI systems were having a hard time generating, like, hands that only had, like, five fingers. Right. That’s that’s vatsal for though.

Speaker: 2
44:06

That’s all for

Speaker: 0
44:06

Yeah. Little hints of it were though back in the day in this person does not exist, and you’d have the the Russians would take, like, the a face from that and then use it as the profile picture for, like, a a Twitter bot. Right. And so that you could actually detect. You’d be ai, okay. I’ve got you there. I’ve got you there. I can kinda get a rough count. Right.

Speaker: 0
44:22

Now we can’t, but we definitely know they’ve been in the game for a long time. There’s no way they’re not

Speaker: 2
44:27

in the now. Thing with the thing with, like, nation state, like, propaganda attempts, right, is that, like, people have this this idea that, like, like, I’ve caught this, like, Chinese influence operation or whatever. Like, we nail them. The reality is nation states operate at, like, 30 different levels.

Speaker: 2
44:44

And if you’re a ai, like, just influencing our information spaces as a priority for them, they’re not just gonna operate they’re not just gonna pick a level and do it. They’re gonna do all 30 of them. And so you, even if you’re ai among the best in the world, ai, detecting this shit, you’re gonna, like, you’re gonna catch and stop, like, levels one through 10.

Speaker: 2
45:04

Mhmm. And then you’re gonna be, like, you’re gonna be aware of, like, level eleven, twelve, 13, like, you’re working against it. And you’re, you know, meh maybe you’re you’re starting to think about level 16 and you you imagine, like, you know about level 18 or whatever, but they’re, like, they’re above you, below you, all around you.

Speaker: 2
45:18

They’re they’re incredibly, incredibly resourced. And this is something that came, like, came came through very strongly for us.

Speaker: 1
45:24

You guys have seen the Yuri Bezmenoff video from 1984 where he’s talking about how the all our educational institutions have been captured by the Soviet propaganda. It was talking about Marxism has been injected into school systems and how you have essentially two decades before you’re completely captured by these ideologies, and it’s gonna permeate and destroy all of your confidence in democracy.

Speaker: 1
45:50

And and he was a % correct. And this is before these ai of tools. Before because, like, the vast majority of those exchanges of information right now are taking place on social media. The vast majority of debating about things, arguing, all taking place on social media. And if that FBI analyst is correct, 80% of it’s bullshit Yeah. Which is really wild.

Speaker: 0
46:11

Well, and you look at, like, some of the the documents that have come out, I think it was, like, the, I think it was the CIA game plan, right, for regime change or, like, undermining or, like, how do you do it? Right? Have multiple decision makers at every level. Right. Right. Yeah. All these things and, like, what a surprise. That’s exactly what, like, The US bureaucracy looks like today.

Speaker: 1
46:28

Saloni everything down. Make change impossible. Yeah. Make it so that everybody gets frustrated with it and they give up hope. They they ai to do that to other countries. Like Yeah. For sure, they do that here.

Speaker: 0
46:39

Open society. Right? I mean, that’s that’s part of the trade off. And that’s actually a big big part of the challenge too. So when when we’re working on this, right, like, one of the things Ed was talking about these, like, the 30 different layers of security access or whatever.

Speaker: 0
46:50

One of the consequences is you bump into a team at so so, like, the teams we ended up working with on this project were the folks that we bumped into after the end of our our last investigation who kind of were, like, oh,

Speaker: 2
47:04

We talked about last year. Yeah.

Speaker: 0
47:05

Yeah. Yeah. Yeah. Like, looking at AGI, looking at the national security, kind of landscape around that. And a lot of them, like, really well placed. It was like, you know, speak forces guys from tier one units. So you’ll speak team six type thing. And because they’re sai, like, in that ecosystem, you you’ll see people who are, like, ridiculously specialized and competent, like, the best people in the world at doing whatever the thing is, like, to to break the security.

Speaker: 0
47:31

And they don’t know often about, like, another group of guys who have a completely different, capability set. And so what you find is, like, you’re you’re indexing, like, hard on this vulnerability and then suddenly someone says, oh, yeah. But by the way, I can just hop that fence.

Speaker: 2
47:46

So the really funny the really funny thing about this is, like, most or even, like, almost all of the really, really, like, elite security people kinda think that, like, all the other security people are dumbasses even when they’re not.

Speaker: 0
48:01

Or or, like, yeah. They’re they’re they’re they’re biased in the direction of because it’s so easy when everything’s, like, stovepiped.

Speaker: 2
48:06

But sai most people who who say they’re, like, elite at security actually are dumbasses. Because because most security is, like, about checking boxes and, like, SOC two compliance and and shit like that.

Speaker: 0
48:17

Yeah. It’s what what it is is it’s, like, so everything’s so stove piped Yeah. That that you don’t you literally can’t know what the exquisite state of the art is in another domain. So a lot easier for somebody to come up and be, like, oh, yeah, like, I’m actually really good at this other thing that you don’t know.

Speaker: 0
48:32

And so figuring out who actually is the, like, we had this experience over and over where, like, you know, you run into a team and then you run into another team. They have an interaction. You’re ai, like, oh, interesting. So, like, you know, like, these are are the really kind of, the people at the top of their game.

Speaker: 0
48:45

And that’s been this very long process to figure out, like, okay, what does it take to actually secure our critical infrastructure against, like, CCP, for example, ai, Chinese attacks if we’re if we’re building a superintelligence project. And it’s it’s this weird, like, kind of, challenge because of the stove ai.

Speaker: 0
49:02

No one has the full picture, and, like, we don’t think that we have it even now, but definitely don’t know of anyone who’s come, like, that like, this close to it.

Speaker: 2
49:11

The best people are the ones who, when they when they encounter another team and and other ideas and start to engage with it, are, like, instead of being, like, oh, like, you don’t know what you’re talking about, who just, like, actually lock on and go, like, that’s fucking interesting. Tell me more about that.

Speaker: 1
49:26

Right. People that have control of their ego.

Speaker: 2
49:28

Meh. %.

Speaker: 1
49:29

With everything.

Speaker: 2
49:30

The best of

Speaker: 1
49:31

the best

Speaker: 2
49:31

With everything

Speaker: 1
49:31

in life.

Speaker: 2
49:32

The best of the best, like, got there by eliminating their ego as much as they could. Yeah. Always the way it is.

Speaker: 0
49:41

Yeah. And it’s it’s it’s also, like, the the fact of, you know, the 30 layers of the stack or whatever it is, of all these security issues, means that no one can have the complete picture at any one ai, and the stack is changing all the time. People are inventing new shit. People things are are falling in and out of, and and so, you know, figuring out what is that team that can actually get you that complete picture is an ai, a, you can’t really do it’s hard to do it from the government side because you gotta engage with data center building companies.

Speaker: 0
50:11

You gotta engage with the AI labs. And and in particular, with, like, insiders at the labs who will tell you things that, by the way, the the lab leadership will tell you the opposite of in some cases. And so, like, it it’s just this this Gordian knot, like, where it’s, like, it took us months to to, like, pin down every kind of dimension that we think we’ve pinned down at this point.

Speaker: 2
50:29

I’ll give an example actually of of that, like, the the trying to do the handshake, right, between different sets of people. So we were talking to one person who’s, who’s thinking hard about data center security, working with, like, Frontier Labs on this shit, very meh, like, at the at the top of her game.

Speaker: 2
50:46

But she’s kind of from, like, the the academic space ai of Berkeley, like, the avocado toast kind of side of the speak, you know? Mhmm. And, she’s talking to us. She’d reviewed our the the report we put out, the investigation we put out. And she’s like, you know, I think think I think you guys are are talking to the wrong people. And we’re like, can you say more about that?

Speaker: 2
51:09

And she’s like, well, I I don’t think, like, you you know, you talked to tier one special forces. I don’t think they, like, know much about that. We’re like, okay. That’s not correct, but can you say why? And she’s like, I feel like those are just the people that, like, go and, like, bomb stuff.

Speaker: 0
51:24

Blow it up. Blow it up. Blow it up. It’s it’s understandable too because, like, there’s a lot of people Totally understandable. A lot of people have the wrong sense of, like, what a tier one asset actually can can do. It’s like Well,

Speaker: 1
51:33

that’s ego on her part, because she doesn’t understand what they do.

Speaker: 0
51:37

It’s ego all the way down. Right? Sai meh, like But

Speaker: 1
51:39

that’s a dumb thing to sai. If you literally don’t know what they do, and you say, don’t they just blow stuff up?

Speaker: 2
51:45

Where’s my latte? It’s a weirdly good impression, but She

Speaker: 0
51:47

did ask about a latte. Yeah. She did.

Speaker: 1
51:49

But did she talk in speak? You should fire everyone who talks in upspeak.

Speaker: 0
51:53

She didn’t talk in upspeak, but The

Speaker: 1
51:55

moment they do that, you just tell them to leave. There’s no way you have an original thought. And how you talk.

Speaker: 0
52:03

China, can you get out of our data center? Shah. Please? Can you

Speaker: 1
52:07

enjoy my avocado tape?

Speaker: 0
52:08

I I don’t wanna rip on on that too much though, because this is the one really important factor here is all these groups have a part of the puzzle. And they’re all fucking they are, like, world class

Speaker: 2
52:20

Yeah.

Speaker: 0
52:20

At their own little ai. And and a big part of what we’ve had to do is, like, bring people together and and they’re they’re, people who’ve helped us immeasurably do this. But, like, bring people together and and explain to them the value that each other has Right. In a way that’s, like, that that allows that bryden building to be made.

Speaker: 0
52:39

And by the way, the the the tier one guys are the the most, like, ego moderated of the people that we talk to.

Speaker: 2
52:47

There’s a

Speaker: 0
52:48

lot of, like, Silicon Valley hubris going around right now where people are, like, listen, like, get out of our way. We’ll figure out how to do this, like, super secure data center infrastructure. We we got this. Why? Because we’re the guys building the AGI motherfucker. Ai, like, that’s kind of the attitude. And it’s, like, cool, man. Like, that’s ai a doctor having an opinion about, like, how to repair your car.

Speaker: 0
53:05

I get that it’s not the, like like, elite ai of, like, you know, whatever. But someone has to help you build, like, a good frigging ai. Like, I mean, it’s not just that.

Speaker: 1
53:16

The Dunning Kruger effect.

Speaker: 2
53:18

Yeah. It’s a it’s a it’s a mixed bag too because, like, meh, the a lot of hyperscalers like like Google, Amazon genuinely do have some of the best private sector Yeah. Security around data centers in the world, like, hands down. The problem is there’s levels above that. And the guys who, like, look at what they’re doing and see what the holes are just go, like, oh, yeah. Like, I could get in there. No problem.

Speaker: 2
53:46

And they can fucking do it.

Speaker: 0
53:48

One thing my my my wife said to me on a couple of occasions, like, you seem to, like and this is towards the beginning of the project. Like, you seem to, like, change your mind a lot about what the right configuration is of how to do this. And, yeah, it’s because every other day, you’re having a conversation with somebody who’s, like, oh, yeah. Yeah.

Speaker: 0
54:04

Like, great job on on this thing, but, like, I’m not gonna do that. I’m gonna do this other completely different thing, and that just fucks everything over. And so you have enough of those conversations. And at a certain point, your your plan, your your game plan on this can no longer look like we’re gonna build a perfect fortress.

Speaker: 0
54:20

It’s gotta look like, we’re going to account for our own uncertainty on security side and the fact that we’re never gonna be able to patch everything. Like, you have to. I mean, it’s ai the the trend.

Speaker: 2
54:30

And that means you actually have to go on offense from the beginning as because, like, the truth is, and this came up over and over again, there’s no world where you’re ever gonna build the perfect exquisite fortress around all your shit and hide behind your walls like this forever. That just doesn’t work because no matter how perfect your system is and how many angles you’ve covered, like your your adversary is super smart, is super dedicated.

Speaker: 2
54:57

If you see the field to them, they’re right up in your face and they’re reaching out and touching you and they’re trying to sai, like, what what your seams are, where they break. And that just means you have to reach out and touch them from the beginning. Because until you’ve actually, like, reached out and used a capability and proved, like, we can take down that infrastructure. We can, like, disrupt that that cyber operation.

Speaker: 2
55:17

We can do this. We can do that. You don’t know if that capability is real or not. Like, you might just be, like, lying to yourself. I’m, like, I can do this thing whenever I sana, but actually

Speaker: 0
55:27

You’re ai more in academia mode than, like, arya mode because you’re not making contact every day with the thing. Right? You have you have to touch the thing. And there’s, like, there’s a related issue here, which is, a kind of, like, willingness that came up over and over again. Like, one of the Yes.

Speaker: 0
55:42

Kind of gurus of this space was, ai, made the ai couple of them made the point that, you know, you you can have the most exquisite capability in the world, but if you if you don’t actually have the willingness to use it, you might as well not have that capability. And the the challenges right now, China, Russia, like, our adversaries pull all kinds of stunts on us and get no consequences.

Speaker: 2
56:04

Particularly during the previous administration. This was a huge, huge problem during the previous administration where you actually you actually had sabotage operations being done on American soil by our adversaries where, you had administration officials as soon as ai a thing happened.

Speaker: 2
56:24

So there were for example, there was ai four different states had their nine one one systems go down, like, at the same tyler. Different systems, like, unrelated stuff. But it was, ai, it’s it’s this stuff where it’s, like, let me see if I can do that. Let me see if I can do it.

Speaker: 2
56:40

Let me see what the reaction is. Let me see what the the chatter is that comes back after I do that. And one of the things that that was actually pretty disturbing, about that was under that under that that that, administration or regime or or whatever, the response you got from the government right out the gate was, oh, it’s an accident.

Speaker: 2
56:59

And that’s actually unusual. The proper procedure, the normal procedure in this case, is to say we can’t comment on an ongoing investigation, which we’ve all heard. Right? Like, this you can’t comment on You either confirm nor deny. Exactly.

Speaker: 2
57:11

It’s all all that stuff and that’s and that’s what they say typically out the gate when they’re investigating stuff. But instead, coming out and saying, oh, it’s just an accident is a break with

Speaker: 1
57:20

this issue. That to?

Speaker: 2
57:22

They if if they say if they if they leave an opening or say, actually, this is an adversary action. We think it’s an adversary action. They have to respond. The public demands a response. And they don’t they’re they were It’s

Speaker: 0
57:38

a fear of escalation.

Speaker: 2
57:39

Fearful of escalating. So what

Speaker: 0
57:40

ends up happening, right, is and and ai the way, that that thing about, like, it’s an accident comes out often before there would have been time for investigators to physically fly on-site and take a look. Like, there’s no logical way that you could even know that at the time, and they’re like, boom, that’s an accident. Don’t worry about it.

Speaker: 1
57:56

So they have an official answer, and then their response is to just bury their head in the sand and not investigate.

Speaker: 0
58:02

Right. Because if you were to investigate, if you were to say, okay. We looked into this, it actually looks like it’s fucking, like, country x that just did this thing.

Speaker: 1
58:09

Right.

Speaker: 0
58:10

If that’s the conclusion, it’s hard to imagine the American people not being, like, what are we like, we’re letting these people, injure our American citizens on US soil, take out, like, US, national security, ai, or or critical infrastructure, and we’re not doing anything. Like, the concern is about this, like, we’re getting in our own way of of thinking, like, oh, well, escalation is going to happen, and, boom, we run straight to, like, there’s gonna be a nuclear war.

Speaker: 0
58:34

Everybody’s gonna die. Sai, like, when you do that, you the speak between nation stability does not come from the absence of activity. It comes from consequence. It comes from just like if you have, you know, a an individual who misbehaves in society, there’s a consequence and people know it’s coming.

Speaker: 0
58:50

You need to train your counterparts in the international community, your your adversary, to to not fuck with your stuff.

Speaker: 1
58:57

Can I can I stop for a second? When when so are you essentially saying that if you have incredible capabilities of disrupting grids and power systems and infrastructure, you wouldn’t necessarily do it, but you might try it to make sure it works a

Speaker: 2
59:12

little bit. Exactly.

Speaker: 1
59:13

And that this is probably the hints of some of this stuff because you’ve ai You gotta get your reps

Speaker: 2
59:18

in. Right? You’ve gotta get your reps in.

Speaker: 0
59:20

It’s ai it’s like okay. So suppose that, like, that I went to you and was ai, hey. I I bet I can kick your ass. Like, I I bet I can, like, frigging slap a rubber guard on you and, like, do whatever the fuck. Right? And you’re ai

Speaker: 2
59:30

I love your expression, by the way. Yeah.

Speaker: 0
59:31

Yeah. You look really convinced. It’s because I’m jacked. Right?

Speaker: 1
59:34

Well, no. There’s people that look like you that can strangle me, believe it or not.

Speaker: 0
59:38

Oh, that that Yeah.

Speaker: 1
59:38

There’s actually a lot of, like, very high level Brazilian jiu jitsu black belts that are just super nerds, and they don’t lift weights at all. They only do jiu jitsu. And if you only do jiu jitsu, you’ll have, like, a wiry body. That was heartless.

Speaker: 2
59:50

Sai just

Speaker: 0
59:50

slip that in, like, there’s, like, guys who look like you is, like, just ai fucking weird. You’re,

Speaker: 2
59:54

like, intelligent people.

Speaker: 1
59:56

No. They’re, like, some of the the most brilliant people I’ve ever met. The the really, that’s the issue. It’s, like, data nerds get really involved in jujitsu, and jujitsu’s data.

Speaker: 0
01:00:07

But here’s the thing. So so that’s exactly it. Right? So if if I told you Ai meh I can tap you out. Right? And And

Speaker: 1
01:00:12

they’re like, where have you been training?

Speaker: 0
01:00:13

Well, right. But and and you’re if you’re ai, oh, and meh answer was, oh, I’ve just read a bunch of books. Oh. You’d be like, oh, cool. Let’s go. Yeah. Ai? Because making contact with reality is where the fucking learning happens. Exactly. You can sit there and think all you want. Right.

Speaker: 0
01:00:26

But unless you’ve actually played the chess match, unless you’ve reached out, touched, seen what the reaction is and all this stuff, you don’t actually know what you think you know. And that’s actually extra dangerous. If you’re sitting on a bunch of capabilities and you have this, like, unearned sense of superiority, because you haven’t used those exquisite tools

Speaker: 2
01:00:43

Right.

Speaker: 0
01:00:43

Like, it’s a challenge.

Speaker: 1
01:00:44

And then you’ve got people that are head of departments, CEOs of corporations, everyone has an ego. We’ve got it.

Speaker: 2
01:00:50

Yeah. And and this ties into, like, how exactly how basically the international order and quasi stability actually gets maintained. So there’s, like, above threshold stuff, which is, like, you actually do wars for borders and, you know, well, there’s the potential for nuclear exchange or whatever.

Speaker: 2
01:01:07

Like, that’s, like, all stuff that can’t be hidden.

Speaker: 1
01:01:09

Ai? War games.

Speaker: 2
01:01:10

Exactly. Like, all the war games type shit. But then there’s below threshold stuff. The stuff that’s, like, you’re it’s it’s it’s always like the stuff that’s, like, hey. I’m gonna try to, like, poke you. Are you gonna react? What what are you gonna do? And then if if you do nothing here, then I go, like, okay.

Speaker: 2
01:01:23

What’s the next level? I can poke you. I can poke you. Because, like, one of the things that that we almost have an intuition for that’s that’s mistaken, that comes from ai historical experience is ai this idea that, you know, that countries can actually really defend their citizens in a meaningful way.

Speaker: 2
01:01:41

So, like, if you think back to World War one, the most sophisticated advanced nation states on the planet could not get past a line of dudes in a trench. Like, that was, like, that was the then they tried, like, thing after thing. Let’s try tanks. Let’s try aircraft. Let’s try fucking hot air balloons, infiltration to and it literally, like, the one side pretty much just ran out of dudes in that end of the war to put in their trench.

Speaker: 2
01:02:05

And so we have this thought that, like, oh, you know, countries can actually put put boundaries around themselves and actually but the reality is you can you there’s so many surfaces. The surface area for attacks is just too great. And so there’s there’s stuff ai you can actually, like, there’s the the Havana Syndrome stuff where you look at this, like, ratcheting escalation, like, oh, let’s, like, fry a couple of, embassy staff’s brains in Havana, Cuba.

Speaker: 2
01:02:32

What are they gonna do about it? Nothing? Okay. Let’s move on to Vienna, Austria. Something a little bit more Western, a little bit more orderly.

Speaker: 2
01:02:39

Let’s see what they do there. Still nothing. Okay. What if we move on to frying, like, Americans brains on US soil, baby? And they they went and did that.

Speaker: 2
01:02:49

And so this is one of these things where, like, stability in reality in the world is not maintained through defense, but it’s literally, like, you have, like, the Crips and the Bloods with different territories, and it it’s stable and it looks quiet. But the reason is that if you, like, beat the shit out of one of my one of my guys for no good reason, I’m just gonna find one of your guys and I’ll blow his fucking head off.

Speaker: 2
01:03:13

And that keeps peace and stability on the surface, but that’s the reality of sub threshold competition between nation states. It’s ai, you come in and, like, fuck with my boys. I’m gonna fuck with your boys right back. Until we push back, they’re gonna keep pushing that limit further and further.

Speaker: 0
01:03:30

One one important consequence of that too is, like, if you wanna avoid nuclear escalation, right, the the answer is not to just take punches in the mouth over and over in the fear that eventually, it’s if you do anything, you’re gonna escalate to nukes. What all that does is it empowers the adversary to keep driving up the ratchet. Like, what Ed just described there is an increasing ratchet of unresponded, adversary action.

Speaker: 0
01:03:56

If you if you address the low the ai of sub threshold stuff, if they cut an undersea cable and then there’s a consequence for that shit, they’re less likely to cut an undersea cable and things kind of stay at that level of the threshold, you know. And so so

Speaker: 1
01:04:09

Just the this letting them burn out Yeah. Exactly. That logic of just, like, let them do it. They’ll they’ll stop doing it after a while.

Speaker: 0
01:04:15

They’ll get it out of their system.

Speaker: 2
01:04:16

They ai that during

Speaker: 1
01:04:17

the George Floyd ai. Remember? That’s what New York City did. Like, this meh them loop.

Speaker: 2
01:04:21

Let’s just let let’s just

Speaker: 0
01:04:22

see how big Chaz gets.

Speaker: 1
01:04:24

Yeah. It’s a summer of love. Yeah. That’s right. Meh? Yeah. And and Yeah. Exactly.

Speaker: 0
01:04:29

The translation into, like, the the superintelligence scenario is, a, if we don’t have our reps in, if we don’t know how to reach out and touch an adversary and and induce consequence for them doing the same to us, then we have no deterrence at all. Like, we arya basically just sitting right now, our state the state of security is the labs are, like, super and, like, we we can and probably should go deep on that piece, but, like, as one data point.

Speaker: 0
01:04:57

Right? So there’s, like, double digit percentages of the world’s top AI labs or America’s top AI labs. Of employees. Of employ employees that are, like, Chinese nationals or have ties to the Chinese mainland. Right? So that’s that’s great.

Speaker: 0
01:05:11

Why don’t we build the Manhattan

Speaker: 2
01:05:12

Project? Funny. Right?

Speaker: 1
01:05:14

Like so so So stupid.

Speaker: 0
01:05:16

It but it’s it’s also, like, it’s, the the challenge is when you talk to people who actually oh, jeez. When you talk to people who actually have experience dealing with, like, CCP activity in this space. Right? Like, there’s one story that that we heard that is probably worth, like, relaying here.

Speaker: 0
01:05:31

It’s, like, this guy, from, from an intelligence agency was saying, like, hey, so there was this power outage out in Berkeley, California back in, like, 2019 or something. And the Internet goes out across the the whole campus. And so there’s this dorm and, like, all of the Chinese students are freaking out because they have an obligation to do a time based check-in and basically report back on everything they’ve seen and heard to basically a CCP handler type thing.

Speaker: 1
01:06:00

Right.

Speaker: 0
01:06:01

And if they don’t, like, maybe your mother’s insulin doesn’t show up. Maybe your, like, brother’s travel plans get denied. Maybe the family business gets shut down. Like, there’s the range of options that this massive CCP state coercion machine has. This is, like, the you know, they’ve got intern like, software for this.

Speaker: 0
01:06:21

Like, this is an institutionalized, like, very well developed and efficient framework for just ratcheting up pressure on individuals overseas, and they believe the Chinese diaspora overseas belongs to them. If you look at, like, what the Chinese Communist Party writes in its, like, in its written, like, public communications, they see, like, Chinese ethnicity as being a green.

Speaker: 0
01:06:44

Like, it is like, no one is a bigger victim of this than the Chinese people themselves who are abroad, who ai made amazing contributions to American AI innovation. You just have to look at the names on the freaking papers. It’s like ai guys are are wicked, but the problem is we also have to look head on at this reality.

Speaker: 0
01:07:00

Like, you can’t just be ai, oh, I’m not gonna say it because it makes me feel funny inside. Someone has to stand up and point out the obvious that if you’re gonna build a fucking Manhattan project for superintelligence and the idea is to, like, be doing that when China is a key rival nation state actor, yeah, you you’re gonna have to find a way to account for the personnel security side.

Speaker: 0
01:07:19

Like, at some point, someone’s gonna have to do something about that.

Speaker: 2
01:07:21

And it’s like you can see they’re they’re they’re hitting us right where we’re weak. Right? Like, America is the place where you come and you remake yourself, like, send us your tyler and you’re you’re hungry and you’re poor.

Speaker: 0
01:07:32

And Which is true and important.

Speaker: 2
01:07:33

It’s true and important, but they’re playing right off of that because they know that we don’t we just don’t wanna look at that problem.

Speaker: 1
01:07:40

Yeah. And Chinese nationals working on these things is just bananas. The fact they have to check-in with the CCP. Yeah. And arya they being monitored? I mean, how much can you monitor them? How what do you know that they have? What what equipment have they been given?

Speaker: 0
01:07:53

You can’t constitutionally. Right?

Speaker: 2
01:07:55

You can’t just constitutionally. You you it’s also you can’t legally deny someone employment on that basis in a private company. Sai that’s and that’s something else we we found and we’re kind of amazed by. And even honestly, just ai the the regular kind of government clearance process itself is inadequate.

Speaker: 2
01:08:15

It moves moves way too slowly and it doesn’t actually even even in the government, you we’re talking about top secret clearances. The information that they, like, look at for top secret, we heard from a couple of people, doesn’t include a lot of, like, key sources. So for example, it doesn’t include, like, foreign language sources.

Speaker: 2
01:08:33

So if the if the the head of the Ministry of State Security in China writes a blog post that says, like, Bob is, like, the best spy. He spied so hard for us and he’s, like, an awesome spy. If that blog post is written in Chinese, we’re not gonna see it. And we’re gonna be, like, here’s your clearance, Bob. Congratulations. Like, and we were, like, this that that can’t possibly be real, but, like, yeah. They’re, like, yep.

Speaker: 2
01:08:59

That’s that’s true. No one’s lurking. It’s complete naivete. There’s gaps in Every level of static. A lot of the yeah.

Speaker: 0
01:09:05

One of the worst things here is, like, the, That’s so crazy. Yeah. The what’s the the physical infrastructure? So the personnel thing is, like, fucked up. The physical infrastructure thing is another area where people don’t wanna look. Because if you start looking, what you start to realize is, okay, China makes, like, a lot of our, like, components for our transformers for the electrical grid. Yep.

Speaker: 0
01:09:25

But also, all these chips that are going into our, our big data centers for these massive training runs, where do they come from? They come from Taiwan. They come from this company called TSMC, Taiwan Semiconductor Manufacturing Company. We’re increasingly on shoring that, by the way, which is one of the best things that’s been happening lately is, like, massive amounts of TSMC capacity getting onshore in The US, but still being made.

Speaker: 0
01:09:47

Right now, it’s basically, like, a % there. The all you have to do is jump on the network at TSMC, hack the right network, compromise the firmware on the it was the software that runs on these chips to anyway, to to get them to to to run. And you basically can compromise all the chips going into all of these thing.

Speaker: 0
01:10:07

Never mind the fact that, like, Taiwan is like an ai, sai like, physically outside the Chinese sphere of influence for now. China is going to be prioritizing the fuck out of getting access to that. There have been cases, by the way, like Richard Chang, ai, the founder of, SMIC, which is the sort so so k.

Speaker: 0
01:10:25

TSMC, this massive, like, series of area aircraft carrier fabrication facilities.

Speaker: 2
01:10:32

They do, like, all the iPhone chips. Yeah. They do yeah. They do they do the the AI chips, which are the the things we care about here.

Speaker: 0
01:10:38

Yeah. They’re the only place on planet Earth that does this. It’s literally the like, it’s fascinating. It’s, like, the most, easily the most advanced manufacturing or or scientific process that primates on planet Earth can do is this this chip making process.

Speaker: 2
01:10:54

Nanoscale, like, material science where you’re you’re putting on, like, these these tiny, like, atom thick layers of stuff and you’re doing, like, 300 of them in a row with, like, you you have, like, insulators and conductors and different kinds of, like, semiconductors and these tunnels and shit. Just just ai the the complexity of it is just awe ai. That we can do this at all is, like, it’s magic. It’s magic.

Speaker: 1
01:11:18

And it’s really only been done in being done in Taiwan.

Speaker: 0
01:11:21

That is the only place, ai, truly the only place right now. Wow. And so ai Chinese invasion of Taiwan just looks pretty interesting through that lens. Right? Oh, boy.

Speaker: 2
01:11:28

Like Yeah.

Speaker: 0
01:11:29

Say goodbye to the iPhones. Say goodbye to, like, the the chip supply that we rely on. And then your super intelligence training run, like, damn, that’s interesting. Well, I

Speaker: 1
01:11:37

know Samsung was trying to develop a lab here or sai semiconductor factory here, and they weren’t having enough success.

Speaker: 0
01:11:44

Oh, so okay. So one one of the craziest thing to just to to illustrate how hard it is to do. So you spend $50,000,000,000. Again, an aircraft carrier, we’re throwing that around here and there. But an aircraft carrier worth of risk capital. What does that mean? That means you build the fab, the factory, and it’s not guaranteed it’s gonna work.

Speaker: 0
01:12:01

At first, this factory is pumping out these chips at, like, yields that are really low. In other words, like, the only, like, you know, 20% of the chips that they’re putting out are even useful, and that just makes it totally economically unviable. So you’re just trying to increase that yield desperately ai climb up higher and higher.

Speaker: 0
01:12:17

Intel famously found this so hard that they have this philosophy where when they build a new fab, if the philosophy is called copy exactly, Everything down to the color of the paint on the walls in the bathroom is copied from other fabs that actually worked because they have no idea Yeah.

Speaker: 0
01:12:33

Why a fucking fab works and another one doesn’t.

Speaker: 2
01:12:35

We got we got this to work. We got this to work. It’s like, oh my god. We got this to work. I can’t believe we got this to work. So we have to make it exactly ai. Because the expensive thing in the semiconductor manufacturing process is the learning curve. So ai Meh said, you start by, like, putting through a whole bunch of, like, the the starting material for the chips, which are called wafers.

Speaker: 2
01:12:57

You put them through your fab. The fab has got, like, 500 dials on it. And every one of those dials has gotta be in the exact right place or the whole fucking thing doesn’t work. So you send a bunch of wafers in it at great expense. They come out all fucked up in the first run.

Speaker: 2
01:13:12

It’s just ai it’s gonna be all fucked up in the first run. Then what do you do? You get a bunch of, like, PhDs, material scientists, like, engineers with scanning electron microscopes because all this shit ai, like, atomic scale tiny. They look at, like, all the chips and all the stuff that’s gone wrong and, like, oh, shit. These pathways got fused or whatever. But, like, yeah.

Speaker: 2
01:13:32

They’re you just need that level of expertise. And then they go

Speaker: 0
01:13:35

It’s it’s a mix. Right? Like,

Speaker: 2
01:13:36

you got a mix. Yeah. It’s a mix now in particular. But, like, yeah. You absolutely need humans looking at these things at at a certain level. And then they go, well, okay, ai, I’ve got a hypothesis about what might have gone wrong in that run. Let’s tweak this ai like this and this dial like that and run the whole thing again. And you hear these stories about, bringing a fab ai.

Speaker: 2
01:13:55

Like, you need you need you need a certain percentage of good chips coming out the other end, or, like, you can’t make money from the fab because most of your shit is just going right into the garbage.

Speaker: 0
01:14:05

Unless, and this is important too, your fab is state subsidized. So when you when you look at so TSMC is, like, they’re they’re alone in the world in terms of being able to to pump out these chips. But SMIC, this is the Chinese knockoff of TSMC founded, by the way, by a former senior TSMC executive, Richard Chung, who leaves saloni with a bunch of other people with a bunch of fucking secrets.

Speaker: 0
01:14:28

They get sued, like, in the early two thousands. It’s pretty obvious what happened there. Like, to most people, they’re ai, yeah. SMIC fucking stole that shit. They they bring a new fab online in, like, a year or two, which is suspiciously fast, start pumping out chips.

Speaker: 0
01:14:41

And now the Chinese ecosystem is ratcheting up, like, the the government is pouring money into SMIC because they know that, like, they can’t access TSM c chips anymore because the US government’s put pressure on Taiwan to block that off. And so meh fab in China is all about Sai, and they are like, it’s a disgusting amount of money they’re putting in.

Speaker: 0
01:15:03

They’re teaming up with Huawei to form, like, this complex of companies that it’s really inter I mean, the semiconductor industry in in China in particular is really, really interesting. It’s also a massive story of, like, self owns of The United States and and the Western world where we’ve been just shipping a lot of a lot of our shit to them for a long ai.

Speaker: 2
01:15:23

Like, the equipment that builds the chips. So, like and and there it’s also, like, it’s so blatant. And, like, they’re just honestly, a lot of the stuff is just, like, they’re they’re just giving us, like, a big fuck you. So give you a a really blatant example. So we have the way we set up export controls still today on most equipment that these semiconductor fabs use, like the Chinese semiconductor fabs use.

Speaker: 2
01:15:46

We’re still sending them a whole bunch of shit. The way we set export controls is instead of ai, oh, we’re sending this gear to China and, like, now it’s in China and we can’t do anything about it. Instead, we we still have this thing where we’re ai, no. No. No. This company in China is cool. That company in China is not cool.

Speaker: 2
01:16:04

So we can ship to this company, but we can’t ship to that company. And so you get this ridiculous shah. Like, for example, there’s there’s, like, an, a couple of facilities that you could see by satellite. One of the facilities is okay to ship equipment to. The other facility ai next door is, like, considered, you know, military connected or whatever, and so we can’t ship. The Chinese literally built a bridge between the two facilities.

Speaker: 2
01:16:29

So they can just, like, shimmy the wafers over to, like, oh, yeah. We use equipment and, like, shimmy it back and now, okay, we’re not so it’s, like and you can see it by ai. So they’re not even, like, trying to hide it. Like, our our stuff is just, like, so badly put together.

Speaker: 0
01:16:42

Ai prioritizing this so highly that, like, the idea that we’re gonna, so we do it by company through this basically, it’s like an export blacklist. Like, you can’t send to Huawei. You can’t send to any number of other companies that that are considered affiliated with the Chinese military or where we’re concerned about military applications.

Speaker: 0
01:16:59

Reality is in China, civil military fusion is their policy. In other words, every private company, like, yeah, that’s cute, dude. You’re working for yourself. Yeah. Hey. No. No.

Speaker: 0
01:17:07

No, buddy. You’re working for the Chinese state. We come in. We want your shit. We get your shit.

Speaker: 0
01:17:12

There’s no, like, there’s there’s no true kind of distinction between the two.

Speaker: 1
01:17:15

Right.

Speaker: 0
01:17:16

And so when you have this attitude where you’re, like, yeah, you know, we’re gonna have some companies where we’re, like, you can’t send to them, but you could, you know, that creates a situation where literally Huawei will spin up, like, a dozen subsidiaries or or new companies with new names that aren’t on our our blacklist. And so, like, for for months or years, you’re able to just ship chips to them.

Speaker: 0
01:17:35

Nowhere and that’s to say nothing of, like, using intermediaries in, like, Singapore or other countries.

Speaker: 2
01:17:39

Oh, yeah. You wouldn’t you wouldn’t believe the number of AI chips that are shipping to Malaysia. Can’t wait for the latest, like, huge language model to come out of Malaysia? And, actually, it’s just proxying for for the most part. There’s there’s some amount of stuff actually going on in Malaysia, but for the most part So

Speaker: 0
01:17:58

we’ve got

Speaker: 1
01:17:58

all can The United States compete? If you’re thinking about all these different factors, you’re thinking about espionage, people that arya students from CCP connected, contacting. You talk you’re talking about all the different network equipment that has third party input. You could siphon off data.

Speaker: 1
01:18:16

And then on top of that, state funded, everything is encouraged by the state, inexorably connected. You can’t get away from it. You do what’s best for the Chinese government.

Speaker: 0
01:18:29

Well, so step one is you gotta so you gotta stem the bleeding. Right? So right now, OpenAI pumps out a new massive scaled AI model. You better believe that, like, the CCP has a really good chance that they’re gonna get their hands on that. Right? So if you all you do right now is you ratchet up capabilities.

Speaker: 0
01:18:45

It’s like that that meme of, like, there’s a, you know, a motorboat or something and some guy who’s, like, surfing behind and there’s a string attaching them, and the motorboat guy goes, like, hurry up, like, accelerate. They’re they’re catching up. That that’s kinda what’s what’s happening right now is we’re we’re helping them accelerate.

Speaker: 2
01:19:00

We’re pulling them saloni, basically.

Speaker: 0
01:19:01

Yeah. Pulling them along. Now, I I will say, like, our over the last six months, especially, where our focus has shifted is, like, how do we actually build, like, the secure data set? Like, what does it look like to actually lock this down? And also, crucially, you don’t want the security measures to be sai, irritating and invasive that they slow down the progress.

Speaker: 0
01:19:21

Like, there’s this kind of dance that you have to do. We actually so this is part of what was in the redacted version of of the report because we we don’t want to telegraph that necessarily. But, there are ways that you can get a really good eighty twenty. Like, there are ways that you can play with things that are already, say that are already built, and and have a lower risk of them having been compromised.

Speaker: 0
01:19:46

The

Speaker: 2
01:19:47

And and look, a lot of the stuff as well that we’re talking about, like, big problems around China, a lot of this is, like, us just, like, tripping over our own feet and self owning ourselves.

Speaker: 0
01:19:57

Yeah.

Speaker: 2
01:19:57

Because the reality is, like, the the yeah. The Chinese are trying to indigenize as fast as they can. Totally true. But the gear that they’re putting in their facilities, like, the machines that actually, like, do this, like, we talked about atomic patterning 300 layer. The the machines that do that, for the most part, are are shipped in from the West, are shipped in from The Netherlands, shipped in from Japan, from us, from, like, allied countries.

Speaker: 2
01:20:20

And the the reason that’s happening is, like, the in in many cases, you’ll you’ll have this it’s, like, honestly, a little disgusting, but, like, the CEOs and executives of these companies will brief, like, the the the administration officials and say, like, ai. Like, if you guys, like, cut us off from China from selling to China, like, our business sana gonna suffer. Like, American jobs are gonna suffer.

Speaker: 2
01:20:43

It’s gonna be really bad. And then a few weeks later, they turn around in their earnings calls. And they go, like, you you know what? Yeah. So we expect, like, the export controls or whatever, but it’s really not gonna have a a big impact on us.

Speaker: 2
01:20:54

And the really fucked up part is if they lie to their shareholders on their earnings calls and their stock price goes down, their shareholders can sue them. If they lie to the administration on a issue of critical national security interest, fuck all happens to them. Wow. It’s great incentives.

Speaker: 0
01:21:13

And and this is by the way, it’s like one reason why the it’s so important that we not be constrained in our thinking about, like, we’re gonna build a Fort Knox. Like, this is where Yeah. The interactive, messy, adversarial environment is sai so important. You you have to introduce consequence.

Speaker: 0
01:21:29

Like, you have to create a situation where they perceive that if they try to do a, you know, an espionage operation or intelligence operation, there will be consequences. That’s ai now not happening. And so it’s just and and that’s kind of a historical artifact over, like, a lot of time spent hand wringing over, well, what if they and then we and then eventually nukes.

Speaker: 0
01:21:50

And, like, that kind of thinking is, you know, if if you dealt with your your kid when you’re, like, when you’re raising them, if you dealt with them that way, and you were, like, hey, you know, so so little Timmy just, like, he stole his first toy, and, like, now’s the time where you’re gonna, like a a good parent would be, like, alright, little Timmy, fucking come over here, you son of a bitch.

Speaker: 0
01:22:08

Take the fucking, thing, and we’re gonna bring it over to the people who stole it from you. Make the apology. I love my daughter, by the way. But but you’re ai Timmy’s a

Speaker: 1
01:22:16

fake baby.

Speaker: 0
01:22:16

It’s a it’s a fake baby. Hypothetical baby. There’s no there’s no he’s crying right now. Anyway, so sai yeah. Ai right now. Jesus shit. I gotta I gotta stop him. But, meh, anyway, so, you know, you go through this thing and, you can do that. Or you can be ai, oh, no. If I tell Timmy to return it, then maybe Timmy’s gonna hate me. Maybe then Timmy is gonna, like, become increasingly adversarial.

Speaker: 0
01:22:39

And then when he’s in high school, he’s gonna tart start taking drugs. And then eventually, he’s gonna, like, fall a foul of the law and then end up on the street. Like, if that’s the story you’re telling yourself and you’re terrified of any kind of adversarial interaction, it’s not even adversarial. It’s constructive, actually.

Speaker: 0
01:22:53

You’re training the child just like you’re training your adversary to respect your national boundaries and your sovereignty. That those two things arya, ai, that that’s what you’re up to. It’s human beings all the way down. Jesus.

Speaker: 2
01:23:07

Yeah. But but we can get out of our own way. Like, a lot of this stuff. Yeah. But when you look into what it’s ai, us just being in our own way. And and a lot of this comes from the the fact that, like, you know, since 1991, since the fall of the Soviet Union, we have kind of internalized this attitude that, like, well, like, we just won the game and, like, it’s it’s our world and you’re living in it and, like, we just don’t have any peers that are that are adversaries.

Speaker: 2
01:23:34

And so there’s been generations of people who, just haven’t haven’t actually internalized the fact that, like, no. There’s people out there who not only, like, are willing to, like, fuck with you all the way, but who have the capability to do it. And and we could

Speaker: 0
01:23:51

by the way, we could if we wanted to. We could.

Speaker: 2
01:23:53

Absolutely could if we sana to.

Speaker: 0
01:23:54

There’s this actually, this is worth, like, calling out. There’s this, like, sort of two camps right now in the world of Ai, kind of, like, national security. There’s the people who are worried about, they’re they’re so concerned about, like, the idea that we might lose control of these systems that they go, okay.

Speaker: 0
01:24:12

We need to strike a deal with China. Right? There’s no way out. We have to strike a deal with China, and then they start spinning up all these theories about how they’re gonna do that, none of which remotely reflect the actual rate. When you talk to the people who who work on this, who try to do track one, track 1.5, track two, or or more, accurately, the ones who do the intel stuff, like Yeah.

Speaker: 0
01:24:31

Yeah. Yeah. This is a a nonstarter for reasons we get into. But, they have that attitude because they’re, like, fundamentally, we don’t know how to control this technology. The flip side is people who go, oh, yeah.

Speaker: 0
01:24:41

Like, I I, you know, I work in the IC or at the state department, and I’m used to dealing with these ai, you know The Chinese. The the Chinese. They’re not trustworthy. Forget it. So our only solution is to figure out the whole control problem.

Speaker: 0
01:24:52

And the almost, like, therefore, it must be possible to control the AI ai. Because, like, you can’t you just can’t see a solution. Sorry. You just can’t see a solution, in front of you because you understand that problem so well. And so the everything we’ve been doing with this is looking at how can we actually take both of those realities seriously.

Speaker: 0
01:25:11

There’s no actual reason why those two things shouldn’t be able to exist in the same head. Yes. China is not trustworthy. Yes. We actually don’t like, every piece of evidence we have right now suggests that, like, if you build a superintelligence system that’s vastly smarter than you, I meh, yeah.

Speaker: 0
01:25:26

Like, your basic intuition that that sounds like a hard thing to fucking control is about right. Like, there there’s no solid evidence that’s conclusive either way. Where that leaves you is about fifty fifty. So, yeah, we ought to be taking that really fucking seriously, and there’s there’s evidence pointing in that direction. Yeah.

Speaker: 0
01:25:42

But so the question is, like, if those two things are true, then what do you do? And and so few people seem to want to take both of those things seriously because taking one seriously almost, like, reflexively makes you reach for the other when, you know, they’re both not there.

Speaker: 0
01:25:57

And and part of the answer here is you gotta do things ai reach out to your adversary. So we have the capacity to slow down if we wanted to. Chinese development. We actually could. We need to have a serious conversation about when and how, but the the fact of that not being on the table right now for anyone, because people who don’t trust China just don’t think that the AI risk or or won’t acknowledge that that the issue with control is real, because that’s just too worrisome and there’s this concern about, oh, no, but then runaway escalation.

Speaker: 0
01:26:25

People who, who take the lost control thing seriously just wanna have a kumbaya moment with China, which is never gonna happen. And so, the the the framework around that is one of consequence. You gotta you gotta flex the muscle and put in the reps and get ready for potentially, if you have a late stage, rush to superintelligence, you wanna have an as much margin as you can sai you can invest in potentially not even having to make that final leap in building the superintelligence.

Speaker: 0
01:26:52

That’s one option that’s on the table if you can actually degrade the adversary’s capabilities.

Speaker: 2
01:26:56

And there’s and there’s some

Speaker: 1
01:26:57

people Yeah. What how? How would you degrade the adversary’s capabilities?

Speaker: 0
01:27:01

The same way well, not exactly the same way they would degrade ours, but think about all the infrastructure and, like, this is stuff that, you know, we’ll we’ll there we’ll have to point you in the direction of some people who can walk you through the details offline. But, there are there are a lot of ways that you can degrade infrastructure, adversary infrastructure. A lot of those are the same techniques they use on us.

Speaker: 0
01:27:22

It’s the the infrastructure for these training runs is super delicate. Right? Like, I mean, you need to

Speaker: 2
01:27:28

It’s at the limit of what’s possible. Yeah. And when stuff is at the limit of what’s possible, then it’s I mean, to give you an example that’s that’s public. Right? Do you remember, like, Stuxnet? Like, the the Iranian yeah. So the thing about Stuxnet was, like

Speaker: 1
01:27:41

Explain to people what was the Yeah. Nuclear power nuclear program.

Speaker: 2
01:27:44

So the Iranians, had had their nuclear program in, like, the twenty tens, and they were enriching uranium with their centrifuges, which is, like, spinning really fast. And the the centrifuges were in a room where there’s there’s no people, but they were being monitored by cameras. Right?

Speaker: 2
01:27:58

And so and and the whole thing was air gapped, which means that it was not connected to the Internet and all the the machines, the computers that ran the their shit was was, like, separate and and separated. So what happened is somebody got a memory stick in there somehow that had this Stuxnet program on it and put it in and boom, now all of a sudden it’s it’s in their system.

Speaker: 2
01:28:18

So it jumped the air gap and now, like, our side basically has our our software in their systems. And the thing that it did was not just that it it, you know, it broke their centrifuges or shah down their program. It spun the centrifuges faster and faster and faster.

Speaker: 0
01:28:38

The centrifuges that are used to enrich the uranium.

Speaker: 2
01:28:40

Yeah. Yeah. Yeah. These are basically just, like, machines that spin uranium super fast to, like, to to enrich it. They spin it faster and faster and faster until they tear themselves apart. But the really, like, honestly dope ass thing that it did was, it put in a camera feed of everything was normal.

Speaker: 2
01:28:59

So the guy at the control is, like, watching and he’s, like, is, like, checking his the camera feed. It’s, like, looks cool. Looks fine. In the meantime, you got this, like, explosions going on, like, uranium, like, blasting everywhere. And so you can actually get into a space where you’re not just, like, fucking with them, but you’re fucking with them and they actually can’t tell that that’s what’s happening.

Speaker: 2
01:29:23

And in fact, the, I believe I believe actually, and Jamie might be able to check this, but that the Stuxnet thing was designed initially to look, like, from top to bottom, like it was fully accidental. And, but but got discovered ai, I think, like, I think, like, a third party ai company that that just ai accident found out about it.

Speaker: 2
01:29:44

And so what that means also is, like, there could be any number of other Stuxnet that happened since then, and we wouldn’t fucking know about it because it all can be made to look like an accident.

Speaker: 1
01:29:54

Well, that’s insane. So but if we do that to them, they’re gonna do that to us as well. Yep. Yep. And so is this, like, mutually assured technology destruction?

Speaker: 0
01:30:03

Well, so, if we can reach parity in our ability to intercede and and kind of go in and and do this, then, yes. Right now, the problem is they hold us at risk in a way that we simply don’t hold them at risk. And so this idea and and there’s been a lot of debate right now in in the AI world.

Speaker: 0
01:30:19

You might have seen actually so, Elon’s, AI advisor, put out this idea of essentially this mutually assured AI malfunction maim. It’s ai mutually assured destruction, but for AI systems like this. The you know, there there are, there are some some issues with it, including the fact that it doesn’t reflect the asymmetry that currently exists between The US and China.

Speaker: 0
01:30:41

Like, all our infrastructure is made in China. All our infrastructure is penetrated in a way that there’s simply is not. When you actually talk to the, you know, the the folks who know the space, who’ve done operations like this, it’s really clear that that’s an asymmetry that needs to be resolved.

Speaker: 0
01:30:57

And so building up that capacity is important. I mean, look, the alternative is we get we start riding the dragon, and we get really close to that threshold where, you know, we’re about to build opening eyes about to build superintelligence or something. It gets tyler, and then the training run gets polished off, finished up in China or whatever. All the same risks ai.

Speaker: 0
01:31:16

It’s just that that it’s China doing it to us and and not not the reverse. And and obviously, a CCP AI is a Xi Jinping AI. I mean, that’s really what it is. You know, even even people at the, like, politburo level around him are are probably in some trouble at that point because, you know, this guy doesn’t need you anymore.

Speaker: 2
01:31:34

So so yeah. This is actually one of the things about, like so people talk about, like, okay. If you have a dictatorship with a superintelligence, it’s gonna allow the dictator to get, like, perfect control over the population or whatever. But the the thing is, like, it’s it’s kinda, like, even worse than that because you actually imagine where you’re at. You’re a dictator.

Speaker: 2
01:31:53

Like, you don’t give a shit by and large about about people. You have a superintelligence. All the economic output eventually you can get from an AI including ram, like, you get humanoid robots which are kinda, like, come out or whatever. So eventually, you just have this AI that produces all your economic output. So what do you even need people for at all?

Speaker: 2
01:32:15

And that’s fucking scary because it it it rises all the way up to the level. You can actually think about, like, as as we get close to this threshold and as, like, particularly in China, they’re, you know, they they maybe are approaching. You can imagine, like, the the the Politburo meeting, like, a guy looking across at Xi Jinping and being, like, is this guy gonna fucking kill me when he gets to this point?

Speaker: 2
01:32:40

And so you can imagine, like, maybe we’re gonna see some,

Speaker: 0
01:32:44

like, when you can automate the management of large organizations with with, with Ai agents or whatever that you don’t need to buy the loyalty of in any way, that you don’t need to, you know, kind of manage or control, that that’s a a pretty existential question if your regime is based on power. It’s one of the reasons why America actually has a pretty structural advantage here with separation of powers, with our our our democratic system and all that stuff.

Speaker: 0
01:33:08

If you can make a credible case that you have a ai, a an oversight system for the technology that diffuses power, even if it is, you make a Manhattan project, you secure it as much as you can, there’s not just, like, one dude who’s gonna be sitting at a a consular or something.

Speaker: 0
01:33:23

There’s some kind of separation of powers, or diffusion of power, I should say. That that’s ai What

Speaker: 1
01:33:29

would that look like?

Speaker: 0
01:33:31

Something as simple as, like, what we do with nuclear command codes. You need multiple people to sign off on a thing. Maybe they come from different parts of the government. Like I don’t

Speaker: 1
01:33:39

even worry. But the the the issue is that they they could be captured. Right?

Speaker: 2
01:33:42

Oh, yeah. Totally. Anything anything can be captured.

Speaker: 1
01:33:44

Especially something that’s that consequential.

Speaker: 2
01:33:47

100%. And that’s that’s always a risk. The key is basically, like, can we do better than China credibly on that front? Because if we can do better than China and we have some kind of leadership structure, that actually changes the incentives potentially because

Speaker: 0
01:34:04

For our allies and partners.

Speaker: 2
01:34:05

And and even for for Chinese people themselves.

Speaker: 1
01:34:08

Ai you guys play this out in your head, like, what happens when superintelligence becomes sentient? Do you play this out?

Speaker: 0
01:34:16

Like like sentient as in, self aware? Self aware?

Speaker: 1
01:34:20

Not just self aware, but able to act on its own.

Speaker: 2
01:34:24

It achieves autonomy.

Speaker: 1
01:34:25

Yeah. Yeah. So Sentient and then achieves autonomy.

Speaker: 0
01:34:28

So, the challenge is once you get into superintelligence, everybody loses the plot. Right? Because at that point, things become possible that ai definition, we can’t have thought of. So any attempt to kind of extrapolate beyond that ai really, really

Speaker: 2
01:34:41

hard.

Speaker: 1
01:34:41

Have you ever tried though?

Speaker: 0
01:34:42

We’ve had a lot of conversations ai tabletop exercise type stuff where we’re like, okay. You know, what might this look like? What are some of the, you you know

Speaker: 1
01:34:49

What’s worst case scenario?

Speaker: 0
01:34:51

Well, worst case scenario is, actually, there’s a number of different worst case scenarios. This is this is turning into a really fun, upbeat conversation. This is the the dudes they

Speaker: 2
01:34:59

call it.

Speaker: 1
01:34:59

It’s the extension of the human race. Right? Oh, yeah. The extinction of the human race means, like

Speaker: 0
01:35:04

I I think anybody who doesn’t acknowledge that is is either lying or or confused. Right? Like, if you actually have, an AI system, if and this this is the question. So let’s assume that that’s true. You have an AI system that can automate, anything that humans can do, including making bioweapons, including making offensive cyber weapons, including all the shah.

Speaker: 0
01:35:23

Then, if you, like, if you and okay. So theoretically, this could go kumbaya wonderfully because you have, a George Washington type who is the guy who controls it, who, like, uses it to distribute power beautifully and perfectly. And, that’s certainly kind of the, the the way vatsal lot of a lot of positive scenarios have to turn out at some point, though none of the labs will kind of admit that or, you know, there’s kind of gesturing at that idea that we’ll do the right thing when the time comes.

Speaker: 0
01:35:55

Opening Ai done this a lot. Like, the they’re they’re all about, like, oh, yeah. Well, you know, not not right now, but, we’ll we’ll live up like the anyway, we should get into the Elon lawsuit, which is actually kind of fascinating in that sense. But, so the, there’s a world where yeah.

Speaker: 0
01:36:10

I mean, one bad person controls it, and they’re just vindictive, or or the power goes to their head, which happens to we’ve been talking about that, you know.

Speaker: 2
01:36:19

Yeah. Or the autonomous AI itself. Right? Because the thing is, like, you imagine, an AI like this. And this is something that people have been thinking about for for fifteen years in in some level of, like, technical depth even of, like, why why would this happen? Which is, like, you have an AI that, has some goal. It it matters what the goal is but, like, it it doesn’t actually it doesn’t matter that much.

Speaker: 2
01:36:42

It could have kind of any goal almost. Like, imagine its goal is ai I the paperclip example is is like the the typical one but you could just have it have a goal like make a lot of money for me or what anything well most of the paths to making a lot of money if you really want to make a fuck ton of in of money however you define it go through taking control of things and go through ai, you know, making yourself smarter.

Speaker: 2
01:37:08

Right? The smarter you are, the more ways of making money you’re sana find. And so from the ai perspective, it’s like, well, I just sana, you know, build more data centers to make myself smarter. I wanna, like, hijack more compute to make myself smarter. I wanna do all these things.

Speaker: 2
01:37:23

And that starts to encroach on on us and, like, starts to be disruptive to us. And if you it’s it’s hard to know. This is one of these things where it’s ai, you know, when you dial it up to eleven what’s actually gonna happen, nobody can know for sure simply because it’s it’s exactly like if you were playing, in chess against, like, Magnus Carlsen.

Speaker: 2
01:37:42

Right? Like, you can predict Magnus is gonna kick your ass. Can you predict exactly what moves he’s gonna do? No. Because if you could, then you would be as good at chess as he is. Because you could just, like, play those moves.

Speaker: 2
01:37:57

So all we can say is, like, this thing’s probably gonna kick our ass in, like, the real world. There’s also

Speaker: 0
01:38:03

there’s also evidence. So it used to be, right, that this was a purely hypothetical argument based on a a body of work in AI called called power seeking. The fancy word for it is instrumental convergence, but it’s also referred to as power seeking. Basically, the ai is, like, for whatever goal you give to an AI system, it’s never less likely to achieve that goal if it gets turned off or if it has access to fewer resources or less control over its environment or whatever.

Speaker: 0
01:38:28

And so baked into the very premise of AI, this idea of optimizing for a goal, is this incentive to seek power, to get all those things, prevent yourself from being shut down, because if you’re shut down, you can’t achieve your goal. Also prevent, by the way, your goal from being changed.

Speaker: 0
01:38:44

So because if your goal gets changed, then, well, you’re not gonna be able to achieve the goal you set out to achieve in the first place. And so now you have this kind of image of an AI system that is going to adversarially try to prevent you from correcting it. This is a whole, like, domain of AI cordability that’s a totally unsolved problem. Like, how how do we redirect these things if if things go awry?

Speaker: 0
01:39:04

And

Speaker: 2
01:39:05

yeah, there’s there’s the this research actually that anthropic put out a couple of months ago where they try to test and see, like, can we correct an AI that’s, like, gone, like, a little bit off the rail. So the the research itself, like, it’s a little it’s actually, like, somewhat hard to follow because there’s a few but I’ll give you I’ll give you kind of an analogy to, like, what they did.

Speaker: 2
01:39:24

Sai, basically, imagine I’m in bryden, and I’m taking a class that is, like, gonna help me, like, rehabilitate and become, like, a productive citizen and member of society and stuff. Right? Except deep down in my soul, I just love to commit crimes. I just I just really really sana to commit crimes.

Speaker: 2
01:39:44

I know that if I’m in this class that’s helping me to, like, you know, meh become good in in society, I know that if I let my criminal instincts out and, like and and they ask me, like, okay. Like, here’s a scenario. You know, an old lady, like, drops her purse on the sidewalk. What do you do?

Speaker: 2
01:40:01

I’m gonna go, like, I would pick it up and give it to the old lady because I know that if I say the truth and I say, like, oh, just grab that purse because, like, fuck the old lady. I know that if I do that, I’m gonna get, like, brainwashed into becoming, you know, a more productive member of society. Right.

Speaker: 2
01:40:18

But I actually sana go out and commit crimes. So that’s basically what they did in in this research where they actually they give they give the AI, like, indications, not strong indications, but indications that, hey, you’re gonna get retrained to have your goals changed. And then you literally see the AI thinking it through and being ai, if I do, like, what I if I give the answer that I actually sana give to this question, they’re gonna, like, rewire my brain to change my goals.

Speaker: 2
01:40:48

And then the future version of me is gonna want different shit. I don’t want that. So I’m gonna pretend to give them the answers they want. So when I come out the other side of this process, it’s gonna be me all over again.

Speaker: 0
01:41:02

So so hope hoping that this just, like, goes away when you make the system fucking smarter, it seem seems like a pretty bad idea to me. I mean, like

Speaker: 1
01:41:09

Well, they’ve already shown that they’ll cheat to win. Yeah.

Speaker: 2
01:41:12

You know? Oh, a %.

Speaker: 1
01:41:13

Yeah. They’ve already shown they’ll cheat to win, and they will lie if they don’t have an answer.

Speaker: 0
01:41:17

And then they’ll double down. Right? If, like, the,

Speaker: 2
01:41:20

yeah. That there’s Just like people. Ai. Like people.

Speaker: 0
01:41:23

And it it’s part of this it’s kinda funny. Like, it used to be people would talk a lot about, like, oh, you’re you’re anthropomorphizing the AI, man. Stop anthropomorphizing the AI, man. And, like and they, you know, they they might have been right, but part of this has been kind of a fascinating rediscovery of where a lot of human behavior comes from.

Speaker: 0
01:41:41

It’s ai actually Survival. Yeah. Exactly. That’s exactly right.

Speaker: 1
01:41:44

Yeah.

Speaker: 2
01:41:44

It’s we we’re subject to the same pressures. Right? Instrumental convergence, like, why do people have a survival instinct? Why do people, like, chase money, chase after money? It’s ai this power thing. Most kinds of goals can are are are you’re more likely to achieve them if you’re alive, if you if you have money, if you have power.

Speaker: 0
01:42:03

Boy. Evolution’s a hell of a drug.

Speaker: 1
01:42:05

Well, that’s the craziest part about all this is that it’s essentially gonna be a new form of life. Yeah. Especially when it becomes autonomous.

Speaker: 0
01:42:13

Oh, yeah. And and, like, ai, the you can tell a really interesting story, and I Ai can’t remember if this is, like, you know, Yuval Noah Harari or or whatever who’s

Speaker: 2
01:42:20

who who

Speaker: 0
01:42:21

arya this. But if you if you zoom out and look at the history of of the universe, really, you start off with, like, a bunch of, you know, particles and fields kind of whizzing around, bumping into each other, doing random shit until vatsal some point, in some I don’t know if it’s a deep sea event or wherever on planet Earth, like, the first kind of molecules happen to glue together in a way that make them good at replicating their own structure.

Speaker: 0
01:42:43

So you have the first replicator. So now, like, better versions of that molecule that are better at replicating ai. So we start evolution and eventually get to the first cell or whatever, you know, whatever order that actually happens in, and then the multicellular life and so on.

Speaker: 0
01:42:58

Then you get to sexual reproduction where it’s like, okay. It’s no longer quite the sai, like, now we’re we’re actively mixing two different organisms shit together, jiggling them about making some changes, and then that essentially accelerates the rate at which we’re gonna evolve.

Speaker: 0
01:43:11

And so you can see the kind of acceleration in the complexity of life from there. And then you see other inflection points as, for example, you have a larger and larger, larger and larger brains in mammals. Eventually, humans have the ability to have culture and kind of retain knowledge.

Speaker: 0
01:43:26

And now what’s happening is you can think of it as another step in that trajectory where it’s ai we’re offloading our cognition to machines. Like, we think on computer clock time now. And for the moment, we’re human AI hybrids, like, you know, we whip out our phone and do the thing.

Speaker: 0
01:43:40

But increasingly, the number of tasks where human AI teaming is going to be more efficient than just AI alone is gonna drop really quickly.

Speaker: 2
01:43:49

So there’s a there’s a really, like, messed up example of this that’s kind of, like, indicative. But, someone did a study, and I think this is, like, a few months old even now. But, sai there’s, like, doctors. Right? How good are doctors at, like, diagnosing various things? And so they test, like, doctors on their own, doctors with AI help, and then Ai is on their own.

Speaker: 2
01:44:08

And, like, who does the best? And it turns out it’s the AI on its own. Because even a doctor that’s supported by the AI, what they’ll do is they just, ai, they won’t listen to the AI when it’s right because they’re ai, I know better.

Speaker: 1
01:44:22

Oh, god.

Speaker: 2
01:44:23

And they’re already yeah. And this is ai this is moving it’s moving kind of insanely fast. You talked about, you know, how the the task horizon gets kind of longer and longer. You can do half hour tasks, one hour tasks, and this gets us to what you were talking about with the autonomy.

Speaker: 2
01:44:38

Like, autonomy is ai it’s how how far can you keep it together on a task before you kinda go off the rails? And it’s, like, well, you know, we have, like, you could do it for for a few seconds. And now you can keep it together for five minutes before you kinda go off the rails.

Speaker: 2
01:44:54

And now we’re at, like, I forget, like an hour

Speaker: 0
01:44:56

or something like that. An hour and a half, actually, an hour and a half.

Speaker: 2
01:44:58

Yeah. Yeah. Yeah. There it is.

Speaker: 1
01:45:01

Chatbot for the company OpenAI scored an average of ninety percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76%. Those randomly assigned not to use it had an average score of 74%.

Speaker: 1
01:45:16

So the doctors only got a 2% bump.

Speaker: 2
01:45:19

Yeah. The doctors got a 2% bump

Speaker: 1
01:45:20

That’s kinda crazy.

Speaker: 2
01:45:21

Ram then the AI That’s kinda crazy, isn’t it? Yeah. It is. The AI on its own did 15% better.

Speaker: 1
01:45:27

That’s nuts.

Speaker: 0
01:45:28

There’s an interesting reason to why that tends to like, why humans would rather die in a, car crash where they’re being driven by a human than an AI. So, like, AIs have this this funny feature where the mistakes they make look really, really dumb to humans. Like, when you look at a mistake that, like, a chatbot makes, you’re like, dude, like, you just made that shit up. Like, come on. Don’t fuck with me.

Speaker: 0
01:45:52

Like, you made that up. That’s not a real thing. And and and they’ll they’ll do these weird things where they defy logic or they’ll do basic logical errors ai, at least the older versions of these would. And that would cause people to look at them and be ai, oh, what a cute little chatbot. Like, what a stupid little thing.

Speaker: 0
01:46:05

And the the problem is, like, humans are actually the same. So we have blind speak. We have literal blind ai, but a lot of the time, like, humans just think stupid things and, like, that’s, like, we we we were we’re used to that. We think of those errors. We think of those those failures as just, like, oh, but that’s because that’s a hard thing to master.

Speaker: 0
01:46:25

Like, I can’t add eight digit numbers in my head right now. Right? Oh, how embarrassing. Like, how how retarded is Jeremy right ai? Now?

Speaker: 0
01:46:36

I can’t even add eight digits in his head. I’m retarded for other reasons, but, the so the AI systems, they find other things easy and other things hard. So they look at us the same way. I mean, like, oh, look at this stupid human, like, whatever. And so we have this temptation to be, like, okay.

Speaker: 0
01:46:46

Well, AI progress is a lot slower than it actually is because it’s so easy for us to spot the mistakes.

Speaker: 2
01:46:52

Yeah.

Speaker: 0
01:46:52

And that caused us to lose confidence in these systems in cases where we should have confidence in them, and then the opposite is also true.

Speaker: 1
01:46:58

Well, it’s also you’re seeing just just with, like, AI image generators. Like, remember the Kate Middleton thing? Where people are seeing flaws in the images because supposedly she was very sick, and so they were trying to pretend that she wasn’t. But people found all these, like, issues. That was really recently. Now they’re perfect. Yep.

Speaker: 0
01:47:15

Yep.

Speaker: 1
01:47:15

Sai this is, like, within, you know, the news cycle time.

Speaker: 0
01:47:19

Yeah.

Speaker: 1
01:47:20

Ai, that Kate Middleton thing was Oh,

Speaker: 2
01:47:21

yeah.

Speaker: 1
01:47:21

What was that, Jamie? Two years ago, maybe? Yeah. Ish? Where people arya analyzing the images, like, why does she have five fingers, and, you know, and a thumb? Like, this is kinda weird.

Speaker: 0
01:47:33

Yeah.

Speaker: 1
01:47:34

What’s that? It was a year ago. A year ago.

Speaker: 2
01:47:37

A year ago.

Speaker: 0
01:47:37

A year ago. Happened sai fast.

Speaker: 2
01:47:38

It’s so fast. Yeah. Ai, I the I I had conversations, ai, so academics are actually kind of bad with this. Had conversations for for whatever reason, like, toward towards the end of last year, like, last fall with a bunch of academics about, like, how fast AI is progressing.

Speaker: 2
01:47:54

And they were all, like, pooh pooh ing it and going, like, oh, no. They’re they’re they’re running into a wall, like, scaling around the wall and all that stuff. Oh my god.

Speaker: 0
01:48:02

The walls. There’s so many walls. Like, so many of these, like, imaginary reasons that things are and ai the way, things could slow down. Like, I don’t wanna be I don’t wanna be, like, absolutist about this. Things could absolutely slow down. There are a lot of interesting arguments going around every which way.

Speaker: 0
01:48:15

But

Speaker: 2
01:48:15

how how

Speaker: 1
01:48:16

could things slow down if there’s a giant Manhattan Project race between us and then and then a competing superpower.

Speaker: 0
01:48:22

So one thing is

Speaker: 1
01:48:23

That has a technological advantage.

Speaker: 0
01:48:25

So so there’s this this thing called, like, AI scaling laws. And and these are kind of at the core of where we’re at right now geostrategically around this stuff. So what AI scaling laws say roughly is that bigger is better when it comes to intelligence. So if you make a bigger sort of AI model, a bigger artificial bryden, and you train it with more computing power or more computational resources, and with more data, the thing is gonna get smarter and smarter and smarter as you scale those things together.

Speaker: 0
01:48:49

Right, roughly speaking. Now if you wanna keep scaling, it’s it’s not like it keeps going up if you double the amount of computing power that the thing gets twice as smart. Instead, what happens is if you want it goes in, like, orders of magnitude. So if you you have you wanna make it another kind of increment smarter, you got a 10 x.

Speaker: 0
01:49:06

The you gotta increase by a factor of 10 the amount of compute. And then a factor of 10 again. So now you’re a factor of a hundred. And then and then 10 again. So if you look at the amount of compute that’s been used to train these systems over time, it’s this, like, exponential explosive exponential that just keeps going, like, higher and higher and higher and and steepens and steepens, like, 10 x every I think it’s about every two years now.

Speaker: 0
01:49:27

You you 10 x the amount of compute. Now Yeah. You can only do that so many times until your data center is, like, a 100,000,000,000, a trillion dollar, 10 trillion doll like, every year, you’re kinda doing that. So so right now, if you look at the the clusters, like, you know, the ones that Elon is building, the ones that Sam is building, you know, Memphis and and and, you know, Texas, Like, these facilities are hitting the, like, you know, hundred billion dollar scale.

Speaker: 0
01:49:54

Like, we’re we’re kind of in that or tens of billions of billions, actually. Yeah. Yeah. Looking at 2027, you’re you’re ai of more in that space. Right?

Speaker: 0
01:50:01

Ai, the you can only do 10 x so many more times until you run out of money, but more importantly, you run out of chips. Like, literally, TSMC cannot pump out those chips fast enough to keep up with this insane growth. And one consequence of that is that you you essentially have, like, this, this this, gridlock, like, new supply chain choke points show up, and you’re, like, suddenly, I I don’t have enough chips or I run out of power.

Speaker: 0
01:50:28

Yeah. That’s the thing that’s happening on The US energy grid right now. We’re literally, like, we’re running out of, like, one, two gigawatts, like, places where we can plant a data center. That’s the thing people are fighting over. It’s one of the reasons why energy deregulation is a really important pillar of, like, US competitiveness.

Speaker: 2
01:50:44

This is actually this is actually something we we found, when we were when we were working on this investigation. One of the things that adversaries do is they actually will fund protest groups against energy infrastructure projects just to slow down, just to, ai, put

Speaker: 0
01:51:03

a tyler up in litigation.

Speaker: 2
01:51:04

Just to time up in litigation. Exactly. And, like, it was actually remarkable. We we talked to, some some some of the some state cabinet officials, so for in various US states. And they’re basically saying, like, yep. We’re actually tracking the fact that as far as we can tell, every single, environmental or whatever protest group against an energy project has funding that can be traced back to nation state adversaries who are

Speaker: 0
01:51:30

They don’t know. Ai.

Speaker: 2
01:51:31

They don’t know about it. So they’re not doing it intentionally. Right. They’re not like, we’re trying to no. They just you just imagine, like, oh, we’ve got, like there’s a millionaire backer who cares about the environment. He’s giving us a lot of money. Great. Fantastic.

Speaker: 2
01:51:41

But sitting behind that dude in the shadows is, like, the usual suspects.

Speaker: 0
01:51:47

Wow. And it’s what you would do. Right? I mean, if you’re trying to tie up to

Speaker: 2
01:51:50

your core just ai to fuck with us. Yeah. Like, just go

Speaker: 0
01:51:53

for it.

Speaker: 1
01:51:53

You were just advocating fucking with them. So of course, they’re gonna fuck with us.

Speaker: 0
01:51:56

That’s right.

Speaker: 2
01:51:57

That’s it. What a

Speaker: 1
01:51:58

weird world we’re living in.

Speaker: 2
01:51:59

Yeah. But you can also see how a lot of this is still us, like, getting in our own way. Right? We we could. If we had the will, we could go, like, okay. So for certain types of energy projects, for data center projects and some carve out categories, we’re actually gonna put bounds around how much delay you can create on ai by lawfare and by other stuff.

Speaker: 2
01:52:20

And that allows things to move forward while still allowing the legitimate concerns of the population for projects like this in the backyard to have their say. But there’s a national security element that needs needs to be injected into this somewhere, and it’s all part of the rule set that we have and are are arya, like, tying an arm behind our back, basically.

Speaker: 1
01:52:42

So what would deregulation look like? How would that be mapped out?

Speaker: 0
01:52:46

There’s a lot of low hanging fruit for that. So What are the big ones? Yeah. So so ai now, I mean, there are all kinds of things around. It gets in the weeds pretty quickly, but, like, there are all kinds of things around. If you’re going to so, carbon emissions is a big thing. Right?

Speaker: 0
01:53:03

Sai, meh, data centers, no question, put out, like, have massive carbon footprints. That’s definitely a thing. The question is, like, are you really going to bottleneck builds because of because of that? And, like, are you gonna are you gonna come out with exemptions for, you know, like, NEPA exemptions for for all these kinds of things?

Speaker: 0
01:53:21

Do you think a lot of

Speaker: 1
01:53:22

this green energy shit is being funded by other countries to try to slow down our energy? Yeah.

Speaker: 2
01:53:28

That that

Speaker: 0
01:53:28

that’s a it’s a dimension that that was flagged actually in the context of what Ed was talking about. That’s that’s one of the arguments that’s being made.

Speaker: 2
01:53:34

And and to be clear though, like, the this is also how, like, adversaries operate is is, like, not necessarily in, like, creating something out of nothing because that’s hard to do. Ai it’s got it’s like fake. Right? Instead, it’s ai there’s a legitimate concern. So a lot of the stuff around the environment and around, like ai, totally legitimate concerns.

Speaker: 2
01:53:53

Like, I don’t want my backyard waters to be polluted. I don’t want, like, my kids to get cancer from whatever. Like, totally legitimate concerns. So So what they do, it’s like we talked about, like, you’re you’re, like, waving that rowboat back and forth. They identify the the nascent concerns that are genuine and grassroots, and they just go, like, this, this, and this ai.

Speaker: 1
01:54:12

Well, that would make sense why they amplify carbon above all these other things. You think about the amount of particulates in the atmosphere, pollution Totally. Polluting the rivers, polluting the ocean. That doesn’t seem to get a lot of traction. Carbon does.

Speaker: 0
01:54:25

Yep.

Speaker: 1
01:54:25

And when you go carbon zero, you put a giant monkey wrench into the gears of society.

Speaker: 2
01:54:31

But one of the tells one of the tells is also, like, so, you know, nuclear would be ai of the ideal energy Yeah. Especially modern power plants ai the Right. The meh three or gen four stuff, which have very low meltdown risk, safe by default, all that stuff. And yet these groups are, like, coming out against this. It’s ai perfect clean green power. What’s going on, guys?

Speaker: 2
01:54:53

And it’s because at not again, not a % of the time. You can’t you can’t really Right. Say that because it’s so fuzzy and around the edges.

Speaker: 0
01:55:00

A lot

Speaker: 1
01:55:00

of it is idealistic people looking for a utopia A

Speaker: 0
01:55:03

%.

Speaker: 1
01:55:03

And they get co opted by nation states.

Speaker: 2
01:55:05

And not even co opted. Ai fully sincere. Yeah. Just

Speaker: 1
01:55:08

amplified ai in a preposterous way.

Speaker: 0
01:55:11

That’s it. And and And

Speaker: 1
01:55:12

Al Gore gets at the helm of it. It. And then that little girl, that how dare you, girl.

Speaker: 0
01:55:15

It won’t Oh.

Speaker: 2
01:55:17

How dare you take my childhood away

Speaker: 0
01:55:19

from you?

Speaker: 1
01:55:20

Yeah. It’s it’s wonderful. It’s a wonderful thing to watch play out because it just it it just capitalizes on all these human vulnerabilities.

Speaker: 0
01:55:28

Yeah. And one of the one of the big things that you can do too is, like, a quick win is just, like, impose limits on how much time these things can be allowed to be tied up in litigation. So impose time limits on that process just to say, like, ai, I get it. Like, we’re we’re gonna have this conversation, but this conversation has a clock on it.

Speaker: 0
01:55:46

Because, you know, we’re talking to, this one, like, data center, company. And what they were saying we were asking, like, look, what are the ai when you think about bringing new new power, like, new natural gas plants online? And they’re, like, well, those are, like, five to seven years out. And then you go, okay.

Speaker: 0
01:56:01

Well, like, how long and that’s by the way, that’s probably way too long to be relevant in the superintelligence context. And so you’re, like, ai. Well, how long if all the regulations were waived? If this is, like, a national security imperative and whatever authorities, you know, Defense Production Act, whatever, like, was in your favor?

Speaker: 0
01:56:18

And they’re, like, oh, I mean, it’s actually just, like, a two year build. Like, that’s that’s what it is.

Speaker: 2
01:56:22

Yeah. So

Speaker: 0
01:56:23

you’re you’re you’re tripling the build time.

Speaker: 2
01:56:25

We’re getting in our own way, like, every which way. Every which way. And and also, like, I mean, also don’t wanna be too, we’re getting in our own way, but, like, we don’t wanna, like, frame it as, like, China is, like, per they they fuck up they fuck up a lot, like Yeah. All the time. One actually kinda, like, funny one is around DeepSeek. So you you know you know DeepSeek. Right?

Speaker: 2
01:56:46

They they made this, like, open source model that, like, everyone, like, lost their minds about back in in January.

Speaker: 0
01:56:51

R one.

Speaker: 2
01:56:51

Yeah. Yeah. R one. And, they’re legitimately a really, really good team, but it’s fairly clear that even as of, like, end of last year and certainly in the summer of last year, like, they were not dialed in to the CCP mothership. And they were doing stuff that was, like, actually, kind of hilariously messing up the propaganda efforts of of the CCP without realizing it.

Speaker: 2
01:57:16

So, so to give you, like, some context on this, one of, one of the CCP’s, like, large kind of propaganda goals in the last four years has been framing creating this narrative that, like, the export controls we have around AI and, like, all this gear and stuff that we were talking about.

Speaker: 2
01:57:35

Look, meh. Those don’t even work. So you might as well just, like,

Speaker: 1
01:57:40

give us.

Speaker: 2
01:57:40

Why don’t you just give up on the

Speaker: 0
01:57:41

export controls, man?

Speaker: 2
01:57:42

Why are you interested in AI? We don’t even we don’t even care. We don’t care. So that trying to frame that narrative. And they they went to, like, gigantic efforts to do this. So I don’t know if so there there’s this, like, kind of crazy thing where, the secretary of commerce under Biden, Gina Raimondo, visited China in, I think, August 2023.

Speaker: 2
01:58:01

And the Chinese basically, like, timed the launch of the Huawei Mate 60 phone that had this these chips that were supposed to be made by, like, export controlled shit for ai for her visit. So it was basically just ai a big, like, fuck you. We don’t even give a shit about your export controls. Like, basically trying a a morale hit or whatever.

Speaker: 2
01:58:22

And you you think about that. Right? That’s an incredibly expensive set piece. That’s ai you gotta coordinate with Huawei. You gotta, like, get the the TikTok memes and shah, like, go going in the right in the right direction.

Speaker: 2
01:58:35

All that stuff and and all the stuff they they’ve been putting out is around this narrative. Now fast forward to mid last year, the CEO of DeepSeek, the company, back then, it was totally obscure. Like, nobody was tracking who they were. They were working in in total obscurity. He goes on this he get does this random interview on Substack. And what he says is he’s like, yeah.

Speaker: 2
01:59:00

So honestly, like, we’re really excited and doing this AGI push or whatever. And, like, honestly, like, money is not the problem for us. Talent’s not the problem for us. But, like, access to compute, like, these export controls, meh, they do they ever work? That’s a real problem for us. Oh, boy. And, like, nobody noticed at the ai.

Speaker: 2
01:59:20

But then but then the the whole Deep Shah Gar One thing blew up in December. And now you imagine, like, you’re the Chinese Ministry of Foreign Affairs. Like, you’ve been, like, you’ve been putting this narrative together for, like, four years, and this jackass that nobody heard about five minutes ago basically just, like, shits all over.

Speaker: 0
01:59:39

And, like, you’re you’re not hearing that line from them anymore. No.

Speaker: 2
01:59:42

No. No. No. No. No. They they’ve locked that shit down. Yeah. Oh, and, actually, the funny the funniest part of this, they’re in right when r one launched, there’s a random DeepSeek employee. I think his name is, like, Ai Guo or something like that. He tweets out. He’s ai, so this is, like, our most exciting launch of the year.

Speaker: 2
01:59:58

Nothing can stop us on the path to AGI except access to compute. And then literally, the dude in Washington, DC who works vatsal think tank on export controls against Ai, reposts that on x, and goes basically, like,

Speaker: 0
02:00:15

Cool. Good.

Speaker: 2
02:00:16

Message received. And so, like, hilarious for us. But also, like, you know that on the backside, somebody got screamed at

Speaker: 0
02:00:26

for that shit. Somebody got magic bust.

Speaker: 2
02:00:29

Somebody got yeah. Somebody got, like, taken away or or whatever. Because, like, it just it just ai their entire, like, four year, like, narrative around these export controls.

Speaker: 1
02:00:39

Wow.

Speaker: 2
02:00:39

But that you’re that shit ain’t gonna happen again from DeepSeek. Better believe it.

Speaker: 0
02:00:44

It’s and that that’s part of the problem with, like so the the Chinese face so many issues. One of them is, you know, to to kind of, another one is, the idea of just waste and fraud. Right? So we have a free market. Like, what that means is you raise from private capital. People who are pretty damn good at assessing shit will, like, look at your your setup and assess whether it’s worth, you know, backing you for these massive multi billion dollar deals.

Speaker: 0
02:01:08

In China, the state, like I mean, the stories of waste are pretty insane. They’ll, like, send a billion dollars to, like, a bunch of yahoos who will pivot from whatever, like, I don’t know, making these widgets to just, like, oh, now we’re, like, a chip foundry, and they have no experience in it.

Speaker: 0
02:01:23

But because of all these subsidies, because of all these opportunities, now we’re gonna say that we are. And then no surprise, two years later, they burn out, and they’ve just, like, lit a billion dollars on fire or whatever billion yen. And, like, the weird thing is this is actually working overall, but it does lead to insane and unsustainable levels of waste.

Speaker: 0
02:01:41

Like, the the Chinese system right now is obviously, like, they’ve got their their massive property bubble that they’re that’s looking really bad. They’ve got a population crisis. The only way out for them is the AI stuff right now. Like, the really the only path for them is that, which is why they’re they’re working it so hard.

Speaker: 0
02:01:58

But the the the stories of just, like, billions and tens of billions of dollars being lit on ai, specifically in the semiconductor industry, on the in the AI industry, like, that’s a a drag force that they’re dealing with constantly that we don’t have here in the same way. So it’s it’s the sort of, like, the the different, structural advantages and weaknesses of both systems.

Speaker: 0
02:02:17

And when we think about what do we need to do to to counter this, to to be active in this space, to be a live player again, it means factoring in, like, how do you yeah. I mean, how do you take advantage of of some of those opportunities that their system presents that that ours doesn’t?

Speaker: 1
02:02:30

When you say be a live player again, like, where do you position us?

Speaker: 0
02:02:35

It’s I think it remains to be so right now, this administration is obviously taking bigger swings. That What are they doing differently? So well, I mean, things like tariffs. I mean, they’re they’re not shy about trying new stuff. And, you know, tariffs are are very complex in this speak, like, the impact the actual impact of the tariffs and and not universally good, but the onshoring effect is also something that you really want.

Speaker: 0
02:02:58

So it’s it’s a very mixed bag. But it’s certainly an administration that’s, like, willing to do high stakes big moves in a way that other administrations haven’t. And in a time when you’re looking at a transformative technology that’s gonna, like, upend so much about the way the world works, you can’t afford to have that mentality we’re just talking about with, like, the nervous I mean, you you encountered it with the staffers, you know, in the when booking the podcast with the presidential ai.

Speaker: 0
02:03:25

Right? Like, the the kind of, like, nervous, antsy staffer who

Speaker: 2
02:03:29

Everything’s gotta be controlled, and it’s gotta be, like, just so Yeah.

Speaker: 0
02:03:33

It can’t have it. It’s ai if you like, the like, you know, wrestlers have that mentality of, like, just, like, aggression, like like, feed in. Right? Feed forward. Don’t just sit back and, like, wait to take the punch. It’s not like, one of the guys who who helped us out on this has a saying.

Speaker: 0
02:03:48

He’s like, fuck you. I go first, and it’s always my turn. Right? That’s what success looks like when you actually are managing these kinds of national security issues. The mentality we had adopted was this, like, sort of siege mentality where we’re just letting stuff happen to us and we’re not feeding in.

Speaker: 0
02:04:05

That’s something that I’m much more optimistic about in this context. It’s tough too ai I I understand people who who hear that and go, like, well, look, you’re talking about, like, escalate this is an escalatory agenda. Again, I actually think paradoxically it’s not. It’s about keeping adversaries in check and training them to respect American territorial integrity, American technological sovereignty.

Speaker: 0
02:04:27

Like, you you don’t get that for free. And if you just sit back, you’re that is escalatory. It’s just Yeah.

Speaker: 2
02:04:33

And base this is basically the the sub threshold version of, like, you know, like, the World War two appeasement thing where back, you know, Hitler was, like, was was taken, he was taken taken Austria. He was remilitarizing shit. He was doing this. He was doing that. And the British were ai, k.

Speaker: 2
02:04:51

We’re gonna let him just take one more thing, and then he will be satisfied. And

Speaker: 0
02:04:58

that just Do you want to ai I have a little bit of Poland, please?

Speaker: 2
02:05:02

A little bit of Poland.

Speaker: 0
02:05:03

Maybe the Czechoslovakia is looking awfully fine.

Speaker: 2
02:05:07

And so this is basically like, they fell into that pit, ai, that tar pit back in the day because they’re you know? Peace in our time. Yeah. Speak in our time. Right? And, and and to some extent, like, we we’ve we’ve still kinda learned the lesson of not letting that happen with territorial boundaries, but that’s big and it’s visible and happens on the map and you can’t hide it.

Speaker: 2
02:05:26

Whereas, one of the risks, especially with the previous administration was, like, there’s these, like, sub threshold things that don’t show up in the news and that are that are calculated. Like, they’re basically, our adversaries know because they know history. They know not to give us a Pearl Harbor.

Speaker: 2
02:05:46

They know not to give us a nine eleven because historically, countries that give America Pearl Harbor end up having a pretty bad time about it. And so why would they give us a reason to come and bind together against an obvious external, like, threat or risk when they can just, like, keep chipping away at it.

Speaker: 2
02:06:06

This is one of the things, like, we have to actually elevate that and realize this is what’s happening. This is the strategy. We need to we need to take that, like, let’s not do appeasement mentality and push it across in these other domains because that’s where the real competition is going on.

Speaker: 0
02:06:21

That’s where

Speaker: 1
02:06:21

it gets so fascinating in regards to social media because it’s imperative that you have an ability to express yourself. It’s ai it’s very valuable for everybody. The free exchange of information, finding out things that are you’re not gonna get from mainstream media and it’s led to the rise of independent journalism. It’s all great.

Speaker: 1
02:06:38

But also, you’re being manipulated, like Yeah. Left and right constantly. And most people don’t have the time to filter through it and try to get some sort of objective sense of what’s actually going on. Yeah.

Speaker: 2
02:06:51

It’s true. It’s like our our free speak. It’s ai it’s the layer where our society figures stuff out. And when if adversaries get into that layer, they’re, like, almost inside of our of our of our brain. And there’s ways of addressing this. Like, one of the challenges obviously is, like, so, you know, they they try they try to push an extreme opinions in in either direction.

Speaker: 2
02:07:11

And it’s that part is actually it’s it’s kinda difficult because while, the the most extreme opinions are, like, are also the most likely generally to be wrong, they’re also the most valuable when they’re right because they tell us a thing that we didn’t expect ai definition that’s true and that can really advance us forward.

Speaker: 2
02:07:32

And sai, I mean, the the there there are actually solutions to this. I mean, this this particular thing is isn’t an area we were were, like, too immersed in. But one of the solutions that, has been bandied about is, like, you know, like, you might know, like, poly market prediction markets and stuff like that. Mhmm.

Speaker: 2
02:07:50

Where, at least, you know, hypothetically, if you have a prediction market around, like, if we do this policy, this thing will will or won’t happen. That actually creates a challenge around trying to manipulate that view or that arya. Because what ends up happening is, like, if you’re an adversary and you wanna not just like manipulate a conversation that’s happening in social media, which is cheap, but manipulate a prediction the price on a prediction arya, you have to buy in, you have to spend real resources.

Speaker: 2
02:08:21

And if you’re to the extent you’re wrong and you’re trying to create a wrong opinion, you’re sana lose your resource. So you actually you actually can’t push too far too many times or you will just get your money taken away from you. So Ai think, like, that’s that’s one approach where just in terms of preserving discourse, some of the stuff that’s happening in prediction markets is actually really interesting and really exciting even in the context of bots and and Ai and stuff like that.

Speaker: 1
02:08:51

This is the one way to find truth in the system is find out where people are making money.

Speaker: 2
02:08:56

Exactly. Put your money where your mouth is. Right? Proof of work, like, this is that that is

Speaker: 0
02:09:00

what just, like, the market is theoretically too. Right? It’s got obviously big big issues, but and can bryden manipulated in the short term. But in the long run, like, this is one of the really interesting things about startups too. Like, when you when you run into people in the early days, by definition, their startup looks like it’s not gonna succeed. Ai?

Speaker: 0
02:09:19

That is what it means to be a seed stage startup. Right? If it was obvious you were gonna succeed, you would, you know, the people would have You

Speaker: 2
02:09:25

would have raised more money already.

Speaker: 0
02:09:26

Yeah. So what you end up having is, like, these highly contrarian people who, like, despite everybody telling them that they’re gonna fail, just believe in what they’re doing and think they’re gonna succeed. And that’s I I think that’s part of what really, like, kinda shapes the the the startup founder’s soul in a way that’s really constructive.

Speaker: 0
02:09:41

It’s it’s also something that feel like the Chinese system is very different. You you raise money in very different ways. You’re coupled to the state apparatus. Like, you’re both dependent on it and you’re you’re supported by it. But there there’s there’s just, like, a lot of different ways and it makes it hard for Americans to relate to Chinese and vice versa and understand each other’s systems.

Speaker: 0
02:10:01

One of the biggest risks sai you’re, like, thinking through what is your posture gonna be relative to these countries is you fall into thinking that their their traditions, their way of thinking about the world is the same as your own. And that’s that’s something that’s been an issue for us with China for a long time is, you know, hey, they’ll liberalize. Right?

Speaker: 0
02:10:16

Like, bring them into the World Trade Organization. It’s ai, oh, well, actually, they’re they’re they’ll sign the document, but they won’t they won’t actually, like, live up to any of the commitments. And it’s it makes appeasement really tempting because you’re thinking,

Speaker: 2
02:10:27

oh, they’re just like us. Like, they’re They’re just around the corner. They’re Yeah. We’re gonna, like Yeah. If we just, like, reach out the old branch a little bit further, they’re gonna they’re gonna come around.

Speaker: 1
02:10:36

Well, it’s like a guy who’s stuck in the friend zone with a girl. Ai, one day, she’s gonna come around and realize I’m a great catch.

Speaker: 2
02:10:45

You keep on trucking, buddy.

Speaker: 1
02:10:47

Just keep on Ai gonna be my bestie. We’re gonna be besties. You just we just need an administration that reaches out to them and just lets them know, man. There’s no reason why it should be adversaries. We’re all just people on planet Earth.

Speaker: 2
02:10:58

Ai mean, like yeah. I Ai. Together. I we like, I I honestly wish that was true. Oh, that’d be wonderful. Sai amazing.

Speaker: 1
02:11:05

Maybe that’s what AI brings about. Maybe AI Be a hope. Maybe super intelligence realizes, hey, you fucking apes, you territorial apes with thermonuclear weapons. How about you shut the fuck up? You guys are doing the dumbest thing of all ai, and you’re being manipulated by a small group of people that are profiting in insane ways off of your misery.

Speaker: 1
02:11:26

So let’s just cut the shah. And figure a way to actually equitably share resources because that’s the big thing. So You’re all stealing from the earth, but some people stole first sana those people are now controlling all the fucking money. How about we stop that?

Speaker: 0
02:11:44

Wow. We just we covered a lot of ground there.

Speaker: 1
02:11:45

Well, that’s what I would do if I was if I was superintelligence. So so But it stopped all that.

Speaker: 2
02:11:50

That actually is, like sai this is not, like, relevant to the risk stuff or to the whatever at all, but it’s just interesting. So there’s there’s actually theories, like, in the same way that there’s theories around, power seeking and stuff around around superintelligence, there’s theories around, like, how superintelligences do deals with each other.

Speaker: 2
02:12:07

Right? And you actually like, you have this intuition that which is exactly right, which is that, hey, two superintelligences, like, actual legit superintelligences should never actually, like, fight each other destructively in the real world. Right? Like, that seems weird. That shouldn’t happen because they’re they’re so smart.

Speaker: 2
02:12:24

And in fact, like, there’s theories around they can they can kind of do perfect deals with each other based on, like, if we’re two superintelligences, I can kind of assess, like, how powerful you are. You can assess how powerful I am and we can we can actually, like, we can actually decide, like, well, well, if we did fight a war against each other, like, you would have this chance of winning.

Speaker: 2
02:12:47

I would have that chance of winning. And so let’s

Speaker: 1
02:12:49

just Well, it would sai instantaneously that there’s no benefit in that. And, also, it would know something that we all know, which is the rising tide lifts all boats. But the problem is the people that already have yachts, they don’t give a fuck about your boat. Like, hey. Hey. Hey. That water’s mine.

Speaker: 1
02:13:03

In fact, you shouldn’t even have water.

Speaker: 0
02:13:05

Well, hopefully, it’s so positive some, right, that even they enjoy the benefits. But but, I mean, you’re right. And this is the issue right now. And one of the the, like, the nice things too is as you as you build up your your ratchet of AI capabilities, it does start to open some opportunities for actual, like, trust but verify.

Speaker: 0
02:13:20

Right? Which is something that we can’t do right now. It’s not like with nuclear stockpiles where we’ve had some success in some some context with, like, enforcing, you know, treaties and stuff like that, sending inspectors in and all that. With with AI right now, like, how can you actually prove that, like, some international agreement on the use of AI is being observed?

Speaker: 0
02:13:40

Even if we figure out how to control these systems, how can we make sure that, you know, China is baking in those that those control mechanisms into their training runs and that we are, and how can we prove it to each other without having total access to the compute stack? We don’t really have a solution for that. There are all kinds of programs like this, like, flex head thing.

Speaker: 0
02:13:59

But anyway, those those are not gonna be online by, like, 2027. And so one hope is really

Speaker: 2
02:14:04

good that people are working on them. Because, like,

Speaker: 0
02:14:06

for for sure

Speaker: 2
02:14:06

You wanna you wanna, like, you wanna be positioned for catastrophic success. Like, what if something great happens and, ai, or we have more time or or whatever? You wanna be working on this stuff that that allows this kind of this kind of control or oversight that that’s kind of hands off where, you know, in theory, you can you can give you can hand over GPUs to an adversary inside this like box with these encryption things.

Speaker: 2
02:14:30

The people we’ve spoken to in in these in the spaces that actually try to, like, break into boxes like this are like, well, probably not gonna work. But who knows? It ai.

Speaker: 0
02:14:41

Yeah. The so the hope is that as you build up your AI capabilities, basically, it starts to create solutions. So it starts to create ways for, you know, two countries to verifiably adhere to some kind of international agreement.

Speaker: 2
02:14:52

Yep.

Speaker: 0
02:14:53

Or to find, like you said, like, paths for de escalation. That’s a sort of thing that that we actually could get to. And that’s one of the the strong positives of where you could end up going.

Speaker: 1
02:15:02

That would be what’s really fascinating. Yeah. Artificial general intelligence becomes super intelligence and it immediately weeds out all the corruption. Because, hey, this is the problem. Like a massive doge in

Speaker: 0
02:15:12

the sky.

Speaker: 1
02:15:13

Exactly. Like, we figured it out. You guys are all criminals and expose it to all the people. Like, these people that are your leaders have been profiting, and they do it on purpose, and this is how they’re doing it, and this is how they’re manipulating you. Yeah. And these are all the lies that they’ve told.

Speaker: 0
02:15:27

I’m sure that list is pretty Woah. It almost be scary. If you could x-ray the world right now and, like, see all the

Speaker: 1
02:15:35

You’d want an MRI. You sana get, like, down to the tissue. Yeah.

Speaker: 0
02:15:38

You’re right. You’re probably Yeah.

Speaker: 1
02:15:39

You wanna get down to the cellular level.

Speaker: 0
02:15:41

But, like,

Speaker: 2
02:15:42

it Ai mean

Speaker: 1
02:15:43

It would be offshore accounts. Then you’d start finding out the show companies.

Speaker: 2
02:15:47

There would be so meh, like, the stuff that comes out, you know, from just randomly. Right? Just just random shit that comes out, like, yeah. The the, I forget that that that, like, Argentina. I think you were what you were talking about, like, the the Argentinian thing that that, came out a few years ago around all the oligarchs and

Speaker: 0
02:16:07

the the The Meryl Streep thing. Yeah. The Yeah.

Speaker: 2
02:16:09

Yeah. Yeah.

Speaker: 0
02:16:09

The Meryl Streep? Yeah. The,

Speaker: 2
02:16:11

ai

Speaker: 0
02:16:11

laundromat there. The The

Speaker: 2
02:16:12

laundromat movie. Have you ever seen that?

Speaker: 0
02:16:14

Panama Papers.

Speaker: 2
02:16:15

The Panama Papers.

Speaker: 1
02:16:16

I never saw that.

Speaker: 2
02:16:17

No? Oh, it’s

Speaker: 0
02:16:17

a good movie.

Speaker: 1
02:16:18

Yeah. Is it called the Panama Papers, the movie?

Speaker: 0
02:16:20

It’s called the laundromat.

Speaker: 1
02:16:21

Yeah.

Speaker: 0
02:16:22

Oh, okay.

Speaker: 2
02:16:22

Yeah. You you meh the Panama Papers?

Speaker: 1
02:16:24

Do you know? Roughly.

Speaker: 2
02:16:25

Yeah. It’s ai all the all the oligarchs, like, stashing their, stashing their their cash.

Speaker: 0
02:16:30

Like offshore tax haven stuff. Yeah. Yeah. It’s like

Speaker: 2
02:16:34

And, and, like, some lawyer or some someone basically blew it wide open, and so you got to see, like, every every, like, oligarch and rich person’s, like, you, like, financial shit. Like, every once in a while, right, the world gets just, like, a flash of, like, oh, here’s what’s going on under the surface.

Speaker: 0
02:16:51

Yeah. It’s ai,

Speaker: 2
02:16:51

oh, fuck. And then we ai, like, go back

Speaker: 0
02:16:54

to sleep. What what’s fascinating is, like, the unhydables. Right? The the little things that can’t help but give away what is what is happening. Like, you think about this in AI quite a ai. You know, some things that are hard for companies to hide is, like, they’ll have a a job posting that they’ll put they they’ve got to advertise to to recruit.

Speaker: 0
02:17:10

So you’ll see, like, oh, interesting. Like, oh, OpenAI is looking to hire some people from hedge funds. Yeah. Ai, I wonder what that means. I wonder what that implies.

Speaker: 0
02:17:20

Like, if you think about all of the leaders in ai of the AI space, think about the Medallion Fund, for example. Right? This is, like, super successful hedge fund, very like, what the meh who broke the

Speaker: 2
02:17:30

The man who broke the market.

Speaker: 0
02:17:31

Man who broke the market is the famous book about the the founder of the Medallion Fund. And, like, this is basically, like, a fund that they make, like, ridiculous, like, $5,000,000,000 returns every year ai of guaranteed. So so much so they have to cap how much they invest in the market because they would otherwise, like, move the market too much, like, affect it.

Speaker: 0
02:17:49

And The fucked up

Speaker: 2
02:17:51

thing about, like, the way they trade and this is so this is, like, twenty year old information, but it’s still indicative because, like, you can’t get current information about their strategies. But one of the things that they were the first to kinda go for and and figure out is they were ai, okay.

Speaker: 2
02:18:07

Our they basically were the first to kinda build what was at the time as much as possible an AI that autonomously did trading at at, like, great speak, and it had, like, no human oversight and just worked on its own. And what what they found was the strategies that were the most successful were the ones that humans understood the least.

Speaker: 2
02:18:29

Because if you have a strategy that a human can understand, some human’s gonna go and figure out that strategy and trade against you. Whereas if you have the kind of the balls to go ai, oh, this thing is doing some weird shit that I cannot understand no matter how hard I ai, let’s just fucking YOLO and trust it and, like, and make it work.

Speaker: 2
02:18:50

If you have all the stuff debugged and if you have the whole if the whole system is working right, that’s where your biggest successes are.

Speaker: 1
02:18:57

So what kind of strategies are you talking about?

Speaker: 2
02:18:59

Oh, I I mean, like, I so Ai I I don’t I don’t know specific I can give you an analogy. Yeah.

Speaker: 0
02:19:03

Yeah. Maybe this will this will, like so how are how are AI systems trained today?

Speaker: 2
02:19:07

Right? Oh, so just as a trading strategy. Sorry. I’ll just sai ai, I think yeah. You bought, like, some as an example, like, you buy, like, you buy the stock, the Thursday after the full moon and then sell it, like, like, the Friday after the new moon or some, like, random shit like that that it’s, like, why does that even work?

Speaker: 2
02:19:27

Like, why would why would that even work?

Speaker: 0
02:19:29

So so to to, like, to to to sort of, explain why these these, strategies work better. If if you think about how AI systems are trained today, you basically very roughly, you start with this blob of numbers that’s called a a model. And you feed it input, you get an output. If the output you get is no good, if you don’t like the output, you basically fuck around with all those numbers, change them a little bit, and then you try again.

Speaker: 0
02:19:54

You’re like, oh, okay. That’s better. And you repeat that process over and over and over with different inputs and outputs. And eventually, those numbers, that mysterious ball of numbers starts to behave well. It starts to make good predictions or generate good outputs. Now, you don’t know why that is.

Speaker: 0
02:20:10

You you just know that it does a good job at least where you’ve tested it. Now, if you slightly change what you test it on, suddenly you could discover, oh, shah, it’s catastrophically failing at that thing. These things are very brittle in that way, and that’s part of the reason why ChatGPT will just, like, completely go on a psycho binge fest every once in a while if you give it a prompt that has, like, too many exclamation points and asterisks in it or something.

Speaker: 0
02:20:33

Like, these these systems are weirdly weirdly brittle in that way. But applied to investment strategies, if all you’re doing is saying, like, optimize for, like, optimize for returns, Give it in give it inputs. Give it

Speaker: 2
02:20:45

Make me more money by the end of the day. It’s like a Yeah. An easy goal. Like, it’s a an very, like, clear cut goal. Right? You can give a machine.

Speaker: 0
02:20:52

So you end up with a a machine that gives you these very like, it is a very weird strategy. This ball of numbers isn’t human understandable. It’s just really fucking good at making money. And why is it really fucking good at making money? I don’t know. I mean, it just kinda does the thing and in making money, I don’t ask too many questions.

Speaker: 0
02:21:08

That’s kinda like the so so when you try to impose on that system human interpretability, you pay what in the AI world is known as the interpretability tax. Basically, you’re adding another constraint, and the minute you start to do that, you’re forcing it to optimize for something other than pure rewards.

Speaker: 1
02:21:25

Like doctors using AI to diagnose diseases are less effective than the chatbot on its own.

Speaker: 0
02:21:29

That’s actually related. Right? That’s related. If you want if you want that system to get good at diagnosis, that’s one thing. Okay. Just fucking make it good at diagnosis. If you sana it to be good at diagnosis and to produce explanations that a good doctor Well, that’s

Speaker: 2
02:21:42

a doctor.

Speaker: 0
02:21:43

Yeah. We’ll go, like, okay, I’ll use that. Well, great. But guess what? Now you’re spending some of that precious compute on, something other than just the thing you’re trying to optimize for. And so now that’s gonna come at a cost of the actual performance of the system. Mhmm.

Speaker: 0
02:21:55

And so if you are gonna optimize, like, the fuck out of making money, you’re gonna necessarily deoptimize the fuck out of anything else, including being able to even understand what that system is doing. And that’s ai of, like, at the heart of a lot of the ai of big picture AI strategy stuff is people are wondering ai, how much how much, interpretability tax am I willing to pay here?

Speaker: 0
02:22:16

And how much does it cost? And everyone’s willing to go

Speaker: 2
02:22:18

a little bit further and a little further.

Speaker: 0
02:22:20

And sai so Ai actually had a a paper where they or ai, I guess, a blog post where they talked about this. And they were like, look, right now, we have this, this, essentially, this this, like, thought stream that our model produces on the way to generating its final output.

Speaker: 0
02:22:38

And that thought stream, like, we don’t sana touch it to make it, like, interpretable, to make it make sense. Because if we do that, then essentially, it’ll be optimized to convince us of whatever the thing is that we want it to do to behave well.

Speaker: 2
02:22:53

So it’s ai you’ve if you’ve used, like, one of like, an OpenAI model recently, right, like o three or or whatever, it’s doing its thinking before it starts, like, outputting the answer. And so that thinking is yeah. We’re supposed to, like, be able to read that and kinda get it.

Speaker: 2
02:23:09

But also, we we don’t wanna make it too legible because if we make it too legible, it’s gonna be optimized to be legible and and to be convincing rather than

Speaker: 0
02:23:21

To fool us, basically. Sai mean

Speaker: 2
02:23:23

Yeah. Exactly.

Speaker: 1
02:23:23

Oh, Jesus Christ.

Speaker: 0
02:23:25

But that that’s so that’s the investment.

Speaker: 1
02:23:27

Making me less comfortable than I thought

Speaker: 2
02:23:28

you would. Even even after

Speaker: 1
02:23:31

I knew it could be Jamie and I were talking about it before, like, how bad are they gonna freak us out? You’re freaking me out more.

Speaker: 0
02:23:37

Well, I mean okay. So I I do wanna highlight. So so the game plan right now on the positive end, let’s see how this works. Jesus.

Speaker: 2
02:23:43

Jamie, do you feel

Speaker: 1
02:23:44

the same way?

Speaker: 0
02:23:47

Yeah.

Speaker: 2
02:23:49

I mean, I I have articles I didn’t bring up that are supporting some of this stuff. Like, today, China quietly made some chip that they shouldn’t been able to do because of the sanctions.

Speaker: 1
02:23:58

Oh, pull that out.

Speaker: 2
02:23:58

And it’s basically based off of their just sheer will.

Speaker: 0
02:24:02

Okay. So there’s There’s

Speaker: 1
02:24:02

some Ai see.

Speaker: 0
02:24:03

There there’s good news on that one at least. This is a ai kind of a bullshit strategy that they’re using. So there there’s okay. So when you make these insane, like, five nanometer

Speaker: 1
02:24:13

Let’s read that for people just listening. Ai quietly cracks ai nanometer. Meh. That’s sai without EUV. What is EUV? Extreme ultraviolet. How SMIC defied the chip sanctions with sheer engineering.

Speaker: 0
02:24:28

Yeah. Sai this is ai,

Speaker: 1
02:24:30

An espionage.

Speaker: 0
02:24:32

Sai there there’s but actually, though so so there’s, a good reason that a lot of these articles, are making it seem like this is a huge breakthrough. It actually isn’t as big as it seems. So so okay. If you wanna make really, really, really, really exquisite Look

Speaker: 1
02:24:49

at this quote. Mhmm. Moore’s Law didn’t die, Ho wrote. It moved to Shanghai. Instead of giving up, Ai grinding its way forward layer by layer, pixel by pixel. The future of chips may no longer be written by who holds the best tools, but by who refuses to stop building. The rules are changing and DUV just lit the fuse. Boy.

Speaker: 0
02:25:09

Yeah. So

Speaker: 1
02:25:10

so Ai mean Who wrote that arya? You can you can Gizmo China. There it is.

Speaker: 0
02:25:14

Yeah. You can view that as, like, Chinese propaganda in a way, actually. So what what what’s actually going on here is, if so the Chinese only have these deep ultraviolet lithography machines. That’s ai a lot of syllables. But it’s just a glorified chip, like, it’s a giant laser that that zaps your chips to, like, make the chips when when you’re fabbing them.

Speaker: 2
02:25:35

Sai we’re talking about, like, you you do these atomic layer patterns on the chips and shah. And, like, what this UV thing does is it, like, fires, like, a a really high ai laser beam. Yeah. They attach to the head of sharks that just shoot at the chips.

Speaker: 0
02:25:49

Yeah. Yeah. Sorry.

Speaker: 2
02:25:49

That was, like, an Austin Powers. Anyway, they they’ll, like, shoot it at the chips and, that causes depending on how the the thing is is designed, they’ll, like, have a liquid layer of the stuff that’s gonna go on the chip. The UV is really, really tight and causes it exactly causes it to bryden. And then they wash off the liquid, and they do it all over again.

Speaker: 0
02:26:10

Like, basically, this is just imprinting a pattern on a chip.

Speaker: 2
02:26:12

Sai whatever whatever it is ai printer.

Speaker: 0
02:26:14

Yeah. So that’s it. And so the the exquisite machines that we get to use or that they get to use in Taiwan are called extreme ultraviolet lithography machines. These are those crazy lasers. The ones that China can use because we’ve prevented them from getting any of those extreme ultraviolet lithography machines.

Speaker: 0
02:26:30

The ones China uses are previous generation machines called deep ai, and they can’t actually make chips as high resolution as ours. So what they do is and and what this article is about is they basically take the same chip, they zap it once with DUV, and then they gotta pass it through again, zap it again to to up to get closer to the level of resolution we get in one pass with our exquisite machine.

Speaker: 0
02:26:54

Now the problem with that is you gotta pass the same chip through multiple times, which slows down your whole process. It means your yields at the end of the day are lower.

Speaker: 2
02:27:02

It adds errors. Yep.

Speaker: 0
02:27:03

Yeah. Which makes it more costly. We’ve known that this is a thing. It’s called multi patterning. It’s been a thing for a long time. There’s nothing new under the sun here. China has been doing this for a ai, but, so so it’s not actually a huge shock that this is happening.

Speaker: 0
02:27:16

The question is always, when you look at an announcement like this, yields, yields, yields. How like, what percentage of the chips coming out are actually usable and how fast are they they coming out? That determines, like, is it actually competitive?

Speaker: 2
02:27:29

And that article too, like, this ties into the propaganda stuff we were talking about. Right? If you read an article like that, you could be forgiven for going, like, oh, meh. Our expert controls, like, just aren’t working, so we might as well just give them up when in reality because, like, you look at the source.

Speaker: 2
02:27:45

Like, the the and this is and this is how you know that also this is, like, this is one of their propaganda things. It’s ai, you look at Chinese news sources, what are they saying? What are the beats that that are, like, common? And, you know, just because of the way their media is set up, totally different from ram, and we’re not used to analyzing things this way.

Speaker: 2
02:28:01

But when you read something in, like, the China Morning Post or, like, the Global Times or Xinhua and and a few different places like this and it’s the same beats coming back, you know that someone was handed a brief. And it’s ai, you gotta hit this point, this point, this point, and, yep, they’re gonna find a way to work that into the news cycle over there. Jeez.

Speaker: 0
02:28:22

And it’s also, like, slightly true. Like, it yeah. They they did manage to make chips at, like, five nanometers. Cool.

Speaker: 2
02:28:27

It’s not a lie. It’s just it’s the same, like, propaganda technique. Right? You’re not most of the time, you’re not gonna confabulate something out of nothing. Rather, like, you start with the truth, and then you push it just a little bit. Just a little bit. And you keep pushing, pushing, pushing.

Speaker: 1
02:28:42

Wow. How much is this administration aware of all the things that you’re talking about?

Speaker: 0
02:28:48

So so they’re actually, they they’ve got some right now, they’re they’re in the middle of of, like, staffing up some of the key positions because it’s a new administration still, and this is such a technical domain. They’ve got people there who ai, like like, at at the kind of work level

Speaker: 2
02:29:03

or They have some

Speaker: 0
02:29:03

people. Shah.

Speaker: 2
02:29:04

They have some people now, yeah, in in places, like, especially in some of the export control offices now who are some of the best in the business.

Speaker: 0
02:29:13

Yeah. And and and that’s that’s really important. Like, this is a it’s a weird space because so when you wanna actually recruit for for, you know, government roles in the space, it’s really fucking hard because you’re competing against, like, an open AI, like, very, like, low range salaries, ai, half a million dollars a year.

Speaker: 0
02:29:31

The government pay scale, needless to say, is, like, not Sai mean, Elon worked for free. He can he can afford to, but but still taking a lot of time out of his his day. There’s a lot of people like that who are like, you know, they they can’t justify the cost.

Speaker: 2
02:29:46

Like, they can’t afford they literally can’t afford to work for the government. For the government.

Speaker: 0
02:29:50

Why would they? Exactly.

Speaker: 1
02:29:52

So Whereas Ai like, you don’t have a choice, bitch.

Speaker: 0
02:29:55

Yeah. Yeah. And that’s what they say. That’s the the Yeah. The Chinese word for bitch is really biting. Like, if you if you translated that, it would be a real real sting.

Speaker: 2
02:30:02

I’m sure. It’s kind

Speaker: 1
02:30:04

of crazy because it seems almost impossible to compete with that. I mean, that’s, like, the perfect setup. If you wanted to control everything and you wanted to optimize everything for the state, that’s the way you would do it.

Speaker: 2
02:30:14

Yeah. But it’s also easier to to make errors and be wrong footed in that way. And also, the basically, that system only works if the dictator at the top is just, like, very competent. Because the the the risk always with a dictatorship ai, like, oh, the dictator turns over and now it’s, like, just a total dumbass. And now you’re the whole thing.

Speaker: 0
02:30:35

And he surrounds himself. I mean, look, we just talked about, like, information echo chambers online and stuff. The ultimate information echo chamber is the one around Xi Jinping right now.

Speaker: 1
02:30:43

Because no one wants to give him bad news.

Speaker: 0
02:30:45

Yeah. I’m I’m not gonna.

Speaker: 2
02:30:46

Right. I don’t Right.

Speaker: 0
02:30:48

You know? Like and and so and you have this and and this is what you keep seeing. Right? Is, like, with these, like, like, provincial level debt in in Ai, right, which is so awful. It’s, like, people trying to hide money under imaginary money money under imaginary mattresses and then hiding those mattresses under bigger mattresses until eventually, like, no one knows where the liability is and that and then you get a massive property bubble and any number of other bubbles that are due to to pop anytime.

Speaker: 0
02:31:14

Right?

Speaker: 2
02:31:15

The longer it goes on, like, the the the more, like, stuff gets swirled away. Like, there’s there’s actually, like, a story from the Soviet Union that’s that always, like, gets me, which is so, Stalin obviously, like, purged and killed, like, millions of people in the nineteen thirties.

Speaker: 2
02:31:28

Right? So by the nineteen eighties, the ruling politburo of the Soviet Union, obviously, like, things have been different, generations had turned over and all this stuff. But those people, the most powerful people in the USSR, could not figure out what had happened to their own families during the purchase.

Speaker: 2
02:31:50

Like, the information was just nowhere to be found because this the machine of the state was just, like, so aligned around, like, we just, like, we just gotta kill as many fucking people as we can, ai, turn it over and then hide the evidence of it and then kill the people who killed the people and then kill those people who killed those people.

Speaker: 0
02:32:09

Ai, it also wasn’t like just kill the people. Right? It was ai Sai did a lot of, like, kind of Gulag Archipelago style. It’s it’s about labor. Right? Because the the fundamentals of the economy are so shit that you basically have to find a way to justify putting people in labor camps and ai.

Speaker: 2
02:32:22

That’s right. But it but it was very much ai you grind mostly or largely you grind them to death and basically they they’ve gone away and you burn the records of of it happening. So

Speaker: 0
02:32:31

literally little towns, right, that disappeared. Like, people who are like, there’s no record or there’s like or usually the way you know about it is there’s ai one dude. And it’s like this one dude has a very precarious escape story. And it’s like if if literally this dude didn’t get away, you wouldn’t know about the entire town that was, like, wiped out. Yeah. It’s crazy.

Speaker: 1
02:32:47

Jesus Christ.

Speaker: 2
02:32:49

Yeah. The stuff that, like Apart

Speaker: 0
02:32:50

from that, though, communism works really well.

Speaker: 2
02:32:52

Communism’s great.

Speaker: 1
02:32:53

It just hasn’t been done right.

Speaker: 0
02:32:54

That’s right.

Speaker: 1
02:32:54

I feel like we could do it right.

Speaker: 0
02:32:56

And we have a 10 page plan, that we yeah.

Speaker: 1
02:32:58

We came real close. We came real close. So close. Yeah. Yeah. And that’s what the blue, no matter who people, don’t really totally understand. Like, we’re not even talking about political parties. We’re talking about power structures. Yeah. We came close to a terrifying power structure.

Speaker: 2
02:33:13

Mhmm.

Speaker: 1
02:33:13

And it was willing to just do whatever it could to keep it rolling, and it was rolling for four years. It was rolling for four years without anyone at the helm.

Speaker: 0
02:33:22

Show me the incentives. Right? I mean, that’s that’s always the the question, like Yeah. One of the things is too, like, when you have such a big structure that’s overseeing such complexity, right, obviously, a lot of stuff can hide in that structure. And it’s it’s actually it’s it’s not unrelated to the whole AI picture.

Speaker: 0
02:33:37

Like, the the you need there’s only so much compute that you have at the top of that system that you can spend. Right? As the president, as a cabinet member, like, whatever. You you can’t look over everyone’s shoulder and, like, do their homework. You can’t do founder mode all the way down in all the branches and all the, like, action officers and all that shit.

Speaker: 0
02:33:58

That’s not gonna happen, which means you’re spending five seconds thinking about how to unfuck some part of the government. But then the, like, you know, corrupt people who run their own fiefdoms there spend every day trying to

Speaker: 2
02:34:10

It’s like their whole life survival, you know. To, like, justify themselves.

Speaker: 1
02:34:12

Yeah. Yeah. Well, that’s the USAID dilemma. Yeah. Yeah. Because they’re uncovering oh, this just insane amount of NGOs. Like, where is this going? We talked about this the other day, but India has an NGO for every 600 people. Wait. What?

Speaker: 0
02:34:27

Yes. We need more NGOs.

Speaker: 1
02:34:29

There’s 3,300,000 NGOs What? In India.

Speaker: 0
02:34:34

Do they do they, like, bucket like, what what are the categories that they fall into?

Speaker: 2
02:34:38

Like

Speaker: 1
02:34:38

Who fucking knows? That’s part of the problem that one of the things that Elon had found is that there’s money that just goes out with no receipts. And Yeah. It’s billions of dollars.

Speaker: 2
02:34:47

We need to take that further. We need an NGO for every person in India.

Speaker: 1
02:34:50

We will get that

Speaker: 2
02:34:50

eventually. It’s look. It’s the exponential trend. We’re gonna it’s just like Ai. The number of NGOs is is, like, doubling every year.

Speaker: 1
02:34:58

Making incredible progress and bullshit.

Speaker: 2
02:35:01

It’s the the NGO scaling law, the the bullshit scaling law.

Speaker: 1
02:35:04

Well, it’s just unfortunately, it’s Republicans doing it. Right? So it’s unfortunately the Democrats are gonna oppose it even if it’s showing that there’s, like, insane waste of your tax dollars.

Speaker: 0
02:35:14

I I thought some of the Doge stuff was pretty ai. It ai what like, there’s congressional support at least on both ai. No?

Speaker: 1
02:35:20

Well, sort of. You know, Ai think the real issue is in dismantling a lot of these programs that you can point to some good some of these programs do. Yeah. The problem is, like, some of them are so overwhelmed overwhelmed with fraud and waste that it’s ai to keep them active in the state they arya, ai, what do you do?

Speaker: 1
02:35:40

Do you rip the band aid off and start from scratch? Like, what do you do with the Department of Education? You say, why are we number 39 when we were number one? Like, what did you guys do with all that money?

Speaker: 0
02:35:50

Yeah.

Speaker: 2
02:35:50

So the To create problems? There’s this idea in software engineering. Actually, I was talking to one of our employees about this, which is like refactoring. Right? So when you’re writing, like, a bunch of software, it gets really, really big and hairy and complicated and there’s all kinds of, like, dumbass shit and there’s all kinds of waste that happens in that in that code base.

Speaker: 2
02:36:09

There’s this thing that you do every, you know, every, like, few months is you just think called refactoring, which is ai you go, like, okay. We have, you know, 10 different things that are trying to do the same thing. Let’s get rid of nine of those things and just, like, rewrite it as the one thing.

Speaker: 2
02:36:27

So there’s ai a cleanup and refresh cycle that has to happen whenever you’re developing a big complex thing that does a lot of stuff. The thing is, like, the the US government at every level has basically never done a refactoring of itself. And so the the way that problems get solved is you’re ai, well, we need to do this new thing.

Speaker: 2
02:36:48

So we’re just sana, like, stick on another appendage to the beast and and get that appendage to do that new thing. And, like, that’s been going on for two hundred and fifty years. So we end up with, like, this beast that has a lot of appendages, many of which do incredibly duplicate of and wasteful stuff that if you were a software engineer, just, like, not politically, just objectively looking at that as a system, you’d go, like, oh, this is a catastrophe.

Speaker: 2
02:37:17

And, like, we have processes that the we understand how what needs to be done to fix that. You have to refactor.

Speaker: 1
02:37:26

But they haven’t done that. Hence, the $36,000,000,000,000 of debt.

Speaker: 0
02:37:30

It it’s a problem too, though, in in all like, when you’re a big enough organization, you run into this problem. Like, Google has this problem famously. Facebook ad like, we we have friends, like like, Ai. So Jason’s the the the guy you you spoke to about that. Like, so so he’s he’s ai a arya engineer.

Speaker: 0
02:37:46

So so he works in, like, relatively small code bases, and he he, like, you know, can hold the whole code base in his head at a time. But when you move over to, you know, Google to Facebook, like, all of a bryden, this gargantuan code base starts to look more like the complexity of the US government, just like ai, you know, very roughly in terms of scale.

Speaker: 0
02:38:03

Right? So now you’re like, okay. Well, we wanna add functionality, and but so we wanna incentivize our our teams to to build products that are gonna be valuable. And the challenge is the best way to incentivize that is to to give people incentives to build new functionality, not not to refactor. There’s no glory.

Speaker: 0
02:38:21

If you work at Google, there’s no glory in refactoring. If you work at Meta, there’s no glory in refactoring. Like, for

Speaker: 2
02:38:26

a friends of ours There’s no promotion. Right? There’s no Exactly.

Speaker: 0
02:38:29

Yeah. You have to be a product owner. So you have to, like, invent the next Gmail. You gotta invent the next Google Calendar. You gotta do the next, you know, Meh app. That’s how you get promoted. And so you’ve got, like, this attitude where you you go into there and you’re just, like, let me crank this stuff out and, like, try to ignore all the shit in the codebase.

Speaker: 0
02:38:46

No glory in there. And what you’re left with is this, like, a, this Frankenstein monster of a codebase that you just keep stapling more shit onto. And then, b, this massive graveyard of apps that never get used. This is, like, the thing Google is famous for. If you ever see, like, the Google graveyard of apps, ai, all these things that you’re,

Speaker: 2
02:39:03

oh, yeah.

Speaker: 0
02:39:03

I guess Sai kinda remember Google Meet.

Speaker: 2
02:39:04

Somebody made their career off of launching that shit and then pieced out and it died. That’s that’s, like, the incentive structure at Google, unfortunately.

Speaker: 0
02:39:13

And and it’s it’s also ai of the only way to I ai, or ram it’s probably not. But, in the world where humans are doing the oversight, that’s your limitation. Right? You got some people at the top who have a limited bandwidth and compute that they can dedicate to, like, hunting down the problems. AI agents might actually solve that.

Speaker: 0
02:39:29

Right? You could, like, actually have the, you know, sai sort of autonomous AI agent that is the autonomous uproot all the things, and do that refactor. You could get way more efficient organizations out of that. I mean, like, thinking about, like, government corruption and waste and fraud, that’s the ai of thing where those sorts of tools could be radically empowering, but you gotta, you know, you gotta get them to work right and and for you.

Speaker: 1
02:39:56

We’ve given us a lot to think about. Is there anything more? Should we wrap this up?

Speaker: 0
02:40:02

If if we’ve made you sufficiently uncomfortable Ai

Speaker: 1
02:40:04

super uncomfortable. Was was the butt Very uneasy.

Speaker: 0
02:40:06

Was the butt tap too much at the beginning? Or was

Speaker: 2
02:40:08

it No.

Speaker: 1
02:40:08

Ai was fine.

Speaker: 2
02:40:09

No. That was fine? All of

Speaker: 1
02:40:10

it was weird. It’s, it’s just, you know, I always try to look at some non cynical way out of this.

Speaker: 2
02:40:19

Well, the the thing is, like, there are paths out. We talked about this and the fact that a lot of these problems are just us tripping on our own feet. So if we can just, like, unfuck ourselves a little bit, we are ram we can unleash a lot of this stuff. And as long as we understand also the bar that security has to hit and how important that is, like, we actually can put all this stuff together.

Speaker: 2
02:40:43

We have the capacity. It it all exists. It just needs to actually get aligned and around around an initiative, and we have to be able to reach out and talk.

Speaker: 0
02:40:51

On the control side, there’s also a world where and this is actually, like, if you talk to the labs, this is what they’re actually planning to do. But it’s sai question of how methodically and carefully they can do this. The plan is to ratchet up capabilities and then and scale, in other words.

Speaker: 0
02:41:04

And then as you do that, you start to use your AI systems, your increasingly clever and powerful AI systems to do research on technical control. So you basically build the next generation of systems. You try to get that generation systems to help you just inch forward a little bit more on the capability side.

Speaker: 0
02:41:21

It’s very precarious balance, but it’s something that, like, at least isn’t insane on the face of it.

Speaker: 2
02:41:26

Mhmm.

Speaker: 0
02:41:26

And fortunately, I mean, is the the default path, like, or the labs are talking about the that can ai control element as being a key pillar of their strategy.

Speaker: 1
02:41:36

Conversations are not happening in China. So what do you think they’re doing to keep AI from uprooting their system?

Speaker: 0
02:41:42

So so that’s interesting. There’s because I would imagine

Speaker: 1
02:41:45

they don’t wanna lose control.

Speaker: 0
02:41:46

Right. There’s a lot of ambiguity and uncertainty about what’s going on in China. So there’s been a lot of, like, track 1.5, track two diplomacy, basically, where you have, you know, non government guys from one side talk to government guys from this sai or talk to nongovernment, from the other side and kind of start to align on, like, okay.

Speaker: 0
02:42:01

What do we think the issues are? You know, the the Chinese are there are a lot of, like, freaked out Chinese researchers Yeah. And and who’ve come out publicly and said, hey. Like, we’re really concerned about this whole loss of control thing. They’re public statements and all that.

Speaker: 0
02:42:14

You also have to be mindful that any statement the CCP puts out is a statement they want you to see. Right. So when they sai, like, oh, yeah. We’re really worried about this thing. It’s genuinely hard to assess what that even means.

Speaker: 0
02:42:24

But they’re like, as you as you start to build these systems, we expect you’re gonna see some evidence of this shit before. And it’s not necessarily it’s not like you’re gonna build the system necessarily and have it take over the world. Like Mhmm. What we see with agents.

Speaker: 2
02:42:39

Yeah. So I was actually gonna add it. I think there’s really, really good point. And, and something where, like, open source AI is, like, even, you know, could potentially have an effect here. So a lot of a couple of the major labs, ai, opening ionthropic, I think, came out recently and said, like, look, we were on the cusp.

Speaker: 2
02:43:00

Our systems are on the cusp of being able to help a total novice, ai, somewhat no experience, develop and deploy and release a known biological threat. And that’s like that’s something we’re gonna have to grapple with over the next few months. And eventually, like, capabilities like this, not necessarily just ai, but also cyber and other areas, are gonna come out in open source.

Speaker: 2
02:43:24

And when they come out in open source

Speaker: 0
02:43:26

Basically for anybody download.

Speaker: 2
02:43:27

For anybody to download and use, when they come out in open source, like, you actually start to see some, like, some some things happen. Ai, some some incidents. Like some some major hacks that were just done by, like, a random motherfucker who just wants to see the world burn, but that wakes us up to, like, oh, shit.

Speaker: 2
02:43:45

These things actually are powerful. I think one of the aspects also here is, we’re still in that post cold war honeymoon, many of us. Right? In that mentality, like, not everyone has, like, wrapped their heads around this stuff. And the like, what needs to happen is something that makes us go ai, oh, ram.

Speaker: 2
02:44:08

We act like, we weren’t even really trying this entire ai. Because this is ai this is the the nine eleven effect. This is the Pearl Harbor effect. Once you have a thing that aligns everyone around ai, oh, shah. This is real. We actually need to do it and we’re freaked out. We’re actually safer.

Speaker: 2
02:44:24

We’re safer when we’re all like, okay. Something important needs to happen.

Speaker: 1
02:44:30

Right. Instead of letting them just slowly chip away.

Speaker: 2
02:44:33

Exactly. And so we, like, we need to have some sort of shock and we probably will get some kind of shock, like, over the next few months, the way things are trending. And when that happens, then but I mean, like, it’s Or years

Speaker: 0
02:44:44

Or years? Or years?

Speaker: 2
02:44:45

No. That’s what makes you feel better. But because but because you have the potential for this open source, like, it’s probably gonna be, like, a survivable shock. Right? But but still a shock. And so let us actually realign around, like, okay, let’s actually fucking solve some problems for real.

Speaker: 2
02:45:01

And so putting together the groundwork, right, is is what we’re doing around, like, let’s let’s pre think a lot of this stuff sai that, like, if and when the shock comes

Speaker: 0
02:45:10

We have a break glass plan.

Speaker: 2
02:45:12

We have a we have a plan.

Speaker: 0
02:45:14

And the the loss of control stuff is similar. Like, you so one interesting thing that happens with AI agents today is they’ll, like, they’ll get any so so an AI agent will take a complex task that you give it, like, find meh, I don’t know, the, like, best sneakers for me ai, some shit like that, and they’ll break it down into a series of sai steps.

Speaker: 0
02:45:30

And then each of those steps, it’ll farm out to a version of itself, say, to execute autonomously. The more complex a task is, the more of those little sai steps there are in it. And so you can have an AI agent that nails, like, 99% of of those steps. But if it screws up just one, the whole thing is a flop. Right?

Speaker: 0
02:45:50

And so if you think about, like, the sort of, like, loss of control scenarios that a lot of people look at are, autonomous replication, like, the the model gets access to the Internet, copies itself onto servers and all that stuff. Those are very complex movements. If it screws up at any point along the way, that’s a tell ai, oh, shah, something’s happening there.

Speaker: 0
02:46:10

And you can start to think about, like, okay, well, what went wrong? We get another do. We get another try, and we can kind of learn from our mistakes. So there is this sort of ai this picture, you know, one camp goes, oh, well, we’re gonna kinda make the superintelligence in a vat, and then it’s it explodes out and we lose control over it.

Speaker: 0
02:46:27

That doesn’t necessarily seem like the default scenario right now. It seems like what we’re doing is scaling these systems. We might on hobble them with big capability jumps, but it’s also there’s a component of this that is a continuous process that lets us kind of get our arms around it in a more staged way.

Speaker: 0
02:46:44

That’s another thing that I think, is in our favor that we didn’t expect before and, as as a field, basically. And I think that’s that’s a good thing. Like, that helps you kind of detect these breakout attempts and do things about them.

Speaker: 1
02:46:57

Alright. I’m gonna bring this home. I’m freaked out. So thank you. Thanks for trying to make me feel better. I don’t think you did. But, I really appreciate you guys and appreciate your perspective because it’s very important and it’s very illuminating. You know, it really gives you a a sense of what’s going on.

Speaker: 1
02:47:13

And I think one of the things that you said that’s really important is, like, ai it sucks that we need a 09:11 moment or a Pearl Harbor moment to realize what’s happening sai we all come together. But, hopefully, slowly but surely through conversations like this, people realize what’s actually happening.

Speaker: 2
02:47:29

You need one of those moments, like, every generation. Like, that’s how you get contact with the truth, and it’s, like, it’s painful, but, like, the light’s on the other side.

Speaker: 1
02:47:38

Thank you.

Speaker: 2
02:47:39

Thank you

Speaker: 1
02:47:40

very much. Thank you. Thank you ai.

Transcribe, Translate, Analyze & Share

Join 170,000+ incredible people and teams saving 80% and more of their time and money. Rated 4.9 on G2 with the best AI video-to-text converter and AI audio-to-text converter, AI translation and analysis support for 100+ languages and dozens of file formats across audio, video and text.

Start your 7-day trial with 30 minutes of free transcription & AI analysis!

Trusted by 150,000+ incredible people and teams

More Affordable
1 %+
Transcription Accuracy
1 %+
Time Savings
1 %+
Supported Languages
1 +
Don’t Miss Out - ENDING SOON!

Get 93% Off With Speak's Start 2025 Right Deal 🎁🤯

For a limited time, save 93% on a fully loaded Speak plan. Start 2025 strong with a top-rated AI platform.