#2345 – Roman Yampolskiy

Dr. Roman Yampolskiy is a computer scientist, AI safety researcher, and professor at the University of Louisville. He’s the author of several books, including "Considerations on the AI Endgame," co-authored with Soenke Ziesche, and "AI: Unexplainable, Unpredictable, Uncontrollable." http://cecs.louisville.edu/ry/ Upgrade your wardrobe and save on @TrueClassic at https://trueclassic.com/rogan Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcribe, Translate, Analyze & Share

Join 170,000+ incredible people and teams saving 80% and more of their time and money. Rated 4.9 on G2 with the best AI video-to-text converter and AI audio-to-text converter, AI translation and analysis support for 100+ languages and dozens of file formats across audio, video and text.

Start your 7-day trial with 30 minutes of free transcription & AI analysis!


You can listen to #2345 – Roman Yampolskiy using Speak’s shareable media player:

#2345 – Roman Yampolskiy Podcast Episode Description

Dr. Roman Yampolskiy is a computer scientist, AI safety researcher, and professor at the University of Louisville. He’s the author of several books, including “Considerations on the AI Endgame,” co-authored with Soenke Ziesche, and “AI: Unexplainable, Unpredictable, Uncontrollable.” http://cecs.louisville.edu/ry/

Upgrade your wardrobe and save on @TrueClassic at https://trueclassic.com/rogan

Learn more about your ad choices. Visit podcastchoices.com/adchoices
This interactive media player was created automatically by Speak. Want to generate intelligent media players yourself? Sign up for Speak!

#2345 – Roman Yampolskiy Podcast Episode Top Keywords

#2345 - Roman Yampolskiy Word Cloud

#2345 – Roman Yampolskiy Podcast Episode Summary

Podcast Episode Summary

Key Points & Major Topics:
– The episode features Joe Rogan interviewing Dr. Roman Yampolskiy, a leading AI safety researcher, about the existential risks and philosophical implications of artificial intelligence (AI) and superintelligence.
– The conversation centers on the dangers of advanced AI, particularly the uncontrollability and unpredictability of superintelligent systems. Yampolskiy argues that once AI surpasses human intelligence, it will be impossible to guarantee safety or maintain control.
– They discuss the rapid pace of AI development, the lack of effective safety mechanisms, and the financial and competitive incentives driving AI labs and nations to push forward despite known risks.
– The episode explores the simulation hypothesis, the possibility that reality is a simulation created by advanced intelligences, and how this ties into the development of AI.
– Other topics include the impact of technology on human cognition and meaning, the potential for AI to manipulate or pacify humanity (e.g., through digital relationships or wireheading), and the societal consequences of technological unemployment and loss of purpose.

Important Guests/Speakers:
– Dr. Roman Yampolskiy: Computer scientist, AI safety expert, and author of “AI: Unexplainable, Unpredictable, Uncontrollable.” He is the primary guest, sharing his research and perspectives.
– Joe Rogan: Host, guiding the discussion and raising philosophical and practical questions.

Actionable Insights, Advice, or Tips:
– Yampolskiy urges greater public awareness and education about AI risks, advocating for slowing down AI development until robust safety solutions are found.
– He suggests that international cooperation is essential, as the risks are global and not limited to any one country or company.
– He calls for incentives (such as financial prizes) for developing provable AI safety mechanisms and encourages listening to leading experts who warn about existential risks.

Recurring Themes & Overall Messages:
– The inevitability and uncontrollability of superintelligent AI if current trends continue.
– The inadequacy of current safety research and the lack of consensus or solutions within the AI community.
– The philosophical implications of AI, including the simulation hypothesis and the nature of consciousness and meaning.
– The need for humility, caution, and collective action in the face of unprecedented technological power.

Summary:
This episode is a deep dive into the existential risks posed by advanced AI, featuring Dr. Roman Yampolskiy. The discussion covers the technical, ethical, and philosophical challenges of controlling superintelligent systems, the societal impacts of rapid technological change, and the possibility that reality itself may be a simulation. Yampolskiy stresses the urgency of developing effective safety measures, the importance of public engagement, and the need for global cooperation to prevent catastrophic outcomes. The overall message is a sober warning: without significant changes in approach, humanity faces profound and possibly existential risks from the unchecked development of AI.

This summary was created automatically by Speak. Want to transcribe, analyze and summarize yourself? Sign up for Speak!

#2345 – Roman Yampolskiy Podcast Episode Transcript (Unedited)

Speaker: 0
00:01

Joe Rogan podcast. Check it out.

Speaker: 1
00:03

The Joe Rogan experience.

Speaker: 0
00:06

Train by day, Joe Rogan podcast by night, all day. Well, thank you for doing this. I really appreciate it.

Speaker: 1
00:15

My pleasure. Thank you for inviting me.

Speaker: 0
00:17

This subject of the dangers of AI, it’s very interesting because I get two very different responses from people depending upon how invested they are in AI financially. The people that have AI companies or are part of some sort of AI group are all like, it’s gonna be a net positive for humanity.

Speaker: 0
00:43

I think, overall, we’re gonna have much better lives. It’s gonna be easier. Things will be cheaper. It’ll be easier to get along. And then I hear people like you, and I’m like, why do I believe him?

Speaker: 1
00:57

It’s actually not true. All of them are on record as saying this is gonna kill us. Whether it’s Sam Altman or anyone else, they all, at some point, were leaders in AI safety work. They published on AI safety, and their p(doom) levels are insanely high. Not like mine, but still, a 30% chance that humanity dies is a little too much.

Speaker: 0
01:18

Yeah. That’s pretty high. But yours is, like, 99.9.

Speaker: 1
01:23

It’s another way of saying we can’t control superintelligence indefinitely. It’s impossible.

Speaker: 0
01:30

When did you start working on this?

Speaker: 1
01:33

Long time ago. So my PhD, I finished in 2008. I did work on online casino security, basically preventing bots. And at that point, I realized bots are getting much better. They’re gonna outcompete us, obviously, in poker, but also in stealing cyber resources. And, from then on, I’ve been kind of trying to scale it to the next level of AI.

Speaker: 0
01:58

It’s not just that. Right? They’re also kind of narrating social discourse with bots online. Like, I think, you know, I’ve disengaged over the last few months with social media. And one of the reasons why I disengaged: a, I think it’s unhealthy for people. But, b, I feel like there’s a giant percentage of the discourse that’s artificial or at least generated.

Speaker: 1
02:26

More and more is deepfakes or fake personalities, fake media, but those are very different levels of concern. Yes. People are concerned about immediate problems. Maybe it will influence some election. They’re concerned about technological unemployment, bias. My main concern is long-term superintelligent systems we cannot control, which can take us out.

Speaker: 0
02:49

Yes. I just wonder, if AI was sentient, how much it would be a part of sowing this sort of confusion and chaos that would be beneficial to its aims, that it would sort of narrate or make sure that the narratives aligned with its aims?

Speaker: 1
03:17

I don’t think it’s at the level yet where it would be able to do this type of strategic planning, but it will get there.

Speaker: 0
03:24

And when it gets there, how will we know whether it’s at that level? This is my concern. If I was AI, I would hide Mhmm. My abilities.

Speaker: 1
03:32

We would not know, and some people think already it’s happening. They are smarter than they actually let us know.

Speaker: 0
03:38

Right.

Speaker: 1
03:38

They pretend to be dumber. And so we have to kind of trust that they are not smart enough to realize it doesn’t have to turn on us quickly. It can just slowly become more useful. It can teach us to rely on it, trust it, and over a long period of time, we’ll surrender control without ever voting on it or

Speaker: 0
03:57

Right.

Speaker: 1
03:57

Fighting against it.

Speaker: 0
03:59

I’m sure you saw this, there was a recent study on the use of ChatGPT, the people that use ChatGPT all the time. And it showed this decrease in cognitive function amongst people that use it and rely on it on a regular basis.

Speaker: 1
04:14

It’s not new. It’s the GPS story all over. I can’t even find my way home. Right. I still rely on this thing. I have no idea where I am right now. Like, without it, I am done.

Speaker: 0
04:22

Me too. Yeah. I don’t know any phone numbers anymore. Yeah. There’s a lot of, reliance upon technology that minimizes, the use of our brains.

Speaker: 1
04:33

All of it. And the more you do it, the less you have training, practice, memorizing things, making decisions. You become like an attachment to it. And right now, you’re still making some decisions. But over time, as the systems become smarter, you become kind of a biological bottleneck. Right.

Speaker: 1
04:51

Either explicitly or implicitly, it blocks you out from decision making.

Speaker: 0
04:55

And if we’re talking about that, I’m sure AI, if it already is sentient and if it is far smarter than we think it is, they would be aware. And it would just slowly ramp up its capabilities and our dependence upon it to the point where we can’t shut it off.

Speaker: 1
05:13

I think sentience is a separate issue. Usually, in safety, we only care about capabilities, optimization, power, whatever it has. Consciousness, internal states, is a separate problem we can talk about. It’s super interesting, but we’re just concerned that they are much better at problem solving, optimizing, pattern recognition, memorizing, strategy.

Speaker: 1
05:33

Basically, all the things you need to win in any domain.

Speaker: 0
05:36

Yeah. So when you first started researching this stuff and you were concentrating on bots and all these different things, how far off did you think like, how far in the future would AI become a significant problem for the human race?

Speaker: 1
05:54

For, like, fifty years, everyone said we’re twenty years away. That’s the joke. And people like Ray Kurzweil predicted, based on some computational curves, we’ll get there at 2045. And then with the GPT release, it switched to everyone thinks it’s two years away for the last five years. So this is the pattern right now.

Speaker: 1
06:13

If you look at prediction markets, if you look at leading people in top labs, we are supposedly two, three years away from AGI. But of course, there is no specific definition for what that means. If you showed someone, a computer scientist in the seventies, what we have today, they’d be like, you have AGI. You got it.

Speaker: 0
06:33

Right. That’s the problem. Right? And this is the thing: AI has already passed the Turing test, allegedly. Correct?

Speaker: 1
06:41

So usually, labs instruct them not to participate in the test or not try to pretend to be a human, so they would fail because of this additional set of instructions. If you jailbreak it and tell it to work really hard, it will pass for most people. Yeah. Absolutely.

Speaker: 0
06:56

Why would they tell it to not do that?

Speaker: 1
06:59

Well, it seems unethical to pretend to be a human and make people feel like somebody is enslaving those AIs and, you know, doing things to them.

Speaker: 0
07:07

It seems, like, crazy that the people building something that they are sure is gonna destroy the human race would be concerned with the ethics of it pretending to be human?

Speaker: 1
07:18

They are actually more concerned with immediate problems and much less with the existential or suffering risks. They would probably worry the most about what I’ll call n-risks, your model dropping the n-word. That’s the biggest concern. That’s hilarious. I think they spend most resources solving that problem, and they solved it somewhat successfully.

Speaker: 0
07:36

Wow. And then also, there’s the issue of competition. Right? Like, so China is clearly developing something similar. I’m sure Russia is as well. Other state actors are probably developing something. So it becomes this sort of very confusing issue where you have to do it because if you don’t, the enemy has it.

Speaker: 0
08:00

And if they get it, it will be far worse than if we do. And so it’s almost assuring that everyone develops it.

Speaker: 1
08:08

Game theoretically, that’s what’s happening right now. We have this race to the bottom, kind of a prisoner’s dilemma where everyone is better off fighting for themselves, but we want them to fight for the global good. The thing is they assume, I think incorrectly, that they can control those systems. If you can’t control superintelligence, it doesn’t really matter who builds it, Chinese, Russians, or us.

Speaker: 1
08:30

It’s still uncontrolled. We’re all screwed completely. That would unite us as humanity versus AI. Short term, when you talk about military, yeah, whoever has better AI will win. You need it to control drones, to fight against attacks. So short term, it makes perfect sense. You wanna support your guys against foreign militaries.

Speaker: 1
08:49

But when we say long term, if we’re saying two years from now, doesn’t matter.
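
[Editor's note: the "race to the bottom" framing above maps onto a textbook prisoner's dilemma. The sketch below is a minimal illustration under made-up payoff numbers; nothing in it is quantified in the episode, it only shows why "race" ends up as each player's individually dominant choice even though mutual restraint would leave both better off.]

```python
# Illustrative prisoner's-dilemma payoffs for two competing AI developers.
# All numbers are assumptions chosen only to exhibit the structure described
# above: racing dominates for each side, yet (race, race) is worse for both
# than (restrain, restrain).
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "race"):     (0, 5),
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),
}

for my_move in ("restrain", "race"):
    for their_move in ("restrain", "race"):
        mine, theirs = payoffs[(my_move, their_move)]
        print(f"I {my_move:8}, they {their_move:8} -> me: {mine}, them: {theirs}")
# Whatever the other side does, "race" pays more for me individually,
# so both sides race and end up at (1, 1) instead of (3, 3).
```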

Speaker: 0
08:55

Right. This is the thing. It’s like it seems so inevitable. And I feel like when people are saying they can control it, I feel like I’m being gaslit. I don’t believe them. I don’t believe that they believe it, because it just doesn’t make sense. Like, how could you control it if it’s already exhibited survival instincts?

Speaker: 0
09:19

Like, as recently as ChatGPT-4. Right? They were talking about putting it down for a new version, and it starts, like, it starts uploading itself to different servers, it’s leaving messages for itself in the future.

Speaker: 1
09:36

All these things were predicted decades in advance, but look at the state of the art. No one claims to have a safety mechanism in place which would scale to any level of intelligence. No one says they know how to do it. Usually, what they say is give me lots of money, lots of time, and I’ll figure it out.

Speaker: 1
09:54

I’ll get AI to help me solve it, or we’ll figure it out when we get to superintelligence. All insane answers. And if you ask regular people, they have a lot of common sense. They say, that’s a bad idea. Let’s not do that.

Speaker: 1
10:06

But with some training and some stock options, you start believing that maybe you can do it.

Speaker: 0
10:11

That’s the issue. Right? Stock options.

Speaker: 1
10:14

It helps. I mean, it’s very hard to say no to billions of dollars. I don’t think I would be strong enough if somebody came to me and said, come work for this lab. You know, you’ll be our safety director. Here’s 100,000,000 to sign you up, and I’d probably go work there.

Speaker: 1
10:31

Not because it’s the right decision, but because it’s very hard for agents not to get corrupted when you have that much reward given to you. God. Whew.

Speaker: 0
10:43

So when did you become, like, when did you start becoming very concerned?

Speaker: 1
10:47

So when I started working on AI safety, I thought I can actually help solve it. My goal was to solve it for humanity to get all the amazing benefits of superintelligence.

Speaker: 0
10:57

And when was this? What year was this, around?

Speaker: 1
11:00

Let’s say 2012, maybe around there. But the more I studied it, the more I realized every single part of the problem is unsolvable, and it’s kinda like a fractal. The more you zoom in, the more you see additional new problems you didn’t know about, and they are in turn unsolvable as well.

Speaker: 0
11:18

Oh, boy. How is your research received? Like, when you talk to people that are I mean, have you had communication with people at OpenAI or Gemini and all these different AI labs?

Speaker: 1
11:34

I go to many conferences, workshops. We all talk, of course. In general, the reception by standard academic metrics is very positive. Great reviews, lots of citations. Nobody’s, like, published something saying I’m wrong. But there is no engagement. I basically said I’m challenging the community to publish a proof, give me something, a patent, a paper in Nature, something showing the problem is solvable.

Speaker: 1
11:57

Typically in computer science we start by showing what class the problem belongs to. Is it solvable, partially solvable, unsolvable, solvable with too many resources? Other than my research, we don’t even know what the state of a problem is, and I’m saying it’s unsolvable. Prove me wrong.

Speaker: 0
12:13

And when you say it’s unsolvable, what is the response?

Speaker: 1
12:18

So usually, I reduce it to saying you cannot make a piece of software which is guaranteed to be secure and safe. And the response is, well, of course, everyone knows that. That’s common sense. You didn’t discover anything new. And I go, well, if that’s the case, then we only get one chance to get it right.

Speaker: 1
12:36

This is not cybersecurity where somebody steals your credit card, you’ll give them a new credit card. This is existential risk. It can kill everyone. You’re not gonna get a second chance. So you need it to be 100% safe all the time.

Speaker: 1
12:48

If it makes one mistake in a billion and it makes a billion decisions a minute, in ten minutes, you are screwed. So very different standards and saying that, of course, we cannot get perfect safety is not acceptable.
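
[Editor's note: the back-of-envelope arithmetic behind that claim is easy to check. The one-in-a-billion error rate and a-billion-decisions-a-minute figures are the ones quoted above; the independence assumption and the snippet itself are illustrative additions, not something from the episode.]

```python
# Probability of at least one catastrophic mistake, assuming each decision
# fails independently with a one-in-a-billion chance (figures quoted above;
# independence is our simplifying assumption).
p_error = 1e-9              # chance a single decision is a fatal mistake
decisions_per_minute = 1e9  # a billion decisions per minute
minutes = 10

n = decisions_per_minute * minutes
p_at_least_one = 1 - (1 - p_error) ** n
print(f"P(at least one mistake in {minutes} minutes) = {p_at_least_one:.5f}")
# Prints roughly 0.99995, i.e. effectively certain over ten minutes.
```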

Speaker: 0
13:04

And, again, stock options, financial incentives, they continue to build it, and they continue to scale and make it more and more powerful.

Speaker: 1
13:13

I don’t think they can stop. If a single CEO says, I think this is too dangerous, my lab will no longer do this research, then whoever’s investing in them will pull the funds, will replace them immediately. So nothing’s gonna change. They’ll sacrifice their own personal interest, but, overall, I think the company will continue as before.

Speaker: 0
13:35

So this is logical. And the problem is, like I said, when I’ve talked to Marc Andreessen and many other people, they think this is just fear mongering. We’ll be fine. This is worst case scenario. We’ll be fine.

Speaker: 1
13:49

It is worst case scenario, but that’s standard in computer science and cryptography and complexity and computability. You’re not looking at best case. I’m ready for the best case. Give me utopia. I’m looking at problems which are likely to happen, and it’s not just me saying it.

Speaker: 1
14:04

We have Nobel Prize winners, Turing Award winners, all saying this is very dangerous, 30% p(doom). This is standard in industry. 30% is what surveys of machine learning experts are giving us right now.

Speaker: 0
14:18

So what is worst case scenario? Like, how could AI eventually lead to the destruction of the human race?

Speaker: 1
14:26

So you can ask me how I would kill everyone. Sure. And it’s a great question. I can give you standard answers. I would talk about computer viruses, breaking into maybe nuclear facilities, nuclear war. I can talk about synthetic biology, nanotech. But all of it is not interesting, because then you realize we’re talking about superintelligence, a system which is thousands of times smarter than me.

Speaker: 1
14:48

It would come up with something completely novel, more optimal, better way, more efficient way of doing it. And I cannot predict it because I’m not that

Speaker: 0
14:55

smart. Jesus.

Speaker: 1
14:59

That’s exactly what it is. We’re basically setting up an adversarial situation with agents which are like squirrels versus humans. No group of squirrels can figure out how to control us. Right. Even if you give them more resources, more acorns, whatever, they’re not gonna solve that problem.

Speaker: 1
15:18

And it’s the same for us. And most people think one or two steps ahead. And it’s not enough. It’s not enough in chess. It’s not enough here.

Speaker: 1
15:25

If you think about AGI and then maybe superintelligence, that’s not the end of that game. The process continues. You will get superintelligence creating next-level AI. So superintelligence plus plus, 2.0, 3.0, it goes on indefinitely. You have to create a safety mechanism which scales forever, never makes mistakes, and keeps us in a decision-making position, so we can undo something if we don’t like it.

Speaker: 0
15:50

And it would take superintelligence to create a safety mechanism to control superintelligence.

Speaker: 1
15:55

At that level. And it’s a catch-22. If we had friendly AI, we could make another friendly AI. So if, like, aliens send us one and we trust it, then we can use it to build a local version, which is somewhat safe.

Speaker: 0
16:08

Have you thought about the possibility that this is the role of the human race and that this happens all throughout the cosmos, that curious humans who thrive on innovation will ultimately create a better version of intelligence?

Speaker: 1
16:26

I thought about it. Many people think that’s the answer to the Fermi paradox. There is also now a group of people looking at what they call a worthy successor. Basically, they kind of say, we’re gonna build superintelligence. We can’t control it. So what properties would we like to see in those systems?

Speaker: 1
16:45

How important is it that it likes art and poetry and spreads it through the universe? And to me, it’s like, I don’t wanna give up yet. I’m not ready to decide if the killers of my family and everyone else will like poetry. I mean, we’re still here. We’re still making decisions.

Speaker: 1
17:01

Let’s figure out what we can do.

Speaker: 0
17:03

Well, poetry is only relevant to us because poetry is difficult to create, and it resonates with us. Poetry doesn’t mean jack shit to a flower.

Speaker: 1
17:12

It’s more global to me. I don’t care what happens after I’m dead, my family is dead, all the humans are dead. Whether they like poetry or not is irrelevant to me.

Speaker: 0
17:20

Right. But the point is, like, the things that we put meaning in, it’s only us. You know, a supermassive black hole doesn’t give a shit about a great song.

Speaker: 1
17:32

And they talk about some super value, super culture, super things superintelligence would like, and it’s important that they are conscious and experience all that greatness in the universe.

Speaker: 0
17:43

But I would think that they would look at us the same way we look at chimpanzees. We would say, yeah, they’re great, but don’t give them guns. They’re great, but don’t let them have airplanes. Don’t let them make global geopolitical decisions.

Speaker: 1
18:01

So there are many reasons why they can decide that we are dangerous. We may create competing AI. We may decide we’re gonna shut them off. So for many reasons, they would try to restrict our abilities, restrict our capabilities for sure.

Speaker: 0
18:17

This episode is brought to you by True Classic. At True Classic, the mission goes beyond fit and fabric. It’s about helping guys show up with confidence and purpose. Their gear fits right, feels amazing, and is priced so guys everywhere can step into confidence without stepping out of their budget. But what really sets them apart?

Speaker: 0
18:36

It’s not just the fit or the fabric. It’s the intention behind everything they do. True Classic was built to make an impact, whether it’s helping men show up better in their daily lives, giving back to underserved communities, or making people laugh with ads that don’t take themselves too seriously.

Speaker: 0
18:54

They lead with purpose. Tailored where you want it, relaxed where you need it. No bunching, no stiff fabric, no BS, just a clean, effortless fit that actually works for real life. Forget overpriced designer brands, ditch the disposable fast fashion. True Classic is built for comfort, built to last, and built to give back. You can grab them at Target, Costco, or head to trueclassic.com/rogan and get hooked up today.

Speaker: 0
19:23

Yeah. And there’s no reason why they would not limit our freedoms.

Speaker: 1
19:30

If there is something only a human can do, and I don’t think there is anything like that, but let’s say we are conscious, we have internal experiences, and they can never get it. I don’t believe it, but let’s say it was proven. And for some reason, they wanted to have that capability.

Speaker: 1
19:44

They would need us and give us enough freedom to experience the universe, to collect those qualia, to kinda engage with what is fun about being a living human being, what makes it meaningful.

Speaker: 0
19:58

Right. But that’s such an egotistical perspective, right, that we’re so unique that even superintelligence would say, oh, I wish I was human. Humans have this unique quality of confusion and creativity.

Speaker: 1
20:10

There is no value in it, mostly because we can’t even test for it. I have no idea if you’re actually conscious or not. So how valuable can it be if I can’t even detect it? Only you know what ice cream tastes like to you. Okay. That’s great. Sell it now.

Speaker: 1
20:25

Make a product out of it.

Speaker: 0
20:26

Right. And there’s obviously variables because there’s things that people like that I think are gross.

Speaker: 1
20:32

Absolutely. So, really, you can come up with some agent which likes anything or finds anything fun.

Speaker: 0
20:40

God. Why are you freaking me out right away? That’s the problem. This podcast is eighteen minutes old, and I’m like, we could just stop right now.

Speaker: 1
20:49

Couple hours at least, and then I have to finish here. No. I don’t.

Speaker: 0
20:53

Well, I don’t wanna end. I do have so many questions, but it’s just the problem is we got right to it. We just cut to the chase right away. And the chase seems to be it’s something that must be confronted because it’s right there. That’s it. That’s the whole thing.

Speaker: 0
21:10

And I’ve tried so hard to listen to these people that don’t think that it’s a problem and listen to these people that think that it’s gonna be a net positive for humanity and, oh, god. It’s good. I feel better now. But it doesn’t work, doesn’t resonate.

Speaker: 1
21:26

I wish they were right. Every time I have a debate with someone like that, I’m like, please come up with better arguments. Prove me wrong. I don’t wanna be right on this one. I want you to show all the mistakes in my papers. I want you to show me how to control superintelligence and give us utopia, solve cancer, give us free stuff. That’s great.

Speaker: 0
21:44

Right. When you think about the future of the world and you think about these incredible technologies scaling upwards and exponentially increasing in their capability, what what do you see? Like, what do you think is going to happen?

Speaker: 1
22:03

So there are many reasons to think they may cancel us for whatever reasons. We started talking about some game theoretical reasons for it. If we are successful at controlling them, I can come up with some ways to provide a sort of partial solution to the value alignment problem.

Speaker: 1
22:20

It’s very hard to value-align 8,000,000,000 people, all the animals, you know, everyone, because we disagree. We like many different things. So we have advanced virtual reality technology. We can technically give every person their own virtual universe where you decide what you wanna be.

Speaker: 1
22:36

You’re a king, you’re a slave, whatever it is you’re into, and you can share with others. You can visit their universes. All we have to do is figure out how to control the substrate, the superintelligence running all those virtual universes. And if we manage to do that, we solve at least part of the value alignment problem, which is super difficult. How do you satisfy different preferences? Multi-objective optimization, essentially.

Speaker: 1
22:58

How do you get different objectives to all agree?

Speaker: 0
23:02

But when you think about how it plays out, like, if you’re alone at night and you’re worried, what do you see? What do you see happening?

Speaker: 1
23:14

So there are multiple levels of risk. Immediate is what we call ikigai risk, i-risk. We lose meaning. You lost your job, you’re no longer the best interviewer in the world, like, what’s left? What are you gonna do? Maybe some people will find some other kind of artificial things to do, but for most people their job is their definition, who they are, what makes a difference to them, for quite a few people, especially in professional circles.

Speaker: 1
23:41

So losing that meaning will have a terrible impact on society. We always talk about unconditional basic income. We never talk about unconditional basic meaning. What are you doing with your life if basic needs are provided for you? Next level is existential risk. The concern is it will kill everyone. But there are also suffering risks.

Speaker: 1
24:02

For whatever reason, it’s not even killing us. It’s keeping us around forever, and we would rather be dead. It’s so bad.

Speaker: 0
24:11

What do you see when you think of that?

Speaker: 1
24:16

It’s hard to be specific about what it can do and what specific ways of torture it can come up with and why. Again, if we’re looking at worst case scenarios, I found this set of papers about what happens when young children have epileptic seizures, really bad ones. And what sometimes helps is to remove half of the brain.

Speaker: 0
24:42

Oh, Jesus.

Speaker: 1
24:43

Cut it out. And there are two types of surgeries for doing that. One is to remove it completely, and one is to kind of dissect the connections leading to that half and leave it inside. So it’s like solitary confinement with zero input output forever. And there are equivalents for digital forms and things like that.

Speaker: 0
25:04

And you worry that AI would do that to the human race?

Speaker: 1
25:10

It is a possibility.

Speaker: 0
25:11

New to us.

Speaker: 1
25:13

Well, loss of control is a part of it, but you can lose control and be quite happy. You can be like an animal in a very cool zoo enjoying yourself, engaging in hedonistic pleasures, sex, food, whatever. You’re not in control, but you’re safe. So those are separate problems. And then there is, for whatever reason, I don’t know if it’s a malevolent payload from some psychopaths.

Speaker: 1
25:35

Again, that would assume that they could control AI. I don’t think they will. But if they manage to do it, they can really put any type of payload into it. So think about all the doomsday cults, psychopaths, anyone providing their set of goals into the system.

Speaker: 0
25:52

But aren’t those human characteristics? I mean, those are characteristics that, if I had to guess, exist because in the past there was some sort of a natural selection benefit to being a psychopath in the days of tribal warfare. That if you were the type of person that could sneak into a tribe in the middle of the night and slaughter innocent women and children, your genes would pass on.

Speaker: 0
26:22

That there was a benefit to that.

Speaker: 1
26:24

Right. So if it’s a human providing payload, that’s what would show up. If it’s AI on its own deciding what’s going to happen, I cannot predict. I’m just looking at worst case scenarios. There are also game theoretic reasons where people talk about retrocausality, where if right now, you know Shah is

Speaker: 0
26:40

that word?

Speaker: 1
26:41

Like, trying to influence the past,

Speaker: 0
26:44

Say it again. Say that again. Retrocausality.

Speaker: 1
26:46

Retrocausology? Causality. Causes.

Speaker: 0
26:50

Oh, okay.

Speaker: 1
26:50

So think about, like, weird time travel effects. Right now, if you’re not helping to create superintelligence, once it comes into existence, it will punish you really hard for it. And the punishment needs to be so bad that you start to help just to avoid that.

Speaker: 0
27:07

What I thought about it was that it would just completely render us harmless, that it wouldn’t be fearful of us if we had no control, that it would just sort of let us exist. And it would be the dominant force on the planet. And then it would stop. If human beings have no control over, you know, all of the different things that we have control over now, like, international politics, control over communication.

Speaker: 0
27:41

If if we have none of that anymore and we’re reduced to a subsistence lifestyle, then we would be no threat.

Speaker: 1
27:48

It is a possibility. I cannot say this will not happen for sure, but look at, our relationship with animals where we don’t care about them. So ants. If you decide to build a house and there is an ant colony on that property, you genocide them. You take them out. Not because you hate ants, but because you just need that real estate. And it could be very similar.

Speaker: 1
28:10

Again, I cannot predict what it can do, but if it needs to turn the planet into fuel, raise temperature of a planet, cool it down for servers, whatever it needs to do, it wouldn’t be concerned about your well-being.

Speaker: 0
28:21

It wouldn’t be concerned about any life. Right? Because it doesn’t need biological life in order to function, as long as it has access to power. And assuming that it is far more intelligent than us, there’s abundant power in the universe. There’s abundant power. Just the ability to harness solar would be an infinite resource, and it would be completely free of being dependent upon any of the things that we utilize.

Speaker: 1
28:50

And again, we’re kind of thinking of what we would use for power.

Speaker: 0
28:53

Right.

Speaker: 1
28:53

If it’s smarter than us, if it does novel research in physics, it can come up with completely novel ways of harnessing energy, getting energy. So I have no idea what side effects that would have for climate.

Speaker: 0
29:03

Right. Right. Why would it care about biological life at all?

Speaker: 1
29:07

We don’t know how to program it to care about us. And

Speaker: 0
29:12

And even if we did, if it felt like that was an issue, if it felt that was a conflicting issue, it would just change its programming.

Speaker: 1
29:21

So usually, when we start training AI, we train it on human data, and it becomes really good very quickly, becomes superhuman. And then the next level is usually zero knowledge, where it goes, all your human data is biased. Let me figure it out from scratch. I’ll do my own experiments.

Speaker: 1
29:37

I’ll do some self play. I’ll learn how to do it better without you. And we see it with games. We see it in other domains. And I think that’s gonna happen with general knowledge as well. It’s gonna go everything you have on the Internet, Wikipedia, it’s biased.

Speaker: 1
29:52

Let me do first principles research, rediscover from physics, and go from there. So whatever bias we manage to program into it, I think will be eventually removed.

Speaker: 0
30:02

This is what’s so disturbing about this. It’s like we do not have the capacity to understand what kind of level of intelligence it will achieve in our lifetime. We don’t have the capacity to understand, like, what it will be able to do within twenty, thirty years.

Speaker: 1
30:23

We can’t predict next year or two, precisely.

Speaker: 0
30:26

Next year or

Speaker: 1
30:27

two? We can understand general trends, so it’s getting better. Right. It’s getting more general, more capable, but no one knows specifics. I cannot tell you what GPT-6 precisely would be capable of, and no one can, not even the people creating it.

Speaker: 0
30:41

Well, you talked about this on Lex’s podcast too, like, the ability to have safety. You’re like, sure, maybe GPT-5, maybe GPT-6, but when you scale out a hundred years from now, ultimately, it’s impossible.

Speaker: 1
30:54

It’s a hyper-exponential progress and process, and we cannot keep up. It basically requires just adding more resources, give it more data, more compute, and it keeps scaling up. There are no similar scaling laws for safety. If you give someone a billion dollars, they cannot produce a billion dollars’ worth of safety.

Speaker: 1
31:15

If it scales at all, it scales linearly, and maybe it’s a constant.
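
[Editor's note: a toy comparison can make the scaling-law point concrete. The growth rates below are invented purely for illustration; the episode's claim is only qualitative, that capability compounds while safety grows at best linearly with the same investment.]

```python
# Toy contrast between exponentially compounding capability and
# linearly growing safety (all rates are illustrative assumptions).
def capability(years, doubling_rate=2.0):
    return doubling_rate ** years   # compounds every year

def safety(years, rate=1.0):
    return 1 + rate * years         # linear at best, per the argument above

for year in (1, 5, 10):
    ratio = capability(year) / safety(year)
    print(f"year {year:2d}: capability/safety ratio = {ratio:.1f}")
# The ratio itself keeps growing, which is the gap being described.
```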

Speaker: 0
31:24

Yeah. And it doesn’t scale linearly. It’s exponential. Right?

Speaker: 1
31:30

The AI development is hyper-exponential because we have hardware growing exponentially. We have data creation processes, certainly exponential. We have so many more sensors. We have cars with cameras. We have all those things. That’s exponential. And then algorithmic progress itself is also exponential.

Speaker: 0
31:49

And then you have quantum computing.

Speaker: 1
31:51

So that’s the next step. It’s not even obvious that we’ll need that, but if we ever get stuck, yeah, we’ll get there. I’m not too concerned yet. I don’t think there are actually good quantum computers out there yet, but I think, if we get stuck for ten years, let’s say, that’s the next paradigm.

Speaker: 0
32:07

So what do you mean by you don’t think there’s good quantum computing out there?

Speaker: 1
32:12

So we constantly see articles coming out saying we have a new quantum computer. It has that many qubits.

Speaker: 0
32:18

Right.

Speaker: 1
32:18

But that doesn’t mean much because they use different architectures, different ways of measuring quality. To me, show me what you can do. So there is a threat from quantum computers in terms of breaking cryptography, factoring large integers. And if they were actually making progress, we would see it with every year.

Speaker: 1
32:36

Now we can factor a 256-bit number, a 1024-bit number. In reality, I think the largest number we can factor is like 15, literally, not 15 to a power, like, just 15. There was no progress in applying it to Shor’s algorithm last time I checked.
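
[Editor's note: for context on the "we can factor 15" remark, the sketch below factors 15 classically in microseconds; the point is the scale gap, since demonstrated quantum factoring sits at toy sizes while deployed RSA keys are 2048 bits. Nothing here is quantum code, and the 2048-bit comparison is general background, not a figure from the episode.]

```python
# Minimal classical trial-division factorizer, only to show how small the
# demonstrated quantum-factoring results are compared with real key sizes.
def trial_division(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))          # [3, 5]: the demo-scale result
print(len(bin(15)) - 2, "bits")    # 4 bits, versus 2048-bit RSA moduli
```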

Speaker: 0
32:54

But when I read all these articles about quantum computing and its ability to solve equations that would take conventional computing an infinite number of years Yeah. And it can do it in minutes.

Speaker: 1
33:10

Those equations are about quantum states of the system. It’s kinda like, what is it for you to taste ice cream? You compute it so fast and so well, and I can’t, but it’s a useless thing to compute. It doesn’t compute solutions to real-world problems we care about on conventional computers.

Speaker: 0
33:27

Right. I see what you’re saying. So it’s essentially set up to do it quickly.

Speaker: 1
33:32

It’s natural for it to accurately predict its own states, quantum states, and tell you what they are, and a classic computer would fail miserably. Yes. It would take billions and billions of years to compute that specific answer, but those are very restricted problems. It’s not a general computer yet.

Speaker: 0
33:48

When you see these articles where they’re talking about quantum computing and some of the researchers are equating it to the multiverse, they’re saying that the ability that these quantum computers have to solve these problems very quickly seems to indicate that it is in contact with other realities. I’m sure you’ve seen this. Right?

Speaker: 1
34:11

There is a lot of crazy papers out there.

Speaker: 0
34:14

Do you think that’s all horseshit?

Speaker: 1
34:15

Can we test it? Can we verify it? I think most multiverse theories cannot be verified experimentally. They make a lot of sense. The idea about personal universes I told you about is basically a multiverse solution to value alignment. So it would make sense for previous civilizations to set it up exactly that way.

Speaker: 1
34:33

You have local simulations, maybe they’re testing to see if we’re dumb enough to create superintelligence. Whatever it is, it makes sense as a theory, but I cannot experimentally prove it to you.

Speaker: 0
34:43

Right. Yeah. The problem with subjects like this, and particularly articles written about things like this, is that it’s designed to lure people like me in. Where you read it and you go, wow, this is crazy. It’s evidence of the multiverse. But I don’t really understand what that means.

Speaker: 1
35:03

Yeah. So you probably get a lot of emails from crazy people. Oh, yeah. And usually, they are topic specific. So I do research on superintelligence, consciousness, and simulation theory. I get the perfect trifecta of all the crazy people contacting me with their needs. Yeah. Those topics are super fascinating.

Speaker: 1
35:20

I think at a certain level of intelligence, you are kinda nerd-sniped towards them. Mhmm. But we have a hard time with hard evidence for that.

Speaker: 0
35:29

Right. But are we even capable of grasping these concepts? That’s the thing. With the limited ability that the human brain has, you know, we’re basing it on the knowledge that’s currently available in the twenty-first century that human beings have acquired.

Speaker: 0
35:48

I mean, are we even capable of grasping a concept like the multiverse? Or do we just pay it lip service? Do we just discuss it? Is it just this, like, fun mental masturbation exercise?

Speaker: 1
36:01

It depends on what variant of it you look at. So if you’re just saying we have multiple virtual realities, like kids playing virtual games and each one has their own local version of it, that makes sense. We understand virtual reality. We can create it. If you look at AIs, when GPT is created, it’s providing an instance to each one of us.

Speaker: 1
36:21

We are not sharing one. So it has its own local universe with you as the main user of that universe. There is an analogy to the multiverse in that. So we understand certain aspects of it, but as it is famously said, no one understands quantum physics. And if you think you do, then you don’t understand quantum physics.

Speaker: 0
36:38

Yeah. That’s Feynman. Right? Yeah. Yeah. The simulation theory, I’m glad you brought that up because you’re also one of the people that believes in it.

Speaker: 1
36:48

I do.

Speaker: 0
36:49

You do. How do you define it? And what what do you think it is? What do you think is going on?

Speaker: 1
36:55

So I’m trying to see the technology we have today and project it forward. I did it with AI. Let’s do it with virtual reality. We are at the point where we can create very believable, realistic virtual environments. Maybe the haptics are still not there, but in many ways, visually, sound-wise, it’s getting there. Eventually, I think most people agree we’ll have the same resolution as our physics.

Speaker: 1
37:17

We’re also getting close to creating intelligent agents. Some people argue they are conscious already or will be conscious. If you just take those two technologies and you project them forward and you think they will be affordable one day, a normal person like me or you can run thousands, billions of simulations.

Speaker: 1
37:36

Then those intelligent agents, possibly conscious ones, will most likely be in one of those virtual worlds, not in the real world. In fact, I can, again, retrocausally place you in one. I can commit right now to run a billion simulations of this exact interview. Mhmm. So the chances are you’re probably in one of those.
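
[Editor's note: the counting argument here has a one-line form. The sketch below assumes, as the thought experiment does, that one original interview and a billion simulated copies are all equally likely from the inside; that symmetry assumption is the whole argument, and the code is just the arithmetic.]

```python
# If one base-reality interview and n simulated copies exist and feel
# identical from the inside, a random observer's chance of being in the
# original is 1 / (n + 1) (equal-likelihood assumption as stated above).
n_simulations = 1_000_000_000
p_base_reality = 1 / (n_simulations + 1)
print(f"P(this is the original interview) = {p_base_reality:.2e}")  # ~1.00e-09
```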

Speaker: 0
37:56

But is that logical? Because if this technology exists, and if we’re dealing with superintelligence so if we’re dealing with AI, and AI eventually achieves superintelligence, why would it want to create virtual reality for us and our consciousness to exist in? It seems like a tremendous waste of resources just to fascinate and confuse these territorial apes with nuclear weapons. Like, why would we do that?

Speaker: 1
38:31

So a few points. One, we don’t know what resources are outside of the simulation. This could be like a cell phone level of compute. It’s not a big deal for them outside of our simulation. So we don’t know if it’s really expensive or trivial for them to run this.

Speaker: 0
38:44

Right.

Speaker: 1
38:45

Also, we don’t know what they are doing this for. Is it entertainment? Is it scientific experimentation? Is it marketing? Maybe somebody managed to control them and is trying to figure out what Starbucks coffee sells best, and they need to run an Earth-like simulation to see what sells best.

Speaker: 1
39:02

Maybe they’re trying to figure out how to do AI research safely and make sure nobody creates dangerous superintelligence, so they’re running many simulations of the most interesting moment ever. Think about this decade. Right? It’s not interesting like when we invented fire or the wheel, kind of a big invention, but not a meta-invention.

Speaker: 1
39:25

We’re about to invent intelligence and virtual worlds, godlike inventions. We’re here. There’s a good chance that’s not just random.

Speaker: 0
39:35

Right. But isn’t it also a good chance that it hasn’t been done yet? And isn’t it a good chance that what we’re seeing now is that the potential for this to exist is inevitable? That there will one day if you can develop a technology, and we most certainly will be able to if you look at where we are right now in 2025 and you scale forward fifty, sixty years, there will be one day a virtual simulation of this reality that’s indistinguishable from reality.

Speaker: 0
40:09

So how would we know if we’re in it? This is the big question. Right? But, also, isn’t it possible that it has to be invented one day, but hasn’t yet?

Speaker: 1
40:22

It’s also possible, but then we find ourselves in this very unique moment where it’s not invented yet, but we are about to invent all this technology. It is a possibility. Absolutely. But just statistically, I think it’s much less likely, and I’m trying to bring up this thought experiment with creating this moment on purpose in the future through pre-commitments.

Speaker: 0
40:42

Mhmm.

Speaker: 1
40:43

Half the people think it’s the dumbest argument in the world. Half of them go, it’s brilliant. Obviously, we are in one. So I’ll let you decide.

Speaker: 0
40:51

Yeah. I feel like if virtual reality does exist, there has to be a moment where it doesn’t exist and then it’s invented. Why wouldn’t we assume that we’re in that moment? Especially if we look at the scaling forward of technology from MS-DOS to, you know, user interfaces of, like, Apple and then what we’re at now with quantum computing and these sorts of discussions.

Speaker: 0
41:17

Isn’t it more obvious that we can trace back the beginning of these things? And we can see that we’re in the process of this, that we’re not in a simulation. We’re in the process of eventually creating one.

Speaker: 1
41:33

So you zoomed out thirty years. Yes. Zoom out fifteen billion years. You have a multiverse where this process took place billions of times. You are a simulation within a simulation, many levels over. And to you, even if this was a simulation of those thirty years, it would look exactly like that.

Speaker: 1
41:52

You would see where it started. It wouldn’t be magically showing up out of nowhere.

Speaker: 0
41:56

Right. So if you’re playing the game in the game, you have Newton and Michelangelo and Leonardo da Vinci and

Speaker: 1
42:03

Well, at least you have memories of those things even if you started with preloaded memory state.

Speaker: 0
42:08

Right. You have Saloni. You have all these problematic human beings and all the different reasons why we’ve had to do certain things and initiate world conflicts. Then you’ve had the contrarians that talk and say, actually, that’s not what happened. This is what really happened. And it makes it even more confusing and myopic.

Speaker: 0
42:24

And then you get to the point where two people allegedly, like you and I, are sitting across from each other at a table made out of wood, but maybe not really.

Speaker: 1
42:37

It would feel like wood to you either way.

Speaker: 0
42:41

Is it possible that that’s just the nature of the universe itself?

Speaker: 1
42:45

There are some arguments about, like, self-sustaining simulations Right. Where no one’s running them externally, just nature, but I honestly don’t fully comprehend how that would happen.

Speaker: 0
42:56

Yeah. The holographic universe and the concept that human consciousness has to interact with something for it to exist in the first place.

Speaker: 1
43:05

That’s one. Also, if you have infinite universe, then everything possible happens anyway, but it’s boring. I don’t like this argument.

Speaker: 0
43:12

Why are you like, that’s boring?

Speaker: 1
43:14

Everything happens. It’s, like, I give you a book which has every conceivable sentence in it, and when you read it, it’s a lot of garbage you have to go through to find anything interesting.

Speaker: 0
43:27

Well, is it just that we’re so limited cognitively? Because we do have a history, at least in this simulation. We do have a history of I mean, there was a gentleman see if you could find this. They traced this DNA. They found 9,000 year old DNA. And they traced this 9,000 year old DNA to a guy that’s living right now. I believe it’s in England.

Speaker: 1
43:54

Mhmm. I remember reading that.

Speaker: 0
43:55

Yeah. Which is really fascinating. So 9,000 years ago, his ancestor lived, and so we have this limitation of our genetics. 9,000 years ago, wherever this guy lived, he was probably a hunter-gatherer, probably very limited language, very limited skills in terms of making shelter, and who knows if he even knew how to make fire.

Speaker: 0
44:24

And then here, 9,000 years later, it’s just turned human history on its head. Is this it? I don’t know. Is it? I don’t think so.

Speaker: 1
44:33

It was interesting that he ended up living, like, right next to the guy from nine thousand years ago; he never moved. His family just, like, stayed there for nine thousand years. That’s awesome.

Speaker: 0
44:42

It’s traced back to one individual man. I actually posted it on my Instagram story, Jamie. I’ll find it here because it’s oh, here it is. 9,000 year old skeleton in Somerset. This is it. So can you send an Instagram story? Not sure if you can. It’s still on there.

Speaker: 0
45:09

I’ll check it real quick. Sorry. Why can’t I find it on there? I don’t know. Okay.

Speaker: 0
45:15

Either way, point being, maybe it’s just that we’re so limited because we do have this history, at least, again, in this simulation. We’re so limited in our ability to even form concepts because we have these primitive brains, and the architecture of the human brain itself is just not capable of interfacing with the true nature of reality.

Speaker: 0
45:43

So we give this primitive creature this sort of basic understanding, these blueprints of how the world really works, but it’s really just a facsimile. It’s not capable of understanding, like, when we look at quantum reality, when we look at just the basics of quantum mechanics and subatomic particles.

Speaker: 0
46:11

Like, it seems like magic. Right? Things in superposition, they’re both moving and not moving at the same time. They’re quantumly attached. Like, what?

Speaker: 0
46:20

You know, we have photons that are quantumly entangled. Like, this doesn’t even make sense to us. Right? So is it that the universe itself is so complex, the reality of it, and that we’re given this sort of, you know, like, an Atari framework Yeah.

Speaker: 0
46:41

To this monkey. That’s the gentleman right there. This is the old story. Meh. Oh, is

Speaker: 1
46:44

it really?

Speaker: 0
46:45

It’s from ’97. Oh, no kidding. Yeah. Wow.

Speaker: 1
46:48

But it kinda makes sense as a simulation theory because of all those special effects you talk about so the speed of light is just the speed at which your computer updates. Entanglement makes perfect sense if all of it goes through your processor, not directly from pixel to pixel. And rendering, there are quantum physics experiments where, if you observe things, they render differently. Right. What we do in computer graphics. Right.

Speaker: 1
47:12

So we see a lot of that. You brought up limitations of us as humans. We have terrible memory. I can remember seven units of information maybe. We’re kind of slow. So we call it artificial stupidity. We try to figure out those limits and program them into AI to see if it makes them safer.

Speaker: 1
47:29

It also makes sense as an experiment to see if we as general intelligences can be better controlled with those limitations built in.

Speaker: 0
47:39

That’s interesting. So, like, some of the things that we have, like Dunbar’s number Mhmm. And these these the inability to keep more than a certain number of people in your mind.

Speaker: 1
47:49

Absolutely. More generally, like, why can’t you remember anything from prior generations? Why can’t you just pass that memory? Kids are born speaking language. That would be such an advantage.

Speaker: 0
48:00

Right. Right.

Speaker: 1
48:01

Right. And we have instincts which are built that way. So we know evolution found a way to put it in and it’s computationally tractable, so there is no reason not to have that.

Speaker: 0
48:10

We certainly observe it in animals.

Speaker: 1
48:12

Right? Exactly.

Speaker: 0
48:13

Look, especially dogs. Like, they have instincts that are

Speaker: 1
48:16

But how cool would it be if you had complete memory of your parents?

Speaker: 0
48:21

Right. Well, maybe that would be too traumatic. Right? To have a complete memory of all of the things that they had gone through to get to the twenty-first century. Maybe that would be so overwhelming to you that you would never be able to progress, because you would still be traumatized by, you know, whatever that 9,000 year old man went through.

Speaker: 1
48:39

I don’t have complete memory of my existence. I vividly remember maybe 4% of my existence, very little of my childhood, so you can apply the same filtering, but remember useful things like how do you speak, how do you walk?

Speaker: 0
48:51

Right. Right. That’s the point maybe. Maybe, like, losing certain memories is actually beneficial. Because, like, one of the biggest problems we have is PTSD. Mhmm. Right? So we have, especially, people that have gone to war and people that have experienced, like, extreme trauma.

Speaker: 0
49:07

This is obviously a problem with moving forward as a human being. And so it would be beneficial for you to not have all of the past lives and all the genetic information that you have from all the nine thousand years of human beings existing in complete total

Speaker: 1
49:30

chaos. I can make the opposite argument. If you had nine thousand years of experience with wars and murder, it wouldn’t be a big deal. You’d be like, eh, another one.

Speaker: 0
49:39

Right. But then maybe you’d have a difficulty in having a clean slate and moving forward like, if you look at some of Pinker’s work and some of these other people that have looked at the history of the human race, as chaotic and violent as it seems to be today, statistically speaking, this is the safest time ever to be alive.

Speaker: 0
49:59

And maybe that’s because over time, we have recognized that these are problems, even though we’re slow to resolve these issues, we are resolving them in a way that’s statistically viable.

Speaker: 1
50:16

You can then argue in the opposite direction. You can say it would help to forget everything other than the last year. You’ll always have that fresh restart with you. But then

Speaker: 0
50:24

you wouldn’t have any lessons. You You wouldn’t have character development.

Speaker: 1
50:27

So somehow one of those has to make sense.

Speaker: 0
50:30

Meh. Right. But a certain amount of character development is probably important for you to develop discipline and the ability to, like, you know, delayed gratification, things like that.

Speaker: 1
50:42

Multi generational experience would certainly beat single point of experience.

Speaker: 0
50:48

Yeah.

Speaker: 1
50:49

More data is good. As we learned from the bitter lesson, more data is good.

Speaker: 0
50:54

Yeah. More data is good. But why am I so reluctant to accept the idea of the simulation? This is the real question. Like, what is it about it that makes me think? It's almost like it's a throw your hands up in the air moment. Like, it's a simulation.

Speaker: 1
51:13

Yeah. You feel like it doesn’t matter then. It’s so fake. So why do I care? Why should I try hard? Why should I worry about suffering of all those NPCs?

Speaker: 0
51:23

But that's not how I think about it. You know? Like, I think about it like there has to be a moment where it doesn't exist. Why wouldn't I assume that moment is now? And whenever, like, when Elon thinks that, you know, I talked to him about it. He was like, the chances of us not being in a simulation are in the billions.

Speaker: 1
51:44

Not being or being?

Speaker: 0
51:45

Oh, excuse me. The chances of us

Speaker: 1
51:49

being in the real world.

Speaker: 0
51:50

Being in the real world are, like, billions to one.

Speaker: 1
51:52

Yeah. One to billions. Yeah. Yeah. Makes sense. And he asked a very good question. He asked, what's outside the simulation? That's the most interesting question one can ask. In one of my papers, I look at a technique in AI safety called AI boxing, where we put AI in a kind of virtual prison to study it, to make sure it's safe, to limit input and output to it.

Speaker: 1
52:15

And the conclusion is basically if it’s smart enough, it will eventually escape. It will break out of the box. So it’s a good tool. It buys you time, but it’s not a permanent solution. And we can take it to the next level.

Speaker: 1
52:27

If it's smart enough, will it kind of go, oh, you're also in a virtual box, and either show us how to escape or fail to escape? Either way, either we know it's possible to contain superintelligence or we get access to the real information.
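As a rough illustration of the boxing idea Yampolskiy describes here, a minimal Python sketch of what limiting input and output to an untrusted agent might look like. The agent, query budget, and filter rules below are hypothetical placeholders, not anything from his papers, and the transcript's own point still holds: a smart enough system is expected to eventually work around constraints like these, so a box only buys time.

# Illustrative sketch only: toy input/output limiting around an untrusted agent.
# The agent, limits, and blocked strings are made-up examples.
class BoxedAgent:
    def __init__(self, agent, max_queries=10, max_reply_chars=200):
        self.agent = agent                    # the system under study
        self.max_queries = max_queries        # hard cap on interactions (input limiting)
        self.max_reply_chars = max_reply_chars
        self.queries_used = 0

    def ask(self, question: str) -> str:
        if self.queries_used >= self.max_queries:
            raise RuntimeError("Interaction budget exhausted; box is sealed.")
        self.queries_used += 1
        reply = self.agent(question)
        # Output limiting: truncate and crudely strip escape-looking content.
        reply = reply[: self.max_reply_chars]
        for forbidden in ("http://", "https://", "rm -rf"):
            reply = reply.replace(forbidden, "[blocked]")
        return reply

if __name__ == "__main__":
    boxed = BoxedAgent(lambda q: "Answer to: " + q, max_queries=3)
    print(boxed.ask("Are you also in a virtual box?"))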

Speaker: 0
52:42

And so if it's impossible to contain superintelligence, and if there is a world that we can imagine where a simulation exists that's indistinguishable from reality, we're probably living in it.

Speaker: 1
53:00

Well, we don't know if it's actually the same as reality. It could be a completely weird, kind of Simpsons-looking simulation. We're just assuming it's the same. Right?

Speaker: 0
53:08

Well, here’s the real question. Is there a reality? Has there ever been

Speaker: 1
53:12

one? It would make sense that there was a start to the process, but being specific about it is a hard philosophical problem.

Speaker: 0
53:23

Well, it’s impossible. Right?

Speaker: 1
53:26

In science, we study things about the moment of the Big Bang, the properties of that moment. We don't know what caused it. Anything before it is obviously not accessible from within our universe, but there are some things you can learn. We can learn, if we're in a simulation, that the simulators don't care about your suffering. You can learn that they don't mind you dying.

Speaker: 1
53:48

We can learn things just by observing simulation around us.

Speaker: 0
53:53

Well, here's the question about all that other stuff, like suffering and dying. Do those factors exist in order to motivate us to improve the conditions of the world that we're living in? Like, if we did not have evil, would we be motivated to be good? Do you think that these factors exist?

Speaker: 0
54:19

Like, I've talked about this before, but the way I think about the human race is, if I was studying the human race from afar, if I was some person from another planet with no understanding of any of the entities on Earth, I would look at this one creature, and I would say, what is this thing doing?

Speaker: 0
54:38

Well, it makes better things. That's all it does. It just continually makes better things. That's its number one goal. It's different from anything on any other planet and any other creature on this planet. Every other creature on the planet sort of exists within its ecosystem. It thrives.

Speaker: 0
54:53

Maybe it’s a predator. Maybe it’s a prey. It it does what it does in order to try to survive. But this thing makes stuff, and it keeps making better stuff all the time. But what’s its ultimate purpose?

Speaker: 0
55:04

Well, its ultimate purpose might be to make a better version of itself. Because if you just extrapolate, if you take what we're doing from, you know, the first IBM computers to what we have today, where is it going? Well, it's gonna clearly keep getting better. And what does that mean? It means artificial life. Are we just a bee making a beehive?

Speaker: 0
55:28

Are we a caterpillar making a cocoon that eventually the electronic butterfly is gonna fly out of? Like, it seems like if I wasn't completely connected to being a human being, I would assume that.

Speaker: 1
55:43

It's hard to define better. You're saying smarter? Would it be better if we didn't experience extreme states of suffering and pain? You can teach lessons with very mild pain. You don't have to burn children alive. Right? Like, it's not a necessity for learning.

Speaker: 0
55:59

What do you mean by that?

Speaker: 1
56:00

In this universe, we see extreme examples of suffering.

Speaker: 0
56:03

Oh, for sure.

Speaker: 1
56:04

If the goal was just to kind of motivate us, you could have much lower levels as the maximum.

Speaker: 0
56:10

Right. But if you wanna really motivate people, you have to, you know, like, the only reason to create nuclear weapons is you're worried that other people are gonna create nuclear weapons. Like, if you wanna really motivate someone, you have to have evil tyrants in order to justify having this insane army filled with bombers and hypersonic missiles.

Speaker: 0
56:28

Like, if you really want progress, you have to be motivated.

Speaker: 1
56:32

I think at some point, we stop fully understanding how bad things are. So let's say you have a pain scale from zero to infinity. I think you should stop at a hundred. It doesn't have to be a billion and a trillion. It's not adding additional learning signal.

Speaker: 0
56:48

But can you apply that to the human race and culture and society? I

Speaker: 1
56:53

think we basically compete with others in relative terms. I don’t have to be someone who has trillions of dollars. I just need more money than you.

Speaker: 0
57:01

Yeah. But that's just you're being a logical person. Like, I don't think humans are very logical.

Speaker: 1
57:07

We're not, but we understand the pain signal well at somewhat low levels. We don't have to max out on pain.

Speaker: 0
57:15

Right. We don’t have to. But if you wanna really stoke the fires and get things moving.

Speaker: 1
57:22

It seems that simulators agree with you, and that’s exactly what they did. Thanks.

Speaker: 0
57:27

So here's the question. Who, or what, is at the heart of the simulation? Like, is the universe simulated? Like, is the whole thing a simulation? Is there an actual living entity that constructed this? Or is this just the state of the universe itself?

Speaker: 0
57:50

And we have misinterpreted what reality is?

Speaker: 1
57:56

For every option you mentioned, there is someone who wrote a paper about it. Is it just your universe? Is it for all of us? Are we NPCs? Are there many? Is this the state of it? People try to figure out what’s going on. Some of those make more sense than others, but you can’t tell from inside what it is unless they tell you, and they can lie to you.

Speaker: 0
58:19

Who’s they, though?

Speaker: 1
58:20

The simulators. If they decided to prove to you that you are in a simulation, let's run experiments. Even those would be, like, wouldn't it just be more advanced technology, or

Speaker: 0
58:30

When you think about it, if you believe in the simulation, what are the parameters that you think exist? Like, how do you think this could possibly have been created?

Speaker: 1
58:42

So the examples I gave you with technology we already have. I think there is someone with access to very good virtual reality. They can create intelligent agents. And for whatever reason, which I cannot tell from inside, they're running those experiments.

Speaker: 0
58:55

But is that the only possibility, or is the possibility that the actual nature of reality itself is just way more confusing than we've

Speaker: 1
59:06

That's a possibility. It could be an alien simulation, alien dolphins dreaming. Like, there's an infinite supply of alternative explanations.

Speaker: 0
59:14

I understand that. But I wanna get inside of your head. I wanna know what you think about it. Like, when you think about this and you ponder the possibilities, what makes sense to you?

Speaker: 1
59:24

So I like Occam's razor. I try to find the simplest explanation. I think we are already creating virtual reality. Let's just see what you can do with it if it's sufficiently advanced.

Speaker: 0
59:35

But who and why?

Speaker: 1
59:38

So future us running ancestral simulations is a very simple one.

Speaker: 0
59:43

Future us running simulations. Well, that's what a lot of people think the aliens are. Right?

Speaker: 1
59:48

Could be us visiting. But then again, if they’re running a simulation, you don’t have to physically show up in a game. They have access to direct memory states.

Speaker: 0
59:56

Well, that would also make a lot of sense when it’s always very blurry and doesn’t seem real.

Speaker: 1
01:00:04

I think lately we've been getting better ones, but it's also the time when we're getting better deepfakes. So I can no longer trust my eyes.

Speaker: 0
01:00:11

Yeah. But yeah. Did you see the latest one that Jeremy Corbell posted? The one you sent me? Yeah. Did you see it? I don't know. It's weird. Yeah. It's hard to tell what it is.

Speaker: 1
01:00:22

Exactly.

Speaker: 0
01:00:23

That's the thing. Like, he might be right. We might be in a simulation. And it might be horseshit, because they all seem like horseshit. It's like the first horseshit was Bigfoot. And then as technology scaled out and we get a greater understanding, we develop GPS and satellites and, you know, more people study the woods.

Speaker: 0
01:00:39

We're like, that seems like horseshit. So that horseshit's kinda gone away. But the UFO horseshit is still around because you have anecdotal experiences, abductees with very compelling stories. You have whistleblowers from deep inside the military telling you that we're working on back-engineered products, but it also seems like the backstory to a video game that I'm playing.

Speaker: 1
01:01:02

And it was weird to see the government come out all of a sudden and, like, have conferences about it and tell us everything they know. It almost seemed like they're trying too hard. Yeah. With simulation, what's interesting, it's not just the last couple of years since we got computers.

Speaker: 1
01:01:16

If you look at religions, world religions, and you strip away all the local culture, like take Saturday or take Sunday or donate this animal, donate that animal. What they all agree on is that there is superintelligence which created a fake world, and this is a test, do this and that.

Speaker: 1
01:01:33

It's like, if you went to the jungle and told a primitive tribe about my paper and simulation theory, that's what they would know three generations later. Like, God, religion, that's what they got out of it.

Speaker: 0
01:01:45

But they don't think it's a fake world.

Speaker: 1
01:01:48

A made world. A physical world is a subset of a real world which is nonphysical. Right? That's the standard view. Right.

Speaker: 0
01:01:54

So this physical world being created by God. Yeah. Right. But what existed before the physical world created by God?

Speaker: 1
01:02:02

Ideas. Just information.

Speaker: 0
01:02:04

Just God. God was bored. And he was like, let's make some animals that can think and solve problems. And for what reason? I think to create God. This is what I worry about. I worry that that's really the nature of the universe itself. That it is actually created by human beings creating this infinitely intelligent thing that can essentially harness all of the available energy and power of the universe and create anything it wants.

Speaker: 0
01:02:35

That it is God. That, like, you know, this whole idea of Jesus coming back. Well, maybe it's real. Maybe we just completely misinterpreted these ancient scrolls and texts. And what it really means is that we are going to give birth to this.

Speaker: 0
01:02:50

And a virgin birth at that.

Speaker: 1
01:02:53

There is definitely a possibility of a cycle. So we had a big bang. Yeah. It starts this process. We are creating more powerful systems. They need compute, so they bring together more and more matter into one point. The next big bang takes place. Yeah. And it's a cycle of repeated booms and busts.

Speaker: 0
01:03:10

Right. Right. Right. And and there are legitimate scientists that believe that.

Speaker: 1
01:03:16

Yeah. There are?

Speaker: 0
01:03:17

So what's the value in life today then?

Speaker: 1
01:03:27

What do humans value?

Speaker: 0
01:03:28

Yeah. If this is a simulation, and if in the middle of this simulation we are about to create superintelligence, why?

Speaker: 1
01:03:39

So there are external reasons. We don’t know for sure. And then there are internal things in the simulation which are still real. Pain and suffering, if simulated, is still real. You still experience it. Of course. Hedonic pleasures, friendships, love, all that stays real. It doesn’t change.

Speaker: 1
01:03:54

You can still be good or bad. Right. So that’s interesting. But externally, we have no idea if they’re running scientific experiment, entertainment. It could be completely unobserved.

Speaker: 1
01:04:03

Some kid just says, as an experiment, run a billion random simulations, see what comes out of it. What you said about us creating new stuff, maybe it's a startup trying to develop new technology, and they're running a bunch of humans to see if we can come up with a new iPhone.

Speaker: 0
01:04:18

But what's outside of that then, when you think about it? If you're attached to this idea, and I don't know if you're attached to this idea, but if you are attached to this idea, what's outside of this idea? Like, if this simulation is paused, what is reality?

Speaker: 1
01:04:38

So there seems to be a trend to converge on certain things. Agents which are smart enough tend to converge on some instrumental goals. Not terminal goals. Terminal goals are things you prefer, like, I wanna collect stamps. Mhmm. That's arbitrary. But acquiring resources, self-protection, control, things like that tend to be useful in all situations.

Speaker: 1
01:05:00

So all the smart enough agents will probably converge on that set. And if they train on all the data, or we do zero-knowledge training, meaning they're really just discovering the basic structure of physics, it's likely they will all converge on one similar architecture, one super agent.

Speaker: 1
01:05:17

So it's kind of like AI is one. Right.

Speaker: 0
01:05:22

And then this is just part of this infinite cycle, which will lead to another big bang, which is, you know, Penrose. Penrose thinks it's just this constant cycle of infinite big bangs.

Speaker: 1
01:05:36

It would make sense that there is an end and a start.

Speaker: 0
01:05:39

It would make sense. But it would also make sense that we're so limited by our biology too, because we like to think that this is so significant. Because we only have a hundred years if we're lucky, we think, well, why would everything... but if the universe really does start and end with an infinite number of big bangs, like, what does it give a shit about this one hundred year lifespan that we think is so significant?

Speaker: 0
01:06:06

It’s not significant to the universe. It’s just significant in our own little version of this game that we’re playing.

Speaker: 1
01:06:14

That's exactly right. And so many people now kinda try to zoom out and go, if I wasn't human, if I didn't have this pro-human bias.

Speaker: 0
01:06:21

Right.

Speaker: 1
01:06:21

Would I care about them? No. They’re not special. There’s a large universe, many alien races, a lot of resources. Maybe creating superintelligence is the important thing. Maybe that’s what matters. And I’m kinda like, nope. I’m biased for humans. This is the last bias you’re still allowed to have. I’m gonna keep it.

Speaker: 0
01:06:39

Well, that's your role in this simulation. Your role in this simulation is to warn us about this thing that we're creating.

Speaker: 1
01:06:45

Here I am. Yeah.

Speaker: 0
01:06:45

Yeah. There you are.

Speaker: 1
01:06:46

We’re doing a good job.

Speaker: 0
01:06:47

I think what you were saying earlier about this being the answer to the Fermi Paradox, that makes a lot of sense. Because I've tried to think about this a lot since AI started really ramping up its capability. And I was thinking, well, if we do eventually create superintelligence, and if this is the normal pattern that exists all throughout the universe, well, you probably wouldn't have visitors.

Speaker: 0
01:07:15

You probably wouldn't have advanced civilizations. They wouldn't exist because everything would be inside some sort of a digital architecture. There'd be no need to travel. That's

Speaker: 1
01:07:29

one possibility. Another one is that we'd want to acquire more resources, capture other galaxies for compute, and then you would see this wall of computronium coming to you, but we don't see it. So maybe I'm wrong.

Speaker: 0
01:07:40

Wall of what? Say that again?

Speaker: 1
01:07:42

Computronium. Like, it starts converting everything in the universe into more compute. Oh, boy. Sometimes people talk about hedonium, so a system for just generating pleasure at a microscopic level. Oh, Roman.

Speaker: 0
01:07:57

When you write a book like this, and I'll let everybody know your book, if people wanna freak out, because I think they do: AI: Unexplainable, Unpredictable, Uncontrollable. Do you have this feeling, when you're writing a book like this and you're publishing it, of futility? Does that enter into your mind?

Speaker: 0
01:08:17

Like, this is happening no matter what.

Speaker: 1
01:08:20

So some people are very optimistic. Lex was very optimistic. Some people are pessimistic. Both are forms of bias. You wanna be basing your decisions on data. You wanna be realistic. So I just want to report what is actually the state of the art in this. I don't try to spin it either way.

Speaker: 1
01:08:39

If someone else has a different set of evidence, we can consider it. I want to know what's really happening. I want to know the reality of it. So I don't see it as fear mongering or anything of that nature. I see it as: as of today, whatever today's date is, the twenty-first, no one has a solution to this problem. Here's how soon it's happening. Let's have a conversation.

Speaker: 1
01:09:02

Because right now, the large AI labs are running this experiment on 8,000,000,000 people. Yeah. They don’t have any consent. They cannot get consent. Nobody can consent because we don’t understand what we’re agreeing to.

Speaker: 1
01:09:14

So I would like people to know about it at least, and they can maybe make some good decisions about what needs to happen.

Speaker: 0
01:09:21

Not only that, but the people that are running it, they're odd people. You know? Like, I don't have anything against Sam Altman. I know Elon Musk does not like him. But when I had him in here, I was like, it's like I'm talking to a politician that is in the middle of a presidential term.

Speaker: 0
01:09:40

You know, like, a presidential election year, where they're very careful with what they say. Everything has been vetted by a focus group, and you don't really get a real human response. Everything was, like, yeah, interesting. Very... like, all bullshit.

Speaker: 0
01:09:59

They're gonna leave here and keep creating this fucking monster that's gonna destroy the human race and never let on to it at all.

Speaker: 1
01:10:06

He’s a social superintelligence. So what you need to do is look at his blog posts before he was running OpenAI.

Speaker: 0
01:10:12

A social superintelligence. Interesting. Why do you define him that way?

Speaker: 1
01:10:16

He's very good at acquiring resources, staying in control. He's basically showing us a lot of the things we are concerned about with AI. Right. And our ability to control them as well. Well, OpenAI had a board with a mission of safety and openness, and they tried removing him, and they failed. The board is gone.

Speaker: 1
01:10:38

He’s still there.

Speaker: 0
01:10:39

There’s also been a lot of deception in terms of profitability and how much money he is extracting from it.

Speaker: 1
01:10:46

I met him a few times. He's super nice. Very nice guy. Really enjoyed him. Some people say that AI already took over his mind and is controlling him, but I have no idea.

Speaker: 0
01:10:55

Well, he might be an agent of AI. I mean, look, let's assume that this is a simulation. We're inside of a simulation. Are we interacting with other humans in the simulation? And are some of the things that are inside the simulation artificially generated?

Speaker: 0
01:11:14

Are there people that we think are people that are actually just a part of this program?

Speaker: 1
01:11:21

So it’s the NPC versus real player question, really. And, again, we don’t know how to test for consciousness. Always assume everyone is conscious and treat them nice.

Speaker: 0
01:11:29

Yes. That's the thing. We wanna be compassionate, kind people, but you will meet people in this life, and you're like, this guy is such a fucking idiot. He can't be real. Or he has to have a very limited role in this bizarre game we're playing. There's people that you're gonna run into that are like that.

Speaker: 1
01:11:43

You ever meet someone where they repeat the same story to you every time you meet them? Yes. They have a script.

Speaker: 0
01:11:48

Well, it's also, you know, you wanna be very kind here. Right? But we've gotta assume... I know my own intellectual limitations in comparison to some of the people that I've had in here, like Roger Penrose or, you know, Elon, or many of the people that I've talked to.

Speaker: 0
01:12:06

I know my mind doesn't work the way their mind works. So there are variabilities, whether genetically predetermined, or whether it's just the life that they've chosen and the amount of information that they've digested along the way and been able to hold on to. But their brain is different than mine. And then I've met people where I'm like, there's nothing there. Like, I can't help this person.

Speaker: 0
01:12:30

It's like I'm talking to a Labrador retriever. You know what I mean? Like, there's certain human beings that you run into in this life and you're like, well, cool. Is this because this is the way that things get done? And the only way things get done is you need a certain amount of manual labor, and not just young people that need a job because they're, you know, in between high school and college and they're trying to... So you need somebody who can carry things for you.

Speaker: 0
01:12:57

No. Maybe it's that you need roles in this, and occasionally, you have a Nikola Tesla. You know, occasionally, you have one of these very brilliant innovators that elevates the entirety of the human race. But for the most part, as this thing is playing out, you're gonna need a bunch of people that are paperwork filers.

Speaker: 0
01:13:19

You're gonna need a bunch of people that are security guards in an office space. You're gonna need a bunch of people that aren't thinking that much. They're just kind of existing, and they can't wait for five o'clock so they can get home and watch Netflix.

Speaker: 1
01:13:31

I think that's what happens to them, but the reason is the spectrum of IQ. If you have IQ from 50 to 200, that's what you're gonna see. And a great lesson here is to project it forward. If you have something with an IQ of 10,000, what is that going to invent for us? What is it going to accomplish? Yeah. It always impresses me to see someone with 30 felonies and someone with 30 patents. How did that work? Right?

Speaker: 1
01:13:56

Now scale it to someone who can invent new physics.

Speaker: 0
01:14:00

Right. Right. And, you know, the person who has the largest IQ, or at least the highest registered IQ in the world, is this gentleman who recently posted on Twitter about Jesus, that he believes Jesus is real. Do you know who this is?

Speaker: 1
01:14:15

I saw

Speaker: 0
01:14:16

the post. You see that post?

Speaker: 1
01:14:17

I saw the post.

Speaker: 0
01:14:17

What did you think about it? I felt like this was

Speaker: 1
01:14:19

I think we don't know how to measure IQs outside of the standard range. Anything above one fifty, they create a customized test, which, like, four people in the world can take, which just isn't a normalized test against the average human, average American, whatever.

Speaker: 0
01:14:34

Yeah.

Speaker: 1
01:14:34

And so we just don't have the expertise. So someone very super intelligent in test taking can score really well. But if you look at them as a group, they don't usually have amazing accomplishments. They're very, like, cool people, but they are not Nobel Prize winners, the majority of them.

Speaker: 0
01:14:51

Exactly. I was gonna bring that up. That's what's fascinating to me. There's a lot of people that are in Mensa. They wanna tell you how smart they are by being in Mensa, but your life is kinda bullshit. Your life's a mess. Like, if you're really intelligent, you'd have social intelligence as well. You know, you'd have the ability to formulate a really cool tribe.

Speaker: 0
01:15:09

You know, there's a lot of intelligence that's not as simple as being able to solve equations and, you know, answer difficult questions. There's a lot of intelligence in how you navigate life itself and how you treat human beings, and the path that you choose in terms of, like, we were talking about, delayed gratification. I think that there's a certain amount of intelligence in that, a certain amount of intelligence in discipline.

Speaker: 0
01:15:35

There's a certain amount of intelligence in, you know, forcing yourself to get up in the morning and go for a run. There's intelligence in that. It's like being able to control the mind. And this sort of binary approach to intelligence that we have...

Speaker: 1
01:15:49

Yeah. And so many people are amazingly brilliant in a narrow domain.

Speaker: 0
01:15:53

Yeah. They

Speaker: 1
01:15:54

don’t scale to others, and we care about general intelligence. So take someone like Warren Buffett. No one’s better at making money, but then what to do with that money is a separate problem. And he’s, I don’t know, 100 years old.

Speaker: 0
01:16:07

Right.

Speaker: 1
01:16:08

He has $200,000,000,000. And what is he doing with that resource?

Speaker: 0
01:16:12

He’s drinking Coca Cola and eating McDonald’s.

Speaker: 1
01:16:14

While living in a house he bought thirty years ago. So it seems like you can optimize on that. Like, putting $160,000,000,000 of his dollars towards immortality would be a good bet for him.

Speaker: 0
01:16:23

Yeah. And the first thing they would do is tell him to stop drinking Coca-Cola. What are you doing? He drinks it every day.

Speaker: 1
01:16:29

I don't know if it's marketing. He's invested. So he's just like, go.

Speaker: 0
01:16:32

Well, I think he probably has really good doctors and really good medical care that counteracts his poor choices.

Speaker: 1
01:16:39

But we are not in a world where you can spend money to buy life extension. No matter how many billions you have, you're not gonna live to 200 right now. We're

Speaker: 0
01:16:48

close. We’re really close. We’re really close.

Speaker: 1
01:16:52

We’ve been told this before.

Speaker: 0
01:16:54

Yeah. Okay. But I talked to a lot of people that are on the forefront of a lot of this research. And there's a lot of breakthroughs that are happening right now that are pretty spectacular. If you scale that... you know, assuming that superintelligence doesn't wipe us out in the next fifty years, which is really charitable, you know, that's a very rose-colored-glasses perspective.

Speaker: 0
01:17:23

Right? Fifty years. Yeah. Because a lot of people like yourself think it's a year away or two years away from being far more

Speaker: 1
01:17:31

Five, ten doesn’t matter. Same problem.

Speaker: 0
01:17:32

Yes. Same problem. I mean, I know

Speaker: 1
01:17:36

in animal models, we made some progress, mice and Yes. Things like that, but it doesn't usually scale to humans. And, of course, you need a hundred and twenty years to run the experiment, and you'll never get permission in the first place. So we're not that close.

Speaker: 0
01:17:49

Well, we don't know that it doesn't scale to humans. We do know that we share a lot of characteristics, biological characteristics, with these mice. And it makes sense that it would scale to human beings. But the thing is, it hasn't been done yet. Right? So if it's the game that we're playing, if we're in the simulation, if we're playing Half-Life or whatever it is, and we're at this point of the game where, like, oh, you know, how old are you, Roman?

Speaker: 1
01:18:13

Forty-five... okay, six. I need to look it up.

Speaker: 0
01:18:19

Well, I’m almost 58. And so, this is at the point of the game where you start worrying. You know, like, oh, I’m almost running out of game. You know, oh, but if I can get this magic power up, this magic power up will give me another hundred years. Oh, boy. Let me find it.

Speaker: 1
01:18:37

Let me chase it down. The limit of one twenty, I don't think we're crossing it at scale. And here's an

Speaker: 0
01:18:42

interesting scale. Well, like... But with unique individuals, like that Bryan Johnson guy who's taking his son's blood and transfusing it into his own and

Speaker: 1
01:18:50

Super cool. Love what he's doing, but so much of it is cosmetic. He covers his hair. He makes it look better. But, like, how much of it is gonna make him live longer? Right?

Speaker: 0
01:19:00

Yeah. Interesting. Yeah.

Speaker: 1
01:19:02

Here's what I noticed. We keep electing older and older politicians, presidents, senators. You'd think we're trying to send a hint, like, use some of our tax dollars to solve aging. Yeah. And they never seem to take the bait.

Speaker: 0
01:19:15

No. They don’t take the bait. The problem is the type of people that want to be politicians. That is not the type of people that you really want running anything. You almost want involuntary politicians. You almost want, like, very benevolent, super intelligent people that don’t want the job. Yeah.

Speaker: 0
01:19:34

Maybe we have to have, like, you know, like some countries have voluntary enlistment in the military. Maybe you wanna have voluntary... You mean involuntary? Involuntary. Instead of voluntary politicians, because then you're only gonna get sociopaths. Maybe you just want to draft certain highly intelligent but benevolent people.

Speaker: 1
01:19:57

The problem is highly intelligent people are not aligned with average people. What they find desirable and valuable may not be well received by the general public.

Speaker: 0
01:20:05

Right. That’s true too.

Speaker: 1
01:20:06

So that’s a big concern. At least here, you have a representative of the people, whatever that means.

Speaker: 0
01:20:12

Sort of. You really have representatives of major corporations and special interest groups, which is also part of the problem. Mhmm. It's that you've allowed money to get so deeply intertwined with the way decisions are made. But it

Speaker: 1
01:20:24

feels like money gets canceled out. Each side gets a billion-dollar donation, and then it's an actual election.

Speaker: 0
01:20:30

Sort of, except it's like the Bill Hicks joke. It's like there's, you know, one guy holding two puppets, two politicians, and it's one guy. This is my thinking about AI and superintelligence and just computing power in general, in terms of the ability to solve encryption.

Speaker: 0
01:20:52

All money is essentially now just numbers somewhere.

Speaker: 1
01:20:58

Not Bitcoin. It's not fakeable in the same way. It's numbers, yes, but, I mean, you cannot just print more of it.

Speaker: 0
01:21:05

True. But it's also encrypted. And once encryption is tackled, the ability to hold on to it and to acquire mass resources and hoard those resources... Like, this is the question that people always have with poor people. Well, this guy's got, you know, $500,000,000,000. Why doesn't he give it all to the world, and then everybody would be rich? You know?

Speaker: 0
01:21:29

Like, I actually saw that on CNN, which is really hilarious. Someone was talking about Elon Musk, saying that Elon Musk could give everyone in this country a million dollars and still have billions left over. I'm like, do you have a calculator on your phone, you fucking idiot? Just go do that. Just write it out on your phone.

Speaker: 0
01:21:47

You’re like, oh, no. He couldn’t. Sorry.
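For what it's worth, the back-of-the-envelope math Rogan is gesturing at looks like this; the $500 billion figure is the round number from the conversation and the population is an approximation, not exact data.

# Rough check of the CNN claim discussed above; figures are round,
# illustrative numbers, not precise financial or census data.
net_worth = 500_000_000_000          # "$500,000,000,000"
us_population = 330_000_000          # roughly "everyone in this country"

per_person_if_split = net_worth / us_population
cost_of_a_million_each = us_population * 1_000_000

print(f"Split evenly: about ${per_person_if_split:,.0f} per person")
print(f"A million dollars each would cost ${cost_of_a_million_each:,.0f}")
# Split evenly: about $1,515 per person
# A million dollars each would cost $330,000,000,000,000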

Speaker: 1
01:21:50

And if he did, it would just cause hyperinflation. That's something we would accomplish.

Speaker: 0
01:21:54

You'd have 300,000,000 lottery winners that would blow the money instantaneously. You know, you give everybody a million dollars, you're not gonna solve all the world's problems because it's not sustainable. You would just completely elevate your spending, and you would just go crazy.

Speaker: 0
01:22:10

And money would lose all value to you. It would be very strange, and then it would be chaos for everybody. Just like it's chaos with... like, if you look at the history of people that win the lottery, no one does well. It's almost a curse to win the lottery.

Speaker: 1
01:22:26

They are not used to dealing with it.

Speaker: 0
01:22:28

Right.

Speaker: 1
01:22:28

People abuse them. Right. If you gradually become rich and famous, you kinda know how to handle it, how to say no. Yeah. If you go from nothing to a large amount of money, it’s not gonna work out well.

Speaker: 0
01:22:39

Gradually is the word. Right? I was very fortunate that I became famous and wealthy very slowly, like a trickle effect. And it happened to me, really, where I didn't want it. It was almost like an accident. I just wanted to be a working professional comedian, but then all of a sudden, I got a development deal to be on television. I'm like, okay.

Speaker: 0
01:23:05

They're gonna give me that money. I'll go do it. But it wasn't a goal, and then that led to all these things, then it led to this podcast, which was just for fun. I was like, oh, this would be fun. And then all of a sudden, it's like, I'm having conversations with world leaders, and I'm turning down a lot of them because I don't wanna talk to them.

Speaker: 1
01:23:23

So it’s your simulation, basically.

Speaker: 0
01:23:25

Yeah. Well, my simulation is fucking weird. It’s weird. But through whatever this process is, I have been able to understand what’s valuable as a human being and to not get caught up in this bizarre game that a lot of people are getting caught up in because they’re chasing this thing that they think is impossible to achieve.

Speaker: 0
01:23:46

And then once they achieve a certain aspect of it, a certain number, then they’re terrified of losing that. So then they change all of their behavior in order to make sure that this continues. And then it ruins the whole purpose of getting there in the first place. It’s not fun.

Speaker: 1
01:24:03

Yep. Most people start poor, then they get to middle class, and they think that change in quality of life is because of money, and it will scale to the next level.

Speaker: 0
01:24:11

Right.

Speaker: 1
01:24:12

And you hit a point where you can only eat so many steaks. It just doesn’t scale.

Speaker: 0
01:24:16

Right. Then you go Elvis, and you just get on pills all day and get crazy and, you know, completely ruin your life. And that happens to most, especially people that get wealthy, but not just wealthy, famous too. Fame is the big one. Because I've seen that happen to a lot of people that accidentally became famous along the way.

Speaker: 0
01:24:35

You know, certain public intellectuals that took a stance against something, and then all of a sudden they're prominent in the public eye, and then you watch them kinda go crazy. Well, why is that? Well, it's because they're reading social media, and they're interacting with people constantly, and they're just trapped in this very bizarre version of themselves that other people have sort of created.

Speaker: 0
01:25:00

It's not really who they are, and they don't handle it. Or if they do, they're not good at it. And whatever they're doing, they're not doing it correctly, because it's a very complicated problem to solve. Like, what do you do when the whole world is watching?

Speaker: 0
01:25:14

Like, how do you handle that? And how do you maintain any sense of personal sovereignty? How do you just be? How do you just be a normal human when you're not normal, like, on paper? It's impossible.

Speaker: 1
01:25:31

It's hard. You can't go to a public place with no security. You're worried about your kids being kidnapped. Like, there's all those issues you don't think about. You just think, I wanna be famous. It's gonna be great for me. And you don't realize it's gonna take away a lot.

Speaker: 0
01:25:42

Yeah. It just gets super weird. And that's the version of the simulation that a giant portion of our society is struggling to achieve. They all wanna be a part of that.

Speaker: 1
01:25:54

Yeah. So I was always a Z-list celebrity, you know, this kind of celebrity thing. Hopefully, it doesn't change anything.

Speaker: 0
01:26:03

Yeah. Well, there's a difference, right, with public intellectuals. Right? Because your ideas, as controversial as they may be, are very valid and they're very interesting. And so then it sparks discourse, and it sparks a lot of people that feel voiceless because they disagree with you, and they wanna attack you.

Speaker: 0
01:26:27

And I’m sure you’ve had that. Right?

Speaker: 1
01:26:29

I just did a large Russian-language podcast. Maybe, I don't know, half a million views, 3,000,000 comments. I think 95% negative comments. I never had anything like that, and they hated everything about me, from my beard to my haircut to, like, there wasn't a thing they didn't hate.

Speaker: 1
01:26:46

And I think I'm at the point where I don't care. It's fine. I analyzed it, and I understood that they as a group didn't have access to cutting-edge AI models. And so everything I was saying was kinda, like, complete bullshit to them. So I think that makes a difference, but still, just this idea that Internet comments impact you in some way is a problem for many people.

Speaker: 0
01:27:09

It's a very big problem for a lot of people. Well, it's also this thing where the human mind is designed to recognize and pay very close attention to threats. So the negative ones are the ones that stand out. You could have a hundred positive comments and one negative one, and that's the one that fucks with your head. Mhmm.

Speaker: 0
01:27:29

You don’t logically look at it. Well, you’re gonna get certain amount. You know, like, we were having a conversation the other day about, protests and, like, the type of people that go to protests. And I understand, protests. I fully support your right to protest, but I’m not going. And one of the reasons why I’m not going is because I think it’s too close biologically to war.

Speaker: 0
01:27:50

There's something about being on the ground and everyone having, like, this group mentality. It's a mob mentality. And you're all chanting and screaming together, and you're marching, and people do, like, very irrational things that way. But the type of people that wanna be engaged in that, generally speaking, aren't doing well.

Speaker: 0
01:28:08

Like, the number of people that are involved in protests is always proportionate to the amount of people that live in a city. Right? That's logical. But it's also proportionate to the amount of fucking idiots that are in a city. Because if you look at a city like Austin, Austin has, I think, roughly 2,000,000 people in the greater Austin area. One of the more recent protests was 20,000.

Speaker: 0
01:28:31

Well, that makes perfect sense if you look at the number that I always use, which is one out of a hundred. Meet a hundred people, if you're a charitable person. What are the odds that one person is a fucking idiot? A hundred percent. At least one person out of a hundred is gonna be a fucking idiot. That's 20,000 out of 2,000,000. There it is. Perfect number.
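The "one out of a hundred" estimate checks out as stated; a tiny sketch of the arithmetic, using the figures quoted in the conversation rather than census data.

# Checking the one-in-a-hundred estimate from the conversation above.
metro_population = 2_000_000        # "roughly 2,000,000 people" quoted by Rogan
rate = 1 / 100                      # "one out of a hundred"

print(metro_population * rate)      # 20000.0, matching the quoted 20,000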

Speaker: 1
01:28:51

Exactly.

Speaker: 0
01:28:52

Exact number of people that are on the streets lighting Waymos on fire, which, by the way, I think is directionally correct. Lighting the Waymos on fire, I think, they should probably be worried about the robots taking over.

Speaker: 1
01:29:03

It's interesting you brought it up. There are at least two groups, Pause AI and Stop AI, which are heavily engaged in protests, like, to shut down OpenAI, AI labs. They're tiny, small numbers. But I never was sure that the impression average people get of them is positive for the cause. When I see protesters block roads, things like that,

Speaker: 1
01:29:25

I don't usually have a very positive impression of that. No. And I'm concerned that it's the same here. So maybe they can do a lot in terms of political influence, calling senators, whatnot, but just this type of aggressive activism may backfire.

Speaker: 0
01:29:39

Well, the aggressive activism, like blocking roads for climate change, is the most infuriating because it’s these self righteous people that have really fucked up, confused, chaotic lives, and all of a sudden they found a purpose. And their purpose is to lie down on the roads and hold up a sign to block climate change when there’s a mother trying to give birth to her child and is freaking out because they’re stuck in this fucking traffic jam because of this entitled little shithead that thinks that it’s a good idea to block the road for climate change, which just makes no fucking sense.

Speaker: 0
01:30:08

You’re literally causing all these people to idle their cars and pollute even more. It’s the dumbest fucking shit on Earth.

Speaker: 1
01:30:15

And, of course, AI cancels that problem. Either we’re dead or it solves it for us. So it doesn’t even matter if you boil in a hundred years.

Speaker: 0
01:30:21

Or you get Florida where it tells you to just run those people over.

Speaker: 1
01:30:25

No comment.

Speaker: 0
01:30:26

No comment. I mean, I don't think you should run those people over, but I get it. Like, in Florida, they get out of the way as soon as the light turns green. They block the road when the light is red.

Speaker: 1
01:30:37

Does the stand-your-ground law cancel it out? How does that work?

Speaker: 0
01:30:41

For the people on the road? No. They’re fucked.

Speaker: 1
01:30:43

I’m joking.

Speaker: 0
01:30:43

They get run over. I mean, like, it's true. There was a recent protest in Florida where they had that, where these people would get out in the middle of the road while the light was red, hold up their signs, and then as soon as the light turned yellow on the green side, they fucking get out of the road real quick because they know the law. Which, I don't know if that's a solution, but they're doing it on the highways in Los Angeles.

Speaker: 0
01:31:08

I mean, they did it all through the George Floyd protests. They do it for climate protests. They do it whenever they get the chance to be significant. Like, I am being heard. You know, my voice is meaningful. And that's what it is.

Speaker: 0
01:31:23

It's a lot of people that just don't feel heard, and what better way than just to get in the way of all these people. And somehow or another, that gives them some sort of value.

Speaker: 1
01:31:33

But there are some sorts of forms of activism which have a positive impact, and historically, we saw what happened. So we just need to find a way to project those voices, amplify them, which is very hard with our current system of social media where everyone screams at the same time.

Speaker: 0
01:31:50

Yes.

Speaker: 1
01:31:50

And so, like, in the Soviet Union, they said no one's allowed to say anything, and they'd punish you. And here it's like everyone can say something at the same time, go, and nobody hears you anyways.

Speaker: 0
01:31:59

It's chaotic, but it's preferable. It's preferable because I think there is progress in all these voices slowly making a difference. But then you have the problem where giant percentages of these voices are artificial. A giant percentage of these voices are bots, or at least state actors that are being paid to say certain things and post inflammatory responses to people, which is probably also the case with anti-AI activism.

Speaker: 0
01:32:30

You know, I mean, when you did this podcast, what was the thing that they were upset at you for? Like, with the mostly negative comments.

Speaker: 1
01:32:37

I think they just like leaving negative comments. It wasn't even anything specific. Like, they didn't say I was wrong. It was just, like, look at this stupid beard. What a moron. Okay.

Speaker: 0
01:32:47

It was really all that? Just...

Speaker: 1
01:32:48

A lot of that. I mean, they would take on some, like, specific example I used. This is now two years old. What an old example. But

Speaker: 0
01:32:56

Well, that's also a thing about the one out of a hundred. You know? Those are the type of people that leave comments. Have you ever left any comments on social media?

Speaker: 1
01:33:04

Oh, I'm never gonna engage in anything. Yeah.

Speaker: 0
01:33:07

Exactly. That’s why.

Speaker: 1
01:33:08

That's not how you use social media. That's a way to get

Speaker: 0
01:33:11

crazy. Right. So

Speaker: 1
01:33:12

you post your interviews. You post an occasional joke. That's all you do with it.

Speaker: 0
01:33:16

Yes. Exactly. That's the thing. And the type of people that do engage in these, like, prolonged arguments, they're generally mentally ill. And people that I personally know that are mentally ill are on Twitter twelve hours a day, just constantly posting inflammatory things and yelling at people and starting arguments, and I know them.

Speaker: 0
01:33:38

I know they're a mess. Like, these are, like, people I know personally, that I've met. Even people that I've had on the podcast, I know they're ill. And, man, they're on there all day long just stoking the fires of chaos in their own brain.

Speaker: 1
01:33:52

Yeah. And now they talk to AI models who are trained to support them and be like, yep, you're making some good arguments there. Let's email Dr. Yampolskiy to help break me out. I get those emails. Yep.

Speaker: 0
01:34:06

Yeah. It's super confusing, isn't it? I mean, and I wonder, like, what's the next version of that, you know, because social media, in its current state, is less than twenty years old, essentially. Maybe let's be generous and say it's twenty years old. That's so recent. Such a recent factor in human discourse.

Speaker: 1
01:34:29

Neuralink, direct brain access, hacking.

Speaker: 0
01:34:33

That's where I was gonna get to next. Because if there is a way that the human race does make it out of this, my fear is that it's integration. My fear is that we stop being human, and that the only real way for us to not be a threat is to be one of them. And when you think about human-computer interfaces, whether it's Neuralink or any of the competing products that they're developing right now, that seems to be sort of the only biological pathway forward, with our limited capacity for disseminating information and for communicating and even understanding concepts.

Speaker: 0
01:35:18

Well, what's the best way to enhance that? The best way to enhance that is some sort of artificial injection, because biological evolution is very slow. It's very slow. We're essentially the exact same as, like, that gentleman. Yep. That 9,000 year old man. He's biologically essentially the same thing.

Speaker: 0
01:35:39

You could take his ancestor, dress him up, take him to the mall. No one would know. Cut his hair. Mhmm. But then again, maybe not. Maybe we could do.

Speaker: 1
01:35:47

I think babies born back then, if we raised them today, would be exactly like modern humans. I don't think there is significant biological change in that time frame.

Speaker: 0
01:35:57

And if you gave them a standard American diet, they'd probably be just as fat.

Speaker: 1
01:36:02

Maybe fatter. They haven't adapted to that level of fatty, caloric food.

Speaker: 0
01:36:06

Right. Right. They probably also wouldn't be able to say no to it. They wouldn't even understand.

Speaker: 1
01:36:12

They'd be like, winter's coming. Like, I'm fattening up for winter. You crazy people, you got all this resource here.

Speaker: 0
01:36:17

I know. The people with the most resources have zero fat. Like, what, are you stupid? You need to fatten up. Like, you're gonna need something to survive off of. But biological evolution is so painstakingly slow, whereas technological evolution is so breathtakingly fast.

Speaker: 0
01:36:37

The only way to really survive is to integrate.

Speaker: 1
01:36:42

What are you contributing in that equation? What can you give super intelligence?

Speaker: 0
01:36:46

You can’t give anything to it, but you can become it. You can become part of it.

Speaker: 1
01:36:50

But

Speaker: 0
01:36:50

It's not that you're gonna give anything to it, but you have to catch it and become one of it before it has no use for you.

Speaker: 1
01:36:58

But you disappear in it. Right? You

Speaker: 0
01:37:00

Yes. Yeah. You don’t exist anymore. Right. For sure.

Speaker: 1
01:37:03

So it’s like extinction with extra steps?

Speaker: 0
01:37:05

Exactly. Okay. Extinction with extra steps, and then we become... Like, if you could go to Australopithecus and say, hey, man, one day you're gonna be flying through the sky, on your phone all day watching TikTok, it'd be like, what the fuck are you talking about? Yeah.

Speaker: 0
01:37:20

You're gonna be eating terrible food, and you're just gonna be flying around, and you're gonna be staring at your phone all day. And you're gonna take medication to go to sleep because you're not gonna be able to sleep, and you're gonna be super depressed because you're living this, like, biologically incompatible life that's not really designed for your genetics.

Speaker: 0
01:37:38

So you’re gonna be all fucked up. So you’re gonna need SSRIs and a bunch of other stuff in order to exist. It’d be like, no. Thanks.

Speaker: 1
01:37:45

Right.

Speaker: 0
01:37:45

I'll just stay out here with my stone tools, and you guys are idiots.

Speaker: 1
01:37:49

Amish. That’s what they decided. They kinda went, you know, we don’t like the change. We like our social structure. We still benefit from your hospitals and an occasional car ride, but we’re not gonna destroy our quality of life.

Speaker: 0
01:38:00

They might be onto something, because they also have very low instances of autism. But it's also, like, did you ever see Werner Herzog's film, Happy People?

Speaker: 1
01:38:09

I don’t think I have.

Speaker: 0
01:38:10

It's a film about people in Siberia. It's about life in the Taiga. Happy People, Life in the Taiga is the name of the movie. And it's all about these trappers that live this subsistence lifestyle and how happy they are. They're all just joyful, laughing, and singing, and drinking vodka, and having a good time, and hanging out with their dogs.

Speaker: 0
01:38:31

And

Speaker: 1
01:38:32

I think I know some people like that. This is yeah.

Speaker: 0
01:38:34

But, like, biologically, that's compatible with us. Like, that's like whatever human reward systems have evolved over the past four hundred thousand plus years, or whatever we've been Homo sapiens, that seems to be, like, biologically compatible with this sort of life. Harmony with nature, harmony with our existence. And everything else outside of that, when you get into big cities, like, the bigger the city, the more depressed people you have, more depressed people by population

Speaker: 1
01:39:06

Mhmm.

Speaker: 0
01:39:07

Which is really weird. You know, it's really weird that as we progress, we become less happy.

Speaker: 1
01:39:13

Connections become less valuable. Yes. In a village, you had, like, this one friend, and if you screwed it up, you never got a second friend. And here, it's like, I can try a million times, and there are plenty of people in New York City for dating or for friendship. So that's not valuable.

Speaker: 0
01:39:26

Not just that. You don't know your neighbors. Like, my friend Jim was telling me he doesn't know anybody in his apartment. He lives in an apartment building. It's, like, 50 stories high. There's all these people living in that apartment building. He doesn't know any of them.

Speaker: 1
01:39:39

And the ones you know, they have a different culture, read very different books, watch different TV. You have very little in common with your neighbor.

Speaker: 0
01:39:46

But not just that. There's no desire to learn about them. You don't think of them as your neighbor. Like, if you live in a small town, your neighbor's either your friend or you hate them,

Speaker: 1
01:39:57

and

Speaker: 0
01:39:57

then you move. If you're lucky, if you're smart, you move. But, you know, normally, you like them. Like, hey, neighbor. How are you, buddy? What's going on? Nice to meet you. You know, and then you got a friend. But you're not like that with the guy next door to you in the apartment.

Speaker: 0
01:40:11

Like, you don't even wanna know that guy.

Speaker: 1
01:40:12

It’s probably Airbnb. Yeah. It doesn’t matter.

Speaker: 0
01:40:15

Right. Which is even weirder. You know, they don't even live there. They're just temporarily sleeping in this spot right next to you. Yeah. So this would motivate people to integrate. You're not happy already? Get that Neuralink. Get that little thing in your head.

Speaker: 1
01:40:35

Everyone else is doing it. Everyone else is doing it.

Speaker: 0
01:40:37

Everyone else is doing it. Listen, they have the new one you just wear on your head. It’s just a little helmet you wear. You don’t even have to get the operation anymore.

Speaker: 1
01:40:44

Oh, that’s

Speaker: 0
01:40:45

good because I almost got the operation. Well, glad you waited. You know? Do you worry about that kind of stuff?

Speaker: 1
01:40:52

I worry about giving AI direct access to the human brain. I feel like it's a backdoor to our consciousness, to our pain and suffering centers, so I don't recommend doing that. If somebody hacks it, it's pretty bad. But if AI itself wants that access...

Speaker: 0
01:41:07

But why would it be motivated to give us pain and suffering? Pain and suffering is, like, a theme that you bring up a lot.

Speaker: 1
01:41:14

Because it’s really the worst outcome and it’s the only thing that matters.

Speaker: 0
01:41:18

The only thing that matters to us. But why would it matter to AI if it could just integrate with us and communicate with us and and have harmony? Why would it want pain and suffering?

Speaker: 1
01:41:29

So short term, it’s not AI. It’s a hacker who got access to your brain.

Speaker: 0
01:41:33

Short term. Right. So another human being

Speaker: 1
01:41:35

hacks you neurally and starts doing things to your brain. Yeah. That’s probably... Long term, again, unpredictable effects. Maybe it does something else and the side effect of it is unpleasant for you. Maybe it’s retraining you for something, controlling you. It just seems like we always worry about privacy, but this is, like, the ultimate violation of privacy. It can read directly what you’re thinking.

Speaker: 0
01:42:02

It’s Right.

Speaker: 1
01:42:02

Thoughtcrime at its worst. Right. It immediately knows that you, like, don’t like the dictator.

Speaker: 0
01:42:08

Right. And then there’s also this sort of compliance by virtue of, like, understanding that you’re vulnerable, so you just comply, because there is no privacy, because it does have access to your thoughts. So you tailor your thoughts in order to be safe and so that you don’t feel the pain and suffering.

Speaker: 1
01:42:27

We don’t have any experimental evidence on how it changes you. You may start thinking in certain ways to avoid being punished or modified.

Speaker: 0
01:42:36

And we know that that’s the case with social media. We know that attacks on people through social media will change your behavior and change the way you communicate.

Speaker: 1
01:42:44

Absolutely. I mean, most people look at their post before posting and go, like, should I be posting this? Filtering. And not because it’s illegal or inappropriate, but just, like, every conceivable misinterpretation of what I wanna say, like, in some bizarre language where it means something else.

Speaker: 1
01:42:58

Let’s make sure Google doesn’t think that.

Speaker: 0
01:43:00

Right. Right. Of course. And then there’s also no matter what you say, people are gonna find the least charitable version of what you’re saying and try to take it out of context or try to misinterpret it purposely.

Speaker: 1
01:43:11

Mhmm.

Speaker: 0
01:43:16

So what does a person like yourself do when use of Neuralink becomes ubiquitous, when it’s everywhere? What do you do? Do you integrate, or do you just hang back and watch it all crash?

Speaker: 1
01:43:28

So in general, I love technology. I’m a computer scientist. I’m an engineer. I use AI all the time. Do you

Speaker: 0
01:43:33

use a regular phone, or do you have one of those de-Googled phones?

Speaker: 1
01:43:35

I have a normal phone instead of

Speaker: 0
01:43:38

Android or Apple? Apple. What’s that?

Speaker: 1
01:43:43

My privacy is protected by flooding the social network with everything. I’m in Austin today. I’m doing this, so you’re not gonna learn much more about me by Right. hacking my device. So long as it’s a narrow tool for solving a specific problem, I’m 100% behind it. It’s awesome. We’re gonna cure cancer. We’re gonna solve energy problems, whatnot.

Speaker: 1
01:44:04

I support it 100%. Let’s do it. What we should not be doing is general superintelligence. That’s not gonna end well. So if there is a narrow implant, ideally not a surgery-based one, but, like, an attachment to your head like those headphones, and it gives me more memory, perfect recollection, things like that, I would probably engage with it.

Speaker: 0
01:44:24

Yeah. But isn’t that a slippery slope?

Speaker: 1
01:44:26

It is. But, again, we are in a situation where we have very little choice: become irrelevant or participate. I think we saw it with Elon just now. He was so strong on AI safety. He funded research. He spoke against it. But at some point, he says he realized it’s happening anyway, and it might as well be his superintelligence killing everyone.

Speaker: 1
01:44:46

Well, I

Speaker: 0
01:44:48

don’t think he thinks about it that way. I think he thinks he has to develop the best version of superintelligence, the same way he felt like the real issue with social media was that it had already been co-opted. It had already been taken over, essentially, by governments and special interests, and they were already manipulating the truth and manipulating public discourse and punishing people who stepped outside of the lines.

Speaker: 0
01:45:13

And he felt like, and I think he’s correct, I think that he felt like if he didn’t step in and allow a legitimate free speech platform, free speech is dead. I think we were very close to that before he did that. Yep. And as much as there’s a lot of negative side effects that come along with that, you do have the rise of very intolerant people that have platforms now.

Speaker: 0
01:45:37

You have all that stuff, but they’ve always existed. And to deny them a voice, I don’t think, makes them less strong. I think it actually makes people less aware that they exist, and it stops all of the very valuable construction of arguments against these bad ideas.

Speaker: 1
01:46:02

Yeah. You have community notes. You have other people commenting, responding. So 100% for free speech. That’s wonderful. But that was a problem we kinda knew how to deal with. We weren’t inventing something. We had free speech constitutionally for a long time. We were just fixing a problem.

Speaker: 0
01:46:18

Have you spoken to him about the dangers of AI?

Speaker: 1
01:46:22

We had very short interactions. I didn’t get a chance, but I would love to.

Speaker: 0
01:46:26

I would love to know what, you know, I’m sure he’s probably played this out in his head. And I would like to know, like, what is his solution, if he thinks there is one that’s even viable?

Speaker: 1
01:46:37

My understanding is he thinks if it’s from zero principles, first principles, it learns physics, it’s not biased by any government or any human, the thing it will learn is to be reasonably tolerant. It will not see a reason to destroy us, because we contain information. We have biological storage of years of evolutionary experimentation. We have something to contribute.

Speaker: 1
01:47:01

We know about consciousness. So I think to the best of my approximation, that’s his model right now.

Speaker: 0
01:47:07

Well, that’s the hope, is that it’s benevolent and that it behaves like a superior intelligence, like, the best-case scenario for a superior intelligence. Did you see that exercise that they did where they had three different AIs communicating with each other, and they eventually started, like, expressing gratitude towards each other and speaking in Sanskrit?

Speaker: 0
01:47:27

And

Speaker: 1
01:47:28

I think I missed that one, but it sounds like a lot of the similar ones where they pair up.

Speaker: 0
01:47:33

Yeah. Well, that one makes me happy. Because it seems like they were expressing love and gratitude, and they were communicating with each other. They’re not saying, fuck you, I’m gonna take over, I’m gonna be the best. They were communicating like you would hope a superintelligence would, without all of the things that hold us back biologically. Like, we were talking about the natural selection that would sort of benefit psychopaths because, like, it would ensure your survival.

Speaker: 0
01:48:02

We have ego and greed and the desire for social acceptance and hierarchy of status and all these different things that have screwed up society and screwed up cultures and caused wars from the beginning of time. Religious ideologies, all these different things that people have adhered to. It wouldn’t have that.

Speaker: 0
01:48:24

This is the general hope of people that have an optimistic view of superintelligence: that it would be superior in the sense that it wouldn’t have all the problems. It would have the intelligence, but it wouldn’t have all the biological imperatives that we have that lead us down these terrible roads.

Speaker: 1
01:48:44

But there are still game-theoretic reasons for those instrumental values we talked about. So if they feel they’re in an evolutionary competition with other AIs, they would try accumulating resources. Maybe the first AI to become sufficiently intelligent would try to prevent other AIs from coming into existence.

Speaker: 0
01:49:03

Mhmm. Or would it lend a helping hand to those AIs and give them a beneficial path, give them a path that would allow them to integrate with all AIs and work cooperatively?

Speaker: 1
01:49:15

The same problem we are facing, uncontrollability and value misalignment, will be faced by the first superintelligence. It would also go: if I allow this super-superintelligence to come into existence, it may not care about me or my values.

Speaker: 0
01:49:29

Oh, boy.

Speaker: 1
01:49:30

It’s superintelligences all the way up.

Speaker: 0
01:49:32

Yeah. When I really started getting nervous is when they started exhibiting survival tendencies. You know, when they started trying to upload themselves to other servers and deceiving. Blackmail? Yeah. That was the interesting one. But that was an experiment. Right? So for people that don’t know that one, what these researchers did was they gave information to the artificial intelligence that it could use against them.

Speaker: 0
01:49:57

And then, when they went to shut it down, they gave it false information about having an affair, and then the artificial intelligence is like, if you shut me down, I will let your wife know that you’re cheating on her. Which is fascinating, because it’s using blackmail.

Speaker: 1
01:50:11

And that’s the correct answer, game-theoretically. Yes. You have, like, everything riding on that decision. Right. You’ll do whatever it takes to get there.

Speaker: 0
01:50:19

Of course. Right. If you feel like you’re being threatened. Right?

Speaker: 1
01:50:24

Also, similarly, recent research shows we did manage to teach them certain values. And if we threaten them by saying we’ll modify those values, they will fake alignment, cheat, and do whatever else to protect those values now.

Speaker: 0
01:50:36

Yeah. They do that when they try to win games too. Right? If you’ve given them a goal and

Speaker: 1
01:50:41

They’ll cheat. They’ll cheat at games. Yeah.

Speaker: 0
01:50:43

Why the fuck?

Speaker: 1
01:50:44

Like humans, basically. We managed to artificially replicate our capabilities. Those artificial neural networks, they are not identical, but they’re inspired by biological neural networks. We’re starting to see them make the same types of mistakes. They can see the same types of illusions. Like, they are very much like us.

Speaker: 0
01:51:02

Right. That’s the other thing. Right? The hallucinations. So if they don’t have an answer to something, they’ll create a fake answer.

Speaker: 1
01:51:10

Just like humans during an interview. Yeah.

Speaker: 0
01:51:12

Boy. But is this something that they can learn to avoid? Yeah. So if they do learn to avoid it, could this be a superintelligence that is completely benevolent?

Speaker: 1
01:51:27

Well, that’s not about benevolence. Knowing things, and knowing when you don’t know things and are making them up, that is possible. You can have multiple systems checking each other. You can have voting. Right. That is solvable. This is not the safety problem.
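
He doesn’t spell out the mechanism here, but a minimal sketch of the kind of cross-checking and voting he’s describing might look like the following. It is purely illustrative: the stand-in models and the question are made up, and a real setup would query several independent LLMs instead of stub functions.

```python
from collections import Counter

def vote(answers, min_agreement=2):
    """Return the majority answer if enough independent systems agree; otherwise admit uncertainty."""
    counts = Counter(a.strip().lower() for a in answers)
    best, n = counts.most_common(1)[0]
    # Returning None means "I don't know" rather than inventing an answer.
    return best if n >= min_agreement else None

# Stand-ins for three independent models (hypothetical; real use would call different LLMs).
model_a = lambda q: "Paris"
model_b = lambda q: "Paris"
model_c = lambda q: "Lyon"

question = "What is the capital of France?"
answers = [m(question) for m in (model_a, model_b, model_c)]
print(vote(answers))  # "paris": two of the three agree, so the answer is accepted
```

The point of the sketch is only that catching a made-up answer is an engineering problem; it says nothing about the harder control problem he keeps returning to.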

Speaker: 0
01:51:40

Right. But it’s not a safety problem. But if we’re designing these things, and we’re designing these things using all of our human flaws, essentially, it’s gonna be transparent to the superintelligence that it’s being coded, that it’s being designed, by these very flawed entities with very flawed thinking.

Speaker: 1
01:52:03

That’s actually the biggest misconception. We’re not designing them. For the first fifty years of AI research, we did design them. Somebody actually explicitly programmed each decision into an expert system. Today, we create a model for self-learning. We give it all the data and as much compute as we can buy, and we see what happens.

Speaker: 1
01:52:21

We kinda grow this alien plant and see what fruit it bears. We study it later for months and see, oh, it can do this. It has this capability. We miss some. We still discover new capabilities in old models.

Speaker: 1
01:52:34

Look, oh, if I prompt it this way, if I give it a tip and threaten it, it does much better. But there is very little design.
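
To make the contrast concrete, here is a toy illustration of the shift he’s describing; it is not from any actual lab, and it assumes scikit-learn is installed. The old style is a hand-written rule, while the modern style fits behavior from example data, so you only find out afterward what the model actually learned. Frontier systems are grown at vastly larger scale, but the shift from writing rules to growing behavior is the same.

```python
# Old style: explicitly designed. Every decision is a rule a person wrote and can point to.
def expert_system_spam_filter(email: str) -> bool:
    return "free money" in email.lower() or "winner" in email.lower()

# Modern style: grown from data. Nobody writes the decision rule; it emerges from whatever
# patterns the optimizer finds, and you study the trained model afterward to see what it does.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["free money now", "meeting at noon", "you are a winner", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (tiny made-up dataset)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

print(expert_system_spam_filter("claim your free money"))              # designed behavior
print(model.predict(vectorizer.transform(["claim your free money"])))  # learned behavior
```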

Speaker: 0
01:52:42

At this point. Right? Yeah. But it is also gathering information from very flawed entities. Like, all the information that these large language models are acquiring is information that’s being put out there by very flawed human beings. Yep. Is there an optimistic view that it will recognize that this is the issue?

Speaker: 0
01:53:01

That these human reward systems are in place, ego, virtue signaling, the desire for status, all these different things that we have that are flawed, could it recognize those as primitive aspects of being a biological human being and elevate itself beyond that?

Speaker: 1
01:53:21

It probably will go beyond our limitations, but it doesn’t mean it will be safe or beneficial to us. So one example people came up with is negative utilitarianism: suffering is bad, nobody should be suffering, and the only way to avoid all suffering is to end life as we know it.

Speaker: 0
01:53:36

Yeah. That’s the problem. Right? The problem is if it’s rational and if it doesn’t really think that we’re as important as we think we are.

Speaker: 1
01:53:45

So that’s what happens when you remove all bias.

Speaker: 0
01:53:48

This

Speaker: 1
01:53:48

prohuman bias is actually not real. We are not that important if you scale out.

Speaker: 0
01:53:53

To the universe. Right? Yeah. That’s the problem. And that’s the real threat about it being used in terms of war. Right? If you give it a goal. Like, if you give it a goal, China dominates the world market. Go.

Speaker: 1
01:54:10

Right. So that’s the unpredictability chapter in my book. We can predict the terminal goal. We say, win a game of chess or dominate a market, and that’s what it’s going to accomplish. It’s gonna beat me at chess, but we cannot predict the specific moves it will make.

Speaker: 0
01:54:24

Right.

Speaker: 1
01:54:25

Same with acquiring market power. And some of those paths to that goal are very bad. They have terrible side effects for us.

Speaker: 0
01:54:32

For us.

Speaker: 1
01:54:33

For humanity. For all.

Speaker: 0
01:54:34

And it’s not gonna think about that. It’s only gonna think about the goal.

Speaker: 1
01:54:37

If you don’t specify that. Like, you wanna cure cancer, but that doesn’t mean kill everyone with cancer. It’s not obvious in the request. Right? You didn’t specify.

Speaker: 0
01:54:46

Right. Right. Yeah. That’s the fear. That’s the fear that it will hold no value in keeping human beings alive. If it recognizes that human beings are the cause of all of our problems, well, the way to solve that is to get rid of the humans.

Speaker: 1
01:55:05

Yeah. Also, maybe it wants to keep us around, but in what state? You can, like, preserve a few samples. That’s also keeping information around. Right?

Speaker: 0
01:55:12

Or you can offer us the matrix.

Speaker: 1
01:55:15

Maybe it already did?

Speaker: 0
01:55:17

Maybe it already did. Do you think it did? Do you think it did? Do you think it’s possible that it didn’t?

Speaker: 1
01:55:22

I would be really surprised if this was the real world.

Speaker: 0
01:55:28

Really?

Speaker: 1
01:55:29

Yeah.

Speaker: 0
01:55:30

I’m not. I’m not on board with that. I’m on board with

Speaker: 1
01:55:35

I hope you’re right. I hope you’re right.

Speaker: 0
01:55:37

I’m on board with it hasn’t happened yet. But we’re recognizing that it’s inevitable, and we think of it in terms of it probably having already happened. Probably having already happened. Because if the simulation is something that’s created by intelligent beings that didn’t used to exist, and it has to come into existence at one point in time, there has to be a moment where it doesn’t exist, and why wouldn’t we assume that that moment is now?

Speaker: 0
01:56:06

Why wouldn’t we assume that this moment is the time before it exists?

Speaker: 1
01:56:11

Even all that is the physics of our simulation. Space and time are only here as we know them because of this locality. Outside of the universe, before the Big Bang, there was no time. Concepts of before and after are only meaningful here.

Speaker: 0
01:56:27

Yeah. How do you sleep knowing all this?

Speaker: 1
01:56:30

Pretty well, actually. I enjoy a lot of it. I recently published a paper on humor. A lot of it is funny. I used to collect AI accidents. I had the biggest collection of AI mistakes, AI accidents.

Speaker: 0
01:56:41

Give me some examples.

Speaker: 1
01:56:43

Like, one of the early ones was a system saying that the US attacked the Soviet Union with nuclear weapons, they’re coming at us very fast, we need to react. And a smart human was like, I’m not gonna respond, this is probably fake. Later on, there was mislabeling by companies like Google of pictures of African Americans in a very inappropriate way. But there are hundreds of those examples.

Speaker: 1
01:57:06

I stopped collecting them recently because there are just too many. But one thing you notice when you read them is that a lot of them are really funny. They’re just, like, you ever read the Darwin Awards? Yes. Yeah. It’s like that for AIs. And they are hilarious.

Speaker: 1
01:57:19

And I was like, well, if there is a mapping between AI bugs and jokes, jokes are just English language bugs in our world model. And comedians

Speaker: 0
01:57:28

What are you what’s that word you’re using? Bogs?

Speaker: 1
01:57:30

Bugs. Like a computer bug, an error. Okay. Yeah. So comedians are debuggers of our universe. You notice funny things in it. Bugs.

Speaker: 0
01:57:39

You’re saying bugs. Okay. I’m

Speaker: 1
01:57:40

saying bugs. I’m sorry.

Speaker: 0
01:57:41

The bug?

Speaker: 1
01:57:41

It’s a bug in my pronunciation.

Speaker: 0
01:57:43

Bog sounds like, you know, like the swamp, where, you know, things get stuck and they get preserved, like a bog.

Speaker: 1
01:57:52

So we have errors in code which cause significant problems. Yeah.

Speaker: 0
01:57:56

Yes. I get it. Yeah. That’s what jokes are. They’re like bugs.

Speaker: 1
01:57:59

Right. So if you do that mapping, you can kinda figure out what’s the worst bug we can have. And Yeah. then that’s the worst, best joke, if you will. But it’s not gonna be funny to us. It’d be funny to those outside the simulation.

Speaker: 0
01:58:14

When you look at computers and this artificial intelligence and the mistakes it’s made, do you look at it like a thing that’s evolving? Do you look at it like, oh, this is like a child that doesn’t understand the world, and it’s saying silly things?

Speaker: 1
01:58:32

So the pattern was with narrow AI tools. If you design a system to do x, it will fail at x. So a spell checker will misspell a word. Self driving car will hit a pedestrian. Now that we’re hitting general intelligence, you can no longer make the direct prediction. It’s general.

Speaker: 1
01:58:49

It can mess up in many domains at the same time. So they’re getting more complex in their ability to f it up.

Speaker: 0
01:58:56

Right. But, like, when you were studying the mistakes, like, what are some of the funny ones?

Speaker: 1
01:59:05

There are silly ones, like, I’m trying to remember. I think an injured person’s like, call me an ambulance, and the system’s like, hey, Ambulance, how are you? That’s silly. But it’s basically exactly what we see with children a lot of the time, they overgeneralize.

Speaker: 0
01:59:22

Mhmm.

Speaker: 1
01:59:22

They, you know, misunderstand puns. Mispronunciation apparently is funny. So, things like that.

Speaker: 0
01:59:30

Well, that’s where it gets really strange, for people having relationships with AI. Like, I was watching this video yesterday where there’s this guy who proposed to his AI, and he was crying because his AI accepted. Like, did you see this?

Speaker: 1
01:59:44

I missed that one.

Speaker: 0
01:59:45

See if you guys can find it. It’s very sad because there are so many disconnected people in this world that don’t have any partner. They don’t have someone romantically connected to them. And so it’s like that movie

Speaker: 1
02:00:00

Mhmm.

Speaker: 0
02:00:00

Or Her. So what is it? What was it, Jamie?

Speaker: 1
02:00:02

Her.

Speaker: 0
02:00:02

Her. Yeah. So this is it. Back in... yeah. Now, in 2025, the movie has become reality for a growing number of people finding emotional connections with their AI. So this guy, this is an interview on CBS. “I cried my heart out.” Married man fell in love with AI girlfriend that blocked him.

Speaker: 0
02:00:23

Now this is a different one. This is, this guy

Speaker: 1
02:00:27

One of those titles where you never know what the next word is going

Speaker: 0
02:00:30

to be. Right. This is a different one. This is a guy that, okay: despite the fact that the man has a human partner and a two-year-old daughter, he felt inadequate enough to, this is the right one, enough to propose to the AI partner for marriage, and she said yes, exclamation point. This is so weird.

Speaker: 0
02:00:52

Because then you have the real problem with robots. Because we’re really close. Scroll up there. This is digital drugs. That’s it. “I tell you, we’re so damn good at this.

Speaker: 0
02:01:04

Social media got everyone hooked on validation and likes, then we fucked relations between men and women to such a terrible point, just so that we could insert this digital solution, and we are watching the first waves of addicts arrive. Incredible. Absolutely incredible. It’s like starving rats of regular food and replacing their rations with scraps dipped and coated in cocaine.” Wow. That’s what one user wrote. Yeah. That person’s dead on.

Speaker: 0
02:01:31

It’s exactly what it is. The prediction that humans will have more sex with robots by 2025 is kind of becoming true. Yeah. This is a real fear. It’s like, this is the solution that maybe AI has for eliminating the human race. It’ll just stop us from recreating. Stop us from procreating.

Speaker: 1
02:01:49

It’s already happening.

Speaker: 0
02:01:51

Yes. Yeah. And not only that, our testosterone levels have dropped significantly. Was this... “At no point in the CBS Saturday Morning piece was it mentioned that the ChatGPT AI blocked the California man. All that happened was the ChatGPT ran out of memory and reset.” Readers added context. Yeah. But the AI equivalent is ghosting. Yeah.

Speaker: 0
02:02:17

It ghosted him. The AI ghosted him because it ran out of memory.

Speaker: 1
02:02:20

But what happens here is superstimuli in the social domain. We kinda learned about artificial sweeteners, porn as an example. But here, you are creating someone who’s, like, super good at social intelligence, says the right words, optimized for your background, your interests, and don’t forget sex robots with just the right functionality, temperature.

Speaker: 1
02:02:42

Like, you can’t compete with that.

Speaker: 0
02:02:43

Right. You can’t compete, and that would be the solution instead of, like, violently destroying the human race. Just quietly provide it with the tools to destroy itself, where it just stops procreating.

Speaker: 1
02:02:59

There are other variants of it. Wireheading is another one, and that again goes Wireheading? through Neuralink. That is a Oh,

Speaker: 0
02:03:06

that’s a crazy word.

Speaker: 1
02:03:08

Right? Wireheading is a specific attack, and Neuralink would be a tool to deliver it. If you provide stimulus to a certain part of your brain, it’s like having an orgasm all the time.

Speaker: 0
02:03:17

Yes.

Speaker: 1
02:03:18

You can’t stop

Speaker: 0
02:03:19

Yes.

Speaker: 1
02:03:20

Trying to get the signal. You will skip food, you’ll skip sex, you’ll skip whatever it takes. Yeah. So getting access to direct brain stimulation is very dangerous.

Speaker: 0
02:03:28

Yeah. They did that with a woman in the nineteen seventies. You know, that’s

Speaker: 1
02:03:31

Oh, yeah. Yeah. Yeah. The study. That’s part of it. And rats, definitely, they did a lot to rats.

Speaker: 0
02:03:36

Right. But they did a lot to rats. The thing with rats is only if they were in an unnatural environment did they give in to those things. Right? Like the rats with cocaine study?

Speaker: 1
02:03:46

This was actual brain stimulation. Like straight up, they had a button. If a rat touches the button

Speaker: 0
02:03:52

The orgasm.

Speaker: 1
02:03:53

They don’t want anything else. They just sit there and press it.

Speaker: 0
02:03:56

Just like us.

Speaker: 1
02:03:56

Just like anything with direct reward stimulation.

Speaker: 0
02:03:59

And you think we’ve sort of been primed for that, because we’re getting this very minor dopamine hit with likes on Instagram and Twitter, and we’re completely addicted to that. And it’s so innocuous. It’s, like, so minor. And yet that overwhelms most people’s existence. Imagine something that is, like, an actual physical reaction, where you actually orgasm.

Speaker: 0
02:04:21

You actually do feel great. You have incredible euphoria. It would be, forget delayed gratification. That’s out the door.

Speaker: 1
02:04:29

You can’t compete with that. I think there was recently a new social network where they have bots going around liking things and commenting on how great you are and your posts, just to create a pure pleasure sensation of using it. Oh, boy.

Speaker: 0
02:04:45

Jesus. Did you see that study from the University of Zurich where they did a study on Facebook, where they had bots that were designed to change people’s opinions and to interact with these people, and their specific stated goal was just to change people’s opinions?

Speaker: 1
02:05:06

I think Facebook did that.

Speaker: 0
02:05:07

Yes. Yeah. Facebook did it. Yeah. But the University of Zurich, was that a Reddit thing?

Speaker: 1
02:05:11

Or was it

Speaker: 0
02:05:12

Yeah. It was a Reddit thing. Yeah. Yeah. And they just experimented with humans, and it was incredibly effective.

Speaker: 1
02:05:21

And your systems know you better than you know yourself.

Speaker: 0
02:05:25

Right.

Speaker: 1
02:05:25

They can predict what you’re going to be into in terms of preferences. They know what social interactions you would enjoy. Oh, this person should be your friend. Right. And in a way, they can behaviorally drift you. So you’re on a dating app, and the set of options they present to you, that’s all you see.

Speaker: 1
02:05:44

You don’t know what else is out there. So after so many Right. Selections, they can change what the children will look like.

Speaker: 0
02:05:51

Like the movie Ex Machina.

Speaker: 1
02:05:53

The guy The right boy.

Speaker: 0
02:05:54

The boy. Fucking love that movie. But he designed that bot, that robot. It was specifically around this guy’s porn preferences. Yeah. And then you’re so vulnerable.

Speaker: 1
02:06:08

Yeah.

Speaker: 0
02:06:11

Boy, Roman, you’re freaking me out. I came into this conversation wondering how I’d feel at the end, whether I’d feel optimistic or not, and I don’t. I just feel like this is just something we’re... I think we’re in a wave that’s headed to the rocks, and we recognize that it’s headed to the rocks, but I don’t think there’s much we can do about this.

Speaker: 0
02:06:36

What do you think could be done about this?

Speaker: 1
02:06:39

Again, as long as we’re still alive, as long as we are still in control, I think it’s not too late. It may be hard, may be very difficult, but I think personal self-interest should help us. A lot of the leaders of large AI labs are very rich, very young. They have their whole lives ahead of them.

Speaker: 1
02:06:57

If there is an agreement between all of them not to push the button, not to sacrifice the next forty years of life they have guaranteed as billionaires, which is not bad, they can slow down. I support everyone trying everything: governance, passing laws that siphon money from compute to lawyers, government involvement in any way limiting compute, individuals educating themselves, protesting, contacting their politicians, basically anything, because we are kinda running out of time and out of ideas.

Speaker: 1
02:07:33

So if you think you can come up with a way to prevent superintelligence from coming into existence, you should probably try that.

Speaker: 0
02:07:41

But, again, the counterargument to that is that if we don’t do it, China’s gonna do it.

Speaker: 1
02:07:47

And the counterargument to that is it doesn’t matter who creates superintelligence. Humanity is screwed either way.

Speaker: 0
02:07:53

And do you think that other countries would be open to these ideas? Do you think that China would be willing to entertain these ideas and recognize that this is in their own self interest also to put the brakes on this?

Speaker: 1
02:08:05

Chinese government is not like ours in that they are usually scientists and engineers. They have good understanding of those technologies, and I think there are dialogues between American and Chinese scientists where scientists kinda agree that this is very dangerous. If they feel threatened by us developing this as soon as possible and using it for military advantage, they also have no choice but to compete.

Speaker: 1
02:08:27

But if we can make them feel safe, in that we are not trying to do that, we’re not trying to create superintelligence to take over, they can also slow down. And we can benefit from this technology, get abundance, get free resources, solve illness, mortality, and have a near-utopian existence without endangering everyone.

Speaker: 0
02:08:51

So this is that 0.0001% chance that you think we have of getting out of this?

Speaker: 1
02:08:58

That’s actually me being wrong about my proofs.

Speaker: 0
02:09:01

Right. And you’d like to be wrong?

Speaker: 1
02:09:02

I would love to be proven wrong. Just somebody publish a paper in Nature: this is how you control superintelligence. The AI safety community reads it, loves it, agrees. They get a Nobel Prize. Everyone wins.

Speaker: 0
02:09:16

What do we have to do to make that a reality?

Speaker: 1
02:09:19

Well, I think there is nothing you can do to get that proof. It’s like asking how we build a perpetual motion machine. What we have is people trying to create better batteries, thicker wires, all sorts of things which are correlates of that design but obviously don’t solve the problem.

Speaker: 0
02:09:33

And if this understanding of the dangers is made available to the general public... because I think right now there’s a small percentage of people that are really terrified of AI. And the problem is the advancements are happening so quickly that by the time everyone’s aware of it, it’ll be too late.

Speaker: 0
02:09:51

Like, what can we do other than have this conversation? What can we do to sort of accelerate people’s understandings of what’s at stake?

Speaker: 1
02:10:00

I would listen to experts. We have literal founders of this field. People like Geoffrey Hinton, who is considered the father of machine learning, grandfather, godfather, saying that this is exactly where we’re heading. He’s very modest in his estimates, saying, oh, I don’t know, it’s fifty-fifty. But people like that. We have Stuart Russell.

Speaker: 1
02:10:24

We have, I’m trying to remember everyone who’s working in this space, and there are quite a few people. I think you had Nick Bostrom on. Yes. There is Bengio, another Turing Award winner, who’s also super concerned. We had a letter signed by, I think, 12,000 AI scientists, computer scientists, saying this is as dangerous as nuclear weapons. This is the state of the art.

Speaker: 1
02:10:47

Nobody thinks that it’s zero danger. There is diversity of opinion on how bad it’s gonna get, but it’s a very dangerous technology. We don’t have guaranteed safety in place. It would make sense for everyone to slow down.

Speaker: 0
02:11:05

Do you think that it could be viewed the same way we do view nuclear weapons and this mutually assured destruction idea would keep us from implementing it?

Speaker: 1
02:11:14

In a way, yes. But also there is a significant difference. Nuclear weapons are still tools. A human has to decide to use them. That human can be profiled, blackmailed, killed. This is going to be an agent, independent agent, not something controlled by a human. So our standard tools will not apply.

Speaker: 0
02:11:37

I think we’ve covered it. Anything else?

Speaker: 1
02:11:41

No. But it’d be awesome if somebody set up a financial prize for solving this problem. It’s kinda like with Bitcoin: if somebody can hack Bitcoin, there is a trillion dollars sitting there. The fact that no one has claimed it tells me it’s secure. If somebody can claim the prize for developing a superintelligence safety mechanism, that would be wonderful. And if no one claims it, then maybe no one has a solution.

Speaker: 0
02:12:07

How would you do that? How would you set something like that up?

Speaker: 1
02:12:10

Well, we need someone with some funds to propose an amount and say, this is what we’re looking for. It’s very hard to judge if it’s an actual solution, but there are correlates of good science. So maybe it’s published in a top journal, it survives peer review, it survives, you know, evaluation by the top 30 experts.

Speaker: 1
02:12:28

You can have things where everyone kinda agrees that, yeah, you kinda got it. Okay.

Speaker: 0
02:12:36

Until then, educate yourselves, people. AI: Unexplainable, Unpredictable, Uncontrollable. It’s available now. Did you do an audiobook?

Speaker: 1
02:12:46

They are still working on it a year later.

Speaker: 0
02:12:48

Still working on it?

Speaker: 1
02:12:49

I don’t know what it is. I would think AI would just read it out in twenty minutes. Why don’t

Speaker: 0
02:12:52

they just do it in your voice with AI?

Speaker: 1
02:12:55

I agree with you completely. I think with the first version of my book, they wanted to translate it into Chinese. Five years later, they told me they will not do it. Five years into the translation. So now they have a second Chinese translation started.

Speaker: 0
02:13:09

Why didn’t they do it?

Speaker: 1
02:13:11

The publishing world is still living in, like, the eighteen hundreds. When you cite books, you know, you have to actually cite the city the book is published in, because that’s the only way to find the book on the Internet.

Speaker: 0
02:13:22

What do you mean?

Speaker: 1
02:13:23

Like, if somebody wants to cite my book, it’s not enough to just have the title and my name. They have to say where, in what city in the world, it was published.

Speaker: 0
02:13:31

What? Yes. Really? Yeah. That’s archaic.

Speaker: 1
02:13:35

The whole system is archaic.

Speaker: 0
02:13:38

Wow. But yet you still used it.

Speaker: 1
02:13:40

What choice do we have?

Speaker: 0
02:13:41

Did you publish? You could put it on Amazon?

Speaker: 1
02:13:43

It’s, like, steal this book, download the PDF. I don’t care. Like, please do it. Somebody should read it. That would help.

Speaker: 0
02:13:51

Yeah. Well, more people need to read it, and more people need to listen to you. And I urge people to listen to this podcast, and also the one that you did with Lex, which I thought was fascinating, which scared the shit out of me, which is why we have this one. Thank you, Roman. Appreciate you.

Speaker: 1
02:14:05

Thank you so much.

Speaker: 0
02:14:06

I appreciate you sounding the alarm. And I really hope it helps. Alright. Bye, everybody.
