#2372 – Garry Nolan

Garry Nolan, PhD, is an immunologist and professor at Stanford University School of Medicine. He is also a business executive and Executive Director of the Board of the Sol Foundation, a research and advocacy center focused on UAP studies.

www.thesolfoundation.org

Hunt with confidence using onX Hunt. Start your trial today at: https://huntsmarter.smart.link/srwbpznr2

This video is sponsored by BetterHelp. Visit https://BetterHelp.com/JRE

Learn more about your ad choices. Visit podcastchoices.com/adchoices




#2372 – Garry Nolan Podcast Episode Transcript (Unedited)

Speaker: 0
00:01

Joe Rogan podcast. Check it out. The Joe Rogan Experience. Train by day, Joe Rogan podcast by night, all day.

Speaker: 1
00:12

Garry, very nice

Speaker: 0
00:13

to meet you, sir. Nice to meet you as well.

Speaker: 1
00:14

Thank you for doing this. I really appreciate it. Tell everybody what you do. Tell everybody what your official position is. You're a professor at the School of Medicine at Stanford. What do you do?

Speaker: 0
00:26

So my day job is in cancer research and cancer biology, mostly immunology and cancer. Much of what my laboratory does is not so much the biology of cancer, but developing instruments that create the data that allow us to analyze the complexities of how the immune system interfaces with tumors and how tumors basically re-enable the immune system to help the cancer itself.

Speaker: 0
00:54

So the problem has been that we didn't have the ability, not until recently, to collect enough data and understand what all of it means. So we've been kind of poking in the dark for decades. And so probably for the last twenty years, I've developed a number of instruments and turned them into companies that allow everybody to access a level of information they couldn't get before.

Speaker: 1
01:16

So explain that. The immune system allows the tumors

Speaker: 0
01:24

So what happens is that there's sort of a dance between the mutations that initiate a tumor and then sort of an evolution of how the tumor eventually learns how to trick the immune system into not recognizing it. I mean, literally every day, every person, you'll develop five cancer-like objects inside of your body.

Speaker: 0
01:50

But the immune system and your body have a way of shutting them down very quickly. But with enough time and with enough variation, tumors will eventually evolve in a way that tricks the immune system not only into not attacking them, but in fact into helping them and feeding them, creating an inflammatory environment that the tumor then uses to propagate its own cell division and then metastasis.

Speaker: 1
02:15

So it’s a normal function of natural human biology to create tumors?

Speaker: 0
02:21

It's not so much a normal function. It's a byproduct of what evolution is, that the genes mutate when a cell divides. Or if you go out and, you know, stand in the sun too much, for instance, you get skin cancers because you're getting ionizing radiation that's changing the DNA, making a mutation, and some of those random mutations will initiate a cancer.

Speaker: 0
02:43

So, for instance, I have a mutation called MITF E318K. It's a mutation that I was born with. It wasn't in my family, and it causes both melanoma and kidney cancer, which I've had both. I've had a dozen melanomas alone. You know, we didn't find that out until a couple of years ago, but I'd been following it over the years, and we basically figured out, okay, it's going to have to be this.

Speaker: 0
03:08

So we had my genome sequenced. But that's just one of hundreds of different kinds of mutations that can occur that are on a path toward creating a cancer. But the cancer can't survive if the immune system recognizes it. So eventually what happens is there's this detente that is reached between the immune system and the cancer, where the immune system basically ignores the cancer.

Speaker: 0
03:34

So Jim Allison here in Houston won the Nobel Prize back in 2018 for understanding one of these turn-off signals that the cancer uses to shut down the immune system, and for showing he could block it. His wife, Pam Sharma, ran a bunch of clinical trials at MD Anderson that showed, in fact, that this could actually turn melanoma from a five percent survival disease to fifty percent survival.

Speaker: 0
04:03

And that then created the whole immunotherapy field that the world is, taking advantage of today.

Speaker: 1
04:10

Wow. So what is cancer actually doing? Like, how do tumors develop this ability to trick the immune system? Is this something that other animals have?

Speaker: 0
04:23

Oh, yeah. Oh, yeah. So

Speaker: 1
04:24

it’s a constant?

Speaker: 0
04:25

It's a constant battle. So, for instance, there are proteins on your cell surface, and I won't get too immunologically deep about it. They're called major histocompatibility complex proteins. So, for instance, if I were to try to just randomly do a tissue transplant from me to you, it's very likely that it would be rejected.

Speaker: 0
04:39

And it's because of those MHC proteins that it's rejected. What's happening is that your cells are presenting your internal cell biology to the immune system, and it's saying, okay, you're a friend, not a foe.

Speaker: 0
04:59

So when cancer usually initiates, there are disruptions that happen and proteins are made incorrectly, etcetera. And so what these MHC proteins are doing in some cases is they’re presenting the internal damage to the body and the body is saying, oh, there’s something wrong with this cell.

Speaker: 0
05:17

We better wipe it out. We kill it. These same proteins are what the immune system uses, for instance, to go after viruses. So when you get a virus infection inside of the cell, the body has a way of chopping those proteins up inside of the cell presenting it via MHC and then the immune system attacks it.

Speaker: 0
05:34

So one of the first things that tumors actually do is they learn to turn off the MHC proteins inside of themselves. So the ability to show that I'm damaged is shut down, and the immune system doesn't go on full alert for that. But then there are other mutations, like dividing when you're not supposed to, you know, avoiding this kind of induced cell death called apoptosis, and others.

Speaker: 0
06:00

And so cancer doesn't just, like, start and then the next day you've got it. It's a progression of events. You have these precancerous lesions. You have, like, a benign tumor, which eventually becomes a metastatic tumor. And so the immune system is key at every stage of the development, because if you can reactivate the immune system in just the right way, then you can prevent the cancer from basically spreading, or from metastasizing, or from killing you, essentially.

Speaker: 1
06:35

Is there a potential, given the understanding of this, is there a potential for using this for organ transplant patients, where locally it would stop recognizing this as a foreign

Speaker: 0
06:48

organ? That’s exactly what is done. In fact, you when you get a tissue transplant or an organ transplant, you’re suppressing the immune system. The problem with that suppression is that you then put yourself at risk

Speaker: 1
07:03

Right.

Speaker: 0
07:04

Of cancer because what you’re doing is you’re turning off the immune system’s ability to to combat and go after a cancer the moment it forms. So most people who are under immune suppression are at risk both of, let’s say, virus infections, bacterial infections, but also further cancers.

Speaker: 1
07:21

So would the potential be to turn that off locally, so you could turn that off just on the specific organ?

Speaker: 0
07:28

That would be a great thing to do if we could. Right now, the only things that we have are systemic. So yeah. I mean, for instance, if you could deliver anti-immunosuppressives locally to the organ that you're transplanting, that would be great. We don't have that yet, but that would be via a form of gene therapy.

Speaker: 1
07:50

But would the problem be that if you, like, let's say you had a lung transplant, if you had a lung infection, it would be catastrophic?

Speaker: 0
07:57

Do you wanna come work in my lab? You’re you’re you’re accepted as a graduate student in the Stanford Department of Pathology. Wow.

Speaker: 1
08:04

That was easy. Yeah. Yeah. I have a few friends that have had organ transplants. Yeah. And it’s, you know, it’s very disturbing knowing that they’re so vulnerable to any kind of infection because of these medications that they have to take in order for the body to accept the transplant.

Speaker: 0
08:21

One of the problems is that there are literally hundreds of different types of immune cells. And, you know, really until recently, and frankly until a technology my lab developed over a dozen years ago, we couldn't look at all of the immune cell types all at once in a single picture.

Speaker: 0
08:38

So I came from the laboratory of Len and Lee Herzenberg, where I was a grad student at Stanford, and they had developed an instrument called the fluorescence-activated cell sorter. And that allowed you to look at three proteins at a time. And if you knew ahead of time what the cell types were that expressed the proteins you're interested in, you could look at just those three cell types.

Speaker: 0
09:00

Then I came up with a way to look at 50 or 60 proteins at a time, sort of stepping up what they had already taught me how to do. And then suddenly, that gave us the ability to look at nearly every cell type in the body, immune cell types included. And then that gave us, let's say, the raw data to build mathematical models so that we could make better predictions of what outcomes would be.

Speaker: 1
09:24

And how is that, like, what are you applying this to in terms of, like, real-world scenarios? How are you applying this?

Speaker: 0
09:31

Well, so for instance, there's a kind of leukemia called AML, acute myelogenous leukemia. It starts in the bone marrow, and it is a distorted version of a myeloid cell type. It starts as a stem cell, and that stem cell goes down a number of different paths. And depending upon the person, the disease is sufficiently different that it might follow a slightly different path toward what becomes the disease itself.

Speaker: 0
10:03

And so being able to trace the path, and to know which steps it takes along the way to become the metastatic leukemia, could only be accomplished by having enough markers that allowed us to trace everybody along the path. It's kind of like if I wanted to follow you from who you are as an egg, through development, through to who you are today, and I had snapshots every month, I'd need different markers to measure what you are as an egg versus what you are as a baby versus what you are as an adult.

Speaker: 0
10:40

And so each of those different markers in my world would be different proteins that tell me something about an adult leukemia versus a baby leukemia. And then we use something called pseudotime, which is a mathematical concept that allows us to stitch together those photographs.

Speaker: 0
10:58

I could take a random box of photos of you from an egg to who you are today, and just by hand put together the most likely path and sequence of what you were, from the earliest to the latest. But we needed the data, and we needed the means and the instruments to collect that information, so that then the math could come into play.
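The pseudotime idea Nolan describes, ordering unlabeled snapshots along a most-likely path, can be sketched in a few lines. This is a toy illustration only, not the method his lab actually uses: the function name `pseudotime_order` and the greedy nearest-neighbor walk are my own simplifications, and real tools work on graphs or minimum spanning trees over thousands of cells.

```python
import numpy as np

def pseudotime_order(X, root=0):
    """Greedy nearest-neighbor ordering: a toy version of pseudotime.

    X: (n_cells, n_markers) array of marker measurements per cell.
    Returns cell indices ordered from the root cell, at each step walking
    to the most similar unvisited cell (real tools use k-NN graphs or MSTs,
    not this greedy walk).
    """
    n = X.shape[0]
    order = [root]
    remaining = set(range(n)) - {root}
    while remaining:
        last = X[order[-1]]
        # pick the unvisited cell closest to the current end of the path
        nxt = min(remaining, key=lambda i: np.linalg.norm(X[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy data: cells sampled (then shuffled) along a 1-D differentiation axis
rng = np.random.default_rng(0)
t = rng.permutation(20)                       # hidden "true" stage of each cell
X = np.column_stack([t + rng.normal(0, 0.1, 20),
                     2 * t + rng.normal(0, 0.1, 20)])
root = int(np.argmin(t))                      # start from the earliest cell
order = pseudotime_order(X, root=root)
print([int(t[i]) for i in order])             # recovers stages 0 through 19 in order
```

The point of the toy: the algorithm never sees the hidden stage `t`, only the marker measurements, yet the ordering it recovers matches the true developmental sequence, which is exactly the "sorting the box of photos" intuition above.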

Speaker: 1
11:16

That's such a fascinating thing about human beings, the biological variability. Everybody is, we're so the same. Mhmm. Two lungs, a heart, but so different in how our bodies react to things and what happens to us, environmental factors, diet Right. stress, all sorts of different factors. And you're kind of piecing together this puzzle Right. of all these things.

Speaker: 0
11:47

But what you're doing is, you still have to pay homage to the fact that those differences exist. And so while, you know, my cancer might be the same class of, let's say, melanoma as another person's, the complexities of what allowed that cancer to arise are so different that the drugs that would work for me might not work for another person.

Speaker: 0
12:10

And so that's what basically requires us to personalize the medications in a way that gives the right drug to the right person. So I've started probably half a dozen companies and sold them to places like Roche, etcetera. Actually, my most recent company we sold to 10x Genomics, which enables them now, because of a patent I created back in 2011, to scale up the amount of information that we can collect at a time. When layered on top of what 10x Genomics already did, which is what's called single-cell genomic analysis, we could scale that up a hundredfold to get a hundredfold more information.

Speaker: 0
12:55

But the problem with that is that I can collect all that data and make an analysis of a cancer for you, but it might be a little bit different for another person. So what we have to do then is develop techniques that allow us to narrow in on what the differences might be, so that when I develop a drug for person X, it works for person X and not for person Y.

Speaker: 0
13:21

Right? The right way. So there’s a lot of personalization in medicine that is required. The the diversity that makes humanity great and that makes humanity able to survive in the face of so many challenges is that there are individual differences that one person might survive and another won’t.

Speaker: 0
13:42

It's the same thing with cancers. And it's the same thing with drugs. I mean, you know, for instance, with certain drugs, one of the first things I learned in pharmacology, way back in the day, is that there's always a benefit-to-damage ratio that you're having to deal with.

Speaker: 0
14:01

A drug has a positive outcome, but there are side effects. And so as scientists or as clinicians, we make a choice based on the statistics: who will it benefit the most? But by the way, there are all these side effects that might affect you. And, you know, overall, globally, sixty percent of people will survive.

Speaker: 0
14:24

But since I don't know anything more about your specific disease, I am by law required to give you the sixty percent drug until I know, or can distinguish, that your disease is a different subclass than the sixty percent. And that's in fact a lot of what pharmaceutical companies are doing: they're trying to marry a diagnostic to the disease itself, the disease subtype itself, so that if you can show that ninety percent of the people of this kind of subclass will survive, you have to, by law, use that diagnostic to make sure the person doesn't have the subclass before you give them the sixty percent drug.
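The trade-off being described can be made concrete with a couple of lines of arithmetic. The numbers below are hypothetical, chosen only to echo the sixty and ninety percent figures from the conversation; the 30% subclass share is invented purely for illustration.

```python
# Hypothetical numbers, for illustration only: a default drug with 60%
# survival across all comers, and a subclass (here 30% of patients) whose
# matched drug gives 90% survival when a diagnostic identifies them.
p_subclass = 0.30          # fraction of patients in the responsive subclass
surv_standard = 0.60       # survival on the default "sixty percent" drug
surv_matched = 0.90        # survival for subclass patients on the matched drug

# Without the diagnostic, everyone gets the default drug.
expected_without = surv_standard

# With the diagnostic, subclass patients get the matched drug;
# everyone else still gets the default drug.
expected_with = p_subclass * surv_matched + (1 - p_subclass) * surv_standard

print(f"without diagnostic: {expected_without:.0%}")   # 60%
print(f"with diagnostic:    {expected_with:.0%}")      # 69%
```

Even under these made-up numbers, the diagnostic raises expected survival only for the population that actually carries the subclass, which is why, as described above, the test must be run before the matched drug can replace the default one.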

Speaker: 0
15:06

Does that make sense?

Speaker: 1
15:07

Yes. Yeah. It does. The narrative has always been, over the, you know, last few decades, stay out of the sun. Mhmm. But recently, people have started saying, no, actually you need to become accustomed to the sun, and the real issue is people using sunscreen all the time and then going out and getting burned.

Speaker: 1
15:27

Obviously, your situation is very different. Yeah. Because you you have a specific gene.

Speaker: 0
15:32

And I’m Irish.

Speaker: 1
15:33

Yeah. So that's the problem. Right? Yeah. The genes of the people that lived in cloudy-ass places for hundreds

Speaker: 0
15:41

of thousands of years. And my mother, when we were kids, I mean, I'm 64 years old. So when I was a kid, you know, we'd go to the beach in Connecticut, and they'd smother me in, you know, coconut oil.

Speaker: 1
15:52

Oh, yeah. Right. Baby oil when

Speaker: 0
15:54

I was

Speaker: 1
15:54

a kid. Everybody had baby oil and everybody got barbecued.

Speaker: 0
15:57

Yeah. Plus, I worked in the fields as a kid doing, you know, farm labor.

Speaker: 1
16:02

And that’s not good.

Speaker: 0
16:02

That wasn’t good. The burning,

Speaker: 1
16:04

that's the real issue, the damage to the skin, and then it manifests itself as cancer far later in life.

Speaker: 0
16:09

Right. Right. There’s all these subtle, let’s call them, smoldering mutations that are waiting for a second or a third hit

Speaker: 1
16:18

Right.

Speaker: 0
16:18

To occur. Or, for instance, you get old enough so that your immune system is kinda going wonky, and it's no longer able to take care of something that twenty years ago it would have been able to heal Right. heal perfectly well.

Speaker: 1
16:31

That makes sense. So is there anything to this narrative that you need to be in the sun more, and just don't get burned? Is that reality?

Speaker: 0
16:42

Well, it depends on who. I mean, for someone like me, no. But there are obviously positives to the sun. I mean, vitamin D. Right. As an example. But there's also, you know, resetting your clock in the morning rather than taking melatonin at night. Go out and just, you know, use glass to shield out the ultraviolet and get some bright light. It's the UV that's the danger. It's not the light.

Speaker: 1
17:08

So for you, you don’t you don’t ever just go sit in the sun?

Speaker: 0
17:12

Not anymore. No. But I was

Speaker: 1
17:13

Because of the melanoma.

Speaker: 0
17:14

Because I was an idiot when I was a kid. I mean, I would go use tanning beds because I thought I wanted to look, you know, tan. Right. And I did tan back then, but, you know, obviously I can't anymore.

Speaker: 1
17:25

Yeah. You don’t really see those anymore, do you?

Speaker: 0
17:27

No. You do.

Speaker: 1
17:29

Maybe in, like, Seattle.

Speaker: 0
17:30

Some people do. Yeah. You know, I mean, I think there's obviously a benefit to light. I mean, I'm not saying don't go out and do it. And, you know, I think as well, there'll come a day, and I was just talking with some friends of mine at dinner last night, where, you know, maybe with things like CRISPR, I could rub a CRISPR ointment on my body.

Speaker: 0
17:52

It would fix the single point mutation in my skin, and then I could enjoy the sun again.

Speaker: 1
17:59

Is that really potentially Oh, yeah.

Speaker: 0
18:01

I think oh, yeah. No. I think How

Speaker: 1
18:03

far away are we then?

Speaker: 0
18:04

I think honestly, I mean, people always say five years is sort of like this horizon. But, no, really, I mean, I know people who are already developing systems for delivering genes, you know, RNA, to cells. I know that's a dirty word in some circles, but there are formulations of RNA that probably won't be as problematic as some of the things that maybe the COVID vaccine might have done.

Speaker: 1
18:26

Right. Yeah. RNA, now you say it and people clench.

Speaker: 0
18:29

Yes. Exactly. Yeah. I mean, your cells are full of RNA. So, I mean, you can't get away from the fact that your cells are full of RNA.

Speaker: 1
18:38

It’s just the messenger RNA.

Speaker: 0
18:40

Yeah. Yeah. But it's also the means by which they delivered it. Mhmm. Right? I mean, the means by which it was delivered was a formulation of a nucleotide that, by itself, was meant to be something called an adjuvant. An adjuvant is something which activates the immune system the way you want.

Speaker: 0
18:57

I mean, when you get a vaccination, you are co injected with something that hyperactivates the immune system to say come hither.

Speaker: 1
19:05

Right.

Speaker: 0
19:06

And most of the pain that you get from an injection is not the vaccine itself. It's the adjuvant.

Speaker: 1
19:11

Right. This episode is brought to you by OnX Hunt. Hunters, listen up. Millions of hunters use the OnX Hunt app, and here’s why. It turns your phone into a GPS that works anywhere even without cell phone service. You’ll see exactly where you are, every property line, and who owns the land.

Speaker: 1
19:29

You can connect your cellular trail cams, drop custom waypoints, dial in the wind, and a whole lot more. Whether you’re chasing elk on public, finding the back corners of your deer lease, or knocking on doors for permission, OnX Hunt gives you the knowledge and confidence to make every hunt more successful.

Speaker: 1
19:48

No more second guessing boundaries, wasting daylight, or wondering what’s over the next ridge. You’ll know every single step. The best hunters aren’t lucky. They’re prepared. This is how you get there. So before your next hunt, get OnX Hunt.

Speaker: 1
20:03

Download it today and use the code JRE for 20% off your membership at onyxhunt.com. And so the problem with this was that it turned your whole body into, like, a spike protein factory.

Speaker: 0
20:19

Yeah. Well, at least locally.

Speaker: 1
20:20

Yeah. Yeah.

Speaker: 0
20:21

No. I've read some of the work.

Speaker: 1
20:23

But not always locally. Right? Because they didn't aspirate with a lot of people?

Speaker: 0
20:29

Yeah. Yeah. They were they were I’m not even

Speaker: 1
20:31

They didn't aspirate with anybody. They didn't with the president on TV.

Speaker: 0
20:34

But if you get infected by a virus, it's all over your whole body anyway. So the question is whether the spike protein itself was problematic. And so, you know, I know I'll annoy somebody on one side or the other by saying anything around this area, and I'm not here to cause any controversy.

Speaker: 0
20:54

But, you know, your immune system works. But if you can trick your immune system into getting ahead of the game, then that’s a good thing. The question is back to this cost benefit ratio, is the benefit to the larger statistical population worth it knowing that some people are gonna be hurt by it or not?

Speaker: 0
21:19

That’s the question. So for instance, you know, back to cancer and vaccines, there’s a number of cancer vaccines that are coming down the pike that for people like me would be I mean, given that I get something chopped off of me four times a year.

Speaker: 1
21:34

Really?

Speaker: 0
21:35

Oh, yeah. You should see me. I look like I've been in a war zone. You know, some people say, oh, that's hot.

Speaker: 1
21:40

They say it's hot. Wow. Someone's into cutters? Yeah. Exactly. Exactly. So that's so fascinating. But is there another way that could potentially deal with those things other than cutting them off? Or is that the only way to remove it from your system?

Speaker: 0
21:58

Right now, it has to be cut off.

Speaker: 1
22:01

So the the issue is that once those the melanoma once these lesions are on your on your skin, they will expand.

Speaker: 0
22:09

Yes. Luckily, most of mine are what are called superficial spreading. Although one of mine was what's called a nodular, which basically dives right in. And believe it or not, my dog found it and was sniffing at it on my arm. Really? And, like, started scratching at it, and it started bleeding.

Speaker: 0
22:23

You know, I’ll show you the scar. What

Speaker: 1
22:25

kind of dog do you have?

Speaker: 0
22:26

He well, this was fifteen years ago. He was a Pomeranian, but you know, you can see the scar there.

Speaker: 1
22:31

Oh, that’s crazy.

Speaker: 0
22:33

And it wouldn't stop bleeding, and so, you know, I went in and had it looked at, and they said another week and it would have metastasized. Yeah. Wow. Yeah. He has a

Speaker: 1
22:43

What a great dog.

Speaker: 0
22:43

He was great. Yeah. He was great. But, you know, for instance, if you can catch most of these cancers early, that's what's important. So I think probably one of the most important, let's say, changes to our medical system that could be initiated would be, frankly, the use of things like MRI, not CT scans, because CT scans are known to cause cancer.

Speaker: 0
23:07

Which is so crazy.

Speaker: 1
23:08

Yeah. Like, when did we figure that out? I mean, there was,

Speaker: 0
23:11

like, a big study just published recently that said, here's what happens to people once CT scans were implemented, and you see this sudden spike. I mean, again, it's this cost-benefit ratio. If you didn't have it, certain people wouldn't know that they have a giant tumor in there. Right.

Speaker: 0
23:32

I mean, so for instance, when I had kidney cancer, I was actually at a restaurant with friends, doing a business deal actually. And I went to the bathroom and it was blood. And I said, okay, we gotta go to the emergency room right now.

Speaker: 0
23:46

And then they did a CT scan, and they see this... the vascular tree around my kidney was just a big diffuse mess, and they came in and said, you've got cancer.

Speaker: 1
23:57

Did you have to have your kidney removed?

Speaker: 0
23:59

Yeah. Yeah. Yeah. It was, but, you know, it’s okay. I’m alive.

Speaker: 1
24:03

It’s nice to have two of them.

Speaker: 0
24:04

Yes. Exactly. I'm alive, but, you know, this early detection is important. I mean, I was lucky that it hadn't metastasized yet. It was called clear cell renal cell carcinoma. But, you know, surveying the body, and these companies that are out there right now which do it, I think are really important, because even if you are young and have no suspicion you're gonna have cancer, having that baseline against which you can compare later changes is important, because I could do, for instance, a CT scan or an MRI of you.

Speaker: 0
24:45

And I'd find lots of little anomalies, and they're generally in the field called incidentalomas. These are objects that may be worrisome, but we won't know that they're worrisome, and certainly I could do a biopsy of them and poke a needle into your chest to pick out a piece of one.

Speaker: 0
25:05

But if I come back in six months and it's changed, then maybe it's something we need to go after more seriously. So getting those kinds of regular scans, I think, is probably one of the more important things that could be brought in, but not by CT scan.

Speaker: 1
25:22

Which is crazy, because we were doing them for so long. Yeah. Do they still do CT scans though? Because it's necessary It's

Speaker: 0
25:29

necessary for certain things.

Speaker: 1
25:31

Right. Which is letting people know this might cause cancer. It's just like, yikes.

Speaker: 0
25:36

Yeah. But maybe, for instance, there’d be a way, to treat someone with a drug ahead of time that would minimize the effect of the CT scan. Right? So that you know, because the CT scans are generally causing oxidative damage. And so if you could provide a local antioxidant and I’m not saying that something like this exists.

Speaker: 1
25:58

Right.

Speaker: 0
25:58

It's a bit of a naive statement. But if you could do that locally to the area that's being imaged, or to the whole body, then maybe CT scans could be lessened in their problematic outcomes.

Speaker: 1
26:11

I would say innovative and hopeful. Okay. Yes. Not naive. Yeah. I don't think it's naive, because you're recognizing the issue.

Speaker: 0
26:18

Right. Thank you.

Speaker: 1
26:19

So, well, this was also a problem with X-rays. Right? Like, X-ray technicians. I've seen some of those images of people's hands, because the technician used to have to use their own hand Yeah. to check to make sure that the X-ray was functional.

Speaker: 1
26:37

And then they're like, oh, boy.

Speaker: 0
26:38

Right. Yeah. Well, it's interesting, because what's happening with X-rays or CT scans is a fast-forward of the kind of random damage that causes cancer in the first place. And so because it's random, let me kinda go back a little bit as to why cancer happens in the first place.

Speaker: 0
26:56

So let's go way back in evolution, to the first time that there were single cells, versus the first time that two cells met each other and said it was better to join forces and cooperate rather than to divide at each other's expense. So in the process of that happening, those two cells came together, or three or four cells. They basically said, together, we're better than alone.

Speaker: 0
27:18

But there were actually social compacts and contracts that at the genetic level were being formed between all of these cells. And so as things got more and more complex, more and more complex contracts were formed to the point at which what could happen is that any one of the breaking of a complex contract could actually then initiate a cascade that becomes cancer.

Speaker: 0
27:41

So rather than thinking of cancer as a forward progression in evolution, another way to think about it is that it’s a devolution back to that core desire to divide. And so by breaking the contract, that is, by breaking the controls on the system, cancer is allowed to blossom.

Speaker: 0
28:05

So the problem is that every tissue type, whether it’s lung or brain or whatever, has a whole different ecosystem of contracts that have been formed. And so there’s no one-size-fits-all drug that will kill off all cancers, because the contracts are different. It’s not like you can bring in a lawyer and fix, you know, agricultural contracts versus whatever. Yeah.

Speaker: 0
28:31

So you have to have a flexible enough mindset, because if you get stuck in the idea that it’s a forward evolution, as opposed to a breaking of contracts, you might miss out on an opportunity to develop a therapy or a drug that would help people.

Speaker: 1
28:52

One of the things that I wanted to ask you, and I don’t even know if you know anything about this: is there a connection with IVF? Because you have to take some pretty extreme hormones. There’s a lot of stuff that women have to take. Is there a connection between that and hormone-related tumors?

Speaker: 0
29:13

I honestly don’t know. So I don’t wanna opine and have half my colleagues send me emails tomorrow scolding me.

Speaker: 1
29:20

Okay. Good. Well, I’m glad you answered that way. I was told by someone who I really trust that there is. And then we tried to Google it, and it said there’s not, but that’s not surprising.

Speaker: 0
29:31

Probably there hasn’t been the right kind of study yet. And if there is not, there should be. I mean, certainly, any hormonal imbalance is not a good thing. I mean, you imbalance the metabolism of the whole system. So, for instance, back to my specific disease: there’s all kinds of things like N-acetylcysteine, betaine, all these other drugs that are out there for longevity.

Speaker: 0
30:00

Well, if I look into the metabolism of what my cancer is, every single one of those is a disaster for

Speaker: 1
30:06

me. Accelerates.

Speaker: 0
30:07

Yeah. Yeah. You know? Not good. So because there’s all these feedback mechanisms. Right. You know, people often say, you know, scientists are not religious. There’s nothing that inspires more awe in me than knowing the complexity of the cell and knowing the complexity of life. Right.

Speaker: 0
30:30

And seeing all this feedback and mechanism and knowing that underneath that is a universe with particles, etcetera, that enabled something like us to exist. I just sit in awe of that.

Speaker: 1
30:42

Oh, yeah. It’s awe-inspiring for sure. I mean, anybody who doesn’t think it is, is not paying attention, or they’re purposely being ignorant.

Speaker: 0
30:49

Right.

Speaker: 1
30:50

Yeah. We get a lot of that though.

Speaker: 0
30:52

Oh, yeah. Well, that’s okay. You know, teachers are here to hopefully teach and not preach.

Speaker: 1
30:57

Hopefully. Yeah. Because of your specific type of cancer and your situation, like, do you have to very closely monitor your diet?

Speaker: 0
31:08

I probably shouldn’t eat as much meat as I do. Meat? Yeah. Why meat? Well, because, you know, fats. And the fats dissolve a fair number of toxins. You know, it’s not necessarily a good thing. I mean, that’s been relatively well shown, that too much meat is a problem. I’m not advocating vegetarianism. I think there’s a happy medium.

Speaker: 0
31:32

I mean, we grew up in an environment where we had both. I mean, we’re omnivores. Mhmm. And we succeeded, I think, because we’re omnivores, as a society, as a, you know, as a civilization. So, but, you know, charred meat

Speaker: 1
31:47

for

Speaker: 0
31:47

That’s the

Speaker: 1
31:48

issue, though, isn’t it? Yeah. Isn’t it burnt?

Speaker: 0
31:50

Yeah. I mean, it’s carcinogens. I mean Yeah. You know, you’re making all kinds of things. It’s a witch’s brew of nastiness that tastes good. But, you know, the reason why it tastes good is because the humans who survived learned to use fire to kill off the bacteria in rotten meat.

Speaker: 0
32:08

And so the flavor of that probably was engineered into our evolution. But, again, it’s a cost benefit.

Speaker: 1
32:18

But didn’t the cooking of it also allow us to absorb more protein?

Speaker: 0
32:23

I’m not sure about that.

Speaker: 1
32:24

I believe so.

Speaker: 0
32:25

Okay. That could be.

Speaker: 1
32:26

I believe that’s the case, that cooking meat actually allows it to be more easily absorbed by the body.

Speaker: 0
32:32

It could be broken down more readily. Yeah. But certainly, it kills bacteria. So, you know, day-old or three-day-old deer Right. You know, that you’re still eating. Yeah. So, you know, I mean, we’re not vultures that seem to have digestive systems that can handle all of that.

Speaker: 1
32:49

Mhmm. So you should eat less meat. What else? Do you avoid sugar, which seems to be a real problem with cancer?

Speaker: 0
32:57

Yeah. I avoid too much sugar. Yeah. Thanks for this, by the way.

Speaker: 1
33:02

Is that sugar free?

Speaker: 0
33:02

It’s no.

Speaker: 1
33:03

But it’s not?

Speaker: 0
33:04

No. It’s not.

Speaker: 1
33:04

Man, we have sugar-free ones.

Speaker: 0
33:07

No. Because the sugar-free ones have stuff in them that’s just as bad, xylitol and all the other ones.

Speaker: 1
33:10

What about stevia? Yeah. Is stevia bad for you?

Speaker: 0
33:14

I don’t think so. I haven’t seen anything on that. But, you know, I mean, look, like I said, I’m 64. It’s way too late. And every time that, let’s say, scientists make some grand prediction of what’s good or bad, five years later, we find and update Right. What it should have been.

Speaker: 0
33:30

I mean, I often say this, and this is true: the goal of science, or scientists, is to be maybe even wrong today, but right tomorrow. Because we’re always back-checking what the results are and what they mean in the context of a bigger picture.

Speaker: 1
33:45

I like how you say good science, because part of the problem is that ego gets attached to ideas that have already been discussed and published.

Speaker: 0
33:55

Right.

Speaker: 1
33:55

And then people are very reluctant to accept new evidence that’s contrary to that.

Speaker: 0
34:01

Yeah. I mean, as I often say, you know, in the context of something I know we’ll get to later, it’s the data off the curve which is more important than what we already predict. You know, predictions are great, but when there’s a data point off the curve, at least in my lab, that’s where we spend the most time at our lab meetings, trying to figure out why that data point is off the curve.

Speaker: 0
34:23

Is it because the machine was wrong? Was it a glitch? Or does it mean something that we need to make sense of? And that’s, of course, where all advances come from in the sciences: somebody was curious enough about what that data off the curve meant to go after it and then say, ah, okay.

Speaker: 0
34:44

Now that I’ve stepped back and can see the bigger picture, I can create a model that incorporates that data point off the curve and explains why it happened.

Speaker: 1
34:52

This is an ad for BetterHelp. The Internet is a breeding ground for misinformation. Even a simple search for ways to get rid of a headache can produce millions and millions of results, from taking pain relievers to detoxes to medication to cold compresses. It’s overwhelming. And even when you do find something that’s true, that works for other people, it might not work for you.

Speaker: 1
35:15

In some cases, it’s better to just ask a living, breathing expert. If you have a headache that won’t go away, go talk to a doctor. And if you’re struggling with your mental health, consult a credentialed therapist. You can learn a lot about yourself in therapy, like how to be kind to yourself and how to be the best version of you.

Speaker: 1
35:35

Whether you wanna learn how to better manage stress, improve your relationships, gain more confidence, or something else, it starts with therapy. Try it for yourself with BetterHelp. Millions have benefited from their services, and there’s a reason people rate it so highly. As the largest online therapy provider in the world, BetterHelp can provide access to mental health professionals with a diverse variety of expertise. Talk it out with BetterHelp.

Speaker: 1
36:04

Our listeners get 10% off their first month at betterhelp.com/jre. That’s betterhelp.com/jre. One of the reasons why I was really excited to have this conversation with you about the research that you do is that I think it’s really important to illuminate to the general public the sheer scope of the task of trying to figure out what is going on, all these different things that can go wrong and right in the human body.

Speaker: 1
36:38

Yeah. And that it requires this fucking insane amount of work. Yep. By many, many, many, many people.

Speaker: 0
36:45

And, you know, and then the amount of data that has to be collected now. And so here’s the difference: there’s data, there’s evidence, there’s conclusions and proof, and that’s an uphill climb. But above proof, the next one up is meaning. My lab has been largely responsible, at least partly responsible, for the data deluge that’s out there in the world, both in how to do tissue biopsy analysis, how to do single-cell analysis, etcetera.

Speaker: 0
37:13

And, you know, data felt good for a while. It was like this, you know, this feedback loop of, oh, wow, I can get all this data. And then suddenly you look at it and you go, oh, what the fuck does it mean? And so humanity has this habit of backing itself into a corner and then suddenly having this eureka moment that gets it out.

Speaker: 0
37:35

And so our eureka moment, about two years ago, was artificial intelligence, where suddenly I had a new ability. Normally I would collect all this data and go, okay, well, it seems myeloid suppressor cells are important here and T regulatory cells are important here. Okay. I get on the phone or send an email to whoever the local expert is, either on the Stanford campus or around the world, and try to get some information from them.

Speaker: 0
37:58

But now you’re dealing with hundreds of cell types, each of which individually has thousands of variations. And each subtle variation means something. And there’s no expert for any of that, but AI can be, at least in part, that expert. So suddenly I have 22,000,000 papers published, you know, in all the fields of biology, or several million just in immunology alone.

Speaker: 0
38:29

And AI can be the sleuth for me. It can be both the angel and the devil on my shoulder that can make sense of things in ways that I never would have been able to before, especially with agentic AI. So we, for instance, in my lab have developed an agentic AI that is basically an immunologist, a scientist in a box.

Speaker: 0
38:50

We can give it the raw data and we can pose a question in natural language, and then we say, hey, make sense of this and turn it into a network. Normally, that would have taken a graduate student, along with a couple of postdocs, months and months and months to put all together.

Speaker: 0
39:06

Now in three hours, we can get pictures of how all that data fits together in ways that I never could have done before. You know, in the beginning, it did a lot of hallucinating, which you’ve probably heard about with AI. But my answer to my colleagues is, some of my best students hallucinate. Right? Right. And so, but, you know, the human’s still in the loop.

Speaker: 0
39:30

And so with all of this together, now we can make meaning out of the data, and we can skip a lot of the intermediary steps and speed it up. And it’s just getting better. I mean, we’ve put in a couple of papers now. So, for instance, one of my recent specialties is what’s called the tumor-immune interface.

Speaker: 0
39:51

So you have the tumor, and you have the immune system, which is coalescing nearby. And then in some cases, the tumor creates a boundary, a barrier, between itself and the immune system, where there might be certain kinds of cells by which the tumor has told the immune system: ignore us, we’re not here.

Speaker: 0
40:13

But here’s what we now can do. On the other side, when you look at, let’s say, complex patient populations, you find these things called tertiary lymphoid structures. So your body has 220 or so lymph nodes. Okay? And lymph nodes are where the immune system makes decisions, let’s say.

Speaker: 0
40:39

It turns out that in the middle of tumors, the body has evolved a mechanism to create what essentially looks like a lymphoid structure in the middle of the tumor. It’s sort of a forward camp of immune cells, and the more of those you see in a tumor, the better your outcome as a patient will be.

Speaker: 0
40:58

And so we used a cohort of colorectal, basically colon cancer patients, where we looked at hundreds of biopsies. And we did that pseudotime analysis, where we looked for mature tertiary lymphoid structures, and then we looked for immature, slightly less mature, even less mature, etcetera.

Speaker: 0
41:24

And we were able to backtrack to the cell types which need to come together to form the more mature ones. What use is that? It’s a nice paper. But it also now tells us what we might do to create more of these in the tumor. Because we already know, from multiple kinds of tumor types now, that the more of these tertiary lymphoid structures you have, the better your outcome with chemotherapy will be.
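
The maturity-ordering idea described here can be sketched as a toy script: score each tertiary lymphoid structure (TLS) on a crude maturity axis and read off the cell composition at the immature end. This is only an illustration of the ordering step, not the lab’s actual pseudotime analysis; the marker names, weights, and sample values are all invented for the sketch.

```python
# Toy illustration: order tertiary lymphoid structures (TLS) from least
# to most mature, a crude stand-in for a pseudotime axis. All markers,
# weights, and measurements below are hypothetical.

TLS_BIOPSIES = [
    {"id": "A", "b_cells": 0.9, "t_cells": 0.8, "follicular_dc": 0.7},
    {"id": "B", "b_cells": 0.2, "t_cells": 0.5, "follicular_dc": 0.0},
    {"id": "C", "b_cells": 0.5, "t_cells": 0.6, "follicular_dc": 0.2},
]

def maturity_score(tls):
    # Hypothetical weighting: mature TLS have germinal-center-like
    # features, so follicular dendritic cells and B cells count most.
    return (0.5 * tls["follicular_dc"]
            + 0.3 * tls["b_cells"]
            + 0.2 * tls["t_cells"])

# Sort structures along the maturity axis (least mature first).
ordered = sorted(TLS_BIOPSIES, key=maturity_score)

# The least mature structure hints at which cell types come together
# first, which is the "backtracking" step described above.
earliest = ordered[0]
```

With real data, the scoring function would come from trajectory-inference methods rather than hand-picked weights, but the backtracking logic is the same: sort along the inferred axis, then inspect the composition of the earliest structures.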

Speaker: 0
41:51

So it might be, for instance, that once we know that you have a disease like this, we could give you some kind of therapy, a virus or what have you, that goes and homes to the tumor and seeds the beginnings of these initiators, with the cytokines that are produced that are necessary for initiating the formation of these objects.

Speaker: 0
42:12

And so there’s a huge benefit to that, but we never would have found those, in my lab at least, without the AI. Because it Wow. It basically did the work for us. That’s fascinating. Yeah.

Speaker: 1
42:27

Now, are you using, like, a standard large language model? Or do you have, like, a specific structure that’s built that interfaces with the large language model?

Speaker: 0
42:37

Correct. So we can use pretty much any of the LLMs, but right now we find that OpenAI is the best for us, at least. And then we create an agentic overlay. Basically, what’s called, you probably know, chain of thought Mhmm. Which is a series of questions. So how we taught it was, we basically came up with, here’s a hundred kinds of questions a scientist would ask about the immune system.

Speaker: 0
43:02

And then we tell ChatGPT, now create a thousand questions like this. So, you know, it’s artificial data, or artificial questions. We curate those to make sure that they’re good. Then we do a hundred hypotheses, and we create thousands of types of hypotheses, etcetera, and the same for tests that you might run.

Speaker: 0
43:27

So now, from A to Z, we have an agentic AI: you give it raw data, it knows what to do with the data, it then generates hypotheses for you, and then it literally tells you the kinds of experiments you should do next to prove or disprove the hypothesis from the raw data.
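
The seed-expand-curate loop described here (human-written seed questions, LLM-generated variants, human curation, then a data-to-hypothesis-to-experiment chain) can be sketched roughly as follows. This is a structural sketch only: `ask_llm` is a stub standing in for any real LLM API call, and every function name is a hypothetical placeholder, not the lab’s actual code.

```python
# Sketch of the agentic loop described above: seed items are expanded by
# an LLM, curated, then chained from raw data to suggested experiments.

def ask_llm(prompt: str) -> list[str]:
    # Stub for an LLM call (OpenAI or any other backend). A real
    # implementation would send the prompt to an API; here we just
    # return deterministic placeholder variants so the flow is runnable.
    return [f"{prompt} :: variant {i}" for i in range(3)]

def expand_and_curate(seeds, keep=lambda item: True):
    """Seed items -> LLM-generated variants -> human curation filter."""
    generated = []
    for seed in seeds:
        generated.extend(ask_llm(f"Generate questions like: {seed}"))
    # 'keep' models the human curation step ("make sure they're good").
    return [g for g in generated if keep(g)]

def agentic_pipeline(raw_data, seed_questions):
    """Raw data in; questions, hypotheses, suggested experiments out."""
    questions = expand_and_curate(seed_questions)
    hypotheses = [f"Hypothesis for: {q}" for q in questions]
    experiments = [f"Experiment to test: {h}" for h in hypotheses]
    return {"questions": questions,
            "hypotheses": hypotheses,
            "experiments": experiments}

result = agentic_pipeline(
    raw_data=[],  # e.g. single-cell measurements, omitted in this sketch
    seed_questions=["What suppresses T cells at the tumor boundary?"],
)
```

In practice each stage would itself be an LLM call with curated few-shot examples (the hundred seed questions, the thousand generated ones), but the chain-of-stages structure is the point of the sketch.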

Speaker: 1
43:47

It’s a genius in the lab with you.

Speaker: 0
43:49

Exactly.

Speaker: 1
43:50

Is OpenAI learning from this agentic AI? Oh, yeah. So there’s a mutually beneficial relationship. Yeah.

Speaker: 0
43:58

I mean, we’re not working with them directly.

Speaker: 1
44:00

I mean, you use it, and because you use it with your AI

Speaker: 0
44:05

Right.

Speaker: 1
44:05

It’s it’s benefiting from it.

Speaker: 0
44:06

And we first thought to turn it into a company, because that’s kind of one of the things we do in my lab. Because I’ve always thought that it’s important to give back to the taxpayer the money that they’ve invested in us, and the best way to do that is commercialization. I’m totally, you know, unapologetic about that, even though that got me in a lot of trouble at Stanford in the early days, when, you know, commercialization was evil, even at Stanford.

Speaker: 0
44:34

And so I think that that’s an important process, because scientists are good at asking the questions and coming up with solutions, but scientists aren’t the best at commercializing it and turning it into a product that can be used, or testing it, you know, in large communities.

Speaker: 0
44:51

So the AI that we developed, we thought, okay. Well, maybe we can do this. We thought, you know what? AI is moving so fast. Why don’t we just give this to the community? Why don’t we open source this?

Speaker: 0
45:04

We can use it for maybe specific targeted purposes, but we’re basically gonna publish the whole thing on GitHub to let other people use it. Because we’ve seen other people make claims about stuff that they’ve already made, and it’s like, oh, ours is better. So why don’t we just put it on GitHub and let people learn from it?

Speaker: 1
45:20

The resistance to the commercialization, what was the initial argument?

Speaker: 0
45:25

So back when I was a grad student in the eighties, basic research, as opposed to translational research, was considered the height of intellectual desire. Right? Basic research, and we’re not here to make money. We’re here to discover things. And that’s important.

Speaker: 0
45:48

And nearly every major discovery and every major therapy in the world came from basic research. But then, you know, there were limits to how much money you could give to basic research, and there was a desire at a certain point to say, hey, are you gonna do anything about this?

Speaker: 1
46:04

You know, are

Speaker: 0
46:05

you gonna make anything? So translational research became a push. There was this guy at Stanford by the name of Paul Berg, who won the Nobel Prize for recombinant DNA way back in the day. And Paul came up with this concept, you know, bench to bedside, meaning that we don’t have to be either-or.

Speaker: 0
46:29

We can be both, and Stanford wanted to be an enabler, within the medical school, of both the basic research, which we were great at, as well as bringing it directly to the patients as well. So, to link clinicians and the desires of clinicians with the basic researchers. I mean, most scientists would be happy just to study anything.

Speaker: 0
46:51

You know, just point me at something, and I’ll be happy if I can get interested in it. And we’re never happier than when somebody recognizes the value of what we do. Right. But basic research was sort of the height, and there was a push against anybody trying to commercialize.

Speaker: 0
47:10

So when I started as an assistant professor, well, I started as a grad student. I went to MIT to work with this guy, David Baltimore, who won the Nobel for reverse transcriptase. And then I wanted to come straight back to Stanford, because I already felt that it was a positive environment for commercialization.

Speaker: 0
47:26

My former bosses, Len and Lee Herzenberg, had two of the biggest patents at Stanford. They had the fluorescence-activated cell sorter, and then what are called humanized antibodies, which brought in hundreds and hundreds of millions of dollars to Stanford. And they personally gave most of their own money away.

Speaker: 0
47:44

They kept enough to survive, but then they gave most of the money away, and they ran their own lab off of a lot of that money. But so I had learned from them how to still do basic research but commercialize on the side. And so I wanted to bring that back. But in the department that I came into, the department of pharmacology at the time, I was warned by many professors: don’t commercialize that.

Speaker: 0
48:10

And I ignored them, and I went and started a company that went public on Nasdaq. And many of those same professors came back to me years later, sitting in my office, asking me how to start a company.

Speaker: 1
48:21

Why did you ignore them? Was it just a courageous decision? Was it instinctual?

Speaker: 0
48:28

It was just instinct, because I couldn’t see the NIH funding what I wanted to do. Mhmm. So I had developed a way, and this will sound scary, but I developed a way to use retroviruses, and make libraries of retroviruses, to reverse the process of evolution, in a way that, rather than viruses hurting the cell, I set it up so that viruses would help the cell.

Speaker: 0
48:51

And once they helped the cell, I would figure out what they did. And so we sold hundreds of millions of dollars of targets that way, using retroviral libraries to basically find targets and use some of the benefits of evolution, but to our advantage.

Speaker: 1
49:15

Just the concept of reversing evolution is fascinating, because there are so many ethical implications that come with it. But if you didn’t have any of those

Speaker: 0
49:25

Yeah.

Speaker: 1
49:26

And you could do that large scale

Speaker: 0
49:28

Well, I had developed in David’s lab, along with this guy Warren Pear, a means. It’s called the 293T retroviral producer system. It was a way to make large numbers of these viruses very quickly. It really followed on the work of this guy, Richard Mulligan, who’d also been a postdoc with David Baltimore, and who developed what was called the 3T3-based retroviral production system.

Speaker: 0
49:51

And he developed that in Paul Berg’s lab at Stanford. So there’s a lot of sort of, you know, interbreeding here. But the problem with that was it took three months. So I had brought with me a cell line called 293 that I introduced to the lab, and said, hey, maybe we could use this to make viruses quickly. I won’t go into the details of why, but we could do it in three days rather than three months.

Speaker: 0
50:13

And so now, I mean, tens of thousands of labs use that worldwide. It probably generates the most money for me every year over any of my other inventions, just because Stanford, rather than patenting it, licenses it. And licenses are forever, whereas patents have a seventeen-year lifespan. So Stanford made a good choice there.

Speaker: 1
50:35

So do you think it was just, like, an academic ideal? Like, we shouldn’t be focusing on money, we should be focusing on the work Yes. And they missed the forest for the trees?

Speaker: 0
50:44

But then people, I mean, they eventually learned. You know? I wouldn’t say that it’s the way that people think anymore, but there’s still a little bit of it. I mean, you shouldn’t walk into the lab thinking, I’m here to make money.

Speaker: 1
50:59

That’s what they’re worried about.

Speaker: 0
51:00

Yeah. Right? Right.

Speaker: 1
51:01

That’s it. They’re worried about the bastardization of it all. Right.

Speaker: 0
51:04

And so Stanford in the early days set up very clear lines: once you start a company and you license the patent or the idea to the company, you can still be involved with the company, but there’s not a pipeline of technology from your laboratory to that company. So they set up, you know, an oversight board for each of these licenses to make sure that, you know, the students are not being abused.

Speaker: 1
51:33

Mhmm.

Speaker: 0
51:33

You know, because you don’t wanna be, you know, covertly getting your students to do something that you’re then gonna walk out the back door and hand over to a company.

Speaker: 1
51:43

Patent it.

Speaker: 0
51:44

Yeah. You know, it’s so interesting that there’s often a lot of worry that that’s gonna happen. But, frankly, more often it’s the case that the company doesn’t need the inventor anymore. In fact, I can’t tell you the number of times that once the company is set up, they want nothing more to do with me, because they have their own thing to do.

Speaker: 0
52:07

They don’t want the crazy academic coming in and vetoing their ideas. I mean, there’s places for that, where people like, you know, Steve Jobs need to hold on to the image of what they want the company to be. As opposed to me, I would probably be fired from a company within a week, because I just don’t like people telling me what to do.

Speaker: 0
52:31

That’s just a fact.

Speaker: 1
52:34

Yeah. So where you’re at right now with this cancer research, when will this be applied in real-world scenarios?

Speaker: 0
52:50

It already is. It is. Already is. I mean, you know, look at who just won the Nobel Prize last year: David Baker, along with the folks at Google DeepMind, with the ability to predict protein structure, etcetera. Once you know the protein structure, now you can predict molecules that might fit into it.

Speaker: 0
53:06

So go back to the stuff that I’m trying to do, looking at the complexities of the dance of how the immune system talks, or doesn’t, to cancer. You know, if we can find a particular place that might be an Achilles’ heel along the way towards that shutting down, one that is different, for instance, from what the current drugs target, well, maybe we should aim at that.

Speaker: 0
53:30

There’s so many more opportunities that are suddenly opening up in front of us, because the AI and the data is letting us look at a network of how the system is working. I mean, before, it used to be you’d look at a computer chip and you’d see just a computer chip with a few wires.

Speaker: 0
53:50

But imagine now that you, as a scientist, have a microscope that’s looking at the complexities of the wiring diagram that’s connecting this resistor to that capacitor to that diode to this transistor. That’s where we are now. And so now suddenly we can say, well, I don’t wanna do that because it’ll kill the chip.

Speaker: 0
54:10

But the chip is malfunctioning, so let me put a little bit of pressure here and there, and now I can reactivate the immune system, or the chip, to work in the right way again.

Speaker: 1
54:21

So when you’re talking about things like your particular issue with melanoma, when you’re talking about CRISPR potentially developing some sort of a topical solution that you could put on that would fix whatever issue you have: is this something that this AI that you developed, or this overlay of the AI, would actually assist CRISPR in figuring out how to create something like this?

Speaker: 0
54:48

Yeah. Because maybe it’s not one place I need to press, but two or three at the same time. Right. And so when you’re talking about a complex feedback network, I mean, well, we’re in Texas, so people know oil refineries. You know, maybe you need to turn this valve here a little bit, and that valve there, and that one there, to make everything work just right, because something’s wrong over there.

Speaker: 1
55:09

Mhmm.

Speaker: 0
55:09

And so this is really where AI has, let’s say, the omniscient view that no human can have. And that’s what excites me about it, because I’m limited in how much I can keep in my mind, or know, at any one time.

Speaker: 1
55:26

Right.

Speaker: 0
55:27

But with the right question, the prompt, the prompt engineering, and then with the right backbone structure behind the scenes that agentic AI is now, I have the ability to ask the questions and get answers in near real time. And so, you know, I wish I was 30 years old again, because I would move into this area so fast. I mean, I can already see, with the work that we’re doing, dozens of potential new target opportunities that last year didn’t exist at all.

Speaker: 1
56:01

Well, I got good news for you. With AI and with CRISPR, you might be 30 again.

Speaker: 0
56:06

May oh, I would love it. I would love it.

Speaker: 1
56:08

I think that’s on the

Speaker: 0
56:09

I would love it.

Speaker: 1
56:10

I think that’s on the menu in about two or three decades.

Speaker: 0
56:13

Mhmm. I hope it

Speaker: 1
56:15

will survive. Like, I’m just being Yeah. No. Realistic. Realistic. I don’t even know if I’m being realistic. Don’t give false hope. Well, yeah. But don’t give false hope. But I mean, with the exponential discoveries, the exponential increase in technological evolution just that we’ve seen in our lifetimes.

Speaker: 1
56:31

And then I think AI is some new thing that is gonna throw a giant monkey wrench into the gears of our understanding of how quickly technology evolves.

Speaker: 0
56:42

Well, look at Neuralink as an example, Elon Musk’s stuff. You know, the woman now who can think her thoughts and make stuff happen.

Speaker: 1
56:50

Mhmm.

Speaker: 0
56:51

Because she’s otherwise paralyzed.

Speaker: 1
56:53

Right. Right?

Speaker: 0
56:54

I think it was Neuralink that just showed some of these results. So fast-forward: I mean, we’re already in an exponential increase in what it is that we’re gonna be able to accomplish, and AI will help us accomplish some of these things faster. I can see a time where, you know, I could maybe apply something.

Speaker: 0
57:10

I don’t necessarily want a surgical implant, but maybe some sort of net over my head that allows me to think through these problems. And the AI becomes an adjunct to my thought processes, not only in what it is that I think, but maybe it even provides information back to me, back into my system Mhmm.

Speaker: 0
57:30

Directly, without having to go through the ears, so that I can much more quickly come to conclusions. Now, there’s all kinds of apocalyptic scenarios you can imagine with that as well. But I’m an optimist at heart, perhaps, again, naively so. Me too. But I prefer that kind of an outcome, because if you’re not an optimist, then there’ll be no progress, because all you’ll do is worry about disaster.

Speaker: 1
57:56

Yes. That’s a good point. But, also, realistically, we might be giving birth to a new life form.

Speaker: 0
58:02

Yes. And I think we are.

Speaker: 1
58:04

A superior one. And,

Speaker: 0
58:07

you know, I welcome the day of our AI overlords running the government, hopefully in an unbiased way.

Speaker: 1
58:14

I’ve said that too, and people get horrified because they’re like, well, people are gonna be programming AI.

Speaker: 0
58:20

Do you read, up to a point? Are you a sci-fi fan? Do you know the work of Iain Banks, the Culture series, No. Or Neal Asher, the Polity universe, as he calls it? So, basically, both of them postulate a future where AI more or less benignly rules humanity.

Speaker: 0
58:42

When did they write this stuff? Oh, probably ten, fifteen years ago, but Neal Asher still has stuff coming out regularly. Iain Banks unfortunately died of cancer about ten years ago, a Scottish writer. Neal Asher is still alive and writes regularly, and they’re both great, full of ideas. I’ll check it out. And the AIs are also hilarious.

Speaker: 0
59:04

I mean, the AIs get into their own things along the way, and some of them are dark and rogue. And so they’re a lot of fun to read. And Iain Banks especially is hilarious in his writing style. You would love it.

Speaker: 1
59:22

So the idea of a benign AI or a benevolent AI Mhmm. Ruling over us, I think people are horrified by that. But, at the same time, we’re constantly terrified by human corruption, which is ubiquitous.

Speaker: 0
59:37

Yes.

Speaker: 1
59:38

And ubiquitous in America, where we’re supposed to be the torch bearer for the greatest experiment in self government the world has ever seen.

Speaker: 0
59:49

Mhmm.

Speaker: 1
59:49

This is us. Yep. And we’re corrupt as

Speaker: 0
59:52

fuck. Exactly.

Speaker: 1
59:54

Because it’s humans. Because humans are kind of gross in a lot of ways Right. At least some of us.

Speaker: 0
01:00:00

That’s because we live in a scarcity society. Right. And if AI enables a post scarcity, maybe we have nothing to do but sit around and try out various new drugs. Yeah.

Speaker: 1
01:00:12

Well, this is where we get into socialism, because a lot of people think that one of the reasons why we’re in a scarcity society is because small groups of people have gathered up most of the resources Right. And are in constant control of them. Right. Especially when you deal with resources that are the Earth’s resources. Right. Like, who are you Right.

Speaker: 1
01:00:28

To be sucking the blood of the Earth out and selling it for $100 a barrel? Right.

Speaker: 0
01:00:32

Right. Don’t get me started. Don’t get

Speaker: 1
01:00:34

me started either.

Speaker: 0
01:00:36

Yeah. No. But, I mean, again, my optimism is that, you know, with enough push and pull, AI will enable us to move towards a post scarcity environment.

Speaker: 1
01:00:52

I think so too. And I think in doing so, it’ll expose vampires because the resistance to Yes. Exposing this is going to be fantastic. Right. Which is gonna be very interesting to watch because they have no choice but to be transparent.

Speaker: 0
01:01:07

And they have no choice but to start using AI. So you’re gonna see AI is going to be inculcating itself across society in various ways where it becomes indispensable.

Speaker: 1
01:01:17

Mhmm. And

Speaker: 0
01:01:17

then it will start to move up the food chain, where eventually even the CEO, who’s probably, you know, the psychopath in chief. Right. Our CEOs. We know that studies have shown that Mhmm. There are more psychopathic tendencies in leaders than there are in followers.

Speaker: 1
01:01:33

And you know about corporate environments because of just selling inventions.

Speaker: 0
01:01:38

Yes. That’s real. Oh, it’s Yeah.

Speaker: 1
01:01:41

It’s real and it’s weird. It’s weird when you encounter them. When you encounter, like, complete sociopathic CEOs.

Speaker: 0
01:01:48

But but look at how I mean, I’ll probably get in trouble for saying this, but I don’t care. This is the Joe Rogan

Speaker: 1
01:01:53

show where, you know You’re

Speaker: 0
01:01:55

probably in trouble just for being here. Yeah. Oh, I already am. That’s okay. I don’t care. So, you know, imagine two tribes. One tribe is relatively, you know, civilized and just wants to live in harmony with its environment. Another has a psychopathic leader who can enrage his followers, the tribespeople, to attack the other one.

Speaker: 0
01:02:20

But there’s a gene set that makes a person, you know, psychopathic, and also a gene set that probably makes somebody more likely to be a follower. Well, which genes survive? Right? We know. Right? And, you know, when those tribes were separated and independent, it was perfectly fine.

Speaker: 0
01:02:40

But now you live in an environment where we don’t know where the edge of one tribe begins and another ends. And suddenly, you have this environment where psychopathic individuals can move freely and aren’t obvious. Right. Right? Now, again, I’m sure there’s some social scientist who will send me a boatload of emails saying how stupid that idea is.

Speaker: 0
01:03:03

But I don’t

Speaker: 1
01:03:04

think it is stupid. But I think also, when you’re dealing with office environments and the culture of a specific corporation, humans have an ability to act like they’re supposed to act in that world. And it makes it very difficult to discern who’s a sociopath Right. Because you’re all kind of following an act. Right.

Speaker: 0
01:03:26

Yes. There are the rules that you’re supposed to follow, and then there’s the edge of the rules. Now, I’ve lived at the edge of the rules. I mean, if I had followed the rules as told to me by the chairman of my first department, then I wouldn’t be here today. So I ignored him, and I basically got permission from the deans to do what I did, and they basically overruled the chairman.

Speaker: 0
01:03:55

But that’s only because I dared to do it. Yeah. Because you have to believe in the value of what you’re trying to do.

Speaker: 1
01:04:04

Right. Well, see, this is the problem that I have with corporations, because I think as a structure, when you have something that has an obligation to its shareholders

Speaker: 0
01:04:15

Yeah.

Speaker: 1
01:04:15

To consistently make more money

Speaker: 0
01:04:18

Mhmm.

Speaker: 1
01:04:18

Every quarter, every year, constantly, you’re in a constant growth mode, then you have to do whatever it takes. Yes. Like, you have to survive. If you wanna survive as a CEO, we don’t want some fucking shithead

Speaker: 0
01:04:32

Mhmm.

Speaker: 1
01:04:32

Ruining our stock profile

Speaker: 0
01:04:34

Right.

Speaker: 1
01:04:34

Our portfolio. Get to it, bro. Right. Get shit done. And if you wanna survive and succeed as a CEO, it encourages sociopathy.

Speaker: 0
01:04:43

The stock market, as valuable as it is, is the great whitewashing and money laundering system that allows you to separate your morals from what it is that the stock market is doing Right. To the people.

Speaker: 1
01:04:56

And if you’re part of a corporation, there’s this diffusion of responsibility, because the whole machine might be doing evil, but I’m a good guy. I just work in this department.

Speaker: 0
01:05:06

I’m an unapologetic capitalist, you know, unlike many of my colleagues Good for you. At Stanford. I mean, you do it because it’s the best thing for now. But, you know, I hope to live in a world where there will be this kind of post scarcity environment, where we do let AI do a lot of the stuff that would otherwise be the place where corruption manipulates the systems.

Speaker: 1
01:05:32

My only fear with AI really is automation and the complete removal of a gigantic swath of the American workforce Yes. And the global workforce. Yeah. That scares the shit out of me.

Speaker: 0
01:05:42

That’s coming.

Speaker: 1
01:05:43

Yes. That’s why it scares the shit out of me. Because I think it’s inevitable, and I just don’t think any solution other than universal basic income is gonna remedy that. And even that, the problem I have with that is it goes against human nature. And that’s a problem. And it removes people’s identity, removes their sense of worth.

Speaker: 1
01:06:01

Mhmm.

Speaker: 0
01:06:03

Yeah. I agree. No. I’m in some ways happy that I’m 64 years old, so I’m not gonna have to deal with some of the problems.

Speaker: 1
01:06:11

I think you’re gonna have to deal with

Speaker: 0
01:06:12

it, dude. I think you’re gonna live. Well, thank you. Yeah. No. I know.

Speaker: 1
01:06:16

Also, you’re privy to a lot of information, and you’re gonna know when things are really valuable and working. Yeah. When you think of the potential for AI, there’s a battle. I think there’s a real problem with AI in terms of military objectives.

Speaker: 0
01:06:41

Mhmm. It’s

Speaker: 1
01:06:41

a real problem because it’s not going to make moral and ethical Right. Decisions. It’s just gonna say, like, well, the decision that they have Right.

Speaker: 0
01:06:49

Clear answers. It’s programmed to do this. Yeah. If

Speaker: 1
01:06:52

you want me to succeed, I’ll just kill everybody there, and then you’ll have the land. You can get minerals out of it.

Speaker: 0
01:06:57

Right.

Speaker: 1
01:06:58

Yeah. That scares the shit out of me.

Speaker: 0
01:07:00

You know, I think it should. And I don’t know what the answer is, but there’s plenty of people working in the area. I mean, I try to keep to the positive aspects of what I think AI can do in science. Like, I mean, for instance, it’s enabled me to take my lab from 30 people down to six. Right?

Speaker: 0
01:07:23

It’s actually already reduced the workforce in my own lab, because I don’t need to produce any more data anymore. I need to make meaning of the data.

Speaker: 1
01:07:34

Right. Well, I think every invention that’s been truly groundbreaking throughout human history has scared people, and they’ve worried about the potential negative side effects, including the printing press. Right?

Speaker: 0
01:07:46

Mhmm.

Speaker: 1
01:07:47

Right. There were a lot of people in the beginning that said, this should not be a thing. This is terrible. This is gonna ruin society. People thought books were going to ruin things. Right.

Speaker: 0
01:07:57

There’s a

Speaker: 1
01:07:57

lot of people that thought writing was going to ruin your memory. You shouldn’t write. Oh, really?

Speaker: 0
01:08:03

I didn’t know this.

Speaker: 1
01:08:04

Some crazy thoughts that people had Mhmm. In terms of things that turned out to be incredibly beneficial, but they looked at the downside of it and went, this could ruin us all.

Speaker: 0
01:08:14

Well, like, you know, I mean, we know about these glasses and AI and other things that would be sort of omniscient of your environment

Speaker: 1
01:08:25

Mhmm.

Speaker: 0
01:08:25

And therefore allow you to remember, you know, where did I leave my keys Right. Today. Right. Right.

Speaker: 1
01:08:30

Let me rewind Let me rewind.

Speaker: 0
01:08:31

And then my personal hard drive. I would want that, but I don’t want it uploaded somewhere.

Speaker: 1
01:08:37

You don’t want anybody in control of it and then offering you ads for things

Speaker: 0
01:08:41

Right.

Speaker: 1
01:08:41

You know? Right. You know, maybe you have a thought, like, boy, wouldn’t a Ho Ho be nice right now.

Speaker: 0
01:08:47

Right. Yeah.

Speaker: 1
01:08:48

You know, and then, like, why don’t you buy some Ho Hos? Right. They’re on sale right now.

Speaker: 0
01:08:51

But I think what’s interesting about AI is, you know, we see it as a tool, as opposed to, actually, pretty soon it will be a colleague, and then pretty soon it will be an entity Yeah. That maybe has rights. And we already see people talking about it, saying, well, does AI have consciousness? Right. Right?

Speaker: 0
01:09:11

Whether it has consciousness in terms of the consciousness that some people think about, you know, embodied in space time, as opposed to something thinking and looking like it’s conscious, is almost irrelevant to me. I’m looking for a partner that I can interact with and work with, or that can help me. Mhmm.

Speaker: 0
01:09:31

So whether it’s conscious or not, or whether it just acts like it’s conscious, doesn’t matter so much to me as whether or not I can use it and work with it. You know, I’m kind of an introvert, as it turns out. I would love to have somebody that I can talk to endlessly about just what it is that I’m interested in, as opposed to having to deal with small talk at a party.

Speaker: 1
01:09:52

Yeah. No. I get it. I get it. When you think about the evolution of this stuff, one of the things that kind of freaks me out is it seems like integration is our only option for survival.

Speaker: 0
01:10:06

Mhmm.

Speaker: 1
01:10:06

And that what we’re looking at right now, when we see just a normal biological person like you or I, without any sort of electronic interface that’s permanently a part of us, I think that is going to be as weird as someone today who doesn’t have a cell phone.

Speaker: 0
01:10:25

Yep.

Speaker: 1
01:10:25

I agree. And I think that’s a really

Speaker: 0
01:10:27

It’s coming.

Speaker: 1
01:10:28

Yeah. The cell phone is like the best example now. Like, Elon has famously said, we’re already cyborgs. We just carry it with us.

Speaker: 0
01:10:34

Right.

Speaker: 1
01:10:35

And eventually, it

Speaker: 0
01:10:36

It’ll be way more integrated.

Speaker: 1
01:10:38

Yeah. This is super inefficient, to actually have to go look things up and Mhmm. Use your thumbs and type stuff up or Mhmm. And even talking to it and asking a question and waiting for the response. Mhmm. That’s inefficient in comparison to a human neural interface that allows you to instantaneously access large language models Right. Like that. Right.

Speaker: 1
01:10:58

Not only that, but then, why do we have a hundred, I mean, how many different fucking languages do we have? I don’t even know. Thousands?

Speaker: 0
01:11:06

Yeah. I don’t know. And dialects. Yeah. And and all of that.

Speaker: 1
01:11:08

How about one universal language that everybody with a chip gets? Mhmm. And then, boy Mhmm. Boy, do we have a soup of ideas flowing around and no problem with language barriers, no problem with cultural barriers.

Speaker: 0
01:11:22

But then do you have a problem with the edge of who you are versus who the other person is?

Speaker: 1
01:11:27

I don’t think that I think that goes away. Yeah. I think that goes away and we become a hive mind. Mhmm. I think that’s Yeah.

Speaker: 0
01:11:34

That’s what I was getting at.

Speaker: 1
01:11:34

Yeah. I think that’s ultimately the evolution of human beings. And, like, I know you’ve done a lot of work with UAPs and the phenomenon, and I think you’ve done some really fantastic work, and you’re very objective in your analysis of what this whole situation is. When I look at artificial intelligence, and I look at this thing that’s clearly taking place right now, and I see what human beings are like in comparison to what they used to be like, especially when you look at ancient hominids.

Speaker: 1
01:12:08

The alien archetype, this thing that everybody sees supposedly, or one of the many different ones Right. It kind of looks like what we seem to be going in the direction of being. Right. Yeah.

Speaker: 0
01:12:24

I mean Which is one of

Speaker: 1
01:12:25

the reasons why I find it so odd.

Speaker: 0
01:12:28

So if you just for a moment take UAP and aliens, or ET or interdimensionals or whatever it is you wanna call them, out of the question, and fast forward what humanity is going to do Right. In a thousand years, and our ability to expand into the local galaxy. We’re not gonna go as ourselves.

Speaker: 0
01:12:49

We’re gonna go as AI conjoined entities like

Speaker: 1
01:12:54

An avatar.

Speaker: 0
01:12:55

And yeah. And so when you go somewhere, let’s say we don’t have warp drive, you’re not gonna send yourself. You’re gonna send an AI intermediary who’s gonna establish humanity or whatever it is we think humanity will be in a thousand or five thousand years in that local environment.

Speaker: 0
01:13:11

And so I think, to the extent that UAP are here today, it’s somebody else’s civilization’s version of just this.

Speaker: 1
01:13:21

Mhmm.

Speaker: 0
01:13:21

And that you wouldn’t, the principals behind whatever this is that we might allegedly, etcetera, be dealing with aren’t the thing that’s gonna show up. You know? So to the extent that Neil deGrasse Tyson is right about anything, the person who gets on the ship at the beginning, or whatever it is that sends it off, is not the same thing that gets off on the other side.

Speaker: 0
01:13:43

But you’re gonna send missionaries or intermediaries or probes or whatever. And then if you’re gonna interact with the locals, you’re going to make something that looks more or less like the locals, rather than whatever it was that you were a million years ago.

Speaker: 0
01:14:01

Does that make sense?

Speaker: 1
01:14:03

Right. I get what you’re saying. So you make something that looks like the locals so that they’re more likely to accept that it’s a real thing?

Speaker: 0
01:14:11

That it’s a real thing. But you’re not gonna make something that looks like a human, because then you’d mistake it for a human. Right. But you might make something that looks more or less enough like a human, but enough like an alien that you’re gonna recognize it as an alien.

Speaker: 0
01:14:24

And, again, I’m just speculating. So Right. So, Daily Mail, don’t, you know, put an article out tomorrow.

Speaker: 1
01:14:30

Oh, they’re gonna do

Speaker: 0
01:14:30

it anyway. I know they’re gonna do it anyway. Some of the stuff that I’m quoted as supposedly saying is ridiculous, but Yeah.

Speaker: 1
01:14:37

They got me too. It’s

Speaker: 0
01:14:38

They get everybody. It’s the nature of it.

Speaker: 1
01:14:40

How did you even get involved in this? Let’s bring it to that. Like, so your what was your initial introduction to this? Did you have any interest in the idea of UAPs or UFOs?

Speaker: 0
01:14:54

I mean, I had a general interest. And so once YouTube started becoming a thing and, you know, you’re clicking around, and I said, oh, UFOs. That’s kinda cool. Like, you know, I read nothing but sci fi.

Speaker: 1
01:15:05

I mean,

Speaker: 0
01:15:05

I’m, you know, pathetically narrow in that sense. And so I followed, you know, the usual kinds of things that you would see in the early days of YouTube, and I came across this thing called the Atacama mummy. You probably know it, that little mummy that was claimed to be an alien baby.

Speaker: 1
01:15:24

Is this the Peruvian one?

Speaker: 0
01:15:26

Yes. It was, no, it’s Chilean.

Speaker: 1
01:15:28

Oh, okay. So this is the original one? The original one. Long ago.

Speaker: 0
01:15:31

And so I reached out to the people who were claiming to represent the owner of the thing. And I said What year was this? 2010, 2011. And I said, hey, I can tell you what it is. I can tell you if it’s human or not, if you would get me a piece of it. You know, first of all, send me some X-rays of the thing.

Speaker: 0
01:15:52

So the first thing I did with those X-rays, it turned out that at Stanford we had the world’s expert, who wrote the book on pediatric bone disorders. And I brought it to him, and I said, what do you think this is? And he said, well, I haven’t really seen this before, but it could be this gene, this gene, this gene, etcetera. I said, but here’s oh, there it is.

Speaker: 0
01:16:11

There it is. Yeah. And so, yeah, it looks weird, doesn’t it? Super. And so the expert told me, okay.

Speaker: 0
01:16:24

I need this view of an X-ray, this view, this view, this view. And so we got that, and he came back and said, okay, well, you know, we need to get some DNA sequencing. I said, okay. So we got a piece of the bone, actually from the rib, and the rib was important to use because that would be, I felt, an area that would be least likely to be contaminated by bacterial, you know, degradation.

Speaker: 0
01:16:49

And so I got a little bit of bone marrow out, and I did the sequencing. Long story short, once I’d done that, there was a lot of DNA that didn’t make sense, but it was old DNA. It wasn’t that old actually, but it was degraded. So I had to bring in experts at Stanford who knew how to fix the degradation.

Speaker: 0
01:17:08

And then I had to bring in an expert in South American genetics, who also happened to be at Stanford. And then we brought in a team of students, and then I brought in Roche Diagnostics. I had sold a sequencing company to Roche a few years earlier. So I brought in the team that actually knew how to help me assemble the genome.

Speaker: 0
01:17:32

And then we published a paper which said it’s human, it was a female, and here are some mutations that might explain what it looked like. It did have some mutations in a gene. And then the UFO community hated me because I had shown it was a human baby, not an alien.

Speaker: 0
01:17:55

But of course, that picture that you showed, I mean, it was worldwide news, and literally the title of one of the things was Stanford Scientist Sequences Alien Baby. And so, you know, but the paper stands the test of time. Nobody’s disproven what it is that I showed, despite the fact that some people want to say that I was a CIA plant and I was paid off by the CIA, etcetera.

Speaker: 1
01:18:21

Of course.

Speaker: 0
01:18:21

But what that had done, which I didn’t realize but I’d kind of hoped, was that it sent up a flag to a scientific community that already existed, that I wasn’t aware of, of scientists who were deeply involved with the government in the analysis of UAP that I wasn’t privy to. And so literally about a month after the movie came out about that thing, I got a knock at my door, and it was a representative of the CIA and an aerospace company, unannounced, and they said, we wanna talk to you.

Speaker: 0
01:19:03

And they wanted my help with a number of military and diplomatic personnel who’d been, they claimed, harmed by things. They’d either heard stuff, etcetera. And long story short, the majority of the 100 or so people whose medical records I was privy to ended up being the first of the Havana Syndrome patients.

Speaker: 0
01:19:29

Oh. They’d heard things in their head, etcetera. But what they had done was they had shown me the data literally that day in my office. They brought out the MRIs. They brought out the x rays and the damage in the brain, etcetera. It was clear.

Speaker: 0
01:19:43

I mean, it was not just data, it was evidence that something had happened. It wasn’t somebody’s story, it was evidence that was repeatable. And so it took us about three or four years to figure out what they were. And it was at about the time that the Havana events were occurring that we realized that all the symptoms we were seeing in this group of patients were matching what the Havana Syndrome individuals had.

Speaker: 0
01:20:12

So in a way, that was good because that meant that those 90 or so patients who matched, we could hand over to the national security people. And, you know, it became a real thing. And now there’s, like, a DOD website that has anomalous health incidents where people can come forward and report the stuff that they’ve got, and here’s the ways you can use the Veterans Administration to seek medical help.

Speaker: 0
01:20:36

Whereas previously, they’d been shooed away as we don’t wanna hear about this.

Speaker: 1
01:20:40

What do they think it is?

Speaker: 0
01:20:41

It’s an energy weapon of some kind, say microwave or other energy, or a gamma energy weapon. And that sounds, okay, that sounds crazy, except no one would admit, or no one would deny, that we have the capability to do it. It’s basically, if you take the front off your microwave and turn it on and put your face near it, you’ll get burned.

Speaker: 0
01:21:00

So this is just a way to direct the microwaves or sound waves

Speaker: 1
01:21:04

At specific individuals. At

Speaker: 0
01:21:05

specific individuals and

Speaker: 1
01:21:07

And do you think it was random? Or, no. So these are targeted people, with specific intention to get those people because they had some function that they wanted to They

Speaker: 0
01:21:17

wanted to annoy them, they wanted to get them out of the way.

Speaker: 1
01:21:20

Oh, because they

Speaker: 0
01:21:21

were in Havana. Because they were in Havana. But it’s been used all over the world. You know, I still get emails from military personnel saying this and this and this happened to me, here’s my medical records. And so now I know they know that I’m a safe place to approach, because then I know where to send them on the inside.

Speaker: 0
01:21:42

But what was interesting was that once we had set that aside, and I’ve advised the Senate Intelligence Committee and I’ve advised the House on things. I wrote a white paper for them years ago on what I thought needed to be done. But what was interesting were the remaining 10 people who, you know, didn’t have Havana Syndrome but had a series of other problems.

Speaker: 0
01:22:06

And several of them had said that part of their problem was initiated because they’d come in contact with what they had claimed to be a UFO. By the way, I just noticed that you have a UFO on the wall behind you.

Speaker: 1
01:22:16

Yeah. We’re all in over here.

Speaker: 0
01:22:20

So that got me introduced to, you know, people like Jacques Vallee, who you’ve had on the

Speaker: 1
01:22:30

show, I think. Tyler.

Speaker: 0
01:22:32

Great guy. He became my mentor, who essentially took me out of the wilderness. I could have gone down 20 different rabbit holes. And he lives in San Francisco, and we would meet regularly, and we still meet regularly. And he basically gave me a formulation of how to think about this, you know, that I never would have been able to get from 20 different, you know, or 100 YouTube videos or what have you, and he introduced me to the right people.

Speaker: 0
01:23:02

That eventually led me to meet Lou Elizondo. And actually, two weeks before that article came out in The New York Times, I met Lou in Crystal City overlooking the Pentagon, and he showed me the videos that were about to come out. And that was the first time that I had met him. And then through all of them, I met Dave Grusch and Carl Nell.

Speaker: 0
01:23:21

And Dave and I are in regular contact, and, you know, I just wanna say upfront, I hope that the Trump administration understands the value of what David can bring to them and puts him in a position of authority that gives him not necessarily the ability to make decisions, but the ability to give the necessary information to the right people, because I think there’s great commercial value here that is being missed, not just the are-we-alone, etcetera.

Speaker: 0
01:23:52

I think there’s extraordinary commercial value. I mean, imagine a civilization that’s a million years ahead of us. How many technology revolutions allow these objects to move as they do? We clearly see something motivating itself or maneuvering around the atmosphere. Right. So if we could scrape just the tiniest bit of understanding off of the top of that, what would that do to change our own civilization?

Speaker: 0
01:24:21

I mean, silicon, a grain of sand, makes us who we are today. Everything that’s around me right here is all run off of silicon. Right? I mean, compute. But imagine that there are other inventions, other ways of manipulating reality that we don’t appreciate yet because our physics just isn’t there yet.

Speaker: 0
01:24:41

If we can understand that, so the government might say, well, we need to keep this behind closed doors for weaponization, or we don’t wanna disrupt energy production, or what have you. That’s fine. But maybe there’s too much secrecy, and maybe there’s an aspect of that that could be taken advantage of.

Speaker: 0
01:25:01

So Carl Nell and I have gotten into positive arguments about this, about that, well, it’s not black and white that we keep something secret or we put it into the public domain. Maybe there’s a middle domain where you have a public private partnership opportunity. And actually, Carl has now adopted this, at least in part, that maybe companies come to the fore, or investment fora come to the fore, where they will put money in, as options, to fund, let’s say, public scientists to come in behind the scenes with the right levels of clearances to study stuff that would propel society forward again.

Speaker: 1
01:25:41

But this is assuming two things. One, that we have actually recovered these things.

Speaker: 0
01:25:46

Right.

Speaker: 1
01:25:47

And then, another one is that it’s from a society from somewhere else Mhmm. That’s far more advanced than we are today

Speaker: 0
01:25:55

Right.

Speaker: 1
01:25:58

Which might not be correct. It might not be that it’s from somewhere else. It might be that it’s from somewhere here

Speaker: 0
01:26:08

Mhmm.

Speaker: 1
01:26:08

Or a dimension that we don’t have access

Speaker: 0
01:26:14

to. Right.

Speaker: 1
01:26:14

Right? Yeah. This is assuming that all this stuff is real. Right. But when you’re talking about the government and back engineering of things, like, so this is the narrative. The big argument has been that they have recovered these things and that these things are now in the hands of defense contractors.

Speaker: 1
01:26:29

And that there’s been a misappropriation of funds, lying to Congress, and it’s always gonna stay secret, because if it didn’t, everybody would go to jail and everyone would get sued. Yeah. Right? Is that fair?

Speaker: 0
01:26:42

Yeah. I mean, that’s fair. I mean, Ai but I would say amnesty would be one way to.

Speaker: 1
01:26:47

Were you in the Age of Disclosure documentary?

Speaker: 0
01:26:50

Briefly. Yes.

Speaker: 1
01:26:51

Yeah. Okay. Which I thought was very good.

Speaker: 0
01:26:53

Very good.

Speaker: 1
01:26:54

And I can’t wait for that to come out. I’ve been telling people, like, how can I see it? I don’t know. I could see it. It’s not out yet. Yeah. And I don’t know why. Whoever it is, go, Netflix. Yo, go buy that.

Speaker: 0
01:27:03

It’s Yeah.

Speaker: 1
01:27:03

Really good.

Speaker: 0
01:27:04

Yeah. Exactly.

Speaker: 1
01:27:05

It’s really good.

Speaker: 0
01:27:05

It’s a great show. I mean Yeah. And it has a number of officials, and I think I sent you guys some of the videos, basically coming forward. I mean, you know, Marco Rubio, our current secretary of state. I mean, you saw He’s

Speaker: 1
01:27:18

in it. Yeah. He’s in

Speaker: 0
01:27:18

it, like, for, like, ten minutes

Speaker: 1
01:27:20

Mhmm.

Speaker: 0
01:27:20

Saying some remarkable things. You know, Senator Rounds, you know, you name it. More recently, Tulsi Gabbard Yes. Coming out and saying, well, there’s something going on.

Speaker: 1
01:27:29

I think one of the most fascinating things is Hal Puthoff’s depictions of, descriptions rather, of what happened during the Bush administration. Herbert Walker Bush.

Speaker: 0
01:27:39

Right.

Speaker: 1
01:27:39

So, I believe it was 1990, they came to Hal Puthoff and a bunch of other experts and said, we want a numerical value placed Right. On all the positives and the negatives of disclosure. Mhmm. Because Mhmm. We have acquired these crafts from somewhere else.

Speaker: 1
01:28:02

We believe they’re not of this world and, we have not made them and we’re talking about letting the general public know.

Speaker: 0
01:28:10

Right.

Speaker: 1
01:28:10

And they overwhelmingly said that the positives were dwarfed by the negatives. Right. The negatives being banking, religion, government Right. Societal structure, like it would all fall apart if we knew we weren’t alone. Not only are we not alone, but something is infinitely more sophisticated than us. Mhmm.

Speaker: 1
01:28:28

And might be responsible for us being here

Speaker: 0
01:28:31

in the first place.

Speaker: 1
01:28:32

Right. That’s where it gets super squirrelly.

Speaker: 0
01:28:35

Right. Right. Well, you could imagine The Book of Enoch

Speaker: 1
01:28:38

and there’s a lot of

Speaker: 0
01:28:39

I mean, I think it’s a little bit overwrought as to what humanity’s reaction will be. People are more worried today about putting food on the table than they would be about, you know, ethereal or supposed aliens. I mean, on the assumption that they’re not gonna basically show up in your local area and start, you know, interacting with you, I think the fact of revealing that we’re not alone is actually more of a hopeful thing to me.

Speaker: 0
01:29:11

So because, you know, how many TV shows right now are about the apocalypse Right. Of a thousand different kinds. Yeah. Wouldn’t it be nice to know that somebody got beyond it? Yes. That there’s not a cliff that we all have to walk over? Right.

Speaker: 0
01:29:24

And if so, how do we not walk over the edge of the cliff? I mean, that to me is a hopeful outcome. Now Hal and Eric and all the people are all good friends. Hal, for all of the things that he says positively, is probably the tightest clam I’ve ever met in terms of making sure that he doesn’t go over the line.

Speaker: 1
01:29:44

Yeah. He knows too much. Yeah. That’s the thing. He has to be very careful who he’s talking to and what he says. I’d like to mind meld him, the Spock thing, where you get all the information.

Speaker: 0
01:29:55

But it’s people like him and Jacques and Kit Green and a number of others, and I sat around a table with them for several years, like, twice a year. And I looked around the table and thought, the things that these people know or claim to know, I wanna know. Mhmm. And the opportunity that’s here, and why can’t we get this information out if it’s real?

Speaker: 0
01:30:22

And so rather than arguing with people about the matter, that’s, for instance, why I created the Sol Foundation, which is a charitable group of academics. I started it with David Grusch and Peter Skafish. David, of course, had to leave because he had, responsibilities he wanted to go take care of.

Speaker: 0
01:30:43

And, actually, we’ve now had for three years in a row, a symposium, first at Stanford, then at San Francisco, and the next one is now in Italy. So I’m gonna plug it sol2025.org. You can go look if you wanna go to

Speaker: 1
01:31:00

Sol? S-O-L, sol as in the sun. 2025.org. Dot org.

Speaker: 0
01:31:05

And the purpose of that was not to advocate that any of this is real, but was to create an environment within which academics or professionals or just laypeople interested in the subject matter could come and talk about it in a very professional manner. Right? Just to bounce around ideas, not to advocate for, you know, they’re here or they’re reptilians or they’re this or they’re that, but to ask, like some of the things you raised, what are the ethical issues?

Speaker: 0
01:31:35

What are the religious issues? So we have put out a number of white papers, for instance, where we had a member of the Catholic hierarchy write a paper on the issues related to Catholicism and religion. We’ve had Timothy Gallaudet, who’s actually on our advisory committee, talk about USOs and those issues. We’ve talked about near space issues. Peter is running a study on experiencers.

Speaker: 0
01:32:01

Not that the experiences are necessarily real, but what are the kinds of psychosocial matters that need to be considered for people who say that this has happened to them. So there’s a group in the UK called Unhidden, which is basically a bunch of psychiatrists, a group of professional psychiatrists who say, okay.

Speaker: 0
01:32:25

Well, there’s a trauma associated with this. Whether it’s real or not, we don’t know. But what are the kinds of rules or provisions that we should provide to the public and to therapists? So when somebody shows up at your doorstep, you know, in therapy and says this, you shouldn’t immediately reach for the, you know, the antihysteria or schizophrenia drugs.

Speaker: 1
01:32:51

Right. Right.

Speaker: 0
01:32:52

Now I was lucky enough in my neighborhood, our next door neighbor who moved in for a while was the chair of psychiatry at Stanford. And so we’d go over to have dinner with her and her husband. And, you know, one of the first things she says is, hey. What do you do? Blah blah blah. And I happened to mention the UFO thing, and she just sort of, like, sat back in her seat.

Speaker: 1
01:33:13

Okay. Oh, you might be a kook.

Speaker: 0
01:33:15

Okay. But it took, you know, a year or so until she finally realized that I wasn’t, and that I was approaching this in a very scientific manner. I had my beliefs as to what I think it is that I’m dealing with and that there is some sort of reality to this.

Speaker: 0
01:33:30

But that’s separate from the scientist in me that says, well, if I wanna talk about this scientifically, here are the things that I need to prove or disprove. So that has led, for instance, to my production or study of materials that Jacques Vallee had brought to me, some metals and other things that had chains of evidence associated with them being at some UAP or UFO landing.

Speaker: 0
01:33:56

And so interestingly, some of these metals were very unusual. Super high purity silicon, strange magnesium ratios, the isotope ratios are wrong, etcetera. Now that’s not proof of anything, but it’s proof that somebody engineered them. So it’s that plus the medical. Those are the kinds of reality based tests that I can do to provide to my colleagues to say, here is data and evidence.

Speaker: 0
01:34:27

Evidence isn’t proof of anything. Evidence, like in a court of law, is just evidence that you provide to the jury of peers. Right. Right? So but I sort of have gone a step further. And that is I’m like, okay. Well, if these things are, let’s say we get some advanced material.

Speaker: 0
01:34:47

How do I prove that this advanced material was made by some superior intellect? Well, probably the atomic positioning of how the material is made is going to be more advanced than even our most advanced computer chip. So how do you determine that? Well, you need some sort of atomic imager that might tell you where the positions of the atoms are and what the bond structures are, so that you say, well, that’s something I can measure, and I can give those results to somebody else and they can say, yeah, it’s right or it’s not.

Speaker: 0
01:35:20

But at least I can say no human at least that I know of could make this. So I started a company that I’ve raised money for with this new idea that I have for how to make an atomic imager and we’re doing it. And so, you know, we’ve raised the money. We’re building it already. And I know it will work.

Speaker: 0
01:35:38

So when I have it, whether or not it’s useful for looking at UAP materials is almost immaterial because I know how useful it will be for the nanomaterials, the metamaterials, the alloys that the government, etcetera, uses, for biology, etcetera. So rather than predicting what a protein structure or a DNA or a chromosome arm looks like, I’ll be able to read its structure directly.

Speaker: 1
01:36:03

So I wanna bring you back to, you said it was 10 people that didn’t have Havana syndrome, that they had some sort of an injury that was associated with the UAP event. Mhmm. What was their thing? Did they have an implant or was there a No.

Speaker: 0
01:36:17

Some of them had, like, what you would call white matter disease in their brain, like, they had been exposed to something. So white matter disease, if you have, for instance, multiple sclerosis and you look in the brain with MRI, you’ll see these white areas which are basically dead tissue, scar tissue.

Speaker: 0
01:36:36

They had things like that. One person, one of the pictures that I had, was that they had claimed to have seen something in their backyard. They shone a flashlight at it. And the moment they did, they got zapped. And then you see the picture of the guy, the back of his neck, this huge welt and bruising and scarring. There’s no reasonable way you could have gotten something like that just by exposing yourself to a flame, for instance, or a blowtorch.

Speaker: 0
01:37:15

And so it’s these kinds of events. And the unfortunate issue with these is that they’re not repeatable. They’re one-off anecdotes.

Speaker: 1
01:37:25

Right.

Speaker: 0
01:37:26

You and you certainly can’t put a person in a place where they become bait for these kinds of events to occur. And, so you’re

Speaker: 1
01:37:36

sort of Some people would volunteer for that,

Speaker: 0
01:37:38

though. Somebody might.

Speaker: 1
01:37:39

Yeah. To go get zapped. Do you know about

Speaker: 0
01:37:41

the Travis Walton story. Right? Very much. Yeah.

Speaker: 1
01:37:44

Yeah. What did you what do you think of that?

Speaker: 0
01:37:46

You know, he’s kept to his story over all of the years.

Speaker: 1
01:37:49

That’s what’s so confusing.

Speaker: 0
01:37:51

Yeah. I mean, he’s had no reason. I don’t know that he’s profited off of it. You know, I find it fascinating, you know, but it’s Yeah. It’s the irreproducibility of the events that the skeptics, I call them more pseudo-skeptics, they’re pseudists, like nudists, they’re pseudists, that use the one-off nature of these events to disparage the entire, you know, idea of it.

Speaker: 0
01:38:23

It sounds ridiculous. Well, of course it sounds ridiculous, because you’re talking about something that is outside

Speaker: 1
01:38:27

so it zaps people. Yeah. That’s ridiculous.

Speaker: 0
01:38:30

And I don’t think that even he, Travis, would propose that he was purposefully hurt.

Speaker: 1
01:38:37

Right.

Speaker: 0
01:38:38

I mean, if you walk across an airfield and get in the plume of a jet engine, you’re gonna get hurt. Right. You know? Yeah.

Speaker: 1
01:38:46

And his story is that he was taken aboard to heal him. Yeah. That something happened to him during that event. But the crazy part is that all the other people that were in the truck, they witnessed it, and then they passed polygraph examinations.

Speaker: 0
01:39:00

Right.

Speaker: 1
01:39:00

They also told the same story independently when they took them and separated them. And then Travis Walton shows up five days later with the same clothes on

Speaker: 0
01:39:09

Right.

Speaker: 1
01:39:09

With this crazy story.

Speaker: 0
01:39:10

Right. You know, so when people say that, you know, there’s no evidence or where’s the evidence, my first question to them is, well, have you read any books about any of this? Have you spent even a moment looking into it? And, you know, I would point them at books like Robert Powell and Michael Swords’ UFOs and Government, which is not a proposal that any of this is real.

Speaker: 0
01:39:33

It’s just the story of these events over decades. And so there’s books like that, dozens of them, that tell the story of data and evidence. How you contextualize it is, you know, up to your personal biases, let’s say. But there’s plenty of evidence. But if people have an opinion about it and they haven’t looked into it, they’re more like priests than they are scientists.

Speaker: 1
01:40:03

Yeah. It’s also the public. The general public narrative is UFO equals kook. Right. You’re a kook. You believe in that? That’s ridiculous. Mhmm. That’s ridiculous.

Speaker: 0
01:40:14

And I don’t believe in anything. I believe in the data and the evidence. There’s not enough evidence for me to tell a colleague of mine it’s real. Right. But there’s enough evidence for me to say there’s a question worth answering.

Speaker: 1
01:40:27

So when you were talking about magnesium and these, whatever these alloys are, what is specifically wrong with them that you don’t think that it was manufactured by, like, a standard sort of alloy plant in the United States or somewhere else?

Speaker: 0
01:40:45

Right. So the silicon that I’m talking about is from an event in Ubatuba, Brazil. And interestingly, there’s another piece of it that appears to have been magnesium, but both of them are of a purity that is unusual for the day, in the, like, late nineteen fifties. So I did an atomic mapping of the piece of silicon down to a level where it’s, like, 99.999% silicon.

Speaker: 0
01:41:14

And so one piece of it had magnesium ratios that were earth normal. And these were impurities, let’s say. The other piece was way off earth normal. So, for instance, anywhere on earth, if you look at the ratios of the three magnesium isotopes, 24, 25, 26, it should be, like, 80%, 11%, 9%, more or less.

Speaker: 0
01:41:44

And anywhere in our solar system, that’s more or less what the values of the ratios should be. And that has to do with stellar evolution and how, you know, radioactive compounds might decompose to whatever. But we got this ratio that was just way, way off. So by luck, I came across a postdoc at Stanford.

Speaker: 0
01:42:12

He and a graduate student, both in applied physics, who are interested in UAP. And I said, I’ve got these ratios. What do you think it means? And so they looked at the ratios, the weird one, and they said, well, let’s do some calculations.

Speaker: 0
01:42:27

And so it turns out that the ratios that we have could have been generated from normal magnesium ratios if you exposed normal magnesium to a neutron source for nine hundred years at the level of an atomic bomb every few seconds. Okay? Yeah. Wow. So it’s like, I’m looking, and this data is literally two weeks old. But the calculations are there.

Speaker: 0
01:43:01

So you’re like, okay. Well, where and how, you know, the chance of getting that number correct on three things is low, you know, to put it mildly. But to say that you had exposed these things to that kind of a neutron source means something interesting, right? So again, it doesn’t prove anything other than that the result is mathematically and materially true. So what does it mean?
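An aside on the check Nolan describes: “earth normal” magnesium has known isotope abundances, so a measurement is flagged by how far each isotope’s fraction deviates from that terrestrial baseline. A minimal sketch in Python, with made-up sample numbers rather than Nolan’s actual data:

```python
# Terrestrial magnesium isotope abundances (approximate natural fractions).
TERRESTRIAL_MG = {24: 0.7899, 25: 0.1000, 26: 0.1101}

def ratio_anomaly(measured: dict[int, float]) -> dict[int, float]:
    """Relative deviation of each isotope's fraction from the Earth-normal value."""
    total = sum(measured.values())
    return {
        iso: (measured[iso] / total - ref) / ref
        for iso, ref in TERRESTRIAL_MG.items()
    }

# Hypothetical off-nominal sample: far too much 26Mg relative to 24Mg.
sample = {24: 0.60, 25: 0.10, 26: 0.30}
anomalies = ratio_anomaly(sample)
for iso, dev in anomalies.items():
    print(f"{iso}Mg: {dev:+.1%} vs terrestrial")
```

A large deviation on all three isotopes at once, as described in the transcript, is what makes the measurement interesting rather than dismissible as instrument noise on a single channel.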

Speaker: 0
01:43:37

Again, for a scientist like me who loves data off the curve, it’s catnip. I can’t help myself but want to know and understand more about it.

Speaker: 1
01:43:51

Yeah. I mean, just what you said about the magnesium ratios, like, has there ever been any debunkers that have some sort of an explanation for why you would find that? No. I mean They think that your measurements are off?

Speaker: 0
01:44:09

Well, I mean, the only way, I mean, you could create that ratio artificially by purifying each of those isotopes and then premixing them to that ratio. But why would you blow it up over a beach in Ubatuba, Brazil in the late nineteen fifties and then let it sit in a museum in Argentina for fifty years until Jacques Vallee ended up going and grabbing a piece of it and bringing it to me to measure on an instrument in the engineering department at Stanford?

Speaker: 1
01:44:38

Why? Could you do it physically back then? Would that be possible?

Speaker: 0
01:44:43

It would have been very hard. It would have been very, very hard. You could. But in the late nineteen fifties, we were still busy trying to isolate and separate uranium isotopes for making more bombs. I mean, let’s be serious. What do humans separate isotopes for? To make bombs or to do health-related tagging, which is really only something that came to the fore in the sixties and seventies.

Speaker: 1
01:45:15

And this predates that by a decade?

Speaker: 0
01:45:16

It predates it. So it’s unusual. It’s possible. Mhmm. But, I mean, again, with any of these things, why, for instance, would one of the supposed pieces that came from that event be magnesium at a level of purity that only Dow Chemical at the time had the ability to create?

Speaker: 1
01:45:40

Now what else was at this site, and what is the story behind this site?

Speaker: 0
01:45:44

A fisherman sees this glowing object that kind of released something which then exploded, and he picked up pieces of it. And there’s some chains of evidence of how it got to either a newspaper in Brazil or to this South American museum, etcetera, and different studies have been done by different people over time.

Speaker: 0
01:46:08

And the surprise to me was that the piece that I had was silicon, whereas the lore was that it was magnesium. So I’ve been in contact with the people who talk about it as being magnesium, saying, well, you know, your results don’t dispute mine. It just says that maybe there was something different.

Speaker: 0
01:46:27

Is that him

Speaker: 1
01:46:29

there? Travis Walton?

Speaker: 0
01:46:30

That’s Travis? Yeah.

Speaker: 1
01:46:31

That’s Travis. Travis Bobwood. That’s good. The game is.

Speaker: 0
01:46:36

So, you know, I don’t know what it means. I mean, I published probably one of the first peer-reviewed papers on a UAP material, from an event in Council Bluffs, Iowa. And the event was, an object is seen rotating, lights flashing, etcetera. Something appears to drop from the object. The police saw it.

Speaker: 0
01:46:59

Several other groups saw it in the nineteen seventies. They all converged on the locale, and this was, like, in February or something. It was winter. And there was this big pile of molten metal in the middle of this field, probably thirty, forty pounds of it. And people tried to explain it away as, well,

Speaker: 0
01:47:17

a helicopter had a giant vat of molten metal. And then you calculate how far and how big a container you would have to have to carry molten metal of this type. And so I analyzed it with a technique that we invented in my lab, actually, called multiplexed ion beam imaging, which is a kind of what’s called secondary ion mass spectrometry, where what you do is you shoot a beam of ions at an object like a sandblaster.

Speaker: 0
01:47:43

It ionizes the material on the target, and then you shoot off and measure the mass of the objects that you just sandblasted off. And so what we found was nothing unusual in terms of isotope ratios, except we found a mixture of metals that, depending on where you looked in the sample, was different.

Speaker: 0
01:48:05

So it would be, like, iron, titanium, and chromium of a certain ratio here, but a different ratio of those things over there and over here. So what that meant was that whatever this stuff was didn’t come completely premixed. It wasn’t like a milkshake. Right. It was a slurry of partially mixed materials that somebody decided to drop off. So, again, this is just data. Uh-huh.
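The “milkshake versus slurry” argument above is essentially a homogeneity test: a fully premixed melt should show roughly the same element ratios at every sampled spot. A toy sketch of that test, with invented spot compositions (not the actual Council Bluffs measurements):

```python
# Each "spot" is a dict of element -> measured counts at one sampled location.
# If the melt were fully premixed, every element's fraction would be nearly
# constant across spots; large spot-to-spot spread suggests a partial slurry.

def is_well_mixed(spots, tol=0.05):
    """True if each element's fraction varies by less than `tol` across spots."""
    for element in spots[0]:
        fracs = [s[element] / sum(s.values()) for s in spots]
        if max(fracs) - min(fracs) > tol:
            return False
    return True

uniform = [{"Fe": 70, "Ti": 20, "Cr": 10}, {"Fe": 71, "Ti": 19, "Cr": 10}]
slurry  = [{"Fe": 70, "Ti": 20, "Cr": 10}, {"Fe": 50, "Ti": 35, "Cr": 15}]
print(is_well_mixed(uniform))  # True
print(is_well_mixed(slurry))   # False
```

The tolerance threshold here is arbitrary; a real analysis would compare the spread against the instrument’s counting statistics rather than a fixed cutoff.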

Speaker: 0
01:48:31

But my purpose of publishing it was, first, and this was published in Progress in Aerospace Sciences, peer-reviewed, the purpose was to show you’re not going to get thrown out of the academy for publishing this stuff. As long as you don’t make crazy conclusions and you just say, here’s the data, to show people that you can publish this stuff as long as you’re scientifically careful in how far you go.

Speaker: 0
01:48:59

You leave yourself plenty of diplomatic exits in the verbiage that you use. And it was part of what then got me to start the Sol Foundation, along with Dave and others, to say, look, it’s okay to do this as long as you’re careful. And that’s why people, I mean, Avi Loeb came after me, because he had kind of the same pushback from his community, where all he was doing was saying, the question’s on the table. I’m not saying it’s true.

Speaker: 0
01:49:29

It’s just you can’t push this off the table. So he had the same kind of righteous indignation that I have that propels me to say, well, I’m going to show you why you can’t take this off the table.

Speaker: 1
01:49:42

So when they found this puddle of molten metal and it’s a bunch of different mixtures, it seems like there’s a bunch of different stuff that was there and it wasn’t perfectly mixed. Have you theorized some sort of a reason why they, any person or any creature or any being, would do that?

Speaker: 1
01:50:05

Is there something that you would extract from that kind of metal? Like, heating it up to a certain degree and having a mixture of all these things, and this is just a byproduct Yeah. That they’re dropping off?

Speaker: 0
01:50:16

I think it’s a byproduct of some process that is, again, might, might, might.

Speaker: 1
01:50:21

Might might might extract.

Speaker: 0
01:50:23

It might be part of a propellant system. It might be part of the way that they generate the fields that allow these things to move. Again, these are all mights. It’s speculation. But it’s like, when you see something and you don’t understand what it is, you have to be fully open.

Speaker: 0
01:50:39

I mean, for all I know, they’re flushing the toilet. Right?

Speaker: 1
01:50:43

Oh, boy.

Speaker: 0
01:50:44

Yeah. Ew. But They got metal poop. So but, you know, I have the original Polaroids from the police department of it. So, you know, it was real. And people say, oh, it was thermite. Well, if it were thermite, there’d be aluminum oxide,

Speaker: 1
01:51:01

you know. Thermite meaning that’s how it was melted down.

Speaker: 0
01:51:03

That’s how it was melted down, and it’s just some kids playing around, etcetera. And it was a it was a big joke.

Speaker: 1
01:51:09

Wacky kids with their thermite.

Speaker: 0
01:51:10

With their thermite. You know? But it turns out there’s no aluminum hydroxide, or oxide I should say, in the sample. I mean, I have the analysis. It’s just not there.

Speaker: 1
01:51:18

So it had to have been extreme heat?

Speaker: 0
01:51:20

It had to have been extreme heat of some kind that would produce it. And, you know, whatever it was was hovering for a moment, so it wasn’t an airplane. And there were no helicopters, at least no helicopters with flashing lights. And, you know, there’s been huge chunks of

Speaker: 1
01:51:36

it still exist. And the amount of this stuff, the kind of cauldron that would have to exist in order to melt this would be immense.

Speaker: 0
01:51:45

Was immense. Yeah. So people back in the seventies already sort of made estimates of what was there, and people said, oh, it’s a meteorite. Well, no. We basically showed mathematically how, you know, first of all, meteorites make holes when they hit the ground. They don’t melt when they hit the ground, and they make explosions.

Speaker: 1
01:52:03

Are there similar instances of something along this line?

Speaker: 0
01:52:07

Several. Really? That’s what’s so interesting, is that worldwide, there are multiple reports of molten metals that get dropped off of these objects. And I actually have two other ones of a molten metal that was dropped off, one case in Australia and another, another area.

Speaker: 0
01:52:28

I’m not allowed to say, but one actually happened, supposedly, I’ve gotta find the guy again, in Fresno. Maybe he’s listening. He said stuff dropped, and he has, you know, molten metal that landed in a puddle in the asphalt of his driveway. And he saw this object.

Speaker: 0
01:52:45

So and and he’s He’s just holding on

Speaker: 1
01:52:48

to it?

Speaker: 0
01:52:48

He’s holding on to it. He reached out to me, and, you know, it was still at a time when I was just kinda getting into this area, but there’s many, many examples of this kind of thing. But interestingly, several of these other ones are just aluminum. The one that I have is iron or whatever. So what does that tell me?

Speaker: 0
01:53:08

Does that tell me there’s different kinds of ways of accomplishing the goal? Mhmm. But whatever it is, they’re either throwing something overboard because they don’t need it anymore, or because maybe it’s getting in the way of something and it’s time to get rid of it.

Speaker: 1
01:53:22

Have you brought in anyone who’s, like, a real expert in material sciences that would, like, theorize, given an immense increase in technology, what potentially do you think this could be?

Speaker: 0
01:53:35

The purpose of being on shows like this is to have experts maybe give me an idea because the people I’ve been to at Stanford, you know, the other professors, they’re like, okay. Yeah. I gotta go. Yeah.

Speaker: 1
01:53:52

Yeah. It could be it could actually be detrimental to your career. And that’s what’s really weird about something when you’re just talking about data, specifically, in this case, of an actual physical thing that anyone can measure.

Speaker: 0
01:54:04

Right. And I’ve got pieces, I’ve got plenty of it, you know. And the original piece is, you know, like this big. The owner of it had brought it to my lab just last summer again. It’s like

Speaker: 1
01:54:17

big as an iMac. Yeah. Exactly.

Speaker: 0
01:54:19

Oh, it’s huge. Crazy. And and so what is it? I would love for somebody to tell me that it’s conventional and has a purely prosaic answer. Because then I can go on to the next thing. The whole reason for getting that the Atacama mummy off the table was not because I wanted to annoy anybody.

Speaker: 0
01:54:36

It was because it was spectacular. It’s obviously something people would pay attention to. So if it’s real, let’s do it. If it’s not, let’s get it off the table, because it’s usually the stuff that’s hidden under the rubble that’s the most interesting.

Speaker: 1
01:54:48

My question about that mummy is not that it’s an alien, but if it does register as human in the DNA, is it potentially a different kind of human than us?

Speaker: 0
01:55:01

Well, certainly, she She? We brought in an expert in South American indigenous people genetics. And the analysis showed that the standard genetic mutations that are found in different racial groups around the world matched exactly the Atacama region of Chile.

Speaker: 0
01:55:27

So her parents, her relatives, were clearly Chilean. So yeah. I mean, that’s really all you can say. If someone wants to say that she’s an alien, well, that’s fine. I’m convinced of what she is and that she deserves a proper burial.

Speaker: 1
01:55:46

And so it’s just a genetic anomaly? Just

Speaker: 0
01:55:48

a genetic anomaly.

Speaker: 1
01:55:50

I do know that you’ve paid attention to the tridactyl mummies. Yep. What is your take on that?

Speaker: 0
01:55:57

So, you know, I think people have conflated a lot of the different mummies that are out there, first of all. There’s, like, at this point, 60 of them or something. And probably a fair number of them, I wouldn’t necessarily call them hoaxes. I would say that they are constructed.

Speaker: 0
01:56:16

But they’re old constructs, so maybe there’s some sort of homage paid

Speaker: 1
01:56:20

Right.

Speaker: 0
01:56:21

To the ancestors or something like that, whatever they are. So there are some ones that you clearly look at and you go, oh, come on. Right. That never lived. You know,

Speaker: 1
01:56:30

that’s Then there’s the fetal position.

Speaker: 0
01:56:31

Then there’s the fetal position ones, the big ones. Yeah. And at the beginning, you know, I’m always open to being wrong, I was thinking, oh, well, because of the small ones, those are probably not real. But then the MRIs started coming out. Yeah.

Speaker: 0
01:56:46

The full body MRIs and the ligature and the bone construction and the fingers and then, perhaps most extraordinarily, I think, the fingerprints on them being clearly not human. So it’s interesting. But here’s the problem: because there’s so much circus around them, unfortunately created by people who want a circus because it sells their TV shows, no scientist of any merit would go near it.

Speaker: 0
01:57:22

So I was approached many times, many times, to study them. And I said, I’ll do it on one condition. Here’s the money I need, not personally, but here’s the money I need to do the kinds of analysis to accomplish this right. Second, there will be no TV cameras, and you won’t hear from me again until I’m ready to talk, because I’ll have double checked and triple checked and quadruple checked the results, and then I’ll have gone out, as I did with the Atacama mummy, bringing in further contiguous circles of experts to double check me.

Speaker: 1
01:57:56

And not make it a circus.

Speaker: 0
01:57:57

And not make it a circus. Because, I won’t name the TV show that wanted to do it, but they wanted to follow me around with a camera. And I’m like, no. This isn’t how science is done. Right. I can’t do it with those strictures.

Speaker: 0
01:58:13

So I would say that if anybody’s gonna do it again, lock the things away with South American scientists. You don’t need a North American scientist to come in and do it. There’s plenty of smart people in South America who can do this properly and respect the rights of the indigenous peoples who own the sacred grounds within which these things were found.

Speaker: 0
01:58:34

I think that’s important. And then do the analysis right. You know, they’ve made, I think, the mistake of saying, well, we’ve done the DNA and there’s a lot of DNA that doesn’t match anything. And the stuff is several hundred years old. Anything that old, you won’t get a lot of good DNA out of it.

Speaker: 0
01:58:56

They did the same thing with the Denisovan and the Neanderthal. You have to correct the chemical errors that occur over time. There are ways to, what’s called, bioinformatically correct. You need to do what’s called overreading of the genome, where you do so many reads of it that you stack them all up line by line, like if you had a thousand versions of an ancient text.

Speaker: 0
01:59:22

You would stack up the lines one by one, and finally you find one line that has this letter that’s correct, and then this one correct, and then you basically do a summation, an averaging of the correctness. And so they say, oh, well, you know, 90% of the genome is nonhuman. It’s probably garbage. It’s probably these mistakes. It’s probably bacterial contamination.
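The stacking-and-averaging idea described above is, at its core, a per-position majority vote across many aligned reads. A minimal sketch, with a toy sequence; real ancient-DNA pipelines also model chemical damage patterns (such as C-to-T deamination), which this deliberately omits:

```python
from collections import Counter

def consensus(reads: list[str]) -> str:
    """Majority-vote base call at each position across equal-length aligned reads."""
    assert reads and all(len(r) == len(reads[0]) for r in reads)
    return "".join(
        Counter(column).most_common(1)[0][0]
        for column in zip(*reads)  # iterate column by column across the stack
    )

# Five noisy reads of the same underlying sequence; two contain single errors.
reads = ["GATTACA", "GATTACA", "GACTACA", "GATTAGA", "GATTACA"]
print(consensus(reads))  # → GATTACA: the isolated errors are outvoted
```

With enough coverage depth, random per-read errors are outvoted at each position, which is why “overreading” recovers a usable sequence from individually unreliable reads.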

Speaker: 0
01:59:45

There’s ways to deal with that, but that requires money, and not one-off DNA sequences put on the interwebs for some amateur genomicist to make a claim about. Right. You know? So there’s ways to do it. I mean, you would wanna, at the end of the day, get the results to the level where you could go to the guys who did the Denisovan and the Neanderthal DNA, the Max Planck people and others who won the Nobel Prize for it, and say, hey, what do you think?

Speaker: 0
02:00:14

But you don’t dare take it to people like that until you’ve done your homework.

Speaker: 1
02:00:20

Mhmm. I see. Yeah.

Speaker: 0
02:00:21

And you do it behind the scenes. You don’t put them under a flashlight. Right. Right. You know? And people, I think, have gotten used to this click mentality of impatience where, I want the result today. Why can’t you just make it all transparent? Dump all the data on the web tomorrow. You’re not transparent. You’re hiding something. No, I’m not.

Speaker: 0
02:00:43

I am just trying to make sure that you don’t make a mistake, or accuse me of making a mistake that you’ll find in the data, because the raw data is never clean.

Speaker: 1
02:00:55

In the Daily Mail headline.

Speaker: 0
02:00:59

Accurate. So so long story short, I think there’s still something worth looking at there.

Speaker: 1
02:01:06

Well, the scans are fascinating. Right?

Speaker: 0
02:01:08

Yeah. The scans are the most interesting to me.

Speaker: 1
02:01:10

Have you seen the Jesse Michels one? Did you see this video? Yeah.

Speaker: 0
02:01:13

Jesse is a good friend.

Speaker: 1
02:01:14

He’s great. I love that guy. And the episode that he did is fantastic. And when you see the scans and they go over the bone structure of the thing and you look at it, you’re like, god, that looks real. If that’s a hoax from seventeen hundred years ago or

Speaker: 0
02:01:30

Exactly. Exactly.

Speaker: 1
02:01:31

Whoever if the carbon isotope dating that they did on it is accurate. I’ve looked at

Speaker: 0
02:01:37

that data. It looks good.

Speaker: 1
02:01:39

Okay. So then it is that old. Fuck you then.

Speaker: 0
02:01:42

Yeah.

Speaker: 1
02:01:43

Because there’s no way someone back then could fake that.

Speaker: 0
02:01:46

And somebody asked me the other day, they said, well, could you have a single mutation? I said, no. I mean, because you don’t get one mutation that does all that.

Speaker: 1
02:01:55

Right. You

Speaker: 0
02:01:55

know, evolution works step by step: this does this, but it has a mistake, which is corrected by this mutation over here, which is corrected by this. And so the genome fluctuates over time, compensating for the errors that would otherwise have killed you.

Speaker: 1
02:02:15

Also, one of them is pregnant.

Speaker: 0
02:02:18

That’s fast. Yeah. I know.

Speaker: 1
02:02:20

Okay. So it’s a three-foot pregnant thing that doesn’t look remotely human.

Speaker: 0
02:02:25

Yeah. So the jury is still out. Right. But if they’re gonna do it right, they need to sequester the stuff away, bring in the right people with sufficient resources, and get rid of the cameras.

Speaker: 1
02:02:38

Have you talked to them? Have you encouraged this? Is this is it possible to nudge this in the right direction? And where is it at right now?

Speaker: 0
02:02:45

I wrote out on Twitter a full thing of what they needed to do. I mean, the easiest first milestone, to be honest, that could be done within a couple of months, is if it is somewhere in the hominid or, let’s say, vertebrate line, there are metabolism genes that we all share.

Speaker: 0
02:03:05

In fact, there are metabolism genes that we share with bacteria that are very similar. So you probably know the technique called polymerase chain reaction, PCR? So, you know, why try to do the whole genome? Why not just target a bunch of genes that we know evolve slowly but do evolve, and PCR those out, because that’s easier to do than trying to assemble a whole genome.

Speaker: 0
02:03:35

And then by having just those, let’s call it preliminary sets of evidence, you could then say, reproducibly: I take a sample from the finger, I take a sample from the bone marrow, I take a sample from here or there on the body, and I take samples from the three different main bodies.

Speaker: 0
02:03:58

And I see the same mutations, and they’re different or somehow aligned with hominid evolution. Right? We compare it to all the known hominids. I mean, that would be the kind of data that you could actually publish in a journal like Nature if you did it right. Mhmm.
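The cross-sample check described above can be sketched as a toy in Python. Everything here is invented for illustration — the short sequences, the reference names, and the simple identity score stand in for real PCR amplicons and real hominid reference genomes.

```python
# Toy version of the proposed check: amplify the same conserved locus
# from several body sites, confirm the samples agree with each other,
# then score identity against known reference sequences.

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

references = {          # invented stand-ins for known hominid sequences
    "human":       "ATGGCCA",
    "neanderthal": "ATGGCTA",
    "chimp":       "ATGACTA",
}

samples = {             # same locus amplified from different body sites
    "finger":      "ATGGCTA",
    "bone_marrow": "ATGGCTA",
}

# Reproducibility first: independent samples must carry the same variants.
assert len(set(samples.values())) == 1

for name, ref in references.items():
    print(f"{name}: {identity(samples['finger'], ref):.2f}")
```

The design choice mirrors the argument in the conversation: agreement across independent sampling sites rules out contamination at one site, and only then is the comparison against the reference panel meaningful.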

Speaker: 0
02:04:17

Because that’s the only way that you’re gonna get anybody to pay attention.

Speaker: 1
02:04:20

There’s also the bizarre anecdotal nature of some of the artwork. Like, the fact that these people did a lot of these tapestries, and a lot of ancient artwork that’s a thousand years old depicts these three-fingered things.

Speaker: 0
02:04:38

Mhmm.

Speaker: 1
02:04:39

So it’s like, were they describing these actual creatures, that it was only a few of them and it wasn’t some weird genetic mutation? Or is this a common visitor that they’re describing?

Speaker: 0
02:04:51

I don’t know.

Speaker: 1
02:04:52

I don’t know either.

Speaker: 0
02:04:52

I mean, why would you put them in a cave in Peru? I don’t know.

Speaker: 1
02:04:59

And if you didn’t put them in a cave in Peru, what would be left? That’s the problem. The problem is it’s really hard to make a fossil. It’s really hard to find bones, you know. Think about all the people that died. Right. And where, you know Yeah. We don’t find that many bones

Speaker: 0
02:05:11

Right.

Speaker: 1
02:05:11

Relatively speaking, in comparison to the fucking billions of Right. People that died.

Speaker: 0
02:05:16

Right.

Speaker: 1
02:05:16

It’s not like we’re tripping over human bones every day.

Speaker: 0
02:05:18

Right. Except in mass graves.

Speaker: 1
02:05:20

Yeah. Right. That’s really, Yeah. And even in mass graves, given enough time, they will deteriorate, like Yeah. Mass graves from 1,700 years ago, whatever these things are.

Speaker: 0
02:05:31

So, you know, again, I find them interesting. And I hope that behind the scenes, there are people who are taking a more methodical approach to this, who I think should remain stealth until they have the data to the point where it’s published. Publish. Yeah.

Speaker: 0
02:05:51

You know, publishing a white paper or putting something out on the Internet is not the same as putting out data that has all of the instruments that you used, the methods that you used, etcetera. The reason you want papers, frankly, when you publish them, to be almost boring and so thick with detail is that no pseudo-skeptic would dare approach it, because they’re just not smart enough.

Speaker: 0
02:06:18

But if you put out these snippets that don’t have sufficient background, they can be picked apart by anybody. Right? But that’s why peer review is so important. And people mistake peer review as trying to get the reviewers to agree with your conclusions. No.

Speaker: 0
02:06:35

The main purpose of peer review is actually to make sure that the methods that you used are sufficiently detailed and correct, so that to the extent you came to any conclusions, they match the methods that you used.

Speaker: 1
02:06:50

And when you think about these potential, whatever they are, whatever these creatures are, if we did find out that they are some sort of a hominid, how much credence do you give to the theory that these UFOs, UAPs, whatever it is, are a break-off civilization from a very, very long time ago that’s very different from us, the same way we’re very different from chimpanzees

Speaker: 0
02:07:24

Right. Which we coexist with. Right. I have no problem conjecturing that. Did you ever see the Netflix show Chimp Empire? Yes. Amazing. Right? Amazing. Twenty million years of separation, and it looked like a fucking faculty meeting. You know, with people, like, looking at each other and planning and plotting Yep. Board meeting. You know?

Speaker: 0
02:07:45

And so we shared all of those interactions from 20,000,000 years ago. Mhmm. So how much further back would you have to go to have something like what

Speaker: 1
02:07:58

Right.

Speaker: 0
02:07:59

That, right? I mean, it’s clearly not recent.

Speaker: 1
02:08:01

And also, if you think about what we are in comparison to chimps, we’re so fragile, we’re frail, we’re easily injured well, if you think of something that’s far more technologically advanced than us, it would be even more frail. It would be even more petite. It would have Right. Almost no muscle at all. It would look, weirdly enough, like the grays from Close Encounters of the Third Kind.

Speaker: 1
02:08:26

Yeah. That’s what it would look like. If it was a hominid that’s whatever we are and it went way past that.

Speaker: 0
02:08:33

Right. Yeah. No. Technology gives evolution the excuse to no longer make or allow for you to be Robust. Robust. Yeah.

Speaker: 1
02:08:43

And also for Why do you need opposable thumbs? Yeah. Right? These things don’t even have opposable thumbs.

Speaker: 0
02:08:48

That was what’s weird about it. Right. It’s like, how do you interact with your environment? They look more like sloths than they do.

Speaker: 1
02:08:54

Right. Right.

Speaker: 0
02:08:55

I mean, at least their hands do. Yeah. And I don’t know. I find it

Speaker: 1
02:08:59

Well, if everything’s done with AI and automation Mhmm. And your interface is purely neurological.

Speaker: 0
02:09:06

Mhmm.

Speaker: 1
02:09:06

Like, you have some sort of a human or a creature neuro-interface with technology. And you just use fingers to, like, lay them on electronics so that you can sync up with it.

Speaker: 0
02:09:18

Right. Yes. Yeah. Yeah. Why not?

Speaker: 1
02:09:21

Like, why are you picking things up, bro? You don’t have to pick things up anymore. Those go away just like, you know

Speaker: 0
02:09:25

Can you imagine the scenario of I mean, these things we know the body is real. What they are, we don’t know. But can you imagine the scenario of what happened as they were being buried? You know, could you, like, make a film of the ceremonial burial of these things? Mhmm.

Speaker: 0
02:09:49

You know, what led to their death? What led to their placement there? Or if they were constructed, which I have a hard time with given the MRIs that we’ve all seen, etcetera, what led to it? And so that to me is almost as interesting as whether or not they’re real.

Speaker: 1
02:10:11

Right. Like, the ones that are clearly constructed, that’s where it gets fascinating. Because, like, what were you trying to reproduce?

Speaker: 0
02:10:17

Yes.

Speaker: 1
02:10:18

And why are they so similar to the ones that look real?

Speaker: 0
02:10:21

Yeah. Is it an homage Right. To the ancestors or to the stories of the ancestors, etcetera?

Speaker: 1
02:10:29

Yeah. Especially when you look at Peru. Like, Peru is, like, you’ve got the Nazca lines Yeah. Which are really weird. Right. You can only see them from the air, and they’re everywhere, and they’re huge. These depictions are of very strange things.

Speaker: 0
02:10:44

You know, I just ask my scientific colleagues not to suspend disbelief, but to open their minds to the possibility of what these things might mean and just try to explain them without dismissing them. Because it’s so easy, and in politics, we see it every day.

Speaker: 0
02:11:03

All you need to do is just give any answer, even if it’s obviously, flagrantly wrong, just as a way to deflect. And so, you know, you can use that approach you shouldn’t ever use that approach as a scientist, deflecting, which unfortunately is what someone like, you know, Neil deGrasse Tyson often does.

Speaker: 1
02:11:23

Yeah.

Speaker: 0
02:11:24

As opposed to trying to explain in a way that teaches your audience the right way to think.

Speaker: 1
02:11:34

Yeah. Well said. One of the things that Jacques Vallee highlighted is there’s an alloy, another piece of metal, something that they’d found that had layers like these at an atomic level Yep. That if you wanted to make this alloy today, it would be almost impossible. It would cost billions of dollars.

Speaker: 0
02:12:00

So I worked with him on one of those pieces. I got the atomic imaging of some of it, and it’s, oh god, I’m blanking on the event, but it was the Sirocco event.

Speaker: 1
02:12:13

And where was that?

Speaker: 0
02:12:14

In, I think, New Mexico. I’m gonna get in trouble for not knowing exactly. But we actually did atomic layering using this device called atom probe tomography, where you literally pick it apart atom by atom and get its three-d position. It’s a 40-year-old technology, so it’s nothing magic. So, and yeah. And it would just be very difficult to make it, you know.

Speaker: 0
02:12:36

And certainly, it would not be something that you would have dropped in the middle of the desert. Is it Socorro? Socorro. In the middle of the desert, you know, in the nineteen seventies or whenever it was. I wouldn’t say it’s impossible to make, but why you would do it is another question.

Speaker: 0
02:12:57

It’s clearly evidence of technology and manufacture, and that’s what interests me. First of all, why would you do it? Why would you create something, for instance, with the silicon and the magnesium with the altered ratios? Not the where-did-it-come-from. So what is it evidence of?

Speaker: 0
02:13:18

It’s clearly evidence of technology.

Speaker: 1
02:13:21

Was this technology available at the time this supposed crash happened?

Speaker: 0
02:13:25

Which one? This? No. Not at the level of precision that was done, and in a chunk like that. No. It just wasn’t.

Speaker: 1
02:13:34

It just wasn’t. So if that’s true, and if the chain of evidence is correct, and it really did come from that area, from that crash, that’s not a human creation.

Speaker: 0
02:13:45

Well, it wasn’t a crash. It was an object that a policeman had seen with beings short beings outside of it. And when it took off and left, he went over and found this piece, which actually, I personally have it now. But, you know, it’s hard to say what’s possible and what’s not possible.

Speaker: 0
02:14:11

So, you know, there’s plenty of military programs that make stuff that are way outside of mainstream capabilities right now. I mean, just look at the stealth bomber.

Speaker: 1
02:14:21

Right. For

Speaker: 0
02:14:21

instance, in the sky, the stealth bomber is just remarkable.

Speaker: 1
02:14:24

Is it possible they were doing that in 1970?

Speaker: 0
02:14:26

Maybe. Maybe. So that’s why I always leave open the possibility, which is why I’m gonna go back to this atomic imager thing that I’m making. It’s like there’s a level of evidence that I think can be produced with atomic imaging that goes beyond what we know anybody can make.

Speaker: 0
02:14:49

Right? So that’s my reason for wanting to do it. Because, you know, look, I can make money on it looking at alloys and nanomaterials, etcetera, and that’s gonna be the purpose of making the instrument. That’s how it will be a company, but it will have value elsewhere.

Speaker: 0
02:15:10

So the reason that I got interested in it was frankly for looking at chromosomes, but then I realized maybe it would be useful for these other things as well, which has kind of propelled my interest in it.

Speaker: 1
02:15:23

Well, Jacques Vallee is such a valuable researcher because he’s so logical about the way he handles things, and he doesn’t jump to any conclusions. Yep. And his descriptions of these materials and the origin of these materials is really compelling. Because it’s just like, if that’s not really possible to make in 1970, then someone help me out. Yeah. What is that? Yeah. And is it possible to make today?

Speaker: 1
02:15:48

And how much would it cost? Right. And where would you do it?

Speaker: 0
02:15:51

Well, that’s why the magnesium ratio thing was you know, when I first estimated it, it was like, this is millions and millions of dollars, and why would you leave it on a beach in Ubatuba, Brazil? Right. You know, it just seems unlikely. Nothing’s impossible

Speaker: 1
02:16:08

No. But unlikely. Well

Speaker: 0
02:16:11

And then it’s usually the chain of evidence. There’s lots of materials that you might find that are unusual. And believe me, I get rocks sent to me at my lab in the mail that people say, oh, this is unusual. No. It’s a rock. Sorry. It’s a rock.

Speaker: 0
02:16:24

But, you know, I have not yet been given anything of which I could definitively say, this is not something a human might have been able to make. It’d be difficult, but not impossible yet. And that’s because the level of resolution required to claim something is impossible is something we actually don’t even have yet.

Speaker: 0
02:16:51

Does that make sense?

Speaker: 1
02:16:53

Yes. That does make sense.

Speaker: 0
02:16:54

So my whole career has been inventing instruments that I felt were inevitable but not yet possible. But I could see a path to making them, and so I said to most people, get out of my way. I’m gonna do this, because I know once I’ve got it, it will become valuable to everybody. That’s what made my career in immunology, making a succession of instruments like that and then making them available to the community.

Speaker: 0
02:17:22

So I think the next level is atomic, because we now know you can pick up and look at any of the major, you know, physics journals today. Everything is all about these weird exotic particles that exist in metamaterials down at the atomic level, with vague and strange capabilities that will change their utility either as room-temperature superconductors or different kinds of electronic components that might be better quantum computer circuits and qubits.

Speaker: 0
02:17:53

It’s all down at that level. But to do so requires a level of engineering that we don’t have I mean, never mind reading what it is. Putting it together in the first place is what’s still required. And so if we don’t know how to put it together in the first place, then reading it, knowing that it can exist, and then associating it with a function is the value that I’m looking to bring.

Speaker: 1
02:18:21

Well, this brings me to the idea of crash retrieval and the idea that these crash retrievals started a long time ago, and that Roswell was just one of many. There’s another one that was near Roswell that apparently was even more significant, but didn’t get in the newspaper.

Speaker: 0
02:18:40

Trinity, are you talking about? I think that was the one that Jacques was involved with studying.

Speaker: 1
02:18:46

I don’t know. I’m basing this off of Richard Dolan’s book. Okay. But at the end of the day, the point being that if they did do that, if they really did back-engineer something, and then they started these completely top-secret scientific research projects where they were developing alloys that had never existed before with techniques that they had never really even considered

Speaker: 0
02:19:11

Mhmm.

Speaker: 1
02:19:11

Because they got it all from some spaceship. Well, that’s where it’s really crazy if you don’t disclose this information, because you’re basically putting a bottleneck on human evolution, human technological evolution Mhmm. And our understanding of what’s actually possible. Right.

Speaker: 0
02:19:29

I agree. And, you know, if you’re going to excite the next generation of scientists in this country and you’re gonna bring economic prosperity to this country, then we should I wouldn’t say democratize it and put it all out on the Internet. I understand all the reasons why you might not Right. Want to, but you need to excite the populace.

Speaker: 0
02:19:51

I mean, my laboratory at Stanford for probably the last ten years is 90% foreigners. Not because I don’t want to take more Americans, but because Americans just don’t go into the sciences anymore. They don’t study math. You know, they aren’t encouraged to approach us. So we’re importing a lot of our scientists from overseas. Well, guess what?

Speaker: 0
02:20:20

A good third of them end up going back and bringing all the technology that they invented here back there and creating competitors. Now maybe that’s good on a global scale, you know, but maybe it’s not something that we want to encourage, on a local scale if we want to maintain our technological superiority.

Speaker: 0
02:20:38

And we’re basically governed by lawyers. China is governed by engineers. You know? I mean

Speaker: 1
02:20:45

You see the results in their drone technology and electric cars and the things that are coming out of China recently.

Speaker: 0
02:20:50

Their Politburo is almost entirely engineers and scientists. Interesting. Yeah. There’s a little article in the Atlantic recently about that.

Speaker: 1
02:20:59

Big advantage.

Speaker: 0
02:21:00

Yeah. So people were making these we have lawyers looking for all the reasons why something should or shouldn’t be done, the liabilities. They’re looking at things as to what’s possible.

Speaker: 1
02:21:11

When you’re looking at these UAP things that people bring you, is there one that stands out as being the most compelling to you? One event?

Speaker: 0
02:21:22

Well, both the Council Bluffs and the Ubatuba event are interesting to me.

Speaker: 1
02:21:26

Because of the physical material?

Speaker: 0
02:21:28

Because of the physical material itself. I mean, I’m, at the end of the day, a physicalist. You know? I don’t like all the anecdotes. I mean, a thousand anecdotes make a good story, a good campfire. I think there’s statistical value in people seeing the same thing again and again, and there’s a truth to it.

Speaker: 0
02:21:48

But as you know, I can believe everything I want to around that, and many of the statements that I’m purported to have said are around my beliefs. As opposed to when I put on my scientist hat and I try to convince another scientist, I can only provide this data and this evidence, and I don’t yet have these materials.

Speaker: 0
02:22:06

Now maybe they exist, and maybe, you know, people like David Grusch will be able to pry them out of the clammy hands of those who wanna keep it where it is. But give me one piece of that, and I will do wonders with it.

Speaker: 1
02:22:24

Mhmm. Yeah.

Speaker: 0
02:22:25

I mean, that’s why I’m so excited about the UAP Disclosure Act if it ends up becoming law, because there will be this ability to start to maybe eke some of this out. And, again, the reason why I think this commercial opportunity is the direction we wanna go, where we have a sort of public-private partnership, is that the defense budget in and of itself is a zero-sum game.

Speaker: 0
02:22:49

We’re taking money from one program to give to another. You know, whether you’re taking it from your taxes, you’re taking it from veterans’ insurance, etcetera, it’s a zero-sum game. Whereas if you bring the investment community in, now you’re bringing in people who are willing to take a chance and willing to take a risk, and you’re not using the public’s money anymore.

Speaker: 1
02:23:09

Mhmm.

Speaker: 0
02:23:09

And that excites me. I mean, the reason why I wanted to go back to Stanford is because the entrepreneurial environment there which is actually almost homegrown here in Austin now is really what drives innovation. And so I want to excite that kind of community.

Speaker: 0
02:23:28

And again, the Sol Foundation is a place where we can bring people in, and we’ve got investors who show up now who are talking to people about their ideas and what would we do with this. And so it almost has a self-propelling movement now, where I don’t need to be standing on a, you know, wooden box somewhere in the middle of the park saying, look at this.

Speaker: 0
02:23:54

Look at this. People are just doing it now. There’s now almost a whole cottage industry of small groups or formalized groups who are doing this independently now. So it’s almost like it’s inevitable. Sky Watcher as an example. You probably know of the Sky Watcher group. Yeah. I’ve heard of it. And Jake Barber.

Speaker: 1
02:24:16

And they just stopped operations? Did something happen?

Speaker: 0
02:24:19

No. It’s strange, because people said, oh, we stopped. No. Actually, it had been determined from the beginning that we were gonna go from January until July or August and collect data. And now we’re in the okay-what-does-the-data-mean phase, where we’re literally going through the data, looking at the data files, and as I said before, we’re filtering the data.

Speaker: 0
02:24:42

We’re looking for the obvious mistakes

Speaker: 1
02:24:44

Mhmm.

Speaker: 0
02:24:45

Etcetera. And so, no. They’ve not stopped.

Speaker: 1
02:24:50

Yeah. There was something on Twitter about the equipment.

Speaker: 0
02:24:55

I forget. No. So James Fowler, one of the guys who brought a lot of his equipment and technology to us, decided that he wanted to basically go off and work in a DOD capacity as opposed to the research capacity. But he’s still advising us. I was just on a Zoom call with him last week going over the data files.

Speaker: 1
02:25:18

So explain this Sky Watcher thing to people, because it sounds insane.

Speaker: 0
02:25:23

Well, the idea behind it was that there might be ways to send a signal and get things to show up. And Right. James Fowler claimed that he had such a thing. I was at one of the events where something showed up. It was transient, momentary, but, you know, indisputable.

Speaker: 0
02:25:47

But it’s just like What did it look like? It was just a silver ball moving quickly through several frames of a video, which was not fast enough, frankly, to pick it up. We just saw it move. It went that way, and

Speaker: 1
02:26:01

then You didn’t see it with your naked eye? No.

Speaker: 0
02:26:02

I didn’t see it with the naked eye. Mhmm. Which, of course, is a problem.

Speaker: 1
02:26:06

Do they sometimes see things with the naked eye? One guy did. Yeah. One guy.

Speaker: 0
02:26:10

Oh, I mean So

Speaker: 1
02:26:11

are these things variable in their appearance?

Speaker: 0
02:26:13

I wish I had my I don’t have my phone here. But we do have a picture of one next to the helicopter about 200 feet away, and it’s just kind of a fuzzy white blob against a blue sky. But it was there, you know, and it’s not a cloud, and it’s not a balloon. Mhmm. You know?

Speaker: 0
02:26:33

It’s not discernible as anything obvious, but it was there, and it happened during one of these events out in the middle of the desert. Mhmm. And so, you know, the idea with Sky Watcher is to see if there are ways to get them to show up and, if so, in a reproducible manner, and then have the right kind of simultaneous multi-sensor capabilities to measure it, meaning radar, IR, visual, people on the ground.

Speaker: 1
02:27:07

What are they sending to get these things to go? What signal?

Speaker: 0
02:27:11

James has a signal that, unfortunately, he won’t share. I don’t know what it is.

Speaker: 1
02:27:16

He won’t let everybody know what the bat signal is?

Speaker: 0
02:27:18

Well, I mean, maybe yeah. Exactly. I mean, it sounds kinda silly, but why would you put that out on the Internet? Because, you know, you might render it useless. They’re like, ugh, I don’t gotta show up. Everybody’s using it now.

Speaker: 1
02:27:32

Oh, so you think it’s a trick? Like, it tricks them into showing up? I don’t know. I really don’t. Don’t you think they’d be smarter than that?

Speaker: 0
02:27:40

Well, that tells you something maybe about the level of smarts that might be incorporated into these, let’s say, dumber machines.

Speaker: 1
02:27:47

Mhmm.

Speaker: 0
02:27:47

Maybe yeah. That was exactly my thought. It’s like, why would you show up when you know what it is, unless there’s a reason you’re basically trying to train the monkeys what to do? Maybe you’re tricking the monkeys into sending I don’t know.

Speaker: 1
02:28:04

But isn’t there a group of people that just go out and, using their minds, they meditate Yeah. And supposedly, they have some success as well.

Speaker: 0
02:28:14

Yeah. There’s the CE5 groups that do that. And I’ve never participated in any of that, because I don’t know how to measure it. Uh-huh. You know, I’m more than willing to believe that there are technologies capable of measuring thoughts at a distance that might be, you know, super advanced.

Speaker: 0
02:28:35

I don’t believe you have to call it telepathy and magic. I think that, you know, if such a thing happens, there is a technology that might be able to read thoughts at a distance. Right.

Speaker: 1
02:28:46

Well, it’s not

Speaker: 0
02:28:47

I have no problem with that.

Speaker: 1
02:28:48

I don’t have a problem with that either. I don’t have a problem with the idea that consciousness is kind of vaguely and barely understood, and whatever our relationship to the universe itself and reality itself through consciousness is, it’s not fully understood. And, also, it might evolve just like all of our other intellectual capabilities.

Speaker: 0
02:29:14

Right. Well, I mean, think of it this way. You know, you and I are interacting with each other through quantum waves. My meat brain sees you as an object, but everything that you are sits in quantum spacetime down at the Planck level, and you’re not even mass. You’re just, in some people’s minds, a series of vibrating fields and objects.

Speaker: 0
02:29:34

And so we have sensors that see and hear each other and think about each other, but our consciousness somehow is embedded in space time. And so who’s to say that there’s not signals passing to and from that are vaguely able to be picked up by our meat brains that we don’t necessarily appreciate?

Speaker: 0
02:29:53

Right. Right? So just because I can’t think at you and you can’t hear me doesn’t mean that there aren’t perhaps brain organizations of some people that are a little bit better at hearing the echo than others.

Speaker: 1
02:30:08

Well, this is also probably the reason why when you go to the woods and there’s no cell phone signals, the world feels different.

Speaker: 0
02:30:14

Yeah.

Speaker: 1
02:30:14

Because you’re probably experiencing a bunch of signals that your brain vaguely interacts with. Right. That, you know, might not even necessarily be good for you. Right. But they’re out there and they’re a part of the world that you live in. Mhmm. And you just don’t have a radio.

Speaker: 0
02:30:30

Right.

Speaker: 1
02:30:30

Right? So you’re not, like, tuning into them. You don’t have a cell phone, so you can’t just, like, make calls with it, but Right. You’re experiencing it.

Speaker: 0
02:30:36

Right. Well, you know, our civilization is drowning us in constant noise. Yeah. And so maybe, you know, that drowns it out, and that’s why, with meditation, people claim that they can interact with other things. I don’t know.

Speaker: 1
02:30:50

Yeah. I don’t know either. So I saw an interview that you did where you were describing the sighting over off the coast of San Diego in 2004, the Nimitz sighting, where you said the amount of power why don’t you describe it? The amount of power that thing had to use to move the way it did. Right.

Speaker: 0
02:31:12

So it’s on radar. It’s on radar. So these are actually calculations by Kevin Knuth, a physicist from the University at Albany, in a published paper. Again, just speculation. But what he basically said was, how much power would it take to instantaneously accelerate from 50 feet over the ocean to 50 miles above the earth, whatever the number was, and instantaneously decelerate?

Speaker: 0
02:31:40

So it’s not just the amount of power to lift something. It’s the amount of power to accelerate and decelerate instantaneously. And so you can make simple physical calculations of a one-ton object, let’s say, and it’s more than the nuclear output of the United States for a year.
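The back-of-envelope flavor of that calculation can be reproduced in a few lines. The numbers below — a one-ton object, an ~80 km climb in under a second, a US nuclear fleet averaging roughly 95 GW — are illustrative assumptions, not Knuth's published radar figures, and the comparison comes out in power rather than annual energy.

```python
# Rough sketch of the power estimate described above. All inputs are
# illustrative assumptions, not Kevin Knuth's published radar figures.

mass_kg = 1000.0        # assumed ~1-ton object
alt_start_m = 15.0      # ~50 feet above the ocean
alt_end_m = 80_000.0    # ~50 miles altitude
transit_s = 0.8         # assumed sub-second transit

distance_m = alt_end_m - alt_start_m
avg_speed = distance_m / transit_s               # roughly 1e5 m/s
kinetic_j = 0.5 * mass_kg * avg_speed ** 2       # energy to reach that speed
potential_j = mass_kg * 9.81 * distance_m        # energy to gain the altitude
power_w = (kinetic_j + potential_j) / transit_s  # average power over the climb

us_nuclear_avg_w = 9.5e10  # ~95 GW: rough mean output of the US nuclear fleet

print(f"average speed: {avg_speed:,.0f} m/s")
print(f"required power: {power_w:.2e} W")
print(f"ratio to US nuclear output: {power_w / us_nuclear_avg_w:.0f}x")
```

Under these assumptions the required power lands in the terawatt range, tens of times the average output of the US nuclear fleet — which is the spirit of the comparison being made here, even though the exact multiple depends entirely on the assumed transit time and mass.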

Speaker: 0
02:31:59

And yet these things seem capable of doing that at will. So where are they getting the energy from? And I remember asking Hal a question like this years ago we were stepping into an elevator, Hal Puthoff and I and we were talking about his ideas about how these things might move.

Speaker: 0
02:32:16

And I said, so they’re cheating somehow, aren’t they? And his answer was, from our point of view, they’re cheating. From their point of view, they’re just using physics that we don’t understand yet. So where’s the energy coming from? What are they doing?

Speaker: 0
02:32:30

And so that might be, as a for-instance, a reason why you don’t want everybody having access to it. Yeah. Because any one of those objects is worse than a thermonuclear bomb. You shoot one of those things at a city, and that’s the end of the city. And if anybody could do it, you know

Speaker: 1
02:32:48

Well, maybe that’s the next step of human evolution, of the evolution of our society and civilization: that AI has to come into power before we have access to all this other stuff.

Speaker: 0
02:33:00

Right.

Speaker: 1
02:33:00

That we do need an AI government structure. Right. That we no longer require military intervention and Right. All the shit that is Agreed. The bane of civilization today. Mhmm. Because if you ask the average person today, do you envision a world where war doesn’t exist, most people are going to say no.

Speaker: 1
02:33:19

Right. The vast majority except for a few delusional hippies.

Speaker: 0
02:33:23

They’re gonna say no. Right?

Speaker: 1
02:33:24

But if you ask them, okay, given this superintelligent AI takes over the world and proves to be benevolent and really just wants to accentuate the life of Right. Human beings on Earth and make it better for everybody, then yes. Right. Then 100% yes.

Speaker: 0
02:33:42

Right.

Speaker: 1
02:33:42

Why would it want war?

Speaker: 0
02:33:43

Right.

Speaker: 1
02:33:44

Right? So maybe something like that has to take place before we get to a situation where, okay, this is how you really travel.

Speaker: 0
02:33:52

Right.

Speaker: 1
02:33:53

Right. Okay. Now that you’re not going to war anymore, listen. Now you can already talk about gravity bubbles.

Speaker: 0
02:33:58

You can already imagine the negatives, where people will say, oh, well, it’s the apocalyptic nanny state.

Speaker: 1
02:34:09

Mhmm.

Speaker: 0
02:34:09

Right? Where AI just basically takes care of you and humans devolve into something, which is why I think Yeah. A merger of human intellect with this, where it’s a synergy as opposed to an either/or. I don’t wanna be nanny-stated either. Right. I wanna use it to explore ideas or explore pleasure. I mean, I’m fine if people wanna be hedonistic and, you know, participate in virtual parties all day long Right.

Speaker: 0
02:34:38

For all I care. I don’t care. Right. But I think giving people the option to do whatever it is that they wanna do, it’s the most, I don’t know, it’s the most liberal and conservative way of living, because you’re allowed to do what you want to do. But we’re not, because we’re living at the behest of so many other structures.

Speaker: 1
02:35:01

Always. Yeah. Last question. What do you what’s your take on the Bob Lazar story?

Speaker: 0
02:35:09

Elements of truth, with a healthy dose of misinformation that perhaps he was provided. I don’t think that he’s entirely lying. He seems to know enough about things that the average person wouldn’t know. But, you know, I’ve heard from Eric Davis and others saying, you know, he’s a this, he’s a that.

Speaker: 0
02:35:41

I don’t know, because, you know, that’s why there are great people like Richard Dolan, who’s a wonderful writer of the history of the area, or people like Robert Powell or Michael Swords, who write just the facts without coming to too many conclusions. I don’t live in that world. That’s not my specialty.

Speaker: 0
02:36:06

My specialty is working with data and analyzing things and bringing rigorous science to it so that I can convince another scientist what is right or what is wrong. Because I won’t be happy... I mean, I’m pretty sure of what I know, but I want to validate that to my colleagues, if only to be able to say, I told you so. Right?

Speaker: 0
02:36:30

There’s a little bit of human pettiness in there.

Speaker: 1
02:36:34

You know? A little bit of pettiness is great motivation.

Speaker: 0
02:36:36

Yeah. But, you know, that’s, I think, again, enabling people to live in a world like that, where you can talk about these ideas without being ridiculed, is really, I think, the objective of what science should be and what open-minded, you know, non-theologically-dogmatic approaches should be.

Speaker: 0
02:36:56

It’s like, accusing a scientist of being a priest, that’s the best way to really upset them. But pointing out that what they’re doing is mimicking dogma and priesthood is the only way to shame them into doing the right thing. Does that make sense?

Speaker: 1
02:37:16

It does. It does. Well, listen, man. I’m glad we finally did this.

Speaker: 0
02:37:21

Yes.

Speaker: 1
02:37:21

Thank you so much for being here. Thank you so much for all the research that you’re currently involved in and all the stuff that you’ve done, and it’s been amazing talking to you.

Speaker: 0
02:37:29

I really appreciate it. Thank you.

Speaker: 1
02:37:31

Thank you so much. Alright. Okay. Bye, everybody.
