#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI



#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI Podcast Episode Description

Terence Tao is widely considered to be one of the greatest mathematicians in history. He won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed to a wide range of fields from fluid dynamics with the Navier-Stokes equations to mathematical physics & quantum mechanics, prime numbers & analytic number theory, harmonic analysis, compressed sensing, random matrix theory, combinatorics, and progress on many of the hardest problems in the history of mathematics.

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep472-sc

See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:

Transcript for Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI | Lex Fridman Podcast #472

CONTACT LEX:

Feedback – give feedback to Lex: https://lexfridman.com/survey

AMA – submit questions, videos or call-in: https://lexfridman.com/ama

Hiring – join our team: https://lexfridman.com/hiring

Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:

Terence’s Blog: https://terrytao.wordpress.com/

Terence’s YouTube: https://www.youtube.com/@TerenceTao27

Terence’s Books: https://amzn.to/43H9Aiq

SPONSORS:

To support this podcast, check out our sponsors & get discounts:

Notion: Note-taking and team collaboration.

Go to https://notion.com/lex

Shopify: Sell stuff online.

Go to https://shopify.com/lex

NetSuite: Business management software.

Go to http://netsuite.com/lex

LMNT: Zero-sugar electrolyte drink mix.

Go to https://drinkLMNT.com/lex

AG1: All-in-one daily nutrition drink.

Go to https://drinkag1.com/lex

OUTLINE:

(00:00) – Introduction

(00:36) – Sponsors, Comments, and Reflections

(09:49) – First hard problem

(15:16) – Navier–Stokes singularity

(35:25) – Game of life

(42:00) – Infinity

(47:07) – Math vs Physics

(53:26) – Nature of reality

(1:16:08) – Theory of everything

(1:22:09) – General relativity

(1:25:37) – Solving difficult problems

(1:29:00) – AI-assisted theorem proving

(1:41:50) – Lean programming language

(1:51:50) – DeepMind’s AlphaProof

(1:56:45) – Human mathematicians vs AI

(2:06:37) – AI winning the Fields Medal

(2:13:47) – Grigori Perelman

(2:26:29) – Twin Prime Conjecture

(2:43:04) – Collatz conjecture

(2:49:50) – P = NP

(2:52:43) – Fields Medal

(3:00:18) – Andrew Wiles and Fermat’s Last Theorem

(3:04:15) – Productivity

(3:06:54) – Advice for young people

(3:15:17) – The greatest mathematician of all time

PODCAST LINKS:

– Podcast Website: https://lexfridman.com/podcast

– Apple Podcasts: https://apple.co/2lwqZIr

– Spotify: https://spoti.fi/2nEwCF8

– RSS: https://lexfridman.com/feed/podcast/

– Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4

– Clips Channel: https://www.youtube.com/lexclips

#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI Podcast Episode Summary

Podcast Episode Summary: Lex Fridman with Terence Tao

Key Points & Major Topics:
– The episode features Terence Tao, renowned mathematician and Fields Medalist, discussing his career, mathematical thinking, and the evolving landscape of mathematics with technology.
– A significant portion of the conversation centers on the use of computer-assisted proofs, particularly the Lean programming language, and how formal proof assistants are transforming mathematical collaboration and verification.
– Tao and Fridman explore the philosophical and practical differences between traditional pen-and-paper proofs and formalized, computer-verified proofs, highlighting the increased rigor and collaborative potential of the latter.
– The discussion touches on the future of AI in mathematics and business, including the potential for AI agents to automate complex tasks and the importance of human intuition for edge cases.
– Tao shares insights into the collaborative process in mathematics, the challenges of large-scale projects, and the importance of clear attribution and recognition in group work.

Important Guests:
– Terence Tao: The main guest, celebrated for his breadth of contributions to mathematics and his collaborative approach.
– Lex Fridman: Host, guiding the conversation and providing context.

Actionable Insights & Advice:
– For students struggling with math: Tao recommends focusing on understanding concepts deeply, seeking good mentors, and not being discouraged by setbacks.
– For young people choosing careers: He emphasizes adaptability, transferable skills, and the value of reasoning and problem-solving over narrow specialization.
– In collaborative work: Clear communication, division of labor, and transparent attribution of contributions are crucial, especially in large teams.
– On using formal proof assistants: Start by formalizing existing proofs to build familiarity, and leverage tools like Lean for more reliable and collaborative mathematics.

Recurring Themes & Overall Messages:
– The synergy between human intuition and technological tools is essential for progress in mathematics and beyond.
– Recognition and collaboration are evolving with technology, but human mentorship and adaptability remain vital.
– The importance of humility, continuous learning, and intellectual curiosity is emphasized throughout.

Overall, the episode offers a deep dive into the intersection of mathematics, technology, and human collaboration, with practical advice for learners and professionals alike.


#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI Podcast Episode Transcript (Unedited)

Speaker: 0
00:00

The following is a conversation with Terence Tao, widely considered to be one of the greatest mathematicians in history, often referred to as the Mozart of math. He won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed groundbreaking work to a truly astonishing range of fields in mathematics and physics.

Speaker: 0
00:22

This was a huge honor for me for many reasons, including the humility and kindness that Terry showed to me throughout all our interactions. It means the world. And now, a quick few-second mention of each sponsor. Check them out in the description or at lexfridman.com/sponsors. It's the best way to support this podcast.

Speaker: 0
00:45

We got Notion for teamwork, Shopify for selling stuff online, NetSuite for your business, LMNT for electrolytes, and AG1 for your health. Choose wisely, my friends. And now on to the full ad reads. They're all here in one place. I do try to make them interesting by talking about some random things I'm reading or thinking about. But if you skip, please still check out the sponsors.

Speaker: 0
01:07

I enjoy their stuff. Maybe you will too. To get in touch with me for whatever reason, go to lexfridman.com/contact. Alright. Let's go.

Speaker: 0
01:15

This episode is brought to you by Notion, a note taking and team collaboration tool. I use Notion for everything, for personal notes, for planning this podcast, for collaborating with other folks, and for super boosting all of those things with AI because Notion does a great job of integrating AI into the whole thing.

Speaker: 0
01:34

You know, what's fascinating is the mechanisms of human memory before we had widely adopted technologies and tools for writing and recording stuff, certainly before the computer. So you can look at medieval monks, for example, that would use the now well-studied memory techniques like the memory palace, the spatial memory techniques, to memorize entire books.

Speaker: 0
01:59

That is certainly the effect of technology, started by Google search and moving to all the other things like Notion: we're offloading more and more and more of the task of memorization to the computers, which I think is probably a positive thing, because it frees more of our brain to do deep reasoning, whether that's deep-dive focused specialization or the generalist type of thinking, versus memorizing facts.

Speaker: 0
02:28

Although I do think that there's a kind of background model that's formed when you memorize a lot of things, and from there, from inspiration, arises discovery. So I don't know. There could be a great cost to offloading most of our memorization to the machines. But it is the way of the world.

Speaker: 0
02:50

Try Notion AI for free when you go to notion.com/lex. That's all lowercase, notion.com/lex, to try the power of Notion AI today. This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great-looking online store. Our future, friends, has a lot of robots in it. Looking into that distant future, you have Amazon warehouses with millions of robots that move packages around.

Speaker: 0
03:16

You have Tesla bots everywhere, in the factories and in the home and on the streets, and the baristas. All of that, that's our future. Right now, you have something like Shopify that connects a lot of humans in the digital space. But more and more, there will be an automated, digitized, AI-fueled connection between humans in the physical space.

Speaker: 0
03:38

Like a lot of futures, there’s going to be negative things and there’s going to be positive things. And like a lot of possible futures, there’s little we could do about stopping it. All we can do is steer it in a direction that, enables human flourishing. Instead of hiding in fear or fear mongering, be part of the group of people that are building the best possible trajectory of, human civilization.

Speaker: 0
04:04

Anyway, sign up for a $1 per month trial period at shopify.com/lex. That's all lowercase. Go to shopify.com/lex to take your business to the next level today. This episode is also brought to you by NetSuite, an all-in-one cloud business management system. There's a lot of messy components to running a business, and I must ask and I must wonder: at which point is there going to be an AI, AGI-like CFO of a company?

Speaker: 0
04:34

An AI agent that handles most if not all of the financial responsibilities, or all of the things that NetSuite is doing. At which point will NetSuite increasingly leverage AI for those tasks? I think probably it will integrate AI into its tooling, but I think there's a lot of edge cases where we need the human wisdom, the human intuition grounded in years of experience, in order to make the tricky decisions around the edge cases.

Speaker: 0
05:06

I suspect that running a company is a lot more difficult than people think. But there's a lot of sort of paperwork-type stuff that could be automated, could be digitized, could be summarized, integrated, and used as a foundation for the said humans to make decisions. Anyway, that's our future. Download the CFO's Guide to AI and Machine Learning at netsuite.com/lex. That's netsuite.com/lex.

Speaker: 0
05:32

This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte mix. Now, I run along the river often and get to meet some really interesting people. One of the people I met was preparing for his ultramarathon. I believe he said it was a 100-miler. And that, of course, sparked in me the thought that I need, for sure, to do one myself.

Speaker: 0
05:59

Some time ago now, I was planning to do something with David Goggins, and I think that's still on the sort of to-do list between the two of us, to do some crazy physical feat. Of course, the thing that is crazy for me is a daily activity for Goggins. But nevertheless, I think it's important, in the physical domain, the mental domain, and all domains of life, to challenge yourself.

Speaker: 0
06:25

And athletic endeavors are one of the most sort of crisp, clear, well-structured ways of challenging yourself. But there's all kinds of things: writing a book. To be honest, having kids and marriage and relationships and friendships, all of those, if you take it seriously, if you go all in and do it right, I think that's a serious challenge, because most of us are not prepared for it.

Speaker: 0
06:50

And you learn along the way. And if you have the rigorous feedback loop of improving constantly and growing as a person and really doing a great job of the thing, I think that might as well be an ultramarathon. Anyway, get a sample pack for free with any purchase. Try it at drinkLMNT.com/lex.

Speaker: 0
07:10

And finally, this episode is also brought to you by AG1, an all-in-one daily drink to support better health and peak performance. I drink it every day. I'm preparing for a conversation on drugs in the Third Reich. And funny enough, one way to analyze Hitler's biography is to look at what he consumed throughout, and Norman Ohler does a great job of analyzing all of that and tells the story of Hitler and the Third Reich in a way that hasn't really been touched by historians before.

Speaker: 0
07:48

It's always nice to look at key moments in history through a perspective that's not often taken. Anyway, I mention that because I think Hitler had a lot of stomach problems, and so that was the motivation for getting a doctor. The doctor that eventually would fill him up with all kinds of drugs.

Speaker: 0
08:08

But the doctor earned Hitler's trust by giving him probiotics, which was a kind of revolutionary thing at the time. And so that really helped deal with whatever stomach issues Hitler was having. All of that is a reminder that war is waged by humans, and humans are biological systems, and biological systems require fuel and nutrients and all of that kind of stuff.

Speaker: 0
08:32

And what you put in your body will affect your performance in the short term and the long term. That's true with meth. That's true with Hitler, to his last days in the bunker in Berlin, with all the cocktail of drugs that he was taking. So I think I got myself somewhere deep, and I'm not sure how to get out of this. It deserves a multi-hour conversation versus a few seconds of mention.

Speaker: 0
08:57

But, yeah, all of that was sparked by my thinking of AG1 and how much I love it. I appreciate that you're listening to this and coming along for the wild journey that these ad reads are. Anyway, AG1 will give you a one-month supply of fish oil when you sign up at drinkag1.com/lex. This is the Lex Fridman podcast.

Speaker: 0
09:22

To support it, please check out our sponsors in the description or at lexfridman.com/sponsors. And now, dear friends, here's Terence Tao. What was the first really difficult research-level math problem that you encountered? One that gave you pause, maybe.

Speaker: 1
09:57

Well, I mean, in your undergraduate education, you learn about the really hard, impossible problems, like the Riemann hypothesis, the twin prime conjecture. You can make problems arbitrarily difficult. That's not really a problem. In fact, there are even problems that we know to be unsolvable.

Speaker: 1
10:10

What's really interesting are the problems just on the boundary between what we can do fairly easily and what is hopeless. What are problems where existing techniques can do, like, 90% of the job and then you just need that remaining 10%? I think as a PhD student, the Kakeya problem certainly caught my eye, and it just got solved, actually.

Speaker: 1
10:32

It's a problem I've worked on a lot in my early research. Historically, it came from a little puzzle posed by the Japanese mathematician Soichi Kakeya in, like, 1918 or so. So the puzzle is that you have a needle on the plane. Or think of it like driving on a road or something, and you want to execute a U-turn.

Speaker: 1
10:54

You want to turn the needle around, but you want to do it in as little space as possible. So you want to use this little area in order to turn it around. But the needle is infinitely maneuverable. So you can imagine, as a unit needle, just spinning it around its center.

Speaker: 1
11:12

And I think that gives you a disc of area, I think, pi over four. Or you can do a three-point U-turn, which is what we teach people in driving schools to do. And that actually takes area pi over eight. So it's a little bit more efficient than a rotation. And so for a while, people thought that was the most efficient way to turn things around.

Speaker: 1
11:31

But Besicovitch showed that, in fact, you could actually turn the needle around using as little area as you wanted. So 0.001. There was some really fancy multi back-and-forth U-turn thing that you could do to turn the needle around. And in so doing, it would pass through every intermediate direction.
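To make the areas in this exchange concrete, here is the arithmetic in one place (only the facts stated above; the labels are added for readability):

```latex
% rotating a unit needle about its midpoint sweeps a disc of radius 1/2:
A_{\mathrm{rotate}} = \pi \left(\tfrac{1}{2}\right)^{2} = \tfrac{\pi}{4} \approx 0.785
% a three-point U-turn (inside a deltoid) needs only
A_{\mathrm{deltoid}} = \tfrac{\pi}{8} \approx 0.393
% Besicovitch: for every \varepsilon > 0 there is a set of area less than
% \varepsilon inside which the needle can be turned through all directions.
```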

Speaker: 0
11:50

Is this in the two dimensional plane? This

Speaker: 1
11:52

is in the two dimensional plane. Yeah. So we understand everything in two dimensions. So the next question is what happens in three dimensions? So suppose, like, the Hubble Space Telescope — it's a tube in space, and you want to observe every single star in the universe. So you want to rotate the telescope to reach every single direction. And here's the unrealistic part: suppose that space is at a premium, which it totally is not.

Speaker: 1
12:12

You want to occupy as little volume as possible in order to rotate your needle around, in order to see every single star in the sky. How small a volume do you need to do that? And so you can modify Besicovitch's construction. And so if your telescope has zero thickness, then you can use as little volume as you need. That's a simple modification of the two dimensional construction.

Speaker: 1
12:32

But the question is, if your telescope is not zero thickness but just very, very thin, some thickness delta, what is the minimum volume needed to be able to see every single direction, as a function of delta? So as delta gets smaller, as your needle gets thinner, the volume should go down. But how fast does it go down?

Speaker: 1
12:51

And the conjecture was that it goes down very, very slowly, like logarithmically, roughly speaking. And that was proved after a lot of work. So this seems like a puzzle. Why is it interesting? So it turns out to be surprisingly connected to a lot of problems in partial differential equations, in number theory, in geometry, in combinatorics.

Speaker: 1
13:11

For example, in wave propagation: if you splash some water around, you create water waves and they travel in various directions. But waves exhibit both particle and wave type behavior. So you can have what's called a wave packet, which is like a very localized wave that is localized in space and moving in a certain direction in time.

Speaker: 1
13:29

And so if you plot it in both space and time, it occupies a region which looks like a tube. And so what can happen is that you can have a wave which initially is very dispersed, but it all focuses at a single point later in time. Like, you can imagine dropping a pebble into a pond and the ripples spread out.

Speaker: 1
13:47

But then if you time-reverse that scenario — and the equations of wave motion are time reversible — you can imagine ripples that are converging to a single point, and then a big spike occurs, maybe even a singularity. And so it's possible to do that. And geometrically, what's going on is that there are these light rays.

Speaker: 1
14:07

So if this wave represents light, for example, you can imagine this wave as the superposition of photons, all traveling at the speed of light. They all travel on these light rays, and they're all focusing at this one point. So you can have a very dispersed wave focus into a very concentrated wave at one point in space and time, but then it defocuses again and it separates.

Speaker: 1
14:27

But potentially, if the conjecture had a negative solution, what that would mean is that there's some very efficient way to pack tubes pointing in different directions into a very, very narrow region, a very narrow volume. Then you would also be able to create waves — there'd be some arrangement of waves that starts out very, very dispersed.

Speaker: 1
14:46

But they would concentrate not just at a single point; there'd be a lot of concentrations in space and time. And you could create what's called a blowup, where these waves, their amplitude becomes so great that the laws of physics that they're governed by are no longer wave equations, but something more complicated and nonlinear.

Speaker: 1
15:07

And so in mathematical physics, we care a lot about whether certain equations, like wave equations, are stable or not — whether they can create these singularities. There's a famous unsolved problem called the Navier–Stokes regularity problem. So the Navier–Stokes equations are the equations that govern fluid flow for incompressible fluids like water.

Speaker: 1
15:24

The question asks: if you start with a smooth velocity field of water, can it ever concentrate so much that the velocity becomes infinite at some point? That's called a singularity. We don't see that in real life. If you splash around water in the bathtub, it won't explode on you, or have water leaving at the speed of light or anything. But potentially, it is possible.
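For reference, the incompressible Navier–Stokes equations being discussed, written in a standard textbook form (velocity field u, pressure p, kinematic viscosity ν, unit density); this is background notation, not something read out in the conversation:

```latex
\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \, \Delta u,
\qquad \nabla \cdot u = 0
```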

Speaker: 1
15:47

And in fact, in recent years, the consensus has drifted towards the belief that, in fact, for certain very special initial configurations of, say, water, singularities can form. But people have not yet been able to actually establish this.

Speaker: 1
16:06

The Clay Foundation has these seven Millennium Prize problems — has a million dollar prize for solving one of these problems. So this is one of them. Of these seven, only one of them has been solved, the Poincaré conjecture, by Perelman. So the Kakeya conjecture is not directly related to the Navier–Stokes problem, but understanding it would help us understand some aspects of things like wave concentration, which would indirectly probably help us understand the Navier–Stokes problem better.

Speaker: 0
16:32

Can you speak to the Navier–Stokes? So the existence and smoothness, like you said, Millennium Prize problem. Right. You've made a lot of progress on this one. In 2016, you published a paper, "Finite time blowup for an averaged three-dimensional Navier–Stokes equation." Right. So we're trying to figure out — this thing usually doesn't blow up. Right. But can we say for sure it never blows up? Right.

Speaker: 1
16:56

Yeah. So, yeah, that is literally the million-dollar question. Yeah. So this is what distinguishes mathematicians from pretty much everybody else. Like, if something holds 99.99% of the time, that's good enough for most things. But mathematicians are one of the few people who really care about whether 100%, really 100%, of all situations are covered. Yeah.

Speaker: 1
17:20

So most fluids, most of the time — water does not blow up. But could you design a very special initial state that does this?

Speaker: 0
17:28

And maybe we should say that this is a set of equations that govern, in the field of fluid dynamics — yes — trying to understand how fluid behaves. And it actually turns out that fluid is an extremely complicated thing to try to model.

Speaker: 1
17:42

Yeah. So it has practical importance. So this Clay Prize problem concerns what's called the incompressible Navier–Stokes, which governs things like water. There's something called the compressible Navier–Stokes, which governs things like air. And that's particularly important for weather prediction. Weather prediction does a lot of computational fluid dynamics.

Speaker: 1
17:56

A lot of it is actually just trying to solve the Navier–Stokes equations as best they can. Also gathering a lot of data so that they can initialize the equation. There's a lot of moving parts. So it's a very important problem practically.

Speaker: 0
18:09

Why is it difficult to prove general things about this set of equations, like it not blowing up?

Speaker: 1
18:17

The short answer is Maxwell's demon. So Maxwell's demon is a concept in thermodynamics. Like, if you have a box of two gases, oxygen and nitrogen, and maybe you start with all the oxygen on one side and the nitrogen on the other side, but there's no barrier between them, then they will mix.

Speaker: 1
18:30

And they should stay mixed. Right? There's no reason why they should unmix. But in principle, because of all the collisions between them, there could be some sort of weird conspiracy. Like, maybe there's a microscopic demon called Maxwell's demon that, every time an oxygen and a nitrogen atom collide, will make them bounce off in such a way that the oxygen sort of drifts onto one side and the nitrogen goes to the other.

Speaker: 1
18:50

And you could have an extremely improbable configuration emerge, which we never see. Statistically, it's extremely unlikely. But mathematically, it's possible that this can happen, and we can't rule it out. And this is a situation that shows up a lot in mathematics. A basic example is the digits of pi, 3.14159 and so on. The digits look like they have no pattern, and we believe they have no pattern.

Speaker: 1
19:17

In the long term, you should see as many ones and twos and threes as fours and fives and sixes. There should be no preference in the digits of pi to favor, let's say, seven over eight. But maybe there is some demon in the digits of pi such that every time you compute more and more digits, it biases one digit over another.
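As a rough empirical illustration of the "no preferred digit" claim, here is a small Python sketch; it assumes the mpmath library is available and is not part of the conversation:

```python
# Count digit frequencies in a long decimal expansion of pi.
# Each frequency should hover near 0.1 if no digit is favored.
from collections import Counter
from mpmath import mp, nstr

mp.dps = 10_000                               # work with ~10,000 decimal digits
digits = nstr(mp.pi, 10_000)                  # "3.14159..."
digits = digits.replace("3.", "", 1)[:9_000]  # keep a safe prefix of the fractional part

freq = Counter(digits)
for d in "0123456789":
    print(d, freq[d] / len(digits))
```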

Speaker: 1
19:35

And this is a conspiracy that should not happen. There's no reason it should happen. But there's no way to prove it with our current technology. Okay. So getting back to Navier–Stokes: a fluid has a certain amount of energy, and because the fluid is in motion, the energy gets transported around.

Speaker: 1
19:52

And water is also viscous. So if the energy is spread out over many different locations, the natural viscosity of the fluid will just damp out the energy, and it will go to zero. And this is what happens when we actually experiment with water. You splash around, there's some turbulence and waves and so on, but eventually it settles down.

Speaker: 1
20:14

And the lower the amplitude, the smaller the velocity, the more calm it gets. But potentially, there is some sort of demon that keeps pushing the energy of the fluid into smaller and smaller scales. And it will move faster and faster. And at faster speeds, the effective viscosity is relatively less.

Speaker: 1
20:31

And so it could happen that it creates some sort of what's called a self-similar blowup scenario, where the energy of your fluid starts off at some large scale and then transfers its energy into a smaller region of the fluid, which then, at a much faster rate, moves into an even smaller region, and so on. And each time it does this, it takes maybe half as long as the previous one.

Speaker: 1
21:00

And then you could actually converge to all the energy concentrating at one point in a finite amount of time, and that scenario is called finite time blowup. So in practice, this doesn't happen. Water is what's called turbulent. So it is true that if you have a big eddy of water, it will tend to break up into smaller eddies, but it won't transfer all its energy from one big eddy into one smaller eddy.

Speaker: 1
21:26

It will transfer it into maybe three or four. And then those split up into maybe three or four smaller eddies of their own. And so the energy gets dispersed to the point where the viscosity can then keep everything under control. But if it can somehow concentrate all the energy, keep it all together, and do it fast enough that the viscous effects don't have enough time to calm everything down, then this blowup can occur.

Speaker: 1
21:50

So there are papers which had claimed that, oh, you just need to take into account conservation of energy and just carefully use the viscosity, and you can keep everything under control — not just for Navier–Stokes, but for many, many types of equations like this. And so in the past, there have been many attempts to try to obtain what's called global regularity for Navier–Stokes, which is the opposite of finite time blowup: that the velocity stays smooth.

Speaker: 1
22:10

And they all failed. There was always some sign error or some subtle mistake, and it couldn't be salvaged. So what I was interested in doing was trying to explain why we were not able to disprove finite time blowup. I couldn't do it for the actual equations of fluids, which were too complicated.

Speaker: 1
22:28

But if I could average the equations of motion of Navier–Stokes — so, physically, if I could turn off certain types of ways in which water interacts and only keep the ones that I want. So in particular, if there's a fluid and it could transfer its energy from a large eddy into this small eddy or this other small eddy, I would turn off the energy channel that would transfer energy to this one and direct it only into this smaller eddy, while still preserving the law of conservation of energy.

Speaker: 0
22:58

So you’re trying to make a blow up?

Speaker: 1
22:59

Yeah. So I basically engineer a blowup by changing the laws of physics, which is one thing that mathematicians are allowed to do. We can change the equation.

Speaker: 0
23:07

How does that help you get closer to the proof of something?

Speaker: 1
23:10

Right. So it provides what's called an obstruction in mathematics. So what I did was, basically, I turned off certain parts of the equation — which usually, when you turn off certain interactions, makes it less nonlinear, more regular, and less likely to blow up.

Speaker: 1
23:26

But I found that by turning off a very well designed set of interactions, I could force all the energy to blow up in finite time. So what that means is that if you wanted to prove global regularity for Navier–Stokes, for the actual equation, you must use some feature of the true equation which my artificial equation does not satisfy.

Speaker: 1
23:51

So it rules out certain approaches. The thing about math is that it's not just about taking a technique that is going to work and applying it; you also need to not take the techniques that don't work. And for the problems that are really hard, often there are dozens of ways that you might think might apply to solve the problem.

Speaker: 1
24:13

But it's only after a lot of experience that you realize there's no way that these methods are going to work. So having these counterexamples for nearby problems kind of rules things out. It saves you a lot of time, because you're not wasting energy on things that you now know cannot possibly ever work.

Speaker: 0
24:30

How deeply connected is it to that specific problem of fluid dynamics or just some more general intuition you build up about mathematics?

Speaker: 1
24:37

Right. Yeah. So the key phenomenon that my technique exploits is what's called supercriticality. So in partial differential equations, often these equations are like a tug of war between different forces. So in Navier–Stokes, there's the dissipation force coming from viscosity, and it's very well understood. It's linear. It calms things down.

Speaker: 1
24:56

If viscosity was all there was, then nothing bad would ever happen. But there's also transport: energy in one location of space can get transported, because the fluid is in motion, to other locations. And that's a nonlinear effect, and that causes all the problems. So there are these two competing terms in the Navier–Stokes equation, the dissipation term and the transport term.

Speaker: 1
25:19

If the dissipation term dominates, if it's large, then basically you get regularity. And if the transport term dominates, then we don't know what's going on. It's a very nonlinear situation. It's unpredictable. It's turbulent. So sometimes these forces are in balance at small scales but not in balance at large scales, or vice versa.

Speaker: 1
25:38

So Navier–Stokes is what's called supercritical. At smaller and smaller scales, the transport terms are much stronger than the viscosity terms — the viscosity terms are the things that calm things down. And so this is why the problem is hard. In two dimensions, the Soviet mathematician Ladyzhenskaya showed in the '60s that in two dimensions there was no blowup.

Speaker: 1
26:00

And in two dimensions, the Navier–Stokes equation is what's called critical: the effect of transport and the effect of viscosity are about the same strength, even at very, very small scales. And we have a lot of technology to handle critical and also subcritical equations and prove regularity. But for supercritical equations, it was not clear what was going on.
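One standard way to see the critical versus supercritical distinction described here is through the scaling symmetry of the equations. This is a well-known heuristic, not something spelled out in the conversation:

```latex
% If u(t,x) solves Navier–Stokes, so does the rescaled field
u_\lambda(t, x) = \lambda \, u(\lambda^2 t, \lambda x), \qquad
p_\lambda(t, x) = \lambda^2 \, p(\lambda^2 t, \lambda x).
% The energy (at the corresponding rescaled time) transforms as
E(u_\lambda) = \int |u_\lambda|^2 \, dx = \lambda^{2-d} E(u),
% so in dimension d = 2 the energy is scale-invariant (critical), while in
% d = 3 it shrinks like \lambda^{-1} as \lambda \to \infty: the controlled
% quantity becomes weaker at fine scales (supercritical).
```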

Speaker: 1
26:17

And I did a lot of work, and then there's been a lot of follow-up, showing that for many other types of supercritical equations, you can create all kinds of blowup examples. Once the nonlinear effects dominate the linear effects at small scales, you can have all kinds of bad things happen.

Speaker: 1
26:32

So this is sort of one of the main insights of this line of work: supercriticality versus criticality and subcriticality makes a big difference. That's a key qualitative feature that distinguishes some equations as being sort of nice and predictable, like, say, planetary motion.

Speaker: 1
26:48

I mean, there are certain equations that you can predict for millions of years, or thousands at least, and again, it's not really a problem. But there's a reason why we can't predict the weather past two weeks into the future: because it's a supercritical equation.

Speaker: 1
27:01

Lots of really strange things are going on at very fine scales.

Speaker: 0
27:04

So whenever there is some huge source of nonlinearity Yeah. That can create a huge problem for predicting what’s gonna happen.

Speaker: 1
27:12

Yeah. And if nonlinearity is somehow more and more featured and interesting at small scales. I mean, there are many equations that are nonlinear, but in many equations you can approximate things by the bulk. So, for example, planetary motion. If you wanted to understand the orbit of the Moon or Mars or something, you don't really need the microstructure, like the seismology of the Moon, or exactly how the mass is distributed.

Speaker: 1
27:35

You can almost approximate these planets by point masses, and it's just the aggregate behavior that's important. But if you want to model a fluid, like the weather, you can't just say, in Los Angeles the temperature is this, the wind speed is this. For supercritical equations, the fine-scale information is really important.

Speaker: 0
27:54

If we can just linger on the Navier–Stokes equations a little bit. So you suggested, maybe you can describe it, that one of the ways to solve it, or to negatively resolve it, would be to construct a kind of liquid computer

Speaker: 1
28:12

Right.

Speaker: 0
28:12

And then show that the halting problem from computation theory has consequences for fluid dynamics. So, show it in that way. Can you describe this idea?

Speaker: 1
28:22

Yeah. So this came out of this work of constructing this averaged equation that blew up. So, as part of how I had to do this — there's sort of a naive way to do it: every time you get energy at one scale, you push it immediately to the next scale as fast as possible.

Speaker: 1
28:41

This is sort of the naive way to force blowup. It turns out in five and higher dimensions this works. But in three dimensions, there was this funny phenomenon that I discovered when you change the laws of physics so that you always keep trying to push the energy into smaller and smaller scales.

Speaker: 1
28:59

What happens is that the energy starts getting spread out into many scales at once. So you have energy at one scale, you're pushing it into the next scale, and then, as soon as it enters that scale, you also push it to the next scale, but there's still some energy left over from the previous scale.

Speaker: 1
29:15

You're trying to do everything at once, and this spreads out the energy too much. And then it turns out that it makes it vulnerable for viscosity to come in and actually just damp out everything. So it turns out this direct approach doesn't actually work. There was a separate paper by some other authors that actually showed this in three dimensions.

Speaker: 1
29:34

So what I needed was to program a delay, kind of like airlocks. So I needed an equation which would start with a fluid doing something at one scale. It would push its energy into the next scale, but it would stay there until all the energy from the earlier scale got transferred.

Speaker: 1
29:54

And only after you pushed all the energy in, then you sort of open the next gate, and then you push that in as well. So by doing that, the energy inches forward scale by scale in such a way that it's always localized at one scale at a time. And then it can resist the effects of viscosity, because it's not dispersed. So in order to make that happen, I had to construct a rather complicated nonlinearity.

Speaker: 1
30:18

And it was basically constructed like an electronic circuit. I have to thank my wife for this, because she was trained as an electrical engineer. She talked about how she had to design circuits and so on. And if you want a circuit that does a certain thing — like maybe have a light that flashes on and then turns off and then on and then off — you can build it from more primitive components, you know, capacitors and resistors and so on, and you have to build a diagram.

Speaker: 1
30:47

And with these diagrams, you can sort of follow with your eyeballs and say, oh yeah, the current will build up here, and then it will stop, and then it will do that. So I knew how to build the analog of basic electronic components — resistors and capacitors and so on — and I would stack them together in such a way that I would create something that would open one gate, and then there'd be a clock, and once the clock hits a certain threshold, it would close it.

Speaker: 1
31:10

It would become a Rube Goldberg type machine, but mathematically. And this ended up working. So what I realized is that if you could pull the same thing off for the actual equations — so, if the equations of water support computation. So, like, you can imagine kind of a steampunk, but it's really waterpunk, kind of thing, where, you know, modern computers are electronic.

Speaker: 1
31:31

They're powered by electrons passing through very tiny wires and interacting with other electrons and so on. But instead of electrons, you can imagine these pulses of water moving at certain velocities. And maybe there are two different configurations corresponding to a bit being up or down.

Speaker: 1
31:49

Possibly, if you had two of these moving bodies of water collide, they would come out with some new configuration, which would be something like an AND gate or OR gate — the output would depend in a very predictable way on the inputs.

Speaker: 1
32:03

And you could chain these together and maybe create a Turing machine, and then you'd have computers which are made completely out of water. And if you have computers, then maybe you can do robotics — you know, hydraulics and so on. And so you could create some machine which is basically a fluid analog of what's called a von Neumann machine.

Speaker: 1
32:25

So von Neumann proposed: if you want to colonize Mars, the sheer cost of transporting people and machines to Mars is just ridiculous. But if you could transport one machine to Mars, and this machine had the ability to mine the planet, create more materials, smelt them, and build more copies of the same machine, then you could colonize the whole planet over time.

Speaker: 1
32:47

So, if you could build a fluid machine — it's a fluid robot. And its purpose in life, it's programmed so that it would create a smaller version of itself in some sort of cold state. It wouldn't start just yet. Once it's ready, the big robot configuration of water would transfer all its energy into the smaller configuration and then power down. Okay.

Speaker: 1
33:11

And then clean itself up. And then what's left is this new state, which would then turn on and do the same thing, but smaller and faster. And the equation has a certain scaling symmetry: once you do that, it can just keep iterating. So this, in principle, would create a blowup for the actual Navier–Stokes. And this is what I managed to accomplish for this averaged Navier–Stokes.

Speaker: 1
33:29

So it provided this sort of roadmap to solve the problem. Now, this is a pipe dream, because there are so many things that are missing for this to actually be a reality. I can't create these basic logic gates. I don't have these special configurations of water.

Speaker: 1
33:48

I mean, there are candidates that include vortex rings that might possibly work. But also, you know, analog computing is really nasty compared to digital computing, because there are always errors. You have to do a lot of error correction along the way.

Speaker: 1
34:04

I don't know how to completely power down the big machine so that it doesn't interfere with the operation of the smaller machine. But everything, in principle, can happen. It doesn't contradict any of the laws of physics. So it's sort of evidence that this thing is possible.

Speaker: 1
34:18

There are other groups who are now pursuing ways to make Navier–Stokes blow up which are nowhere near as ridiculously complicated as this. They're actually pursuing something much closer to the direct self-similar model, which doesn't quite work as is, but there could be some simpler scheme than what I just described to make this work.

Speaker: 0
34:40

There is a real leap of genius here to go from Navier–Stokes to this Turing machine. So it goes from the self-similar blowup scenario, where you're trying to get a smaller and smaller blob

Speaker: 1
34:52

Mhmm.

Speaker: 0
34:53

To now having a liquid Turing machine, getting smaller and smaller and smaller, and somehow seeing how that could be used to say something about a blowup. I mean, that's a big leap.

Speaker: 1
35:07

So there's precedent. The thing about mathematics is that it's really good at spotting connections between what you might think of as completely different problems. But if the mathematical form is the same, you can draw a connection. So there's a lot of previous work on what we call cellular automata.

Speaker: 1
35:28

Mhmm. The most famous of which is Conway’s Game of Life. There’s this infinite discrete bryden. At any given time, the grid is either occupied by a cell or it’s empty. And there’s a very simple rule that, tyler you how these cells evolve.

Speaker: 1
35:40

Sometimes cells live and sometimes they die. And when I was a student, it was a very popular screensaver to actually just have these animations go on. And they look very chaotic. In fact, they look a little bit like turbulent flow sometimes.

Speaker: 1
35:53

But at some point, people discovered more and more interesting structures within this Game of Life. So for example, they discovered this thing called a glider. A glider is a very tiny configuration of, like, four or five cells, which evolves and just moves in a certain direction.

Speaker: 1
36:05

And that's like these vortex rings. So this is an analogy: the Game of Life is kind of like a discrete equation, and the fluid Navier–Stokes is a continuous equation, but mathematically they have some similar features.
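To make the Game of Life rules and the glider concrete, here is a minimal Python sketch; the helper names are invented for illustration, not taken from the conversation:

```python
# Conway's Game of Life on an unbounded grid, stored as a set of live cells.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 generations the glider reappears with the same shape,
# shifted one cell diagonally.
print(sorted(cells))
```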

Speaker: 1
36:19

And so over time, people discovered more and more interesting things that you could build within the Game of Life. The Game of Life is a very simple system — it only has, like, three or four rules — but you can design all kinds of interesting configurations inside it.

Speaker: 1
36:34

There's something called a glider gun that does nothing but spit out gliders one at a time. And then, after a lot of effort, people managed to create AND gates and OR gates for gliders. Like, there's this massive, ridiculous structure which, if you have a stream of gliders coming in here and a stream of gliders coming in here, then you may produce a stream of gliders coming out.

Speaker: 1
36:58

So maybe if both of the streams have gliders, then there'll be an output stream. But if only one of them does, then nothing comes out. Mhmm. So they could build something like that. And once you can build these basic gates, then just from software engineering, you can build almost anything.

Speaker: 1
37:18

You can build a Turing machine. I mean, it's like an enormous steampunk type thing. They look ridiculous. But then people also generated self-replicating objects in the Game of Life: a massive machine, a von Neumann machine, which over a huge period of time, with all these glider guns inside doing these very steampunk calculations.

Speaker: 1
37:36

It would create another version of itself, which could replicate.

Speaker: 0
37:40

That’s so incredible.

Speaker: 1
37:41

A lot of this was community crowdsourced — like, amateur mathematicians, actually. So I knew about that work. And that is part of what inspired me to propose the same thing with Navier–Stokes. But, as I said, analog is much worse than digital.

Speaker: 1
37:57

It's going to be — you can't just directly take the constructions in the Game of Life and plunk them in. But, again, it just shows it's possible.

Speaker: 0
38:05

You know, there’s a kinda emergence that happens with these cellular automata. Local rules Mhmm. Maybe it’s similar to fluids. I don’t know. But local rules operating at scale can create these incredibly complex dynamic structures. Do you think any of that is amenable to mathematical analysis?

Speaker: 0
38:28

Do we have the tools to say something profound about

Speaker: 1
38:33

that? The thing is, you can get these emergent, very complicated structures, but only with very carefully prepared initial conditions. Yeah. So these glider guns and gates and so forth — if you just plunk down some cells randomly and look at that, you will not see any of these.

Speaker: 1
38:48

And that's the analogous situation for Navier–Stokes again: with typical initial conditions, you will not have any of this weird computation going on. But basically through engineering, by specially designing things in a very particular way, you can make clever constructions.

Speaker: 0
39:07

I wonder if it’s possible to prove the sort of the negative of, like, basically prove that only through engineering can you ever create

Speaker: 1
39:14

Yeah.

Speaker: 0
39:14

Yeah. Yeah. Something interesting.

Speaker: 1
39:15

This is a recurring challenge in mathematics. I call it the dichotomy between structure and randomness: most objects that you can generate in mathematics are random. They look like the digits of pi — well, we believe that's a good example. But there's a very small number of things that have patterns.

Speaker: 1
39:32

Now, you can prove something has a pattern by just constructing it. You know, like, if something has a simple pattern and you have a proof that it does something like repeat itself every so often, you can do that. And you can prove that, for example, most sequences of digits have no pattern.

Speaker: 1
39:48

So if you just pick digits randomly, there's the good old law of large numbers. It tells you you're going to get as many ones as twos in the long run. But we have a lot fewer tools for — if I give you a specific pattern, like the digits of pi, how can I show that this doesn't have some weird pattern to it?

Speaker: 1
40:06

Some other work that I've spent a lot of time on is to prove what are called structure theorems or inverse theorems, that give tests for when something is very structured. So some functions are what's called additive. Like, if you have a function that maps the natural numbers to natural numbers.

Speaker: 1
40:20

So maybe, you know, two maps to four, three maps to six, and so on. Some functions are what's called additive, which means that if you add two inputs together, the output gets added as well. For example, multiplying by a constant. If you multiply a number by 10 — if you multiply a plus b by 10, that's the same as multiplying a by 10 and b by 10 and then adding them together.

Speaker: 1
40:40

So some functions are additive. Some functions of the integers are kind of additive but not completely additive. So for example, if I take a number n, I multiply it by the square root of two, and I take the integer part of that. So 10 times the square root of two is, like, 14 something, so 10 went to 14. 20 went to 28. So in that case, additivity is true there.

Speaker: 1
41:03

So 10 plus 10 is twenty, and fourteen plus fourteen is twenty-eight. But because of this rounding, sometimes there are round-off errors, and sometimes when you add a plus b, this function doesn't quite give you the sum of the two individual outputs, but the sum plus or minus one. So it's almost additive, but not quite additive.
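A quick numerical check of this almost-additivity, as a small Python sketch (floating point is accurate enough at this scale; the function name is just for illustration):

```python
# f(n) = floor(n * sqrt(2)) is "almost additive":
# f(a + b) - f(a) - f(b) is always 0 or 1, never anything larger.
import math
import random

def f(n):
    return math.floor(n * math.sqrt(2))

for _ in range(10):
    a = random.randint(1, 10**6)
    b = random.randint(1, 10**6)
    print(a, b, f(a + b) - (f(a) + f(b)))   # prints 0 or 1
```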

Speaker: 1
41:19

So there are a lot of useful results in mathematics, and I've worked a lot on developing things like this, to the effect that if a function exhibits some structure like this, then there's basically a reason for why it's true. And the reason is because there's some other function, which is actually completely structured, which is explaining this sort of partial pattern that you have.

Speaker: 1
41:42

And so if you have these inverse theorems, it creates this sort of dichotomy: the objects that you study either have no structure at all, or they are somehow related to something that is structured. And in either case, you can make progress.

Speaker: 1
41:59

A good example of this is that there's this old theorem in mathematics called Szemerédi's theorem, proven in the 1970s. It concerns trying to find a certain type of pattern in a set of numbers: the pattern is an arithmetic progression, things like three, five, and seven, or ten, fifteen, and twenty.

Speaker: 1
42:14

And Szemerédi proved that any set of numbers that is sufficiently big — what's called positive density — has arithmetic progressions in it of any length you wish. So for example, the odd numbers have a density of one half, and they contain arithmetic progressions of any length.

Speaker: 1
42:33

So in that case, it's obvious, because the odd numbers are really, really structured. I can just take eleven, thirteen, fifteen, seventeen — I can easily find arithmetic progressions in that set. But Szemerédi's theorem also applies to random sets. If I take the set of whole numbers and I flip a coin for each number, and I only keep the numbers for which I got heads.

Speaker: 1
42:56

Okay. So I just flip coins. I just randomly take out half the numbers; I keep one half. So that's a set that has no patterns at all.

Speaker: 1
43:03

But just from random fluctuations, you will still get a lot of arithmetic progressions

Speaker: 0
43:09

in that set. Can you prove that there’s arithmetic progressions of arbitrary length within a random

Speaker: 1
43:17

Yes. Have you heard of the infinite monkey theorem? Usually, mathematicians give boring names to theorems, but occasionally they give colorful names. Yes. The popular version of the infinite monkey theorem is that if you have an infinite number of monkeys in a room, each with a typewriter, they can type out text randomly.

Speaker: 1
43:32

Almost surely, one of them is going to generate the entire script of Hamlet or any other finite string of text. It will just take some time, quite a lot of time, actually. But if you have an infinite number, then it happens. So, basically, the theorem says that if you take an infinite string of digits or whatever, eventually any finite pattern you wish will emerge.

Speaker: 1
43:53

It may take a long time, but it will eventually happen. In particular, arithmetic progressions of any length will eventually appear. Okay. But you need an extremely long random sequence for this to happen.
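
A small sketch of the coin-flip experiment at a toy scale (Python; the range one to a thousand and the progression length three are arbitrary choices, just to make the point visible):

```python
import random

random.seed(0)
# Keep each number from 1..1000 with probability 1/2 (the coin flips).
kept = {n for n in range(1, 1001) if random.random() < 0.5}

# Count three-term arithmetic progressions a, a+d, a+2d lying inside the random set.
count = sum(1 for a in kept for d in range(1, 500)
            if a + 2 * d <= 1000 and a + d in kept and a + 2 * d in kept)
print(f"kept {len(kept)} numbers, found {count} three-term progressions")
```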

Speaker: 0
44:04

I suppose that’s intuitive. It’s just infinity.

Speaker: 1
44:08

Yeah. Infinity absorbs a lot of sins.

Speaker: 0
44:10

Yeah. How are we humans supposed to deal with infinity? Well, you

Speaker: 1
44:14

can think of infinity as an abstraction of a finite number for which you do not have a bound. You know, nothing in real life is truly infinite. But you can ask yourself questions like, what if I had as much money as I wanted?

Speaker: 1
44:32

You know, or what if I could go as fast as I wanted? And the way mathematicians formalize that is that mathematics has found a formalism to idealize something, instead of being extremely large or extremely small, to actually be exactly infinite or exactly zero. And often the mathematics becomes a lot cleaner when you do that. I mean, in physics, we joke about assuming spherical cows.

Speaker: 1
44:53

You know, real world problems have all kinds of real world effects, but you can idealize, send something to infinity, send something to zero. And the mathematics becomes a lot simpler to work with there.

Speaker: 0
45:06

I wonder how often using infinity forces us to deviate from the physics of reality.

Speaker: 1
45:16

Yeah. So there are a lot of pitfalls. We spend a lot of time in undergraduate math classes teaching analysis. And analysis is often about how to take limits and whether, for example, a plus b is always b plus a. So when you have a finite number of terms and you add them, you can swap them and there's no problem.

Speaker: 1
45:34

But when you have an infinite number of terms, there are games you can play where you can have a series which converges to one value, but you rearrange it and it suddenly converges to another value. And so you can make mistakes. You have to know what you're doing when you allow infinity.
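
A minimal numerical sketch of that rearrangement phenomenon (Python; the alternating harmonic series is the standard textbook example, and the particular rearrangement chosen here, one positive term followed by two negative terms, is just one of many):

```python
import math

# Alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... converges to ln(2).
usual = sum((-1) ** (k + 1) / k for k in range(1, 200001))

# Rearranged: one positive term, then two negative terms, using exactly the
# same terms in a different order; the partial sums head toward ln(2)/2 instead.
rearranged, pos, neg = 0.0, 1, 2
for _ in range(100000):
    rearranged += 1 / pos                    # next unused positive term 1, 1/3, 1/5, ...
    pos += 2
    rearranged -= 1 / neg + 1 / (neg + 2)    # next two unused negative terms
    neg += 4
print(usual, math.log(2), rearranged, math.log(2) / 2)
```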

Speaker: 1
45:48

You have to introduce these epsilons and deltas, and there's a certain type of reasoning that helps you avoid mistakes. In more recent years, people have started taking results that are true in infinite limits and, what's called, finitizing them. So you know that something's true eventually, but you don't know when. Now give me a rate. Okay.

Speaker: 1
46:11

So, for example, if I don't have an infinite number of monkeys but a large finite number of monkeys, how long do I have to wait for Hamlet to come out? That's a more quantitative question. And this is something that you can attack by purely finite means, and you can use your finite intuition.

Speaker: 1
46:28

And in this case, it turns out to be exponential in the length of the text that you're trying to generate. And so this is why you'd never see the monkeys create Hamlet. You can maybe see them create a four-letter word, but nothing that big. And I personally find that once you finitize an infinite statement, it does become much more intuitive, and it's no longer so weird.
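
A back-of-the-envelope version of that finitized statement (Python; the 26-letter alphabet and the sample strings are illustrative assumptions, not anything from the conversation):

```python
# With random typing on a 26-letter keyboard, a specific string of length L
# shows up roughly once every 26**L keystrokes on average, so the expected
# wait grows exponentially with L.
for text in ["cat", "hamlet", "to be or not to be"]:
    length = len(text.replace(" ", ""))
    print(text, length, 26 ** length)
```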

Speaker: 0
46:51

So even if you're working with infinity, it's good to finitize so that you can have some intuition.

Speaker: 1
46:57

Yeah. The downside is that the finitized proofs are just much, much messier. Yeah. So the infinite ones are found usually, like, decades earlier, and then later on, people finitize them.

Speaker: 0
47:07

So since we mentioned a lot of math and a lot of physics Mhmm.

Speaker: 1
47:10

What

Speaker: 0
47:10

to you is the difference between mathematics and physics as disciplines, as ways of understanding, of seeing the world? Maybe we can throw engineering in there. You mentioned your wife is an engineer. It gives a new perspective on circuits. Right. So there's a different way of looking at

Speaker: 1
47:24

the world, given that you've done mathematical physics. So you've worn all the hats. Right. So I think science in general is an interaction between three things. There's the real world, there's what we observe of the real world, our observations, and then our mental models as to how we think the world works.

Speaker: 1
47:43

So, we can't directly access reality. Okay. All we have are the observations, which are incomplete and have errors. And there are many cases where we want to know something, for example, what is the weather like tomorrow? We don't yet have the observation, and we'd like to predict. And then we have these simplified models, sometimes making unrealistic assumptions, you know, spherical cow type things.

Speaker: 1
48:08

Those are the mathematical models. Mhmm. Mathematics is concerned with the models. Science collects the observations and proposes the models that might explain these observations. What mathematics does is, we stay within the model and ask, what are the consequences of that model?

Speaker: 1
48:24

What predictions will the model make of future observations, or past observations? Does it fit observed data? So there's definitely a symbiosis. I guess what makes mathematics unusual among other disciplines is that we start from, like, the axioms of a model and ask what conclusions come out of that model.

Speaker: 1
48:48

In almost any other discipline, you start with the conclusions. You know, I want to do this. I want to build a bridge. I want to make money. I want to do this. Okay. And then you find the path to get there.

Speaker: 1
49:01

There's a lot less sort of speculation about

Speaker: 0
49:05

it.

Speaker: 1
49:06

Suppose I did this, what would happen? You know, planning and modeling. Speculative fiction maybe is one other place. But that's about it, actually. Most of the things we do in life are conclusions-driven, including physics and science. You know, I mean, they want to know, where is this asteroid gonna go? What's the weather gonna be tomorrow?

Speaker: 1
49:25

But mathematics also has this other direction of going from the axioms.

Speaker: 0
49:32

What do you think? There is this tension in physics between theory and experiment.

Speaker: 1
49:36

Mhmm.

Speaker: 0
49:37

What do you think is the more powerful way of discovering truly novel ideas about reality?

Speaker: 1
49:41

Well, you need both top-down and bottom-up. Yeah. It's really an interaction between all these things. So over time, the observations and the theory and the modeling should both get closer to reality. But initially, they're always far apart to begin with.

Speaker: 1
50:00

But you need one to figure out where to push the other. You know? So if your model is predicting anomalies that are not picked up by experiment, that tells experimenters where to look, you know, to gather more data, to refine the models. So it goes back and forth. Within mathematics itself, there's also a theory and an experimental component.

Speaker: 1
50:23

It's just that until very recently, theory has dominated almost completely. Like, 99% of mathematics is theoretical mathematics, and there's a very tiny amount of experimental mathematics. I mean, people do do it, you know, like if they want to study prime numbers or whatever, they can just generate large datasets with a computer. So once we had computers, we began to do it a little bit.

Speaker: 1
50:45

Although even before, well, like Gauss, for example, he conjectured the most basic theorem in number theory. It's called the prime number theorem, which predicts how many primes there are up to a million, up to a trillion. It's not an obvious question. And basically what he did was he computed, mostly by himself but also with hired human computers, people whose professional job it was to do arithmetic, the first 100,000 primes or something, and made tables and made a prediction.
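
A rough sketch of the kind of table Gauss built by hand, done here with a short Python script (the cutoffs and the x / ln(x) comparison from the prime number theorem are standard, but the specific numbers are only illustrative):

```python
import math

def count_primes(limit: int) -> int:
    # Simple sieve of Eratosthenes: count primes up to `limit`.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for x in [10_000, 100_000, 1_000_000]:
    # Compare the true count of primes with the x / ln(x) prediction.
    print(x, count_primes(x), round(x / math.log(x)))
```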

Speaker: 1
51:13

That was an early example of experimental mathematics. But until very recently, theoretical mathematics was just much more successful, because doing complicated mathematical computations was just not feasible until very recently. And even nowadays, even though we have powerful computers, only some mathematical things can be explored numerically. There's something called the combinatorial explosion.

Speaker: 1
51:38

If you want to study, for example, Szemerédi's theorem, you want to study all possible subsets of, say, one to a thousand. There's only 1,000 numbers. How bad could it be? It turns out the number of different subsets of one to a thousand is two to the power 1,000, which is way bigger than any computer could ever enumerate.
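
Just to make the combinatorial explosion concrete (Python; the comparison point of roughly 10^80 atoms in the observable universe is a common rough estimate, not a precise figure):

```python
subsets = 2 ** 1000            # number of subsets of {1, ..., 1000}
print(len(str(subsets)))       # about 302 decimal digits
print(subsets > 10 ** 80)      # True: dwarfs the rough ~10^80 atoms in the observable universe
```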

Speaker: 1
51:56

So there are certain math problems that very quickly become intractable to attack by direct brute-force computation. Chess is another famous example. The number of chess positions, we can't get a computer to fully explore. But now we have AI.

Speaker: 1
52:16

We have tools to explore this space, not with 100% guarantees of success, but through experiment. You know? So, like, we can empirically solve chess now. For example, we have very, very good AIs that, you know, don't explore every single position in the game tree, but they have found some very good approximations.

Speaker: 1
52:37

And people are actually using these chess engines to do experimental chess. They're revisiting old chess theories about, oh, you know, in this type of opening, this is a good type of move, this is not, and they can use these chess engines to actually refine and in some cases overturn conventional wisdom about chess.

Speaker: 1
52:57

And I do hope that mathematics

Speaker: 0
53:00

will have a larger experimental component in the future, perhaps powered by AI. We'll, of course, talk about that. But in the case of chess, and there's a similar thing in mathematics, I don't believe it's providing a kind of formal explanation of the different positions. Yep.

Speaker: 0
53:17

It's just saying which position is better or not, and that you can intuit as a human being. And then from that, we humans can construct, yes, a theory of the matter. You've mentioned Plato's cave allegory.

Speaker: 1
53:29

Mhmm.

Speaker: 0
53:30

So in case people don't know, it's where people are observing shadows of reality, not reality itself, and they believe what they're observing to be reality. Is that in some sense what mathematicians and maybe all humans are doing, looking at shadows of reality? Is it possible for us to truly access reality?

Speaker: 1
53:55

Well, there are these three ontological things. There's actual reality, there's observations, and our models. And technically, they are distinct, and I think they will always be distinct. But they can get closer over time. And the process of getting closer often means that you have to discard your initial intuitions.

Speaker: 1
54:20

So, astronomy provides great examples. You know, your initial model of the world is that it's flat, because it looks flat, and that it's big, and the rest of the universe is not, you know, like the sun, for example, looks really tiny.

Speaker: 1
54:37

And so you start off with a model which is actually really far from reality, but it kind of fits the observations that you have, so things look good. But over time, as you make more and more observations, bringing it closer to reality, the model gets dragged along with it. And so over time we had to realize that the Earth is round, that it spins, that it goes around the sun, the solar system goes around the galaxy, and so on and so forth. And the universe is expanding.

Speaker: 1
55:01

The expansion itself is accelerating. And in fact, very recently, this year, I saw that there's even evidence that the acceleration of the universe itself is nonconstant.

Speaker: 0
55:12

And the explanation behind why that is? It's catching up. It's catching up. I mean, it's still, you know, the dark matter, dark energy, this kind

Speaker: 1
55:21

of thing. Yes. We have a model that sort of explains, that fits the data really well. It just has a few parameters that you have to specify. But so people say, oh, those are fudge factors. You know, with enough fudge factors, you can explain anything. But the mathematical point of the model is that you want to have fewer parameters in your model than data points in your observational set.

Speaker: 1
55:42

So if you have a model with 10 parameters that explains up to 10 observations, that is a completely useless model. It's what's called overfitted. But if you have a model with, you know, two parameters and it explains a trillion observations, that's something. So, yeah, the dark matter model, I think, has, like, 14 parameters, and it explains petabytes of data that the astronomers have.
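
A tiny sketch of why matching parameters to data points is useless (Python with numpy; the fake "observations" are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(5, dtype=float)
y = rng.normal(size=5)               # 5 made-up observations (pure noise)

# A 5-parameter model (degree-4 polynomial) passes through all 5 points exactly...
coeffs = np.polyfit(x, y, deg=4)
print(np.abs(np.polyval(coeffs, x) - y).max())   # residual is essentially zero

# ...yet it has learned nothing: its prediction at a new point is arbitrary.
print(np.polyval(coeffs, 5.0))
```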

Speaker: 1
56:04

Like, one way to think about a physical or mathematical theory is that it's a compression of the universe, a data compression. So, you know, you have these petabytes of observations, and you'd like to compress them to a model which you can describe in five pages with a certain number of parameters to specify.

Speaker: 1
56:24

And it can fit, to reasonable accuracy, almost all of your observations. I mean, the more compression that you achieve, the better your theory.

Speaker: 0
56:32

In fact, one of the great surprises of our universe and of everything in it is that it’s compressible at all. It’s the unreasonable effectiveness of mathematics. Yeah.

Speaker: 1
56:40

Einstein had a quote like that. The most incomprehensible thing about the universe is that it is comprehensible.

Speaker: 0
56:45

Right. And not just comprehensible. You can compress it to an equation like E equals m c squared.

Speaker: 1
56:49

There is actually a possible mathematical explanation for that. So there's this phenomenon in mathematics called universality. Many complex systems at the macroscale emerge out of lots of tiny interactions at the microscale. And normally, because of the combinatorial explosion, you would think that the macroscale equations must be, like, exponentially more complicated than the microscale ones.

Speaker: 1
57:11

And they are, if you want to solve them completely exactly. Like, if you want to model all the atoms in a box of air, Avogadro's number is humongous. Right? There's a huge number of particles. If you actually had to track each one, it would be ridiculous.

Speaker: 1
57:26

But certain laws emerge at the macroscopic scale that almost don't depend on what's going on at the microscopic scale; they only depend on a very small number of parameters. So if you want to model a gas of, you know, some enormous number of particles in a box, you just need to know its temperature and pressure and volume and a few other parameters, like five or six.

Speaker: 1
57:45

And that models almost everything you need to know about these 10 to the 23 or whatever particles. So we don't understand universality anywhere near as well as we would like. But there are much simpler toy models where we do have a good understanding of why universality occurs.

Speaker: 1
58:05

The most basic one is the central limit theorem, which explains why the bell curve shows up everywhere in nature. So many things are distributed by what's called a Gaussian distribution, the famous bell curve. There's now even a meme with this curve.

Speaker: 0
58:18

And even the meme applies broadly. Is there any universality to the meme? Yes. You could call it meta,

Speaker: 1
58:24

if you like. But there are many, many processes. For example, you can take lots of independent random variables and average them together in various ways. You can take a simple average or a more complicated average, and we can prove in various cases that these bell curves, these Gaussians, emerge.

Speaker: 1
58:40

And it is a satisfying explanation. Sometimes they don't emerge. So if you have many different inputs and they are all correlated in some systemic way, then you can get something very far from a bell curve showing up. And it is also important to know when the system fails. So universality is not a 100% reliable thing to rely on.
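
A minimal sketch of that emergence (Python; the uniform random inputs and the sample sizes are arbitrary choices, just to show the bell-curve behavior):

```python
import random
import statistics

random.seed(0)
# Average many independent uniform random variables; the averages bunch up
# into a bell-curve shape even though the inputs are not Gaussian at all.
averages = [statistics.mean(random.random() for _ in range(100)) for _ in range(10000)]

mu = statistics.mean(averages)
sigma = statistics.stdev(averages)
within_1sd = sum(abs(a - mu) <= sigma for a in averages) / len(averages)
print(mu, sigma, within_1sd)   # roughly 0.5, a small spread, and ~68% as a Gaussian predicts
```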

Speaker: 1
58:58

The global financial crisis was a famous example of this. People thought that mortgage defaults had this sort of Gaussian type behavior, that if you ask, in a population of, you know, a 100,000 Americans with mortgages, what proportion of them would default on their mortgages.

Speaker: 1
59:19

If everything was decorrelated, it would be a nice bell curve, and you can manage risk with options and derivatives and so forth. And it is a very beautiful theory. But if there are systemic shocks in the economy that can push everybody to default at the same time, that's very non-Gaussian behavior.
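
A toy sketch of that difference (Python; the pool size, the 2% baseline default rate, and the shock model are made-up numbers, meant only to show how the shape of the outcomes changes):

```python
import random

random.seed(1)
N, p = 10_000, 0.02   # hypothetical pool of mortgages and a 2% baseline default rate

def yearly_defaults(systemic: bool) -> int:
    # With a systemic shock, one bad year pushes everyone's default risk up together.
    shock = systemic and random.random() < 0.05
    rate = 0.25 if shock else p
    return sum(random.random() < rate for _ in range(N))

independent = [yearly_defaults(False) for _ in range(200)]
correlated = [yearly_defaults(True) for _ in range(200)]
# Uncorrelated defaults stay tightly bunched around 200 (bell-curve-like);
# the version with systemic shocks occasionally produces catastrophic years.
print(min(independent), max(independent))
print(min(correlated), max(correlated))
```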

Speaker: 1
59:37

And this wasn't fully accounted for in 2008. Now I think there's more awareness that systemic risk is a much bigger issue. And just because the model is pretty and nice, it may not match reality. So the mathematics of working out what models do is really important.

Speaker: 1
59:59

But also the science of validating when the models fit reality and when they don't. You need both. But mathematics can help, because, for example, these central limit theorems tell you that if you have certain axioms, like non-correlation, that if all the inputs are not correlated to each other, then you have this Gaussian behavior and things are fine.

Speaker: 1
01:00:21

It tells you where to look for weaknesses in the model. So if you have a mathematical understanding of the central limit theorem, and someone proposes to use these Gaussian copulas or whatever to model default risk, if you're mathematically trained, you would say, okay.

Speaker: 1
01:00:38

But what are the systemic correlations between all your inputs? And then you can ask the economists, you know, how much of a risk is that? And then you can go look for that. So there's always this synergy between science and mathematics.

Speaker: 0
01:00:52

A little bit on the topic of universality.

Speaker: 1
01:00:54

Mhmm.

Speaker: 0
01:00:56

You're known and celebrated for working across an incredible breadth of mathematics, reminiscent of Hilbert a century ago. In fact, the great Fields Medal winning mathematician, Tim Gowers, has said that you are the closest thing we get to Hilbert. He's a colleague of yours.

Speaker: 1
01:01:14

Oh, yeah. Good friend.

Speaker: 0
01:01:16

But anyway, so you are known for this ability to go both deep and broad in mathematics. So you're the perfect person to ask. Do you think there are threads that connect all the disparate areas of mathematics? Is there a kind of deep underlying structure to all of mathematics?

Speaker: 0
01:01:35

There’s

Speaker: 1
01:01:36

certainly a lot of connecting threads, and a lot of the progress of mathematics can be represented by stories of two fields of mathematics that were previously not connected becoming connected. An ancient example is geometry and number theory. In the times of the ancient Greeks, these were considered different subjects. I mean, mathematicians worked on both.

Speaker: 1
01:01:59

You know, Euclid worked both on geometry, most famously, but also on numbers. But they were not really considered related. I mean, a little bit, you know, you could say that this length was five times that length, because you could take five copies of this length and so forth. But it wasn't until Descartes, who developed what we now call analytic geometry, that you could parameterize the plane, a geometric object, by two real numbers.

Speaker: 1
01:02:26

Right? Every point can be described that way, and so geometric problems can be turned into problems about numbers. And today, this feels almost trivial, like there's no content to it. Like, of course a plane is x and y; that's what we teach, and it's internalized.

Speaker: 1
01:02:45

But it was an important development that these two fields are connected. And this process has just gone on throughout mathematics over and over again. Algebra and geometry were separate, and now we have this field of algebraic geometry that connects them, and so on, over and over again.

Speaker: 1
01:03:01

And that's certainly the type of mathematics that I enjoy the most. So I think there are different styles to being a mathematician. I think of hedgehogs and foxes. A fox knows many things a little bit, but a hedgehog knows one thing very, very well. And in mathematics, there are definitely both hedgehogs and foxes.

Speaker: 1
01:03:17

And then there are people who can play both roles. And I think an ideal collaboration between mathematicians involves some diversity. Like, a fox working with many hedgehogs, or vice versa. But I identify mostly as a fox, certainly. I like arbitrage, somehow.

Speaker: 1
01:03:39

You know, like learning how one field works, learning the tricks of that field, and then going to another field which people don't think is related, but I can adapt the tricks. So, seeing the connections

Speaker: 0
01:03:51

between the fields.

Speaker: 1
01:03:51

Yeah. So there are other mathematicians who are far deeper than I am. They're really hedgehogs. They know everything about one field and they're much faster and more effective in that field, but I can give them these extra tools.

Speaker: 0
01:04:04

I mean, you said that you could be both the hedgehog and the fox depending on the context Yeah. Depending on the collaboration. So can you, if it's at all possible, speak to the difference between those two ways of thinking about a problem? Say you're encountering a new problem, you know, searching for the connections versus, like, a very singular focus.

Speaker: 1
01:04:26

I'm much more comfortable with the fox paradigm. Yeah. So I like looking for analogies, narratives. I spend a lot of time, if there's a result I see in one field and I like the result, it's a cool result, but I don't like the proof, like, it uses types of mathematics that I'm not super familiar with.

Speaker: 1
01:04:47

I often try to reprove it myself using the tools that I favor. Often my proof is worse. But by the exercise of doing so, I can say, oh, now I can see what the other proof was trying to do. And from that, I can get some understanding of the tools that are used in that field.

Speaker: 1
01:05:07

So it's very exploratory, doing crazy things in crazy fields and, yeah, reinventing the wheel a lot. Whereas the hedgehog style is, I think, much more scholarly. You're very knowledge-based. You stay up to speed on all the developments in this field. You know all the history.

Speaker: 1
01:05:24

You have a very good understanding of exactly the strengths and weaknesses of each particular technique. Yeah. I think you'd rely a lot more on calculation than on trying to find narratives. So, yeah, I mean, I can do that too, but there are other people who are extremely good at that.

Speaker: 0
01:05:43

Let's step back and maybe look at a bit of a romanticized version of mathematics.

Speaker: 1
01:05:52

Mhmm.

Speaker: 0
01:05:52

So, I think you've said that early on in your life, math was more like a puzzle-solving activity when you were young. When did you encounter a problem or proof where you realized math can have a kind of elegance and beauty to it?

Speaker: 1
01:06:13

That's a good question. When I came to graduate school in Princeton, John Conway was there at the time. He passed away a few years ago. But I remember one of the very first research talks I went to was a talk by Conway on what he called extreme proofs. Conway just had this amazing way of thinking about all kinds of things in a way that you wouldn't normally think of.

Speaker: 1
01:06:33

So he thought of proofs themselves as occupying some sort of space. You know? So if you want to prove something, let's say that there are infinitely many primes, okay, you have all these different proofs, but you could rank them on different axes.

Speaker: 1
01:06:45

Like, some proofs are elegant, some proofs are long, some proofs are elementary, and so forth. And so there's this cloud, the space of all proofs itself has some sort of shape. And he was interested in extreme points of that shape. Like, out of all these proofs, which is the shortest at the expense of everything else, or the most elementary, or whatever.

Speaker: 1
01:07:08

And so he gave some examples of well-known theorems, and then he would give what he thought was the extreme proof in these different aspects. I just found that really eye-opening, that, you know, it's not just that getting a proof of a result is interesting.

Speaker: 1
01:07:26

But once you have that proof, trying to optimize it in various ways, proving itself has some craftsmanship to it. It certainly informed my writing style, that, you know, when you do your math assignments as an undergraduate, your homework and so forth, you're sort of encouraged to just write down any proof that works.

Speaker: 1
01:07:49

Okay, and hand it in, and as long as it gets a tick mark, you move on. But if you want your results to actually be influential and be read by people, they can't just be correct. They should also be a pleasure to read, well motivated, adaptable to generalize to other things.

Speaker: 1
01:08:09

It's the same in many other disciplines, like coding. There are a lot of analogies between math and coding. I like analogies, if you haven't noticed. But, you know, you can code something as spaghetti code that works for a certain task, and it's quick and dirty and it works.

Speaker: 1
01:08:24

But there are lots of good principles for writing code well, so that other people can use it, build upon it, and it has fewer bugs and whatever. Mhmm. And there are similar things with mathematics. So

Speaker: 0
01:08:37

Yeah. First of all, there are so many beautiful things there, and Conway is one of the great minds in mathematics and in computer science. Just even considering the space of proofs Yeah. And saying, okay, what does this space look like? And what are the extremes? Like you mentioned, coding is an analogy.

Speaker: 0
01:08:57

It's interesting because there's also this activity called code golf

Speaker: 1
01:09:01

Oh, yeah. Yeah.

Speaker: 0
01:09:02

Yeah. Which I also find beautiful and fun, where people use different programming languages to try to write the shortest possible program that accomplishes a particular task. Yeah. And I believe there are even competitions on this. Yeah.

Speaker: 1
01:09:14

Yeah. Yeah.

Speaker: 0
01:09:14

Yeah. And it's also a nice way to stress test not just the programs, or in this case the proofs, but also the different languages, maybe a different notation or whatever, used to accomplish a particular task.

Speaker: 1
01:09:30

Yeah. You learn a lot. I mean, it may seem like a frivolous exercise, but it can generate all these insights which, if you didn't have this artificial objective to pursue, you might not see.

Speaker: 0
01:09:42

What to you is the most beautiful or elegant equation in mathematics? I mean, one of the things that people often look to in beauty is simplicity. So if you look at E equals m c squared. Mhmm. So when a few concepts come together, that's why the Euler identity is often considered the most beautiful equation in mathematics.

Speaker: 1
01:10:04

Mhmm. Mhmm.

Speaker: 0
01:10:05

Do you find beauty in that one, in the Euler identity?

Speaker: 1
01:10:08

Yeah. Well, as I said, what I find most appealing is connections between different things. So, e to the pi i equals minus one. So, yeah, people say it uses all the fundamental constants. Okay, I mean, that's cute. But to me, the exponential function was invented in order to measure exponential growth. You know?

Speaker: 1
01:10:28

So compound interest or decay, anything which is continuously growing or continuously decreasing, growth and decay, or dilation or contraction, is modeled by the exponential function. Whereas pi comes from circles and rotation. Right? If you want to rotate a needle, for example, 180 degrees, you need to rotate by pi radians.

Speaker: 1
01:10:46

And i, in the complex numbers, represents a 90-degree rotation toward the imaginary axis, so a change in direction. So the exponential function represents growth and decay in the direction that you already are. When you stick an i in the exponential, instead of motion in the same direction as your current position, it's motion at a right angle to your current position, so rotation.

Speaker: 1
01:11:09

And then e to the pi i equals minus one tells you that if you rotate for time pi, you end up facing the other direction. So it unifies geometry, dilation, and exponential growth through this act of multiplying by i in the exponential. So it connects together all these tools of mathematics.

Speaker: 1
01:11:27

Yeah. Yeah. The exponential, geometry, and the complex numbers, they're all, yeah, they're all next-door neighbors in mathematics because of this identity.
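
You can see the identity numerically in a couple of lines (Python; just a sanity check of the rotation picture, nothing deep):

```python
import cmath, math

# e^{i*pi} lands at -1: rotating for "time" pi flips you to the opposite direction.
print(cmath.exp(1j * math.pi))        # -1 plus a tiny rounding error in the imaginary part

# e^{i*theta} traces out the unit circle: pure rotation, no growth or decay.
for theta in (0, math.pi / 2, math.pi):
    print(theta, cmath.exp(1j * theta))
```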

Speaker: 0
01:11:37

Do you think the thing you mentioned as cute, the collision of notations from these disparate fields, is just a frivolous side effect, or do you think there is legitimate value when the notations, all our old friends, come together like that?

Speaker: 1
01:11:54

Well, it's confirmation that you have the right concepts. So when you study anything, you have to measure things and give them names. And initially, sometimes, because your model is too far off from reality, you give the wrong things the best names, and you only find out later what's really important.

Speaker: 0
01:12:14

Physicists can do this sometimes. I mean, but it turns out okay. So actually with, say, E equals

Speaker: 1
01:12:19

m c squared. Okay. So, one of the big things was the E, right? So when Aristotle came up with his laws of motion, and then Galileo and Newton and so forth, they saw the things they could measure. They could measure mass and acceleration and force and so on. And so in Newtonian mechanics, for example, F equals m a was the famous Newton's law of motion.

Speaker: 1
01:12:39

So those were the primary objects, so they were given the central billing in the theory. It was only later, after people started analyzing these equations, that there always seemed to be these quantities that were conserved, in particular momentum and energy. And it's not obvious that things have an energy. Like, it's not something you can directly measure the same way you can measure mass and velocity.

Speaker: 1
01:13:01

But over time, people realized that this was actually a really fundamental concept. Hamilton eventually, in the nineteenth century, reformulated Newton's laws of physics into what's called Hamiltonian mechanics, where the energy, which is now called the Hamiltonian, is the dominant object.

Speaker: 1
01:13:14

Once you know how to measure the Hamiltonian of any system, you can describe the dynamics completely, like, what happens to all the states. It really was a central actor, which was not obvious initially. And this change of perspective really helped when quantum mechanics came along, because the early physicists who studied quantum mechanics had a lot of trouble trying to adapt their Newtonian thinking, where everything was a particle and so forth, to quantum mechanics, where everything was a wave.

Speaker: 1
01:13:48

But it just looked really, really weird. Like, you ask, what is the quantum version of F equals m a? And it's really, really hard to give an answer to that. But it turns out that the Hamiltonian, which was sort of secretly behind the scenes in classical mechanics, is also the key object in quantum mechanics. There's also an object called the Hamiltonian there.

Speaker: 1
01:14:09

It's a different type of object. It's what's called an operator rather than a function. But again, once you specify it, you specify the entire dynamics. There's something called the Schrödinger equation that tells you exactly how quantum systems evolve once you have a Hamiltonian. So side by side, they look like completely different objects.

Speaker: 1
01:14:25

You know, one involves particles, one involves waves, and so forth. But with this centrality, you could start actually transferring a lot of intuition and facts from classical mechanics to quantum mechanics. So for example, in classical mechanics, there's this thing called Noether's theorem. Every time there's a symmetry in a physical system, there is a conservation law.

Speaker: 1
01:14:43

So the laws of physics are translation invariant. Like, if I move 10 steps to the left, I experience the same laws of physics as if I was here. That corresponds to conservation of momentum. If I turn around by some angle, again, I experience the same laws of physics.

Speaker: 1
01:14:57

This corresponds to the conservation of angular momentum. If I wait for ten seconds, I still have the same laws of physics. So there's time translation invariance. That corresponds to the law of conservation of energy. So there's this fundamental connection between symmetry and conservation.

Speaker: 1
01:15:11

And that's also true in quantum mechanics, even though the equations are completely different, because they're both coming from the Hamiltonian. The Hamiltonian controls everything. Every time the Hamiltonian has a symmetry, the equations will have a conservation law. So once you have the right language, it actually makes things a lot cleaner.
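
A minimal numerical sketch of that idea: specify a Hamiltonian and the dynamics follow, with the energy staying essentially constant along the motion (Python; the harmonic oscillator, the step size, and the leapfrog integrator are illustrative choices, not anything from the conversation):

```python
# Harmonic oscillator: H(q, p) = p^2/2 + q^2/2.
# Hamilton's equations: dq/dt = dH/dp = p, dp/dt = -dH/dq = -q.
def H(q: float, p: float) -> float:
    return 0.5 * p * p + 0.5 * q * q

q, p, dt = 1.0, 0.0, 0.01
energies = []
for _ in range(10_000):
    # One leapfrog (kick-drift-kick) integration step.
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    energies.append(H(q, p))

# Time-translation invariance of H shows up as conservation of energy:
# the computed energy barely drifts from its initial value of 0.5.
print(min(energies), max(energies))
```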

Speaker: 1
01:15:31

One of the problems why we can't unify quantum mechanics and general relativity yet is that we haven't figured out what the fundamental objects are. Like, for example, we may have to give up the notion of space and time being almost Euclidean type spaces. And, you know, we kind of know that at very tiny scales, there are going to be quantum fluctuations, this space-time foam.

Speaker: 1
01:15:51

And trying to use Cartesian coordinates x, y, z is just a nonstarter. But we don't know what to replace it with. We don't actually have the mathematical concepts, the analog of the Hamiltonian that would sort of organize everything.

Speaker: 0
01:16:08

Does your gut say that there is a theory of everything? That it's even possible to find this language that unifies general relativity and quantum mechanics?

Speaker: 1
01:16:19

I believe so. I mean, the history of physics has been one of unification, much like mathematics, over the years. You know, electricity and magnetism were separate theories, and then Maxwell unified them. Newton unified the motion of the heavens with the motion of objects on the earth, and so forth. So it should happen.

Speaker: 1
01:16:35

It's just that, again, to go back to this model of observations and theory, part of our problem is that physics is a victim of its own success, that our two big theories of physics, general relativity and quantum mechanics, are so good now.

Speaker: 1
01:16:51

Together, they cover 99.9% of sort of all the observations we can make. And you have to either go to extremely insane particle accelerator energies, or the early universe, or things that are really hard to measure, in order to get any deviation from either of these two theories, to the point where you could actually figure out how to combine them together.

Speaker: 1
01:17:10

But I have faith that, you know, we've been doing this for centuries and we've made progress before. And there's no reason why we should stop.

Speaker: 0
01:17:18

Do you think it'll be a mathematician that develops the theory of everything?

Speaker: 1
01:17:24

What often happens is that when the physicists need some theory of mathematics, there's often some precursor that the mathematicians worked out earlier. So when Einstein started realizing that space was curved, he went to some mathematician and asked, you know, is there some theory of curved space that the mathematicians already came up with that could be useful?

Speaker: 1
01:17:45

And he said, oh, yeah, there's this, I think Riemann came up with something. And so, yeah, Riemann had developed Riemannian geometry, which is precisely a theory of spaces that are curved in various general ways, which turned out to be almost exactly what was needed for Einstein's theory.

Speaker: 1
01:18:00

This goes back to this unreasonable effectiveness of mathematics. I think the theories that work well for explaining the universe tend to also involve the same mathematical objects that work well to solve mathematical problems. Mhmm. Ultimately, they're just sort

Speaker: 0
01:18:12

of both ways of organizing data in useful ways. It just feels like you might need to go to some weird land that's very hard to intuit. Yeah. Like, you have string theory.

Speaker: 1
01:18:24

Yeah. That was a leading candidate for many decades. I think it's slowly falling out of fashion because it's not matching experiment.

Speaker: 0
01:18:32

So one of the big challenges, of course, like you said, is experiment is very tough Yes. Because of how effective Yeah. Both theories are. But the other is, you know, you're not just deviating from space-time. You're going into, like, some crazy number of dimensions. Yeah.

Speaker: 0
01:18:52

You’re doing all kinds of weird stuff that, to us, we’ve gone so far from this flat Earth that we started

Speaker: 1
01:18:58

at, like you mentioned. Yeah. Yeah. Yeah.

Speaker: 0
01:18:59

Yeah. Now it's very hard to use our limited ape-descendant cognition to intuit what that reality really is like.

Speaker: 1
01:19:09

This is why analogies are so important. Yeah. The round Earth is not intuitive because we're stuck on it. But round objects in general, we have pretty good intuition about, and we know how light works and so forth. And, like, it's actually a good exercise to work out how eclipses and the phases of the sun and the moon and so forth can be really easily explained by a round Earth and round moon model.

Speaker: 1
01:19:35

And you can just take, you know, a basketball and a golf ball and a light source and actually do these things yourself. So the intuition is there. But, yeah, you have to transfer it.

Speaker: 0
01:19:46

That is a big leap intellectually for us to go from flat to round Earth

Speaker: 1
01:19:50

Mhmm.

Speaker: 0
01:19:51

Because, you know, our life is mostly lived in flatland

Speaker: 1
01:19:54

Yeah.

Speaker: 0
01:19:55

To load that information. And we all, like, take it for granted. We take so many things for granted because science has established a lot of evidence for this kind of thing. But, you know, we're on a round rock Yeah. Flying through space. Yeah. Yeah. That's a big leap. And you have to take a chain of those leaps the more and more we progress.

Speaker: 1
01:20:15

Right. Yeah. So modern science is maybe, again, a victim of its own success, in that, you know, in order to be more accurate, it has to move further and further away from your initial intuition. And so, for someone who hasn't gone through the whole process of science education, it looks more and more suspicious Yeah. Because of that.

Speaker: 1
01:20:31

So, you know, we need more grounding. I mean, there are scientists who do excellent outreach. But there are lots of science things that you can do at home. There are lots of YouTube videos. I did a YouTube video recently with Grant Sanderson, which we talked about earlier, on how the ancient Greeks were able to measure things like the distance to the moon and the size of the Earth, using techniques that you could also replicate yourself.

Speaker: 1
01:20:54

It doesn't all have to be, like, fancy space telescopes and very deep and intimidating mathematics.

Speaker: 0
01:21:00

Yeah. I highly recommend that. I believe you gave a lecture, and you also did an incredible video with Grant. It's a beautiful experience to try to put yourself in the mind of a person from that time Mhmm. Shrouded in mystery. Right. You know? You're, like, on this planet.

Speaker: 0
01:21:17

You don't know the shape of it, the size of it. You see some stars, you see some things, and you try to, like, localize yourself in this world Yeah. Yeah. And try to make some kind of general statements about distances to places.

Speaker: 1
01:21:28

Changing your perspective is really important. They say travel broadens the mind. This is intellectual travel. You know? Put yourself in the mind of the ancient Greeks or some other person, some other time period. Make hypotheses, spherical cows, whatever, so to speak.

Speaker: 1
01:21:41

And you know, this is what mathematicians do. And some artists do, actually.

Speaker: 0
01:21:47

It’s just incredible that given the extreme constraints, you could still say very powerful things. That’s why it’s inspiring. Looking back in history, how much can be figured out

Speaker: 1
01:21:58

Right.

Speaker: 0
01:21:58

When you don't have much.

Speaker: 1
01:22:00

Right.

Speaker: 0
01:22:00

Figure out stuff. If

Speaker: 1
01:22:01

you propose axioms, then the mathematics lets you, you know, follow those axioms to their conclusions. And sometimes you can get quite a long way from, you know, the initial hypotheses.

Speaker: 0
01:22:09

If we stay in the land of the weird, you mentioned general relativity. You've contributed to the mathematical understanding of Einstein's field equations. Can you explain this work? And, from a sort of mathematical standpoint, what aspects of general relativity are intriguing to you, challenging to you?

Speaker: 1
01:22:30

I have worked on some equations. There's something called the wave maps equation, or the sigma field model, which is not quite the equation of space-time gravity itself, but of certain fields that might exist on top of space-time. So Einstein's equations of relativity describe space and time itself. But then there are other fields that live on top of that.

Speaker: 1
01:22:51

There's the electromagnetic field, there are things called Yang-Mills fields. And there's this whole hierarchy of different equations, of which Einstein's is considered one of the most nonlinear and difficult. But relatively low in the hierarchy is this thing called the wave maps equation.

Speaker: 1
01:23:05

So it's a wave which, at any given point, is constrained to lie on a sphere. So you can think of a bunch of arrows in space and time, and the arrows are pointing in different directions, but they evolve like waves. If you wiggle an arrow, it'll propagate and make all the arrows move, kind of like sheaves of wheat in a wheat field.

Speaker: 1
01:23:26

And I was interested in the global regularity problem again for this, like, is it possible for all the energy here to collect at a point? The equation I considered was actually what's called a critical equation, where the behavior at all scales is roughly the same.

Speaker: 1
01:23:41

And I was barely able to show that you couldn't actually force a scenario where all the energy concentrated at one point. The energy had to disperse a little bit, and the moment it dispersed just a little bit, it would stay regular. Yeah. This was back in 2000. That was part of why I got interested in Navier-Stokes afterwards, actually. So I developed some techniques to solve that problem.

Speaker: 1
01:24:02

Part of it is that this problem is really nonlinear, because of the curvature of the sphere. There was a certain nonlinear effect, which was a nonperturbative effect. When you looked at it normally, it looked larger than the linear effects of the wave equation.

Speaker: 1
01:24:17

And so it was hard to keep things under control, even when your energy was small. But I developed what's called a gauge transformation. So the equation is kind of like an evolution of sheaves of wheat, and they're all bending back and forth, so there's a lot of motion.

Speaker: 1
01:24:32

But imagine stabilizing the flow by attaching little cameras at different points in space, which try to move in a way that captures most of the motion. And in this stabilized frame, the flow becomes a lot more linear. I discovered a way to transform the equation to reduce the amount of nonlinear effects.

Speaker: 1
01:24:52

And then I was able to solve the equation. I found this transformation while visiting my aunt in Australia, and I was trying to understand the dynamics of all these fields, and I couldn't do it on pen and paper. And I had no computer facilities to do any computer simulations. So I ended up closing my eyes and lying on the floor.

Speaker: 1
01:25:10

I just imagined myself to actually be the field, rolling around to try to see how to change coordinates in such a way that things in all directions would behave in a reasonably linear fashion. And, yeah, my aunt walked in on me while I was doing that, and she asked, what am I doing?

Speaker: 1
01:25:27

It's complicated to explain. Yeah. And, you know, she said, okay, fine. You know, you're a young man. I don't ask questions.

Speaker: 0
01:25:33

I have to ask, you know, how do you approach solving difficult problems? If it's possible to go inside your mind when you're thinking, are you visualizing in your mind the mathematical objects, symbols maybe? What are you visualizing in your mind usually when you're thinking?

Speaker: 1
01:25:56

A lot of pen and paper. One thing you pick up as a mathematician is, I call it cheating strategically. So, the beauty of mathematics is that you get to change the problem, change the rules as you wish. You don't get to do this in any other field.

Speaker: 1
01:26:12

Like, if you're an engineer and someone says, build a bridge over this river, you can't say, I want to build this bridge over there instead, or I want to build it out of paper instead of steel. But as a mathematician, you can do whatever you want. It's like trying to solve a computer game where there are unlimited cheat codes available.

Speaker: 1
01:26:31

And so, you know, you can say, there's a dimension that's large, I'll set it to one, I'll solve the one-dimensional problem first. There's a main term and an error term; I'm going to make a spherical cow assumption and set the error term to zero. And so the way you should solve these problems is not in this iron man mode where you make things maximally difficult.

Speaker: 1
01:26:50

But actually, the way you should approach any reasonable math problem is, if there are 10 things that are making your life difficult, find a version of the problem that turns off nine of the difficulties and only keeps one of them, and solve that. So you install nine cheats. Okay. If you install 10 cheats, then the game is trivial.

Speaker: 1
01:27:10

But if you install nine cheats, you solve one problem, and that teaches you how to deal with that particular difficulty. And then you turn that one off and you turn something else on, and then you solve that one. And after you know how to solve the 10 difficulties separately, then you have to start merging them a few at a time.

Speaker: 1
01:27:26

As a kid, I watched a lot of these Hong Kong action movies, from my culture. And one thing is that every time there's a fight scene, you know, maybe the hero gets swarmed by a hundred bad guy goons or whatever, but it will always be choreographed so that he's only fighting one person at a time, and then he defeats that person and moves on.

Speaker: 1
01:27:47

And because of that, he could defeat all of them. Right? Whereas if they had fought a bit more intelligently and, you know, just swarmed the guy at once, it would make for much, much worse cinema, but they would win.

Speaker: 0
01:28:01

Are you usually working with pen and paper? Are you working with a computer and LaTeX?

Speaker: 1
01:28:07

I'm mostly pen and paper, actually. In my office, I have four giant blackboards. And sometimes I just have to write everything I know about the problem on the four blackboards and then sit on my couch and just sort of see the whole thing.

Speaker: 0
01:28:19

Is it all symbols, like notation, or are there some drawings?

Speaker: 1
01:28:23

Oh, there's a lot of drawing and a lot of bespoke doodles that only make sense to me. And that's the beauty of blackboards: you erase, and it's a very organic thing. I'm beginning to use more and more computers, partly because AI makes it much easier to do simple coding things. You know, if I wanted to plot a function before, one which is moderately complicated, has some iteration or something, I'd have to remember how to set up a Python program, how a for loop works, debug it, and it would take two hours and so forth. And now I can do it in ten, fifteen minutes.

Speaker: 1
01:28:55

Yeah, I'm using more and more computers to do simple explorations.

Speaker: 0
01:29:01

Let's talk about AI a little bit if we could. So maybe a good entry point is just talking about computer-assisted proofs in general. Can you describe the Lean formal proof programming language, how it can help as a proof assistant, and maybe how you started using it and how it has helped you?

Speaker: 1
01:29:25

So Lean is a computer language, much like standard languages like Python and C and so forth, except that in most languages, the focus is on producing executable code. Lines of code do things. You know, they flip bits, or they make a robot move, or they deliver you text on the Internet or something.

Speaker: 1
01:29:43

So Lean is a language that can also do that. It can be run as a standard, traditional language, but it can also produce certificates. So software like Python might do a computation and give you that the answer is seven. Okay. It tells you the sum of three plus four is equal to seven.

Speaker: 1
01:29:59

But Lean can produce not just the answer, but a proof of how it got the answer of seven as three plus four, and all the steps involved. So it creates these more complicated objects, not just statements, but statements with proofs attached to them.
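
For instance, that three-plus-four example can be stated and certified in a single line (a minimal sketch in Lean 4 syntax):

```lean
-- The statement 3 + 4 = 7 together with its proof: `rfl` asks Lean to check
-- that both sides compute to the same value, and the kernel certifies it.
example : 3 + 4 = 7 := rfl
```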

Speaker: 1
01:30:15

And every line of code is just a way of piecing together previous statements to create new ones. The idea is not new. These things are called proof assistants, and they provide languages in which you can create quite complicated, intricate mathematical proofs.

Speaker: 1
01:30:30

And they produce these certificates that give a 100% guarantee that your arguments are correct, if you trust the compiler, obviously. But they made the compiler really small, and there are several different compilers available for the same language. Can

Speaker: 0
01:30:44

you give people some intuition about the difference between writing on pen and paper versus using the Lean programming language? How hard is it to formalize

Speaker: 1
01:30:54

Right.

Speaker: 0
01:30:55

a statement? So

Speaker: 1
01:30:56

with Lean, a lot of mathematicians were involved in the design of Lean. So it's designed so that individual lines of code resemble individual lines of mathematical argument. Like, you might want to introduce a variable, you might want to prove by contradiction. There are very standard things that you can do, and it's written so that it should be like a one-to-one correspondence.

Speaker: 1
01:31:16

In fact, it isn't, because Lean is like explaining a proof to an extremely pedantic colleague who will point out, okay, did you really mean this? What happens if this is zero? How do you justify this?

Speaker: 1
01:31:28

So Lean has a lot of automation in it to try to be less annoying. So, for example, every mathematical object has to come with a type. Like, if I talk about x, is x a real number, or a natural number, or a function, or something? If you write things informally, it's picked up from the context.

Speaker: 1
01:31:51

You say, you know, let x be the sum of y and z, and y and z were already real numbers, so clearly x should also be a real number. So Lean can do a lot of that, but every so often it says, wait a minute, can you tell me more about what this object is, what type of object it is?

Speaker: 1
01:32:07

You have to think more at a philosophical level, not just about the computations you're doing, but about what each object actually is, in some sense.
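
A small Lean sketch of what that looks like in practice: every object carries a type, and each line mirrors a line of the informal argument (the lemma name used here is a standard Mathlib one, but treat the snippet as illustrative):

```lean
import Mathlib

-- y and z are declared as real numbers, so Lean knows y + z : ℝ as well.
-- The pedantic part: to conclude 0 ≤ y + z we must cite exactly why,
-- here the standard fact that a sum of two nonnegative reals is nonnegative.
example (y z : ℝ) (hy : 0 ≤ y) (hz : 0 ≤ z) : 0 ≤ y + z :=
  add_nonneg hy hz
```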

Speaker: 0
01:32:16

Is it using something like LLMs to do the type inference, like you mentioned with the real number?

Speaker: 1
01:32:22

It's using much more traditional, what's called good old-fashioned AI. Yeah. You can represent all these things as trees, and there are algorithms to match one tree to another tree. So

Speaker: 0
01:32:30

it's actually doable to figure out if something is a real number or a natural number. Mhmm.

Speaker: 1
01:32:36

Every object sort of comes with a history of where it came from, and you can kind of trace it. Oh, I see. Yeah. So it's designed for reliability. Modern AIs are not used in it; it's a deterministic technology. People are beginning to use AIs on top of Lean. So when a mathematician tries to program a proof in Lean, often there's a step, okay.

Speaker: 1
01:32:55

Now I want to use the fundamental theorem of calculus, say, to do the next step. So the Lean developers have built this massive project called Mathlib, a collection of tens of thousands of useful facts about mathematical objects. And somewhere in there is the fundamental theorem of calculus, but you need to find it. So a lot of the bottleneck now is actually lemma search.

Speaker: 1
01:33:15

You know, there’s a tool that that you know is in there somewhere, and you need to find it. And so you can there are various search engines specialized for Meh that you can do. But there’s now these large language models that you can say, like, I need the fundamental calculus at this point. And it ai like, okay.

Speaker: 1
01:33:30

For example, when I code, I have GitHub Copilot installed as a plug-in to my editor. It scans my text and sees what I need. I can say, now I need to use the fundamental theorem of calculus, and then it might suggest: okay, try this.

Speaker: 1
01:33:46

And, like, maybe 25% of the time it works exactly. Another 15% of the time it doesn't quite work, but it's close enough that I can say, oh yeah, if I just change it here and here, it'll work. And then, like, half the time it gives me complete rubbish.

Speaker: 1
01:33:58

So people are beginning to use AIs a little bit on top, mostly at the level of fancy autocomplete: you can type half a line or one line of a proof, and it will

Speaker: 0
01:34:10

it’ll tell you. Yeah. But but a fancy, especially fancy with the sort of capital letter f is, remove some of the friction Yeah. Mathematician might feel when they move from pen and paper to formalizing. Yes.

Speaker: 1
01:34:23

Yeah. So right now, I estimate that the time and effort taken to formalize a proof is about 10 times the amount taken to write it out. So it's doable, but it's annoying.

Speaker: 0
01:34:35

But doesn't it kill the whole vibe of being a mathematician?

Speaker: 1
01:34:39

Yeah. I mean...

Speaker: 0
01:34:40

Having a pedantic coworker. Right. Yeah.

Speaker: 1
01:34:42

If that was the only aspect of it, okay. But there are some cases where it was actually more pleasant to do this formally. So there's a theorem I formalized, and there's a certain constant, 12, that came out of it in the final statement.

Speaker: 1
01:34:56

And so this 12 had to be carried all through the proof, and everything had to be checked to see that it goes through, that all these other numbers are consistent with this final number 12. So we wrote a paper proving this theorem with this number 12. And then a few weeks later, someone said, oh, we can actually improve this 12 to an 11 by reworking some of the steps.

Speaker: 1
01:35:12

And when this happens with pen and paper, every time you change a parameter you have to check line by line that every single line of your proof still works. And there can be subtle things you didn't quite realize, some properties of the number 12 that you didn't even realize you were taking advantage of.

Speaker: 1
01:35:26

So a proof can break down at a subtle place. Now, we had formalized the proof with this constant 12. When this new paper came out, we said, okay; it had taken, like, three weeks and about 20 people to formalize the original proof.

Speaker: 1
01:35:40

We said, okay, now let's update the twelve to eleven. And what you can do with Lean: in your main theorem, you change the 12 to 11. You run the compiler, and of the thousands of lines of code you have, 90% of them still work, and there are a couple that are flagged in red.

Speaker: 1
01:35:57

Now I can’t justify these these steps, but it immediately isolates which steps you need to change. But you can skip over everything which which works just fine. And if you program things correctly, with some good programming practices, most of your lines will not be read. And there’ll just be a few places where you I mean, if if you don’t hard code your constants, but you sort of, you use smart tactics and so you can you can localize, the things you need to change to to a very small, period of time.

Speaker: 1
01:36:24

So it’s ai within a day or two, we had updated our proof to because this is very quick process here. You make a change. There are 10 things now that don’t work. For each one, you you make a change, and now there’s ai more things that don’t work, but but the process converges much more smoothly than with pen and paper.

Speaker: 0
01:36:39

So that’s for writing. Are you able to read it? Like, if somebody else sends a proof, are you able to, like how what’s what’s the, versus paper and

Speaker: 1
01:36:47

Yeah. So the proofs are longer, but each individual piece is easier to read. If you take a math paper and you jump to page 27 and look at paragraph six, and you have a line of mathematical text, I often can't read it immediately, because it assumes various definitions which I have to go back and find, maybe 10 pages earlier.

Speaker: 1
01:37:09

The proof is scattered all over the place, and you're basically forced to read fairly sequentially. It's not like, say, a novel, where in theory you could open it up halfway through and start reading. There's a lot of context.

Speaker: 1
01:37:21

But with a proof in Lean, if you put your cursor on a line of code, every single object there you can hover over, and it will say what it is, where it came from, where it's justified. You can trace things back much more easily than flipping through a math paper. So one thing that Lean really enables is collaborating on proofs at a really atomic scale, which you really couldn't do in the past.

Speaker: 1
01:37:41

Traditionally, with pen and paper, when you want to collaborate with another mathematician, either you do it at a blackboard, where you can really interact, or, if you're doing it by email or something, you basically have to segment it: I'm going to finish section three.

Speaker: 1
01:37:56

You do section four. But you can't really work on the same thing collaboratively at the same time. With Lean, you can be trying to formalize some portion of the proof and say, oh, I got stuck at line 67 here; I need to prove this thing, but it doesn't quite work. Here are the three lines of code I'm having trouble with.

Speaker: 1
01:38:12

But because all the context is there, someone else can say, oh, okay, I recognize what you need to do; you need to apply this trick or this tool. You can have extremely atomic-level conversations. So because of Lean, I can collaborate with dozens of people across the world, most of whom I have never met in person.

Speaker: 1
01:38:29

And I may not even know how reliable they are in their proofs. But Lean gives me a certificate of trust, so I can do trustless mathematics.

Speaker: 0
01:38:42

So there’s so many interesting questions. There’s so one, you’re you’re known for being a great collaborator. So what is the right way to approach solving a difficult problem in mathematics when you’re collaborating? Are you doing a divide and conquer type of thing, or are you brains Are you focusing on a particular part and you’re brainstorming?

Speaker: 1
01:39:05

There’s always a brainstorming process Yeah. So math research projects sort of by their nature, when you start, you don’t really know how to do the problem. It’s not like an engineering project where some other theory has been established for decades and it’s it’s implementation is the main difficulty.

Speaker: 1
01:39:20

You have to figure out even what the right path is. This is what I said before about cheating. To go back to the bridge-building analogy: assume you have infinite budget and unlimited amounts of workforce and so forth. Now can you build this bridge? Okay.

Speaker: 1
01:39:36

Now you have infinite budget but only finite workforce; now can you do that? And so on. Of course, no engineer can actually do this; like I said, they have fixed requirements.

Speaker: 1
01:39:47

There’s this sort of jam sessions always at the beginning where you try all kinds of crazy things, and you you make all these assumptions that ai unrealistic, but you plan to fix later. And you try to see if there’s even some skeleton of an approach that might work. And then hopefully that breaks up the problem into smaller sub problems which you don’t know how to do but then you, you focus on on on the sub ones.

Speaker: 1
01:40:10

And sometimes different collaborators are better at working on certain things. So one of the theorems I'm known for is a theorem with Ben Green, which is called the Green-Tao theorem. It's the statement that the primes contain arithmetic progressions of any length. It built on existing results.

Speaker: 1
01:40:25

And the way we collaborated was that Ben had already proven a similar result for progressions of length three. He showed that sets like the primes contain lots and lots of progressions of length three, and even certain subsets of the primes do, but his techniques only worked for length-three progressions.

Speaker: 1
01:40:43

They didn’t work for longer progressions. But I had these techniques coming from ai godic theory, which is something that I had been playing with and and, and I knew better than Ben at the time. And so, if I could justify certain randomness properties of meh some set relating to ai.

Speaker: 1
01:40:58

Like, there’s there’s a certain technical condition which if I could have it if if Ben could supply me to this fact, I could give I could conclude the theorem. But I what I asked was a really difficult question in number theory, which, he sai, no. There’s no way we can prove this.

Speaker: 1
01:41:13

So he said, can you prove your part of the theorem using a weaker hypothesis that I have a chance of proving? And he proposed something which he could prove, but it was too weak for me; I couldn't use it. So there's this conversation going back and forth. Sort of different cheats, too. Yeah. I want to cheat more; he wants to cheat less. Yeah.

Speaker: 1
01:41:32

But eventually we found a property which, (a), he could prove and, (b), I could use, and then we could prove our result. So there are all kinds of dynamics. Every collaboration has some story.

Speaker: 1
01:41:49

No two are the same.

Speaker: 0
01:41:50

And then on the flip side of that, like you mentioned, with Lean programming, Mhmm, now that's almost like a different story, because you can create, I think you've mentioned, a kind of blueprint

Speaker: 1
01:42:02

Mhmm.

Speaker: 0
01:42:02

Right. For a problem, and then you can really do divide and conquer with Lean, where you're working on separate parts

Speaker: 1
01:42:10

Right.

Speaker: 0
01:42:10

And they’re using the computer system proof checker, essentially Yeah. To make sure that everything is correct along the way.

Speaker: 1
01:42:16

So it makes everything compatible and, yeah, trustable. So currently, only a few mathematical projects can be cut up in this way. At the current state of the art, most of the Lean activity is on formalizing proofs that have already been proven by humans. A math paper basically is a blueprint, in the sense that it takes a difficult statement, like a big theorem, and breaks it up into a hundred little lemmas, but often not all written with enough detail that each one can be directly formalized.

Speaker: 1
01:42:45

A blueprint is like a really pedantically written version of a paper, where every step is explained in as much detail as possible, trying to make each step self-contained, or depending only on a very specific number of previous statements that have been proven, so that each node of this blueprint graph that gets generated can be tackled independently of all the others.

Speaker: 1
01:43:08

And you don’t even need to know how the whole thing works. So it’s like a modern supply chain. You know, like, if you sana create an iPhone or or some other complicated object, no one person can can build a, a single object. But you can have specialists who who just if they’re given some widgets from some other company, they can combine them together to form a slightly bigger widget.

Speaker: 0
01:43:27

I think that’s a really exciting possibility because you can have if you can find problems that could be Right. Broken down this way, then you could have, you know, thousands of contributors. Right? Yes. Yes.

Speaker: 1
01:43:38

Yes. Yes. Distributed. So I told you before about the split between theoretical and experimental mathematics. And right now, most mathematics is theoretical, and only a tiny bit is experimental. I think the platform that Lean and other software tools, like GitHub and things like that, provide will allow experimental mathematics to scale up to a much greater degree than we can do now.

Speaker: 1
01:43:58

So right now, if you want to do any mathematical exploration of some mathematical pattern or something, you need some code to write out the pattern. Sometimes there are computer algebra packages that help, but often it's just one person coding lots and lots of Python or whatever.

Speaker: 1
01:44:14

And because coding is such an error-prone activity, it's not practical to let other people collaborate with you on writing modules for your code, because if one of the modules has a bug in it, the whole thing is unreliable. So you get this bespoke spaghetti code written not by professional programmers but by mathematicians, and it's clunky and slow.

Speaker: 1
01:44:38

And so because of that, it's hard to really mass-produce experimental results. But I think with Lean, I'm already starting some projects where we are not just experimenting with data but experimenting with proofs. So I have this project called the Equational Theories Project. Basically, we generated about 22,000,000 little problems in abstract algebra.

Speaker: 1
01:45:00

Maybe I should back up and tell you what the project is. So abstract algebra studies operations like multiplication and addition and their abstract properties. Multiplication, for example, is commutative: x times y is always y times x, at least for numbers. And it's also associative:

Speaker: 1
01:45:14

(x times y) times z is the same as x times (y times z). So these operations obey some laws and don't obey others; for example, x times x is not always equal to x, so that law is not always true. Given any operation, it obeys some laws and not others. And so we generated about 4,000 of these possible laws of algebra that certain operations can satisfy.

Speaker: 1
01:45:37

And our question is: which laws imply which other ones? For example, does commutativity imply associativity? And the answer is no, because it turns out you can describe an operation which obeys the commutative law but doesn't obey the associative law. So by producing an example, you can show that commutativity does not imply associativity.
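
A toy instance of that kind of implication question (a hypothetical operation, not one of the project's actual laws): the operation x ⋆ y = x * y + 1 on the natural numbers obeys the commutative law, but a single concrete counterexample shows that the associative law fails.

```lean
def star (x y : Nat) : Nat := x * y + 1

-- The commutative law holds for every x and y.
theorem star_comm (x y : Nat) : star x y = star y x := by
  unfold star
  rw [Nat.mul_comm]

-- The associative law fails at a concrete counterexample, so in this little
-- world commutativity does not imply associativity.
example : star (star 0 0) 1 ≠ star 0 (star 0 1) := by
  decide
```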

Speaker: 1
01:45:53

But some laws do imply other laws, by substitution and so forth, and you can write down an algebraic proof. So we looked at all the pairs between these 4,000 laws, and there are about twenty-two million of these pairs. For each pair, we ask: does this law imply that law? If so, give a proof. If not, give a counterexample. Mhmm.

Speaker: 1
01:46:12

So, 22,000,000 problems, each one of which you could give to an undergraduate algebra student, and they'd have a decent chance of solving it. Although there are a few, out of the 22,000,000, maybe a hundred or so, that are really quite hard. But a lot are easy.

Speaker: 1
01:46:25

And the project was just to determine the entire graph, which laws imply which other ones.

Speaker: 0
01:46:30

That’s an incredible project, by the way. Such a good idea, such a good test of the very thing we’ve been talking about at a scale that’s remarkable.

Speaker: 1
01:46:37

Yeah. So it would not have been feasible before. The state of the art in the literature was, like, 15 equations and how they relate to each other. That's at the limit of what a human with pen and paper can do. So you need to scale that up. You need to crowdsource, but you also need to trust all the contributions; I mean, no one person can check 22,000,000 of these proofs.

Speaker: 1
01:46:57

You need it to be computerized. And so it only became possible with Lean. We were hoping to use a lot of AI as well. So the project is almost complete: of these 22,000,000, all but two have been settled. Wow.

Speaker: 1
01:47:11

And actually, of those two, we have a pen-and-paper proof, and we're formalizing it. In fact, this morning I was working on finishing it. So we're almost done on this. It's incredible.

Speaker: 0
01:47:23

It’s yeah. Fantastic. How many people were able to get About

Speaker: 1
01:47:27

50, which in mathematics is considered a huge number.

Speaker: 0
01:47:30

It’s a huge number.

Speaker: 1
01:47:31

That’s

Speaker: 0
01:47:31

crazy. Yeah.

Speaker: 1
01:47:32

So we’re gonna have a paper with 50 authors, and a big appendix of who contributed what? Here’s an

Speaker: 0
01:47:38

interesting question, maybe to speak even more generally about it. When you have this pool of people, is there a way to organize the contributions by the level of expertise of all the contributors? Okay, I'm asking a lot of pothead questions here, but I'm imagining a bunch of humans and maybe, in the future, some AIs. Yeah.

Speaker: 0
01:47:59

Can there be, like, an ELO rating type of situation where,

Speaker: 1
01:48:05

like, a gamification of this? The beauty of these Lean projects is that you automatically get all this data. Everything's being uploaded to GitHub, which tracks who contributed what. So you could generate statistics at any later point in time.

Speaker: 1
01:48:20

You could say, oh, this person contributed this many lines of code or whatever. These are very crude metrics. I would definitely not want this to become part of your tenure review or something. But I think already in enterprise computing, people do use some of these metrics as part of the assessment of an employee's performance.

Speaker: 1
01:48:41

Again, this is a direction which is a bit scary for academics to go down. We don't like metrics so much.

Speaker: 0
01:48:48

And yet academics use metrics. They just use old ones. Number of papers.

Speaker: 1
01:48:56

Yeah. Yeah. It’s true. It’s true that yeah. I mean,

Speaker: 0
01:48:59

It feels like this is a metric that, while flawed, is going more in the right direction. Right? Yeah. At least it's a very interesting metric.

Speaker: 1
01:49:08

Yeah. I think it’s interesting to study. I mean, I think you can you can do studies of of whether these are better predictors. There’s this problem called Goodhart’s Sai. If a statistic is actually used to incentivize performance, it becomes gamed, and then it is no longer a useful measure.

Speaker: 0
01:49:22

Oh, humans always game it.

Speaker: 1
01:49:23

Yeah. Yeah. No. I mean, it’s it’s it’s rational. So what we’ve done for this project is is self report. So, there are actually these standard categories, from the sciences of what types of contributions people give. So this this concept and validation and resources and and and and encoding and and so So we we we there’s a standard list of pro or so categories.

Speaker: 1
01:49:43

And we just ask each contributor, in a big matrix of all the authors and all the categories, to tick the boxes where they think they contributed, and just give a rough accounting. Like, oh, you did some coding and you provided some compute, but you didn't do any pen-and-paper verification or whatever.

Speaker: 1
01:50:01

And I think that works out. Traditionally, mathematicians just order authors alphabetically by surname. We don't have this tradition, as in the sciences, of lead author and second author and so forth, which we're proud of; we make all the authors equal status. But it doesn't quite scale to this size. So a decade ago, I was involved in these things called Polymath projects.

Speaker: 1
01:50:21

It was crowdsourcing mathematics, but without the Lean component. So it was limited: you needed a human moderator to actually check that all the contributions coming in were valid, and this was a huge bottleneck, actually. But still, we had projects with, you know, 10 authors or so.

Speaker: 1
01:50:37

But we had decided at the time not to try to decide who did what, but to have a single pseudonym. So we created this fictional character called D.H.J. Polymath, a bit like Bourbaki; Bourbaki is the pseudonym for a famous group of mathematicians in the twentieth century. And so the paper was authored under the pseudonym, so none of us got the author credit.

Speaker: 1
01:51:00

This actually turned out to be not so great, for a couple of reasons. One is that if you actually wanted to be considered for tenure or whatever, you could not submit this paper as one of your publications, because you didn't have the formal author credit.

Speaker: 1
01:51:16

But the other thing that we’ve recognized meh later is that when people referred to these projects, they naturally referred to the most famous person who was involved in the project. Oh, yeah. So this was Tim Gallo’s playwright project. This sai Tim Ai playwright project and not meh the the other 19 or whatever people that were involved.

Speaker: 0
01:51:36

Yeah.

Speaker: 1
01:51:36

So we’re trying something different this time around where we have everyone’s an author, but we will have an an appendix with this matrix, and we’ll see how that works.

Speaker: 0
01:51:44

I mean, both are incredible, just the fact that you're involved in such huge collaborations. I saw a talk from Kevin Buzzard about the Lean programming language a few years ago, and he was saying that this might be the future of mathematics. So it's also exciting that you're embracing it, one of the greatest mathematicians in the world embracing what seems like the paving of the future of mathematics.

Speaker: 0
01:52:10

So I have to ask you here about the integration of AI into this whole process. DeepMind's AlphaProof was trained using reinforcement learning on both failed and successful formal Lean proofs

Speaker: 1
01:52:26

Mhmm.

Speaker: 0
01:52:26

of IMO problems. So this is sort of high-level high school. Oh, very high level. Yes. Very high-level high school mathematics problems. What do you think about the system, and what is the gap between this system, which is able to prove high-school-level problems,

Speaker: 1
01:52:42

Right.

Speaker: 0
01:52:43

versus graduate-level problems? Yeah.

Speaker: 1
01:52:46

The difficulty increases exponentially with the number of steps involved in the proof; it's a combinatorial explosion. The thing about large language models is that they make mistakes. So if a proof has got 20 steps and your LLM has a 10% failure rate at each step of going in the wrong direction, it's just extremely unlikely to actually reach the end.
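
As a rough back-of-the-envelope illustration (assuming, for simplicity, that the errors at each step are independent): with a 10% failure rate per step, the chance of getting all 20 steps right is about 0.9^20 ≈ 0.12, roughly one attempt in eight, and that figure shrinks exponentially as proofs get longer.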

Speaker: 0
01:53:09

Actually, just to take a small tangent here: how hard is the problem of mapping from natural language to a formal program?

Speaker: 1
01:53:18

Oh, yeah. It’s extremely hard, actually. Natural language, you know, it’s very fault tolerant. Like, you can make a few minor grammatical errors and a speaker in the language can get some idea of what you’re saying. Yeah. But but formal language, yeah, you know, if if you get one little thing wrong, I think that the whole thing is is is is nonsense. Got it.

Speaker: 1
01:53:36

Even formal to formal is very hard. There are different, incompatible proof-assistant languages: there's Lean, but also Coq and Isabelle and so forth. Even converting from one formal language to another is basically an unsolved problem.

Speaker: 0
01:53:51

That is fascinating. Okay. So once you have it in a formal language, they're using their RL-trained model, something akin to the AlphaZero they used in Go, to then try to come up with proofs. They also have a model, I believe it's a separate model, for geometric problems.

Speaker: 0
01:54:11

So what impresses you about the system, and, what do you think is

Speaker: 1
01:54:17

the gap? Yeah. We talked earlier about how things that are amazing over time become kind of normalized. So, yeah, now somehow, of course, geometry is

Speaker: 0
01:54:25

a solved problem. Right. That's true. I mean, it's still beautiful.

Speaker: 1
01:54:29

Yeah. Yeah. No. It’s it’s it’s a great work that shows what’s possible. I mean, it’s it, the approach doesn’t scale currently. Yeah. Three days of Google’s server is server time to solve one high school math ram there. This this is not a scalable speak, especially with the exponential increase in, as as their complexity increases.

Speaker: 0
01:54:49

Which means that they got a silver medal performance.

Speaker: 1
01:54:52

The equivalent of. I mean, yeah, it's the equivalent of a silver medal performance. First of all, they took way more time than was allotted, and they had assistance: the humans helped by formalizing. But also, the solutions were given full marks, which I guess is formally justified. So I guess that's fair.

Speaker: 1
01:55:09

There are efforts, there will be a proposal at some point, to actually have an AI Math Olympiad where, at the same time as the human contestants get the actual Olympiad problems, AIs will also be given the same problems, in the same time period, and the outputs will have to be graded by the same judges.

Speaker: 1
01:55:31

Which means that they will have to be written in natural language rather than formal language.

Speaker: 0
01:55:37

Yeah, I hope that happens. I hope it happens at this IMO, or the next one.

Speaker: 1
01:55:41

It won’t happen this IMO. The performance is not good enough in in in the time period. And and, but there are smaller competitions. There are competitions where the the answer is a is a number rather than a a long form proof. And that’s that’s, AI is actually a lot better at, problems where there’s a specific numerical answer, because it’s it’s it’s easy to to to, to reinforce to reinforce some learning on it.

Speaker: 1
01:56:06

You got the right answer, you got the wrong answer; it's a very clear signal. But a long-form proof either has to be formal, and then Lean can give a thumbs up or thumbs down, or it's informal.

Speaker: 1
01:56:16

But then you need a human to evaluate it. Mhmm. And if you're trying to do billions of these reinforcement learning examples, you can't hire enough humans to evaluate them. It's already hard enough, as it is, to do reinforcement learning on just the regular text that people generate.

Speaker: 1
01:56:36

But to now hire people not just to give thumbs up or thumbs down, but to actually check the output mathematically? Yeah, that's too expensive.

Speaker: 0
01:56:45

So if we just explore this possible future: what is the thing that humans do that's most special in mathematics, that you could see AI not cracking for a while? Inventing new theories? Coming up with new conjectures versus proving the conjectures? Right.

Speaker: 0
01:57:07

Building new abstractions, new representations, maybe seeing new connections between disparate fields?

Speaker: 1
01:57:17

That’s a good question. I I think the nature of what mathematicians do over time has changed a lot. You know? So a thousand years ago, mathematicians had to compute the date of Easter, and those really complicated, calculations, you know, but it’s all automated been automated for centuries.

Speaker: 1
01:57:32

Meh don’t need that anymore. You know? They used to navigate to do circle navigation circle trigonometry to navigate how to get from from, the old world to the new or something. A very complicated conversation. Again, we’d bryden automated.

Speaker: 1
01:57:43

Even a lot of undergraduate mathematics: even before AI, Wolfram Alpha, for example, is not a language model, but it can solve a lot of undergraduate-level math tasks. So on the computational side, verifying routine things, like having a problem and saying, here's a problem in partial differential equations:

Speaker: 1
01:58:02

could you solve it using any of the 20 standard techniques? And it says, yes, I've tried all 20, here are the 100 different permutations, and here are my results. That type of thing, I think, will work very well.

Speaker: 1
01:58:13

That type of scaling, where once you solve one problem you make the AI attack a hundred adjacent problems. As for the things that humans still do: where the AI really struggles right now is knowing when it's made a wrong turn. It can say, oh, I'm going to solve this problem, I'm going to split it up into these two cases, I'm going to try this technique.

Speaker: 1
01:58:38

And, sometimes if you’re lucky and it’s a simple problem, it’s the right technique and you solve the problem. And sometimes it will meh it will have a problem with it it would propose an approach which is just complete nonsense. And but, like, it looks like a proof. So this is one annoying thing about L. M. Generated mathematics.

Speaker: 1
01:58:56

So, we we’ve we’ve had human generated mathematics as a very low quality, like, you know, submissions where we who don’t have the formal training and so But if a human proof is bad, you can tell it’s bad pretty quickly. It makes really basic mistakes. But the AI generated proofs, they can look superficially flawless.

Speaker: 1
01:59:16

And it’s partly because that’s what the reinforcement learning has, like, you train them to do, to to make things to to produce text that looks ai, what is correct, which for many applications is good enough. So the error is often really subtle. And then when you spot them, they’re they’re really stupid. Like, you know, like, no human would have actually made that mistake.

Speaker: 0
01:59:35

Yeah. It’s actually really frustrating in the programming context ai I I program a lot and, yeah, when a human makes when a low quality code, there’s something called code smell. Right? You can tell. You can tell. They immediately, like

Speaker: 1
01:59:48

I know.

Speaker: 0
01:59:48

Yeah. There’s signs. But with with the ad generate code

Speaker: 1
01:59:51

Code of us.

Speaker: 0
01:59:52

And then you’re right. Yeah. Eventually, you find an obvious, dumb thing that just looks Yeah. Like good code.

Speaker: 1
01:59:59

Yeah. So, It’s

Speaker: 0
02:00:00

very tricky, and frustrating for some reason. Yeah.

Speaker: 1
02:00:03

So the sense of smell. Okay. Yes. This is one thing that humans have, and there's a metaphorical mathematical smell that it's not clear how to get the AIs to duplicate. Eventually, I mean, the way AlphaZero and so forth made progress on Go and chess is that, in some sense, they developed a sense of smell for Go and chess positions.

Speaker: 1
02:00:29

They know that this position is good for white, or good for black. They can't enunciate why, but just having that sense of smell lets them strategize. So if AIs gained that ability to assess the viability of certain proof strategies, so you can say, I'm going to try to break up this problem into two smaller subtasks, then it can say, oh, this looks good.

Speaker: 1
02:00:52

The two tasks look like they're simpler than your main task, and they've still got a good chance of being true, so this is good to try. Or: no, you've made the problem worse, because each of the two subproblems is actually harder than your original problem, which is what normally happens if you try a random thing.

Speaker: 1
02:01:07

Normally, you actually it’s very easy to transform a problem into an even harder problem. Very rarely do you problem transport a simpler problem. Yeah. So if they can pick up a sense of smell, then they could maybe start competing with, human level methodicians.

Speaker: 0
02:01:23

So this is a hard question, but not competing, collaborating. Yeah. Okay, hypothetical: if I gave you an oracle that was able to do some aspect of what you do, and you could just collaborate with it. Yeah.

Speaker: 1
02:01:36

Yeah. Yeah.

Speaker: 0
02:01:37

What would you like that oracle to be able to do? Would you like it to maybe be a verifier, like, check the code smell: yes, Professor Tao, this is a promising, fruitful direction?

Speaker: 1
02:01:54

Yeah. Yeah. Yeah.

Speaker: 0
02:01:55

Or would you like it to generate possible proofs, and then you see which one is the right one? Or would you like it to maybe generate different representations, totally different ways of seeing the problem? Yeah.

Speaker: 1
02:02:10

I think all of the above. A lot of it is that we don't know how to use these tools, because it's a paradigm we have not had in the past: systems that are competent enough to understand complex instructions, that can work at massive scale, but are also unreliable; unreliable in subtle ways whilst providing superficially good output.

Speaker: 1
02:02:37

It’s a interesting combination. You know, I mean, you have re you have, like, graduate students that you work with who’ve, like, kinda like this, but not at scale. You know, and and and we had previous software tools that, can work at scale, but but very narrow. So we have to figure out how to how to use I mean, so Tim Keller ai actually you mentioned he actually foresaw, like, in in February, he was envisioning what mathematics would look like in in actually two and a half decades.

Speaker: 1
02:03:06

That’s funny. And That’s funny. Yeah. He he wrote in his in his arya, like ai a a hypothetical conversation between a mathematical assistant of the future, and himself. You know, he’s trying to solve a problem.

Speaker: 1
02:03:18

And they would have a conversation where sometimes the human would propose an idea and the AI would evaluate it, and sometimes the AI would propose an idea. And sometimes a computation was required, and the AI would just go and say, okay, I've checked the 100 cases needed here. Or: you stated the claim for all n;

Speaker: 1
02:03:37

I’ve checked the n up to 100, and it looks good so far. Or hang on. There’s a problem at n equals 46. And so just a free form conversation where you don’t know in advance where things are gonna go, but just based on on I think ideas are good proposed on both sides, calculations are good proposed on both sides.

Speaker: 1
02:03:53

I’ve had conversations with AI where I say, okay. Let’s we’re gonna collaborate to solve this math problem, and it’s a problem that Ai already know a solution to. So I I try to prompt it. Okay. So here’s the problem. I I suggest using this tool.

Speaker: 1
02:04:03

And it'll find this lovely argument using a completely different tool, which eventually goes into the weeds. I say, no, no, no, use this one, and it might start using it, and then it'll drift back to the tool it was trying before.

Speaker: 1
02:04:14

And you have to keep railroading it onto the path you want. I could eventually force it to give the proof I wanted, but it was like herding cats, and the amount of personal effort I had to take to not just prompt it but also check its output, because a lot of it looks like it's going to work,

Speaker: 1
02:04:32

I know there’s a problem on ai. And I basically arguing with it. Ai, it was more exhausting than doing it, unassisted. Sai, like, it but that’s the current state of the art.

Speaker: 0
02:04:43

I wonder if there’s there’s a phase shift that happens to where it’s no longer feels like herding vatsal. And maybe it’ll surprise us how quickly that ai.

Speaker: 1
02:04:54

I believe so. In formalization, I mentioned before that it takes 10 times longer to formalize a proof than to write it by hand. With these modern AI tools, and also just better tooling, the Lean developers are doing a great job adding more features and making it user-friendly, it's going from nine to eight to seven. Okay.

Speaker: 1
02:05:14

No big deal. But one day it'll drop below one, and that's a phase shift. Because suddenly it makes sense, when you write a paper, to write it in Lean, or through a conversational AI that's generating Lean on the fly with you. And it becomes natural for journals to accept it, and maybe they'll offer expedited refereeing.

Speaker: 1
02:05:36

If a paper has already been formalized in Lean, they'll just ask the referee to comment on the significance of the results and how they connect to the literature, and not worry so much about the correctness, because that's been certified. Papers are getting longer and longer in mathematics, and it's harder and harder to get good refereeing for the really long ones unless they're really important.

Speaker: 1
02:05:57

It is actually an issue, and formalization is coming in at just the right time for this.

Speaker: 0
02:06:03

And as it gets easier and easier because of the tooling and all the other factors, you're going to see much more of it; like, Mathlib will grow, potentially exponentially. It's a virtuous cycle. Okay.

Speaker: 1
02:06:15

I mean, one phase shift of this type that happened in the past was the adoption of LaTeX. So LaTeX is a typesetting language that all mathematicians use now. In the past, people used all kinds of word processors and typewriters and whatever. But at some point, LaTeX became easier to use than all the competitors, and people just switched within a few years.

Speaker: 1
02:06:34

It was just a dramatic phase shift.

Speaker: 0
02:06:37

It’s a wild out there question, but what what year? How far away are we from a, AI system being a collaborator on a proof that wins the Fields medal. So that level.

Speaker: 1
02:06:54

Okay. Well, it depends on the level of collaboration, I mean.

Speaker: 0
02:06:57

No, like, it deserves to get the Fields Medal. Like, half and half.

Speaker: 1
02:07:02

Already, I can imagine a medal-winning paper having some AI assistance in its writing, you know, like the autocomplete alone. I already use it; it speeds up my own writing. You can have a theorem and you have a proof, and the proof has three cases.

Speaker: 1
02:07:19

I write down the proof of the first case, and the autocomplete suggests, oh, here's how the proof of the second case could work. And it was exactly correct. That was great; it saved me, like, ten minutes of typing.

Speaker: 0
02:07:30

But in that case, the AI system doesn't get the Fields Medal. No. Are we talking twenty years, fifty years, a hundred years? What do you think?

Speaker: 1
02:07:41

Okay. So I gave a prediction in print: by 2026, which is now next year, there will be math collaborations with AI. So not Fields Medal winning, but, like, actual research-level math.

Speaker: 0
02:07:53

Published ideas that, yeah, were in part generated by AI.

Speaker: 1
02:07:57

Maybe not the ideas, but at least some of the computations, the verifications. Yeah.

Speaker: 0
02:08:04

Has that already happened?

Speaker: 1
02:08:04

That’s already happened. Yeah. There there are there are problems that were solved, by a complicated process conversing with with AI to propose things and then the human goes and tries it and and then the contract doesn’t work, but the the it might propose a different idea.

Speaker: 1
02:08:19

It it’s it’s hard to disentangle exactly. There are certainly math results which could only have been accomplished because there was a math math human mathematician and an AI involved. But it’s hard to sort of disentangle credit. I mean, these tools, they they do not, replicate all the skills needed to do mathematics, but they can replicate sort of some nontrivial percentage of them. You know, 40%.

Speaker: 1
02:08:48

So they can fill in gaps. Coding is a good example. It's annoying for me to code in Python; I'm not a native, professional programmer.

Speaker: 1
02:09:01

But with AI, the friction cost of doing it is much reduced, so it fills in that gap for me. AI is also getting quite good at literature review. I mean, there's still a problem with hallucinating references that don't exist, but this, I think, is a solvable problem, if you train it in the right way and verify using the Internet.

Speaker: 1
02:09:29

You should, in a few years, get to the point where you have a lemma that you need and you ask, has anyone proven this lemma before? And it will basically do a fancy web search and say, yeah, there are these six papers where something similar has happened.

Speaker: 1
02:09:45

You can ask it right now, and it'll give you six papers, of which maybe one is legitimate and relevant, one exists but is not relevant, and four are hallucinated. It has a nonzero success rate right now, but there's so much garbage, the signal-to-noise ratio is so poor, that it's most helpful when you already somewhat know the literature.

Speaker: 1
02:10:07

And you just need to be prompted to be reminded of a paper that was really subconsciously in your memory.

Speaker: 0
02:10:13

Or it’s just helping you discover new you were not even aware of, but is the correct citation.

Speaker: 1
02:10:18

Yeah. That’s yeah. That it can sometimes do, but but when it does, it’s it’s buried in in a list of options to which the other bad.

Speaker: 0
02:10:26

Yeah. I mean, being able to automatically generate a related-work section that is correct, that's actually a beautiful thing. That might be another phase shift, because it assigns credit correctly. It breaks you out of the silos of

Speaker: 1
02:10:40

Yeah. Yeah. Yeah. No. A lot.

Speaker: 0
02:10:41

You know?

Speaker: 1
02:10:41

Yeah. No. But it it there’s a big hump to overcome right now. I mean, it’s it’s it’s like self driving cars. Right. The safety margin has to be really high Yeah. For it to be, to be feasible. So yeah. So there’s a last mile problem, with a lot of AI applications, that, you know, they can do their tools that work 20%, 80% of the time, but it’s still not good enough, and in fact, even worse than good in some ways.

Speaker: 0
02:11:07

Another way of asking the Fields Medal question is: what year do you think you'll wake up and be, like, really surprised? You read the headline, the news, something happened, AI did a real breakthrough. It doesn't have to be the Fields Medal; it could be really just this AlphaZero moment with Go, that kind

Speaker: 1
02:11:31

of thing. Right. Yeah. This decade, I can see it making a conjecture between two things that people thought were unrelated.

Speaker: 0
02:11:42

Oh, interesting. Generating a conjecture, that’s a beautiful conjecture.

Speaker: 1
02:11:45

Yeah. And one that actually has a real chance of being correct and meaningful.

Speaker: 0
02:11:50

Because that’s actually kind of doable, I suppose. But the where the data is is yeah. Yeah. No. That would be truly amazing. Yeah.

Speaker: 1
02:11:58

The current models struggle a lot. A version of this is: physicists have a dream of getting the AI to discover new laws of physics. The dream is you just feed it all this data and it says, here is a new pattern that we didn't see before.

Speaker: 1
02:12:14

But the current state of the art even struggles to discover old laws of physics from the data. Or if it does, there's a big concern of contamination: that it did it only because somewhere in its training data it already knew, you know, Boyle's law or whatever law you're trying to reconstruct.

Speaker: 1
02:12:32

Part of it is we don’t have the right type of training data for this. Yes. So for laws of physics, like, we we don’t have, like, a million different universes with a million different balls of nature. And, like, a lot of what we’re missing in math is actually the negative space of so we have published things of things that people have been able to prove, and conjectures that ended up being verified, or would be counterexamples produced.

Speaker: 1
02:12:59

But, we don’t have data on on things that were proposed, and they’re kind of a good thing to try. But then people quickly ai that it was the wrong conjecture, and then they they sai, oh, but we we should actually change, our claim to modify it in in this way to actually make it more plausible.

Speaker: 1
02:13:14

There’s this there’s a trial and error process, which is a real integral part of human mathematical discovery, which we don’t record because it’s it’s embarrassing. We make mistakes and and we only like to publish our our wins. And, the AI has no access to this data to to train on.

Speaker: 1
02:13:31

I sometimes joke that, basically, an AI has to go through grad school: actually go to grad courses, do the assignments, go to office hours, make mistakes, get advice on how to correct the mistakes, and learn from that.

Speaker: 0
02:13:47

Let me ask you, if I may, about Grigori Perelman. Mhmm. You mentioned that you try to be careful in your work and not let a problem completely consume you, just because you've really fallen in love with the problem and really cannot rest until you solve it. But you also hastened to add that sometimes this approach actually can be very successful. Mhmm.

Speaker: 0
02:14:07

An example you gave is Grigori Perelman, who proved the Poincaré conjecture and did so by working alone for seven years with basically little contact with the outside world. Can you explain this one Millennium Prize problem that's been solved, the Poincaré conjecture, and maybe speak to the journey that Grigori Perelman has been on?

Speaker: 1
02:14:31

All right. So it’s, it’s a question about curved spaces. Earth is a good example. Sai I ai you think it was a two d surface. Interesting being round, you know, it could maybe be a torus with a hole in it or it could have many holes. And there are many different topologies a priori that that a surface could have, even if you assume that it’s it’s bounded and and, and and smooth and so So we we have figured out how to classify surfaces.

Speaker: 1
02:14:52

As a first approximation, everything's determined by the genus, how many holes it has. So the sphere has genus zero, a doughnut has genus one, and so forth. And one way you can tell these surfaces apart: the sphere has a property called being simply connected. If you take any closed loop on the sphere, like a big closed loop of rope, you can contract it to a point while staying on the surface.

Speaker: 1
02:15:11

The sphere has this property, but a torus doesn't. If you're on a torus and you take a rope that goes around, say, the outer diameter of the torus, it can't get through the hole; there's no way to contract it to a point. So it turns out that the sphere is the only surface with this property of contractibility, up to continuous deformations of the sphere.

Speaker: 1
02:15:32

So, anything that you would call topologically equivalent to the sphere. Poincaré asked the same question in higher dimensions, for a curved three-dimensional space. This becomes hard to visualize, because a surface you can think of as embedded in three dimensions, but for a curved three-space we don't have good intuition of a four-dimensional space for it to live in.

Speaker: 1
02:15:49

And there are also three-dimensional spaces that can't even fit into four dimensions; you need five or six or higher. But anyway, mathematically you can still pose the question: if you have a bounded three-dimensional space which also has this simply connected property, that every loop can be contracted, can you turn it into a three-dimensional version of the sphere?

Speaker: 1
02:16:06

And so this is the Poincaré conjecture. Weirdly, in higher dimensions, four and five and beyond, it was actually easier, so it was solved in higher dimensions first. There's somehow more room to do the deformation; it's easier to move things around to a sphere. But three was really hard. So people tried many approaches.

Speaker: 1
02:16:23

There’s sort of commentary approaches where you chop up the the surface into little triangles or or tetrahedra, and you you you just try to argue based on how the faces interact each other. There were, algebraic approaches. There’s there’s various algebraic objects, ai things called the fundamental group that you can attach to these homology and cohomology and and and and all these very fancy tools.

Speaker: 1
02:16:44

They also didn’t quite work. But Richard Hamilton’s proposed a, partial differential equations approach. So you take, you take so the problem is that you so you have this object which is sai secret is a sphere, but it’s given to you in a in a really, in a in a real way. So it’s like I think a ball that’s speak of crumpled up and twisted, and it’s not obvious that it’s a ball.

Speaker: 1
02:17:07

But if you have some sort of surface which is a deformed sphere, you could think of it as the surface of a balloon, and you could try to inflate it; you blow it up. And naturally, as you fill it with air, the wrinkles will smooth out and it will turn into a nice round sphere.

Speaker: 1
02:17:28

Unless, of course, it was a torus or something, in which case it would get stuck at some point. If you inflate a torus, there'll be a point in the middle: when the inner ring shrinks to zero, you get a singularity, and you can't blow up any further. You can't flow any further.

Speaker: 1
02:17:41

So he created this flow, which is now called Ricci flow, which is a way of taking an arbitrary surface or space and smoothing it out to make it rounder and rounder, to make it look like a sphere. And he wanted to show that either this process would give you a sphere or it would create a singularity.
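
For reference, the standard form of Hamilton's equation (not stated explicitly in the conversation) evolves the metric g of the space in the direction of its Ricci curvature,

$$\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij},$$

so positively curved regions contract and the geometry becomes rounder as it flows.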

Speaker: 1
02:17:56

Very much like how PDEs either have global regularity or finite-time blow-up; basically, it's almost exactly the same thing. It's all connected. And he showed that for two dimensions, two-dimensional surfaces, if you are simply connected, no singularities ever formed.

Speaker: 1
02:18:13

You never run into trouble; you can flow, and it will give you a sphere. So he got a new proof of the two-dimensional result.

Speaker: 0
02:18:20

But by the way, that’s a beautiful explanation where we should flow in its application and its context. How difficult is the mathematics here, like, for the two d case? Is it

Speaker: 1
02:18:27

Yeah. These are quite sophisticated equations, on par with the Einstein equations; slightly simpler, but they were considered hard, nonlinear equations to solve. And there are lots of special tricks in two d that helped. But in three d, the problem was that this equation was actually supercritical. It has the same problems as Navier-Stokes: as you blow up, maybe the curvature gets concentrated in smaller and smaller regions, and it looks more and more nonlinear, and things just look worse and worse.

Speaker: 1
02:18:57

And there could be all kinds of singularities that showed up. There are these things called neck pinches, where the surface behaves like a barbell and it pinches at a point. Some singularities are simple enough that you can see what to do next.

Speaker: 1
02:19:14

You just make a snip, and then you can turn one surface into two and evolve them separately. But there was the prospect that some really nasty, knotted singularities would show up that you couldn't see how to resolve in any way, that you couldn't do any surgery to.

Speaker: 1
02:19:30

So you need to classify all the singularities: what are all the possible ways that things can go wrong? What Perelman did was, first of all, he turned the problem from a supercritical problem into a critical problem. I said before how the discovery of energy, of the Hamiltonian, really clarified Newtonian mechanics.

Speaker: 1
02:19:50

So he introduced something which is now called Perelman reduced volume and Perelman entropy. He introduced new quantities, like energy, that look the same at every single scale and turn the problem into a critical one, where the nonlinearities suddenly looked a lot less scary than they did before.

Speaker: 1
02:20:06

And then he still had to analyze the singularities of this critical problem. That itself was a problem similar to ones I had worked on, actually, so on that level of difficulty. He managed to classify all the singularities of this problem and show how to apply surgery to each of them, and through that he was able to resolve the Poincaré conjecture.

Speaker: 1
02:20:25

So, a lot of really ambitious steps, and nothing that a large language model today, for example, could do. At best, I could imagine a model proposing this idea as one of hundreds of different things to try, but the other 99 would be complete dead ends, and you'd only find out after months of work.

Speaker: 1
02:20:46

He must have had some sense that this was the right track to pursue, because it takes years to get from A to B.

Speaker: 0
02:20:54

So you’ve done, like you said, actually, you see even strictly mathematically, but more broadly in terms of the process, you’ve done similarly difficult things. What what can you infer from the process he was going through? Because he was doing it alone. What are some low points in a process like that?

Speaker: 0
02:21:12

When you start to, like you’ve mentioned hardship, like, AI doesn’t know when it’s failing. What what happens to you? You’re sitting in your office when you realize the thing you did for the last few days, maybe weeks, Yeah. Is a failure.

Speaker: 1
02:21:27

Well, for me, I switch to a different problem. As I said, I'm a fox, not a hedgehog.

Speaker: 0
02:21:32

But generally, that is a break you can take, to step away and look at a different problem. Yeah.

Speaker: 1
02:21:38

You can modify the problem too. You can cheat a bit. If there's a specific thing that's blocking you, some bad case keeps showing up for which your tool doesn't work, you can just assume by fiat that this bad case doesn't occur.

Speaker: 1
02:21:53

So you do some magical thinking, but strategically, to see if the rest of the argument goes through. If there are multiple problems with your approach, then maybe you just give up. But if this is the only problem and everything else checks out, then it's still worth fighting. So, yeah.

Speaker: 1
02:22:12

You have to do some sort of forward reconnaissance sometimes, you know, and

Speaker: 0
02:22:17

that is sometimes productive: to assume, okay, we'll figure it out. Oh, yeah. Yeah. Eventually.

Speaker: 1
02:22:23

Actually, it's even productive to make mistakes. There's a project which we actually won some prizes for. There were four other people; we worked on this PDE problem, again this blow-up regularity type problem. And it was considered very hard.

Speaker: 1
02:22:40

Jean Bourgain, who was another Fields Medalist, had worked on a special case of this, but he could not solve the general case. We worked on this problem for two months, and we thought we had solved it. We had this cute argument into which everything fit, and we were planning a celebration, to all get together and have champagne or something.

Speaker: 1
02:22:59

Then we started writing it up, and one of us, not me actually, but another coauthor, said: oh, in this lemma here, we have to estimate these 13 terms that show up in this expansion. We estimated 12 of them, but in our notes I can't find the estimation of the last one. Can someone supply that? And I said, sure.

Speaker: 1
02:23:18

I'll look at this. And, yeah, we had completely omitted this term. And this term turned out to be worse than the other 12 terms put together. In fact, we could not estimate this term. We tried for a few more months, all different permutations, and there was always this one term that we could not control.

Speaker: 1
02:23:34

So this was very frustrating. But because we had already invested months and months of effort in this, we stuck at it and tried increasingly desperate and crazy things. And after two years, we found an approach which differed quite a bit from our initial strategy, which didn't generate these problematic terms and actually solved the problem.

Speaker: 1
02:23:58

So we solved the problem after two years. But if we hadn't had that initial taste of nearly solving the problem, we would have given up by month two or something and worked on an easier problem. If we had known it would take two years, I'm not sure we would have started the project. Yeah.

Speaker: 1
02:24:14

Sometimes having incorrect information actually helps. It's like Columbus sailing for the New World: he had an incorrect measurement of the circumference of the Earth. He thought he was going to find a new trade route to India, or at least that was how he sold it in his prospectus.

Speaker: 1
02:24:28

I mean, it could be that he actually secretly knew the truth. But

Speaker: 0
02:24:31

Just on the psychological element, do you have emotional lows, or self-doubt that just overwhelms you in moments like that? Because this stuff, it feels like math is so engrossing that it can break you. When you invest so much of yourself in the problem and then it turns out wrong, you could start to... In a similar way, chess has broken some people.

Speaker: 1
02:24:58

Yeah. I think different mathematicians have different levels of emotional investment in what they do. I think for some people, it's a job. You have a problem, and if it doesn't work out, you go on to the next one. So the fact that you can always move on to another problem reduces the emotional connection. I mean, there are cases, you know...

Speaker: 1
02:25:21

There are certain problems that are, what do you call them, mathematical diseases, where people just latch on to that one problem and spend years and years thinking about nothing but that one problem. And maybe their career suffers and so forth, but they say, oh, I'll get this big win.

Speaker: 1
02:25:36

This will, once I finish this problem, make up for all the years of lost opportunity. And occasionally it works. But I really don't recommend it unless you have the right fortitude. Yeah.

Speaker: 1
02:25:54

So I've never been super invested in any one problem. One thing that helps is that we don't need to call our shots in advance. When we write grant proposals, we just say we will study this set of problems. We don't promise that, definitely, in five years I will supply a proof of all these things. You know?

Speaker: 1
02:26:13

You promise to make some progress or discover some interesting phenomena. And maybe you don't solve the problem, but you find some related problem that you can say something new about. And that's a much more feasible task.

Speaker: 0
02:26:26

But I'm sure for you there are problems like this. You have made so much progress toward the hardest problems in the history of mathematics. So is there a problem that just haunts you? It sits there in the dark corners: you know, the twin prime conjecture, the Riemann hypothesis, the Goldbach conjecture.

Speaker: 0
02:26:47

Twin primes,

Speaker: 1
02:26:48

those sound... Look, again, I mean, problems like the Riemann hypothesis are so far out of reach.

Speaker: 0
02:26:53

You think so?

Speaker: 1
02:26:55

Yeah. There's not even a viable strategy. Like, even if I activate all the cheats that I know of in this book, there's just still no way to get from A to B. I think it needs a breakthrough in another area of mathematics to happen, and for someone to recognize that it would be a useful thing to transport into this problem.

Speaker: 0
02:27:18

So we should maybe step back for a little bit and just talk about prime numbers.

Speaker: 1
02:27:22

Okay.

Speaker: 0
02:27:22

So they're often referred to as the atoms of mathematics. Can you just speak to the structure that these atoms provide?

Speaker: 1
02:27:30

The natural numbers have two basic operations attached to them: addition and multiplication. So if you want to generate the natural numbers, you can do one of two things. You can just start with one and add one to itself over and over again, and that generates you the natural numbers.

Speaker: 1
02:27:42

So, additively, they're very easy to generate: one, two, three, four, five. Or, if you want to generate them multiplicatively, you can take all the prime numbers, two, three, five, seven, and multiply them together, and that gives you all the natural numbers, except maybe for one. So there are these two separate ways of thinking about the natural numbers.

Speaker: 1
02:28:00

There's an additive point of view and a multiplicative point of view. And separately, they're not so bad. Any question about the natural numbers that only involves addition is relatively easy to solve, and any question that only involves multiplication is relatively easy to solve.
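
To make the two viewpoints concrete, here is a minimal Python sketch (added purely for illustration; the helper name is ours, not anything discussed in the episode) that builds the same number additively out of ones and multiplicatively out of primes:

def prime_factors(n):
    """Naive trial-division factorization of n into primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 60
print(sum([1] * n))        # additive view: 60 = 1 + 1 + ... + 1 (sixty ones)
print(prime_factors(n))    # multiplicative view: 60 = 2 * 2 * 3 * 5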

Speaker: 1
02:28:14

But what has been frustrating is that when you combine the two together, suddenly you get something extremely rich. I mean, we know that there are statements in number theory that are actually undecidable. There are certain polynomials in some number of variables where you ask: is there a solution in the natural numbers?

Speaker: 1
02:28:29

And the answer depends on an undecidable statement, like whether the axioms of mathematics are consistent or not. But even the simplest problems that combine something multiplicative, such as the primes, with something additive, such as shifting by two... Separately, we understand both of them well.

Speaker: 1
02:28:49

But if you ask: when you shift a prime by two, how often can you get another prime? It's been amazingly hard to relate

Speaker: 0
02:28:58

the two. And we should say that the twin prime conjecture posits that there are infinitely many pairs of prime numbers that differ by two. Yes. Now, the interesting thing is that you have been very successful at pushing forward the field and answering complicated questions of this type, like, you mentioned, the Green–Tao theorem.

Speaker: 0
02:29:20

It proves that prime numbers contain arithmetic progressions of any length. Right. Which is mind-blowing, that you could prove something like that.

Speaker: 1
02:29:27

Right. Yeah. So what we've realized because of this type of research is that different patterns have different levels of indestructibility. What makes the twin prime problem hard is that if you take all the primes in the world, you know, three, five, seven, eleven, there are some twins in there.

Speaker: 1
02:29:46

Eleven and thirteen is a pair of twin primes, and so on. But you could easily, if you wanted to, redact the primes to get rid of these twins. The twins show up, and there are infinitely many of them, but they're actually reasonably sparse. Initially there are quite a few, but once you go out to the millions, trillions, they become rarer and rarer.

Speaker: 1
02:30:07

And if someone were given access to the database of primes and could just edit out a few primes here and there, they could make the twin prime conjecture false by removing, like, 0.01% of the primes or something, just well chosen to do this.

Speaker: 1
02:30:23

And so you could present a censored database of the primes which passes all of these statistical tests of the primes. You know, it would obey things like the prime number theorem and other facts about the primes, but it wouldn't contain any twins anymore. And this is a real obstacle for the twin prime conjecture.

Speaker: 1
02:30:40

It means that any proof strategy to actually find twin primes in the actual primes must fail when applied to these slightly edited primes. And so it must use some very subtle, delicate feature of the primes that you can't just get at from, like, aggregate statistical analysis.

Speaker: 0
02:31:01

Okay. So that’s out. Yeah.

Speaker: 1
02:31:03

On the other hand, arithmetic progressions have turned out to be much more robust. You can take the primes and eliminate 99% of them, actually, and you can take any 99% that you want. And it turns out, and this is another thing we proved, that you still get arithmetic progressions.

Speaker: 1
02:31:17

Arithmetic progressions are much... you know, they're like cockroaches.

Speaker: 0
02:31:21

Of arbitrary length. Yes. That's crazy. I mean, for people who don't know, an arithmetic progression is a sequence of numbers that differ by some fixed amount.

Speaker: 1
02:31:31

Yeah. But again, it's an infinite monkey type phenomenon. For any fixed size of your set, you don't get arbitrarily long progressions; you only get quite short progressions.
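
As a concrete illustration of such progressions (a rough sketch added here; the function names are ours, and this brute-force search only finds short examples, whereas the Green–Tao theorem is what guarantees arbitrarily long ones):

def is_prime(n):
    """Trial-division primality check, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_progression(length, max_start=1000, max_step=1000):
    """Find one arithmetic progression of the given length consisting of primes."""
    for a in range(2, max_start):
        if not is_prime(a):
            continue
        for step in range(2, max_step, 2):
            if all(is_prime(a + i * step) for i in range(length)):
                return [a + i * step for i in range(length)]
    return None

print(prime_progression(5))   # [5, 11, 17, 23, 29], common difference 6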

Speaker: 0
02:31:40

But you're saying twin primes is not an infinite monkey phenomenon. I mean, it's a very subtle monkey... it's still an infinite monkey phenomenon.

Speaker: 1
02:31:47

Right. Yeah. If the primes were really genuinely random, if the primes were generated by monkeys,

Speaker: 0
02:31:53

then, yeah, in fact, the infinite monkey theorem would... Oh, but you're saying that for twin primes you can't use the same tools; it almost doesn't appear random.

Speaker: 1
02:32:04

Well, we don't know. We believe the primes behave like a random set. And so the reason why we care about the twin prime conjecture is that it's a test case for whether we can genuinely, confidently say, with 0% chance of error, that the primes behave like a random set.

Speaker: 1
02:32:20

Okay. Random versions of the primes, we know, contain twins, at least with 100% probability; well, probability tending to 100% as you go out further and further. Yeah. So the primes, we believe that they're random.

Speaker: 1
02:32:34

The reason why arithmetic progressions are indestructible is that regardless of whether your set looks random or looks structured, like periodic, in both cases arithmetic progressions appear, but for different reasons. And this is basically how all the proofs go. There are many proofs of these arithmetic progression theorems, and they're all proven by some sort of dichotomy where your set is either structured or random.

Speaker: 1
02:32:57

And in both cases, you can say something, and then you put the two together. But for twin primes, if the primes are random, then you're happy; you win. But if the primes are structured, they could be structured in a specific way that eliminates the twins. And we can't rule out that one conspiracy.

Speaker: 0
02:33:15

And yet you were able to make, as I understand it, progress on the k-tuple version.

Speaker: 1
02:33:21

Right. Yeah. So the funny thing about conspiracies is that any one conspiracy theory is really hard to disprove. Mhmm. If you believe the world is run by lizards, and I show you some evidence that it is not run by lizards, well, that evidence was planted by the lizards.

Speaker: 1
02:33:34

Yeah, you might have encountered this kind of phenomenon. So there's almost no way to definitively rule one out, and the same is true in math. A conspiracy that is solely devoted to eliminating twin primes...

Speaker: 1
02:33:50

You know, it would have to also infiltrate other areas of mathematics, but it could be made consistent, at least as far as we know. But there's a weird phenomenon: you can make one conspiracy rule out other conspiracies. So, you know, if the world is run by lizards, it can't also be run by aliens. That's right. Right.

Speaker: 1
02:34:08

So one unreasonable thing is hard to disprove, but for more than one, there are tools. So, for example, we know there are infinitely many pairs of primes which differ by at most 246; that's the current record.

Speaker: 0
02:34:26

Oh, so there's, like, a bound. Yes. On the

Speaker: 1
02:34:28

Right. So there's twin primes, this thing called cousin primes that differ by four, and this thing called sexy primes that differ by six.

Speaker: 0
02:34:36

What

Speaker: 1
02:34:36

are sexy primes? Primes that differ by six. The concept is much less exciting than the name suggests. Got it. So you can make a conspiracy rule out one of these, but once you have, like, 50 of them, it turns out that you can't rule out all of them at once.

Speaker: 1
02:34:50

It just requires too much energy somehow in this conspiracy space.

Speaker: 0
02:34:55

How do you do the bound part? How do you develop a bound for the difference between the primes... Okay.

Speaker: 1
02:35:01

So,

Speaker: 0
02:35:01

that there’s an infinite number of.

Speaker: 1
02:35:03

So it's ultimately based on what's called the pigeonhole principle. The pigeonhole principle is the statement that if you have a number of pigeons that have to go into pigeonholes, and you have more pigeons than pigeonholes, then one of the pigeonholes has to have at least two pigeons in it.

Speaker: 1
02:35:16

So there have to be two pigeons that are close together. For instance, if you have 101 numbers and they all range from one to a thousand, two of them have to be less than 10 apart.

Speaker: 0
02:35:25

Mhmm.

Speaker: 1
02:35:25

Because you can divide up the numbers from one to a thousand into 100 pigeonholes, intervals of width 10. So if you have 101 numbers, two of them have to be at distance less than 10 apart, because two of them have to belong to the same pigeonhole.
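
A small sketch of that pigeonhole argument (added for illustration, with our own variable names): split 1 to 1000 into 100 intervals of width 10, and 101 numbers cannot avoid sharing an interval.

import random

nums = random.sample(range(1, 1001), 101)   # 101 distinct numbers, only 100 holes
holes = {}
for x in nums:
    hole = (x - 1) // 10                    # 1..10 -> hole 0, 11..20 -> hole 1, ...
    if hole in holes:
        # Guaranteed to trigger: two numbers in the same hole differ by less than 10.
        print(x, "and", holes[hole], "differ by", abs(x - holes[hole]))
        break
    holes[hole] = x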

Speaker: 1
02:35:38

So it's a basic principle in mathematics. Now, it doesn't quite work with the primes directly, because the primes get sparser and sparser as you go out; fewer and fewer numbers are prime. But it turns out that there's a way to assign weights to the numbers.

Speaker: 1
02:35:56

There are numbers that are kind of almost prime. They're not numbers with no factors at all other than themselves and one, but they have very few factors. And it turns out that we understand almost primes a lot better than we understand primes. So, for example, it was known for a long time that there are twin almost primes. This has been worked out.

Speaker: 1
02:36:15

So almost primes are something we can understand. You can actually restrict attention to a suitable set of almost primes. And whereas the primes are very sparse overall, relative to the almost primes they are much less sparse. You can set up a set of almost primes where the primes have density of, say, 1%.

Speaker: 1
02:36:35

And that gives you a shot at proving, by applying some pigeonhole principle, that there are pairs of primes that are only about a hundred apart. But to get the twin prime conjecture, you need to push the density of primes inside the almost primes up to a threshold of 50%.

Speaker: 1
02:36:50

Once it's above 50%, you would get twin primes. But unfortunately there are barriers. We know that no matter what kind of good set of almost primes you pick, the density of primes can never get above 50%. It's called the parity barrier. And I would love to find a way past it.

Speaker: 1
02:37:05

So one of my long-term dreams is to find a way to breach that barrier, because it would open up not only the twin prime conjecture but the Goldbach conjecture and many other problems in number theory that are currently blocked, because our current techniques would require going beyond this theoretical parity barrier.

Speaker: 1
02:37:21

It's like pushing past the speed of light.

Speaker: 0
02:37:23

Yeah. So we should say the twin prime conjecture is one of the biggest problems in the history of mathematics, the Goldbach conjecture also. They feel like next-door neighbors. Have there been days when you felt you saw the path?

Speaker: 1
02:37:37

Oh, yeah. Sometimes you try something and it works super well. Again, the sense of mathematical smell we talked about earlier: you learn from experience when things are going too well, because there are certain difficulties that you sort of have to encounter.

Speaker: 1
02:37:54

I think the way a colleague might put it is: if you are on the streets of New York and you put on a blindfold and you're put in a car, and after some hours the blindfold comes off and you're in Beijing, that was too easy somehow. Like, there was no ocean being crossed.

Speaker: 1
02:38:14

Even if you don't know exactly what was done, you suspect that something wasn't right.

Speaker: 0
02:38:21

But is that still in the back of your head? Do you return to the prime numbers every once in a while to see?

Speaker: 1
02:38:29

Yeah. When I have nothing better to do, which is less and less these days; I get busy with so many things. But yeah, when I have free time and I'm too frustrated to work on my usual research projects, and I also don't want to do my administrative stuff or errands for my family, I can play with these things for fun.

Speaker: 1
02:38:49

And usually you get nowhere. Yeah. You have to learn to just say, okay, fine, once again nothing happened; I will move on.

Speaker: 1
02:38:56

Very occasionally, one of these problems I actually solve. Well, sometimes, as you say, you think you've solved it, and then for maybe fifteen minutes you believe it, and then you think, I should check this, because this is too easy to be true. And it usually is.

Speaker: 0
02:39:10

What does your gut say about when these problems will be solved? Oh. Twin prime and Goldbach?

Speaker: 1
02:39:16

Yeah, I think we'll keep getting more partial results. It does need at least one more breakthrough; this parity barrier is the biggest remaining obstacle. There are simpler versions of the conjecture where we are getting really close. So I think in ten years we will have many more, much closer results.

Speaker: 1
02:39:38

We may not have the whole thing. Yeah. So those are somewhat close. The Riemann hypothesis, I have no clue. I mean, unless it happens by accident.

Speaker: 1
02:39:46

I think,

Speaker: 0
02:39:46

so the Riemann hypothesis is a kind of more general conjecture about the distribution of prime numbers.

Speaker: 1
02:39:52

Right. It states that, viewed multiplicatively, so for questions only involving multiplication, no addition, the primes really do behave as randomly as you could hope. There's a phenomenon in probability called square root cancellation: if you want to poll, say, the population on some issue, and you only ask one or two voters, you may have sampled badly, and then you get a really imprecise measurement of the full average.

Speaker: 1
02:40:19

But if you sample more and more people, the accuracy gets better and better, and it actually improves like the square root of the number of people you sample. So if you sample a thousand people, you can get, like, a two or three percent margin of error. In the same sense, if you measure the primes in a certain multiplicative sense, there's a certain type of statistic you can measure, it's called the Riemann zeta function, and it fluctuates up and down.
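
A quick numerical illustration of that square root behavior (a sketch added here; the 55% support level is an arbitrary assumption, not anything from the conversation): the typical polling error shrinks roughly like one over the square root of the sample size.

import math
import random

true_support = 0.55                      # assumed "true" opinion share
for n in (100, 1_000, 10_000):
    errors = []
    for _ in range(200):                 # 200 simulated polls of size n
        hits = sum(random.random() < true_support for _ in range(n))
        errors.append(abs(hits / n - true_support))
    typical = sum(errors) / len(errors)
    print(n, "voters -> typical error", round(typical, 4),
          "vs 1/sqrt(n) =", round(1 / math.sqrt(n), 4))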

Speaker: 1
02:40:41

But in some sense, as you keep averaging more and more, as you sample more and more, the fluctuations should go down as if they were random. And there's a very precise way to quantify that, and the Riemann hypothesis is a very elegant way to capture it. But, as with many other things in mathematics, we have very few tools to show that something genuinely behaves really randomly.

Speaker: 1
02:41:02

And it's not just asking for a little bit of randomness; it's asking that the primes behave as randomly as an actually random set, with this square root cancellation. And we know, actually, because of things related to the parity problem, that most of our usual techniques cannot hope to settle this question.

Speaker: 1
02:41:18

The proof has to come out of left field. Yeah. But what that is, no one has any serious proposal. Yeah.

Speaker: 1
02:41:30

And, as I said, you can modify the primes a little bit and destroy the Riemann hypothesis. So it has to be very delicate. You can't apply something that has huge margins of error; it has to just barely work. And there are all these pitfalls you have to dodge very adeptly.

Speaker: 0
02:41:49

Yeah. The prime numbers are just fascinating. Yeah.

Speaker: 1
02:41:52

Yeah. Yeah.

Speaker: 0
02:41:52

What to you is most mysterious about the prime numbers?

Speaker: 1
02:41:59

So that's a good question. Conjecturally, we have a good model of them. As I said, they have certain patterns: the primes are usually odd, for instance. But apart from these obvious patterns, they behave very randomly. There's something called the Cramér random model of the primes: the assumption that after a certain point, the primes just behave like a random set.

Speaker: 1
02:42:18

And there are various slight modifications to this model, but it has been a very good model. It matches the numerics; it tells us what to predict. I can't tell you with complete certainty that the twin prime conjecture is true, but the random model gives overwhelming odds that it's true. I just can't prove it.

Speaker: 1
02:42:33

Most of our mathematics is optimized for solving things with patterns in them. And the primes have this anti-pattern, as does almost everything, really, but we can't prove that. Yeah. I guess it's not mysterious that the primes are kind of random, because there's no reason for them to have any kind of secret pattern.

Speaker: 1
02:42:56

But what is mysterious is: what is the mechanism that really forces the randomness to happen? That is just absent.

Speaker: 0
02:43:04

Another incredibly difficult problem is the Collatz conjecture. Oh, yes. Simple to state. Mhmm. Beautiful to visualize. Yes. In its simplicity, and yet extremely difficult to solve, and yet you have been able to make progress. Paul Erdős said about the Collatz conjecture that mathematics may not be ready for such problems.

Speaker: 0
02:43:27

Others have stated that it is an extraordinarily difficult problem, completely out of reach, this was in 2010, out of reach of present-day mathematics. And yet, yeah, you have made some progress. Why is it so difficult? Can you actually even explain what it is? Oh, yeah.

Speaker: 0
02:43:41

So

Speaker: 1
02:43:41

It's a problem that you can explain. Yeah, it helps with some visual aids. But you take any natural number, say 13, and you apply the following procedure to it. If it's even, you divide it by two. And if it's odd, you multiply it by three and add one. So even numbers get smaller, odd numbers get bigger.

Speaker: 1
02:44:01

So 13 would become 40, because 13 times three is 39; add one, you get 40. So it's a simple process. For odd numbers and even numbers, they're both very easy operations. And when you put them together, it's still reasonably simple.

Speaker: 1
02:44:13

But then you ask what happens when you iterate it. You take the output that you just got and feed it back in. So 13 becomes 40. 40 is now even; divided by two is 20. 20 is still even; divided by two is 10, and then 5. And then five times three plus one is 16.

Speaker: 1
02:44:27

And then 16 goes 8, 4, 2, 1. And from one, it goes one, four, two, one, four, two, one; it cycles forever. So the sequence I just described, thirteen, forty, twenty, and so on, these are also called hailstone sequences, because there's an oversimplified model of hailstone formation, which is not actually quite correct, but it's still taught to high school students as an approximation.
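
The iteration just described, as a minimal Python sketch (added for illustration; the conjecture is precisely the claim that the loop below terminates for every starting value):

def collatz_orbit(n):
    """Hailstone sequence from n until it first reaches 1."""
    orbit = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        orbit.append(n)
    return orbit

print(collatz_orbit(13))   # [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]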

Speaker: 1
02:44:48

The idea is that a little nugget of ice, an ice crystal, forms in a cloud, and it goes up and down because of the wind. Sometimes when it's cold it gains a bit more mass, and maybe it melts a little bit. And this process of going up and down creates this partially melted ice, which eventually becomes a hailstone, and eventually it falls to the earth.

Speaker: 1
02:45:09

So the conjecture is that no matter how high you start up, like, you take a number which is in the millions or billions, this process, which goes up if you're odd and down if you're even, always goes down to earth all

Speaker: 0
02:45:22

the time. No matter where you start with this very simple algorithm, you end up at one. Right. And you might climb for a while.

Speaker: 1
02:45:28

Right. Yeah. So it's unknown. If you plot these sequences, they look like Brownian motion. They look like the stock market: they just go up and down in a seemingly random pattern. And in fact, usually that's what happens. If you plug in a random number, you can actually prove, at least initially, that it will look like a random walk.

Speaker: 1
02:45:46

And it's actually a random walk with a downward drift. It's like if you're always gambling on roulette at the casino with odds slightly weighted against you. So sometimes you win, sometimes you lose, but in the long run you lose a bit more than you win.

Speaker: 1
02:46:01

And so normally your wallet will go to zero if you just keep playing over and over again.

Speaker: 0
02:46:06

So statistically, it makes sense.

Speaker: 1
02:46:08

Yes. So the result that I proved, roughly speaking, is that statistically, like, 99% of all inputs will drift down, maybe not all the way to one, but to be much smaller than where you started. It's like if I told you that if you go to a casino, most of the time, if you keep playing for long enough, you end up with a smaller amount in your wallet than when you started.

Speaker: 1
02:46:32

That's kind of like the result that I proved.
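
A rough numerical check of the flavor of that statement (an illustration added here, not the actual argument, which uses probability theory rather than sampling): almost every large starting value soon falls below where it started.

import random

def falls_below_start(n, max_steps=100_000):
    """Return True once the Collatz orbit of n drops below its starting value."""
    start = n
    for _ in range(max_steps):
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        if n < start:
            return True
    return False

starts = [random.randrange(10**6, 10**7) for _ in range(1000)]
fraction = sum(falls_below_start(n) for n in starts) / len(starts)
print(f"{fraction:.1%} of sampled starts eventually drop below their start")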

Speaker: 0
02:46:34

So why is that result... Can you continue down that thread to prove the full conjecture?

Speaker: 1
02:46:42

Well, the problem is that I used arguments from probability theory, and there's always this exceptional event. In probability, we have the law of large numbers, which tells you things like: if you play a game at a casino with a losing expectation, over time you are guaranteed, well, almost surely, with probability as close to 100% as you wish, to lose money.

Speaker: 1
02:47:06

But there's always this exceptional outlier. It is mathematically possible that even when the odds of the game are not in your favor, you could just keep winning slightly more than you lose. Very much like how, in Navier–Stokes, it could be that most of the time your waves disperse,

Speaker: 1
02:47:22

but there could be just one outlier choice of initial conditions that leads you to blow up. And there could be one outlier choice of a number that you stick in that shoots off to infinity, while all other numbers crash to earth, crash to one. Yeah. In fact, there are some mathematicians, Alex Kontorovich for instance, who have proposed that these Collatz iterations are like cellular automata.

Speaker: 1
02:47:51

Actually, if you look at what happens in binary, they do look a little bit like these Game of Life type patterns. And in analogy to how the Game of Life can create these massive self-replicating objects, possibly you could create some sort of heavier-than-air flying machine:

Speaker: 1
02:48:07

a number which is actually encoding this machine, whose job is to create a version of itself which is larger.

Speaker: 0
02:48:17

A heavier-than-air machine encoded in a number. Yeah. That flies forever. Yeah.

Speaker: 1
02:48:22

So Conway, in fact, worked on this problem as well. Oh, wow. So, similarly, in fact that was one of our inspirations for the Navier–Stokes project, Conway studied generalizations of the Collatz problem where, instead of multiplying by three and adding one or dividing by two, you have a more complicated set of branches.

Speaker: 1
02:48:39

Instead of having two cases, maybe you have 17 cases, and then you go up and down. And he showed that once your iteration gets complicated enough, you can actually encode Turing machines, and you can actually make these problems undecidable, and do things like this.

Speaker: 1
02:48:52

In fact, he made a programming language for these kinds of fractional linear transformations. He called it FRACTRAN, as a play on FORTRAN, and he showed that you can program with it; it was Turing complete. You could make a program such that, if the number you insert encoded a prime, it would sink to zero.

Speaker: 1
02:49:13

It would go down. Otherwise, it would go up, and things like that. So the general class of problems is really as complicated as all of mathematics.
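
A minimal FRACTRAN interpreter, as a sketch of the idea (ours, for illustration): a program is a list of fractions, and at each step the current integer is multiplied by the first fraction that yields another integer, halting when none does. The one-fraction program [3/2] adds exponents, taking 2^a * 3^b to 3^(a+b); Conway's famous PRIMEGAME program goes much further and generates the primes as exponents of the powers of two its orbit visits.

from fractions import Fraction

def run_fractran(n, program):
    """Run a FRACTRAN program, given as (numerator, denominator) pairs, on n."""
    fracs = [Fraction(p, q) for p, q in program]
    while True:
        for f in fracs:
            m = n * f
            if m.denominator == 1:       # first fraction giving an integer
                n = int(m)
                break
        else:
            return n                     # no fraction applies: halt

a, b = 3, 4
print(run_fractran(2**a * 3**b, [(3, 2)]))   # prints 2187, i.e. 3**(a + b)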

Speaker: 0
02:49:22

Some of the mystery of the cellular automata that we talked about, having a mathematical framework to say anything about cellular automata... maybe the same kind of framework is required. Yeah. Yeah. For the Collatz conjecture.

Speaker: 1
02:49:35

Yeah. If you want to do it not statistically, if you really want one hundred percent of all inputs to fall to earth. Yeah. So what might be feasible is showing that 99.99% go to one. But to get, like, everything, that looks hard.

Speaker: 0
02:49:50

What would you say, out of these within-reach famous problems, is the hardest problem we have today? Is it the Riemann hypothesis?

Speaker: 1
02:49:59

It's up there. P equals NP is a good one because, like, that's a meta problem: if you solve it in the positive sense, that you can find a P equals NP algorithm, then potentially that solves a lot of other problems as well. And we should mention, some of the conjectures we've been talking about,

Speaker: 0
02:50:17

you know, a lot of stuff is built on top of them now. There are ripple effects. P equals NP has more ripple effects than basically any other... Right.

Speaker: 1
02:50:24

If the Riemann hypothesis is disproven, that would be a big mental shock to number theorists, but it would also have follow-on effects for cryptography. Mhmm. Because a lot of cryptography uses number theory. It uses number-theoretic constructions involving primes and so forth. And it relies very much on the intuition that number theorists have built over many, many years about which operations involving primes behave randomly and which ones don't.

Speaker: 1
02:50:51

In particular, our encryption methods are designed to turn text with information in it into text which is indistinguishable from random noise, and hence, we believe, almost impossible to crack, at least mathematically. But if something as basic, I believe, as the Riemann hypothesis is wrong, it means that there are actual patterns in the primes that we're not aware of.

Speaker: 1
02:51:20

And if there's one, there are probably going to be more. And suddenly a lot of our cryptosystems are in doubt.

Speaker: 0
02:51:27

Yeah. But then how do you say stuff about the primes? Yeah. Now you're going toward the Collatz conjecture again. Because, yeah, you want it to be random. Right? You want it to behave randomly.

Speaker: 1
02:51:42

So, more broadly, I'm just looking for more tools, more ways to show that things are random. How do you prove a conspiracy doesn't happen? Right.

Speaker: 0
02:51:50

Is there any chance, to you, that P equals NP? Is there some... Can you imagine a possible universe where it is possible?

Speaker: 1
02:51:57

I mean, there are various scenarios. There's one where it is technically possible, but in practice it's never actually implementable. The evidence is sort of slightly pushing in favor of no, that probably P is not equal to NP.

Speaker: 0
02:52:10

I mean, it seems like one of those cases similar to the Riemann hypothesis. I think the evidence leans pretty heavily on the no.

Speaker: 1
02:52:19

Certainly more on the no than on the yes. The funny thing about P equals NP is that we also have a lot more obstructions than we do for almost any other problem. So while there's evidence, we also have a lot of results ruling out many, many types of approaches to the problem.

Speaker: 1
02:52:33

This is the one thing that the computer scientists have actually been very good at: saying that certain approaches cannot work. No-go theorems. It could be undecidable. We don't know.

Speaker: 0
02:52:43

There's a funny story I read that when you won the Fields Medal, somebody from the Internet wrote you and asked, you know, what are you going to do now that you've won this prestigious award? And you just quickly, very humbly said that, you know, this medal is not going to solve any of the problems I'm currently working on.

Speaker: 1
02:53:01

So just

Speaker: 0
02:53:02

gonna keep working on them. First of all, it's funny to me that you would answer an email in that context. And it just shows your humility. But anyway, maybe you could speak to the Fields Medal; it's also another way for me to ask about Grigori Perelman.

Speaker: 0
02:53:21

What do you think about him famously declining the Fields Medal and the Millennium Prize, which came with $1,000,000 of prize money? He stated: I'm not interested in money or fame. The prize is completely irrelevant for me. If the proof is correct, then no other recognition is needed.

Speaker: 1
02:53:39

Yeah. No, he's somewhat of an outlier, even among mathematicians, who tend to have somewhat idealistic views. I've never met him. I think I'd be interested to meet him one day, but I never had the chance. I know people who have met him. He's always had strong views about certain things.

Speaker: 1
02:53:54

You know, it's not like he was completely isolated from the math community. He would give talks and write papers and so forth. But at some point, he just decided not to engage with the rest of the community. He was disillusioned or something, I don't know. And he decided to step away and, you know, collect mushrooms in Saint Petersburg or something.

Speaker: 1
02:54:16

And that's fine. You know? You can do that. I mean, that's another sort of flip side. A lot of the problems that we solve, you know, some of them do have practical applications, and that's great.

Speaker: 1
02:54:27

But if you stop thinking about a problem... So he hasn't published since in this field, but that's fine. There are many, many other people who have done so as well. Yeah. So I guess one thing I didn't realize initially with the Fields Medal is that it sort of makes you part of the establishment.

Speaker: 1
02:54:45

You know, most mathematicians are just career mathematicians. You just focus on publishing the next paper, maybe getting promoted one rank, starting a few projects, maybe taking some students or something.

Speaker: 1
02:54:59

Yeah. But then suddenly people want your opinion on things, and you have to think a little bit about things that you might previously have just foolishly said, because you knew no one was going to listen to you. It's more important now.

Speaker: 0
02:55:11

Is it constraining to you? Are you able to still have fun and be a rebel and try crazy stuff and

Speaker: 1
02:55:17

Well, play with ideas? I have a lot less free time than I had previously. I mean, mostly by choice. Obviously, I have the option to decline. I decline a lot of things, and I could decline even more. Or I could acquire a reputation for being so unreliable that people don't even ask anymore.

Speaker: 0
02:55:36

I love the different algorithms here. This is great.

Speaker: 1
02:55:40

It's always an option. But, you know, I don't spend as much time as I did as a postdoc just working on one problem at a time, or fooling around. I still do that a little bit. But as you advance in your career, some of the more soft skills come in. Math somehow front-loads all the technical skills into the early stages of your career.

Speaker: 1
02:56:03

So as a postdoc it's publish or perish: you're incentivized to basically focus on proving very technical theorems, to prove yourself as well as to prove theorems. But as you get more senior, you have to start mentoring and giving interviews and trying to shape the direction of the field, both research-wise and otherwise, and sometimes you have to do various administrative things.

Speaker: 1
02:56:33

And it's kind of the right social contract, because you need to have worked in the trenches to see what can help mathematicians.

Speaker: 0
02:56:39

The other side of the establishment, sort of the really positive thing, is that you get to be a light, an inspiration, to a lot of young mathematicians or young people who are just interested in mathematics. It's like... Yeah. It's just how the human mind works.

Speaker: 0
02:56:53

This is where I would probably say that I like the Fields Medal, in that it does inspire a lot of young people somehow. This is just how human brains work. Yeah. At the same time, I also want to give respect to somebody like Grigori Perelman, who is critical of awards. In his mind, those are his principles.

Speaker: 0
02:57:15

And any human who is able, for their principles, to do the thing that most humans would not be able to do, it's beautiful to see.

Speaker: 1
02:57:24

Some recognition is important, but it's also important not to let these things take over your life, and only be concerned about getting the next big award or whatever. I mean, again, you see these people try to only solve really big math problems and not work on things that are less sexy, if you wish, but actually still interesting and instructive.

Speaker: 1
02:57:51

As you say, the way the human mind works, we understand things better when they're attached to humans, and also when they're attached to a small number of humans. Like I said, the way our brains are wired, we can comprehend the relationships between 10 or 20 people. But once you get beyond, like, a hundred people, there's a limit.

Speaker: 1
02:58:12

I think there's a name for it, beyond which it just becomes the other. Yeah. And so you have to simplify the whole mass of, you know, 99.9% of humanity into the other. And often these models are incorrect, and this causes all kinds of problems. But, so yeah.

Speaker: 1
02:58:30

So to humanize a subject, you identify a small number of people and say these are representative people of the subject, you know, role models, for example. That has some role. But too much of it can also be harmful, because I'll be the first to say that my own career path is not that of a typical mathematician.

Speaker: 1
02:58:52

I had a very accelerated education; I skipped a lot of classes. I think I always had very fortunate mentoring opportunities, and I think I was at the right place at the right time. Just because someone doesn't have my trajectory doesn't mean that they can't be a good mathematician.

Speaker: 1
02:59:08

They may be a mathematician of a very different style, and we need people of different styles. And sometimes too much focus is given to the person who takes the last step to complete a project, in mathematics or elsewhere, that has really taken centuries or decades, building on lots and lots of previous work.

Speaker: 1
02:59:30

But that's a story that's difficult to tell if you're not an expert, because it's easier to just say one person did this one thing. It makes for a much simpler history.

Speaker: 0
02:59:40

I think on the whole it is a hugely positive thing to talk about Steve Jobs as a representative of Apple, when I personally know, and of course everybody knows, the incredible design, the incredible engineering teams,

Speaker: 1
02:59:55

Mhmm.

Speaker: 0
02:59:55

just the individual humans on those teams. They're not a team; they're individual humans on a team, and there's a lot of brilliance there. But it's just a nice shorthand, very much like pi. Yeah. Steve Jobs.

Speaker: 1
03:00:08

Yeah. Yeah. Pi, as a starting point. You know, as an approximation. That's not quite how you...

Speaker: 0
03:00:13

And then read some biographies and look into a much deeper approximation. Yeah. That's right. So you mentioned you were at Princeton; Andrew Wiles was there at that time.

Speaker: 1
03:00:21

Oh, yeah. He’s a professor there. Mhmm.

Speaker: 0
03:00:23

It's a funny moment, how history is all interconnected. And at that time, he announced that he had proved Fermat's Last Theorem. What did you think, maybe looking back now with more context, about that moment in math history? Yes. I was

Speaker: 1
03:00:37

a graduate student at the time. I vaguely remember, you know, there was press attention, and we all had pigeonholes in the same mailroom, so we all got our mail together. And suddenly Andrew Wiles' mailbox exploded to overflowing. That's a good metric. Yeah. You know?

Speaker: 1
03:00:55

So, yeah, we all talked about it at tea and so forth. Most of us didn't really understand the proof; we understood sort of the high-level details. Now there's an ongoing project to formalize it in Lean. Right? Kevin Buzzard, exactly.

Speaker: 0
03:01:08

Yeah. Can we take that small tangent? How difficult is that? Because, as I understand it, the proof of Fermat's Last Theorem involves, like, super complicated objects.

Speaker: 1
03:01:20

Yeah. That’s really

Speaker: 0
03:01:20

difficult to formalize. No?

Speaker: 1
03:01:22

Yeah. I guess, yeah, you're right. The objects that they use, you can define them; they've been defined in Lean. Okay. So just defining what they are can be done. That's really not trivial, but it's been done. But there are a lot of really basic facts about these objects that have taken decades to prove, and they're in all these different math papers.

Speaker: 1
03:01:41

So a lot of these have to be formalized as well. Kevin Buzzard's goal, actually, he has a five-year grant to formalize Fermat's Last Theorem, and he doesn't think he will be able to get all the way down to the basic axioms. But he wants to formalize it to the point where the only things that he needs to rely on as black boxes are things that were known by 1980 to the number theorists of the time.

Speaker: 1
03:02:06

And then some other person, or some other work, would have to be done to get the rest of the way from there. So it's a different area of mathematics than the type of mathematics I'm used to. In analysis, which is sort of my area, the objects we study are kind of much closer to the ground.

Speaker: 1
03:02:23

I study things like prime numbers and functions, things that are within the scope of a high school math education to at least define. But then there's this very advanced algebraic side of number theory where people have been building structures upon structures for quite a while.

Speaker: 1
03:02:41

And it's a very sturdy structure. At the base, at least, it's extremely well developed, with textbooks and so forth. But it does get to the point where, if you haven't put in the years of study and you want to ask what's going on at, like, level six of this tower,

Speaker: 1
03:03:00

you have to spend quite a bit of time before you can even get to the

Speaker: 0
03:03:03

point where you see something you recognize. What inspires you about his journey, which was, as we talked about, seven years mostly working in secret?

Speaker: 1
03:03:15

Yeah. Yes. That is romantic, yeah. It kind of fits with the romantic image I think people have of mathematicians, to the extent that they think of them at all, as these kind of eccentric wizards or something. So that certainly accentuated that perspective. You know? I mean, it is a great achievement.

Speaker: 1
03:03:37

His style of solving problems is so different from my own, which is great. I mean, we need people like that.

Speaker: 0
03:03:44

In what way? Like, in terms of... you like the collaborative...

Speaker: 1
03:03:49

I like moving on from a problem if it's giving me too much difficulty.

Speaker: 0
03:03:54

Got it.

Speaker: 1
03:03:54

But you need the people who have the tenacity and the fearlessness. I've collaborated with people like that, where I wanted to give up because the approach we tried didn't work, and then the next one didn't either, but they're convinced, and in the end the approach works. And I have to eat my words. Okay,

Speaker: 1
03:04:12

I didn't think this was going to work, but you were right.

Speaker: 0
03:04:16

And we should say, for people who don't know, not only are you known for the brilliance of your work, but also for your incredible productivity, just the number of papers, which are all of very high quality. So there's something to be said about being able to jump

Speaker: 1
03:04:28

from topic to topic. Yeah. It works for me. I mean, there are also people who are very productive and focus very deeply. I think everyone has to find their own workflow. One thing which is a shame in mathematics is that we have sort of a one-size-fits-all approach to teaching mathematics.

Speaker: 1
03:04:46

So we have a certain curriculum and so forth. I mean, maybe if you do math competitions or something, you get a slightly different experience. But I think many people don't find their native math language until very late, or usually too late.

Speaker: 1
03:05:03

So they stop doing mathematics. They have a bad experience with a teacher who's trying to teach them one way to do mathematics, and they don't like it. My theory is that evolution has not given us a math center of the brain directly. We have a vision center and a language center and some other centers, which evolution has honed. But we don't have an innate sense of mathematics.

Speaker: 1
03:05:26

But our other centers are sophisticated enough that different people can repurpose other areas of the brain to do mathematics. Some people have figured out how to use the visual center to do math, and they think very visually when they do mathematics. Some people repurpose their language center, and they think very symbolically.

Speaker: 1
03:05:47

Some people, if they're very competitive and into gaming, there's this part of your brain that's very good at solving puzzles and games, and that can be repurposed. And when I talk to other mathematicians, I can tell that they're using somewhat different styles of thinking than I am.

Speaker: 1
03:06:10

I mean, not disjoint, but they may prefer the visual. I don't lean on the visual so much, though I do need lots of visual aids myself. You know, mathematics provides a common language, so we can still talk to each other even if we are thinking in different ways.

Speaker: 0
03:06:25

But you could tell there’s a different set of subsystems being used in the thinking process.

Speaker: 1
03:06:32

They take different paths. They're very quick at things that I struggle with, and vice versa, and yet they still get to the same goal. Yeah. That's beautiful. But, I mean, the way we educate, unless you have a personalized tutor or something... Education at any financially viable scale has to be mass-produced. You have to teach 30 kids, and they have 30 different styles.

Speaker: 1
03:06:52

You can't teach 30 different ways.

Speaker: 0
03:06:54

On that topic, what advice would you give to students, young students who are struggling with math but are interested in it and would like to get better? Is there something in this educational context? What would you...

Speaker: 1
03:07:09

It's a tricky problem. One nice thing is that there are now lots of sources for mathematical enrichment outside the classroom. Mhmm. In my day, there were already math competitions, and there were also popular math books in the library. Mhmm. Yeah.

Speaker: 1
03:07:23

But now you have, you know, YouTube, and there are forums just devoted to solving math puzzles. And math shows up in other places; for example, there are hobbyists who play poker for fun, and, for very specific reasons, they are interested in very specific probability questions.

Speaker: 1
03:07:44

There are actually communities of amateur problem solvers in poker, in chess, in baseball. I mean, there's math all over the place. And I'm hoping, actually, with these new tools like Lean and so forth, that we can incorporate the broader public into math research projects.

Speaker: 1
03:08:08

This almost doesn't happen at all currently. In the sciences, there is some scope for citizen science: there are amateur astronomers who make real discoveries, and there are biologists relying on people who can identify butterflies and so forth. And in math, there are a small number of activities where amateur mathematicians can, say, discover new primes and so forth. But previously, because we have to verify every single contribution, for most mathematical research projects it would not help to have input from the general public.

Speaker: 1
03:08:40

In fact, it would just be time-consuming, because of all the error checking and everything. But, you know, one thing about these formalization projects is that they are bringing in more people. So I'm sure there are high school students who have already contributed to some of these formalization projects, who have contributed to Mathlib.

Speaker: 1
03:08:57

You know, you don't need to be a PhD holder to just work on one small piece.

Speaker: 0
03:09:02

There's something about the formalization here that, as a first step, opens it up to the programming community too.

Speaker: 1
03:09:10

Yes.

Speaker: 0
03:09:11

The people who are already comfortable... Yes. With programming. It seems like programming, maybe it's just a feeling, but it feels more accessible to folks than math. Math, especially modern mathematics, is seen as this extremely difficult-to-enter area, and programming is not. So that could be just an entry point.

Speaker: 1
03:09:30

You can execute code and you can get results; you can print "hello world" pretty quickly. Yeah. You know, if programming were taught as an almost entirely theoretical subject, where you're just taught the computer science, the theory of functions and routines and so forth, and outside of some very specialized homework assignments you don't actually program, like, on the weekend for fun,

Speaker: 1
03:09:54

yeah, then it would be considered as hard as math.

Speaker: 0
03:09:57

Mhmm.

Speaker: 1
03:09:59

Yeah. So as I said, there are communities of non-mathematicians where they're deploying math for some very specific purpose, like optimizing their poker game. And for them, math becomes fun.

Speaker: 0
03:10:12

What advice would you give in general to young people on how to pick a career, how to find themselves, like, the thing they can

Speaker: 1
03:10:17

be good at? That's a tough question. Yeah. So, there's a lot of uncertainty now in the world. There was this period after the war where, at least in the West, if you came from a good demographic, there was a very stable path to a good career.

Speaker: 1
03:10:35

You go to college, you get an education, you pick one profession, and you stick to it. That's becoming more a thing of the past. So I think you just have to be adaptable and flexible. I think people will have to get skills that are transferable. You know?

Speaker: 1
03:10:49

Like learning one specific programming language or one specific subject of mathematics or something. That in itself is not a super transferable skill, but sort of knowing how to reason with abstract concepts, or how to problem-solve when things go wrong.

Speaker: 1
03:11:05

And so anyway, these are things which I think we will still need, even as our tools get better and, you know, you'll be working with AI support and so on.

Speaker: 0
03:11:13

But, actually, you're an interesting case study. I mean, you're one of the great living mathematicians. Right? And you had a way of doing things, and then all of a sudden you start learning... I mean, first of all, you kept learning new fields. Yeah. But you learned Lean. That's a nontrivial thing to learn.

Speaker: 0
03:11:33

Like, that’s a

Speaker: 1
03:11:35

Yeah. That’s a

Speaker: 0
03:11:36

For a lot of people, that's an extremely uncomfortable leap to take. Right?

Speaker: 1
03:11:39

Yeah. A lot of mathematicians. First of all, I've always been interested in new ways to do mathematics. I feel like a lot of the ways we do things right now are inefficient. Me and my colleagues, we spend a lot of time doing very routine computations, or doing things that other mathematicians would instantly know how to do and we don't know how to do.

Speaker: 1
03:11:59

And why can't we search and get a quick response, and stuff like that? So that's why I've always been interested in exploring new workflows. About four or five years ago, I was on a committee where we were asked for ideas for interesting workshops to run at a math institute.

Speaker: 1
03:12:15

And at the time, Peter Scholze had just formalized one of his new theorems. And there were some other developments in computer-assisted proof that were quite interesting. And I said, oh, we should run a workshop on this. This would be a good idea. And then I was a bit too enthusiastic about this idea, so I got voluntold to actually run it.

Speaker: 1
03:12:36

So I did, with a bunch of other people, Kevin Buzzard and Jordan Ellenberg and a bunch of other people. And it was a great success. We brought together a bunch of mathematicians and computer scientists and other people, and we got up to speed on the state of the art.

Speaker: 1
03:12:51

And there were really interesting developments that most mathematicians didn't know were going on, and lots of nice proofs of concept, you know, just sort of hints of what was going to happen. This was just before ChatGPT, but even then there was one talk about language models and the potential capability of those in the future.

Speaker: 1
03:13:09

So that got me excited about the subject. So I started giving talks about how this is something which more of us should start looking at, now that I had run this conference. And then ChatGPT came out, and, like, suddenly AI was everywhere. And so I got interviewed a lot about this topic, and in particular the interaction between AI and formal proof assistants.

Speaker: 1
03:13:33

So I said, yeah, they should be combined. There's a perfect synergy to happen here. And at some point, I realized that I have to not just talk the talk, but actually walk the walk. You know? Like, you know, I don't work in machine learning, and I don't work in proof formalization.

Speaker: 1
03:13:46

And there's a limit to how much I can just rely on authority and say, you know, I'm an established mathematician, just trust me, you know, when I say that this is gonna change mathematics, when I don't do any of it myself. So I felt like I had to actually try it. Yeah.

Speaker: 1
03:14:02

A lot of what I get into, actually, I don't quite have a sense in advance of how much time I'm gonna spend on it. And it's only after I'm sort of waist-deep in a project that I realize. But by that point, I'm committed.

Speaker: 0
03:14:15

Well, that's deeply admirable, that you're willing to go into the fray, be in some small way a beginner, right, or have some of the sort of challenges that a beginner would. Right? New concepts, new ways of thinking, also, you know, being bad at a thing that others... I think you said in that talk, you could be a Fields Medal winning mathematician and an undergrad knows something better.

Speaker: 0
03:14:41

Yeah.

Speaker: 1
03:14:42

I think mathematics inherently... I mean, mathematics is so huge these days that nobody knows all of modern mathematics. And inevitably we make mistakes. And, you know, you can't cover up your mistakes with just sort of bravado, because people will ask for your proofs.

Speaker: 1
03:15:00

And if you don’t have the proofs, you don’t have the proofs.

Speaker: 0
03:15:02

I love math.

Speaker: 1
03:15:03

Yeah. So it does keep us honest. I mean, it's not a perfect panacea, but I think we do have more of a culture of admitting error, because we're forced to all the time.

Speaker: 0
03:15:17

Big ridiculous question. I’m sorry for it once again. Who is the greatest mathematician of all time? Maybe one who’s no longer with us. Who are the candidates? Euler, Gauss, Newton, Ramanujan, Hilbert.

Speaker: 1
03:15:32

So first of all, as I mentioned before, like, there's some time dependence.

Speaker: 0
03:15:36

But On the day.

Speaker: 1
03:15:37

Yeah. Like, if you plot cumulatively over time, for example, Euclid is, sort of, one of the lead contenders. And then maybe some unnamed anonymous mathematicians before that, you know, whoever came up with the concept of numbers.

Speaker: 1
03:15:51

You know?

Speaker: 0
03:15:53

Do mathematicians today still feel the impact of Hilbert?

Speaker: 1
03:15:56

Oh, yeah.

Speaker: 0
03:15:57

Directly, of everything that's happened in the past century?

Speaker: 1
03:16:00

Yeah. There are Hilbert spaces. We have lots of things that are named after him, of course. Just the arrangement of mathematics, and just the introduction of certain concepts. I mean, his 23 problems have been extremely influential.

Speaker: 0
03:16:11

There's some strange power to declaring which problems are hard to solve, to the statement of the open problems.

Speaker: 1
03:16:18

Yeah. I mean, you know, there's a bystander effect everywhere. Like, if no one says you should do X, everyone just sort of mills around waiting for somebody else to do something, and, like, nothing gets done. So, like, one thing that you actually have to teach undergraduates in mathematics is that you should always try something.

Speaker: 1
03:16:39

So you see a lot of paralysis in an undergraduate trying a math problem. If they recognize that there's a certain technique that can be applied, they will try it. But there are problems for which none of their standard techniques obviously applies, and the common reaction is then just paralysis: I don't know what to do.

Speaker: 1
03:16:58

I think there's a quote from The Simpsons: "I've tried nothing, and I'm all out of ideas." So, you know, the next step then is to try anything, no matter how stupid. And in fact, almost the stupider the better. It's almost guaranteed to fail, but the way it fails is gonna be instructive.

Speaker: 1
03:17:19

Like, it fails because you're not at all taking into account this hypothesis. Oh, so this hypothesis must be useful. That's a clue.

Speaker: 0
03:17:25

I think you also suggested somewhere this fascinating approach, which

Speaker: 1
03:17:30

really stuck with me.

Speaker: 0
03:17:31

I started using it and it really works. I think you said it's called structured procrastination. It's when you really don't wanna do a thing, you imagine a thing you wanna do even less, something that's worse than that. And then in that way, you procrastinate by not doing the thing that's worse.

Speaker: 1
03:17:48

Yeah. Yeah.

Speaker: 0
03:17:48

That's a nice hack. It actually works.

Speaker: 1
03:17:51

Yeah. It is. I mean, with anything, like, you know, psychology is really important. Like, you talk to athletes, like marathon runners and so on, and, you know, they talk about what's the most important thing: is it their training regimen, or their diet, and so on? Actually, so much of it is, like, your psychology.

Speaker: 1
03:18:07

You know, tricking yourself into thinking that the task is feasible, so that you're motivated to do it.

Speaker: 0
03:18:15

Is there something our human mind will never be able to comprehend?

Speaker: 1
03:18:20

Well, I guess, as a mathematician... I mean, you know, it's a bit of a reductive answer, but there must be some really large number that you can't comprehend. That was the thing that came to mind.

Speaker: 0
03:18:30

Sure, but even more broadly, is there something about our mind such that we're going to be limited, even with the help of mathematics?

Speaker: 1
03:18:41

Well, okay. I mean, it's like, how much augmentation are you allowing? Like, for example, if I didn't even have pen and paper, if I had no technology whatsoever. Okay, so I'm not allowed a blackboard, or pen and paper.

Speaker: 0
03:18:52

Right. You’re already much more limited than you would be.

Speaker: 1
03:18:55

Incredibly limited. Even language. The English language is a technology. It's one that's been very internalized.

Speaker: 0
03:19:02

So you're right. Really, the formulation of the problem is incorrect, because there really is no longer just a solo human. We're already augmented in extremely complicated, intricate ways. Right? Yeah.

Speaker: 1
03:19:17

Yeah. Yeah.

Speaker: 0
03:19:17

So we’re already like a collective intelligence.

Speaker: 1
03:19:20

Yes. Yeah, I guess. So humanity, plural, has much more intelligence, in principle, on its good days, than the individual humans put together. It can also have less, okay. But, yeah, so the mathematical community, plural, is an incredibly super-intelligent entity that no single human mathematician can come close to replicating.

Speaker: 1
03:19:44

You see it a little bit on these, like, question-and-answer sites. There's MathOverflow, which is the math version of Stack Overflow. Mhmm. And, like, sometimes you get these very quick responses to very difficult questions from the community. And it's a pleasure to watch, actually, as an expert.

Speaker: 0
03:20:01

I'm a spectator fan of that site. Just seeing the brilliance of the different people there, the depth of knowledge some people have, and the willingness to engage in the rigor and the nuance of the particular question is pretty cool to watch. It's fun, almost just fun to watch. What gives you hope about this whole thing we have going on with human civilization?

Speaker: 1
03:20:24

I think, yeah, the younger generation is always, like, really creative and enthusiastic and inventive. It's a pleasure working with young students. You know, the progress of science tells us that problems that used to be really difficult can become, like, trivial to solve.

Speaker: 1
03:20:47

You know? I mean, like, take navigation. You know? Just knowing where you are on the planet was this horrendous problem. People died, you know, or lost fortunes, because they couldn't navigate. You know?

Speaker: 1
03:21:00

And we have devices in our pockets that do this automatically for us, like, it's a completely solved problem. You know? So things that seem infeasible for us now could be maybe just homework exercises in the future.

Speaker: 0
03:21:12

Yeah. One of the things I find really sad about the finiteness of life is that I won't get to see all the cool things we create as a civilization. You know? Because in the next hundred years, two hundred years... just imagine showing up in two hundred years.

Speaker: 1
03:21:26

Yeah. Well, already plenty has happened. You know? Like, if you could go back in time and talk to your teenage self, you know what I mean? Yeah. Just the Internet, and now AI. I mean, it's like, people are beginning to internalize it and say, yeah.

Speaker: 1
03:21:40

Of course an AI can understand our voice and give reasonable, if, you know, slightly incorrect answers to any question. But, yeah, this was mind-blowing even two years ago.

Speaker: 0
03:21:50

And in the moment, it's hilarious to watch on the Internet and so on: people take everything for granted very quickly, and then we humans seem to entertain ourselves with drama. Out of anything that's created, somebody needs to take one opinion, another person needs to take the opposite opinion, and argue with each other about it.

Speaker: 0
03:22:08

But when you look at the arc of things, I mean, even just the progress in robotics... just to take a step back and be like, wow, it's beautiful that we humans are able to create this.

Speaker: 1
03:22:19

Yeah. When the infrastructure and the culture are healthy, you know, the community of humans can be so much more intelligent and mature and rational than the individuals within it.

Speaker: 0
03:22:30

Well, one place I can always count on rationality is the comments section of your blog, which I'm a fan of. There's a lot of really smart people there. And thank you, of course, for putting those ideas out on the blog. And I can't tell you how honored I am that you would spend your time with me today. I was looking forward to this for a long time.

Speaker: 0
03:22:52

Terry, I’m a huge fan. You inspire me. You inspire millions of people. Thank you so much for talking. It was a pleasure.

Speaker: 0
03:22:59

Thanks for listening to this conversation with Terence Tao. To support this podcast, please check out our sponsors in the description or at lexfridman.com/sponsors. And now, let me leave you with some words from Galileo Galilei: "Mathematics is a language with which God has written the universe."

Speaker: 0
03:23:19

Thank you for listening, and hope to see you next time.
