
Erik J Larson - Understanding AI, The Mind vs. Machine Debate & The Framework of Reasoning

Why AI Isn't What You Think It Is

Welcome to My First Episode

Hey, dear readers. I'm excited to welcome you to my very first podcast episode.

You know, I've been wanting to start this conversation for a while now. A few months ago, I read Erik J. Larson's book "The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do," and it really struck a chord with me.

I've been increasingly frustrated by what I see happening in discussions about AI. Tech leaders are often spreading questionable philosophy, philosophers misunderstand the technology, and then there are these influencers and newsletters with millions of subscribers just pumping out complete nonsense about the field.

I think it's time for some clarity. That's why I'm so excited to have Erik on the show today. What makes him unique is his background in both philosophy and computer science, combined with the fact that he's had a front-row seat to AI development for the past 25 years.

Honestly, this is someone worth listening to. Even Peter Thiel recommended his book, saying, "If you want to know about AI, read this book... It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence."

So let's cut through the hype together and have an honest conversation about AI—what it is, what it isn't, and why the distinction matters. I hope you'll join me on this journey.

Summary

In this conversation, Alberto and Erik J. Larson discuss the complexities of artificial intelligence, exploring its evolution, philosophical implications, and the limitations of current AI systems. They also discuss the journey of writing Erik's book, the distinction between human reasoning and machine learning, and the future of AI technology. The dialogue emphasises the need for a deeper understanding of AI's capabilities and the philosophical questions surrounding consciousness and intelligence.

Takeaways

  • Erik J. Larson has been involved in AI since 2000, with a background in philosophy and computer science.

  • The hype surrounding AI often overshadows its actual capabilities and limitations.

  • AI systems primarily rely on data and do not possess true understanding or consciousness.

  • The absence of abduction, a form of reasoning, is a significant limitation in current AI approaches.

  • The distinction between human reasoning and machine learning is crucial in understanding AI's capabilities.

  • Philosophical perspectives on AI can influence how we perceive its potential and limitations.

  • AI can be seen as an extension of search technology rather than a revolutionary leap.

  • The future of AI may face challenges due to data limitations and the current plateau in advancements.

  • Understanding the assumptions behind AI discussions is essential for meaningful discourse.

  • AI's confidence in providing information can lead to misinformation if not critically evaluated.

Chapters

00:00 - Introduction to AI and Erik's Journey

12:03 - The Genesis of Erik's Book

21:59 - Understanding AI: The Mind vs. Machine Debate

32:18 - Limitations of Current AI Systems

32:45 - The Framework of Reasoning: Induction, Deduction, and Abduction

36:55 - Generalization and Reasoning in Machine Learning

40:03 - Practical Applications of AI in Daily Life

46:14 - The Future of AI Agents and Their Impact

50:15 - Looking Ahead: Innovations Beyond Current AI Models

Full, Unedited Transcript

(Apologies, it may contain some errors)

Alberto (00:05)

Okay, good evening, I think, in your time zone, and good morning for Sydney, where I'm recording. So I've been waiting to do this for a long time. I first learned about your work, Erik, when I read The Myth of...

Erik J. Larson (00:09)

It is.

Alberto (00:27)

Artificial Intelligence: Why Computers Can't Think the Way We Do, a couple of years ago. It was very unfortunate that I came across your book just after I finished writing and published mine, because I would have included a lot of quotes and remarks from yours in my book as well. But yeah, that's history. First of all, I want to ask you to introduce yourself to our listeners: what is your background, what has been your journey, et cetera.

Erik J. Larson (01:05)

Okay, so I'm Erik Larson. There's another writer who writes fiction with the exact same name, so I usually go by Erik J. Larson. My middle name is John. And I started working in AI in a technical capacity on January 3rd, 2000. So I always joke that for the entire century so far, I've been doing AI. So

I worked for a symbolic AI company called Cycorp in Austin, Texas. And then from there I got into natural language processing and machine learning. I did a PhD in 2009 at the University of Texas at Austin, and that was kind of interdisciplinary: linguistics, philosophy, and computer science. And then after that, when I...

finished the academic stuff, I submitted a provisional patent on the tech that I had been working on at UT, ended up getting DARPA funding, and so started a company and moved out to Silicon Valley and did that whole thing. So I've been sort of off and on in tech over the last decade. I took some time and traveled, so a couple of years in Europe.

I came back and co-founded a company and actually before I left for Europe, I co-founded a company in 2016 and that was reasonably successful, but I ended up parting ways with the other founder.

So basically I'm an ideas guy. I write, I start things from the very beginning, and I try to stay out of management as much as possible. So yeah, I've been involved for a couple of decades now, you know, and seen a lot of changes in what we call AI, the field.

Alberto (03:10)

Yeah, that's fantastic. It's definitely a front-row seat, witnessing the whole story that led to the, I would say, overhype that I guess you agree with me we're in today. I'm curious, what year was it when you started your company?

Erik J. Larson (03:24)

Yeah.

The company started in, I think officially I incorporated in 2007, and it was just R&D from DARPA and the Department of Defense. So I went to Palo Alto to basically explore commercial opportunities, but we ended up being an R&D company, and we did

fundamental research in classification. At the time blogs were a big deal, and so we were trying to figure out how to organize blogs so people can find what they're looking for. They weren't actually showing up very well on Google searches, because you have this rich content but they're not linked like commercial sites; it's just a guy writing. We have solutions to that now with Substack and so on. We have platforms now where you can become

well known even though globally you don't have 50 million hits; you can develop networks. But at the time blogs were just kind of getting left out of search indexes, and so I developed a way that blogs could get seen, basically. And yeah, so that was about 2007, and then, I think,

around 2012, I officially shut it down.

We basically did, I think in today's dollars, over $4 million in R&D, and it went back to the government. You know, I don't know what they're doing with it right now. It could be in a dark project where you can't see it, because I don't have clearance or something, but they had a non-exclusive license for it. We never actually went commercial-facing, but it was an amazing experience. I built it from the ground up. It went from just me sitting in my house waiting for funding

Alberto (05:21)

Yeah.

Erik J. Larson (05:28)

all the way to millions of dollars, and, you know, I think we had 13 or so employees, which for a tech company like that is a lot of employees. So yeah, that was fun. That was a very seminal and pivotal moment in my career.

Alberto (05:50)

And I think you also have some background in philosophy, if I'm right.

Erik J. Larson (05:54)

Yeah, so I was a double major in math and philosophy as an undergrad, and then I did a master's in philosophy. But I was studying Gödel's theorem, which is a very technical aspect of philosophy, and so I did mathematical logic, proving things. But I also studied Wittgenstein, and

I mean, you can't get through a master's in philosophy, at least this was true 20 years ago, without knowing a lot about philosophers. So I had to take the continental philosophers as well, Mill and...

Kant and Hume and all those guys. So yeah, and that actually has been really helpful for me having core ideas like that to be able to communicate, you know, in AI. So it's not just math and statistics. You have to be able to conceptualize what's happening. So I think the philosophy has really helped actually as a writer, you know, it's really helped and maybe more than just as a writer. But yeah.

Alberto (07:04)

Yeah, yeah, I think that's very much the case. I've been reading your writings, starting from your book and your recent publication on Substack called Colligo, which I find really amazing, because I've long been passionate about philosophy. Actually, when I had to choose a university, I was undecided between physics and philosophy. I went over to physics, but I still have this sort of

thought in the back of my head saying maybe sometime I would like to get a degree in philosophy. And it's very important to understand where all this is going. It also leads to the next topic I wanted to discuss with you. I was really curious to understand the journey that inspired you to write a book, that kind of book, on that kind of topic.

Erik J. Larson (07:39)

Yeah.

So the book, I'm gonna turn this

up a little bit. I hope it's not too distracting, because I need to hear you a little bit better. There we go. So the book was a kind of odyssey that started well before I ever wrote a word on a page. I was...

I was interning at the Discovery Institute up in Seattle, and they have a

kind of, I don't know how you want to say this, I don't know what I should say, but a lot of their researchers are people who come from a Judeo-Christian background. And so they were very

skeptical about this idea that you can have a mechanism that reproduces the mind, because that has something to do, you know, with some other aspect of creation and so on. And so I have a friend there who's a really good professor, and I was doing work with them, and I started thinking about it. I'm not super religious myself, not in that kind of, you know,

orthodox way, going to church and so on, but I thought it was interesting how they were framing things. And so from very early on, in the 1990s, I was thinking: what can we do with a machine, and can machines be minds? So you have this kind of core question, and all the math and physics in the world can't really answer that, right? But with AI you have this interesting

scenario where, if you can get the behavior to be intelligent enough, it's a kind of existence proof that maybe it can be a mind, right? So if the computation basically simulates or reproduces everything that we're doing and thinking about, then, as

Turing himself said back in the 1950s, how can we exclude it from our consideration of mind if behaviorally it's doing everything that we do? And so we don't know what's going on inside of us. We don't really know what's going on inside of a neural network. But I kind of started with that core question,

what's the difference between a mind and a machine? And then,

You know, as I was working in the field, it got refined more and more, and so the questions started to get more specific, like: can we get a machine to understand, you know, monologue, which is like reading a newspaper and then answering questions about it? And then can we get a machine to understand dialogue, which is conversational, which is ChatGPT? And so, working as a computer scientist,

I started to refine that and eventually I kind of came back to that original question. What are we doing here? And obviously,

there was a lot of hype going on, as we see again now. But in the 2000s, around 2005, Facebook had started in 2004, Web 2.0 was coming up, and then suddenly AI was working again because we had all this data on the web. And so you started to hear these arguments that it's right around the corner: we're going to have AGI, we're going to have superintelligence. And so I just thought the public kind of needs to know, you know, that it's really, really

difficult for us to get computers to mimic or simulate what human beings do. And, you know, at the time we had no evidence, really. I still think we don't, by the way, and we can get into that with ChatGPT.

Alberto (11:42)

Mm-hmm.

Erik J. Larson (11:42)

But, you know, this is my long answer here, but that was the genesis of: okay, I'm gonna write a book, I'm gonna explain this. You know, I have a company, I'm working as a computer scientist,

you know, I'm in a perfect position to say this is what's really going on. Here's the inside scoop. And so the book just came together over years of me scratching notes.

And I was really fortunate that Harvard called. I was actually in Ukraine when they called, and we had tried for about a year, me and my book agent, Roger, he's no longer my agent, but he was at the time, and we didn't have any success getting it, you know, The Myth of Artificial Intelligence, under contract. And so when Harvard called and said, we want to take this on, I was overjoyed.

Alberto (12:16)

Hmm.

Erik J. Larson (12:41)

And so I wrote some of it in Europe and then I came back to the States and wrote the rest of it. It was an odyssey. There was no one point where I said, okay, I'm gonna stop doing what I'm doing and I'm gonna start writing. It started all the way back when I was a kid, thinking about what are computers and what can they do and who are we and what can we do. And it just finally kind of culminated in that book, so yeah.

Alberto (13:08)

Amazing, amazing journey. And yeah, there are many themes that I want to pick up on. But let's start from the one you briefly mentioned. It's becoming clear through ChatGPT today, or I'll say GPT, large language models, that it's difficult to say that machines are doing exactly what humans are doing. And I really like this topic because I...

I've been building machine learning systems for a long while now, and I see what's going on inside these algorithms, and I often find it a bit funny when I see lots of posts on social media, but also talks from experts at the top AI conferences, that make these parallels between AI and the human mind, and

they humanize the algorithms a lot, et cetera. So it would be great if you could shed some light on this and try to simplify what the difference is, in simple terms.

Erik J. Larson (14:22)

Well, right, I mean, so...

My official position is that we don't yet know what the limits are to computational systems. But I have to be honest, I'm very skeptical. When you develop systems, as you were just explaining that you did, you realize that basically the intelligence is coming from the designers. It's coming from the people who are writing the code, right? And so I think in the entire history of the field, there's only

Alberto (14:34)

Yeah.

Erik J. Larson (14:54)

one kind of road bump to that, which is these, you know, these foundational models, because they exhibit a kind of pseudo-emergent behavior that we didn't expect. I didn't see it coming. I think even the guys that developed it, originally at Google with, you know, attention, and then with ChatGPT, where they actually did the pre-training, they didn't see it coming. Everybody was surprised. And so it does kind of raise this issue, like maybe we got it wrong. Maybe computers

can be mind-like, I actually don't think that's true. I think we have these powerful technologies now. And a friend of mine, Gerben, in the Netherlands, Gerben Verde, he calls it wide AI. It's not general intelligence, but it's a little bit more than narrow intelligence. We have wide AI. And so...

I don't know if there's an existence proof. Maybe you could use something like Gödel's theorem or something and prove that there's a fundamental distinction between provability and truth or something. But I think that everything that we're doing

in AI, the way I see the field, is that we're engineering solutions to problems. So we're actually an exogenous intelligence that we're bringing into this, architecting something to solve

some of our problems. So it's a little perverse to say that the thing that we architected is actually smart, and smarter than us, right? I think that's just not a scientific move on the chessboard, as far as I see the evidence, as far as I see what goes on, what we do when we develop systems.

Alberto (16:28)

Yeah.

Yeah. Yeah. And that brings me to another consideration that I keep hearing all the time. I want to connect it to what you were saying about one of the origins of your journey being studying alongside scholars that were coming from Judeo-Christian ideas, probably. And I find this very interesting.

On the contrary, I'm a scientist, but I'm also a religious person, and I don't think the two things go against each other. And I find it really interesting that usually academics and scientists dismiss talk of, oh, you know, the connection between the mind and the soul, if there is such a thing. They're as quick to discard this discourse or these topics as they are to assume that

Erik J. Larson (17:17)

Yeah.

Alberto (17:38)

the mind is computational. And I find it quite disturbing that this is an assumption that no one challenges. I never read any proper debate around these assumptions; they're just taken for granted. If someone starts discussing them, they're ridiculed or not taken seriously. I want to understand your take on this. Why are we so rational that we no longer can talk about the soul?

But on the other hand, all the discourse around AGI has strong religious connotations.

Erik J. Larson (18:09)

Yeah.

Yeah,

yeah, exactly. It's funny, if you just look at it, let's step out of being scientists and engineers and look at it in more cultural, political terms, you never see

Alberto (18:28)

Yeah.

Erik J. Larson (18:33)

a proponent or an enthusiast of AGI, or even somebody who thinks it's gonna be bad, right, like killer robots, who isn't secular. They're always secular. You very, very rarely see somebody from a religious persuasion, you know, Christianity, Judaism, Islam, what have you. There's this idea that who we are is unique

and what we make is a different thing. So if you have that grounding in this kind of historical understanding of the mind as being something that didn't come out of the material,

it didn't solely come out of the material of the earth, but in fact there's something else that's special about life, and life has a creator, then you never really arrive at this point where you have killer robots that have these angry thoughts. But if you have a secular mindset, then we're just computers made of meat. As Marvin Minsky, the famous MIT AI pioneer,

famously said, you know, we're computers made of meat. So if you have this idea that there's nothing special going on with us other than physics and chemistry and so on.

But you need one more step. You need to actually get out of the wetware and you need to say that what constitutes intelligence is a functionalism or in other words, just the software. So you can run the software on a brain or you can run the software on a computer chip. But I think like all of these premises are highly debatable and dubious. And in fact, I think one of the greatest mistakes we've

made was that we didn't abandon this metaphor decades ago, right? We would have entirely new research programs right now, but we're stuck with it, even though it doesn't really make sense, and we all know it. You know, I used to just pull my hair out arguing, and then I finally stopped arguing with, you know, the futurists, the tech futurists, the "computers are gonna come alive and they're gonna either create heaven on earth or they're gonna eat us all, like Skynet" crowd. I used to argue with them, and then I realized it's arguing with a new religion.

Alberto (20:31)

Yeah.

Erik J. Larson (20:56)

Like, literally, they are never going to change their mind. It doesn't matter what I say. They believe that the computer is the new sort of reference point for eternity, right, for meaning, and to me that's just silly. I mean, you might as well say that about my chair; it's just an artifact, it's something that we build. So you can see why those conversations don't go anywhere. So, you know,

Alberto (21:11)

Yeah.

Erik J. Larson (21:21)

The Myth, again, just getting back to the book, that was my attempt to say: here's something, deal with this. This is not just me sort of, you know, popping off about what I believe; this is a very structured, systematic way of looking at this, so, you know, what do you make of it? But again, I think it's complicated, because GPT solves a lot of the problems that I put in the book.

Right? So there's this sense in which we're still engineering more powerful systems.

But I'm never confused into thinking that the power of those systems ever creates a kind of consciousness. It's just apples and oranges, right? How is consciousness gonna emerge from more powerful computation running more powerful algorithms? How exactly is it supposed to suddenly make that qualitative change? So yeah, I don't buy it. I don't know if I answered your question, but.

Alberto (22:15)

Yeah. Yeah, yeah,

it is certainly helpful. And yeah, I think it also sheds some light on the assumptions that most of these discourses are based on. Because, again, I

rarely see someone laying down what the assumptions are. There is also this very steep leap ahead. Everything that we are talking about, the possibility of superintelligence, AGI, et cetera, is based on these assumptions, and nobody stresses that. Then, of course, you can believe the assumptions or not, but at least state them, that's the...

Erik J. Larson (22:56)

Yeah.

Yeah,

no, I think there's a lot of bad philosophy that goes on.

You know, we have this technical field that has billions, trillions of dollars by now invested in it. And, you know, it's exciting and it's running the world and we're all using it on our laptops. That's why, you know, if you read my Substack, I say, look, there's no reason not to use this. You're using Google search; it's just an extension. These things are extensions. But if you look at the arguments that the

Alberto (23:28)

Yeah.

Erik J. Larson (23:33)

you know, techno-futurists and people who believe that the computers are gonna come alive make, it's really bad philosophy, right? Like, what is your philosophy? What is your theory of mind? Lay out for me how this is supposed to happen. And all you ever hear is: it's gonna get really fast and really smart, and then it's gonna be alive. It's like, how? You know, it doesn't make sense. Like, you know, a calculator,

Alberto (23:40)

Yes.

Erik J. Larson (24:00)

I used to use this example: a calculator is like the perfect AI, because it never gets an answer wrong, right? And there's a sense in which, because of that precision and that power in the calculator doing arithmetic operations, no one's going to ascribe consciousness to it. But somehow, you know, if we have these other systems that throw a wider net and can capture natural language, they have consciousness.

Alberto (24:06)

Yes.

Erik J. Larson (24:27)

I don't see the argument philosophically. You're just inserting it. You're just saying it will, right? And, you know, there's something else I was going to say along this line, in terms of running the algorithms on the brain, but I'll have to come back to it. I can't remember right now. Yeah.

Alberto (24:48)

Yeah.

Okay. Yeah, yeah, that's definitely some great points there.

I think I want to transition a bit into a topic that you discuss a lot in your book as well as on your Substack. You break down very well this framework of what the human brain does when reasoning, and whether machines can do it: induction, deduction, and abduction.

Could you explain very briefly what these terms are, for anyone who's non-technical and non-philosophical? And then we can digress a little bit into abduction, which is probably the biggest limitation of current approaches to AI.

Erik J. Larson (25:40)

Right, in computer science, as you know, you have...

Basically you have some preparation or training if you're doing machine learning and then you have a situation where you need to have it perform on new unseen data, new unseen observations, right? And then that's called inference. And inference is just how does it actually perform? How does it draw a conclusion? How does it produce the next token? Whatever the task is. But that's inference after you've done the prep or the training or what it is.

So the reason I keep saying prep or training is because if you're building a symbolic system, you're not actually doing statistical optimization. You're actually writing code to do that. But at some point, you're going to unleash that on the world. And then it's going to infer something based on how you made that system. And so inference is this ubiquitous thing. All cognitive systems do it. We

are the prime example. And it's basically given what I know and what I see, what does it make sense to believe next? That's inference, right? It's both what I already know, prior knowledge, and what I currently observe, what does it make sense to believe next? And so we're doing inference constantly. It's like a condition of being awake. Like we're constantly inferring. And so the question is, sort of how does that work? How do you combine what we know and what we see to get something new and it's all connected

and it's reasonable, right? And there's basically three ways to do that. And that's what I laid out in the book, this tripartite framework. The one that pretty much all of AI is based on at this point is called induction. And that's just from prior observation:

what do you believe is true about the world? So if you see, you know, 10,000 white swans, what should you believe about swans? Probably that they're white. But it's also

defeasible, or fallible, because just one counterexample suffices to break the inference, right? So if you see one black swan and you've inferred that all swans are white based on all your prior observations, you've actually invalidated the rule that you created. So it's inherently

a weak kind of inference, but because we do so much observing in the world, it has a very wide scope. So a lot of things fall under induction, but it's a kind of weak inference because it's held hostage by the threat of a counter example, right? And so if you go to deduction,

Alberto (28:19)

Yeah.

Erik J. Larson (28:32)

you have, and this has been studied all the way back to Aristotle, it's one of the oldest forms of inference or reasoning that we know of: deduction. The classic example is the syllogism, you know, the one I think Aristotle gave over 2,000 years ago: all men are mortal.

We'd say all people are mortal now, but the original was all men are mortal.

Socrates is a man, therefore Socrates is mortal. And so that's called a syllogism, because you've got two statements about the world that you believe are correct; one is a rule and one is a fact. All men are mortal is the rule, and Socrates is a man is the fact. And then you conclude a new fact from that, which is Socrates is mortal. So that's a syllogism, and the inference is still described in Latin in academic circles as modus ponens. And there's all kinds of these, right?

You can reason about negation, you can do all this kind of stuff. The idea is that, given the premises, and given that you use a valid rule of inference, I just mentioned modus ponens, the conclusion always follows with one hundred percent certainty, so it has to be true if the premises are true, right? And so with deduction, you have this

distinction between, and I hope this is not too much for your viewers, by the way, but you have this distinction between

Alberto (30:04)

I think it's all right.

Erik J. Larson (30:06)

validity and soundness. So I can say, you know, all computers are mortal,

this thing in front of me is a computer, therefore it's mortal. And I didn't break the rule of inference, so it's still a valid deductive inference, but it's not what they call sound, because the premises, the two preceding statements, aren't true. So with deduction, you're trying to get at both:

a way of reasoning that doesn't have the counterexample problem like induction does, you don't have a black swan, it's always true, but it's always conditional on the truth of the prior knowledge, right? So you can always have validity, which means you didn't screw up your reasoning, but you might not have something true about the world until you can independently verify the truth of the premises. So it's a little bit of a high bar, but you have this idea of certainty: with a

sound argument in deduction, you know that it's true. Like a triangle has three sides, you know that it's true. There's no way that it can't be true. And so it has that virtue, but a lot of what we do is kind of messy and we're not able to establish the truth of those premises. And so because we can't get from validity to soundness, a lot of knowledge has to kind of sit and lay unresolved.

as it were, in deduction, until we establish the truth of those premises. But the idea that you can use rules in a structured way of thinking, and you don't just have to count examples, is the key to deduction. And so we do a lot of deduction. When we can make use of it, it's always right, and it's an extremely powerful inference tool. But unlike induction, which is very wide-scope and weak, deduction is very powerful but

small-scope; only a little bit of our world will, you know, allow us that kind of certainty. And so then abduction is the third in this kind of tripartite set of inferences, and abduction is basically a logical fallacy. Let me change the example. So if you say, if it's raining, the streets are wet,

there's your rule, first premise. And then

if you wanna do deduction, you'll say, the streets are in fact wet, right? Or I'm sorry, it is in fact raining. I'm sorry, I did the abduction one, not the deduction one. Yeah, yeah, yeah, you don't wanna do that. That's the whole abduction thing. So if it's raining, the streets are wet. It is in fact raining, right?

Alberto (32:49)

Yeah, the streets are wet, it is in fact raining.

Erik J. Larson (32:59)

So you know the rule fired, in other words, right? You know that you have this instance of it raining, so you know that the streets are wet, and now you have deduction. But what abduction does is called affirming the consequent. It's basically a logical fallacy, but it's what we use all the time. And

we say, if it's raining, the streets are wet. I see that the streets are wet. Maybe it's raining, maybe it's something else. Maybe, you know, the fire hydrant broke. Maybe the kids are outside playing, you know, with the hose.

You know, maybe a supertanker flew overhead, we have these fires in California, and dumped a bunch of water in the wrong spot. Who knows? There are all kinds of reasons the streets could be wet. So we don't have deductive certainty, but we have an argument where we are forced to look at the world like Sherlock Holmes and say, what is the evidence for this? Almost everything we do is abduction, and we can't have certainty. And we don't say,

I've seen the streets get wet 10,000 times before, and so I know that this time it's because of rain, because, you know, at any given time there could be another reason for it, right? So we don't use the straitjacket of induction. We don't just say, this is the way it's always been, it has to be this way, it has to be that it's raining. We look at it in, you know, a flash and we say, what is the reason for this occurring? And the way

Peirce said it, the 19th-century scientist and philosopher, he said: I see something surprising, and I ask myself, if something else were true, this wouldn't be surprising anymore, right? I didn't expect the streets to be wet, but there's a fire hydrant broken right around the corner, so the surprise is reduced. Oh, now I see. So,

Yeah, so what I did in the book was I said, look, we're using abduction all the time, but nobody in AI is working on it. We're all working on induction and we know that that's not adequate for intelligence, right? So we know that we're not on this path to AGI because we're stuck in induction, which is machine learning.

And so that was the basic argument. Yeah. I mean, it's a mouthful. I hope I didn't lose you on all that, I mean, not you, but your listeners. Yeah. So it's a mouthful.
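
(To make the three inference types concrete for readers who like code, here is a minimal, purely illustrative Python sketch of the swan and wet-streets examples Erik walks through. The rules, observations, and plausibility numbers are invented for this post; they aren't taken from Erik's book or any real system.)

```python
# Toy illustration of the three inference types discussed above.
# All rules, observations, and numbers are made-up examples.

# Induction: generalize from prior observations; one counterexample breaks the rule.
def all_swans_white(observed_colors):
    """Induced rule 'all swans are white' holds only until a counterexample appears."""
    return all(color == "white" for color in observed_colors)

print(all_swans_white(["white"] * 10_000))              # True: the rule holds so far
print(all_swans_white(["white"] * 10_000 + ["black"]))  # False: one black swan invalidates it

# Deduction: if the rule and the fact are true, the conclusion must be true (modus ponens).
def deduce_streets_wet(rule_holds, it_is_raining):
    """'If it's raining, the streets are wet' plus 'it is raining' forces the conclusion."""
    return "streets are wet" if (rule_holds and it_is_raining) else "cannot conclude"

print(deduce_streets_wet(rule_holds=True, it_is_raining=True))  # streets are wet

# Abduction: from the observed effect, guess the best explanation; it is never certain.
candidate_causes = {"rain": 0.7, "broken fire hydrant": 0.2, "kids with the hose": 0.1}

def abduce_cause(observation, plausibility):
    """Pick the explanation that best removes the surprise; a guess, not a proof."""
    if observation != "streets are wet":
        return None
    return max(plausibility, key=plausibility.get)

print(abduce_cause("streets are wet", candidate_causes))  # 'rain', but it could be something else
```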

Alberto (35:08)

Yeah. I hope

not, and I think it's very important to lay it down properly. Yeah.

Erik J. Larson (35:26)

Yeah, it's really important. Yes. Yeah, I totally agree.

Yeah.

Alberto (35:31)

I think that brings me to discussing the current AI systems that we see today. You mentioned that they do induction well, I wouldn't say perfectly, but well enough to be useful for a number of things. I'll come back later to understanding what they're useful for. I just want to conclude the part where we dissect what the main limitation is and, therefore, what the proper use of these systems is.

And this point on abduction is interesting, because if I understand it well, abduction is the sort of reasoning behind the classical physics example of Newton seeing an apple falling from a tree and abducing that, if the gravitational laws were true, it would make sense that the apple falls from the tree in that particular fashion.

We've been hearing a lot of overhyped marketing from the AI labs claiming that this new kind of foundational AI will produce new science, will solve global warming, et cetera, all these messianic claims. But what I'm interested in is: can these systems, or is there evidence that these systems can, do abduction?

Erik J. Larson (36:51)

Well, so it's an interesting question. I think the answer is that they can simulate abduction without doing it. So I think what you have, if you take GPT or ChatGPT, is a kind of closed-world assumption, where you can kind of draw a circle around

Alberto (37:03)

Mm.

Erik J. Larson (37:15)

such a large amount of human knowledge on the internet that almost anything we're going to talk about is going to end up within the purview of that system. You can talk about vector spaces and embeddings and all that if you want, but basically, you know, what it's doing,

I mean, a very, very simplified view of what it's doing, is it's creating embeddings with word similarities, token similarities, right? And so, you know, crocodile is closer to Florida than it is to the planet Mars, right? And so you have all these similarities, you know, a cruise ship is closer to tourism than it is to a

desert scorpion, right? And so you can imagine having this space that's so huge that for almost any word, you know, you have a way of finding

the relevant next word. And so in that sense, you can do abduction by just generating the sentence that's the answer, without knowing that you did abduction, right? And, by the way, I need to mention this. I had some really hard examples in the book, and ChatGPT knows, I'm not bragging when I say this, I'm just telling you, it knows me.

It indexed my Wikipedia page and my book. So I can ask it questions: who am I? I'm Erik. And it'll say, yeah, you wrote this book, here's what you said in chapter three. So it's not fair, because it trained on

The Myth of Artificial Intelligence. So of course it solves all the problems, because they were in the training set, right? So it didn't have to generalize. But it does generalize in a way that looks

like abduction. But what it's doing, basically, and I'll tell you why this can't keep working, why we can't keep doing this the way we got this far, is that if that circle is large enough and you have enough observations, you can always find a way

to create a statement that looks like you deduced it from premises. But in fact, what you did was you just added tokens on, and we read it in our minds and say, wow, that's true.

And so look, give it credit if you want. It's a philosophical issue. Do we want to say, it does abduction? But it's not aware of doing that kind of reasoning. It's not surprised by a fact and then goes and looks for another fact. It's doing the same thing as it always does. It's a unitary sort of mechanism that it uses for inference. And it doesn't scale, by the way. It's stuck in the cyber world. If you try to put these systems in self-driving cars and put video cameras and say,

Alberto (39:51)

Mm.

Erik J. Larson (39:59)

like

go drive around the country in rain and snow and so on, it can't use the internet to figure out what to do. The jig is up, right? So it's not a silver bullet for AI. But for conversational AI, we just have this powerful way to grab so much data and project it into this space using an inductive procedure, right,

Alberto (40:11)

Yeah.

Erik J. Larson (40:27)

that we can get these amazing results from it. That's how I look at it, you know, yeah.
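
(For readers curious what "closer in embedding space" means concretely, here is a minimal sketch using cosine similarity over made-up three-dimensional vectors. The numbers are invented for illustration; real models learn vectors with hundreds or thousands of dimensions.)

```python
import math

# Toy "embeddings" invented purely for illustration.
embeddings = {
    "crocodile": [0.9, 0.1, 0.2],
    "Florida":   [0.8, 0.2, 0.3],
    "Mars":      [0.1, 0.9, 0.7],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With these toy values, "crocodile" comes out much closer to "Florida" than to "Mars".
print(cosine_similarity(embeddings["crocodile"], embeddings["Florida"]))  # roughly 0.98
print(cosine_similarity(embeddings["crocodile"], embeddings["Mars"]))     # roughly 0.30
```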

Alberto (40:33)

Got it, yeah. And you mentioned the word generalization. I'm always surprised by the extent to which even machine learning researchers don't appreciate the word generalization in machine learning enough. Machine learning works well when it's able to generalize well to

unseen, let's say, contexts or patterns. And I sort of refer to the ability of LLMs to quote-unquote reason as machine learning working well, so machine learning generalizing well. Would you agree with that? Would you add some caveats on the differences between generalization and what people call reasoning?

Erik J. Larson (41:14)

No, no, no, I mean, I think, like, there are many instances where the inference in a language model, an LLM,

is not a reproduction of the training set, which is the definition of generalization. So you're getting a lot of novel information, recombined tokens that are actually not verbatim, token by token in the training set. So in all those cases, you're getting a generalization performance. That's the definition of what we mean by it. It generalizes to grammatically correct meaningful statements that are not verbatim in the training set.

Alberto (41:33)

Yeah.

Yeah.

Erik J. Larson (41:59)

And so I think in that sense, it's an extremely powerful system in AI. I'm very impressed with it. I say this on my Substack and I get in trouble, because people say, I thought you were against it. I am, but, like,

you know, I can walk and chew gum at the same time, right? Yeah. Yeah. So yeah.
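
(To pin down the narrow definition of generalization Erik gives here, this is a hypothetical sketch of the "not verbatim in the training set" test: an output counts as generalization, in this minimal sense, if it never appears word-for-word in the training data. The corpus and sentences are toy examples invented for this post.)

```python
# Toy check of "generalization" in the narrow sense discussed above:
# the output is well-formed but not copied verbatim from the training data.

training_corpus = [
    "the cat sat on the mat",
    "the dog chased the ball",
]

def is_verbatim_copy(generated, corpus):
    """True if the generated sentence appears word-for-word in the training corpus."""
    return generated in corpus

outputs = [
    "the cat sat on the mat",   # memorized: verbatim in the training set
    "the dog sat on the mat",   # recombined tokens: not verbatim, so it 'generalized'
]

for sentence in outputs:
    label = "memorized" if is_verbatim_copy(sentence, training_corpus) else "generalized (not verbatim)"
    print(f"{sentence!r}: {label}")
```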

Alberto (42:15)

Yeah, absolutely. I have

the same trouble, because today it's very common that around any topic two main factions form, and I don't belong to either. I'm very excited about AI and what it is today. My PhD is in natural language processing, and the field has made

enormous steps in the past 10 years, so it's really fantastic. On the other hand, yes, I'm cautious, I'm worried about all these narratives and ways of thinking about AI. And one of the statements, or conclusions, that I really admired from reading your book is exactly what you mentioned briefly at the beginning of our conversation: a lot of investment goes into the same technique, and there is no

real will to explore many other possible ways to do AI. I would like to move now to the more positive side: knowing what we know, having established the premises of what these systems are, what their limitations are, and what they're useful for. People scratch their heads every day saying, yes, we have these very powerful technologies and we can make animated memes now, no

longer just pictures and copy-paste, we can make more sophisticated, fun stuff. But what else? What is a general rule or framework for how these tools are really useful and helpful for a multitude of tasks?

Erik J. Larson (43:59)

Yeah, no, I agree. Are you asking, like, a practical business question about how they're going to get sort of diffused into...

Alberto (44:08)

I would probably rather extrapolate at a more abstract level: in personal life, business, work, knowing that there are these limitations of the current approach based on machine learning inference, what does it become useful for?

Erik J. Larson (44:30)

Yeah, I mean, look, I have this kind of unpopular view, which is: is it really, in the end, a lot different from doing an internet search, right? Internet searches use deep neural networks now to rank the result set that gets returned and so on, and they do reinforcement learning to say this answer is better than that one. And so a lot of the same machine learning techniques actually

Alberto (44:46)

Yeah.

Erik J. Larson (45:00)

go into a Google search. That started around 2014, 2015, when Google abandoned just the old PageRank. They still have PageRank as a signal, but they started using deep learning to basically organize the results and then present them. And, you know, people hate it when I say this, because they want to say we're in this new era, but I view it as a kind of extension: we're trying to get information

from our laptops to appear on our screens or on our phones, right? So, like, I actually use voice-to-text when I talk to ChatGPT, and I'll ask it a question that just comes to my mind.

It could be anything, like what's going on with the California wildfires, and then it goes and searches the internet, right? Or it could be something like, what's the chemical composition of sodium? I don't know. And it gives you the answer, right? So I just feel like it's an extension of search.

And it's the kind of thing that we should really celebrate as a technology. Just use it for a while and figure out where it fits into your information space, right? And I think the problem with it is that it is so confident about the information that it provides, and it doesn't provide any way for a human mind to check it.

So it basically just acts as an oracle that says, this is true. And if you don't know how to revise your prompt or ask follow-up questions, you might end up with actually bad information about the world. And, you know, if you put that into an educational context where you have, you know, third graders or fourth graders who are just learning

Alberto (46:28)

Yeah.

Erik J. Larson (46:53)

how things work, what constitutes a good essay, how to verify your sources, and all the stuff that we went through, if you're using that as an oracle, and if educators are actually using it as an oracle, you're short-circuiting the human process of learning. But if you take people like me and you, who already learned how to write, I can tell,

maybe not the chemical composition of sodium, because I'm not a chemist, I might have to actually Google search that, but I kind of know: hey, this doesn't sound right, this doesn't seem right, or you're referencing something I've never heard before, I'm going to, you know, do a follow-up search on Google or something. So I think there's this tremendous value

in that technology just as a search extension. Like, if we could transport back to 2002 and we could talk to Sergey and Larry: hey, we've got this technology, it's like search and it can do search, but it's like you have an AI on your phone and you can just ask it questions,

they would have freaked out. They would have said, that's the holy grail of everything. So we should say that, we should admit that this is really cool and we can use it. But there are problems, because it's not a mind and it has no idea, it literally has no concept of being wrong. Like if you say, hey,

Alberto (48:02)

Yeah.

Yeah.

Erik J. Larson (48:22)

That is wrong. You just gave me bad information. Shame on you. It'll say, I'm sorry, but it's just starting a new confident assertion, right? Like, it doesn't take a moment and say, yeah, I guess I shouldn't have said that. Did I screw you

up? Did you lose your job? I'm sorry. You know? It's like we have to deal with this kind of fake-but-real-at-the-same-time phenomenon in our information space.

Alberto (48:49)

Yeah.

Erik J. Larson (48:51)

And I just think that's the challenge of the 2020s. That's what we're doing. So yeah.

Alberto (49:00)

Absolutely. Absolutely. I think it's a fantastic way to frame it; I never thought about it this way before. I'm coming at this from probably some different angles, or conversations I've had:

people in education worried about students overusing ChatGPT or similar software, and on the other side, people in the workplace, young graduates or people not trained enough, using these tools. The comparison, I think, holds really well. I remember when I started recruiting interns, and I was really surprised that there was a sort of generational gap in understanding how to do a Google search properly.

Erik J. Larson (49:43)

Yeah.

Alberto (49:44)

And I think it comes with experience, a lot of experience, and knowing certain things, knowing how to work, what to look for, and how to challenge results, that's how someone can search properly. And now it's becoming even more difficult, because the answers from these tools seem so convincingly intelligent that we should always hold back and say, wait, you know, that's not an intelligent thing.

It's a deep challenge. And we've heard a lot in the last two weeks, if you open any social media, X or LinkedIn, you keep hearing 2025, the year of agents; the CEO of Nvidia made his own marketing manifesto a couple of days ago on the trillion-dollar opportunities waiting for us.

Erik J. Larson (50:30)

Yeah.

Alberto (50:36)

I always take this noise with a pinch of salt, but what are the implications, in terms of what we just discussed, for agents? Are agents gonna solve any of these problems, or actually amplify them?

Erik J. Larson (50:54)

Yeah, yeah. Well, you notice in 2022, when ChatGPT came out, I think it was the fastest-growing internet application ever, right? It eclipsed even Instagram, which had this

Alberto (51:11)

Yeah.

Erik J. Larson (51:13)

exponential growth, you know; from the release of Instagram, they had something like 25,000 users the first day, and then it hit a million less than a month later. But ChatGPT hit a hundred million, right? Like an order of magnitude more, in a month or two. And so, you know, there's no question that this is a home run hit for AI.

I'm actually confused why there are so many critics, because this is the first application of AI where you can point to it and say, everybody wants in on this, right? Everybody wants to start using this. But, you know, on the other hand, if you look from 2022 on, you see the growth rate has basically completely plateaued now. And then all the promises that we have,

you know, that they're going to think and so on. There has been, and I haven't looked into this fully yet, the model that they called Strawberry, which was the Strawberry project at OpenAI, the let's-get-it-to-reason-more effort. And I think they had some success, but basically the top tools are still GPT-4, and then I think after that it's

I want to say maybe Anthropic or something. But basically that technology and its performance have silently stopped growing and getting better. Yet we're still talking about it like it's coming, like AGI is coming. But actually, you know, what we have is basically what we're going to have until we have another innovation; this is pretty much what we're going to have. There's not enough, we hit what they call a data wall. I mean, there's just not

Alberto (52:36)

Yeah.

Erik J. Larson (53:04)

enough new non-synthetic, non-computer-generated data on the internet. And the training is already in the hundreds of millions of dollars. It takes about three to nine months to train the trillion-parameter models, right? And so, basically, who's going to pay for that if it's not better? You spend, you know, like 200 million dollars and nine months, you're running what amounts to nuclear reactors

Alberto (53:17)

Yeah.

Erik J. Larson (53:34)

to keep this thing going, and then you get the result months later and it's marginally better. The business model just fell off a cliff. This is what we have. And so, you know, I

think people need to, like, we should write about this, I should write about it, you should write about it. And Gary Marcus is always talking about this, right? To the point that it's almost like, dude, you know,

Alberto (53:54)

Yeah.

Erik J. Larson (54:00)

stop being so negative. But then it's, you know, OpenAI is evil and, you know, Sam Altman is going to go to jail, and it's like, dude, no, it's not that bad. But I like Gary. I communicate with him every now and then on email, and

Alberto (54:13)

Of course.

Erik J. Larson (54:14)

he's a good guy. But my point is just, I think we need to start thinking now already about what's coming next. Right.

Yeah, because I think we are really at a plateau with what we're going to be able to do with these large models. It was a big bang. It was great, the Fourth of July, the fireworks. But what's next? Yeah.

Alberto (54:36)

Yeah.

Yeah, yeah. Which we can use as our closing question. If you have to answer that, what's next, without speculating too much, or at least, what do you wish is next?

Erik J. Larson (54:52)

Yeah, I mean, so

what I would love to see is for us to finally get out of data-driven AI, where the input to these systems is basically data. And if you look at our field, natural language processing, that's going to be a sequence of words or tokens, right? And, you know, if you notice, the generalization performance of these models like GPT-4

actually captures grammar,

you know, right? I'll use English, but it applies to other natural languages. Basically, the tokens that it reproduces, without it actually knowing the rules of grammar, don't break them. So you get this kind of implicit inference that it captures grammar. And it captures other things; I think it knows what a subject is and what an object is and what a verb is and so on. But what it doesn't get up to is thinking in terms of thoughts like we do, right?

We think in terms of thoughts. The simplest thought would be something like John ran. You have a noun and you have a verb, so you have an object and you have an action, right? And in this case you have an intransitive verb, you know, John ran. So with two tokens you can make a thought. But that's the thought of a human being running; it's not just John, right? John is the thought of some singular thing, and you don't know what it's doing, right?

So if we could get it to see, instead of looking at tokens and being able to generalize up through grammar and syntax and some semantics, if we could get it to see thoughts, we would actually have, well, I don't think we're ever gonna get AGI, I think we're on a different train, but I think we're gonna create some really powerful computational

cognition that's not what we do, right? So I don't buy the whole thing about it's gonna get smarter and then get rid of us. It's never really gonna mesh with us, but we can make these systems more powerful in a way that's useful to us.
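
(As a rough illustration of the token-versus-thought distinction Erik draws here, this sketch shows the same two words once as a flat token sequence and once as a tiny predicate-argument structure. The structure and names are invented for illustration; they are not a proposal from Erik's work.)

```python
from dataclasses import dataclass

# How a language model sees "John ran": just a sequence of tokens.
token_view = ["John", "ran"]

# A toy "thought" for the same sentence: who did what, rather than a flat string.
@dataclass
class Thought:
    agent: str   # the singular thing the thought is about
    action: str  # what that thing is doing (an intransitive verb here)

thought_view = Thought(agent="John", action="ran")

print(token_view)    # ['John', 'ran'], the surface form only
print(thought_view)  # Thought(agent='John', action='ran'), a structured thought
```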

Right? But I think we need to figure out how we can finally get from data to thoughts. And I have some ideas, but, you know, you and I don't have $100 million and three months to play with, right? So it's unclear how we're gonna get out of the big tech problem, because really the only players that can play this game right now are the big companies.

Alberto (57:10)

Yeah.

Erik J. Larson (57:29)

We have, you know, Meta released Llama, which is open source. And so, you know, you can manipulate the parameters, but you can't retrain the model; it's owned by Meta. And so it's unclear how we're going to get innovation when there are only a few huge companies that basically control, you know, how the fundamental progress goes.

Alberto (57:53)

Yeah.

Erik J. Larson (57:54)

So I don't know where we're at. I think, as writers and thinkers, we need to, you know, we need to let more and more people know. And, you know, eventually something's going to change, right? But yeah. So, where are we headed next? I think, and I don't mean to sound negative, but I don't think we're going to see super exciting

Alberto (58:08)

Yeah. Yeah, thank you.

Erik J. Larson (58:23)

innovation in computer science, given the trajectory of where we are now. I think we're already working out the consequences of the innovation that came before. And so we're gonna get lawsuits, and we're gonna get regulation, and we're gonna get better prompts, and we're gonna get this and that. But really, the big step forward now,

Alberto (58:36)

Yeah.

Erik J. Larson (58:49)

when it's gonna come is a question mark. We're, in a sense,

Alberto (58:53)

Yeah, very interesting

Erik J. Larson (58:53)

we're trapped in our own success. We can't see anything but the models now and the models aren't gonna get really better. Like that's my view. I could be wrong, but I don't think they're gonna get a whole lot better at this point. So yeah.

Alberto (59:08)

A view I find convincing. And yes, I think our task is to let people know where we are and draw the consequences of

Erik J. Larson (59:15)

Yeah, that's right.

Yeah.

Alberto (59:22)

how and when we can move forward. Yes. Yeah, exactly. Yeah. Yeah, I'm very familiar with the example because my wife is a

Erik J. Larson (59:25)

We're the people who shine the light. You've heard about the drunk that's looking for his car keys and can't find them, so he looks under the streetlight. We're the people who are saying, like, maybe we should look in the yard somewhere else. Yeah, I don't think it's gonna be under the streetlight just because it's light there. Yeah, so we...

Alberto (59:52)

professor in physics, and she works on the discovery of dark matter, and the field has very similar, let's say, patterns to machine learning. The physicists have been looking far too much in the same sort of frame and place, and it's difficult to make a dramatic change toward other ideas or other avenues.

Erik J. Larson (1:00:01)

yeah.

Yeah.

Yeah, that whole thing with dark matter is like a Stephen King novel.

It's so creepy, most of that stuff, but that's a whole other discussion, of course. What an interesting field of study, though. Unfortunately, we're just dealing with machines, not the physical world, just things that we design. Yeah.

Alberto (1:00:26)

Yeah, yeah, yeah. Yeah, yeah, totally. Yeah, absolutely. Absolutely. Awesome.

Erik, it was really lovely chatting with you. I think everyone is going to benefit from

this conversation. Just as a quick end note, where can people engage with you, listen to what you say, and read what you write? I pointed out the book and Colligo, but I'll let you speak.

Erik J. Larson (1:01:02)

yeah, just.

Yeah,

yeah. You mentioned it, and I'll send you a blurb, but yeah, I'm doing my writing online now on Substack. You can find me at Colligo, which is C-O-L-L-I-G-O. It's a Latin word that means gather together, so: gather people together to talk about the issues. Colligo, C-O-L-L-I-G-O, Erik J. Larson. That's where everything's happening. And then the book is, you know, on Amazon, obviously. So.

Alberto (1:01:33)

Awesome, fantastic. Thank you very much. And yeah, looking forward. I'm gonna stop recording, but you can stay on the line.

Erik J. Larson (1:01:35)

Yes.

Yeah, you bet. It was a fun conversation. I'll have to get down to Australia sometime and we'll take a vacation. Yeah. Yeah.

Sure.