AI catalysts and bubble signals

Evan Zehnal, Fidelity Assistant Portfolio Manager, discusses the evolving capabilities of AI, its improvements over time, and some of the current challenges facing the sector.

Transcript

So one major unlock this year is the emergence of reinforcement learning and reasoning in AI. Effectively, these techniques solve for the reliability issue and hallucinations, which have been the biggest drawback of AI. So if you think about it as a consumer, just in terms of trust, if your model is hallucinating, it's a lot tougher to trust that model. And that's even amplified for business users, where they can't, as a business, give wrong answers and have hallucinations in their business workflow. And so really what we've seen this year is that reasoning and RL have gotten a lot better.

And the anecdote I like to point to, to explain what reasoning and RL really look like, is to imagine two scenarios. One is you have a math test and all you've done is skim the math textbook. That's going to be a hard test for you. The other example, which is closer to RL and to reasoning models, is to imagine you've skimmed the math textbook and then done all of the practise questions at the end of every chapter a couple of times, and then you take the test. Your answers are probably going to be a lot better and more reliable, you'll feel better about the output, and whoever is grading that test is probably going to feel better about that output as well. That's the unlock, and that's the opportunity we have with RL and reasoning in models. And it really gets at the heart of the hallucination and reliability issues that have held models back in terms of adoption. And you've probably even seen this as a consumer.

 

The other thing I'd point to, just in terms of optimism around use cases, is that actually the most revenue-generative use case of AI today is ranking algorithms. So things like your Google search and your Facebook feed, those are all using AI. Now, it's not next-token-prediction AI, but it is generative AI. And so that is a really revenue-generative use case, and so there is real spend on the back of AI.

The other thing that I think is important is to recognise whether we're in a bubble or not. I think the catalyst to a bubble would be infrastructure over-investment and a lack of monetizable use cases. Really, if, as the model improves and gets better and more reliable, there aren't any new revenue-generative use cases, that's how you get a bubble; that's what's going to be a problem.

Especially given that, unlike fibre in the ground in the telco cycle, GPUs have a five-year lifespan. Fibre is good for decades, but GPU infrastructure needs to be refreshed after five years. So if we don't have that revenue-generative use case emerge within five years, you've effectively got to redo much of the investment. So I think that's how you could see a bubble: a lot of investment, while revenue-generative use cases actually continue to take longer than we expect.
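The refresh economics can be put in rough numbers. The capex figure and the fibre lifespan below are illustrative assumptions; only the five-year GPU lifespan comes from the discussion:

```python
# Rough annualized-capex comparison: short-lived GPUs vs long-lived fibre.
# All dollar figures are illustrative assumptions for the arithmetic only.

def annualized_capex(capex_billions: float, lifespan_years: float) -> float:
    """Straight-line annual replacement cost for an asset that must be
    fully refreshed at the end of its useful life."""
    return capex_billions / lifespan_years

gpu_build = annualized_capex(100, 5)     # $100B of GPUs, 5-year lifespan
fibre_build = annualized_capex(100, 25)  # $100B of fibre, decades of life

print(f"GPU build:   ${gpu_build:.0f}B/year to sustain")
print(f"Fibre build: ${fibre_build:.0f}B/year to sustain")
```

On these assumptions the same dollar of GPU capex has to be re-earned about five times faster than a dollar of fibre capex, which is why the use-case timeline matters so much more here than it did in the telco cycle.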

 

The other, kind of dark-horse, candidate that I think is interesting to talk about is algorithmic improvement. So today, AI is very inefficient. If you think about it, take the human brain, and let's use the use case of driving a car: the human brain consumes about 20 watts. And I looked it up: in Ontario, for driver's ed, you need 10 hours behind the wheel to get your licence.

If you just compare that with Waymo: Waymo has tens of billions of simulated road hours to get to where it is today, and it's still not broadly deployed, and it uses many, many kilowatts per car to get there. Or, if you just compare it on power, a gigawatt data centre is like 50 million brains.
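The 50-million-brains figure is just the power arithmetic, easy to check:

```python
# Sanity check on the power comparison from the transcript:
# a 1-gigawatt data centre versus human brains at ~20 watts each.

BRAIN_WATTS = 20          # rough power draw of a human brain
DATA_CENTRE_WATTS = 1e9   # 1 gigawatt

brains_equivalent = DATA_CENTRE_WATTS / BRAIN_WATTS
print(f"1 GW data centre = {brains_equivalent:,.0f} brain-equivalents")  # 50,000,000
```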

 

And so if we do have a real algorithmic improvement and unlock, it actually could drive demand for gigawatt data centres, and a lot of that operating cost, lower, because the algorithms just get better. So that's kind of, you know, how I think about AI implementation and use cases and the potential for a bubble.
