Takeaway 1: AI is concentrating power but we are not doing enough about it.

Our future depends on innovation. Right now, if large language models (LLMs) turn out to be half of what some of their boosters are claiming, then those who control them are going to have tremendous power over information and tremendous power over all sorts of economic inputs. Yet most politicians and business leaders are asleep at the wheel.

Takeaway 2: AI will replace jobs but human creativity will remain central

In the age of AI, there will be transitions in terms of what sorts of skills are valued in the labor market. But Daron remains completely convinced on the basis of his broader understanding of history, as well as the technology of artificial intelligence, that human creativity and human flexibility ought to remain central.

Takeaway 3: We need to ensure technology is made more pro-human

The most important human element is actually to take ownership and take the democratic step right now, via elections and other means such as media and civil society organizations. There needs to be a counterweight to push back against companies to encourage them to be more pro-human in their directions.

Full Transcript

Daron Acemoglu and Anirudh Suri on How Can Nations Succeed in the Age of Artificial Intelligence?

Anirudh: [00:00:00] There are few people who have thought as deeply as you have about the reasons why nations fail or regions prosper while others don’t.

Daron: Our future depends on innovation. Right now, if large language models turn out to be half of what some of their boosters are claiming to be, then those who control them are going to have tremendous power over information and tremendous power over all sorts of economic inputs. Most politicians and business leaders are asleep at the wheel.

What’s going on with artificial intelligence and how this will shape the international division of labor, the balance of power in the world and control over information and data. Those are not questions that any of these politicians are asking.

I think India has tremendous promise. You know, India, Turkey, Indonesia, Mexico, Brazil, they should come together and they need to develop a voice in global affairs on the issue of technology.[00:01:00] 

Anirudh: So on today’s episode of the Great Tech Game podcast, I am absolutely honored to have, I can safely say, one of my favorite economists in the world and the author of, uh, at least two of my favorite books, uh, Professor Daron Acemoglu, who’s an economist at, uh, the Massachusetts Institute of Technology, MIT, and has most recently authored a book called Power and Progress, which we’ll spend a lot of time on, but also another book, which I’ve read and reread, uh, Why Nations Fail.

So it’s a delight and an honor to have you with us, Professor Acemoglu. 

Daron: It’s my pleasure. Thanks for having me on the podcast. 

Anirudh: I’ve been waiting to have this conversation. I think there are few people who have thought as deeply as you have about the reasons why, uh, nations fail or regions prosper while others don’t.

And that’s the question I want to get started with before we get to [00:02:00] your more recent book, Power and Progress, where I think you make a different kind of argument as well. But for a second, if we can start off with your views on why or what causes certain nations to fail while others do not, and whether you’ve evolved your views since your earlier book, Why Nations Fail, came out.

Daron: Of course, my views have evolved somewhat, but I’m still quite wedded to the main thesis of that book, that every nation, regardless of geography, religion, and history, has the capacity to succeed economically and socially. And the reason why many countries have failed to grasp these opportunities is institutions, meaning the way that they have chosen to organize their society, their politics, their economy.

In particular, we have emphasized how many institutions around the world are shaped by elites, [00:03:00] powerful groups that have disproportionate political power, and they have been designed in a way to look after their interests while impoverishing many other segments of society. The form of these institutional failures differs from society to society.

But the commonality is the absence of secure property rights, good laws that provide equal access, equal opportunity, a level playing field, and state institutions that have the capacity to help the disadvantaged, protect the less powerful, and provide the types of laws and regulations that will contain the power of large companies and big players in the economy. Because, you know, quite frankly, it’s clear even more today than ever in the last 30, [00:04:00] 40 years that we live in a world in which, you know, some companies are going to get more powerful than others.

They’re going to get bigger than others. They’re going to have bigger market size. They’re going to be able to exploit network economies. They’re going to get their support from government institutions, and we have to have the ability to contain their power. And that’s the way I see the more recent book, Power and Progress, co-written with Simon Johnson, to be intimately and closely related to the ideas of Why Nations Fail, because we’re still trying to grapple with the same issues of how to contain the power of powerful players, large firms, and visionary leaders.

But now we are also talking about how this affects the direction of technology. 

Anirudh: Absolutely. And, you know, I was, uh, recently speaking with, I think, someone you know well as well, Professor Joel Mokyr, and, uh, in [00:05:00] our, again, wide-ranging conversation on this question of, you know, why certain nations or societies prosper while others don’t, he ultimately distilled it down to one piece.

And I want to get your views on that, because he also specifically asked me to ask you this. So his, you know, core distillation of all his work, decades of work on this question, came down to one piece. He said: any society that encourages rebellion, that encourages people to question authority, that incentivizes them almost to question the traditional ways of having done things, that to him was the key determinant for why certain nations,

uh, or regions, succeed and others don’t, which is not necessarily in conflict with what you’re saying. Oh, not at all. I completely agree with that. To think of institutions as, uh, you know, behemoths that become behemoths themselves, and then by virtue of the bureaucracy of those institutions become almost antithetical to the point that Professor Mokyr was making, is the key determinant from his perspective.

Daron: No, I think Joel, you know, who is one of the social scientists I respect most, uh, is absolutely right. That sort of questioning, that sort of risk taking, both in the economic and the social sphere, is absolutely important. Rebelliousness is something we emphasize both in Power and Progress and also in the second,

uh, book that I wrote. Actually, it’s the third book that I wrote with James Robinson, The Narrow Corridor. I think we would put it slightly differently than Joel and say that what you need is a balance between the capacity and power of state institutions and the ability of society to question and control these institutions.

So the rebelliousness, the risk taking, comes in the second leg, but that needs to be balanced with the power of the state as well. [00:07:00] Because I think predictability of laws is extremely important too. And if you have the one leg, meaning the sort of rebelliousness and the, uh, sort of questioning of authority, questioning of well-established truths, which is critical, but you don’t have the order-providing institutions, that is not going to be very conducive to economic activity either.

So you need a balance between these two things. 

Anirudh: Yeah, yeah, no, that makes sense. Uh, you know, in my own book that came out last year, called The Great Tech Game, one of the pieces that I argued, and I think I’ve seen that in your book Power and Progress as well, is that to understand what we are going through today we have to rely on and look back into history,

to see what kind of insights and lessons we can draw from earlier occasions where either technology or its shapers have really transformed society. Right. And one of the things that I concluded from, [00:08:00] uh, you know, looking back into history was, you know, the frame that I used was that I divided the past, at least the last 10,000 to 15,000 years of human history, into about five great games.

I call the current game that we’re living through, uh, the great tech game, but I called the earlier great games, like agriculture, trade, global trade, both land-based and oceanic, uh, colonization, industrialization, capitalism, and then now technology, these great games of our history where certain shapers, whether it was agriculture, industrialization, and so on and so forth, transformed, uh, the geopolitics of the time, the geoeconomics of the time, and shaped who won, who prospered, who didn’t.

And, uh, one of the conclusions I drew from there was that while we tend to look into history to see capabilities that will apply or will determine a nation’s ability to win, my sense [00:09:00] was that those capabilities change depending on the era that you’re living in. So we can never say that the capabilities required in, let’s say, the industrial era, uh, or in the beginning of the industrial era, would be similar to the ones you might require today, where

technology seems to be the great engine of economic growth, right? Um, and so I want to get your views on that. Besides the choices question, which we’ll get to and, uh, which you emphasize so much in Power and Progress, what kind of capabilities do you feel have changed, in terms of the capabilities required to win or to prosper today versus in history?

Daron: Well, there’s much I like in your thesis because, uh, one of the things we stress in Power and Progress is that technological transitions, you know, moments in which new technologies are going to have applications in a variety of sectors, in a variety of different ways, those transitional moments have a lot of choice, a lot of contingency, and require a lot of adaptation.

There’s much I like in your thesis because one of the things we stress in Power and Progress is that technological transitions, moments in which new technologies are going to have applications in a variety of sectors, in a variety of different ways, those transitional moments have a lot of choice, contingency, and require a lot of adaptation.

DARON ACEMOGLU

This was so for the transition to agriculture from unsettled life, from the world of hunter-gatherers and mobile societies. It was the same when, from the early agricultural societies, uh, there was a bigger transition to, you know, big empires and states.

It was the same when, uh, you know, the post-Roman world got formed. It was definitely so when the commercial revolution happened and when the industrial revolution happened, when the second industrial revolution happened. All of these were great moments of adaptation, and the choices that were made during those periods were critically important.

And, uh, also the need to think of the general evolution of societies in a dynamic manner. Uh, that was one of the main themes of The Narrow Corridor, where we, uh, [00:11:00] defined a balance between state and society, which I mentioned briefly a second ago, as a, uh, never-ending dynamic rat race type of thing.

That’s why we coined the same term as in Alice in Wonderland, the Red Queen effect, meaning that both states and society, groups within society that are sometimes disorganized, sometimes more organized, have to continuously change in order to keep up with the changing circumstances and the power balance between them.

So very much the same thing. And, uh, I think that’s so obvious today when we are grappling with, for example, how we have to change our laws, regulations, and contain different powers that have emerged in the process, in the age of generative AI.

Anirudh: Absolutely. And are the other capabilities that will be required going to be different as well?

So, you know, obviously there’s the policy piece, right, and the policy especially around choice of technologies and all of that. But are the capabilities changing, or are we [00:12:00] ultimately still relying on human innovation and, as you mentioned, the power of ideas?

Daron: Yeah, I think, I think those are common. I don’t doubt that institutions, state capacity, human creativity, human adaptability are going to be key in the 21st century as they were in the 20th or the 19th century. But if you look at a more micro level, yes, obviously the types of skills that were so important in the 20th century are going to become less important, because generative AI can become a complement in terms of information acquisition, processing, and retrieval for agents. It can play some of the same roles that informed agents who emerged as hubs in social networks used to play. Uh, the type of physical, uh, manual work coordinated with mental activity has become less important already in the age of, you know, industrial robots.

And that’s part of the reason why skilled blue-collar work has become less [00:13:00] valuable, or semi-skilled blue-collar work has become less valuable, over the last, you know, 30, 35 years. So there will be transitions in terms of what sorts of skills are valued in the labor market. But I remain completely convinced, on the basis of my broader understanding of history as well as the technology of artificial intelligence, that human creativity and human flexibility ought to remain central.

That doesn’t mean that it will, because there is a hype right now about how machines are better than humans, and they can inadvisably and incorrectly sideline humans, to the detriment of society and business organizations in general.

Anirudh: Yeah, and I think your book is very clear. Um, you know, one of the frames I try and use in conversations with people is to ask them to think about where on the spectrum of techno-optimism on one side and techno-pessimism on the other they lie.

Right? And I think in your book, [00:14:00] clearly one of the pieces that you’re arguing is that extreme forms of techno-optimism, which are often found amongst, one can argue, places like Silicon Valley and Bangalore and entrepreneurs and tech venture capitalists, the circle I often spend a lot of time in...

I think your argument is to temper that. 

Daron: Absolutely. It’s a counterweight to that, but I wouldn’t have bothered to do that if it were to be found only in Silicon Valley and Bangalore. I think it is to be found in academia. It is to be found in journalism today. It is to be found in, uh, Washington, D.C. Look, I think in the United States you can say anything, but the hardest thing to get heard is when you start questioning tech. Uh, you know, all of the major newspapers are so enamored with the tech sector that even when they criticize, you know, this aspect of Facebook or that aspect of Elon Musk, they still try to elevate them to the extent possible.

So the U.S. has become really a [00:15:00] techno-obsessed country in some sense. And don’t get me wrong, I think, like you, I am very convinced that our future depends on innovation. I cannot imagine a happy, healthy, prosperous future for humanity that does not leverage our technological ingenuity, but it doesn’t mean worshipping at the altar of new technologies, regardless of what they do.

You know, I give the example of, uh, you know, the advances in, uh, chemical engineering, such as the Haber-Bosch process at the beginning of the 20th century. It is one of the most important breakthroughs that we’ve had in terms of, for example, feeding the world through artificial fertilizers. But it was also the method for building much more powerful explosives that killed hundreds of thousands of people.

You know, there isn’t a necessity that every technological ingenuity is going to be used for good. There are some technologies that need to be regulated, [00:16:00] and we also have a very, very important societal responsibility of choosing the direction of technological use and the direction of technological development.

Anirudh: Absolutely. I want to quickly quote a little, uh, piece of my conclusion to you.

Uh, and it’s almost the first question you tackle in your book: what is progress? The chapter that talks about what is progress, because I think that’s at the core of what you’re just saying as well. So I say, uh, literally, I think, on one of the last pages of my book, that we should not view technology within a narrow paradigm of progress.

The focus of technology must not be confined to the economic, political, and environmental models propagated by the trade, industrialist, or capitalist eras. Rather, we must view technology in the context of societal progress and a healthier, sustainable future for humans. Only then will technological progress start to equate [00:17:00] with societal and human progress, right?

We should not view technology within a narrow paradigm of progress. The focus of technology must not be confined to the economic, political, and environmental models propagated by the trade, industrialist, or capitalist eras. Rather, we must view technology in the context of societal progress and a healthier, sustainable future for humans. Only then will technological progress start to equate with societal and human progress.

ANIRUDH SURI

And I think, as I was reading your book, it seems to me that in part you’re starting with the same assumption. What I conclude is, I think, your starting assumption: that technological progress cannot and will not equate to societal or human progress.

Daron: Absolutely. Absolutely. And that’s part of the reason why this is so difficult for both economists and the general policy, uh, experts in the United States, because it involves holding two potentially conflicting ideas in your head at the same time.

On the one hand, economists, economic historians, people in the policy world recognize correctly that we are so prosperous, we are so fortunate today. We are so much more prosperous, so much healthier, so much more comfortable today because of technological advances over the last 300 years. Hell, even 50 years ago, things were much harder for most people in the [00:18:00] West in terms of health access, in terms of comfort.

And technologies have also facilitated that. But that has to be held at the same time as the fact that there was nothing automatic about any of these advances, that technological ingenuities could have been used for something much more nefarious, and during other historical periods they were, and there wasn’t an ultimate, inexorable progress that will always take us forward

towards better outcomes. After these technologies are introduced, many of their harms may be irreversible. So it is these two conflicting ideas that then lead to the key conclusion of the book, which is that we have to find just the right institutions and just the right direction of technology to take advantage of this great moment, another potential transition, or another game as you’ve called it, which is the generative AI age.

But we are likely to stumble into something quite bad [00:19:00] if we don’t create the new institutions to rein in AI and those who have a monopoly over data, huge power over people, huge ability to automate work and to surveil, uh, to install more surveillance. I think there are tremendous imbalances that are emerging in the midst of our great opportunity.

Anirudh: Absolutely. And I want to ask you about that. I want to ask you about the divergence, not just within nations, but also among nations. But before I get into that, I want to ask you: is this kind of techno-optimism that you’re seeing in the US, are you seeing that across the world?

Daron: No, no, no.

Anirudh: Or the other countries are unfortunately on the other side of the spectrum?

Daron: Sometimes I would say they are on the other side. But many, many times they are also followers. I think, and I can see that from the reception that the book has gotten in the UK and the US, uh, I think people are much more reasonable in terms of their understanding of technology, the opportunities and the dangers, in the UK.

But on the other hand, you see pressures from [00:20:00] politicians and, uh, some businesses to be more like the US: let’s not worry about any of the guardrails, and let’s now use this moment where we have freed ourselves from the European Union to jump all in into AI without questioning it. I see that in the UK as well.

So in some sense, where the US leads, and also where China leads, there is a lot of, uh, tendency for other countries to follow sometimes.

Anirudh: And that applies mostly to adoption of technology though, right? Uh, not the creation.

Daron: But the creation also. You know, look, I think, don’t get me wrong, Silicon Valley has been an enormous wealth generator for the United States, and we should celebrate that while at the same time criticizing Silicon Valley.

And Europe would love to have its own Silicon Valley, but they are trying to become creators of technology as well. And if you look at some areas of artificial intelligence and tech, there are important hubs within Europe as well. And India, I think India has tremendous talent and, uh, [00:21:00] promise in some of these areas, but it’s also in the crosshairs of tremendous danger.

If you look at some very good jobs in India, they are in the context of offshore services, lots of tasks that are offshored to India, where there are skilled workers with some IT knowledge and the ability to use the English language, written and spoken, in an excellent way. But all of those jobs are in the crosshairs of automation with AI.

Anirudh: No, absolutely, absolutely. And that’s why I think in many ways the conversation that, even with my book, I tried to spark was: how do you become creators of technology and not just, uh, you know, a talent nation? Um, I think India’s future cannot be just as a talent nation, but really has to be as a tech nation, where it’s developing technology, where it’s creating technology much like, you know, Silicon Valley does, and the idea [00:22:00] of just being an offshore service provider will not suffice.

While it’s been great, and it has really driven India’s digital transformation journey over the last 20, 30 years, it hasn’t necessarily meant that we’ve built very profitable companies, or that we’ve built companies that have serious moats around their business models, like some of the big tech firms in the US do.

Daron: Well, that is part of the reason why I think, just like India, many other countries are in danger of, uh, being caught unawares by these generative AI developments, because they have not shown the capacity to create the highest value-added sectors yet. That’s a process, in the same sense that it took 30 years for China to move from, you know, cheap toys and the cheapest textiles. It’s the same for India. It’s going to be the same for, you know, Indonesia, Pakistan, Turkey. But in the process, generative AI may completely [00:23:00] change the international division of labor.

Anirudh: Yeah, which leads me to the question I want to ask you. So, you know, in my book also, I think, uh, and you were alluding to it just now, uh, I talk about, you know, you look back at the industrial era and one of the key things that any economist talks about is the great divergence, right?

How those sets of technologies at that time, and access to them, early access to them, early adoption of them, led to that great divergence chart that, you know, many of us who’ve studied economics in the international context have looked at. And, uh, what you’re suggesting also seems to be alluding to a possible second great divergence.

Daron: Absolutely, absolutely. But I think the biggest divergence will be within countries at this rate. Inequality is increasing, and it, uh, has in some sense, uh, exploded in the United States already, and we may be on the verge of yet another big explosion in inequality. Look, you know, [00:24:00] uh, right now, if large language models turn out to be half of what some of their boosters are claiming to be, then those who control them are going to have tremendous power over information and tremendous power over all sorts of economic inputs: data, creativity, information. And that sector may be highly oligopolistic.

It may be just Google and Microsoft with the support of OpenAI, or it may be three players or four players, but for such an important resource, that would be a tremendous concentration of wealth and power.

Anirudh: Absolutely. And the last time we saw such concentration of power, at least one occasion when we saw such concentration of power in history, is the pre-World War I era.

And you spent a bit of time talking about that in your book. Uh, it leads to this set of choices, that elites must not control, uh, all the sort of metrics of power and wealth, and it leads [00:25:00] to, you know, one can argue, the socialist movement, um, to lots and lots of socio-political churn, labor movements, and so on and so forth.

So now, if you were to look at that and try and draw some insights for today’s time, how does this get reimagined or restructured, right? So if today everything is leading towards the concentration of power, wealth, data, technology, capital, and we are seeing signs of it, and having seen how it was dealt with the last time around, at least a hundred years ago, what are some of the key lessons or insights or sort of action items that we can draw, uh, for today’s time?

Daron: Well, look, I mean, there are so many parallels. It’s a little bit cheap to draw them out so much, but...

Anirudh: You know, no, but please do.

Daron: Inequality was exploding in the, uh, Gilded Age that started at the end of the 19th century. It was an era of technological innovations, [00:26:00] and the people who were at the forefront of it were also becoming fabulously wealthy and fabulously powerful. Carnegie, Rockefeller. Even more strikingly, what made them so powerful was that they were controlling these new industries that had systemic importance for the economy as a whole. Oil and transport. Those were services and goods on which the rest of the economy depended.

Anirudh: It was moving. That’s right. That’s right. And one can argue even finance, right? Banks.

Daron: Absolutely. And that had become a tremendously concentrated and tremendously powerful industry as well. And that advantage was feeding into more and more inefficiencies in the economy. To make things worse, we had a fairly corrupt politics where Congresspeople and senators were bought and sold.

There were [00:27:00] limits on democracy. For example, direct elections of, uh, senators had not been introduced yet. There were many limits on the tools that the federal government had. You know, there was no Federal Reserve. There was no proper income tax. There was a limited set of instruments for regulation. Media was often in the hands of very rich tycoons as well.

So you can say this was a time when it would be very difficult for democracy to work and for the larger-than-life characters to be brought under control. But at the end, the progressive movement, the muckrakers, and others from society’s heart really managed to do that. And they did that by changing the narrative, laying it bare upon the people that,

you know, these very, very wealthy individuals, who had also become, you know, media celebrities, uh, for their day, were actually [00:28:00] abusing their power and, uh, influencing politics, suppressing their workers and wages, uh, you know, destroying rivals unfairly. That helped build countervailing powers and new institutions such as the labor movement.

And they also led to the adoption of specific policies, including in the financial domain, including antitrust regulations and so on. And in fact, in the book, Simon Johnson and I draw parallels to the current age, turning this into a sort of a recipe for what needs to be done. But on the other hand, some may say, well, today we are much more enlightened.

We are much more educated. We have much more information at our fingertips. Our democracy is more direct and healthy. But actually, I don’t think so. I think there is a sense in which things are harder today, because tech barons have a much greater lock on our [00:29:00] information and the expression of our views.

Correct. Right, exactly. So, today, it’s not tanks, it’s not jackboots that are the threat against democracy. It is control over information and persuasion power, monopoly over persuasion power. That’s what we argue in the book.

Anirudh: And are we looking at deeper welfare states as a result as well?

Daron: I don’t know.

I think we are at a point of flux, you know. Uh, when Simon and I started writing this book, and when I myself wrote some articles touching upon some of the themes, Donald Trump was the president and the welfare state, such as it was in the United States, was being dismantled in some respects. Even, uh, the modicum of, uh, healthcare that provided access to previously uninsured people was on the cutting table.

Today we’ve had a 180-degree turn, and industrial policy has become the [00:30:00] new buzzword. We have tremendously ambitious new programs, but it remains to be seen. I think we are at another point of flux, and there are many things that, for example, the Biden administration is doing that are bold and ambitious.

On the other hand, I think they’re doing it without a map. They don’t have a conceptual framework of what they’re trying to achieve. And the most important things to me, such as control over new technology, the direction of technology, and control of the tech world, I think those are not on their agenda.

Or they’re, they’re add-ons. 

Anirudh: And it seems to me that the recent efforts in the US, you know, things like the IRA, the CHIPS Act, et cetera, all of these initiatives seem to be more concerned about the divergence between nations, in this case the U.S. and China, than the divergence within the U.S. It goes back to the question I was asking about the second divergence.

Daron: Exactly, that’s what I mean by without a roadmap in some sense, meaning that [00:31:00] I think the Biden administration had some good instincts, that, you know, technological leadership had to be in place, uh, bolstered in the United States, and they saw this opening, and they may have felt it themselves, but they saw this opening that anti-China feelings are bipartisan in some sense in Congress, and they have put everything in a China-U.S. competition axis or framework. But I don’t think that’s the right framework for dealing with these issues. If China did not exist, we would still have a problem of Google and Microsoft and Amazon and Facebook having so much power over us.

Anirudh: Yeah. And if you’re sitting in, you know, a country other than the US or China today, right, as you are, as I am currently, at least, um, how do you view this?

Right? So if you are a country that’s a Turkey or an India or a Brazil or France, how do you view this frame that seems to have developed now for geoeconomics, geopolitical strategy, industrial [00:32:00] strategy, which is the U.S. versus China?

Daron: Well, I think, first of all, uh, there is some apprehension and, uh, a lack of complete understanding of the implications of this new rivalry.

And the same is true in the United States and China, probably. But also, and I have been to several countries, uh, over the last few years talking about these issues, what I see is most politicians and business leaders are asleep at the wheel. They deal with more immediate concerns, you know, what’s going to happen in the next election?

What will happen to the inflation rate, or this or that current crisis they have to deal with. But questions of what’s going on with artificial intelligence and how this will shape the international division of labor, the balance of power in the world, and control over information and data, those are not questions that any of these politicians are asking and talking about.

And in fact, I think, [00:33:00] you know, you can say, well, what can they do anyway? So if the Turkish president woke up tomorrow and said, okay, that’s my main priority, who will listen to Turkey? You know, they don’t even listen to, you know, the U.S. president. But, you know, these are common issues. You know, India, Turkey, Indonesia, Mexico, Brazil, they should come together. You know, there are critical questions that are relevant for the entire set of emerging economies, and they need to develop a voice in global affairs on the issue of technology.

Anirudh: No, absolutely. And you know, that was one of the main thrusts of my argument. I tried to write it from a global standpoint. And I said that unlike, you know, the great game between Britain and Russia that played out 150 years ago, uh, in Central Asia, which was a regional game, I argue that now this is a global game, and other countries cannot see this or allow this to be seen as a US-China [00:34:00] battle alone.

Because this has implications for every country, and as a result, each country needs to develop a game plan of its own, an economic, geopolitical, you know, possibly military, but definitely an economic and geopolitical game plan, to figure out where in this great tech game you’re gonna play and where your sort of niche or your defensible competitive position will lie.

Daron: Or even more importantly, how can you avoid being swept involuntarily by these gales?

Anirudh: Absolutely, correct.

Anirudh: Which brings me to the point that you made about institutions and global governance, right? So, uh, you’ve obviously argued that institutions at a national level are key, right, for determining progress and determining prosperity and growth.

At the global level, and even at the national level, what kind of changes do you anticipate institutions now need to make? Both the global governance institutions that we’ve seen, the UN, the Bretton Woods [00:35:00] order, and also within nations. Or are we comfortable with the kind of institutional structure that at least democracies tend to have, uh, built over the last several decades?

Daron: I mean, I think there is no doubt that democracy is at a crossroads today. Uh, I have not seen in my lifetime anything approaching the current level of polarization and lack of trust and common purpose in politics in the United States, in the UK, and in France. And there is also no doubt in my mind that that’s a national problem before being a global problem, and that nations have to fix it nationally first.

And then, once better-functioning democratic institutions are in place, that would be the right vehicle for international cooperation, which is also, of course, crucial for dealing with tech, dealing with capital, taxation, dealing with pandemics and international health crises. [00:36:00] So we do need global institutions.

We do need a global umbrella for creating guardrails. You know, after all, how can you regulate Google or, uh, Amazon as a national entity? They have many international functions and they exist across borders, and the same is even more true of any AI technology that’s going to emerge. Data is now no longer Indian data or American data.

It’s global data. You cannot do that by saying, okay, we’re going to be internationalists. You need to fix democracies at home first and then build on top of those foundations. 

Anirudh: Yeah, and you’ve mentioned, uh, and I want to pick up on this. You’ve mentioned, I think, about six, maybe seven or eight policies for redirecting technology, as you say, right?

Because you obviously have a very clear view that, uh, technology is not deterministic and that we must make the choices that are necessary to [00:37:00] redirect technology towards this idea of progress that we were talking about, the one that’s broader than technological progress. I think we’ve covered maybe a few of them.

Are there any others you want to talk about? I know we’ve talked about, yeah, the breaking-up-big-tech piece. We’ve talked a little bit about maybe even tax reforms, et cetera. Any others you want to highlight?

Daron: Well, let me say something briefly. You know, my concern is that we are on a path that is failing to take advantage of all of the opportunities that new technologies bring.

And that’s partly because we are using these technologies in an anti-human direction. We are prioritizing automation rather than making workers more productive. We are using them for surveillance and data collection in order to control and monitor and sometimes manipulate humans. We are using them in non-democratic rather than pro-democratic ways.

And part of the promise [00:38:00] of AI and digital technologies in general is that I think there is a very fruitful pro-human direction. You know, generative AI, for example, can be a tool for making workers more autonomous, more responsible, more creative, better informed, better decision makers. They can make democracy work better.

You know, people who thought that wikis and social media were going to be pro-democratic tools, of course, they were naive and they turned out to be wrong. But they weren’t completely delusional. That promise was there in 2000, and it is even more so today. But that’s not the direction that we have chosen.

So the first step, Simon and I argue, is to recognize that a pro-human, human-complementary direction is feasible and desirable, and to change the narrative about what technology should do and who controls technology. We all, collectively, should control technology. It’s not Elon Musk. It’s not Sam Altman. It’s not Mark Zuckerberg.

It’s not a battle of titans. It is society’s responsibility and society’s right to have a say on [00:39:00] these matters. That’s 50 percent of the battle. Once we do that, we have really achieved a much more clear-eyed framework for the discussion. That’s not enough by itself; then we need to build better institutions, countervailing powers.

I hope we don’t repeat what we saw four months ago, three months ago, when, you know, finally the US Senate woke up to the fact that there is something called generative AI. They decided to have a hearing and they invited the top executives of the top five tech giants. You know, what about workers? Hundreds of millions of workers in the United States and billions of workers around the world are going to be affected.

What about their voice and their views? No, no, they don’t count. Well, so we change that by having countervailing powers, better institutions for participatory decision making. And then we should think about specific policies, and we mentioned seven of them in the book. For example, leveling the playing field in terms of taxes.

The U.S. tax code, the federal tax code, the British tax code, for example, uh, and those of most other countries, they subsidize capital and they tax labor. That creates an artificial tendency for automating work. That’s true. Uh, we have created this ecosystem around digital, individualized ads that are highly manipulative, and it doesn’t allow alternative business models that are much more participatory to emerge.

So we propose digital ad taxes to deal with that. We also think that data is going to become more and more important, and right now the data of hundreds of millions of people, and especially of creative workers such as artists, journalists, and writers, is being expropriated by tech companies. So we need to have data rights, probably collective data rights of some sort.

So there are a number of other policy ideas related to this, but we don’t claim that any of this is a magic bullet, and on some of them, perhaps, we are wrong and others will come up with better ideas. But we want to change the conversation and we want to take the first step towards understanding the need [00:41:00] for building new institutions around these topics.

Anirudh: No, absolutely. And I think that’s, uh, I mean, as urgent a conversation that needs to be started publicly and globally, I think, as any other, because as you rightly say, the stakes are high; the future of our societies and the structure of our societies and our economies globally is in many ways at stake.

Let me ask you just the last question, and I know we’re running out of time, which is at the human level. So often in the conversations, you know, when people are now talking about AI, um, there’s obviously the high-level conversations that people have just out of curiosity about the technology, but then there’s also this human fear of whether I will become irrelevant or I will become redundant, et cetera, right?

And it’s related to jobs, but it’s also deeper in many ways, this battle that you were also talking about between machine and human. So my question is, as you think about it, right, the industrial revolution made us evolve as humans; our values evolved, the capabilities [00:42:00] that we needed to have evolved, or in many ways maybe devolved.

What’s your sense of, now, in this age of generative AI and technology that we are currently in the midst of, what kind of capabilities are going to be required for humans to adapt and evolve along with the technological evolution that we are seeing?

Daron: Well, obviously, at one level the answer is very simple.

We need to develop skills that are complementary to machines rather than substitutable by machines. But I want to add two things to that. First of all, the future of technology is uncertain, and it’s our choice. It’s not like these technologies have a determinate path that they’re going to follow. How they develop will determine what those complementary skills are.

And I think the real danger, and that’s where the human element comes in most importantly, the real danger is that companies are going to push these technologies in a direction that increases their control over society, which [00:43:00] also involves more information being monopolized in their hands and more automation and fewer new tasks.

And when that happens, that’s actually going to leave less room for human-complementary actions, because more and more is automated. The human is sidelined. That’s not the right future for us. It’s not good for business organizations. It’s not good for democracy, but it is a path that could be imposed upon us.

So therefore, I think the most important human element is actually to take ownership and take the democratic step right now, via elections and other means such as media and civil society organizations. I think there needs to be a counterweight to push back against companies to encourage them to be more pro-human in their directions.

The most important human element is actually to take ownership and take the democratic step right now via elections and other means such as media and civil society organizations. I think there needs to be a counterweight to push back against companies to encourage them to be more pro-human in their directions.

DARON ACEMOGLU

And I think that’s the most important lesson from my research in the book. 

Anirudh: Yeah, and with the clarity that pro-human here does not mean Luddite views of...

Daron: Absolutely not. Absolutely not. You know, Ludditism is like you try to stop the [00:44:00] river. No, I don’t think what we need to do is to stop the river, but dams are most useful when they redirect the river to a better pasture so that we can make better use of it.

We don’t want to create, you know, floods. We want to create the right type of energy to fuel our next developments.

Anirudh: That’s right. And if I may add, the values here will be at stake, right? So, I mean, when certain technologies get developed in, let’s say, the U.S., where labor might be, um, let’s say, short, right?

And where automation might have a certain value, compared to a country like India, where, in a way, you have the reverse, you have an excess of labor and you’re looking for, you know, more structured and gainful employment, the technological choices that countries like India have to make are going to inherently be very different than the choices...

Daron: Well, that’s why India and Turkey and Indonesia and Mexico need to be at the table as well. I think the choices are very, very important and they are our choices. This is a really critical point in our history for that reason.

Anirudh: Absolutely. Uh, I know we’ve run out of [00:45:00] time, but thank you so much. This was a true pleasure. Two quick questions, right? And, uh, we end every podcast with two quick questions. One is a book that you would recommend, other than yours, uh, and mine, uh, for people who are interested in these issues. And second is, uh, another guest for the podcast that you think would be great to have.

Daron: Well, uh, let me actually recommend a really interesting book that, uh, I read recently, after I finished this book, uh, called God, Human, Animal, Machine by Meghan O’Gieblyn, uh, about where ideas about autonomous, uh, machines come from and how they merge with religious ideas. It’s actually a very entertaining and thought-provoking book.

I also recommend Michael Sandel’s The Tyranny of Merit, about how we have sort of, uh, created the wrong sort of perceptions in our society about who is [00:46:00] successful and how that success is achieved. And, uh, I think Michael Sandel would be an excellent guest if you haven’t had him on the program. James Robinson would be another, uh, guest I would recommend, I think.

Uh, and I think on questions of technology and thinking about how the tech sector works, another, uh, person who, uh, would be great is Rana Foroohar, who is a journalist at the FT and has written two wonderful books on these topics.

Anirudh: Yeah, I read that book, and I love Michael Sandel’s Tyranny of Merit.

It made me really question so many assumptions that, uh, you know, one didn’t even realize one had in their mind. Yeah, that’s why, absolutely. Wonderful. So thanks for this conversation. It was a lot of fun. Thank you so much, Professor Acemoglu, and, uh, I look forward to seeing you in India at some point soon.

Daron: Oh, it’s not that far. Thank you very [00:47:00] much.