University of Oxford economist Max Kasy is pushing back hard on the popular notion that artificial intelligence is an unstoppable technological tidal wave and we're all on the beach waiting powerlessly for it to crash over us. That view of AI is wrong — or at least, conveniently incomplete, says Kasy, who runs the Machine Learning and Economics Group at Oxford and who’s just written a new book called The Means of Prediction: How AI Really Works (and Who Benefits). The title is a play on words referencing what Karl Marx and Friedrich Engels called the means of production—the industrial assets that gave their capitalist owners the power to define social classes according to their interests. Pull aside the curtain on AI development, Kasy says, and you see something similar: AI is just a tool—yes, a powerful one—but nonetheless one being shaped, as we speak, by specific choices made by specific people with specific interests. And understanding that, he says, is the first step toward doing something about it. He joins me today to talk about ways to give users and the public a say in AI’s development and deployment, why strategies like protecting individual data privacy are unlikely to help, and what things like transparency requirements, basic income, and job guarantees have to do with making AI work for everyone rather than just a handful of tech giants.
Maximilian Kasy is a professor of economics at the University of Oxford and coordinator of the Machine Learning and Economics Group in the Oxford Department of Economics. He is the author of the book The Means of Prediction: How AI Really Works (and Who Benefits). His research interests include machine learning theory, the social impact of algorithmic decision making, the political economy of AI, economic inequality, basic income and job guarantee programs, adaptive experimental design, and statistical decision theory. He holds a Ph.D. in economics and an M.A. in statistics from the University of California at Berkeley and magister degrees in mathematics and economics from the University of Vienna. In addition to his teaching at Oxford, he has also been an assistant professor at Harvard and UCLA and a visiting professor at MIT. Among numerous other professional affiliations, he is a research fellow at the Centre for Economic Policy Research.
(music)
Cold Open:
Dani Rodrik: Economics has changed and, in particular, has become much more empirical. It's become less wedded to theoretical preconceptions, such as that markets will always take care of problems or that governments cannot ever solve things…
Stefanie Stantcheva: What we do find in our survey work is that fairness matters a lot to people…
Suresh Naidu: …inclusive prosperity meant thinking about democracy and its relationship to economic policy and democratic politics as something that we were committed to…
Atif Mian: …those deeper questions of how we want to organize ourselves as a society, so we deliver not just economic growth, but we also deliver the values that we need to aspire to.
Intro (Ralph Ranalli): Hi. It’s Ralph Ranalli. Welcome back to the Economics for Inclusive Prosperity podcast. My guest today, University of Oxford economist Max Kasy, is pushing back hard on the popular notion that artificial intelligence is an unstoppable technological tidal wave heading our way as we stand on the beach waiting powerlessly for it to crash over us. That view of AI is wrong — or at least, conveniently incomplete, says Kasy, who runs the Machine Learning and Economics Group at Oxford and who’s just written a new book called The Means of Prediction. The title is a play on words referencing what Karl Marx and Friedrich Engels called the means of production—the industrial assets that gave their capitalist owners the power to define social classes according to their interests. Pull aside the curtain on AI development, Kasy says, and you see something similar: AI is just a tool—yes, a powerful one—but nonetheless one being shaped, as we speak, by specific choices made by specific people with specific interests. And understanding that, he says, is the first step toward doing something about it. He joins me today to talk about ways to give users and the public a say in AI’s development and deployment, why strategies like protecting individual data privacy are unlikely to help, and what things like transparency requirements, basic income, and job guarantees have to do with making AI work for everyone rather than just a handful of tech giants. Let’s get to it.
Ralph Ranalli: Hi Max. It's nice to see you. Great to have you on.
Max Kasy: Hi Ralph. Very nice meeting you.
Ralph Ranalli: Well, I wanted to start with your background, because it's really interesting how you've ended up where you are now in terms of your research interests. You've had a longstanding interest in inequality, but also in mathematics and statistics. And you run the Machine Learning and Economics Group at Oxford. Can you talk a little bit about how those interests converged and merged with researching AI?
Max Kasy: Sure. So by training, originally, I have a background, as you mentioned, in mathematics and statistics, but then did an econ PhD, and for a long time was focused on issues of empirical methods in economics, like econometrics in the context of labor markets, labor market inequality, public finance, settings like that.
And so over the last 15 years, I've slowly moved into working more on machine learning, the foundations of machine learning, which was a natural step coming from the statistics side. And I've separately been doing more empirical economics work on issues like job guarantees and basic income, which maybe we'll talk about later. And coming out of those interests, I've started to work more and more on questions like: What's the social impact of AI? How should we think normatively about what AI does? As a society, how should we think about regulating this kind of technology? And that's where the background for this book came from. And over the last few years, I've been organizing events in this space in Oxford, like our reading group on machine learning and economics and workshop series on various topics in this space.
Ralph Ranalli: So let's talk about your book for a second, and how the development of AI has been shaped by human decisions- which is a key point in your book- rather than by some organic march of technological progress. It seems to be a fairly popular opinion that AI is almost synonymous with inevitability. But you're saying it's not some kind of unstoppable force in and of itself; its development has been shaped by human decisions, made by what you call an ownership class, about development and deployment, and these Hollywood portrayals of a sentient AI takeover are kind of stories that serve certain interests by hiding the fact that there's this Wizard of Oz behind the veil. So are our robot overlords really just avatars for our economic overlords?
Max Kasy: Yeah, you get to a very important point here right away. With AI, as with technology more generally, the story we often hear is that it's just something happening to us. It's the march of progress, there's nothing that can be done about it, and at best we can try to adapt, or maybe retrain so we don't all lose our jobs.
And I think that's really wrong at a very fundamental level. And it's maybe even more obviously wrong for AI than for other technologies, because AI, by construction, is optimization. There's this mystification of what AI is, right? Thinking about sentience or autonomous agents and so on. But if you look under the hood at what's going on, it's really always optimization. There's some kind of numerical objective that the algorithm tries to make as large as possible. That's usually called a reward function- or, when it's minimized instead, a loss function. And so then the question is, who gets to choose this objective? That's the key question the book focuses on. Who gets to choose what AI optimizes?
And then, as you said, who gets to choose in practice? That's basically the people who control the relevant inputs to AI, and that's where the title of the book comes from: The Means of Prediction. Who controls the data? Who controls the compute, and so on, that's necessary to build modern AI?
Ralph Ranalli: I definitely wanna get into optimization, 'cause I think that's a really interesting issue, and it's an issue where economics and AI really intersect. But I wanted to continue on this thread about the power of whoever controls predictions. Can you talk a little bit about the impact of predictive power on individuals and society, and whether we realize exactly how much power is concentrated in the ability to control predictions that way?
Max Kasy: Yeah. So one of the key components, or key ingredients, of modern AI is prediction, which is a part of what's called machine learning- basically the automated analysis of data. And that can take many forms, right? That can be predicting, from the pixels in an image, what you see in that image- facial recognition, say. That can be predicting, from a preceding sentence, what the next word will be, as in language modeling. But it also involves prediction at the level of humans, at the level of individuals. And those are in some ways the most insidious, the most socially consequential applications of AI to date, I believe.
And they're a bit more mundane, maybe, than the language models that have garnered all the attention in the last couple of years. That involves all kinds of things, like the management of gig workers- setting wages and work times and who gets jobs and so on. The screening of job candidates: if you apply for any bigger corporate job these days, some algorithm is gonna screen your CV. It involves basically filtering our entire access to reality, in some sense, when we use search engines and social media, which determine what we see about our friends, the news, and the world at large. And maybe most consequentially, it involves applications of AI by state actors, like predictive policing and incarceration. Immigration and Customs Enforcement is using algorithms to target their raids.
Ralph Ranalli: Right. Palantir.
Max Kasy: Palantir and many other data providers at this point.
Ralph Ranalli: Yeah.
Max Kasy: And at the most extreme level, in warfare, right? Like in Gaza, it was algorithms that decided who gets killed and when. So those are all obviously very consequential if you're at the receiving end of these algorithms. And they're all based, on some level, on some form of prediction at the level of individuals. How much are you willing to pay? What ad are you gonna click on? What's gonna keep you on the website? Who is likely to not have documents, and so on.
Ralph Ranalli: And it's pretty remarkable that all this power is concentrated, to a vast degree I would think, in non-state actors. Can we talk a little bit about how we got here in terms of privacy safeguards? Because it seems like we've moved way past the old privacy safeguards that were put in place. And you say they were based on the individual, right?
Max Kasy: Yeah.
Ralph Ranalli: And they focused on individual data privacy, which just doesn't cut it anymore. Can you just explain how that works?
Max Kasy: Yeah. So I mean, all these applications I just mentioned, right? It's all about first gathering data about individuals and then making predictions for those individuals. And so a natural response you might have is to say, well, individuals need to have control over their data. And that makes sense on some level. You should have a right to know what data is collected about you, maybe a right to be forgotten, a right to use services without sharing your data. And that's the core idea of traditional privacy rights.
The extent to which they exist varies a lot across countries, right? They're much more developed in the EU, say, than in most of the U.S., though places like California have adapted some of the EU laws. But at the end of the day, all of these privacy regulations are based on the idea of individual property rights over the data that concern you. And as you say, that's ultimately not going to save us, for a fairly fundamental but maybe not obvious reason. And that reason is that AI is always about learning patterns across people. So it's never just about your individual data; it's about what data from people who are similar to you allow the algorithm to learn about you.
Right? And so anytime you share your data for learning purposes for an AI, it's learning patterns, and these patterns affect many other people. That's similar to climate change, right? When I emit carbon by flying or doing something else, it doesn't really impact my own chance of being a victim of climate disasters. But if all of us do it, it very much impacts our risk of climate disasters. And it's the same with AI. AI learns patterns across people. Me sharing my data doesn't much impact me, but it impacts everybody else. And if we collectively do it, it can have massive consequences, positive or negative, that just cannot be prevented or promoted by individual-level property rights.
Ralph Ranalli: Yeah. Well, I think people have a very negative view in general now about AI and machine learning applications in things like lending and credit checks and decisions on incarceration. But you said there were positives too. Can you talk a little bit about what the positives are?
Max Kasy: I mean, in all of these domains, right? You can use predictive algorithms and automated decision systems for all kinds of objectives or ends. And the key question is, at the end of the day, who chooses those ends? Just to give you one example: take health data. Imagine- and this is a very real scenario- that we get better and better at predicting individual disease risk for various diseases, let's say cancer. Again, you share your data about your health records, and that allows the algorithm to learn to predict other people's cancer risks, say. That could be used by private health insurance companies to exclude you from insurance, and at the end of the day lead to an unraveling of any kind of social sharing of health risks, where it's each unto their own.
Or the very same predictions could be used for preventive treatments, making sure people get healthcare before things escalate, and make everybody's health better, right? The same is true in many other domains, where algorithms could help you find important and interesting information online, or they could be used to promote hate speech, amplify teenage mental health issues, you name it. Again, there's nothing predetermined in the technology about what it's being used for. Those choices are being made by someone.
Ralph Ranalli: Yeah. One thing I've always wondered: I guess there are different kinds of predictions and different qualities of predictions, right? But there are also, I think the term is reflexive predictions, where the prediction actually influences the thing being predicted and changes it. Does that happen with AI? And how does that work?
Max Kasy: Absolutely. That's something the field of machine learning has been discussing under the name of performative prediction in recent years. It's this idea that you make predictions about people and then they react, because the predictions are consequential for them. Credit scoring would be a classic example, where people try to game the credit scoring algorithm to get the mortgage they need to buy a house or something like that, which then maybe invalidates the original patterns. It's a bit of a cat-and-mouse game between the predicting side and the people the predictions are being made about.
Ralph Ranalli: I wanted to talk about the public understanding of this. How important is it for the public to develop a good understanding of this? I think this falls into a category of what I've traditionally called priesthoods, where you have a certain class of people who say: I am the one who understands this. You need to take my word for what it means; it's too complex for you to understand, so why should you really bother learning about it to the point where you want to participate in the process of decision making about it? Are these issues, in fact, too complex for the general public? Or is there a way to inform the general public about them in a way that empowers them enough to say: "I want a say over this; I want to either help create a new avenue for public input, or I want to go to an existing avenue like my congressman or my state representative and have some input on this."
Max Kasy: Yeah. So one of the key arguments my book tries to make is that, at the end of the day, it's not that complicated. And it's very much in the interest of certain parties to make it look more complicated than it is. But the basic ideas of AI, for one, have actually been around for a long time, right? It's not that we invented statistics and machine learning yesterday. Many of these ideas are 200 years old, and in their more specific modern forms, maybe 70 years old. It's just that they've come to a level where they became useful in different application domains.
Ralph Ranalli: That's interesting to me. What's an example of a 200 year old aspect of this?
Max Kasy: I mean, the original applications people were interested in were things like predicting the trajectories of planets and stars and so on. That's where some of the original statistics was developed. Later it became more about state-level prediction, as government statistics developed. But the core technology of predicting some outcomes based on some other variables- those ideas have been around for that long.
And that same technology can be used in all kinds of domains, and it becomes useful once the scale of the inputs becomes large enough. And that's been happening over the last 20 years- first, say, with things like image recognition, more recently with things like language modeling and so on. But the basic underlying ideas are, again, the same. It's just the massive scale of the inputs and the compute needed for these domains that's changed.
Again, I think these basic ideas are actually broadly accessible. And what are these basic ideas? At the most fundamental level, it's this idea that AI is optimization. So there is some objective, some goal, that somebody has to choose; some set of actions that the algorithm can choose from; and some information it bases its choices on. And in some sense, that most basic framing is already enough to have a lot of the important discussions here. Under the hood, obviously, there are more intricate technical things going on, but in many ways those are secondary and don't change that much what's happening at the end of the day.
So, say you want to discuss what a Facebook algorithm does to our public sphere, our mental health, or our social interactions. It's important to know that it maximizes ad clicks, and, as an intermediate goal to get there, that it maximizes engagement- keeping you on the website. Beyond that, it's not that important to know the latest neural network architecture used to do this optimization, right? And the same carries across all kinds of application domains. It can be useful to go a little bit deeper than this idea that there's something being optimized, but at the end of the day, that's the core thing we need a public understanding of.
And then, of course, you hear a lot, especially from industry representatives, that this is all very complicated, you can't understand it. That's the pitch given to policy makers: leave it to us to self-regulate. There's an obvious material and social-political interest behind that, right? They wanna keep control, they wanna keep their profits. But that doesn't mean the stories they're telling are accurate. One thing the book tries to do is explain these key ideas of how machine learning works as a precondition for having a broad public debate about what we should want to do with it and who should call the shots about what it's being used for.
Ralph Ranalli: So you said the first step is transparency, right?
Max Kasy: Yep.
Ralph Ranalli: And that companies that use AI could be required- similar to the way they have corporate financial reporting requirements- to disclose what their objectives are for their use of AI and what their algorithms are trying to maximize or optimize, as you're saying. How do you make that a reality in the sort of political and economic sphere we're operating in right now?
Max Kasy: Lemme just take a slight step back from that. What I would argue is that the normative endpoint we should aim for is that people have a right to determine their own lives, and to have self-governance, or democracy at the end of the day, because it can't just happen at the level of each of us on our own. And so then there's the question: How do we get to the point where we govern our own lives?
And that's tricky, given that the world is quite far from that ideal. But as you say, I think one key step to get there is to understand the decisions that need to be made and the decisions that are currently being made. And one first key step could be, and should be, transparency legislation of the form that, when those systems are being used, we get clear information about what they are maximizing. I think that's something we could potentially get broader political buy-in for. We have the same thing in all kinds of other domains, right? Any company has to do financial reporting and accounting that satisfies certain standards. Why not have anybody who deploys an algorithm do a similar type of reporting on what they are maximizing? And once we know what's being maximized, we can have a much more meaningful public debate about: Is this what we want? Do we want something else? In whose interest is it to maximize this? And that can then be a starting point for potentially changing what is being maximized, and in whose interests.
Ralph Ranalli: I mean, I think transparency is a really interesting subject when it comes to AI, because some of these algorithms and agents are somewhat black boxes even to the people who develop them. They don't necessarily know exactly how they work.
Max Kasy: That's another interesting question, right? There's all this discussion: Can we understand what AI does? Can we explain it? And there's a lot of confusion, I think, about what we mean when we say explain or understand. I would distinguish three different levels at which you can explain what an AI is doing. First, there's the type of thing engineers are typically interested in: you have this really complicated function, like a neural network, that somehow maps its inputs to some output, some decision, or some prediction. Understanding that function is useful for the engineers, for debugging purposes, for figuring out: Is this really doing what I wanted it to do? What's going wrong? And it's scientifically interesting.
But that's very different from the type of explanation we would typically care about in a more social context. There's the more individualized version, which lawyers often care about: a decision was made that affected you, maybe you're unhappy about the decision, and you want an explanation of why that decision happened. Say you applied for a job and didn't get it, and you wanna know: Why did I not get the job? Maybe it was for some discriminatory reason, for some characteristic that shouldn't matter. Maybe it was for some reason you could easily change, and that you wanna change if you apply again.
So those are individual-level explanations of a decision. But then there's explaining the whole system, and that gets back to what I was saying before: explaining what's being optimized, in whose interest, over what set of possible actions, based on what information. I think that's the kind of explanation that's most important for thinking about regulation and democratic governance. And that kind of explanation is much more easily accessible, while the type of explanation the engineers and scientists want is very much an open question. There's this whole field of so-called mechanistic interpretability that has sprung up over the last couple of years, which is basically people trying to understand what a specific neural network is doing.
Ralph Ranalli: So what is the bird's-eye view of the different and sometimes competing groups who have an interest in AI? What are the different groups, what are their incentives, and what are the stakes for each of them? We've talked about the owners of the technology, and we've talked about the public, but you also mentioned scientists and engineers. What are those different groups? What do they want, and how do the things they want interact with each other?
Max Kasy: I mean, it's all very much not monolithic, right? You have private actors, you have state actors, and the state is not one thing either; there are many different parts of the state. But a large part of AI and machine learning to this day is commercial, corporate. A lot of it is still geared towards selling ads at the end of the day. Another increasing share is trying to sell things to the defense sector, to militaries around the world and secret services and surveillance apparatuses.
But we also need to think about all kinds of people who don't necessarily currently control AI but might have an impact on it. That's what I discuss in the book under the header of agents of change: who could possibly move things in a direction that is more democratic, so that AI is used more in the general interest. And there, I think we really need to move away, in discussions about AI, from just talking to engineers. There's often this focus on the ethics of engineers- essentially telling engineers to be nice. And there's nothing wrong, I think, with adding ethics courses to a computer science curriculum.
But it's not gonna solve the major social issues around AI. And the reason is that if you work at a company whose bottom line is determined by how many ads it sells, then it might be nice that you care about hate speech online, but at the end of the day, you have to sell ads, right?
Ralph Ranalli: Right. Maybe the ethics courses need to be in business school as well as engineering school.
Max Kasy: Managers, too, supposedly have a fiduciary responsibility to maximize profits, right? So I think what we need to do instead is talk to many different actors in society that could have an impact here. That's the different workers who are implicated, from the gig workers to the click workers who produce the data to train AI. That's the consumers of various AI products, who are typically less organized- there's usually not a union of consumers- but who could still have quite an impact just through reactions to scandals, people leaving products when they're unhappy with what they're being used for, and so on, which impacts corporate decision making.
There's the broader public debate and the media, which matter both because of what consumers and workers do, and because of what regulators and policy makers do- what feeds through the political system. There are all kinds of actors in the state and the judiciary who can have an impact, and all kinds of tools from more traditional regulation: things like monopoly and competition law, intellectual property law, privacy law, anti-discrimination law. Technical standards are incredibly important here. All of those things have an impact on how power over the inputs of AI is shifted, and at the end of the day on how power over AI is exerted, and to what end. And so we really have to think much more broadly here than just talking to engineers and telling them to be nice.
Ralph Ranalli: So the title of your book references the Marx-Engels phrase, the means of production. I think one could imagine that if Marx and Engels were around today, they might advocate for public ownership of the means of prediction, like they did for the means of production. And you mentioned state actors. Is there a case that this is such a public good, with such potential downside risk, that public ownership makes sense?
Max Kasy: So I think- and I suspect even Marx and Engels might agree- the ultimate goal is not necessarily to have state control here, right? And that's also not what I would necessarily mean by democracy here. It's more that the people who are being impacted by something have a real say about what that thing is doing to them. And in a way, the good news is it doesn't have to happen all at once. We don't need to go out today and nationalize X and OpenAI. AI is being deployed at many levels, and we can start building democratic control at those levels, not necessarily hand over control to whoever controls governments at the moment. This can be at your local school board that uses some algorithm in the teaching process. It can be at your workplace, where you and your 10 colleagues have to figure out whether the AI should be used to automate away your job or to make your job more interesting and productive. It can be telling your municipal police department what to do about using predictive algorithms to target their police patrols. It can be at a non-local level, at the level of online communities of interest: the users of a social media platform having a say over how feeds are selected on that platform.
Right? And so I think that's really what we should aim for here: building the power and shifting the control, not necessarily to a monolithic state, but to the people who are being affected in different domains. And it's not all or nothing; it's a continuous shifting of power in different domains at different levels- anything from technical standards that make it easier to switch platforms, to transparency laws that make it easier to understand what's going on and what the algorithms are doing. And thereby we build more real democracy, in some way.
Ralph Ranalli: So I wanted to turn, if we could, to the relationship between AI and economics. You've said that economics and machine learning have things in common- we touched on this before- like optimization, probability, decision theory, and you've said that pretty much all of AI is basically optimization. And I think people's big problem with the neoliberal path in economics has been that optimization mindset that reduces people basically to economic inputs. Is this just more of the same when we're talking about AI? Is that a danger, or is there a way around it?
Max Kasy: I think at the level of academic disciplines or intellectual frameworks, in a way that's a reason why an economist might write the book that I wrote: I think economics, as a discipline, can serve a useful bridging function here between more technical discussions and broader social science and political discussions. As you say, the intellectual frameworks that we have as economists, that the standard econ curriculum includes, cover all those things that engineers and computer scientists in particular also think about, like optimization, like probability and uncertainty. But the big difference between those fields is that for an engineer, the objective they're optimizing is typically given, and then they're extremely good and sophisticated about how they achieve that objective.
Whereas for an economist, it's natural to think about the fact that there's more than one person in the world, right? Which doesn't sound that surprising when you put it that way, but that's kind of the key point. And there are different people who want different things, who have different resources, who have different values or interests or beliefs, who have different information. And so that brings us to a world where there's inequality, there are conflicts of interest, there are struggles over who controls these things. And that, I think, provides a useful unifying perspective for thinking about social conflicts around what technology is used for.
So that's at the high level of academic disciplines. In terms of people being worried about the optimization mindset, I think again it all boils down to what is being optimized, right? You can optimize profits at the expense of human wellbeing and the environment and you name it. Or you can optimize something that's more like some combination of what you and I and everybody else care about, and especially what those who have the least need or care for. So optimization by itself doesn't mean much. It's all about optimization of what, I would say.
Ralph Ranalli: So one of the things your group at Oxford does is work on developing what you call a common research agenda, at this place where machine learning and economics meet. What are the components of that research agenda as you see them so far in your work?
Max Kasy: What I've been trying to do there is to bring people together across different academic fields. And I think machine learning and AI provide an exciting opportunity for that, among other reasons just because there's so much hype and excitement around it right now that everybody wants to work in the field, and that makes it possible to have conversations across boundaries between disciplines that are otherwise more rigid. So that's one thing.
The other thing goes in the direction of what I was already mentioning: that we have these foundations of statistics, and of machine learning in particular, that all take this very specific decision focus, right? This idea that there's some objective that's given to you and you're trying to achieve it as well as possible. And I think that misses both, as we've discussed at length already, what machine learning does in a society, and also what more traditional statistics does, which I think is much more about describing things and communicating them across people rather than about you as an isolated person making a decision based on data, right? That's not really what we do when we do science. But that is how we teach statistics in all our standard statistics courses across disciplines. So I think the key high-level angle is to think about a world where there's more than one person.
Ralph Ranalli: Right, which is very much on brand for this podcast and the network, the Economics for Inclusive Prosperity Network. What are the possible effects of AI, do you think, both positive and negative, on an economics that has this broader view of what prosperity is?
Max Kasy: Again, at the level of academic work, the hype and excitement open new avenues where people are willing to explore new ideas, and that's exciting and fun. At the broader social level, in terms of what it implies economically, in a way it goes back to the title of the book, "The Means of Prediction," right? The way we think about controlling assets as economists, it usually does two things: owning an asset gives you the right to the income that's generated by the asset, and it gives you control rights. The book focuses more on the control rights, but it's really both that matter, right? When actors, companies, appropriate control over data, scraping the internet and reselling it to us, that's a massive redistribution of income and a concentration of income in a smaller and smaller number of hands. And it's a concentration of control, again, over what's being optimized and in whose interest. And I think both of those matter, and there's a very real risk that all of this results in even greater economic inequality, both through the concentration of income streams that come, at the end of the day, from knowledge, from information and data, and through the impacts it might have on the labor market, depending on how it is used.
On the labor market impact, right, it's one of the big fears about AI: it's gonna replace all jobs, everybody will be unemployed. And in a way that's a fantasy that many of the Silicon Valley guys buy into, where they believe they're the smart guys, everybody else is, at the end of the day, unnecessary or useless, we're gonna replace them with robots, and then maybe we're gonna prevent the revolution by giving them a little basic income.
That's a kind of delusional hubris, but I think it also goes back to the point that we have agency, which is something that other economists have also very much emphasized in recent times, people like Acemoglu and others, right? There's nothing predetermined about the labor market impact of the technology of AI. It's such a general-purpose tool, right? Learning based on data and making automated decisions to optimize something, which is at the core of what's going on here, can be used for so many different purposes. It can very much be used to automate away jobs, to repress wages, to reduce the labor force and drive up unemployment, with the goal of increasing profits. Or it could be used in very different ways: maybe to take away the drudgery of rote work that goes along with many jobs, to make them more interesting, to level up, where maybe it increases more the productivity of those who have less training or education. A lot here depends also on the creativity of how it's being used. But that in turn is going to depend on, again, who calls the shots. Is it only in view of shareholder value, to depress wages and shrink the labor force, to maximize profit? Or is it done maybe in co-determination with workers, with the workforce, and with their interests in mind? And I think that is very much going to determine what the labor market impact of AI is going to be.
Ralph Ranalli: Well, it's interesting to hear you talk about sort of the Silicon Valley view of the robot replacement theory and a basic income, because you've done considerable work on basic income. And you're clearly skeptical that we're all gonna be sitting at home with our basic incomes while the robots do all the work. What do you see as the actual relationship between basic income programs and the effects that AI is gonna have on the labor force?
Max Kasy: So as you say, in a kind of separate line of work, I've done a fair amount of projects around basic income programs and job guarantee programs, a few large-scale field experiments and policy pilots we've been running over the last few years. And I think both of those debates, about basic income and about job guarantees, are very fraught in many ways, because a lot depends on the fine print. In a way, there are a lot of false friends and false enemies you can have in these discussions, because different people mean very different things when they use these names. And that's very much true for basic income in particular, right? I already mentioned this Silicon Valley idea of automating everybody away and then preventing the revolution with a basic income.
But I think there's also a much more progressive way of thinking about what these programs can and cannot do. Personally, I would be very much in favor of building a safety net that's unconditional and universal, that gives outside options to everyone. And for many people that's in particular income, but it's not just income alone.
And so that's where the job guarantee programs come in, which I think are relevant in particular for more disadvantaged groups that experience social exclusion, where having meaningful, well-paid, guaranteed employment has a lot of additional benefits beyond just the income, right, in terms of self-respect, social inclusion, social contact during the day, a sense of meaning and purpose, and so on.
And I think both of those are important, but what's common to basic income and job guarantee programs is really that they should provide unconditional outside options. That's really, for me, the core general idea here: the idea that if you're in a shitty situation, you have the option to leave that situation, to say no, and to have a fallback that gives you a decent life. And just by having that fallback, in many cases you will not even be in a situation where you actually need to use it, because it strengthens your hand, right? It strengthens your hand in an abusive employment relationship. It strengthens your hand maybe in an abusive private relationship that it allows you to leave. It strengthens your hand relative to a government bureaucrat who might abuse their power. Having an unconditional outside option in all these contexts is going to improve your situation even if you don't need the outside option at the end of the day.
Ralph Ranalli: I thought that part of it was fascinating, because you could almost characterize it as not just a basic income program but a sort of universal basic power program, right? Because of the significant empowerment that lies beneath what the income or the job guarantee represents.
Max Kasy: Many union activists are skeptical of basic income, but a lot of that comes out of the flip side of hearing this Silicon Valley-type idea of let's just replace everything and keep people alive with a little bit of money. And it's also based on the fine print that for some people comes along with it, like, "Oh, let's just replace all public services, education, health, and so on, with just a cash transfer." Which I think would be a bad idea. But you could also think about it very differently, in a way that might resonate more with union folks, right? Which is to say it's an unlimited, tax-financed strike fund. That could massively strengthen the bargaining power of unions, if you think about it that way. And framing matters here.
Ralph Ranalli: So, just nerding out for a second before the end of our time: I was really interested in what you called optimal tax theory and its relationship to this, the magic bucket. Could you just explain the magic bucket and how it works?
Max Kasy: So the traditional way that economists often talk about taxes is that there's this trade-off: on the one hand, we want to redistribute to those who need the money more; on the other hand, there are incentive effects, where taxation might distort behavior, right? The idea is that if you have a progressive tax that takes from the rich and gives to the poor, maybe people work less or save less, or they evade taxes more, and somehow that reduces the tax base. But a lot of what we do at the lower end of the income distribution, especially in the US with programs like the earned income tax credit and so on, is actually subsidizing low-wage labor as opposed to helping the truly unemployed.
And in a way that's quite bizarre in some sense, because in this case you're over-incentivizing labor supply, right? There's inefficiently low labor supply, but there's also inefficiently high labor supply, and by subsidizing low-wage labor, you redistribute to the employers of low-wage workers, you depress wages, and you incentivize people to work more than would be socially optimal given, maybe, the other obligations they have. And so, from the perspective of very mainstream, classic optimal tax theory, it would actually be clearly efficiency-improving to not do that and give people a basic income instead, where you don't distort behavior away from working or towards working in the labor market. Instead, give people the choice, give people real autonomy to choose what's best for them, while giving them the basic material safety that a basic income could guarantee.
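[Editor's note: Kasy's incentive point can be made concrete with a small numeric sketch. The wage, subsidy rate, and grant below are illustrative numbers chosen for this example, not figures from the conversation or from any actual EITC schedule.]

```python
# Compare the reward for one extra hour of work under two stylized schemes:
# an EITC-style wage subsidy vs. a lump-sum (basic income) transfer.
# All parameter values are made up for illustration.

def take_home_subsidy(hours, wage=10.0, subsidy_rate=0.5):
    """EITC-style phase-in: every dollar earned is topped up by subsidy_rate."""
    return hours * wage * (1 + subsidy_rate)

def take_home_basic_income(hours, wage=10.0, grant=200.0):
    """Lump-sum transfer: paid regardless of hours worked."""
    return grant + hours * wage

# Marginal reward for the 21st hour of work under each scheme:
marginal_subsidy = take_home_subsidy(21) - take_home_subsidy(20)            # 15.0
marginal_basic = take_home_basic_income(21) - take_home_basic_income(20)    # 10.0

# The subsidy raises the effective wage from 10 to 15, pushing people to
# work more (Kasy's "over-incentivizing labor supply"); the lump sum leaves
# the marginal wage at 10, so the work decision is undistorted at that margin.
print(marginal_subsidy, marginal_basic)
```

The sketch only captures the work-incentive margin Kasy mentions; it leaves out the redistribution to employers and the wage-depression effects he also describes.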
Ralph Ranalli: Well, Max, thank you very much. This has been a really interesting conversation and there's certainly a lot to chew on. A lot of possible avenues for research if you're interested in economics, but also if you're interested in the political side for advocacy and trying to realize some of these ideas to empower people and make this whole technological revolution we're going through a little bit more democratic. So I just wanna say thank you and this was great.
Max Kasy: Thank you so much, Ralph. This was fun.
Outro (Ralph Ranalli): Thanks for listening. The Economics for Inclusive Prosperity podcast is produced in collaboration with the Reimagining the Economy Project at the Malcolm Wiener Center for Social Policy at the Kennedy School of Government at Harvard University. The co-producer of this podcast is Tony Ditta. Please join us again in two weeks for another new episode featuring University of Chicago Booth School of Business economist and podcaster Luigi Zingales and his thoughts—and big worries—about the future of capitalism.
And if you like this podcast, please remember to subscribe on Apple Podcasts or your favorite podcasting app.