The title of this post is deliberately provocative. My goal here is to give some rigor and structure to discussions of the utopian opportunities afforded by AI, and to stimulate discussion on whether and how to realize those opportunities. It’s not my view that AI is sufficient to realize utopia, though it may be necessary.
Also, I deliberately used the phrase “far to” instead of “long until” in the title of the post. How soon, if ever, certain things discussed here happen depends not just on clock time but also on the amount of effort applied to achieving them. In addition to the temporal distance to these scenarios, we should also think about AI opportunities in terms of technical distance, political distance, etc. Many factors are involved in bringing good things about, as well as in analyzing the likelihood of their coming about.
One of the reasons I work on AI-related issues is that I think it has enormous positive potential, much of which hasn’t yet been realized (even with the current state of the technology, let alone how it may be in the future). At the same time, I think a lot of discussions of that potential are simplistic, vague, or otherwise problematic. So I’ll try to do a better and more critical job of analyzing the positive potential of AI here in order to encourage more of the same, but this post certainly shouldn’t be interpreted as a prediction that all these good things will necessarily happen. If such a thing is possible, it will happen because people work actively to bring it about, not because it’s inevitable. And I’m also not downplaying the negative potentials of AI, which are also very important to think rigorously about, but which I’m not focusing on in this blog post.
Without further ado, let’s get into what “utopia” and “utopianism” mean, and then explore several different connections between AI and utopia(nism).
Defining utopia and utopianism
There’s a lot of scholarly work on utopia and utopianism – indeed, there’s even a dedicated journal, Utopian Studies. It’d be impossible to do a comprehensive survey of utopian thought and its various critiques in this blog post (though you can find pointers to some of the key texts in a paper I wrote for a class on utopianism). Here I’ll just give a very brief summary of what some people who study this sort of thing say about it.
Defining utopia and utopianism is tricky for some obvious as well as non-obvious reasons. While noting the many debates on this, I’ll simply define utopia as a space (past, present, or future) in which justice is realized, and utopianism as a system of belief that regards the pursuit of utopia as a valuable political project. Note that there is no presumption here that the (lack of) realization of justice is binary – one could imagine calling a certain state of affairs a utopia while still finding room for improvement, though this doesn’t necessarily fit how the term is commonly used in public discourse (where it is often used in a derogatory fashion, for reasons we may better understand shortly). Indeed, the question of perfection and perfectibility looms large in debates on the meaning of utopia, but in the context of AI, what I’ll suggest below is simply that AI may at least help us attain a world that is much more just in many respects.
So, why are utopia and utopianism important and controversial? Besides the perfection stuff mentioned above, and the common historical association of the term with particular (failed) attempts at utopias, some scholars of utopianism argue that utopianism is necessarily controversial, in some sense, at least when it’s discussed or pursued in unjust societies. Why? Because it serves, implicitly or explicitly, as a critique of the status quo. One of the foremost theorists in the history of utopian thought is Karl Mannheim, who theorized “utopia” as having a critical relation to the prevailing “ideology” of the times. Clashes between utopia and ideology are clashes about the nature of the world and what’s possible in it. He wrote:
“What in a given case appears as utopian, and what as ideological, is dependent, essentially, on the stage and degree of reality to which one applies this standard. It is clear that those social strata which represent the prevailing social and intellectual order will experience as reality that structure of relationships of which they are the bearers, while the groups driven into opposition to the present order will be oriented towards the first stirrings of the social order for which they are striving and which is being realized through them. The representatives of a given order will label as utopian all conceptions of existence which from their point of view can never be realized.” (Mannheim, 1932, p. 164, original emphasis).
Mannheim’s seminal early work is hardly the last word on utopianism, or ideology for that matter. But his idea of utopia as having a critical orientation towards the status quo is quite relevant for our discussion here, since some utopian visions for AI call into question many aspects of our society that are often taken for granted—to give just one example, the necessity of paid labor, which I’ll have a lot more to say about below.
AI and utopia
So, now that we have at least a vague sense of what utopian thought is and why it’s important, let’s get into some of the specific ways in which AI relates to utopia. I’ll discuss four specific relationships in this post, though this list is not necessarily exhaustive, and some of these could be broken down into further sub-categories. The four I’ll explore are: AI as problem solver, AI as work improver, AI as work remover, and AI as equalizer. In most of these cases, I’ll discuss the various forms of (technical/social/policy/etc.) distance involved before they can be fully realized, examples of how they might work, issues in their implementation or ultimate desirability, and whether they actually count as being properly utopian. At the end of the post, I’ll try to tie some of these threads together and discuss the political function of AI-topia and the immediate challenges we face in achieving these goals, if we want to do so.
AI as problem solver
This one is very common. Over 8,600 AI researchers and other interested parties (myself included) have signed an open letter which, among other things, contains this brief summary of the case for AI as problem solver:
“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.” (FLI open letter, 2015)
Eradication of disease and poverty would be very big deals, and arguably could serve as building blocks of or stepping stones toward utopia under some definitions. Addressing climate change would also be a big deal, as would addressing all sorts of other problems in society that have been claimed by various people to be (at least partially) solvable with AI. But how plausible is this? And can we parse out the “at least partially” part a bit more? I’ll try to do that a little bit in this section, while noting that it’s a very big question in need of more than a blog post to answer.
First, let’s note that this is a distinct sort of claimed benefit from the others I’ll discuss below (e.g. AI as work remover). Eliminating the need for paid work in order to attain a decent (or high) standard of living would both require and enable a very sweeping change to society, with myriad consequences. In contrast, or so the AI as problem solver story goes, using AI to cure a disease is a more straightforward technical fix—we identify a problem such as a disease, apply AI to it, and, if the problem is amenable to the application of intelligence, presto, we have a solution (say, a cure). The range of issues claimed to be solvable with AI is very broad, though—they are not all narrow problems amenable to technical fixes. Borrowing from Sarewitz and Nelson’s 2008 Nature article, “Three rules for technological fixes,” we might apply three criteria to various problems in order to assess the plausibility of AI as a solution to them:
“I. The technology must largely embody the cause–effect relationship connecting problem to solution. …
II. The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria. …
III. Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.” (Sarewitz and Nelson, 2008)
This is just one set of criteria, and you can look at the original article to assess whether their reasoning makes sense in the context of AI, but it is at the very least a useful reminder that not all problems are equally solvable through technical means, and that there might be some structure to that relative solvability. For example, one might argue that developing a cure for a disease would be easier for AI to accomplish than ensuring its effective distribution. Or, one might argue that “providing everyone with access to X standard of living” (whatever level that may be) is easier to accomplish than a comprehensive solution to the complex, multi-faceted, multi-level phenomena of racial, economic, gender, and other forms of inequality, with which poverty is intricately intertwined. The latter (at least absent a superintelligent AI imposing fundamental changes in society against many of its members’ wills) seems less amenable to an AI-based “solution,” though it’s plausible that AI could help on at least some of these fronts in at least some ways.
We should also appreciate here that different levels of technical capability in AI may enable different extents or likelihoods of “solution” to different problems. With a sufficiently advanced AI, it may be straightforward to tell it to find a cure for a certain disease. Today, however, AI is not able to easily solve any such problems without a significant amount of human input and fine-tuning. A comprehensive analysis of different possible states of the art in AI and how they might enable solutions to different sorts of problems is beyond the scope of this post, but it seems like it could be an interesting exercise and relevant to cultivating more nuanced-while-still-ambitious utopian visions for AI and society.
Finally, when looking at AI through a utopian lens, we should consider the political and policy dimensions involved in determining what counts as a problem in need of an AI-based solution. There is a legitimate concern here about excessive technocratic management of society which needs to be taken into account, as well as the more general question of whether there is currently a good mapping between the application of AI technologies and the most important problems. While various organizations (claim to) work on applying AI to important problems, it’s far from obvious, to me at least, that, for example, the current distribution of U.S. federal funding for AI and robotics is well-targeted at societal problems, as opposed to military applications and a handful of miscellaneous (mostly health-related) others. So, it may be that AI as problem solver is not only in need of more critical technical/social assessment, but that its actual realization may also require more political and policy change than has heretofore been discussed in the literature. For example, what would it look like if governments took a comprehensive approach to funding AI research applied to grand challenges, or implemented an AI equivalent of agricultural extension services (the provision of technical expertise to farmers), actively seeking out NGOs, local governments, and others who might benefit from AI but need technical experts to help? What problems are not currently likely to be solved by profit-motivated companies, which employ an increasing portion of global AI talent? These are just some of the questions a robust AI-topian project would need to answer.
AI as work improver
Another common claim made in support of (possibly) utopian visions for AI is that AI and robotics will enable the elimination of “dull, dirty, and dangerous” aspects of jobs, allowing work to be more stimulating, enjoyable, fulfilling, etc. This is potentially plausible, but not at all guaranteed, and probably the least interesting of the four AI/utopia connections I’ll discuss in this post. I discuss this claim in more detail elsewhere, in the context of Brynjolfsson and McAfee’s book The Second Machine Age (Brundage, 2014), but in this section I’ll briefly recapitulate the key points of that article and explain why I find this vision neither especially interesting nor especially utopian.
There’s definitely something to the idea that work could be better for a lot of people—indeed, it’s currently better than it used to be for a lot of people (see, e.g. the chapter on work conditions in Gordon’s book The Rise and Fall of American Growth). But AI does not by any means ensure this outcome. The reasons I give for this in the article mentioned above are:
“First, the tasks that are easy to automate aren't necessarily the boring and repetitive ones, and the tasks that are hard to automate aren't necessarily the fun and interesting ones. …
Second, even when enjoyable and harmonious jobs are technologically and socially feasible, companies may not face strong incentives to design them like that. …
Third, one person's meaningless, repetitive labor is another person's satisfying, hard day's work and a big part of her identity.” (Brundage, 2014).
These considerations suggest that increasing the quality of work is probably not exclusively, or even primarily, a matter of technology, as social science research has long shown. Political and policy measures are probably more important here, some of which I mention in that article – for example, I wrote: “Nobel Prize–winning economist Edmund Phelps has proposed subsidizing companies to hire low-wage workers. When businesses have to compete against one another to attract employees, instead of the other way around, they may be more inclined to foster satisfying, and not merely productive, work environments.”
Lastly, this doesn’t strike me as particularly utopian, or at least sufficiently utopian for my tastes, though it may be a very important impact of AI and one that should be actively pursued. Why? Because it leaves essentially unchallenged the presumption that people should have to spend a huge fraction of their lives working for others at jobs they may not want to do in order to have a decent standard of living, to the exclusion of other activities that they may find more fulfilling. Under some conceptions of justice, that requirement is itself unjust. Better work may be important, but less work may be a more utopian ambition, so we turn to that next.
AI as work remover
For a very long time, people have imagined an end of work achieved through political or technical means, or some combination of the two. And indeed, this ambition has been realized in many ways, and in many places. Child labor laws have led to big improvements in child welfare and educational advancement, and many European countries place a much heavier emphasis on work/leisure balance, enforced by law. Historian Benjamin Hunnicutt goes so far as to say that free time is the forgotten American Dream, one that was actively sought for generations but was steadily eroded by consumerism (which fueled higher material demands and thus a greater desire to work) and a societal endorsement of the work ethic.
Moreover, an end of work has figured prominently in many specific conceptions of utopia, both in philosophical/political treatises and in (science) fiction. And yet, the end of work also features prominently in many modern fears associated with AI – namely, socially destructive technological unemployment. So, where does this rich history leave us regarding our discussion of AI and utopianism, and are we actually close to the dream (or nightmare) of an end of work? And “close” in what sense?
The history of debates on technological unemployment should, first of all, be a reason for skepticism about current claims of present or future technological unemployment. Not only did people in the past worry about permanent, broad-based technological unemployment and turn out to be wrong—they made many of the same arguments that are being made today. In the 1930s, people talked about the replacement of phone switchboard operators with automated switchboards in much the same way people today talk about the possibility of AI substituting for white-collar jobs—namely, that machines used to substitute only for muscles, but now they’re substituting for brains. So, we should look carefully at the specific technical developments in question and their relationship to human cognitive abilities, and at where AI could (or could not) plausibly soon substitute for humans. We should also think about whether and in what sense modern AI is different from earlier waves of automation – is machine learning the critical distinction, or the range of cognitive and manual tasks that are now being automated? I think a compelling case can be made that AI is likely to be able to automate a wide range of tasks, but my point here is simply that we need some historical perspective, and to remember that many new jobs will be created, too.
It’s also important to note that, on the other hand, earlier prophets of technological unemployment were right in a sense—many people were, in fact, fired after being replaced by machines, entire careers disappeared, and substantial human suffering resulted. Overall, economies continued to grow and many (though not all) people eventually got new jobs, but it was not usually a smooth transition.
Another talking point common in the 1930s, and again in the 1960s, debates about technological unemployment (or the lack thereof) is also made today—that machines will augment, rather than substitute for, human labor. I’ve thought about this distinction a lot and am still not convinced it’s always meaningful or easy to discern in the case of AI—augmentation of person A may result in the substitution of person B. But it points to an important issue, namely that in many cases AI is not able to replace the full range of tasks performed by people, so visions of the future of work should take this into account. Indeed, a recent article by researchers at McKinsey argued that AI is more likely to substitute for individual tasks than for entire occupations, and will change the nature of many jobs more than it eliminates them. But even if this is true, is it a good thing? Do we want to ensure that a large fraction of the population is employed/employable in the future?
I’m not sure, especially since there is a lot of evidence that many people get various benefits (besides just money) from paid work. But my current thinking is that a future with less need for work is a critical component of realizing justice. Here I’ve been influenced a lot by thinkers such as Philippe Van Parijs, who argue persuasively, in my view, that a robust conception of justice requires real freedom for all, that is, not merely a lack of external forces preventing one from achieving one’s goals, but the material and social capabilities to actually achieve them. This, in turn, requires the ability to choose one’s distribution of work and leisure, and to be able to say no to jobs that are demeaning, or insufficiently compensated, or otherwise undesirable. Achieving this, Van Parijs argues, requires a maximum sustainable basic income, and indeed he thinks such a basic income is the way that capitalism as an economic system can be morally redeemed.
These are big issues, and I don’t expect to resolve them here, but I’ll briefly say a few words about the relationship between basic income, freedom from (required) work, and AI progress. Today, it is possible for people on average to work at least somewhat less than they do. We can see this from the variation across countries in hours worked, with, e.g., citizens of many European countries working far fewer hours than Americans. This, in turn, hinges on policy. But it’s also in part a technical question of what level of living standards society can afford to provide to all people. In the U.S., for example, the levels of basic income that have been analyzed are on the order of a few thousand to ten thousand dollars per person per year. Giving much more than that would require some combination of massively higher taxes, massive changes to current welfare state policies, or a massive increase in economic productivity.
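To make the scale concrete, here’s a rough back-of-envelope sketch of these costs. Both constants are my own approximate assumptions for illustration, not figures from any official cost estimate:

```python
# Back-of-envelope gross cost of a U.S. basic income at various levels.
# Both constants below are rough assumptions, for illustration only.
ADULT_POPULATION = 250e6   # approximate number of U.S. adults
FEDERAL_OUTLAYS = 4.0e12   # approximate annual federal spending, in dollars

for annual_payment in (3000, 10000, 30000):
    gross_cost = ADULT_POPULATION * annual_payment
    print(f"${annual_payment:>6,}/person/year -> ${gross_cost / 1e12:.1f}T gross "
          f"({gross_cost / FEDERAL_OUTLAYS:.0%} of current federal outlays)")
```

Even ignoring offsets from replaced welfare programs, moving from a modest basic income to a generous one quickly approaches the scale of the entire current federal budget, which is why productivity growth matters so much to this argument.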
So, could we increase economic productivity a lot with AI in order to support a steadily rising basic income over time, perhaps even eventually becoming so productive as to support a global basic income? I think so, but how exactly to do this is still an unresolved question, one being explored by entrepreneurs and large tech companies right now. In the limit, as Robin Hanson has argued, AI could support massive, nearly unimaginable increases in the rate of economic growth. But in the near future, the key questions that arise are: what range of goods and services can be automated sufficiently effectively to allow them to be distributed affordably today, and in the foreseeable future? What level of broadly-supplied living standards, if any, would count as achieving utopia? This obviously raises a host of technical, political, economic, ethical, and other questions. In addition, these issues intersect with other policy questions, such as the minimum wage. While small increases in the minimum wage have been shown not to have a huge effect on employment, very large ones plausibly could increase the rate of automation. Whether this is a good or bad thing depends, again, on your values; arguments have been made by some, e.g. Srnicek and Williams in their book Inventing the Future, that it is desirable, and that a combination of basic income, increased automation, an increased minimum wage, and a diminishment of the societal valorization of work should be pursued in tandem.
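To get a feel for what such an increase in growth rates could mean, consider a simple compounding illustration. The doubling times below are rough stand-ins I’ve chosen for this sketch, not Hanson’s actual estimates:

```python
# Illustrative compounding: how an economy's doubling time translates
# into total growth over a generation. Doubling times are assumptions.
YEARS = 30
scenarios = [("industrial-era growth", 15.0),  # doubling every ~15 years
             ("AI-accelerated growth", 1.0)]   # hypothetical: doubling yearly

for label, doubling_time in scenarios:
    growth_factor = 2 ** (YEARS / doubling_time)
    print(f"{label}: ~{growth_factor:,.0f}x total output after {YEARS} years")
```

Even a doubling time of one year, slower than some of the scenarios Hanson entertains, compounds into growth that would dwarf anything in economic history.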
One final point worth considering when evaluating AI as work remover through a utopian lens is that it’s very important to be clear about how we define “work.” As the term is commonly used in discussions of technological unemployment, work just means paid labor. But if we consider feminist arguments to the effect that care work (e.g. raising children, caring for elderly people, etc.) is often either unjustly unpaid or insufficiently paid, yet just as vital to the maintenance of our society as other forms of work, then we might arrive at different perspectives. So we should be wary of a rush to automate all “(paid) work” in the pursuit of utopia while ignoring the systematic inequalities that may remain in place, often along race and gender lines, in these other domains. And we should think about what it would mean, and whether it would be valuable, to automate many aspects of those forms of work.
AI as equalizer
The final utopian vision for AI that I want to discuss is probably the least commonly discussed of the four, but it may also be extremely important. By “AI as equalizer,” I mean the idea that AI may help enable a greater level of equality in society, above and beyond its impact on paid work, by providing people with access to cheap or free cognition that levels the playing field between people in some way. To some extent, this has already occurred. A large fraction of the world has access to search engines, which in turn give them access to a large amount of information that is equally available to others, and which would have been prohibitively expensive, difficult, or impossible to acquire before. But I’m not just talking about search engines—I’m thinking about the wide-ranging capabilities of personal assistant AIs, which we may not see in full form for some time but which will develop progressively in the coming years and decades.
Who cares about personal assistant AIs? Perhaps we should care more than we currently do. In the world today, the ability to have a generally intelligent personal assistant is associated with privilege—it requires either a lot of money or a high-status job, and, of course, a human to do the work. But if personal assistant AIs were to develop to the point where they could provide a very large fraction of, all of, or even more than the capabilities a human assistant provides, they could have a great leveling effect on society. Consider one dimension of that effect—the role assistant AIs could play in mitigating the impacts of differences in human intelligence levels. People vary naturally (and unnaturally, due to environmental factors) in their levels of intelligence, and there is abundant evidence that these differences are strongly linked to large differences in various life outcomes, ranging from health to job outcomes, and more generally, the ability to cope with the (rising) complexity of everyday life. If utopia is, at least in part, about enabling people to develop and pursue their own goals, then enabling them to do so effectively through access to high-quality cognition on the cheap (or for free) could be critically important.
We can flesh out this argument a bit using the terminology introduced by Sendhil Mullainathan and Eldar Shafir in their book Scarcity. In the book, they present various forms of evidence to show how scarcity broadly construed (scarcity of time, of money, of bandwidth, etc.) has a common set of effects on people, and functions, among other things, as a tax on the poor. When people experience scarcity, they are more likely to make bad decisions because their cognitive bandwidth is tied up in dealing with immediate crises, and we only have so much bandwidth. The opposite of scarcity is slack. If AI could democratize cognitive slack—that is, enable people to focus their mental resources on what they want to focus on, while reliably outsourcing other tasks to an AI for free or very cheaply—we may see more equal outcomes in various aspects of life. This argument, like many of the ones above, is necessarily speculative at this point, but it seems to me to deserve somewhat more consideration in the context of the utopian potentials of AI.
Conclusion: how close are we to AI-topia?
As I mentioned at the beginning of this post, there are multiple forms of distance between where we are today and the full realization of the positive potentials of AI. Some of these are technical, others are political, and others are laden with so much uncertainty that we really don’t know how to think about them yet. But if I had to suggest some takeaway messages from this blog post, they would be these. First, we should be more deliberate in how we use today’s AI to address major societal problems, and it’s not obvious we’re doing this optimally right now. Second, there seems to be at least some reason to think that AI could be a critical component of a transition to a (much more) just world, and as such it probably deserves all the attention it’s currently getting, and then some. Third, we should be conscious of the role of utopian visions in our thinking about the future of AI. In the case of work, for example, talking about a post-work future can, in part, function as a critique of the political as well as technical conditions that make work necessary today. In the case of AI as problem solver, highlighting the potential of AI to help address our grand challenges is not only an important endeavor in its own right, but can also serve as a reminder that technology does not progress and apply itself inevitably and independently of human input—rather, we should deliberately seek to use this powerful tool at our disposal to make our world a better place.
Thanks for reading and I look forward to your thoughts!