Yesterday the White House Office of Science and Technology Policy (OSTP) announced a series of workshops on AI. This is great news: I'm excited to attend one or two of these workshops, as I do related research, and it's a good sign that OSTP is taking the societal issues of AI this seriously. It's also encouraging that they have a good lineup of speakers and organizers from academia and industry so far.
Still, I think it is important for those interested in these issues, and people in general, to not take the White House and powerful corporations at face value, since we all have a stake in AI policy being done well. So I made what I thought were fairly supportive but mildly critical comments about this last night on Twitter. I’m copying them below for reference, and then I’ll say a bit more about some of the themes in them and my response to a reply from Ryan Calo, who is involved in organizing two of these events (Ryan's scholarly and advocacy work is great, by the way, and I applaud him for taking this on!). My tweets said:
"1. Some thoughts on the White House AI initiative: first, this is very good news overall IMO. Important issue getting high level attention.
2. Also, the thematic grouping of the workshops makes sense as does livestreaming them, and they seem to have good speakers lined up so far.
3. Last point of praise: the task force on AI for government is much needed. U.S. Digital Service, etc. are good but AI seems underutilized.
4. Now on to some mild critiques/caveats - it would be a mistake to accept the framing of this as a real public engagement effort. It's not.
5. It will have lots of value, IMO, but to actually engage the public on AI, one would need to look at and learn from prior similar efforts.
6. Among many other reasons why this isn't on the cutting edge of such things, self-selection will play a huge role in participation.
7. And even if all were interested in it, not all could go. This is why robust such efforts do things like compensate people for their time.
8. For examples of serious efforts at engagement, see e.g. https://www.nasa.gov/sites/default/files/atoms/files/ecast-informing-nasa-asteroid-initiative_tagged.pdf … (asteroids, in which I participated as a facilitator), ..
9. or the National Citizens' Technology Forum: http://pus.sagepub.com/content/23/1/53.abstract … That sort of thing may follow the workshops but they're not the same.
10. One final area of caveating/mild critique regards the report to be produced after the workshops. This could be a great report, but...
11. it is also clear that as in similar cases (see, e.g. Krimsky's Genetic Alchemy on Asilomar) this will hardly be an apolitical document.
12. It will reflect not just what happens at the workshops but various political constraints/biases of those involved at and around OSTP.
13. This is not intrinsically bad - some of those biases may be good, and science being political isn't unheard of - but it should be known.
14. Among other potential biases here are those for low/no regulation, and with many corporations involved here, advisory capture is a risk.
15. So, as in a lot of things sci./tech policy-related, processes are key - and beyond the livestreaming, there is ~ no accountability here.
16. What is *done* with this report after it's produced will also be critical, and again, this is not immune from biases/constraints/etc.
17. So, in short, good turn of events but I want to emphasize that this is not a panacea from either an AI policy or democratic perspective."
In reply to this and a subsequent tweet that CC'd him and the apparent lead on this at OSTP (Ed Felten, whose work I'm less familiar with but who also seems smart and well-intentioned here), Ryan wrote:
"Thanks, Miles. Seems a little odd/hard to prejudge here but we'll watch for these dangers."
This caused me to reflect a bit more on the topic. While I responded briefly right after, I wanted to write up my thoughts at more length, and in particular to say why I don't think it's necessarily odd or hard to prejudge the limitations of the proposed workshops in at least a few ways. Indeed, people who study public engagement in science and technology (and there are many such people – I just got home from a conference attended by a hundred or so of them, and that was only a small fraction of the field) already know a lot about when and why public engagement works, and from the publicly available information on these workshops, they do not seem to obviously circumvent known issues. That would be fine if this were billed merely as a process for gathering input from experts, but in Felten's post on the topic (which I recommend reading here), he indicated some fairly ambitious aims for the process, such as "engaging with the public" and "spur[ring] public dialogue." These are good goals, but in practice they're non-trivial to do well.
There's a large body of practice and scholarship on the intrinsic limitations of public engagement with science and technology, and much of what's already known informs my (again, mildly) skeptical reaction to these workshops and the report that will stem from them. Below I'll summarize a small amount of that knowledge, which I think is sufficient to raise some questions about the issues these workshops may run into. At the end I'll offer some concrete suggestions for consideration.
Why public engagement with science and technology is hard
There's a long history of theory and practice on public engagement with science and technology, including various cases for why it might or might not matter, whether it can be done in a meaningful way, and so on. See, for example, this paper by Jack Stilgoe on some of the key issues explored in that literature. I can't do justice to the breadth and depth of that literature here, but I'll briefly share a very selective (and admittedly biased) perspective on it.
There are at least two key issues at play when one wants to engage the public on science and technology: one is representation, and the other is expertise. With regard to representation, efforts at public engagement (even ones much more involved than the apparent White House approach) have frequently been criticized as limited in their effective inclusion of diverse perspectives. This includes not just demographic diversity but also institutional and ideological diversity (e.g. overly corporate participation with minimal non-profit, union, or other perspectives). This happens not out of malice (usually, at least) but because it's difficult: people are busy, distracted, and committed elsewhere, and taking time out of their week to engage with an unfamiliar and complex issue like an emerging technology isn't always high on their list of priorities. Hence, self-selection and the exclusion of perspectives are common critiques of various public engagement approaches. I'll give examples of efforts to overcome this in the next section.
The second issue, expertise, has also been a matter of much discussion. On the one hand, the very people whose perspectives aren't yet reflected in science and technology policy discussions rarely have expertise in the topics in question. This means that their inputs at such events may be poorly informed (or at least perceived as such) and immediately disregarded by expert organizers. Again, ways to address this exist to some extent. But there is also a relationship here between expertise and power. In ways both direct/explicit (e.g. experts lecturing to non-experts) and indirect/implicit (e.g. reluctance of non-expert participants to voice dissenting views in the face of apparent expertise), there can be an imbalance of power between groups in the room. This raises the importance of process design, and of learning from prior similar experiences, in order to facilitate constructive, informed dialogue.

Lastly, expertise in the engagement context has theoretical issues – who is really an expert in the ethics of AI, or the future of work? In both of these areas there is a lot of disagreement among "experts," and while certainly some are more familiar with the contours of these debates than others, there is a lot of fundamental uncertainty and value disagreement involved, and efforts to convey the state of the art in such areas often result in overly narrow framings of problems and the exclusion of important uncertainties from the discussion. Efforts to inform the public also often go awry when they are, as is common, misinformed by the long-debunked "deficit model" of science communication – the idea that informing people of the "truth" about science and technology will make them more supportive of it. In fact, the opposite sometimes happens, and in some areas, like nanotechnology, scientists have been shown to be more concerned about certain issues (such as health impacts) than non-experts are. Science dialogue and policy are complex and tricky to do well.
I’ve raised a few thorny issues and trade-offs, but there are ways to address these to some extent. I’ll turn to examples of such efforts now.
What people have tried before
There have been many, many public engagement efforts in the U.S. and around the world in the past several decades. Europe (and a few countries there in particular) has been a leader in this area in sheer number, but the U.S. also has been innovative in this regard, especially in the last decade. Here I’ll give two examples of such efforts, one of which I participated in and another that I didn’t, and I’ll say a few words about what was interesting about these approaches.
One notable effort (which I did not participate in) is the National Citizens' Technology Forum (NCTF), a series of events held in 2008. Organized by the Center for Nanotechnology in Society at ASU, the events represented a substantial investment of time, effort, and thought in how to engage a broad swathe of the public on the subject of nanotechnology-enabled human enhancement. This involved, among other things, a substantial effort in producing background materials to inform lay participants beforehand (a 61-page background report). It engaged members of the public with significant geographic diversity (the events were held at six sites across the U.S.), as well as socio-economic and ethnic diversity. It explicitly addressed some of the above concerns regarding self-selection and representation by compensating participants for their time with $500 at the end. And lastly, it was not a one-shot event – it spanned a month of virtual and face-to-face interactions. There is literature one can read about this (e.g. this and this) to learn more about it, as well as about subsequent events with even larger scope such as World Wide Views, but broadly speaking, a key takeaway from the experience is summarized by Dave Guston (full disclosure: my advisor) in the former paper:
"The general portrait of deliberation that emerged from the NCTF strongly supports the contention that lay citizens can deliberate in a thoughtful way across a continent about emerging technologies (Philbrick and Barandiaran, 2009) – with a few caveats [relating to the virtual component and the significance of the financial incentive]."
Guston also notes that "The participants mastered technical aspects of nanotechnology presented to them, and they engaged content experts in active, informed, and critical questioning." I'm not citing this as a perfect model – as noted above, it has been improved upon subsequently, and the literature speaks to various "wins" as well as new considerations. But it's still a stark contrast to the depth of participation that seems likely at the AI workshops, which is why I bring it up: to emphasize the vast range of possible public engagement models, and the value of learning from history in this area.
The second case I'll discuss is one that I participated in directly, specifically as a discussion facilitator. This event was the NASA Asteroid Initiative Citizens Forum, and you can read all about it in this report. In contrast to the NCTF, the asteroid forum was limited to one day each in two physical locations (Phoenix and Boston), plus an online component, with the participants in each city drawn from the local community. The event was sponsored by NASA and represented a novel collaboration between NASA (which provided the funding), the ECAST Network centered at ASU (ECAST stands for "Expert & Citizen Assessment of Science & Technology"), the Boston Science Museum, and others. It also represented a very significant investment of effort, including an extremely informative and well-put-together planetarium show introducing participants to the nature of asteroids; their risks and opportunities in relation to Earth impacts, science, and space exploration; and a number of possible next steps for NASA's asteroid initiatives. NASA funded this event because it wanted feedback on a specific set of options it was choosing between, and from what I understand, it was very pleased with the outcome and announced its decision a few months later.
The participants were also very diverse. At the table where I facilitated discussion (essentially, just encouraging people to talk when they were quiet but looked like they had something to say, and taking notes), there were people of a variety of ethnicities, occupations, incomes, and educational backgrounds, and the event was generally considered successful, entertaining, and useful. Of particular note for the issues above is the way that expertise was managed. The planetarium show and subsequent materials were the result of negotiation between the various parties involved, and they were well caveated regarding the uncertainties and values issues at stake. The activities of the day were also both entertaining (e.g. scenario planning with dealt cards specifying different asteroid sizes, times until impact, and expected impact locations) and relevant (we discussed the impacts of different technologies and asteroid governance regimes).
Again, my point is not to suggest that any of the prior approaches is entirely unproblematic. Public engagement is difficult and multi-faceted. But I hope these are useful reference cases for considering the White House AI workshops.
Interlude regarding the final AI report and bias
I'll briefly discuss one issue I mentioned in my original tweets: the political complexity of report production in this sort of context. As is well established in the scholarly literature on this topic, expert committees producing authoritative reports are not immune from bias and politics. And this isn't necessarily a bad thing – some of those biases may be good, and politics is not inherently bad. But it's worth noting one specific aspect of the AI report that will be produced later this year: it will not obviously bear a specific connection to each and every topic of discussion at all four workshops. Some comments, or even entire topics, will need to be excluded from the report, given the diversity of the likely discussions. This is inevitable, but it could be handled more or less transparently, so I'll suggest some possible mechanisms for encouraging transparency and fair characterization of uncertainty and disagreement at the end of this post. For now, I'll just emphasize that this is an area in which, for better or worse (probably both), the public doesn't have a direct say in the final policy process, and there is a very significant responsibility on those involved to solicit diverse opinions before and during the writing of this report.
Now, back to the public engagement issues and whether it’s early to say that the White House approach is limited…
Why it looks like the White House approach may be limited in important respects
I don’t have any insider knowledge of the planning of these events, and there is certainly a possibility that many of the above issues will be well handled. But based on the publicly available information, it seems that the events will be limited in two ways that bear on the issues of representation and expertise discussed earlier.
First, regarding diversity: there are many types of diversity, and the events discussed here seem to be doing well in certain respects, e.g. representation of women and people of color among speakers. This is great and to be applauded. Still, as discussed earlier, the value of the feedback the White House gets at these events (and the extent to which it can credibly claim to have "heard" from the public on this issue) may be impaired by a few factors. There is the lack (at least as far as I can tell from the websites) of any compensation for participation, which may limit economically diverse representation. There is an apparent heavy emphasis on corporate and academic participants, with little in the way of, e.g., union or non-profit representation, raising the issue of ideological and institutional diversity. Lastly, there is limited geographic diversity: all four events will take place in major metropolitan areas, none in rural locations, and all on or near the coasts. I understand there are limitations of time and resources for these events (which seem perhaps a bit rushed, with one event taking place in less than a month, presumably so the process can be completed before the end of the administration). Still, these are factors to consider in future follow-on events, if there are to be any, and some of the above considerations are potentially actionable for these events if organizers act soon.
Second, timing and format: two issues here are noteworthy. First, the events are each one day long (with the exception of at least one technical workshop taking place the day before), which offers only limited opportunities for substantial learning and input. There is also a timing dimension within the day of each workshop: namely, the distribution of different activities. Given the number of listed speakers, it seems a significant amount of time will be allocated to lectures, and with finite time, that leaves less opportunity for rich engagement with the participants (many of whom will be experts). In short, I fear much of these events will consist of participants listening to lectures, learning a thing or two, and standing in line to ask questions or make comments at the end. This has value, for sure, but it may fall short of the rich public engagement on AI that would be possible in longer or more creatively structured events.
Again, I’ll emphasize that I have limited knowledge here so these are meant as provocations, not final conclusions. I don’t know what the precise plans are for each of the events, but given the history of events like these in a wide variety of scientific and technological domains, it does not seem unreasonable to me to be concerned about the events’ diversity, richness of dialogue, and productivity.
Concrete suggestions
Some of these may be already underway or have been considered and discarded, but nonetheless I’d like to offer several suggestions for the organizers to consider. On any of these points, I’d be happy to elaborate, discuss further, or facilitate connections to people who are more knowledgeable about these topics than me.
1. Reach out to groups with different opinions on priorities and assumptions
There is a lot of diversity of opinion among AI experts and AI/ethics/society experts, and a diversity of interests among those who could be affected by AI. While it is not possible to represent all possible views in a finite amount of time, or to report all such disagreements in a report, diversity could be fostered by inviting participation from organizations such as unions (regarding job impacts), consumer advocacy organizations, privacy advocacy organizations, and organizations concerned specifically with long-term AI safety (e.g. the Machine Intelligence Research Institute). On the latter, I have already heard concern voiced by people in that world who think the safety and control workshop will be positioned to exclude such concerns – concerns which, contra a somewhat popular belief, are shared by at least some AI experts (e.g. Stuart Russell would be a good participant, if he's not already involved, and has a very different perspective on these issues than another currently confirmed participant, Oren Etzioni).
2. Incentivize non-expert participation
I raised this issue above already – namely, the exclusionary effect of uncompensated, time-consuming events. There are complexities to this, and I’d be happy to refer the organizers to people who have gone through such a process before. Which leads me to another point…
3. Reach out to the experts on non-experts (I realize the irony!)
There are many experts on the issue of lay participation in science and technology discourse. If they haven’t already, the organizers of these workshops could get more detailed feedback and suggestions from organizations like the ECAST Network. This is a difficult thing to get right and it’d be a shame if the relevant lessons learned from similar events were not brought to bear on making these AI events as successful as possible.
4. Precommit to conveying disagreements
This relates to the issue of the final report discussed above. While the events will be livestreamed, not everyone will take the time to watch four days of video, which makes it incumbent upon the organizers of this process to fairly represent the breadth of discussion at these events. One way to help ensure this would be to commit beforehand to doing so in some fashion (e.g. by including dissenting perspectives on the report's conclusions in appendices, or on a supplementary website), thus allowing attentive portions of the public to scrutinize whether these commitments were followed through on.
5. Gather and precommit to reporting data on inclusion
Another thing the organizers could precommit to is gathering and reporting data on the diversity and representativeness of workshop participants. This would create some accountability for characterizing the opinions raised at the workshops as exactly as (un)representative as they actually are, and it would give organizers an incentive to expedite efforts to ensure such diversity if they know this information will subsequently be reported.
6. Consider follow-on events with other institutional partners
Finally, given the intrinsic limitations of any single event or series of events in informing, eliciting, and representing public opinion, these workshops should be seen and characterized by the White House as the beginning of a dialogue, not its apotheosis. Felten has done this well already in his opening blog post on the topic, saying that he seeks to spark dialogue. One way to ensure that dialogue continues would be to begin working with a group like ECAST to envision subsequent events beyond this summer that could involve richer, lengthier forms of participation by a wider group of people. I don't know the constraints Felten et al. are under, and this may not be possible at this time, but it seems worth at least considering. Perhaps this could be an initiative taken up by another institution besides OSTP, such as the NSF or the Domestic Policy Council.
I hope this context is helpful for some people and provided at least a few provocations! I look forward to anyone’s thoughts on these matters.