NOTE: This is not continually updated. Please refer to my Google Scholar page here for publications since early 2022. I also sometimes contribute to OpenAI's blog posts.
2024
"Computing Power and the Governance of Artificial Intelligence," Sastry, Heim, Belfield, Anderljung, Brundage, and Hazell et al. (see paper for full author list). Published on arXiv here.
"FAQs and General Advice on AI Policy Careers," published on Medium here.
2023
"Practices for Governing Agentic AI Systems," Shavit, Agarwal, and Brundage et al. (see paper for full author list). Published on the OpenAI website here.
"International Institutions for Advanced AI," Ho et al. (see paper for full author list). Published on arXiv here.
"Scoring Humanity's Progress on AI Governance," published on Medium here.
"Frontier AI Regulation: Managing Emerging Risks to Public Safety," Anderljung, Barnhart, Korinek, Leung, O'Keefe, and Whittlestone et al. (see paper for full author list). Published on arXiv here.
"Report of the 1st Workshop on Generative AI and Law," Cooper, Lee, Grimmelmann, and Ippolito et al. (see paper for full author list). Published on arXiv here.
"Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings," Shoker and Reddie et al. (see paper for full author list). Published on arXiv here.
"GPT-4 Technical Report," Achiam et al. (see paper for full author list). Published on arXiv here and on the OpenAI website here.
2022
"DALL·E 2 Preview: Risks and Limitations," Pamela Mishkin, Lama Ahmad, Miles Brundage, Gretchen Krueger, Girish Sastry. GitHub.
"Lessons Learned on Language Model Safety and Misuse," Miles Brundage, Katie Mayer, Tyna Eloundou, Sandhini Agarwal, Steven Adler, Gretchen Krueger, Jan Leike, Pamela Mishkin. OpenAI blog.
2021
"Evaluating CLIP: Towards characterization of broader capabilities and downstream implications," Sandhini Agarwal, Gretchen Krueger, Jack Clark, Alec Radford, Jong Wook Kim, Miles Brundage. Preprint available here.
"Filling gaps in trustworthy development of AI," Shahar Avin, Haydn Belfield, Miles Brundage, Gretchen Krueger, Jasmine Wang, Adrian Weller, Markus Anderljung, Igor Krawczuk, David Krueger, Jonathan Lebensold, Tegan Maharaj, Noa Zilberman. Science. Preprint available here.
"Evaluating Large Language Models Trained on Code," Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba. Preprint available here.
"Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models," Alex Tamkin, Miles Brundage, Jack Clark, Deep Ganguli. Proceedings of a workshop hosted by OpenAI and Stanford. Available here.
2020
"All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation," Sarah E. Kreps, Miles McCain, and Miles Brundage. Published in Journal of Experimental Political Science. Preprint available here.
"Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims," Miles Brundage*, Shahar Avin*, Jasmine Wang*, Haydn Belfield*, Gretchen Krueger*, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensold, Cullen O'Keefe, Mark Koren, Théo Ryffel, JB Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, and Markus Anderljung. PDF available here; report website with Chinese translation of the executive summary and occasional updates here.
2019
"Release Strategies and the Social Impact of Language Models," Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Jasmine Wang. arXiv preprint. 2019. Related blog post: "GPT-2: 6-Month Follow-Up," Jack Clark, Miles Brundage, and Irene Solaiman. OpenAI blog, 2019.
"Responsible Governance of Artificial Intelligence: An Assessment, Theoretical Framework, and Exploration," Miles Brundage, Arizona State University, dissertation (defended in 2018 and revised/accepted in 2019). PDF.
"The Role of Cooperation in AI Development," Amanda Askell, Miles Brundage, and Gillian Hadfield. arXiv preprint. 2019. Related blog post: "Why Responsible AI Development Needs Cooperation on Safety," Amanda Askell, Miles Brundage, and Jack Clark, OpenAI blog, 2019.
“Second Report of the Axon AI & Policing Technology Ethics Board: Automated License Plate Readers,” Barry Friedman, Chris Harris, Christy Lopez, Jeremy Gillula, Jim Bueermann, Kathleen O’Toole, Mecole Jordan, Miles Brundage, Tracy Ann Kosa, and Wael Abd-Almageed, October 2019.
"Understanding the Movement(s) for Responsible Innovation," Miles Brundage and David Guston, chapter in International Handbook of Responsible Innovation, von Schomberg and Hankins (eds.), 2019.
"First Report of the Axon AI & Policing Technology Ethics Board," Ali Farhadi, Barry Friedman, Christy E. Lopez, Jeremy Gillula, Jim Bueermann, Kathleen M. O’Toole, Mecole Jordan, Miles Brundage, Tracy Ann Kosa, Vera Bumpers, and Walt McNeil (alphabetical by first name), June 2019.
"GPT-2 Interim Update," Miles Brundage, Alec Radford, Jeffrey Wu, Jack Clark, Amanda Askell, David Lansky, Danny Hernandez, Daniela Amodei, and David Luan, May 2019.
"Better Language Models and their Implications," Alec Radford, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, and Ilya Sutskever. OpenAI blog, February 2019.
"Accounting for the Neglected Dimensions of AI Progress," Fernando Martínez-Plumed, Shahar Avin, Miles Brundage, Allan Dafoe, Seán Ó hÉigeartaigh, and José Hernández-Orallo, arXiv preprint, 2019.
2018
"Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence," Miles Brundage, chapter in a report by the Science and Technology Options Assessment division of the European Parliament, 2018.
"The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," Miles Brundage,* Shahar Avin,* Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei. Covered by the New York Times, BBC, Reuters, and many other media, and cited in policy fora such as Parliamentary and Congressional hearings. 2018.
2017
"Future of Humanity Institute - Written Evidence," Miles Brundage and Allan Dafoe. Evidence submitted to the Lords Select Committee on AI, 2017.
"A Brief Survey of Deep Reinforcement Learning," Kai Arulkumaran, Marc Deisenroth, Miles Brundage, and Anil Bharath. IEEE Signal Processing Magazine. Preprint available here. 2017.
"Guide to working in AI policy and strategy," Miles Brundage in collaboration with the team at 80,000 Hours. 2017.
"Cognitive Scarcity and Artificial Intelligence: How Assistive AI Could Alleviate Inequality," Miles Brundage and John Danaher. Blog post at Danaher's blog Philosophical Disquisitions, May 2017.
2016
"Modeling Progress in AI," Miles Brundage, presented at the AAAI 2016 International Workshop on AI, Ethics, and Society, 2016.
"Toward Smart Policies for Artificial Intelligence," Miles Brundage and Joanna Bryson, arXiv preprint, 2016.
2015
"Taking Superintelligence Seriously," Miles Brundage, review of Superintelligence by Nick Bostrom, special issue of Futures.
"Chappie and the Future of Moral Machines," Miles Brundage and Jamie Winterton, March 17, 2015. Future Tense blog, Slate.com.
2014
"The Anti-HAL: The Interstellar Robot Should Be the Future of Artificial Intelligence," Miles Brundage, November 14, 2014. Future Tense blog, Slate.com
"The Government Role in Developing Solar Thermal Technology," Miles Brundage, chapter in The Rightful Place of Science: Government & Energy Innovation, Consortium for Science, Policy, and Outcomes.
"Economic Possibilities for Our Children: Artificial Intelligence and the Future of Work, Education, and Leisure," Miles Brundage, presented at the AI & Ethics workshop at AAAI 2015.
"Artificial Intelligence and Responsible Innovation," Miles Brundage, chapter in Fundamental Issues of Artificial Intelligence, ed. Vincent C. Müller, Berlin: Springer (Synthese Library).
"Limitations and Risks of Machine Ethics," Miles Brundage, Journal of Experimental and Theoretical Artificial Intelligence, 2014. See "Presentations" for a video of my talk based on this paper.
"Why Watson is Real Artificial Intelligence," Miles Brundage and Joanna Bryson. February 14, 2014, Future Tense blog, Slate.com
"The New RoboCop Gets Robot Ethics Completely Wrong," Miles Brundage, February 14, 2014, Future Tense blog, Slate.com
"Will Technology Make Work Better for Everyone?," Miles Brundage, January 29, 2014, Future Tense blog, Slate.com
2013
"Battlestar Galactica's 10th anniversary: How the show predicted today's AI debates," Miles Brundage, December 23, 2013, Future Tense blog, Slate.com
"What Undercover Boss and The Jetsons Tell Us About the Future of Jobs," Miles Brundage, August 27, 2013, Future Tense blog, Slate.com
"No, Artificial Intelligence is Not as Smart as a 4-Year-Old Child," Miles Brundage, July 19, 2013, Future Tense blog, Slate.com.
"Energy Technology Breakthroughs In Context," Miles Brundage, March 27, 2013, As We Now Think, the blog of the Consortium for Science, Policy, and Outcomes (CSPO) at Arizona State University.