Progress on artificial intelligence (AI) requires collective action: the actions of two or more individuals or agents that in some way combine to achieve a result. Collective action is needed to increase the capabilities of AI systems and to make their impacts safer and more beneficial for the world. In recent years, a sizable but disparate literature has taken interest in AI collective action, though this literature is generally poorly grounded in the broader social science study of collective action. This paper presents a primer on fundamental concepts of collective action as they pertain to AI and a review of the AI collective action literature. The paper emphasizes (a) different types of collective action situations, such as when acting in the collective interest is or is not in individuals’ self-interest, (b) AI race scenarios, including near-term corporate and military competition and long-term races to develop advanced AI, and (c) solutions to collective action problems, including government regulations, private markets, and community self-organizing. The paper serves to bring an interdisciplinary readership up to speed on the important topic of AI collective action.
Robert de Neufville; Seth D. Baum. Collective action on artificial intelligence: A primer and review. Technology in Society 2021, 66, 101649.
AMA Style: Robert de Neufville, Seth D. Baum. Collective action on artificial intelligence: A primer and review. Technology in Society. 2021;66:101649.
Chicago/Turabian Style: Robert de Neufville; Seth D. Baum. 2021. "Collective action on artificial intelligence: A primer and review." Technology in Society 66: 101649.
Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. The paper focuses on the roles of and opportunities for a wide range of actors inside the corporation—managers, workers, and investors—and outside the corporation—corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities they have even in the absence of dedicated multistakeholder institutions. The paper illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms to advance AI corporate governance in the public interest, especially when diverse actors work together.
Peter Cihon; Jonas Schuett; Seth Baum. Corporate Governance of Artificial Intelligence in the Public Interest. Information 2021, 12, 275.
AMA Style: Peter Cihon, Jonas Schuett, Seth Baum. Corporate Governance of Artificial Intelligence in the Public Interest. Information. 2021;12(7):275.
Chicago/Turabian Style: Peter Cihon; Jonas Schuett; Seth Baum. 2021. "Corporate Governance of Artificial Intelligence in the Public Interest." Information 12, no. 7: 275.
This paper provides the first-ever survey of the implications of violent conflict risk for planetary defense program decisions. Arguably, the aim of planetary defense should be to make Earth safer from all threats, including but not limited to threats from near-Earth objects (NEOs). Insofar as planetary defense projects affect other risks besides NEOs, these other risks should be taken into account. This paper evaluates three potential effects of planetary defense programs on violent conflict risk. First, planetary defense may offer a constructive model for addressing a major global risk. By documenting the history of its successes and failures, the planetary defense community can aid efforts to address other global risks, including but not limited to violent conflict. Second, the proposed use of nuclear explosions for NEO deflection and disruption could affect the role of nuclear weapons in violent conflict risk. The effect may be such that nuclear deflection/disruption would increase aggregate risks to human society. However, the effect is difficult to assess, mainly due to ambiguities in violent conflict risk. Third, planetary defense could reduce violent conflict risk by addressing the possibility of NEO collisions being mistaken as violent attacks and inadvertently triggering violent conflict. False alarms mistaken as real attacks are a major concern, especially as a cause of nuclear war. Improved awareness of NEOs and communication between astronomers and military officials could help resolve NEO false alarms. Each of these three effects of planetary defense programs on violent conflict risk can benefit from interaction between the communities that study and address NEO and violent conflict risks.
Seth D. Baum. Accounting for violent conflict risk in planetary defense decisions. Acta Astronautica 2020, 178, 15-23.
AMA Style: Seth D. Baum. Accounting for violent conflict risk in planetary defense decisions. Acta Astronautica. 2020;178:15-23.
Chicago/Turabian Style: Seth D. Baum. 2020. "Accounting for violent conflict risk in planetary defense decisions." Acta Astronautica 178: 15-23.
A recent article by Beard, Rowe, and Fox (BRF) evaluates ten methodologies for quantifying the probability of existential catastrophe. This article builds on BRF’s valuable contribution. First, this article describes the conceptual and mathematical relationship between the probability of existential catastrophe and the severity of events that could result in existential catastrophe. It discusses complications in this relationship arising from catastrophes occurring at different speeds and from multiple concurrent catastrophes. Second, this article revisits the ten BRF methodologies, finding an inverse relationship between a methodology’s ease of use and the quality of results it produces—in other words, achieving a higher quality of analysis will in general require a larger investment in analysis. Third, the manuscript discusses the role of probability quantification in the management of existential risks, describing why the probability is only sometimes needed for decision-making and arguing that analyses should support real-world risk management decisions and not just be academic exercises. If the findings of this article are taken into account, together with BRF’s evaluations of specific methodologies, then risk analyses of existential catastrophe may tend to be more successful at understanding and reducing the risks.
Seth D. Baum. Quantifying the probability of existential catastrophe: A reply to Beard et al. Futures 2020, 123, 102608.
AMA Style: Seth D. Baum. Quantifying the probability of existential catastrophe: A reply to Beard et al. Futures. 2020;123:102608.
Chicago/Turabian Style: Seth D. Baum. 2020. "Quantifying the probability of existential catastrophe: A reply to Beard et al." Futures 123: 102608.
This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. The challenges include epistemic divides between disciplines, the massive bodies of relevant literature, the peer review of work that integrates an eclectic mix of topics, and the transfer of interdisciplinary research insights from one problem to another. Artificial interdisciplinarity already helps with these challenges via search engines, recommendation engines, and automated content analysis. Future “strong artificial interdisciplinarity” based on human-level artificial general intelligence could excel at interdisciplinary research, but it may take a long time to develop and could pose major safety and ethical issues. Therefore, there is an important role for intermediate-term artificial interdisciplinarity systems that could make major contributions to addressing societal problems without the concerns associated with artificial general intelligence.
Seth D. Baum. Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems. Philosophy & Technology 2020, 1-19.
AMA Style: Seth D. Baum. Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems. Philosophy & Technology. 2020:1-19.
Chicago/Turabian Style: Seth D. Baum. 2020. "Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems." Philosophy & Technology: 1-19.
There has been extensive attention to near-term and long-term AI technology and its accompanying societal issues, but the medium-term has gone largely overlooked. This paper develops the concept of medium-term AI, evaluates its importance, and analyzes some medium-term societal issues. Medium-term AI can be important in its own right and as a topic that can bridge the sometimes acrimonious divide between those who favor attention to near-term AI and those who prefer the long-term. The paper proposes the medium-term AI hypothesis: the medium-term is important from the perspectives of those who favor attention to near-term AI as well as those who favor attention to long-term AI. The paper analyzes medium-term AI in terms of governance institutions, collective action, corporate AI development, and military/national security communities. Across portions of these four areas, some support for the medium-term AI hypothesis is found, though in some cases the matter is unclear.
Seth D. Baum. Medium-Term Artificial Intelligence and Society. Information 2020, 11, 290.
AMA Style: Seth D. Baum. Medium-Term Artificial Intelligence and Society. Information. 2020;11(6):290.
Chicago/Turabian Style: Seth D. Baum. 2020. "Medium-Term Artificial Intelligence and Society." Information 11, no. 6: 290.
Seth D. Baum. Deep learning and the sociology of human-level artificial intelligence. Metascience 2020, 29, 313-317.
AMA Style: Seth D. Baum. Deep learning and the sociology of human-level artificial intelligence. Metascience. 2020;29(2):313-317.
Chicago/Turabian Style: Seth D. Baum. 2020. "Deep learning and the sociology of human-level artificial intelligence." Metascience 29, no. 2: 313-317.
To prevent catastrophic asteroid–Earth collisions, it has been proposed to use nuclear explosives to deflect away earthbound asteroids. However, this policy of nuclear deflection could inadvertently increase the risk of nuclear war and other violent conflict. This article conducts risk–risk tradeoff analysis to assess whether nuclear deflection results in a net increase or decrease in risk. Assuming nonnuclear deflection options are also used, nuclear deflection may only be needed for the largest and most imminent asteroid collisions. These are low‐frequency, high‐severity events. The effect of nuclear deflection on violent conflict risk is more ambiguous due to the complex and dynamic social factors at play. Indeed, it is not clear whether nuclear deflection would cause a net increase or decrease in violent conflict risk. Similarly, this article cannot reach a precise conclusion on the overall risk–risk tradeoff. The value of this article comes less from specific quantitative conclusions and more from providing an analytical framework and a better overall understanding of the policy decision. The article demonstrates the importance of integrated analysis of global risks and the policies to address them, as well as the challenge of quantitative evaluation of complex social processes such as violent conflict.
Seth D. Baum. Risk–Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection. Risk Analysis 2019, 39, 2427-2442.
AMA Style: Seth D. Baum. Risk–Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection. Risk Analysis. 2019;39(11):2427-2442.
Chicago/Turabian Style: Seth D. Baum. 2019. "Risk–Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection." Risk Analysis 39, no. 11: 2427-2442.
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus.
Seth D. Baum. Countering Superintelligence Misinformation. Information 2018, 9, 244.
AMA Style: Seth D. Baum. Countering Superintelligence Misinformation. Information. 2018;9(10):244.
Chicago/Turabian Style: Seth D. Baum. 2018. "Countering Superintelligence Misinformation." Information 9, no. 10: 244.
This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper’s analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.
Seth D. Baum. Superintelligence Skepticism as a Political Tool. Information 2018, 9, 209.
AMA Style: Seth D. Baum. Superintelligence Skepticism as a Political Tool. Information. 2018;9(9):209.
Chicago/Turabian Style: Seth D. Baum. 2018. "Superintelligence Skepticism as a Political Tool." Information 9, no. 9: 209.
This paper studies the risk of collision between asteroids and Earth. It focuses on uncertainty in the human consequences of asteroid collisions, with emphasis on the possibility of global catastrophe to human civilization. A detailed survey of the asteroid risk literature shows that while human consequences are recognized as a major point of uncertainty, the studies focus mainly on physical and environmental dimensions of the risk. Some potential human consequences are omitted entirely, such as the possibility of asteroid explosions inadvertently causing nuclear war. Other human consequences are modeled with varying degrees of detail. Direct medical effects are relatively well-characterized, while human consequences of global environmental effects are more uncertain. The latter are evaluated mainly in terms of a global catastrophe threshold, but such a threshold is deeply uncertain and may not even exist. To handle threshold uncertainty in asteroid policy, this paper adapts the concept of policy boundaries from literature on anthropogenic global environmental change (i.e., planetary boundaries). The paper proposes policy boundaries of 100 m asteroid diameter for global environmental effects and 1 m for inadvertent nuclear war. Other policy implications include a more aggressive asteroid risk mitigation policy and measures to avoid inadvertent nuclear war. The paper argues that for rare events like large asteroid collisions, the absence of robust data means that a wide range of possible human consequences should be considered. This implies humility for risk analysis and erring on the side of caution in policy.
Seth D. Baum. Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. Natural Hazards 2018, 94, 759-775.
AMA Style: Seth D. Baum. Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. Natural Hazards. 2018;94(2):759-775.
Chicago/Turabian Style: Seth D. Baum. 2018. "Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold." Natural Hazards 94, no. 2: 759-775.
Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. Negative effects were found for military affairs (specifically rogue actor violence) and AI. The net effect for surveillance was ambiguous. The effects for the environment, military affairs, and AI appear to be the largest, with the environment perhaps being the largest of these, suggesting that APM would be net beneficial to society. However, these factors are not well quantified and no definitive conclusion can be reached. One conclusion that can be reached is that if APM R&D is pursued, it should go hand-in-hand with effective governance strategies to increase the benefits and reduce the harms.
Steven Umbrello; Seth D. Baum. Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing. Futures 2018, 100, 63-73.
AMA Style: Steven Umbrello, Seth D. Baum. Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing. Futures. 2018;100:63-73.
Chicago/Turabian Style: Steven Umbrello; Seth D. Baum. 2018. "Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing." Futures 100: 63-73.
A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak due to the fact that there is no one single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. These decisions must be made up front in the initial AI design—designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics. Attention should focus on these issues, not on social choice.
Seth D. Baum. Social choice ethics in artificial intelligence. AI & SOCIETY 2017, 35, 165-176.
AMA Style: Seth D. Baum. Social choice ethics in artificial intelligence. AI & SOCIETY. 2017;35(1):165-176.
Chicago/Turabian Style: Seth D. Baum. 2017. "Social choice ethics in artificial intelligence." AI & SOCIETY 35, no. 1: 165-176.
Artificial intelligence (AI) experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons (as found in the traditional norms of computer science) and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: (1) advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; (2) technical research on how to make any AI more beneficial to society; and (3) policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other’s advocacy for attention to the societal impacts of AI. The reconciliation between the presentist and futurist factions can improve both near-term and long-term societal impacts of AI.
Seth D. Baum. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & SOCIETY 2017, 33, 565-572.
AMA Style: Seth D. Baum. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & SOCIETY. 2017;33(4):565-572.
Chicago/Turabian Style: Seth D. Baum. 2017. "Reconciliation between factions focused on near-term and long-term artificial intelligence." AI & SOCIETY 33, no. 4: 565-572.
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but intrinsic measures are at least as important. Indeed, intrinsic factors can determine the success of extrinsic measures. Efforts to promote beneficial AI must consider intrinsic factors by studying the social psychology of AI research communities.
Seth D. Baum. On the promotion of safe and socially beneficial artificial intelligence. AI & SOCIETY 2016, 32, 543-551.
AMA Style: Seth D. Baum. On the promotion of safe and socially beneficial artificial intelligence. AI & SOCIETY. 2016;32(4):543-551.
Chicago/Turabian Style: Seth D. Baum. 2016. "On the promotion of safe and socially beneficial artificial intelligence." AI & SOCIETY 32, no. 4: 543-551.
Seth D. Baum; David C. Denkenberger; Jacob Haqq-Misra. Isolated refuges for surviving global catastrophes. Futures 2015, 72, 45-56.
AMA Style: Seth D. Baum, David C. Denkenberger, Jacob Haqq-Misra. Isolated refuges for surviving global catastrophes. Futures. 2015;72:45-56.
Chicago/Turabian Style: Seth D. Baum; David C. Denkenberger; Jacob Haqq-Misra. 2015. "Isolated refuges for surviving global catastrophes." Futures 72: 45-56.
Seth D. Baum. The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives. Futures 2015, 72, 86-96.
AMA Style: Seth D. Baum. The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives. Futures. 2015;72:86-96.
Chicago/Turabian Style: Seth D. Baum. 2015. "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives." Futures 72: 86-96.
Seth D. Baum; David C. Denkenberger; Joshua M. Pearce; Alan Robock; Richelle Winkler. Erratum to: Resilience to global food supply catastrophes. Environment Systems and Decisions 2015, 35, 424.
AMA Style: Seth D. Baum, David C. Denkenberger, Joshua M. Pearce, Alan Robock, Richelle Winkler. Erratum to: Resilience to global food supply catastrophes. Environment Systems and Decisions. 2015;35(3):424.
Chicago/Turabian Style: Seth D. Baum; David C. Denkenberger; Joshua M. Pearce; Alan Robock; Richelle Winkler. 2015. "Erratum to: Resilience to global food supply catastrophes." Environment Systems and Decisions 35, no. 3: 424.
Risk and resilience are important paradigms for analyzing and guiding decisions about uncertain threats. Resilience has sometimes been favored for threats that are unknown, unquantifiable, systemic, and unlikely/catastrophic. This paper addresses the suitability of each paradigm for such threats, finding that they are comparably suitable. Threats are rarely completely unknown or unquantifiable; what limited information is typically available enables the use of both paradigms. Either paradigm can in practice mishandle systemic or unlikely/catastrophic threats, but this is inadequate implementation of the paradigms, not inadequacy of the paradigms themselves. Three examples are described: (a) Venice in the Black Death plague, (b) artificial intelligence (AI), and (c) extraterrestrials. The Venice example suggests effectiveness for each paradigm for certain unknown, unquantifiable, systemic, and unlikely/catastrophic threats. The AI and extraterrestrials examples suggest how increasing resilience may be less effective, and reducing threat probability may be more effective, for certain threats that are significantly unknown, unquantifiable, and unlikely/catastrophic.
Seth D. Baum. Risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats. Environment Systems and Decisions 2015, 35, 229-236.
AMA Style: Seth D. Baum. Risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats. Environment Systems and Decisions. 2015;35(2):229-236.
Chicago/Turabian Style: Seth D. Baum. 2015. "Risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats." Environment Systems and Decisions 35, no. 2: 229-236.
Many global catastrophic risks threaten major disruption to global food supplies, including nuclear wars, volcanic eruptions, asteroid and comet impacts, and plant disease outbreaks. This paper discusses options for increasing the resilience of food supplies to these risks. In contrast to local catastrophes, global food supply catastrophes cannot be addressed via food aid from external locations. Three options for food supply resilience are identified: food stockpiles, agriculture, and foods produced from alternative (non-sunlight) energy sources including biomass and fossil fuels. Each of these three options has certain advantages and disadvantages. Stockpiles are versatile but expensive. Agriculture is efficient but less viable in certain catastrophe scenarios. Alternative foods are inexpensive pre-catastrophe but need to be scaled up post-catastrophe and may face issues of social acceptability. The optimal portfolio of food options will typically include some of each and will additionally vary by location as regions vary in population and access to food input resources. Furthermore, if the catastrophe shuts down transportation, then resilience requires local self-sufficiency in food. Food supply resilience requires not just the food itself, but also the accompanying systems of food production and distribution. Overall, increasing food supply resilience can play an important role in global catastrophic risk reduction. However, it is unwise to attempt maximizing food supply resilience, because doing so comes at the expense of other important objectives, including catastrophe prevention. Taking all these issues into account, the paper proposes a research agenda for analysis of specific food supply resilience decisions.
Seth D. Baum; David C. Denkenberger; Joshua Pearce; Alan Robock; Richelle Winkler. Resilience to global food supply catastrophes. Environment Systems and Decisions 2015, 35, 301-313.
AMA Style: Seth D. Baum, David C. Denkenberger, Joshua Pearce, Alan Robock, Richelle Winkler. Resilience to global food supply catastrophes. Environment Systems and Decisions. 2015;35(2):301-313.
Chicago/Turabian Style: Seth D. Baum; David C. Denkenberger; Joshua Pearce; Alan Robock; Richelle Winkler. 2015. "Resilience to global food supply catastrophes." Environment Systems and Decisions 35, no. 2: 301-313.