EA - Existential risk mitigation: What I worry about when there are only bad options by MMMaas
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential risk mitigation: What I worry about when there are only bad options, published by MMMaas on December 19, 2022 on The Effective Altruism Forum.

(This is a Draft Amnesty Day draft. That means it's not polished, it's probably not up to my standards, the ideas are not thought out, and I haven't checked everything. I was encouraged to post something.)

(Written in my personal capacity, reflecting only my own, underdeveloped views.)

(Commenting and feedback guidelines: I'm going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.)

My status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions, both closer to and further from my own.

If I had my way: great qualities for existential risk reduction options

We know what we would like the perfect response to an existential risk to look like.
If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:

Functionally ideal: [...]
- effective (significantly reduces the risks if successful, ideally permanently),
- reliable (high chance of success),
- technically feasible,
- politically viable,
- low-cost,
- safe (little to no downside risk -- i.e. graceful failure),
- robust (effective, reliable, feasible, viable, and safe across many possible future scenarios);

Ethically ideal: [...]
- pluralistically ethical (no serious moral costs or rights violations entailed by the intervention, under a wide variety of moral views),
- impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off),
- widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical position, after extensive red-teaming)),
- choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future),
- etc., etc.

But it may be tragically likely that interventions combining every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria.
But these won't be the only actions that will be pursued around extreme risks. What if the only feasible strategies to respond to existential risks -- or the strategies that will most likely be pursued by other actors in response to existential risk -- are all, to some extent, imperfect, flawed, or 'bad'?

Three 'bad' options and their moral dilemmas

In particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.

All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).

But I do believe that all of them create moral dilemmas or tradeoffs that I am uncomfortable with -- and risky 'failures' that could be entailed by taking one or another view on whether to use them....
