EA - What the Moral Truth might be makes no difference to what will happen by Jim Buhler
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What the Moral Truth might be makes no difference to what will happen, published by Jim Buhler on April 9, 2023 on The Effective Altruism Forum.

Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth. While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization, and even more so for the most advanced/expansionist ones. This post briefly explains why.

To be clear, my case in no way implies that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing", which matters for longtermist cause prioritization.

Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think.

Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading: it made it look like I was making a strong normative claim about what matters, while my goal was to predict what might happen, so I changed it.

Rare are those who will eventually act in accordance with some moral truth

For agents to do what might objectively be the best thing to do, all of the following conditions need to be met:

1. There is a moral truth.
2. It is possible to “find it” and recognize it as such.
3. They find something they recognize as a moral truth.
4. They (unconditionally) accept it, even if it is highly counterintuitive.
5. The thing they found is actually the moral truth. No normative mistake.
6. They succeed at acting in accordance with it. No practical mistake.
7. They stick to this forever. No value drift.

I think these seven conditions are generally quite unlikely to all be met at the same time, mainly for the following reasons:

- (#1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.
- (#2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”
- (#3 and #4) If they “find” a moral truth and don’t like what it says, why would they try to act in accordance with it?
- (#3, #4, #5, and #7) Within a civilization, we should expect the agents whose values are the most adapted/competitive for survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.

Even if they’re not rare, their impact will stay marginal

Now, let’s actually assume that many advanced civilizations converge on THE moral truth and effectively optimize for whatever it says.
The thing is that, for the same reason we may expect agents “adopting” the moral truth to be selected against within a civilization (see the last bullet point above), we may expect civilizations adopting the moral truth to be less competitive than those whose values are the most adaptive and adapted to space colonization races.

My forthcoming next post will investigate this selection effect in more detail, but here is an intuition pump: Say humanity wants to follow the moral truth, which is to maximize the sum X−Y, where X is somethi...
