EA - Samotsvety's AI risk forecasts by elifland

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samotsvety's AI risk forecasts, published by elifland on September 9, 2022 on The Effective Altruism Forum.

Crossposted to LessWrong and Foxy Scout

Introduction

In my review of What We Owe The Future (WWOTF), I wrote:

"Finally, I've updated some based on my experience with Samotsvety forecasters when discussing AI risk. When we discussed the report on power-seeking AI, I expected tons of skepticism but in fact almost all forecasters seemed to give >=5% to disempowerment by power-seeking AI by 2070, with many giving >=10%."

In the comments, Peter Wildeford asked:

"It looks like Samotsvety also forecasted AI timelines and AI takeover risk - are you willing and able to provide those numbers as well?"

We separately received a request from the FTX Foundation to forecast on 3 questions about AGI timelines and risk. I sent out surveys to get Samotsvety's up-to-date views on all 5 of these questions, and thought it would be valuable to share the forecasts publicly. A few of the headline aggregate forecasts are:

25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe
81% chance of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe
32% chance of AGI being developed in the next 20 years

Forecasts

In each case I aggregated forecasts by removing the single most extreme forecast on each end, then taking the geometric mean of odds (a code sketch of this procedure appears at the end of this post). To reduce concerns of in-group bias to some extent, I calculated a separate aggregate for those who weren't highly-engaged EAs (HEAs) before joining Samotsvety. In most cases, these forecasters hadn't engaged with EA much at all; in one case the forecaster was aligned but not involved with the community. Several have gotten more involved with EA since joining Samotsvety.

Unfortunately I'm unable to provide forecast rationales in this post due to forecaster time constraints, though I might in a future post. I provided my personal reasoning for relatively similar forecasts (35% AI takeover by 2100, 80% TAI by 2100) in my WWOTF review.

WWOTF questions

What's your probability of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe?
  Aggregate (n=11): 25%
  Aggregate, non-pre-Samotsvety-HEAs (n=5): 14%
  Range: 3-91.5%

What's your probability of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe?
  Aggregate (n=11): 81%
  Aggregate, non-pre-Samotsvety-HEAs (n=5): 86%
  Range: 45-99.5%

FTX Foundation questions

For the purposes of these questions, FTX Foundation defined AGI as roughly "AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in [a world where cheap AI systems are fully substitutable for human labor]". See here for the full definition used. Unlike the above questions, these are not conditioned on no pre-AGI/TAI catastrophe.

What's the probability of existential catastrophe from AI, conditional on AGI being developed by 2070?
  Aggregate (n=11): 38%
  Aggregate, non-pre-Samotsvety-HEAs (n=5): 23%
  Range: 4-98%

What's the probability of AGI being developed in the next 20 years?
  Aggregate (n=11): 32%
  Aggregate, non-pre-Samotsvety-HEAs (n=5): 26%
  Range: 10-70%

What's the probability of AGI being developed by 2100?
  Aggregate (n=11): 73%
  Aggregate, non-pre-Samotsvety-HEAs (n=5): 77%
  Range: 45-80%

Who is Samotsvety Forecasting?

Samotsvety Forecasting is a forecasting group that was started primarily by Misha Yagudin, Nuño Sempere, and myself, predicting as a team on INFER (then Foretell).
Over time, we invited more forecasters who had very strong track records of accuracy and sensible comments, mostly on Good Judgment Open but also a few from INFER and Metaculus. Some strong forecasters were added through social connections, which means the group is a bit more EA-skewed than it would be without these additions. A few Samotsvety forecasters are also superforecasters.

How much do these forecasters know about AI?

Most forecasters have at least read Joe Carlsmith's report on AI x-risk, Is Power-Seeking AI an Existential Risk?. Those who are short on time may have just skimmed...
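The trimmed geometric mean of odds described in the Forecasts section is simple to implement. Below is a minimal Python sketch, assuming probabilities are given as decimals in (0, 1); the function name aggregate_forecasts and the example inputs are hypothetical, not Samotsvety's actual individual forecasts.

```python
import math

def aggregate_forecasts(probs):
    """Aggregate individual probability forecasts using the procedure
    described above: drop the single most extreme forecast on each end,
    then take the geometric mean of odds and convert back to a
    probability. Requires at least three forecasts."""
    # Sort and remove the most extreme forecast on each end.
    trimmed = sorted(probs)[1:-1]
    # Convert each probability to odds: o = p / (1 - p).
    odds = [p / (1.0 - p) for p in trimmed]
    # Geometric mean of the odds.
    mean_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    # Convert the aggregated odds back to a probability: p = o / (1 + o).
    return mean_odds / (1.0 + mean_odds)

# Hypothetical example (made-up inputs, not real survey data):
print(round(aggregate_forecasts([0.03, 0.10, 0.15, 0.25, 0.40, 0.915]), 3))  # ~0.204
```

One property worth noting: aggregating in odds space treats p and 1 - p symmetrically, so inverting every input probability simply inverts the aggregate, which a geometric mean of raw probabilities would not do.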
