“Testing Framings of EA and Longtermism” by David_Moss, Jamie E

EA Forum Podcast (All audio) - A podcast by EA Forum Team

Rethink Priorities has been conducting a range of surveys and experiments aimed at understanding how people respond to different framings of Effective Altruism (EA), Longtermism, and related specific cause areas. There has been much debate about whether people involved in EA and Longtermism should frame their efforts and outreach in terms of Effective Altruism, Longtermism, Existential risk, Existential security, Global priorities research, or by mentioning only specific risks, such as AI safety and Pandemic prevention (examples can be found at the following links: 1, 2, 3, 4, 5, 6, 7, 8). These discussions have taken place almost entirely in the absence of empirical data, even though they concern largely empirical questions.[1] In this post we report the results of three pilot studies examining responses to different EA-related terms and descriptions. Some initial findings are: Longtermism appears to be consistently less popular than other EA-related terms and concepts we examined, whether presented just as a [...]

Outline:
(01:52) Study 1. Cause area framing
(05:13) Demographics
(07:15) Study 2. EA-related concepts with and without descriptions
(10:58) Demographics
(11:31) Study 3. Preferences for concrete causes or more general ideas/movements
(15:04) Demographics
(15:29) Manifold Market Predictions
(16:43) General discussion

The original text contained 2 footnotes which were omitted from this narration. The original text contained 18 images which were described by AI.

---

First published: November 7th, 2024

Source: https://forum.effectivealtruism.org/posts/qagZoGrxbD7YQRYNr/testing-framings-of-ea-and-longtermism

---

Narrated by TYPE III AUDIO.
