“What I Think An AI Safety Givewell For Video Work Should Look Like” by Michaël Trazzi
EA Forum Podcast (All audio) - A podcast by EA Forum Team

A few days ago, Austin Chen and Marcus Abramovitch published "How cost-effective are AI safety YouTubers?", an "early work on 'GiveWell for AI Safety'" that ranks different interventions in the AI safety video space using a framework that measures impact by, basically, multiplying watch time by three quality factors (Quality of Audience, Fidelity of Message and Alignment of Message):

Quality-adjusted viewer minutes = Views × Video length × Watch % × Qa × Qf × Qm

(A worked example of this formula appears after the episode details below.)

The goal of this post is to explain to what extent I think this framework is useful, point out things I think it got wrong, and provide some additional criteria that I would personally want to see in a more comprehensive "AI Safety Givewell for video work".

tl;dr: I think Austin and Marcus's framework has a lot of good elements, especially the three factors Quality, Fidelity and Alignment of message. Viewer minutes is the wrong proxy [...]

---

Outline:
(00:58) tl;dr
(01:45) What The Framework Gets Right
(02:50) Limitation: Viewer Minutes
(04:55) Siliconversations And The Art of Call to Action
(07:51) Moving Talent Through The Video Pipeline
(09:46) Categorizing The Video Pipeline
(12:18) Original Video Content
(14:21) Conclusion

---

First published: September 15th, 2025
Source: https://forum.effectivealtruism.org/posts/d9kEfvKq3uqwjeRFJ/what-i-think-an-ai-safety-givewell-for-video-work-should

---

Narrated by TYPE III AUDIO.
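To make the quality-adjusted viewer minute (QAVM) formula quoted above concrete, here is a minimal Python sketch. All input values and quality factors below are hypothetical illustrations, not figures from the post or from Austin and Marcus's analysis:

```python
# Minimal sketch of the quality-adjusted viewer minute (QAVM) formula:
#   QAVM = Views × Video length × Watch % × Qa × Qf × Qm
# The example numbers are made up for illustration only.

def qavm(views: int, video_length_min: float, watch_pct: float,
         q_audience: float, q_fidelity: float, q_alignment: float) -> float:
    """Return quality-adjusted viewer minutes for one video."""
    return (views * video_length_min * watch_pct
            * q_audience * q_fidelity * q_alignment)

# Hypothetical video: 100k views on a 12-minute video with 40% average
# watch time, and quality factors of 0.8 (audience), 0.9 (fidelity)
# and 0.7 (alignment).
minutes = qavm(views=100_000, video_length_min=12, watch_pct=0.40,
               q_audience=0.8, q_fidelity=0.9, q_alignment=0.7)
print(f"{minutes:,.0f} quality-adjusted viewer minutes")  # 241,920
```

Note how the three quality factors act as multiplicative discounts on raw watch time, so a video with a large audience but low fidelity or alignment scores can come out behind a smaller, better-targeted one.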