“Is ‘superhuman’ AI forecasting BS? Some experiments on the ‘539’ bot from the Centre for AI Safety” by titotal

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post.

Disclaimer: I am a computational physicist, and this investigation is outside my immediate area of expertise. Feel free to peruse the experiments and take everything I say with appropriate levels of skepticism.

Introduction: The Centre for AI Safety (CAIS) is a prominent AI safety research group doing technical AI research as well as regulatory activism. It's headed by Dan Hendrycks, who has a PhD in computer science from Berkeley and some notable contributions to AI research. Last week CAIS released a blog post, entitled “Superhuman Automated Forecasting”, announcing a forecasting bot developed by a team including Hendrycks, along with a technical report and a website, “Five Thirty Nine”, where users can try out the bot for themselves. The blog post makes several grandiose claims, claiming to rebut Nate Silver's claim that superhuman forecasting is 15-20 years away, and that: Our bot performs better than experienced [...]

The original text contained 7 images which were described by AI.

---

First published: September 18th, 2024

Source: https://forum.effectivealtruism.org/posts/266keE3kHpCYDeDaT/is-superhuman-ai-forecasting-bs-some-experiments-on-the-539

---

Narrated by TYPE III AUDIO.
