“LLMs are weirder than you think” by Derek Shiller
EA Forum Podcast (All audio) - A podcast by EA Forum Team
The Rethink Priorities Worldview Investigation Team is working on a model of consciousness in artificial systems. This article describes some complications with thinking of current LLM-based AI systems as coherent persons, despite appearances to the contrary.

Introduction: The standard way that most people interact with cutting-edge AI systems is through chatbot interfaces like the ones pictured above. These interfaces are designed to look like the traditional messaging platforms that people use to talk with friends and family. The familiarity of this framework, and its apparent analogy to interpersonal conversation, encourages us to understand these systems in roughly the way we understand human conversation partners. However, this conception of AI systems is inaccurate. Moreover, appreciating its inaccuracies can help us think more carefully about whether AIs are persons, and about what might be suitable indicators of AI consciousness and moral patienthood. We should take care to ensure [...]

---

Outline:
(00:29) Introduction
(02:44) Background
(05:20) Complications
(05:24) 1. The user is not interacting with a single dedicated system.
(09:18) 2. An LLM model doesn't clearly distinguish the text it produces from the text the user feeds it.
(14:04) 3. An LLM's output need not reflect its commitments or represent its own perspective.
(27:04) 4. An LLM's output gives us fairly narrow insight into internal activity.
(37:46) Conclusion

The original text contained 54 footnotes, which were omitted from this narration. The original text contained 5 images, which were described by AI.

---

First published: November 20th, 2024

Source: https://forum.effectivealtruism.org/posts/FfKBhK933o2fFK6Sd/llms-are-weirder-than-you-think

---

Narrated by TYPE III AUDIO.