“Risk Alignment in Agentic AI Systems” by Hayley Clatterbuck, arvomm
EA Forum Podcast (All audio) - A podcast by the EA Forum Team
This is a link post.

Agentic AIs—AIs that are capable and permitted to undertake complex actions with little supervision—mark a new frontier in AI capabilities and raise new questions about how to safely create and align such systems with users, developers, and society. Because agents' actions are influenced by their attitudes toward risk, one key aspect of alignment concerns the risk profiles of agentic AIs. What risk attitudes should guide an agentic AI's decision-making? What guardrails, if any, should be placed on the range of permissible risk attitudes? What are the ethical considerations involved when designing systems that make risky decisions on behalf of others? Risk alignment will matter for user satisfaction and trust, but it will also have important ramifications for society more broadly, especially as agentic AIs become more autonomous and are allowed to control key aspects of our lives. AIs with reckless attitudes toward risk (either because [...]

---

First published: October 1st, 2024

Source: https://forum.effectivealtruism.org/posts/avZw2ceAB4P9KmaiC/risk-alignment-in-agentic-ai-systems

---

Narrated by TYPE III AUDIO.