59 Episodes

  1. 18 - Concept Extrapolation with Stuart Armstrong

    Published: 3/9/2022
  2. 17 - Training for Very High Reliability with Daniel Ziegler

    Published: 21/8/2022
  3. 16 - Preparing for Debate AI with Geoffrey Irving

    Published: 1/7/2022
  4. 15 - Natural Abstractions with John Wentworth

    Published: 23/5/2022
  5. 14 - Infra-Bayesian Physicalism with Vanessa Kosoy

    Published: 5/4/2022
  6. 13 - First Principles of AGI Safety with Richard Ngo

    Published: 31/3/2022
  7. 12 - AI Existential Risk with Paul Christiano

    Published: 2/12/2021
  8. 11 - Attainable Utility and Power with Alex Turner

    Published: 25/9/2021
  9. 10 - AI's Future and Impacts with Katja Grace

    Published: 23/7/2021
  10. 9 - Finite Factored Sets with Scott Garrabrant

    Published: 24/6/2021
  11. 8 - Assistance Games with Dylan Hadfield-Menell

    Published: 8/6/2021
  12. 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra

    Published: 28/5/2021
  13. 7 - Side Effects with Victoria Krakovna

    Published: 14/5/2021
  14. 6 - Debate and Imitative Generalization with Beth Barnes

    Published: 8/4/2021
  15. 5 - Infra-Bayesianism with Vanessa Kosoy

    Published: 10/3/2021
  16. 4 - Risks from Learned Optimization with Evan Hubinger

    Published: 17/2/2021
  17. 3 - Negotiable Reinforcement Learning with Andrew Critch

    Published: 11/12/2020
  18. 2 - Learning Human Biases with Rohin Shah

    Published: 11/12/2020
  19. 1 - Adversarial Policies with Adam Gleave

    Published: 11/12/2020
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.