506 Episodes

  1. The Art of Scaling Reinforcement Learning Compute for LLMs

    Published: 16/10/2025
  2. A small number of samples can poison LLMs of any size

    Published: 16/10/2025
  3. Dual Goal Representations

    Published: 14/10/2025
  4. Welcome to the Era of Experience

    Published: 14/10/2025
  5. Value Flows: Flow-Based Distributional Reinforcement Learning

    Published: 14/10/2025
  6. Self-Adapting Language Models

    Published: 12/10/2025
  7. The Markovian Thinker

    Published: 12/10/2025
  8. Moloch’s Bargain: emergent misalignment when LLMs compete for audiences

    Published: 12/10/2025
  9. Transformer Predictor Dynamics and Task Diversity

    Published: 11/10/2025
  10. Base models know how to reason, thinking models learn when

    Published: 11/10/2025
  11. Spectrum tuning: Post-training for distributional coverage and in-context steerability

    Published: 11/10/2025
  12. Understanding Prompt Tuning and In-Context Learning via Meta-Learning

    Published: 11/10/2025
  13. MLPs Learn In-Context on Regression and Classification tasks

    Published: 11/10/2025
  14. Is Pre-Training Truly Better than Meta-Learning?

    Published: 11/10/2025
  15. Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

    Published: 11/10/2025
  16. Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs

    Published: 9/10/2025
  17. Learning dynamics of LLM finetuning

    Published: 9/10/2025
  18. Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF

    Published: 9/10/2025
  19. OpenAI Agent Builder and n8n: Orchestrating Reasoning Versus Automating Process

    Published: 8/10/2025
  20. Training Agents Inside of Scalable World Models

    Published: 8/10/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
