Kimi K2, DeepSeek-R1 vibe check and Google’s data center investments

Mixture of Experts - A podcast by IBM - Friday

Is Kimi K2 actually better than Claude? In episode 64 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Chris Hay and Kaoutar El Maghraoui. First, Moonshot AI released Kimi K2, its trillion-parameter MoE model, and our experts analyze the benchmarks and what the release really means. Then, we reflect on DeepSeek-R1 six months later: did it live up to the hype? Next, Google is investing $25 billion in AI infrastructure, and it's not just AI chips; how does this compare to its competitors? Finally, Anthropic's Claude for Enterprise announced an expansion with Lawrence Livermore National Laboratory. What AI safety concerns might this raise? Tune in to today's episode of Mixture of Experts to find out.

00:00 – Intro
01:18 – Kimi K2
12:07 – DeepSeek-R1 vibe check
28:49 – Google's data center investments
41:20 – Claude powers LLNL research

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Resources:
Read more on how DeepSeek has changed the landscape of AI, six months after the watershed release of R1 → http://ibm.com/think/news/deepseek-global-ai-local
Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120
Visit the Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts