Numbered Episodes
From the 80,000 Hours Podcast
Regular numbered episodes (titled "#123 – ..."), listed newest first with release dates.
Episodes
- #236 – Max Harms on why teaching AI right from wrong could get everyone killed 2026-02-24
- #235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’ 2026-02-17
- #234 – David Duvenaud on why 'aligned AI' would still kill democracy 2026-01-27
- #233 – James Smith on how to prevent a mirror life catastrophe 2026-01-13
- #232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings 2025-12-19
- #231 – Paul Scharre on how AI-controlled robots will and won't change war 2025-12-17
- #230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet 2025-12-10
- #229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman 2025-12-03
- #228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI 2025-11-20
- #227 – Helen Toner on the geopolitics of AGI in China and the Middle East 2025-11-05
- #226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes 2025-10-30
- #225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like 2025-10-27
- #224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie 2025-10-02
- #223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2) 2025-09-15
- #222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1) 2025-09-08
- #221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments 2025-08-28
- #220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years 2025-07-08
- #219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand 2025-06-24
- #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good 2025-06-12
- #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress 2025-06-02
- #216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it 2025-05-02
- #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power 2025-04-16
- #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway 2025-04-04
- #213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared 2025-03-11
- #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway 2025-02-14
- #211 – Sam Bowman on why housing still isn't fixed and what would actually work 2024-12-19
- #210 – Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals 2024-11-29
- #209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit 2024-11-27
- #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world 2024-11-21
- #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead 2024-11-14
- #206 – Anil Seth on the predictive brain and how to study consciousness 2024-11-01
- #205 – Sébastien Moro on the most insane things fish can do 2024-10-23
- #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism 2024-10-16
- #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation 2024-10-03
- #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science 2024-09-19
- #201 – Ken Goldberg on why your robot butler isn’t here yet 2024-09-13
- #200 – Ezra Karger on what superforecasters and experts think about existential risks 2024-09-04
- #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy 2024-08-29
- #198 – Meghan Barrett on upending everything you thought you knew about bugs in 3 hours 2024-08-26
- #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task 2024-08-22
- #196 – Jonathan Birch on the edge cases of sentience and why they matter 2024-08-15
- #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them 2024-08-01
- #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government 2024-07-26
- #193 – Sihao Huang on navigating the geopolitics of US–China AI competition 2024-07-18
- #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US 2024-07-12
- #191 (Part 2) – Carl Shulman on government and society after AGI 2024-07-05
- #191 (Part 1) – Carl Shulman on the economy and national security after AGI 2024-06-27
- #190 – Eric Schwitzgebel on whether the US is conscious 2024-06-07
- #189 – Rachel Glennerster on why we still don’t have vaccines that could save millions 2024-05-29
- #188 – Matt Clancy on whether science is good 2024-05-23
- #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard" 2024-05-14
- #186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives 2024-05-01
- #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals 2024-04-18
- #184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT 2024-04-11
- #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more 2024-03-14
- #182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more 2024-03-08
- #181 – Laura Deming on the science that could keep us healthy in our 80s and beyond 2024-03-01
- #180 – Hugo Mercier on why gullibility and misinformation are overrated 2024-02-21
- #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety 2024-02-12
- #178 – Emily Oster on what the evidence actually says about pregnancy and parenting 2024-02-01
- #177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps 2024-01-24
- #176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models 2023-12-22
- #175 – Lucia Coulter on preventing lead poisoning for $1.66 per child 2023-12-14
- #174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers 2023-12-07
- #173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe 2023-11-22
- #172 – Bryan Caplan on why you should stop reading the news 2023-11-17
- #171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures 2023-11-09
- #170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down 2023-11-01
- #169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels 2023-10-26
- #168 – Ian Morris on whether deep history says we're heading for an intelligence explosion 2023-10-23
- #167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption 2023-10-18
- #166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere 2023-10-12
- #165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe 2023-10-06
- #164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives 2023-10-02
- #163 – Toby Ord on the perils of maximising the good that you do 2023-09-08
- #162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI 2023-09-01
- #161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite 2023-08-23
- #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment 2023-08-14
- #159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less 2023-08-07
- #158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk 2023-07-31