Synopsis
A show about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
Episodes
-
#172 – Bryan Caplan on why you should stop reading the news
17/11/2023 Duration: 02h23min
Is following important political and international news a civic duty — or is it our civic duty to avoid it?
It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.
Links to learn more, summary, and full transcript.
In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:
- That it overwhelmingly provides us with information we can't usefully act on.
- That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
- That it ob…
-
#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures
09/11/2023 Duration: 01h46min
"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young
In today's episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.
Links to learn more, summary, and full transcript.
They cover:
- The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
- The Dugway life science te…
-
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down
01/11/2023 Duration: 02h57min
"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.
"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local governments that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many differe…
-
#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels
26/10/2023 Duration: 01h47min
"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, “The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?” I think that's an interesting thought experiment -- and a good one -- to say, “Are there cases in which I think that's justifiable?”" — Paul Niehaus
In today's episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households.
Links to learn more, summary, and full transcript.
They cover:
- The empirical evidence on whether giving cash directly can drive meaningful economic growth
- How the impacts of GiveDirectly compare to USAID employment programmes
- GiveDirectly vs GiveWell's top-recommended charities
- How long-term guaranteed income affects people's risk-taking and investments
- Whether recipients prefer getting lump su…
-
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion
23/10/2023 Duration: 02h43min
"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris
In today's episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.
Links to learn more, summary, and full transcript.
They cover:
- Some crazy anomalies in the historical record of civilisational progress
- Whether we should think about technology from an evolutionary perspective
- Whether we ought to expect war to make a resurgence or c…
-
#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption
18/10/2023 Duration: 01h54min
"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.
"Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system is just so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren Kell
Links to learn more, summary, and full transcript.
In today's episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food…
-
#166 – Tantum Collins on what he’s learned as an AI policy insider
12/10/2023 Duration: 03h08min
"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins
In today's episode, host Rob Wiblin gets the rare chance to interview someone with inside…
-
#165 – Anders Sandberg on war in space, whether civilizations age, and the best things possible in our universe
06/10/2023 Duration: 02h48min
"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders Sandberg
In today's episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things tha…
-
#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives
02/10/2023 Duration: 03h03min
"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt
In today's episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.
Links to learn more, summary, and full transcript.
They cover:
- Why it makes sense to focus on deliberately released pandemics
- Case studies of people who actually wanted to kill billions of humans
- How many people have the technical ability to produce dangerous viruses
- The different threats of stealth and wildfire…
-
Great power conflict (Article)
22/09/2023 Duration: 01h19min
Today's release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.
If you want to check out the links, footnotes and figures in today's article, you can find those here.
And if you like this article, you might enjoy a couple of related episodes of this podcast:
- #128 – Chris Blattman on the five reasons wars happen
- #140 – Bear Braumoeller on the case that war isn't in decline
Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris
-
#163 – Toby Ord on the perils of maximising the good that you do
08/09/2023 Duration: 03h07min
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?
But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”
Links to learn more, summary, and full transcript.
Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.
Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the wa…
-
The 80,000 Hours Career Guide (2023)
04/09/2023 Duration: 04h41min
An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon, and on Audible.
If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.
-
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI
01/09/2023 Duration: 59min
Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.
But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.
Links to learn more, summary, and full transcript.
On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extre…
-
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite
23/08/2023 Duration: 03h30min
"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until, I think, 1980.
"So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is: why?" — Michael Webb
In today's episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanfo…
-
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment
14/08/2023 Duration: 02h36min
"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There are almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie
In today's episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.
Links to learn more, summary, and full transcript.
They cover:
- Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
- Her new book about how we could be the first generation to build a sustainable planet
- Whether climate change is the most worrying environmental issue
- How we reduced outdoor air pollution
- Why Hanna…
-
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less
07/08/2023 Duration: 02h51min
In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.
Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."
Links to learn more, summary, and full transcript.
Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem -- it's also hiring dozens of scientists and engineers to build out the Superalignment team.
Plenty of people are pessimistic that this can be don…
-
We now offer shorter 'interview highlights' episodes
05/08/2023 Duration: 06min
Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren't necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely.
Get these highlight episodes by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type 80k After Hours into your podcasting app.
Highlights put together by Simon Monsour and Milo McGuire
-
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk
31/07/2023 Duration: 03h13min
Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars' worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.
In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky to making AI risks the focus of his work.
Links to learn more, summary, and full transcript.
(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, gi…
-
#157 – Ezra Klein on existential risk from AI and what DC could do about it
24/07/2023 Duration: 01h18min
In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.
In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.
Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.
Links to learn more, summary, and full transcript.
Like many people, he has also taken a big interest in AI this year, writing articles such as "This changes everything." In his first interview on the show in 2021, he flagged AI as one topic that DC wou…
-
#156 – Markus Anderljung on how to regulate cutting-edge AI models
10/07/2023 Duration: 02h06min
"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it.
"And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce -- maybe biological weapons is a useful example -- or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung
In today's episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of…