Synopsis
A show about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
Episodes
-
Bonus: The Worst Ideas in the History of the World
30/06/2023 Duration: 35min
Today’s bonus release is a pilot for a new podcast called ‘The Worst Ideas in the History of the World’, created by Keiran Harris — producer of the 80,000 Hours Podcast. If you have strong opinions about this one way or another, please email us at podcast@80000hours.org to help us figure out whether more of this ought to exist.
-
#155 – Lennart Heim on the compute governance era and what has to come after
22/06/2023 Duration: 03h12min
As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community. With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China. But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands? In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot. Links to learn more, summary and full transcript. As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.
-
#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters
09/06/2023 Duration: 03h09min
Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still. Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work. Links to learn more, summary and full transcript. He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically, that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important. In the short term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.
-
#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work
02/06/2023 Duration: 02h56min
GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency. But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth? Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year. Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.
-
#152 – Joe Carlsmith on navigating serious philosophical confusion
19/05/2023 Duration: 03h26min
What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones? Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and as surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do? In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism. Links to learn more, summary and full transcript. To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world. The first idea is that we might be living in a computer simulation.
-
#151 – Ajeya Cotra on accidentally teaching AI models to deceive us
12/05/2023 Duration: 02h49min
Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons. Today's guest, Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods. Links to learn more, summary and full transcript. As she explains, such an eight-year-old faces a challenging problem: in the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest.
-
#150 – Tom Davidson on how quickly AI could transform the world
05/05/2023 Duration: 03h01min
It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from. For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before? You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.” But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least *consider* the idea that the world is about to get — at a minimum — incredibly weird. Links to learn more, summary and full transcript.
-
Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours)
22/04/2023 Duration: 01h17min
In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff. Links to learn more, highlights and full transcript. They cover:
• The evidence for shrimp sentience
• How farmers and the public feel about shrimp
• The scale of the problem
• What shrimp farming looks like
• The killing process, and other welfare issues
• Shrimp Welfare Project’s strategy
• History of shrimp welfare work
• What it’s like working in India and Vietnam
• How to help
Who this episode is for:
• People who care about animal welfare
• People interested in new and unusual problems
• People open to shrimp sentience
Who this episode isn’t for:
• People who think shrimp couldn’t possibly be sentient
• People who got called ‘shrimp’ a lot in high school and get anxious when they hear
-
#149 – Tim LeBon on how altruistic perfectionism is self-defeating
12/04/2023 Duration: 03h11min
Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself. But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat. This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset. Links to learn more, summary and full transcript. Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back.
-
#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't
03/04/2023 Duration: 02h17min
If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting. In reality you don't want to reduce emissions for their own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment. Links to learn more, summary and full transcript. Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one. In short: we're uncertain what the future holds and really
-
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals
24/03/2023 Duration: 02h38min
Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated. Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years. Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years. Links to learn more, summary and full transcript. He recently checked whether p-values, an indicator of how likely a
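The multiple-testing arithmetic behind p-hacking is easy to check for yourself. The sketch below is illustrative only (10 analyses per study and a 0.05 threshold are our assumed numbers, not figures from the episode): it uses the standard statistical fact that p-values are uniformly distributed when the null hypothesis is true, so a study that tries many analyses and reports any with p < 0.05 will find a spurious "significant" result surprisingly often.

```python
import random

random.seed(42)

TRIALS = 100_000        # simulated studies in which the true effect is zero
TESTS_PER_STUDY = 10    # illustrative: 10 slightly different analyses tried
ALPHA = 0.05            # conventional significance threshold

# Under a true null hypothesis, each p-value is uniform on [0, 1], so every
# extra analysis is another 5% chance of a spurious "significant" result.
false_positives = sum(
    any(random.random() < ALPHA for _ in range(TESTS_PER_STUDY))
    for _ in range(TRIALS)
)

rate = false_positives / TRIALS
print(f"Studies with at least one p < {ALPHA}: {rate:.1%}")
# Analytically: 1 - (1 - 0.05)**10 ≈ 0.40
```

The closed form 1 − (1 − α)^k assumes the analyses are independent; correlated analyses inflate the false-positive rate less, but the direction of the effect is the same.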
-
#146 – Robert Long on why large language models like GPT (probably) aren't conscious
14/03/2023 Duration: 03h12min
By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user: "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else." (It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.") Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious. What should we make of these AI systems? One response to seeing conversations with chatbots like these is to trust the chat
-
#145 – Christopher Brown on why slavery abolition wasn't inevitable
11/02/2023 Duration: 02h42min
In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success. It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress. But today's guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable. Links to learn more, summary and full transcript. While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster.
-
#144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena
26/01/2023 Duration: 03h15min
What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function. If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead. Links to learn more, summary and full transcript. As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:
• Cells will proliferate when they shouldn't.
• Cells won't die when they should.
• Cells won't engage in the kind of division of labour that they should.
• Cells won’t do the jobs that they're supposed to do.
-
#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles
16/01/2023 Duration: 02h35min
Rebroadcast: this episode was originally released in June 2020. Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad." Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history. He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His latest book asks: if we reframe global problems as puzzles, would the world be a better place? Links to learn more, summary and full transcript. This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph
-
#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments
09/01/2023 Duration: 02h37min
Rebroadcast: this episode was originally released in July 2020. 80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments. Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment. In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that yo
-
#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons
04/01/2023 Duration: 02h17min
Rebroadcast: this episode was originally released in July 2020. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three ways to effectively prevent crime that don't require police or prisons and the human toll they bring with them: better street lighting, cognitive behavioral therapy, and lead reduction. One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double. Links to sources for the claims in these show notes, other resources to learn more, the full blog post, and a full transcript. The idea here is that if you try to rob someone in broad daylight, they might see you coming.
-
#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons
29/12/2022 Duration: 02h40min
America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted." Links to learn more, summary and full transcript. We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no.
-
#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction
20/12/2022 Duration: 01h47min
John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he's also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.
• Links to learn more, summary, and full transcript
• Video version of the interview
• Lecture: Why the world looks the same in any language
Our show is mostly about the world's most pressing problems and what you can do to solve them. But what's the point of hosting a podcast if you can't occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we're sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics.
-
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
13/12/2022 Duration: 02h44min
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow. But do they really 'understand' what they're saying, or do they just give the illusion of understanding? Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed, and are ultimately given power in society. Links to learn more, summary and full transcript. One way to think about 'understanding' is as