80,000 Hours Podcast With Rob Wiblin

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 668:05:57

Synopsis

A show about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.

Episodes

  • #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

    16/10/2024 Duration: 01h57min

    Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything. Links to learn more, highlights, video, and full transcript. On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits. But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall. Given th

  • #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

    03/10/2024 Duration: 01h25min

    "In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it."The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-d

  • Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame

    27/09/2024 Duration: 01h36min

    In this episode from our second show, 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride. Links to learn more, highlights, and full transcript. They cover:
    • Keiran’s views on free will, and how he came to hold them
    • What it’s like not experiencing sustained guilt, shame, and anger
    • Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend
    • Whether giving up guilt and shame also means giving up pride
    • The implications for love
    • The neurological condition ‘Jerk Syndrome’
    • And some practical advice on feeling less guilt, shame, and anger
    Who this episode is for:
    • People sympathetic to the idea that free will is an illusion
    • People who experience tons of guilt, shame, or anger
    • People worried about what would happen if they stopped feeling tonnes of guilt, shame, or anger
    Who this episode isn’t for:
    • People strongly in favour of retributive justi

  • #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

    19/09/2024 Duration: 02h20min

    "For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki RamakrishnanIn today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel P

  • #201 – Ken Goldberg on why your robot butler isn’t here yet

    13/09/2024 Duration: 02h01min

    "Perception is quite difficult with cameras: even if you have a stereo camera, you still can’t really build a map of where everything is in space. It’s just very difficult. And I know that sounds surprising, because humans are very good at this. In fact, even with one eye, we can navigate and we can clear the dinner table. But it seems that we’re building in a lot of understanding and intuition about what’s happening in the world and where objects are and how they behave. For robots, it’s very difficult to get a perfectly accurate model of the world and where things are. So if you’re going to go manipulate or grasp an object, a small error in that position will maybe have your robot crash into the object, a delicate wine glass, and probably break it. So the perception and the control are both problems." —Ken GoldbergIn today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into o

  • #200 – Ezra Karger on what superforecasters and experts think about existential risks

    04/09/2024 Duration: 02h49min

    "It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra KargerIn today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.Links to learn more, highlights, and full transcript.They cover:How forecasting can im

  • #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy

    29/08/2024 Duration: 01h12min

    "I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going. "I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long ru

  • #198 – Meghan Barrett on challenging our assumptions about insects

    26/08/2024 Duration: 03h48min

    "This is a group of animals I think people are particularly unfamiliar with. They are especially poorly covered in our science curriculum; they are especially poorly understood, because people don’t spend as much time learning about them at museums; and they’re just harder to spend time with in a lot of ways, I think, for people. So people have pets that are vertebrates that they take care of across the taxonomic groups, and people get familiar with those from going to zoos and watching their behaviours there, and watching nature documentaries and more. But I think the insects are still really underappreciated, and that means that our intuitions are probably more likely to be wrong than with those other groups." —Meghan BarrettIn today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what t

  • #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

    22/08/2024 Duration: 02h29min

    The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough? That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious. Links to learn more, highlights, video, and full transcript. As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instan

  • #196 – Jonathan Birch on the edge cases of sentience and why they matter

    15/08/2024 Duration: 02h01min

    "In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan BirchIn today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)Links to le

  • #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

    01/08/2024 Duration: 02h08min

    "Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella NevoIn today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.Links to learn more, highlights, and full transcript.They cover:Real-world examples of sophisticated security breaches, and what we can learn from them.Why AI model weights mig

  • #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

    26/07/2024 Duration: 03h04min

    "If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik ButerinCan ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.Links to learn more, highlights, video, and full transcript.Seeing his social circle divided and fighting, Vi

  • #193 – Sihao Huang on the risk that US–China AI competition leads to war

    18/07/2024 Duration: 02h23min

    "You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao HuangIn today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, w

  • #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

    12/07/2024 Duration: 01h54min

    "Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire."That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie JacobsenIn today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.Links to learn more, highlights, and

  • #191 (Part 2) – Carl Shulman on government and society after AGI

    05/07/2024 Duration: 02h20min

    This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order! If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together? It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere. Links to learn more, highlights, and full transcript. As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" — without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability ad

  • #191 (Part 1) – Carl Shulman on the economy and national security after AGI

    27/06/2024 Duration: 04h14min

    This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order! The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply? Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating. Links to learn more, highlights, and full transcript. Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s,
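
    As a quick sanity check of that electricity figure (a back-of-the-envelope sketch; the roughly $0.15/kWh retail electricity price is an assumed value, not a number from the episode):

    20 W × 1 h = 0.02 kWh
    0.02 kWh × $0.15/kWh ≈ $0.003 per hour (about a third of a cent)

    which is consistent with the "fraction of a cent worth of electricity per hour" claim.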

  • #190 – Eric Schwitzgebel on whether the US is conscious

    07/06/2024 Duration: 02h46s

    "One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me – and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can’t make great progress on them and don’t come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself. So we’re kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness." —Eric SchwitzgebelIn today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.Links to learn more, highlights, and full transcript.They cover:Why our intuitions seem so unreliable for answering fundamental questions about reality.W

  • #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems

    29/05/2024 Duration: 02h48min

    "You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people."But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel GlennersterIn today’s episode, host Luisa Rodriguez speaks to Rachel

  • #188 – Matt Clancy on whether science is good

    23/05/2024 Duration: 02h40min

    "Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff."Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to thin

  • #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard"

    14/05/2024 Duration: 03h06min

    "Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right? "And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity." — Zach WeinersmithIn today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behin

page 3 of 15