Steptoe Cyberlaw Podcast

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 88:56:29

Information:

Synopsis

A weekly podcast on cybersecurity and privacy from the cyberlaw practice at Steptoe & Johnson, featuring Stewart Baker, Michael Vatis, and Jason Weinstein.

Episodes

  • Interviewing Jimmy Wales, Cofounder of Wikipedia

    01/06/2023 Duration: 41min

    In this bonus episode of the Cyberlaw Podcast, I interview Jimmy Wales, the cofounder of Wikipedia. Wikipedia is a rare survivor from the Internet Hippie Age, coexisting like a great herbivorous dinosaur with Facebook, Twitter, and the other carnivorous mammals of Web 2.0. Perhaps not coincidentally, Jimmy is the most prominent founder of a massive internet institution not to become a billionaire. We explore why that is, and how he feels about it.  I ask Jimmy whether Wikipedia’s model is sustainable, and what new challenges lie ahead for the online encyclopedia. We explore the claim that Wikipedia has a lefty bias, whether a neutral point of view can be maintained by including only material from trusted sources, and I ask Jimmy about a concrete, and in my view weirdly biased, entry in Wikipedia on “Communism.” We close with an exploration of the opportunities and risks posed for Wikipedia from ChatGPT and other large language AI models.   Download 460th Episode (mp3)  You can subscribe to The Cyberlaw Podcas

  • When AI Poses an Existential Risk to Your Law License

    31/05/2023 Duration: 01h16min

    This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since it’s so squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post, the model returned exactly the case law the lawyer wanted—because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing. I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked, “Are the other cases you provided fake,” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are

  • Sam Altman-Fried Comes to Washington

    23/05/2023 Duration: 01h24min

    This episode features part 1 of our two-part interview with Paul Stephan, author of The World Crisis and International Law—a deeper and more entertaining read than the title suggests. Paul lays out the long historical arc that links the 1980s to the present day. It’s not a pretty picture, and it gets worse as he ties those changes to the demands of the Knowledge Economy. How will these profound political and economic clashes resolve themselves?  We’ll cover that in part 2. Meanwhile, in this episode of the Cyberlaw Podcast I tweak Sam Altman for his relentless embrace of regulation for his industry during testimony last week in the Senate.  I compare him to another Sam with a similar regulation-embracing approach to Washington, but Chinny Sharma thinks it’s more accurate to say he did the opposite of everything Mark Zuckerberg did in past testimony. Chinny and Sultan Meghji unpack some of Altman’s proposals, from a new government agency to license large AI models, to safety standards and audit. I mock Sen.

  • EUthanizing AI

    16/05/2023 Duration: 50min

    Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language AI models. OpenAI has developed a tool meant to address the models’ lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it’s nice to see someone trying. A second effort, Anthropic’s creation of an explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. We also look at the use of “open source” principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up. The European Union has to hope that open source will succeed, because the entire continent is a desert

  • How worried should we be about “existential” AI risk?

    09/05/2023 Duration: 58min

    The “godfather of AI” has left Google, offering warnings about the existential risks for humanity of the technology. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There’s more agreement on the White House summit on AI risks, which seems to have followed Mark’s “let’s worry about tomorrow tomorrow” prescription. I think existential risks are a bigger concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I argue again that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas more widely, which provokes lively pushback from Jim Dempsey and Mark. Other prospective AI regulators, from Lina Khan at the Federal Trade Commission (FTC) to the Italian data protection agency, come in for commentary. I’m struck by the caution both have shown, perhaps due to their recognizing the difficulty of applying old regulatory frameworks to this new technology. It’s not, I su

  • Does the government need a warrant to warn me about a cyberattack?

    02/05/2023 Duration: 56min

    We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That’s the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they’ve been intercepted and stored, and particularly whether the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they’ve dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were also made public this week. It has also emerged that the government is using section 702 millions of times a year to identify the v

  • It’s the Data (Not the Model), Stupid!

    25/04/2023 Duration: 53min

    The latest episode of The Cyberlaw Podcast was not created by chatbots (we swear!). Guest host Brian Fleming, along with guests Jay Healey, Maury Shenk, and Nick Weaver, discuss the latest news on the AI revolution, including Google’s efforts to protect its search engine dominance, a fascinating look at the websites that feed tools like ChatGPT (leading some on the panel to argue that quality over quantity should be the goal), and a possible regulatory speed bump for total AI world domination, at least as far as the EU’s General Data Protection Regulation is concerned. Next, Jay lends some perspective on where we’ve been and where we’re going with respect to cybersecurity by reflecting on some notable recent and upcoming anniversaries. The panel then discusses recent charges brought by the Justice Department, and two arrests, aimed at China’s alleged attempt to harass dissidents living in the U.S. (including with fake social media accounts) and ponders how much of Russia’s playbook China is willing to adopt. Nic

  • The international regulatory dogpile

    19/04/2023 Duration: 47min

    Every government on the planet announced last week an ambition to regulate artificial intelligence. Nate Jones and Jamil Jaffer take us through the announcements. What’s particularly discouraging is the lack of imagination, as governments dusted off their old prejudices to handle this new problem. Europe is obsessed with data protection, the Biden administration just wants to talk and wait and talk some more, while China must have asked ChatGPT to assemble every regulatory proposal for AI ever made by anyone and translate it into Chinese law.  Meanwhile, companies trying to satisfy everyone are imposing weird limits on their AI, such as Microsoft’s rule that asking for an image of Taiwan’s flag is a violation of its terms of service. (For the record, so is asking for China’s flag but not asking for an American or German flag.) Matthew Heiman and Jamil take us through the strange case of the airman who leaked classified secrets on Discord. Jamil thinks we brought this on ourselves by not taking past leaks

  • What Makes AI Safe?

    11/04/2023 Duration: 55min

    We do a long take on some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports from OpenAI and Stanford on AI safety. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed by Silicon Valley’s “trust and safety” bureaucracies), but there’s no doubt that a potential existential issue lurks below the surface of the most ambitious efforts. Whether ChatGPT’s stochastic parroting will ever pose a threat to humanity or not, it clearly poses a threat to a lot of people’s reputations, Nick Weaver reports. One of the biggest intel leaks of the last decade may not have anything to do with cybersecurity. Instead, the disclosure of multiple highly classified documents seems to have depended on the ability to fold, carry, and photograph the documents. While there’s some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nic

  • Letting the Chips Fall

    04/04/2023 Duration: 41min

    Dmitri Alperovitch joins the Cyberlaw Podcast to discuss the state of semiconductor decoupling between China and the West. It’s a broad movement, fed by both sides. China has announced that it’s investigating Micron to see if its memory chips should still be allowed into China’s supply chain (spoiler: almost certainly not). Japan has tightened up its chip-making export control rules, which will align it with U.S. and Dutch restrictions, all with the aim of slowing China’s ability to make the most powerful chips. Meanwhile, South Korea is boosting its chipmakers with new tax breaks, and Huawei is reporting a profit squeeze. The Biden administration spent much of last week on spyware policy, Winnona DeSombre Berners reports. How much it actually accomplished isn’t clear. The spyware executive order restricts U.S. government purchases of surveillance tools that threaten U.S. security or that have been misused against civil society targets. And a group of like-minded nations have set forth the principles th

  • China in the Bull Shop

    28/03/2023 Duration: 53min

    The Capitol Hill hearings featuring TikTok’s CEO lead off episode 450 of the Cyberlaw Podcast. The CEO handled the endless stream of Congressional accusations and suspicion about as well as could have been expected.  And it did him as little good as a cynic would have expected. Jim Dempsey and Mark MacCarthy think Congress is moving toward action on Chinese IT products—probably in the form of the bipartisan Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act. But passing legislation and actually doing something about China’s IT successes are two very different things. The FTC is jumping into the arena on cloud services, Mark tells us, and it can’t escape its DNA—dwelling on possible industry concentration and lock-in and not asking much about the national security implications of knocking off a bunch of American cloud providers when the alternatives are largely Chinese cloud providers. The FTC’s myopia means that the administration won’t get as m

  • AI Everywhere

    23/03/2023 Duration: 55min

    GPT-4’s rapid and tangible improvement over ChatGPT has more or less guaranteed that it or a competitor will be built into most new and legacy information technology (IT) products. Some applications will be pointless, but some will change users’ world. In this episode, Sultan Meghji, Jordan Schneider, and Siobhan Gorman explore the likely impact of GPT-4 from Silicon Valley to China. Kurt Sanger joins us to explain why Ukraine’s IT Army of volunteer hackers creates political, legal, and maybe even physical risks for the hackers and for Ukraine. This may explain why Ukraine is looking for ways to “regularize” its international supporters, with a view to steering them toward defending Ukrainian infrastructure. Siobhan and I dig into the Biden administration’s latest target for cybersecurity regulation: cloud providers. I wonder if there is not a bit of bait and switch in operation here. The administration seems at least as intent on regulating cloud providers to catch hackers as to improve defenses

  • More National Security Economic Regulation on Congress’s Docket

    14/03/2023 Duration: 54min

    This episode of the Cyberlaw Podcast kicks off with the sudden emergence of a serious bipartisan effort to impose new national security regulations on which companies can be part of the U.S. information technology and content supply chain. Spurred by a stalled Committee on Foreign Investment in the United States negotiation with TikTok, Michael Ellis tells us, a dozen well-regarded Democratic and Republican senators have joined to endorse the Restricting the Emergence of Security Threats that Risk Information and Communications Technology Act, which authorizes the exclusion of companies based in hostile countries from the U.S. economy. The administration has also jumped on the bandwagon, making the adoption of some legislation more likely than in the past. Jane Bambauer takes us through the district court decision upholding the use of a “geofence warrant” to identify January 6th rioters. We end up agreeing that this decision (and the context) turned out to be the best possible result for the Justice Depart

  • A Group Autopsy of the Supreme Court’s Section 230 Oral Argument

    28/02/2023 Duration: 53min

    As promised, the Cyberlaw Podcast devoted half of this episode to an autopsy of Gonzalez v. Google LLC, the Supreme Court’s first opportunity in a quarter century to construe section 230 of the Communications Decency Act. And an autopsy is what our panel—Adam Candeub, Gus Hurwitz, Michael Ellis and Mark MacCarthy—came to perform. I had already laid out my analysis and predictions in a separate article for the Volokh Conspiracy, contending that both Gonzalez and Google would lose. All our panelists agreed that Gonzalez was unlikely to prevail, but no one followed me in predicting that Google’s broad immunity claim would fall, at least not in this case. The general view was that Gonzalez’s lawyer had hurt his case with shifting and opaque theories of liability, and that Google’s arguments raised concerns among the Justices but not enough to induce them to write an opinion in such a muddled case. Evaluating the Justices’ performance, Justice Neil Gorsuch’s search for a textual answer drew little praise and some de

  • AI off the rails

    22/02/2023 Duration: 55min

    This episode of the Cyberlaw Podcast opens with a look at some genuinely weird behavior by the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – plus the factual error that wrecked the rollout of Google’s AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS’ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China’s autocracy never gets its AI capabilities off the ground.  One thing that AI is creepily good at is faking people’s voices. I try out ElevenLabs’ technology in the first advertisement ever to run on the Cyberlaw Podcast. The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment on agencies’ handling of 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI’s access – at great cost to our unified approach to terrorism intelli

  • Who Needs Hackers When You Have Balloons?

    14/02/2023 Duration: 53min

    The latest episode of The Cyberlaw Podcast gets a bit carried away with the China spy balloon saga. Guest host Brian Fleming, along with guests Gus Hurwitz, Nate Jones, and Paul Rosenzweig, share insights (and bad puns) about the latest reporting on the electronic surveillance capabilities of the first downed balloon, the Biden administration’s “shoot first, ask questions later” response to the latest “flying objects,” and whether we should all spend more time worrying about China’s hackers and satellites. Gus then shares a few thoughts on the State of the Union address and the brief but pointed calls for antitrust and data privacy reform. Sticking with big tech and antitrust, Gus recaps a significant recent loss for the Federal Trade Commission (FTC) and discusses what may be on the horizon for FTC enforcement later this year. Pivoting back to China, Nate and Paul discuss the latest reporting on a forthcoming (at some point) executive order intended to limit and track U.S. outbound investment in certain

  • Phony Cybersecurity Regulation

    07/02/2023 Duration: 45min

    This episode of the Cyberlaw Podcast is dominated by stories about possible cybersecurity regulation. David Kris points us first to an article by the leadership of the Cybersecurity and Infrastructure Security Agency in Foreign Affairs. Jen Easterly and Eric Goldstein seem to take a tough line on “Why Companies Must Build Safety Into Tech Products.” But for all the tough language, one word, “regulation,” is entirely missing from the piece. Meanwhile, the cybersecurity strategy that the White House has been reportedly drafting for months seems to be hung up over how enthusiastically to demand regulation. All of which seems just a little weird in a world where Republicans hold the House. Regulation is not likely to be high on the GOP to-do list, so calls for tougher regulation are almost certainly more symbolic than real. Still, this is a week for symbolic calls for regulation. David also takes us through a National Telecommunications and Information Administration (NTIA) report on the anticompetiti

  • Suddenly, Everyone Is Gunning for Google

    31/01/2023 Duration: 54min

    The big cyberlaw story of the week is the Justice Department’s antitrust lawsuit against Google and the many hats it wears in the online ad ecosystem. Lee Berger explains the Justice Department’s theory, which is not dissimilar to the Texas attorney general’s two-year-old claims. When you have lost both the Biden administration and the Texas attorney general, I suggest, you cannot look too many places for friends—and certainly not to Brussels, which is also pursuing similar claims of its own. So what is the Justice Department’s late-to-the-party contribution? At least two things, Lee suggests: a jury demand that will put all those complex Borkian consumer-welfare doctrines in front of a northern Virginia jury and a “rocket docket” that will allow Justice to catch up with and maybe lap the other lawsuits against the company. This case looks as though it will be long and ugly for Google, unless it turns out to be short and ugly. Mark reminds us that, for the Justice Department, finding an effective remedy may

  • The Beginning of the End for Ransomware?

    24/01/2023 Duration: 44min

    We kick off a jam-packed episode of the Cyberlaw Podcast by flagging the news that ransomware revenue fell substantially in 2022. There is lots of room for error in that Chainalysis finding, Nick Weaver notes, but the effect is large. Among the reasons to think it might also be real is resistance to paying ransoms on the part of companies and their insurers, who are especially concerned about liability for payments to sanctioned ransomware gangs. I also note a fascinating additional insight from Jon DiMaggio, who infiltrated the Lockbit ransomware gang. He says that Entrust was hit by Lockbit, which threatened to release its internal files, and that the company responded with days of Distributed Denial of Service (DDoS) attacks on Lockbit’s infrastructure – and never did pay up. That would be a heartening display of courage. It would also be a felony, at least according to the conventional wisdom that condemns hacking back. So I cannot help thinking there is more to the story. Like, maybe Canadian Sec

  • Tracers in the Dark by Andy Greenberg

    21/01/2023 Duration: 43min

    In this bonus episode of the Cyberlaw Podcast, I interview Andy Greenberg, long-time WIRED reporter, about his new book, “Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency.” This is Andy’s second author interview on the Cyberlaw Podcast. He also came on to discuss an earlier book, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers. They are both excellent cybersecurity stories. “Tracers in the Dark”, I suggest, is a kind of sequel to the Silk Road story, which ends with Ross Ulbricht, the Dread Pirate Roberts, pinioned in a San Francisco library with his laptop open to an administrator’s page on the Silk Road digital black market. At that time, cryptocurrency backers believed that Ulbricht’s arrest was a fluke, and that properly implemented, bitcoin was anonymous and untraceable. Greenberg’s book explains, story by story, how that illusion was trashed by smart cops and techies (including our own Nick Weaver!) who showed that the blockchain’s “forev
