Steptoe Cyberlaw Podcast

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 88:56:29

Information:

Synopsis

A weekly podcast on cybersecurity and privacy from the cyberlaw practice at Steptoe and Johnson. Featuring Stewart Baker, Michael Vatis, and Jason Weinstein.

Episodes

  • The Digital Fourth Amendment with Orin Kerr

    30/05/2025 Duration: 01h08min

    The Cyberlaw Podcast is back from hiatus – briefly!  I’ve used the hiatus well, skiing the Canadian Ski Marathon, trekking through Patagonia, and having a heart valve repaired (all good now!). So when I saw (and disagreed with) Orin Kerr’s new book, I figured it was time for episode 502 of the Cyberlaw Podcast.  Orin and I spend the episode digging into his book, The Digital Fourth Amendment: Privacy and Policing in Our Online World. The book is part theory, part casebook, part policy roadmap—and somehow still manages to be readable, even for non-lawyers. Orin’s goal? To make sense of how the Fourth Amendment should apply in a world of smartphones, cloud storage, government-preserved Facebook accounts, and surveillance everywhere. The core notion of the book is “equilibrium adjustment”—the idea that courts have always tweaked Fourth Amendment rules to preserve a balance between law enforcement power and personal privacy, even as technology shifts the terrain. From Prohibition-era wiretaps to the modern smart

  • World on the Brink with Dmitri Alperovitch

    22/04/2024 Duration: 49min

    Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century."  I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out. So sue me. I'm back on hiatus. The conversation in the episode begins with Dmitri's background in cybersecurity and geopolitics, from his emigration from the Soviet Union as a child through the founding of CrowdStrike to becoming a founder of Silverado Policy Accelerator and an advisor to the Defense Department. Dmitri shares his journey, including his early start in cryptography and his role in investigating the 2010 Chinese hack of Google and other companies, which he named Operation Aurora. Dmitri opens his book with a chillingly realistic scenario of a Chinese invasion of Taiwan. He explai

  • Who’s the Bigger Cybersecurity Risk – Microsoft or Open Source?

    11/04/2024 Duration: 01h11min

    There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity.  Microsoft’s sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports.  The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint.  Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require a focus on security at a time when the company feels compell

  • Taking AI Existential Risk Seriously

    02/04/2024 Duration: 01h01min

    This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere.  Having promised to take stock of the podcast when it reached episode 500, I’ve decided that I, the podcast, and the listeners all deserve a break.  So I’ll be taking one after the next episode.  No final decisions have been made, so don’t delete your subscription, but don’t expect a new episode any time soon.  It’s been a great run, from the dawn of the podcast age, through the ad-fueled podcast boom, which I manfully resisted, to the market correction that’s still under way.  It was a pleasure to engage with listeners from all over the world. Yes, even the EU!  As they say, in the podcast age, everyone is famous for fifteen people.  That’s certainly been true for me, and I’ll always be grateful for your support – not to mention for all the great contributors who’ve joined the podcast over the years.  Back to cyberlaw, there are a surprising number of people a

  • The Fourth Antitrust Shoe Drops, on Apple This Time

    26/03/2024 Duration: 46min

    The Biden administration has been aggressively pursuing antitrust cases against Silicon Valley giants like Amazon, Google, and Facebook. This week it was Apple’s turn. The Justice Department (joined by several state AGs) filed a gracefully written complaint accusing Apple of improperly monopolizing the market for “performance smartphones.” The market definition will be a weakness for the government throughout the case, but the complaint does a good job of identifying ways in which Apple has built a moat around its business without an obvious benefit for its customers.  The complaint focuses on Apple’s discouraging of multipurpose apps and cloud streaming games, its lack of message interoperability, the tying of Apple Watches to the iPhone to make switching to Android expensive, and its insistence on restricting digital wallets on its platform.  This lawsuit will continue well into the next presidential administration, so much depends on the outcome of the election this fall.  Volt Typhoon is still in the ne

  • Social Speech and the Supreme Court

    19/03/2024 Duration: 01h16s

    The Supreme Court is getting a heavy serving of first amendment social media cases. Gus Hurwitz covers two that made the news last week. In the first, Justice Barrett spoke for a unanimous court in spelling out the very factbound rules that determine when a public official may use a platform’s tools to suppress critics posting on his or her social media page.  Gus and I agree that this might mean a lot of litigation, unless public officials wise up and simply follow the Court’s broad hint: If you don’t want your page to be treated as official, simply say up top that it isn’t official. The second social media case making news was being argued as we recorded. Murthy v. Missouri appealed a broad injunction against the US government pressuring social media companies to take down posts the government disagrees with.  The Court was plainly struggling with a host of justiciability issues and a factual record that the government challenged vigorously. If the Court reaches the merits, it will likely address the questi

  • Preventing Sales of Personal Data to Adversary Nations

    14/03/2024 Duration: 31min

    This bonus episode of the Cyberlaw Podcast focuses on the national security implications of sensitive personal information. Sales of personal data have been largely unregulated as the growth of adtech has turned personal data into a widely traded commodity. This, in turn, has produced a variety of policy proposals – comprehensive privacy regulation, a weird proposal from Sen. Wyden (D-OR) to ensure that the US government cannot buy such data while China and Russia can, and most recently an Executive Order to prohibit or restrict commercial transactions affording China, Russia, and other adversary nations access to Americans’ bulk sensitive personal data and government-related data.  To get a deeper understanding of the executive order, and the Justice Department’s plans for implementing it, Stewart interviews Lee Licata, Deputy Section Chief for National Security Data Risk.

  • The National Cybersecurity Strategy – How Does it Look After a Year?

    13/03/2024 Duration: 56min

    Kemba Walden and Stewart revisit the National Cybersecurity Strategy a year later. Sultan Meghji examines the ransomware attack on Change Healthcare and its consequences. Brandon Pugh reminds us that even large companies like Google are not immune to having their intellectual property stolen. The group conducts a thorough analysis of a "public option" model for AI development. Brandon discusses the latest developments in personal data and child online protection. Lastly, Stewart inquires about Kemba's new position at Paladin Global Institute, following her departure from the role of Acting National Cyber Director.

  • Regulating personal data for national security

    07/03/2024 Duration: 53min

    The United States is in the process of rolling out a sweeping regulation for personal data transfers. But the rulemaking is getting limited attention because it targets transfers to our rivals in the new Cold War – China, Russia, and their allies. Adam Hickey, whose old office is drafting the rules, explains the history of the initiative, which stems from endless Committee on Foreign Investment in the United States efforts to impose such controls on a company-by-company basis. Now, with an executive order as the foundation, the Department of Justice has published an advance notice of proposed rulemaking that promises what could be years of slow-motion regulation. Faced with a similar issue—the national security risk posed by connected vehicles, particularly those sourced in China—the Commerce Department issues a laconic notice whose telegraphic style contrasts sharply with the highly detailed Justice draft. I take a stab at the riskiest of ventures—predicting the results in two Supreme Court cases about so

  • Are AI models learning to generalize?

    20/02/2024 Duration: 49min

    We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversions. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. It’s lawyers holding forth on the frontiers of technology, so take it with a grain of salt. Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks wi

  • Death, Taxes, and Data Regulation

    16/02/2024 Duration: 01h04min

    On the latest episode of The Cyberlaw Podcast, guest host Brian Fleming, along with panelists Jane Bambauer, Gus Hurwitz, and Nate Jones, discuss the latest U.S. government efforts to protect sensitive personal data, including the FTC’s lawsuit against data broker Kochava and the forthcoming executive order restricting certain bulk sensitive data flows to China and other countries of concern. Nate and Brian then discuss whether Congress has a realistic path to end the Section 702 reauthorization standoff before the April expiration and debate what to make of a recent multilateral meeting in London to discuss curbing spyware abuses. Gus and Jane then talk about the big news for cord-cutting sports fans, as well as Amazon’s ad data deal with Reach, in an effort to understand some broader difficulties facing internet-based ad and subscription revenue models. Nate considers the implications of Ukraine’s “defend forward” cyber strategy in its war against Russia. Jane next tackles a trio of stories detailing

  • Serious threats, unserious responses

    06/02/2024 Duration: 54min

    It was a week of serious cybersecurity incidents paired with unimpressive responses. As Melanie Teplinsky reminds us, the U.S. government has been agitated for months about China’s apparent strategic decision to hold U.S. infrastructure hostage to cyberattack in a crisis. Now the government has struck back at Volt Typhoon, the Chinese threat actor pursuing that strategy. It claimed recently to have disrupted a Volt Typhoon botnet by taking over a batch of compromised routers. Andrew Adams explains how the takeover was managed through the court system. It was a lot of work, and there is reason to doubt the effectiveness of the effort. The compromised routers can be re-compromised if they are turned off and on again. And the only ones that were fixed by the U.S. seizure are within U.S. jurisdiction, leaving open the possibility of DDoS attacks from abroad. And, really, how vulnerable is our critical infrastructure to DDoS attack? I argue that there’s a serious disconnect between the government’s hair-on-fir

  • Going Deep on Deep Fakes—Plus a Bonus Interview with Rob Silvers on the Cyber Safety Review Board.

    30/01/2024 Duration: 01h12min

    It was a big week for deep fakes generated by artificial intelligence. Sultan Meghji, who’s got a new AI startup, walked us through four stories that illustrate the ways AI will lead to more confusion about who’s really talking to us. First, a fake Biden robocall urged people not to vote in the New Hampshire primary. Second, a bot purporting to offer Dean Phillips’s views on the issues was sanctioned by OpenAI because it didn’t have Phillips’s consent. Third, fake nudes of Taylor Swift led to a ban on Twitter searches for her image. And, finally, podcasters used AI to resurrect George Carlin and got sued by his family. The moral panic over AI fakery meant that all of these stories were long on “end of the world” and short on “we’ll live through this.” Regulators of AI are not doing a better job of maintaining perspective. Mark MacCarthy reports that New York City’s AI hiring law, which has punitive disparate-impact disclosure requirements for automated hiring decision engines, seems to have persuaded NYC

  • High Court, High Stakes for Cybersecurity

    23/01/2024 Duration: 44min

    The Supreme Court heard argument last week in two cases seeking to overturn the Chevron doctrine that defers to administrative agencies in interpreting the statutes that they administer. The cases have nothing to do with cybersecurity, but Adam Hickey thinks they’re almost certain to have a big effect on cybersecurity policy. That’s because Chevron is going to take a beating, if it survives at all. That means it will be much tougher to repurpose existing law to deal with new regulatory problems. Given how little serious cybersecurity legislation has been passed in recent years, any new cybersecurity regulation is bound to require some stretching of existing law – and to be easier to challenge. Case in point: Even without a new look at Chevron, the EPA was blocked in court when it tried to stretch its authorities to cover cybersecurity rules for water companies. Now, Kurt Sanger tells us, EPA, FBI, and CISA have combined to release cybersecurity guidance for the water sector. The guidance is pretty generic; a

  • Triangulating Apple

    09/01/2024 Duration: 01h22min

    Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we’ll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question I think we’ll be asking for the next year is simple: How could an attack like this be introduced without Apple’s knowledge and support? We don’t get to this question until near the end of the episode, and I don’t claim great expertise in exploit design, but it’s very hard to see how such an elaborate compromise could be slipped past Apple’s security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation.  Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can’t get patents as “inventors.” But it’s quite possible that they’ll make a whole lot of technology “obvious” and thus

  • Do AI Trust and Safety Measures Deserve to Fail?

    12/12/2023 Duration: 01h17min

    It’s the last and probably longest Cyberlaw Podcast episode of 2023. To lead off, Megan Stifel takes us through a batch of stories about ways that AI, and especially AI trust and safety, manage to look remarkably fallible. Anthropic released a paper showing that race, gender, and age discrimination by AI models was real but could be dramatically reduced by instructing the model to “really, really, really” avoid such discrimination. (Buried in the paper was the fact that the original, severe AI bias disfavored older white men, as did the residual bias that asking nicely didn’t eliminate.) Bottom line from Anthropic seems to be, “Our technology is a really cool toy, but don’t use it for anything that matters.” In keeping with that theme, Google’s highly touted OpenAI competitor Gemini was released to mixed reviews when the model couldn’t correctly identify recent Oscar winners or a French word with six letters (it offered “amour”). The good news was for people who hate AI’s ham-handed political correctness; it

  • Making the Rubble Bounce in Montana

    05/12/2023 Duration: 01h01min

    In this episode, Paul Stephan lays out the reasoning behind U.S. District Judge Donald W. Molloy’s decision enjoining Montana’s ban on TikTok. There are some plausible reasons for such an injunction, and the court adopts them. There are also less plausible and redundant grounds for an injunction, and the court adopts those as well. Asked to predict the future course of the litigation, Paul demurs. It will all depend, he thinks, on how the Supreme Court begins to sort out social media and the first amendment in the upcoming term. In the meantime, watch for bouncing rubble in the District of Montana courthouse. (Grudging credit for the graphics goes to Bing’s Image Creator, which refused to create the image until I attributed the bouncing rubble to a gas explosion. Way to discredit trust and safety, Bing!) Jane Bambauer and Paul also help me make sense of the litigation between Meta and the FTC over children’s privacy and previous consent decrees. A recent judicial decision opened the door for the FTC to purs

  • Rorschach AI

    28/11/2023 Duration: 58min

    The OpenAI corporate drama came to a sudden end last week. So sudden, in fact, that the pundits never quite figured out What It All Means. Jim Dempsey and Michael Nelson take us through some of the possibilities. It was all about AI accelerationists v. decelerationists. Or it was all about effective altruism. Or maybe it was Sam Altman’s slippery ambition. Or perhaps a new AI breakthrough – a model that can actually do more math than the average American law student. The one thing that seems clear is that the winners include Sam Altman and Microsoft, while the losers include illusions about using corporate governance to engage in AI governance. The Google antitrust trial is over – kind of. Michael Weiner tells us that all the testimony and evidence has been gathered on whether Google is monopolizing search, but briefs and argument will take months more – followed by years more fighting about remedy if Google is found to have violated the antitrust laws. He sums up the issues in dispute and makes a bold p

  • Defenestration at OpenAI

    21/11/2023 Duration: 42min

    Paul Rosenzweig brings us up to date on the debate over renewing section 702, highlighting the introduction of the first credible “renew and reform” measure by the House Intelligence Committee. I’m hopeful that a similarly responsible bill will come soon from Senate Intelligence and that some version of the two will be adopted. Paul is less sanguine. And we all recognize that the wild card will be House Judiciary, which is drafting a bill that could change the renewal debate dramatically. Jordan Schneider reviews the results of the Xi-Biden meeting in San Francisco and speculates on China’s diplomatic strategy in the global debate over AI regulation. No one disagrees that it makes sense for the U.S. and China to talk about the risks of letting AI run nuclear command and control; perhaps more interesting (and puzzling) is China’s interest in talking about AI and military drones. Speaking of AI, Paul reports on Sam Altman’s defenestration from OpenAI and soft landing at Microsoft. Appropriately, Bing Image Cre

  • The Brussels Defect: Too Early is Worse Than Too Late. Plus: Mark MacCarthy’s Book on “Regulating Digital Industries.”

    14/11/2023 Duration: 01h44s

    That, at least, is what I hear from my VC friends in Silicon Valley. And they wouldn’t get an argument this week from EU negotiators facing what looks like a third rewrite of the much-too-early AI Act. Mark MacCarthy explains that negotiations over an overhaul of the act demanded by France and Germany led to a walkout by EU parliamentarians. The cause? In their enthusiasm for screwing American AI companies, the drafters inadvertently screwed a French and a German AI aspirant. Mark is also our featured author for an interview about his book, "Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech." I offer to blurb it as “an entertaining, articulate and well-researched book that is egregiously wrong on almost every page.” Mark promises that at least part of my blurb will make it to his website. I highly recommend it to Cyberlaw listeners who mostly disagree with me – a big market, I’m told. Kurt Sanger reports on what looks like another myth abou

page 1 of 5