Steptoe Cyberlaw Podcast
Do AI Trust and Safety Measures Deserve to Fail?
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 1:17:35
Synopsis
It’s the last and probably longest Cyberlaw Podcast episode of 2023. To lead off, Megan Stifel takes us through a batch of stories about ways that AI, and especially AI trust and safety, manage to look remarkably fallible. Anthropic released a paper showing that race, gender, and age discrimination by AI models was real but could be dramatically reduced by instructing the model to “really, really, really” avoid such discrimination. (Buried in the paper was the fact that the original, severe AI bias disfavored older white men, as did the residual bias that asking nicely didn’t eliminate.) The bottom line from Anthropic seems to be, “Our technology is a really cool toy, but don’t use it for anything that matters.” In keeping with that theme, Google’s highly touted OpenAI competitor Gemini was released to mixed reviews when the model couldn’t correctly identify recent Oscar winners or a French word with six letters (it offered “amour”). The good news was for people who hate AI’s ham-handed political correctness; it