80,000 Hours Podcast With Rob Wiblin
#146 – Robert Long on why large language models like GPT (probably) aren't conscious
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 3:12:51
Synopsis
By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user: "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else." (It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems? One response to seeing conversations with chatbots like these is to trust the chat