Special episode: Is Google’s LaMDA chatbot sentient? Tiernan Ray from ZDNet breaks it all down and discusses how to tell a chatbot from a human… and the ethics of bots fooling people.
June 26, 2022
In this special episode, we unpack the controversy surrounding the sentient chatbot that "worries about its future". Google engineer Blake Lemoine published a transcript of a conversation with the chatbot LaMDA that generated strong reactions from technologists and AI ethicists. It conjured images from science fiction movies that always capture the public imagination.
Tiernan Ray, ZDNet writer, accomplished tech journalist, and good friend of the podcast, joined host Dan Turchin to reflect on the story based on his analysis of the 5,000-word LaMDA transcript.
Listen and learn…
- What will it be like to cohabit a world with thinking machines?
- What does it mean for an AI to be sentient? Why should we care?
- Should AI be protected under the 13th Amendment?
- How can we tell from the transcript that LaMDA is not sentient?
- What are the ethical implications of developing sentient bots?
- Did Google act responsibly in developing a bot that appears sentient?
References in this episode…
- Tiernan's analysis of the LaMDA transcript
- Melanie Mitchell on MSNBC
- Blake Lemoine's interview with Steven Levy in Wired
- Alan Turing's Imitation Game