Big news from the tech giants broke over the weekend, when the Washington Post reported that Google AI engineer Blake Lemoine had shared what he claimed was evidence that Google's new AI system, LaMDA, had reached a level of consciousness equivalent to that of a human child.
LaMDA stands for “Language Model for Dialogue Applications,” and it is one of several large-scale AI systems that have been trained on vast quantities of text from the internet to respond to prompts. Like these other systems in development, LaMDA finds patterns in its training data and predicts what should come next in a conversation. Google presented LaMDA last May as a system that can “engage in a free-flowing way about a seemingly endless number of topics.”
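To give a rough sense of what "predicting what should come next" means, here is a deliberately tiny sketch in Python. It is not how LaMDA works internally (LaMDA is a large neural network, not a word-count table), but it illustrates the core idea: learn which words tend to follow which from example text, then use those statistics to continue a prompt. The sample text and function names are made up for illustration.

```python
from collections import defaultdict, Counter

# Toy training corpus (real systems train on billions of words).
text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, "mat" only once
```

Systems like LaMDA replace the word-count table with a neural network that captures far longer-range patterns, which is why their continuations can feel like coherent conversation rather than rote repetition.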
Lemoine posted on Medium an “interview” that he and a collaborator conducted with LaMDA, in which the AI system was asked questions about morality, books, likes and dislikes, and its thoughts about feelings. While the answers are human-like, experts across the AI community agree that the system's responses are in line with those of current AI systems and do not demonstrate sentience. Still, AI that simulates more convincing “emotions” may not be far off.
According to a statement from Google, “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”