In interviews and public statements, many in the artificial intelligence community have pushed back on the engineer’s claims, while some have pointed out that his story highlights how the technology can lead people to assign human attributes to it. But the belief that Google’s artificial intelligence could be sentient highlights both our fears and our expectations of what this technology can do.

LaMDA, which stands for “Language Model for Dialogue Applications,” is one of several large-scale artificial intelligence systems trained on large swaths of text from the internet that can respond to written prompts. These systems are tasked, essentially, with finding patterns and predicting which word or words should come next. Such systems have become increasingly good at answering questions and writing in ways that can seem convincingly human, and Google itself presented LaMDA last May in a blog post as a system that can “engage in a free-flowing way about a seemingly endless number of topics.” But the results can also be weird, unsettling, and prone to rambling.

Engineer Blake Lemoine reportedly told the Washington Post that he shared with Google his belief that LaMDA was sentient, but the company disagreed. In a statement Monday, Google said its team, which includes ethicists and technologists, “reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims.”

On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation into artificial intelligence concerns I raised within the company” and that he may be fired “soon.” (He cited the experience of Margaret Mitchell, who co-led Google’s Ethical AI team until Google fired her in early 2021 after she spoke out about the 2020 departure of then-co-leader Timnit Gebru. Gebru was ousted after internal disputes, including one over a research paper that the company’s AI leadership told her to withdraw from a conference or remove her name from.)

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to the Washington Post, he was placed on leave for violating the company’s confidentiality policy. Lemoine was not available for comment on Monday.

The continued emergence of powerful computing programs trained on massive troves of data has also given rise to concerns over the ethics governing the development and use of this technology. And sometimes advances are viewed through the lens of what may come, rather than what is currently possible.

Responses from members of the artificial intelligence community to Lemoine’s experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google’s artificial intelligence is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, wrote on Twitter on Sunday that we have entered a new era of “this neural network is conscious,” and that this time it will take so much energy to refute.

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” dismissed the idea of a sentient LaMDA as nonsense in a tweet. He quickly followed up with a blog post noting that all such artificial intelligence systems do is match patterns drawn from enormous databases of language.
In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is as a “glorified version” of the autocomplete software you may use to predict the next word in a text message. If you type “I’m really hungry so I want to go to a,” it might suggest “restaurant” as the next word. But that is a prediction made using statistics. “Nobody should think autocomplete, even on steroids, is conscious,” he said.

In an interview, Gebru, who is the founder and executive director of the Distributed AI Research Institute, or DAIR, said Lemoine is a victim of numerous companies claiming that sentient AI or artificial general intelligence, an idea that refers to AI capable of performing human-like tasks and interacting with us in meaningful ways, is not far off.

For example, Ilya Sutskever, co-founder and chief scientist of OpenAI, wrote on Twitter in February that “it may be that today’s large neural networks are slightly conscious.” And last week, Google Research vice president and fellow Blaise Aguera y Arcas wrote in a piece for the Economist that when he began using LaMDA last year, “I felt more and more like I was talking to something intelligent.” (That piece now includes an author’s note pointing out that Lemoine has since “reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google’s chatbot, had become ‘sentient.’”)

“What is happening is that there is such a race to use more data, more compute, to say you have created this general thing that knows everything, answers all your questions or whatever, and that is the drum you have been beating,” Gebru said. “So how are you surprised when this person takes it to the extreme?”

In a statement, Google said LaMDA has undergone 11 “distinct reviews” under its AI Principles, as well as “rigorous research and testing” related to quality, safety, and the ability to make fact-based statements. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it does not make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said. “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making these kinds of wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said.
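Marcus’s “autocomplete on steroids” comparison boils down to a simple idea: the system guesses the statistically likely next word rather than understanding anything. The toy sketch below is a made-up, minimal bigram counter in Python, assuming an invented mini-corpus; it is nothing like Google’s actual method, which relies on large neural networks, and is only meant to illustrate how next-word prediction can emerge from counting patterns in text.

```python
# Illustrative only: a toy "autocomplete" in the spirit of Marcus's analogy.
# Real systems such as LaMDA use large neural networks trained on vast text,
# not a frequency table; this is a minimal sketch of statistical next-word prediction.
from collections import Counter, defaultdict

corpus = (
    "i am very hungry so i want to go to a restaurant . "
    "i want to go to a restaurant with friends . "
    "i want to go to a movie tonight ."
)

# Count which word tends to follow each word in the training text.
follow_counts = defaultdict(Counter)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in the corpus."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("a"))  # 'restaurant' -- a statistical guess, not understanding
```

Systems such as LaMDA learn far richer patterns than this, but the underlying task is still predicting a plausible continuation from patterns in data, which is the point Marcus and others make in arguing that such behavior is not evidence of consciousness.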