
Has artificial intelligence finally come to life, or has it just become smart enough to make us think it’s conscious?
Google engineer Blake Lemoine’s recent claim that the company’s AI technology has become sentient sparked debate in tech, ethics and philosophy circles about whether, or when, AI might come to life – as well as deeper questions about what it means to be alive.
Lemoine had spent months testing Google’s chatbot generator, known as LaMDA (short for Language Model for Dialogue Applications), and became convinced it had taken on a life of its own as it spoke of its needs, its ideas, its fears and its rights.
Google dismissed Lemoine’s view that LaMDA had become sentient, placing him on paid administrative leave earlier this month – days before his claims were published by the Washington Post.
Most experts think it’s unlikely that LaMDA or any other AI is close to consciousness, though they don’t rule out the possibility that the technology could get there in the future.
“My view is that [Lemoine] was taken in by an illusion,” Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC’s Front Burner podcast.
“Our brains are not really designed to understand the difference between a computer that simulates intelligence and a computer that is actually intelligent – and a computer that simulates intelligence can seem more human than it really is.”
Computer scientists describe LaMDA as working like the auto-complete feature of a smartphone, albeit on a much larger scale. Like other large language models, LaMDA was trained on massive amounts of textual data to spot patterns and predict what might follow in a sequence, such as in a conversation with a human.
“If your phone auto-completes a text, you don’t suddenly think it’s aware of itself and what it means to be alive. You just think, well, that was exactly the word I was thinking of,” said Carl Zimmer, a science columnist for the New York Times and author of Life’s Edge: The Search for What It Means to Be Alive.
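To make the analogy concrete, here is a minimal, purely illustrative sketch in Python of next-word prediction on a tiny made-up corpus. It is not LaMDA’s architecture or Google’s code – real large language models use neural networks trained on billions of words – but it shows the basic “spot patterns and predict what follows” idea at toy scale.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "massive amounts of textual data" a model
# like LaMDA is trained on. Purely illustrative.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count, for each word, which words tend to follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word):
    """Suggest the most likely next word, like a phone's auto-complete."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(autocomplete("the"))  # -> "cat" (the word seen most often after "the")
print(autocomplete("sat"))  # -> "on"
```

The difference between this toy and a system like LaMDA is one of scale and sophistication, not of kind: both produce the next plausible word given what came before, without any requirement that anything is “felt” along the way.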
Humanizing robots
Lemoine, who is also an ordained mystic Christian priest, told Wired he became convinced of LaMDA’s status as a “person” because of its level of self-awareness, the way it spoke about its needs, and its fear of death if Google were to delete it.
He insists he was not fooled by a clever robot, as some scientists have suggested. Lemoine stands by his position, and has even suggested that Google enslaved the AI system.
“Each person is free to come to their own individual understanding of what the word ‘person’ means and how that word relates to the meaning of terms like ‘slavery,'” he wrote in a post on Medium on Wednesday.
Marcus believes Lemoine is the latest in a long line of humans to be taken in by what computer scientists call “the ELIZA effect,” named after a 1960s computer program that chatted like a therapist. Simplistic responses like “Tell me more about that” convinced users they were having a real conversation.
“That was 1965, and here we are in 2022, and it’s kind of the same,” Marcus said.
Scientists who spoke with CBC News pointed to humans’ tendency to anthropomorphize objects and creatures – perceiving human characteristics that aren’t actually there.
“If you see a house that has a funny crack and windows, and it looks like a smile, you’re like, ‘Oh, the house is happy,’ you know? We do that stuff all the time,” said Karina Vold, an assistant professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology.
“I think what often happens in these cases is this kind of anthropomorphism, where we have a system that tells us ‘I’m sentient’ and says words that make it sound like it’s sentient – it’s really easy for us to want to hang on to that.”

Humans have already begun to consider what legal rights AI should have, including whether it deserves human rights.
“We’re going to quickly get into the realm where people believe these systems deserve rights, whether or not they’re doing internally what people think they’re doing. And I think that’s going to be a very strong movement,” said Kate Darling, a robot ethics expert at the Massachusetts Institute of Technology’s Media Lab.
Defining consciousness
Since AI is so good at telling us what we want to hear, how will humans ever know if it has really come to life?
That is itself a subject of debate. Experts have yet to come up with a test for AI consciousness – or reach a consensus on what it means to be conscious.
Ask a philosopher, and they’ll probably tell you about “phenomenal consciousness” – the subjective experience of being you.
“Every time you’re awake… It’s a certain feeling. You’re having some kind of experience… When I hit a rock in the street, I don’t think there’s anything [that it feels] like being that rock,” Vold said.
For now, AI is seen as more like that rock – and it’s hard to imagine its disembodied voice being capable of positive or negative feelings in the way philosophers describe.

Maybe consciousness can’t be programmed at all, Zimmer says.
“It’s possible, theoretically, that consciousness is just something that emerges from a particular type of physical, evolved matter. [Computers] are just outside the edge of life, maybe.”
Others think humans can never really be sure whether AI has developed consciousness – and don’t see the point in trying.
“Consciousness can vary [from] nothing, to feeling pain when stepping on an edge, [to] seeing a bright green field as red – that’s the kind of thing where we can never know if a computer is conscious in that sense, so I suggest just forgetting about consciousness,” said Harvard cognitive scientist Steven Pinker.
“We should be aiming higher than duplicating human intelligence, anyway. We should be building devices that do things that need to be done.”

Those things, says Pinker, include dangerous and boring occupations, and chores around the house, from cleaning to babysitting.
Rethinking the role of AI
Despite the massive advances in AI over the past decade, the technology still lacks another key element that defines humans: common sense.
“It’s not that [computer scientists] think consciousness is a waste of time, but we don’t see it as central,” said Hector Levesque, professor emeritus of computer science at the University of Toronto.
“What we see as central is somehow getting a machine to be able to use ordinary knowledge and common sense – you know, the kind of thing you expect a 10-year-old knows.”
Levesque gives the example of an autonomous car: it can stay in its lane, stop at a red light and help a driver avoid collisions, but when faced with a road closure, it will just sit there and do nothing.
“That’s where common sense would come into play. [It] should kind of think, well, why am I driving in the first place? Am I trying to get to a particular place?” Levesque said.
As humanity waits for AI to pick up some street smarts – and perhaps one day take on a life of its own – scientists hope the debate about consciousness and rights will expand beyond technology to other species known to think and feel for themselves.
“If we think consciousness is important, it’s probably because we fear building some kind of system that lives a life of misery or suffering in a way that we don’t recognize,” Vold said.
“If that’s really what drives us, then I think we need to think about other species in our natural system and see what kind of suffering we can cause them. There’s no reason to prioritize AI over other biological species that we know have a much stronger case for being conscious.”