Debate over AI sentience marks a watershed moment
The AI field is at a significant turning point. On the one hand, engineers, ethicists, and philosophers are publicly debating whether new AI systems such as LaMDA – Google’s artificially intelligent chatbot generator – have demonstrated sentience, and (if so) whether they should be afforded human rights. On the other hand, much of the recent advance in AI is based on deep learning neural networks, yet AI luminaries such as Gary Marcus and Yann LeCun increasingly argue that these networks cannot lead to systems capable of sentience or consciousness. The fact that the industry is having this debate at all is itself a watershed moment.
"Consensus is that LaMDA... not yet achieved sentience. Though this is rather beside the point. The fact that this debate is taking place at all is evidence of how far AI systems have come and suggestive of where they are going."
Starspawn0 says:
Whether or not people think all this talk about "sentience" is "hype" -- or think Google is manipulating the news media, or that some hapless ethics guy fell for an Eliza-like trick -- they had better get used to it! I don't think this kind of talk is going away. As time goes on it will become more frequent and louder.
As I said, my opinion is that it's a very good chatbot, not a sentient one. But it's not just a dumb Eliza-like system. You can probably ask it hard commonsense reasoning questions and it will get them right; you can ask it questions that require "logical reasoning", and it will often get those right too; you can give it puzzles, and it can solve them; you can engage it in conversation, and it will give what seem to be "intelligent" replies. I coded up Eliza programs ages and ages ago, and know how brittle and simple they are -- what LaMDA does is multiple quantum leaps beyond those simple programs, and quantum leaps beyond GOFAI chatbots like the Eugene Goostman bot. But it probably still has limits, and if you chat with it long enough, they will become apparent.
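To make the comparison concrete: ELIZA-style chatbots work by matching the user's input against a fixed list of keyword patterns and echoing pieces of it back through canned templates. Here is a minimal sketch of that technique in Python -- the rules are purely illustrative (not ELIZA's actual DOCTOR script), and the sketch omits ELIZA's pronoun reflection:

```python
import re

# A minimal Eliza-style responder: an ordered list of (pattern, template) rules.
# These example rules are hypothetical, chosen only to show the technique.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # No keyword matched: this canned fallback is where the brittleness shows.
    return FALLBACK

if __name__ == "__main__":
    print(respond("I need a vacation"))  # -> Why do you need a vacation?
    print(respond("What is 2 + 2?"))     # -> Please go on.  (no understanding at all)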
The limits are not because "it's just matrix multiplication" or "it's just autocomplete" or "it's just statistics". Those are what Daniel Dennett would call "intuition pumps" -- rhetorical devices people use to persuade others not to take seriously the claim that computers could be conscious. Searle was good at that. His Chinese Room thought experiment is an example, as is his idea that if "functionalism" is true then you should be able to make a computer out of toilet paper and stones -- could toilet paper "think"? Could it be conscious?
....
But what about qualia? Well, perhaps some weak version of panpsychism is true, and machines experience qualia, too.
But if we accept panpsychism, don't we also accept that qualia are "epiphenomenal"? And if so, isn't it weird that we have thoughts about qualia, that they become topics for argument (they don't just ride on top of the brain's operation, but even affect our thoughts about what consciousness is)? My answer to that is: not really. Maybe both our ability to introspect and our ability to have subjective experiences arise from the same cognitive capacity. Maybe this capacity is what creates subjective experience out of what we observe or imagine -- if we didn't have it, we'd just react to the world around us without much subjective experience.