A collection of relevant commentary about the recent story that a Google employee was reportedly put on leave after claiming a chatbot had become sentient, likening it to a ‘kid that happened to know physics’.
First: the posting of the interview with the AI (it’s called LaMDA). It’s worth a read if you want to see what is sitting under all the fuss.
Second: The unequivocal counter-argument that, while this is a tremendous advance in language modelling, it is not sentience, it is just a massive statistical model that generates text based on text it has seen before:
Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.
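To make the ‘massive statistical model’ point concrete, here is a toy sketch of purely statistical text generation: a bigram Markov chain that emits only word pairs it has seen in its training text. (This is a deliberately crude illustration of the general idea, not how LaMDA or GPT-3 actually work; the corpus and function names are made up for the example.)

```python
import random
from collections import defaultdict

# Toy training text (a stand-in for the vast corpora real models are trained on).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which words follow each word in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Emit words by sampling, at each step, a successor seen in the corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        successors = transitions.get(words[-1])
        if not successors:
            break  # dead end: the word never had a successor in training
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate("the"))
```

Every adjacent word pair in the output occurs somewhere in the training text; the model has no notion of cats, dogs, or meaning, only of which words followed which. Real language models replace these raw counts with learned neural representations over billions of words, but the ‘generates text based on text it has seen before’ description is the same in spirit.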
Third: The text that the AI generates depends on the text it has seen. It is very responsive to leading questions:
Is this language model sentient? Is sentience a binary matter of “yes, you are sentient” (e.g. a dog) versus “no, you aren’t” (e.g. a rock)? Or are there degrees? Is a bee as sentient as a dolphin? If a machine became sentient, would humans even be able to recognise it?
We used to think that the Turing test would be a good benchmark of whether a machine could (appear to) think. We are now well past that point.
It takes me back to my anthropology studies. It turns out it’s quite tricky to define what distinguishes humans from animals. We used to think it was ‘tool use’ until we discovered primates using tools. When we saw something that passed as ‘human’ according to the definition, but was obviously not human, we changed the definition. Seems we are in a similar place with AI.
A more pressing issue than the ‘sentient or not’ debate is ‘what are we going to do about it’. It’s the ethical side of these advances that is both terrific and terrifying. Between deepfake imagery and these language models, the possibilities, good and bad, are hard to get your head around.
So I leave you with this.
Fourth: