Machine Learning

Clicking on Garbage Captcha

Feels like these days it would be easier for an AI to deal with a captcha than for a human being to complete it.

Today I had possibly my worst captcha-related experience.

This was not deemed acceptable.

I also had to click the middle one on the right, even though it takes quite a lot of imagination to see it as a set of traffic lights.

Then I was allowed to proceed.

Feeling like a proper grumpy old man having to deal with this nonsense.

I can imagine an LLM hallucinating that the middle-right image is a set of traffic lights. But a human being would have to be pretty high to come to the same conclusion.

Technology Adoption

How close are the machines to taking over?

We overestimate the impact of technology in the short term and underestimate the effect in the long run.

Amara’s Law (see https://fs.blog/gates-law/)

At one end of the debate we have people like Geoffrey Hinton flagging concerns about AI becoming able to control us. At the other end you’ve got people like Yann LeCun, who tend to have a more optimistic outlook. Both have similar levels of credibility in the space.

I’m going to suggest where I see the disconnect.

It’s in the language we use.

To most people, AI means something out of science fiction. Literally Skynet or I, Robot or Ex Machina: Something with its own motivations that are often at odds with humanity.

For researchers, the AI space is much broader. The NPCs you play against in computer games are AIs. You can even read about the AI behind the ghosts in the classic Pac-Man game. When AI researchers think about science fiction AI they use a different term: “Artificial General Intelligence” (AGI).

If you read that a researcher is talking about “AI” then you should be thinking: “wow, look how far we have come since Pac-Man”. If they are talking about “AGI” then that is the beginning of the path to science-fiction AI. But still just the beginning.

I’ve made a handy graphic that shows where I think we are on this journey between Pac-Man and Ex Machina. Obviously it’s somewhat tongue in cheek, but it’s informed by Amara’s Law: there is a lot of hype about any new technology, so people inevitably overestimate how much it will change things over the next year or two. But over the longer term … a different story.

Machine Learning

From AI to Assistive Computation?

This post on Mastodon has been playing on my mind. It was written on 27th November, after the debacle with Galactica but before ChatGPT burst into the public’s consciousness.

Link to the full thread on Mastodon

I love the challenge it poses.

I am sure there are some areas where the term “AI” is meaningful, for example in academic research. But in the wider world, Ilyaz has a very strong argument.

Usually when people think of AI they’ll imagine something along the lines of 2001: A Space Odyssey or Aliens or I, Robot or Blade Runner or Ex Machina: Something that seems uncannily human but isn’t. I had this image in mind when I first wanted to understand AI and so read Artificial Intelligence: A Modern Approach. What an anti-climax that book was. Did you know that, strictly speaking, the ghosts in Pac-Man are AIs? A piece of code that has its own objectives to carry out, like a Pac-Man ghost, counts as AI. It doesn’t have to ‘think’.
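To make that concrete, here is a minimal sketch (my own illustration in Python, not the actual arcade logic) of the kind of rule-based code that technically counts as AI: a ghost whose only objective is to close the gap to the player, one tile at a time.

```python
# A toy, rule-based "AI" in the spirit of a Pac-Man ghost (illustrative only,
# not the real arcade algorithm): it has an objective (catch the player)
# but nothing resembling thought or learning.

def ghost_step(ghost_pos, player_pos):
    """Return the ghost's next grid position, one tile closer to the player."""
    gx, gy = ghost_pos
    px, py = player_pos
    if gx != px:
        gx += 1 if px > gx else -1   # close the horizontal gap first
    elif gy != py:
        gy += 1 if py > gy else -1   # then the vertical gap
    return gx, gy

# The ghost homes in on the player over a few ticks.
ghost, player = (0, 0), (3, 2)
for _ in range(5):
    ghost = ghost_step(ghost, player)
    print(ghost)   # (1, 0) (2, 0) (3, 0) (3, 1) (3, 2)
```

That loop is the entire “intelligence”: a hard-coded rule in pursuit of an objective.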

Alan Turing proposed the Turing Test in 1950 as a test for AI. For a long time this seemed like a decent proxy for AI: if you’re talking to two things and can’t tell which is the human and which is the machine, then we may as well say that the machine is artificially intelligent.

But these days large language models can easily pass the Turing Test. It’s got to the point that ChatGPT has been explicitly coded/taught to fail it: the AIs can fake being human so well that they’re being programmed not to sound like humans!

A good description of these language models is ‘Stochastic Parrots’: ‘Parrots’ because they repeat the patterns they have seen without necessarily understanding any meaning, and ‘Stochastic’ because there is randomness in the way they have learnt to generate text.
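As a toy illustration of both halves of that phrase, here is a sketch of a word-level bigram model (orders of magnitude simpler than a real large language model, and the corpus is invented): it can only parrot word-to-word transitions it has seen, and it picks among them at random.

```python
# A toy "stochastic parrot": learn which words follow which in a tiny corpus
# (the parrot part), then generate text by sampling among those continuations
# at random (the stochastic part). Real LLMs are vastly larger, but the loop
# of "sample the next token from learned statistics" is the same in spirit.

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record every word that has been seen following each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start="the", length=8):
    """Generate text by repeatedly sampling a previously seen continuation."""
    words = [start]
    for _ in range(length - 1):
        choices = following.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())   # e.g. "the dog sat on the mat and the"
```

The output looks plausible only because it is stitched together from patterns in the training text; the model has no idea what a cat or a mat is.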

Services like ChatGPT are bringing this sort of tech into the mainstream and transforming what we understand to be possible with computers. This is a pattern we’ve seen before. The best analogy I can think of for where we are today in the world of AI tech is how Spreadsheets and then Search Engines and then Smartphones changed the world we live in.

They don’t herald the advent of Skynet (any more than any other tech from one of the tech titans), nor do they herald a solution for the world’s ills.

So maybe we should reserve the term ‘AI’ for the realms of academic study and instead use a term like ‘Assistive Computation’ as Ilyaz suggests when it comes to real-world applications.

Pretty provocative but at the same time pretty compelling.

To end this post, I’ll leave you with an old AI/ML joke that is somewhat relevant to the discussion here (though these days you’d have to replace ‘linear regression’ with ‘text-davinci-003’ to get the same vibe):

Edited 2023-01-30: Added link to the full thread on Mastodon

Anthropology, Machine Learning

Notes on Sentience and Large Language Models

A collection of some relevant commentary about the recent story that a Google employee [was] reportedly put on leave after claiming chatbot became sentient, similar to a ‘kid that happened to know physics’.

First: the posting of the interview with the AI (it’s called LaMDA). It’s worth a read if you want to see what is sitting under all the fuss.

Second: The unequivocal counter-argument that, while this is a tremendous advance in language modelling, it is not sentience; it is just a massive statistical model that generates text based on text it has seen before:

Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Third: The text that the AI generates depends on the text it has seen. It is very responsive to leading questions:

Is this language model sentient? Is sentience a binary matter of ‘yes, you are sentient’ (e.g. a dog) versus ‘no, you aren’t’ (e.g. a rock)? Or are there degrees of sentience? Is a bee as sentient as a dolphin? If a machine became sentient, would humans even be able to recognise it?

We used to think that the Turing test would be a good benchmark of whether a machine could (appear to) think. We are now well past that point.

It takes me back to my anthropology studies. It turns out it’s quite tricky to define what distinguishes humans from animals. We used to think it was ‘tool use’ until we discovered primates using tools. When we saw something that passed as ‘human’ according to the definition, but was obviously not human, we changed the definition. Seems we are in a similar place with AI.

A more pressing issue than the ‘sentient or not’ debate is what we are going to do about it. The ethical side of these advances is both terrific and terrifying: with deepfake imagery and these language models, the possibilities, both good and bad, are hard to get your head around.

So I leave you with this.

Fourth: