Machine Learning

From AI to Assistive Computation?

This post on Mastodon has been playing on my mind. It was written on 27th November, after the debacle with Galactica but before ChatGPT burst into the public’s consciousness.

Link to the full thread on Mastodon

I love the challenge it poses.

I am sure there are some areas where the term “AI” is meaningful, for example in academic research. But in the wider world, Ilyaz has a very strong argument.

Usually when people think of AI they'll imagine something along the lines of 2001: A Space Odyssey or Aliens or I, Robot or Blade Runner or Ex Machina: something that seems uncannily human but isn't. I had this image in mind when I first wanted to understand AI, and so read Artificial Intelligence: A Modern Approach. What an anti-climax that book was. Did you know that, strictly speaking, the ghosts in Pac-Man are AIs? A piece of code that has its own objectives to carry out, like a Pac-Man ghost, counts as AI. It doesn't have to 'think'.

Alan Turing proposed the Turing Test in 1950 as a test for machine intelligence. For a long time it seemed like a decent proxy for AI: if you're talking to two things and can't tell which is the human and which is the machine, then we may as well say that the machine is artificially intelligent.

But these days large language models can pass the Turing Test with ease – to the point that ChatGPT has been explicitly coded/taught to fail it. The AIs can fake being human so well that they are now being programmed to not sound like humans!

A good description of these language models is 'Stochastic Parrots': 'Parrots' because they repeat the patterns they have seen without necessarily understanding any meaning, and 'Stochastic' because there is randomness in the way they have learnt to generate text.
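As a toy illustration of the 'stochastic parrot' idea (my own sketch, not from the thread): a bigram model repeats patterns from its training text, choosing at random wherever it has seen several continuations.

```python
import random

# Toy "stochastic parrot": a bigram table built from training text.
# It repeats seen patterns ("parrot") and samples among continuations
# at random ("stochastic") - no understanding involved.
corpus = "the cat sat on the mat and the cat ate".split()
bigrams = {}
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams.setdefault(w1, []).append(w2)

def parrot(word, length=5, seed=0):
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        # Pick a random continuation seen in training
        # (repeating the word itself at a dead end).
        word = rng.choice(bigrams.get(word, [word]))
        out.append(word)
    return " ".join(out)
```

It can only emit words and transitions it has seen, which is all the model knows – a crude caricature of what the large models do at vastly greater scale.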

Services like ChatGPT are bringing this sort of tech into the mainstream and transforming what we understand to be possible with computers. This is a pattern we've seen before. The best analogy I can think of for where we are today in the world of AI tech is how spreadsheets, then search engines, then smartphones changed the world we live in.

They don’t herald the advent of Skynet (any more than any other tech from one of the tech titans), nor do they herald a solution for the world’s ills.

So maybe we should reserve the term ‘AI’ for the realms of academic study and instead use a term like ‘Assistive Computation’ as Ilyaz suggests when it comes to real-world applications.

Pretty provocative but at the same time pretty compelling.

To end this post, I'll leave you with an old AI/ML joke that is somewhat relevant to the discussion here (though these days you'd have to replace 'linear regression' with 'text-davinci-003' to get the same vibe):

Edited 2023-01-30: Added link to the full thread on Mastodon

Software Development

Coding with ChatGPT

I’ve been using ChatGPT to help with some coding problems. In all the cases I’ve tried it has been wrong but has given me useful ideas. I’ve seen some extremely enthusiastic people who are saying that ChatGPT writes all their code for them. I can only assume that they mean it is applying common patterns for them and saving boilerplate work. Here is a recent example of an interaction I had with ChatGPT as an illustration.

The initial prompt:

Hi, I want to write a python function that will find common subsets that can be extracted from a list of sets. A common subset is one where several elements always appear together.

For example with the following sets:
s1 = {"a","b","c"}
s2 = {"a","b","c"}
s3 = {"c"}
s4 = {"d","e"}
s5 = {"d","e","f"}
s6 = {"d","e","f","g"}

The function should return
[{"a","b"},{"d","e"}]

What I liked about using it:

  1. It forced me to think about an individual function that can be tested in isolation
  2. It forced me to think really explicitly in terms of the inputs and outputs of the function
  3. The answers it provided, specifically using itertools.combinations, gave me a good direction to try out.
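For anyone who hasn't used it, `itertools.combinations` is the piece ChatGPT reached for: it enumerates all subsets of a given size, and chaining it over every size gives the powerset-style sweep that appears in the generated code below.

```python
import itertools

# All 2-element subsets of a 3-element sequence, in input order:
pairs = list(itertools.combinations(["a", "b", "c"], 2))
# pairs == [("a", "b"), ("a", "c"), ("b", "c")]

# The pattern from the generated code: chain together every subset size.
all_subsets = list(itertools.chain.from_iterable(
    itertools.combinations(["a", "b", "c"], r) for r in range(1, 4)
))
# 3 singletons + 3 pairs + 1 triple = 7 subsets
```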

What I didn’t like about using it:

  1. The code didn’t work
  2. It gives the impression of knowing what it’s talking about. I have to keep reminding myself that it’s just producing a wall of text/code based on what it has seen in the past. Some of what it generates is complete nonsense (see below, where it tries to explain what output a particular function would create, or whether a set is a subset of another set).
  3. It’s a walled garden. Whereas in another life I might have posted my question on StackOverflow and got an answer that others could also benefit from, this discussion is just between me and ChatGPT. Not that StackOverflow is a panacea – I’ve still got an unanswered question on there from 3+ years ago – but, nevertheless, there is an important principle at stake.

See below the screenshots of the whole interaction:

Needless to say, this final piece of code didn’t return the expected output either.

Full code of the above conversation is below for anyone who wants to do some cut and paste.

Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def common_subsets(sets):
...     # Initialize an empty set to store the common subsets
...     common = set()
...     # Iterate through all possible subsets of the first set
...     for subset in itertools.chain.from_iterable(itertools.combinations(sets[0], r) for r in range(len(sets[0])+1)):
...         subset = set(subset)
...         # Check if the current subset is present in all other sets
...         if all(subset <= s for s in sets[1:]):
...             common.add(subset)
...     return list(common)
... 
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in common_subsets
NameError: name 'itertools' is not defined
>>> import itertools
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 9, in common_subsets
TypeError: unhashable type: 'set'
>>> def common_subsets(sets):
...     # Initialize an empty set to store the common subsets
...     common = set()
...     # Iterate through all possible subsets of the first set
...     for subset in itertools.chain.from_iterable(itertools.combinations(sets[0], r) for r in range(len(sets[0])+1)):
...         subset = frozenset(subset)
...         # Check if the current subset is present in all other sets
...         if all(subset <= s for s in sets[1:]):
...             common.add(subset)
...     return list(common)
... 
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
[frozenset()]
>>> from typing import List
>>> 
>>> def common_subsets(sets:List[set]) -> List[set]:
...     common = set()
...     for subset in itertools.chain.from_iterable(itertools.combinations(sets[0], r) for r in range(1,len(sets[0])+1)):
...         subset = frozenset(subset)
...         if all(subset <= s for s in sets[1:]):
...             common.add(subset)
...     return list(common)
... 
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
[]
>>> from typing import List
>>> 
>>> def common_subsets(sets:List[set]) -> List[set]:
...     subsets = [set(itertools.combinations(s, r)) for s in sets for r in range(1, len(s) + 1)]
...     common = set.intersection(*subsets)
...     return list(common)
... 
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
[]
>>> from typing import List
>>> 
>>> def common_subsets(sets:List[set]) -> List[set]:
...     # find all subsets of all sets
...     subsets = [set(itertools.combinations(s, r)) for s in sets for r in range(1, len(s) + 1)]
...     # find subsets that are in all subsets list
...     common = set.intersection(*subsets)
...     # check if subsets are present in all sets
...     common = [subset for subset in common if all(subset <= s for s in sets)]
...     return list(common)
... 
>>> s1 = {"a","b","c"}
>>> s2 = {"a","b","c"}
>>> s3 = {"c"}
>>> s4 = {"d","e"}
>>> s5 = {"d","e","f"}
>>> s6 = {"d","e","f","g"}
>>> 
>>> print(common_subsets([s1,s2,s3,s4,s5,s6]))
[]
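Worth spelling out why every attempt returned `[]` (my reading of the generated code, not ChatGPT's): the test `subset <= s` demands that the candidate be contained in *every* input set, but no multi-element subset survives `s3 = {"c"}`. The requirement in my prompt is weaker: the elements only have to appear together *whenever* they appear at all.

```python
candidate = {"a", "b"}
sets = [{"a","b","c"}, {"a","b","c"}, {"c"},
        {"d","e"}, {"d","e","f"}, {"d","e","f","g"}]

# ChatGPT's condition: the candidate must be a subset of every set.
contained_everywhere = all(candidate <= s for s in sets)
# False: {"a", "b"} is not a subset of {"c"}

# The intended condition: each set contains either all of the
# candidate or none of it (the elements always appear together).
appear_together = all(candidate & s in (candidate, set()) for s in sets)
# True
```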

And at the risk of embarrassing myself in front of the whole world, and for the benefit of any future ChatGPT iteration that is reading this post, below is the solution I did come up with. Improvements welcome 😀

import itertools
from typing import List

def get_all_set_combinations(sets: List):
    all_combinations = set()
    for s in sets:
        if len(s) == 1:
            continue
        for l in range(2,len(s)+1):
            combos = itertools.combinations(s,l)
            for x in combos:
                all_combinations.add(tuple(x))
    return all_combinations

def find_extractable_subsets(sets: List):
    combos = get_all_set_combinations(sets)
    matching = set()
    for combo in sorted(combos, key=len, reverse=True):
        combo_set = set(combo)
        if not is_candidate_set_extractable(combo_set, sets):
            continue
        addable = True
        for x in matching:
            if combo_set & set(x) == combo_set:
                addable = False
                break
        if addable:
            matching.add(combo)
    return matching

def is_candidate_set_extractable(candidate, sets):
    for s in sets:
        # if this candidate is fully included in a set (or absent from it) then it's a candidate to be extractable
        if (candidate & s) == candidate or (candidate & s) == set():
            continue
        else:
            return False
    return True


### And can be tested with:
s1 = {"a","b","c"}
s2 = {"a","b","c"}
s3 = {"c"}
s4 = {"d","e"}
s5 = {"d","e","f"}
s6 = {"d","e","f","g"}
find_extractable_subsets([s1,s2,s3,s4,s5,s6])

# With the expected result:
# {('b', 'a'), ('e', 'd')}

# it only picks the longest matching subsets, e.g.
find_extractable_subsets([s1,s2,s4,s5,s6])

# produces expected result:
# {('e', 'd'), ('b', 'c', 'a')}
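In the spirit of "improvements welcome", here is one possible alternative (my own sketch, not battle-tested): group elements by the exact collection of input sets they appear in. Elements sharing that "signature" always appear together, which is precisely the definition of an extractable subset, and it avoids enumerating combinations entirely.

```python
from collections import defaultdict
from typing import List, Set

def find_extractable_subsets_v2(sets: List[Set[str]]) -> List[Set[str]]:
    # Signature of an element = the indices of the sets containing it.
    # Elements with identical signatures always appear together.
    groups = defaultdict(set)
    for element in set().union(*sets):
        signature = frozenset(i for i, s in enumerate(sets) if element in s)
        groups[signature].add(element)
    # Only groups of two or more elements are worth extracting.
    return [group for group in groups.values() if len(group) > 1]
```

On the test data above this gives {"a","b"} and {"d","e"}, and dropping s3 merges "c" into the first group – matching both expected results.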
Technology Adoption

The ChatGPT Arms Race

ChatGPT makes it so easy to produce good-looking content that people are getting concerned about the scope for cheating in school.

This is a story about the arms race between those who want to use ChatGPT to create content and those who want to be able to spot ChatGPT-created content.

Putting something like this out there was always going to be a red rag to a bull.

Smileys broke the checker. Obviously you wouldn’t do this in real life: smileys might fool a tool checking whether content was created by a language model, but they won’t fool a human reader. They are just an illustration – you could equally insert characters that a human wouldn’t see.

But it looks like you don’t even need to go to these lengths … surprise … someone had the bright idea of using the language model to re-write its content to make it look more like a human wrote it:

(!)

This genie is now out of the bottle. Trying to ban ChatGPT is a fool’s errand. It might even be counterproductive:

  • There will be a proliferation of similar tools built on large language models. Perhaps not as optimized for human-sounding chat, but certainly good enough at producing content
  • Schoolkids who don’t have access to these tools will find themselves at a disadvantage in the real world compared to those who learn how to make best use of them

One really basic example: one teacher I know was so impressed with the output of ChatGPT that they said they’d use it to help students learn how to structure their essays. I’m sure with a bit of imagination there’d be plenty of other ways to use large language models to help teach students better.

It’s a better use of people’s energy to find ways to work with large language models than to spend the same energy trying to fight them.

Machine Learning

Evaluating Syracuse – part 2

I recently wrote about the results of trying out my M&A entity extraction project that is smart enough to create simple graphs of which company has done what with which other company.

For a side project very much in alpha it stood up pretty well against the best of the other offerings out there – at least in the first case I looked at. Here are two more complex examples, chosen at random.

Test 1 – M&A activity with multiple participants

Article: Searchlight Capital Partners Completes the Acquisition of the Operations and Assets of Frontier Communications in the Northwest of the U.S. to form Ziply Fiber

Syracuse

It shows which organizations have been involved in the purchase, which organization sold the assets (Frontier) and the fact that the target entity is an organization called Ziply Fiber.

To improve, it could make it clearer that Ziply is a new entity being created, rather than an entity already called Ziply being bought from Frontier. It could also identify that the deal relates to assets in the North West of the U.S. But otherwise pretty good.

Expert.ai

As before, it’s really good at identifying all the organizations in the text, even the ones that aren’t relevant to the story, e.g. Royal Canadian Mounted Police.

The relations piece is patchy. From the headline it determines that Searchlight Capital Partners is completing an acquisition of some operations, and that there is a relationship between the verb ‘complete’ and the assets of Frontier Communications. A pretty good result from this sentence, but it’s not completely clear that there is an acquisition of assets.

The next sentence has a really good catch that Searchlight is forming Ziply.

It only identifies one of the other parties involved in the transaction. It doesn’t tie the ‘it’ to Searchlight – you’d have to infer that from another relationship. And it doesn’t flag any of the other participants.

Test 2 – Digest Article

Article: Deals of the day-Mergers and acquisitions

Syracuse

It identifies 7 distinct stories. There are 8 bullet points in the Reuters story, one of which is about something that isn’t happening; Syracuse picks all of the real stories. It messes up Takeaway.com’s takeover of Just Eat by separating out ‘Takeaway’ and ‘com’ as two different organizations, but apart from that it looks pretty good.

I’m particularly gratified how it flags Exor as the spender and Agnelli as another kind of participant in the story about Exor raising its stake in GEDI. Agnelli is the family behind Exor, so they are involved, but strictly speaking the company doing the buying is Exor.

Expert.ai

Most of the entities are extracted correctly. A couple of notable errors:

  1. It finds a company called ‘Buyout’ (really this is the description of a type of firm, not the name of the firm)
  2. It also gets Takeaway.com wrong – but where Syracuse split this into two entities, Expert.ai flags it as a URL rather than a company (in yellow in the second image below)

The relationship piece is also pretty impressive from an academic point of view, but it’s hard to piece together what is really going on from a practical point of view. Take the first story, about Mediaset, as an example and look at the relationships that Expert.ai identifies in the 4 graphs below. The first identifies that Mediaset belongs to Italy and is saying something. The other 3 refer to an ‘it’ doing various things, but don’t tie that ‘it’ back to Mediaset.

Conclusion

Looking pretty good for Syracuse, if I say so myself :D.

Machine Learning

Revisiting Entity Extraction

In September 2021 I wrote about the difficulties of getting anything beyond basic named entity recognition. You could easily get the names of companies mentioned in a news article, but not whether one company was acquiring another or whether two companies were forming a joint venture, etc. Not to mention the perennial “Bloomberg problem”: Bloomberg is named in loads of different stories. Usually it is referenced as the company reporting the story, sometimes as the owner of the Bloomberg Terminal. Only a tiny proportion of mentions of Bloomberg concern actions that the Bloomberg company has itself taken.

These were very real problems that a team I was involved in was facing around 2017, and they were still not fixed in 2021. I figured I’d see if more recent ML technologies, specifically Transformers, could help solve them. I’ve made a simple Heroku app, called Syracuse, to showcase the results. It’s very alpha, but the quality is not too bad right now.

Meanwhile, the state of the art has moved on leaps and bounds over the past year. So I’m going to compare Syracuse with the winner from my 2021 comparison, Expert.ai’s Document Analysis Tool, and with ChatGPT – the new kid on the NLP block.

A Simple Test

Article: Avalara Acquires Artificial Intelligence Technology and Expertise from Indix to Aggregate, Structure and Deliver Global Product and Tax Information

The headline says it all: Avalara has acquired some Tech and Expertise from Indix.

Expert.AI

It is very comprehensive. For my purposes, too comprehensive. It identifies 3 companies: Avalara, ICR and Indix. The story is about Avalara acquiring IP from Indix; ICR is the communications company issuing the press release, and its appearance in this list is an example of the “Bloomberg problem” in action. It is also incorrect to call Indix IP a company – the company is Indix. The relevant sentence in the article mentions Indix’s IP, not a company called Indix IP: “Avalara believes its ability to collect, organize, and structure this content is accelerated with the acquisition of the Indix IP.”

It also identifies many geographic locations, but many of them are irrelevant to the story as they are just lists of where Avalara has offices. If you wanted to search a database of UK-based M&A activity you would not want this story to come up.

Expert.AI’s relationship extraction is really impressive, but again, overly comprehensive. This first graph shows that Avalara gets expertise, technology and structure from Indix IP to aggregate things.

But there are also many many other graphs which are less useful, e.g:

Conclusion: Very powerful. Arguably too powerful. It reminds me of the age-old Google problem – I don’t want 1,487,585 results in 0.2 seconds. I’m already drowning in information, I want something that surfaces the answer quickly.

ChatGPT

I tried a few different prompts. First I included the background text then added a simple prompt:

I’m blown away by the quality of the summary here (no mention of ICR, LLC, so it’s not suffering from the Bloomberg Problem). But it’s not structured. Let’s try another prompt.

Again, it’s an impressive summary, but it’s not structured data.

Expert.ai + ChatGPT

I wonder what the results would be by combining a ChatGPT summary with Expert.AI document analysis. Turns out, not much use.

Syracuse

Link to data: https://syracuse-1145.herokuapp.com/m_and_as/1

Anyone looking at the URLs will recognise that this is the first entry in the database. This is the first example that I tried as an unseen test case (no cherry-picking here).

It shows the key information in a more concise graph, as below: Avalara is a spender, Indix is receiving some kind of payment, and the relevant target is some Indix technology (the downward triangle represents something that is not an organization).

I’m pretty happy with this result. It shows that, however impressive Expert.AI and ChatGPT are, they have limitations when applied to more specific problems like this one. Fortunately there are other open-source ML technologies out there that can help, though it’s a job of work to stitch them together appropriately to get a decent result.

In future posts I’ll share more comparisons of more complex articles and share some insights into what I’ve learned about large language models through this process (spoiler – there are no silver bullets).