Machine Learning

Evaluating Syracuse – part 2

I recently wrote about the results of trying out my M&A entity extraction project that is smart enough to create simple graphs of which company has done what with which other company.

For a side project very much in alpha, it stood up pretty well against the best of the other offerings out there, at least in the first case I looked at. Here are two more complex examples, chosen at random.

Test 1 – M&A activity with multiple participants

Article: Searchlight Capital Partners Completes the Acquisition of the Operations and Assets of Frontier Communications in the Northwest of the U.S. to form Ziply Fiber

Syracuse

It shows which organizations have been involved in the purchase, which organization sold the assets (Frontier) and the fact that the target entity is an organization called Ziply Fiber.

To improve, it could make it clearer that Ziply is a new entity being created, rather than an entity already called Ziply being purchased from Frontier. It could also identify that this story relates to assets in the North West of the US. But otherwise, pretty good.

Expert.ai

As before, it’s really good at identifying all the organizations in the text, even the ones that aren’t relevant to the story, e.g. Royal Canadian Mounted Police.

The relations piece is patchy. From the headline it determines that Searchlight Capital Partners is completing an acquisition of some operations, and also there is a relationship between the verb ‘complete’ and the assets of Frontier Communications. Pretty good result from this sentence, but not completely clear that there is an acquisition of assets.

The next sentence has a really good catch that Searchlight is forming Ziply.

It only identifies one of the other parties involved in the transaction. It doesn’t tie the ‘it’ to Searchlight – you’d have to infer that from another relationship. And it doesn’t flag any of the other participants.

Test 2 – Digest Article

Article: Deals of the day-Mergers and acquisitions

Syracuse

It identifies 7 distinct stories. There are 8 bullet points in the Reuters story – one of which is about something that isn’t happening. Syracuse picks out all of the real stories. It messes up Takeaway.com’s takeover of Just Eat by separating out ‘Takeaway’ and ‘com’ as two different organizations, but apart from that it looks pretty good.

I’m particularly gratified by how it flags Exor as the spender and Agnelli as another kind of participant in the story about Exor raising its stake in GEDI. Agnelli is the family behind Exor, so they are involved, but strictly speaking the company doing the buying is Exor.

Expert.ai

Most of the entities are extracted correctly. A couple of notable errors:

  1. It finds a company called ‘Buyout’ (really this is the description of a type of firm, not the name of the firm)
  2. It also gets Takeaway.com wrong – but where Syracuse split this into two entities, Expert.ai flags it as a URL rather than a company (in yellow in the second image below)

The relationship piece is also pretty impressive from an academic point of view, but it’s hard to piece together what is really going on from a practical point of view. Take the first story, about Mediaset, as an example, and look at the relationships that Expert.ai identifies in the 4 graphs below. The first one identifies that Mediaset belongs to Italy and is saying something. The other 3 talk about an “it” doing various things, but don’t tie this ‘it’ back to Mediaset.

Conclusion

Looking pretty good for Syracuse, if I say so myself :D.

Machine Learning

Entity extraction powered by Flan

A Thanksgiving update on my side project. See here for an outline of the problem. In short, existing natural language processing techniques are good at generic entity extraction, but not at really getting to the core of the story.

I call it the ‘Bloomberg problem’. Imagine this text: “Bloomberg reported that Foo Inc has bought Bar Corp”. Bloomberg is not relevant in this story. But it is relevant in this one: “Bloomberg has just announced a new version of their Terminal”.
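To make the ‘Bloomberg problem’ concrete, here is a deliberately crude stdlib-only sketch – not my actual approach, and the verb list is purely illustrative – that filters out an organization when it appears as the subject of a reporting verb:

```python
import re

# Illustrative only: drop an org when it merely *reports* the story.
# Generic NER would tag Bloomberg as an ORG in both sentences below;
# the difference is how the word is used, not what the entity is.

REPORTING_VERBS = {"reported", "reports", "said", "says"}

def relevant_orgs(text, orgs):
    """Keep orgs unless they appear as the subject of a reporting verb."""
    keep = []
    for org in orgs:
        # crude heuristic: "<Org> reported/said ..." marks a reporter
        pattern = re.escape(org) + r"\s+(" + "|".join(REPORTING_VERBS) + r")\b"
        if re.search(pattern, text):
            continue
        keep.append(org)
    return keep

print(relevant_orgs("Bloomberg reported that Foo Inc has bought Bar Corp",
                    ["Bloomberg", "Foo Inc", "Bar Corp"]))
# -> ['Foo Inc', 'Bar Corp']  (Bloomberg is just the reporter here)

print(relevant_orgs("Bloomberg has just announced a new version of their Terminal",
                    ["Bloomberg"]))
# -> ['Bloomberg']  (here Bloomberg is the story)
```

A regex heuristic like this breaks down quickly on real text, which is exactly why the project moved to trained models.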

I wrote about my first attempt to address this problem, and then followed it up in July. I’ve been doing some more finessing since then and am pretty happy with the results. There is still some tidying up to do but I’m pretty confident that the building blocks are all there.

The big changes since July are:

  1. Replacing a lot of the post-processing logic with a model trained on more data. This was heartbreaking (throw away work, sad face emoji) but at the same time exhilarating (it works a lot better with less code in, big smile emoji).
  2. Implementing Flan T5 to help with some of the more generic areas.

At a high level this is how it works:

  1. The model
    • Approx 400 tagged docs (in total, across train, val and test sets)
    • Some judicious data synthesis
    • Trained a Named Entity Recognition model based on roberta-base
  2. Post-processing is a combination of
    • Benepar for constituency parsing to identify the relationships between entities for most cases
    • FlanT5 to help with the less obvious relationships.
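To give a flavour of the constituency-parsing step: in the real pipeline Benepar produces the parse tree. In this stdlib-only sketch a toy parse is hand-written as nested tuples, and a simple rule (far simpler than the real post-processing) reads the buyer and target off its structure:

```python
# Toy constituency parse of "Foo Inc acquired Bar Corp".
# In the real system this tree comes from Benepar.
parse = ("S",
         ("NP", ("NNP", "Foo"), ("NNP", "Inc")),
         ("VP", ("VBD", "acquired"),
                ("NP", ("NNP", "Bar"), ("NNP", "Corp"))))

def words(tree):
    """Flatten a tree (or a (POS, word) leaf) into its word list."""
    if len(tree) == 2 and isinstance(tree[1], str):
        return [tree[1]]
    return [w for child in tree[1:] for w in words(child)]

def spender_and_target(sent):
    """Subject NP of an acquisition verb = buyer; object NP = target."""
    subj = next(c for c in sent[1:] if c[0] == "NP")
    vp = next(c for c in sent[1:] if c[0] == "VP")
    verb = words(vp)[0]
    if verb not in {"acquired", "bought", "purchased"}:
        return None
    obj = next(c for c in vp[1:] if c[0] == "NP")
    return " ".join(words(subj)), " ".join(words(obj))

print(spender_and_target(parse))  # ('Foo Inc', 'Bar Corp')
```

The less obvious cases – where no tidy subject-verb-object pattern exists – are the ones handed off to FlanT5.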

Next steps are going to be to start representing this as a knowledge graph, which is a more natural way of exploring the data.

See below for a screenshot of the appointment topics extracted recently. These are available online at https://syracuse-1145.herokuapp.com/appointments

And below are the URLs for these appointment topics:

The Native Antigen Company Strengthens Senior Team to Support Growing Product Portfolio

In this example, we have a number of companies listed – one is the company that is appointing these two new individuals, and some are companies where the individuals used to work. Not all the company records are equally relevant.

Wolters Kluwer appoints Kevin Hay as Vice President of Sales for FRR

The Native Antigen Company Strengthens Senior Team to Support Growing Product Portfolio (Business Wire version)

Kering boosted by report Gucci’s creative director Michele to step down

Broadcat Announces Appointment of Director of Operations

Former MediaTek General Counsel Dr. Hsu Wei-Fu Joins ProLogium Technology to Reinforce Solid-State Battery IP Protection and Patent Portfolio Strategy

HpVac Appoints Joana Vitte, MD, PhD, as Chief Scientific Officer

Highview Power Appoints Sandra Redding to its Leadership Team

Highview Power Appoints Sandra Redding to its Leadership Team (Business Wire version)

ASML Supervisory Board changes announced

Recommendation from Equinor’s nomination committee

I’m pretty impressed with this one. There are a lot of organizations mentioned in this document with one person joining and one person leaving. The system has correctly identified the relevant individuals and organization. There is some redundancy: Board Member and Board Of Directors are identified as the same role, but that’s something that can easily be cleaned up in some more post-processing.

SG Analytics appoints Rob Mitchell as the new Advisory Board Member

Similarly, this article includes the organization that Rob has been appointed to and the names of organizations where he has worked before.

SG Analytics appoints Rob Mitchell as the new Advisory Board Member (Business Wire version)

Machine Learning

ML Topic Extraction Update

This is an update to https://alanbuxton.wordpress.com/2022/01/19/first-steps-in-natural-language-topic-understanding. It’s scratching an itch I have about using machine learning to pick out useful information from text articles on topics like: who is being appointed to a new senior role in a company; what companies are launching new products in new regions etc. My first try, and a review of the various existing approaches out there, was first summarised here: https://alanbuxton.wordpress.com/2021/09/21/transformers-for-use-oriented-entity-extraction/.

After this recent nonsense about whether language models are sentient or not, I’ve decided to use language that doesn’t imply any level of consciousness or intelligence. So I’m not going to be using the word “understanding” any more. The algorithm clearly doesn’t understand the text it is being given in the same way that a human understands text.

Since the previous version of the topic extraction system I have implemented logic that uses constituency parsing and graphs in networkx to better model the relationships amongst the different entities. This went a long way to improving the quality of the results, but the Appointment topic extraction, for example, still struggles in two particular use cases:

  • When lots of people are being appointed to one role (e.g. a lot of people being announced as partners)
  • When one person is taking on a new role that someone else is leaving (e.g. “Jane Smith is taking on the CEO role that Peter Franklin has stepped down from”)
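For context, the graph modelling looks something like this – a stdlib-only sketch with dicts standing in for networkx, and all the names invented – where one role node can attach to many people, which is exactly the “lots of people appointed to one role” case:

```python
from collections import defaultdict

# Adjacency sets standing in for a networkx graph:
# person -> role -> org, keyed by (kind, name) node tuples.
graph = defaultdict(set)

def add_appointment(person, role, org):
    graph[("role", role)].add(("person", person))
    graph[("org", org)].add(("role", role))

# "Alice, Bob and Carol were announced as partners at Foo LLP"
for person in ["Alice", "Bob", "Carol"]:
    add_appointment(person, "partner", "Foo LLP")

appointees = sorted(name for kind, name in graph[("role", "partner")])
print(appointees)  # ['Alice', 'Bob', 'Carol']
```

Representing the entities as a graph makes the many-to-one case easy to read off; the hard part is building the right edges from the parse in the first place.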

At this point the post-processing is pretty complex, so instead of going further with this approach I’m going back to square one. I once saw a maxim along the lines of “once your rules get complex, it’s best to replace them with machine learning”. This will mean throwing away a lot of code, so emotionally it’s hard to do. An open question is how much more labelled data the algorithm will need to learn these relationships accurately. But it will be fun to find out.

A simplified version of the app, covering Appointments (senior hires and fires) and Locations (setting up a new HQ, launching in a new location) is available on Heroku at https://syracuse-1145.herokuapp.com/. Feedback more than welcome.

Machine Learning, Software Development, Supply Chain Management

Comparison of Transformers vs older ML architectures in Spend Classification

I recently wrote a piece for my company blog about why Transformers are a better machine learning technology to use in your spend classification projects compared to older ML techniques.

That was a theoretical post that discussed things like sub-word tokenization and self-attention and how these architectural features should be expected to deliver improvements over older ML approaches.

During the Jubilee Weekend, I thought I’d have a go at doing some real-world tests. I wanted to do a simple test to see how much of a difference this all really makes in the spend classification use case. The code is here: https://github.com/alanbuxton/tbfy-cpv-classifier-poc

TL;DR – Bidirectional LSTM is a world away from Support Vector Machines but Transformers have the edge over Bi-LSTM. In particular they are more tolerant of spelling inconsistencies.

This is an update of the code I did for this post: https://alanbuxton.wordpress.com/2021/10/25/transformers-vs-spend-classification/ in which I trained the Transformer for 20 epochs. In this case it was 15 epochs. FWIW the 20-epoch version was better at handling the ‘mobile office’ example. This does indicate that better results will be achieved with more training. But for the purposes of the current blog post there wasn’t any need to go further.

Machine Learning, Software Development

Analyzing a WhatsApp group chat with Seaborn, NetworkX and Transformers

We had a company shutdown recently. Simfoni operates ‘Anytime Anywhere’, which means that anyone can work whatever hours they feel are appropriate from wherever they want to. Every quarter we mandate a full company shutdown over a long weekend to make sure that we all take time away from work at the same time and come back to a clear inbox.

For me this meant a bunch of playing with my kids and hanging out in the garden.

But it also meant playing with some fun tech, courtesy of a brief challenge I was set: what insights could I generate quickly from a WhatsApp group chat?

I had a go using some of my favourite tools: Seaborn for easy data visualization, Huggingface Transformers for ML insights and NetworkX for graph analysis.

You can find the repo here: https://github.com/alanbuxton/whatsapp-analysis
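For a flavour of the sort of quick insight I mean: WhatsApp exports a chat as plain text, one “date, time - Sender: message” line per message (the exact timestamp format varies by locale, so the regex below is illustrative, and the chat lines are invented):

```python
import re
from collections import Counter

# Matches "anything - Sender: " at the start of an exported chat line.
LINE = re.compile(r"^.+? - (?P<sender>[^:]+): ")

def messages_per_sender(lines):
    """Count messages per sender; non-message lines are skipped."""
    return Counter(m.group("sender") for line in lines if (m := LINE.match(line)))

chat = [
    "01/06/2022, 09:00 - Alice: morning all",
    "01/06/2022, 09:05 - Bob: hello",
    "01/06/2022, 09:06 - Alice: anyone seen the report?",
]
print(messages_per_sender(chat))  # Counter({'Alice': 2, 'Bob': 1})
```

From a Counter like this it’s one step to a Seaborn bar chart, and the sender pairs feed naturally into a NetworkX interaction graph.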

Enterprise Software, Machine Learning, Product Management

First steps in Natural Language Topic Understanding

In https://alanbuxton.wordpress.com/2021/09/21/transformers-for-use-oriented-entity-extraction/ I showed how transformers allowed me to build something more advanced than the generic entity extraction systems that are publicly available out there.

Next step was to see if I can do something useful with this. In past lives customers have told me about the importance of tracking certain signals or events in a company’s lifecycle, e.g. making an acquisition, expanding to a new territory, making a new senior hire etc.

So I gave it a go, initially looking purely at whether I could train an algorithm to pick out key staffing changes. The results below are 20 random topics pulled from my first attempt, showing the good, the bad and the ugly. The numbers are the confidence scores that the algorithm assigned to each entity in the topic.

I’ll give myself a B for a decent first prototype.

I do wonder who else out there is working on this sort of thing. From what I can see in the market ML is used to classify articles (e.g. “this article is about a new hire”) but I couldn’t see any commercial offering that goes to the level of “which org hired who into what role”.

If I were to take this further I would be training specialist models on each different type of topic. I wonder if there is something like a T5-style model to rule them all that can handle all this kind of intelligent detailed topic understanding?

Title: OSE Immunotherapeutics Announces the Appointment of Dominique Costantini as Interim CEO Following the Departure of Alexis Peyroles
Url: https://www.businesswire.com/news/home/20220116005013/en/OSE-Immunotherapeutics-Announces-the-Appointment-of-Dominique-Costantini-as-Interim-CEO-Following-the-Departure-of-Alexis-Peyroles

Who | What | Role | Org | Effective When
Alexis Peyroles (0.9846990705) | departure (0.943598628) | Chief Executive Officer (0.9995111823) | OSE Immunotherapeutics SA (0.9983804822) | immediately (0.9876502156)
Dominique Costantini (0.9990960956) | appointed (0.9998416901) | interim Chief Executive Officer (0.9983062148) | OSE Immunotherapeutics SA (0.9983804822) | immediately (0.9876502156)
Alexis Peyroles (0.993326962) | departure (0.9623697996) | Chief Executive Officer (0.9994782805) | OSE Immunotherapeutics SA (0.9968072176)
Dominique Costantini (0.9989916682) | appointed (0.9993845224) | interim Chief Executive Officer (0.9982660413) | OSE Immunotherapeutics SA (0.9968072176)

Assessment: Topic is duplicated without the ‘effective immediately’ piece – should only keep the most granular topics.

Title: Barclays appoints managing directors for Australia investment banking unit
Url: https://www.reuters.com/markets/funds/barclays-appoints-managing-directors-australia-investment-banking-unit-2022-01-17/

Who | What | Role | Org | Effective When
Duncan Connellan (0.988427639) | appointed (0.9996656179) | managing directors (0.9994463921) | Britain ’s Barclays Plc (0.9851405621)
Duncan Beattie (0.9959402084) | appointed (0.9996656179) | managing directors (0.9994463921) | Britain ’s Barclays Plc (0.9851405621)

Assessment: Pulled out the two key items, but didn’t do a great job of the entity (Britain’s Barclays Plc was treated as one entity) and doesn’t understand the pluralised role name. The model was not trained to look for where the role is based, so it hasn’t identified that these roles are specifically in Australia.

Title: Trulioo Appoints Michael Ramsbacker as Chief Product Officer
Url: https://www.prweb.com/releases/trulioo_appoints_michael_ramsbacker_as_chief_product_officer/prweb18439306.htm

Who | What | Role | Org | Effective When
Michael Ramsbacker (0.999671936) | appointment (0.9997799993) | Chief Product Officer (0.9999740124) | Trulioo (0.9999925494)

Assessment: Got it right.

Title: Elastrin Therapeutics Announces Newly Formed Scientific Advisory Board
Url: https://www.businesswire.com/news/home/20220117005220/en/Elastrin-Therapeutics-Announces-Newly-Formed-Scientific-Advisory-Board

Who | What | Role | Org | Effective When
Dr. Pedro M. Quintana Diez (0.9665058851) | chairman (0.9933767915) | Elastrin Therapeutics Inc. (0.9841426611)
Dr. Pedro M. Quintana Diez (0.9665058851) | Scientific Advisory Board (0.9952206612) | Elastrin Therapeutics Inc. (0.9841426611)

Assessment: Correctly extracts the key info that Dr Quintana Diez is chairman of the new Scientific Advisory Board but treats these as two roles rather than one.

Title: Toshiba Appoints Andrew McDaniel to Lead Its European Retail Business
Url: https://www.businesswire.com/news/home/20220117005027/en/Toshiba-Appoints-Andrew-McDaniel-to-Lead-Its-European-Retail-Business

Who | What | Role | Org | Effective When
Andrew McDaniel (0.9996804595) | senior vice president of Europe (0.9983366132) | Toshiba Global Commerce Solutions (0.9999386668) | January 15, 2022 (0.9999966621)
Andrew McDaniel (0.9996804595) | managing director (0.9998098612) | Toshiba Global Commerce Solutions (0.9999386668) | January 15, 2022 (0.9999966621)

Assessment: Got it right.

Title: Cairn Real Estate Holdings Appoints Mark Johnson President of JPAR® – Real Estate
Url: https://www.prweb.com/releases/cairn_real_estate_holdings_appoints_mark_johnson_president_of_jpar_real_estate/prweb18437732.htm

Who | What | Role | Org | Effective When
Mark Johnson (0.9998755455) | appointment (0.955047369) | JPAR® – Real Estate (0.9999427795)

Assessment: Correctly pulls out the appointment but doesn’t identify the role.

Title: Fiona Macfarlane and Andrea Nicholls appointed to HSBC Bank Canada Board of Directors
Url: https://www.businesswire.com/news/home/20220117005321/en/Fiona-Macfarlane-and-Andrea-Nicholls-appointed-to-HSBC-Bank-Canada-Board-of-Directors

Who | What | Role | Org | Effective When
Fiona Macfarlane (0.9959855676) | appointed (0.9996260405) | non-executive directors (0.9942650795) | HSBC Bank Canada Board of Directors (0.9947710037)
Andrea Nicholls (0.9999670982) | appointed (0.9996260405) | non-executive directors (0.9942650795) | HSBC Bank Canada Board of Directors (0.9947710037)

Assessment: Got it right.

Title: Digital Mountain Announces Industry Veteran Calvin Weeks Joining Team as Director of Digital Forensics & Cybersecurity
Url: https://www.prweb.com/releases/2022/1/prweb18416336.htm

Who | What | Role | Org | Effective When
Calvin Weeks (0.999994576) | Director, Digital Forensics & Cybersecurity (0.999989152) | Digital Mountain, Inc. (0.9999924898)

Assessment: Got the role right but didn’t get the ‘what’.

Title: MiniCo Insurance Announces Two Strategic Leadership Promotions
Url: https://www.prweb.com/releases/minico_insurance_announces_two_strategic_leadership_promotions/prweb18437565.htm

Who | What | Role | Org | Effective When
Rick Krouner (0.9899243116) | named (0.9960696697) | President (0.9988073111) | MiniCo Insurance Agency (MiniCo) (0.9878121018)
Jim Henry (0.9995553493) | named (0.9960696697) | Specialty Programs division (0.9527196288) | MiniCo Insurance Agency (MiniCo) (0.9878121018)
Jim Henry (0.9995553493) | named (0.9960696697) | National Programs division (0.9757707119) | MiniCo Insurance Agency (MiniCo) (0.9878121018)
Jim Henry (0.9995553493) | named (0.9960696697) | President (0.9988151789) | MiniCo Insurance Agency (MiniCo) (0.9878121018)

Assessment: Similar to the Elastrin story, it pulls out the title and the division but treats them as different roles, and it only assigns one of the found roles to Mr Krouner. It is also a bit ‘greedy’ at identifying the Org – the part in parentheses is redundant.

Title: Stertil-Koni Names Supply Chain Sales Pro Scott Steinhardt as Vice President of Sales
Url: https://www.prweb.com/releases/stertil_koni_names_supply_chain_sales_pro_scott_steinhardt_as_vice_president_of_sales/prweb18430929.htm

Who | What | Role | Org | Effective When
Scott Steinhardt (0.9999918938) | joined (0.9999970198) | Vice President of Sales (0.9999983311) | Stertil-Koni (0.9999969602)

Assessment: Got it right.
Machine Learning, Software Development

Simplified history of NLP Transformers

(Some notes I made recently and posting here in case of interest to others – see the tables below)

The transformers story was kicked off by the “Attention is all you need” paper published in mid 2017. (See “Key Papers” section below). This eventually led to use cases like Google implementing transformers to improve its search in 2019/2020 and Microsoft implementing transformers to simplify writing code in 2021 (See “Real-world use of Transformers” section below).

For the rest of us, Huggingface has been producing some great code libraries for working with transformers. This was under heavy development in 2018-2019, including being renamed twice – an indicator of how in flux this area was at the time – but it’s fair to say that this has stabilised a lot over the past year. See “Major Huggingface releases” section below.

Another recent data point – Coursera’s Deep Learning Specialisation was based around using Google Brain’s Trax (https://github.com/google/trax). As of October 2021 Coursera has now announced that (in addition to doing some of the course with Trax) the transformers part now uses Huggingface.

Feels like transformers are at the level of maturity now that it makes sense to embed them into more real-world use cases. We will inevitably have to go through the Gartner Hype Cycle phases of inflated expectations leading to despair, so it’s important not to let expectations get too far ahead of reality. But even with that caveat in mind, now is a great time to be doing some experimentation with Huggingface’s transformers.

Key papers

Jun 2017 | “Attention is all you need” published | https://arxiv.org/abs/1706.03762
Oct 2018 | “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” published | https://arxiv.org/abs/1810.04805
Jul 2019 | “RoBERTa: A Robustly Optimized BERT Pretraining Approach” published | https://arxiv.org/abs/1907.11692
May 2020 | “Language Models are Few-Shot Learners” published, describing use of GPT-3 | https://arxiv.org/abs/2005.14165

Real-world use of Transformers

Nov 2018 | Google open sources BERT code | https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html
Oct 2019 | Google starts rolling out BERT implementation for search | https://searchengineland.com/faq-all-about-the-bert-algorithm-in-google-search-324193
May 2020 | OpenAI introduces GPT-3 | https://en.wikipedia.org/wiki/GPT-3
Oct 2020 | Google is using BERT on “almost every English-language query” | https://searchengineland.com/google-bert-used-on-almost-every-english-query-342193
May 2021 | Microsoft introduces GPT-3 into Power Apps | https://powerapps.microsoft.com/en-us/blog/introducing-power-apps-ideas-ai-powered-assistance-now-helps-anyone-create-apps-using-natural-language/

Major Huggingface Releases

Nov 2018 | Initial 0.1.2 release of pytorch-pretrained-bert | https://github.com/huggingface/transformers/releases/tag/v0.1.2
Jul 2019 | v1.0 of their pytorch-transformers library (including the change of name from pytorch-pretrained-bert to pytorch-transformers) | https://github.com/huggingface/transformers/releases/tag/v1.0.0
Sep 2019 | v2.0, this time including the name change from pytorch-transformers to, simply, transformers | https://github.com/huggingface/transformers/releases/tag/v2.0.0
Jun 2020 | v3.0 of transformers | https://github.com/huggingface/transformers/releases/tag/v3.0.0
Nov 2020 | v4.0 of transformers | https://github.com/huggingface/transformers/releases/tag/v4.0.0
Machine Learning, Supply Chain Management

Transformers vs Spend Classification

In recent posts I’ve written about the use of Transformers in Natural Language Processing.

A friend working in the procurement space asked about their application in combating ‘decepticons’: unruly spend data. Specifically, could they help speed up classifying spend data?

So I fine-tuned a Distilbert model using publicly-available data from the TheyBuyForYou project to map text to CPV codes. It took a bit of poking around but the upshot is pretty promising. See the following classification results where the model can distinguish amongst the following types of spend items:

'mobile phone' => Radio, television, communication, telecommunication and related equipment (score = 0.9999891519546509)
'mobile app' => Software package and information systems (score = 0.9995172023773193)
'mobile billboard' => Advertising and marketing services (score = 0.5554304122924805)
'mobile office' => Construction work (score = 0.9570050835609436)
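As an aside on where those confidence scores come from: the fine-tuned model emits one logit per CPV category, and a softmax turns them into the probabilities shown. A minimal sketch – the logits here are made up for illustration, not real model output:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = [
    "Radio, television, communication, telecommunication and related equipment",
    "Software package and information systems",
    "Advertising and marketing services",
    "Construction work",
]
logits = [9.2, 1.1, 0.3, -2.0]  # hypothetical logits for 'mobile phone'
probs = softmax(logits)
best = max(range(len(labels)), key=probs.__getitem__)
print(labels[best], round(probs[best], 4))
```

A near-1.0 score like the ‘mobile phone’ example above just means one logit dominates the others; the ‘mobile billboard’ score of 0.55 means two categories were nearly tied.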

Usual disclaimers apply: this is a toy example that I played around with until it looked good for a specific use case. In reality you would need to apply domain expertise and understanding of the business. But the key point is that transformers are a lot more capable than older machine learning techniques that I’ve seen in spend classification.

The code is all on GitHub and made available under the Creative Commons BY-NC-SA 4.0 License. It doesn’t include the model itself, as the model is too big for GitHub and I haven’t had a chance to try out Git Large File Storage. If people are interested, I’m more than happy to do so.

Machine Learning, Software Development

Is it worth training an NLP Transformer from scratch?

In https://alanbuxton.wordpress.com/2021/09/21/transformers-for-use-oriented-entity-extraction/ I wrote about an experience training a custom transformer-based model to do a type of entity extraction. I tried training from scratch because the source text happened to have been preprocessed / lemmatized in such a way to include over 20 custom tokens that the RoBERTa model wouldn’t know about. My assumption was that this text would be so different to normal English that you may as well treat it as its own language.

Once I saw the results I decided to test this hypothesis somewhat by comparing the results of the preprocessed/lemmatized text with the custom model vs a raw version of the text on a fine-tuned out of the box roberta-base model.

Turns out that, for me, the fine-tuned RoBERTa model always outperformed the model trained from scratch, though the difference in performance becomes pretty minimal once you’re in the millions of sentences.

Conclusion – when working in this space, don’t make assumptions. Stand on the shoulders of as many giants as possible.

Approx number of sentences for fine-tuning | F1 Score – RoBERTa from scratch | F1 Score – Fine-tuned roberta-base
1,500 | 0.39182 | 0.52233
15,000 | 0.75294 | 0.97764
40,000 | 0.92639 | 0.99494
65,000 | 0.97260 | 0.99627
125,000 | 0.99105 | 0.99776
300,000 | 0.99670 | 0.99797
600,000 | 0.99771 | 0.99866
960,000 | 0.99783 | 0.99865
1,400,000 | 0.99810 | 0.99888

Machine Learning, Software Development, Technology Adoption

Transformers for Use-Oriented Entity Extraction

The internet is full of text information. We’re drowning in it. The only way to make sense of it is to use computers to interpret the text for us.

Consider this text:

Foo Inc announced it has acquired Bar Corp. The transaction closed yesterday, reported the Boston Globe.

This is a story about a company called ‘Foo’ buying a company called ‘Bar’. (I’m just using Foo and Bar as generic tech words, these aren’t real companies).

I was curious to see how the state of the art has evolved for pulling out these key bits of information from the text since I first looked at Dandelion in 2018.

TL;DR – existing Natural Language services vary from terrible to tolerable. But recent advances in language models, specifically transformers, point towards huge leaps in this kind of language processing.

Dandelion

Demo site: https://dandelion.eu/semantic-text/entity-extraction-demo/

While it was pretty impressive in 2018, the quality for this type of sentence is pretty poor. It only identified that the Boston Globe is an entity, but Dandelion tagged this entity as a “Work” (i.e. a work of art or literature). As I allowed more flexibility in finding entities, it also found that the terms “Inc” and “Corp” usually relate to a corporation, and it found a Toni Braxton song. Nul points.

Link to video

Explosion.ai

Demo site: https://explosion.ai/demos/displacy-ent

This organisation uses pretty standard named entity recognition. It successfully identified that there were three entities in this text. Pretty solid performance at extracting named entities, but not much help for my use case because the Boston Globe entity is not relevant to the key points of the story.

Link to video

Microsoft

Demo site: https://aidemos.microsoft.com/text-analytics

Thought I’d give Microsoft’s text analytics demo a whirl. Completely incomprehensible results. Worse than Dandelion.

Link to video

Completely WTF

Expert.ai

Demo site: https://try.expert.ai/analysis-and-classification

With Microsoft’s effort out of the way, time to look at a serious contender.

This one did a pretty good job. It identified Foo Inc and Bar Corp as businesses. It identified The Boston Globe as a different kind of entity. There was also some good inference that Foo had made an announcement and that something had acquired Bar Corp. But it didn’t go so far as joining the dots that Foo was the buyer.

In this example, labelling The Boston Globe as Mass Media is helpful. It means I can ignore it unless I specifically want to know who is reporting which story. But this helpfulness can go too far. When I changed the name “Bar Corp” to “Reuters Corp” then the entity extraction only found one business entity: Foo Inc. The other two entities were now tagged as Mass Media.

Long story short – Expert.ai is the best so far, but a user would still need to implement a fair bit of post-processing to extract the key elements from this text.

Link to video.

Expert.ai is identifying entities based on the nature of that entity, not based on the role that they are playing in the text. The relations are handled separately. I was looking for something that combined the relevant information from both the entities and their relations. I’ll call it ‘use-oriented entity extraction’ following Wittgenstein’s quote that, if you want to understand language: “Don’t look for the meaning, look for the use”. In other words, the meaning of a word in some text can differ depending on how the word is used. In one sentence, Reuters might be the media company reporting a story. In another sentence, Reuters might be the business at the centre of the story.

Enter Transformers

I wondered how Transformers would do with the challenge of identifying the different entities depending on how the words are used in the text. So I trained a custom RoBERTa using a relatively small base set of text and some judicious pre-processing. I was blown away with the results. When I first saw all the 9’s appearing in the F1 score my initial reaction was “this has to be a bug, no way is this really this accurate”. Turns out it wasn’t a bug.

I’ve called the prototype “Napoli” because I like coastal locations and Napoli includes the consonants N, L and P. This is a super-simple proof of concept and would have a long way to go to become production-ready, but even these early results were pretty amazing:

  1. It could tell me that Foo Inc is the spending party that bought Bar Corp
  2. If I changed ‘Bar’ to ‘Reuters’ it could tell me that Foo Inc bought Reuters Corp
  3. If I changed the word “acquired” to “sold” it would tell me that Foo Inc is the receiving party that sold Reuters Corp (or Bar Corp etc).
  4. It didn’t get confused by the irrelevant fact that Boston Globe was doing the reporting.

Link to video
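One way to picture ‘use-oriented’ extraction: rather than generic ORG/PER labels, the model is trained with role-aware tags, so the same company name gets a different label depending on its role in the sentence. A stdlib sketch with illustrative label names (my actual tag set may differ) showing how such tags decode into the results above:

```python
# Role-aware BIO tags for "Foo Inc acquired Bar Corp , reported the Boston Globe ."
# Note the Boston Globe is tagged O (outside): present in the text, but
# irrelevant to the story - exactly point 4 above.
tokens = ["Foo", "Inc", "acquired", "Bar", "Corp", ",",
          "reported", "the", "Boston", "Globe", "."]
tags   = ["B-SPENDER", "I-SPENDER", "O", "B-TARGET", "I-TARGET", "O",
          "O", "O", "O", "O", "O"]

def entities(tokens, tags):
    """Collect (role, text) spans from BIO tags."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [tok]]
            spans.append(current)
        elif tag.startswith("I-") and current:
            current[1].append(tok)
        else:
            current = None
    return [(role, " ".join(ws)) for role, ws in spans]

print(entities(tokens, tags))  # [('SPENDER', 'Foo Inc'), ('TARGET', 'Bar Corp')]
```

Swap “acquired” for “sold” and the training data teaches the model to emit RECEIVER tags for Foo Inc instead; the decoding step stays identical.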