Posts Tagged ‘NLP’

Recently I’ve had the privilege of working with colleagues at LexisNexis on a variety of projects in the area of artificial intelligence and natural language processing. So I am pleased to share with you the following paper, which has been accepted for presentation at the 16th International Conference on Artificial Intelligence and Law in London next week. It’s co-authored with colleagues Zach Bennett and Kate Farmer.

We’ll be presenting this as part of the demo session on Tuesday afternoon. The paper is just two pages long, so it’s quite concise, but we are hoping to submit an extended version to a suitable conference or workshop in due course. In the meantime, comments and feedback welcome 🙂

(more…)

Read Full Post »

When I started the London Text Analytics meetup group some seven years ago, ‘text analytics’ was a term used by few, and understood by even fewer. Apart from a handful of enthusiasts and academics (who preferred the label of “natural language processing” anyway), the field was either overlooked or ignored by most people. Even the advent of “big data” – of which the vast majority was unstructured – did little to change perceptions.

But now, in these days of chatbot-fuelled AI mania, it seems everyone wants to be part of the action. The commercialisation and democratisation of hitherto academic subjects such as AI and machine learning have highlighted a need for practical skills that focus explicitly on the management of unstructured data. Career opportunities have inevitably followed, with job adverts now calling directly for skills in natural language processing and text mining. So the publication of Tom Reamy’s book “Deep Text: Using Text Analytics to Conquer Information Overload, Get Real Value from Social Media, and Add Big(ger) Text to Big Data” is indeed well timed.

(more…)

Read Full Post »

After a brief hiatus, I’m pleased to say that we will shortly be relaunching the London Text Analytics meetup. As many of you know, in the recent past we have organised some relatively large and ambitious events at a variety of locations. But we have struggled to find a regular venue, and as a result have had difficulty in maintaining a scheduled programme of events.

What we really need is a venue we can use on a more regular schedule, ideally on an ex-gratia basis. It doesn’t have to be huge – in fact, a programme of smaller (but more frequent) meetups is in many ways preferable to a handful of big gatherings.

(more…)

Read Full Post »

I received a pleasant surprise in the post today: my personal copy of Text Mining and Visualization: Case Studies Using Open-Source Tools, edited by Markus Hofmann and Andrew Chisholm. Now I don’t normally blog about books: as editor of Informer, there was a time when I would be sent all manner of titles for inspection and review. But I’ll make an exception here, partly because Chapter 7 is my own contribution (on mining search logs), as discussed in my earlier blog posts. It is complemented by 11 other chapters, covering a variety of topics organised into four sections:

(more…)

Read Full Post »

Diana Maynard entertains the troops

Last week I had the privilege of organising the 13th meeting of the London Text Analytics group, which featured two excellent speakers: Despo Georgiou of Atos SE and Diana Maynard of Sheffield University. Despo’s talk described her internship at UXLabs, where she compared a number of tools for analysing free-text survey responses (namely TheySay, Semantria, Google Prediction API and Weka). Diana’s talk focused on sentiment analysis applied to social media, and entertained the audience of 70+ with all manner of insights drawn from her experience of working on the topic for longer than just about anyone I know. Well done to both speakers!

(more…)

Read Full Post »

I need to compare two text classifiers – one human, one machine. They are assigning multiple tags from an ontology. We have an initial corpus of ~700 records tagged by both classifiers. The goal is to measure the ‘value added’ by the human. However, we don’t yet have any ground truth data (i.e. agreed annotations).

Any ideas on how best to approach this problem in a commercial environment (i.e. quickly, simply, with minimum fuss), or indeed what’s possible?

I thought of measuring the absolute delta between the two profiles (regardless of polarity) to give a ceiling on the value added, and/or comparing the profile of tags added by each human coder against the centroid to give a crude measure of inter-coder agreement (and hence difficulty of the task). But neither really measures the ‘value added’ that I’m looking for, so I’m sure there must be better solutions.
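
For concreteness, here’s a minimal sketch (in Python, assuming scikit-learn is available) of how those two crude measures might be computed; the tag names and records below are hypothetical placeholders:

```python
from sklearn.metrics import cohen_kappa_score
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical tag assignments for three records; in practice these would
# be the ~700 records tagged by both classifiers.
human_tags = [{"contract", "tort"}, {"tort"}, {"ip", "contract"}]
machine_tags = [{"contract"}, {"tort", "ip"}, {"ip", "contract"}]

# Absolute delta per record: the size of the symmetric difference between
# the two tag sets. Averaged over the corpus, this bounds the 'value added'
# from above, since it counts every disagreement regardless of who is right.
deltas = [len(h ^ m) for h, m in zip(human_tags, machine_tags)]
print("mean per-record delta:", sum(deltas) / len(deltas))

# Binarise both sets of assignments into a shared tag space so the two
# classifiers can be compared tag by tag.
mlb = MultiLabelBinarizer()
mlb.fit(human_tags + machine_tags)
H = mlb.transform(human_tags)
M = mlb.transform(machine_tags)

# Per-tag Cohen's kappa: chance-corrected agreement between the two
# classifiers on each tag. Consistently low kappa flags tags that are hard
# (or where the classifiers genuinely differ) even without ground truth.
for i, tag in enumerate(mlb.classes_):
    print(tag, cohen_kappa_score(H[:, i], M[:, i]))
```

Neither measure tells us who is right, of course, but together they at least bound the size of the disagreement and show which tags are driving it.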

Suggestions, anyone? Or is this as far as we can go without ground truth data?

(more…)

Read Full Post »
