
Posts Tagged ‘Text analytics’

When I started the London Text Analytics meetup group some seven years ago, ‘text analytics’ was a term used by few, and understood by even fewer. Apart from a handful of enthusiasts and academics (who preferred the label of “natural language processing” anyway), the field was either overlooked or ignored by most people. Even the advent of “big data” – of which the vast majority was unstructured – did little to change perceptions.

But now, in these days of chatbot-fuelled AI mania, it seems everyone wants to be part of the action. The commercialisation and democratisation of hitherto academic subjects such as AI and machine learning have highlighted a need for practical skills that focus explicitly on the management of unstructured data. Career opportunities have inevitably followed, with job adverts now calling directly for skills in natural language processing and text mining. So the publication of Tom Reamy’s book “Deep Text: Using Text Analytics to Conquer Information Overload, Get Real Value from Social Media, and Add Big(ger) Text to Big Data” is indeed well timed.



Read Full Post »

A short while ago I posted the slides to Despo Georgiou’s talk at the London Text Analytics meetup on Sentiment analysis: a comparison of four tools. Despo completed an internship at UXLabs in 2013–14, and I’m pleased to say that the paper we wrote documenting that work is due to be presented and published at the Science and Information Conference 2015, in London. The paper is co-authored with my IRSG colleague Andy MacFarlane and is available as a PDF, with the abstract appended below.

As always, comments and feedback welcome 🙂

ABSTRACT

Sentiment analysis is an emerging discipline with many analytical tools available. This project aimed to examine a number of tools regarding their suitability for healthcare data. A comparison between commercial and non-commercial tools was made using responses from an online survey which evaluated design changes made to a clinical information service. The commercial tools were Semantria and TheySay and the non-commercial tools were WEKA and Google Prediction API. Different approaches were followed for each tool to determine the polarity of each response (i.e. positive, negative or neutral). Overall, the non-commercial tools outperformed their commercial counterparts. However, due to the different features offered by the tools, specific recommendations are made for each. In addition, single-sentence responses were tested in isolation to determine the extent to which they more clearly express a single polarity. Further work can be done to establish the relationship between single-sentence responses and the sentiment they express.
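The kind of evaluation described in the abstract (scoring each tool's positive/negative/neutral predictions against the hand-labelled survey responses) boils down to accuracy plus per-class precision and recall. A minimal sketch, with invented example labels (the real corpus and tool outputs are of course not shown here):

```python
def polarity_metrics(gold, pred, classes=("positive", "negative", "neutral")):
    """Accuracy and per-class precision/recall for polarity labels."""
    assert len(gold) == len(pred)
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    metrics = {}
    for c in classes:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        predicted = sum(1 for p in pred if p == c)
        actual = sum(1 for g in gold if g == c)
        metrics[c] = {
            "precision": tp / predicted if predicted else 0.0,
            "recall": tp / actual if actual else 0.0,
        }
    return accuracy, metrics

# Hypothetical responses labelled by hand (gold) and by a tool (pred)
gold = ["positive", "negative", "neutral", "positive", "negative"]
pred = ["positive", "neutral",  "neutral", "positive", "negative"]
acc, per_class = polarity_metrics(gold, pred)
print(acc)  # 0.8
```

Running each tool's predictions through the same function gives directly comparable numbers, which is essentially how the commercial and non-commercial tools were ranked.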


Read Full Post »

Valentin Tablan kicks things off (photo: Hercules Fisherman)

After a brief hiatus I’m pleased to say the London Text Analytics meetup resumed last night with an excellent set of talks from the participants in the AnnoMarket project. For those of you unfamiliar, this project is concerned with creating a cloud-based, open market for text analytics applications: a kind of NLP ‘app store’, if you will. The caveat is that each app must be implemented as a GATE pipeline and conform to their packaging constraints, but as we’ve discussed before, GATE is a pretty flexible platform that integrates well with third-party applications and services.


Read Full Post »

I have an intern who will shortly be starting a project to extract sentiment from free text survey responses from the healthcare domain. She doesn’t have much programming experience, so she is ideally looking for a toolkit/platform that will allow her to experiment with various approaches with minimal coding (e.g. perhaps just some elementary scripting etc.).

Free is best, although a commercial product on a trial basis might work. Any suggestions?
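For context on just how little coding the lexicon-based family of approaches needs, here is a toy sketch: the word lists below are invented for illustration (real lexicons are far larger and weighted), but the whole approach is a dictionary lookup plus a sign test.

```python
# Toy word lists, invented for illustration only
POSITIVE = {"good", "great", "helpful", "clear", "easy"}
NEGATIVE = {"bad", "confusing", "slow", "unclear", "difficult"}

def polarity(response: str) -> str:
    """Classify a free-text response as positive/negative/neutral
    by counting lexicon hits."""
    words = [w.strip(".,!?") for w in response.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("The new layout is clear and easy to use"))  # positive
print(polarity("Navigation was slow and confusing"))        # negative
```

Anything more sophisticated (machine-learned classifiers, aspect-level sentiment) is where a proper toolkit earns its keep, hence the question.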

Related Posts:

  1. How do you compare two text classifiers?
  2. Text Analytics Summit Europe – highlights and reflections
  3. How do you measure site search quality?
  4. Prostitutes Appeal to Pope: Text Analytics applied to Search
  5. The role of Natural Language Processing in Information Retrieval

Read Full Post »

A client of mine wants to measure the difference between manual tagging and auto-classification on unstructured documents, focusing in particular on its impact on retrieval (i.e. relevance ranking). At the moment they are considering two contrasting approaches:
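One way to quantify the retrieval impact, assuming graded relevance judgments can be obtained for a sample of queries, is to score the result list produced under each tagging scheme with a rank-sensitive metric such as nDCG and compare. A minimal sketch (the two runs below are invented for illustration):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_rels, k=10):
    """nDCG@k: observed DCG normalised by the ideal (sorted) ordering."""
    ideal_dcg = dcg(sorted(ranked_rels, reverse=True)[:k])
    return dcg(ranked_rels[:k]) / ideal_dcg if ideal_dcg else 0.0

# Hypothetical graded judgments (2 = highly relevant, 1 = partial, 0 = not)
# for the top results returned for the same query under each tagging scheme
manual_run = [2, 2, 1, 0, 1]
auto_run   = [1, 2, 0, 2, 1]
print(round(ndcg(manual_run), 3))
print(round(ndcg(auto_run), 3))
```

Averaging the per-query scores over a representative query sample would then give a single headline number for each tagging approach.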


Read Full Post »

I need to compare two text classifiers – one human, one machine. They are assigning multiple tags from an ontology. We have an initial corpus of ~700 records tagged by both classifiers. The goal is to measure the ‘value added’ by the human. However, we don’t yet have any ground truth data (i.e. agreed annotations).

Any ideas on how best to approach this problem in a commercial environment (i.e. quickly, simply, with minimum fuss), or indeed what’s possible?

I thought of measuring the absolute delta between the two profiles (regardless of polarity) to give a ceiling on the value added, and/or comparing the profile of tags added by each human coder against the centroid to give a crude measure of inter-coder agreement (and hence difficulty of the task). But neither really measures the ‘value added’ that I’m looking for, so I’m sure there must be better solutions.
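For what it’s worth, both of the measures sketched above are cheap to compute. A minimal illustration, with invented tag assignments (the real corpus is ~700 records against an ontology): the corpus-level profile delta gives the ceiling figure, and mean per-record Jaccard overlap gives a crude agreement score between the two classifiers.

```python
from collections import Counter

def profile_delta(human_tags, machine_tags):
    """Sum of absolute per-tag frequency differences across the corpus
    (the 'absolute delta between the two profiles')."""
    h, m = Counter(), Counter()
    for tags in human_tags:
        h.update(tags)
    for tags in machine_tags:
        m.update(tags)
    return sum(abs(h[t] - m[t]) for t in set(h) | set(m))

def mean_jaccard(human_tags, machine_tags):
    """Average per-record overlap between the two classifiers' tag sets."""
    scores = []
    for h, m in zip(human_tags, machine_tags):
        h, m = set(h), set(m)
        union = h | m
        scores.append(len(h & m) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Hypothetical tag assignments for three records
human   = [{"cardiology", "survey"}, {"oncology"}, {"survey", "ux"}]
machine = [{"cardiology"},           {"oncology"}, {"survey"}]
print(profile_delta(human, machine))           # 2
print(round(mean_jaccard(human, machine), 3))  # 0.667
```

Neither number tells you whether the human's extra tags are *correct*, of course, which is exactly the ground-truth gap raised above.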

Suggestions, anyone? Or is this as far as we can go without ground truth data?


Read Full Post »
