Posts Tagged 'text mining'
Introduction to Natural Language Processing (slideshow)
Posted in Text analytics, tagged natural language processing, NLP, Text analytics, text mining on January 22, 2019| Leave a Comment »
Earlier this week I gave a talk called "Introduction to NLP" as part of a class I am currently teaching at the University of Notre Dame. This is an update of a talk I originally gave in 2010, whilst working for Endeca. I had intended to make a wholesale update to all the slides, but noticed that one of them was worth keeping verbatim: a snapshot of the state of the art back then (see slide 38). Less than a decade has passed since then (that's a short time to me!) but there are some interesting and noticeable changes. For example, there is no mention of word2vec, GloVe or fastText, or any of the neurally inspired distributed representations and frameworks that are now so popular (let alone BERT, ELMo and the latest wave). Also no mention of sentiment analysis: maybe that was an oversight on my part, but I rather think that what we now perceive as a commodity technology was just not sufficiently mainstream back then.
Book review: Deep Text by Tom Reamy
Posted in Information architecture, Search, Text analytics, tagged natural language processing, NLP, Text analytics, text mining on April 4, 2017| Leave a Comment »
When I started the London Text Analytics meetup group some seven years ago, 'text analytics' was a term used by few, and understood by even fewer. Apart from a handful of enthusiasts and academics (who preferred the label of 'natural language processing' anyway), the field was either overlooked or ignored by most people. Even the advent of 'big data' – of which the vast majority was unstructured – did little to change perceptions.
But now, in these days of chatbot-fuelled AI mania, it seems everyone wants to be part of the action. The commercialisation and democratisation of hitherto academic subjects such as AI and machine learning have highlighted a need for practical skills that focus explicitly on the management of unstructured data. Career opportunities have inevitably followed, with job adverts now calling directly for skills in natural language processing and text mining. So the publication of Tom Reamy's book 'Deep Text: Using Text Analytics to Conquer Information Overload, Get Real Value from Social Media, and Add Big(ger) Text to Big Data' is indeed well timed.
London Text Analytics: call for venues and speakers
Posted in Events, Text analytics, tagged natural language processing, NLP, opinion mining, sentiment analysis, text mining on July 21, 2016| Leave a Comment »
After a brief hiatus, I'm pleased to say that we will shortly be relaunching the London Text Analytics meetup. As many of you know, in the recent past we have organized some relatively large and ambitious events at a variety of locations. But we have struggled to find a regular venue, and as a result have had difficulty in maintaining a scheduled programme of events.
What we really need is a venue we can use on a more regular schedule, ideally on an ex-gratia basis. It doesn't have to be huge – in fact, a programme of smaller (but more frequent) meetups is in many ways preferable to a handful of big gatherings.
Sentiment analysis: a comparison of four tools
Posted in Events, Text analytics, tagged natural language processing, NLP, opinion mining, sentiment analysis, text mining on July 30, 2014| Leave a Comment »
Last week I had the privilege of organising the 13th meeting of the London Text Analytics group, which featured two excellent speakers: Despo Georgiou of Atos SE and Diana Maynard of Sheffield University. Despo's talk described her internship at UXLabs, where she compared a number of tools for analysing free-text survey responses (namely TheySay, Semantria, Google Prediction API and Weka). Diana's talk focused on sentiment analysis applied to social media, and entertained the 70+ audience with all manner of insights based on her expertise of having worked on the topic for longer than just about anyone I know. Well done to both speakers!
MeetUp review: AnnoMarket – text analytics in the cloud
Posted in Events, Text analytics, tagged cloud computing, information extraction, natural language processing, Text analytics, text mining on February 13, 2014| 1 Comment »
After a brief hiatus, I'm pleased to say the London Text Analytics meetup resumed last night with an excellent set of talks from the participants in the AnnoMarket project. For those of you unfamiliar, this project is concerned with creating a cloud-based, open market for text analytics applications: a kind of NLP 'app store', if you will. The caveat is that each app must be implemented as a GATE pipeline and conform to their packaging constraints, but as we've discussed before, GATE is a pretty flexible platform that integrates well with third-party applications and services.
Sentiment analysis tools for non-coders?
Posted in Text analytics, tagged natural language processing, sentiment analysis, Text analytics, text mining on June 11, 2013| 7 Comments »
I have an intern who will shortly be starting a project to extract sentiment from free-text survey responses in the healthcare domain. She doesn't have much programming experience, so is ideally looking for a toolkit/platform that will allow her to experiment with various approaches with minimal coding (e.g. just some elementary scripting).
Free is best, although a commercial product on a trial basis might work. Any suggestions?
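For a sense of what "minimal coding" can mean here, the core idea behind the simplest lexicon-based sentiment tools fits in a few lines of Python. This is only an illustrative sketch – the word lists and scoring rule below are my own toy assumptions, not any particular product's lexicon:

```python
# A minimal lexicon-based sentiment scorer: a toy illustration of the
# kind of approach entry-level tools offer out of the box.
POSITIVE = {"good", "great", "helpful", "friendly", "excellent"}
NEGATIVE = {"bad", "poor", "rude", "slow", "unhelpful"}

def score(text):
    """Return (positive hits - negative hits) for a free-text response."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

responses = [
    "The staff were friendly and helpful",
    "Very slow service and rude reception",
]
for r in responses:
    label = "positive" if score(r) > 0 else "negative" if score(r) < 0 else "neutral"
    print(label, "-", r)
```

A real tool adds the things this sketch ignores (negation, intensifiers, domain-specific vocabulary), which is exactly why a curated platform beats hand-rolling it.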
How do you compare two text classifiers?
Posted in Text analytics, tagged natural language processing, NLP, Text analytics, text classifiers, text mining on April 27, 2012| 9 Comments »
I need to compare two text classifiers – one human, one machine. They are assigning multiple tags from an ontology. We have an initial corpus of ~700 records tagged by both classifiers. The goal is to measure the ‘value added’ by the human. However, we don’t yet have any ground truth data (i.e. agreed annotations).
Any ideas on how best to approach this problem in a commercial environment (i.e. quickly, simply, with minimum fuss), or indeed what’s possible?
I thought of measuring the absolute delta between the two tag profiles (regardless of polarity) to give a ceiling on the value added, and/or comparing the profile of tags assigned by each human coder against the centroid to give a crude measure of inter-coder agreement (and hence of the difficulty of the task). But neither really measures the 'value added' that I'm looking for, so I'm sure there must be better solutions.
Suggestions, anyone? Or is this as far as we can go without ground truth data?
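To make the two measures sketched above concrete, here is a minimal Python sketch on toy data (the tags and records are invented for illustration; real inputs would be the ~700 dual-tagged records):

```python
from collections import Counter

def tag_profile(annotations):
    """Count how often each tag is assigned across all records."""
    counts = Counter()
    for tags in annotations:
        counts.update(tags)
    return counts

def profile_delta(human, machine):
    """Sum of absolute per-tag count differences between two tag
    profiles: the 'absolute delta' ceiling on value added."""
    all_tags = set(human) | set(machine)
    return sum(abs(human[t] - machine[t]) for t in all_tags)

def mean_jaccard(human, machine):
    """Average per-record Jaccard overlap between the two classifiers'
    tag sets: a crude agreement measure needing no ground truth."""
    scores = []
    for h, m in zip(human, machine):
        union = set(h) | set(m)
        scores.append(len(set(h) & set(m)) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Toy corpus: for each record, the set of tags each classifier assigned.
human_tags   = [{"cardiology", "risk"}, {"oncology"}, {"risk"}]
machine_tags = [{"cardiology"},         {"oncology"}, {"risk", "triage"}]

print(profile_delta(tag_profile(human_tags), tag_profile(machine_tags)))  # 2
print(round(mean_jaccard(human_tags, machine_tags), 2))  # 0.67
```

Note that both numbers quantify disagreement, not which classifier is right – which is precisely the limitation the post is asking about.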
Text Analytics Summit Europe – highlights and reflections
Posted in Events, Text analytics, tagged Information Retrieval, natural language processing, NLP, sentiment analysis, Text analytics, text mining, User research on April 26, 2012| 4 Comments »
Earlier this week I had the privilege of attending the Text Analytics Summit Europe at the Royal Garden Hotel in Kensington. Some of you may of course recognise this hotel as the base for Justin Bieber's recent visit to London, but sadly (or is that fortunately?) he didn't join us. Next time, maybe…
Still, the event was highly enjoyable, and served as visible testament to the increasing maturity of the industry. When I did my PhD in natural language processing some *cough* years ago, there really wasn't a lot happening outside of academia – the best you'd get when mentioning 'NLP' to someone was the assumption that you'd fallen victim to some new-age psychobabble. So it's great to see the discipline finally 'going mainstream' and enjoying attention from a healthy cross-section of society. Sadly I wasn't able to attend the whole event, but here are a few of the standouts for me:
Text Analytics for Medical Informatics + Question Answering
Posted in Events, Text analytics, tagged information extraction, natural language processing, NLP, Text analytics, text mining on August 2, 2011| Leave a Comment »
Here’s a quick shout out for Friday’s meeting of the London Text Analytics group, which will be held at Fizzback‘s offices on the Strand at 18:30. As usual, we’ll aim to start with a couple of informal talks then adjourn to a local pub for a drink or two afterwards. As it happens, this meetup is now full, but you can always join the waiting list or (if you’re not yet a member) sign up for early notification of the next event. Full details below – hope to see you there.
Automating the formalization of clinical guidelines using information extraction: an overview of recent lexical approaches
Phil Gooch (City University)
Formalizing guideline text into a computable model, and linking clinical terms and recommendations in clinical guidelines to concepts in the electronic health record (EHR), is difficult because, typically, both the guideline text and the EHR content may be ambiguous, inconsistent, and reliant on implicit background medical knowledge. How can lexically based IE approaches help to automate this task? In this presentation, various design patterns are discussed and some tools presented.
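To give a flavour of what "lexically based IE" means in this setting, here is a deliberately simple sketch: surface patterns that pull candidate recommendations and drug doses out of guideline text. The patterns and the example sentence are invented for illustration, not taken from the talk:

```python
import re

# Toy guideline sentence (invented for illustration).
guideline = ("Patients with confirmed hypertension should be offered "
             "amlodipine 5 mg once daily; treatment must be reviewed "
             "after 4 weeks.")

# Modal verbs often signal a recommendation and hint at its strength.
recommendation = re.compile(r"\b(should|must|may|is recommended)\b", re.I)

# A drug name followed by a numeric dose and a unit.
dose = re.compile(r"\b([a-z]+)\s+(\d+(?:\.\d+)?)\s*(mg|g|ml)\b", re.I)

print([m.group(1) for m in recommendation.finditer(guideline)])  # ['should', 'must']
print(dose.findall(guideline))  # [('amlodipine', '5', 'mg')]
```

Patterns like these are brittle on their own, which is why they are usually combined with terminology lookup (e.g. against a drug lexicon) and the kind of design patterns the abstract refers to.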
Question-Answering over Linked Data
Danica Damljanovic (Sheffield University)
The availability and growth of the Linked Open Data cloud has made its rich semantics easy to access, but challenging to exploit, mainly due to its scale. In this talk I will discuss the challenges of building a Question-Answering system that uses these data as the main source for finding answers. I will introduce the FREyA system, which combines syntactic parsing with semantic annotation in order to interpret the question correctly, and which engages the user in a dialogue if necessary. Through this dialogue, FREyA allows the user to validate or change the semantic meaning of each word in the question; the user's input is used to train the system and improve its performance over time.
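For a rough intuition of the mapping step such systems perform, here is a deliberately naive sketch: annotate question words with ontology terms from a lookup table, then assemble a SPARQL query. The lexicon, prefixes and query shape are my own illustrative assumptions, not FREyA's actual implementation (which uses a full syntactic parse and user dialogue to resolve ambiguity):

```python
# Toy lexicon mapping question words to ontology terms (assumed,
# DBpedia-style prefixes used purely for illustration).
LEXICON = {
    "capital": "dbo:capital",  # a property
    "france":  "dbr:France",   # a resource
}

def to_sparql(question):
    """Very naive: link known words to ontology terms, then treat one
    property + one resource as a single triple pattern."""
    words = question.lower().rstrip("?").split()
    annotations = [LEXICON[w] for w in words if w in LEXICON]
    prop = next(a for a in annotations if a.startswith("dbo:"))
    res = next(a for a in annotations if a.startswith("dbr:"))
    return f"SELECT ?answer WHERE {{ {res} {prop} ?answer }}"

print(to_sparql("What is the capital of France?"))
# SELECT ?answer WHERE { dbr:France dbo:capital ?answer }
```

The hard part, of course, is everything this sketch skips: ambiguous mappings, unseen vocabulary, and multi-triple questions – which is where the parsing and the clarification dialogue come in.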
Related Posts:
- Prostitutes Appeal to Pope: Text Analytics applied to Search
- The role of Natural Language Processing in Information Retrieval
- IR book is out!
- Applying text analytics to product innovation and legal cases
- Text Analytics: Yesterday, Today and Tomorrow