
Archive for the ‘Text analytics’ Category

I received a pleasant surprise in the post today: my personal copy of Text Mining and Visualization: Case Studies Using Open-Source Tools, edited by Markus Hofmann and Andrew Chisholm. Now I don’t normally blog about books, since as editor of Informer there was a time when I would be sent all manner of titles for inspection and review. But I’ll make an exception here, partly because Chapter 7 is my own contribution (on mining search logs), as discussed in my earlier blog posts. It is complemented by 11 other chapters, covering a variety of topics organised into four sections:

(more…)

Read Full Post »

Here’s a sample of some of the things we’re working on at UXLabs this year, neatly packaged into Masters-level ‘internships’. I use quotes there because, although it’s a convenient term used by many of my academic colleagues, these opportunities are (a) unpaid and (b) remote (i.e. hosted by your own institution). So perhaps ‘co-supervised MSc projects initiated by a commercial partner’ is a more accurate term… Anyway, what we offer is support, expertise, co-supervision and access to real-world data/challenges. If you are interested in working with us on the challenges below, get in touch. (more…)

Read Full Post »

A short while ago I posted the slides to Despo Georgiou’s talk at the London Text Analytics meetup on Sentiment analysis: a comparison of four tools. Despo completed an internship at UXLabs in 2013–14, and I’m pleased to say that the paper we wrote documenting that work is due to be presented and published at the Science and Information Conference 2015 in London. The paper is co-authored with my IRSG colleague Andy MacFarlane and is available as a PDF, with the abstract appended below.

As always, comments and feedback welcome 🙂

ABSTRACT

Sentiment analysis is an emerging discipline with many analytical tools available. This project aimed to examine a number of tools regarding their suitability for healthcare data. A comparison between commercial and non-commercial tools was made using responses from an online survey which evaluated design changes made to a clinical information service. The commercial tools were Semantria and TheySay and the non-commercial tools were WEKA and Google Prediction API. Different approaches were followed for each tool to determine the polarity of each response (i.e. positive, negative or neutral). Overall, the non-commercial tools outperformed their commercial counterparts. However, due to the different features offered by the tools, specific recommendations are made for each. In addition, single-sentence responses were tested in isolation to determine the extent to which they more clearly express a single polarity. Further work can be done to establish the relationship between single-sentence responses and the sentiment they express.
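For readers curious what the non-commercial route looks like in practice, here’s a minimal sketch of three-class polarity classification in Python. It uses scikit-learn rather than the WEKA pipeline from the paper, and the survey responses and labels below are invented purely for illustration:

```python
# Minimal three-class polarity classifier (positive/negative/neutral).
# Illustrative only: the responses and labels are invented, and
# scikit-learn stands in for the WEKA pipeline used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

responses = [
    "The new layout makes it much easier to find guidance",
    "Really pleased with the redesigned search results",
    "I couldn't locate the search box at all",
    "The new navigation is confusing and slow",
    "The site loads and shows the usual content",
    "I used the service once last month",
]
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

# Bag-of-words features weighted by TF-IDF, fed to a Naive Bayes model:
# a common baseline for short, survey-length free text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(responses, labels)

print(model.predict(["Finding the right page is now very easy"]))
```

A real comparison would of course train on a much larger set of labelled responses and report cross-validated accuracy per tool.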

(more…)

Read Full Post »

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 26,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 10 sold-out performances for that many people to see it.

Click here to see the complete report.

Read Full Post »

Diana Maynard entertains the troops

Last week I had the privilege of organising the 13th meeting of the London Text Analytics group, which featured two excellent speakers: Despo Georgiou of Atos SE and Diana Maynard of Sheffield University. Despo’s talk described her internship at UXLabs, where she compared a number of tools for analysing free-text survey responses (namely TheySay, Semantria, Google Prediction API and Weka). Diana’s talk focused on sentiment analysis applied to social media, and entertained the 70-plus audience with all manner of insights drawn from her experience of working on the topic for longer than just about anyone I know. Well done to both speakers!

(more…)

Read Full Post »

Expectation Maximization applied to a new sample of 100,000 sessions

In a previous post I discussed some initial investigations into the use of unsupervised learning techniques (i.e. clustering) to identify usage patterns in web search logs. As you may recall, we had some initial success in finding interesting patterns of user behaviour in the AOL log, but when we tried to extend this and replicate a previous study of the Excite log, things started to go somewhat awry. In this post, we investigate these issues, present the results of a revised procedure, and reflect on what they tell us about searcher behaviour.
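For the curious, here’s a rough sketch of the clustering step itself. scikit-learn’s GaussianMixture is fitted via Expectation Maximization and stands in here for whatever toolkit you prefer; the per-session features and the randomly generated data are placeholders, not our actual log-derived values:

```python
# Sketch of EM-style clustering of search sessions (illustrative only:
# the features and random data stand in for real log-derived values).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Per-session features, e.g. query count, mean query length, click count.
sessions = rng.poisson(lam=[3, 4, 2], size=(100_000, 3)).astype(float)

X = StandardScaler().fit_transform(sessions)

# GaussianMixture is fitted via Expectation Maximization; in practice the
# number of clusters would be chosen by an information criterion like BIC.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
clusters = gmm.fit_predict(X)

# Summarise each cluster by its mean feature values.
for k in range(5):
    print(k, np.round(sessions[clusters == k].mean(axis=0), 2))
```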

(more…)

Read Full Post »

EM, 7 features

As I mentioned in a previous post I’ve recently been looking into the challenges of search log analysis and in particular the prospects for deriving a ‘taxonomy of search sessions’. The idea is that if we can find distinct, repeatable patterns of behaviour in search logs then we can use these to better understand user needs and therefore deliver a more effective user experience.
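To give a flavour of what that involves, here’s a hedged sketch of the first step: grouping raw log entries into sessions using a 30-minute inactivity cutoff (a common convention in the literature) and deriving simple per-session features. The log records and field layout below are invented for illustration and don’t reflect the actual AOL or Excite schemas:

```python
# Sketch: sessionize a query log and derive per-session features.
# The (user, timestamp, query) tuples below are invented; real logs such
# as AOL's have their own schemas and need careful parsing.
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # common cutoff in the literature

log = [
    ("u1", datetime(2014, 5, 1, 9, 0), "text mining"),
    ("u1", datetime(2014, 5, 1, 9, 5), "text mining tools"),
    ("u1", datetime(2014, 5, 1, 14, 0), "weather london"),
    ("u2", datetime(2014, 5, 1, 9, 1), "sentiment analysis"),
]

sessions = defaultdict(list)  # (user, session index) -> queries
last_seen, counter = {}, defaultdict(int)
for user, ts, query in sorted(log, key=lambda r: (r[0], r[1])):
    # Start a new session if the user has been idle longer than the cutoff.
    if user in last_seen and ts - last_seen[user] > SESSION_GAP:
        counter[user] += 1
    last_seen[user] = ts
    sessions[(user, counter[user])].append(query)

# Example features per session: query count and mean query length in terms.
for key, queries in sessions.items():
    terms = [len(q.split()) for q in queries]
    print(key, len(queries), sum(terms) / len(terms))
```

Features like these, computed per session, are what the clustering in the next post operates over.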

We’re not the first to attempt this, of course; in fact, the whole area of search log analysis has an academic literature extending back at least a couple of decades. And it is quite topical right now, with both Elasticsearch and LucidWorks releasing their own log file analysis tools (ELK and SiLK respectively). So in this post I’ll be discussing some of the challenges in our own work and sharing some of the initial findings.

(more…)

Read Full Post »
