Posts Tagged ‘evaluation’

Unless you’ve been on another planet for the last year or so, you’ll almost certainly have noticed that chatbots (and conversational agents in general) became quite popular during the course of 2016. It seems that every day a new start-up or bot framework was launched, no doubt fuelled at least in part by the growth in the application of data science to language data, combined with a growing awareness of machine learning and AI techniques more generally. So it’s not surprising that we now see, on a daily basis, all manner of commentary on various aspects of chatbots, from marketing and design to development, commercialisation and beyond.

But one topic that doesn’t seem to have received quite as much attention is that of evaluation. It seems that in our collective haste to join the chatbot party, we risk overlooking a key question: how do we know when the efforts we have invested in design and development have actually succeeded? What kind of metrics should be applied, and what constitutes success for a chatbot anyway?

(more…)

Read Full Post »

A client of mine wants to measure the difference between manual tagging and auto-classification of unstructured documents, focusing in particular on the impact on retrieval (i.e. relevance ranking). At the moment they are considering two contrasting approaches:

(more…)
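Whichever two approaches they settle on, the underlying measurement is much the same: index the collection once with the manual tags and once with the auto-classified ones, run the same queries against both, and score the ranked results against a shared set of relevance judgements. Here is a minimal sketch of that comparison, assuming binary judgements; the doc IDs, result lists and nDCG cut-off are all hypothetical rather than anything from the client’s actual setup:

```python
from math import log2

def dcg(ranked_ids, relevant_ids, k=10):
    """Discounted cumulative gain at k, with binary gains (relevant = 1)."""
    return sum(
        1.0 / log2(rank + 1)                      # rank 1 -> 1/log2(2) = 1.0
        for rank, doc_id in enumerate(ranked_ids[:k], start=1)
        if doc_id in relevant_ids
    )

def ndcg(ranked_ids, relevant_ids, k=10):
    """DCG normalised by the best DCG achievable for this query."""
    ideal = dcg(list(relevant_ids), relevant_ids, k)
    return dcg(ranked_ids, relevant_ids, k) / ideal if ideal else 0.0

# Hypothetical judgements and result lists from the two pipelines.
judgements  = {"q1": {"d3", "d7"}, "q2": {"d2"}}
manual_runs = {"q1": ["d3", "d1", "d7"], "q2": ["d5", "d2"]}
auto_runs   = {"q1": ["d1", "d3", "d9"], "q2": ["d2", "d4"]}

for q, relevant in judgements.items():
    delta = ndcg(manual_runs[q], relevant) - ndcg(auto_runs[q], relevant)
    print(f"{q}: nDCG@10 difference (manual - auto) = {delta:+.3f}")
```

The per-query differences are often more informative than a single averaged score, since they show where one representation wins and the other loses.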

Read Full Post »

I am trying to put together a framework for search quality evaluation for a specialist information provider.

At the moment quality is measured by counting the number of hits for certain key docs across various queries, and monitoring changes on a regular schedule. I’d like to broaden this out into something more scalable and robust, from which a more extensive range of metrics can be calculated. (As an aside, I know there are many ways of evaluating the overall search experience, but I’m focusing solely on ranked retrieval and relevance here.)

We are in the fortunate position of being able to acquire binary relevance judgements from subject matter experts (SMEs), so we can aspire to something like the TREC approach:

http://trec.nist.gov/data/reljudge_eng.html
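For concreteness, binary judgements of that kind are enough to support the standard ranked-retrieval metrics. A minimal sketch, with hypothetical doc IDs and a single made-up run for one query (this is not the TREC qrels format, just the same idea reduced to simple Python data structures):

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are judged relevant."""
    return sum(1 for d in ranked_ids[:k] if d in relevant_ids) / k

def reciprocal_rank(ranked_ids, relevant_ids):
    """1/rank of the first relevant result, or 0 if none is retrieved."""
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            return 1.0 / rank
    return 0.0

def average_precision(ranked_ids, relevant_ids):
    """Mean of precision taken at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

# Hypothetical SME judgements and one made-up ranked result list.
relevant = {"doc12", "doc48"}
run = ["doc3", "doc12", "doc7", "doc48", "doc9"]

print(precision_at_k(run, relevant, 5))   # 0.4
print(reciprocal_rank(run, relevant))     # 0.5
print(average_precision(run, relevant))   # (1/2 + 2/4) / 2 = 0.5
```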

But of course we are running just a single site search engine here, so we can’t pool results across runs from multiple systems to produce a consolidated ‘gold standard’ result set as you would in the TREC framework.

I am sure this scenario repeats the world over. One solution I can think of is to run your existing search engine with various alternative configurations (e.g. precision-oriented, recall-oriented, freshness-oriented) and aggregate the top N results from each to emulate the pooling approach, as sketched below. Can anyone suggest any others? Or perhaps an alternative method entirely?
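As a rough sketch of how that pooling emulation might look in code — the configuration names and the search() callable are hypothetical stand-ins for whatever the engine actually exposes:

```python
def emulate_pooling(queries, configurations, search, top_n=20):
    """Build a judgement pool per query by unioning the top-N results
    returned under each alternative engine configuration.

    search(query, config) is a hypothetical callable returning a ranked
    list of document IDs for the given configuration.
    """
    pools = {}
    for query in queries:
        pool = set()
        for config in configurations:
            pool.update(search(query, config)[:top_n])
        pools[query] = pool            # docs to send to the SMEs for judgement
    return pools

# Illustrative usage with a fake search function standing in for the real engine.
def fake_search(query, config):
    return [f"{config}-doc{i}" for i in range(1, 51)]

pools = emulate_pooling(
    queries=["example query"],
    configurations=["precision", "recall", "freshness"],
    search=fake_search,
)
print(len(pools["example query"]))     # 60 docs in this toy example (no overlap)
```

In practice the per-configuration runs would overlap, so the pool is usually much smaller than configurations × N, which helps keep the SME judgement workload manageable.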

(more…)

Read Full Post »