Just in case you missed it, here are details of the latest issue of Informer, which came out this week. As usual, lots of good stuff, with a mix of conference reviews, feature articles and several obligatory mentions of the word ‘chatbot’. For further details see the Informer website. Or if you fancy becoming a contributor, get in touch!
We are just in the process of putting together the programme for Industry Day at ECIR 2017, and as part of that invite speaker proposals as outlined below. If you’re interested in presenting, or have any questions, just drop me a line. Hope to see you there!
This year’s ECIR conference will include an Industry Day, following very successful events at ECIR in recent years. The Industry Day will be held on Thursday 13th April 2017, immediately after the regular conference program.
Unless you’ve been on another planet for the last year or so, you’ll almost certainly have noticed that chatbots (and conversational agents in general) became quite popular during the course of 2016. It seems that every day a new start-up or bot framework was launched, no doubt fuelled at least in part by a growth in the application of data science to language data, combined with a growing awareness of machine learning and AI techniques more generally. So it’s not surprising that we now see on a daily basis all manner of commentary on various aspects of chatbots, from marketing to design, development, commercialisation, etc.
But one topic that doesn’t seem to have received quite as much attention is that of evaluation. It seems that in our collective haste to join the chatbot party, we risk overlooking a key question: how do we know when the efforts we have invested in design and development have actually succeeded? What kind of metrics should be applied, and what constitutes success for a chatbot anyway?
Most of us who work on digital products are familiar with the concept of A/B or multivariate testing – the process of exposing users to multiple variations of a design concept and using their aggregate behaviour to identify the optimal design, based on a predefined set of metrics. By gathering data across thousands of individual user sessions, multivariate testing can provide a rigorous evidence base for principled decision making. In principle, such data-centric, quantitative research techniques can be highly complementary to the more qualitative, user-centric research techniques typically associated with the UX profession.
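The simplest instance of the approach above is a two-variant (A/B) test on a single conversion metric. As a sketch of how such aggregate behaviour is turned into a decision, the function below runs a standard two-proportion z-test; the function name and the session counts in the usage example are hypothetical, purely for illustration.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?

    conv_a/conv_b: number of converting sessions per variant
    n_a/n_b: total sessions per variant
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: 120/2400 conversions for design A, 156/2400 for design B
z, p = ab_test_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A multivariate test generalises this to several design factors tested at once, but the core idea is the same: only declare a winner when the observed difference is unlikely to be noise given thousands of sessions.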
On Wednesday last week I had the honour of co-chairing the 11th Search Solutions conference at BCS London in Covent Garden. As always, the event included presentations, panels and keynote talks by influential industry leaders on novel and emerging applications in search and information retrieval. But this year’s event was memorable for a different reason: we were delighted to be able to reveal the outcome of the inaugural IRSG Search Industry Awards. In a ceremony held at the close of Search Solutions 2016, it was my honour to announce the winners for the following three categories:
I am looking for a highly skilled, articulate researcher/developer for a short engagement to investigate the options for migrating an existing desktop software application to the cloud. I need someone who understands the complexities involved in making an application that was built in JavaFX work seamlessly on the web.
The deliverable will be a technical report that outlines the strengths, weaknesses, opportunities, and threats associated with each option, along with a set of practical recommendations for next steps and the costs associated with each.
To succeed in this role you will need to be:
- highly educated, technically literate and open-minded
- able to rapidly understand the requirements embodied in an existing desktop application, and anticipate future requirements and scalability concerns
- willing to ask intelligent questions to make sure you understand the subtleties, trade-offs and complexities of the project and are doing the best possible job for your client
- skilled in researching different technical options and critically evaluating them
- able to form robust and defensible recommendations based on your research
- experienced in communicating those recommendations in the form of a credible and detailed technical report
- prepared to sign an NDA governed by English law.
Lots more information available on request. Principals only please (no agencies). Can you recommend anyone?
Just in case you missed it, here are details of the latest issue of Informer, which came out this week. As usual, lots of good stuff, with a mix of conference reviews, feature articles and several obligatory mentions of the words ‘deep learning’. For further details see the Informer website. Or if you fancy becoming a contributor, get in touch!