Posts Tagged ‘User research’

Earlier this week I had the privilege of attending the Text Analytics Summit Europe at the Royal Garden Hotel in Kensington. Some of you may of course recognise this hotel as the base for Justin Bieber’s recent visit to London, but sadly (or is that fortunately?) he didn’t join us. Next time, maybe…

Still, the event was highly enjoyable, and served as a visible testament to the industry’s increasing maturity. When I did my PhD in natural language processing some *cough* years ago there really wasn’t a lot happening outside of academia – the best you could hope for when mentioning ‘NLP’ to someone was the assumption that you’d fallen victim to some new-age psychobabble. So it’s great to see the discipline finally ‘going mainstream’ and enjoying attention from a healthy cross-section of society. Sadly I wasn’t able to attend the whole event, but here are a few of the standouts for me:

(more…)

Earlier this week I had the pleasure of presenting a paper titled “The Information Needs of Mobile Searchers” at the Searching for Fun workshop at ECIR 2012, organised by David Elsweiler, Morgan Harvey and Max Wilson. I was able to attend only the morning session (as I was presenting my own tutorial in the afternoon), but still managed to gain some very useful feedback and ideas for extending and improving the framework. I’ll expand on those in a subsequent post, but for now here is the paper in more or less its original form. Note that this is co-authored with my colleague Tyler Tate, who proposed the original framework that led to the paper you see below.

ABSTRACT

The growing use of Internet-connected mobile devices demands that we reconsider search user interface design in light of the context and information needs specific to mobile users. In this paper we present a framework of mobile information needs, juxtaposing search motives—casual, lookup, learn, and investigate—with search types—informational, geographic, personal information management, and transactional.
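
To make the structure of the framework a little more concrete, here is a minimal Python sketch of the motive × type grid described in the abstract. The class and field names are my own illustration rather than anything defined in the paper itself.

```python
from dataclasses import dataclass
from enum import Enum


class Motive(Enum):
    """Search motives from the framework."""
    CASUAL = "casual"
    LOOKUP = "lookup"
    LEARN = "learn"
    INVESTIGATE = "investigate"


class SearchType(Enum):
    """Search types from the framework."""
    INFORMATIONAL = "informational"
    GEOGRAPHIC = "geographic"
    PERSONAL_INFO_MANAGEMENT = "personal information management"
    TRANSACTIONAL = "transactional"


@dataclass
class MobileInformationNeed:
    """A single mobile search need, located at one cell of the motive x type grid."""
    description: str
    motive: Motive
    search_type: SearchType


# Illustrative example: a casual, geographic need
need = MobileInformationNeed(
    description="Browse nearby coffee shops while waiting for a train",
    motive=Motive.CASUAL,
    search_type=SearchType.GEOGRAPHIC,
)
print(f"{need.motive.value} / {need.search_type.value}: {need.description}")
```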

(more…)

A couple of weeks ago I had the pleasure of presenting a paper at Enterprise Search Europe on a Taxonomy of Enterprise Search. This was the first time that the Enterprise Search Summit had found its way to this side of the Atlantic, and I’m pleased to say it was a great success (due in no small part to the efforts of the conference chair, Martin White).

The paper was essentially a research-driven piece, reporting on an empirical study of the search strategies and tactics that users commonly employ across a range of enterprise search contexts. As such, it mirrors Andrei Broder’s classic 2002 paper (A Taxonomy of Web Search), which addresses a broadly similar goal within the domain of web search. However, we used a more qualitative, user-oriented data source, and also extended the analysis to present some initial implications for how the findings could be applied in the design of search and discovery experiences.

After the event, Martin confided in me how unusual it would be to see such a paper at the New York event, intimating that there would be little room in the program for such a piece. That conversation and a subsequent exchange with Daniel Tunkelang at the CIKM Industry Event got me thinking: is the search industry playing its part in building an effective dialogue between researchers and practitioners? Could it do more? Is the job of disseminating and promoting the benefits and outcomes of IR research purely the responsibility of academics and researchers?

I hope to explore this issue further in a subsequent post. For now, here are the slides from the event. The associated paper is also available in a previous post and as a pdf from the HCIR conference website.

Related Posts:

  1. A Taxonomy of Enterprise Search and Discovery
  2. Findability is just So Last Year
  3. Designing the Search Experience (tutorial at Search Solutions 2011)
  4. A Taxonomy of Search Strategies and their Design Implications 
  5. Search Solutions 2011: London, November 16

Last week I attended the October edition of the London Enterprise Search meetup, which gave us (among other things) our usual monthly fix of great talks and follow-up discussions. This time, one of the topics that particularly caught my attention was the question of how to measure the effectiveness of enterprise search. Several possible approaches were suggested, including measuring how frequently users can “find what they are looking for” within a fixed period of time (e.g. two minutes).
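
To make the suggestion concrete, here is a rough sketch of what that kind of “found it within two minutes” success metric might look like in code; the session structure and field names are entirely hypothetical, and (as I argue below) the metric itself is questionable.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SearchSession:
    """One search session; found_after_seconds is None if nothing was found."""
    user_id: str
    found_after_seconds: Optional[float]


def success_rate(sessions: List[SearchSession], time_budget_seconds: float = 120.0) -> float:
    """Proportion of sessions in which the user 'found what they were looking for'
    within the time budget (e.g. two minutes)."""
    if not sessions:
        return 0.0
    successes = sum(
        1 for s in sessions
        if s.found_after_seconds is not None and s.found_after_seconds <= time_budget_seconds
    )
    return successes / len(sessions)


# Illustrative data only
sessions = [
    SearchSession("u1", 45.0),
    SearchSession("u2", 180.0),
    SearchSession("u3", None),
]
print(f"Success within two minutes: {success_rate(sessions):.0%}")  # 33%
```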

Now I’m not saying findability isn’t important, but in my opinion metrics like this really seem to miss the point. Leaving aside the methodological issues in defining exactly what is meant by “find what they are looking for”, they seem predicated on the notion that search is all about finding known items, as if to suggest that once they’re found, everyone can go home. In my experience, nothing could be further from the truth.

Most ‘finding’ tasks are but a small part of a much larger overall task, and are at best the beginning of an information interaction episode, rarely ever the end. Much of the value we can add in delivering enterprise search solutions should be in understanding the complete task lifecycle and helping the user complete their overall information goals, which invariably extend far beyond simple known-item search. To me, findability is but one element of the overall search experience, which (particularly in enterprise environments) often involves significant elements of higher-level problem-solving behaviour such as analysis and sensemaking:

[Figure: Search is more than just findability]

So why the fixation with findability? Part of the reason may be that it is both easy to understand (intuitively and quantitatively) and relatively easy to measure, with readily available metrics such as precision, recall, etc. But like the drunk searching for his car keys under the lamp post, just because it is more convenient doesn’t mean it is the right place to look.
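
Part of that convenience is that the classic metrics really are trivial to compute. Here is a minimal sketch over sets of document identifiers (the document IDs below are purely illustrative):

```python
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of the retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0


def recall(retrieved: set, relevant: set) -> float:
    """Fraction of the relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0


# Illustrative example only
retrieved = {"doc1", "doc2", "doc3", "doc4"}
relevant = {"doc2", "doc4", "doc7"}
print(f"precision = {precision(retrieved, relevant):.2f}")  # 2/4 = 0.50
print(f"recall    = {recall(retrieved, relevant):.2f}")     # 2/3 = 0.67
```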

So I took the liberty of testing my own hypothesis against the data we used in the recent EuroHCIR paper, to see whether these intuitions have any basis in reality. I reviewed the scenarios we used in that study and counted how many of them actually were bona fide ‘findability’ tasks.

The answer? Two. Out of 104 enterprise search scenarios, less than 2% were categorised as findability tasks (i.e. locating a known item). The rest were focused on much broader goals, such as comparing, comprehending, exploring, evaluating, analysing, synthesising, and so on. Moreover, when findability was an influence, it was invariably part of a larger, composite activity, embedded in a longer sequence of analysis & sensemaking activity. So in that context, measuring the time it takes to “find what you are looking for” is at best a crude instrument; at worst, it simply measures the wrong thing.
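
For transparency, the headline figure boils down to a very simple calculation. Only the 2-out-of-104 split below is taken from the actual data; the breakdown of the remaining categories isn’t reproduced here.

```python
# From the study above: 2 of 104 scenarios were known-item ('findability') tasks.
# The rest spanned broader categories (comparing, comprehending, exploring,
# evaluating, analysing, synthesising, ...) whose exact breakdown isn't shown.
findability_tasks = 2
total_scenarios = 104

proportion = findability_tasks / total_scenarios
print(f"Findability tasks: {proportion:.1%} of {total_scenarios} scenarios")  # ~1.9%
```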

Now of course, I’ve used a reasonably modest data sample here, and if you gather your own data, I’m sure your mileage will vary. So I plan to extend the analysis and dig a little deeper to look for further evidence to support (or contradict) the hypothesis above.

In the meantime, if you have some data of your own and you’d like to share (or even better, collaborate), I’d love to hear about it, either here or by email.

BTW, if you want to learn more about the ideas I’ve talked about above, the following are all good resources for further reading:

  • Bates, M. J. 1979. “Information Search Tactics.” Journal of the American Society for Information Science 30: 205–214.
  • Cool, C. and Belkin, N. 2002. “A Classification of Interactions with Information.” In H. Bruce (Ed.), Emerging Frameworks and Methods: CoLIS4, Proceedings of the Fourth International Conference on Conceptions of Library and Information Science, Seattle, WA, USA, July 21–25, 2002, pp. 1–15.
  • Järvelin, K. and Ingwersen, P. 2004. “Information Seeking Research Needs Extension Towards Tasks and Technology.” Information Research 10(1), October 2004.
  • Kuhlthau, C. C. 1991. “Inside the Information Search Process: Information Seeking from the User’s Perspective.” Journal of the American Society for Information Science 42: 361–371.
  • Marchionini, G. 2006. “Exploratory Search: From Finding to Understanding.” Communications of the ACM 49(4): 41–46.
  • O’Day, V. and Jeffries, R. 1993. “Orienteering in an Information Landscape: How Information Seekers Get from Here to There.” INTERCHI 1993, pp. 438–445.
  • Pirolli, P. and Card, S. 2005. “The Sensemaking Process and Leverage Points for Analyst Technology as Identified Through Cognitive Task Analysis.” Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, VA, May 2005.
  • Rose, D. and Levinson, D. 2004. “Understanding User Goals in Web Search.” Proceedings of the 13th International Conference on World Wide Web, New York, NY, USA.

Related Posts:

  1. A Taxonomy of Enterprise Search
  2. Designing the Search Experience (tutorial at Search Solutions 2011)
  3. A Taxonomy of Search Strategies and their Design Implications
  4. EuroHCIR 2011: lineup announced!
  5. Interaction Models for Faceted Search

The Ergonomics Society is about to embark on a redesign of its website, and earlier this month I posted the initial user segmentation model, along with the draft user profiles and the prioritised scenarios. Now, following conversations with various folks including Tina Worthy and Richard Bye, we have an updated plan for user research.

In summary, what we plan to do is:

  1. Establish some baseline data for the existing site experience (so that we have something to compare with after the redesign). Richard Bye has kindly offered the use of his analytic tools in assessing this.
  2. Perform depth interviews with participants from the first four priority segments, as follows:
    • Information Consumers (3 participants)
    • Society Members (3 participants)
    • Society Customers (2 participants)
    • 3rd Party Service Consumers (2 participants)
    Note that the breakdown here is designed to reflect both the relative priorities of the segments and what we feel is realistic given the resources available.
  3. Hold a focus group for the Staff Information Consumers.
  4. Run a formative IA exercise (such as an open card sort) to establish the key organisational principles for the site content. Participants to be segmented as in (2). (A rough sketch of how the card sort results might be analysed follows this list.)
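
As a rough illustration of how the open card sort results from step 4 might be analysed, the sketch below counts how often pairs of content cards are grouped together across participants; the card names, pile names, and participant IDs are entirely hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open card sort results: each participant groups content cards
# into piles with names of their own choosing.
sorts = {
    "p1": {"Events": ["Annual conference", "Regional meetings"],
           "Membership": ["Join the society", "Member benefits"]},
    "p2": {"Get involved": ["Join the society", "Regional meetings"],
           "News": ["Annual conference", "Member benefits"]},
}

# Count how often each pair of cards ends up in the same pile; pairs with high
# co-occurrence are candidates for the same section of the new IA.
co_occurrence = defaultdict(int)
for piles in sorts.values():
    for cards in piles.values():
        for a, b in combinations(sorted(cards), 2):
            co_occurrence[(a, b)] += 1

for pair, count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(pair, count)
```

In practice we would most likely use a dedicated card-sorting tool and some form of cluster analysis, but even a summary at this level should be enough to spot the obvious groupings.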

Evidently, there will be a fair amount of prep involved in all of this, notably the preparation of recruitment screeners, interview protocols, scripts, etc. Note also that the analytic tools Richard has offered will need configuring; no doubt a key part of this will be determining precisely what metrics to measure as a baseline. I suspect we’ll need to adopt a pretty lightweight / agile approach, especially considering that most if not all of this will need to fit around existing work commitments. And we shouldn’t underestimate timelines either – it is one thing to manage delivery of a web project when everyone is directly accountable to you; quite another when everyone is lending their time on a voluntary basis.

Looking further ahead, we will also need to consider the choice of development platform.  At the moment we are using phpMyAdmin, but it is likely that we will want to migrate to something more scalable and usable by a wider cross section of people (i.e. nominated content editors) in future. Lauren Morgan is currently evaluating alternatives such as Joomla and Drupal, and should be in a position to report back soon.

So, as a rough estimate, I’d say the timeline will pan out something like this:

  • August: user research
  • September: user research + data analysis. Output = refined segmentation model + profiles + scenarios
  • October: Interaction design + visual design (proceeding in parallel in so far as that’s practicable). Output = wireframes (which could be fairly simple, depending on the build approach) + visual design spec. (NB we should also consider producing a style guide for the site, but I am not sure we can deliver that as well within the scope of the existing project)
  • Nov + Dec: build. Output = CMS templates + associated tools & resources, etc.
  • Jan: UAT + soft launch
  • Feb: full launch

Note that I’m assuming we will interleave user feedback at suitable iteration points throughout the above timeline – as UCD specialists we should know this better than anyone 🙂
