Posts Tagged ‘search modes’

Here are the slides from the talk I gave at EuroHCIR last week on A Model of Consumer Search Behaviour. This talk extends and validates the taxonomy of information search strategies (aka ‘search modes’) presented at last year’s event, but applies it in this instance to the domain of site search, i.e. consumer-oriented websites and search applications. We found that site search users exhibit significantly different information needs from those of enterprise search users, implying some key differences in the information behaviours required to satisfy those needs.

As usual, some of the builds don’t come out quite right on Slideshare, but I can always make the ppt available if people want to see the original. I’ll post the full text of the paper itself here as well shortly (watch this space :))




In the previous post, we looked at the ways in which a response to an information need can be articulated, focusing on the various forms that individual search results can take. Each separate result represents a match for our query, and as such, has the potential to fulfil our information needs. But as we saw earlier, information seeking is a dynamic, iterative activity, for which there is often no single right answer.

A more informed approach therefore is to consider search results not as competing alternatives, but as an aggregate response to an information need. In this context, the value lies not so much in the individual results as in the properties and possibilities that emerge when we consider them in their collective form. In this section we examine the most universal form of aggregation: the search results page.



Here are the slides from the talk I gave at Enterprise Search Europe last week on A Taxonomy of Site Search. This talk extends and validates the taxonomy of information search strategies (aka ‘search modes’) presented at last year’s event, and reviews some of their implications for design. But this year we looked specifically at site search rather than enterprise search, and explored the key differences in user needs and behaviours between the two domains.

As usual, some of the builds don’t come out quite right on Slideshare, but I can always make the ppt available if people want to see the original.



In an earlier post we reviewed the various ways in which an information need may be articulated, focusing on its expression via some form of query. In this post we consider ways in which the response can be articulated, focusing on its expression as a set of search results. Together, these two elements lie at the heart of the search experience, defining and shaping much of the information seeking dialogue. We begin therefore by examining the most universal of elements within that response: the search result.



A little while ago I posted an article called Findability is just So Last Year, in which I argued that the current focus (dare I say fixation) of the search community on findability was somewhat limiting, and that in my experience (of enterprise search, at least), there are a great many other types of information-seeking behaviour that aren’t adequately accommodated by the ‘search as findability’ model. I’m talking here about things like analysis, sensemaking, and other problem-solving oriented behaviours.

Now, I’m not the first person to have made this observation (and I doubt I’ll be the last), but it occurs to me that one of the reasons the debate exists in the first place is that the community lacks a shared vocabulary for defining these concepts, and when we each talk about “search tasks” we may actually be referring to quite different things. So to clarify how I see the landscape, I’ve put together the short piece below. More importantly, I’ve tried to connect the conceptual (aka academic) material to current design practice, so that we can see what difference it might make if we had a shared perspective on these things. As always, comments & feedback welcome.



One of the things I’ve been thinking about recently is the concept of search modes, i.e. the notion that certain types of information-seeking behaviour exist independently of any particular context or user. For example, “locating” and “exploring” are activities that are common to all sorts of contexts (e.g. web search, enterprise, mobile, etc.) and all sorts of users (novices, experts, etc.).

If we could define a system of search modes, and validate them empirically, then we’d have a valuable ‘lens’ through which we could recognize common patterns of information seeking behaviour. But more importantly, we’d also have a basis for defining the behaviours that a particular search experience should support.

It’s the latter aspect that I’ve been focusing on, along with the observation that search modes do not occur randomly. Instead, they tend to cluster, forming distinct chains or patterns. Sometimes these chains consist of two discrete modes, sometimes three or more. Could these patterns suggest the existence of an underlying ‘grammar’ that defines the particular combinations that are meaningful or productive?

I am intrigued by this possibility, and hope to validate it further through empirical research. But in the meantime, it calls to mind an earlier piece of work in which we explored a similar notion underlying the semantics of visual communication in the form of icon design. This work has in fact already been published by Microsoft as part of the NHS Common User Interface guidance on Icons and Symbology, but I think it bears a second review in the context of this discussion.

I’ve included the relevant section below. For the full text, check out the Microsoft CUI website.

A Grammar for Icon Design

Icons are used to communicate information, and in that respect they can be said to exhibit some of the characteristics of human language. For example, icons can be used as symbols to represent concepts in the real world, analogous to words in a language. A picture of a printer can be said to convey as much information about its referent as the word “printer” itself (perhaps more, in some cases). Likewise, a set of icons representing the key concepts in a domain can be thought of as a visual vocabulary for that universe of discourse.

Furthermore, iconic concepts can be combined to produce a composite meaning, analogous to words arranged within a sentence. For example, a picture of a printer with a large “plus” symbol in the foreground might reasonably be construed to mean “Add printer”. In this respect, the process of icon design becomes one of developing composite icons from more basic “iconic morphemes”, which represent atomic units of meaning.

However, once concepts are combined in this manner, the limitations of the approach become apparent. Language is more than just the arbitrary combination of symbols, as there are strict rules of syntax that govern how and where they may be combined. Moreover, it is only through a common understanding of these rules that native speakers are able to converse fluently. Without a grammar to resolve the structural ambiguities that arise when concepts are combined, composite meanings are inherently ambiguous, and only simple atomic concepts can be communicated effectively.

Consequently, much has been written about the notion of building a “grammar” for icons. Indeed, there would be clear benefits in developing such a framework:

  • Icon designers would have a clear set of rules to follow, thereby promoting icon reusability and consistency
  • The rules could also be applied “in reverse”, to determine if a given icon is well-formed
  • Once a user has learnt this language, their comprehension of the icons (and the context in which they are used) will be enhanced

The purpose of this section is to review some of the issues involved in developing a grammar for icons, and to explore the possibilities of applying such a grammar within clinical applications.

Developing an Icon Grammar

The idea of developing an icon grammar has already been partially explored in previous CUI work, in particular Alert Symbol Design, in which an attempt was made to define a “visual syntax” for alert symbols. For example, an alert such as that shown in Figure 4 could be said to be composed of the following components:

  • An objective symbol (the telephone icon)
  • A modifier (the number 3)
  • A container (bounding the objective symbol)
  • Informational text (describing the symbol)

Figure 4: Visual Syntax for an Alert Symbol

However, whilst this work did succeed in enumerating some of the key properties of icons and articulating them as design dimensions, it stopped short of actually trying to define the rules by which iconic morphemes could be combined into meaningful composite units. In other words, it alluded to the existence of a grammar, but did not try to define it.

Moreover, a further fundamental difference is that the focus of the previous work was on exploring the role of certain icon design dimensions (such as shape, colour, size, etc.) within a classification framework defined by the key criteria of intensity and polarity. By contrast, the focus of the current work is on developing a vocabulary of symbols to represent real world objects (such as “patients” or “medications”) and actions (such as “add” or “delete”), and exploring ways in which these symbols can be combined to form meaningful composite units (such as “Add Patient”).

The current work takes this idea further, by attempting to assign grammatical categories to each of the words in the iconic vocabulary, for example:

  • Nouns are used to represent objects
  • Verbs are used to represent actions (applied to objects)
  • Adjectives are used to represent attributes (of objects)

Example 1: Patient Records and Toolbar Icons

Table 7 shows an example of a simple icon grammar, consisting of a single noun (“patient”) and a number of verbs (“search”, “add”, “delete”, and “edit”). As can be seen, we can combine these basic icons to form more complex, composite meanings such as “Search for patient”, “Add patient”, and so on.

It should be noted, however, that even with this simple example ambiguities still arise:

  • The denotation of the patient icon is actually the patient record, rather than the patient per se
  • The composite “Search for patient” icon has a more subtle, nuanced meaning, i.e. “search for patient”, rather than the more literal “patient search”, which could imply that the patient was actually the agent of the search rather than the object

Table 7: Composite Meanings Created from Basic Icons

However, despite these limitations, most users would be able to interpret the correct meaning of such composite icons in most contexts, particularly if presented with the appropriate label.

Moreover, the meaning of such icons is further clarified when combined with an appropriate semantics. Figure 5 and Figure 6 show the same four composite icons within the context of a toolbar, consisting of four action buttons. The toolbar is attached to a panel showing a list of patient records. In Figure 5, no patient record is selected, so it is not possible to “edit” or “delete”. This is reflected in the state of the buttons, which are disabled for those two verbs. By contrast, Figure 6 shows the same toolbar with a patient record selected – in this case, we see that all four buttons are enabled, in keeping with the contextual semantics of the four verbs. The semantics can therefore be used to reinforce the composite meanings created by the icon grammar.
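The contextual semantics described here can be sketched as a simple enablement rule: “edit” and “delete” require a selected record, while “search” and “add” do not. This is a hypothetical illustration of the behaviour shown in Figures 5 and 6, not code from the guidance.

```python
# Hypothetical sketch: toolbar button enablement driven by selection state,
# mirroring Figures 5 (no record selected) and 6 (record selected).

def enabled_buttons(record_selected: bool) -> dict:
    """Return the enabled/disabled state of each verb's toolbar button."""
    return {
        "search": True,              # always available
        "add": True,                 # always available
        "edit": record_selected,     # needs an object to act on
        "delete": record_selected,   # needs an object to act on
    }

print(enabled_buttons(False))  # edit and delete disabled, as in Figure 5
```

The selection state thus reinforces the grammar: transitive verbs are only actionable when their object is present.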

Figure 5: No Patient Record Selected

Figure 6: Patient Record Selected

Example 2: Medications and Medline Symbology

The example above explored some initial possibilities of an icon grammar consisting of nouns and verbs. But what of adjectives? Can we extend the idea by using icons to describe an object in terms of the attributes it possesses?

Figure 7 shows a further example of a simple icon grammar, which in this case represents a single noun (“medication”) and a set of possible values for one of its key adjectives, in this case, that of “type”. This attribute can have values from the following two groups:

  • Group 1: “regular”, “one-off”, or “as required”
  • Group 2: “gas” or “infusion”

The Group 1 value is represented by the line style used at the end of each Medline, and the Group 2 value is represented by the use of a coloured glyph overlaying the end of each Medline. We can thus use this visual symbology to generate composite meanings, such as a “one-off infusion” or a “regular gas”. Although this example has many principles in common with the example above, there are two fundamental differences:

  • The representation of the noun (the medication) uses text as the primary information medium; in this respect, the visual symbology provides additional reinforcement of its meaning
  • The adjectives are more abstract and therefore harder to represent visually, requiring the use of an arbitrary symbol for each value (whose meaning must be learnt by the user)

Nonetheless, we can combine these basic symbols to form composite meanings such as “cefotaxime, regular IV injection”, or “sodium chloride, continuous IV infusion”, and so on. However, instead of being used as labels for action buttons on a toolbar, with an associated semantics, these iconic morphemes are being combined to provide a visualisation of qualitative information to aid rapid assimilation of complex data.
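One way to picture this structure is as a noun carried as text, qualified by attribute values drawn independently from the two groups. The sketch below is an illustrative assumption (the class and field names are invented for this post, not part of the Medline symbology):

```python
# Hypothetical sketch: a medication (noun, carried as text) qualified by
# "type" attribute values from two independent groups.
# All names are illustrative, not from the published symbology.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Schedule(Enum):          # Group 1: encoded by line style
    REGULAR = "regular"
    ONE_OFF = "one-off"
    AS_REQUIRED = "as required"

class Delivery(Enum):          # Group 2: encoded by a coloured glyph
    GAS = "gas"
    INFUSION = "infusion"

@dataclass
class Medline:
    drug: str                  # the noun, as text
    schedule: Schedule
    delivery: Optional[Delivery] = None

    def meaning(self) -> str:
        """Read off the composite meaning encoded by the symbology."""
        parts = [self.schedule.value]
        if self.delivery is not None:
            parts.append(self.delivery.value)
        return f"{self.drug}, {' '.join(parts)}"

print(Medline("sodium chloride", Schedule.REGULAR, Delivery.INFUSION).meaning())
# sodium chloride, regular infusion
```

Because each group is a small, closed enumeration, every valid combination yields a well-defined composite meaning, which is precisely what makes the symbology learnable.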

Evidently, the example in Figure 7 explores one of the many attributes that a medication may possess. Other important attributes would include:

  • Route – such as topical, oral, intravenous, and so on
  • Form – such as tablet, capsule, solution for injection, cream, suspension, and so on
  • Dose – which is usually a quantitative value, measured, for example, in milligrams
  • Frequency – which could be “every four hours”, or “every eight hours”, and so on

However, a brief review of these attributes exposes the limitations of this approach – the reason the example in Table 7 is plausible is that the range of meanings being encoded corresponds to a small, finite set of (arguably) learnable symbols. In the case of other attributes, such as route or form, this assumption no longer applies. Consequently, any attempt to encode the full range of values for these attributes using an arbitrary symbology would place highly unrealistic demands on the user. Likewise, this approach would be unsuitable for the display of quantitative information such as dose or frequency.

Figure 7: Composite Meanings Created from Basic Symbols


To develop a grammar for icons within a specific domain:

  • List all the things for which icons will eventually be needed (for example, messages, prompts and so on). List both generic and specific concepts.
  • Design basic symbols for vocabulary (for example, actions, objects, attributes and so on)
  • Set up rules for combining symbols, for example:
    • Which elements are required and which are optional
    • How elements may be graphically combined
    • How elements are arranged (such as left to right, top to bottom, front to back)
    • How each element is represented (for example, as a border, as an object within the border, as an attachment and so on)
  • Avoid trying to represent all the concepts within a domain
    • Focus only on the key concepts
    • Look for finite, enumerable sets


