Usability Tests in a Nutshell, Part 3: Creating Tasks

The tasks you create for a usability study are essential for gathering the right data. The tasks determine what you’ll test and impact which parts of the design your team fixes. If you give users the wrong tasks, you risk focusing on the wrong parts of the design and providing your design team with misleading recommendations. Yet teams often overlook the importance of creating robust tasks for their studies.

When creating tasks for a usability study, you’ll want to ask yourself the following questions:

  1. What are your users’ goals with the product? List out the specific actions users most commonly complete with your product.
  2. What are your business goals? In what way does your product or web site help increase revenue or reduce organizational costs? The best tasks focus on areas of the design crucial to your organization’s business goals.
  3. What are the greatest risks with the design? If there are certain areas of the design where you have little knowledge as to how users interact with it, this is a good place to focus the tests.

There are three types of tasks for a usability test: verb-based tasks, scavenger hunt tasks, and interview-based tasks. Jared M. Spool and his brilliant team of researchers at User Interface Engineering first introduced this framework for designing tasks back in 1999.

Verb-based tasks

Verb-based tasks ask users to accomplish a specific action with the product. Verb-based tasks are most commonly used to test software, hardware, and web applications. For example, for an email system, we might ask users to:

  • Respond to the email you just received from Kate Austin
  • Write a note to your mother
  • Copy the text of this page to another document
  • Send the message from Kate to your friend, Lisa

All of the tasks begin with a verb and ask users to complete a specific action. Verb-based tasks effectively evaluate the product’s functionality and give teams the capability to test multiple users on the same tasks. Before the advent of the Web, almost all tasks for evaluating products were verb-based tasks.

Scavenger hunt tasks

Unlike verb-based tasks, scavenger hunt tasks aren’t used to evaluate software or web application functionality. Instead, they help us assess content-rich systems such as CRMs, rich data displays, and information-rich web sites.

With scavenger hunt tasks, we ask users to find a specific piece of information. These tasks help design teams evaluate whether users can find and understand the product’s content. The tasks almost always begin with the verb “find.” Some examples:

  • You were at a party last week. The discussion turned to recipes for authentic Italian pasta dishes. Go to the Food Network site and find an Italian recipe for pasta.
  • The doctor stops by on morning rounds and wants to know how much Mr. LaFleur’s blood pressure has been out of the normal range during the night. Find a record of Mr. LaFleur’s blood pressure.

The downside of traditional tasks, such as verb-based and scavenger hunt tasks, is that it’s challenging for teams to assess whether they’ve chosen realistic tasks for users. Because of this, teams risk asking users to accomplish things with the product that aren’t related to what users would actually want to do.

Interview-based tasks

To address the limitations of verb-based and scavenger hunt tasks, we use interview-based tasks, a methodology developed by User Interface Engineering.

With interview-based tasks, we interview users before and during the test to uncover users’ real goals with a product. During the recruitment phase, we screen candidates to ensure they have the appropriate interests before they come to the lab.

Before the user arrives for the test, we haven’t finalized the tasks we’ll have them complete, because we don’t know specifically what each user will want to accomplish with the product. Instead, during the test we interview users to get a better idea of how they use a product. For example, when evaluating an investment web site, we would begin the test by asking the user specific questions, such as:

You mentioned you were interested in investing some money.

  • How much money are we talking about?
  • What kinds of investments do you have in mind?
  • Do you have a retirement plan?
  • What are the worries you need to address?
  • How do you evaluate a potential investment?

Based on each user’s responses to the questions, we create tasks that are relevant to their specific needs. While we won’t ask all users to complete the same tasks, we still get a very good sense of how the product works for users in the real world.

Interview-based tasks work best with sites and products that are almost ready to ship and populated with real information and data. Without real content or data for users to manipulate, it’s impossible to mirror the user’s true experience.

What types of tasks do you use in your usability tests? Share your thoughts with us.