
RAA5: A Structured Process for Transforming Usability Data into Usability Information

11 Nov

1. APA Citation:

Howarth, J., Andre, T. S., & Hartson, R. (2007). A structured process for transforming usability data into usability information. Journal of Usability Studies, 3(1), 7-23.

The link is

http://www.usabilityprofessionals.org/upa_publications/jus/2007november/JUS_howarth_nov2007.pdf

2. Purpose:

In recent usability research, attention has shifted from developing new evaluation methods to making better use of the raw usability data those methods generate. The process of transforming raw data into usability information is commonly assumed to be straightforward. The authors question this assumption and argue that it does not hold for novice usability practitioners. Based on a new way of thinking about usability data, the paper presents a more structured approach to transforming raw data into usability information and reports a study that evaluates the process.

3. Methods:

Extracting usability problems is not straightforward because raw usability data are specific while usability problems are general. The authors introduce the concept of usability problem (UP) instances to bridge this gap. The structured usability evaluation process is shown in Figure 1.

Figure 1: Structured usability evaluation process

The usability problem analysis step is central. Practitioners add detail to complete the UP instance records and then merge records that describe the same underlying problem into a single UP. In the reporting stage, they group related UP descriptions to create evaluation reports written for the reports' target audience.
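A UP instance record is essentially a small data structure, and merging is a grouping step over those records. Here is a minimal Python sketch of what that might look like; the field names and the `same_problem` predicate are illustrative assumptions, not DCART's actual schema.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class UPInstance:
    """One observed occurrence of a usability problem in a session video."""
    task: str          # task the user was performing (assumed field)
    ui_object: str     # interface object involved (assumed field)
    observation: str   # what the evaluator saw the user do (assumed field)

@dataclass
class UsabilityProblem:
    """A general problem abstracted from one or more UP instances."""
    description: str
    instances: List[UPInstance] = field(default_factory=list)

def merge_instances(instances: List[UPInstance],
                    same_problem: Callable[[UPInstance, UPInstance], bool]
                    ) -> List[UsabilityProblem]:
    """Group instances that describe the same underlying problem.

    `same_problem` stands in for the practitioner's judgment; merging is
    an analytic decision, not something a tool can fully automate.
    """
    problems: List[UsabilityProblem] = []
    for inst in instances:
        for up in problems:
            if same_problem(up.instances[0], inst):
                up.instances.append(inst)  # same problem, new occurrence
                break
        else:
            # first occurrence of a new problem
            problems.append(UsabilityProblem(inst.observation, [inst]))
    return problems
```

The point of the sketch is the direction of abstraction: many specific instance records collapse into a few general UPs, which is exactly the gap the paper says raw data alone cannot cross.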

To evaluate how well a structured process supports novice practitioners, the authors conducted a study with sixteen evaluators, all Virginia Tech graduate students recruited from university mailing lists. All participants were considered novice usability practitioners because they had less than one year of job experience related to usability engineering. Effectiveness was defined as the accuracy and completeness of the final evaluation reports produced by the evaluators. The experiment used a between-subjects design with one independent variable, the level of support, with two treatments: structured (explicit tool support for the process) and freeform (no explicit support). The dependent variables were time measures and the quality of the evaluation reports as rated by judges and developers.

  • Firstly, the participants, referred to as evaluators, evaluated Scholar, a course management system (http://www.sakaiproject.org/), by watching videos of two representative users performing tasks. One video showed the task of adding a student to and removing a student from a course; the other showed the task of adding a student to a course. Each evaluator participated individually in a single study session lasting no more than two and a half hours.
  • Secondly, while watching the videos of the representative users, the evaluators recorded their comments in Morae or created UP instance records in DCART, two different usability reporting tools. Morae (a commercial tool from TechSmith) served as the freeform treatment, offering no explicit support for the structured process. In contrast, DCART (the Data Collection, Analysis, and Reporting Tool) provided explicit support for the structured process (Figure 2).
  • Figure 2: Usability problem instance record in DCART
  • Thirdly, the evaluators using Morae reviewed their comments, added new ones while re-watching the video, and then created usability evaluation reports based on those comments. The evaluators using DCART merged records of the same UP instance, grouped related UPs, and then used DCART to generate a usability evaluation report from the groups of UPs they had created.
  • The final step was rating the quality of the evaluation reports, since evaluator effectiveness was of primary interest in this study. Two individuals with usability experience, referred to as judges, rated the reports against six guidelines developed by Capra. In addition, the researchers created a questionnaire based on Capra's guidelines and asked three developers from the Scholar team to complete it and rate the evaluation reports. (A small sketch after this list illustrates one way such ratings could be aggregated.)
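The paper does not publish its scoring procedure, but as a rough illustration of the arithmetic involved, a report's score could be the mean of each judge's rating over the six guidelines. The guideline names and the rating scale below are placeholders, not Capra's actual guidelines.

```python
import statistics

# Placeholder guideline names; the real six guidelines are Capra's.
GUIDELINES = ["guideline_1", "guideline_2", "guideline_3",
              "guideline_4", "guideline_5", "guideline_6"]

def report_score(ratings: dict) -> float:
    """ratings maps guideline name -> list of judges' ratings for one report."""
    per_guideline = [statistics.mean(ratings[g]) for g in GUIDELINES]
    return statistics.mean(per_guideline)

# Example: two judges rating one report; the scale here is assumed.
example = {g: [1, 0] for g in GUIDELINES}
print(report_score(example))  # 0.5
```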

4. Main Findings:

A t-test indicated no significant difference between the freeform and structured treatment means for time, t(14) = 0.48, p = 0.64, and there was likewise no difference in the number of evaluators who finished the task. The data suggest that the structured process does not add time overhead to the evaluation.
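For readers who want to see how such a comparison is computed, here is a minimal Python sketch using SciPy. The timing values are invented for illustration; only the group sizes follow the study (8 evaluators per treatment, hence df = 8 + 8 - 2 = 14).

```python
from scipy import stats

# Invented session times in minutes, 8 evaluators per treatment.
freeform_minutes = [92, 105, 88, 110, 97, 101, 95, 108]
structured_minutes = [96, 99, 104, 90, 112, 94, 100, 103]

# Independent-samples t-test, matching the between-subjects design.
t, p = stats.ttest_ind(freeform_minutes, structured_minutes)
print(f"t(14) = {t:.2f}, p = {p:.2f}")
```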

For quality as rated by judges, the treatment main effect indicated that the mean rating for the structured treatment (M = 0.45, SD = 1.17) was significantly greater than for the freeform treatment (M = 0.10, SD = 1.54). The data support the hypothesis that the structured process increases the quality of the usability evaluation reports as rated by judges.

For the developer ratings, a higher mean rating on the guideline questions and the summary question indicates greater usefulness and quality. The treatment main effect indicated that the mean rating on the guideline questions for the structured treatment (M = 1.21, SD = 0.97) was significantly greater than for the freeform treatment (M = 0.39, SD = 1.43). These data also support the hypothesis that the structured process yields higher quality as rated by developers.

All in all, these results indicate that a structured process helps novice usability practitioners transform raw data into usability information more accurately and completely than a freeform approach. The higher ratings assigned by both judges and developers support this interpretation.

5. Analysis:

When I read chapters 16 and 17 of the UX Book, which recommend using UX problem instances to abstract out the details of raw data, I thought the process was quite rigorous and complicated. Why not abridge the analysis and just use some rapid method? Unfortunately, that was only a passing thought and I did not pursue it. So when I found this article, I was excited as well as regretful: concrete, factual thinking about that question could have grown into a thesis. The authors grabbed the opportunity and filled the gap while I just let it go. Therefore, whenever you have an idea, don't hesitate to write it down or post it; further reflection in the following days may turn it into something new.

Overall, the study examines the effectiveness of a structured approach to transforming usability data into usability evaluation reports, compared with a freeform approach. Reports produced with the new method were rated higher in quality by both judges and developers. However, the paper does little to explain how novices' effectiveness could be improved further; effective usability engineering tools, not just a framework, are needed to help novice practitioners understand and describe usability problems. Furthermore, the experiment was conducted in a fixed-resource environment with short video clips, few tasks, and short performance times, which may not reflect a real lab-based usability evaluation.


Posted on November 11, 2012 in Research Article Analyses


One response to “RAA5: A Structured Process for Transforming Usability Data into Usability Information”

  1. Mihaela

    November 14, 2012 at 3:14 pm

    Note that the last author is the textbook’s author. Good RAA. Your points on Bb.
