Quantifying the User Experience

Book description

Quantifying the User Experience: Practical Statistics for User Research offers a practical guide for using statistics to solve quantitative problems in user research. Many designers and researchers view usability and design as qualitative activities, which do not require attention to formulas and numbers. However, usability practitioners and user researchers are increasingly expected to quantify the benefits of their efforts. The impact of good and bad designs can be quantified in terms of conversions, completion rates, completion times, perceived satisfaction, recommendations, and sales.
The book discusses ways to quantify user research; summarize data and compute margins of error; determine appropriate sample sizes; use standardized usability questionnaires; and settle controversies in measurement and statistics. Each chapter concludes with a list of key points and references. Most chapters also include a set of problems and answers that enable readers to test their understanding of the material. This book is a valuable resource for those engaged in measuring the behavior and attitudes of people during their interaction with interfaces.
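
To give a feel for the kind of calculation the book covers, the short Python sketch below computes an adjusted-Wald confidence interval for a task completion rate, the kind of estimate covered in Chapter 3. It is an illustrative sketch only, not code from the book, which relies on Excel formulas and web calculators.

    # Illustrative sketch (not from the book): adjusted-Wald confidence interval
    # for a completion rate observed in a small-sample usability test.
    from math import sqrt
    from statistics import NormalDist

    def adjusted_wald_interval(successes, trials, confidence=0.95):
        """Return (lower, upper) bounds for a completion-rate confidence interval."""
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for a 95% interval
        # Add z^2/2 to the successes and z^2 to the trials before computing the proportion.
        p_adj = (successes + z * z / 2) / (trials + z * z)
        margin = z * sqrt(p_adj * (1 - p_adj) / (trials + z * z))
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    # Example: 9 of 12 participants completed the task.
    low, high = adjusted_wald_interval(9, 12)
    print(f"95% CI for the completion rate: {low:.2f} to {high:.2f}")

For 9 completions out of 12 participants, this works out to a 95% interval of roughly 0.46 to 0.92.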

  • Provides practical guidance on solving usability testing problems with statistics for any project, including those using Six Sigma practices
  • Shows practitioners which test to use and why it works, with best practices for application, along with easy-to-use Excel formulas and web calculators for analyzing data
  • Recommends ways for practitioners to communicate results to stakeholders in plain English
  • Resources and tools available at the authors’ site: http://www.measuringu.com/

Table of contents

  1. Cover Image
  2. Contents
  3. Title
  4. Copyright
  5. Dedication
  6. Acknowledgments
  7. About the Authors
  8. Chapter 1. Introduction and How to Use This Book
    1. Introduction
    2. The Organization of This Book
    3. How to Use This Book
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  9. Chapter 2. Quantifying User Research
    1. What Is User Research?
    2. Data from User Research
    3. Usability Testing
    4. A/B Testing
    5. Survey Data
    6. Requirements Gathering
    7. Key Points from the Chapter
    8. References
  10. Chapter 3. How Precise Are Our Estimates? Confidence Intervals
    1. Introduction
    2. Confidence Interval for a Completion Rate
    3. Confidence Interval for Rating Scales and Other Continuous Data
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  11. Chapter 4. Did We Meet or Exceed Our Goal?
    1. Introduction
    2. One-Tailed and Two-Tailed Tests
    3. Comparing a Completion Rate to a Benchmark
    4. Comparing a Satisfaction Score to a Benchmark
    5. Comparing a Task Time to a Benchmark
    6. Key Points from the Chapter
    7. Chapter Review Questions
    8. References
  12. Chapter 5. Is There a Statistical Difference between Designs?
    1. Introduction
    2. Comparing Two Means (Rating Scales and Task Times)
    3. Comparing Completion Rates, Conversion Rates, and A/B Testing
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  13. Chapter 6. What Sample Sizes Do We Need? Part 1: Summative Studies
    1. Introduction
    2. Estimating Values
    3. Comparing Values
    4. What Can I Do to Control Variability?
    5. Sample Size Estimation for Binomial Confidence Intervals
    6. Sample Size Estimation for Chi-Square Tests (Independent Proportions)
    7. Sample Size Estimation for McNemar Exact Tests (Matched Proportions)
    8. Key Points from the Chapter
    9. Chapter Review Questions
    10. References
  14. Chapter 7. What Sample Sizes Do We Need? Part 2: Formative Studies
    1. Introduction
    2. Using a Probabilistic Model of Problem Discovery to Estimate Sample Sizes for Formative User Research
    3. Assumptions of the Binomial Probability Model
    4. Additional Applications of the Model
    5. What Affects the Value of p?
    6. What Is a Reasonable Problem Discovery Goal?
    7. Reconciling the “Magic Number 5” with “Eight Is Not Enough”
    8. More about the Binomial Probability Formula and Its Small Sample Adjustment
    9. Other Statistical Models for Problem Discovery
    10. Key Points from the Chapter
    11. Chapter Review Questions
    12. References
  15. Chapter 8. Standardized Usability Questionnaires
    1. Introduction
    2. Poststudy Questionnaires
    3. Post-task Questionnaires
    4. Questionnaires for Assessing Perceived Usability of Websites
    5. Other Questionnaires of Interest
    6. Key Points from the Chapter
    7. Chapter Review Questions
    8. References
  16. Chapter 9. Six Enduring Controversies in Measurement and Statistics
    1. Introduction
    2. Is It Okay to Average Data from Multipoint Scales?
    3. Do You Need to Test at Least 30 Users?
    4. Should You Always Conduct a Two-Tailed Test?
    5. Can You Reject the Null Hypothesis When p > 0.05?
    6. Can You Combine Usability Metrics into Single Scores?
    7. What If You Need to Run More Than One Test?
    8. Key Points from the Chapter
    9. Chapter Review Questions
    10. References
  17. Chapter 10. Wrapping Up
    1. Introduction
    2. Getting More Information
    3. Good Luck!
    4. Key Points from the Chapter
    5. References
  18. APPENDIX. A Crash Course in Fundamental Statistical Concepts
    1. Introduction
    2. Types of Data
    3. Populations and Samples
    4. Measuring Central Tendency
    5. Standard Deviation and Variance
    6. The Normal Distribution
    7. Area Under the Normal Curve
    8. Applying the Normal Curve to User Research Data
    9. Central Limit Theorem
    10. Standard Error of the Mean
    11. Margin of Error
    12. t-Distribution
    13. Significance Testing and p-Values
    14. The Logic of Hypothesis Testing
    15. Errors in Statistics
    16. Key Points from the Appendix
  19. Index

Product information

  • Title: Quantifying the User Experience
  • Author(s): Jeff Sauro, James R. Lewis
  • Release date: March 2012
  • Publisher(s): Morgan Kaufmann
  • ISBN: 9780123849694