Benchmark Usability: Competitive Home Apps
Overview
I identified a gap in the data we had to measure our experiences against, and proactively proposed and carried out a study of key competitor products. The study revealed that the competitor experiences lacked an easy way for users to restart their internet hardware. When we later tested the design of our new app, participants completed this task faster and rated it as easier.
When
2023-2024
Duration
Approximately 4 months from planning to final report.
Outcome
Time on task and SEQ scores for the new app matched or beat the competitors'.
New App SUS score = 87
Skills
Benchmark Usability
Moderated, In-person Interview
Stakeholders
Product executives
App product team
Hardware product team
UX design team
My Process
Problem statement
The product team set out to create a new app for customers to manage their home network, part of our goal to be the leading broadband carrier with the best user experience. Leadership emphasized that our goal was to surpass the examples identified as having the best user experiences. Recognizing a gap in existing and planned research, I proposed a competitive benchmark study to gather metrics and qualitative insights on key competitor experiences.
Planning and engaging stakeholders
It became clear while interviewing stakeholders that the appetite for the study and the list of questions were greater than I had anticipated. Working together, we prioritized the goals and questions and sorted out their feasibility and timing. Because the scope was bigger than expected, we also adjusted expectations for when analysis and results would be delivered.
Limitations and challenges
In addition to the growing list of things we wanted to get out of the study, we had challenges procuring some of the competitor products, and fast-approaching holiday breaks threatened to cost us time. The procurement delays were particularly challenging because I needed hands-on experience with the common tasks and flows while planning and preparing the protocol. To work around the delay, I found other ways to investigate the products and planned the stimuli order around anticipated arrival dates.
Outcome and Next Steps
This study provided team members with findings specific to their research questions, along with a key finding that competitor apps lacked an easy way to restart internet hardware. It also surfaced findings and recommendations in areas we hadn't set out to investigate: nomenclature, value proposition, key user needs and where competitor solutions met or missed them, and designing for long wait times.
In hindsight, one thing I would have done differently is to break the analysis and reporting into two parts and give priority to the app interface over device set-up. Device set-up had already been designed, and the timeline for changes to it was further down the road.
When our app was finally available in a build that could be evaluated, we replicated the tasks and measures to compare against the benchmark. We learned that all but one task on our product was just as easy as on the competitors we were measuring ourselves against, and we were able to make informed decisions to improve the design for that remaining task.
Verizon Home App screens
My Methodology
Research Objectives
1. Evaluate and measure competitor user experiences
What are the task metrics for primary tasks on each app?
How do they compare across competitors?
What usability issues do participants encounter?
How does overall ease of use compare, and what factors contribute?
2. Understand user attitudes and perceptions
What are participants’ perceptions of each competitor’s experience?
What aspects contribute to their perceptions?
Do participants have preferences? What delights them? What are their pain points?
3. Identify user needs and current experiences (when applicable)
How do participants currently interact with their home app?
What are their needs and expectations?
What pain points do they experience in managing their network and connected devices?
Methodology
To answer these research questions, I designed a usability study where individuals participated in a one-on-one interview in our research lab. Each participant experienced two out of the three competitor products. Products were presented in counterbalanced order to account for possible order effects when making comparisons.
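To make the counterbalancing concrete, below is a minimal Python sketch (illustrative only, not the actual assignment plan from the study) that cycles participants through the six ordered pairs of three products so each product appears in the first and second position roughly equally often:

from itertools import permutations

# Illustrative labels; the real competitor products are not named here.
PRODUCTS = ["Product A", "Product B", "Product C"]

def assign_orders(n_participants):
    # All 6 ordered pairs of the 3 products; cycling through them keeps
    # first/second positions balanced across participants.
    ordered_pairs = list(permutations(PRODUCTS, 2))
    return [ordered_pairs[i % len(ordered_pairs)] for i in range(n_participants)]

for participant, (first, second) in enumerate(assign_orders(19), start=1):
    print(f"P{participant:02d}: {first} -> {second}")

Cycling 19 participants through the six ordered pairs means each product is evaluated by at least 12 participants, which lines up with the sample-size target in the recruitment plan below.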
Participant Recruitment
Sample Size: 19 participants (each evaluated two of the three products, yielding 38 evaluations and ensuring at least 12 per product)
Criteria:
Mix of male and female
Primary home internet decision-makers
Ages distributed between 25 and 60 years
Variety of internet providers and connected devices
No prior experience with the evaluated competitor products
Data Collection
Each participant was provided with a packet for each product that included the set of tasks, subjective post-task questions, and the System Usability Scale (SUS) survey. They were asked to read each task aloud, complete it, verbally state when they had finished, and then respond to the post-task questions. After completing all tasks on a product, participants filled out the SUS survey for that product. Participants were not instructed to think aloud, but they were not discouraged from doing so on their own. The tasks focused on:
Router setup and device placement instructions
App feature exploration and common tasks across all products
Metrics Collected:
Time on task
Task success rate
Perceived ease by task (Single Ease Question, SEQ)
Confidence rating by task
System Usability Scale (SUS) survey
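For reference, the SUS scores reported in this study (e.g., 87 for the new app) are on the standard 0-100 SUS scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. The Python sketch below shows that generic scoring formula with made-up responses; it is not our analysis code.

def sus_score(responses):
    # Standard SUS scoring for 10 items rated 1-5:
    # odd (positively worded) items add response - 1,
    # even (negatively worded) items add 5 - response,
    # and the total is scaled by 2.5 onto a 0-100 range.
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

example = [5, 1, 4, 2, 5, 1, 4, 2, 5, 1]  # one participant's made-up answers
print(sus_score(example))  # 90.0

Per-participant scores like this are typically averaged per product to produce the comparison figures.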