Another researcher and I evaluated how a data science model called “health score” could be incorporated into users’ workflows to support their work. Users were struggling to adopt health score because the model reflected the capabilities of Uptake’s data science team more than a need in the user workflow. Proposals to eliminate or substantially adjust the model were met with resistance, so the solution fell to its visual display and contextual navigation.
Due to my non-disclosure agreement with Uptake, I cannot discuss the content of my research. However, I can speak generally to the process we used to conduct the research.
I participated in three iterations of health score, each of which followed this process:
After the conclusion of one round of testing, the other researcher and I would meet with the product manager and designer who owned the part of the product containing health score. Together, we ideated potential solutions to address the testing findings. We began the session with an individual ideation period and then took turns sharing and discussing these solutions. The session ended with agreement on one or two different designs for the next round of testing.
The designer would then create lo-fi clickable wireframes in preparation for testing. For one of the iterations, I created the wireframes for testing in Axure.
Once the wireframes had been created, the other researcher or I would use them as a point of reference to craft a moderator’s guide for testing. During testing, we worked in teams of two: one researcher moderated the session while the other took notes and recorded the session. We debriefed after each session, and once all testing concluded, we synthesized the research based on our debriefs and raw notes.
The next step was to communicate our research findings and recommendations. Based on our synthesis, we created a readout deck to present to the relevant designers and product managers (testing sessions sometimes evaluated multiple features). The deck was organized by priority, derived from how disruptive a given finding was to the users’ work. Each finding included the critical components of what (what is the finding?), so what (why does it matter?), and now what (what are its implications?). Each finding was also illustrated by a screenshot of the relevant part of the wireframe, or by an audio or video clip from one of the user sessions.
While cycling back to ideation to repeat the process, the responsible designer and I would also attend sprint planning. We made ourselves available to answer any questions from the development team and to ensure research findings informed prioritization for sprint planning.
Health score continued to be included in the platform and updated in its display and contextual navigation. However, it was never fully adopted, and eventually another data science model was created whose output better served user needs.
Additionally, at the conclusion of these three iterations, I documented the evolution of health score and identified the successes and challenges of the current process, including:
Success: The structure Research used to communicate findings resonated with the product managers and gave them clear insight into the findings and next steps.
Challenge: Conducting research without regard for sprint timelines increased the risk that findings would be neglected in development. This challenged us to think about how to make research compatible with an Agile/scrum framework.
Challenge: Our company was at a stage of growth in which it served two groups of users in two different industry verticals. While the users belonged to the same user archetype, testing the same wireframe with both groups sometimes uncovered divergent needs and workflows. This led to the misinterpretation of some findings, which resulted in product changes being incorrectly implemented across both verticals. Research had to figure out how to communicate findings in a way that clearly differentiated vertical-agnostic from vertical-specific findings.