The Helix App:
An embedded app and IA strategy
Sequenced customers can access and manage their DNA products in one place, across both standalone apps and embedded apps built by partner companies. The embedded app concept uses templated designs meant to be developed quickly and re-branded to a partner's color palette and image guidelines. Ancestry Basics and Wellness Basics were the first two embedded apps in the Helix Marketplace, bundled with first-time DNA kit purchases. If the embedded app experience proved successful, it would be a low-effort entry point for new partners to quickly launch and test a digital product before investing time in their own standalone app.
When feedback started coming in that users were disappointed by the lack of content and personal relevance, my role was to figure out what users really cared about. Was the embedded app concept a bust, or was the UX missing something more meaningful?
Users cared more about finding out their result and whether their DNA was rare than about learning the full range of results and their prevalence in a population. They were least interested in the scientific background of their result.
Let's get this sorted
In collaboration with the product and science teams, we created a list of content modules that a user would see when reviewing their DNA result for a given genetic trait. Based on these requirements, we then framed each module as a question the user might have about their result. An initial group of 10 participants was presented with 3 tasks:
Group the questions into the 3 most important, the 3 least important, and neutral
Sort questions based on preferred order of learning
Match wireframe examples to top 3 important questions
We utilized Optimal Sort in combination with in-person testing to test the following questions, listed in the hypothesized order:
1) What is my result?
2) What does my result mean?
3) What are other results?
4) What is this trait about?
5) How common or rare is my result?
6) How reliable is this result?
7) Now what? What can I do?
8) What is the impact of genetics vs. other factors?
9) How was my result calculated?
10) What is the science behind this product?
11) Is there anything else I can learn about?
All of these things are important
One strong, common theme was users' apprehension about identifying their least important questions, often stating that ALL of the questions had meaning. This was an important validation that the content had value. If users were unsatisfied, it was likely due to the architecture, the presentation of the content, and the overall user experience rather than the topic areas themselves. I continued to look into the main emerging patterns.
I rely heavily on visual analysis as part of my synthesis process, in this case using color coding. Colors were used to highlight common responses and identify alignment across the descriptive data and additional analyses. The green sections represent data with low variability (indicating similar responses across users) and high personal importance based on frequency across users. The most- and least-important groupings showed the strongest patterns and drove learning-order preferences.
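To illustrate how this kind of agreement coding can be derived from raw sort data, here is a minimal Python sketch. The participant responses, item IDs, and the 2/3 agreement threshold implied by the coloring are all hypothetical, not the study's actual data:

```python
from collections import Counter

# Hypothetical sort results: for each participant, the importance bucket
# they assigned to each question. Item IDs match the numbered questions
# above; the data here is illustrative only.
responses = {
    "p1": {1: "most", 2: "most", 10: "least"},
    "p2": {1: "most", 2: "neutral", 10: "least"},
    "p3": {1: "most", 2: "most", 10: "least"},
}

def agreement(item):
    """Return an item's modal bucket and the fraction of participants who
    placed it there. High agreement = low variability (a 'green' item)."""
    buckets = [r[item] for r in responses.values() if item in r]
    top_bucket, count = Counter(buckets).most_common(1)[0]
    return top_bucket, count / len(buckets)

for item in (1, 2, 10):
    bucket, score = agreement(item)
    print(item, bucket, round(score, 2))
```

Items where every participant agrees (score 1.0) would be the strongest candidates for the green "low variability" coding.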
The result matrices from Optimal Sort provided further insight into what users felt was more or less personally important and in what order content should be presented when viewing a DNA result. Again using the color system, highly variable responses were screened out (with grey overlays), and significant themes were highlighted and coded green (important, learn first) or red (least important, learn last). One final step was to compare across these two tasks and evaluate any discrepancies in responses.
One item, "Now what? What can I do?", was rated as personally important but also preferred towards the bottom of the learning order. This was inconsistent with the other data, given that items rated as important were typically preferred to be learned first. We hypothesized that the difference may have been due to the wording of the question, leading users to place it lower on the list.
A diagonal line between the 4 quadrants represents the predicted distribution of prioritized items. With the exception of item 7, the actual distribution fit closely to the line. However, some items were predicted to be more meaningful than the data showed, and vice versa. Further exploration with a larger sample of screened participants would help solidify the prioritization.
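One way to quantify "fit to the diagonal" is to compare each item's importance rank against its preferred learning-order rank; items far from the importance == order line are the outliers. The ranks below are made up to mirror the pattern described (item 7 rated important but preferred late), not the study's measurements:

```python
# Illustrative ranks (1 = most important / learn first). Item 7 sits far
# from the diagonal: high importance, but preferred late in the order.
importance_rank = {1: 1, 2: 2, 5: 3, 7: 4, 10: 11}
order_rank      = {1: 1, 2: 3, 5: 2, 7: 10, 10: 11}

def diagonal_deviation(item):
    """Distance from the predicted importance == order diagonal."""
    return abs(importance_rank[item] - order_rank[item])

# Flag items more than a couple of ranks off the diagonal
outliers = [i for i in importance_rank if diagonal_deviation(i) > 2]
print(outliers)  # only item 7 stands out in this illustrative data
```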
The first round of testing was a great start for evaluating our study design and provided a framework for future research. Because participant recruitment was loosely implemented and our sample fell short of the desired size for a sorting study, we created stronger screening requirements and launched the study again on usertesting.com, this time focusing only on having participants sort by personal importance.
A revised and improved study design
We knew from the preliminary findings that some of the questions contained bias, confused users, or weren't measuring what we aimed to measure. We therefore started by evaluating the original set of items and rewrote the questions to be free from bias and more context-specific.
Usertesting.com and screening requirements
Participants were screened on usertesting.com against our target persona and then taken to Optimal Sort to complete the sorting task. The goal of this study was to identify if and how responses would differ from the original results, and how the target persona (see attributes below) would rationalize their sorting.
Users were presented with a scenario in which their DNA had just been sequenced and they had received notice that their results were ready to view. They decide to look at their results for caffeine metabolism. The questions were presented on the left, with personal importance categories on the right. *Side note: people are really good at sorting!
We sought to measure how preferences might differ between preliminary study participants and our newly screened target persona group.
Back to my favorite part: the analysis. I dug into the data, relying on thematic analysis of the qualitative data and visualizing patterns with color coding. This time, we felt more confident in the presentation of the content areas and wanted to know how similar or different the findings would be from the original study. Below is the distribution of data across items and the changes in ordering from the original study.
The order of importance for items in the preliminary study (left) changed slightly in the current study (right) when using stronger screening practices.
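A slight change in ordering between the two studies can be summarized with a single number using Spearman's rank correlation (rho near 1.0 means the order barely shifted). The two orderings below are illustrative placeholders, not the actual study results:

```python
# Item IDs listed in importance order for each study (illustrative only:
# two adjacent pairs swapped between studies).
prelim  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
current = [1, 2, 4, 3, 5, 6, 8, 7, 9, 10, 11]

def spearman(a, b):
    """Spearman's rho for two full orderings of the same items (no ties):
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(a)
    rank_b = {item: i for i, item in enumerate(b)}
    d2 = sum((i - rank_b[item]) ** 2 for i, item in enumerate(a))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(round(spearman(prelim, current), 3))  # close to 1.0: a slight reshuffle
```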
A new framework of nested content
Considering the quantitative data and qualitative information gathered from the recorded sessions, I recommended a new information architecture focusing on user mindsets, type of trait (monogenic vs. polygenic), and result type, while accommodating both novice and advanced users.
There are two kinds of people in this world
Although we intended to put users in the frame of mind of receiving DNA results for caffeine metabolism (a relatable but non-frightening trait), about half of participants talked through their prioritization in terms of serious health risks, like cancer. Depending on their frame of mind, people rated personal importance differently. Overall, the more the content related to the user personally, the more important they found it; the more general the information, the more they either didn't want to see it at all or ranked it low on their list. Using the new IA, we were able to quickly create wireframes and high-fidelity prototypes of the new content structure.