A framework designed to better measure the quality and effectiveness of user experience changes to a product.
The HEART Framework is an experience-design-oriented measurement framework that adds rigor to evaluating the quality of user experience changes.
It is designed to measure both the quality and effectiveness of these changes against the overall goals of the product.
It has two parts:
The quality of the user experience (HEART Measurement)
The goals of your product or project (Goals-Signals-Metrics Process)
Part 1: HEART Measurement
Happiness: Measures of user attitudes, often collected via survey.
Examples: satisfaction, perceived ease of use, and Net Promoter Score.
Engagement: The level of user involvement, typically measured via behavioral proxies such as frequency, intensity, or depth of interaction over some time period.
Examples: the number of visits per user per week or the number of photos uploaded per user per day.
Adoption: New users of a product or feature.
Examples: the number of accounts created in the last seven days or the percentage of users who use labels.
Retention: The rate at which existing users return.
Example: how many of the active users from a given time period are still present in some later time period? (The inverse is failure to retain, or "churn".)
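As a minimal sketch of the retention example above (using hypothetical user IDs, not any real dataset), retention can be computed as the share of an earlier period's active users who are still active in a later period; churn is the remainder:

```python
def retention_rate(earlier_active: set, later_active: set) -> float:
    """Fraction of earlier-period users still present in the later period."""
    if not earlier_active:
        return 0.0
    retained = earlier_active & later_active
    return len(retained) / len(earlier_active)

# Hypothetical user IDs active in week 1 vs week 4
week1 = {"u1", "u2", "u3", "u4"}
week4 = {"u2", "u4", "u5"}

rate = retention_rate(week1, week4)
print(f"retention: {rate:.0%}, churn: {1 - rate:.0%}")  # retention: 50%, churn: 50%
```

Note that users active in the later period but not the earlier one (like "u5" here) count toward Adoption, not Retention.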
Task Success: Traditional behavioral metrics of user experience, such as efficiency (e.g. time to complete a task), effectiveness (e.g. percent of tasks completed), and error rate. This category is most applicable to areas of your product that are very task-focused, such as search or an upload flow.
Example: the percentage of users who successfully sign up for a newsletter.
Applied in unison, these categories provide a deeper level of intelligence.
For example, you could simply measure unique users over a time period when analyzing an interface; however, also measuring Adoption and Retention lets you distinguish new from returning users, so you can tell how quickly the user base is growing and whether it is stabilizing.
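The new-versus-returning split described above can be sketched as a set operation (the user IDs here are hypothetical): given the users seen before a period, the period's actives divide cleanly into an Adoption group and a Retention group.

```python
# Users seen in any earlier period, and users active in the current period
previously_seen = {"u1", "u2", "u3"}
active_this_period = {"u2", "u3", "u4", "u5"}

returning = active_this_period & previously_seen   # Retention signal
new_users = active_this_period - previously_seen   # Adoption signal

print(f"{len(active_this_period)} active: {len(new_users)} new, {len(returning)} returning")
```

A raw unique-user count (4 here) hides that half of those users are new, which is exactly the growth-versus-stabilization distinction the categories surface.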
With the HEART framework, you don't need to apply all metrics to each test or analysis, and each item can be applied at a number of levels. Pick the ones that are most important for your situation.
Part 2: Goals-Signals-Metrics
Once you have defined categories, you need specific metrics to track. Goals-Signals-Metrics provides a process.
Start at the highest level by identifying your goals for each category. There may be multiple levels here; an interface may have different goals for different features.
An example is YouTube. At the highest level, the most important goal is engagement - Google wants users to consume videos, and keep discovering more videos and channels. However, YouTube Search has a feature-level goal of task success - when users enter a search, Google wants them to quickly find the most relevant video.
Next to each goal, map out the lower-level Signals - a representation of how success or failure may manifest in user behavior or attitudes.
Going back to YouTube, engagement may have a success Signal of time spent watching videos - higher is better, as it means users are not clicking away. For YouTube Search's task success goal, a failure Signal may be searching but not clicking on any results.
This step may be hard, as there could be many useful signals for each goal. It may require research, benchmarking, or analysis of existing data to determine your best predictors.
After defining Goals and Signals, define Metrics for each Signal. These are the values you will track over time and compare in A/B tests.
In the YouTube example, the Goal of engagement leading to time spent watching videos Signal may end up with "the average number of minutes spent watching videos per user per day" Metric.
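A minimal sketch of computing that Metric from raw watch events (the event tuples below are illustrative data, not a real logging schema):

```python
from collections import defaultdict

# Each event: (user_id, day, minutes_watched) — hypothetical data
events = [
    ("alice", "2024-01-01", 30),
    ("alice", "2024-01-01", 15),
    ("bob",   "2024-01-01", 20),
    ("alice", "2024-01-02", 10),
]

# Sum watch time per (user, day) pair
per_user_day = defaultdict(float)
for user, day, minutes in events:
    per_user_day[(user, day)] += minutes

# Metric: average minutes watched per active user per day
metric = sum(per_user_day.values()) / len(per_user_day)
print(metric)  # 25.0
```

Averaging over (user, day) pairs, rather than summing totals, is what makes the Metric comparable between experiment arms of different sizes.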
Where possible, express your Metrics as averages or percentages rather than raw counts, so they stay comparable as the user base changes.
Try to track only metrics related to your top goals. It may be tempting to add "interesting stats", but ask yourself whether these will help your ultimate decision making, whether they really need to be tracked over time, or whether a single snapshot will suffice.
Try to avoid extra effort and dashboard clutter wherever possible.
Author's note: all frameworks are inherently flawed, so apply them wisely. The utility of a framework always depends on the problem at hand.
Source: Google UX Research team, via Kerry Rodden.