MAYApinion: App Engagement Metrics

May 31, 2013 in Human-Centered Design
David Bishop
Lead Designer & Researcher

A client asked us recently about measuring “engagement” so it seemed like a good time to outline what can be measured. Measuring usability and user satisfaction is a best practice and a key to success, but our experience is that few projects advance to the point where they’re accomplishing this — measuring usability or acting on what they learn. Unfortunately, there is no simple recipe or silver bullet, but even the effort put into determining what’s important will be valuable to your design process.

At the most general level, usability metrics fall into the categories of satisfaction, effectiveness, and efficiency. (See UsabilityNet for a short description or ISO 13407 and ISO 9241-11 for the full details.) Satisfaction is most easily measured by directly asking users after they’ve had experience with the system — app store ratings, the SUS questionnaire, satisfaction surveys, and the Net Promoter questionnaire. Effectiveness and efficiency typically need to have tasks designated, and can be measured by direct observation (say, during a usability test, using a stopwatch) or can be automated (by instrumenting the application or web site — remote unmoderated usability tests are a classic example of this).
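Of the satisfaction instruments mentioned, SUS has a well-defined scoring rule worth spelling out: ten 1–5 Likert items, odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0–100 score. A minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 for a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Best possible answers: strongly agree on positive items, strongly disagree on negative:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is not a percentage; a score around 68 is generally considered average, so interpret results against that baseline rather than a school-grade scale.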

Also at the general level, there are a number of classic measurements for “Conversion” (typically, how many visitors go on to buy): traffic sources, new vs. returning visitors, engagement (time and/or interactions per visit), value per visit, cost per conversion, bounce rate, and exit pages. These are the classics for web sites, but they work just as well for mobile applications (with slight modifications, like changing engagement from time-per-visit to time-spent-per-app-launch or in-app-time-per-day). Non-commerce sites and apps (i.e. those that aren’t selling anything, like a library or a bank) have to adjust the value metrics to track something more relevant to their business.
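These conversion classics all reduce to simple ratios over raw counts. A minimal sketch, where the field names are my own and the counts would come from whatever analytics tool you already use:

```python
from dataclasses import dataclass


@dataclass
class SessionStats:
    """Raw counts for one reporting period (hypothetical field names)."""
    visits: int              # total visits/sessions
    single_page_visits: int  # visits that viewed only one page/screen
    conversions: int         # visits that completed the goal (e.g. a purchase)
    revenue: float           # total revenue attributed to these visits
    ad_spend: float          # acquisition cost for the period


def conversion_metrics(s: SessionStats) -> dict:
    """Derive the classic conversion ratios from raw counts."""
    return {
        "conversion_rate": s.conversions / s.visits,        # buyers per visit
        "bounce_rate": s.single_page_visits / s.visits,     # one-and-done visits
        "value_per_visit": s.revenue / s.visits,            # revenue per visit
        "cost_per_conversion": s.ad_spend / s.conversions,  # spend per buyer
    }
```

For a non-commerce app, you would swap `revenue` and `conversions` for whatever counts as value in your context — price submissions, books checked out, bills paid.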

At a more specific level, metrics need to be tailored to the app in question — Urbanspoon thrives on ratings, so a measure of addiction might be new restaurant ratings per person per day; for a flashlight app, you’d have to look at average number of minutes used per day.

There are a few articles out there, but they generally deal in vague terms, as I have here — because the metrics need to be tailored to the specifics of the app. The Six Key Mobile App Metrics You Need to Be Tracking sounds like it’s going to be a great list, but ultimately says things like “measure the navigation pathways within the app,” which isn’t particularly specific. Eric Rickson’s article Top Metrics for Mobile Apps: Measure What Matters is a bit better, listing metrics for use and engagement:

Use:

  • Total Downloads
  • App Users (unique users over time)
  • Active User Rate (ratio of users to downloads)
  • New Users (new users over time)

Engagement:

  • Frequency (visits per user for a time period)
  • Depth of visit (number of screens viewed)
  • Duration (the measure from the flashlight example above)
  • Bounce rate (a classic: do users leave quickly, or not?)
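The ratio-style metrics in Rickson’s list are also straightforward divisions over raw counts. A minimal sketch, assuming you can pull these totals for a reporting period from your analytics backend (the parameter names are my own):

```python
def engagement_metrics(downloads, active_users, visits, screens_viewed, bounces):
    """Derive use & engagement ratios from raw counts for one reporting period.

    downloads:      cumulative total downloads
    active_users:   unique users active in the period
    visits:         total app launches/sessions in the period
    screens_viewed: total screens viewed across all visits
    bounces:        visits abandoned after the first screen
    """
    return {
        "active_user_rate": active_users / downloads,  # users : downloads
        "frequency": visits / active_users,            # visits per user
        "depth_of_visit": screens_viewed / visits,     # screens per visit
        "bounce_rate": bounces / visits,               # share of one-and-done visits
    }
```

Duration is the one item that doesn’t reduce to a ratio of counts; you’d sum session lengths and divide by visits instead.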

As you can see, these are useful but general measurements — you’ll have to determine what’s important and appropriate for your app or product. If your goal is to crowdsource gasoline prices, you want to measure whether you’re reaching a critical mass of price submissions; but if your app is a smartphone-based guitar tuner, success for your users might mean a low duration of use (because tuning a guitar faster means you’ve built a more usable, more satisfying experience).
