Writing Analytics User Stories

“Your job isn’t to build more software faster: it’s to maximize the outcome and impact you get from what you choose to build.”
Jeff Patton, User Story Mapping

A key to successful projects is incorporating what you learn into decisions. To learn from live users, teams must deliver features that collect and analyze behaviour. This post discusses writing user stories that involve analytics.

User stories related to analytics are slightly different from other stories:

Actors are often not external customers, but internal users who consume the reports and dashboards. This could be the product owner, marketing, designers, operations or anybody who is responsible for making decisions about the software.

The actors’ journey will use features of an analytics system, so the complete scenario will cross the borders of the product you are implementing and the analytics tool.

Story Sizes, Epics and Themes

Analytics stories span the full spectrum of story sizes, so let’s consider story sizing.

Perspective                      Correct Size
Business's perspective           Helps a business achieve a business outcome.
User's perspective               Fulfills a need.
Development team's perspective   Takes just a few days to build and test.

A good team will generate small stories from larger stories through discovery and conversation. These small stories can be categorized by Epics or Themes relating them back to the bigger stories.
Jeff Patton, User Story Mapping

At the start of a project, you may have a big analytics story such as:

As a marketer, I can understand visitors’ actions on the website so that I can determine which layouts, graphics and copy are most effective for driving conversions.

This story is far too big and ambiguous for a delivery team to take direct action on, so it will need to be decomposed into manageable pieces.

Writing Analytics Stories (and Chores)

Initial Tooling Chores

To get the data flowing to the analytics servers, the analytics library needs to be loaded and configured with the necessary account credentials. Since it is trivial to send pageview data, that functionality can be paired with the tool specification:

As a marketing analyst, I can see page detail reports in Google Analytics so that I can assess the overall volume and common routes through the site.

Chore: Install the redux-beacon library to capture pageviews and forward them to Google Analytics.
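As a sketch of what that chore might produce, here is pageview wiring assuming redux-beacon v2's API (createMiddleware plus the Google Analytics target). The action type and payload shape are assumptions that depend on your router integration:

```javascript
// Sketch only: forward route changes to Google Analytics via redux-beacon.
import { createMiddleware } from 'redux-beacon';
import GoogleAnalytics, { trackPageView } from '@redux-beacon/google-analytics';

const eventsMap = {
  // Fired by the router on navigation; the exact action type is an assumption.
  'LOCATION_CHANGE': trackPageView(action => ({
    page: action.payload.pathname,
  })),
};

const gaMiddleware = createMiddleware(eventsMap, GoogleAnalytics());
// Add gaMiddleware to the Redux store's applyMiddleware(...) chain.
```

Once this middleware is in place, pageview data flows to Google Analytics without touching individual components.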

As a marketing analyst, I can use Dynamic Tag Manager so that I can modify the analytics data collection without the assistance of a developer.

To determine if there are any fields that are causing conversion problems, you can define stories such as:

As a marketing analyst, I can obtain funnel reports that show field level dropoffs so that I can learn which fields are causing lost conversions.

Note that the above story is independent of the tooling. It also assumes that catch-all instrumentation is easy for the team to implement. For example, if shared form components are used on all the forms, the instrumentation can be implemented once in those components. Alternatively, if global field-level analytics is handled by tools such as Inspectlet or Hotjar, implementing the story is a simple matter of including the tooling in the client bundles.
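A minimal sketch of the shared-component approach, assuming events are pushed to a dataLayer-style queue that the analytics tool consumes (the event names and fields are illustrative, not a mandated schema):

```javascript
// Queue consumed by the analytics tool (e.g. via a tag manager).
const dataLayer = [];

// One helper, called from a shared input component, gives every form
// field-level funnel data without per-form instrumentation work.
function trackFieldEvent(formId, fieldName, eventType) {
  dataLayer.push({
    event: 'form_field',
    formId,
    fieldName,
    eventType, // e.g. 'focus', 'blur', 'abandon'
  });
}

// Example: the shared input component fires these on user interaction.
trackFieldEvent('signup', 'email', 'focus');
trackFieldEvent('signup', 'email', 'blur');
```

With focus/blur pairs per field, the analyst can build the field-level dropoff funnel described in the story.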

How Analytics User Stories Change During the Project

Analytics stories will vary depending on what stage the product development is in. During the initial build, the build-measure-learn process is likely to be focused on qualitative analysis and direct observation of users using early implementations in a controlled testing environment.

While building the initial minimum viable product, a measure plan should be executed. The measure plan will articulate the business objectives and then determine the key performance indicators (KPIs) and related targets. For example, there may be targets for the number of orders, or for usability metrics such as average task completion time.

When users begin using the production system, the build-measure-learn cycle starts to revolve around unobtrusive monitoring, data driven decisions and experiment design and execution.

Problem Hypothesis and Experimental Fix

Once the system is collecting usage data, the team will leverage the analytics tools to find opportunities for improving the app. This should follow the pattern of identifying a problem and proposing a fix.

For example, if the analysis showed poor conversion rates for mobile, the team might theorize that the mobile layout is poor and propose a fix ensuring that the keyboard pop-up does not obscure the call-to-action (CTA) button.

As a mobile user arriving to the app for the first time, I want it to be easy to start the trial. As a product owner, I want the improved design to increase the conversion rate by 10%.

The story identifies the users and provides the desired outcome. The desired outcome (a 10% increase in conversions) should be a deliberate goal based on what the team believes is achievable; for example, perhaps web conversions are currently 10% higher than mobile conversions. Through discussion, the team might identify an experiment that gives half the users a floating action button (FAB) labeled “Start Trial” when the device pops up a keyboard. This would lead to tasks such as implementing the FAB, implementing the A/B split logic and configuring the analytics tools to display the experiment results.
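The A/B split logic can be sketched as deterministic bucketing: hash the user ID so a given user always sees the same variant across sessions. The function and experiment names here are hypothetical:

```javascript
// Simple string hash (not cryptographic; adequate for bucketing).
function hashCode(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) | 0;
  }
  return Math.abs(h);
}

// Assign each user to 'control' or 'fab' deterministically, so the
// same user always lands in the same variant of this experiment.
function assignVariant(userId, experiment = 'fab-start-trial') {
  return hashCode(userId + ':' + experiment) % 2 === 0 ? 'control' : 'fab';
}
```

Split-testing tools provide this assignment for you; the point is that assignment must be stable per user, or the experiment results will be contaminated.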

Detailed Tooling

As the team starts to gain insights, the observations begin to spawn new questions that cannot be answered with the data at hand. The team will then need to add additional instrumentation to collect the required information.

For example, suppose the hypothesized problem is that frequent car renters are turned off by how the app positions its service. Car rental frequency is already collected and sent to the backend, but this data point is not yet integrated into Google Analytics (GA).

A story to address the GA data gap could be:

As a marketing analyst, I can filter the signup funnel analysis by car rental frequency in order to assess if there are significant variations for frequent and infrequent car renters.

You may also want to incorporate task satisfaction surveys into your app. For example, if a new rental agreement design has just been finished:

As a product owner, I want to know how satisfied my customers are with the ease-of-use for completing the new rental agreement compared to the current agreement so that I can select the better design.

With the consumer of the metric (who), the intent of the survey (what) and the desired outcome (why) clearly identified, the team is positioned to write detailed tasks such as:
● Design survey icons for easy and hard

● Implement a pop-up survey with the text “Overall, how difficult was completing the rental agreement?” (0-very easy, 7-very hard)

● Configure the analytics tool to report the customer satisfaction score for each cohort
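Comparing the two designs then reduces to aggregating the 0-7 scores per cohort. A hypothetical sketch (the response shape and cohort names are assumptions):

```javascript
// Average the difficulty scores for one design cohort.
// Lower is easier on the 0 (very easy) to 7 (very hard) scale.
function averageScore(responses, cohort) {
  const scores = responses
    .filter(r => r.cohort === cohort)
    .map(r => r.score);
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}

// Illustrative survey responses for the two agreement designs.
const responses = [
  { cohort: 'current', score: 5 },
  { cohort: 'current', score: 4 },
  { cohort: 'new', score: 2 },
  { cohort: 'new', score: 1 },
];
```

In practice the analytics tool performs this aggregation; the sketch just shows the comparison the product owner's story asks for.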

Definition of Done Includes Tracking

Another way that analytics can weave into stories is as a definition of done criteria. For example, consider the story:

As a registered user, I can type my user ID and password to enter the system.

The team may agree that the system must track successful logins and failed logins as acceptance criteria for the login story.
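That tracking criterion might look like the following sketch, where outcome events are queued for the analytics tool (event names and failure reasons are assumptions, not a GA-mandated schema):

```javascript
// Queue consumed by the analytics tooling.
const analyticsQueue = [];

// Emit a login outcome event so the story's tracking acceptance
// criteria can be verified in the analytics reports.
function trackLogin(success, reason) {
  analyticsQueue.push({
    event: success ? 'login_success' : 'login_failure',
    reason: reason || null, // e.g. 'bad_password', 'locked_account'
  });
}

trackLogin(true);
trackLogin(false, 'bad_password');
```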

Acceptance Criteria for Split Testing Stories

It is important to have agreement on what a completed test means. The most common criterion is statistical significance at the 95% level (meaning there is only a 5% chance that the observed difference is due to chance alone). Split-testing tools will typically compute the significance for you, or you can use this tool: http://www.evanmiller.org/ab-testing/sample-size.html
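For intuition, the underlying calculation is a two-proportion z-test: at the 95% level, a result is significant when |z| >= 1.96. This is a sketch of the math only; rely on your split-testing tool or the calculator above in practice:

```javascript
// z-score for the difference between two conversion rates.
function zScore(convA, visitsA, convB, visitsB) {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  return (pB - pA) / se;
}

// Significant at the 95% level when |z| >= 1.96.
function isSignificant95(convA, visitsA, convB, visitsB) {
  return Math.abs(zScore(convA, visitsA, convB, visitsB)) >= 1.96;
}
```

For example, 200/1000 vs. 260/1000 conversions is significant (z ≈ 3.19), while 200/1000 vs. 210/1000 is not (z ≈ 0.55).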

See Researching UX: Analytics by Luke Hay or A Practical Guide to Measuring Usability by Jeff Sauro for more discussion of statistical significance in web analytics.

Metrics as a Story Attribute

You can also weave metrics into larger stories by creating an attribute that defines success metrics for the story. What user behaviours can be measured to determine if the feature was a success? For example: the percentage of users who adopt a new feature, task completion rate, or task completion time; you can even poll users directly with task satisfaction pop-up surveys.

A metric could also be used as part of the acceptance criteria or target outcomes. For example, a redesign of a user interface could have a goal of users completing a task in less than 10 seconds on average.

Having a target for a metric generally requires that you have collected some initial data to set a baseline. In the previous example, the “less than 10 seconds” target may have been informed by a current measure of 15 seconds on average and expectations of the impact that changes will have.

Some teams keep work-in-progress low by managing story cycle time. Once a story is deployed (even experimentally), the time spent collecting data should not count against work-in-progress metrics. This can be handled by adding a “collecting data” column to the Agile board, or by considering the story done and opening new stories as required.

Very high-level stories can identify broad outcome goals (e.g. increased revenue) to ensure that the lower-level stories are aligned. For a good talk about the fractal nature of outcomes and intermingling analytics with Agile, have a look at Gabrielle Benefield's talk at GOTO 2012.

Innovation Metrics

The analytics team may also produce KPIs that measure its own capabilities. For example:

● Number of experiments running

● Number of experiments that achieved statistically significant results

● Experiment implementation cycle time

● Key learnings synopsis (not a metric, but good info for an executive summary)

To express developing such metrics as a story:

As a reader of the innovation dashboard, I can see at a glance how the team’s [volume of experiments] has changed over time in order to understand their accomplishments and challenges.


“Good teams get inspiration and product ideas from analyzing the data customers generate when engaged with their product, and celebrate when they achieve a significant impact to business KPIs.”
Marty Cagan, Foreword to User Story Mapping

A key goal of successful story writing is to focus on outcomes as opposed to output. It’s not how fast you build something that counts, but rather the impact of what you build. Developing analytics features is an essential ingredient of success because they offer hard data on the impact of your work.

By writing analytics stories that clearly identify the consumer of the information (who), the functionality that must be delivered (what) and the reasons the consumer of the analytics needs the information (why), the team will gain cohesion and efficiency.

Sidebar: Agile Analytics

Agile Analytics is a term used to describe applying Agile development methodologies to building enterprise business intelligence (BI) systems. Applying Agile and Scrum to data warehousing and BI is gaining popularity across many parts of the work process. https://www.thoughtworks.com/insights/blog/agile-data-warehousing-and-business-intelligence-action

There is a train of thought that some activities in large enterprise warehouse projects do not translate well to scrum. http://www.eiminstitute.org/library/eimi-archives/volume-3-issue-3-march-2009-edition/beware-of-scrum-fanatics-on-dw-bi-projects

In any event, the focus of this article is on using Agile processes to build product features that capture user-behaviour data, which in turn feeds BI and web and mobile analytics tools. It is not about building BI applications with Agile.

Learn How Rangle Can Help With Your Analytics

Our built-in custom analytics solutions allow you to measure and then optimize for greater conversions and revenue. Contact Us to learn more about our analytics consulting and how we can help you get the most out of your data.