Posted March 17, 2020

Scaling experimentation: How to build, launch & QA high-quality experiments

I’m Becca Bruggman, Optimizely’s Experimentation Program Manager. My job is to make sure we are “drinking our own champagne” by running a best-in-class experimentation program.

by Becca Bruggman

This is the second installment of a six-part series designed to help you run a best-in-class experimentation program. In this series, we are covering everything you need to build your program, develop it, run it efficiently and make it visible. You can find the previous posts here and here.

If you read my most recent post, you should have your key stakeholders mapped, have a plan for the program structure your team will be leveraging and have an initial goal tree. Now, let’s dig into how the team you have put in place will launch high-quality experiments at scale. The big secret: documentation and alignment!

As we worked to formalize our processes and program at Optimizely, we quickly realized how important it was to have comprehensive documentation for launching experiments that all key stakeholders had reviewed and aligned on. While not everyone loves process, good process helps in a few key ways:

  • It frees teams up to have an impact through their experiments and be creative, instead of figuring out from scratch each time which steps are needed to launch an experiment safely and successfully. 
  • It makes onboarding easier: with the experimentation process documented, new team members can self-serve, which scales me and minimizes the need for 1:1 sessions with the team. (And as a human, I don’t scale.) 
  • Finally, if anyone wants to modify the process, there is a single source of truth where those changes can be discussed and made.

Below, I walk through the core documentation we use to streamline experimentation. I’ll be showing screenshots of what these look like at Optimizely, and you can find templates for all of these in my Experimentation Program Toolkit.

First up: end-to-end documentation on running a Web (client-side) experiment! Template [here]



Part of Optimizely’s documentation for running a Web experiment

This includes things like:

  • Where to log an experiment idea
  • Where to build the experiment itself
  • What components are best to use
  • How to get a code review and SLAs
  • Steps to QA (a quick console check is sketched after this list)
  • Pre-launch checklist 
  • Who to reach out to if stuck
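
On the QA step specifically, one small thing worth putting in the doc is how to check what is actually running on a given page. Below is a minimal sketch, assuming the Optimizely Web snippet has loaded on the page; it uses the client-side window.optimizely JavaScript API to show which experiments the current visitor is bucketed into and which variation they are seeing.

```typescript
// Minimal QA sketch: run in the browser console (or a small QA helper) on a
// page where the Optimizely Web snippet has loaded.
const state = (window as any).optimizely?.get?.("state");

if (state) {
  // Experiment IDs currently active on this page for this visitor.
  console.log("Active experiments:", state.getActiveExperimentIds());
  // Map of experiment ID to the variation this visitor was bucketed into.
  console.log("Variation map:", state.getVariationMap());
} else {
  console.log("Optimizely Web snippet not loaded (or no experiments active yet).");
}
```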

In the beginning this can feel a bit overwhelming, but once the team goes through it once or twice, much of it becomes second nature. Additionally, if it becomes clear a process needs to be updated or removed, it’s easier to do when you have a single source of truth.

Leveraging the goal tree you built in my last post and getting all your core events set up and QA’d can also streamline experiment creation, as well as QA, down the road. Once these events have been created, documentation that maps out all your core events (for us, events like Lead Form Submits) gives you a single source of truth for what user behaviors each event tracks and where it happens on your site.

I recommend giving these core events a standard naming convention so they are easily discoverable and it’s clear what each one aligns to within your experimentation program. We use the “[MASTER]” prefix to designate events that have been built and QA’d by the core team, so everyone can leverage them confidently.


Optimizely uses “MASTER” to designate core events
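
As a concrete illustration: if one of these core events is a custom event (rather than a click or pageview event configured in the UI), firing it from your site with Optimizely Web’s client-side JavaScript API looks roughly like the sketch below. The lead_form_submit API name is a hypothetical placeholder; substitute the API name behind your own “[MASTER]” event.

```typescript
// Minimal sketch: firing a custom core event with the Optimizely Web
// JavaScript API. "lead_form_submit" is a hypothetical API name; substitute
// the API name behind your own [MASTER] event.
export function trackLeadFormSubmit(): void {
  const win = window as any;
  // It is safe to push even before the Optimizely snippet finishes loading,
  // since the snippet drains this queue on startup.
  win.optimizely = win.optimizely || [];
  win.optimizely.push({
    type: "event",
    eventName: "lead_form_submit",
  });
}
```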

Next up: end-to-end documentation on running a Full Stack experiment! There will likely be some overlap between the Web and Full Stack documents, especially around the pre-launch checklist and who to reach out to if stuck. But given the differences in code bases, QA processes and approvals, having these broken out separately is usually helpful. Template [here]

This includes things like:

  • Where to log an experiment idea
  • Where to build the experiment itself (i.e. which code bases align to which products, and which Optimizely Projects align to which parts of the code base)
  • Links to documentation on SDK configuration and eventing (see the sketch after this list)
  • How to get a code review and SLAs
  • Steps to QA
  • Pre-launch checklist 
  • Who to reach out to if stuck
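
For the SDK configuration and eventing piece, here is a minimal sketch of what an experiment can look like in application code using the Optimizely Full Stack JavaScript SDK. The SDK key, experiment key (pricing_page_redesign) and event key (lead_form_submit) are hypothetical placeholders; your Full Stack documentation should spell out the real keys and which Optimizely project they live in.

```typescript
import * as optimizelySdk from "@optimizely/optimizely-sdk";

// Hypothetical SDK key; in practice this comes from your project settings.
const optimizelyClient = optimizelySdk.createInstance({ sdkKey: "YOUR_SDK_KEY" });

export async function getPricingVariation(userId: string): Promise<string | null> {
  if (!optimizelyClient) return null;
  // Wait for the datafile so bucketing decisions are available.
  await optimizelyClient.onReady();
  // Bucket the user and return their variation key (null if they don't qualify).
  return optimizelyClient.activate("pricing_page_redesign", userId);
}

export function trackLeadFormSubmit(userId: string): void {
  // Fire the core conversion event so it is attributed to any running experiments.
  optimizelyClient?.track("lead_form_submit", userId);
}
```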

For each of these documents, it is helpful to get sign-off from your key program stakeholders, the end users of the process and anyone impacted by it, such as code reviewers. These are evolving documents that I recommend reviewing once a quarter to ensure they are still relevant from a process and stakeholder perspective. At Optimizely, as our program has matured and people have come and gone, these reviews have helped ensure the systems we started with still make sense as more people get involved and the program grows.

Finally, thanks to Software Engineer Derek Hammond, we also have documentation on how to create new components/entities (Pages, Audiences, Metrics, etc.) in our Optimizely Web instance. Template [here]


Optimizely’s documentation on new Web Entities

We built this document about a year into our program, and I wish we had built it sooner: by that point our Web project had become overrun with duplicate or single-use Pages, Audiences and Metrics. As a result, I had to take several days to clean up our instance and pare it down to only what was actively being used and useful, which you can see me doing back in January 2019:


Me mid-web project clean-up last year


Announcing to the team all the clean-up I had done!

Learn from my mistakes! I encourage you to set up this document at the beginning, along with a regular cadence for archiving old components in your Web project. This keeps things much more straightforward for anyone building experiments, since they won’t have to dig through unused components to find what they need.
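
If you want to make that archiving cadence a little less manual, a small audit script can help surface what is sitting in your project. This is a rough sketch, assuming the Optimizely REST API v2 list endpoints for audiences and pages, a personal access token in an OPTIMIZELY_API_TOKEN environment variable, and a runtime with a built-in fetch (e.g. Node 18+); it only lists what exists, so you can spot duplicates and stale entries by hand before archiving anything in the UI.

```typescript
// Rough audit sketch: list Audiences and Pages in a Web project via the
// Optimizely REST API v2 so duplicates and stale entries are easy to spot.
// PROJECT_ID and OPTIMIZELY_API_TOKEN are placeholders; review the output
// manually before archiving anything.
const API_BASE = "https://api.optimizely.com/v2";
const PROJECT_ID = "YOUR_PROJECT_ID";
const TOKEN = process.env.OPTIMIZELY_API_TOKEN;

async function listEntities(resource: "audiences" | "pages"): Promise<void> {
  const response = await fetch(`${API_BASE}/${resource}?project_id=${PROJECT_ID}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  const entities: Array<{ id: number; name: string; last_modified?: string }> =
    await response.json();

  console.log(`--- ${resource} (${entities.length}) ---`);
  for (const entity of entities) {
    console.log(`${entity.id}\t${entity.name}\t${entity.last_modified ?? ""}`);
  }
}

listEntities("audiences").then(() => listEntities("pages"));
```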

From a governance perspective, Optimizely’s Slack integration is very helpful for seeing what experiments are being created and launched across your entire account. It lets you spot in real time if anything is being built or launched outside of the agreed-upon process and course-correct as needed. At Optimizely, we have a channel called #experiment-feed where all of these notifications pipe in directly:


#experiment-feed Slack channel at Optimizely

One important call-out to close with: you should expect all of this to grow and evolve with your program, especially your end-to-end process. Additionally, remember to “crawl, walk, run” in a way that aligns with your organization. While documenting all of this process up front is important, it will likely take some time for the team to get accustomed to leveraging it consistently, and the documentation will need updating based on team feedback and your program’s changing needs. So just know that is coming and be ready to adapt!

You can find all templates noted above [here]. 

What processes does your team use for experimentation right now? What am I missing? Comment below or tweet me @bexcitement.

See you in the next post on experiment ideation and building your roadmap!