Published December 22, 2020

Stop the silos! How to bring teams together to build a successful experimentation program

As an experimentation program manager, you will have plenty of moments where the phrase “herding cats” hits a little too close to home. Experimentation introduces a new set of processes and a level of analytical rigor that require consistency, and we know consistency doesn’t happen overnight. Don’t be discouraged! Those are normal growing pains, and they are exactly the moments where your persistence is the most important variable. The reason these moments keep coming up is that experimentation requires the vast majority of participants to change the way they do their work. Instead of making one change to a homepage component, a developer now has two builds to do. Instead of one point of analysis, people need to learn a secondary system and methodology. With some simple tactics, you can be sure to spend less time herding and more time testing.


We’ve seen from our customers that as more people are involved, more experiments (and more effective experiments) are run!

How Do I Manage All These People?!

One of the incidental positive outcomes of experimentation is how it can increase collaboration across functional teams. There are a lot of roles that can participate in experimentation and increase its value, and as the program manager you should always strive to bring in more folks. When you have more people involved in your program, you run more experiments, solve more complex problems, and run experiments more efficiently. More people also means more Slack messages that read “where are you with this?” Those are going to be needed sometimes, but you should always have these three mechanisms in place to help manage everyone who helps get an experiment from idea to launch.

  • A place for review: I’m no fan of meetings. I’ve advocated for no 1-on-1s at Optimizely (to no avail). I have to concede, however, that a meeting for experimentation is table stakes for operating an effective program. At Optimizely we have our weekly Experiment Review, where anyone with a hypothesis comes together to share it with the rest of the team. I’m a fan of an every-two-weeks cadence, but I have customers who do weekly or monthly as well. The agenda for your meeting doesn’t need to be lengthy or time-consuming. Simply ask your teammates to show up ready to share what they are planning to launch and the results of any running or recently ended experiments. Sprinkle in a few prompting questions such as “what data backs up that customer problem?” or “why did you pick that as your primary metric?” and let the rest of the team provide feedback and thoughts as well. Your goal as program manager is to facilitate and encourage cross-team feedback.
  • A place for conversation: Once you walk out of the “room” (Zoom) after your meeting, action items should be clear. You know who forgets some of those action items? Or has a new point of clarification on that primary metric question? Me. I need a follow-up place where I can ask my analyst why that metric is actually a better leading indicator of experiment success. This is another management practice you don’t need to overthink. In fact, a lot of what I work on with my customers is a “where do you do these kinds of things now?” discussion. If you manage communities of practice in Slack (Optimizely even has a Slack integration!) or Microsoft Teams, then do the same with your experimentation practices. If you manage your backlog in Optimizely’s Program Management, you can easily use the comment functionality to collaborate on open items. Especially when creating new processes, it’s helpful to “piggyback” on what already exists to minimize the change management. At Optimizely we have two channels: one to keep people up to speed on experiment progress (#experiment-feed) and one for discussing upcoming or past experiments (#experiment-review). As the program manager, be sure to keep your updates constant and consistent!
  • A place for execution: So, you have your hypothesis from the meeting. You have ironed out the last details. Now, how do you actually get your developer to help build it? Well, if your development team works in Jira (Optimizely has an integration for Jira as well), then help them out by working in Jira too. No one enjoys learning a new project management tool! Once a hypothesis is ready for the relevant engineer, we create an issue in their own Jira board that links back to Program Management with more details on what we are doing and why (a minimal sketch of that handoff follows this list). At Optimizely, we use both Program Management and Jira: Program Management holds all of the experiment configuration, such as the hypothesis, metrics, and visuals, while Jira focuses on what our engineering team needs to build. Everything that happens in these spaces scales even faster if you give your teammates consistent documentation to work from. That consistency ensures experiments are executed to the standards you set as the program manager!
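If your engineering team does live in Jira, the handoff can be as lightweight as a script (or an automation step) that opens a build ticket once a hypothesis is approved. Here is a minimal sketch against Jira Cloud’s REST API; the project key, field contents, and Program Management URL are placeholders I made up for illustration, and this is not how Optimizely’s Jira integration works under the hood.

```python
import os
import requests  # assumes the requests library is installed

JIRA_SITE = "https://your-company.atlassian.net"   # placeholder Jira Cloud site
JIRA_USER = os.environ["JIRA_USER"]                # e.g. a service account email
JIRA_TOKEN = os.environ["JIRA_API_TOKEN"]          # Jira Cloud API token


def create_build_ticket(experiment_name, hypothesis, program_mgmt_url, project_key="ENG"):
    """Open a Jira issue for the engineering build of an approved experiment."""
    payload = {
        "fields": {
            "project": {"key": project_key},   # hypothetical engineering board
            "issuetype": {"name": "Task"},
            "summary": f"[Experiment build] {experiment_name}",
            # Keep the "why" in Program Management; link to it from the ticket.
            "description": (
                f"Hypothesis: {hypothesis}\n\n"
                f"Full experiment details (metrics, visuals, targeting): {program_mgmt_url}"
            ),
        }
    }
    resp = requests.post(
        f"{JIRA_SITE}/rest/api/2/issue",
        json=payload,
        auth=(JIRA_USER, JIRA_TOKEN),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "ENG-123"
```

The point is less the code and more the pattern: the experiment’s “why” stays in one place, and the build work shows up where your engineers already look.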

How Can I Use Data to Understand Where My Time is Most Needed?

The most overlooked part of running an experimentation program is the data. No, not the data of how many more conversions or leads you’ve generated. I’m talking about the data that is created when you launch any experiment and how it informs the types of experiments you run moving forward as you learn more about your customers. Think about how much you know about that experiment:

  • How long it ran
  • What pages or audience it targeted
  • How many variations you used
  • What type of change(s) you were making 
  • What metrics it measured against
  • Which metric(s) were statistically significant
  • Which experiment stage took the longest on the way to launch

 

You should use these data points across all your experiments to understand what practices help your program and which don’t. These are the metrics I suggest for every customer to track from day one to set themselves up for this type of analysis. 
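As a sketch of what that day-one tracking can look like, here is one possible record per experiment. The field names are my own illustration rather than an Optimizely schema, and a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ExperimentRecord:
    """One row of program metadata per experiment (illustrative field names)."""
    name: str
    start_date: date
    end_date: date
    targeting: str                     # e.g. "all visitors", "returning", "paid traffic"
    num_variations: int
    change_type: str                   # e.g. "copy", "layout", "pricing", "algorithm"
    metrics: list[str] = field(default_factory=list)
    significant_metrics: list[str] = field(default_factory=list)
    # Days spent in each stage on the way to launch, e.g. {"design": 3, "build": 5, "qa": 6}
    stage_days: dict[str, int] = field(default_factory=dict)

    @property
    def won(self) -> bool:
        """One simple definition of a 'win': at least one metric reached significance."""
        return len(self.significant_metrics) > 0
```

However you store it, the value comes from filling it in for every experiment, not just the memorable ones.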


If you knew that running more targeted experiments would increase your win rate, I’d guess you’d run more targeted experiments. If you knew that getting experiments through QA was taking up 50% of your execution lead time, I’d guess you’d start evaluating your QA steps and get involved to streamline them. Analyze the business operations of your program!
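As a rough illustration of that kind of operational analysis, assuming you track records like the sketch above (shown here as plain dicts with made-up numbers), a few lines of Python answer both questions:

```python
from collections import defaultdict

# Illustrative records; in practice this comes from your tracking sheet or an export.
experiments = [
    {"targeting": "all visitors", "won": False, "stage_days": {"design": 2, "build": 4, "qa": 6}},
    {"targeting": "returning",    "won": True,  "stage_days": {"design": 3, "build": 5, "qa": 7}},
    {"targeting": "returning",    "won": True,  "stage_days": {"design": 1, "build": 3, "qa": 5}},
    {"targeting": "all visitors", "won": True,  "stage_days": {"design": 2, "build": 6, "qa": 4}},
]

# Win rate by targeting: does a narrower audience correlate with more significant results?
wins, totals = defaultdict(int), defaultdict(int)
for exp in experiments:
    totals[exp["targeting"]] += 1
    wins[exp["targeting"]] += exp["won"]
for targeting, total in totals.items():
    print(f"{targeting}: {wins[targeting] / total:.0%} win rate over {total} experiments")

# Share of execution lead time spent in each stage: is QA the bottleneck?
stage_totals = defaultdict(int)
for exp in experiments:
    for stage, days in exp["stage_days"].items():
        stage_totals[stage] += days
all_days = sum(stage_totals.values())
for stage, days in sorted(stage_totals.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {days / all_days:.0%} of total lead time")
```

If QA dominates that second breakdown, that is where your time as program manager goes next.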

 

What Do Mature Programs Do Differently?

When I get asked this question, I talk about how the programs that are running hundreds of experiments a year use data on themselves to inform what they should be doing better (now you know why that very specific piece received its own section). But there are other things to focus on too! These may be easier to get started with than creating a whole set of metrics to track against, especially if you are earlier in your journey and only running a few experiments a month. Here are other focus areas for mature programs that you could implement in your own program today:

  • Executive Buy-In – This doesn’t happen overnight unless you are lucky. Typically this takes a “prove it, or lose it” story. A strong program leverages data to measure business impact and tether an executive to their program. This executive helps on resourcing, setting a charter, and overcoming other roadblocks along the way. 
  • Experiment Policies – I wrote a bit about this in a prior blog, but a challenge in scaling the number of experiments you run is that many people assess experiment success differently. Creating a shared, agreed-upon perspective on how to analyze experiment results across all your metrics will reduce the time to a decision. A good policy to start with is how you make decisions between your primary and secondary metrics. If you have a statistically significant lift on your click-through rate metric, is that good enough if purchase conversion is not statistically significant? Setting these guidelines helps your teams act more quickly (a minimal sketch of one such policy follows this list).
  • Education Program – The bullet above is just one example of the scale challenge of “everyone is doing things differently.” To get the most out of experimentation, there have to be some consistent practices in your program. Not only how you do analysis, but how do you write a hypothesis? How do you use your data to inform those hypotheses? How do you leverage third-party data for targeting? These and so many other questions can be answered in an education program. Great programs range from a few sessions to a two-week “course” where teammates are taught everything about experimentation. I would make this part of onboarding if you can. It’s easy to get started on this with Optimizely’s Academy too! Be sure to document all these practices for easy sharing across the organization, even if it’s not part of your education program’s curriculum – your future self will thank you.
  • OKRs – Once experimentation becomes a focus for your organization, you should be measuring each other against it. Our own program manager wrote about the importance of this to encourage our product managers to be more active participants in experimentation. This aspect of your program can merge well with whatever program-level measurement you put in place. If you are measuring your program by the number of experiments started and the number of variations run, you can measure each person by those practices too! Be sure to set aside time to share results with each other so you can evaluate which OKR measures really impact total program success.
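To make the policy idea from the Experiment Policies bullet concrete, here is a minimal sketch of one possible decision rule. The rule and its inputs are illustrative only; your own policy should reflect whatever stats methodology and metrics hierarchy your program has agreed on.

```python
def decide(primary_lift, primary_significant, secondary_lift, secondary_significant):
    """One example policy for acting on primary vs. secondary metric results.

    Lifts are relative changes (0.03 == +3%); the significance flags come from
    whatever stats methodology your program uses.
    """
    if primary_significant and primary_lift > 0:
        # Primary metric (e.g. purchase conversion) won: ship, regardless of secondary.
        return "ship"
    if primary_significant and primary_lift < 0:
        return "roll back"
    if secondary_significant and secondary_lift > 0:
        # Only the leading indicator (e.g. click-through rate) moved:
        # this policy treats that as a reason to iterate, not a win.
        return "iterate: promising, but rerun or extend to move the primary metric"
    return "no decision: flat results, document the learning and move on"


print(decide(primary_lift=0.001, primary_significant=False,
             secondary_lift=0.04, secondary_significant=True))
# -> "iterate: promising, but rerun or extend to move the primary metric"
```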

 

So, What Are the Takeaways?

I threw a lot out there on how you can stop silos and create a more collaborative program. But what can you really focus on tomorrow (or next week!)?

  • Make sure to dedicate a program manager and key roles across teams. No experimentation program succeeds without someone who makes at least 50% of their role about the program. There are too many cats to herd. Take your workflow RACI and share it with teams so they know how to execute an experiment end-to-end. Start measuring the ROI of the program if there are resourcing gaps you need to fight to fill.
  • Bring people together in the ways they come together now. You don’t need to reinvent the wheel or, in this case, invent new processes. If you have Slack and Jira as your workflow tools right now, just use those. You’ll need that meeting too!
  • Measure your program for inefficiencies so you know where to spend your time as program manager. You don’t need to take action on anything in the first month or even the first year. But there will be a point where that data helps you answer a question you weren’t expecting.