Embracing modern work principles using experiment collaboration

Experimentation teams have an unsolved problem: Where we work has changed, but how we work has not. 

Embrace new principles designed for remote and distributed teams that emphasize structured, transparent and async-first collaboration through our new experiment collaboration features.

Discover how you can achieve larger wins, drive team engagement, secure organizational buy-in, and elevate your experimentation program. Don't miss this opportunity to optimize your testing efforts and achieve remarkable results through effective collaboration.

Transcript

Thank you all for being here. We're almost at the end. You guys excited? Yeah. Ready for the after-party.

My name is Brad G. I'm a principal product manager here at Optimizely for our experimentation products.

Glad to have you all here. Glad that you're sacrificing your Amazon Prime Day to be here. Even though that was yesterday, I hope you guys found something nice to buy yourself.

Quick run through of what we'll be talking about today. I have a little story to tell you guys, a little entertaining story.

Try to keep it light, right, just a small group. But we'll talk about sort of the not-so-simple ingredients to building an experimentation program.

And I really wanna spend a majority of time talking about experiment collaboration.

Right? And this is something that our chief product officer actually previewed as part of our keynote yesterday. If you were in the web and feature experimentation roadmap session, you saw a lot of the features. We'll actually get deeper into it, right, and show you how these features actually work. And really, the whole point of this, right, is to help you build better habits around collaboration.

Right? We know since COVID era, many of us are working remote or in hybrid teams. And so it's even more important to find the right habits for effective collaboration amongst our teams.

And then, we're really excited to introduce, at the very end, two customers who have been part of our beta program, who have a lot to share about how they have adopted experiment collaboration and the successes that they've already seen.

So I'll introduce them in the later half of this talk.

So let me first start off with a story.

How many of you guys recognize what this dish is?

Shout it out.

Cacio e pepe. Great. Great.

And so the reason why this is one of my favorite pasta dishes is because it takes three simple ingredients.

Right? Do you know what the ingredients are?

Yeah, cheese, noodles, and cracked pepper. Exactly.

Right? It takes just three simple ingredients, but yet it takes great skill to master.

Right? So back in high school, I did a study abroad in Italy, and in Rome, I tried my very first dish of cacio e pepe and was blown away.

I then came home, right, back to my parents and said, can you make cacio e pepe? I asked my mom, right? And she said, I can try. And I told her it's just three simple ingredients. Right?

And I told her it's just cheese, pepper, and noodles.

And so she did just that. Right? She took sliced cheese. Right? This is what we had in our refrigerator, American cheese.

Our salt and pepper, right, in our pantry. This is our salt and pepper shaker. It actually auctions on eBay for, like, sixty dollars now. It's a collector's item.

And packages of ramen noodles. Right?

And this, ladies and gentlemen, is cacio e pepe in the eyes of my mother. Alright?

And so what's the point here? Right? The point is that you might think you can combine three simple ingredients, but if you don't have the experience in knowing which ingredients to combine, you'll end up with something like this.

Right? And you can kinda think about this when you think about the way you build your experimentation team and the way you run your program.

Right? We've all seen this sort of classic combination of people, process, and technology.

Right? We may think we can get the most experienced people, right, have the most thoughtful process, and use the best technology, maybe like a workflow solution like Jira, right, and run a highly effective experimentation program.

But oftentimes what we're left with is this.

We have people asking our teams these questions: Are we running anything on this page? Can someone help me build this experiment? What are the results of last week's test? Right?

The process, it's very reactionary.

You have your designers or your analysts asking: are the designs ready for this experiment yet? Is this ready to be QA'd? I need to QA this right now, it needs to go out today. Or does anyone know why this experiment's paused?

Right? And you have the technology that you use. Right? They just simply don't talk to each other.

Right? You're passing around flat files, you're copying and pasting fields from your experimentation platform into Asana, into Google Sheets, into a Word doc, right, just to share everything out.

Alright. And so you're left with this, right, when everyone really wants this delicious, beautiful dish of cacio e pepe.

Alright? How do we get there?

Let's first break down the technology piece. Right?

This is where we're really excited to announce to all of you our new set of capabilities called experiment collaboration.

Right? It's a centralized place for you as an organization to run your entire experimentation program, from ideation to planning to orchestration to sharing results.

Right? An exciting thing about this is we're actually gonna make this free for customers of our top two experimentation tiers.

If you're on, say, Scale or Business or Enterprise, these top tiers, you will actually, soon enough, in a few weeks, see this in your product.

And that goes along with our entire mission here at Optimizely. Right? We're trying to help you run more impactful experiments.

Now, this is sort of a great helicopter view of all the features that are included. I'll just briefly talk about this, but really what I wanna get back to is these habits that we're gonna help you and your teams form around effective collaboration.

So we all know you've come to either web experimentation or feature experimentation, right, to set up your experiment, to hit publish, to maybe monitor those results.

But there's a lot more to it. Right? There's everything you do around planning. Right? And this is where experiment collaboration will help you. Right? You'll have a centralized idea intake form.

You'll have templatized test plans to help you document those test ideas. And you'll have really cool collaboration features where you can actually share design files of proposed variations, comment on them, and get additional feedback.

You have governance pieces. Right? You want to prescribe and templatize the workflow.

Right? So you can guarantee every experiment that goes out the door has met the stringent compliance checks that maybe your company adheres to, and that every result that you see is trustworthy because it's been set up correctly.

We'll have things like required approvals and notifications for followers as well.

Especially for the program managers who are maybe managing dozens, or multiple dozens, of experiments at one time, we'll have a lot of orchestration capabilities.

Right? We'll have a common board view.

So I know some of you guys have used Trello, right, and manipulated cards around. Right? We're actually gonna offer that same style of experience directly within experiment collaboration.

These are features, right, every product has features.

I think what often gets missed is that the technology, right, the tooling, and the processes need to really complement the people and their teams.

Just like our beautiful dish of cacio e pepe. Right? These three ingredients really need to complement each other.

Right? And so that's what we're gonna talk about next. Right? How do these features really help you and your teams build better habits around effective collaboration?

And the sort of three habits that I really wanna touch upon today.

The first habit is structure over improvisation.

The second habit is documentation over very, very expensive real-time meetings. Right? We all know we don't have calendar fairies around that can book time with everyone on our team, right, to talk synchronously.

And habit three, widespread involvement over small-team execution.

And this is about promoting visibility of your program.

So let's start off with habit one. Alright. And the way I'll structure this is I'll talk about the habit, the behavior that we're trying to encourage, and then the feature in experiment collaboration that really helps you get there.

So the first behavior is to be able to use a structured intake process for new ideas.

So before I get to that, let me first show you how you would navigate into experiment collaboration from your web experimentation platform or your feature experimentation platform. So like I said, in the next few weeks, you'll be able to see, in that left nav right there, a new call to action to collaborate.

And so I'll play this quickly. Right? You could go down there, click collaborate, and it'll open up a new tab into the experiment collaboration experience.

Now back to this topic of how do I set up a work request or idea intake form?

Well, from here, you go into this feature called work requests, and you see here, you already have a set of fields in your idea request form.

All these fields are highly customizable. Right? So you can already see here, I have a hypothesis, the problem that this solves for, the business goals, but you can add a number of other things: business units, regions. And all these fields have multiple field types. It could be a rich text field, a label, a dropdown, a numeric value. Right? So you can really structure how these ideas are initially documented, so that there's not a whole lot of back and forth between the person who's requesting that idea and your teams trying to get clarification.
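To make the idea concrete, a customizable intake form like this can be thought of as a schema of typed fields plus a validation pass over each submission. This is a hypothetical sketch in Python; the field names, field types, and `validate_request` helper are illustrative, not Optimizely's actual data model:

```python
from dataclasses import dataclass, field
from enum import Enum

class FieldType(Enum):
    RICH_TEXT = "rich_text"
    LABEL = "label"
    DROPDOWN = "dropdown"
    NUMERIC = "numeric"

@dataclass
class FormField:
    name: str
    type: FieldType
    required: bool = False
    options: list = field(default_factory=list)  # choices for dropdowns

# An illustrative intake form, loosely mirroring the fields shown in the demo.
intake_form = [
    FormField("hypothesis", FieldType.RICH_TEXT, required=True),
    FormField("problem_solved", FieldType.RICH_TEXT, required=True),
    FormField("business_unit", FieldType.DROPDOWN, required=True,
              options=["Marketing", "Product", "Growth"]),
    FormField("weekly_traffic", FieldType.NUMERIC),
]

def validate_request(form, submission):
    """Return a list of problems; an empty list means the request is complete,
    so there's no back-and-forth needed to clarify the idea."""
    errors = []
    for f in form:
        value = submission.get(f.name)
        if f.required and not value:
            errors.append(f"missing required field: {f.name}")
        elif f.type is FieldType.DROPDOWN and value and value not in f.options:
            errors.append(f"invalid option for {f.name}: {value}")
    return errors
```

The point of structuring intake this way is that an incomplete request can be bounced back automatically, before it ever reaches a program manager's queue.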

So let's build this idea out a little further. You add a title, the hypothesis, the problem it solves for, pick a business unit, and hit submit request.

Alright. So the requester would submit this, and maybe it gets funneled to your team. Maybe you're the one who's responsible for actually evaluating this idea and considering whether it's a worthwhile idea for you to pursue.

So you'll have all these ideas in a request queue sort of like this. Right?

And this becomes the starting point for collaboration.

It might be that, hey, this idea is not so great. Right? We have a few of those. Maybe it's something I've already tested. Right? And so you can collaborate back and forth, and you can either accept or reject that idea from this request queue.

So say I look at this idea as the program manager and think, hey, this is a sound idea to begin working on. I then promote it to a hypothesis.

Right? And so this hypothesis is actually the hub for everything around that experiment.

This is where you'll document the parameters in a brief. You're able to upload screenshots and design files, an integration I'll talk about in just a bit. And this is where you'll eventually link this to an experiment in either web or feature experimentation.

And this gets into the next behavior in this habit. Right? It's about proactively setting the roles, deadlines, and activities in the way you build an experiment.

So how do we do this here? Well, in this hypothesis, you can see there, on the very far right.

A workflow that's already been set up. But this is highly customizable and templatized depending on how you work.

So I could go in here. I could say, hey, I've completed my requirements, completed the backlog step, and I'm moving this into creative design. I could assign this to my design team to work on. Maybe Brit Hall is the head of design and is the one that needs to approve the designs before they go out the door. She can come in here and approve that step.

So everything's all centralized here.
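One way to picture this kind of templatized workflow is as a small state machine: an ordered list of steps where each step must be approved before the hypothesis can advance. This is a hypothetical sketch, not the product's implementation; the step names and the approval rule are illustrative:

```python
# Hypothetical sketch of a templatized experiment workflow: ordered steps,
# each gated by an approval before the experiment can move forward.
STEPS = ["backlog", "requirements", "creative design", "development", "QA", "launch"]

class Workflow:
    def __init__(self, steps=STEPS):
        self.steps = list(steps)
        self.index = 0          # position of the current step
        self.approvals = {}     # step name -> who approved it

    @property
    def current(self):
        return self.steps[self.index]

    def approve(self, step, approver):
        """Record that a named person signed off on a step."""
        self.approvals[step] = approver

    def advance(self):
        """Move to the next step only if the current one has been approved,
        so nothing goes out the door without the required sign-off."""
        if self.current not in self.approvals:
            raise ValueError(f"step '{self.current}' needs approval first")
        self.index += 1
        return self.current
```

The design choice worth noting is the gate in `advance`: because every transition requires a recorded approval, every experiment that reaches launch has an audit trail of who signed off on each step.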

The third sort of behavior here is about removing data silos.

Right? How many of us actually have to copy and paste fields from, like, Optimizely to, like, Jira, to Word docs, to Google Sheets? I see some nods there. I see some hand raises. Yeah. A lot of copy-pasting and just manual work. Right?

People talk about it: it's work about work. Right? And we're trying to do less of that.

And so with this hypothesis, you're probably wondering how this is actually linked to web experimentation or feature experimentation.

Right? And so I'll show you in just a bit. You go into your experiments tab, right, pick the project where that experiment sits, and then find that experiment and simply link it.

Once you link it, we'll actually bring over all the relevant information about that experiment directly into experiment collaboration.

So you'll see the variations that have already been set up, the traffic allocation, the metrics, the audiences, and then the status, whether it's paused, running, or archived. Right? And this is super helpful for, like, a QA engineer, or for the QA step. Right? They can quickly see what's been set up on the experiment, compare that to what was documented in the brief template, right, and ensure that everything is sound before it gets published.

The second habit is around documentation over very, very expensive real-time meetings. Right? And so this is all about being async-first and really using written communication,

when possible, to collaborate with your teams. And so let me show you an example of that.

Here, we're in that same hypothesis object, but we've moved over to the variations tab.

Alright. This is where you're really brainstorming potential variations that you can actually test or that would be used for this hypothesis.

So I'll go here, add a variation, add a URL. And maybe I really wanna optimize my home page to build a better sense of urgency. I could drop in the URL for my home page; we actually show that live and iframe it in. And then we can directly collaborate and comment on areas of the site that we should improve. Maybe I wanna change the call to action here to build greater urgency and say limited time only. I can even mention members of my team for their feedback.

And so all that commenting now lives in place on that page. Right?

No more saying, hey, I meant to change that headline in the, you know, top left corner of the homepage. Right? It's all directly in context, and it's easy to find.

Another aspect of this is how we now enable the embedding of documents.

I read a fascinating study a few weeks ago saying that thirty-seven percent of our time is spent finding documents or conversations that we once had. Right?

Yeah. I certainly feel that. Especially as someone who loses the car keys frequently.

I lose a lot of documents, and I end up using search a lot.

But with experiment collaboration, right, we now provide a centralized place to be sort of that hub for all the design files, spreadsheets, and conversations, directly in this experience. So in this variations tab as well, you can also add a link to maybe Figma, InVision, a Google doc, or an Office 365 document, like I'm showing you here. Here I'll maybe drop in a Figma link.

You add this Figma link, and we'll recognize it as a Figma link and actually expose it directly in experiment collaboration.

Right? Everything's really at your fingertips.

And also, we're not looking to replace Figma. Right? We know designers would kill us if we tried to, and we probably wouldn't even be successful at replacing Figma. But we're trying to give everyone on your team easy access to that information.

They can even quickly click on open in Figma to be redirected to the actual design file that lives in Figma.

And then the last topic here, right, is around widespread involvement over small team execution.

And this is really about defining success clearly and sharing that vision widely across your team.

Right?

I've spoken to a number of customers about how they operate as a team. And what's always odd is, like, sometimes we see the development step as sort of a throw-it-over-the-wall kind of thing, and it comes back. And the developer doesn't really know, you know, whether what they did actually had a meaningful contribution to the experiment. Right?

And so this is what it's enabling. Right? Everyone's working off a central hypothesis object. They can see what the goal of that hypothesis is.

And then when it reaches statistical significance, we can all relish the success. Everyone's able to feel that they had a meaningful contribution to the business metrics.

Right? And a big factor in this is our Kanban boards, which I'll show you just now. So you can see here a very Trello-like experience where you can drag and drop cards. I can move a card to the next step of the workflow.

I can even assign it to an individual. Right? I can even change the due dates. I can pretty much manage and orchestrate my entire experimentation program from this Kanban board view here.

And the last thing to talk about here is intentional brainstorming. Right? How many of you guys use, like, a collaborative whiteboard, sort of a collaborative canvas, like Lucidspark or Miro? I see a few hands.

How many use Miro? Sorry, Lucid. Actually, I'm very curious about this, trying to determine what we wanna integrate with. Okay. How many use Lucidspark?

How many use, like, FigJam?

A few?

And a few others. How about Mural? It's another one, like Miro.

So this informs our prioritization and our roadmap. Actually, that's an interesting insight there. I didn't think that would be the case.

But yeah, we love collaborative whiteboarding. It's essential to the ideation stage. And what we currently support is being able to embed things like a Miro board directly into that hypothesis object. So you see here we iframe it in, and you can actually manipulate and interact with it from this experience, or you can choose to open it up in its native application.

Alright.

So that's FigJam. I showed, I think, Miro earlier. Same thing with Google Docs, we can do the same thing. Alright. Again, just giving you easy access to all this information. Right? Think of it as the hub. Right?

So, to sort of wrap up here before we move to the customer panel.

I think the key takeaway is, you know, especially after COVID, where we work has really changed, but I don't believe how we collaborate as a team has changed that much.

Right? And that's what experiment collaboration is really meant to help you with. Right? It gives you and helps you build better habits around remote collaboration.

Right? And that's through adopting modern work principles, such as being async-first, right? And sharing things, and increasing visibility over small-team execution.

And just a quick sneak peek: everything I actually showed you is already available. We're gonna gradually roll that out over the next, probably, two to three weeks. But this is just a sneak peek into the roadmap. Right?

We wanna build out idea scoring features so that you can adopt whatever scoring rubric you wanna use, whether it's PIE, ICE, RICE, whatever other acronym is out there, right, and even use boost scoring. We wanna help you automate the results-sharing process. Right? Experiment collaboration now has the hypothesis, it has the metrics, it has the variations, it has the results. So why can't we just help you speed up how you share your results, so that we can prepackage this in sort of a slide format that you can easily share with your teams?
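For reference, the rubrics named above boil down to simple arithmetic over a few rated dimensions. A hypothetical sketch of ICE (Impact x Confidence x Ease, commonly each rated 1 to 10) and RICE ((Reach x Impact x Confidence) / Effort); the idea titles and ratings are illustrative, not from the talk:

```python
# Illustrative scoring helpers for two common prioritization rubrics.

def ice_score(impact, confidence, ease):
    """ICE: each dimension typically rated 1-10; higher is better."""
    return impact * confidence * ease

def rice_score(reach, impact, confidence, effort):
    """RICE: reach is people per period, confidence a fraction (0-1),
    effort in person-weeks; higher is better."""
    return (reach * impact * confidence) / effort

# Score a tiny backlog and sort it so the strongest idea floats to the top.
ideas = [
    {"title": "Urgency banner on homepage", "score": ice_score(8, 6, 9)},
    {"title": "Checkout copy test", "score": ice_score(5, 8, 7)},
]
ideas.sort(key=lambda i: i["score"], reverse=True)
```

Whatever rubric a team adopts, the useful property is the same: a shared, repeatable number to rank the idea backlog instead of debating each request from scratch.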

You know, we're even exploring AI-generated heatmaps. Right? So, like, earlier on in your design process, you can upload an image and we can predict where attention is being directed on that page, and even give you, like, a clarity score.

Right? So these are all things on our roadmap, just to say that we're heavily invested in experiment collaboration. Like I said earlier, we really wanna help you as customers run more impactful experiments.

So with that, I'd like to invite Ashley Anderson from Aura up to the stage, and Jackie Greg from our rapid experimentation team, as well as Brit Hall, who'll be moderating this interview. Thanks, Brad. Yep. Give us a second to get situated.

Thanks.

We're mixed media today. We've got a presentation. I've got questions jotted on note cards, and we're actually gonna show you a sneak peek into actual experiment collaboration instances. So I'm gonna show you real, live, not prerecorded (no slight to Brad) demos of what these guys are working on. Alright.

First off, introductions. Jackie, wanna start us off? Yeah. Hi, I'm Jackie Greg. I'm on our rapid experimentation team. I'm a senior optimization manager.

So our focus is really on working with clients, building out web tests for them, so I'm here today to speak a little bit about how we use this tool in our day-to-day. It's a really vital tool in our process, so I'll kind of get into that in a little bit. Yeah. And I'm Ashley. I work for Aura, the cybersecurity company.

And I'm obviously in-house, but collaboration is extremely critical for us, because at any given time, like right at this moment, I'm managing thirty tests. So we have a lot going on, and it was invaluable for me, for my sanity.

Whoa, okay, you can hear me.

For me, for my sanity, to keep up with everything in one place. So collaboration has really helped with that.

Alright. We'll start it off super easy with the biggest question ever, which is: what is experimentation at your organization, and what does it mean for each of you? I guess I'll start there.

Since I'm in house, honestly from top to bottom, we're experimenters.

We experiment on our front end. We experiment on our marketing site. We experiment with our paid ads. We experiment in our enrollment flow. We experiment on our app.

We experiment on, like, every aspect. There's not a part of our site, there's not a part of our product, there's not a part of any day where I don't have five people being like, not should we, but can we test this?

It's not a question of do we really need to. It's a question of how quickly can we get this test going.

We're very data-driven. We're very much experiment-focused. So it's just part of our vernacular. And for us, we're a team within Optimizely, so obviously experimentation is extremely important to Optimizely, but I would say even more so for rapid experimentation.

It's within the name. Right? It's really everything that we do. It's our bread and butter.

We do it day to day. And we really focus on building out experiments for our customers. We're a team of optimization managers, developers, quality analysts, and designers, and we really try to do everything that we can to fill any gaps the customer might have, to really help them increase their testing velocity and get actionable results. So really, it's all that we do.

And because we do this daily and we do thousands of tests, it's even more important to have a tool like experiment collaboration that we can use as our central hub. Right? Have a place where everyone puts in their test ideas, whether they're fully developed or not yet, just in a backlog phase, but it's something that we can then look to in our day-to-day, and keep all our communications centralized with every idea.

And it's just been a really useful tool for us. So we're really excited to be working with it. Love it. So let's dig in a little bit.

Ashley, in particular, I'm curious about scalability and repeatability; you're running so many tests. And as you watch me switch all of my screens here, I would love for you to take us through exactly what that looks like in your org. So we talked a little bit beforehand, we've prepped this, but tell us about the codification of the work that you're doing. Yeah. So we really started off with: what do we need from our internal teams at the end of the day? What's the base information that I need from Joan over in product to get test A off the ground?

This allowed us to be very specific with what we wanted; it allowed us to be very intentional about the kind of information that we needed. So, you know, we start with: who's requesting this test? Who do I need to talk to about this? What area of the site is it on? What pages? For some reason, the hardest part to get from anyone is which pages they want the test to run on. But nailing that down, give me a brief explanation of why we're running the test, so that I can take that information and make sure that this is a relevant test, that it's something that is going to impact what you're wanting it to impact, that we can run it in the time that you need it to run, that we can figure out where it goes in our timeline and our roadmap.

So the thing that I loved about this is that it's so customizable. As Brad was mentioning earlier, I can select which fields are required, I can select what order they go in, so that it helps them kind of think in a waterfall effect of, like, okay, I'm starting with the hypothesis. Well, if that's my hypothesis, then what do I kinda want to change on the page?

And kind of move from there.

We also, within here, ask them for some basic information, like, hey, off the top of your brain, do you know how many people are visiting this page every week, every day? So we can kind of run those numbers in a quicker manner for them.
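"Running those numbers" usually means translating page traffic into a feasible test duration. A hedged sketch using a common rule of thumb (n per variant is roughly 16 * p * (1 - p) / delta squared, which approximates 80% power at alpha = 0.05); the function names and defaults are illustrative, not Aura's actual process:

```python
import math

def sample_size_per_variant(baseline_rate, mde_abs):
    """Rule-of-thumb visitors needed per variant to detect an absolute
    lift of mde_abs over baseline_rate (~80% power, alpha = 0.05)."""
    p = baseline_rate
    return math.ceil(16 * p * (1 - p) / mde_abs ** 2)

def weeks_to_run(weekly_visitors, baseline_rate, mde_abs, variants=2):
    """Estimate how many whole weeks the test needs at current traffic."""
    total_needed = variants * sample_size_per_variant(baseline_rate, mde_abs)
    return math.ceil(total_needed / weekly_visitors)
```

For example, detecting a one-point absolute lift on a 5% baseline needs 7,600 visitors per variant, so a page with 10,000 weekly visitors could finish a two-variant test in about two weeks; a low-traffic page might need months, which is exactly the feasibility call the intake question enables.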

So that was great. And once it's submitted, we also have it set up so that me and my other CRO are automatically assigned to it. So we automatically get the tests.

We also ask what top pages they're on, so that we can allow for filtering by tags.

It's a really robust tool, and I created my own aspects to this; I could create my own fields. It didn't just have to be whatever Optimizely gave me. So what made more sense for me was there, because I could make it so. And then as we move on, you know, once something's been submitted, it'll be sent over to me. I'll go through it, select the tags, and then I'll create a hypothesis from there, leading us into our true roadmap. So like this one here, work request one thirty-four has already been created into a hypothesis in our roadmap, and it just pulls that information directly over. I can edit it as I need, I can decide where it is in the flow, and we can work through it that way.

Once again, these are all very customizable fields.

So I could pick what made sense for us. Like, I don't necessarily need to say who's going to approve a design, because of how we work, designers just hand it off to me. So: is design done? Yes.

So that's how we kind of work through this in here. But it's been great to be able to assign people who need to be assigned and add in the information. If there's nothing in a field, it's not there, so it's not like you're scrolling forever to find what you're looking for.

Yeah. Cool. Yeah. I love it. And I'll uncomplete this because this is a real world first. Yes.

Well, then the design did get sent over to me a little bit earlier. So yeah.

Alright. So someone else can complete that. Yeah. I will complete that in a little bit.

Okay. Great. I'll leave it alone. Yeah. So I was like, wow. It's ready to launch.

Alright. Cool. So, Jackie, a question for you. We've talked about working asynchronously and distributed, hybrid teams. I think your team personifies this better than anybody else, because you are our customer, but your customers are all of Optimizely's customers. So tell me a little bit about that. What comms are you doing in real time versus what are you doing asynchronously?

And I'll pull your instance up as we go. Yeah. Great question. So for us, in the way that our team works, we typically only get a weekly call with clients.

Right? It's thirty minutes. Everyone in this day and age is working remotely. You only get so much time.

People are in different time zones. People are just spread across different countries. So it's really important to kind of keep the things that maybe are super-complex tests, or topics that you can't really explain offline, for those thirty-minute calls. So we've had to come up with a process and, you know, test brief templates and workflows and things that can be used within a tool like experiment collaboration, where so much of that work can be done asynchronously.

Right? Again, I think she's gonna pull it up here and show you guys. You can really fill out every field, everything that we would need to build a test. We can always go in there, ask questions, ask for pieces of information; different product teams can sign off. They can input their designs or let us know if they need our design work.

Everything can really be done there. So we're really reserving that thirty minutes once a week for things that are critical, that really can't be done offline. But so much of the process is doable here, as you'll see within the test brief. And then within the workflow, they get to see what step of the process this experiment is in.

Right? Is it still in the requirements phase? Is it in development? Is it in QA?

Who's assigned next? What's our timeline? When is it gonna be turned on? All of that is visible in here.

And it's not just for the people that are working with us. Right? It's really for every marketing team, everyone within the company. So there's that visibility into what's going on.

What are they testing? Where are they testing it? And you can see it all there. Images of the variations.

I mean, truly, it's endless, the amount of information that you can plug in. It's made our day-to-day easier, just being able to do so much offline.

Oh, because it's you who submitted the request.

Perfect. I'll get right off. I love it. We'll take a look at it in a second. Alright.

Brad talked about three habits within our, you know, people, process, and technology, and that third one is widespread involvement over small-team execution in particular. So I'm curious about how you get everyone involved, and specifically what it looks like to share out with executives, because I think that's something a lot of our people spend a lot of time doing. Yeah.

From our side, we use the plan tab and the timeline most predominantly.

Because a lot of our questions about this are: what are we testing? Like, what's going on? Do we have a gap? What can we get rolled out faster? So as you can see, we have a little bit of a gap in December, but as you scroll down and open more things up, more things start popping out and you're like, okay, no, there is no gap. We are covered. We are doing this roadmap.

So there's tons of things happening. I share this out with our higher-ups every day. Honestly, they're able to get in here and see this. Once we get more in-depth, we use the Kanban boards to move things over. And look at that.

Within the actual hypothesis is where we store our results. We use a Data Studio report for the visuals, to make it easier for people to see and understand. So I have a space in our briefs for the Data Studio report, and I just plop that link in there. Anyone who has access to this hypothesis (not task, hypothesis) can then see the results as they're coming in. And since Data Studio updates, I have it updating every hour, so they can see updated results every hour. So it's pretty easy for us, if we have to use something else, to put it in there, but then it's also really easy to share out.

We use the timeline, the board, the calendar, pretty much anything that we can use to help keep people from being in the dark. And it's shared out weekly, and then we do a more in-depth run-through monthly. So, yeah.

Yep. I can add to that a little bit because I think we use it pretty similarly at least for clients too. Right? They're using that timeline view.

They're using the calendar view. So I think that's super important, and it's a similar process. But I'd say we also, for our purposes, when we deliver a test to somebody within that workflow, we can always put in preview links. Right?

So anybody can come in. They can take a look at the test. They can take a look at the experience. They can sign off on it.

So they have that piece of it. But something I wanna mention too, because everyone loves the concept of transparency. Right? We want everyone to see what we're doing, what we're testing. Maybe other teams, social, email, whoever it may be, they wanna see what's going on.

Maybe they can use it for themselves. But something that I hear kind of frequently from clients is: we want everyone to know what's going on, but we don't want them to have access to the platform. We might not want them to actually be able to go in there, change the code within an experiment, pause it, publish it, kind of make those changes. So we really use this as a tool to let everyone see what's going on, let them understand what you're testing and where you're testing it, but reduce that risk, right, of maybe a mistake happening from someone going in and changing something. You know, leave it to those that wanna be key points of contact to publish things, and let the others just use this tool for full visibility into what's going on.

Alright. Pull your lovely faces back up.

And as we do so, There we go.

Curious. Like, what's next? Where does the end of twenty twenty-three take you? What's twenty twenty-four look like? Jackie, you wanna start? Yeah. I'll start.

So for the end of this year, you know, moving into next year: obviously always excited to take on new clients, get everybody really rolled out with experiment collab, get them really used to it, and just make the whole process a lot more efficient for us and for them as well. But even outside of that, I don't know for those of you who were at the keynote yesterday, I got really excited thinking about Opal, our AI tool. I think it's really cool.

The part that I loved was the fact that it can help fill out a test brief. Right? Because sometimes you might have an idea, and I think we find this a lot: there's an idea, but you might not know all the details, the rest of it. Where do I wanna test it?

Who do I test it with? Who do I wanna show what? So I think the fact that you can use this tool and this technology to kind of fill that in is gonna save so much time. And so for most people, right, the people that we work with in experimentation, sometimes experimentation is not their focus.

They wear so many hats. They have so many other jobs. So a test brief that should truly take maybe an hour or two is taking days and weeks, and it kinda sits there, and you reduce that ability to increase your testing velocity. So maybe having a tool like this, I'm hopeful, is gonna really, you know, speed up how many test briefs you can complete. You can get those in.

Our team can build it for you. And I just I see so many opportunities with that, so I'm super excited.

Yeah. I mean, I agree with all that. On our side, we're also looking more at velocity, so we want to increase the personalization, kinda look into that, see how that fits into what we're doing over at Aura. We're also looking at getting a little bit more involved and kind of thinking about, like, multi-armed bandits. We wanna get more results, better results, faster.

We're looking at, like, how do we do different kinds of testing and overlapping: how do we test an exit pop along with something in enrollment, along with something on a landing page.

So kind of deeper audience segmentation and thinking about how do we keep the data as clean as possible.

So, yeah, we have a lot of stuff going on. We're also, you know, working with big faces and big names, and kinda looking a little bit more towards our organic traffic as we're trying to step out and get our name known a little bit more. So there's definitely a lot of things that we're doing and looking for. And, I mean, Optimizely has all those tools; it's just a matter of how do we start using them. And that's not necessarily a question for Optimizely. It's more a question for me and my team. It's like, come on, guys. Let's do this.

So that's kind of where we're going.

I love it. Well, with that, we'll invite Brad back up to the stage. We have six and a half minutes or so, and we're open for Q&A, for Brad, for myself, for these women.

We'll just ask that... oh, no. We actually don't have a mic for this session. So just be super loud, and I'll try to repeat your questions.

Right back at you. Okay. Well, I know you can be loud. Go for it. Yeah.

For sure. So two things. First off, I think I interviewed with you in the beginning of the year. Me?

Let me see. Yeah. Oh, you know what?

Because we were like, oh, please don't hang up. No. Happy. I can tell.

I'll I'll on both sides. Yeah. Period.

Yeah. Yeah.

Your process and where you're at in your program are incredibly similar to ours. We're at, mhmm, pretty much exactly where you are, for the next year essentially.

And I'm curious how you plan to use this collaboration tool for the iterative process. Uh-huh. Because I know you guys spoke to that to some amount. Like, we often find we have run five to six, like, very similar versions of a test, and at the end of it, because we're not great with naming structures, we're, like, which one had the results we want?

Yes. Hold on. And, like, digging through the archives and all that stuff. So how do you guys currently, and how do you plan to, use it for that iterative process?

Yeah. So we're actually starting to implement some of those things. I just implemented today a new field in our requests that asks... because we run a lot of tests on specific landing pages, and we're running into a lot of, like, blockers and things like that. And so we wanted a quick way to easily see if there were things, or are things, coming up on the specific pages we're running on.

So I just added another field, like, hey, check off: is your test including this, this, this, or this? And if they check those off, it'll all show up in one view, so we can see what's run there previously.

Thinking towards the future of, like, "well, we've run this before", we do have to, with how our site works.

We are currently running a test. We're running series of tests right now on different subsets of traffic.

We use Robert Downey Junior's face. We're putting him in lots of different places. I'm, like... it sounds really weird when I say it. But, yeah, I'm just putting his face there, guys. No context.

Yeah. So we're working with Robert Downey Junior, and we're seeing where he works the best. Is it on our home page? Is it on our SEM traffic? Is it on our antivirus, or on our parental controls? Kinda seeing where he works the best. So very iterative. We find that he doesn't work on antivirus, but we reuse kind of those same concepts.

I personally find it best for me and how my brain works to do whole new hypotheses for each area of the site that I'm testing on, instead of just building it within one hypothesis, because there are different thoughts that have to go in for different areas of your site. Your change might need to be bigger, or you might need to have it in a different spot, depending on what page, what area of traffic you're on. So I personally like to have it separated out a little bit. But I would say, for me, I like a lot of the filters.

I can add as many filters as I want by adding the fields, and I can subset my tests that way. So... And one thing we didn't show: there's sort of the idea of a campaign in experiment collaboration. Just think of it as a way of, like, grouping a series of related experiments. Right? And so you can do that already in experiment collaboration. Yeah.
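The field-and-filter approach described above, tagging each test with a page, a traffic source, and a campaign so related iterations can be pulled back up later, can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not Optimizely's actual data structure; every field name here is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    page: str                      # e.g. "home", "pricing", "landing/sem"
    campaign: str = ""             # groups related iterations of one idea
    tags: set = field(default_factory=set)
    conclusive: bool = False

def filter_experiments(experiments, **criteria):
    """Return the experiments matching every given field value."""
    return [e for e in experiments
            if all(getattr(e, k) == v for k, v in criteria.items())]

tests = [
    Experiment("RDJ hero v1", page="home", campaign="rdj-placement"),
    Experiment("RDJ hero v2", page="antivirus", campaign="rdj-placement"),
    Experiment("Pricing copy", page="pricing"),
]

# Which iterations belong to the celebrity-placement campaign?
rdj = filter_experiments(tests, campaign="rdj-placement")
```

Grouping by a campaign field is what makes the iterative question ("which of our five versions won?") answerable later without digging through archives.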

I have to get an initiative for you. Robust.

The layout of mine... like, we have gateways. On the marketing site, we've got the pricing pages. Then we also have our paid landing pages, and within those landing pages, I have it broken out by, like, if it's an influencer page, if it's an affiliate page, if it's SEM. We also have, like, gateway pages, how people enter our site. So we break all of our tests out that way, so we can see it all broken down and know, like, do we have the ability to run tests on this traffic? So, yeah.

Yeah.

Yeah. Do you guys wanna answer? I don't know if you'll... I'll give you a sneak peek into, like, the even further future roadmap. If you were in the last session, we did talk about this a little bit, but we are starting to work on results summaries and experimentation summaries.

Those will take everything that we know and use AI to consolidate it into something that's shareable. What we've also heard is that so many of you are like: at the end of every test, I put together a PowerPoint, and I have to drop screenshots of the variations into the PowerPoint. Like, we have every bit of that in tools that we own.

And so Brad and I have been looking at using templates similar to what you've already seen for, like, workflows and briefs and intake, but output templates as well, allowing you to build a template and then prepopulate it and share it out. So more to come on both summaries and then getting them into the hands of the people who need them, a little bit further in the roadmap. Yeah. And in the short term, it's like, you know, you could create a field called key learnings.

You create a drop-down for, like, learning objective achieved, yes or no, and so begin to catalog the actual results portion of your hypothesis that way.

Yep.

Aggregating by topic.

Checkout-related tests. Right? Because we'll have one... we'll have an initiative with somebody.

So you can absolutely already segment, or maybe segment isn't the right word to use, but group your tests by any sort of taxonomy that you want. One of the first things that we do when you get stood up on experiment collaboration is sit down with you and understand what your data structure is and what is important to you with regard to coding, so that you can put the data in in order to get it back out later. So if you do wanna be able to answer questions like, you know, I gave the example in the last talk: in Q4 of twenty twenty-three, on the home page, what were the conclusive tests that we ran? Like, you should be able to do that, as long as you tag it with home page.

Yeah.

Would this work when setting this up for a franchise on thirteen sites? And I have more than fourteen people... yeah... for everything each one has been tasked with, so they all want to have seats.

Some level of seat.

Guests are the base level of seats. And for those of you who have experiment collaboration, they're completely free. So you can open them up to your entire org, because we want that democratization. We want everybody in.

Then we have some really robust, like, permissioning structures. Jackie, do you wanna talk about that? Because you work with so many different customers who, like, really shouldn't see what everybody else is working on. Yeah. So, obviously, you can include everybody, and we kinda did it where we set up one specifically for experimentation.

So it's like a specific permission where you only get to see some of the features. So maybe the timeline and the board, but not the publishing and some of those other features. It really depends what you want someone to come in and be able to see, but it's also very specific to the group. We worked with them to make it very personal to experimentation, and each client is gonna have slightly different pieces that they can access and work on.

But it's all within permissions, and then the main person can assign it as they go. So you could have ten people seeing one thing, two people seeing another. So it's really useful. It just depends on what your goals are and what you want people to have access to.

But that level of visibility is still there; it's just that what actions they can take is a little bit more limited.

Got a question.

Is this a standalone tool? I mean, if I'm using a different tool currently for testing or whatever. Yeah. From a compliance perspective, there's a lot of value.

If we're not on the full Optimizely platform... yep... is it a standalone tool where I could, like, use it for the compliance piece? Yeah. Complicated question. It is not a standalone tool, in that if you are not an experimentation customer, you cannot buy it separately.

If you are an experimentation customer and you do have access to this, there's nothing saying that you have to link it to an experimentation experiment.

So you could still use the workflows and everything else. I do that. We have a site that we do very little testing on, that we don't have Optimizely on.

We just use our CMS to run tests there. So there's a whole area in our timeline that's for our other site. We might run a test, like, two or three times a year, but I need to have it somewhere where I can keep it in my brain. And so it's in there. It just doesn't link to anything in Optimizely.

Got it. So, yeah. And, sorry, yes, can I piggyback on that question?

We have an internal, like, legal approval process.

Can this link to that? Or is it, like, a lot of dev work to do that? Not necessarily a lot of dev work. We have an external work management API that allows you to take a workflow step from this tool, send it to an external tool, let that process run, and then complete the workflow step here.

So we see that a lot for Jira, right? Development teams are in that tool. But we've done compliance with other tools like Veeva PromoMats and that kind of thing. And I'm assuming it's something similar, where that external tool would be the step owner. So instead of a person, it's a process that owns the step.

And then when that's completed, it comes back and it finishes out.

And... no, not a lot of custom dev. It's more like integration, potentially some middleware. Yep. Thank you.
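The pattern described here, a workflow step whose owner is an external process rather than a person, can be sketched as below. Everything in this sketch is invented for illustration; the real external work management API, its payloads, and its endpoints will differ, and in practice the hand-off and callback would be HTTP calls through middleware:

```python
# A workflow step is handed to an external system (e.g. a legal-approval
# tool) and completes only when that system reports back.

def send_step_to_external_tool(step):
    """Hand the step off; returns a ticket id from the external system.
    (Stand-in for an HTTP POST through middleware.)"""
    return f"LEGAL-{step['id']}"

def on_external_completion(workflow, ticket_id, approved):
    """Callback the external tool invokes when its process finishes."""
    for step in workflow:
        if step.get("ticket") == ticket_id:
            step["status"] = "complete" if approved else "rejected"
            return step
    return None

workflow = [
    {"id": 1, "name": "Build variation", "status": "complete"},
    {"id": 2, "name": "Legal approval", "status": "waiting"},
]

# Step 2's owner is the external process, not a person.
workflow[1]["ticket"] = send_step_to_external_tool(workflow[1])
on_external_completion(workflow, "LEGAL-2", approved=True)
```

The shape is the point: the step stays open until the external system calls back, so no person has to babysit the approval.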

And I do wanna say we're over time, but there's no one after us in this room. So if you guys wanna stay... but also, if you wanna leave and get to happy hour, you're not going to offend any of us here. But also maybe we can hang out. Yeah.

You said that you stood this up, you onboarded and implemented, like, fifteen or sixteen people, within, like, two weeks, which seems super fast. Yeah. Yeah.

Can you talk about, like, what that process looked like? How did you... The short answer is: I told them this is what we were doing.

The long answer is: we already had a pretty similar process in place.

I've used program management tools before. We also already had a lot of stuff in Asana.

So it wasn't like a huge change for our flow and our process already.

But obviously there are, like, little changes that are made when you're introducing a new product. I'll say, though, for anyone on our side who is experiment-focused... like, we work with an agency that does experimentation, I've been doing experimentation for six years. Like, we're very experiment-focused.

It was like stepping into a pair of shoes that was perfectly made for your foot. It had the right vernacular. It had, like, the ability for us to do the things that we wanted.

But for someone who was kind of, like, on the newer side, it wasn't that they had issues with it. Like, there were hardly any problems with anyone getting stood up. It was more just: this isn't called exactly the same thing it was called over there.

So, long story short, we were already pretty organized with it. And I said, no, I want to use this, because it's more experiment-focused. I love the variation feature. I can add comments to pictures of screenshots and be like, hey, developer, you need to remove this spacing, and I can just add a comment directly onto the picture. Our agency is in the UK.

Our developers are in the Middle East.

I'm never going to talk to them in, like, any sort of real-time way. So I can just be like, fix this, and I'll point to it. I don't have to be on a phone call with them.

They know exactly what I'm talking about. And as far as bringing anything existing that was in Asana into it: it was a little bit manual, but it wasn't crazy difficult. Because, like I said, we had a pretty similar setup. It was a little bit manual, and we did have a lot of tests in there. But me and my counterpart sat down, and we blocked off time in our calendar.

I blocked off an hour and a half, and we just crushed it and just went through it and just got it all copied over.

And it was obviously a lot easier once we had it standing. So yeah.

So to follow up on that: is there, like, an integration or some sort of import tool? Because, for example, we've got ours stood up in Trello. Yeah. It has been for over a year.

We have a huge archive of stuff we've completed in there. I'm fine to let any hanging tasks get ported over manually, but all the stuff that's done... Yeah. And that's shared across other boards as well.

Would love to be able to, like, represent that stuff. Yeah. So an import tool where, like, I click a button and move it from there to here is probably pretty far out, mostly because you're all using different tools. We've got Trello, we've got Asana, we've got Monday, we've got... right.

And even within those tools, everybody sets things up a little bit differently. But what we can do: we have an engineering team that sits within our services org that can script it. So as long as we can create a repeatable process for what data goes where from the external tool that you're using today, then we can script it and automate it and bring it in.
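The scripted migration she describes boils down to a repeatable column mapping. Here's a minimal sketch, assuming a CSV export; the source columns and target field names are invented and would be adjusted per tool and per customer:

```python
import csv
import io

# Hypothetical mapping from a Trello-style CSV export to
# experiment-collaboration fields; real exports and target fields differ,
# so this dict is the part you'd adjust for each customer.
FIELD_MAP = {"Card Name": "hypothesis", "List": "status", "Labels": "tags"}

def import_rows(csv_text, field_map):
    """Translate each exported row into the target tool's field names."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{target: row.get(source, "")
             for source, target in field_map.items()}
            for row in reader]

export = "Card Name,List,Labels\nRDJ hero test,Done,homepage\n"
records = import_rows(export, FIELD_MAP)
```

Once the mapping is agreed on, the same script runs over the whole archive, which is what makes the "script it and automate it" approach repeatable.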

And so we can make it easier to get stood up, but I'm not gonna promise any sort of, like, button push from Trello over... So, sorry, this girl right behind you has raised her hand three times. I'm so sorry. We can see you. I didn't know if, like, you still had a question, but I wanted to make sure. I don't wanna call you out, but I'm calling you out.

Okay.

Thank you. I was talking about, alright... you know, the fact that there are teams that don't use Optimizely, but they might want to know what's being tested on the website or not. It would be great to centralize everything together.

Maybe after case verification, but I was actually sure you, maybe Yeah.

Have you been working with other customers where you can't expect to sell them on the idea? Like, they're just so stuck in those places. Did you have to do a lot of lobbying and convincing? Yeah.

No. Not necessarily.

And I guess this is kind of... like, I feel like I'm with the dream team, because I'm like, no, they just did it because I said it. But I think that we had so many pain points before.

And I'll also say it didn't really change much, slash anything, for, like, our design team.

How our design team works is: I have a counterpart project manager over on their side who builds everything, and I talk to her. So she project-manages them on the design side and then sends me the stuff. So we have the Figma area. If I'm working with, like, the person who's requesting the test on the creative features, I put it into my side and then port it over to her. And I will say she is still in Asana for her stuff, just because they're not only doing CRO tests.

They're doing our full product and, like, a whole bunch of other stuff. So getting their backlog into here was, you know, a different story. But they really have been super helpful, because they know that if it's smoother for me, it's smoother for them. So that's kind of the mindset. I usually go in and be like, you know all these pain points that we had?

Here's the solution. Like, here's where I'm saying, this is messed up; here's how this will make it better.

And it happens kind of on the fly, but I also have regular check-ins with them to make sure everything's working. Like, on Monday I have a check-in with that team to be like, hey, is this still working? Do we need to add anything? So... And you're getting a real live demo, and this is why not everything goes according to plan.

But when you take a Figma file, you just drop it in, and you don't actually need your designers to leave Figma. So in this case, you know, I've got David, times two, working in Figma. And I'm actually just gonna grab this link and drop it into experiment collaboration. I'm not gonna ask David to not work in Figma.

Okay. Then I misunderstood, because I thought, like, they were gonna have to work in experiment collaboration.

No. The reviewers in particular are probably not going to have to go into Figma anymore. Right? They're not the ones working in there. But it's, like, automatically syncing, so that any change that happens in Figma or InDesign or Google Docs or whatever else you're using gets reflected back in experiment collaboration.

So it's just seen there. We're just, essentially, iframing it Yeah. And it's lovely. We love it.

Great. Yeah. It's fantastic.

Yeah.

Especially, like, one of the things I've seen in experimentation is that there needs to be a lot of simplicity.

Yeah.

This is pervasive, very structured, and it's a great example of a form and everything. But is it also adaptable to a person that's busy every day? To fill out, you know, say, thirty, eighty different fields or something.

And then in a second stage, there's some more. Yeah.

I mean, I think at a basic level, you can set what's required initially. Right? And everything else can be optional, and then you fill it out at a later step in the process.

So that's what you can do here. Even the workflows: you can use a templatized one, but you can also mark a workflow as what we call flexible, in that certain steps can be skipped, and you can add ad hoc steps if you realize there's something missing earlier on. And so it does accommodate sort of the unforeseen things that happen in the workflow process.
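The required-now, required-later idea can be sketched as a staged validation rule. The stage names and field names below are invented for illustration, not taken from the product:

```python
# Only a few fields are required at intake; the rest become required at a
# later workflow stage, so a busy requester never faces the full form.
REQUIRED_BY_STAGE = {
    "intake": ["hypothesis"],
    "build": ["hypothesis", "audience", "primary_metric"],
}

def missing_fields(brief, stage):
    """Return the fields this stage requires that are still empty."""
    return [f for f in REQUIRED_BY_STAGE[stage] if not brief.get(f)]

brief = {"hypothesis": "RDJ hero image lifts sign-ups"}
missing_fields(brief, "intake")   # intake is satisfied with just the idea
missing_fields(brief, "build")    # audience and metric still needed later
```

The same structure extends naturally to skippable or ad hoc steps: each stage just carries its own (possibly empty) list of requirements.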

Yeah. Yeah. And so this is where, like, I just created this whole space. Right? And there's nothing in it.

There's no brief. There's no variations. There's not even a workflow. So you can actually create that whole space.

You can start to put a due date on it, for example. Maybe I know that I'm gonna do this sometime through the month of October, but I don't know exactly when or where or how. And then, as I start to get more, I can come in and pick the workflow. Right?

I'm not creating an image or banner ad; I'm running my experimentation workflow. So then I can add that workflow. Like, it's something you can add a little bit later.

In this case, I actually skipped the entire ideation process. I didn't put in a request. I went straight to a hypothesis, which I think is a very real use case for so many of you. Right?

We don't want to take requests.

So just don't, and go straight to the brief template. You can go there too. Right? I keep picking the wrong one. Here we go. Now you've got, like, all of these questions to be filled out, not by a requester, but probably by you, directly in the hypothesis object, and we've skipped the request entirely. So you do have a lot of flexibility.

Yeah.

Oh, yeah.

Yep.

And then also, if something's not filled in, it goes away when you, like, create it. So you don't have to sit there and look at empty space and scroll all the way down just because the very bottom one had information in it. So yeah. Yeah.

Yeah. I'm gonna skip everything that's not required, and you'll see it here. So... Alright. I think we're out of time.

Thank you so much. We're way over time. Thank you for sticking around. Alright.

Yep.
