Everything you need to know about the experimentation roadmap

Dive into the product direction for Web Experimentation and Feature Experimentation. Get our most recent releases and see how we plan to continue innovating throughout the roadmap.

Download presentation

Transcript

Welcome everybody.

Good to see you. I hope you all had a good lunch, had some coffee, and you're ready to hear a little bit about our product roadmap.

Before we get started, I'm Tilo. I'm the VP of Product for experimentation, and I also have Brit with me, who's a director of product for web experimentation.

Before I get into it, I wanna, tell you a little bit about myself.

I'm actually German, similar to Alex. I don't have that nice of an accent.

And what's peculiar about us Germans is that it's really tough for us to show our enthusiasm and excitement.

So I used AI to try to show you how I feel inside right now, because I'm super excited about what we get to show you today. So anytime I might not seem as excited, just remember this. I also got AI to describe the picture for me.

And I think you got it exactly right.

Cool. Now, before we talk about the product roadmap, I wanna talk a little bit about why we do what we do and why we build what we build at Optimizely.

And I wanna provide a little bit of a different lens on experimentation, one that is based on technological development.

Now, look at technological development: how it progresses over time, and how we think about how technology develops over time.

We humans tend to think very linearly.

We look at the past, and then we extrapolate into the future. And we do the same in our businesses: we look at the past, extrapolate to the future, and base a lot of our decisions on that. And so we think we end up there.

But in reality, technology actually develops exponentially.

And that's really difficult for us to grasp.

So, where we actually end up is here, and we have a certain surprise factor as we go into the future.

And I think nothing shows this more than this year, where AI sort of took over everybody's lives in tech, and it came a little bit as a surprise.

But in reality, there is actually really good science behind why this is the case and why technology develops exponentially.

It's called the law of accelerating returns, which describes how technology tends to progress exponentially, and how different technologies influence and amplify each other to drive this exponential growth.

And so I'll give you three examples that are pretty obvious for us in technology. The first one is the explosion of computational power.

Right? This is a graph of the computational capacity of the fastest supercomputers from 1993 to 2022, and you can see the insane development in the computational power available to us. Today, your smartphone is, you know, a billion times faster than the computer on Apollo 11. It's pretty insane.

And at the same time, we have an explosion of data.

Right. The amount of data created, consumed, and stored is growing exponentially as well.

So it's estimated that by 2025, we'll create 181 zettabytes of data. I have no idea what a zettabyte is.

But it sounds really, really big, so it's really impressive.

Right?

And then what do these things lead to?

If we look at artificial intelligence models, machine learning models, and the amount of training compute that goes into these models, that has grown by a factor of 10 billion since 2010.

And compute resources used to train models double every six to ten months.

So if you think about what that means in terms of the capabilities being unlocked, it's really incredible. And so we're getting this exponential development that we all have to deal with in business. Right? Whether you're in software or in commerce, we're on this exponential trajectory. And how do you deal with that?

And of course, for us, one of the answers is experimentation, because it offers a structured way to explore uncertainty, helps businesses find their way even when the path ahead isn't clear, and amplifies the impact that you're having, because nobody can predict the future. And humans are really bad at thinking exponentially.

So experimentation helps us explore this step by step.

So if you look online, you look up A/B testing and all of these things, a lot of times what's being put in front of you is this idea that experimentation leads to you making more money.

But that's actually just the output at the very, very end.

What experimentation is actually about is learning, which then leads to the amplified impact.

Right? We're exploring, we're learning as we experiment, and that leads to a larger impact. If you talk to Pete, who just joined us from the Wall Street Journal, he makes a point about them experimenting and learning a lot, and then at the very end benefiting from that with a huge uplift in specific experiments where they put all of those learnings together, which had, you know, multi-million-dollar impacts.

But it's not the experiment itself that drives the revenue.

It's the change you're making, and it's all the learnings combined about your users that you put into that change to then drive that impact.

So how can we do this at scale? How can we learn at scale? Because it's not enough to just have one person in your organization sitting there and experimenting, running A/B tests, and they get all the learnings. And then if that person leaves, what do you do? So this is why, for us, a third aspect is really important, which is the democratization of experimentation.

And this isn't necessarily about having everybody run experiments.

But it's about everybody being involved in experimentation in a way that lets your entire organization learn, so you can explore that exponential future faster than your competition can.

And this all comes together in the experimentation flywheel. If you think about it, the more of your organization is involved in experimentation and learning, the faster you're going to learn. So you're increasing your learning velocity.

And the bigger the impact your experimentation program is going to have. And when you see something that is very impactful, you're going to invest more resources into it, and you're going to have more people invested in it. So you democratize further, which drives more velocity.

And this is what we're trying to do with our experimentation products and what we're trying to enable.

Now, another fallacy of experimentation is that people usually think of it as: I have an idea, I test it, and then I make money.

But there's a lot more to experimentation. It's a full lifecycle that you keep repeating. It's a learning cycle where you analyze, you plan, you create, you run an experiment, you look at the results, you report on the results, and you take action.

And our goal is to have you run through that cycle as quickly as possible, to learn as quickly as possible.

And so we develop capabilities that help you at every step of the way and make that cycle run faster and faster.

And to talk about some of the new capabilities that we're releasing, I'm gonna invite Brit up to the stage to talk about all the exciting new features.

Awesome. Thank you, Tilo.

Hello, and welcome. For those of you who don't know me, I'm Brit Hall, as Tilo mentioned, director of web experimentation in particular, but I'm really proud to present both the web and feature roadmaps with you today. Everything that you're gonna see on the slides will have some notes for who's getting what, where. So we're talking about this experimentation lifecycle, right? And we do a lot of the create and run today in the experimentation tools that you know and love, but there's so much more that goes into a full program. Especially when we talk about the practice of experimentation, it's much bigger than just hitting that start button. And so we're going to introduce features across this entire lifecycle to support that entire process in your business. So we're gonna start with ideation.

How many of you have had somebody send you a Slack message or Teams message, or walk by your desk and say, "I've got a great idea"? It has happened to every single one of you. I guarantee it. What we wanna do is govern that process.

Right? We're not going to halt creativity here. We want to harness it. And so with that, we have a suite of tools dedicated to intelligent idea intake.

Specifically, this allows you to build templates which are then put in front of your entire audience. We have a concept of guest users, so you can open it up to your entire org without any additional cost. I'm not selling you anything here today.

You can communicate with them, collaborate with them, and decide which are the best ideas and what you're gonna run moving forward. This is part of our experiment collaboration feature set, which we're gonna be talking about a little bit more in a session right after this as well, but you're gonna see a handful of features that are part of it today.

After you get all of those great ideas in and you decide what tests you're going to run, you need a plan for how to get them out the door. So that comes with a handful of things too. Templates. I'm gonna mention them a handful of times today: a repeatable, scalable process that exists outside of a single person.

So with test brief templates, we can make sure that every single test that you're running meets those standards for your plan. Right? Intentional testing is really the name of the game here. We wanna make sure that you're not testing for testing's sake, and that you have everybody aligned on the why and the how of what we're going to do. So again, a templatized, repeatable, scalable process within experiment collaboration.

Workflows are a big part of this. You heard Rupali speak at the keynote the other day about how we're bringing workflows across our entire tech stack, and experimentation is no exception.

You also heard that those workflows have been around for a decade, and that they've had 5,000 percent adoption growth over the last five years. These are the same workflows. This is not a brand-new thing where you're gonna get some really lightweight checkboxes; this is a really, really robust workflow solution.

It allows you to set SLAs and automatically calculate due dates and deadlines. You can leave comments within these workflows, which is super, super powerful, and there are even notifications already built right in, so that's all available to you as well. And then finally, note this last bullet point: you can connect this workflow with third-party tools like Jira. We know so many of your teams, your development teams in particular, work in Jira, and we're not going to ask them to remove themselves from that tool. No.

That tool will be a step owner in this workflow. And when you get to that step, it fires off a ticket in Jira. That team works exactly the way they always have. They complete their work, it completes the step, and it moves the process forward. And that integration already exists, because it's been around for a while too.

Design collaboration is also part of that plan. Right? Like, what are you going to test on? And what we've heard from so many of you is, "I spend all of my time doing version control. Somebody emailed me a…" I've seen your Word docs where you do all of this planning today.

I call that work about work, right? And that's not what you are all here for. It's not what you're being paid to do. So bringing all of that collaboration into a single point solution where you can invite everyone in is really key.

So what you're seeing here is live web proofing. I've pulled up a web page and made a little note directly on the page. But you can do this with things like Figma and InVision. These are all tools that you already use, and you can use them directly within the collaboration layer.

Cool. Alright. You've got everything planned. Now it's time to create and execute the tests that you've decided to run. First up in this segment, AI-powered copy variations.

So take copy from your site, run it through this generator, and it'll give you some ideas. Thumbs up, thumbs down, provide feedback, let it know what you think, let it learn from you. And then when you decide you have a variation that you like, hit use, create that variation directly in the product, and move it forward. Right?

This is kind of a cure for writer's block, if you will, or creativity block. We don't expect this to take over any sort of ideation that you have, but it should supplement it. Right? So you've heard other speakers talk about the power of testing more variations.

This should help improve the velocity of testing those variations.

Cool. Google Analytics partnership. Google has a booth here. You've seen them. We're partnering with them.

We're partnering with them for AI, of course. But everyone uses Google Analytics, and GA4 in particular. So we have a handful of features all around GA4. The first is sending event data.

The second is bringing that event data back. And then third, using those audiences within your web experimentation tool. The feedback that we've gotten is, "I spend all of my time creating and recreating audiences."

No more of that. We're gonna save you a bunch of time.
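For illustration only: the integration handles the data exchange for you, but this rough sketch shows what pushing an experiment exposure event to GA4 via gtag could look like by hand. The event name and parameters below are hypothetical, not the names the built-in integration uses.

```typescript
// Hypothetical sketch: report an experiment exposure to GA4 with gtag.
// Event and parameter names are illustrative placeholders.
declare function gtag(...args: unknown[]): void;

function reportExposureToGA4(experimentKey: string, variationKey: string): void {
  gtag('event', 'experiment_exposure', {
    experiment_key: experimentKey,
    variation_key: variationKey,
  });
}

// Example: called from variation code once the visitor is bucketed.
reportExposureToGA4('homepage_hero_test', 'variation_b');
```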

Alright.

This one is probably one of my favorite slides: dynamic selector support. Single-page applications are a huge nightmare. You have to go beg your development team for help. No more. We want to allow you to test everywhere on your own. So if you are using those single-page applications, React, Next.js, dynamic selector support should help you do that on your own without asking those devs.
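To make the underlying problem concrete: elements in a single-page app often don't exist yet when your variation code runs. Dynamic selector support handles this for you; purely as an illustration, here is a hand-rolled sketch of waiting for a dynamically rendered element before changing it, assuming a browser environment and a hypothetical data attribute.

```typescript
// Illustrative only: apply a change to an element that a SPA renders
// asynchronously. Dynamic selector support does this for you.
function whenElementReady(selector: string, apply: (el: Element) => void): void {
  const existing = document.querySelector(selector);
  if (existing) {
    apply(existing);
    return;
  }
  // Watch the DOM until the selector shows up, then apply once.
  const observer = new MutationObserver(() => {
    const el = document.querySelector(selector);
    if (el) {
      observer.disconnect();
      apply(el);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}

// Example: recolor a CTA button once the framework has rendered it.
whenElementReady('[data-testid="cta-button"]', (el) => {
  (el as HTMLElement).style.backgroundColor = '#0037ff';
});
```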

Alright. Cruising through, I know. I heard last year that you thought that the roadmap was a little bit lightweight. This year's kind of the opposite. Stick with me.

Extensions in Edge. So extensions are already available to you, but for a long time they haven't been available in Performance Edge in particular. That's all we're doing: we're bringing them to Edge, so that when you are running tests in that platform, you can reuse those extensions that you've already built. So another save for developer time; in particular, we wanna make sure that you're not asking them to build the same type of experiment over and over.

Advanced audience targeting.

Test smarter, not harder. So if you are using a third-party CDP, you've got Segment, mParticle, Tealium, Zeotap, we want you to be able to make use of that data. So with that, you can use Optimizely Connect, bring that data in, and use it within your tests, again without recreating any of the work that you've already done.

Alright. Finally, you get to the point of running an experiment.

How many of you have left your kid's soccer game on a Saturday morning to go hit the start button? We need to get rid of that particular use case. So what we're doing is allowing you to simply schedule when you want rules or flags to start.

Saturday, 9 AM. Just set it and forget it. You don't have to log back into the platform to do it; you can trust that it'll happen for you.

At the same time, when you do go into the platform and hit the start button, we've heard that you're taking that date, opening up some spreadsheet, and dropping it in so you can share it with your executive team to tell them when you started running the experiment. So with experiment collaboration, we are hooking those together to build you a live calendar. Every time you hit start and every time you hit pause, we'll just capture that. Right?

On top of that, we'll allow you to decide exactly what dates you want to put on that executive calendar. So if you hit start and there's something wrong and you hit pause, then you go back and start and pause again, you don't have to show every single one of those events on this calendar. You get control over that, so you can really build something that's consumable by all of the audiences in your org. Alright.

Analysis. Obviously, this is an important part of what we do in experimentation.

Stat sig notifications have been a long time coming. We want you to be able to set it and forget it. Right? We know you're peeking. We'll let you know when it's time to look. So when your test reaches statistical significance, we'll let you know.

With that, we know that there are still other reasons why you peek. You might not just need to know when something has reached stat sig. Anomaly detection is a really important piece for us, and SRM detection in particular is the start of what we're considering anomaly detection. So this is available to you: if you get an SRM error, we can let you know, and then you can go and make adjustments to your test.

Michael from Charles Schwab talked this morning about how the fastest way to, I think he said, torpedo your experimentation program is with untrustworthy data and losing the trust of your audiences.

This is a really important piece of that. You need to be able to trust the tests that you're running, and SRM detection makes it so that you know for a fact you can trust them.
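For context, a sample ratio mismatch check is typically a chi-squared goodness-of-fit test comparing observed visitor counts per variation against the configured traffic split. A minimal sketch follows; it is illustrative, not Optimizely's implementation, and the thresholds and counts are just examples.

```typescript
// Minimal SRM sketch: chi-squared goodness-of-fit of observed visitor counts
// against the intended split. Illustrative only.
function srmChiSquare(observed: number[], expectedShare: number[]): number {
  const total = observed.reduce((a, b) => a + b, 0);
  return observed.reduce((chi2, obs, i) => {
    const exp = total * expectedShare[i];
    return chi2 + (obs - exp) ** 2 / exp;
  }, 0);
}

// For a 50/50 test there is one degree of freedom: chi2 > 3.84 ~ p < 0.05,
// chi2 > 10.83 ~ p < 0.001 (a common, conservative SRM alert threshold).
const chi2 = srmChiSquare([10250, 9750], [0.5, 0.5]);
if (chi2 > 10.83) {
  console.warn(`Possible SRM: chi-squared = ${chi2.toFixed(2)}`);
}
```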

Alright. And then what is data if nobody hears it or sees it? We've got a handful of things that come into this last section here. Winner rollouts is the first. So, again, coming back to the idea that we don't always have all of the development resources that we need all the time, and we can't always bug them for cycles on their roadmap.

So many of you are setting what we call hotfixes, and you're just letting an experiment run at a hundred percent.

Yep. I get a lot of nods in here.

And then you're consuming your MAUs, and you don't have the right reporting. We want to allow you to set that as a concluded experiment that has been rolled out and let it just run on. Now, this isn't any sort of substitute for actually doing the development work to make the changes, but it will buy you some time and allow you to get on that dev cycle.

And then finally in here, program reporting. Coming back to this idea that not everybody needs every bit of information about a test, not everybody needs every bit of information about your program. So we're allowing you to build dashboards with any of the data that we have available: everything from program impact to ROI to timelines, all of that should be available to you. This is something that we are already working on, but we're also still very much taking feedback on it. If you have reports in particular that you're interested in seeing in here, definitely give us a call.

Alright. So with that, we have features to support the entire lifecycle, start to finish, inside of the experimentation products that you know and love and within experiment collaboration, and we hope you enjoy them. Tilo, back to you.

Thank you.

Alright. So as you've seen, we've been very, very busy.

We've been adding capabilities around that entire lifecycle, and our goal is to expand that even further. Right? So what you'll see in the coming quarters is that we're really gonna dig into that whole aspect of helping you learn more. There's a lot we have planned, especially around more metrics capabilities, and as you've seen, program reporting is a big part of it.

But there's obviously more that I wanna talk about today, because there's one topic that I haven't really touched on at all. Right? You've seen Brit present one of the features.

But as you heard in our product keynote, Opal, our AI that now goes across all of our products, is a really, really big investment for us. And so I wanna touch a little bit on what Opal means for experimentation.

Now, if you know Optimizely, we've been around for over thirteen years in the space of experimentation. We've pioneered this as a category in software, and we've collected a huge amount of data and experience along the way.

We have thousands of knowledge base articles.

We have a huge amount of developer docs. We know what works in experimentation and what doesn't. And what we've started to do is condense all this information to train a custom artificial intelligence model to help you along the way. That's what I wanna show you today. And what's interesting about this is that most people at Optimizely haven't even seen this. So you're getting a real sneak peek at what we call our new copilot experiences.

But AI isn't really new to Optimizely. Right? As you know, experimentation at its core is a data science product, if you will, and we already have a lot of AI capabilities, whether it's adaptive audiences that allow you to create audiences based on natural language, multi-armed bandits that automatically roll out traffic to the best-performing variation, recommendations, Stats Accelerator, which is a machine learning model that reduces the sample sizes needed, anomaly detection, and so on. But today, I want to talk about the copilot experience. What I've found over the past eight years working with a lot of our customers is that along that experimentation lifecycle, there are a lot of questions. Right? And our customers really wish they had a technical resource, a strategic resource by their side, to help with every aspect of experimentation.

And so this copilot experience is really there to help you along the entire way. I'm gonna show you three examples of how we bring that to life. The first one is a question that we get quite a lot, which is: how does Optimizely calculate statistical significance? Now, if you put this into ChatGPT, you'll actually get a wrong answer.

Why is that? ChatGPT doesn't have access to our vast knowledge base and knowledge about our own products, obviously.

So what you'll find here is that Opal gives you a much better answer. It gives you insights into our unique sequential testing model. It can explain how we do false discovery rate control, and it even gives you sources and links where you can find more information.

And it's a multi-turn conversation, so you can continue talking with it as if it were an Optimizely employee helping you along the way.
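For background on the false discovery rate control mentioned above: a standard textbook procedure is Benjamini-Hochberg, which controls the expected share of false positives when many metrics are tested at once. The sketch below is that textbook procedure only, assumed here for illustration; Optimizely's stats engine combines this style of FDR control with sequential testing and is more involved than this.

```typescript
// Textbook Benjamini-Hochberg sketch for false discovery rate control.
// Illustrative only; not the product's implementation.
function benjaminiHochberg(pValues: number[], q = 0.05): boolean[] {
  const m = pValues.length;
  const order = pValues.map((p, i) => ({ p, i })).sort((a, b) => a.p - b.p);

  // Largest k (1-based) with p_(k) <= (k / m) * q.
  let cutoff = -1;
  order.forEach(({ p }, idx) => {
    if (p <= ((idx + 1) / m) * q) cutoff = idx;
  });

  const rejected = new Array<boolean>(m).fill(false);
  for (let idx = 0; idx <= cutoff; idx++) {
    rejected[order[idx].i] = true;
  }
  return rejected;
}

// Example: five metrics tested at once; only those passing the BH threshold
// are called significant. Prints [true, true, true, false, false].
console.log(benjaminiHochberg([0.001, 0.012, 0.03, 0.2, 0.7]));
```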

Another example, let's say you're a developer and you're tasked with implementing experiments using our React SDK.

You can go in and ask, "Hey, I want to use the Optimizely React SDK. Can you give me some example code for how to implement it?"

You'll see that Opal can create code for you that you can copy and paste, and it's easy to get started with our SDKs.
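As a point of reference, a typical hand-written starting point with the React SDK looks roughly like the sketch below; the SDK key, user id, flag key, and variation key are placeholders, and your setup may differ, so treat it as a rough outline rather than the generated answer itself.

```tsx
// Rough starting point with the Optimizely React SDK.
// sdkKey, user id, flag and variation keys are placeholders.
import React from 'react';
import { createInstance, OptimizelyProvider, useDecision } from '@optimizely/react-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

function Banner() {
  // Evaluates the flag for the current user from the provider context.
  const [decision] = useDecision('hero_banner_test');
  return decision.enabled && decision.variationKey === 'variation_b'
    ? <h1>New hero banner</h1>
    : <h1>Current hero banner</h1>;
}

export function App() {
  return (
    <OptimizelyProvider optimizely={optimizely} user={{ id: 'user-123' }}>
      <Banner />
    </OptimizelyProvider>
  );
}
```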

Now, again, you can customize this, similar to how you can in ChatGPT. You can even paste in your own code and ask questions, and it'll help you implement anything that you want. And again, you'll find more sources that you'll get linked to.

So it's really, really helpful for continuing that journey.

And then the third example I wanna show is: let's say you're not a developer, you're not a data scientist, so you're not worried about how statistical significance works in depth. But maybe you are getting started with personalization.

You're part of a B2B business and you're looking for some good use cases for personalization.

Now, again, Opal can help you out.

Let's take a look at what we get: symmetric messaging, prioritized product listings, behavioral retargeting.

These are all interesting ideas that you could start exploring.

And Opal here gives you the idea, gives you a use case, and even gives you a case study that you can explore, with results and inspiration for the personalization campaign.

Again, sources are spread throughout the answer, and it even provides a blog post where you can learn more.

And this is just the start. We have a lot of ideas for how this can help you every step of the way: to create variations, to create code, to really shorten the cycles that you're going through with experimentation, and to really amplify your program.

And with that, thank you very much for being here today. Thanks for listening.

I hope you're as excited as I am. Remember the picture; that is how I'm feeling inside.

And now we'll open it up for questions. We still have a little bit of time. There's gonna be a microphone going around, so just raise your hand and we'll bring it to you.

Alright. We have two people here in the front.

There's the microphone. Alright. There we go. Brit, do you wanna come back up? Sure.

Quick question on Opal. Great functionality, but for anything that's inputted, let's say by financial institutions or others that don't wanna share information externally, maybe even with Optimizely, where is that data housed? Like, does whatever is being entered in there leave? So it doesn't leave the Optimizely ecosystem.

So this is not going to OpenAI or anything; it doesn't leave the Optimizely ecosystem. It's as if you put something in a text field in Optimizely, or put code into Optimizely experimentation. It's on that same level.

Well, in reference to the GA integration, especially with the GA4 migration that everyone's probably in the process of completing right now.

Currently, adaptive audiences in Optimizely are a very important part of building out personalization strategies and other such factors in how we experiment. Will the GA4 audience segmentation feature that you were mentioning be applicable with adaptive audiences in any way? Well, what you can do in our audience builder is combine these two things.

You can combine these two things. Right? Adaptive audiences will allow you to create an audience based on natural language, and then you could say, "I wanna target users that are part of that adaptive audience and that are part of a specific GA4 audience." So you'll be able to combine these things if you want to, using our audience builder. Cool.

Any more questions? Yep. We're there.

Mine's pretty fast. Is there somewhere online where we can see the intended rollout dates for some of these? I saw you had them on the corner of the slides. But is there, like, a webpage where we can reference these?

Yes. So there are a couple of pages you can go to. One is we actually now have a beta page, optimizely.com/beta, where you can sign up for anything that's in beta or that's about to come out as a beta feature.

It's sort of a waitlist, and then you'll get an invite if we select you or when we add more people to the beta. We also publish quarterly roadmap updates on our website, where you can see what's upcoming and what's currently on the roadmap, to get a bit of a future outlook. So a lot of these things that you heard about today will be on the website. Alright.

Great question; we can get you the exact page too. The other thing I'll mention is that, different from Optimizely events in the past, we have recorded every single one of these sessions, and we're going to make the slides and the recordings available at the end of the month. There's editing magic at work and everything else, but then you will get copies of this.

We should chat after and I can get you the page. We do have product roadmap pages for each of our product lines on the website.

There we go through the roadmap, and we have slides and things like that as well. We usually do these on a quarterly basis too. And if you don't wanna wait until the end of the month, we can make sure Tilo and I get all of these slides over to you, like, this week even. So bug us.

Can you speak a little bit more to the winner rollout? Like, what actually happens when you click start on this one? Absolutely. So what actually happens is that we set the audience to a hundred percent and keep running it the same way that you're probably doing today, which is why it's not a replacement for actually making the changes to your site.

We're not actually implementing any of the changes. But that way you can separate something that's actually been rolled out from a test that's still running. That's the problem right now. Right?

You have to, like, rename the test and call it "rolled out test," and then you can't tell how long it's been running, because it has been running, but it's also been set as a hotfix at a hundred percent. So that's the problem we're solving, which is why I said it's sort of like buying some time for you, as opposed to actually implementing the changes that you wanna make based on winning tests. Yeah.

Oh, that's a great question. I can touch on that. Okay. So there are two things that are important. One is we'll stop recording results, because what'll happen otherwise is that you impact your results, right, when you roll it out to a hundred percent.

So we'll freeze the results in place. At any point in time, you'll still be able to access the results of the A/B test while the winner, or the variation that you want, is already rolled out. And then, I can't fully confirm it yet, but it's very likely that it'll also stop consuming impressions.

So you would not be charged for running a winner rollout variation.

Yeah. What if you wanted to run a tail on something you rolled out? So you wanted, like, a post-rollout observation of something that you're pushing to a hundred percent, just in case of any uncertainty.

So you wanna continue capturing results? Yeah. I totally understand why you'd wanna freeze it for the sake of maintaining that reporting, but is there, like, some place where you could still look and see how that's performing after rollout?

So, no. Essentially, it's as if you paused the experiment at the time when you click roll out. It's concluded, and then we stop recording results. Will all the events that are being defined, if we're passing those to GA or some other external analytics tool, still be logging and sending along with the experiment ID, even if the results are frozen?

So that's a good question. The plan right now is no, that we actually will stop eventually. Understood.

We should check on that, if continuing to monitor is a use case that you have. We just really need to make sure that we hone in on the right flags so that you can still separate that data. Right? Like, what was a test versus what was observation after the test?

We should talk about it; as of right now, that's what we're planning on. Definitely. My assumption is that if you're using the Optimizely rollout feature there, which is not the rollout feature we currently have, but that rolled-out variation, product managers and stakeholders would want to be able to observe that data after rollout as much as during the experiment.

Yeah. I mean, at that point in time, you're really rolling out to a hundred percent, so all of the users in that specific audience will get it. And then you can use your normal analytics solution to look at how that audience performs and how your website performs.

That's the idea. It's very analogous to feature experimentation in our products; it's exactly the same gateway. You experiment, you conclude the experiment, and then you go to a targeted delivery with a single click to roll out whatever experience you want. So we wanna make that the same in our web experimentation solution, where you go from experiment to rollout.

So I work for a specialty retail brand, and brand voice is very important. And I think it was presented in one of the keynote sessions that with the use of Opal, you can insert a brand voice filter of sorts. So how might you do that? And what would that entail?

Yeah. That's a good question. So in our content management and orchestration platform that was presented, we're partnering with Writer.ai to enable you to do brand voice and things like that as you create content. You can take that content and then test it through experimentation as well. In terms of the layer where you do all the planning and the content generation, that's going to be what was presented in the keynote, in our CMP solution, and generated in there. But then you're absolutely able to test it through experimentation.

Without getting too far into the weeds, the experiment collaboration pieces that you've seen, with regard to design collaboration, workflows, and everything else, come from the CMP. That's actually the same back end that is powering it all, which is why it's a decade old and super robust, and not actually a beta product at all. But that is one of the areas where we're looking to build on the work that you're doing. So that text editor that gets the Writer AI integration that knows your brand voice and everything else is actually available in experiment collaboration.

Today.

So we should chat a little bit more, but ultimately you will be able to take all of those features that your creative, brand, and marketing teams are using and then experiment on top of them, because they are the same underlying functionality. The text editor that's available in experiment collaboration does have AI features already in it. It's generative AI. It doesn't yet know you and your language, but with the addition of Writer, it will. So that will be part of experiment collaboration as soon as it's available in CMP.

Real simple question.

What happens with the CMS? What happens to the A/B testing there? How is experimentation different? Is that version going away in the CMS? Because these are two separate things. Yeah. So you're talking about what was demoed in the keynote, like the experimentation directly from the CMS?

Yeah. So, like, the version that we use is 11.9, and there is A/B testing there. It seems like a very, very light version compared to this new stuff. Is that going away?

I can't answer that fully because I'm not part of the CMS team, so it would be a better question for them. But I assume that once we've built out experimentation capabilities in the CMS, we'll probably make sure that it's aligned everywhere. So that will probably go away, but I don't fully know, so it's a really good question for the CMS team. Yeah.

This seems just a lot more robust. Yeah. Because under the hood it'll be based in part on our feature experimentation solution, so it'll have all the capabilities that our experimentation has.

I think there was… Did you have a question? Yeah.

Wait for the microphone, please. Thank you.

Alright.

The dashboard, the experiment dashboard, how customizable is it? Like, for example, I need to reach stat sig at 97 percent.

Can I put those parameters in there, or is it some standard canned thing that you guys have currently? So you can set the significance threshold in the Optimizely settings. If you, for your business, decide that your threshold is 80 percent or 95 percent or 97 percent, that's a setting you can edit. Okay.

The dashboard is gonna be after the fact, and think of it as widget-based. Right? You've got different reports that you wanna pull in, set the timelines, set the parameters, set the segments, whatever it is that you wanna do, but it won't actually impact any of the actual testing.

Got it. Right? It's reporting on the back end. Well, as a follow-up to that.

What are the most common use cases for some of that custom dashboard reporting that you guys are envisioning? Yeah. Go ahead.

I can touch on it. So there are a couple, and I think it will evolve over time.

The most common one is that people want to know how many experiments are running, how often did I win, and what's my learning rate, which is the rate of experiments that reach statistical significance.

That's the most basic report that we're going to enable at the beginning. But from there, you very quickly get into benchmarking: hey, for your industry, how are you doing versus other companies?

Overall, is your learning rate good? Is your win rate good? And then, what are some recommendations for you? So if your win rate is really high, are you actually being bold enough, and these kinds of things?

And then from there, we also wanna see how we can pull in more of the metrics you're tracking and figure out how we can help you prove out the value of your program. Right? So, in aggregate over all of your experiments, how much did you actually impact things like conversion rate? There's a lot we have planned that we wanna do to really give you more of a high-level view over your entire program and prove out the value of it.
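To make those two basic program metrics concrete, here is a hypothetical sketch of how learning rate and win rate could be computed over a list of concluded experiments. The field names are made up for illustration; the dashboard assembles this kind of summary for you.

```typescript
// Hypothetical sketch of basic program metrics over concluded experiments.
// Field names are illustrative, not a real data model.
interface ExperimentOutcome {
  reachedSignificance: boolean; // any metric reached stat sig
  declaredWinner: boolean;      // a variation beat the control
}

function programSummary(experiments: ExperimentOutcome[]) {
  const total = experiments.length;
  const learned = experiments.filter((e) => e.reachedSignificance).length;
  const won = experiments.filter((e) => e.declaredWinner).length;
  return {
    totalExperiments: total,
    learningRate: total ? learned / total : 0, // share reaching stat sig
    winRate: total ? won / total : 0,          // share with a winning variation
  };
}
```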

And with that, we're really driving the intention around taking the analysis and bringing it back into play. Right? This is a full lifecycle, which means that it repeats every single time. We wanna make sure that when you're, for example, on that intake, evaluating an intake form, you can ask: is this something we've run before?

Show me everything that we ran in Q3 of 2022 with conclusive results on the home page. Right. We want to be able to power you with a results repository as well, so that you can make more intentional decisions about what to test next.

And so that's going to be part of it as well. That is already available: as long as you are collecting the right information in experiment collaboration, we can already give you that results repository and show you exactly what your team's been up to, and then you can use that to inform what you do next.

Hi, Acelyn with so far here.

We often need to run experiments where our control is actually our variant just because of the way code's implemented.

And we sometimes need to toggle that later in graphs to show actual impact. Is that something that would be available in the reporting dashboard, to be able to toggle what your real variant was so you can show gains versus decreases?

Do you mean in the overall program reporting that we're planning, or on the results page? The overall program reporting.

It's a good question. The thing about program reporting that we wanna do is, yes, we wanna give you standard reports, but also see if we can enable you to create custom reports. Right? In the end, this is probably going to be based on Google technology from Looker, and so theoretically, we could enable you to generate any custom report that you would want based on the underlying data.

So in theory, yes; we'll just have to see how far we go beyond the standard reports. We wanna help you have a good dashboard with customized reporting as well.

Hi. Another question. Will we be able to tie in offline events and offline KPIs back into the program reporting?

So, program reporting. Maybe I'm using the wrong term on that. I mean the overall ROI dashboard that you presented at the end. Yeah. So what kind of offline events?

Like, can you give me some examples? Sure. So my primary KPI is form submissions, which enter into a pipeline where I convert the customer later on.

But just because I have reached statistical significance on somebody filling out a form doesn't actually mean that they convert later on. I'd like to tie that back in. Yeah. So that's less program reporting and more actual experiment results: can I tie offline metrics into the experiment results themselves?

There's actually a lot today that we support, because with feature and web experimentation, yes, you can track clicks and page views and custom metrics, but we also have a fully featured offline API where you can batch import events from any data source to make them available in Optimizely for results. So if you have something like a lead conversion that happens later on via the phone and is triggered by Salesforce, let's say, you can tie those events back into Optimizely and then report on them in the results. And those results would still be available to be added onto that dashboard.
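As one concrete pattern for the offline case described here: when the later conversion arrives (say, from a CRM webhook), a server-side track call can join it back onto the original visitor, provided the same user id is used. The sketch below uses the Node SDK's track call with placeholder SDK key, event key, and user id; the batch event import API mentioned above is a separate route for bulk loads, and its exact payload is not reproduced here, so check the developer docs.

```typescript
// Rough sketch: forwarding an offline conversion (e.g. a lead that closed
// later in Salesforce) back to Optimizely via a server-side track call.
// sdkKey, event key, and user id are placeholders.
import { createInstance } from '@optimizely/optimizely-sdk';

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

async function recordOfflineConversion(userId: string, revenueInCents: number) {
  await optimizely?.onReady();
  // The user id must match the id the visitor was originally bucketed with,
  // so the event joins back onto the experiment results.
  optimizely?.track('lead_converted', userId, undefined, {
    revenue: revenueInCents,
  });
}

// Example: called from a CRM webhook handler when the lead closes.
recordOfflineConversion('user-123', 250000);
```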

Yeah.

Alright. We're getting the wrap-up sign from the back. Tilo and I will both be available later this afternoon. I'll be in a session about experiment collaboration right after this, so hopefully I'll see some of you there, but I'll speak for both of us and say: come find us, hunt us down. We love talking about the roadmap, and we're excited to hear your ideas and your input too.

Thank you all. Thank you so much.
