Experimenting with AI to Optimize B2B Content: Making Bold Bets Pay Off

B2B marketers are facing a once-in-a-generation perfect storm. New milestones in business requirements, technology innovations like generative AI, and stakeholder expectations are together accelerating the need to deliver more relevant audience experiences that impact revenue.

Most marketing leaders recognize improving the customer experience as a top objective. The path to scaling these experiences is paved with experimentation. But faced with pressure to deliver rapid results, marketing leaders may de-emphasize experimentation, when it's exactly what they need to achieve next-level results.

Build a culture of experimentation and move toward testing in every marketing motion across the customer lifecycle. Hear about successes B2B organizations have had with experimentation. Learn how to use the customer experience lifecycle as a framework for experimentation, along with a scientific, hypothesis-driven approach.


Transcript

Okay.

I've never spoken right after coming out of a Kesha song before, but why not?

Hello, everyone. I'm Phyllis Davidson, from Forrester. I'm gonna talk to you today about experimentation and content.

And the biggest reason AI is the focus here is because, let's face it, AI is the focus of everything right now. It's also a great area to focus on when we talk about experimentation.

Before I even go to my first slide, I just wanna pose something to you. We're all involved in content, delivering content on behalf of marketing for the most part, in some capacity. But most of us are also buyers, some kind of business buyer, or of course personal buyers, who are being targeted by content all the time. Every day, you're just bombarded.

So how many people really think the content they get is amazing and just really informative and great?

Yeah. Okay. So that's what I thought.

That's what I expected.

And here's the reality.

Vendor content sucks for the most part.

We tend not to like it. And we've got a lot of data on this at Forrester. We ask some very pointed questions in our surveys and our discussions with customers about how they feel about the quality of the content they get from the vendors they work with. Almost sixty percent say the information is extraneous.

"I get better information elsewhere," as opposed to from the vendors. For fifty percent, it's just too much. And I can tell you, on the other side of this, we have data on a huge amount of content waste. In fact, I saw one of the speakers yesterday use our data point that sixty-five percent of content goes to waste, meaning it's not being used in the way intended.

Fifty-three percent say what vendors share with them is just useless. It has no value at all. And a lot of the material is just so product focused, so focused on the vendor as opposed to the challenges the audience has, which is, of course, what we at Forrester try to help our customers focus on. And then here's the biggest data point, a great one if you remember only one thing from this presentation.

This is a good data point to remember. We ask about the ongoing content you get from a vendor you're already doing business with, because, face it, most of us are in businesses where we need to keep our customers. We need to make sure we're focused on retention.

And we use content for that too. So get a load of this. Seventy-seven percent, more than three quarters, say that if the content they get from the vendors they work with isn't any good, they're not gonna expand their contracts.

If you ever needed a data point to prove that content actually matters, that's a really good one to take back with you. And it's interesting because last year, that data point was, I think, sixty-nine percent. So it's actually gone up. Interesting stuff.

And we know that organizations in fact really struggle to get content right. It's hard, especially in business to business. I think this is a little bit easier for those of you in B2C, where you don't have terribly long or complex journeys. But in large B2B enterprise sales, where you really have to keep people interested for a long time, this is a problem.

It's really hard. We've talked for years about these four R's, and yet they continue to be a really big challenge for us. So we break the content life cycle down like this, into these four stages.

Right? You could argue there are even more, because there are the stages of ongoing customer support. In other words, these aren't just the presale stages.

This is constantly happening. It's an always-on engine, right? And what we find quite often is that organizations don't have a clear strategy, and it's not an audience-centered strategy. When it comes to figuring out what content to create, people are still using manual processes. So while one of the big conversations here is about managing the content supply chain, or the content life cycle, we still see a lot of businesses not doing that.

Limited personalization and customization. It's one of the big things we're all always talking about, right? How do we do a better job with personalization?

And when I talk about experimentation today, that's largely what it's focused on: personalization.

And then just content and data chaos. Most customers I talk with really have no idea what content is truly engaging their audiences. They may know page clicks and email opens, but those aren't actually indications of engagement. Given the amount of time I spend consulting directly with clients to help them, the performance issue is probably the biggest one everyone's concerned about.

So here's the thing, we have a lot of opportunity.

And even before generative AI, and I've heard this point made by several others here at the conference, which made me feel really good because I always thought I was right on this point: when GenAI became the big thing, I thought to myself, okay, hold on. This is the shiny object.

But there's been AI in general in the technology platforms people have been using for years.

And what we find quite often in B2B is that a lot of those more sophisticated features in the platforms people own aren't getting used, because people don't know how to use them. So for all the attention on GenAI, guess what? I'm encouraging my clients to be just as focused on AI in general. In fact, when people ask what they should be adopting from a GenAI standpoint,

I encourage them to first look at what they already have in their installed base and figure out: is there AI I'm not using? Yes, you can look at generative AI, and lots of vendors are introducing tools and ways you can get to it, like Optimizely has this week, and that's great.

But use the AI for automation, machine learning, and other things that are built into your systems today. The other thing on performance is metadata and taxonomy.

It amazes me how many companies I talk to who say, "Taxonomy? Yeah, I'm not really sure who owns that." If you don't know your taxonomy and you're not tagging your content, personalization at the end of the day becomes very difficult.
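A minimal sketch, in Python, of what a tagged content record might look like; the field names and values here are hypothetical, and a real taxonomy is whatever your organization agrees to own and maintain:

    from dataclasses import dataclass, field

    # Hypothetical metadata record for one content asset or module.
    @dataclass
    class ContentItem:
        item_id: str
        title: str
        asset_type: str                                 # e.g. "white_paper", "banner", "email"
        industries: list = field(default_factory=list)  # vertical taxonomy
        personas: list = field(default_factory=list)    # audience roles
        topics: list = field(default_factory=list)      # subject-matter tags
        lifecycle_stage: str = "awareness"

    # Without tags like these, a personalization engine has nothing to match on.
    example = ContentItem(
        item_id="wp-042",
        title="Securing Hybrid Cloud Workloads",
        asset_type="white_paper",
        industries=["financial_services", "healthcare"],
        personas=["ciso", "security_architect"],
        topics=["cloud_security", "zero_trust"],
    )
    print(example.industries)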

So again, today we're gonna focus on experimentation with AI. There can be experimentation at the content level in many other ways, though. And if you're a Forrester client, I encourage you to reach out and talk to me about it. Okay.

So, like I said, these are some of the ways that AI can be applied to the content life cycle. And I do have more than just generative AI here in this list.

So even in the planning part of the cycle, there are ways to use AI to help ensure that you have fully filled out content strategy briefs. Right? That's something that you can actually program on the back end and use certain software packages for. Certainly, the biggest area is production.

Right? And that's what most of you, if you're looking at GenAI, are probably looking at. How can I draft content with GenAI?

How can I create images? How can I go from text to image? And then there's also workflow automation. That's where I encourage people: if you're still using manual processes in the content life cycle, you've got to automate them.

Promotion also is an area where we can talk about experimentation with AI. And of course, that has a lot to do with personalizing experiences.

With a lot of the online experimentation, if you're doing that today, the goal is ultimately probably to optimize the experience and to convert people. And that probably means you're trying to personalize the experience. So we're gonna dig into that more. And then even on the performance side, there are ways to use AI and predictive analytics to help you figure out what you need to do more of and what you need to do less of.

Okay. So I want to give you a little bit of data on this whole AI thing we've been talking about. Almost two years ago, I did a presentation where we used some data to talk about the future of content. And it was before generative AI had hit the way that it did. I'm trying to remember the month. Does anyone remember the month that ChatGPT hit? I can't remember exactly when, but it's taken over our lives since then.

So before that time, we had some interesting data that we used to talk about how people are pushing, or not pushing, the envelope when it comes to content strategy and operations.

And what we found was a lot of weakness in this area. Less than forty percent are using modular content in some way, are using automation, or are looking at content customization at scale.

Really low numbers, not surprisingly. Again, this was before ChatGPT. AI-generated copy and content briefs existed, but we saw very low usage. And even workflow automation, you see this number, only twenty-two percent had implemented it. That's a really low number. We should all be implementing workflow automation when it comes to the content life cycle. Okay. So then ChatGPT hit.

And we jumped on this. I think this was in June. We went ahead and did a poll to try to get a sense of, okay, what are people thinking now that this has hit? And the numbers are a lot different. Now, these aren't the exact same data points.

But these are interesting, right? Almost eighty percent, and I'm sure you would probably all agree with this, say that generative AI is gonna have a big impact on customer interactions, one way or the other. Forty-two percent, and I would argue this is probably even higher here a few months later, expect to be using GenAI to improve personalization.

It makes sense.

Then thirty-nine percent, again probably higher now I would argue, are using AI to generate copy and applying it to their most important use cases. And we'll talk about some of those use cases today.

So in terms of the future and what that looks like, this is interesting because I've actually updated these points and pulled back a little bit. And it's partly because of what we've seen happen in a short period of time after the introduction of ChatGPT in terms of privacy concerns, and the recognition that there's no way we're pulling the human out of the content creation life cycle completely anytime soon. Not really.

Will AI assist? Yes, and I think that's the big message we've heard here. I heard it at Content Marketing World a few weeks ago too.

Generative AI is going to be the ultimate assistant and writing support for us, but it's not gonna shut down jobs. It will change things, but we don't have to worry about our jobs quite yet, at least not for that reason. What I previously had here in the second point was that AI will create, activate, and curate content totally autonomously.

And I actually don't think that's true. I think we're a ways away from that, and I think that's a good thing. Using content atoms is one of the things I'll talk about today.

So using modular content, potentially assembled on the fly based on signals you get from the audience, is something that I believe is going to happen. And I'll talk today about why I think experimentation with modular content like that is gonna be critical.

Okay.

So experimentation.

I love this quote from Isaac Asimov, who's my favorite author: the least arrogant way of gaining knowledge.

I really like that. So in talking about experimentation, one of the things I do feel is that content experiments are different from the online experimentation you're using Optimizely for today. And they're different mainly because of the complex and unstructured nature of the kinds of documents we're talking about.

Now, in terms of the biggest points of difference from the kind of experimentation you might be doing: I believe that in the content area, we can bring personalization the last mile. And what that means to me is down to the asset itself.

Okay? We've been personalizing the experience: the way the asset's delivered, when it's delivered, what it's delivered with. You've got a curated experience, and all of that's really good. But what if you were creating assets on the fly? In fact, I had an interesting talk with some of the Optimizely guys yesterday, and we talked about the death of the asset. Now, I don't think this is gonna happen anytime real soon, but you can imagine a future where you've tagged content components effectively.

And you've got automation in the back end and other AI-related tools.

You're potentially even using GenAI to produce some of the modules and automatically tag them. You should be able to assemble what feels like an asset to the user on the fly, and you could potentially have a different one for every person. Now, I would argue there's probably a point of diminishing returns, but really, that's the test. We need to figure out what that point of diminishing returns is. Wouldn't you rather have personalized content?

Okay. So also, if we get into content experimentation, what we could see is that personalization with content could really be self-scaling. If you create enough modules that can be put together in many, many different ways, you've created a very scalable set of content.
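A minimal sketch of what that assembly could look like, assuming modules and audience signals are already tagged; the module names, slots, and signals below are hypothetical:

    # Hypothetical tagged modules ("content atoms"); in practice these would
    # come from a CMS or DAM rather than being hard-coded.
    MODULES = [
        {"id": "intro-generic",   "slot": "intro", "tags": set()},
        {"id": "intro-finserv",   "slot": "intro", "tags": {"financial_services"}},
        {"id": "body-zero-trust", "slot": "body",  "tags": {"security"}},
        {"id": "cta-demo",        "slot": "cta",   "tags": set()},
        {"id": "cta-roi-calc",    "slot": "cta",   "tags": {"cfo"}},
    ]

    SLOT_ORDER = ["intro", "body", "cta"]

    def assemble_asset(audience_signals):
        """For each slot, pick the module whose tags best overlap the audience
        signals; untagged modules score zero and act as the generic fallback."""
        chosen = []
        for slot in SLOT_ORDER:
            candidates = [m for m in MODULES if m["slot"] == slot]
            best = max(candidates, key=lambda m: len(m["tags"] & audience_signals))
            chosen.append(best["id"])
        return chosen

    # Two visitors, two different "assets" assembled from the same atoms.
    print(assemble_asset({"financial_services", "cfo"}))  # industry intro, ROI call to action
    print(assemble_asset(set()))                          # generic fallback throughout

The experiment, then, is whether those assembled variants actually outperform a single static asset, and at what level of granularity the returns stop justifying the module-writing effort.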

Now, can you do the type of experimentation that's done today, testing micro elements of what we deliver online against one another? It's probably a little more complicated. It might take a little longer. It might involve a few more people.

But I would argue that using the experimentation approach we talk about, the scientific method, is very appropriate.

Okay. So how does that begin? And it always begins with a question.

Right? So what I've done here is put what I'd think of as the highest-level question for each of these phases of the content life cycle. How can I use AI to inform content strategy? And if you find a way to do that, how do you test it?

Can I scale production and images effectively with GenAI, and so on? So you start with a question, and then apply the scientific method, which, by the way, we all pretty much learned in seventh grade. It is just the scientific method, but it makes so much sense to apply this rigor in business.

And I read a fantastic book called Experimentation Works; the Optimizely folks recommended it to me, and they actually helped provide some of the data for the book. It really gets into the impact of experimentation on business. And it makes the point, by the way, that if you already know the outcome, if there's a change you're gonna make to what you do with content and you're a hundred percent confident of the outcome (maybe you shouldn't be, but let's say you are), then there's no point in experimenting.

You want to experiment to push the envelope and truly try new things. So these are the seven steps in the official scientific method.

And I've put together just an example of what I'm talking about; I thought that would make this very relevant. And then from there, I'm going to show you some really interesting customer examples. So let's take a white paper, a generic white paper that a company uses to promote a solution, how to solve a challenge, and it's promoted to many different industries.

Let's say there's the desire to have industry versions.

You could use GenAI to create that version. It's basically taking an industry module, and there are maybe several segments in the white paper where that module would go. Whatever it is, this could actually be a test that's set up manually. It could be written by people and set up as seven different assets, or whatever it is.

But the point is to test: is it going to make a difference? So the first thing to do is figure out where I know the most about the vertical audiences I think are the most important, so that I'm choosing one to test that's gonna give me some good value. It shouldn't be one of the verticals I don't know anything about. I suppose that could be a different type of test, but let's assume you want this to work well.

So you're gonna evaluate the different industries and then come up with a hypothesis. The hypothesis here is that two industries, chosen in this example because of the knowledge base you have on them, will get the custom version, and our expectation is that it will perform at least ten percent better.

You need a real hypothesis. Most often when I see people try to innovate in content, they're not doing this. They're not developing a hypothesis, which you either want to prove or disprove, and testing against a control group.

So this to me is a very important direction that organizations, particularly B2B organizations with complex life cycles, should be moving in.

Okay. So then you make a plan. In this case, we're saying, alright, we'll use outbound email and we'll test versions, the industry version against the generic. We'll send it out to five thousand people one way and five thousand the other way, all in those industries.

Right? And then you're gonna run the experiment. And you have to define what performance is. That's the other tricky thing.

So when we say it'll perform ten percent better, we've gotta be specific about what we mean there. In this case, we're saying it's a download. You could argue that even a download isn't necessarily an indication of engagement, but it's as close as we can get here. And then you're gonna collect your results across the versions.
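A minimal sketch of how those results might be checked against the hypothesis, assuming five thousand sends per arm, downloads as the performance measure, and hypothetical counts:

    from math import sqrt, erf

    def one_sided_p_value(x1, n1, x2, n2):
        """One-sided p-value that arm 1 converts better than arm 2
        (normal approximation to the two-proportion test)."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return 1 - 0.5 * (1 + erf(z / sqrt(2)))

    control = (150, 5000)   # generic white paper: downloads, sends (hypothetical)
    variant = (180, 5000)   # industry version: downloads, sends (hypothetical)

    lift = (variant[0] / variant[1]) / (control[0] / control[1]) - 1
    p = one_sided_p_value(variant[0], variant[1], control[0], control[1])

    print(f"relative lift: {lift:.1%}, one-sided p-value: {p:.3f}")
    # The hypothesis was "at least ten percent better": check both the size
    # of the lift and whether the difference is unlikely to be noise.
    print("hypothesis supported" if lift >= 0.10 and p < 0.05 else "inconclusive")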

And in this example, which I think is a very realistic one, one of the industries met that benchmark, okay, more than ten percent better.

But the life sciences industry module did not. And I think the lesson there could be, you know what? We didn't know that audience well enough. What we had just wasn't interesting, or maybe we didn't pick the right audience, or maybe we just approached it the wrong way or picked the wrong personas.

There are a number of things that could very well lead to another test. But the point here is you now have some data to go back and say, alright, should we choose a third industry?

Should we evaluate some additional things about the industry this didn't work for and run a different test? So this is just an example of applying the scientific method. Again, I think AI will come into a lot of this, but this is really what you apply no matter what. Okay. The other thing is really using a rubric to figure out how to prioritize possible experiments.

Right? And you wanna look at aligning to things that are already priorities in marketing.

You want to know that if your experiment works, if what you're trying to prove out works, are you in a position to scale it? So you wanna think through these things as you're deciding across experiments.

And what's your ability to execute? Do you have the right people and the available technology, and so on? So that's just an idea of a qualification rubric, or what we think of as a success rubric. And we actually do consulting with a lot of clients where we're helping them figure out GenAI projects and using a similar approach to prioritize.
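A minimal sketch of how a rubric like that could be scored; the criteria mirror the ones just mentioned (alignment to marketing priorities, ability to scale a win, ability to execute), while the weights and candidate experiments are hypothetical:

    # Weighted 1-5 scoring of candidate experiments; weights are illustrative.
    WEIGHTS = {
        "priority_alignment": 0.4,   # supports an existing marketing priority?
        "scalability": 0.3,          # if it works, can we roll it out broadly?
        "ability_to_execute": 0.3,   # do we have the people and technology today?
    }

    candidates = {
        "industry white-paper versions": {"priority_alignment": 5, "scalability": 4, "ability_to_execute": 3},
        "GenAI city images for events":  {"priority_alignment": 3, "scalability": 5, "ability_to_execute": 5},
        "AI-drafted email nurture":      {"priority_alignment": 4, "scalability": 3, "ability_to_execute": 2},
    }

    def rubric_score(scores):
        return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

    # Highest score first: run that experiment sooner.
    for name in sorted(candidates, key=lambda n: rubric_score(candidates[n]), reverse=True):
        print(f"{rubric_score(candidates[name]):.1f}  {name}")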

You wanna think these through before you make that choice. Okay. So I have some examples. I have just enough time for them.

Three examples for you. Now, are these perfect examples of what I just showed you? No, not really.

I haven't really found people doing this at the level of content that I'm saying is gonna need to be what we do in the future. But I do have some very interesting examples that are content related. So the first one is from Check Point. Sean is a great guy. He's been at Check Point a long time, very digitally experienced and digitally minded.

In his organization, everyone was told they cannot use generative AI, and then they also let Sean know that he was in charge of generative AI. So that was a little confusing. So he's trying to figure out what can we actually do? So he tried something, and I really love this example.

And it was a relatively easy thing to test. Check Point was doing a lot of events, local events, and he had been using just a standard banner for all of them. He went ahead and used generative AI to generate city-specific images. He was able to do this very rapidly.

And he tested those city-specific images for promoting the events against the standard one, and the standard would only get about a one percent click-through rate. It did much better when using these, and this was easy and fast to do. A great way to get assistance from GenAI and get some really quick results.

What he's hoping to do, and this is really interesting and very much right up my alley with content, is this: he believes it should be easy to create industry versions of a security white paper using GenAI. Certainly, they would be edited or proofread by people, but still, you've got a limited number of assets today, and then he thinks about the number of industries, and then translation.

Translation and localization is another big area where we're gonna see GenAI really pick up. So he is looking at an increase in reach of forty times what they have today by doing this kind of thing. He hasn't tried it yet, but as I was working with him, this is what he was starting to put in place.

Interesting. And I would think of this as low-hanging fruit, if you will. Okay. So Shamona, one of my favorite customers, from HCL.

Really smart lady. And she has a whole organization focused on experimentation.

And I got on the phone with her one day, and I said, okay, Shamona, tell me, what is your organization's approach to experimentation?

And she said, oh, you ought to hear my approach. I have seven pillars, and she went through them right away. I was very impressed, and here's what they are. I then said to her, you need to send me a slide. She said, I don't know that I have it written down anywhere, and then she put this together. But this was so smart. So this is their way of determining whether a test is the right test to run, and they are running hundreds of tests at a time.

So they look at things like: is it gonna help with the experience we deliver? Is it gonna give us better insights into behavior?

Is it simply going to enhance our effectiveness or our productivity? Will it give us access to audiences we don't have access to today?

Right? So you can see what these seven pillars of innovation are. And she feels, and you'll see this in my next example also, that they have a very strong culture of experimentation.

So she created this heat map, actually based on some work we'd done with her, to look at what she should try with generative AI from a risk and impact standpoint.

And she basically set up a couple of pilots.

So in the first pilot, you can see we've got things with a lower level of risk. And this absolutely makes sense if you're gonna start fooling around with generative AI. Right?

Impact? Yes, in some cases maybe not huge, but you want to start with what's low risk.

You can see that one of the things they did was use this for RFP responses, which is an interesting way for organizations to try using GenAI. It can get the people who are creating RFP responses going a lot more quickly, probably speeding their work and their ability to respond to more RFPs. Then in the second phase, this is where she's getting into some of the stuff I'm talking about: dynamic content creation, personalizing assets by whatever it is, industry, maybe ABM. Right now, there are a lot of people personalizing at the ABM level, and there are some solutions out there that curate content specifically for ABM experiences.

And that makes good sense. Right? If you've got a given audience, you can generally figure out who they are. But we could probably take that further with the content.

Okay. So an example. Now, this example is interesting. It's an email enhancement example.

But the goal here was to drive up human productivity, not to actually improve email subject lines and email text. What she was trying to prove here is, can I get this as good with our generative AI tools as with humans, so that I can put my humans on something of higher value? I thought this was interesting, because you look at these percentages and you think, okay, so AI didn't really do very well. Well, that wasn't the point. Productivity was the point, which I think is really interesting.

Here's another one. So they have, I believe, a large language model in a closed loop. They loaded up their white paper and actually let their generative AI tool create a bunch of things: the social posts, and also an email nurture. I hadn't heard of anyone doing that quite yet, but she's seeing interesting results from some of these things. So that's HCL Technologies.

Now, the next one is IBM.

And IBM's been out telling this story. I had the opportunity to speak to Ari Shenken. He's kind of a rock star.

In the experimentation area.

He's built a real culture of experimentation. They're running thousands of experiments. They're using a host of different technologies, and in particular, using Watson.

Which really makes sense. And I like the way he points out that their focus was the user, or what we think of as the customer experience.

And it has to be about business transformation, not just technology.

And the other point he made was you just have to get started, because he observed, when he was trying to get this going, that there's a lot of reluctance. In fact, one of the points I found in Stefan Thomke's book was that in organizations that are really invested in experimentation, in the early days they may hide it from their executives. Because there can be a tendency for executives to say, I don't wanna take the risk. I don't wanna spend time, money, and human resources on a bunch of experimentation.

So actually, there was some data supplied by Forrester years ago, when he wrote the book, on the fact that so often executives had no idea that experimentation was happening in their organizations, but it was. Okay. So what has Ari been doing? A multi-year transformation.

I'm just summarizing it for you. Now here's a really interesting point. You can see this is filled with percentages and data points that are pretty amazing. IBM had the advantage in that they used IBM Consulting to work with them on this. Basically, IBM Consulting looked at Ari's organization as a client and provided all of the related analysis to help make things happen in this project. And you can see they had massive growth.

They were able to reduce things. To me, having been a campaign leader, moving from twenty-eight hundred campaigns to a hundred is amazing, right? Forty DAMs to one. Okay. And you can see how much project cost was reduced. And then the integration of people and process. You can see that what this created was an opportunity to just get to market faster, and what was especially notable to me is the decrease in web pages and assets.

So you can see that at the bottom here, number two.

Excuse me. So now the next thing he's working on is, in fact, a vision for generative AI.

And there's already an eighty percent reduction in content spend.

And they're looking at reducing demand tools, and at reducing email creation time. So like Shamona at HCL, he's seeing a real opportunity when it comes to all the time and effort people spend on email as it relates to our content life cycles. So interesting.

He identified four use cases. And of course, this immediately resonated with me when he showed me this slide, because it aligns to what we think of as the content life cycle.

Right? And he talks about the specific ways they're applying GenAI, and AI in general, to each of these life cycle stages. You'll note that it includes machine translation, a big opportunity there, and they've seen some great results.

And just acceleration and reduction in waste across the other areas as well, using Watson AI for surfacing insights from data.

Here's an example of text-to-image functionality, and this was actually a promotion for Watson Assistant. They saw much higher engagement with this campaign, and not only higher engagement in general, but they were able to identify C-level responses. Really interesting.

And again, of course, we don't all have an IBM consulting organization in our back pocket to help with this sort of thing.

But it's awfully good to get an understanding of what other organizations are doing. So the next thing up, and of course I was talking to him about my vision for personalizing that last mile at the asset level: they're looking at using GenAI for content creation, in particular for derivative assets, so creating multiple assets from a single asset that's probably created by a human.

This is still underway, so he doesn't have results. He has assumptions, though. He's looking at an eighty percent reduction in spend. That is considerable.

And also just a much greater speed to market, and a very high increase in content engagement. Now, I can't tell you exactly how they got to that number, but so far, the numbers have been right throughout his transformation.

So, a really interesting story that's out there.

Okay. So, one of the things I learned in looking into experimentation, thinking this through, and reading the book I mentioned by Stefan Thomke: I thought this was a really interesting insight that he added in the book.

He talks about the fact that when it comes to experimentation and pushing the envelope, we as people have a natural propensity to argue against progress.

That is a tendency. In fact, he quotes a twentieth-century economist named Albert Hirschman, who wrote The Rhetoric of Reaction, and that's really about being too reactionary.

And he cited this as basically coming in three areas, perversity being the first one. It's a little bit different definition of perversity than we usually think of, but the idea here is that no matter what you do, the potential is that it's gonna backfire completely. That's one argument. Futility is the next one.

And the idea there is, well, you might be able to get things a little better, but you're never gonna get to the real source of the problem. So it's just not worth it; it's not really gonna make a dent. And then Stefan Thomke refers to the jeopardy argument as the most dangerous one when it comes to arguments against innovation in organizations.

And it's the idea that no matter what you're thinking of, it's probably not worth the risk and the cost.

Right? It's very easy to have these arguments. They all remind me of my mother somehow, wanting me to not do things.

So the fact of the matter is there's a huge opportunity cost to inaction. And most of you are here at this event, so I hope you're probably already not of this mindset, and in fact you are the person in your organization that's looking to push that envelope. Right?

Okay. So I'm gonna give you some actions, and then we do have a couple of minutes for questions. So that's really good.

Alright. So the first thing is a reality check. Now, some of you here are using Optimizely's experimentation platform, so hopefully you have an experimentation culture.

I would argue that it probably doesn't extend to the level of content, to the content creators and people involved in that life cycle, as much as it could. So that's something I'd like to suggest you think about. Look for the low-hanging fruit. You saw some examples today, and those can really be associated with efficiency, and personalization if you can do it, but there is some low-hanging fruit you could probably find in your organization.

Avoid technical debt, technology debt. This is a huge problem we see, where organizations buy one platform, do a poor job of getting it adopted through their organization, decide that it just stinks, and then go out and buy the competitor. And, lo and behold, they have the same problem, because the problem is adoption. And then instead, you go out and buy yet another piece of technology to solve the problems you didn't solve the first time. And essentially, you have debt with your technology.

And it's something we all have to some extent in our organizations.

I even see it in mine, and it's something to battle as much as you can.

Okay. So in terms of extending experimentation, try something new. And this is where I say to the people in your organization who are content creators: think about your what-ifs, turn them into real experiments, and apply that rigor I talked about earlier.

Give your teams a chance to brainstorm experiments and test them against a rubric like the one I was showing you. Marry big data and experimentation.

This is another really interesting point. You know, big data can help us see correlations in things, but not causation.

You need to experiment to really understand causation.

So the two together really give you a lot of information. Okay, contextualize the benefits for leadership. Again, if having leadership support is a problem, you have to put the benefits into terms they will understand. And sometimes that has to be about, hey, greater content engagement is actually gonna lead to more pipeline and revenue.

And then, really, so much of this is about data. If you're not already a data geek, make sure you partner with one or become a data geek.

We all know that everything comes down to data, in making our points and making us successful. So with that, we've got a few minutes left. I'd love to have some questions from folks.

And we've got somebody with a microphone. Remember, I am a Forrester analyst, so I may have data points top of mind to share if you're looking for them.

Yes, sir. He's as far away as he can be.

Checking.

Has it been edited?

Yes.

I can't think of any example where editing is not happening. Now, it could be that if we're talking about something small, like, okay, we're going to try to auto-generate new colors in a banner, maybe we're gonna check the first three colors, see it's working, and then let it go for making other options available in the DAM or whatever.

In those cases, maybe not. But for anything that's long copy, absolutely, it's being edited.

Anyone else?

Thank you. So, actually, I'm gonna play off of that question as well, about content being edited, because I was very interested in your term, content atoms, and how they compile into what users see as an asset. And I was wondering, at what level of granularity can you develop an asset, quote unquote asset, for customers using the content atoms? Can it be an entire page that's been compiled of brand- and legal-approved copy put together in a certain way, or are you gonna create unique content word by word that no one has edited at all?

So that you can hyper-personalize that way, which, in this case, would conflict with needing an edit. Okay. So this is the perfect question. And I actually think that all of that will come to be, but remember, I also mentioned diminishing returns, right?

I'm a huge proponent of looking into and beginning to use a modular approach. So let's take your first example.

Let's say you wanna go a little further than just having a different introduction in an asset for each industry, and there are actually other parts of the white paper that should be different for different audiences.

With some limitations, as long as you have effectively tagged those components, many different engines should be able to assemble them based on pre-assessed signals that call for that kind of response in a content asset for a particular individual. Right? But someone would have to have written those modules, or figured out a way, at least in the early days, that they're gonna be plugged in together and the darn thing is still gonna make sense. Because I think that's sort of the question behind your question.

Yes, you can atomize, but if you take this too far, you're really gonna end up with something that doesn't make sense.

I don't see anyone really doing that at a granular enough level where they've run into the problem yet, but I would fully expect it. And in the early days, this is why I'm recommending experimentation.

I think, even if you use generative AI to generate the components, it requires figuring out what happens when these components come together: do we always have something that is readable or viewable and makes sense?

As for the idea of every word being net new, in essence, that's what we'd be doing with generative AI, right? We'd be somehow setting up on the back end: when people demonstrate interest in x, y, and z, we're gonna compile together and create asset X. We are a long way, I think, from doing that autonomously.

Could you do that, not delivered on the fly, and then have people, writers, review it and fix it? Sure.

But that doesn't meet all the R's we talked about at the very beginning in terms of meeting the requirements.

We're a ways away from getting to that level, but this is why I say I think we have to test to see the point of diminishing returns. Because it may be that coming up with fifteen different modules to change, to personalize an asset, is too much.

But when you do nine, guess what? You get great returns, and if you push beyond that, it's not necessary. Or it could be that some audiences require a much more granular level of atomization and reassembly than others. But we don't know until we try. So I'll give you that.

Anyone else?

Okay. Thank you. Yes, sir.

So you mentioned testing there and measuring results. How does an organization that's using AI to create content like this actually know what the best measure is? Because consumption of content doesn't necessarily mean that content is valid, or even relevant or true.

So, medical papers, for example: if AI rewrites that medical paper and people really, really wanna read it, but it doesn't convey the same information in the right way, how do you know what was successful and what wasn't?

Okay. So this is a more complicated answer. I spend probably more time with clients on performance, and what I'm now calling content intelligence, than just about anything else, because it is so tricky. What we have all measured for years is not actually performance. What we've measured is output and activity.

Okay? It has meaning, but it is not definitive information that we can take forward to inform what we do next. What I try to work with customers on is coming up with a series of metrics, activity, output, and engagement, actual engagement with real people, to look at on a quarterly basis to actually surface insights that are meaningful.

Right? And then you can start to set things up, change the sensor and signal network that's set up throughout your website, so that you're optimizing to what really works. What this means is that in addition to those activity and output metrics, you're doing a content touch analysis, where you're looking back to find the trends in actual content consumption by people you've closed business with. Now, most companies don't have that nailed down in their data in a statistically valid way for every piece of closed business, but you don't need to, because what you're looking for is a trend that proves out that the activity is meaningful.

Right? So you might see that you've got a medical paper that's getting lots of page views. But who is actually reading it, and when is that meaningful? You have to look back from either your late-stage pipeline or closed deals, a handful of them.

We tell people ten or fifteen to establish a trend, understand what looks real, and then you can test that against the larger or full data set for that one point. So it's about having a family of metrics that tell a story and actually produce real insights to inform what you do going forward. Hopefully that's helpful. I know it's not an easy answer.

I don't really think there is one, at least not yet.
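A minimal sketch of the content touch analysis described in this answer, looking back at a handful of closed-won deals to see which assets recur; the deal IDs and asset names are hypothetical:

    from collections import Counter

    # Hypothetical touch histories for closed-won deals (only three shown;
    # ten or fifteen establish a trend). In practice these come from the CRM
    # joined with web and email analytics.
    closed_won_touches = {
        "deal-001": ["security-white-paper", "roi-calculator", "case-study-finserv"],
        "deal-002": ["webinar-zero-trust", "security-white-paper", "pricing-page"],
        "deal-003": ["case-study-finserv", "security-white-paper"],
    }

    # Count how many winning journeys each asset appears in (once per deal).
    touch_counts = Counter(asset for touches in closed_won_touches.values() for asset in set(touches))

    for asset, count in touch_counts.most_common():
        share = count / len(closed_won_touches)
        print(f"{asset}: appears in {share:.0%} of closed-won deals")
    # A trend like this is what you then test against the larger data set,
    # rather than trusting raw page views alone.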

I'm gonna be doing reports on content intelligence, what it means, who provides it, and how to find it. There are vendors providing capabilities to help with this that are really interesting.

So that's something that will be coming up in my research in twenty twenty-four. Anybody else? I know we've only got... Yes. Hi. Oh, hi.

Do you have any tips, tricks, or recommendations for mitigating plagiarism done via AI, for small companies who are just starting out with AI content generation? Okay. So here's the first thing, and this is painful, but it's true. You really can't put any of your information into ChatGPT or any of the other public large language models, because of the risk of your information being plagiarized. And because those models are public, it is very possible that anything you get from them could come from someone else, or could be a total, you know, an AI, what do we call it? A delusion? Yes, hallucination.

Right now, I think this is a big problem that's gonna have to be solved. And this is why, again, people are so important.

I've been using some generative AI myself. And basically, whatever it spits back at me, I immediately don't believe. It then becomes my job to check that and test it. There's a very good example of this in education.

There are teachers saying, oh my gosh, you're not allowed to use generative AI. That's not gonna last. Smart teachers teach kids how to be successful prompt engineers and editors, so that you can use this for a first draft. But you should question everything that's in that text right now. This is gonna be a problem that we're gonna see. I think a year from now, maybe even six months from now, things will begin to look better. Most companies are starting to create their own large language models, informing them from public ones but keeping them as closed systems. I think we're gonna see that be a very important direction.

Thank you everybody. I really appreciate the questions too.
