[00:00:00] Welcome to Unpacking the Digital Shelf, where we explore brand manufacturing in the digital age.
[00:00:16] Hey everyone, Peter Crosby here from the Digital Shelf Institute. As Global Head of Digital Commerce Search at Unilever, Bob Bowman, now a search and digital shelf expert at Win the Shelf and NEEM, was at the forefront of introducing AI into his global search and content operations to scale their business impact.
[00:00:35] He joined Lauren Livak Gilbert and me to share the under-the-hood essential components and strategies required to test and learn his way to better content, reaching more product pages at a lower cost. Welcome, Bob, to the podcast. We are so happy to have you on. Thank you. Thanks for having me. Well, you bring so much search and digital shelf expertise here.
[00:00:59] And zooming in on that in these times, when AI is really changing the game in terms of how search is going to work and how shopping conversations are going to work in the future. You've actually built out an AI tool to help with content creation and efficiency in the organizations that you work with. So we just have to pick your brain, really, is what this is all about.
[00:01:25] So I'd love to start by having you share what your role was at Unilever, the problems you were trying to solve, and what you're up to now. Sure. So definitely pick away. I'm really excited to share some of what we had been working on and some of the lessons learned.
[00:01:48] In terms of my background, I was the head of search at Unilever for the global digital commerce team, overseeing search in digital commerce across the globe. What that meant was I particularly looked after the organic side of things for search within digital commerce,
[00:02:12] which meant I also did most of our content guidelines as well. And then really the goal was to use search and content to win the digital shelf. So I also oversaw that side of things, which was actual measurement: what do we need to do to win on the digital shelf with search and content, and then how do we measure and know whether or not we won?
[00:02:41] So I ran the gamut on that. And then in order to deliver it, we had a service called Content as a Service that a teammate of mine ran, so we could drive content creation within the markets across the globe. My role in that was to create the infrastructure that the whole program and all of our content and search ran on.
[00:03:09] So we had something called the content suite, which was a few different tools. But one in particular that I had designed and developed was called Query. Query was a proprietary tool, and it was where we orchestrated everything in terms of developing the content for our products. So we'd have our keywords in there. We'd write our content in there.
[00:03:35] We'd create visual assets elsewhere, but they'd still be approved in Query. Once everything flowed out of Query and was delivered to the digital shelf, all of our measurement at the digital shelf then flowed back into Query. So we had that end-to-end view of how we were doing.
[00:04:02] And then, to make all of this run, we had infrastructure, we had agencies, and we had a process we called the search and content flywheel. That process allowed us to be successful. And what we did is we took every step of that process and built it, over time, into Query. That allowed us to start moving towards AI and automation, because once you have something in a system, you can do AI and automation. And why we needed it was that we were trying to do this all at scale.
[00:04:31] At any given time, we had about 100,000 products in our system that we were managing and trying to create content for. We had about 10,000 products coming in every year. We were overseeing over 30 markets. So this is why we built the process and the infrastructure, but we still had more to go in terms of reaching our goals. The AI and automation is what we turned to.
[00:04:59] And then for today's topic, in terms of what we did in AI, we created something called the AI content generator. In Query, where we used to manually type in our names, our bullets, our descriptions, we were now able to create those using AI.
[00:05:18] And we were able to do it with SEO, on brand voice, to our global standards, incorporating topics and claims. And then we had a whole feedback and approval system. So that is, in a nutshell, what I did and what we created.
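To make that concrete, here is a minimal sketch of what a generation step like the one Bob describes might look like. Everything in it is an assumption for illustration: the field names, the model choice, and the prompt structure are hypothetical, not Query's actual implementation.

```python
# Hypothetical sketch: assembling a constrained generation request for one
# product. Field names, prompt structure, and model are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_prompt(product, keywords, brand_voice, standards, examples):
    """Fold brand voice, SEO keywords, and global standards into one prompt."""
    example_text = "\n\n".join(examples)  # high-scoring approved content
    return (
        f"You write retail product content in this brand voice: {brand_voice}\n"
        f"Follow these content standards exactly:\n{standards}\n"
        f"Work these keywords in naturally: {', '.join(keywords)}\n"
        f"Good examples of similar approved content:\n{example_text}\n\n"
        f"Product facts (only make claims supported here): {product['facts']}\n"
        "Write: a product name, six bullets, and a description."
    )

def generate_content(product, keywords, brand_voice, standards, examples):
    prompt = build_prompt(product, keywords, brand_voice, standards, examples)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # still routed to human approval
```

The design point Bob keeps returning to is that a prompt like this is only as good as the standards, keywords, and examples fed into it, which is what the rest of the conversation unpacks.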
[00:05:44] So you really did AI before AI became a big craze, which is awesome. And that's why we want to pick your brain: you learned from experience, and as people are building out these AI tools and strategies right now, I'm sure there are a lot of lessons learned from the project you went through. So let's start with that. How did you know AI was the right solution before you even jumped in to build Query? And how would you recommend brands assess that now, with so many different AI technologies out there? Yeah, yeah.
[00:06:13] So that's tough. First, I didn't mention it, but it's good that you did: this was a little bit before AI was a thing, right? The timeline is that the first parts of what I just described actually went live before ChatGPT did. So we actually beat OpenAI to market. Obviously, we were using their tools. And you're going to be a trillion-dollar company just by yourself, too. Yeah, yeah.
[00:06:43] Hey, why not? So questions like this, I don't know that they're getting easier, but back then we were making decisions around this in a vacuum, right? Because we didn't really have anything to look at in order to know what to do.
[00:07:03] So in terms of figuring out whether or not it's the best solution, the lesson we learned came from starting without knowing the answer. When we started off, we had this idea that we wanted to use AI, and I think a lot of people have that idea now, right? Oh, we've got to do AI somehow, somewhere, for something. And I think we had a pretty good use case: hey, we have this content that we're writing, can't we use AI?
[00:07:31] So I think it was logical, but we started off with the full solution that I just mentioned, and that was really too much to start with: to be able to do all of our written content with AI. So we went through an RFP process and tried to find a partner that could help us create it.
[00:07:51] And what we found was that there were a lot of people willing to help us create this solution, but the formula was that it was going to cost a lot of money, and it might or might not work. That was just a risk we couldn't take, so it wasn't a great proposition from our perspective. And that's the thing that made us start to think: well, is AI the best solution? What are we trying to do?
[00:08:20] So I think that's the lesson: figure out what you are trying to do, and then determine whether or not AI is right. In our case, we were trying to scale our content production. We wanted to reach more products, and we wanted to do it for less money. And less money could mean the same amount of money but more products, right? That was the goal. So what we thought about first was, well, what could we do instead that could get us there?
[00:08:50] Is there something besides AI? And what we came up with was content reuse. We'd been reusing content, but we said, you know what, can we use AI or automation to help us reuse content in a really easy way? We have all this content. Can we reuse it? We thought we could do that with certainty, and we could do it at a price we could afford.
[00:09:15] So that was our alternative to AI while we worked on AI on the sidelines, to see if we could regroup and come up with a better solution. So we worked towards content reuse. And then we also came up with another approach: there was so much risk that we heard about in terms of going big with AI that we started to break down, well, what's causing the risks?
[00:09:45] And much of it was not having the right data in the right place. So we thought, well, could we get the data in the right place in the meantime? And what's the most efficient way of doing that? I'll talk about that a little more later, but the basic approach was: could we build other things that we needed to build anyway, and that would get the data where it needs to go?
[00:10:09] And then the third thing we did in trying to assess whether or not AI was right was to go forward with an AI solution, but for the easiest thing, which was what we call our perfect names, our product names. Our perfect names had eight parts, and really three of those parts were creative. So we just focused on whether we could automatically create those three of the eight parts with AI. And that's what we went forward with.
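To make the three-of-eight idea concrete, here is a hypothetical sketch of assembling such a name. Bob doesn't list the actual name parts or their order, so every part named below is an invented stand-in.

```python
# Hypothetical sketch of an eight-part "perfect name" where only the three
# creative parts are AI-generated; the rest come straight from product data.
FIXED_PARTS = ["brand", "product_line", "format", "size", "count"]   # from PIM data
CREATIVE_PARTS = ["benefit", "scent_or_variant", "usage_claim"]      # AI-written

def assemble_perfect_name(product_data: dict, generated: dict) -> str:
    """Merge structured data with the three AI-written parts, in a fixed order."""
    ordered = ["brand", "product_line", "benefit", "scent_or_variant",
               "format", "usage_claim", "size", "count"]
    parts = {**{p: product_data[p] for p in FIXED_PARTS},
             **{p: generated[p] for p in CREATIVE_PARTS}}
    return " ".join(parts[p] for p in ordered if parts.get(p))

# Example:
# assemble_perfect_name(
#     {"brand": "Dove", "product_line": "White Beauty Bar", "format": "Bar Soap",
#      "size": "3.75 oz", "count": "8 Bars"},
#     {"benefit": "Moisturizing", "scent_or_variant": "Original Scent",
#      "usage_claim": "for Softer Skin"})
```

Constraining the AI to a minority of slots in an otherwise structured template is what kept the risk low: the generated text can fill the name's structure but not break it.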
[00:10:37] So we took that big thing we were aiming for, which was basically shooting for the moon without knowing where the moon was or how much it would cost to get there, and we broke it down. We followed three different paths, but they all had very little risk, and the costs were known. So that's how we assessed things and how we were able to go forward.
[00:11:01] And I think for anybody listening, it's really important to get over that first part, which is: why are you trying to use AI, and does it make sense? And even if it makes sense conceptually, does it make sense realistically? What's your fastest path to success?
[00:11:25] I'm interested to hear who the "we" is, because so often you can't do these projects in a silo. As you were saying, they have risk implications, and they certainly have technology implications. So who were your partners? Who were the people that needed to gather in the room to move this forward across this process of consideration, implementation, and measurement?
[00:11:52] Yeah, so that's a really long list. It depends on... maybe I can shorten it for you. Give me a sec. It depends on how you slice it and dice it. But in terms of who was in the room, I'm trying to think of the shortest version of the list. It would be my team.
[00:12:16] So we had search experts and content experts, the business owners, managing all of this. And you reported up into where? Into e-commerce? Yeah, e-commerce, which ultimately goes up into the sales organization. Thanks.
[00:12:39] And then, as the business owners, we had internal technology leads to help us manage the technology side of things, and we had an external developer partner. So that was the core of it. But we were also working, obviously, with the markets and the end users.
[00:13:05] At the end of the day, I was a product owner, right? And the product was Query. And we had a lot of constituents using it, from agencies to marketers themselves to digital commerce teams. So in terms of who was in the room, it really depended on what we were building. But we were getting feedback on what the markets needed and what the users needed as well.
[00:13:32] But the core team around the AI piece was the business owners, the technology leads, and the external developer, and we also included the agency that was creating the content. You were talking a little earlier about the importance of being able to create content that sticks with your brand voice,
[00:13:58] that is compliant as it's made, hopefully, so that it's really adopting your tone and your voice, using all of that rich source data that AI can draw on to do that. Hence the importance of standards, so that there's as little editing work on the back end as possible. What did you learn about having the right standards in place, which then, I would assume,
[00:14:28] creates the right prompts, so that the AI can have less hallucination and more accuracy? I'm just wondering about your process for that. Yeah, so if you think about it, it was kind of a long process, and the process wasn't all related to AI, right?
[00:14:51] We were lucky enough that we developed a process that then fit seamlessly with our needs for AI once they came around. So these are things I would recommend to anybody, whether you're pursuing AI or not. The first thing we did was create global content standards and search standards. We had those for years.
[00:15:14] And that's why we got into the business of creating tools: to help ensure that those standards were met. When we built Query, Query was able to measure those standards and make sure we were keeping them. So we had a keyword health score, we had a content completeness score, and we had a content quality score. It was all baked into the system.
[00:15:45] The original need was: how do we as a global team, and how does Unilever as a business, know whether or not we're adhering to our own standard? Do we even have a standard? Are we adhering to it? We were able to see that from a distance, and we were able to keep agencies accountable, et cetera, because we could instantly see whether the content was hitting, at a minimum, the threshold we were able to measure for our standard. So that was really important.
[00:16:12] The keyword health score looks at things like volume and relevance. Completeness is just what it sounds like: are there six bullets, are all of our name parts filled out, things of that nature. And the quality score looks at the obvious, spelling, grammar, readability, but it also applies our SEO standards.
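As a rough illustration of the shape of that scoring, here is a toy version of just the completeness check. The six-bullet and eight-name-part minimums come from what Bob mentions; the field names and everything else are invented.

```python
# Hypothetical sketch of one of the three scores: content completeness.
# Fields are lists of strings; minimums follow the standards Bob describes.
REQUIRED = {"name_parts": 8, "bullets": 6, "descriptions": 1}

def completeness_score(content: dict) -> float:
    """Fraction of required elements that are present and non-empty (0.0 to 1.0)."""
    met = 0
    for field, minimum in REQUIRED.items():
        items = content.get(field, [])
        met += sum(1 for item in items[:minimum] if str(item).strip())
    return met / sum(REQUIRED.values())

# A keyword health score (volume plus relevance) and a quality score
# (spelling, grammar, readability, SEO placement) would sit alongside this,
# giving every product page a measurable status against the standard.
```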
[00:16:40] Those SEO standards mean looking at whether the keywords are good and whether they're in the right place for SEO. So we had that established, and we had that measurement in place. And that, it turns out, was actually critical for AI. Because if you think about it, there are really two parts. First, there's the input part, right?
[00:17:03] One of the risks that we found in our initial RFP was that, as I mentioned, there were gaps in content. What we figured out is that it's really important for the AI to know what good looks like, and to do that in a scaled way. To do it at scale, it has to be able to see other content that may be similar to what we're asking it to create, and to know that that content is good.
[00:17:32] And that's what our content standards and our measured content were able to do. First, the standards could be directly programmed into the prompt suite. But the measurement part also helps the system find things to give the AI as good examples.
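A minimal sketch of that example-finding step, under the assumption that every piece of approved content carries a quality score and each product has a set of similar products; the 0.8 threshold and all names are invented.

```python
# Hypothetical sketch: use the measured scores to show the AI "what good
# looks like" by retrieving high-scoring content from similar products.
def pick_examples(target_id, catalog, quality, similar_ids, k=3):
    """Return up to k pieces of high-scoring content from similar products."""
    candidates = [
        p for p in catalog
        if p["id"] in similar_ids and quality.get(p["id"], 0.0) >= 0.8
    ]
    # Best-scoring content first; these become the exemplars in the prompt.
    candidates.sort(key=lambda p: quality[p["id"]], reverse=True)
    return [p["approved_content"] for p in candidates[:k]]
```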
[00:18:00] Then from an output perspective, if you think about it, you can go, especially now, to any LLM, and you could probably create a pretty cool-looking set of bullets or a description, et cetera. And I think at first pass it would look great. But the challenge is that it always looks great until you have a standard you're trying to achieve. Then it starts to not look as good. So having that standard, and having it be measurable, is really critical for the output from the AI.
[00:18:28] It means being able to see instantly whether the output is at least meeting the minimum standard of what we're trying to achieve, and we know that through the data. And then the other thing we were able to do, again because we had it in place to begin with, as part of our process and our tooling, was a feedback and approval process. So we're using AI, but we're still getting human feedback and approval.
[00:18:56] And, in fact, it's the same human feedback and approval that we were getting before. That person doesn't even have to know that the content is coming from AI. Not that we were being sneaky with it, but they don't necessarily have to know, because they're approving it against the same standard they've always approved to, and through the same process. So having that in place was critical as well.
[00:19:20] And, again, it keeps you safe from the obvious risks of just sending something out the door that was created by AI. Speaking of risks, there is a level of risk when working with AI, and there's also a risk of doing too much with AI and automating too much.
[00:19:39] So how did you work through, slash convince, your organization, and get legal and regulatory and everybody on board to accept that risk? Well, from a legal and regulatory standpoint, the company as a whole had standards. So we simply had to go through the processes that were in place to show that we were AI certified.
[00:20:08] That part was relatively easy, because we didn't have to come up with a standard or convince somebody of the standard. We just had to show that what we were building met it. And then in terms of not biting off more than you can chew: I talked about that a little in the beginning. The original idea was to do it all.
[00:20:36] And we ended up doing it all, but it just wasn't possible from the get-go, so we had to come up with something else. The other thing, in terms of how this applies to people listening, is that a lot of people are being asked to do it all. And I think the risk comes more from what you're being asked than from what you're proposing to do. So it's really hard to push back on that.
[00:21:04] One of the ways we approached it, and it was difficult even for us, even though this was a number of years ago, was navigating that pressure: hey, we have to do something with AI. And what are you doing? Is this really AI? It definitely took some pushback to get to where everybody wanted to be, but we had to do it in a responsible way.
[00:21:31] The way that we did it, and it's a way of thinking about it, is, number one, go back to the first lesson: think about the part that you absolutely need AI for, and focus on that. But then, and I've hinted at this along the way, what we were able to do in terms of mitigating risk was to build a number of tools that we needed anyway.
[00:21:56] Those tools we needed individually, but collectively they allowed for AI. So we were able to go through the process of reaching AI without the huge risk of saying, we're going to do all these things and then we're going to have AI, and let's see if we can do it. First, we had the content scoring that I mentioned. We were using it as something with a standalone benefit, right?
[00:22:24] It was something we had always used. It gave us our gap identification: we could look at a market, see what the gaps were, create briefs, and go fill those gaps. So it always had a use, totally worth building. But, as I mentioned, for AI it becomes your measurable inputs and your measurable outputs. Then, as I mentioned, we took a bit of a left turn and said, you know what, let's do content reuse.
[00:22:51] Content reuse was going to help us get towards our goal of scaled content for less. But in order to do that, we had to create a product-to-product network. And again, there wasn't a lot of risk; we were confident we could do it, and we could afford it. The product-to-product network basically captures which product is related to which other product within Unilever. And believe it or not, the system didn't actually know that on its own.
[00:23:20] So if you had an eight-count of Dove white bar and a four-count of Dove white bar, the system didn't know those were the exact same thing, just different counts. So we had to create this network. And by having that network, from an AI perspective, the AI now knows which products are related to others. So it can start making comparisons, and as it's trying to create content, it knows what to look for that might be similar.
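Here is a minimal sketch of building that kind of product-to-product network, under one simple assumption: that products differing only by pack size share the same brand, line, and variant fields. The grouping key and field names are hypothetical.

```python
# Hypothetical sketch of a product-to-product network: the system has to be
# told that an 8-count and a 4-count of the same bar are the same product.
from collections import defaultdict

def build_product_network(products):
    """Group products that differ only by count or size into related sets."""
    groups = defaultdict(set)
    for p in products:
        # Key on everything except pack size, so variants cluster together.
        key = (p["brand"], p["line"], p["variant"])
        groups[key].add(p["id"])
    # Every product is related to the others in its group.
    return {pid: group - {pid} for group in groups.values() for pid in group}

network = build_product_network([
    {"id": "dove-8", "brand": "Dove", "line": "White Bar", "variant": "Original"},
    {"id": "dove-4", "brand": "Dove", "line": "White Bar", "variant": "Original"},
])
# network["dove-8"] == {"dove-4"}
```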
[00:23:47] And it knows whether that similar content is good or not, because we have the content scoring. The next thing we had to do involves the keyword health score I mentioned. Our original keyword health score was based only on volume. And as you can imagine, you could game that system by putting in keywords with high volume but low relevance, right? Completely irrelevant keywords. So we were trying to find a way to close that gap.
[00:24:13] The way we did that was to create a tool for keyword relevance, and that created a product-to-keyword network. So now we have a product-to-product network and a product-to-keyword network. And this allows the AI to know not only which products are alike and which content looks good, but also which keywords to use on a given product when it's creating the content.
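A sketch of that keyword side might look like the following; the weighting, the normalization, and every name here are assumptions, but it shows how adding relevance stops volume alone from gaming the score.

```python
# Hypothetical sketch of the product-to-keyword side: blend search volume
# with relevance so high-volume but irrelevant keywords score poorly.
def rank_keywords(product_id, keywords, volume, relevance, top_n=10):
    """Rank candidate keywords for one product; inputs normalized to 0-1."""
    scored = {
        kw: 0.4 * volume.get(kw, 0.0) + 0.6 * relevance.get((product_id, kw), 0.0)
        for kw in keywords
    }
    # The top keywords per product form the product-to-keyword network the
    # generator draws on when writing content for that product.
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```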
[00:24:38] And then, finally, on the input and output side: on the input side, we created a digital content brief. Again, we'd always used a brief; we just made it digital, and that made the whole process easier. But now the AI has the information it needs for innovation, right, through the brief.
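Bob doesn't describe the brief's fields, but as a hypothetical sketch, a digital brief is essentially a structured record the generator can consume directly; every field below is invented.

```python
# Hypothetical sketch of a digital content brief; all fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    product_id: str
    market: str                                        # one of the 30+ markets
    claims: list[str] = field(default_factory=list)    # approved claims only
    topics: list[str] = field(default_factory=list)
    keywords: list[str] = field(default_factory=list)  # from the keyword network
    brand_voice: str = ""
    notes: str = ""                                    # anything to emphasize

# Because the brief is structured data rather than a document, it can be
# validated up front and fed straight into the generation step.
```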
[00:25:04] And then we had the feedback and approval. Again, we had all of it before, but it was really critical. And, to your point about risk mitigation, that was a critical thing in terms of selling it to the business and getting that AI certification: that we had a feedback and approval process and tool in place. And then one other thing I'll add, again thinking through the lens of trying to manage expectations.
[00:25:28] It's really important, whether you're using an off-the-shelf tool or creating your own: if it works, it's pretty magical, but getting it to work is not magical, right? There's a lot of work involved. So, you had asked earlier who the people were that were in the room.
[00:25:54] Well, once this worked, anybody could look at it and be like, oh wow, this is amazing. But to get it there, not only did we have to do all these things we've been talking about, but every week we had a call with a search expert, a content expert, and a technology and infrastructure expert. The developer was Capgemini. I don't think I mentioned that, but Capgemini was our developer.
[00:26:22] And then we had an agency on the call. So we had the process people, the subject matter experts, and then the AI experts and technologists, on a call every week, going through it, trying to make this thing meet a standard. The output is magical, but getting the inputs right took a lot of expertise.
[00:26:50] And I think it's really important to paint that picture when you're being asked, or pressured, to create these results. They are magical, don't get me wrong, but there's a whole lot of work. And I don't know if it's always understood how much expertise and forethought, et cetera, goes into making the magic happen.
[00:27:12] Because it really needs that training data, which you had to create, to give it the parameters of standards and quality and so on. Otherwise, AI will try to give you the answer you're looking for based on whatever it has at its disposal.
[00:27:33] And so it seems like if you just left it to its own devices and you didn't have those descriptions, it would have gone somewhere else to figure out what to do, and then you'd get content that doesn't meet standards. But it sounds like in some ways AI was, I don't know if it was a forcing function, because it sounds like you were down those paths already for your non-AI scaled work.
[00:27:58] But putting it in a format and in a way that AI could access as part of its training was the critical piece to achieving trustworthy scale. Is that right? Yeah, yeah, 100%.
[00:28:11] I mean, this is a specific use case, but if you're going to do what we did, doing it the way we did it is probably best, because any place on that path, if you stop, you're in a really good place, if that makes sense. Even if you never get to AI, right? If you have content guidelines, you're in a good place, and a better place than a lot of other CPGs.
[00:28:40] If you then have a structure to scale that, now you're in an even better place. If you have a way to measure all of that from end to end, oh my gosh, you're in a fantastic place. You're elite, yeah. And then even within that, if you can just add some automation, you're in an amazing place. And then, of course, there's the AI at the end. So that's the way we tried to do it.
[00:29:03] We tried to do it in a way that was the best way at any given moment, so that even if we stopped, we'd be in a really good position. I think that was the trick. Now, it wasn't always planned, right? We weren't knowingly on our way to AI for years. But we were always trying to set and keep our standards and scale them. That was the overarching direction we were taking.
[00:29:33] And at the end of the day, the magic, I would imagine, is: wow, look at those results. When you sit back now and look at what you were trying to achieve, which is scale. Mm-hmm. At the required level of quality and accuracy, et cetera.
[00:29:56] Do you have a stat, like we did 2x more content? Once you stood back and had been running it for a bit, what made you say, this is actually magic? Is there a stat that really leaps out at you, that gave you joy at the time? I mean, basically, the way that we measured it is we felt we were getting to about a 40% reduction in time, and therefore cost.
[00:30:26] And about a 50% increase in quality. So that's how we measured things. And the idea behind that, as I mentioned before, is not necessarily to pocket that 40%. There are really two things we were always trying to achieve. One: can we reach more products? Because oftentimes you have to prioritize, and we were prioritizing our best sellers or whatever the case is.
[00:30:55] So can we expand that? And the other thing is that search and content flywheel I've mentioned. The final step on that flywheel, before you go back to step one, is continuous improvement. We had a lot of ways of measuring, and conceptually we had all the ways we could continuously improve. But frankly, it was hard enough to get around the flywheel once.
[00:31:23] So to then go around a second time on the same products was really challenging for us. That was the other thing we were aiming for with this: could we get to more optimization? And Bob, one more question, because I'm doing a lot of research on org structures of the future, and a big piece of it is that a lot of this is going to get automated.
[00:31:49] Do you see a world where this eliminates certain types of roles, because the automation is able to do the briefing and the content creation? And then you have more of an orchestrator, versus having a fully built-out content team and a retail media team and a search team. How do you see this evolving? Knowing that today most organizations are not here, this is advanced, but let's look 10 years into the future. What do you see that looking like? Oh, geez.
[00:32:20] I'm not quite sure. Well, you can't be wrong, Bob, so whatever you think. Yeah, yeah. No one's going to come back in 10 years to check on what you said, so just go for it. Well, I'm self-limited by the "AI is not magic" thing that I've already said. I understand that there is a time and a place where it does become magic.
[00:32:48] But right now, it's magical, but it's a tool, and everything has to be turned into a product, for lack of a better word. An open chat bar is not necessarily a product if you're trying to achieve a specific thing. And here, for what I just described, we were trying to achieve a specific thing.
[00:33:17] And it took years of work to achieve that specific thing with AI. So I think, assuming there's not some magical development, there's going to be a lot of this kind of creation still to do. To my knowledge, these tools don't really exist yet, or there are only a few players with tools that can do what we created.
[00:33:43] So I think we're a long way off from this being scaled in the sense of a lot of people having access to it for this kind of role. That's number one. And then in terms of what you would want to do with it: could our vision change over many years? I'm sure it can, and maybe it has to. But our vision, again, was to expand reach.
[00:34:11] We had a lot that we weren't achieving with the people that we already had. So it wasn't about getting rid of people; it was about eliminating the gap between what we were able to achieve with the people we had and what we really wanted to achieve. So I think there's a long runway for that to play out, right?
[00:34:35] For these tools to actually be in people's hands, to be useful and scaled, and then to reach the objectives that people have, before it starts changing org structure. Now, with that said, I know from a corporate perspective there will be a lot of pressure there. But from a realistic perspective, there's just so much involved.
[00:35:02] Like, again, unless some magic happens someday: even if you had a magical solution right now and you dropped it in any CPG's lap, and I say magical meaning it works, not God-like magic. But how do you get that Harry Potter status? Well, how do you get their content into it? I mean, not the content, the data. Where is the data? Who has the data?
[00:35:32] What is the data that you need, and have you been collecting it? Where have you been collecting it? There's so much that goes into it. I think this is totally achievable, and I think anybody listening can achieve everything I described, but you have to begin a process and you have to go through all the steps. So, you're asking me maybe further into the future, but I think for the foreseeable future, there is plenty of runway
[00:36:01] to make this useful within the construct of what we currently have in terms of org structure. But I know there will be pressure against that as well. Yes, of course. And so, to close out, I'd love to bring it back to the humans, because you're creating this tool, this platform, this solution,
[00:36:29] and there are a lot of human feelings that come about when AI begins to enter somebody's work stream. It feels maybe risky; it feels like maybe this is threatening my job; don't make me figure all this out. How was the tool you created adopted? Do you have any advice for folks that are trying to bring humans along
[00:36:58] on this adventure together? Yeah, I think maybe there are two things. First, in terms of the AI, honestly, we didn't see anybody take it as a threat, given the way we were creating it and deploying it. So we didn't have that challenge, although obviously it's completely reasonable that somebody would see it that way.
[00:37:29] But I think we had two challenges: one we were overcoming, and one we had to rethink in order to overcome. The first thing is, I did a lot of change management and drove a lot of adoption of tools and standards and all this stuff at Unilever, and I'd like to think we were quite successful. But this one, I thought, would be the easiest ever, because, well, it's AI,
[00:37:58] and everybody's talking about AI, and everybody wants AI, and what are you doing in AI? But it actually wasn't, because it still required you to go into a system and do a few things: approve a few things, click a button, make a choice. So there were still some steps, even though you didn't have to write anything, and you didn't really have to know how to do any of those things.
[00:38:27] On the one hand, we were lucky enough that we fed it through a system, right? We had created the flywheel. We had created Content as a Service. We had an agency partner. So we were able to get it adopted that way, no problem, because we had a whole process to run it through and we had a partnership. But for the everyday user who just wanted to go in and maybe unlock the potential of AI: we had some markets that were desperate for help, and for them it was easy. But then we had others where,
[00:38:57] well, they weren't really creating content before, so it was a luxury for them to be able to create it on their own. That means that, even though it's so much less effort overall, it's actually a little more effort for them to do it on their own. So that's where we ran into an adoption issue. We just thought everybody would be so excited to use it, and it was so evident how useful it was.
[00:39:26] But that wasn't the case. So there we had to pivot, and we had to really work backward from, well, which step is too much to take? Which click is one too many? So we were working on that, basically looking for ways to do things in bulk, so the answer just came out, as opposed to users going through a process to get to the answer. So on the agency side,
[00:39:55] we were in pretty good shape on user adoption. Again, if somebody was creating the content already, great adoption. If we wanted somebody to get their feet wet with it, it was a little harder, and we had to rethink the user journey. But the other thing we were doing that was successful, and that I would recommend, is something we were always doing: because we had Query, because we had all these processes in place,
[00:40:24] we already had a whole system in place for driving adoption. So we did a number of things, but basically we tried to give adopters, the high achievers, as much sunshine as possible. So we had different awards. People using Query every single month,
[00:40:53] the top users, would receive just the tiniest token, like an Amazon gift card. We had a lot of different outlets where we would show off people's work or case studies or whatever the case may be. We had an entire education series that was really well attended. Every two weeks, and it wasn't always the same topic, we would have training around search and content. So we made education as accessible as possible.
[00:41:22] Query was filled with short videos on how to do everything. And then we worked with our users as much as possible to create a feedback loop, to keep bringing them things that would excite them. So we did all these things, and we had a really great community, and overall we had really great adoption. And I would recommend that to anybody trying to deliver
[00:41:55] anything, whether it's AI or any kind of process or system: just think about how you can reward people. And we never did it through punishing or calling people out. We always did it through highlighting the people who were the highest achievers or who were reaching our goals. And having those metrics, like I mentioned, being able to show
[00:42:24] which markets were green on completeness and quality and all these different things was really helpful. And again, we didn't have to show off the red ones; we could show off the green ones. And that usually... And the reds know who they are. Exactly. That's the important part. Well, Bob, thank you so much. As always, any discussion of AI goes along with a discussion of the humanity involved, and how all of that works together
[00:42:52] towards very clear objectives and goals. Your case study today has been incredibly helpful, and I think it will help our listeners start to imagine, or continue imagining, how to make this come to life in their organizations. It's really generous of you to share it with us. Thank you so much. Excellent. Thank you so much for having me. Thanks again to Bob for sharing his AI case study with us. This will continue to be both a sprint and a marathon, so sign up for both races by becoming a member
[00:43:20] at digitalshelfinstitute.org. Thanks for being part of our community.


