TRANSCRIPT

[00:00:00] Sebastian: You're listening to the Insightful Connections podcast. Our guest today is Andrew Hui. Andrew is the AVP of Enterprise Consumer and Customer Journey Insights at TD Bank. His team of Consumer and Customer Journey Insights practitioners leads support for the Consumer and Customer Journey Insights needs of TD Bank, both in Canada and the US. His team is responsible for insights related to brand, citizenship, sponsorship, ESG, product, customer journey, and diverse communities. Prior to TD, Andrew was a VP at Ipsos, and before that was at Cara, now Recipe Unlimited, where he managed the overall guest satisfaction research program, in addition to supporting the Kelsey's, Montana's, and Milestones brands within Consumer Insights. Andrew, thanks for being on the show today.

[00:00:41] Andrew: Thanks for having me. It's my pleasure.

[00:01:10] Sebastian: One thing just reading your bio there, I was a little intrigued by. When we talk about insights related to brand, citizenship, sponsorship, ESG, product, consumer, customer journey, and diverse communities, that's a lot of stuff that you guys are carrying. But I'm a little curious, you know, when I was reading that out, citizenship is something I don't typically think of as related to Consumer Insights. What does that sort of bucket refer to?

[00:01:27] Andrew: Yeah, as you think about the work that TD does to support different communities, think about things like the work that we do at Pride, the things that we do during Black History Month or Indigenous History Month, Asian History Month, those are some areas that we work in. It's important to understand from the community, what are the things that are important to them? How can we speak in a way that resonates with members of that community, in a way that is connected and meaningful to them? And so the research that we do in that space is to help us better understand what is going on in those spaces so that we can build stronger connections.

[00:02:04] Sebastian: Nice. I think that's going to be very relevant to the stuff that you and I are going to be discussing later on today. Just a little snippet of what we can expect. But I like to start with this context setting question for all of these interviews, which is, how did you end up in Consumer Insights? And how has that sort of accounted for the places you've gone since?

[00:02:21] Andrew: Yeah. So when I was actually listening to your podcast, I used to say half of the people who end up in insights are people who chose to go into insights, and the other half fall into insights. Listening to your podcast, it seems like there are a lot more people who are intentionally going into insights. I'm not one of those people. I am in the fall-into-insights category. And so what happened was, after I did my MBA at Ivey, I was looking for a role and fell into the strategy role at Cara. That role was all about managing the guest satisfaction survey and mining some syndicated insights sources to get insights to support the brands that you mentioned at the top. And that's where I really discovered my love for insights. It was all about trying to figure out what the everyday consumer thought about various different topics. And I have a bit of a problem-solving mind, a puzzle-oriented mind, and understanding consumers was an amazing puzzle for me to work through and unravel. So from there, that gave me the first taste. A friend of mine worked at Ipsos at the time, and Ipsos is a great organization, a great training ground for a lot of researchers. I ended up there for, I think it was almost 10 years, before I left. And then I ended up at TD. It's been a very, very interesting journey. I was not expecting at all to end up where I am.

[00:03:48] Sebastian: You've made the transition a couple of times between brand side and agency side. How do you describe the difference between the two in terms of your experience in operating in those worlds? And what's ultimately pulled you back and forth between the two?

[00:04:03] Andrew: Yeah, you know, brand, or what I like to call client side, that experience is about driving action and change, and helping the organization really understand the insights that are delivered and how to bring them to life. And so it's not uncommon on the client side, at least in my experience, to be sitting with a piece of work, or working with a piece of work, for a very long period of time. The really fun part of the agency side is the sheer diversity of work that you get to do. When I think about my time at Ipsos, I think about all the things I learned, starting off working on the P&G account and learning the P&G process when it comes to quant research, how structured and methodical and disciplined it is, and then expanding from that really great foundation into all the other types of research that exist. I started learning about customer segmentations, discrete choice and conjoint studies, TURF analysis, price testing, product development; all that sort of stuff comes out of the agency-side experience. So it's a really wide breadth of understanding of how industries differ, but also how different industries are similar in terms of the research problems they're fundamentally trying to solve. And then when I came over to TD, it was one of those things where I was feeling the tug of wanting to see insights being activated again, and taking that real breadth of knowledge I acquired in my previous experience into TD, and then reshaping from within TD what modern insights actually looks like and what it can be.

[00:05:36] Sebastian: Part of the reason I really wanted to get you on the show was to discuss TD's cultural sensitivities test. And you'll do a better job explaining what it is than I will, for sure. But just for the audience's context around the discussion, I think I'm describing it right, and you can feel free to correct me if I'm wrong here. Yeah. This is sort of a lightweight, quick-turn methodology that TD developed to understand if a creative concept risks offending or stereotyping their audiences. And it's something that you presented on, and I was at your presentation, back in 2023 at CRC in Chicago. It was a really compelling presentation. And to me, one of the reasons I'm so keen to be having this conversation is that it's such an amazing example of the work that research can do to help organizations meet very practical needs on very tight timelines, right? And that's part of what I found so interesting about it. So I guess the first question I wanted to ask you about TD's cultural sensitivities test, before we get too far into what it is, how it works, all that stuff, is: where did the idea of a cultural sensitivities test ultimately come from within the insights practice at TD? And what was the problem you guys were seeking to solve?

[00:06:46] Andrew: The research team felt like we needed to find a way to support the organization after we We ran into a situation where one of the marketing activations got some criticism internally from some other TD colleagues who saw that particular activation and didn't make them feel particularly good about how their community was being represented. And again, like I mentioned, I have a bit of a puzzle problem solving mind as like looking at this one, I was like, how could we have intercepted this outcome a bit earlier before like all the creative was built and it was put into a space and it was being shown to the public, how could we play a role in helping the organization understand any of the risks associated with any of the pieces of creatives that we push out? And so we started from that idea problem statement, begin kind of looking around the organization of like, what other techniques can we apply to solve this? We knew that whatever we built had to be fast. It had to be reliable enough to at least start a conversation. And typically all the creative testing tools that we had, had these multi-week sort of arcs attached to it. I was having a conversation with our human-centered design partners within TD Bank. They're a separate group outside of my group. And I was just learning more about from a UX perspective, how the UX researchers design their programs and studies and what do they look for and how do they figure out when they've come across something that is problematic. And I took that idea of, can we use the same principles in UX design, but apply it in creative testing? And so that's the kind of the pieces now starting to come together. And we just started just, okay, we know that we can probably get away with talking to about 50 to 60 people and getting a reliable result. Let's just begin backtesting some of these ideas. We took that original idea that sparked the controversy and we ran it through the tool and began looking to see what were the signals that could have been identified. And we discovered, hey, you know what, it's not about typically when we evaluate a piece of creative, we might ask some questions and ask people whether people like it. And typically we focus on the top two box scores on that liking scale. Here we realized that we have to focus actually on the bottom side of that scale and on the dislike side of it. We also learned that when we started reading the verbatims, that's where things really came out of people expressing concerns about a piece of creative. And so that also came into our attention of something that we needed to look at. We needed to focus into the verbatims and what people are actually saying about a piece of creative. And so once we had those two pieces together, we did some backtesting, it seemed to work and we had a little bit more focus in terms of how to do this. We began working with some of the other teams across marketing and picking up other pieces of creative and again, just running them through the test to see what would happen. And it wasn't until another ad that came out, I think it was an insurance ad, where again, it was a TD colleague was looking at this ad, I think it was before it was launched or shortly after it was launched, and this colleague saying, this ad, just the way it's portrayed doesn't make me feel super good. We ran it through the test, and this was a test among Black communities, in fact. And we discovered, yeah, it was problematic. 
And it was quite an interesting experience, because we had a very diverse team working on the creative, but everyone comes at it through a different lens, right? And experiences these things in a different way. And that wasn't enough for us to detect it; we needed to actually put it in front of consumers, and the consumers said, yep, this is validating, this is problematic. And it gave the creative teams the permission and the direction of what elements to change. And from there, it became institutionalized into our process, and it's part of our creative development process now.
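To make the two signals Andrew describes concrete, here is a minimal sketch of a bottom-box tally plus a keyword scan over verbatims. The scale, keywords, and data are illustrative assumptions, not TD's actual tool:

```python
# A minimal sketch of the two signals described above: bottom-box scores on a
# liking scale, plus concern language in the open-ended verbatims. The scale,
# keywords, and data are illustrative assumptions, not TD's actual tool.

CONCERN_KEYWORDS = {"offensive", "stereotype", "insulting", "uncomfortable"}

def bottom_two_box(ratings):
    """Share of respondents in the bottom two points of a 1-5 liking scale."""
    return sum(1 for r in ratings if r <= 2) / len(ratings)

def flagged_verbatims(verbatims):
    """Open ends containing concern language (a crude stand-in for a human read)."""
    return [v for v in verbatims if any(k in v.lower() for k in CONCERN_KEYWORDS)]

# Hypothetical n=50 cell: 20 of 50 respondents land in the bottom two boxes.
ratings = [5, 4, 3, 2, 1] * 10
verbatims = ["Fun ad", "This feels like a stereotype of my community", "Nice music"]
print(f"bottom-2-box: {bottom_two_box(ratings):.0%}")  # 40%
print(flagged_verbatims(verbatims))                    # the stereotype comment
```

In practice the keyword scan would only surface candidates for a human read; it is the accumulation of both signals that starts the conversation.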

[00:10:42] Sebastian: I'm curious if you can walk me through kind of the nuts and bolts of what is the TD cultural sensitivities test? How does it work? How does it ultimately deliver against, I think, a really tight sort of operational environment?

[00:10:54] Andrew: Yeah, yeah, yeah. So the way it works is that it's a quant-based study. We work with one of our self-serve partners, where we've deeply embedded this. What we do is we ask a really short questionnaire. It's probably not more than about five minutes long, I think it's about 10 to 15 questions or so. It is a monadic test. All that means is that each person sees only one piece of creative that they go through. We interview 50 people, only 50 people, gen pop typically. That's, at the most basic level, what it takes to get it to work. And we designed that with purpose so that we can actually move fast. If we added more people to the interviews, that just increases the time it takes to do the study, and we really needed a tool that has a really fast turnaround. So we built it in a way that allows us to get results in just a couple of days. And then the way that it's institutionalized into our process is we run the study, the results come out shortly thereafter, and it goes back to the creative teams. We created a bit of a scorecard that says, are we all good? Are there no significant risks here? Are there some risks that we need to manage? Is this a stop sort of ad? And we provide that guidance to the team. We try to make it super easy for them to consume. They don't have to read a long report. It's all synthesized in one simple page and gives clear direction of what needs to happen. And then the creative teams can take that feedback and have that discussion of what elements to tweak, what pieces to change, and what they should do to take it forward.

[00:12:30] Sebastian: And if I remember correctly, there's basically only one of four recommendations that can come out of the test, right? Yes. Can you walk me through what those are?

[00:12:38] Andrew: Yeah. So the first one is proceed with caution. The second one is rework. And the fourth one is stop. So proceed means that there's like no issues. The risk is minimal, minimal, in terms of how we evaluate. Proceed with caution is that there are some things that haven't been identified, but again, the risk is minimal. We believe that the issues that are identified can be addressed within the current structure. And stop is like there are significant risks, we'd like it will probably require a reshoot, not like some edits to resolve.

[00:13:12] Sebastian: Right. Yeah. I find that to be one of the most interesting aspects of this methodology, right? Is that it always results in one of four recommendations. And the recommendations are so unambiguous. I do sometimes find, not trying to throw shade on anyone, but sometimes the recommendations in a market research report can end up a little muddy. And what I like about the cultural sensitivities test and the way that you guys have designed it is that it's so lightweight, it's so rapid. And it results in so much clarity in terms of what the team needs to do next with the findings of the research.

[00:13:48] Andrew: Yeah. Something I, sorry, I didn't mean to cut you off, but it's like something I've learned being on the client side is to make our recommendations as unambiguous as possible. And I find that oftentimes in the research industry, like we have this tendency as researchers to play the safe middle, right? It's like, oh, it could be this or it could be this or whatever. But at the end of the day, the decision makers need to make one decision. They're probably not going to make a blended decision. They're not going to play the middle. The best outcome is likely in a particular direction, right? And it would do us better as professionals to be more ambitious and declarative with our recommendations than playing the safe kind of gray zone.

[00:14:31] Sebastian: Yeah. Just building on that, I think that it's so interesting that this approach to research that is ultimately about, I think the way that you framed it is risk management, not necessarily risk avoidance, right? Yes. It can help surface some of the risks in creative. It can't eliminate all of them, right? Exactly. And I think one of the things that's so interesting is that as a methodology, it's actually so bold because it'll say, ah, kill this idea, right? Or, you know, this is totally fine, right? And I think both of those are two very, very declarative sort of conclusions of the research and things I think often, in my experience, market researchers would be scared to say, this is unambiguously fine, or this is unambiguously terrible. Sometimes we get very nervous around saying that sort of thing.

[00:15:12] Andrew: All under our base of N equals 50, right? And I could hear probably all the quant researchers listening to this cringing a little bit. It's like, no, no, you need at least 75 base or do this with at least 200 or 300.

[00:15:24] Sebastian: That was actually the next question I wanted to ask you is how did you guys align on a base of 50, which does seem quite light for this methodology as sufficient and what were sort of the principles that guided that decision?

[00:15:36] Andrew: Yeah, it goes back to the UX design principles, right? And there is a formula to solve the problem of how many issues or samples you need to take to find a problem that occurs at a certain percentage, right? And so imagine if you were running a factory that produces cans of soup, and you know that maybe this is a new factory, the probability of a can of soup not meeting the spec is 10% of this line, there's actually a formula that you can use to calculate how many cans of soup you need to pull randomly from this line of production to understand whether you are meeting that error rate, or that error rate is occurring or to be able to find those faulty cans of soup. And so one of the key things about the test that we designed is that it is not necessarily a measure of incidence, right? It is not to say like, we're not really here to say that the majority of or like 50% of people hate this creative or love this creative. Our purpose with this is to figure out is, does an issue exist or not? It is almost like a bit of a binary sort of intention, right? We do certainly get the benefit of incidence if we see a lot of accumulation from the results. But if the problem is to figure out whether an issue exists or not, the amount of sample that you need to drive to answer that question drops significantly. And so with that sort of frame in mind, that allowed us the freedom to create something that could move fast and detect, hey, there's a problem exists, let's have a discussion around it now of the risk management around it.

[00:17:17] Sebastian: I almost think of this as actually, in some ways, kind of similar to a qualitative approach for a couple of reasons. I mean, one is I understand that the open ends that are collected in this test that play a very significant role in helping the team identify what are the issues, but also that qualitative studies strive more towards thematic discovery than statistical reliability or repeatability. And that almost seems to be the emphasis of this study is sort of turning up where the issues exist, that they exist, and how serious are they so that teams can have a better understanding or greater clarity around how things might be perceived in market?

[00:17:57] Andrew: Yeah, that's totally true. And even when I think about qual even more broadly, I see it as a research sometimes goes through trends, maybe it's trends, or maybe just the conditions of the market are changing constantly, and it gets reflected as trends. But really, it's like about meeting the moment. And I think during the pandemic, it was heavy, heavy, heavy quant, quant, quant, quant, quant. Coming out of that, I've certainly noticed that there is much greater emphasis on the value of qual, understanding the thematics, understanding the texture, understanding deeply what is the motivations behind what people are saying. Such an important tool in the toolkit that needs to be leveraged, I think, a bit more now.

[00:18:36] Sebastian: So what are some of the limitations of this approach?

[00:18:39] Andrew: In terms of the sensitivity test, I think one of the things, like I said, what we can't really reliably do is put a specific percentage in terms of the risks associated with it, with a piece of creative. It's not designed to do that. And then even as we declare what people should do, we got to think about the discussion that comes around that as well, too, right? It is it is about a artifact that helps create alignment and consensus within an organization. If it's not doing those sorts of things, it can't necessarily stand on its own in that particular way.

[00:19:15] Sebastian: Since implementation, what would you say has been the biggest impact of the test at TD?

[00:19:20] Andrew: It's been fascinating. So we've done probably now, I can't remember the latest counts, probably 150, 160 of these. It was quite interesting for me to see when we started breaking down the numbers, the percentage of pieces of creative that fell into the different categories, with really most of it probably about three quarters in the proceed or proceed with caution space of it, and a much smaller number, about 10% kind of in the stop. And so for those pieces that got that stop rating, I think it has helped us stay out of trouble, if you will, or just to make sure that we were not accidentally having the inappropriate conversations with certain communities. And so, yeah, it has been extremely useful from that perspective of just making sure that we stay on the side that we intend to be on with our creative.

[00:20:09] Sebastian: What do you think other organizations can learn from TD's sensitivities test?

[00:20:13] Andrew: I think a couple of things. So at the very specific to the sensitivities test approach is that there is a way for organizations to be more deliberate with their communications and to be more thoughtful of how their communications lands within different communities. When it comes to the issues that different communities face, it is one thing to be an ally, but even allies don't experience the same sort of experiences that members of any particular community have experienced. I liken it to the idea of two strangers come upon each other walking the same hiking path, if you will, right? And so if I'm walking side by side with somebody, I can see the steps that they're taking. I'm walking the same path. I'm sharing that particular experience. But I don't have any of the experiences that led up to this point. And through conversation, I can kind of learn about it, but I've never really viscerally sort of experienced that. And so it's important to do things like create diverse teams as we build creative, but it's also, I think, important to involve consumers. So you get that broader view of the things that we create in context of all those different lived experiences out there. It's possible. It is not impossible to do that. There is a pathway for us to have this understanding, incorporate this understanding as we deliver our products and services to the different communities that we all serve. I think the other piece that comes out of this that is an even more zoomed out view is this idea that I've really been playing with and realizing over time, the idea that sometimes as researchers, we get really stuck in terms of looking at the tools that we immediately have and only using them in the ways that they were originally designed. Here, we took a problem that the UX teams have leveraged for a very long time and applied it into a consumer insights application in a way that yielded a really great outcome. I've noticed that there are so many opportunities that exist in this space where the blending of techniques, blending of different insight sources really yields powerful, powerful outcomes that are just amazing, that are predictive of what is going on with consumers and how they think. Andrew, last question. What keeps you motivated? What keeps me motivated? Hard problems. Hard problems keeps me motivated. I love working through complicated puzzles where the solutions aren't immediately apparent and obvious. Yeah, and there's so much of that in the research space right now. It is so, so fun to be able to build solutions to address these hard problems that we all face. Thanks for being on the show, Andrew. Thanks so much. I appreciate it.
