

Naming in an A.I. Age: Name Testing & Research with Mapprio’s Founder Jian Huang

In this episode, Mike Carr interviews Jian Huang, a professor at the University of Tennessee and the founder of the research company Mapprio. They discuss the importance of name testing and research in the AI world. Jian explains that his platform focuses on understanding human data and uses social psychology to gather authentic data efficiently. They explore the concept of System 1 thinking, which involves intuitive, reactionary responses to names, as opposed to System 2 thinking, which is more logical and analytical. Jian’s platform allows for micro-segmentation and analysis of name-testing data, taking into account factors such as order of name selection and latency. Finally, they discuss the importance of considering different demographics and segments when conducting name testing and research.

[YouTube video]

Title: Mastering the Art and Science of Name Testing: Unveiling the Power of System 1 and System 2 Thinking 

Welcome to another riveting episode of “Naming in the AI World.” In this enlightening discussion, we delve deep into the fascinating realm of name testing, research, and AI, guided by none other than Professor Jian Huang from the University of Tennessee. As a computer science professor and the mind behind Mapprio, an innovative research platform, Jian Huang offers profound insights into the science of name testing. 

A Glimpse into the World of Jian Huang  

Jian begins by introducing himself as a computer science professor with a keen interest in large data visualization. His remarkable journey spans over two decades, during which he explored the intricacies of scientific data, funded by renowned organizations like the National Science Foundation and NASA. Around 2013, during a sabbatical, Jian embarked on a mission to unlock the enigmatic realm of human data through the creation of Mapprio. 

The Scientific Genesis of Mapprio  

Mapprio’s inception draws from the rich landscape of social psychology. Collaborating with experts from diverse fields, including psychology and design, Jian designed a user-friendly platform to peer into the collective psychology of groups. This innovative approach promised to efficiently extract valuable information from online observations. 

Challenges in Name Testing  

The discussion swiftly transitions to the complexities of name and brand testing. Jian outlines the crucial criteria: time and cost limits, and reaching the desired target demographics. However, he emphasizes the oft-neglected third criterion: scientifically meaningful answers.

System 1 vs. System 2 Thinking  

A captivating portion of the conversation delves into System 1 and System 2 thinking, shedding light on their significance in the context of naming and branding. Jian elucidates that System 1 thinking is all about intuitive, gut reactions to names. Traditional questionnaires typically overlook this critical aspect of human behavior. 

Metrics and Insights 

The conversation progresses to the metrics used in name testing, with a particular focus on the order of name selection, latency, and more. Jian underscores the importance of micro-segmentation for deciphering diverse audience reactions and garnering scientifically meaningful insights. 

Segmentation and Audience Reactions  

Jian and the host, Mike Carr, underscore the pivotal role of segmentation based on variables such as age and gender. They explore how varying audience reactions can significantly impact naming and branding strategies. Understanding these nuances, they emphasize, can be a game-changer. 

In this thought-provoking episode, you gain a profound understanding of the art and science of name testing. System 1 thinking, System 2 thinking, the power of segmentation, and the role of latency are all explored in detail. Mapprio’s groundbreaking approach to research promises to revolutionize the field, helping businesses make more informed branding decisions. 

Stay tuned for upcoming episodes of “Naming in the AI World,” where we continue to unravel the dynamic interplay between naming and AI. Don’t miss out on the latest trends and insights—subscribe to our podcast to stay informed! 

 

Transcription:

Mike Carr (00:03): 

So welcome everyone to another episode of Naming in the AI World, and we have a really special treat for you this week. We have a professor from the University of Tennessee who’s also an entrepreneur. He started his own research company, and I’m going to ask Jian to tell us a little bit about himself, his background, what he teaches, and a little bit about his company Mapprio. And then we’ll get into some questions about name testing, research, and maybe even AI. So Jian, please tell us a little bit about who you are and what you’re doing. 

Jian Huang (00:40): 

Alright, thank you Mike. Thank you for having me. So my name is Jian Huang. I am a professor in computer science at the University of Tennessee, Knoxville. I’m wearing the Tennessee red today, not the Texas red. The research that I do is in large data visualization, and it has been for the past 20 years. And of course, right around 2007, 2008, when the White House came out and said the words “big data,” suddenly everyone had to do big data. But I insist my work is large and not just big. In my research and teaching, I deal with scientific data a lot, and my research is funded by the National Science Foundation, the Department of Energy, NASA, and so on. And right around 2013 I took a sabbatical and built a new method into a new platform. And that is to get and understand human data. So the idea is you could actually do a very interesting observation on an online platform and get very useful information out of it. 

(01:53): 

And the science actually comes from social psychology. So my friend, Professor Gartenberg in the psychology department on our campus, he is an advisor. Through collaborating with him, we found out that the reason we can’t get real, authentic data efficiently is in part because the user experience is very monolithic, very, I’d say, not trusting. So we got our school of design involved as well. Professor Sarah Lowe, she’s the design professor. And as a result of this collaboration, we have a user experience that looks awfully like a very user-friendly survey. However, it is actually about making observations that can peek into people’s psychology on a group basis. And that is the story behind Mapprio. 

Mike Carr (02:49): 

Sounds great. And just to be totally transparent with our audience, we’ve actually used Jian’s Mapprio platform with a number of our clients in the name testing space, which is of course what we’re all about. And I would agree with Jian that the interface has proven to be much more engaging. We tend to get a higher quality of respondent when we have the open ends in there. So at least for us, over a few dozen studies, I think to date it’s worked really well. So Jian, what I wanted to ask you about, since we’re sort of in the naming business, and I know your platform has applications far beyond that, but with respect to naming, what do you see as the make or break issue when it comes to name or brand name testing? 

Jian Huang (03:36): 

Well, I am not the leading expert, Mike, you are, but from a computer science point of view, it’s always this challenge of not being able to meet multiple criteria at the same time, right? So the number one criterion that you hear people talk about the most is time and cost, the time cost and the dollar cost, and there’s a limit to both of those. So that’s criterion number one. And criterion number two, everyone talks about needing to reach enough people in the real targeted demographics. However, the third criterion is, I want to say, the really important one. That is, when you engage these people under such strict limits, can you still get scientifically meaningful answers from ’em? Because even if you meet the time limit and you had enough people, so you have a large enough N, all of that is supposed to justify why your answer is meaningful. But I want to say the typical testing that we are seeing out in the field is not meeting the scientifically meaningful criterion. 

Mike Carr (04:53): 

And one of the things that we’ve done with your platform, and we just completed a study for a large soft drink manufacturer where we had 600 completes in two different parallel studies. And so we were able to do some micro-segmentation and some analysis that was still statistically valid and very insightful. But the thing that I really wanted to ask you about, because to me this is what sets your platform apart from pretty much everybody else that we’ve used. And we used to be part of Nielsen, the market research firm, before we started this company a few decades ago. So I don’t want to tell anybody how old I am, but it’s been a while. But one of the cool things about your platform is it delves into System 1 thinking as opposed to just System 2 thinking. So could you elaborate on just what is System 1 thinking versus System 2, and maybe why that’s a big deal, especially when it comes to name testing? 

Jian Huang (05:54): 

This morning I was having coffee with Gary, as in Professor Gary Steinberg, and we chatted about a few things. One thing that he said, which was amazingly insightful, and I’m just going to parrot it to sound smart myself too, is that names themselves are such an interesting construct in our society. If you think about it, almost all social identities are related to, or even solely about, tight, named groups. So names have a very special place, and it’s not just that you hear a name and say, I logically like it. That’s never the case. There is a System 1, intuitive, reactionary part to it. Some names, upon hearing them, you already don’t like; you don’t even want to know anything more about them. Some names just naturally give you this feeling of, I’m actually intrigued, I already feel engaged, I just hope to learn more about it. 

(06:57): 

So that’s where I want to say name testing that’s solely based on people’s logical answers, which are all that you can get through a traditional questionnaire, misses the mark. I can’t logic myself into loving something. I love it already. So then how is our platform different? Our platform is really a psychology playground, and it’s very playful. But the idea is, if we treat people as taking exams, like kids going through AP exams trying to get a five, right? In that case, their behavior is: I’m scared, I just want to give you the right answer. But if you let them play a little bit, give them some freedom, then we say we trust them a little bit. In that case, you get to see how people actually behave. Now of course the data is a little noisy, because humans are human; it is not clean, not rigorous all the time. 

(07:54): 

But as long as you have the computing methods to turn noise into actual insights, the notion of letting people play is a net positive. So that’s where our platform was designed from a completely different angle. And of course it was enabled by all of the progress in science. One thing in particular I want to name is social cognition. Not my field, so I’m parroting someone else. Social cognition is a place where, if we can indeed observe the collective attention, or shared attention, of a group of people, we get to understand what matters to them without needing to literally ask. And these are the reactionary responses: how is your gut feeling about this? So that’s where System 1 psychology data can be obtained. System 2 psychology, I want to say, doesn’t really exist without System 1, because if all you have is System 2, those are logical answers. When you take an exam, you are giving those same answers too. So just saying System 2 doesn’t really make a whole lot of sense. But when you say System 1, and then you say System 2, suddenly the two sides of the coin both appear. 

Mike Carr (09:15): 

That’s great. And I think one of the things that we’ve learned and suspected, and I think intuitively this makes a lot of sense, is that unless it’s a very expensive item, almost nobody thinks about the name; they react to the name. So you think about walking down a Target aisle or a Walmart aisle and you’re just glancing at products on the shelf, and if a name grabs you, or if the name and the packaging grab you, you don’t really think about, well, do I like it? It just grabs you. It just engages your attention, and then you take the next step. And so what we’ve very successfully used your platform to do, and I think it’s one of the most exciting aspects of research that we’ve seen in decades, is we can still ask those System 2 questions, which is what most research today does. 

(10:06): 

Okay, which of these names would make you the most interested in trying the product? Or which of these names fits the positioning? But 95% of consumers don’t even think about names that way. They just react to them. So I believe, and I want you to correct me if I’m wrong, there are multiple metrics that you guys allow us to look at, and they include the order of name selection. So if we throw some names up on the screen and let people, as you say, play with those names, move ’em around with their fingers, what we’re looking for is: is there one name that most respondents immediately pick, right? It’s like their first choice, and it’s often a very quick choice, and then it’s pretty definitive, right? They’re not going back later and saying, well, maybe there’s something I don’t like about it, once you get into that more System 2 mode. Or do they even pick it at all, right? If it’s such a boring, crummy name, they don’t even bother. So is that sort of how your platform works, or am I missing something there? 

Jian Huang (11:08): 

Well, I can’t tell you everything, then I’d have to kill you. But for your entire audience, I just want to say that we very openly state that we have not invented new science. No one can. What we did is we built an online platform that can make observations of this sort of thing more scalable than before. So overall, imagine there are two demographics, say female 18 to 34 and female 35 and above. In that case, exactly as Mike said, if one group just didn’t pick this name A at all and the other group all picked it, that’s already a difference. And then if one group picked this name early and the other late, you already know the difference. As with all big data platforms, a lot of these computations are based on probability. So it’s a bunch of probability distribution functions we calculate on the backend. 

(12:14): 

But the constructs we look for involve whether things are picked, so yes or no; whether things are picked early or not, so early versus late. And then latency is a very interesting thing too. You see latency as a metric used in System 1 ad testing almost all the time, and I’d say 90% of the time you see it as the sole metric that’s measured. So latency is important. What we’re seeing, though, is if you truly want to make testing scalable, you can’t assume the testing targets are giving you full attention. You cannot. They’re on their mobile device, they’re on their laptop, their kids are shouting in the background. So if you base it literally just on latency itself, your data is probably not as reliable as you want it to be. And here’s another thing that we know: when we are working with data, we find patterns left and right, and whether the patterns actually withstand other tests is the question. So in this case we use multiple, multiple probes. But overall, what we are seeing is, every time we see different demographics, and actually, Mike, you and I were on a call a few times to discuss what a pattern was saying, when we separate by micro-segments, we can talk about the stereotypes that we all have about people, well, actually primarily ourselves, and we can laugh about it. So that’s where this tool is actually fun to use. 
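To make the signals Jian describes concrete, here is a minimal sketch of how the three constructs he names (whether a name was picked at all, how early it was picked, and with what latency) might be summarized per demographic segment. This is not Mapprio’s actual backend, whose internals aren’t public; the data shape, column names, and example names are illustrative assumptions only.

```python
import pandas as pd

# Hypothetical response log: one row per (respondent, name) exposure.
# pick_order is 1 for the first name a respondent selected, NaN if never
# picked; latency_ms is the time from display to first interaction.
responses = pd.DataFrame({
    "segment":    ["F 18-34"] * 4 + ["F 35+"] * 4,
    "name":       ["Aviana", "Nimbus", "Aviana", "Nimbus"] * 2,
    "picked":     [True, True, True, False, False, True, False, True],
    "pick_order": [1, 2, 1, None, None, 1, None, 1],
    "latency_ms": [820, 1450, 760, None, None, 900, None, 1100],
})

# Per-segment, per-name summary of the three System 1 signals discussed:
# picked yes/no, picked early or late, and latency.
summary = responses.groupby(["segment", "name"]).agg(
    pick_rate=("picked", "mean"),            # was the name picked at all?
    mean_pick_order=("pick_order", "mean"),  # early (near 1) versus late
    median_latency_ms=("latency_ms", "median"),
)
print(summary)
```

As Jian cautions, latency alone is fragile when respondents are distracted, which is why a sketch like this reports it alongside pick rate and pick order rather than as the sole metric.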

Mike Carr (13:56): 

I think latency, and some of the flaws in some of the other methodologies out there that you’ve uncovered and helped guide us through, are hugely important. One of the things that we look at, and this may seem obvious to the listeners, but it’s not that obvious when you’re in the weeds looking at all this data, is you might find a name that almost everybody in the respondent base reacts to immediately, right? So there’s zero latency, there’s very little delay, it’s picked first. But then you delve into it and all of a sudden you find it’s a very polarizing name. That is, everyone picked it first, but you have a fairly sizable segment that just didn’t like the name at all, and then you have another very sizable segment that loves the name. And that’s where the micro-segmentation that you described is so valuable. So we may have names for some studies that really resonate with that Gen Z audience and that younger millennial, right? It’s a very hip, in-vogue, sort of cool, edgy-sounding name. They get it; they feel it’s for them. And so they’re the ones that are all scoring it immediately and love it. And then we look at the folks my age and a little bit older. We aren’t into all the Gen Z, younger millennial lingo. The edginess maybe turns us off a little bit. 

Jian Huang (15:20): 

Yeah 

Mike Carr (15:20): 

We react to it quickly, but it’s awful. 

(15:24): 

Have you seen that in some of the other research that you’ve done, maybe outside of the naming area? And do you have any opinions or guidance about whether that’s something to consider, and maybe there are some other factors to consider or not? 

Jian Huang (15:38): 

Naming and branding are closely related, but if you take one step further along the business cycle, you get to the value proposition, which is an important question in the innovation industry. If you take it one step further even, how was the experience, what did you expect, then you get into customer service. We’ve seen a lot of these patterns similar to what you described. Honestly, one of the most fun ones I can share is there are situations where, after you test using our approach, if you look at the audience as a whole, so in aggregate, all of the names or all of the items, whatever you’re testing, even if it’s a service item, come out to be almost the same in priority, almost exactly the same. In that case you look at it going, wow, okay, everything is the same. But when you do the segments, so a lot of tools don’t give you the ability, but say you can, right? When you can do the segments, almost never do things stay almost the same, 

Mike Carr (16:50): 

Right? 

Jian Huang (16:50): 

It’s almost always that suddenly you start to see the separation, see the poles pull away from each other. It is the fact that you have an audience that you are assuming to be the same. That’s the problem. And so I was discussing this with a marketing professor in our business school and he gave me a joke. It’s a great joke. That is: if you truly have an average customer, the average customer should be half male, half female. The average customer should not know what age he or she is. The average customer should own every single product in every single category and have experienced everything. And then if that’s who you are actually targeting, good luck. If that’s not, then you actually want to do the segments and see how things are different. 

Mike Carr (17:44): 

I love the segmentation that we can do with Mapprio. Let’s just say a fairly simple segmentation where you’re looking at gender crossed by age, or age crossed by income, and you really do get some insights, not just about differences. Because gals sort of think about certain things very differently than guys do, and they often are more empathetic, and they might like names that are higher touch, more inviting, and more friendly. And these are gross generalizations, but guys might like something that’s a little bit more techy and a little bit more innovative, not always. And then you dive into the age differences and you get an even finer cut: well, this isn’t common across all gals or all guys; it’s very different if you look at that middle segment of maybe the 20-to-40-year-olds versus a younger or an older segment. So I know in a lot of the analysis that we do, you do get that seeming sameness at the initial cut: all these names are okay, but none of ’em are great, they’re all sort of grouped together. But then you start breaking ’em out by age, by gender, by income, and other aspects, and you very quickly discern, oh my gosh, there’s a winner here. Not just on that System 1 reaction thinking, but on some of the other metrics we look at, which is pretty exciting. 
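As a rough illustration of the “almost the same in aggregate, very different by segment” pattern Mike and Jian keep returning to, here is a small sketch with synthetic numbers; the segments, names, and counts are invented purely for illustration.

```python
import pandas as pd

# Synthetic first-pick counts: in aggregate the two candidate names
# roughly tie, but gender-by-age segments pull in opposite directions.
first_picks = pd.DataFrame({
    "segment": ["M 20-40", "M 41+", "F 20-40", "F 41+"],
    "NameA":   [90, 20, 85, 25],  # hip, edgy-sounding candidate
    "NameB":   [25, 88, 30, 82],  # warmer, higher-touch candidate
}).set_index("segment")

print(first_picks.sum())  # aggregate totals: roughly a tie
# Per-segment first-pick shares: the polarization the aggregate hides.
print(first_picks.div(first_picks.sum(axis=1), axis=0))
```

Only the segmented view reveals that the apparent tie is really two strong, opposite preferences, which is exactly the case for micro-segmentation made in the episode.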

Ashley Elliott (19:08): 

Hey guys, thanks so much for tuning into this week’s episode of Naming in an AI Age. Join us next week for part two as Jian and Mike continue the conversation, discussing AI and its impact on market research. 
