Webinar

AI in market research 2024: State of the industry

"AI in Market Research: 2024 State of the Industry" is an exclusive webinar that dives deep into AI applications within market research: a pivotal session for market research professionals seeking to understand the evolving landscape of AI technologies and their impact on our industry.

Why watch?

In this session, Displayr's Tim Bock will guide you through:

  • Real Experiences: Learn from the Displayr and Q team's trials and triumphs with AI in market research software. Discover what AI initiatives made it into our software, which ones fell short, and the valuable lessons learned along the way.
  • Interactive Engagement: Participate in live polls and discussions to share your own experiences and perspectives on AI in market research. This is your opportunity to contribute to a broader understanding of AI's role and potential in market research.
  • Future Plans: Gain insights into the next wave of AI-driven strategies and technologies poised to transform market research. We'll share our roadmap and invite you to discuss future directions and possibilities.
  • Community Insights: Understand the broader AI trends in market research through aggregated poll results and discussions, providing a snapshot of industry-wide adoption, challenges, and opportunities.

Transcript

Today is all about AI in market research.

I'm gonna take you through our company's lived experience using AI.

I like to think we know a bit about market research and software, but I'm not gonna pretend I'm an AI guru. I'll share what we know. I look forward to learning from you.

I'll start with a quick overview of AI so that we can all use some standard language. I'm then gonna move on to what's worked and hasn't worked for generative AI at Displayr and Q. Then I'll talk about how we think about generative AI and market research. And lastly, we'll get onto our road map.

Artificial intelligence, or AI, has been around for a long time. Many old-fashioned statistical tools like linear regression and cluster analysis are now referred to as AI.

But today, I'm just gonna be focusing on the newest kid on the AI block, generative AI, and this refers to AI that can create new content.

Before going any further, I've got a quick poll for you.

So which of the following best describes where you are at with generative AI today? I'm really talking about your company. I'd love it if you could enter where your company is at in terms of its usage of AI, and we will share this data with all of you. We'll also share the data from the other webinars we've run, because we've run this webinar in two other time zones.

So I'll give you a few more moments to input your information.

So where are you with AI today?

I'll give you a few more moments. Seventy-four percent of you have voted. Looking for a few more to vote, if you could.

Closing in on my threshold.

As market researchers, you can hopefully appreciate the importance of a high response rate.

Alright. Nice work. That got us across the line.

And so you can see the results. These are very similar to the results we got when we ran the same poll in North America and in the UK.

So we've got about half of people still at the experimental stage. And we've got a small proportion of people, thirteen percent, who are getting productivity gains of five percent or more, which is awesome for you guys. I feel very happy for you there.

So I asked ChatGPT to create this image for me. You can see the prompt I used at the bottom.

It only took a few seconds to create, and I think it did a really good job. Now, I haven't used it just to show what generative AI is, because you know that by now. I've used it because the people who create the generative AI models are pretty unanimous in their belief that the way to think about generative AI is as an assistant, and you need to manage it just like you would manage a human assistant.

Generative AI models, and there are many of them with magnificent names out there, vary on three dimensions. The first is how the neural network architecture works. That is, what's the math they use to actually create the model from all of the data?

The second key variation is how big they are, and this tends to be expressed in terms of the number of parameters that are fit. GPT-4 apparently has a trillion parameters.

But I like to think of it in terms of cost. ChatGPT apparently cost something like a hundred million dollars to fit. What do I mean by the word fit? I mean, let's say you create a crosstab, and it takes half a second to compute and appear on your screen.

When they fit the large language models, that is the fitting process. It might take months, maybe even years, to fit, and just in compute costs to do all of those calculations, a hundred million dollars for the biggest of them.

The third dimension that's really relevant when we think about generative AI models is how they're trained. What is the data that they've been trained on? Most of the widely used generative models are large language models, which means they've been trained on words, such as Wikipedia or books.

You've then got a whole lot that are trained on images, such as Midjourney and DALL-E. And I've kind of misspoken a bit: trained on images and the words that describe those images would be a better description.

And then the leading model in the world today, GPT-4 Turbo, has actually been trained on multiple types of data. They use the term multimodal, a bit like us in market research with our mixed mode. It's been trained on language, it's been trained on computer code, and it's been trained on images.

This training is all important, and I'm gonna come back to it later. It has lots of implications in terms of how we use generative AI.

Most people have a passing familiarity with how the more traditional predictive models were trained, so I'm gonna use that as a launching point to explain how generative models are trained.

In the traditional types of models, you'd start by creating a database, or a big data frame to use a bit of jargon, or a data file, which contains predictor variables. Traditionally, these are often characteristics of people when we're talking about market research, plus some outcome variable or dependent variable. In this case, it's showing whether these people churned or didn't churn from a particular service. Now, in the jargon of AI and machine learning in general, this is referred to as labeled data. What that means is that for each row of data, we have a specific label that indicates the state of the data. In this case, did they churn or did they not churn? Or it could be a number indicating how many dollars they spent with us, something like that.
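To make "labeled data" concrete, here's a minimal sketch in TypeScript. The field names are invented for illustration; they're not from any real Displayr dataset.

    // A minimal sketch of what "labeled data" means for a churn model.
    // Field names are invented for illustration, not from a real dataset.
    interface LabeledCustomer {
      tenureMonths: number;               // predictor variable
      monthlySpend: number;               // predictor variable
      contractType: "monthly" | "annual"; // predictor variable
      churned: boolean;                   // the label (outcome variable)
    }

    const trainingData: LabeledCustomer[] = [
      { tenureMonths: 3, monthlySpend: 45, contractType: "monthly", churned: true },
      { tenureMonths: 28, monthlySpend: 60, contractType: "annual", churned: false },
    ];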

Now generative AI does it differently.

The way the various models are trained differs quite a bit, but large language models have as their input just lots and lots of words, for example, articles from Wikipedia.

These words are turned into something called tokens, which are usually just kind of subwords and punctuation, and some of the tokens get hidden.

And the way that the models are fit is they try and see how good they can be at predicting the hidden words.

And the really cool thing about that is you don't need to have a labeled dataset at all. This means you can just feed in any text you've got, and the model can fit to that text and become accurate on its own, without the need for a labeled dataset. That fundamental breakthrough in thinking about how to build models allows us to train on any amount of text, and that's the key insight that led to these models becoming so powerful.
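Here's a toy sketch, in TypeScript, of what constructing one of those self-supervised training pairs might look like. Real models work on subword tokens and use far more sophisticated hiding schemes; this only illustrates the idea that the text supplies its own answers.

    // Toy illustration of self-supervised training data: hide some words and
    // ask the model to predict them. Real LLMs use subword tokens and far
    // more sophisticated schemes; this is only a sketch of the idea.
    function makeTrainingPair(text: string, hideEvery = 5) {
      const tokens = text.split(" "); // crude word-level "tokenization"
      const input: string[] = [];
      const targets: { position: number; token: string }[] = [];
      tokens.forEach((token, i) => {
        if ((i + 1) % hideEvery === 0) {
          input.push("[HIDDEN]");
          targets.push({ position: i, token }); // the model must predict this
        } else {
          input.push(token);
        }
      });
      return { input: input.join(" "), targets };
    }

    // No labels were needed: the text itself supplies the answers.
    const pair = makeTrainingPair("The quick brown fox jumps over the lazy dog");
    console.log(pair.input); // "The quick brown fox [HIDDEN] over the lazy dog"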

Now, they tend to go through a second training step, which is often called fine-tuning, and which does use labeled data. OpenAI doesn't share how they actually do ChatGPT's training, but it looks like they fine-tune it using things like a bad-response tag as the label. So they'll feed in a whole lot of queries that people have entered and the answers that were provided, and the label they try to improve their prediction on is whether each answer was tagged as a bad response or not.
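Conceptually, the fine-tuning records might look something like the sketch below. OpenAI's actual format isn't public, so every field here is an assumption.

    // Hypothetical shape of fine-tuning records: labeled examples used to
    // further train the self-supervised model. OpenAI's actual scheme isn't
    // public; every field here is an assumption.
    interface FineTuningExample {
      query: string;        // what the user asked
      response: string;     // what the model answered
      badResponse: boolean; // the label: was this flagged as a bad response?
    }

    const fineTuningSet: FineTuningExample[] = [
      { query: "Summarize this table", response: "Here is a summary...", badResponse: false },
      { query: "Summarize this table", response: "I cannot do that.", badResponse: true },
    ];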

I'm gonna give you a worked example now to give you an understanding of how you can use generative AI to run software and to automate tasks. And I'm gonna explain the problem of how you create a conversational user interface, or UI to use the jargon, using generative AI.

Now what's a conversational UI?

ChatGPT is a conversational UI. You go in, you start typing or you speak to it, you give it words, and it figures out what to do as a result.

Displayr and Q are instead graphical user interfaces primarily, which means you see things and you click on them and interact with your mouse. So two different types of user interfaces.

So the first step in creating a conversational UI is you need to have an API for whatever it is you're trying to automate.

So in the case of Displayr, when we create a conversational UI, we're trying to allow people to speak or type into Displayr to tell it what to do. For example, create a crosstab. The first step is we have to create an API, which essentially means we have to create a way that a user can drive our software by writing code, rather than through the graphical user interface. And we've already got that. It's called QScript in Displayr. That's our API.

Step two is you need to select a generative AI model that is trained on a language similar to your API. For us, we use ChatGPT because it's trained on JavaScript, and our API, QScript, is very, very similar to JavaScript.

Then just as you would do when you use ChatGPT, we create a prompt.

This is a little part of the prompt. It's actually a very long prompt we use, and the key bit of the prompt is it contains instructions telling the model, the generative AI model, how to use our API.

So you can see it's got the kind of instructions you would typically give in the first paragraph here, but underneath, it's just got definitions about how our API works.

Then you need to create a way for the user to input their query.

And so we created Clippy, and it's very much a kind of ChatGPT-style interface with a wonderful Q logo over the top of it.

Then you substitute the query into the prompt. So if I go back to the prompt, it said insert query here. Whatever the user types, we insert there.

And then using code, we send that prompt to the generative AI model.

We get the code back. We check the code and resend it to the generative AI if there's a problem. It's just like when you use ChatGPT: you might ask a question, it gives you a bad answer, and you go, no, that's not what I meant, have another go. We do the same thing, but again with code. Then we get back this code, which is written in QScript, and we run it. And we run it just like you would run QScript in Displayr.

And if you've never done that in Displayr, I'll show you where you can do it.

Go and open the QScript editor, paste in your code, and run it, and it changes things, like creating tables or whatever. So that's the basic process of how you hook up generative AI. If you want to use generative AI to automate any process in other software, you need to have an API for that other software, and then you need to write code to hook it up.
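Pulling those steps together, here's a condensed sketch of the loop in TypeScript. The endpoint URL, model name, prompt wording, and validation check are all placeholders, not Displayr's actual implementation.

    // Condensed sketch of the conversational-UI loop described above.
    // The endpoint URL, model name, prompt wording, and validation are
    // placeholders, not Displayr's actual implementation.
    const PROMPT_TEMPLATE = `You generate QScript for a survey-analysis API.
    API definitions: (long list of function definitions goes here)
    User request: {QUERY}
    Respond with QScript code only.`;

    async function queryToScript(userQuery: string, maxRetries = 2): Promise<string> {
      // Substitute the user's query into the prompt.
      let prompt = PROMPT_TEMPLATE.replace("{QUERY}", userQuery);
      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        // Send the prompt to the generative AI model.
        const response = await fetch("https://api.example.com/v1/chat/completions", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: "some-model", messages: [{ role: "user", content: prompt }] }),
        });
        const code = (await response.json()).choices[0].message.content as string;
        // Check the code; if there's a problem, ask the model to try again.
        const problem = validateScript(code);
        if (problem === null) return code; // this QScript is then run in the application
        prompt += `\nYour previous answer had a problem: ${problem}. Please try again.`;
      }
      throw new Error("Could not generate a valid script");
    }

    // Toy check standing in for real validation against the API definitions.
    function validateScript(code: string): string | null {
      return code.length > 0 ? null : "empty response";
    }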

Now I'm gonna move on to what's worked well and what hasn't worked well for us.

So we built a conversational user interface using exactly the process that I described to you before to create tables. So you could type in, create a crosstab of preferred color by age, and it worked really well. We built the prototype, and we've abandoned it and not launched it. Why?

It's a gimmick.

Ultimately, it would be inefficient for the vast majority of our users.

Initially, you'd go a bit faster, because you just get to type in words and don't have to learn how to use the graphical user interface.

But you quickly get to a stage where it becomes very clear that it's more efficient to use a graphical user interface than to try and type things in with words.

And the best analogy to think about this is driving a car. Steering wheels are kind of optimized for steering.

If you had to use words, you know, turn a bit to the right, you'd end up in a lot of trouble. Right? You're either gonna sideswipe something, go off the road, or miss your turnoff, because our language isn't sufficiently precise to allow us to give steering instructions in words.

Whereas when we use a steering wheel, we get feedback from the car, it's just a much better experience. And that's generally the case with using data analysis software.

We get a lot of feedback that it's very fiddly to modify our visualizations, so we thought, let's build a conversational user interface for that. And we did it, and it worked really well. You could say to it, you know, hey, Displayr AI, can you please change the first bar to red? And it would.

But when you say red, how red do you really mean?

Again, conversational AI is not good for such precise things. Move the legend to the left. How far to the left? So, again, nice gimmick. We abandoned it.

We also, like most of you, I imagine, have got various AI tools for our teams internally to use, with the hope that they will make them go much faster, because you read a lot about, you know, forty percent faster using AI, and we wanted to experience that internally. We haven't experienced it, unfortunately. We've had very little adoption outside the marketing team. The engineers keep getting told that using Copilot will save them heaps of time, but according to them, it saves time if you're not an expert. If you're an expert, it wastes your time.

Where we're at today is we can't really work out if current technologies are only good for marketing or if only the marketing team really likes change.

We also tried to use AI to automate our support. This was done differently to the approaches I've described before. Here, we did set up a labeled dataset, and then we tried to fine-tune a large language model. The labeled dataset consisted of a whole lot of users' queries and the responses that we'd sent out to those users.

And the problem we got, and I should say we did it twice because we were so disappointed and hoped it would work, is that it just wasn't able to create good enough responses to share with our customers. We didn't feel that it was nearly as good as the support that our human team currently provide.

What went wrong? Well, two things went wrong, and they're really important. The first is we just didn't have enough data. We trained on hundreds of support tickets.

Training, or even fine-tuning, a large language model or any other type of generative AI model really needs millions of data points, which we didn't have.

The second thing, and this is really important, is that large language models are not trained on survey data, so they don't really understand it. So you can't even fine-tune them very effectively on it. What do I mean, not trained on it? Obviously, people's queries were expressed in words, but they're referring to concepts in their data, and those concepts weren't really understood by the large language models, because they hadn't been trained to understand them.

Don't worry. More failure to come, but we will get to successes.

The most common thing that we're asked to do via generative AI is to provide tools to interpret and summarize data for users, and we built such a tool. You created a crosstab, you got a nice little button called Summarize Table, you clicked it, and it gave you a text-based summary.

Again, abandoned.

The insights were banal. There were two bits to this. One is it was missing a lot of context.

So it was obvious to the AI that you might wanna know that there were more women than men in your sample, but generally you wouldn't.

But it would still tell you that. Similarly, it's obvious to you when you're doing a study for Diet Coke as a brand that you're gonna focus on the Diet Coke results; less obvious to the AI. The other problem, and the much more severe problem, is hallucinations.

What we found is that we could build summaries that were usually right. Maybe they're right ninety-nine percent of the time, but one percent of the time they gave the wrong result. Now, this is a problem for our clients, because they can't really say to their clients, hey, just so you know, I've used AI, and ninety-nine percent of what I'm gonna tell you was right.

So any responsible client would then have to read every single response and check the data to verify that it's accurate, and that's just less efficient than doing the actual interpretation yourself.

We created AI to improve the way people had written things in documents, and it just didn't work that well. Because it's not really core to our mission of helping people with data, we abandoned it.

We had an image-generation feature. Often in dashboards or in presentations, you wanna have an image, for example, teenagers drinking Coke. We built it, and it worked really well, but it turned out that Microsoft had only provisioned servers for this technology in Sweden, and we have lots of agreements with clients which prevent us from sending data out of their jurisdictions.

So we couldn't launch using the Microsoft services.

We could have rebuilt the whole thing from scratch, but that would have cost millions of dollars, and it's pretty easy to just use tools like DALL-E, ChatGPT, and Midjourney to create your images and paste them into Displayr. So we're gonna require that that's how you still do it.

We tried a page layout feature because that's something users ask us for. Failed.

We created a copilot feature to help users write our code.

Again, we got it working and abandoned it. The problem we had here was, again, hallucinations. Sometimes it would give you an answer that was not really the answer to the question you'd asked it. Now, Copilot-type tools can work very well for engineers, because engineers understand code and can verify that it's worked. But most of our clients aren't engineers.

And the issue that we were really worried about is they would use the Copilot. It would tell them how to write code, give them a wrong answer, and they wouldn't have any way to work out it was wrong, so we didn't launch it.

And finally, we're getting to a success. This is a small little success, but it's an instructive success.

We've got this new feature which allows us to summarize labels when you combine data. What do I mean by this? Let me just zoom in a little bit.

In this data file, let me zoom out a little bit first.

In this data file, I've got three variables measuring satisfaction with different aspects of a phone company and Internet service.

The ratings are on a scale of one to five, and the labels are fairly long. A standard thing users do is they'll want to look at that data, so I'll drag the variables to the page.

And when they look at them, they think, oh, I'd like to have them as a single table. In Displayr, you do that by right-clicking and going Combine.

So they combine together. Now look what Displayr has done. It has automatically created its own combined label, summarizing the underlying data, and it did that using generative AI.

And the reason that worked well is because summarizing is the thing that generative AI models are really good at. They're trained on text data. Those text data often contain summaries, so it's just something that generative AI has been trained to do really well.

The next thing we did was a feature for shortening labels when importing data. I'll illustrate the workflow here. We're gonna add a dataset.

Now, I'm gonna turn off using Displayr AI to show you what happens here.

You upload a file.

It's a very small little file so you can appreciate what happens.

So in my file, I've got these variable labels, and they're quite verbose, which just makes it hard to use in a report because you have to kind of rewrite them each time.

So what we've instead done is we've made it so that it will automatically tidy the labels. I'll go down, turn on Displayr AI, and upload the file.

I'm gonna click that button twice. And so you can see it's used much shorter labels, making it a much easier data file to work with.

Again, it's worked because this summarization action is a core competency of large language models.
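Conceptually, the whole feature can be as simple as one summarization prompt per label. This sketch is a guess at the shape, not our actual prompt; the endpoint and helper are placeholders.

    // Sketch of label tidying as a summarization prompt. The prompt and the
    // endpoint are hypothetical; this is not Displayr's actual implementation.
    async function callChatModel(prompt: string): Promise<string> {
      const res = await fetch("https://api.example.com/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "some-model", messages: [{ role: "user", content: prompt }] }),
      });
      return (await res.json()).choices[0].message.content;
    }

    async function tidyLabel(verboseLabel: string): Promise<string> {
      return callChatModel(
        `Shorten this survey variable label to a concise title, keeping its meaning:\n"${verboseLabel}"`
      );
    }

    // e.g. "Overall, how satisfied are you with the reliability of your home
    // internet connection?" might come back as "Internet reliability satisfaction".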

The next feature is actually the first feature that we built using generative AI, and we built it in, I think, two thousand eighteen or two thousand nineteen, well before most people had even heard the term generative AI. We weren't even using the term back then.

We were using the technology without knowing what it was called. This is automatically categorizing text data. Now, we keep releasing new versions of this, and it's something which, if you haven't looked at it this year, I'd encourage you to, because how it works has improved quite remarkably in the last couple of months.

We're gonna look at an example here that many of you will have seen before. It's an example that relates to what people dislike about Tom Cruise. Now, if you've been watching Maverick, you're thinking, how could anyone dislike Tom Cruise? But this data was collected around the time of a very messy divorce from Nicole Kidman, and a lot of people disliked him.

Now we go in, we select the data, we go text categorization, and the AI will start by creating a model for us with ten categories in English. We'll click cancel.

Oops.

Now there's some options I wanna draw to your attention. We can translate, so you can change the input and the output language.

You can also change how many categories it forms, and you might leave it at ten. What I do when I do text categorization, which I do quite a lot, is put in a larger number of categories, say thirty, and then manually merge them together, as that allows me to combine my own context with the automatic categorization. Now, rather than wait for this to compute, because it'll take a few minutes, we're gonna look at the one that I ran before.

And I want you to read the categories it came up with here.

So the AI has formed the categories, and it's named the categories. They're really great descriptive names for these different categories. Let's go and look at negative sentiments about arrogance to get a feeling for how accurate it is. So the first person said, he acts like he's different to everybody else. They didn't say the word arrogant, but the generative AI, which understands the English language so well, is able to go, ah, that means arrogant.

Arrogance, arrogant: a simpler problem.

"Argan": well, it worked out that wasn't a chemical and that it related to arrogance. Arrogant, arrogant. And this one's particularly clever. It worked out that when somebody says "thinks he's God," that means arrogant. And what makes that clever is there's another category called negative sentiments about Scientology, and the AI didn't get confused and think that the word God meant Scientology, whereas older-fashioned models for doing text analysis would have got that wrong, because they would have used a synonym-based approach rather than trying to understand the meaning.

Also note that the generative AI has been really clever in the background. It's figured out that some people belong in two segments. This is an option you can control.

So this person here has said the reason they dislike him is due to his religion, and it's gone, oh, okay, category four, Scientology. So it's worked out that Scientology is his religion, which is really pretty clever. And also, he is too good for everybody else, which is another way of saying he is arrogant. So I think this is at the stage where it's often much better than a person. I'm really, really happy with where this is at.

So automatically categorizing text works well again. I'll talk a bit more about that later.
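For the curious, here's one plausible way to frame categorization as a generative-AI prompt, sketched in TypeScript. The wording is invented purely for illustration; it's not the prompt we actually use.

    // Sketch of framing text categorization as a generative-AI prompt.
    // The wording is invented purely for illustration.
    function buildCategorizationPrompt(verbatims: string[], nCategories: number): string {
      return [
        `Here are ${verbatims.length} survey responses about what people dislike about Tom Cruise.`,
        `1. Invent ${nCategories} short, descriptive category names.`,
        `2. Assign each response to one or more categories by meaning, not by keywords`,
        `   (e.g. "thinks he's God" should map to arrogance, not religion).`,
        `Return JSON: [{ "response": number, "categories": string[] }].`,
        ``,
        ...verbatims.map((v, i) => `${i + 1}. ${v}`),
      ].join("\n");
    }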

Translating text: we've got a few features which do that. It works really well, as large language models are trained on multiple languages and are great at it. It's the same technology that's used for the various Google Translate features.

We built this tool called principal components analysis of text.

We were probably too clever here. We built it. We launched it. Virtually nobody uses it. Such is life.

Should have done some market research.

So I've taken you through what has worked for us and what hasn't worked for us. I'd love to know what you have used generative AI to successfully do. So I'm not looking for what you're hoping will work. I'm not looking for your proof of concepts.

I'm trying to understand what you've actually rolled out into your businesses to make your businesses better using generative AI. So if you could type that into the questions field in GoToWebinar, I'd love to see them and I'll share them. We've done this same webinar in the UK and North America, and we'll share the results from that with everybody. And as you type stuff in, I'll also share the first responses here in an anonymous way, of course.

So in the GoToWebinar questions box, please type how you're currently using generative AI successfully, if you are using it successfully, I should say. Very keen to learn from you.

What can you share?

Cool. Now, I don't wanna be too competitive here, but go Asia. In North America and the UK, people typed many, many things. I've only got a couple, so please type some in. This is the pride of Asia and Australia. Let's see how many applications you guys can mention. It'd be very helpful.

Oh, I forgot New Zealand. Go New Zealand as well. Papua New Guinea, go.

Singapore, go. Keen to see what generative AI features you're currently using. Please put them into the little window, and I will share some of them.

Thank you for those of you that have put some in. I'll just paste in what we've got so far, but keep adding them. I really do wanna learn what you guys are using.

So here are some of the initial responses that people have added. As I said, we'll collect these and share them with you across all of the webinars that we run.

Now I'm gonna take you through how we think about AI today based on our experiences.

I've expressed these as some principles, though that's perhaps a bit of a grandiose term.

The first one for us is to focus on efficiency. It's very easy to get captivated by things AI can do that probably won't make our clients a whole lot faster.

So we have to keep reminding ourselves it's not about the technology, it's about saving time. The second thing, and this is a really important learning, and I'll talk more about how to do this shortly, is that it's really important to keep the scope small. I find it helpful to use that metaphor of the AI as an assistant, which I talked about before. Just like when you're onboarding a flesh-and-blood assistant, it's often wise to give them small tasks and see how they go rather than giving them very ambitious tasks, and it tends to be the same in terms of getting generative AI working.

For this next principle, I'm gonna see if you can work it out on your own. What's wrong with the image below? It's been generated with ChatGPT-4 Turbo, which in turn, I think, sends the request off to DALL-E. The prompt was, you know, create Garfield the cat developing a generative AI roadmap for Displayr.

Now it very politely said that it couldn't do Garfield for copyright reasons and was I okay with the ginger cat, which I was.

But what's gone wrong here? Can you see it?

That's right. It's done a really bad job at generating the text in the image. It's got weird spelling and odd language at the top. Why is that? GPT-4 is, in part, a large language model, or rather, it's a multimodal model.

It's trained on lots of text data. It should do okay, shouldn't it?

Why doesn't it? Well, you've got to look at how it's trained. So it's trained on billions of words of text. So when you give it the job of generating text, it's fabulous because it really understands text very deeply.

They're reasonably good at creating images that people describe.

Why is that? It's because they're trained on millions of images, or maybe even billions, I don't know, and text descriptions of those images. But they're not trained extensively on images containing text.

And so there's actually relatively little data being used to explicitly train it for that, and that's why it does such a bad job. Now, this is actually a problem that is easily solved once it's been identified, but the best model in the world, GPT-4 Turbo, can't currently do it. And so any task I give it that relies on creating text and images at the same time, it's not gonna be able to do.

I'll have to modify it in some way; I could just put a text box over the top, of course. And this leads us to principle three.

And principle three is that a generative AI model's skills are determined by the data used to train it. So when we're talking about large language models, they're great at answering questions.

You know this already. They're great at summarizing. You know this. Now it follows from the two previous things. If you're good at answering questions and you're good at summarizing, you're gonna be good at text categorization, which indeed is true.

They're great at translation.

They're great at language analysis. They can point out issues and improve your styling.

And they're great at something called style transfer.

An example I've given here is I rewrite my job application as a song in the style of Shakespeare. The reason they're able to do that is they've analyzed so much text data that they've noticed patterns in terms of style, and they're able to reapply those patterns elsewhere.

They're great at encoding, that is, turning text data into numbers, so they're a great tool for sentiment analysis, for example.
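As a sketch of what encoding looks like in practice: you send text to an embeddings endpoint and get back a vector of numbers, which ordinary numeric models can then use. The URL and model name below are placeholders.

    // Sketch of "encoding": turning text into a vector of numbers via an
    // embeddings endpoint. The URL and model name are placeholders.
    async function embed(text: string): Promise<number[]> {
      const res = await fetch("https://api.example.com/v1/embeddings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "some-embedding-model", input: text }),
      });
      return (await res.json()).data[0].embedding; // e.g. a list of 1,536 numbers
    }

    // A downstream sentiment score can then be an ordinary numeric model over
    // the embedding, e.g. score = dot(embedding, weights) + bias.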

And they're great for creating conversational user interfaces because they're able to translate human language into computer languages because they're trained on both.

And those computer languages can then be fed into software to run. So these are the things that language models are really good at.

We'll come back to our roadmap and how it relates to these things in a second. Principle four is look for time-consuming jobs that align with AI's skills. There are two axes here. The first is, is the job time consuming?

We're best off focusing on time consuming jobs because then we're gonna get more of a benefit. And the second axis is, is the generative AI naturally good at the job? And this really comes down to, has it been trained on comparable data?

So text categorization and translation are the sweet spot. The generative AI models have been trained on this data and are really good at these tasks, and they're time-consuming problems in market research, so they're no-brainers.

Survey analysis is very time consuming, but the generative AI models are not trained on it at all. They've read descriptions of survey analysis. They've even read survey analysis outputs like reports. But they haven't been trained on the creation of survey analysis: they don't have any of the raw data, and they don't have how we turn that raw data into reporting. So while the job's time consuming, they can't naturally do it, and you're in danger-zone land here.

Principle five, it's important to really think through if there's a correct answer or not when deciding how you're going to use generative AI.

Think of significance tests.

These are not things that models have been trained on. Yes. They can deal with very simple significance tests, but when you get into the world of market research with all of our column comparisons and the like, not something they've been trained on.

And these are things where there are objectively correct answers. There are ways to read and misread significance tests. So you're really in high-risk land if you're using AI to do this kind of problem. Returning to data analysis for market research: there's not really a correct answer for most analyses, but the models haven't been trained on this kind of problem either. So it could be useful if you've got an expert on hand. Moving to the bottom right.

Often, you can use tools like Copilot to write code. That can be useful, because the models are trained to do that.

But there are objectively correct answers, which means you need to have an expert on hand to check the work of the assistant.

And, again, the sweet spot is the top right. Things like text categorization and even language translation, there aren't objectively correct answers, and the models are really good at them, so this is where you wanna focus your time.

And now let's move on to our road map.

When we do our planning, we look at it through the lens of what we call the data value chain. So let's quickly go over it. The first bit is you capture data. In market research, this is primarily questionnaires, but there are other things that we can use. You've got to store the data somewhere. You normalize it, clean it, recode it, etcetera.

You've got to create your tables. Maybe you're doing some advanced analysis. You've got to find out what is interesting. Then, and we've had to invent the jargon for this, you've got to do little simple calculations, like dividing one column by another or merging two columns. You've gotta visualize your data, create your presentation or dashboard, and then share it with your clients. And when you do all of those things cleverly, you end up with new strategies and happy clients, or happy stakeholders, or you're happy yourself if it's your own data.

So here, and this is a busy slide, and we will send this slide and a recording of this webinar to all of you, I've shown all of the little jobs to be done, or applications of AI, by these data value chain stages. Now, the color code is showing things we've already done. Green means we did it, it worked, and we've launched it. Red means we did it and it failed, and underline means it's on our roadmap.

So text categorization is at the top of our roadmap. It's massively better, but there's a lot more we need to do. We need to make it work better for big datasets, and there's a whole lot of other things we need to change to make it even better. It's by far our main priority with generative AI at the moment.

We need to do a copilot. We didn't launch the copilot we built, but we do still think we need it, particularly for the data preparation stage. We think it'll be very helpful there, because the data preparation stage tends to be where the more technical users play, and they'll do a better job of understanding whether the code is right or wrong.

I showed you that the conversational UI we built for tables didn't work well enough for our typical users, but there is a case where I think it is good enough. That's if you have stakeholders who you want to create their own tables, doing relatively simple things in a dashboard. We're gonna make that available as a feature in the future.

We're also planning to add functionality for checking reports for errors, and I should have underlined this one here because it's missing. We're also going to do some work on data cleaning. So let me change that live.

Now what I'd love to know is what do you think I've got wrong in the road map?

What should we be doing that we're not doing? What can we do that will help you? Again, please drop it into the questions field. What would you like to see in our roadmap?

What are the best ways that we can support you as a business? The more info you give us, the more we can actually do. Very keen to get your thoughts. If you've got any feedback on things in our roadmap that you think are a bad idea, please share that as well.

So please, using the questions field, share things about our roadmap.

So just to summarize, and don't stop sharing information. I'm very keen to get it. I've given you a quick overview of AI. We've looked at what did and didn't work for Displayr primarily.

I've taken you through how we think about generative AI and market research today and our road map. What questions do you have?

And I can already see the most common question. A quick reminder: tell us what you'd like in our roadmap. Add that into the chat feature, please. Not the chat feature, sorry, add it into the questions feature in GoToWebinar.

I should also add, if you wanna learn more and you're not a customer, please book a demo with the sales team. If you are a customer, just go through your normal support channel. That's support at displayr dot com or support at q dash research software dot com. Don't book a demo, because you'll get a salesperson, and you really need somebody from our support or customer success teams if you're an existing customer, as they're much better equipped to deal with the complicated questions that you will ask.

Now, if you're not a customer, I've added a little link into our chat field; you'll be able to see it if you want to book a demo. But what questions have we got? So the first question we've got here: is this available in Q too?

So the text categorization feature is certainly going to come to Q. But because Displayr is a cloud-based tool, we can roll stuff out quickly. When we roll stuff out to Q, we have to go through a much slower testing and release process, and so it takes longer to get into Q. In terms of the other AI functionality that we've got and that we're gonna work on, that isn't going to go into Q. Why? It's because Q is desktop software, and all of the generative AI technology is cloud based. Getting cloud-based software integrated into desktop software isn't really a straightforward or that useful thing to do, particularly as many of our customers who use Q express that they don't want to use any cloud-based technologies.

Let's see what other questions we've got.

Karen asks, can you do verbatim coding? And, yeah, look, I have done a bad job at showing this.

Going back to the example I was showing you before, if I have data like dislikes about Tom Cruise, and let's look at the raw data for that.

I clicked the wrong one.

So we're looking at the raw data here underneath, and you can see these are the verbatim responses that people have given, and to the left of that, the categories that they've been allocated to. The first category is positive sentiment about Tom Cruise, and we can see who was coded as being in it and who wasn't.

The next question we've got: can we have better outputs for charts and graphics, given the limited formatting capabilities? Yeah, that's not really a thing that generative AI can actually do. If you think back to how I explained integrating generative AI for a conversational user interface, generative AI can only modify existing options. It can't add new options to something. So we would need to change our software, add the additional customization options, and then hook them up. Generative AI is not a shortcut here, but we are always adding new options to our visualizations, and it's a major focus for this year.

Mike's asked: he'd like to have automatic creation of presentations.

I was gonna say that we don't believe that's possible, but you can already automatically create presentations of a sort in Displayr. So I can, for example, choose the mobile data file here, which is cell phone data, and click plus, report, and summary tables or summary report, and Displayr will create a short report for me automatically.

So we already do that type of technology.

Would we use the generative AI to produce that report?

No, we wouldn't. And the reason we wouldn't is the whole issue that I described before about hallucination.

It would just give you a report that you'd need to manually check and that would contain errors, and that's not the business we're in.

Nearly all of our clients are pretty accomplished researchers, and they aren't happy with the quality of generative AI output. They'd all love generative AI to automate away all the analysis and reporting, but the technology isn't close to there. And it won't be until people start training models on survey response data, which I don't think anyone in the world is doing. We're sure not doing it.

Amy asks about data file setup and rebasing tables to totals.

Yeah, we could do that via a generative AI user interface, I suppose. At the moment, rebasing to totals is just done directly. I'll show you how it's done in case you haven't seen it.

Let's go and look at this table here. Notice that somewhat satisfied is twenty-nine percent. I'm just gonna rebase it, and I do that by right-clicking and going Delete, and it's rebased all of those numbers. So that's already a pretty easy thing to do.

And Rob's got a whole lot of questions for me.

Well, it's a request list. Alright. So I will come back to that in the follow-up. Alright. I think I've addressed all of the questions people have. If I didn't, please reach out to me, tim dot bock, b o c k, at displayr dot com, or support at displayr dot com, or support at q dash research software dot com.

We will send you a copy of this recording. We'll send you the pages here. I'll send you the summary of all that information that has been provided to us by users about what they're using generative AI for and so on.

Thank you for joining this webinar. And for all our customers out there, thank you so much for being our customers.

