It’s been a wild six months in AI.
While 95% of AI projects fail, market research is different — real, practical wins are becoming easier and faster. But are you focusing on the right strategies… or chasing hype?
In this candid webinar, Displayr CEO Tim Bock cuts through the noise: what agents can really do now, why analytics still needs a verify-and-correct workflow, and the winning vs losing strategies for using AI in market research.
In this webinar you will learn:
- Conversational analysis is coming fast — what to do now
- Why AI can orchestrate analysis, but not replace it
- The verify + correct workflow every MR team needs
- Winning strategies: translation (text→code, text→categories), automating routine decisions
- Losing strategies: asking AI to do what humans can’t (e.g., advanced analyses)
- Live demos: AI for cleaning, coding, analysis, and reporting
Transcript
This is my semi-regular update on how AI is evolving and what the implications of this evolution are for the market research industry, principally focusing on data analysis and reporting.
I remain pretty shocked by the speed with which everything is changing with AI.
I'm going to dig into how AI affects market research. Most of my time is going to be spent looking at the new capabilities that have only become clear in the last six months or so. And perhaps they've only become clear to me, but that's what I'm going to be talking about principally.
I'm also gonna look at what hasn't yet changed. There are a few areas where I come across clients or colleagues from time to time who believe that AI can do things it actually can't. So I'll talk about those, and then I'm also gonna briefly review the things we've known for quite a while that AI can do, just in case you don't know them.
AI largely lives in our browsers. It can't manipulate physical objects. It can't pick up a pen. The days of the sci-fi robots aren't here yet.
As AI can't go to the pub or hang out near the water cooler, AI is usually missing some important context. So it's another reason why we humans are still relevant.
While I'm perpetually amazed at how well AI reasons, we do know that it's reasoning by working backwards from lots of patterns rather than by what the academics call symbolic and formal logic. And we know that one of the consequences of this is that there are certain types of problems it's not very good at. For example, a baby can learn a language with a much smaller amount of data than a large language model can. So humans are still much better at reasoning from small amounts of data, which is good. Again, we're still useful.
And maybe this explains why, despite all the magical things that are getting built with large language models and with AI more generally, most of the genius about what to build (actually, pretty much all of it) is really coming from humans. It's following our instructions. We don't have any documented examples, as near as I can tell, of this new technology inventing something profound and new. It more helps us invent.
This next one catches people out quite a lot.
AI can't really do maths, can't do statistical calculations particularly well either. It's nowhere near as sophisticated as statistical software from, say, the 1960s, which is very lucky for us here at Displayr.
But I keep coming across people going, "No, no, you're wrong, Tim. I use it. I get my survey analysis done." And I'm always quite surprised. So I'm going to show this by asking ChatGPT if it thinks it can do math.
So I go to ChatGPT and start a temporary chat.
Okay. The thing I want you to appreciate is these are very simple arithmetic mistakes.
Now, one of the funny things about large language models, well, one of the cool things, is they've recently developed this ability to remember things. That means it's actually quite hard to catch it out anymore when it does make a mistake, because it remembers the mistake and doesn't make it again. But where this affects us in market research is if you give it a job which involves lots and lots of separate calculations, there's a good chance that there are some mistakes in there. A very smart colleague of mine (not one of our employees, I will say) said to me a while ago: actually, I use it for all of my data analysis and it doesn't make mistakes. But think about it. If you've conducted a survey analysis and about ten thousand percentages or counts have been calculated, how would you know if ten percent of them are wrong? It's not like you can manually redo them all. So it's not very good at that yet.
Now, this problem is another example of hallucination. It's guessing the answer to things, and often it's a magically good guess because it's got a lot of patterns, but with calculations it tends to guess wrong. And it's not very good at checking its own work. It's getting much better, but it still can't reliably do it. When it makes a mistake, it doesn't usually know it's made that mistake. Again, something that hasn't changed yet.
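Incidentally, this is the heart of the verify-and-correct workflow: recompute anything numeric yourself. Here's a minimal sketch in Python (with hypothetical data and LLM-reported figures, nothing Displayr-specific) of checking percentages an LLM reported back against the raw data.

```python
# A minimal verify-and-correct sketch: recompute the figures yourself
# and flag anything the LLM got wrong. All values here are made up.
import pandas as pd

# Ground truth: recompute the percentages from the raw responses.
responses = pd.Series(["Promoter", "Passive", "Detractor", "Promoter", "Promoter"])
actual = responses.value_counts(normalize=True) * 100

# Percentages as reported back by an LLM (illustrative values only).
llm_reported = {"Promoter": 60.0, "Passive": 20.0, "Detractor": 10.0}

# Flag any figure that drifts beyond a small tolerance.
for category, reported in llm_reported.items():
    true_value = actual.get(category, 0.0)
    if abs(reported - true_value) > 0.5:
        print(f"Mismatch for {category}: LLM said {reported}%, data says {true_value:.1f}%")
```

Run on this toy data, the Detractor figure gets flagged (the data says twenty percent, the model said ten), which is exactly the kind of silent error that ten thousand unchecked calculations would hide.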
Now, one of the big fixes for the inability of the newer technologies to do math is that AI researchers invented the ability for large language models to use tools. And so, for example, when I show you a couple of the agents that we have in Displayr later, you'll see that they are calculating lots of things. But what's going on in the background is not that the AI is doing the calculations. It's instead that the AI is looking through all of Displayr's existing capabilities and going, okay, which technique do I wanna use? And so the AI is orchestrating the calculation of numbers, but not performing the calculation of numbers, if you get the distinction that I'm trying to explain.
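If you want a feel for what that orchestration looks like in code, here's a toy sketch: the model only picks which routine to run, while ordinary code performs the arithmetic. The choose_tool function is a hard-coded stand-in for a real LLM call, and the tool names are made up.

```python
# Toy tool-use sketch: the "model" chooses a routine; Python computes it.
import statistics

TOOLS = {
    "mean": statistics.mean,
    "median": statistics.median,
    "stdev": statistics.stdev,
}

def choose_tool(question: str) -> str:
    # In a real system an LLM returns this choice; hard-coded here.
    return "mean" if "average" in question else "median"

data = [4, 8, 15, 16, 23, 42]
tool = choose_tool("What is the average rating?")
# The calculation is done by ordinary code, not by the LLM.
print(f"Model chose '{tool}': {TOOLS[tool](data)}")
```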
Okay. Let's jump into a little more detail, though, on how AI is already useful in market research and what kind of impact we're seeing.
A widely publicised study by MIT says that ninety five percent of enterprise AI applications have failed. It's a sobering number, isn't it? But a recent study by BCG says forty one percent of people fear losing their jobs to AI over the next ten years. So AI failing ninety five percent of the time on the one hand, and people being scared on the other. Is it a hype kind of thing? No. It's nothing to do with hype.
I actually think the ninety five percent number is just wrong, but we'll come to that, and why we care, in a second. Now, fortunately, there was a study released last week of market research CEOs, obviously a very wise sample, conducted by proper market researchers, not those MIT people, such as the venerable Jean Marc Leger and Dave Schultz, and it shows really widespread adoption of AI in market research.
And I don't really come across any clients or prospective clients anymore who aren't using AI for something.
And so this industry is certainly not failing. And I asked ChatGPT whether we're unique as an industry, and which industries are gonna be most affected. It did one of those wonderful authoritative analyses it does. And sure enough, we sit there as one of the industries which it already thinks is pretty heavily disrupted, and I think that's certainly true. We'll look at some of the specific cases of how it's disrupted shortly.
And when I say shortly, I mean right now. So there are all of these things that most of us are doing already, and I'm just gonna quickly show them in case you're not doing them, and also as a way of getting us on the same page. And I'm gonna show off a new feature that our AI team built. So I go into this calculation menu at the top, and I'm gonna choose custom AI. And then I'm gonna draw a box, and this is where the results of my AI will appear. This is just like having, if you like, a version of Gemini or ChatGPT (this one is Gemini) within the app. But you'll see why this is useful in a second. For now I'm just going to do the kind of thing that you would do directly at ChatGPT or Gemini myself. Let's ask it to help. Maybe we'll zoom in a little bit for those of you that aren't looking at massive monitors.
So: I'm launching a new AI product designed for market research (I'm actually not doing this, but let's imagine I am). We might do this:
Product. Sorry. I can't spell, but, fortunately, it's good at correcting spelling. What other products like this exist?
So I'm getting it to do some secondary research for me on whether there's an opportunity for a synthetic respondent product.
Now I'm sure you're all doing this. And if you're not, you should be doing this. It's just a massive time save. And it gives me a report, and it tells me some of the key brands out there. So nice work. Huge time save. We’re all experiencing that. Or I could tell it to write me a questionnaire.
Surprisingly enough... well, surprisingly to me, because I always thought I was good at writing questionnaires. And I still think I'm better, to be honest, than what we're about to see here. But I find the questionnaires that it writes are actually much better than those that my students used to write when I was a teacher, an academic, and also much better than what most novice researchers write now. Weirdly enough, it's tried to write us code to do it here, which shows it does make mistakes, and I'm just gonna tell it that I want the output as text, which will avoid that issue.
So I've now got a questionnaire. You can try that at home. It's a pretty quick way of getting a questionnaire that we can iterate on.
Now it's also great at doing actual generation of synthetic respondents, so I'm gonna give it a new job. “Please generate twenty respondents providing a representative cross section of market researchers. So research into research.”
For each, describe how they would react to this new product. Alright. So this is how you can generate synthetic respondents. Obviously, there are more nuanced and complicated approaches to doing this.
Okay. So now we've got a sample of twenty people telling us how they feel. And this will allow me to also illustrate a little more about the tool that I'm showing you at the moment and why it's useful, because so far it's just done things you could have done in ChatGPT directly. But now we're gonna create a new little box on our page, and we'll zoom in again so you can see it a bit easier. We're gonna ask it now: what are the main themes in this table? And then we're gonna click on the little name above the table.
And so what this tool is, it's a very simple way that you can hook up your data directly to large language models. That's the key point of this custom feature I've got. And so it's now come down here, and it's got themes it's identified in the data, so that's pretty cool. And if you're not aware, you can get it to translate: please do this in French.
And now for those of you that don't know, this is the basic mechanism, the flow, that is used for most automated AI-driven text categorizations. And so we'll add a third little step in this process, where we'll insert one last little custom AI, and we'll say to it: okay, work out which of these... what's a better phrase? Which of each of the following themes (and then we'll click on the table with themes) describes each of the respondents (and then we'll click on the respondents). So just as I said, we're feeding the data into the prompt directly.
Please return the result as a table where there is one row per respondent. So we're trying to get it into a file format so that we can import it into whatever other tool we're using.
It didn't do a great job there, did it? Okay: which of the following themes best describes each of the respondents? AI is a bit like working with dogs and children; it doesn't always do what you want. Let's say each row is one respondent, and the columns show the themes.
There's always the fear that the AI is not gonna work on command.
Alright. Cool. And so now we've got raw data in the right format, and it's in French, translated along the way. Cool. So I think that is kind of cool technology, but you probably know about that technology already.
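For the curious, the mechanism behind that flow can be sketched in a few lines of Python: themes plus raw responses go into one prompt that asks for a one-row-per-respondent table. The call_llm call is a placeholder for whichever model API you happen to use; nothing here is Displayr's actual implementation.

```python
# A minimal sketch of the categorization flow just described: themes plus
# raw responses go into one prompt asking for a row-per-respondent table.

def build_prompt(themes: list[str], responses: list[str]) -> str:
    lines = [
        "Which of the following themes best describes each respondent?",
        "Themes: " + ", ".join(themes),
        "Return a table with one row per respondent and one column per theme,",
        "using 1 if the theme applies and 0 if it does not.",
        "Responses:",
    ]
    lines += [f"{i + 1}. {text}" for i, text in enumerate(responses)]
    return "\n".join(lines)

themes = ["Price", "Battery life", "Design"]
responses = ["Too expensive", "Battery dies fast and it looks ugly"]
prompt = build_prompt(themes, responses)
# result = call_llm(prompt)  # placeholder: send to your LLM of choice
print(prompt)
```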
Other things that you hopefully know already are it's great at writing code. It’s great at creating images. It’s fast, and it’s gonna keep getting faster. It’s so cheap, and it’s gonna keep getting cheaper. And it’s smart, and it’s gonna keep getting smarter.
So what are the new capabilities? We touched on persistent memory before. But the biggest breakthrough of the last six months isn't... I'm sorry.
I've just got a little note from Lauren telling me that she's having a problem with the volume being maxed out. I’m going to speak up a little bit. Somebody tell me if I’m now shouting. It’s not my goal. Anyway, so what are the new capabilities?
The biggest breakthrough of the last six months, as I started saying, isn't really technical. It's one of mindset. We've all become much better at figuring out how to use AI, and there are a lot of people out there who believe that even if AI never improved, we've got years and years of productivity gains just in using what's already built. And I'm one of the people who believes that. And I'll explain this using our latest agent, the data preparation agent. So I am going to bring in some data.
It's an example survey many of you will have seen before. And I'm going to right-click on the dataset and choose the data preparation agent. And it's gonna pop up a whole lot of options of things I could do for data cleaning. Now, I'm just gonna click continue rather than explain them. I encourage you to try it out. But I'll tell you now about the key logic of how it works and why this is a really important breakthrough, and it's something that we just didn't figure out until relatively recently.
So the kind of thing that a lot of people did, including us, when you're initially trying to get the modern AI or large language model techniques to work is you'd say to it, clean my data. And it tended to fail. Like, you could be impressed a little bit by some of the stuff it did or extremely impressed, but it still didn't quite get you to where you wanted to get.
And the breakthrough that we've had and a lot of people have had is it's just the wrong way to use the technology. The right way to use the technology is to break the problem into really small parts and then use the technology to coordinate those small parts. So, for example, what it's doing while I'm talking here is it's reading through all of the questions one by one, and it's checking that the labels and everything match the name. And then on a one by one basis, it kind of goes, Is this a good question name?
Is it not a good question name? And it just tries to change that one question name; it knows its job is looking at the question name rather than general data cleaning and tidying. Similarly, it reads through each of the labels and goes: is this a recommendation question? If so, maybe I'll calculate a Net Promoter Score. But it has that specific little micro task.
It goes through, and you can see it's given us a report of all the little things it's changed in this data file. We'll keep talking a little bit more about it. It goes through and goes: is it an open-ended question? If it is, I'm gonna read through the responses and check whether they've given me a garbage response. Is it an ID variable? If so, I'm gonna look to see if it's a duplicate.
Is it a variable set where somebody could have done straight-lining? If so, I'm gonna calculate the straight-lining, but only if there are five or more variables in the variable set. And I'm gonna have a more relaxed standard if there's a large number of variables in the variable set.
If I've got a lot of flags, should I delete some of those responses? So that's the kind of logic. You break the problem into lots of small parts, and then it turns out the large language models are great at performing individual tasks and orchestrating all of these individual tasks. So it's gone through and done a whole lot of cleaning. You can see what it's done here.
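To make one of those micro tasks concrete, here's a small sketch of a straight-lining check. The thresholds are illustrative assumptions only, not Displayr's actual rules.

```python
# A hedged sketch of one micro task: straight-lining detection on a grid
# of ratings. Thresholds here are illustrative assumptions.
import pandas as pd

def flag_straight_liners(grid: pd.DataFrame, min_items: int = 5) -> pd.Series:
    """Flag respondents who gave (nearly) identical answers across a grid."""
    if grid.shape[1] < min_items:               # too few items to judge fairly
        return pd.Series(False, index=grid.index)
    # Relaxed standard for very wide grids: allow one deviating answer.
    max_distinct = 2 if grid.shape[1] >= 10 else 1
    return grid.nunique(axis=1) <= max_distinct

# Respondent 0 straight-lines; respondent 2's single deviation saves them
# under the strict standard for a six-item grid.
grid = pd.DataFrame({
    "q1": [3, 1, 2], "q2": [3, 2, 2], "q3": [3, 4, 2],
    "q4": [3, 1, 2], "q5": [3, 5, 2], "q6": [3, 2, 5],
})
print(flag_straight_liners(grid))
```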
We're gonna go on and look at another example. Now I'm keen to get people's feedback when they use this tool. And the more feedback we get, the more functionality we'll add to it.
Now, last year I generated this cat on Chat... or is it CATGPT? You can see my prompt. I'm always amazed by these cats. I was trying to make a point, which is that people give AI problems that it's not good at. This was a naive dream, I said in that webinar. But just as with data preparation, I was entirely wrong. Yes, if you give ChatGPT your dataset and a prompt like this, you aren't gonna get something that's commercial grade. You're not gonna get something you can use. But if you break the problem down into all of its little parts, get the AI to do the parts it's good at, get traditional statistical software to do the parts it's good at, and get the AI to orchestrate or coordinate the whole activity, it's amazing what it can do. So if you haven't seen it yet, let's add it to the bottom of the report.
A second agent, though it was the first one we released, is called the “Research Agent”.
Now, once it's had a quick look at the data, it's gonna offer to run the whole data preparation agent again, and we're gonna skip that because we've already done it.
Now what it's doing is it's gone through, and it's looked at all the data and it's tried to figure out why we collected the data, and it's trying to encourage us to give it a good prompt. Because as many of you know, if you say to somebody, Here's some data, find the insights, you don't get much of value. You need to share the context with the AI just as you would with a person. And so in an ideal world, I would spend time explaining to you how to make this prompt a whole lot better.
We've got another webinar which talks about this tool, so I don't want to spend too much time on it; I'm just going to click continue. But when you use this, do give it more specific information, please. And check out our other webinar on the research agent, which is on our homepage. And so now it's going through and trying to work out which data to use.
And it's gone through, and there's a whole lot of data cleaning stuff and the respondent ID that it's not recommended; it's recommended pretty much everything else. Because one of the things it did in the background was categorize, or code, all my text data automatically, I'm just gonna uncheck the two open-enders I've got. I don't need them anymore, and I don't wanna distract the large language model by having it reprocess all that text data.
And so now it's going off and reading through the tables, creating a plan. So: reading through the data, creating an analysis plan, doing a whole lot of analysis, and then it's going to write a report, which I'll come back and show you. But back to new capabilities. I've shown you the data preparation agent. I've shown you the research agent. And with each of them, I've explained this core logic of breaking the problem down into lots of parts and how AI then becomes completely magical in its ability to automate stuff.
There's a next generation of magic, and we're not quite there. We're on the way there, but not quite. I'll show you another product. This is a great product called Bolt, and there are lots of products like it out there. These allow you to write in plain English what software you want to create. Now, we want to get to writing in plain English what research you want to create. We're not there. Working on it.
But I will show you the kind of technology that's now out there. You, as market researchers, may be worried or excited by what AI is changing; in software engineering, people are both excited and terrified. Because for what I'm about to show you, there's just no chance of having got this for less than about five thousand dollars if I was paying a consultant, and it could have ended up vastly more. I want an app that allows me to... so we're going to, together, build a text categorization app.
Now, what I'm illustrating here is this new type of user interface called the conversational user interface. The strength of a user interface like this, where you say what you wanna have in plain English (well, not the whole strength, but the first strength) is that you don't have to learn the app.
You can just go and do it. You can also (and I'm not going to illustrate this now, but you hopefully do this with ChatGPT already) have conversations with it and clarify things. It's a much more iterative workflow.
The less obvious benefit of these new conversational user interfaces, the reason why they're so cool, and it took us a while, or took me in particular about a year, to figure this out, is that for many things it's true this is a slow way to work. But the things for which it's a slow way to work are the things that are getting entirely automated by AI. And so the complexity of products like Q and Displayr exists because the user interfaces are designed so that you can efficiently do a lot of grunt work.
But if you get the AI to do that grunt work, you don't need to have such a complicated user interface. So our whole company is focused on this. You can see this thing is going off, and it's going through each of the steps. In about ten minutes, it's gonna have built the tool. I did it before the webinar. It's done. Let's click upload a file. We will find a decent file.
It's giving me the ability to choose the questions; I didn't ask it to. It's giving me a preview so I can verify that question ten is the one. In this case, it contains reasons for disliking mobile phones. We're going to tell it to classify with AI, so it's going to do the text coding.
I want to emphasize that this is, in theory, a completely working and usable app. If you have a routine business problem and you're able to fairly accurately describe what it does, you can build an app now to automatically do it. It's magical; terrifying if you're in my game of software, but magical.
So just as we showed you before, it's gone off to the large language model. I'm gonna now export it.
And quick as a flash, it's created an export for me. Click that again... yep. So it's exported the file. It's so fast. And we'll quickly have a look: we can see the Research Agent's results here, but first I'm gonna upload the file that I just created.
Okay. Let's have this new file at the bottom. And at the end, it's got this classification group. Alright, I'm just gonna drag it into our report so we can look at the table. And so a few things to note. With a few words, we built a text categorization piece of software that we could reuse again and again and again. It's amazing what it did, and we did it really quickly.
It actually isn't doing awesome text categorization. You'll see here it's got thirty seven percent of people unclassified. The data preparation agent that I showed you a little bit earlier, it was doing text categorization in the background before. It's vastly better than the tool I just showed you.
So I've got a separate question here, the same kind of question but about thoughts about France, and it's gone through and come up with really nice categories and managed to classify all of it. So there's still some skill and knowledge you need here. But you could keep iterating with this guy here and keep giving it additional feedback.
It's built the tool in the background, and you could say: look, currently it's not classifying all responses, and I want people to be allowed to be in multiple categories. So you give it conversational feedback, and it keeps going. Quite remarkable.
Now the other agent which was going in the background was the Research Agent. I wanna point out the report to you. This is the executive summary it's written. So it's written a proper summary with proper conclusions. I'll give you a moment to read them.
Now, I've spent a lot of my earlier life training university students, doing bachelor's and master's degrees at pretty well-ranked universities, so clever people, to write research reports. And virtually nobody writes a research report as good as this at the beginning. Notice the next bit is broken up into a nice hierarchical structure, and I can follow it up. And then I go to pages where it's created charts for me, and it's got commentary. So in no time at all, it's written a report, and it does a reasonably good job. Is it as good a job as you do? Probably not.
But a report like this, you could easily pay five thousand dollars or more for, and yet the actual AI cost was about a cent. So quite magical, quite remarkable. To trigger this, you go into the report tree, go to the research agent, and it runs the report. A question that might come up: in tracking, you want to make sure you've first created a variable which shows the time periods you want to compare, and then you tell it to do a comparison.
So, back in the background here, the AI has gone along, and it's telling us it needs an Anthropic API key. So it's not all automated and too easy, but it's tried to approve all of the steps. Quite magical stuff, isn't it? Alright, we've gone through so much, we're into question time. What questions do you have? What would you like to know? What would you like to tell me? Please use the questions field in GoToWebinar, and I will make my way through them.
Lauren asks: “Are the themes it finds good, though? How do they compare to what a human researcher identifies?” Okay. Great question.
So I'm going to not worry about Bolt, because I don't think the Bolt one I did is commercial grade. If I wanted to spend some time iterating on it, maybe I could get it to commercial grade. But we have a team which has spent five years building our text categorisation tool, so I feel you're not going to get there that quickly. Let's go into the thoughts-about-France data. We'll go in, we'll choose text categorization, and now I'm gonna tell it to create themes. I'm just gonna default to creating ten.
If I said to a human being, create ten themes, will I get exactly the same themes? No. I won't.
But if I talk to two humans and ask them to generate ten themes each, will I get exactly the same themes? No. They will not. I don't have any evidence at all at this stage that humans are better at creating themes than AI.
I think, actually, AI is better than humans at this stage. But, obviously, if you're a legend at doing it, that's not true. Now I quickly went through to create ten, and ten's kind of a magical number. For most datasets, ten does a reasonably good job.
If it was an important problem for me, though, I wouldn't get it to create ten. I'd have a slightly different workflow: What I would do is, I would tell it to create twenty or thirty themes depending on how much data I had, and having created them, I would get it to classify all the data into the themes.
And we have another webinar on text analysis, text categorization, which I encourage you to check out.
So for our users, text categorization is easily the biggest value point that they get out of AI today. The two new agents that I've just shown you are very new, though. The research agent is particularly valuable for a new researcher. The data preparation agent was only released last week, so we don't really know yet. So if we look at person number one, they've said fashion and food. Categories: food, fashion.
And if we look over here: food, food, fashion; down here, fashion. Which kind of makes sense. So I think at the classification level it does as well as human beings. Again, we know that if you get two people and give them the same code frame, they tend to only be around eighty percent consistent with each other anyway. And so the AI is not gonna be the same as you, because people are not the same at doing this either.
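If you want to check coder consistency on your own data, the arithmetic is simple; here's a small sketch of percent agreement and Cohen's kappa between two coders, with made-up labels.

```python
# A small sketch of inter-coder consistency: percent agreement plus
# Cohen's kappa for two coders applying the same code frame.
from collections import Counter

coder_a = ["food", "fashion", "food", "price", "food", "fashion"]
coder_b = ["food", "fashion", "price", "price", "food", "food"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: probability both coders pick a label independently.
counts_a, counts_b = Counter(coder_a), Counter(coder_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```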
The early case where I find that AI is systematically bad is where the data is ambiguous, but a human has context that makes the ambiguity disappear, and there can be such cases. Like in our company: when, say, a customer churns, which is thankfully very rare, we classify the data on the reasons why they churned. Now, if the person didn't reply to us at all in correspondence and they never opened the software, we don't really know, right? And in that situation, a human being, if they're in the team that's responsible for churn, might put it down as something like "they didn't like the software", but they really don't know. And that's where humans and AI start to show large differences.
Dan says, “Is Displayr working around converting insights into a short video summary as opposed to generating a traditional report?” No, Dan, we are not. It has never even been proposed to us before. It's an interesting idea. At the moment, our goal is very much on trying to automate the things that people use our software to do.
And so we're trying to make sure that somebody who has a workflow that currently might take them ten hours on a project, we can get that down to a few minutes. That's the kind of stuff that we're really focused on doing. A video summary would be broadening the bandwidth of what we're trying to do. We'd love to do it. It's a great idea, but, there are a lot of technologies out there which already do that kind of stuff pretty well, whereas there aren't a lot of technologies that are great at getting market research reporting analysis done faster.
Justina says: “I’ve used the Research Agent. Super useful. Thank you. But I find that I have to run it over and over and tweak the information to get what I want. For example, in the sample section, I mentioned I want to compare three groups. At the end of the research question, I want to profile all the research questions by these groups, like crosstab, but it doesn’t. Any recommendations?” I do have a recommendation for you there. And this is a really interesting point, so let's go into the Research Agent.
Now, one of the interesting things about AI, and this is one of the things about context (we're gonna skip this bit here), is that it's really, really important that you describe how the data has been collected. What does it represent? That's what this first thing is trying to do. When we redo this as a conversation, it'll check this more conversationally and explicitly.
Now, what you've done, Justina, is... it's interpreted this as: I want to describe the samples or the buckets I want to look at. Which isn't going to help the agent; it's going to confuse it. What it's trying to understand is, for example: is it a representative national sample of people aged eighteen plus in the USA, collected in August twenty twenty five in an online panel? So that's one example of a description of a sample. Now let me give you a different example sample and explain why they're so different.
So either of these could be a description of the sample. And why does it matter to the AI? Well, it matters hugely. Let's say that in the sample we've got seventy percent men and thirty percent women. If I believe this is true, that difference between men and women is a data quality issue.
If I believe this is true, that difference between men and women is an insight: men are much more likely to buy. Right? So you're actually unable to analyse data without having a clear description of the sample. This is something we always do automatically, but it's an example of something where, when you think about how you break research down into tasks, this becomes one of those little micro tasks you have to work at. So this is what you want to focus on.
You would then want to go into your research questions: you're going to profile all other questions by... and this is where you describe your three groups. But if you actually want to create these three groups and they're not in the sample today, the research agent is too crude. So, if you've got these groups, let's say they're based on age and you've decided you want three age buckets: today, the research agent will just crosstab by age and assume the existing age groups are the ones you want to compare. So if you really want to compare, say, people aged under forty-five with everyone else, you would first set up the data to do that. Let's say you just want to compare people in two buckets, older and younger people.
Yeah. So you'd wanna do your data prep like this and give it the specific instruction to go through. Now, clearly, we would love to be doing all that through a conversational user interface. We ain't there yet, but we know we have to get there.
Phil asks, “Can AI weight the sample?” Yes and no. If you have a weight in your dataset, it will use that weight. But if you haven't created the weight, you're gonna have to create it manually, old school: you go to your data section, click +, and choose Weight. And so let's say we're weighting by gender.
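For a single categorical adjustment like gender, the arithmetic behind the weight is just the target share divided by the sample share. Here's a minimal sketch with made-up numbers matching the seventy/thirty example from earlier; real weighting (rim weighting across several variables) is more involved.

```python
# A minimal post-stratification weight by gender: target share divided
# by sample share. Numbers are illustrative only.
import pandas as pd

df = pd.DataFrame({"gender": ["Male"] * 70 + ["Female"] * 30})  # 70/30 sample
target = {"Male": 0.5, "Female": 0.5}                           # 50/50 population

sample_share = df["gender"].value_counts(normalize=True)
df["weight"] = df["gender"].map(lambda g: target[g] / sample_share[g])

# Each male gets 0.5/0.7 ≈ 0.71, each female 0.5/0.3 ≈ 1.67, so the
# weighted gender split now matches the target.
print(df.groupby("gender")["weight"].first())
```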
Anyway, we have other videos and ebooks on how to do the weighting and lots of stuff you can do with weighting. So many of you are already mentally where we would like to be, which is: I just want a conversational way of interacting and telling Displayr to do everything. We haven't built that yet. We know we have to build it. It's a lot harder than it might look; well, harder for us, anyway. So we know we want to get there. I think everybody is aware of that particular future. It's just a hard future to get to.
Daniel says, “Are there any plans to incorporate AI in the presentation side of Displayr, for formatting slides, changing fonts, editing the aesthetics of charts?” Yeah, we definitely know we have to do that. It's gonna take quite a while.
There are lots of good products out there. I wish I could remember them. Ray Poynter told me a great one the other day, and it's just escaped me.
We do know we want to do that, but this kind of comes back to my response to Dan before, who was asking, you know, wouldn't it be cool if we could have video-generated reports, which I agree would be cool. We are very much focused on improving the efficiency of things that researchers currently do rather than on increasing the things that researchers can do. That's the strategic call that we've made at this stage, because we've only got fifty or so engineers, so there's only so much we can do, although we're obviously, in theory, getting much more productive using tools like Bolt.
Lauren asked, “How much of this is or will be available in Q?” None of this is available or will be available in Q. So, Q. I love Q. It's the sixth (scary, isn't it?) market research product that I designed, but I did it in two thousand and five. Displayr is a much better product. You might be attached to Q, and I can appreciate that. But the challenge that Q has is that it's a desktop application. That greatly limits the ability to do a whole lot of things, and that's one of the key reasons Displayr is cloud-based. Now, being cloud-based sometimes feels a little bit slow. It's actually not really, but it feels a little bit slower in certain situations.
But a cloud-based tool, which is what Displayr is, is the only way that most modern technology, such as large language models, can be incorporated. So, no, this ain't going to come into Q. And more generally, it's kind of obvious to us that products like Q and all traditional data analysis tools are just going to disappear completely, because AI will automate the grunt work and we will interact via conversations.
That's the end, guys. All of our customers out there, thank you for being our customers. Love to get any feedback, any of the stuff that you're seeing here, and have a great rest of your days, everybody. Bye now.
