Survey reporting is changing—fast. In this 30-minute session, Displayr CEO Tim Bock reveals how AI is helping researchers move from data to decisions at lightning speed.
This video is also a first look at Displayr’s brand-new AI Research Agent—a breakthrough tool that automates the most time-consuming parts of research analysis.
► See the report created by the Research Agent featured in this webinar.
► See the presentation used for this webinar.
In this webinar you will learn
- How AI is turning 5 days of reporting into 5 minutes
- What’s inside Displayr’s AI Research Agent, including:
  - Auto-generated summaries of key findings
  - Visualized results with just a prompt
  - Strategic recommendations written for you
  - Dynamic reporting that updates when data changes
- What early adopters are doing (that you should be too)
- What this shift means for the future of your workflow
Transcript
I'm going to walk you through the basics of how AI is and will continue to make survey reporting fifty to five hundred times faster.
It was quite shocking when I was putting this webinar together to see just how quickly AI had changed things. As usual, I'm demonstrating from within Displayr. Sorry, Q users. All this tech requires Displayr.
We'll start with a quick overview of how AI in general is making things faster. Then we're going to look at how we can delegate analysis and reporting work to an agent.
Finally, we're gonna be the human in the loop, checking and correcting the agent's work.
As a species, we started inventing things around about three million years ago.
The first useful AI starts around nineteen sixty five with expert systems, and these use software to apply if-then rules.
We've got lots of these built into Displayr. There are seventy-odd statistical tests built into Displayr, and Displayr goes: if the data is of this type and it has a weight, apply this rule. So that's what an expert system is.
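Purely to make the expert-system idea concrete, here's a tiny sketch of that kind of if-then logic in Python. The rule names and conditions below are my own illustration, not Displayr's actual internal rules.

```python
def choose_significance_test(data_type, groups, weighted):
    """Toy expert-system rule: pick a statistical test from the data's properties.

    Illustrative only -- these are not Displayr's actual rules.
    """
    if data_type == "categorical" and groups == 2:
        # Weighted data calls for a design-adjusted test rather than a plain chi-square.
        return "Rao-Scott adjusted chi-square" if weighted else "chi-square test"
    if data_type == "numeric" and groups == 2:
        return "weighted t-test" if weighted else "independent-samples t-test"
    if data_type == "numeric" and groups > 2:
        return "weighted one-way ANOVA" if weighted else "one-way ANOVA"
    return "no rule matched -- ask an expert"

print(choose_significance_test("categorical", groups=2, weighted=True))
```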
Ten years after the rule-based expert systems came in, the statistical modelling started. It started with SPSS and SAS and their regression features in nineteen seventy-five and nineteen seventy-six, respectively.
The key thing about the statistical modeling world was the workflow: as a user, you'd have to work out, for a specific problem, what data you needed to use, then choose an algorithm, for example ordinary least squares regression, and then make a whole lot of assumptions in terms of linearity and normality and check and test them.
And that required a high level of expertise.
And twenty years after this came machine learning, and the barrier of expertise dropped massively, because the whole problem of making and testing assumptions ceased to be an issue with the really good models that came about. So, for example, support vector machines, random forests, neural networks: these could all just be chucked at any problem without worrying too much about things like normality and linearity. So that was a good step forward.
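To illustrate the shift in workflow, here's a rough sketch using scikit-learn (not a tool from the webinar) on made-up data: where the regression era required checking linearity and normality, a model like a random forest can simply be fitted and validated out of sample.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Made-up data standing in for survey-style predictors and an outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))  # e.g. ratings on five attributes
y = np.exp(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)  # deliberately non-linear

# No linearity or normality checks needed -- just fit and validate out of sample.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")
```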
And in twenty eighteen comes what most people think of when we talk about AI, which is the foundation models. And the innovation here was that rather than having data for a specific problem, you'd instead get all the data in the world of a certain type, such as all the world's text, and then train a model on all of it. And then it turns out they can answer just about any question in that domain.
And this led to a revolution in terms of the use of AI. It ceased to be a tool just for experts, and now literally billions of people are using it.
And in twenty twenty three, the first agents started to appear.
These are tools where you can set a goal, constraints, and high-level workflows, hook it up to a foundation model, and then the agent is able to perceive its environment, make plans, and perform actions designed to fulfil its goals.
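Before that example, here's a bare-bones sketch of what "perceive, plan, act" can look like in code. Everything here is hypothetical scaffolding for illustration; it is not how Displayr's agent is actually implemented.

```python
from dataclasses import dataclass

@dataclass
class ToyAgent:
    """Minimal perceive -> plan -> act loop, purely illustrative."""
    goal: str

    def perceive(self, environment: dict) -> list:
        # Observe the state that matters for the goal.
        return environment["outstanding_tasks"]

    def plan(self, tasks: list) -> list:
        # In a real agent, a foundation model would produce this plan.
        return sorted(tasks)

    def act(self, task: str, environment: dict) -> None:
        environment["outstanding_tasks"].remove(task)
        print(f"{self.goal}: completed '{task}'")

    def run(self, environment: dict) -> None:
        while environment["outstanding_tasks"]:
            tasks = self.perceive(environment)
            next_task = self.plan(tasks)[0]
            self.act(next_task, environment)

ToyAgent(goal="summarise survey").run(
    {"outstanding_tasks": ["scan data", "build tables", "write report"]}
)
```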
I'm going to show you a nice example of that later.
Coming soon: self-improving agents. These are ones where we set goals and safety rails, and they, on their own, measure their performance, rewrite their code, and create sub-agents.
And then the last step that we can forecast with some confidence is collections or societies of AI agents.
These will negotiate, share information between each other, and allocate resources. What remains to be seen, though, is whether they're going to be used by billions of people or, indeed, maybe just by the AI overlords. Interesting times ahead.
Now when we think about an agent, what most of us want is an autonomous agent. We want something that can just do it all for us.
Now agents today are already much faster than humans, and I'll show you an example of this. But you already know this. When you interact with any chatbot and it figures stuff out for you, it's so much faster than the three-hour wait you'd spend on a call.
And the big revolution that's really started to kick in this year is that, for some problems, the agents are providing better quality than graduates. And so we have here a four-point scale. On the far left, we can barely see it, is the student at school. Then we have the recent graduate from university or college, the entry-level worker for most market research roles. Then we've got somebody who's in their career; they know what they're doing. And finally, you've got the old experts, people like me. Now, where we are today is that the agents are, as I'm going to show you, for some problems already better than the entry-level graduates.
Another area where they're known to be quite a lot better now is for a lot of support-type problems.
So when should we use AI?
Three things need to be in place. It's got to be easy to spot errors. It's got to be easy to correct errors. And it's got to be faster to use AI.
That's the sweet spot.
So I'm going to show you what an agent can do. We're going to delegate some work to an agent. But before we do it, let me explain the problem we've got. Last time I presented this case study, I walked through how, in forty minutes, I, an expert, could analyse it.
I'll give you a moment to read the concept test.
So we did a survey of three hundred people. We asked them some standard concept testing questions, collected all of the data.
As I said, I've shown people in a previous webinar how you can get the answers in forty minutes. Let's do it much faster than that using the AI agent. So we're in Displayr now. We're gonna choose a new document. I am gonna choose one with my Displayr branding.
This is the workflow. We add data.
I'm using an SPSS data file. Just as in Displayr and our other product Q in general, the better the quality of the data file, the easier the job for the agent, because it's just got more information in it. It's cleaner.
Now the new option.
I'm gonna choose Research Agent. You'll see it's in beta. We're still trying to iron out some bugs, so we look forward to any feedback you have. Let's get it going.
So it's gonna start by scanning through the data, trying to work out what it thinks we were trying to achieve with the survey.
And it's doing this for a couple of reasons. One is just to figure some stuff out for itself. The other thing it's trying to do, though, is use this to generate a description of what we think the goals are. And the idea is that we edit it and give it more accurate information.
Some people don't want to fill this in. They just go, tell me what's interesting.
And that's really not how you should manage somebody.
When you're managing an agent, just like a human being, the more context you give them, the better the chance they have of being successful.
And so with the research questions, it's come up with these fairly general ones, but usually you've got, like, some cutoff you're trying to get to with a concept test.
So I'm being quite specific here. And really, the thing I wanna know, rather than just the purchase intent itself, is what seems to explain the purchase intent.
Are there things that can be tweaked to improve it?
There are some other questions, which I'm happy to answer. Alright. So that's a fairly good research objective. Now, in truth, if you gave this to a graduate on their first day in your office and said, go forth, there'd be little chance of success. But as I said, the AI agent is better than a recent graduate.
So it quickly scanned the data, trying to work out which data was relevant to answering that question. And the reason it's showing us this, rather than just going straight into the report (I'll talk about this a bit more later), is that when working with AI in market research, we've got a bit of a challenge, which is that our clients are looking for us to give correct answers. They're not okay with us being eighty percent correct and super fast; they're after super fast and one hundred percent correct. And so the workflow when working with an agent always needs to be check and correct. Now, here the agent thinks I'm interested in the duration of the questionnaire, which I'm not, so I'm going to deselect that.
And now we're going.
The agent is doing its thing. What's its thing? It's generating an analysis plan based on the information that I gave it: the sample description, the background, the research questions, and also the selected data.
When it's done the analysis plan, it's literally going to cross this table by this table. Then it's running these tables, and it's calculating statistical significance.
Then it's reading the tables.
It's trying to figure out what patterns it can see. It's drawing conclusions from each table. It's then organising these conclusions into themes. It's evaluating the research questions against those themes. It's attempting to draw conclusions.
If we've asked it for recommendations, it's coming up with recommendations. Then it's gonna start writing a report, which you can see it's done now.
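To give a feel for one of those steps, the "run a table and test significance" stage boils down to something like this sketch, using pandas and SciPy on made-up data; the real agent obviously does this inside Displayr rather than in Python.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Made-up respondent-level data standing in for the survey file.
df = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+"] * 100,
    "purchase_intent": ["Likely", "Unlikely"] * 150,
})

# One item from the analysis plan: cross purchase intent by age group...
table = pd.crosstab(df["purchase_intent"], df["age_group"])

# ...then attach a significance test so the agent knows whether any pattern is real.
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.3f}")
```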
In the report, it's putting together a whole lot of conclusions for me, and it's creating pages with charts and tables if appropriate. Alright. So I'll give you a moment to read what our little AI has done. It's really a big, magnificent AI.
Now we can follow these through and click the links.
And it's taking us to a chart it created on its own. I'll give you a chance to read the interpretation.
So it's pretty remarkable, I think.
Just like with a good graduate, it's written a report for us, but we have the job of checking and correcting it. And the jargon that's used for this at the moment in AI is we're the human in the loop.
For checking it, there's a few things we wanna do. We wanna clean and tidy the data, and, ideally, we would have done that before getting the research agent to do something. And, yes, we're working on automating this; an agent can do this too.
You wanna review each chart. Does the commentary match the data?
And on bigger studies you'd probably wanna work in batches. Because this concept test was short, I threw all the data at the AI. But let's say I had a usage and attitude survey: I might first get it to run all the brand awareness data, then the general brand health data, then the brand association data, then the usage data, and then get it to pull all of that together.
For correcting: you can fix the data. You can improve the chart, and I'll show you that in a second. You can give the research agent templates you want it to use, or you can do the formatting via the page master. And if you wanna update the text, you can manually edit it, or, as I'll show you, you can use the AI to automatically rewrite things.
Now, when we look into the results and the report in a little more detail, you'll see that the research agent is choosing from ten different ways of displaying the data. And it looks at the type of the data to work out which is appropriate. Now, the way that you can customize this is with the Save as Template feature in Displayr. Let's say you don't like pie charts and you just don't want a pie chart. You would change one of the pie charts to a bar chart, right-click and go Save as Template, and if you type in this name, then the next time the research agent runs, it will reuse your chart with whatever formatting you have applied.
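As a rough picture of what "looking at the type of the data to work out which display is appropriate" means, here's a toy mapping of my own; the actual ten display types and the rules the Research Agent uses aren't shown here.

```python
def pick_display(variable_type: str, n_categories: int = 0) -> str:
    """Toy chart-selection rule keyed off a variable's type.

    The mapping is an illustrative guess, not the Research Agent's real logic.
    """
    if variable_type == "text":
        return "word cloud"
    if variable_type == "numeric":
        return "histogram"
    if variable_type == "categorical":
        return "pie chart" if n_categories <= 4 else "bar chart"
    if variable_type == "grid":
        return "stacked bar chart"
    return "table"

print(pick_display("categorical", n_categories=7))  # -> bar chart
```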
So let's actually be the human in the loop.
We've got a title page.
The background: this is the context we provided, and this is kinda cool, right? The same prompt we gave the AI shows up as a page in our report.
We've got the table. We've got the summary page. Now, sometimes this will go over multiple pages; here, it's squeezed it into one. It hasn't succeeded that well, but it could have done this over ten pages as well, depending on how much information we've given it.
Now we can't really check this without reading the inputs.
Okay. So here we have a case where our graduate, our little research agent, has not, I think, been as awesome as I would like. It's gone with a word cloud, and it's read through all the text responses and pulled out some key points. We can do better than that. The AI agent can't do better than that, but soon we'll be able to. So I'm just gonna get rid of it. Goodbye. Don't care about that.
I'll get rid of this placeholder as well. So what I'm gonna do is take "What do I particularly like?" I'm gonna find that data here, and I'm going to tell it that I want to perform text categorisation.
There's a whole other webinar on text categorisation if you want to go into the details of this. You can do a lot here. You can edit how many themes it creates, what those themes are, and then I'm classifying the data into the themes.
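If you're curious what classifying open-ended responses into themes looks like in its simplest possible form, here's a deliberately crude keyword-based sketch on invented responses; Displayr's actual text categorisation uses AI models rather than keyword matching, so treat this only as a picture of the inputs and outputs.

```python
# Hypothetical themes and keywords -- in practice an AI model infers and applies these.
themes = {
    "Convenience": ["easy", "quick", "convenient"],
    "Price": ["cheap", "price", "value"],
    "Taste": ["taste", "flavour", "delicious"],
}

responses = [
    "Really easy to use and quick to prepare",
    "Great value for the price",
    "Loved the flavour",
    "Packaging felt flimsy",
]

def categorise(response: str) -> list[str]:
    """Assign every theme whose keywords appear in the response."""
    text = response.lower()
    matched = [theme for theme, words in themes.items()
               if any(word in text for word in words)]
    return matched or ["Other"]

for r in responses:
    print(r, "->", categorise(r))
```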
But I'm just going quick and easy, because my point here is just to show you how you can customize, or how you can work with the research agent, to make it even better. Alright. So now I've turned this into a table. Let's choose my preferred visualization, and I have a template that I'm gonna apply. I don't know why, but I really like this purple.
Okay. Now I can customize this. In this case, I'm gonna customize a little bit more. I don't need the title on the y-axis because it's already sitting here.
Alright. So it's gone. Now if I wanted to, as I said, I could apply save as template, give it one of those names that I mentioned before, and then the AI agent would use it going forward.
But I need it to provide an updated interpretation for me, so I'm just gonna say interpret it, and it will look at whatever data is on that page.
It remembered the prompt from before, but I could customize if I want to tell it something specific for this case.
So we've updated it. Now the next thing we might wanna do here, I'm gonna make it even more interesting.
Let's go down to this next one here. It's the same; I could do that same process here.
Here, I've got my purchase intent. Now, in this case, I've got purchase intent, and I've got a second question of priced purchase intent. Now, the research agent is not quite as good a researcher as me, so it doesn't know that the reason we do priced versus unpriced purchase intent in a concept test is because we want to compare them. So what I'm going to do is cut the data from the unpriced purchase intent.
I'll put it on this page here.
I'm not sure what I've done there, but I seem to have failed.
I don't know what I did there. I'm gonna drag it onto the page again.
So I can build up completely new analyses from scratch, right? The fact that the research agent didn't do it this way doesn't matter. I can do anything I want. Oops. Rather than do that, I will apply my template.
Get my good friend purple bar into play.
I'm gonna apply the template here as well.
Okay. And I see what happened. I typed over the same chart twice. So I'm then gonna find purchase intent.
Just gonna rename it so it's easy to use.
I'll put little text boxes on.
I'm gonna edit the comments.
Oops.
And I'm gonna make the point that it's not smart enough to make, but maybe would have got if I'd given it all the data, which is how the priced purchase intent compares with the unpriced purchase intent.
Okay. So that's this conclusion sorted. Now we could go through and keep customizing anything in the report as much as we wanted.
Let's make these a bit bolder.
Now, obviously, the text at the beginning might not be correct anymore due to the update. Now we could manually edit it, but what I'm going to do is I'm just going to delete it.
And then I'm going to select all of the data that we have here. Just gonna right click and I'm gonna go interpret data.
And now it's gonna go through that whole process again, but it's not gonna create the crosstabs. Now it's just going through, reading all the commentary that I provided and all the tables, and saying, okay, what are my conclusions gonna be now?
Now, this can take a little... oh, no, super fast today. Alright. And it's done something quite interesting as well. So before, it gave me one page; here, it's decided it needs to fit it into multiple pages.
And so we can follow each of the links.
Okay. And so we've now got a different report. We could go through and edit that however we wanted.
Now, the next stage here could be to send this off to PowerPoint.
Some of you will go, will they be editable? Well, if I want them to be editable, I can do that, but then I've got a smaller range of types of things I can do, because, in case you're not familiar, Displayr's got many more visualization types than PowerPoint, right?
So if you choose editable, often they will look a bit different to what you intended. But I could go PowerPoint, Microsoft chart, and now it will be editable. And if I wanna make it all editable, I just create templates, save it as template like I showed before. Anyway, so I'm gonna go export report to PowerPoint.
And you can see this slide has come through exactly as we wanted, and you can confirm here that it's entirely editable PowerPoint. But, of course, it's twenty twenty five, so we don't have to go with PowerPoint. Maybe we wanna create an interactive dashboard.
Let's do that.
Find the oh, I should have deleted this slide.
Now what I'm gonna do here is choose these two charts. And for those of you who aren't Displayr users, we can choose this little option here called list box filters, and it's gonna now add some filters onto the page. So now we're gonna build up a little dashboard for our clients.
Let's move these things to the right, and let's make them the same size. And, obviously, I could spend some more time tidying this all up quite a lot, but it gets a bit boring for you guys to watch.
Okay. So these charts are dynamic, which means if I publish them and share the link with a client, they can then do their own exploration. I don't have to run reports manually. But this text isn't updating, so let's make it automatically update.
That is pretty cool. So watch this. Let's filter it just to look at what the younger people think.
So this is a great opportunity for self-service, because the sample size descriptions have now updated. We can see the sample size. It's telling us what the conclusions are.
If I wanted to, I could give it some additional information over here in the prompt.
So here, it's using the prompt from before. I could tell it to only look at the priced purchase intent, or whatever, and it would then dynamically do that. So we share, publish the dashboard, and we're off to the races.
So we've gone through how AI is making everything faster. I delegated report writing to an agent, and gosh, wasn't it fast?
And then our responsibility as a human being, the human in the loop, was to check and correct the work of the agent, just like you would do with a junior researcher. Right? It's exactly the same process. Some junior researchers make some mistakes.
You don't know what mistakes. AI also makes mistakes. You don't know what mistakes. That's why you have to check.
The thing that I've hopefully communicated to you effectively is that while it's written the whole report, everything is editable, reviewable, checkable, correctable.
So that's what we think is the good workflow.