How leading teams are streamlining workflows and what you can copy now
AI and automation are everywhere, but most research teams are still buried in manual, repetitive work while expectations keep rising.
Meanwhile, clients and stakeholders now expect:
- Faster turnaround
- Consistent, repeatable outputs
- On-demand access to data and answers
- More strategic recommendations, not just decks
So how do you deliver all that without adding headcount, sacrificing rigor, or handing control to a black-box AI?
In this webinar you will learn
In this session, we’ll cut through the hype and show what high-performing insight teams are actually automating today and how you can adopt the same workflows.
We’ll cover:
- The automation landscape in MR (AI + non-AI)
- Real survey workflows being automated right now
- Where automation improves quality & consistency
- Where to be cautious and how to stay in control
- Practical tools & techniques (not theory)
- How to meet rising client/stakeholder expectations
You’ll leave with a roadmap to automate without losing rigor or credibility. This is a practical, no-fluff session for researchers who need to move faster, without cutting corners.
Transcript
This webinar is all about automation and AI, where the focus is improving the efficiency with which we do market research.
As always, I'm presenting from within Displayr. And for all you Q users out there: no, a lot of this technology will never get to Q. Q is our beloved old piece of software, but the really new technologies need to go into Displayr, because they need to be hosted in the cloud. It's just how the new technologies all work, unfortunately.
We're gonna go through the three-step process for effective automation.
Step one is you understand the work to be automated. Step two is you identify the bottleneck, and then you design the best solution. And we'll spend most of our time on the third of these.
When automation fails, it's usually because the work being automated wasn't really understood.
Sometimes the senior manager's involved, and it goes like this: my team is hopelessly inefficient, I need to automate their work. We're gonna get better software so they can do the job better and easier and faster.
Or as one of my colleagues recently said, the analysis and the reporting is the messy middle. I just wanna automate that messy middle. Get rid of all of the mess. I want it out of my workflow.
But if you think about it for a moment, how can you automate something if you don't understand it in extraordinary detail?
And the messy middle, it's messy because it's really hard to do, thus making it really hard to automate.
Failed initiatives often don't get past the fog level of understanding.
We need to get to a stage where we can name the job a little more precisely; a vague name alone is still very hard to automate.
You need to get to testable success criteria, or it's very hard to automate.
To actually do much serious automation, you need to get to the level of what I call the happy path flow: a diagram or flowchart that takes you through all of the steps and decisions that have to be made on a standard project.
That's the detailed workflow you need to be visualizing, and I'll talk about how to do this shortly.
And then, if you wanna automate everything, you need to create what's called the all-conditions map. And I say it's called this, but ChatGPT suggested the name; I've not heard it before. It's basically a flowchart that deals with every possible variation, everything that can go wrong, and describes how to address it in your workflow.
The first step in getting automation to work well is really to talk to the people who do the current work and find out what is painful. If you instead take the approach of, you're the manager and you're gonna automate the way they work, you tend to fail.
The second step is all about visibility: making it easy to spot the work, track the work, and see where the pains are. Now, in engineering, the way this is done is with things called Kanban boards, where every single task somebody does appears as a card on a board. You get to see how long it takes, what order things need to be done in, and where the bottlenecks are.
It's a really key principle, making all work visible. Because if the work's invisible and you can't see who's doing what, it's really hard to spot the problems.
Now here's a much more detailed diagram. This is the diagram that I actually created when we were building the research agent. It gives you an idea of the level of detail we're going through here. And so we can zoom in a little bit.
So this bit is just ingesting data. And really, there's lots more detail than fits in this one box. And you go through, and you can see all the detail about how research is done. So this is just a description.
It's actually quite a high level description of how market research is done as a diagram.
And we'll come back to that shortly.
So once you have understood the work, your next step is to identify the bottleneck. And amongst people who study efficiency and consult in this area, there's pretty much a uniform view, which is that the way to improve is to start with the bottleneck. If you instead say, I just wanna automate everything, you tend to lose. And bottleneck is singular: there's just the one bottleneck.
This is a real example.
One of our clients some years ago had this issue, which was all about tabulation. They'd find that their projects would just get held up at tabulation stage. Most requests would take at least two days to perform.
And that meant that if you ran some tables and then needed a follow-up table, that was another two days, and it just became so, so painful. So this was correctly identified as the bottleneck. But the client, unfortunately, jumped to a very quick and simple solution, and that was the mistake: they decided to replace the data processing team with easy-to-use software so the researchers could perform the tabulations themselves.
And this is a case where the bottleneck was identified, but it wasn't really understood in sufficient detail.
You're about to see some mistakes on this next page, for which I apologize: the numbers here don't add up.
What was going on for this company was they had lots of different types of studies, needless to say. They had a whole lot of very simple concept tests that they were doing. These only took twenty minutes each, but the DP team was working on a first-in, first-served basis. So even though a concept test would only take twenty minutes, it still had to wait until they'd finished the big ugly tracker.
And the big ugly trackers were recurring. There were two kinds. There were ones where the questions didn't change; they're pretty easy to automate.
And then there were the ones where the brand lists changed. These were the ones taking all the time, and they were slowing down everything else.
And with the ad hoc studies, they had a similar situation. Many of them were just super simple, but there were some big ugly MaxDiffs. And, as I warned, these numbers don't add up; this bit is in there twice, right?
So you've got to check work when you outsource it. I will admit, these are all my mistakes. Anyway, you really need to go deep. You need to drill in and understand the problem in a lot of detail to be successful.
Now with the research agent, when we were building it, we did identify the problem. It's all the way down here.
And it's got two boxes, but they're actually the same pain point. They relate to automating commentary: it needed to be possible for users to edit that commentary. I'll talk about why that is a little bit later.
And so that's an example. Right? You've got this big massive document with all of these steps you go through, and you're identifying a single thing as the thing you need to fix next.
Alright. But how do we go about fixing the bottleneck? How do we design the best solution?
I'm gonna take you through the various options we have for automating things and how to think about automation. But I just wanna make a little point upfront, a point most of you are gonna ignore. People love to come in and go, alright, I'm gonna replace everything and start again. But here's where things go bad when you do this: if you decide to automate a lot of things, you need to understand them all in detail, and it's a lot harder to understand a lot of things in detail than a few things in detail. So that slows you down.
The bigger the thing you try and do at any one time, the more risk you're taking on. Particularly because you probably didn't do the understanding in sufficient detail, and so your risk grows.
The more the thing you're trying to automate does, the longer it's gonna take to build. And if you haven't done much automation work before, there's a whole lot of learning to be done. If you go for something big and ambitious, you're gonna make all your beginner's mistakes at scale. So the general advice is to start small.
Now in software engineering, this is a well known issue because what tends to happen is you spend years writing a piece of software.
And along the way, you develop some ugly bits of the software, and new staff come in and go, well, this is really ugly, we should fix it. And they look at the code base and go, well, the code's really complicated, we should rewrite this with new, modern tools.
Make it all modern and nice. It'll take no time. Let's just do it. It'll save us time in the future.
But what goes wrong almost always is you come up with these big new software projects and you rebuild everything. And it just doesn't work because there was some level of detail that was inbuilt into the old software that the new software can't do. And so it ends up taking a surprisingly long time when you rebuild anything from scratch. And this has led to a general view in software that often the smart play is to not try and rebuild everything in that way.
You instead apply something called the strangler fig pattern.
The basic idea is this. Rather than replacing a whole lot of stuff, rather than going, I'm gonna get rid of the messy middle and buy this new bit of software, you look at your process and you find just a little bit of it to automate or improve. And you leave everything else there. You just add on. And you keep adding on and adding on and adding on as your way of replacing the existing software. And it's called the strangler fig pattern because, by the time you've finished, you can't see the original tree, and the original software disappears.
It's a slow and incremental process.
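To make that concrete, here's a minimal sketch of the pattern in R. The function names, like legacy_tabulate, are hypothetical stand-ins, not anything in Displayr: the point is just that one front door routes the well-understood cases to new code and leaves everything else alone.

```r
# Hypothetical stand-ins for the old and new tabulation code.
legacy_tabulate <- function(request) paste("legacy handled:", request$type)
new_tabulate <- function(request) paste("new code handled:", request$type)

# The "fig": one function fronts both paths. Only the simple,
# well-understood cases go to the new code; everything else stays
# on the legacy path, and coverage grows one case at a time.
tabulate_request <- function(request) {
  if (request$type == "simple_crosstab")
    new_tabulate(request)
  else
    legacy_tabulate(request)
}

tabulate_request(list(type = "simple_crosstab")) # routed to the new code
tabulate_request(list(type = "tracker_update"))  # still on the legacy path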
So we're trying to get more efficient. What are the tools that we have available to us? We're gonna spend most of this webinar working our way through the tools which are to improve the workflow, to reuse prior work, to buy tools, and to create new tools.
So how do we improve the workflow? There are things you can do that make you more efficient and don't involve writing any code. What are they? Frameworks, checklists, and things like that.
One of the things that I've implemented at every research company I've worked at, and even when I was running insights teams inside corporations, is to insist that all data files arrive in a specific format, an SPSS file, with very detailed instructions about how it needs to be formatted. And the reason I do that is that if you use Excel or CSV files, it actually takes twice as long to do the analysis. A lot of people I come across who are using our software are using Excel or CSV files, and they ask, can you do it? And I say, you can, but you shouldn't.
And they often just ignore me. But this is one of those interesting cases where, rather than removing friction, you improve efficiency by adding friction. If you have a very clear standard for how the data needs to be before you start analysis, you tend to get faster and more efficient.
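If you want to see what the SPSS format carries that a CSV doesn't, here's a small sketch using the haven package in R (the file name study.sav and the variable q1 are hypothetical):

```r
# An SPSS .sav file carries metadata that a CSV loses, which is
# where the time saving comes from: nothing has to be re-labelled
# by hand before analysis.
library(haven)

d <- read_sav("study.sav")  # hypothetical file

attr(d$q1, "label")   # the question wording, e.g. "Overall satisfaction"
attr(d$q1, "labels")  # the value labels, e.g. c(`Very dissatisfied` = 1, ...)

# The same data as a CSV would arrive as bare numbers or strings,
# and rebuilding the labelling is the part that doubles the work.
```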
Another example of how you can improve processes, which I think a lot of market researchers should do more than they currently do, is to focus on asking open-ended questions in a way that makes them easy for AI to understand.
So the classic way, since the beginning of time, that market researchers have followed up quantitative questions like satisfaction and NPS is to ask, why do you say that? And the quallies tell us that's a great question, but it's actually a dumb question in quantitative research. Because in qualitative research, when the person says, that's how I feel, you get to follow up and ask, and why is that? And you can keep asking, you know, five whys or whatever, to get the detail. But in quant, you're just left with this garbage data. AI can't interpret it because a human can't interpret it.
Much better to be explicit. So if you're following up an NPS rating: what do you dislike, and what do you like? That ensures you get data that the AI is probably gonna get right. And you can go even further: if you don't want the AI to have to figure out whether people mean price or reliability or whatever else, because maybe you've got a bad AI, you can force the respondent to give the data in a structure that's easy for the AI, though maybe that makes it harder for the respondent. So whether the middle or the right option is the best one, I'm not completely sure.
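As a sketch, the explicit version of that follow-up might be worded something like this (hypothetical wording, not a recommended standard):

```
Q5.  How likely are you to recommend [BRAND]? (0-10)
Q5a. What, if anything, do you LIKE about [BRAND]?
Q5b. What, if anything, do you DISLIKE about [BRAND]?
```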
The idea of reusing prior work as a way to be efficient is well known to everybody. You copy your old questionnaire. You edit a copy of an old report.
You edit copies of old charts. You reuse a standard workflow. But there are more sophisticated, automated ways of reusing prior work. Let's have a little look at some of them.
So, yeah, for those of you who haven't seen Displayr before, let's say that I wanna create a table. This is the top two box scores.
And I wanna cross tab it by a bunch of things. And it's drag and drop. So it's pretty simple to use. Right?
And maybe I wanna format this a little bit more.
And how should I format it? Let's change the style.
Now if I wanna create another table, I don't wanna have to repeat all those steps. So the standard workflow that our software these days supports is you duplicate work and then you modify. And so now I'm just gonna swap in gender for age, and it remembers the format. It remembers everything from before.
So duplicate and modify is one of the ways of reusing. Another way of reusing is templating systems. And if you haven't seen it in our software, this has improved a lot in recent years. So I might want to apply a template, which I'll do now.
Then go apply template.
And I'm gonna choose my preferred bar chart with labels, so I don't have to create the visualization from scratch. This way, I can create a uniform style of visualization that I apply across all of my presentations.
You don't think this is pretty? Fine. Create your own. Save it as a template.
Another form of automation that's increasingly popular is using AI to write commentary.
So let's have a go at doing this. I'm gonna go into the calculation menu, I'm gonna choose custom AI, and I'm gonna write a little bespoke prompt.
Please summarize my table. And I'm gonna click on the table that feeds the data in, and let's run the AI.
Well, it's a lot there, isn't it? It's too much for me.
Keep the summary short.
Two paragraphs. In particular, tell me about things that are statistically significant.
So we update our prompt and tell it to calculate again.
Alright. Cool. Now, having done that, just as with the visualization, I can save this prompt as a template, and then I can reuse it on other things.
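Pieced together, the refined prompt we ended up with reads something like this (a paraphrase of the demo, not the exact wording):

```
Please summarize my table.
Keep the summary short: two paragraphs at most.
In particular, tell me about things that are statistically significant.
```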
Now prompting is the modern way of automating many things, but there's an older way. And it's still the main way for real automation, which is to write code from scratch. Let's have an example of this.
So what I'm gonna do now is drag across the table, and I'm gonna insert a custom code calculation. This is gonna create a calculation written in R, but in the background it can also reach a lot of other languages through R, such as Python, HTML, and JavaScript. So what I'm gonna tell it to do, though, is I'm gonna say, okay.
Here's my table. I'm gonna hook it up to the data, the same approach as I showed with the custom AI. Alright? And then I'm gonna do this special bit of magic called a shebang, which is a hash followed by an exclamation mark. And we can give it an instruction. We're gonna say: please create a pie chart.
Then I click this button, and that tells it to turn my instruction into code.
And so writing code is the standard way that most automation is done, and increasingly people are using AI to write the code, which is what I've done here. Let's zoom out and tell it to calculate.
Cool. I've now got a pie chart. Now, this ain't perfect. It's got the NET on it.
It's not smart enough. And I emphasize that this is not a smart way to create pie charts in general. The much smarter way would just be to go and use the user interface. I'm just trying to illustrate the principle here.
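For a sense of what the AI writes for an instruction like that, here's a rough R sketch (my.table is a stand-in for whatever table you hooked up; this isn't the exact generated code):

```r
# A sketch of AI-generated code for "please create a pie chart".
# Assumes the linked table arrives as a named numeric vector or
# one-column matrix called my.table.
values <- as.numeric(my.table)
labels <- rownames(as.matrix(my.table))

# Dropping the NET row is the fix the version in the demo missed.
keep <- labels != "NET"
pie(values[keep], labels = labels[keep])
```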
But where writing code becomes valuable is when you're trying to do something that's a bit unusual and nonstandard. Now, the example I showed here is a pie chart. Right? Let's imagine I really wanted this.
How could I trick this up? Or rather, how could I make this even easier to reuse? Right?
At the moment, I've got working code. The first thing I would do, and this is, again, an advanced technique for automation, is start customizing the user interface.
I'm gonna paste something in here, which I've predone. So now I've added a little control here so I can select data.
The next thing I'm gonna do, though, is replace this R code with some code that my colleague Misha wrote to do something very exotic called a waffle chart, which is not a chart type that's provided in Displayr.
And I'm pasting in a lot of code here, and the code that I'm pasting in is in three languages: JavaScript, Observable HQ, and HTML. Four languages, in a way, counting the R that wraps it.
And so now I've got this very exotic, very bespoke visualization, and I could customize it however I wanted in code. But that doesn't mean everybody needs to know how to write code. Just as I showed you before, I can go save as template, and then it can be reused, and any of my colleagues can find it in the menus.
A great approach to doing automation.
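Misha's version is bespoke JavaScript, but to give a flavour of the chart type itself, here's a rough equivalent using the open-source waffle package in R (an illustration with made-up numbers, not the code from the demo):

```r
# A waffle chart: one square per percentage point, hypothetical shares.
library(waffle)

shares <- c("Detractors" = 30, "Passives" = 25, "Promoters" = 45)
waffle(shares, rows = 10)
```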
Now when you're reusing, there's a more general play built into all of our software, which is updating. I keep coming across users who don't do this, and it shocks me, because it's one of the main reasons we actually wrote Q back in the day, and it's the main way of saving masses of time on many studies, particularly trackers. So here we've got a tracking study.
We're looking at the data here. It says AT&T's Net Promoter Score is thirty-one. It's a pretty detailed chart. If I had to recreate it every time I update the data, it would take a long time.
I've got the last four periods shown as a trend.
How do I update? Well, I click on the data file, and I just choose update, and I swap it for a new data file with new data. Now I could do this for a tracker, which is what I'm doing now. The same approach could be done to redo a study in a new market or to redo a study for a new concept.
It does a quick check, looking at the data to see if it's acceptable and how it's changed. I'm happy with that. And quick as a flash, you'll see it's now showing the NPS has changed from thirty-one to thirty-four. And we're now seeing twelve weeks of data, and nothing is significant anymore. So lots of things got changed automatically. It's the classic reuse play: just refresh the data.
So we've gone through reusing prior work, and we've also started looking at creating prompts and writing code, which are two of the ways of creating new tools. Another standard approach is to just buy a tool. Go back and look at the approach I'm describing: you understand the problem in detail, you identify the bottleneck, and then you fix it. You don't have to do all of that yourself.
If you've got an expert selling you software who has understood the problem, maybe their tool does the whole job for you.
Now, based on lots of requests from users, we are introducing a completely new way of using Displayr, which is via a conversational user interface. Hopefully it went out twenty minutes ago, but I'm not sure; it should be released today. Fingers crossed.
Excuse me.
Let's have a look. This isn't the normal Displayr version. This is a test version. Fingers crossed, we could see some weird stuff.
You'll see it's changed quite a lot. In the beginning, there's a conversation. We can ask it questions, but it's giving us suggestions.
Now it's not gonna give us options for everything. It's just showing, for reasons I'll explain shortly, the happy path, which is the standard way of analyzing a fairly straightforward survey.
So the first thing it wants to know, now that I've added data, is what I'm trying to achieve.
I need to develop a strategy for increasing tourism to France.
Please share differences on how France is seen by different people.
Alright. It's a bit inarticulate. The more detail you give and the more articulate you are, the better the AI can obviously do, because this is just another example of prompting.
Now the AI is gonna look at this and decide if it's good enough. And if it is, it's gonna ask us who the data is about.
Roughly representative sample of US residents eighteen plus. It turns out you can't analyze market research data at all without knowing where the sample is from, which is why the AI asks for it.
And now it's starting to go through a more detailed workflow. It's going to recommend that we clean the data. Now, this can take about ten minutes, because it will go through and even code all the data automatically. It's pretty cool. It can delete respondents. I'm just going to get it to do a small number of things, because I don't want you to get too bored waiting for it to work. So it'll just do some basic data work: create top two boxes, check that the ordinal data is ordinal, and so on.
It can do a lot now. As I'll come to a bit later, it's not gonna do exactly what you would do. There are things it doesn't automate yet, such as looking for speeders, which it will do shortly, but it's a great place to start. I'd encourage you to use this data preparation tool and to give us feedback on it. I think it really automates a lot of work and makes life really easy for users.
But at the moment, it's set up so that if you knew nothing, it'll still do a reasonable job. So it's trying to serve both kinds of users, the advanced user and the novice. And you get a lot of ability to customize, as I showed you before. You can choose what you want to run.
Anyway, we've still got much more work to do on it. It's gonna go through now and encourage us to create tables, and then it will show us how to create visualizations. It'll automate the commentary, create an executive summary, and then we can export off to PowerPoint.
So that is the workflow the conversational user interface will encourage users to follow.
So you can buy tools.
If you're a Displayr user, hopefully you're seeing stuff you like. And you can create tools. Prompting is obviously the big thing that's changed for all of us in Displayr. We've already looked at the custom AI calculation, and we used prompting to get code written.
There's also text categorization, which is the most widely used of the AI-based tools in Displayr. There are ways to create variables where, rather than writing code, you just give it a prompt. And then there's the research agent as well, which automates reporting for you. Lots of stuff there.
The big trend that the whole world is obsessed with now is building agents. They're not actually at a stage where a normal person, by which I mean a non-engineer, can be that successful with them, but I think that will change pretty quickly. I'm gonna walk through the process of building one. There are many low-code automation tools, which can be used to automate things like accounting and various kinds of administrative processes. And then there's the more general toolkit of writing code, which I illustrated before.
But a question that's important to understand, and I think is often misunderstood, is: how good is AI today? Now, the way AI works today is that if you give it really small, well-thought-through tasks, it's pretty magically correct. So if you want to know whether "this sucks" is positive or negative sentiment, AI will solve that for you, and it will do it really, really well. That's a micro task. Mezzo tasks, however, it's not so good at. So if you say, read through the text, identify the themes, and tell me the percentage in each theme, it'll either fail, or it will do it but give you the wrong answers. And with a bigger question like this, it almost always gives you the wrong answers.
And so that means you need more complicated workflows where there's a human in the loop. You want to get the AI to do these mezzo tasks, but usually it can't unless you help it, which means you need to be using tools that allow you to check and correct.
And so you saw, for example, when I was showing you the data preparation agent for cleaning the data, it asked me which of the tools I wanted to use. It allowed me to check what it was gonna do and to correct it.
Similarly, if you use our text categorization tool, it can automate the process, but there are mechanisms for checking and correcting. But how does it manage to automate the process if, as I just told you, AI can't do it? Well, it's because in most great AI tools today, the AI isn't actually the magic. That's probably not quite accurate:
it's kind of the magic, because the AI gives you the ability to converse. But usually there are more traditional programming tools being used by the AI in the background, tools which also control the AI. So if you remember that big diagram I showed you before for the research agent, that's really the architecture of how the research agent works. It goes through a whole lot of steps.
Now some of those steps, it asks the AI to perform, but it only asks the AI to do small things. It doesn't ask the AI to do big things because AI's not great at big things.
And by big, I mean the mezzo tasks, mezzo being, I believe, Italian for middle, and the macro tasks. So we all want to ask the AI, can I launch my new product? For simple enough problems it can get it right, but the AI tends not to do it correctly on its own. It tends to do it correctly by using tools, which I'm about to describe in a bit more detail.
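To make the micro-versus-mezzo point concrete, here's a sketch of that decomposition in R. The ask_llm() helper is hypothetical, a stub standing in for whatever model call you use: the AI gets one small classification at a time, and the counting, the part it gets wrong, is done in ordinary code.

```r
# Hypothetical helper standing in for a real model call; the stub
# just returns a fixed theme so the sketch runs.
ask_llm <- function(prompt) "Price"

themes <- c("Price", "Reliability", "Customer service")
responses <- c("Too expensive for what you get",
               "It never drops out, even in the country")

# Micro task, once per response: the scale at which AI is reliable.
assigned <- vapply(responses, function(r) {
  ask_llm(paste0("Classify this comment into exactly one of: ",
                 paste(themes, collapse = ", "), ". Comment: ", r))
}, character(1))

# The mezzo task (percentages per theme) is plain code, not AI.
round(100 * table(factor(assigned, levels = themes)) / length(responses))
```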
So what I'm now talking about is how agents actually work. It's an area where most people get it a little bit wrong. Most people treat the agent like the genie from Aladdin: it's just a thing, you build the agent, then it can do anything. But that's not what's really happening. So let me explain what agents actually do in the background, how they're actually built.
So the agent is given a system prompt. Somebody writes a generic set of instructions, usually a few hundred or a few thousand words, beginning with something like: you're a researcher, please help our users, etcetera.
That's the system prompt.
Now that's a fairly straightforward thing to create. It's just prompting. Most of you would figure that out on your own. The bit that requires all of the technical expertise is creating tools for the agent.
So when you use ChatGPT and it searches the Internet, ChatGPT itself didn't search the Internet. Somebody built a search tool which ChatGPT talks to. When ChatGPT does math, it actually isn't the large language model performing the math. Instead, it's sending it off to another tool to do the math.
When it does data analysis, it's actually writing code in a language called Python, and it runs that code again using another tool.
So let's go into the detail here. Imagine you type into ChatGPT: what's one plus one?
The agent, when it receives any message from you, has two options. Option one is: do I respond to it? Do I know the answer already, based on all the deep knowledge embedded in my large language model? If so, it gives the answer. And for one plus one, it would probably give the answer two.
Or: do I not know the answer? And if I don't know the answer, what tools do I have? If it has a suitable tool, it sends the message to the tool.
So the agent always just sends a message. The question is only who it sends it to. Does it send it to a tool, or does it send it to the user? And if it sends it to a tool, it might send it to the Python code tool or to another agent.
For one plus one, the Python tool goes: I know the answer to that, it's two. And it returns that answer.
And so the agent then gets a reply from the tool. And it just goes back to the previous step, just in a permanent little loop, and goes, okay. Now I know more than I knew before. Can I respond to the user's question?
If so, it sends it. If it can't, it calls another tool, or maybe the same tool again. And for those of you who have tried some of those vibe coding tools, this is what they keep repeatedly doing. They write something and try it:
did it do what I wanted? No. Try again, in a perpetual loop, until it finishes.
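Here's a toy version of that loop in R, with everything hypothetical: llm_decide() stands in for the language model choosing its next move, and the single tool stands in for the Python math tool.

```r
# One toy tool, standing in for the Python math tool.
tools <- list(math = function(msg) "2")

# Stub for the model's decision: call the tool once, then answer.
llm_decide <- function(history) {
  if (!any(grepl("TOOL RESULT", history)))
    list(action = "tool", tool = "math", message = tail(history, 1))
  else
    list(action = "respond", message = "One plus one is 2.")
}

run_agent <- function(user_message) {
  history <- c("SYSTEM PROMPT: You are a researcher. Please help our users.",
               user_message)
  repeat { # the permanent little loop: decide, act, fold the reply back in
    step <- llm_decide(history)
    if (step$action == "respond")
      return(step$message)                    # send to the user
    reply <- tools[[step$tool]](step$message) # or send to a tool
    history <- c(history, paste("TOOL RESULT:", reply))
  }
}

run_agent("What's one plus one?") # "One plus one is 2."
```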
Now, in addition to choosing the tools we wanna use, we also have to choose the degree of automation. Again, keep it small. If you don't know things deeply, you have manual processes. The next step up from manual is structured manual, where you've got checklists, forms, and flowcharts, so you're ensuring consistency and reducing errors.
The next level to get to is happy path automation. So you don't go straight for full automation; you just focus on automating the simplest jobs, not the most complicated jobs. And this is, again, the thing people get wrong. People think the thing they find impossible to do is the thing they should automate, but you can't. You don't know enough, unless you're lucky enough to find an expert who's already automated it.
And once you've got your happy path working... as I told you, the conversational agent is just focused on the happy path of basic reporting. If you've got some exotic MaxDiff study, the conversational agent will point you to documentation.
And the end goal is you get to full automation. It's surprisingly hard.
What is the link between what you know and what you can automate, you might ask. Or, actually, you probably won't ask, because the mistake people always make is they've got a fog and they want full automation. That never works. You have to choose how to automate based on what you know.
And so until you're at the stage where you actually know the happy path, where you can come up with a detailed diagram describing exactly how the work works, you can't do any real automation at any level.
And if you wanna get to full automation, you have to get to that very detailed map, because someone has to create it, right? And if you make the mistake many people make, which is to go, alright, let's hire some expert consultants and tell them we want a new tabulation system, you've just given them the fog.
The only way they can work out how to get to full automation is to understand it in detail. But they're probably just gonna guess, and you'll end up in a world of pain.
So what questions do you have? Please type them into the questions field in GoToWebinar.
Alternatively, you can reach me on this email address.
What questions do people have?
What more can I show you?
And Roberta has asked to see more of the conversational user interface. Alright.
So the next thing it's offering is to create summary tables that address your research objectives. For those of you that use the research agent, what you'll see is that we've really pulled it apart so you have more control over the process, rather than having to wait five minutes for the whole thing.
And for those of you that haven't found the questions field in GoToWebinar, you can click on it and add little questions.
And so it's gone through. And because we gave it a very broad prompt, it's basically gone: analyze all the data. But this is a standard workflow where you need to check and correct, so we could override it. I'm okay with it.
And you can keep asking questions while this churns along. Hi, Max. Max asks: can you import templates from PowerPoint, or do all templates in Displayr need to be created from scratch in Displayr?
All templates need to be created from scratch. I'll explain why. We get this request all the time, and it's one of the classic challenges with automation, I would say a rookie error with automation, which is to confuse the ease with which someone can describe an issue with the importance of the issue. So in Displayr, you have a thing called the page master where you create your templates.
It takes about half an hour to create a template. It would take a long time to build automatic ingestion of PowerPoint templates, for a whole lot of technical reasons, so we haven't built it. We tend to focus all our building on the things researchers have to do all the time, whereas creating a template is a one-off task. And if you can't figure it out, reach out to us. We've got a wonderful colleague called Claudia, and she loves helping customers build out their templates.
So we've now gone ahead, and the research agent has created a whole lot of tables. We could read them, or we could just tell it to add the commentary, which is what we're gonna do. It shows us the research objectives and the sample description we provided before, because we might wanna change them. We might go, actually, I want you to focus on the following objectives. But I'm happy with it, so we'll continue.
Andrea asks: when we ask a question of the research agent, what software is it using, and does it stay locked within Displayr?
Well, what software it's using is a very big question, because any software like Displayr is really built on hundreds of other pieces of software. It's an extraordinarily complicated process, building software for the cloud,
and heaps and heaps of the underlying tools are outsourced.
However, when you use the research agent, or anything in Displayr where we calculate, all the calculations are performed on servers that we control.
They're all sitting in the Microsoft world; we outsource the actual hosting and running of the servers to Microsoft. The main AI model that we're using at the moment is Gemini, which is a model by Google.
Those requests run on Google servers. And if you have an enterprise agreement with us, it'll be Google servers in your jurisdiction, so the data never leaves your jurisdiction.
Fiona asks: could the templates be available as options in the visualization window? Well, they could, but they're not.
So the question is: rather than going to apply template, could you go into the visualization window and choose a template there?
Now, the reason we haven't done this is quite deliberate. It seems obvious to many users that the visualization templates should appear there as additional templates.
And indeed, we had that in our older templating tools. The reason we haven't done that is we want users to appreciate that you can template anything.
So I could choose a table with two charts and a commentary, and I could template them together. So a template isn't a visualization thing; a visualization is just one application of a template. Templates are more powerful than that. If I were quite technical, I could even build an entire segmentation suite, one slide after another with all the calculations linked, template that, and then just insert it into a new document.
