How research workflows (and roles) will change in 2026 

As AI becomes more embedded in research workflows, the conversation is evolving. Early excitement about what might be possible is giving way to more practical questions about quality, accountability, and how the whole workflow gets done when automation is no longer novel.

Why ‘humans in the loop’ is non-negotiable

When AI is used in small, contained ways, errors are easier to spot. But as reliance grows, mistakes become harder to detect and easier to propagate. Hallucinations, bias, and over-generalization can move through workflows unnoticed if teams aren’t actively supervising outputs.

Crucially, accountability doesn’t shift to the machine. Whether research is conducted for a client or used to inform internal decisions, responsibility still sits with people. Research workflows augmented by AI will need clear checkpoints where human judgment is applied.

This has spurred the concept of “AI high performers”: a small subset of people whose bold ambitions have translated into higher success rates when integrating AI into their research workflows, roles, and workplaces. Their ambition focuses on using AI to drive growth and innovation and to reduce costs, and it’s working. So far, the difference between winning with AI and seeing no tangible results comes down to three things: a reimagined workflow that makes the most of AI’s toolset, a clearly defined process, and proactive risk mitigation that specifies how and when model outputs need human validation. To put numbers to this, 64% of those classed as AI high performers have a rigorously designed process, compared with just 23% of the general workforce using AI.

AI supervision clearly isn’t an optional layer; it’s where setting clear objectives and taking responsibility for a thoroughly thought-out implementation can set you apart. From there, you can differentiate further by applying research expertise to the quality gains made possible by the time automation claws back.

What changes when teams rely on AI day to day

When AI becomes part of everyday research operations, subtle but significant shifts arise. One is increased pressure on quality validation as research cycles get faster. When outputs are produced more quickly, there’s a desire to move through the next steps at the same speed. This creates a real risk: confidence in the speed of AI outputs can outpace confidence in their raw usability.

Volume also increases, adding to the pressure. With speed comes space for more: more studies, more cuts of data, more summaries, more outputs. Without stronger prioritization, teams and stakeholders can find themselves overwhelmed by insight rather than empowered by it. The human role in research workflows shifts toward contextual interpretation and picking what’s useful for each stakeholder out of the data ocean.

“We live in a world where there is more and more information, and less and less meaning.” – Jean Baudrillard

We can learn a lot from the wider impact of AI on the workforce outside market research. A recent McKinsey study showed that AI is expected to reshape headcount in roles adjacent to insights work. In knowledge management, for example, headcount reduction rises from 16% (observed last year) to 27% (expected this year); in marketing and sales, it rises from 18% to 32% over the same period.

However, this doesn’t necessarily mean a talent drain so much as a shift in a new direction. McKinsey also expects client-facing roles to grow by 25% while non-client-facing roles shrink. With this in mind, there are skills researchers can brush up on, and leaders can encourage, to meet new stakeholder expectations.

This matches the behavioral shift these new expectations require. Automation bias (the tendency to trust machine output over human judgment) becomes more pronounced as AI feels increasingly competent. Teams must actively counter it by building habits of challenge, sense-checking, and contextual review into their research workflows. In short, less manual effort means the same level of responsibility shows up in different places.

Where synthetic data fits (and where it doesn’t)

Synthetic data is one of the more talked-about developments in AI-enabled research, and also one of the most misunderstood. Used carefully, synthetic approaches can extend analysis, fill gaps, and support early-stage exploration. But they’re not a replacement for real human input.

A pragmatic guideline shared at the ESOMAR Congress, complementing its recent AI Code, is that no more than 30% of a research project should rely on synthetic data. The reason is simple: synthetic data is only as good as the data used to generate it. Models extrapolate from the past, but the past isn’t always a reliable predictor of the future, particularly in fast-moving markets or when cultural context matters.

How agentic AI may reshape research workflows over time

Looking further ahead, agentic AI has the potential to reshape research workflows more fundamentally.

Rather than a single system doing everything autonomously, agentic AI points toward multiple specialized agents supporting different stages of the research process. One agent might assist with survey design, another with data preparation, another with analysis, and another with reporting.

These agents still need direction, supervision, and integration into existing research workflows. They also need to be ready to plug into broader research systems without introducing fragmentation or risk. This is a recurring theme with new tools; integration is essential to mitigate risk and maintain a smooth workflow.

“If everything stays on the same tune, the minute you start to do field work, you can then start to report on the data… We can move from a sequential workflow to a parallel workflow.” – Tobi Andersson, General Manager for Market Research, Forsta

Explore more by listening to the Founders & Leaders podcast.

For most teams, agentic AI will arrive gradually. Early use cases will focus on well-defined, low-risk tasks, with wider adoption depending on whether these systems can integrate cleanly, support quality standards, and earn trust.

Read more: The hidden dangers of non-integrated AI

From hype to maturity: What the wider AI landscape tells us

Across industries, AI adoption has followed the same arc as any new technology: early over-estimation of short-term impact, followed by more measured, value-driven implementation. Initial experimentation then gives way to consolidation as organizations move from trying everything in an excited rush to focusing on what delivers measurable results.

Adoption is also uneven. Some regions and sectors are moving faster than others, and what works in one context doesn’t automatically translate to another. The pattern is already visible across industries: recent research shows that while 88% of organizations report experimenting with AI, fewer than a third say they’ve achieved measurable business impact at scale.

What research teams should focus on next

In 2026, the research teams that succeed won’t be the ones chasing the most automation. They’ll be the ones enhancing their tried-and-tested research workflows with greater speed and with layers of human intervention that build stakeholder trust.

That means:

  • Building clear supervision points into AI-enabled processes
  • Strengthening data foundations and quality controls
  • Maintaining data security during experiments (keep an integrated workflow and try external tools with dummy data)
  • Exploring and preparing for future AI systems, but only when they’re ready to add value

Above all, it means recognizing that research doesn’t become better by removing humans from the process. It becomes better when human judgment is supported, not replaced, by AI.

Want to find out more? Visit our solution page to book a demo with one of our experts to see how these tools could help you and your market research. 
