AI ethics: Future-proof your research

Artificial intelligence is racing ahead. Agentic systems can plan and act. Synthetic data can stand in for scarce signals. New laws and new expectations are arriving at the same time. Through it all, one truth holds: ethics is your edge. Practiced well, ethics is a position of power. While some industries wrestle with the idea of AI ethics, or dismiss it completely, it's second nature to market research.
Why AI ethics matters more tomorrow than today
AI is becoming less of a tool and more of a teammate. That means more autonomy, more velocity, and more potential for mistakes at machine speed. An ethical foundation lets you move fast without breaking trust. It protects participants, preserves panel health, and strengthens client confidence. It also aligns you with the 2025 ICC and ESOMAR International Code, recently updated to emphasize AI ethics; the Code centers on duty of care, data minimization, privacy, transparency, bias awareness, synthetic data, and human oversight.
Read our full article on the 2025 ICC and ESOMAR International Code: AI in Market Research: Five rules to live by
The ICC/ESOMAR Code at a glance
What it is
The ICC and ESOMAR International Code on Market, Opinion and Social Research and Data Analytics is the global self-regulatory standard for our profession. The 2025 revision updates the Code for today’s tech stack, with clear expectations for AI-enabled work.
Why it matters
The Code protects participants, preserves public confidence, and sets a bar that often goes beyond law. It clarifies responsibilities for researchers and clients, so everyone in the chain knows what “good” looks like.
Key items researchers should apply now
- Article 1 – Duty of care. Conduct research with due care, avoid harm, and keep a bright line between research and non-research activities.
- Article 2 – Children and vulnerable people. Obtain appropriate consent and ensure methods are age and context appropriate.
- Article 3 – Data minimization. Collect and process only data that is relevant to the purpose; pass only the minimum personal data to suppliers.
- Article 4 – Primary data collection. Identify who you are, secure informed consent, explain recontact, and allow withdrawal; if automation is used in collection, say so.
- Article 5 – Secondary data. Ensure new uses are compatible with the original purpose; respect restrictions; prevent harm.
- Article 6 – Data protection and privacy. Provide a clear privacy notice; prevent re-identification even with advanced analytics; secure data; limit retention; handle cross-border transfers and breaches responsibly.
- Article 7 – Fit for purpose (client responsibilities). Use methods suitable for the population and objective; disclose when AI or emerging tech meaningfully informed analysis or interpretation and state the extent of human oversight.
- Article 8 – Transparency, confidentiality and responsibility. Be open about potential biases, respect IP, and keep results and communications confidential unless agreed.
- Article 9 – Publishing findings. Give enough information for the public to assess validity and disclose whether AI or synthetic data played a significant role and how humans oversaw the work.
This last point matters: studies show that while current synthetic data models are representative of the United States, they quickly become less accurate as the cultural distance widens.
Make trust a measurable feature
Trust grows when people can see what you do and why. Build that into your process and your product.
“Trust in the data we collect and analyze, and the insights we provide is paramount to the future of market research. With the new Code, ESOMAR provides the ethical guardrails to ensure that what we do is honest and transparent. As we charge headlong into the AI-driven world, this new code is designed to guide us as human researchers to use AI with humanity.” — Lucy Davison, ESOMAR Council Member
Practical moves
- Disclosure by default. Tell clients when AI is involved in sampling, analysis, or reporting. State the extent of human oversight. Tell participants when they interact with an automated interviewer and how their data is protected.
- Minimum in, maximum protection. Collect the least personal data required. Process in secure, access-controlled environments. Delete or anonymize as soon as the purpose is complete.
- Bias checks on a schedule. Compare AI outputs with human-coded samples across languages, ages, and cultures. Adjust methods or switch tools when fairness fails. A minimal sketch of one such check follows this list.
- Plain language method notes. Replace mystery with clarity. What data went in? What did the system do? Where does it perform poorly? Who reviewed and approved?
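To make the bias check concrete, here is a minimal sketch in Python. It compares AI-assigned codes against a human-coded sample, broken out by subgroup, and flags any group whose agreement falls below a threshold. The field names, example codes, and the 0.80 cutoff are illustrative assumptions, not part of the Code or of any particular tool.

```python
# Minimal sketch of a scheduled bias check: compare AI-assigned codes
# against human-coded samples, broken out by subgroup (e.g., language).
# All data below is illustrative; in practice you would pull matched
# human/AI codings from your own coding workflow.
from collections import defaultdict

def agreement_by_subgroup(records, threshold=0.80):
    """Return per-subgroup human/AI agreement, flagging groups below threshold."""
    hits = defaultdict(int)    # records where human and AI codes match
    totals = defaultdict(int)  # total records per subgroup
    for rec in records:
        totals[rec["subgroup"]] += 1
        if rec["human_code"] == rec["ai_code"]:
            hits[rec["subgroup"]] += 1
    report = {}
    for group, n in totals.items():
        rate = hits[group] / n
        report[group] = {"agreement": round(rate, 2), "flagged": rate < threshold}
    return report

# Hypothetical matched sample: each record pairs a human code with the AI's code.
sample = [
    {"subgroup": "en", "human_code": "price", "ai_code": "price"},
    {"subgroup": "en", "human_code": "service", "ai_code": "service"},
    {"subgroup": "ja", "human_code": "price", "ai_code": "quality"},
    {"subgroup": "ja", "human_code": "service", "ai_code": "service"},
]

print(agreement_by_subgroup(sample))
# {'en': {'agreement': 1.0, 'flagged': False}, 'ja': {'agreement': 0.5, 'flagged': True}}
```

In production you would likely prefer a chance-corrected statistic such as Cohen's kappa and larger samples per group, but the shape of the check stays the same: per-subgroup comparison against a human baseline, with a clear threshold that triggers a change of method or tool.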
AI advances are here. The question is not whether you will use them, but how. Digest and own the ESOMAR Code, and you will move faster and win more trust.