May 10, 2026

Quirks London 2026: AI: Possibilities and Pragmatism for Research & Insights 

It was great to spend time at the Quirk’s Media Event in London this week, immersing in the latest thinking and case studies from the research and insights industry. There was really only one topic of conversation: AI, and almost every discussion eventually came back to its growing, transformative role in research and insights.
 
What was most striking, though, was not the excitement around AI but the increasing pragmatism. The conversation has moved beyond “Can AI generate insights?” to much tougher questions around trust, interpretation, observation and decision quality. 
 
This distinction really matters, particularly for R&D and innovation teams. Here are five reflections and implications that particularly stood out for me. 

1. AI Embedding – Alongside Pragmatism and Honesty 

The industry feels notably more pragmatic than even 12 months ago. AI is increasingly being embedded into the day-to-day research process, offering scale and speed, but the tone was not one of replacing human researchers. 
 
Instead, the dominant theme was the idea of a “Research Stack”: AI bringing speed, scale and pattern recognition, while humans provide framing, moderation, interpretation and judgment. 
 
Perhaps most importantly, teams were openly discussing limitations and risks alongside benefits. Bolt.ai’s Vatsala Rathore and Anne Collard from Pladis led a refreshingly honest conversation that cut through much of the hype and focused on the realities of implementation, trade-offs and decision making. 

2. Synthetic Data: Powerful for Exploration, Risky for Validation 

Synthetic personas and “digital twins” were increasingly positioned as tools for exploration rather than validation. 
 
Teams described using them to pressure-test hypotheses, explore territories quickly and accelerate early-stage ideation. But there was also healthy caution around the risk of “closed-loop” learning, where synthetic systems increasingly train against their own outputs rather than fresh human reality. 
 
If we only listen to digital echoes of existing data, we risk losing the human outliers that often spark real innovation. 
 
For R&D teams, this is particularly important. AI can accelerate early thinking dramatically, but innovation still depends on identifying unmet tensions, unexpected behaviours and emerging needs that may not yet exist clearly in historical data patterns. 

3. AI Moderation and the “Observation Gap” 

Several teams showcased increasingly sophisticated examples of AI moderation, including emotionally nuanced analysis at scale. Teams from MMR, GetWhy and Motives working with Anthropic shared compelling examples of how AI-moderated research is evolving rapidly. 
 
MMR also shared fascinating analysis suggesting that the “sweet spot” for qualitative scale may be lower than many teams assume. Even at relatively small sample sizes, clear insight structures and patterns were already emerging. 
 
But the discussions also highlighted what I’d describe as an “Observation Gap”. 
 
In product research (particularly when R&D teams are prototyping), some of the most valuable signals are still observational and contextual rather than articulated: the hesitation before product use, the workaround behaviours that compensate for product shortcomings, the individual way a user handles and works with a product, the frustration someone never verbalises, or the meaning of the product within the reality of their home and routine. 
 
AI is improving rapidly at analysing what people say. But real product insight often lives in what people don’t say. 
 
For physical products especially, there remains enormous value in human observation, contextual immersion and the ability to pursue unexpected avenues in real time. 

4. More Data Does Not Mean Better Decisions 

One of the clearest tensions across the event was the growing gap between information generation and decision quality. 
 
AI can now generate enormous volumes of findings, summaries and outputs at speed. But volume does not equal value, and more outputs do not automatically create more clarity, confidence or actionability. 
 
This is increasing the importance of narrative, synthesis and interpretation. Several speakers highlighted that the value of insights increasingly lies not in generating more information, but in constructing memorable, credible and decision-driving stories from it. 
 
The point made by Edwin Taborda from L’Oréal particularly resonated with us at Untapped: what teams increasingly need is not simply more data, but better stories. 
 
In many ways, this actually raises the strategic value of experienced researchers and innovation leaders rather than diminishing it. The role becomes less about collecting information and more about filtering noise, applying judgment and creating meaning. 

5. Minding the “Trust Gap” 

With AI approaches generating vast amounts of both insights and content, trust increasingly feels like the new currency. 
A credibility gap appears to be opening on two fronts. 
 
On the research side, teams value the speed and breadth AI can offer, but are demanding increasing transparency and evidence behind AI-generated conclusions. Strong research still requires stress-testing, interpretation and credible human judgment before decisions are made. 
 
On the consumer side, there is growing wariness around AI-generated content that lacks authenticity or a recognisable human perspective. 
 
The team from Resonant shared particularly interesting data showing that one of the strongest drivers of credibility is still a trusted human point of view. They also highlighted the increasing importance of credible data and citations in influencing discoverability and recommendation within LLM-driven search environments. 
 
As AI-generated content becomes more widespread, the premium on authenticity, credibility and human perspective is likely to increase rather than decline. 

Final Reflection 

The discussions at Quirks reinforced something we increasingly see in innovation work with R&D teams: the challenge is not about generating information, but about translating information into meaning, confidence and better decisions. 
 
AI will undoubtedly transform research. But the teams that succeed are unlikely to be the ones generating the most outputs. They will be the ones best able to combine AI capability with human judgment, contextual understanding and compelling narrative. 
 
At Untapped Innovation, this intersection between insight, product understanding and narrative is increasingly where we see the biggest opportunity for R&D teams. 

Reach out to our expert

Suzanne Allers

suzanne.allers@untappedinnovation.com