
AMEC AI Day North America 2026: Five Things I Learned

John Croll  |  CEO, Truescope  |  March 2026


I spent a day at 3 Times Square in New York with global brand leaders, data scientists, measurement professionals, and technology vendors - all trying to answer the same question: what does AI actually mean for communications, media intelligence, and reputation management?

Here's what stood out. Not as a summary of sessions, but as a set of ideas that I think will fundamentally reshape how the industry works over the next three years.

1. Context engineering is the new competitive moat

The most cited concept of the day was 'context engineering' - the idea that raw AI output is only as good as the structured context fed into it. Rob Key from Converseon put it bluntly: AI consensus-building averages data, which means it dilutes nuance. The winners won't be those with the most data. They'll be those with the best interpretive layer sitting on top of it.

The model he proposed is elegant: source data flows into an interpretive layer (currently around 65% accurate across the industry), which is then refined by an enterprise-owned semantic framework - knowledge graphs, industry ontologies, proprietary editorial judgment.
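That layered model can be made concrete with a minimal sketch. Everything below is illustrative - the function names, the sample mentions, and the tiny ontology are hypothetical, not any vendor's actual pipeline - but it shows the shape of the idea: a generic interpretive pass first, then refinement by an enterprise-owned semantic framework.

```python
# Hypothetical sketch of the layered model: raw source data -> generic
# interpretive layer -> enterprise-owned semantic framework. All names
# and sample data are illustrative, not a real product API.

RAW_MENTIONS = [
    {"text": "Acme recalls 10k units", "source": "newswire"},
    {"text": "Acme ships record quarter", "source": "trade press"},
]

def interpretive_layer(mention):
    """Generic model pass: coarse sentiment, no domain knowledge."""
    negative_cues = ("recall", "lawsuit", "outage")
    text = mention["text"].lower()
    sentiment = "negative" if any(cue in text for cue in negative_cues) else "positive"
    return {**mention, "sentiment": sentiment}

# The enterprise-owned semantic framework: an ontology mapping phrases to
# the risk and performance categories this organisation actually reports on.
ONTOLOGY = {"recall": "product_safety", "quarter": "financial_performance"}

def apply_semantic_framework(mention):
    """Refine generic output with proprietary domain categories."""
    text = mention["text"].lower()
    categories = [cat for term, cat in ONTOLOGY.items() if term in text]
    return {**mention, "categories": categories or ["uncategorised"]}

enriched = [apply_semantic_framework(interpretive_layer(m)) for m in RAW_MENTIONS]
```

The design point is that the first function is commodity - any model can do coarse sentiment - while the ontology in the second is the proprietary asset, which is exactly where the 'Context as a Service' argument locates the value.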

'Context as a Service' is emerging as the product category that matters. Not data delivery. Structured data with domain expertise embedded.

For anyone in media intelligence, this should be clarifying. The value was never in the volume of coverage collected. It was always in the meaning extracted from it. Context engineering is just a new name for something the best analysts have always done.

2. Earned media is now an AI input - and most brands aren't ready

Generative Engine Optimisation (GEO) was the dominant conversation of the day, and for good reason. What gets written about a brand in the press now directly shapes what AI tools tell consumers about that brand. The implications for PR and communications are profound.

The data is hard to ignore:

  • 82% of LLM responses draw from earned media - editorial content is the dominant training and citation source (Muck Rack)
  • 95% of citations in AI responses that drive purchase moments come from earned media (PepsiCo)
  • 61% of search results seen across major brand trust studies were driven by editorial content (Hard Numbers)

The most commercially significant insight came from Jonny Bentwood at PepsiCo. They track approximately one billion AI-mediated consumer experiences per day. Thirty percent of all AI prompts create a buying opportunity. AI is no longer just a search channel - it's a purchase channel.

PepsiCo has built a dedicated workflow to identify incorrect AI citations and correct them at source. Their framing: 'Market to Machines.' Brands must now optimise content for AI consumption, not just human readers.
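The mechanics of such a workflow are simple to sketch. To be clear, this is not PepsiCo's actual system - the fact base, claim structure, and function below are all hypothetical - but it shows the core move: compare claims cited in AI answers against a verified fact base and queue mismatches for correction at the source outlet.

```python
# Illustrative "correct at source" workflow (not any real system):
# flag AI-cited claims that contradict a brand's verified fact base,
# producing an outreach queue keyed by the originating source URL.

FACT_BASE = {"acme_hq": "Dublin", "acme_founded": "1999"}

ai_citations = [
    {"claim_key": "acme_hq", "claim_value": "London", "source_url": "example.com/a"},
    {"claim_key": "acme_founded", "claim_value": "1999", "source_url": "example.com/b"},
]

def flag_incorrect(citations, facts):
    """Return citations whose claim contradicts the verified fact base."""
    return [
        c for c in citations
        if c["claim_key"] in facts and facts[c["claim_key"]] != c["claim_value"]
    ]

to_correct = flag_incorrect(ai_citations, FACT_BASE)  # the outreach queue
```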

The question is no longer whether your earned media strategy affects AI visibility. It does. The question is whether you're measuring it.

3. Trust is the industry's biggest unsolved problem

Across multiple sessions, the central tension was the same: AI accelerates insight delivery, but errors and hallucinations undermine confidence. The CARMA survey presented by Jennifer Sanchis was instructive - the top concern across respondents wasn't capability or cost. It was accuracy and reliability.

This matters beyond the technical. Audiences are applying the same scepticism to AI outputs that they apply to media coverage. Distrust correlates strongly with fake news proliferation. Disinformation is now appearing as a specific risk category in corporate reporting - it's entering boardroom conversations.

There's also an interesting split in attitudes: CEOs are the most optimistic about AI adoption; journalists and unions are the most sceptical. That's not surprising. But it does mean that communicators - who sit between those two groups - need to be fluent in both perspectives.

The takeaway: AI will be broadly trusted when outputs are explainable and verifiable. That's a product challenge as much as a technology challenge.

4. Reporting cycles are collapsing - and the tools haven't caught up

Geoff Sidari from Airadis described a shift that most communications teams will recognise: reporting cycles that used to run days or weeks are now expected to run in minutes. The problem is that the tools enabling this are largely disconnected. Meltwater, Cyabra, Audiense - the stack is fragmented, and integration is the pain point clients are actively trying to solve.

The 'Always On / Always Contextual / Always Connected' framework presented as the target state for enterprise comms teams isn't aspirational anymore. It's what large organisations are demanding right now.

Two ideas from the day deserve particular attention for anyone building in this space:

  • The Model Context Protocol (MCP) - an emerging open standard for connecting AI models to tools and data sources - was flagged as a key infrastructure layer worth understanding for any product roadmap
  • 'Compounding intelligence' - the idea that AI systems should learn and improve from each interaction - was highlighted as a future requirement, not a current capability

There was also a specific product gap called out explicitly: clients need a place to input their KPIs and communications strategy, and receive AI-optimised outputs and intelligence calibrated against those inputs. That doesn't fully exist yet.

5. The measurement framework is being rewritten

Impressions, reach, and positive/negative/neutral sentiment are giving way to a richer set of measures that connect communications activity to actual business outcomes - and now to AI visibility.

The metrics now in circulation include:

  • LLM visibility / AI Share of Voice - how often and how positively a brand appears in AI responses
  • Attitude classification - 15 distinct attitude types replacing simple sentiment, with several enterprise clients already making the switch
  • AI citation attribution - tracing which specific articles, outlets, and journalists are driving LLM citations
  • Answer-first content performance - measuring how well content is optimised for machine consumption
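The first of these is the easiest to ground in a calculation. The sketch below shows one simple way to compute AI Share of Voice from a sample of logged AI responses - definitions vary by vendor, and the brand names and responses here are invented - by counting the share of responses in which each brand is mentioned.

```python
# Hedged sketch of an "AI Share of Voice" calculation over a logged sample
# of AI responses. Metric definitions differ across vendors; this version
# simply counts brand mentions and normalises. Sample data is invented.

from collections import Counter

responses = [
    "Acme ships faster than most rivals.",
    "For budget widgets, Globex is the usual recommendation.",
    "Acme leads on reliability.",
]
brands = ["Acme", "Globex"]

def ai_share_of_voice(responses, brands):
    """Fraction of brand mentions attributable to each brand."""
    counts = Counter()
    for text in responses:
        for brand in brands:
            if brand.lower() in text.lower():
                counts[brand] += 1
    total = sum(counts.values())
    return {brand: counts[brand] / total for brand in brands}

sov = ai_share_of_voice(responses, brands)
```

A production version would also weight by sentiment or prominence within the answer, which is where the attitude-classification and citation-attribution metrics above come in.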

Jennifer Sanchis presented Gartner data suggesting the industry has passed through peak AI hype and is entering the trough of disillusionment. Unrealistic expectations are giving way to more grounded, practical adoption. That's actually healthy. It means the vendors with genuine capability - not just AI-washed positioning - are about to become more visible.

What this means for the industry

Three things are clear to me after a day of these conversations.

First, GEO is not a niche discipline. It's the next major frontier for earned media measurement. The infrastructure to measure AI Share of Voice, track LLM citation sources, and connect media strategy to AI visibility already largely exists. What's needed is the reporting layer that surfaces it for clients.

Second, context engineering is the value proposition the market has been struggling to articulate. Editorially curated, structured, domain-enriched intelligence is exactly what enterprise buyers need. That's not new - it's what good media intelligence has always been. But naming it correctly, in the language of the AI era, matters.

Third, workflow is the battleground. Clients are not frustrated by a lack of data. They're frustrated by disconnected tools, slow cycles, and outputs that don't connect to decisions. Always-on, integrated, contextual intelligence is the product category the market is moving toward.

The brands and vendors who understand these three shifts - and act on them - will define the next chapter of this industry.

___

John Croll is CEO of Truescope, a media intelligence company operating across Australia, New Zealand, the United States, and Singapore.
