AI & Marketing

Marketers Are Spending More on AI Visibility Tools and Getting Inconsistent Results Back

Marketers are being asked to pay enterprise prices for tools that cannot agree with each other on basic questions like "does our brand appear in this response."

Filip Ivanković · 2 min read

A growing number of marketers are questioning whether AI visibility tools are worth the spend. Digiday reports increasing scepticism toward platforms that promise to track and optimise brand appearances in AI-generated search results, with inconsistent measurement being the primary complaint.

The tools in question claim to show how often a brand appears in AI overviews, ChatGPT responses and other large language model outputs. The problem: different tools give different answers for the same queries, and none of them can reliably explain why.

The consistency problem

AI-generated search results are non-deterministic. Ask ChatGPT the same question twice and you may get different brand mentions. Ask it from different locations or accounts and the variation increases. That makes consistent measurement fundamentally harder than traditional search tracking, where rankings are relatively stable and verifiable.
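A quick simulation shows why this matters for measurement. The figures below are illustrative, not drawn from any real tool: assume a hypothetical brand that genuinely appears in 30% of responses, and two tools that use identical methodology but happen to sample different responses. Even then, their reported rates can diverge noticeably at realistic sample sizes.

```python
import random

def simulated_mention_rate(true_rate: float, sample_size: int, seed: int) -> float:
    """Simulate a visibility tool sampling a non-deterministic AI system.

    Each "query" independently mentions the brand with probability
    true_rate; the tool reports the observed fraction of mentions.
    """
    rng = random.Random(seed)
    mentions = sum(rng.random() < true_rate for _ in range(sample_size))
    return mentions / sample_size

# Two "tools" measuring the same brand with the same methodology,
# differing only in which 50 responses they happened to sample.
tool_a = simulated_mention_rate(true_rate=0.30, sample_size=50, seed=1)
tool_b = simulated_mention_rate(true_rate=0.30, sample_size=50, seed=2)
print(f"Tool A: {tool_a:.0%}  Tool B: {tool_b:.0%}")
```

And that gap exists before any differences in methodology, refresh rate, location or account state are layered on top.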

Tools that scrape or simulate AI responses are measuring a moving target. The methodologies vary. The refresh rates vary. The sample sizes vary. Because there is no equivalent of Google Search Console for AI appearances, there is no ground truth to validate against.

Zero standardised metrics currently exist for measuring AI search visibility across platforms.

Why it matters

AI search is genuinely reshaping how consumers discover brands. Google AI Overviews, ChatGPT with search, Perplexity and Microsoft Copilot are all directing traffic and shaping purchase decisions. The desire to measure and optimise for these channels is completely rational.

But the measurement infrastructure has not caught up. Marketers are spending on tools that deliver dashboards and charts without the underlying reliability that makes those numbers actionable. The risk is that teams make optimisation decisions based on noisy data, or worse, that leadership judges channel investment on metrics that do not hold up under scrutiny.

For Australian businesses with smaller budgets, the stakes are proportionally higher. An enterprise can absorb a $50,000 tool subscription as an experiment. A mid-market business making the same bet needs confidence that the data will actually inform decisions.

What to do about it

1. Do not sign annual contracts for AI visibility tools yet. The market is immature, and monthly or quarterly terms give you flexibility as better options emerge.

2. Run your own spot checks. Ask the same branded queries across ChatGPT, Perplexity and Google AI Overviews weekly, track the results in a spreadsheet, and compare them against what your tool reports.

3. Focus on the inputs you can control: structured data, authoritative content, citations in reputable sources and brand mentions in high-quality publications. These are the signals that feed AI recommendations regardless of which platform generates the response.

4. Track referral traffic from AI sources in your analytics. GA4 can identify traffic from chatgpt.com, perplexity.ai and similar referrers. Actual clicks matter more than estimated appearances.

5. Watch for Google to release AI-specific data in Search Console. If and when that arrives, it will be the ground-truth signal the entire industry needs.
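The spot-check habit above can be turned into a simple tally with a few lines of stdlib Python. The CSV layout here is a hypothetical example of what a manual weekly log might look like, not a format any tool exports:

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical weekly spot-check log: one row per branded query per
# platform, recording whether the brand was mentioned in the response.
SPOT_CHECKS = """\
date,platform,query,brand_mentioned
2025-01-06,chatgpt,best crm for small business,yes
2025-01-06,perplexity,best crm for small business,no
2025-01-06,google_ai_overviews,best crm for small business,yes
2025-01-13,chatgpt,best crm for small business,no
2025-01-13,perplexity,best crm for small business,no
2025-01-13,google_ai_overviews,best crm for small business,yes
"""

def mention_rates(csv_text: str) -> dict[str, float]:
    """Return the observed brand-mention rate per platform."""
    mentions: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["platform"]] += 1
        mentions[row["platform"]] += row["brand_mentioned"] == "yes"
    return {platform: mentions[platform] / totals[platform] for platform in totals}

for platform, rate in sorted(mention_rates(SPOT_CHECKS).items()):
    print(f"{platform}: {rate:.0%}")
```

Comparing these observed rates against the numbers your paid tool reports is the cheapest sanity check available while the category matures.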


Filip Ivanković, Founder, New Rebellion

10+ years leading performance marketing across agencies and in-house teams in Australia. Writes about the gap between marketing activity and commercial outcomes, and what it takes to close it.

