Verified 2026-05-03. Per-report cost figures and report tier routing are estimates only — see Quick Stats for caveats.
04 — Reports & Boosters
The fourth stage of the loop. Aggregated stats from Stage 3 become the artefacts customers actually look at: the Visibility Analysis Report and the Booster cards on the Performance page.
Visibility Analysis Reports
Reports are the core intelligence output. They are the artefact that gets forwarded to the CMO, attached to a budget request, or included in an agency QBR.
The Visibility Analysis Workflow runs on LangGraph Platform as a multi-node graph with two-phase Claude AI calls for qualitative analysis, paired with deterministic scoring (from Stage 3) that overrides any LLM score estimates.
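The override rule above can be sketched in a few lines. This is an illustrative sketch only, not the actual workflow code; the function and key names are assumptions.

```python
def assemble_report(stage3_stats: dict, llm_sections: dict) -> dict:
    """Merge Claude-generated qualitative sections with deterministic
    Stage 3 scores. Deterministic values always win over any score
    estimates the LLM may have emitted alongside its copy."""
    report = dict(llm_sections)  # summary, opportunities, threats, ...
    # Overwrite (or set) the scores from the deterministic model.
    report["visibility_score"] = stage3_stats["visibility_score"]
    report["audit_score"] = stage3_stats["audit_score"]
    return report
```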
What a report contains
| Section | Description |
|---|---|
| Visibility Score (0-100) | The golden optimisation metric. Deterministic. The number teams track week-over-week. |
| Audit Score (0-100) | Same deterministic model as the Visibility Score; used for first-touch quality measurement |
| Executive Summary | AI-generated. Leads with organic (unbranded) competitive rank. Provider-level breakdown. |
| Opportunities | Specific growth actions, categorised by team (Tech, Content, Marketing, Leadership) |
| Threats | Competitive and market risks with expected impact and urgency |
| Quick Wins | Low-effort, high-impact actions with priority ranking |
| Source Strategy | Citation analysis, domain diversity, content gap identification |
| Competitor Counter-Strategies | Specific actions to address competitor advantages |
| Phase 6 Metrics | Share of voice, position premium, co-mention analysis, sentiment by provider, query gaps |
Every section above either consumes deterministic stats from Stage 3 or generates qualitative copy from those stats via Claude.
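As one example of a deterministic Phase 6 metric, share of voice is conventionally computed as a brand's share of all brand mentions across sampled responses. A minimal sketch under that conventional definition (the actual formula may differ):

```python
def share_of_voice(brand_mentions: int, total_mentions: int) -> float:
    """Share of voice as a percentage of all brand mentions.
    Conventional definition; assumed, not confirmed, to match
    the Stage 3 implementation."""
    if total_mentions == 0:
        return 0.0  # avoid division by zero when no brands were mentioned
    return round(100 * brand_mentions / total_mentions, 1)
```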
Report generation tiers
Default model is Claude Sonnet 4.6. Higher tiers (Opus) and budget fallback (Gemini) are routed per-workspace via premium flags or override config; the tier list is not a public-facing menu.
Per-report cost figures circulated internally are estimates only. Do not quote externally without re-measuring against current billing.
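The routing described above can be sketched roughly as follows. Flag names, precedence order, and model identifiers here are illustrative assumptions, not the real config schema.

```python
DEFAULT_MODEL = "claude-sonnet-4.6"  # default tier per the docs above

def pick_report_model(workspace: dict) -> str:
    """Resolve the report model for a workspace.
    Assumed precedence: explicit override > premium flag > budget fallback."""
    if workspace.get("model_override"):   # per-workspace override config
        return workspace["model_override"]
    if workspace.get("premium"):          # premium flag routes to a higher tier
        return "claude-opus"
    if workspace.get("budget_fallback"):  # budget fallback tier
        return "gemini"
    return DEFAULT_MODEL
```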
Boosters
Boosters are the recommendations that come out of every report, presented as actionable cards on the Performance page. They bridge the gap between “what’s happening” and “what to do about it.”
Types of boosters
| Type | Icon | Description |
|---|---|---|
| Opportunity | Lightbulb | Growth actions: content gaps to fill, providers to target, keywords to optimise for |
| Threat | Alert | Competitive risks: competitor gaining share, negative sentiment trends, citation losses |
| Quick Win | Zap | Low-effort, high-impact actions: schema markup, FAQ pages, existing content optimisation |
| AI Model Strategy | CPU | Provider-specific recommendations: how to optimise for ChatGPT vs Perplexity vs Google AIO |
| Competitive Strategy | Target | Counter-moves: specific actions to close gaps against named competitors |
Each booster includes
- A clear action title
- Insight / context (why this matters)
- Category assignment (Tech, Content, Marketing, Leadership)
- Priority ranking
- Optional linked prompts (which queries this booster relates to)
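The fields above suggest a booster record shaped roughly like the following. This is a hypothetical sketch of the data shape, not the actual schema; field names and the priority convention are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Booster:
    booster_type: str   # Opportunity | Threat | Quick Win | ...
    title: str          # clear action title
    insight: str        # context: why this matters
    category: str       # Tech | Content | Marketing | Leadership
    priority: int       # priority ranking (lower = higher priority, assumed)
    linked_prompts: list[str] = field(default_factory=list)  # optional related queries
```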
From booster to action
Users click “Add to Actions” on any booster to promote it into the tracked action pipeline (Stage 5). This is a deliberate user choice, not automatic — the user decides what’s worth acting on. Actions are never auto-created.
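The "user decides, never automatic" rule can be made explicit in code. A minimal sketch with assumed names; the real pipeline will differ:

```python
def promote_booster(booster: dict, actions: list, user_clicked: bool) -> list:
    """Promote a booster into the tracked action pipeline (Stage 5)
    only on an explicit user click. Boosters are never auto-promoted."""
    if not user_clicked:
        return actions  # no automatic action creation, ever
    action = {"source_booster": booster["title"], "status": "tracked"}
    return actions + [action]
```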
See also
- Stage 3 — Aggregated Stats — what feeds the report
- Stage 5 — Actions & Content — what happens to a Booster after it’s promoted
- Stage 6 — Historical Tracking — how each report compares to the previous one