# Accuracy Gaps

Tracked list of inaccuracies and unverified claims in the knowledge hub. Add entries when you spot something wrong or unverified; move them to the Fixed gaps section once resolved.
## Format

Each entry:

- File: path relative to `knowledge/`
- Issue: what the doc says vs. what is actually true
- Truth (probable): best current understanding
- Who knows: person who can verify
- Status: `open` | `in-progress` | `fixed`
## Open gaps
### Agent tool count + investigation flow count

- File: `reference/quick-stats.md`
- Issue: the doc currently flags the agent tool count as "drifts weekly". A re-verification cadence would be more useful than a constantly stale number.
- Truth (probable): verifiable from the agent tool registry in `nudg3-workflows/agents/`. Re-run quarterly, or whenever admin_orchestrator adds a subgraph.
- Who knows: Sandil
- Status: open
### Per-report cost figures

- File: `flow/04-reports-and-boosters.md`, `reference/quick-stats.md`
- Issue: quoted as ~$0.02–0.06 (Sonnet) and ~$0.006 (Cloro), flagged as estimates. Never formally measured.
- Truth (probable): real numbers require pulling Anthropic/Cloro billing for a sample window and dividing by the report/response counts for that window.
- Who knows: Sandil
- Status: open
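The billing-division step described above can be sketched as a quick shell pass. Everything below is a hypothetical stand-in: the CSV schema, the filenames, and the report count are illustrative, not the real Anthropic/Cloro export format.

```shell
# Hypothetical sketch: derive cost-per-report from a billing export.
# Assumed CSV schema (NOT the real billing export): date,provider,usd
sample_dir=$(mktemp -d)
cat > "$sample_dir/billing.csv" <<'EOF'
date,provider,usd
2026-04-01,anthropic,1.20
2026-04-02,anthropic,0.80
EOF
reports_in_window=50   # pull from the reports table for the same window

# Sum the spend column, divide by report count for the window.
awk -F, -v n="$reports_in_window" '
  NR > 1 { total += $3 }
  END { printf "cost/report: $%.4f\n", total / n }
' "$sample_dir/billing.csv"
```

The same division works for Cloro responses; the only real work is aligning the billing window with the report/response counts pulled from the database.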
### Competitor funding figures

- File: `reference/competitive.md`
- Issue: Profound $58.5M, Peec ~$29M, Scrunch ~$19M, Hall $1.27M. Accurate as of April 2026, but these figures drift.
- Truth (probable): each figure needs a date stamp and a source link. Re-check before any externally shared competitive deck.
- Who knows: market research workstream
- Status: open
### "Dark traffic 70.6% invisible in GA4" (single-source figure)

- File: `start-here.md`
- Issue: cited as ~70% with Loamly attribution. Single-source study, widely repeated by GEO vendors.
- Truth (probable): directionally correct but methodologically thin. Treat it as an "industry estimate" in pitches; don't claim it as a measured fact.
- Who knows: research lives in `docs/research/ai-traffic/`
- Status: open (mitigated by an inline caveat in start-here.md)
### Public API v1 exact endpoint count

- File: `reference/quick-stats.md`
- Issue: stated as "~40 routes". A grep over `nudg3-backend/api/public/v1/` found 40 `@router.*` decorators on 2026-05-03; some may be internal/admin sub-routes rather than customer-facing v1 routes. Worth a one-off audit before the public API gets a published reference page.
- Truth (probable): 38–40 customer-facing v1 routes.
- Who knows: Sandil / Lawrence
- Status: open
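The one-off audit could start from the same grep, split per file so internal/admin sub-routes can be triaged by hand. This is a minimal sketch under assumptions: the sample files below stand in for the real `nudg3-backend/api/public/v1/` tree, and the decorator pattern is taken from the grep described in the entry.

```shell
# Hypothetical audit sketch: per-file decorator counts, then a raw total.
# Sample files stand in for the real nudg3-backend/api/public/v1/ tree.
api_dir=$(mktemp -d)
mkdir -p "$api_dir/public/v1"
cat > "$api_dir/public/v1/reports.py" <<'EOF'
@router.get("/reports")
@router.post("/reports")
EOF
cat > "$api_dir/public/v1/admin.py" <<'EOF'
@router.get("/admin/stats")
EOF

# Per-file counts make it easy to spot files that are likely internal/admin
# (e.g. admin.py here) and exclude them from the customer-facing v1 count.
grep -rc '@router\.' "$api_dir/public/v1" | sort

# Raw total: this is where the "~40" comes from before any triage.
grep -r '@router\.' "$api_dir/public/v1" | wc -l
```

Against the real tree, the per-file listing is the useful part: the raw total reproduces the known ~40, and the audit work is deciding which files count as customer-facing.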
## Fixed gaps (resolved in the 2026-05-03 verification pass)
### Providers monitored (count and list): FIXED

Source-code authority: `nudg3-workflows/utils/provider_name_mapper.py`. Base = ChatGPT, Perplexity, AI Overviews. Premium = Gemini, Claude, Grok, Copilot, AI Mode, plus premium model variants. Brave and Mistral confirmed not in the mapper. `architecture/pipeline-overview.md` and `reference/quick-stats.md` updated.
### Extraction pipeline version: FIXED

Production is v1.12 since 2026-04-23 (892 ms/response; identity and disambiguation flags ON; fuzzy matching permanently OFF). The old v1.7 / v1.10 / v1.11 narrative was replaced with a full version table, including the v1.13 design state. The v1.10 column now also documents the rollback rather than presenting v1.10 as the current ship.
### v1.8 Brand Discovery + v1.9 Disambiguation as separate planned capabilities: FIXED

The pipeline overview now reflects reality: v1.9 disambiguation was merged into v1.10 and shipped (then partially rolled back, and re-enabled at the flag level in v1.12). The matcher portion of v1.8 is superseded by the v1.13 candidate-first design; the discovery and persistence halves are still planned. Updated 2026-05-03.
### Agent tool count "29 tools, 142 evals": FIXED (claim removed)

The specific 29/142 numbers no longer appear in any knowledge-hub doc; they were point-in-time figures and drift. `quick-stats.md` now says "drifts weekly — verify against agent registry" with a path pointer rather than a number.
### Public API endpoint count "37 + 9 insights expansion": FIXED

The insights endpoints shipped (commit c6ee216, 2026-04-04). `quick-stats.md` was rewritten to "~40 v1 routes" with a source-code anchor. The exact-count audit moved to its own open gap above.
### MCP server tool count "10 tools": FIXED (verified, not changed)

Counted 10 `@mcp.tool()` decorators in `nudg3-mcp-server/src/nudg3_mcp/server.py` on 2026-05-03. The tools are now listed by name in `quick-stats.md`.
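A count like this is cheap to re-verify on the next pass. A hedged sketch, with a stand-in file substituting for the real `nudg3-mcp-server/src/nudg3_mcp/server.py`:

```shell
# Stand-in for the real server.py, so the sketch is self-contained.
srv=$(mktemp)
cat > "$srv" <<'EOF'
@mcp.tool()
def list_prompts(): ...
@mcp.tool()
def get_report(): ...
EOF

# Count decorator occurrences; against the real file this should print 10.
grep -c '@mcp\.tool()' "$srv"
```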
### Report generation models / costs: FIXED

The pipeline overview no longer presents Opus/Sonnet/Grok/Gemini as a public tier menu. It now says "default is Sonnet 4.6, premium tiers routed per-workspace". Costs are flagged inline as estimates.
### "AI Visibility Valuation BETA shipping April 30": FIXED (claim removed)

The product overview no longer carries this claim. The current state (V1 valuation methodology v1.0.1 in PR #650; V1.1 Observed Mode pre-flight on 2026-05-01 with 3 hard gates) lives in MEMORY and the active project doc, not yet in the knowledge hub. It will be added to roadmap.md when the project unblocks.
### DataForSEO + Cloro cost claim "$0.006, 82–92% saving": FIXED (claim contained)

The cost figure is preserved in `quick-stats.md` with an explicit "estimate, not measured" label. The comparative "82–92% saving" claim is no longer in any doc.
## How to use this file

- Reference this file when promoting a doc from `accuracy: unverified` to `verified`.
- Add new gaps as you notice them; one minute of flagging saves a bad pitch.
- Claim a gap by changing its status to `in-progress` and adding your name.
- When a gap moves to `fixed`, capture the resolution one-liner under the dated section so future readers can audit the verification trail.