How VerisAI works
The audit maps root-cause signals observable in the snapshot.
You get an evidence-based, time-stamped snapshot of crawler-visible website facts, AI-visible interpretation, and the signal gaps that explain the difference.
AI Identity
AI Identity is the profile a model may infer about your company in a given run at a given time: what you do, who you serve, where you operate, and how credible you appear. It is derived from signals the model can resolve—not from intent.
Example: if your service taxonomy is inconsistent, models may infer the wrong category or misattribute your offerings.
AI Identity Governance
Governance means keeping your company identity machine-verifiable and stable over time: consistent entity anchors, crawlable identity pages, canonical consistency across variants, and structured data that supports entity resolution.
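As a concrete illustration of a machine-verifiable entity anchor, the sketch below builds a minimal Organization JSON-LD payload of the kind structured-data governance relies on. All values here (company name, URLs, sameAs links, contact details) are placeholder assumptions, not VerisAI output:

```python
import json

# Hypothetical example values -- substitute your real entity data.
organization_jsonld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",                       # brand/legal name, kept identical across pages
    "url": "https://www.example.com/",          # one canonical origin, not a variant
    "logo": "https://www.example.com/logo.png",
    "sameAs": [                                 # unambiguous external identity anchors
        "https://www.linkedin.com/company/example-co",
        "https://github.com/example-co",
    ],
    "contactPoint": {
        "@type": "ContactPoint",
        "contactType": "customer support",
        "email": "support@example.com",
    },
}

# Emit a payload suitable for a <script type="application/ld+json"> tag.
print(json.dumps(organization_jsonld, indent=2))
```

Keeping this block byte-for-byte consistent across key pages and URL variants is what makes the entity resolvable rather than inferable.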
Audit (root-cause signals)
When AI outputs drift from reality, the drift usually matches the observable signal environment: missing identity anchors, inconsistent canonicals, blocked crawling, thin or contradictory content, or broken structured data. The audit maps specific model claims to specific observable signals and separates deterministic website facts from model-generated interpretation.
Crawl & indexability
robots.txt, sitemaps, indexability controls, canonical paths, and fetch consistency across URL variants—so crawlers see one stable source of truth.
Identity anchors
About/Contact/legal entity signals, locations, ownership, and other machine-resolvable anchors across key pages—kept consistent across variants and languages.
Structured data integrity
Organization / WebSite schema, contact points, identifiers, and validation of critical fields used for entity resolution—no conflicting IDs or ambiguous sameAs.
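The "no conflicting IDs or ambiguous sameAs" rule can be sketched as a check that every Organization JSON-LD block collected across a site resolves to the same @id and the same sameAs set. The two blocks below are invented examples:

```python
import json

# Hypothetical JSON-LD blocks collected from two pages of the same site.
blocks = [
    json.loads('{"@type": "Organization", "@id": "https://example.com/#org", '
               '"sameAs": ["https://www.linkedin.com/company/example-co"]}'),
    json.loads('{"@type": "Organization", "@id": "https://example.com/#org", '
               '"sameAs": ["https://www.linkedin.com/company/example-co"]}'),
]

ids = {b.get("@id") for b in blocks}
same_as_sets = {tuple(sorted(b.get("sameAs", []))) for b in blocks}

issues = []
if len(ids) > 1:
    issues.append(f"conflicting @id values: {sorted(ids)}")
if len(same_as_sets) > 1:
    issues.append("sameAs sets differ across pages (ambiguous entity resolution)")

print("ok" if not issues else "; ".join(issues))
```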
Content clarity
Service taxonomy and positioning language, contradictions, thin pages, and missing context that forces model inference—so models don’t ‘fill gaps’ with guesses.
AI Visibility Score
The audit produces a quantitative AI Visibility Score (0–100) across 8 layers:
- L1 — Gateway
- Whether AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, GrokBot) are allowed in robots.txt and can fetch the page. A blocked gateway immediately sets the score to 0.
- L2 — SSR Quality
- Whether the page delivers a complete, server-rendered HTML response: valid title, H1 tag, 500+ characters of visible content, and valid JSON-LD on initial load.
- L3 — Indexability
- Presence and accuracy of canonical tags, lang attribute, and valid JSON-LD structure. Penalties reduce the content score, discouraging ambiguous entity signals.
- L4 — Content Quality
- Type-specific semantic scoring (Organization, Article, Product, etc.) — checks entity clarity, schema completeness, contact info, social links, team signals, and description quality.
- L5 — Technical SEO
- Baseline web health: HTTPS, valid sitemap, responsive viewport, and asset optimization (CSS, JS, image counts).
- L6 — On-Page SEO
- Semantic markup quality: heading hierarchy, alt text coverage, internal link density, and OpenGraph/Twitter card completeness.
- L7 — Multi-LLM Citation Readiness
- Per-platform citation signal scoring for ChatGPT, Gemini, Claude, and Perplexity — based on bot access, FAQ presence, question-format headings, definition lists, author attribution, and structured data.
- L8 — SEO Activity
- Signals of active SEO management: GTM/GA4, sitemap scale (30+ URLs), a blog/content section, hreflang, advanced schema types, and third-party SEO tool presence.
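The L1 Gateway check above can be sketched with the standard library's robots.txt parser. The robots.txt content and page URL are invented; the bot list mirrors the crawlers named under L1:

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "GrokBot"]

# Hypothetical robots.txt that blocks one AI crawler.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

page = "https://example.com/about"
blocked = [bot for bot in AI_BOTS if not parser.can_fetch(bot, page)]

# Per L1: a blocked gateway zeroes the score for that crawler's view.
gateway_open = not blocked
print("gateway open" if gateway_open else f"blocked for: {blocked}")
```

A real gateway check would also verify the page actually returns a fetchable response, not just robots.txt permission.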
For detailed scoring methodology, formulas, and documentation sources, see AI Visibility Score Methodology.
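To make the gate-then-aggregate structure concrete, here is a hedged sketch of combining the eight layer scores with the hard L1 gate. The equal weights and layer values are invented for illustration and are not VerisAI's published formula; the methodology document referenced above defines the real one:

```python
# Per-layer scores on a 0-100 scale; values here are invented.
layer_scores = {
    "L1_gateway": 100, "L2_ssr": 80, "L3_indexability": 90, "L4_content": 70,
    "L5_technical_seo": 85, "L6_onpage_seo": 75, "L7_citation": 60, "L8_activity": 50,
}

# Equal weights as a placeholder assumption.
weights = {layer: 1 / len(layer_scores) for layer in layer_scores}

def ai_visibility_score(scores: dict, weights: dict) -> float:
    # Hard gate: a blocked gateway (L1 == 0) zeroes the whole score.
    if scores["L1_gateway"] == 0:
        return 0.0
    return round(sum(scores[k] * weights[k] for k in scores), 1)

print(ai_visibility_score(layer_scores, weights))
```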
AI Knowledge Diff
Knowledge Diff compares deterministic website facts with what AI systems say in a single run.
- Website fact extraction: VerisAI fetches the target domain and derives crawler-visible facts from VCL Layer 4 Ground Truth Completeness. This is deterministic website analysis, not generative fact extraction.
- Ground truth gate: If critical identity facts are missing or L4 completeness is too low, the diff stops and reports that stronger website ground truth is needed before AI comparison is reliable.
- AI narrative snapshot: When the gate passes, VerisAI queries ChatGPT, Gemini, Claude, and Perplexity for a company narrative in the same run.
- Per-platform diff: Each AI answer is compared with the L4-derived website facts to identify matched facts, discrepancies, missing facts, and hallucinated claims.
This is a time-stamped diagnostic snapshot. It does not claim continuous monitoring, historical trend analysis, or real-time alerting unless those services are explicitly configured separately.
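The per-platform diff step above can be sketched as set comparisons between website-derived facts and facts extracted from one AI platform's answer. The fact keys and values below are invented, and real fact extraction from a narrative answer is more involved than a flat dictionary:

```python
# L4-derived website ground truth (illustrative values only).
website_facts = {
    "legal_name": "Example Co GmbH",
    "hq_city": "Berlin",
    "founded": "2018",
    "services": "cloud auditing",
}

# Facts extracted from one AI platform's narrative answer (also invented).
ai_facts = {
    "legal_name": "Example Co GmbH",   # matches the website
    "hq_city": "Munich",               # discrepancy
    "ceo": "Jane Doe",                 # not on the website: potential hallucination
}

matched       = {k for k in website_facts if ai_facts.get(k) == website_facts[k]}
discrepancies = {k for k in website_facts if k in ai_facts and ai_facts[k] != website_facts[k]}
missing       = {k for k in website_facts if k not in ai_facts}
hallucinated  = {k for k in ai_facts if k not in website_facts}

print({"matched": matched, "discrepancies": discrepancies,
       "missing": missing, "hallucinated": hallucinated})
```

Running this per platform yields the four finding categories named above for each of ChatGPT, Gemini, Claude, and Perplexity.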
Deliverables
Outputs are snapshot-based and time-stamped so you can rerun checks after fixes and verify whether AI interpretations converge toward crawler-visible website facts.
- AI Identity baseline (what selected AI systems claim in the snapshot + uncertainty patterns)
- Evidence map (claim → observable website signals and pages)
- Forensic crawlability and SEO-compatibility findings (with affected URLs)
- Knowledge Diff findings (where AI interpretation diverges from L4-derived website ground truth in the snapshot)