Scoring Methodology
65 Ranking Factors Across 4 Evaluation Categories
Elmo scores websites across 65 factors that influence both traditional search rankings and AI-generated recommendations. Each category is scored independently and contributes to the overall grade.
Technical SEO: foundational crawl, index, and structure signals that affect discoverability.
Performance: speed and UX signals tied to ranking, crawl efficiency, and user trust.
Content quality: signals that indicate expertise, usefulness, and intent match for human readers.
AI visibility: signals that influence how LLM-powered systems interpret, cite, and recommend your brand.
Technical SEO (22 factors)
1. Crawl accessibility
Checks whether search bots can access key templates and important paths.
2. Indexability controls
Validates noindex, canonical, and robots directives for correctness.
3. XML sitemap quality
Audits sitemap freshness, status codes, and alignment with indexable URLs.
4. Robots.txt policy
Reviews robots rules for accidental blocking and poor crawl prioritization.
5. HTTP status integrity
Flags broken pages, soft 404s, and status code mismatches.
6. Redirect hygiene
Detects redirect chains, loops, and unnecessary temporary redirects.
7. HTTPS enforcement
Confirms secure protocol usage and canonical consistency between variants.
8. Canonical architecture
Verifies duplicate pages resolve to intended canonical URLs.
9. URL readability
Scores slug clarity, stability, and avoidance of noisy parameters.
10. Internal link depth
Measures how many clicks core pages are from the homepage.
11. Orphan page detection
Finds pages with no internal links pointing to them.
12. Duplicate content control
Identifies pages with highly overlapping copy and mixed canonical targets.
13. Title tag coverage
Checks uniqueness, intent alignment, and title length.
14. Meta description quality
Evaluates presence, relevance, and rewrite risk in snippets.
15. Heading structure
Audits H1-H3 hierarchy and semantic section clarity.
16. Image alt coverage
Reviews descriptive alt attributes for accessibility and context.
17. Structured data presence
Measures schema type coverage for business and content entities.
18. Breadcrumb markup
Validates breadcrumb navigation and BreadcrumbList schema consistency.
19. Pagination handling
Checks paginated collections for crawl traps and weak signals.
20. Hreflang readiness
Audits multilingual annotations when international variants exist.
21. Thin page risk
Flags pages with low unique value or low informational depth.
22. Business identity consistency
Checks NAP (name, address, phone) and brand identity consistency across core pages and markup.
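Several of the technical factors above (indexability controls, canonical architecture) come down to reading two signals from a page's head: the meta robots directive and the canonical URL. A minimal sketch of that extraction, using only the Python standard library (the audit logic here is illustrative, not Elmo's actual implementation):

```python
from html.parser import HTMLParser

class IndexabilityAudit(HTMLParser):
    """Collects two indexability signals from a page's <head>:
    the meta robots directive and the canonical URL, if present."""

    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.robots = a.get("content", "")
        elif tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

def audit_indexability(html: str) -> dict:
    parser = IndexabilityAudit()
    parser.feed(html)
    directives = (parser.robots or "").lower()
    return {
        "noindex": "noindex" in directives,
        "canonical": parser.canonical,
    }

# Hypothetical page markup for illustration.
page = """<html><head>
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://example.com/services/">
</head><body></body></html>"""

print(audit_indexability(page))
# → {'noindex': True, 'canonical': 'https://example.com/services/'}
```

A real crawler would also cross-check these against robots.txt rules and HTTP header directives, since a page can be blocked or canonicalized at either layer.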
Performance (6 factors)
1. Largest Contentful Paint (LCP)
Measures how quickly primary page content becomes visible.
2. Cumulative Layout Shift (CLS)
Tracks visual stability and layout jumps during load.
3. Interaction to Next Paint (INP)
Evaluates responsiveness for taps, clicks, and keyboard input.
4. Time to First Byte (TTFB)
Assesses backend response latency and server-side bottlenecks.
5. Caching and compression
Checks gzip/brotli usage and cache-control strategy for static assets.
6. Mobile rendering quality
Verifies responsive behavior and viewport reliability on mobile devices.
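The caching and compression factor above reduces to inspecting a handful of response headers. A rough sketch of that kind of check, where the header names are standard HTTP but the thresholds are illustrative assumptions:

```python
def check_caching(headers: dict) -> list:
    """Flags common caching/compression gaps for a static asset response.
    Thresholds are illustrative, not a definitive policy."""
    issues = []
    encoding = headers.get("Content-Encoding", "").lower()
    if encoding not in ("gzip", "br", "zstd"):
        issues.append("response is not compressed (no gzip/brotli)")
    cache = headers.get("Cache-Control", "").lower()
    if not cache or "no-store" in cache:
        issues.append("asset is not cacheable")
    elif "max-age" in cache:
        seconds = int(cache.split("max-age=")[1].split(",")[0])
        if seconds < 86400:  # under one day is short for a static asset
            issues.append(f"short cache lifetime: {seconds}s")
    return issues

print(check_caching({
    "Content-Encoding": "gzip",
    "Cache-Control": "public, max-age=300",
}))
# → ['short cache lifetime: 300s']
```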
Content quality (15 factors)
1. Search intent alignment
Measures how directly the page answers likely query intent.
2. Topical depth
Checks whether pages cover core subtopics and practical specifics.
3. Content freshness
Looks for update cadence and outdated references in critical pages.
4. Original insight
Rewards first-party data, examples, and unique framing.
5. Factual consistency
Flags contradictory claims and unclear positioning statements.
6. Expertise signals
Evaluates visible qualifications, credentials, and domain knowledge.
7. Author transparency
Checks authorship clarity, bios, and editorial accountability.
8. Readability and structure
Scores scannability using headings, lists, and clear paragraph flow.
9. Entity completeness
Measures explicit mention of products, services, industries, and locations.
10. Keyword targeting clarity
Assesses primary and secondary phrase focus without stuffing.
11. Supporting internal links
Checks contextual links that reinforce topic clusters.
12. Conversion clarity
Evaluates CTA specificity and alignment with page purpose.
13. Local relevance
Reviews local context signals when geography matters.
14. FAQ usefulness
Measures quality and coverage of practical question-answer content.
15. Trust reinforcement
Checks testimonials, proof points, and policy transparency.
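The readability and structure factor above can be approximated with simple counts: subheadings, list blocks, and average paragraph length. A sketch of such a scannability check (the metrics and the wall-of-text threshold are illustrative assumptions, not Elmo's actual scoring):

```python
import re

def scannability_score(html: str) -> dict:
    """Estimates how scannable a page is from its heading, list,
    and paragraph structure. Thresholds here are illustrative."""
    headings = len(re.findall(r"<h[2-6]\b", html, re.I))
    lists = len(re.findall(r"<(?:ul|ol)\b", html, re.I))
    paragraphs = re.findall(r"<p\b[^>]*>(.*?)</p>", html, re.I | re.S)
    words = [len(re.sub(r"<[^>]+>", " ", p).split()) for p in paragraphs]
    avg = sum(words) / len(words) if words else 0
    return {
        "headings": headings,
        "lists": lists,
        "avg_paragraph_words": round(avg, 1),
        "wall_of_text": avg > 120,  # long unbroken paragraphs hurt scanning
    }

sample = "<h2>Pricing</h2><p>Plans start at a flat monthly rate.</p><ul><li>Basic</li></ul>"
print(scannability_score(sample))
# → {'headings': 1, 'lists': 1, 'avg_paragraph_words': 7.0, 'wall_of_text': False}
```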
AI visibility (22 factors)
1. Entity definition clarity
Determines whether AI models can unambiguously identify your business.
2. Brand disambiguation
Checks naming overlap risk and context markers that reduce confusion.
3. Answer-first formatting
Scores whether pages provide concise direct answers near the top.
4. Question coverage
Measures breadth of natural-language questions your content addresses.
5. Citation-ready phrasing
Evaluates quote-worthy statements with clear, verifiable language.
6. Schema completeness for AI
Checks machine-readable schema supporting entity understanding.
7. Service specificity
Assesses explicit service, pricing, and audience qualifiers.
8. Geo qualifier strength
Measures location specificity for local and regional prompts.
9. About and contact transparency
Reviews trust pages that help models validate legitimacy.
10. Review and reputation signals
Looks for consistent social proof and sentiment evidence.
11. Cross-source consistency
Compares messaging consistency across owned and third-party mentions.
12. Long-tail conversational coverage
Audits prompt-style phrase coverage beyond short keywords.
13. Comparison content readiness
Checks whether pages support model-generated comparisons against alternatives.
14. List and table extractability
Measures how easily models can pull structured facts and summaries.
15. Paragraph compression quality
Evaluates sentence density for chunking and retrieval quality.
16. Context-setting intros
Checks whether page intros provide fast, clear framing for retrieval.
17. Knowledge graph alignment
Assesses explicit entities, relationships, and supporting attributes.
18. Source attribution quality
Rewards outbound references and transparent claim provenance.
19. Multi-platform answer readiness
Scores adaptability for ChatGPT, Claude, Gemini, and Perplexity patterns.
20. Prompt-to-page matching
Measures alignment between likely prompts and available landing pages.
21. FAQ schema and Q&A structure
Checks structured FAQ markup and visible Q&A quality.
22. Recommendation confidence signals
Evaluates trust elements that support definitive AI recommendations.
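The FAQ schema factor above refers to schema.org FAQPage markup, which is embedded as JSON-LD. A minimal generator for that markup (the example question is hypothetical; the structure follows the published schema.org vocabulary):

```python
import json

def faq_schema(pairs: list) -> str:
    """Builds schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

# Hypothetical question/answer pair for illustration.
print(faq_schema([
    ("Do you serve clients outside Austin?",
     "Yes, we work with teams across Texas and remotely nationwide."),
]))
```

The resulting JSON-LD goes in a `<script type="application/ld+json">` tag; the visible Q&A on the page should match the markup, since mismatches undermine both the structured data and trust signals.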
Grade A (85-100)
Strong performance with minor refinements needed.
Eligible for AI and search visibility gains with focused optimization.
Grade B (70-84)
Solid baseline with notable improvement opportunities.
Good discoverability, but consistency and depth gaps limit upside.
Grade C (55-69)
Mixed quality with meaningful weaknesses across categories.
Inconsistent performance and reduced citation/recommendation confidence.
Grade D (0-54)
Critical issues impacting visibility and trust.
Requires foundational technical and content remediation before scaling.
Scores are designed for execution, not vanity. Recommendations are prioritized by impact, confidence, and implementation effort so teams can improve visibility in the shortest practical timeline.
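Prioritizing by impact, confidence, and effort is commonly formalized as an ICE-style score. A sketch of that weighting, assuming each input is on a 1-10 scale (the exact formula Elmo uses is not specified here):

```python
def priority_score(impact: float, confidence: float, effort: float) -> float:
    """ICE-style prioritization: higher impact and confidence raise the
    score, higher effort lowers it. Inputs assumed on a 1-10 scale."""
    return round(impact * confidence / effort, 1)

# Hypothetical recommendations scored for illustration.
fixes = [
    ("Add canonical tags to duplicate service pages", 8, 9, 2),
    ("Rewrite all blog intros answer-first", 6, 7, 7),
]
for name, impact, confidence, effort in fixes:
    print(priority_score(impact, confidence, effort), name)
# → 36.0 Add canonical tags to duplicate service pages
# → 6.0 Rewrite all blog intros answer-first
```

Under this weighting, a cheap high-confidence technical fix outranks a broad content rewrite even when the rewrite's potential impact is comparable.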