How to Run a Technical SEO Audit (Templates + Tools)

A technical SEO audit is the foundation for any sustainable organic search strategy. It finds the infrastructure, crawlability and indexability issues that prevent search engines from understanding and ranking your content.

This guide walks you through a repeatable, practical audit process with templates, a prioritisation matrix and recommended tools so you can go from discovery to tracked fixes quickly.

What a technical SEO audit should achieve

  • Identify indexability and crawlability problems.
  • Find performance and Core Web Vitals issues affecting user experience.
  • Discover structured data and canonicalisation problems.
  • Quantify impact and prioritise fixes with a clear ticketing workflow.
  • Provide an audit artefact you can re-run periodically and measure improvements against.

Audit overview: stages and cadence

  1. Scoping – define which sections, subdomains and environments to audit.
  2. Data collection – crawl, log files, Search Console and analytics.
  3. Analysis – surface issues, group them by type and estimate impact.
  4. Prioritisation – score issues by impact, effort and risk.
  5. Remediation – create tickets, assign owners and track changes.
  6. Verification – re-crawl and confirm fixes in Search Console and analytics.

1. Scoping the audit

Start by defining scope and goals. Not every “technical SEO audit” needs the entire site scanned; a targeted approach saves time and reduces noise.

  • Target: whole site, subdomain, site section (e.g. /blog/ or /products/) or a migration environment.
  • Goal: improve indexation, resolve traffic drops, prepare for migration or reduce crawl waste.
  • Stakeholders: SEO, dev, product, hosting and content teams. Identify owners early.

2. Data collection (must-have sources)

Collect the following sources before analysis; they are the inputs that make the audit actionable.

  • Site crawl report – use a crawler (Screaming Frog, Sitebulb or similar) to map URLs, response codes, metadata, headings and internal links.
  • Server log files – raw bot activity shows what search engines actually crawl and how often.
  • Google Search Console – index coverage, URL inspection, enhancement reports and search performance.
  • Analytics – organic traffic trends, landing page performance and conversion signals.
  • Page speed lab & field data – Lighthouse, WebPageTest, Core Web Vitals from Search Console and CrUX.
  • Sitemap XML – official list of URLs submitted to search engines.
  • Robots.txt and CDN/edge rules – ensure there are no inadvertent disallow rules or edge caching conflicts.
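
As a quick sanity check on the robots.txt input, here is a minimal Python sketch that confirms key URLs are not accidentally disallowed. The site root and sample URLs are placeholder assumptions; swap in the paths you actually care about.

    # Minimal robots.txt sanity check using the standard library.
    from urllib.robotparser import RobotFileParser

    SITE = "https://www.example.com"           # hypothetical site root
    SAMPLE_URLS = [                            # representative URLs to spot-check
        f"{SITE}/",
        f"{SITE}/blog/",
        f"{SITE}/products/widget-1",
    ]

    parser = RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()                              # fetches and parses robots.txt

    for url in SAMPLE_URLS:
        allowed = parser.can_fetch("Googlebot", url)
        print(f"{'OK     ' if allowed else 'BLOCKED'} {url}")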

3. Crawl analysis

Run a full crawl and inspect structural issues with the following focus areas.

  • Response codes: identify 4xx, 5xx and unexpected 2xx responses.
  • Redirect chains: find chains longer than one hop and loops.
  • Canonical tags: mismatches, pointing to unrelated pages or missing where duplicates exist.
  • Duplicate content: similar titles, meta descriptions and body content; group pages into near-duplicate sets.
  • Internal linking: orphan pages, deep pages >4 clicks from the homepage and internal link equity distribution.
  • Meta tags: missing, duplicate or overly long titles and meta descriptions.
  • Hreflang (if applicable): incorrect tags and inconsistent rel="alternate" hreflang annotations.
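
To make the crawl data easier to triage, a short post-processing sketch can flag error responses and duplicate titles. It assumes a CSV export with Address, Status Code and Title 1 columns (Screaming Frog's default naming; adjust the column and file names for your crawler).

    # Post-process a crawl export: flag 4xx/5xx responses and duplicate titles.
    # Column names below match a typical Screaming Frog export; adjust as needed.
    import csv
    from collections import defaultdict

    errors = []
    titles = defaultdict(list)

    with open("internal_all.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            status = int(row["Status Code"] or 0)
            if status >= 400:
                errors.append((status, row["Address"]))
            titles[row["Title 1"].strip().lower()].append(row["Address"])

    duplicate_sets = {t: urls for t, urls in titles.items() if t and len(urls) > 1}
    print(f"{len(errors)} URLs returning 4xx/5xx")
    print(f"{len(duplicate_sets)} duplicate title groups")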

4. Log file analysis

Logs show real bot behaviour. Use them to detect what Googlebot is crawling and where crawl budget is spent.

  • Map log hits to your crawl report and GSC coverage data to see which URLs are crawled but not indexed.
  • Identify high-crawl pages that return 4xx or 5xx errors; these waste crawl budget.
  • Find sections with low crawl frequency that nevertheless receive organic traffic; these are potential discovery gaps.
  • Detect spikes in crawl activity after deployments, which may indicate indexing triggers or regressions.
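
A minimal sketch of this kind of log mining, assuming a combined-format access log and that Googlebot is identified by its user-agent string (production analysis should also verify hits by reverse DNS to exclude spoofed bots):

    # Count Googlebot hits by status code and by top-level site section.
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

    status_counts = Counter()
    section_counts = Counter()

    with open("access.log", encoding="utf-8", errors="ignore") as f:
        for line in f:
            if "Googlebot" not in line:
                continue
            m = LOG_LINE.search(line)
            if not m:
                continue
            status_counts[m.group("status")] += 1
            section = "/" + m.group("path").lstrip("/").split("/")[0]
            section_counts[section] += 1

    print("Status codes:", status_counts.most_common())
    print("Most-crawled sections:", section_counts.most_common(10))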

5. Index coverage & Search Console checks

Use Search Console to validate what the search engine reports about your site.

  • Coverage report – export it to identify why pages are excluded and which of the affected pages are high-value.
  • Sitemaps report – confirm submitted sitemaps match crawled and indexed URLs.
  • URL inspection – spot-check representative URLs to validate canonical interpretation and indexing decisions.
  • Enhancements – review structured data, breadcrumbs and mobile usability reports for flagged issues.
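
One useful cross-check is comparing the submitted sitemap against the URLs you have actually seen crawled or indexed. A minimal sketch, assuming a single (non-index) sitemap and a plain-text export with one URL per line; both file locations are placeholders.

    # Compare sitemap URLs against URLs seen in a crawl or GSC export.
    import urllib.request
    import xml.etree.ElementTree as ET

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    SITEMAP_URL = "https://www.example.com/sitemap.xml"    # hypothetical

    with urllib.request.urlopen(SITEMAP_URL) as resp:
        sitemap_urls = {loc.text.strip() for loc in ET.parse(resp).findall(".//sm:loc", NS)}

    with open("crawled_urls.txt", encoding="utf-8") as f:
        crawled_urls = {line.strip() for line in f if line.strip()}

    print("In sitemap but never crawled:", len(sitemap_urls - crawled_urls))
    print("Crawled but missing from sitemap:", len(crawled_urls - sitemap_urls))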

6. Performance and Core Web Vitals

Performance affects both user experience and ranking signals. Compare field and lab data.

  • Core Web Vitals (LCP, INP, which replaced FID, and CLS) from CrUX and Search Console.
  • Field data vs Lighthouse lab scores; prioritise field metrics, as they reflect real user impact.
  • Identify heavy pages (images, third-party scripts, large JavaScript bundles) and list remediation steps.
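
Field data can also be pulled programmatically. Here is a sketch against the Chrome UX Report (CrUX) API; the endpoint and response shape reflect the public documentation at the time of writing, so verify them before relying on this, and note that the page URL and API key are placeholders.

    # Pull p75 field metrics for a URL from the CrUX API.
    import requests

    API_KEY = "YOUR_API_KEY"                               # placeholder
    ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

    resp = requests.post(ENDPOINT, json={
        "url": "https://www.example.com/category/widgets",  # hypothetical page
        "formFactor": "PHONE",
    })
    resp.raise_for_status()
    metrics = resp.json()["record"]["metrics"]

    for name in ("largest_contentful_paint", "interaction_to_next_paint",
                 "cumulative_layout_shift"):
        if name in metrics:
            print(name, "p75 =", metrics[name]["percentiles"]["p75"])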

7. Structured data & rich results

Validate schema and check for markup issues that prevent rich results.

  • Use the Rich Results Test to check eligibility and errors.
  • Look for inconsistent data (wrong types, missing properties, inaccurate publish dates).
  • Confirm schema is present on canonical URLs rather than on duplicate variants.
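
A quick way to spot-check markup is to pull the JSON-LD out of the raw HTML. A rough sketch follows; a regex is acceptable for spot checks, but a proper HTML parser is safer for anything systematic, and the URL is a placeholder.

    # Extract JSON-LD blocks from a page and list their @type values.
    import json
    import re
    import requests

    url = "https://www.example.com/products/widget-1"      # hypothetical page
    html = requests.get(url, timeout=10).text

    pattern = re.compile(
        r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )

    for raw in pattern.findall(html):
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            print("Invalid JSON-LD block found")
            continue
        for item in (data if isinstance(data, list) else [data]):
            print("Schema type:", item.get("@type"))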

8. Crawl budget and index efficiency

Large sites in particular need to manage crawl budget efficiently.

  • Identify low-value URL patterns (calendar pages, faceted navigation) and block or noindex where appropriate.
  • Ensure sitemap XML prioritises canonical, high-value pages and excludes thin or duplicate content.
  • Consider server response time and rate-limiting rules if bots are being throttled.
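
Faceted and parameterised URLs are usually the biggest source of crawl waste. A small sketch that groups crawled URLs by their query-parameter signature so the worst offenders surface first; it assumes a plain-text list of crawled URLs, one per line.

    # Group crawled URLs by query-parameter signature to surface faceted patterns.
    from collections import Counter
    from urllib.parse import parse_qs, urlparse

    param_patterns = Counter()

    with open("crawled_urls.txt", encoding="utf-8") as f:
        for line in f:
            parsed = urlparse(line.strip())
            params = sorted(parse_qs(parsed.query).keys())
            if params:
                param_patterns["&".join(params)] += 1

    # High counts are candidates for robots rules, noindex or parameter consolidation.
    for pattern, count in param_patterns.most_common(10):
        print(f"{count:>6}  ?{pattern}")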

9. Security, hosting and HTTP configuration

Site availability and secure configuration are non-negotiable.

  • Check SSL certificate validity and mixed-content issues.
  • Inspect HTTP headers: HSTS, X-Frame-Options and cache-control policies.
  • Confirm no indexing of staging sites or private environments via robots or basic auth.
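
These checks are easy to script. A minimal sketch that spot-checks response headers, including any X-Robots-Tag that could accidentally block indexing; the URLs are placeholders, and a staging environment behind basic auth should return 401 here, which is exactly what you want to see.

    # Spot-check security, caching and indexing-related response headers.
    import requests

    URLS = [
        "https://www.example.com/",              # hypothetical production URL
        "https://staging.example.com/",          # hypothetical staging URL
    ]
    HEADERS_OF_INTEREST = [
        "Strict-Transport-Security",
        "X-Frame-Options",
        "Cache-Control",
        "X-Robots-Tag",
    ]

    for url in URLS:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        print(f"\n{url} -> {resp.status_code}")
        for name in HEADERS_OF_INTEREST:
            print(f"  {name}: {resp.headers.get(name, '<not set>')}")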

10. Mobile & device checks

With mobile-first indexing, mobile behaviour is primary.

  • Compare mobile vs desktop rendering and resources; ensure parity for critical content.
  • Check dynamic rendering or client-side rendering issues where content might be hidden from crawlers.
  • Validate mobile usability issues in Search Console and fix navigation or viewport problems.
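
A rough parity check can be scripted by fetching the same URL with a mobile and a desktop user agent and comparing the raw HTML. This only catches server-side differences; client-side rendered content needs a headless browser. The URL and user-agent strings are illustrative.

    # Rough mobile/desktop parity check on raw HTML (no JavaScript execution).
    import re
    import requests

    URL = "https://www.example.com/products/widget-1"      # hypothetical page
    USER_AGENTS = {
        "mobile": "Mozilla/5.0 (Linux; Android 13) AppleWebKit/537.36 Mobile Safari/537.36",
        "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Safari/537.36",
    }

    for label, ua in USER_AGENTS.items():
        html = requests.get(URL, headers={"User-Agent": ua}, timeout=10).text
        headings = len(re.findall(r"<h[1-3][\s>]", html, re.IGNORECASE))
        print(f"{label:7s}: {len(html):>8} bytes of HTML, {headings} h1-h3 headings")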

11. Prioritisation framework (impact × effort)

Not all issues are equal. Use a simple scoring matrix to prioritise.

  • Impact score (1–5): estimate sessions/conversions or visibility risk.
  • Effort score (1–5): developer time, complexity and testing required.
  • Risk score (1–5): chance the fix could adversely affect other systems (migrations, canonical changes).
  • Calculate a priority index: priority = (Impact × 2) − Effort − Risk. Higher values = higher priority.
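
The scoring can live in a spreadsheet, but here is the same calculation as a few lines of Python, with illustrative issue entries, to make the formula concrete.

    # priority = (impact * 2) - effort - risk; higher is more urgent.
    issues = [
        # (name, impact 1-5, effort 1-5, risk 1-5) -- illustrative values
        ("Canonicals point to parameterised URLs", 5, 2, 2),
        ("Slow LCP on category pages",             4, 4, 1),
        ("Duplicate meta descriptions",            2, 1, 1),
    ]

    scored = [(name, impact * 2 - effort - risk)
              for name, impact, effort, risk in issues]

    for name, priority in sorted(scored, key=lambda item: item[1], reverse=True):
        print(f"{priority:>3}  {name}")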

12. Ticket template (copy/paste)


Title: [TECH SEO] {Short description} — {URL}

Description:
- Environment: production/staging
- URL(s) affected: {comma-separated}
- Symptom: {e.g. 404s, duplicate canonicals, slow LCP}
- Evidence:
  - Crawl extract: {link to crawl CSV}
  - Log sample: {link to log extract}
  - GSC reference: {link}
  - Analytics impact: {sessions, CTR, conversions}
- Recommended fix: {detailed steps}
- Rollback plan: {how to revert}
- Acceptance criteria: {how we'll validate fix}
- Priority: {P0/P1/P2}
- Owner: {team/person}
- Due date: {date}
  

13. Example findings and remediation

Here are common findings and practical remediations you can use in tickets.

  • Issue: Canonical tags pointing to parameterised versions.
    • Fix: Set the canonical to the clean, content-bearing URL and update the sitemap. Add redirect rules if needed.
  • Issue: Large images causing slow LCP.
    • Fix: Implement responsive images with srcset, use modern formats (WebP/AVIF) and serve via CDN.
  • Issue: Faceted URLs indexed causing duplicate content.
    • Fix: Add rel="canonical" to the canonical page, noindex low-value combinations or block via robots where appropriate; add canonicalised URLs to the sitemap.
  • Issue: Bot crawl spikes on calendar pages.
    • Fix: Disallow low-value URL patterns, exclude from sitemap and use robots.txt or meta robots tag.

14. Validation and verification

After the fix is deployed, verify with a set of repeatable checks.

  • Re-crawl the affected URLs and confirm status codes and canonical tags.
  • Check Search Console index status for the specific URLs and coverage report for improvements.
  • Monitor logs for changed crawl patterns and check that bot errors have reduced.
  • Track organic sessions and impressions for the affected templates or top-level sections for 2–12 weeks depending on the issue.
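
The first two checks are easy to automate against a list of fixed URLs. A sketch that confirms the expected status code and canonical target; it assumes a plain <link rel="canonical" href="..."> tag with rel before href, and the entries are illustrative.

    # Verify fixed URLs: expected status code and canonical target.
    import re
    import requests

    CHECKS = [
        # (url, expected_status, expected_canonical) -- illustrative entries
        ("https://www.example.com/products/widget-1", 200,
         "https://www.example.com/products/widget-1"),
    ]
    # Assumes rel="canonical" appears before href in the link tag.
    CANONICAL = re.compile(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', re.IGNORECASE)

    for url, want_status, want_canonical in CHECKS:
        resp = requests.get(url, timeout=10)
        match = CANONICAL.search(resp.text)
        canonical = match.group(1) if match else None
        ok = resp.status_code == want_status and canonical == want_canonical
        print(f"{'PASS' if ok else 'FAIL'} {url} -> {resp.status_code}, canonical={canonical}")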

15. Reporting and stakeholder comms

Maintain a clear audit report and change log for transparency.

  • Executive summary: top issues, expected impact and next steps.
  • Detailed findings: CSV export of crawl issues and matched log samples.
  • Fix register: tickets created, owners and status with links to PRs and deploy IDs.
  • Follow-up timeline: when re-checks will occur and expected measurement windows.

16. Tools checklist

Recommended tools for an efficient technical SEO audit:

  • Crawlers: Screaming Frog, Sitebulb, DeepCrawl.
  • Log analysis: Botify Logs, Screaming Frog Log File Analyser, custom BigQuery pipelines.
  • Search Console: native GSC UI and exported CSVs.
  • Performance: Lighthouse, WebPageTest, CrUX (BigQuery) and PageSpeed Insights.
  • Structured data: Google Rich Results Test and schema validators.
  • Issue tracking: Jira, Trello, Asana with ticket templates.
  • Monitoring: rank trackers (Ahrefs/SEMrush), uptime/crawl monitors and analytics platforms.

17. Audit frequency and automation

How often should you run a technical audit? The answer depends on the site and release cadence.

  • Large sites (thousands of pages): monthly automated crawls and weekly log sampling.
  • Medium sites (hundreds of pages): quarterly full audits and monthly automated checks.
  • Small sites: biannual checks, with ad-hoc scans for migrations or large releases.
  • Automate alerts for critical issues (5xx spikes, indexability drops, sitemap errors) and create automatic tickets for common low-risk fixes; a minimal example of such a check follows this list.
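
A minimal sketch of such an automated check, which samples sitemap URLs and flags a 5xx spike. The sitemap URL, sample size and alert threshold are all assumptions to tune for your site; wire the final alert into Slack, email or your ticketing tool.

    # Sample sitemap URLs and alert if the share of 5xx responses exceeds a threshold.
    import random
    import urllib.error
    import urllib.request
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "https://www.example.com/sitemap.xml"    # hypothetical
    SAMPLE_SIZE = 50
    ALERT_THRESHOLD = 0.02                                 # alert above 2% server errors

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    with urllib.request.urlopen(SITEMAP_URL) as resp:
        urls = [loc.text for loc in ET.parse(resp).findall(".//sm:loc", NS)]

    sample = random.sample(urls, min(SAMPLE_SIZE, len(urls)))
    server_errors = 0
    for url in sample:
        try:
            with urllib.request.urlopen(url, timeout=10) as page:
                status = page.status
        except urllib.error.HTTPError as exc:
            status = exc.code
        except OSError:
            continue                                       # skip network failures
        if status >= 500:
            server_errors += 1

    rate = server_errors / max(len(sample), 1)
    if rate > ALERT_THRESHOLD:
        print(f"ALERT: {rate:.1%} of sampled URLs returned 5xx")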

18. Common pitfalls and how to avoid them

  • Fixing low-impact items first – use the prioritisation framework to focus on what moves the needle.
  • Over-indexing thin pages – apply quality gates and canonical rules.
  • Deploying site-wide changes without rollback plans – always include acceptance criteria and rollback steps in tickets.
  • Ignoring mobile parity – validate both mobile and desktop outputs after changes.

19. Sample audit summary (one-page)

Use this template for executive stakeholders.

  • Scope: www.example.com – public site, excluding /staging/
  • Top issues: 3,200 duplicate meta descriptions; 5xx errors on product feeds; slow LCP on category pages.
  • Priority fixes: Fix canonical rules (P0), patch server error on feed (P0), compress category images (P1).
  • Timeline: P0 fixes within 7 days, P1 within 30 days.
  • Expected impact: recover up to 15% of lost crawl budget and reduce index exclusions by 60% in 8 weeks.

Conclusion

A well-run technical SEO audit turns chaotic site issues into a prioritised, trackable plan. Use the templates in this guide to collect consistent data, engage engineering teams with clear tickets and measure outcomes.

Technical SEO is an ongoing discipline; treat audits as part of your release cycle and automate checks where sensible to reduce surprises and protect organic visibility. Also read – Practical On-Page SEO Checklist.
