How to Block Googlebot
Complete guide to blocking Googlebot (Google) from crawling your website using robots.txt, server configuration, and Switch workflows.
Should You Block Googlebot?
Caution: Googlebot is a search engine crawler. Blocking it will remove your pages from Google's search index, which directly impacts your organic traffic and visibility.
Only block Googlebot if you intentionally want to de-index your site from Google. Instead, consider using Switch to serve optimized content or manage specific page access.
Blocking Methods
1. robots.txt
Effectiveness: high for cooperative crawlers. Add a Disallow rule for Googlebot's user-agent string in your robots.txt file. This is the standard, cooperative method that well-behaved crawlers respect.
2. Server-side UA filtering
Effectiveness: high. Configure your web server (nginx, Apache, Cloudflare) to reject requests matching Googlebot's user-agent patterns. This blocks requests at the network level before your application processes them.
3. Switch Journey Workflows
Effectiveness: highest, with granular, real-time control. Create a custom journey in Switch that detects Googlebot and routes it to a block action, challenge, redirect, or modified content, all without touching your server configuration.
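For the server-side approach, a minimal nginx sketch might look like the following (the pattern list and the 403 status code are illustrative choices, not the only options):

```nginx
# Inside the relevant server {} block.
# ~* performs a case-insensitive match on the User-Agent header;
# return 403 rejects the request before it reaches the application.
if ($http_user_agent ~* "(googlebot|google-inspectiontool)") {
    return 403;
}
```

Apache can do the equivalent with a mod_rewrite RewriteCond on %{HTTP_USER_AGENT}, and Cloudflare with a custom WAF rule matching the user-agent field.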
robots.txt — Block Googlebot
Add the following to your robots.txt file (at the root of your domain) to block Googlebot:
User-agent: Googlebot
Disallow: /

User-agent: googlebot
Disallow: /

User-agent: Google-InspectionTool
Disallow: /
robots.txt — Allow with Restrictions
Alternatively, allow Googlebot on most pages while blocking specific directories:
User-agent: Googlebot
Disallow: /private/
Allow: /

User-agent: googlebot
Disallow: /private/
Allow: /

User-agent: Google-InspectionTool
Disallow: /private/
Allow: /
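Before deploying, you can sanity-check rules like these locally with Python's built-in urllib.robotparser (a quick sketch using just the Googlebot entry and two example paths):

```python
from urllib import robotparser

# The "allow with restrictions" rules from above, as a string.
rules = """
User-agent: Googlebot
Disallow: /private/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot may fetch ordinary pages but not anything under /private/.
print(rp.can_fetch("Googlebot", "/index.html"))    # True
print(rp.can_fetch("Googlebot", "/private/data"))  # False
```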
Googlebot User-Agent Strings
Use these patterns to identify Googlebot in your server logs or firewall rules:
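As a sketch, Google's documented crawler user agents all contain the token Googlebot (the main crawler family, e.g. "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)") or Google-InspectionTool (the Search Console URL Inspection fetcher). A log filter can match these case-insensitively; is_googlebot below is a hypothetical helper, not part of any library:

```python
import re

# Substrings found in documented Googlebot user-agent strings.
# Illustrative, not exhaustive.
GOOGLEBOT_PATTERN = re.compile(r"googlebot|google-inspectiontool", re.IGNORECASE)

def is_googlebot(user_agent: str) -> bool:
    """Return True if the User-Agent header looks like Googlebot."""
    return bool(GOOGLEBOT_PATTERN.search(user_agent or ""))

print(is_googlebot(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
))  # True
print(is_googlebot("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))  # False
```

Keep in mind that the User-Agent header is trivially spoofed; for authoritative identification, Google recommends verifying the client IP via reverse DNS, which should resolve to a googlebot.com or google.com hostname.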
Frequently Asked Questions
Does blocking Googlebot affect my Google search rankings?
Yes. Blocking Googlebot will remove your site from Google Search entirely. Only do this if you intentionally want to de-index.
Does Googlebot respect robots.txt?
Yes, Googlebot respects robots.txt directives. Adding a Disallow rule for its user-agent will prevent it from crawling blocked paths.
Can I allow Googlebot on some pages but not others?
Yes. Use robots.txt to disallow specific directories, or use Switch journey workflows for granular page-level control with conditional logic.
Go beyond robots.txt
Switch detects Googlebot in real time and lets you build custom journey workflows: block, challenge, redirect, or serve modified content. No server changes required.
Get Started Free