How to Block DuckDuckBot
Complete guide to blocking DuckDuckBot, DuckDuckGo's web crawler, from crawling your website using robots.txt, server configuration, and Switch workflows.
Should You Block DuckDuckBot?
Caution: DuckDuckBot is a search engine crawler. Blocking it will remove your pages from DuckDuckGo's own index, which directly impacts your organic traffic and visibility. Note that DuckDuckGo also draws results from partner sources such as Bing, so some listings may persist even after DuckDuckBot is blocked.
Only block DuckDuckBot if you intentionally want to de-index your site from DuckDuckGo. Otherwise, consider using Switch to serve optimized content or to manage access to specific pages.
Blocking Methods
1. robots.txt
Effectiveness: High for cooperative crawlers.
Add a Disallow rule for DuckDuckBot's user-agent string in your robots.txt file. This is the standard, cooperative method that well-behaved crawlers respect.
2. Server-side UA filtering
Effectiveness: High.
Configure your web server or edge provider (nginx, Apache, Cloudflare) to reject requests matching DuckDuckBot's user-agent patterns. This stops the request at the server or edge layer, before your application processes it; a minimal nginx sketch follows this list.
3. Switch Journey Workflows
Effectiveness: Highest; granular, real-time control.
Create a custom journey in Switch that detects DuckDuckBot and routes it to a block action, challenge, redirect, or modified content, all without touching your server configuration.
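To illustrate method 2, here is a minimal nginx sketch that rejects any request whose User-Agent header contains "duckduckbot" (case-insensitive). The file path, domain, and upstream shown are placeholders, not part of this guide; adapt them to your own configuration.

# Illustrative config, e.g. /etc/nginx/conf.d/block-duckduckbot.conf
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # ~* performs a case-insensitive regex match on the User-Agent header.
    # Returning 403 here stops the request before it reaches the application.
    if ($http_user_agent ~* "duckduckbot") {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # placeholder upstream
    }
}

Apache (a RewriteCond on %{HTTP_USER_AGENT}) and Cloudflare (a WAF custom rule matching the user agent) can achieve the same effect at their respective layers.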
robots.txt — Block DuckDuckBot
Add the following to your robots.txt file (at the root of your domain) to block DuckDuckBot:
User-agent: DuckDuckBot
Disallow: /

User-agent: duckduckbot
Disallow: /

(User-agent matching is case-insensitive under RFC 9309, so the lowercase group is redundant for compliant parsers; it is kept only as a safeguard.)
robots.txt — Allow with Restrictions
Alternatively, allow DuckDuckBot on most pages while blocking specific directories. Under RFC 9309 the most specific (longest path) rule wins, so Disallow: /private/ takes precedence over Allow: / for anything under /private/:
User-agent: DuckDuckBot
Disallow: /private/
Allow: /

User-agent: duckduckbot
Disallow: /private/
Allow: /
DuckDuckBot User-Agent Strings
Use these patterns to identify DuckDuckBot in your server logs or firewall rules:
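DuckDuckGo's documentation has historically listed user-agent strings along the following lines; verify the current list on DuckDuckGo's official help pages before relying on exact matches:

DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)
DuckDuckBot/1.1; (+http://duckduckgo.com/duckduckbot.html)
Mozilla/5.0 (compatible; DuckDuckBot-Https/1.1; https://duckduckgo.com/duckduckbot)

In practice, a case-insensitive match on the substring "DuckDuckBot" covers all of these variants.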
Frequently Asked Questions
Does blocking DuckDuckBot affect my Google search rankings?
No. Google crawls the web with its own crawler, Googlebot, so blocking DuckDuckBot has no effect on your Google Search visibility.
Does DuckDuckBot respect robots.txt?
Yes, DuckDuckBot respects robots.txt directives. Adding a Disallow rule for its user-agent will prevent it from crawling blocked paths.
Can I allow DuckDuckBot on some pages but not others?
Yes. Use robots.txt to disallow specific directories, or use Switch journey workflows for granular page-level control with conditional logic.
Go beyond robots.txt
Switch detects DuckDuckBot in real time and lets you build custom journey workflows: block, challenge, redirect, or serve modified content. No server changes required.
Get Started Free