How to Block DeepSeekBot

Complete guide to blocking DeepSeekBot (DeepSeek) from crawling your website using robots.txt, server configuration, and Switch workflows.

Operated by DeepSeek · Commercial Crawlers

Should You Block DeepSeekBot?

DeepSeekBot collects data for AI model training. Blocking it prevents your content from being used in DeepSeek's AI products without affecting your search visibility.

This is a common and recommended action for sites that want to control how their content is used in AI training.

Blocking Methods

1. robots.txt

Effectiveness: High for cooperative crawlers

Add a Disallow rule for DeepSeekBot's user-agent string in your robots.txt file. This is the standard, cooperative method that well-behaved crawlers respect.

2. Server-side UA filtering

Effectiveness: High

Configure your web server (nginx, Apache, Cloudflare) to reject requests matching DeepSeekBot's user-agent patterns. This blocks traffic at the server or edge, before your application ever processes the request.
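As a minimal sketch, assuming nginx, a rule like the following inside your server block rejects matching requests (Apache and Cloudflare offer equivalent user-agent matching):

# Deny any request whose User-Agent matches DeepSeekBot's patterns.
# The ~* operator is case-insensitive, so "deepseek" also covers "DeepSeekBot".
if ($http_user_agent ~* "deepseek") {
    return 403;
}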

3. Switch Journey Workflows

Effectiveness: Highest (granular, real-time control)

Create a custom journey in Switch that detects DeepSeekBot and routes it to a block action, challenge, redirect, or modified content — without touching your server configuration.

robots.txt — Block DeepSeekBot

Add the following to your robots.txt file (at the root of your domain) to block DeepSeekBot:

User-agent: DeepSeekBot
Disallow: /

User-agent: deepseek
Disallow: /

robots.txt — Allow with Restrictions

Alternatively, allow DeepSeekBot on most pages while blocking specific directories:

User-agent: DeepSeekBot
Disallow: /private/
Allow: /

User-agent: deepseek
Disallow: /private/
Allow: /

DeepSeekBot User-Agent Strings

Use these patterns to identify DeepSeekBot in your server logs or firewall rules:

DeepSeekBot
deepseek

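For example, here is a minimal Apache sketch, assuming mod_rewrite is enabled, that turns these patterns into a deny rule (adjust the pattern list or response to your setup):

RewriteEngine On
# [NC] makes the match case-insensitive; [F] returns 403 Forbidden.
RewriteCond %{HTTP_USER_AGENT} (DeepSeekBot|deepseek) [NC]
RewriteRule ^ - [F]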
Frequently Asked Questions

Does blocking DeepSeekBot affect my Google search rankings?

No. Blocking DeepSeekBot does not affect your Google search rankings. Only blocking Googlebot impacts Google Search visibility.

Does DeepSeekBot respect robots.txt?

Yes, DeepSeekBot respects robots.txt directives. Adding a Disallow rule for its user-agent will prevent it from crawling blocked paths.

Can I allow DeepSeekBot on some pages but not others?

Yes. Use robots.txt to disallow specific directories, or use Switch journey workflows for granular page-level control with conditional logic.

Go beyond robots.txt

Switch detects DeepSeekBot in real-time and lets you build custom journey workflows — block, challenge, redirect, or serve modified content. No server changes required.

Get Started Free