What is ZoomInfoBot: A Guide for Website Owners

ZoomInfoBot is a data-gathering crawler used by the ZoomInfo platform to collect B2B contact and company data. I'll explain its purpose, its high-frequency crawl patterns, and the tiered server strategies you can use to keep its resource usage under control.
Dean Davis
November 2, 2025

If you’ve noticed ZoomInfoBot showing up in your server logs, you’re not alone; this crawler is appearing on more and more sites. To manage your server properly, you need to know exactly what it is and why it’s hitting your website.

What ZoomInfoBot Does

The bot is designed to find and extract very specific business information from corporate websites, press releases, public filings, and other online sources.

  • Data Focus: It looks for things like contact information (names, job titles, business emails, phone numbers), company details, organizational charts, and technographics (what software a company uses).
  • Crawl Pattern: Its activity is usually highly targeted and can be aggressive. Instead of a slow, broad crawl, it often operates in high-frequency bursts when tracking a specific, valuable data point. This is why it can cause major performance spikes on web servers.

Essentially, its job is to keep ZoomInfo’s database up-to-date and comprehensive for the sales and marketing teams who pay to use it. Knowing its purpose helps you decide if its impact on your server resources is worth the data it might be collecting.

Preparing Your Server: Strategies for Managing Data-Gathering Bots

As web designers and developers, we deal with automated traffic every day. Most of it is benign: Google, Bing, and other search engines. But a growing number of data gatherers, like ZoomInfoBot, crawl with a specific goal: collecting business intelligence. Their non-standard, high-frequency crawling can spike server usage, slow down sites for actual visitors, and sometimes increase hosting costs.

The point isn’t to block every bot, but to manage how they use your resources. We need a fast, tiered defense that keeps things running smoothly.

I. Identifying the Traffic

Before you apply any rules, you need proof. The User-Agent (UA) string is the bot’s most reliable signature, even when its IP addresses constantly shift.

  • The Signature: Look for ZoominfoBot in your logs.
  • Log Locations:
    • Apache: Usually /var/log/apache2/access.log
    • Nginx: Often /var/log/nginx/access.log
  • Quick Check (Command Line): Use grep to quickly see the activity volume:
grep "ZoominfoBot" /var/log/nginx/access.log | less

II. Tier 1: Passive Guidance (robots.txt)

This is your first, non-aggressive step. It’s a standard request that compliant bots follow. If the bot is already causing problems, this is unlikely to solve them, but it’s the correct starting point.

  1. Request Full Exclusion: Ask the bot to avoid the entire site by placing this in your robots.txt file:

User-agent: ZoominfoBot
Disallow: /

  2. Request a Delay: You can also ask it to pause between page requests. Crawl-delay is not a standardized directive, however, so many aggressive crawlers ignore it:

User-agent: ZoominfoBot
Crawl-delay: 5
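
Once the file is saved, confirm it is actually being served from the site root, since that is the only location crawlers check. A quick sanity check, with example.com standing in for your own domain:

# robots.txt must be reachable at the root of the site
curl -s https://example.com/robots.txt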

III. Tier 2: High-Performance Blocking (UA Denial)

This is the most balanced and effective fix. By blocking the User-Agent directly in the web server config, you stop the request at the earliest point, saving CPU cycles.

Crucial Note: ZoomInfo is a business-data company, entirely separate from Zoom Video Communications. Do not block the IP ranges of the video service, or you could disrupt legitimate staff use. Focus only on the User-Agent signature.

A. For Apache Servers (.htaccess)

Use mod_rewrite to check the User-Agent header and return a 403 Forbidden error. Place this code at the top of your .htaccess file:

# Block specific User-Agent
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} ZoominfoBot [NC]
    RewriteRule .* - [F,L]
</IfModule>
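
After saving the .htaccess change, you can verify the rule from the command line by spoofing the bot's User-Agent. A quick check, with example.com standing in for your own domain:

# The blocked UA should get "HTTP/1.1 403 Forbidden"
curl -I -A "ZoominfoBot" https://example.com/

# A normal browser UA should still get a 200
curl -I -A "Mozilla/5.0" https://example.com/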

B. For Nginx Servers (Recommended Best Practice: map)

The map directive is the recommended way to classify requests by a variable such as the User-Agent: it is evaluated only when the variable is used, and it avoids the well-known pitfalls of if in Nginx configs.

Define the block variable (put this in the main http context):

map $http_user_agent $block_bot {
    default 0;           # 0 means allow
    ~*ZoominfoBot 1;     # 1 means block if UA matches (case-insensitive)
}

Apply the block (put this inside the specific server block for your site):

server {
    #...
    if ($block_bot) {
        # Returns 444, which is an Nginx-specific code that closes the connection immediately.
        return 444; 
    }
    #...
}
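
Before the block takes effect, validate and reload the configuration, then test it with a spoofed User-Agent. Because 444 closes the connection without sending a response, curl reports an empty reply instead of an HTTP status. A minimal check, with example.com standing in for your own domain:

# Validate the config, then reload Nginx without downtime
sudo nginx -t && sudo systemctl reload nginx

# Expect "curl: (52) Empty reply from server" for the blocked UA
curl -I -A "ZoominfoBot" https://example.com/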

IV. Tier 3: Conditional Rate Limiting

If you need to allow limited access but prevent resource overload, rate limiting is the solution. It throttles the high-frequency bursts that cause the most server strain.

This method works best in Nginx, using the same map technique as Tier 2 to target the rate limit only at the identified User-Agent. One caveat: limit_req cannot be placed inside an if block, so the conditional logic has to live in the limit key itself. Nginx does not count requests whose key is empty, which is exactly the behavior we need.

Define the limit key and zone (in the http block):

# Empty key = request is not rate limited; matching UA = limited per client IP
map $http_user_agent $bot_limit_key {
    default "";
    ~*ZoominfoBot $binary_remote_addr;
}

# Define a zone named bot_limit with 10 MB of state, allowing 1 request per second per key
limit_req_zone $bot_limit_key zone=bot_limit:10m rate=1r/s;

Apply the limit (inside your server block):

server {
    #...
    # Only requests with a non-empty key (i.e., ZoominfoBot) are counted;
    # regular visitors pass through unthrottled. burst=5 absorbs short
    # spikes; nodelay rejects anything beyond the burst with a 503.
    limit_req zone=bot_limit burst=5 nodelay;
    #...
}
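
You can confirm the throttle behaves as intended by firing a quick burst of spoofed requests. With rate=1r/s and burst=5 nodelay, the first few succeed and the excess are rejected with 503. A minimal sketch, with example.com standing in for your own domain:

# Send 10 rapid requests and print only the HTTP status of each
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code}\n" -A "ZoominfoBot" https://example.com/
done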

By layering these controls, you can stop unwanted traffic efficiently, keeping your server resources free for the users who matter most.