In late 2025, we defended a gaming client through a 1.2 terabit-per-second DDoS attack. That's the equivalent of 120 fully saturated 10 Gbps connections' worth of junk traffic. The attack ran for hours, mixing UDP amplification and SYN floods.

Client stayed online. Zero downtime. Here's what we learned about defending against that scale of attack, and why most DDoS protection marketing is disconnected from operational reality.

Why Terabit Attacks Are Different

Below 100 Gbps, DDoS mitigation is straightforward. You need scrubbing capacity and decent packet filtering, and a single well-connected location handles it fine.

Cross 500 Gbps and the game changes completely. No single location can absorb that traffic cleanly. Transit providers get nervous. Ports saturate. You need geographic distribution because physics demands it—attack traffic arrives from compromised devices worldwide, not from one region.

During the 1.2 Tbps attack, individual locations simultaneously absorbed peaks of over 300 Gbps. Source IPs were globally distributed: compromised IoT devices, servers, residential routers, the usual botnet composition. This is normal for large-scale attacks.

The Part Nobody Mentions: Where Traffic Actually Gets Absorbed

DDoS protection marketing focuses on scrubbing capacity numbers. "1 Tbps mitigation!" "Unlimited protection!" These numbers ignore a critical question: where does attack traffic actually get handled?

Large-scale attacks don't arrive on a single pipe. A terabit attack arrives distributed globally. Attack traffic originates from compromised devices worldwide—residential routers, IoT cameras, compromised servers spread across continents. Not from a single source you can just block.

The mitigation happens at multiple layers: upstream providers absorbing portions of attack traffic, transit providers filtering known attack signatures, scrubbing centers at the edge handling what reaches them. It's distributed by necessity, not design choice.

This is why geographic distribution matters. It's not about redundancy or "faster" scrubbing—it's about having attack traffic absorbed regionally before it concentrates at a single chokepoint. You can't defend a terabit attack from a single location, regardless of your local scrubbing capacity.

Volumetric vs. Application-Layer Attacks

Volumetric attacks are easy. UDP amplification, DNS reflection, NTP amplification—these are trivial to identify and filter. You see the signature, block it, done. XDP can drop this traffic at line rate with minimal CPU overhead.

Application-layer attacks are harder because they look legitimate. HTTP floods using thousands of residential IPs making valid requests to your API. Each connection looks real. The aggregate volume is designed to exhaust your backend.

Rate limiting doesn't work when attacks use 50,000+ unique IPs. You need behavioral analysis to distinguish bots from users. That's complex and depends heavily on understanding your specific application.
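The arithmetic behind "rate limiting doesn't work" is worth spelling out. This is a toy sketch with illustrative numbers (the per-IP cap and backend capacity are assumptions): each bot stays comfortably under the per-source limit, yet the aggregate still crushes the backend.

```python
# Hypothetical sketch: why per-IP rate limiting fails against a wide
# botnet. Every source stays under the per-IP cap individually, but
# the aggregate request rate exceeds what the backend can serve.
# All numbers are illustrative, not measurements.

PER_IP_LIMIT = 10          # requests/sec allowed per source IP
BACKEND_CAPACITY = 50_000  # requests/sec the backend can serve

def aggregate_attack_rate(unique_ips: int, rate_per_ip: int) -> int:
    """Total request rate when every bot stays at or under the cap."""
    return unique_ips * min(rate_per_ip, PER_IP_LIMIT)

# 50,000 bots each sending 5 req/s -- half the per-IP limit, so no
# individual source ever trips the limiter -- still produce
# 250,000 req/s, five times backend capacity.
rate = aggregate_attack_rate(50_000, 5)
print(rate, rate > BACKEND_CAPACITY)
```

Tightening the per-IP cap doesn't help much: drop it to 1 req/s and the same botnet still delivers 50,000 req/s, right at capacity, while real users start getting throttled.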

Reality: most DDoS mitigation services handle volumetric attacks well. Application-layer protection is hit or miss depending on how much the provider understands your specific use case.
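As a toy illustration of what "behavioral analysis" can mean in practice, here's one simple signal: timing regularity. Humans browse in bursts; naive bots fire requests on a fixed interval. This is a hypothetical single-signal sketch; production systems combine many such signals, and sophisticated bots jitter their timing to defeat exactly this check.

```python
# Hypothetical sketch: one behavioral signal among many. Naive bots
# produce metronome-like inter-request timing; humans are bursty.
from statistics import mean, stdev

def timing_regularity(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-request gaps.

    Near 0.0 means perfectly regular (bot-like); larger values
    mean irregular, bursty timing (more human-like).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    return stdev(gaps) / mean(gaps)

bot = [0.0, 1.0, 2.0, 3.0, 4.0]      # one request per second, exactly
human = [0.0, 0.4, 3.1, 3.5, 9.2]    # bursts with long pauses

print(timing_regularity(bot))    # 0.0
print(timing_regularity(human))  # well above zero
```

The reason this kind of analysis "depends heavily on understanding your specific application" is that the thresholds are workload-specific: a game client polling an API every second looks exactly like the bot above.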

The Operational Reality Nobody Mentions

When you're under attack, you want real-time visibility. Attack vectors, traffic volume, what's being filtered, mitigation strategy—all the data that lets you make informed decisions.

Reality: most providers, including us, send you a notification that an attack was detected and is being mitigated. You get graphs later—maybe the next day, maybe a few hours after the fact. During active attacks, you're mostly waiting and hoping the mitigation works.

This isn't because providers are lazy. Building real-time attack telemetry that's actually useful is hard. Attack traffic analysis happens at the edge across multiple locations. Aggregating that data, making it meaningful, and presenting it without overwhelming users is non-trivial engineering.

So the realistic expectation is: you'll get notified when attacks start and end. You'll get summary data afterwards. Real-time dashboards exist but are usually enterprise-tier add-ons, not standard features.

The "Unlimited" Protection Myth

Every DDoS service has limits. Providers claiming "unlimited" fall into one of three categories:

  • Haven't been tested with large enough attacks yet
  • Will null-route you when you hit their hidden threshold
  • Have fine print that qualifies "unlimited" into meaninglessness

Infrastructure has costs. If someone offers unlimited terabit-scale protection for $50/month, they're either subsidizing massive targets with revenue from small customers (unsustainable), or they're going to drop you when you actually need it (dishonest).

Read the Terms of Service. Seriously. "Unlimited" gets qualified. "Best effort." "Reasonable use." "Network protection policies." These are the clauses that let them null-route you during attacks.

What This Means for Infrastructure Operators

If you're evaluating DDoS protection, here's what matters:

  • Proven mitigation history: Ask what attacks they've actually defended against, not just capacity specs.
  • Geographic distribution: Matters for large-scale attacks. Single location = single point of saturation.
  • Limits and policies: What happens when you exceed capacity? Read the Terms of Service for real answers.
  • Your relative size: If you'd be their largest customer by traffic volume, you're at risk. When capacity tightens, big targets get dropped first.

The uncomfortable reality: when DDoS providers face attacks that threaten network stability, they'll drop the affected customer to protect everyone else. This is rational infrastructure management but sucks if you're the one getting null-routed at 3 AM.

There's no perfect solution. Just understand the tradeoffs and plan accordingly.

If You're a Target

Game servers for competitive titles, crypto infrastructure, controversial content—you attract motivated attackers. Budget DDoS protection designed for typical web hosting won't cut it when someone with a rented botnet decides you're worth targeting.

Plan before you need it. Understand your provider's actual capabilities and limits. Have backup plans. Know what happens if protection fails.

Getting dropped mid-attack because you hit your provider's undisclosed limit is worse than planning for adequate protection upfront. Ask hard questions before signing up, not after you're offline.

— Eric B

President, Sucura Networks

AS398999
