Cloudflare’s 1.1.1.1 DNS Suffers Global Outage Due to Internal Misconfiguration
On July 14, 2025, one of the world’s most widely used public DNS resolvers, Cloudflare’s 1.1.1.1, suffered a global outage, leaving millions of users unable to resolve domain names and, as a result, unable to reach websites and many other internet services. The disruption lasted 62 minutes and was attributed to an internal configuration error linked to preparations for a new regional service deployment.
The incident began at 21:52 UTC, when the IP prefixes serving 1.1.1.1 were unexpectedly withdrawn from global routing tables. The withdrawal traced back to a misconfiguration introduced on June 6 during preparations for the forthcoming Data Localization Suite (DLS): the resolver’s addresses were mistakenly included in the new service’s configuration. The error went unnoticed until July 14, when the configuration was activated as part of an infrastructure update.
As a result, DNS queries sent to 1.1.1.1, 1.0.0.1, and their IPv6 equivalents could no longer reach Cloudflare’s infrastructure. Name resolution failed for anyone depending on the service, effectively cutting those users off from the web. DNS over UDP, TCP, and TLS was affected alike. DNS-over-HTTPS (DoH), however, remained functional for many clients, because DoH clients reach the resolver through the hostname cloudflare-dns.com, which is served from separate IP addresses.
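The difference between the two failure modes can be illustrated with a short probe script. The sketch below is not Cloudflare tooling; it assumes the third-party dnspython package installed with DoH support (for example, pip install "dnspython[doh]") and uses example.com as an arbitrary test name. It sends the same query once over plain UDP to the anycast address and once over DoH to cloudflare-dns.com.

```python
# Minimal sketch: probe the resolver over plain UDP and over DoH.
# Assumes dnspython with DoH support; example.com is an arbitrary test name.
import dns.exception
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")

# Path 1: classic DNS over UDP sent straight to the anycast address.
# During the outage this traffic had nowhere to go once the prefix was withdrawn.
try:
    reply = dns.query.udp(query, "1.1.1.1", timeout=3)
    print("UDP answer:", reply.answer)
except dns.exception.Timeout:
    print("UDP query to 1.1.1.1 timed out")

# Path 2: DoH addressed by hostname; cloudflare-dns.com resolves to IP space
# outside the withdrawn prefixes, which is why many DoH clients kept working.
try:
    reply = dns.query.https(query, "https://cloudflare-dns.com/dns-query", timeout=3)
    print("DoH answer:", reply.answer)
except Exception as exc:
    print("DoH query failed:", exc)
```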
Amid the disruption, a BGP hijack of the 1.1.1.0/24 prefix was observed, originating from the Indian telecom provider Tata Communications. Cloudflare clarified that the hijack was not the root cause of the incident: the rogue announcement appeared only after Cloudflare had ceased announcing the affected route itself.
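Independent observers typically verify such claims against public BGP data. As a rough illustration, the sketch below queries the public RIPEstat data API for the origin AS currently seen for 1.1.1.0/24; the prefix-overview endpoint and its response fields are taken from RIPEstat’s public documentation and should be treated as assumptions here, not as part of Cloudflare’s tooling.

```python
# Sketch: look up the currently observed origin AS for a prefix via RIPEstat.
# Endpoint and field names follow RIPEstat's documented prefix-overview call
# and are assumptions here; only the Python standard library is used.
import json
import urllib.request

PREFIX = "1.1.1.0/24"
URL = "https://stat.ripe.net/data/prefix-overview/data.json?resource=" + PREFIX

with urllib.request.urlopen(URL, timeout=10) as response:
    payload = json.load(response)

# RIPEstat wraps results in a "data" object; the "asns" list (when present)
# names the origin AS(es) currently seen announcing the prefix.
info = payload["data"]
for entry in info.get("asns", []):
    print(f"{PREFIX} originated by AS{entry.get('asn')} ({entry.get('holder', 'unknown')})")
```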
The issue was identified at 22:01 UTC, and engineers began reverting the faulty configuration. Route republication commenced at 22:20 UTC, partially restoring traffic. However, 23% of edge servers had already dropped their bindings for the affected IP addresses and had to be reconfigured manually through Cloudflare’s internal change management system. Full routing restoration was achieved by 22:54 UTC.
The outage was made possible by shortcomings in a legacy topology-mapping system, which required manually maintained associations between IP prefixes and specific data centers and offered no staged rollout or rollback capability. Although the change was manually reviewed, it bypassed the canary deployment mechanism, allowing the misconfiguration to propagate globally without containment.
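Cloudflare has not published the internals of its deployment pipeline, but the containment a staged rollout provides can be sketched generically. The hypothetical example below applies a change to a small canary slice of locations, checks health, and widens the rollout only if the checks pass; every name in it (apply_config, health_check, the location list) is invented for illustration and does not correspond to Cloudflare’s systems.

```python
# Hypothetical illustration only: a generic staged (canary) rollout loop.
# None of these names correspond to Cloudflare's actual tooling.
import random
import time

LOCATIONS = [f"dc-{i:03d}" for i in range(1, 21)]   # pretend data centers
STAGES = [0.05, 0.25, 1.0]                          # 5% canary, 25%, then global

def apply_config(location: str, config: dict) -> None:
    """Stand-in for pushing a routing/topology change to one location."""
    print(f"applying {config['name']} to {location}")

def rollback(locations: list, config: dict) -> None:
    """Stand-in for reverting the change everywhere it was applied."""
    for loc in locations:
        print(f"rolling back {config['name']} on {loc}")

def health_check(location: str) -> bool:
    """Stand-in for real telemetry (query success rate, route visibility, etc.)."""
    return random.random() > 0.01   # simulate an occasional failure

def staged_rollout(config: dict) -> bool:
    applied = []
    for fraction in STAGES:
        target = LOCATIONS[: max(1, int(len(LOCATIONS) * fraction))]
        for loc in target:
            if loc not in applied:
                apply_config(loc, config)
                applied.append(loc)
        time.sleep(0.1)   # soak period before widening the blast radius
        if not all(health_check(loc) for loc in applied):
            rollback(applied, config)
            return False
    return True

if __name__ == "__main__":
    staged_rollout({"name": "example-prefix-topology-change"})
```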
Cloudflare acknowledged the failure and pledged to accelerate the deprecation of legacy systems, implement progressive updates with real-time monitoring, enhance documentation, and improve test coverage. Additionally, synchronized configurations between old and new routing control systems will be disabled.
Despite the incident, the 1.1.1.1 resolver remains a free and widely trusted DNS service. Cloudflare issued a formal apology and affirmed that the corrective measures taken will prevent similar disruptions in the future.