Over the past few years, I’ve led the migration of many externally exposed enterprise applications from F5 BIG-IP appliances to Cloudflare’s suite of products at a Global 500 company. Each migration has refined my understanding of what works, what doesn’t, and why the timing for this shift has never been more urgent.
This series will walk through the practical, step-by-step process of migrating a publicly accessible enterprise web application from F5 to Cloudflare. But before we get into configurations and DNS records, we need to answer two fundamental questions: Why am I writing this now? And why migrate at all?
Fair warning: This got long. Real long. Expand the table of contents below if you want to skip my opinions about vendor security and why I chose to move from F5 to Cloudflare, and get straight to the migration process:
Table of Contents
Why I’m Writing This Now
The F5 Breach and the Dwell Time Problem
On October 15, 2025, F5 announced a security incident that should concern anyone running critical infrastructure on their platforms. The attackers had maintained access for approximately two years before detection. Two years. For a vendor whose products sit at the security perimeter of thousands of enterprises, that dwell time is staggering.
Compare that to how Cloudflare handled similar incidents. On October 18, 2023, Cloudflare discovered that Okta (their internal employee identity provider) had been compromised before Okta itself did, and rotated the keys that may have been exposed. When the attacker later used non-rotated credentials to establish persistence in Cloudflare’s Atlassian environment over Thanksgiving 2023, Cloudflare detected and removed the access quickly. Their dwell time was measured in days, not years. The difference isn’t luck. It’s a fundamental difference in security culture, tooling, and operational maturity.
I’m not writing this to pile on F5, but those of us who make decisions about products, services, and software need to continually evaluate those solutions and replace them with more secure alternatives when they fail to keep pace with increasingly sophisticated attackers. When your WAF vendor becomes a weak point in your security posture, it’s time to reassess. I led the selection of Cloudflare for externally facing workloads for exactly this reason, while continuing to use F5 (and similar cloud provider products) for simpler internal load balancing.
The Multi-Cloud Reality
Recent major outages, including the AWS us-east-1 DynamoDB incident on October 21 and the global Azure Front Door outage on October 29, sparked renewed interest in running applications across multiple cloud providers. However, I want to state clearly that for most cases, multi-cloud is the worst practice. While some super mission-critical workloads might justify this approach, splitting individual applications across multiple providers rarely makes sense.
That said, I believe most organizations are multi-cloud or hybrid cloud by default, whether due to acquisitions, lack of strategy, or legacy systems that can’t migrate to the cloud. As my employer’s footprint expanded to support multiple cloud providers and an on-premises data center in each region, managing separate F5 BIG-IP deployments in each environment became unsustainable.
Cloudflare’s cloud-delivered architecture (which I’ll detail in the next section) solved this by providing consistent security and performance across all locations without multiplying our management overhead. While Cloudflare could help you load balance and fail over between different providers and on-premises, I wouldn’t recommend that for many reasons. If you have that kind of availability need, multi-region is where you should look first.
The Enterprise Knowledge Gap
Most Cloudflare content is written for either SMBs or companies building their own products and managing their own infrastructure end-to-end. That’s very different from typical enterprise use cases where you’re hosting purchased software, coordinating with application teams who may not understand networking, and working within change management processes that assume traditional architectures.
After fielding many recruiter messages specifically about my Cloudflare experience, I realized there’s genuine demand from enterprises looking to make this transition. The knowledge exists, but it’s scattered across migration guides on developers.cloudflare.com, community forums, and tribal knowledge. This series aims to bring some clarity to a process you can use to migrate enterprise applications, with all the organizational complexity that entails.
Why Migrate from F5 to Cloudflare
The F5 security incident may be what prompted you to look into this, but it wasn’t the root cause of our decision to migrate. That decision was driven by three deeper principles.
Legacy Network Vendors Are Becoming Security Liabilities
F5 and similar vendors were born in an era when network security meant hardware appliances in your data center inspecting packets as they passed through. They were genuine innovators in their time. But the world has changed faster than they have.
These vendors have increasingly shifted from building their own technology to assembling it from commodity parts. They source network processors from Cavium, Marvell, Broadcom, Intel, or MediaTek. They outsource design to contract studios and manufacturing to ODMs. They run decade-old versions of Linux and other open source software, often mixing in licensed software from companies like IP Infusion. This lets them ship “new features” faster on paper, but it has transformed them from technology companies into marketing companies that happen to sell network security equipment.
When a security vendor’s own product becomes a security risk, it’s a sign they’ve lost the plot. The F5 breach isn’t an isolated incident. It’s a symptom of an industry-wide shift away from deep technical ownership toward higher margins on integrated commodity components.
Buy from Builders
This realization led me to a principle I now apply when evaluating any technology purchase: Buy from Builders. When you choose to buy a piece of your technology stack, buy it from the people who are actually building that technology, not assembling it from someone else’s components.
Cloudflare exemplifies this principle, building (and rebuilding) more and more of their stack themselves. Rather than licensing a third-party WAF engine, they built their own in Rust after outgrowing their Lua-based implementation. Rather than buying DDoS mitigation technology, they developed their own detection and mitigation systems deeply integrated with their global network. When they needed better performance, they worked directly with hardware vendors on custom server designs rather than buying off-the-shelf appliances.
This matters because when you buy from builders, you’re buying from people who deeply understand the technology, can fix problems at the root cause, and have the capability to evolve the platform as threats change. When you buy from assemblers, you’re buying from people who are ultimately constrained by what their suppliers will provide.
The Architectural Shift: Appliances vs. Cloud-Delivered Security
The migration from F5 to Cloudflare represents a fundamental change in how security and delivery capabilities are deployed and managed.
F5’s appliance model (whether physical or virtual) requires dedicated infrastructure in each location where you need protection. You configure each appliance individually, manage its lifecycle, and scale by adding more boxes. This made sense when applications lived in a single data center, but it creates exponential operational complexity in today’s distributed environments.
Cloudflare’s cloud-delivered model inverts this approach. Capabilities run on Cloudflare’s global network of data centers, not in your infrastructure. You configure policies once, centrally, and they propagate globally in seconds. Scaling means adjusting settings, not deploying hardware. This architectural difference drives many of the operational and cost benefits we’ll discuss throughout this series.
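To make that concrete, here is a minimal sketch of what “configure once, propagate globally” looks like in practice: pushing a single WAF custom rule to a zone through Cloudflare’s Rulesets API. The zone ID, API token, and the rule expression itself are illustrative placeholders, and the endpoint shape should be confirmed against the current API reference before you rely on it.

```typescript
// Sketch: deploy one WAF custom rule centrally; Cloudflare distributes it to every edge location.
// Assumes a Node 18+ runtime (global fetch) and an API token with zone-level WAF edit permissions.
const ZONE_ID = process.env.CF_ZONE_ID!;     // placeholder
const API_TOKEN = process.env.CF_API_TOKEN!; // placeholder

async function deployWafRule(): Promise<void> {
  // The http_request_firewall_custom phase entrypoint holds a zone's custom WAF rules.
  // NOTE: a PUT to the entrypoint replaces the phase's rule list; for an existing zone you
  // would fetch and merge first rather than overwrite.
  const url = `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/rulesets/phases/http_request_firewall_custom/entrypoint`;

  const res = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      rules: [
        {
          description: "Block admin paths from unexpected networks (illustrative)",
          expression: '(http.request.uri.path contains "/admin" and ip.geoip.asnum ne 64496)',
          action: "block",
        },
      ],
    }),
  });

  if (!res.ok) {
    throw new Error(`Cloudflare API error ${res.status}: ${await res.text()}`);
  }
  console.log("Rule saved; no per-appliance rollout required.");
}

deployWafRule().catch(console.error);
```

Compare that with pushing the same change to a fleet of BIG-IP appliances across regions and change windows.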
Operational Complexity at Scale
When I started working with F5 BIG-IP, we only had to worry about one on-premises data center per region. By the time we were looking for the next solution, there were two cloud providers in addition to the on-prem with more on the horizon.
Each F5 location needs its own set of virtual or physical appliances, each requiring licensing, patching, monitoring, and configuration management. Deploying a new setting across many locations means touching many appliances. Ensuring consistency is a constant battle. Adding capacity means buying more hardware or licenses, even if demand has simply shifted from one region to another.
The cloud-delivered model eliminates these concerns. Adding a new location means pointing DNS and configuring a load balancer pool, not deploying appliances. We pay for usage and capabilities, not infrastructure, providing both operational simplicity and cost predictability.
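As a sketch of what “configuring a load balancer pool” means here, the snippet below registers a hypothetical new region’s origins as a pool via the Load Balancing API. The account ID, monitor ID, pool name, and origin addresses are all assumptions for illustration, and the endpoint and field names should be checked against the current API documentation.

```typescript
// Sketch: register a new region's origins as a Cloudflare Load Balancing pool.
// Assumes Node 18+ (global fetch) and an API token with Load Balancing permissions.
const ACCOUNT_ID = process.env.CF_ACCOUNT_ID!; // placeholder
const API_TOKEN = process.env.CF_API_TOKEN!;   // placeholder

async function createRegionPool(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/load_balancers/pools`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name: "app-eu-west",                 // hypothetical pool name
        monitor: process.env.CF_MONITOR_ID,  // existing health monitor to attach (placeholder)
        origins: [
          { name: "eu-web-1", address: "203.0.113.10", enabled: true, weight: 1 },
          { name: "eu-web-2", address: "203.0.113.11", enabled: true, weight: 1 },
        ],
      }),
    }
  );

  if (!res.ok) {
    throw new Error(`Cloudflare API error ${res.status}: ${await res.text()}`);
  }
  console.log("Pool created; attach it to a load balancer and steering policy, no new appliances.");
}

createRegionPool().catch(console.error);
```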
High-Level Architecture and Capability Mapping
Our typical enterprise web application on F5 looks like this:
- GTM (now called DNS) for global load balancing, directing users to the nearest regional deployment
- LTM for local load balancing across at least two servers in each location
- ASM for WAF and API protection
- Three listening services:
  - Port 80 for HTTP to HTTPS redirect
  - Port 443 for front-end application traffic over HTTPS
  - Port 8443 for API traffic over HTTPS
The fundamental difference is deployment model: F5 runs as appliances in your infrastructure, while Cloudflare runs globally on their network. But functionally, the capabilities map cleanly:
| F5 Component | Cloudflare Equivalent | Key Capabilities |
|---|---|---|
| GTM (DNS) | Cloudflare DNS + Anycast + Load Balancing | Geographic steering, intelligent failover, health-based routing |
| LTM | Cloudflare Load Balancing | Origin pools, health checks, session affinity, weighted routing |
| ASM | Cloudflare WAF + API Gateway | Managed Rules, OWASP CRS, custom rules, JWT validation, schema enforcement |
| iRules | Cloudflare Rules, Transform Rules, Snippets, Workers | HTTP header manipulation, request/response transformation, custom logic |
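The iRules row is where application teams usually have the most questions, so here is a minimal sketch of a Cloudflare Worker doing the kind of header manipulation a simple iRule often handles. The header names and values are hypothetical; in practice, many of these cases are better served by Transform Rules or Snippets, with Workers reserved for logic that genuinely needs code.

```typescript
// Sketch: Worker equivalent of a simple header-manipulation iRule (illustrative headers only).
export default {
  async fetch(request: Request): Promise<Response> {
    // Add headers on the way to the origin, like an iRule HTTP_REQUEST event would.
    const upstream = new Request(request);
    upstream.headers.set("X-Forwarded-Proto", "https");
    upstream.headers.set("X-App-Entry", "cloudflare"); // hypothetical header the origin expects

    const originResponse = await fetch(upstream);

    // Strip identifying headers on the way back, like an iRule HTTP_RESPONSE event would.
    const response = new Response(originResponse.body, originResponse);
    response.headers.delete("Server");
    response.headers.delete("X-Powered-By");
    return response;
  },
};
```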
Overview of the Migration Process
The detailed technical steps will come in later posts, but here’s the high-level sequence I would recommend after gaining experience across many migrations. Depending on the choices you make, you may perform each of these steps for every application migration (especially in the cleanest subdomain-per-app setup), perform some of them only once per zone, or skip steps like testing and tuning entirely for a very simple app that follows standard web semantics.
- Create application inventory with all F5 BIG-IP settings (pools, virtual servers, profiles, iRules)
- Schedule kickoff call with application team to align on timing, testing approach, and rollback plans
- Set up Cloudflare zone and update nameservers (or configure CNAME/custom hostname setup)
- Enable Managed Rules and OWASP rules as baseline WAF protection
- Configure HTTP to HTTPS redirects with ACME exception for certificate validation
- Enable Total TLS for automated certificate management
- Install TLS certificates on origin web servers to enable end-to-end encryption and enforce authenticated origin pulls
- Set up Cloudflare load balancing with health checks, origin pools, and geo steering
- Configure custom hostnames if using the utility domain approach
- Create TXT DNS records for custom hostname pre-validation (see the first sketch after this list)
- Pre-test traffic through Cloudflare with application teams using hosts file or preview URLs
- Validate ACME certificate processes to ensure renewals will work post-migration
- Execute DNS cutover to Cloudflare during approved change window
- Monitor for WAF false positives and application issues during stabilization period
- Manage WAF rule exceptions and fine-tune rules based on observed traffic
- Limit origin traffic to Cloudflare IP addresses only, a high-performance layer 4 protection that complements authenticated origin pulls (see the second sketch below)
- Decommission F5 BIG-IP virtual IPs once the application is stable on Cloudflare
- Final decommission of F5 BIG-IP: move gateways from F5 BIG-IP to a remaining firewall
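A couple of the steps above benefit from concrete sketches. First, for custom hostname pre-validation, this is roughly what requesting a custom hostname with TXT-based validation looks like via the API; the response should include the TXT records the application team’s DNS admin needs to create before cutover. The hostname, zone ID, and exact response fields are illustrative, so confirm them against the Cloudflare for SaaS documentation.

```typescript
// Sketch: request a custom hostname with TXT validation so it can be verified before DNS cutover.
// Assumes Node 18+ (global fetch); ZONE_ID here is the "utility" zone that holds custom hostnames.
const ZONE_ID = process.env.CF_ZONE_ID!;     // placeholder
const API_TOKEN = process.env.CF_API_TOKEN!; // placeholder

async function requestCustomHostname(hostname: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/custom_hostnames`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        hostname,                           // e.g. "app.example.com" (illustrative)
        ssl: { method: "txt", type: "dv" }, // TXT-based certificate validation
      }),
    }
  );

  const body = await res.json();
  if (!res.ok) {
    throw new Error(`Cloudflare API error ${res.status}: ${JSON.stringify(body)}`);
  }

  // The result is expected to include hostname ownership and certificate validation TXT records
  // that the application team's DNS admin must create in the still-authoritative DNS zone.
  console.log(JSON.stringify(body.result, null, 2));
}

requestCustomHostname("app.example.com").catch(console.error);
```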
This process typically takes 2-4 weeks from kickoff to decommissioning F5, depending on application complexity, testing requirements, and availability of the application team. The actual cutover is usually measured in minutes.
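Second, for locking the origin down to Cloudflare’s published IP ranges before decommissioning the F5 VIPs, a small sketch like the one below can pull the current ranges so they can feed whatever firewall or security-group automation you already have. The /ips endpoint is public; the response shape shown here is an assumption to verify against the API docs.

```typescript
// Sketch: fetch Cloudflare's published IP ranges for origin firewall / security group rules.
// Assumes Node 18+ (global fetch). No authentication is needed for this endpoint.
async function fetchCloudflareRanges(): Promise<string[]> {
  const res = await fetch("https://api.cloudflare.com/client/v4/ips");
  if (!res.ok) {
    throw new Error(`Failed to fetch Cloudflare IP ranges: ${res.status}`);
  }
  const body = await res.json();
  // The response is expected to contain ipv4_cidrs and ipv6_cidrs arrays under result.
  const { ipv4_cidrs = [], ipv6_cidrs = [] } = body.result ?? {};
  return [...ipv4_cidrs, ...ipv6_cidrs];
}

fetchCloudflareRanges()
  .then((ranges) => ranges.forEach((cidr) => console.log(cidr))) // pipe into firewall automation
  .catch(console.error);
```

These ranges change occasionally, so running this on a schedule (and keeping authenticated origin pulls in place) is safer than a one-time allowlist.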
I am specifically not covering some Cloudflare capabilities that I have gained experience with, some for the sake of brevity and some because I only have access to these functions on an employer’s account that I do not want to share on this blog. Some of those are:
- Using Spectrum to handle applications that use more than just standard web protocols and ports
- Chaining Spectrum and Load Balancing together
- Using Magic Transit and Magic Firewall to shield on-prem data centers at L3 and L4
- Using Access to add authentication to an application (à la APM)
- Using Cloudflare Tunnels (cloudflared) or Magic WAN for connectivity to servers that can’t take direct connections from the Internet
What’s Next in This Series
This post established the why. The remaining posts will cover the how, in practical detail:
- Zone Onboarding Options: Full setup with subdomains (Enterprise only), Partial with CNAME (Business/Enterprise), or alternate domain with custom hostnames (available on any paid plan). We’ll focus primarily on custom hostnames as the most approachable.
- SSL/TLS at the Origin: Working with application teams to configure web servers for TLS, setting up NAT rules to enable direct origin connectivity for testing, and managing certificate deployment.
- Load Balancing: Configuring origin pools, health checks, session affinity, and geo steering/failover to replicate and improve upon GTM+LTM functionality.
- Edge Routing: Setting up HTTP to HTTPS redirects, and header manipulation or transform rules to meet application requirements.
- WAF Foundations: Enabling Managed Rules and OWASP rules, creating exceptions, and establishing a sustainable WAF management process.
- Pre-Production Testing: Using TXT record pre-validation for custom hostnames and SSL certificates to test traffic before DNS cutover, minimizing risk and enabling confident go-live decisions.
- Go-Live Playbook: The actual cutover process, post-cutover validation checklist, monitoring priorities, and rollback procedures.
Each post will be grounded in real-world experience, with specific configurations, gotchas I’ve encountered, and practical tips that aren’t in the official documentation.
The migration from F5 to Cloudflare represents more than a technology swap. It’s a shift in architectural thinking from appliances you manage to capabilities you consume, from scaling infrastructure to scaling policies.
In the next post, we’ll dive into zone onboarding options and help you choose the right approach for your organization’s needs and constraints.