In my previous post, I promised to consider the topic of Ethernet Fabrics “versus” OpenFlow. See, some folks have claimed that OpenFlow and Ethernet Fabrics are both intended to solve the same problem. Well, one might also say that these distinct categories of fork are intended to solve the same problem (they both convey food):
One category of futuristic-über-champion-from-another-planet envisions OpenFlow/SDN as fully displacing all network-based intelligence. From that perspective, OpenFlow competes with every existing network-based protocol. Sure, and maybe someday I might get you to buy an entirely different kind of bridge from me.
The notion that there might be some competition arises in part because OpenFlow has such large (and fuzzy) scope. How large depends on whom you ask. I see OpenFlow as a key enabler to solve many, but not all, interesting problems, and as such I’m something of a fan. Quite a few champions see OpenFlow as ideal for solving every current networking pain point. Spanning Tree is a pain point, thus OpenFlow (some claim) is the answer, even though (in contrast to tried-and-true TRILL-based fabrics) no one has deployed a compelling demonstration of such a solution.
Such speculations can be entertaining, but here on present-day Earth it’s generally more productive to solve problems that haven’t already been solved. There are quite a few of those around, particularly in hyperscale virtualized data centers with workloads in the millions and tenant counts nearing six figures. Ultimate profitability will come from automation, super-high utilization rates for capital equipment and vast tracts of data center per administrator. Because these attributes are strategic differentiators, the operators can (and must) invest in development to solve the myriad problems they are tackling daily. They’re not very interested in proprietary solutions that will lock them into a specific hardware vendor for years; nor are they going to wait patiently for traditional, plodding standards bodies to address (piecemeal) each problem with a distributed network-based solution, wondering all the while whether the resulting disparate solutions will play nicely together.
These and other bleeding-edge network operators are willing to take a few risks. They want to work with forward-looking vendors willing to work within a well-tuned, flexible, highly automated and standards-based architecture that will enable the customer to realistically consider competitive bids over time. Oh yeah, and they need to transition from their current architecture to this new architecture in smooth-and-steady increments with minimal (zero would be nice) disruption, since it’s remarkably tricky to convince a few thousand customers that occasional brief scheduled downtime is acceptable. Yep, this is the Wild West, alright. This is the part of Earth where OpenFlow is needed, where investment is worthwhile and risk can be justified.
Meanwhile, even among the SDN faithful, there are folks who recognize that some intelligence belongs in switches. Rapid and robust failover handling of multipathing is frequently cited as an example. So why would OpenFlow developers attempt to re-invent Ethernet Fabrics, only to leave most of the intelligence in the switch? Mmm. What’s the value-add there? Yeah, not great. Especially since there are lots of exciting unsolved problems to address instead.
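To make the "intelligence belongs in the switch" point concrete, here's a toy Python sketch (not any vendor's implementation) of the kind of hash-based equal-cost multipath selection a fabric switch performs locally. The point of the sketch: when a link dies, the switch simply prunes it from its live set and re-hashes, with no round trip to a central controller. The names and the flow 5-tuple are illustrative assumptions.

```python
import hashlib

def select_next_hop(flow_key: tuple, next_hops: list) -> str:
    """Pick a next hop for a flow by hashing its 5-tuple over the
    set of currently live equal-cost paths (ECMP-style selection)."""
    if not next_hops:
        raise RuntimeError("no live next hops")
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# A switch's local view: several equal-cost uplinks toward a destination.
live_paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.0.0.1", "10.0.1.9", 6, 49152, 443)  # src, dst, proto, sport, dport

primary = select_next_hop(flow, live_paths)

# Simulated link failure: prune the dead path locally and re-hash.
# Failover is a purely local decision -- no controller round trip.
live_paths.remove(primary)
backup = select_next_hop(flow, live_paths)
assert backup != primary
```

Per-flow hashing keeps packets of one flow on one path (avoiding reordering) while spreading flows across all live links, which is exactly the kind of fast, local behavior that's awkward to outsource to a remote controller.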
So, am I saying that OpenFlow and Ethernet Fabrics have nothing to do with each other, and never the “tine” shall meet? Hardly. As I just pointed out, OpenFlow is for solving new problems. Standardized Ethernet Fabrics are already meeting the large-scale, high-bandwidth, topology-independent, equal-cost-multipath, East-West-plus-North-South need. That need exists in many places on Earth, from large-scale enterprise data centers to the hyperscale DCs described above. In parallel, key players in the hyperscale architecture game are deploying hypervisor-based encapsulation (under control of pre-standard OpenFlow) to solve several problems. These encapsulated “overlay networks” still need a rocket-fast, isotropic transport to connect the boxes. That’s where Ethernet Fabrics come into the picture. Meanwhile, your run-of-the-mill large enterprise data center can deploy Ethernet Fabrics without the hypervisor-based overlay.
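For readers who haven't peeked inside one of these overlays: the hypervisor wraps each tenant frame in an outer header carrying a tenant ID, and the fabric underneath just forwards the outer packet. Here's a minimal Python sketch of VXLAN-style encapsulation (per RFC 7348's header layout; a real datapath would wrap this further in UDP/IP/Ethernet, which I omit). The frame bytes are made-up stand-ins.

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the 24-bit tenant VNI."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit tenant identifier"
    # Header: flags (1B), reserved (3B), VNI (3B), reserved (1B).
    header = struct.pack("!BBHI", VXLAN_FLAGS, 0, 0, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple:
    """Split a VXLAN packet back into (vni, inner_frame)."""
    flags, _, _, vni_field = struct.unpack("!BBHI", packet[:8])
    assert flags & VXLAN_FLAGS, "VNI flag not set"
    return vni_field >> 8, packet[8:]

inner = b"\x00\x11\x22\x33\x44\x55" * 2 + b"payload"  # stand-in Ethernet frame
packet = vxlan_encap(vni=5001, inner_frame=inner)
vni, frame = vxlan_decap(packet)
assert (vni, frame) == (5001, inner)
```

The 24-bit VNI is what buys hyperscalers their tenant counts: roughly 16 million isolated segments versus the 4094 VLANs of plain Ethernet. And note what the outer transport has to be good at: nothing tenant-aware, just moving encapsulated packets between hypervisors fast and in any direction, which is precisely the Ethernet Fabric job description.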
In summary, I meekly submit that there really is no controversy, and the technologies don’t compete (except in the way that markedly different forks compete). Instead, there are places for each, places for both, and places for neither. As an illustration, I’ve built the following complex (not-so-much) and exhaustive (even less) table of use cases for each category:
Again, this is based on today’s challenges and efforts. No doubt OpenFlow vendors will expand the set of problems they address. They’ll focus where there’s ROI, which is highest for as-yet-unsolved problems. So chances are that I’ll have OpenFlow in my home office soon.
EthernetFabric.com blogger Lisa Caywood also contributed to this post.