By Kurt Marko, Contributor

Server virtualization epitomizes the benefits of abstracting logical IT resources from their physical manifestation. Hypervisors like VMware vSphere or Microsoft Hyper-V create immense efficiencies and flexibility in data centers by allowing previously underutilized equipment to run multiple applications in software-isolated environments. Networks are the next piece of data center infrastructure to get the virtualization treatment, and the ensuing products provide comparable improvements in efficiency, versatility and productivity.

Software-defined networks (SDNs) originated as academic research projects seeking to replace proprietary, expensive switches with commodity servers for network control; barebones, no-name, and hence cheap, switches and routers; and distributed network control software tasked with managing it all. These were modest goals: save a lot of money by substituting expensive name-brand network equipment with off-the-shelf hardware managed by clever control software. Over time, particularly once incumbent network equipment vendors detected a threat to their business, SDN visions have grown broader and more holistic. The landmark identifying SDN’s Peak of Inflated Expectations may well be Cisco CEO John Chambers unveiling the company’s grand plan, known as ACI, to transform IT for the coming age of über-connected devices, fully automated networks and eminently programmable interfaces. But as vendor SDN strategies get loftier and more abstract (just what does marketing copy like “holistic architecture with centralized automation and policy-driven application profiles” mean to the typical enterprise?), IT practitioners, tasked with actually keeping the lights on while coping with dramatic growth and diversity in network traffic, have grown leery of making wholesale design and equipment sacrifices on the altar of SDN.
An InformationWeek SDN survey found that only 35% of respondents were very or completely willing to make significant network changes for SDN implementations. Indeed, one network consultant thinks that Cisco’s ACI strategy raises more questions than it answers.

As SDN evolves, its focus has shifted from the physical to the virtual world, where network virtualization provides the same level of software programmability and mutability without the physical disruption. Understanding network virtualization is key to appreciating the burgeoning move to software-defined, cloud data centers. As I wrote in an earlier column on SDN controllers, “most enterprises are wary of such ‘big bang’ software, finding it better and less disruptive to incrementally add features to existing infrastructure. This is where the nexus of network virtualization leveraging cloud software like OpenStack, VMware vSphere or CloudStack and network overlays to existing Ethernet data center networks comes in.”

But virtualized networks aren’t new; in fact, they are required to operate multiple guest OSs on a physical server that might have only a couple of network ports. Yet these hypervisor-resident virtual switches and NICs, whether using Open vSwitch (OVS), VMware’s standard (vSS) or distributed (vDS) switches, are rather dumb devices, really more bridges than switches, and completely isolated from and ignorant of the underlying physical network topology they operate on. Network virtualization, using overlay software, tunneling protocols like VXLAN or NVGRE that enclose virtual traffic in a conventional TCP/IP wrapper, and software plug-ins to hypervisor or cloud stack network interfaces like OpenStack Neutron, changes all that. Much like the hypervisor itself, network virtualization puts an abstraction layer between hardware and applications, creating a logical network fabric and virtual network services on top of the physical data center interconnect of routers and switches.
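The mechanics of an overlay are simple at heart: the virtual switch wraps each guest Ethernet frame in a header carrying a virtual network identifier (VNI) and ships it across the physical network as ordinary UDP/IP traffic. The sketch below, a simplified illustration rather than a production implementation, packs and unpacks the 8-byte VXLAN header (NVGRE’s framing differs, but the idea is the same):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    The result would then ride inside an ordinary outer
    UDP/IP/Ethernet packet across the physical underlay.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    # First byte 0x08 flags the VNI as valid; reserved fields stay zero.
    flags_and_reserved = 0x08 << 24
    vni_and_reserved = vni << 8
    return struct.pack("!II", flags_and_reserved, vni_and_reserved) + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    _flags, vni_and_reserved = struct.unpack("!II", packet[:8])
    return vni_and_reserved >> 8, packet[8:]
```

A real VXLAN tunnel endpoint would additionally build the outer Ethernet/IP/UDP headers and learn which endpoint sits behind each inner MAC address; the header layout here follows the published VXLAN specification (RFC 7348).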
And as we’ve learned with server virtualization, decoupling the physical and logical layers of infrastructure has several advantages. For networks, perhaps none is more important than the fact that network virtualization need not require changing the underlying hardware at all. As Martin Casado, OpenFlow co-inventor and current chief networking architect for VMware, pointed out during the unveiling of VMware’s NSX overlay technology last summer, a standard engineering strategy for solving complex problems entails decoupling logically independent elements into subsystems that can develop and evolve independently. He compared virtualized network overlays like NSX to modular routers and switches, where a common physical backplane handles the traffic and line cards transform different network interfaces to deliver services. As quoted in this column, Casado explains the physical/virtual relationship: “The overlay provides applications with network services and a virtual operations and management interface. The physical network is responsible for providing efficient transport.”

Network virtualization makes it much easier to isolate and segment virtual networks than the traditional method of using a patchwork of VLANs. This makes it a prerequisite for multi-tenant environments, whether at a service provider or an enterprise private cloud, but it also improves the efficiency of overworked network staff in SMBs. Virtual networks also offer the flexibility of a complete set of logically independent network services and policies, including address translation (NAT), address assignment (DHCP), load balancing and application distribution, and security gateways like firewalls, IPS and VPN appliances.
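To see why this beats a patchwork of VLANs for multi-tenancy, note that an overlay gives every tenant its own segment ID and its own set of services, so tenants can even reuse identical IP address ranges on the same physical fabric. A minimal illustration of the idea (the class and field names are hypothetical, not any vendor’s API):

```python
from dataclasses import dataclass, field

@dataclass
class LogicalNetwork:
    """One tenant's virtual network, keyed by an overlay segment ID."""
    tenant: str
    vni: int       # overlay segment (e.g. a VXLAN VNI), not a 12-bit VLAN tag
    subnet: str    # tenants may reuse the same address space
    services: dict = field(default_factory=dict)  # per-tenant NAT, DHCP, LB, firewall

networks = [
    LogicalNetwork("tenant-a", vni=5001, subnet="10.0.0.0/24",
                   services={"nat": True, "dhcp": True}),
    # The same 10.0.0.0/24 can be reused because the segment ID,
    # not the address range, isolates traffic on the shared fabric.
    LogicalNetwork("tenant-b", vni=5002, subnet="10.0.0.0/24",
                   services={"dhcp": True, "lb": "round-robin"}),
]
assert len({n.vni for n in networks}) == len(networks)  # segment IDs must be unique
```

Each tenant’s service set (NAT, DHCP, load balancing, security gateways) lives in its own logical network, which is the property that makes overlays a natural fit for service providers and private clouds alike.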
Furthermore, some vendors like Nuage (Alcatel-Lucent), PLUMgrid and VMware are building app stores for virtual network software, creating an ecosystem for third parties like Palo Alto Networks (next-gen firewalls), Citrix (application delivery controllers), Silver Peak and F5 (ADCs and WAN optimization) and McAfee/Intel Security, Symantec and TrendMicro (virtual endpoint security).

But all is not rosy in the land of virtual overlays, since unlike OpenFlow and other physical-layer protocols, there are few standards and a myriad of incompatible products. The list is headlined by offerings from network titans like Alcatel-Lucent, Avaya, Cisco, Juniper and VMware, but includes a host of network software specialists like Embrane, Midokura, Plexxi, PLUMgrid, Tail-f and Vello Systems. One, Jeda Networks, has even applied network virtualization to storage in the form of a virtual SAN fabric and controller. While they all provide visibility into virtual interfaces and resources, isolate traffic and generally work with most physical network management platforms, they do so in different ways, have their own security and policy frameworks and offer varying degrees of integration with legacy hardware.

In sum, network virtualization is a compelling concept with a vibrant product market, but one sufficiently chaotic and unstandardized that early adopters risk deployment complexities with existing network management software, the prospect of vendor churn as smaller players are acquired or eliminated, and frequent software updates and feature changes. The OpenDaylight project, with support from most major IT vendors, offers hope for interoperable software and interfaces, and its first software release shows that the project is more than just a debating society. I expect to learn more next week at Interop and will share insights here as I cover the show. In the meantime, for organizations already heavily invested in VMware and the vCloud management stack, NSX is a no-brainer.
Likewise, OpenStack shops should investigate overlay products like IBM’s SDN for Virtual Environments (which is based on OpenDaylight), Midokura, Pluribus Networks or PLUMgrid that include plugins for the Neutron network service and work well with commercial OpenStack distributions like Mirantis or Piston. Meanwhile, the majority of enterprises, those without private cloud deployments and where Cisco is the dominant network vendor, will want to watch and wait as the market matures and ACI develops.

Disclosure: I currently do consulting and analysis work for ONUG, a group of large enterprise network equipment, software and service buyers with sizable IT infrastructure that hopes to better align the needs of network software and equipment customers with the technology and product directions of network systems suppliers. As such, ONUG is heavily invested in the future of network virtualization products and standards, and these topics will be front and center at its spring meeting.