On July 19th Intel announced its intent to acquire Fulcrum Microsystems. Over the last two weeks I have read a lot of commentary on who wins and who loses in this deal, with most of the stares directed at Cisco. I’m more interested in how virtual switching as a whole can gain from this acquisition.
Virtual switching means one thing to most people today: a software-based switch in the hypervisor, such as VMware’s vDS, the Cisco Nexus 1000V, or Open vSwitch. In reality, virtual switching can have a hardware component, or it can be purely hardware-based, built on the concepts of hypervisor bypass and Single Root I/O Virtualization (SR-IOV). Cisco’s Virtual Interface Card (VIC) is a great example of a card designed with hypervisor bypass in mind. The VIC is a converged network adapter (CNA) with SR-IOV-like characteristics (it does not implement the SR-IOV spec) that can present multiple NICs and HBAs to the host. There are many use cases for the VIC, but hypervisor bypass is the one most interesting to me.
With hypervisor bypass and a virtual NIC, like Cisco’s VIC or other SR-IOV NICs, the job of processing packets is handled outside the hypervisor. There are a number of potential benefits over a purely software-based approach, including reduced CPU utilization and less context switching, as well as a number of downsides. While the VIC itself has been fairly successful (it can be used as a standard dual-port 10G CNA), the use of its bypass capability has been hampered by scalability limitations and, in some use cases, by the architectural design of forcing all traffic, regardless of destination, to the UCS Fabric Interconnect for processing. This means that packets between two VMs on the same hypervisor have to leave the server and be processed in the network.
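To make the SR-IOV model a bit more concrete, here is a rough sketch of how virtual functions (VFs) get exposed on a Linux host through the kernel's sysfs interface. This assumes an SR-IOV-capable NIC with driver support, and the interface name eth0 is purely illustrative:

```shell
# Assumption: eth0 is an SR-IOV-capable NIC; paths are the standard
# Linux sysfs attributes for SR-IOV, but require hardware/driver support.

# Ask how many virtual functions (VFs) the device supports:
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create four VFs; each one appears as an independent PCI NIC that can
# be handed straight to a VM (bypassing the hypervisor's soft switch):
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs show up as additional PCI functions:
lspci | grep -i "Virtual Function"
```

Once a VF is passed through to a guest, its packets hit the NIC hardware directly, which is exactly why any VM-to-VM switching intelligence has to live either on the NIC itself or out in the physical network.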
Intel and others have had SR-IOV-enabled NICs for a couple of years now, so why is the Fulcrum acquisition so interesting?
There is nothing preventing an SR-IOV NIC from switching packets locally, but switching a packet and switching a packet intelligently are two different things. Current SR-IOV (or SR-IOV-like) NICs don’t come close to matching the feature set of even VMware’s vDS, and usually just forward packets to a switch to be processed, as in the Cisco VIC model. Intel now has Fulcrum’s switching DNA and the market clout to accelerate the delivery of intelligent NICs: NICs that could provide SR-IOV-like functionality and offer robust switching features local to the server, in either PCIe or LAN-on-motherboard (LOM) form. This could deliver on the concept of hypervisor bypass without the need to pass all traffic into the network for processing. There are smaller players in this space already, but no one is making real headway. Being at the NIC layer would afford Intel the ability to market itself both to server admins, with a VMware vCenter Server plugin similar to the VMware vSwitch or vDS interface, and to network admins, possibly with a more open approach such as OpenFlow.
I’m not throwing my support behind any of the hardware or software virtual switching scenarios just yet. In my opinion it’s too early to tell if all of the major vendors will play nice together. We do need a scalable, flexible solution, and it’s clear that Intel has the tools to offer up a scenario that appears, on the surface, to achieve the best of both worlds.