Fibre Channel over Ethernet (FCoE) is a protocol standard ratified in June 2009. FCoE provides the tools for encapsulating Fibre Channel (FC) in 10 Gigabit Ethernet (10GE) frames. The purpose of FCoE is to allow consolidation of low-latency, high-performance FC networks onto 10GE infrastructure. This allows for a single network/cable infrastructure, which greatly reduces switch and cable count and lowers the power, cooling, and administrative requirements for server I/O.
FCoE is designed to be fully interoperable with current FC networks and to require little to no additional training for storage and IP administrators. FCoE operates by encapsulating native FC frames in Ethernet frames. Native FC is considered a ‘lossless’ protocol, meaning frames are not dropped during periods of congestion; this is by design, in order to ensure the behavior expected by the SCSI payloads. Traditional Ethernet does not provide the tools for lossless delivery on shared networks, so enhancements were defined by the IEEE to provide appropriate transport of encapsulated Fibre Channel on Ethernet networks. These standards are known as Data Center Bridging (DCB), which I’ve discussed in a previous post (http://www.definethecloud.net/?p=31). These Ethernet enhancements are fully backward compatible with traditional Ethernet devices, meaning DCB-capable devices can exchange standard Ethernet frames seamlessly with legacy devices. The full 2148-byte FC frame is encapsulated in an Ethernet jumbo frame, avoiding any modification or fragmentation of the FC frame.
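For readers who like to see the framing, here is a minimal Python sketch of that encapsulation. The 0x8906 EtherType and the 0E-FC-00 MAC prefix are real FCoE values; the helper function name, the simplified 14-byte header / 4-byte trailer layout, and the SOF/EOF codes shown are illustrative assumptions rather than a spec-accurate FC-BB-5 implementation.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes,
                         fc_frame: bytes, sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a complete FC frame (headers + payload + CRC) in an Ethernet frame.

    Simplified layout: Ethernet header, 14-byte FCoE header carrying the
    version and start-of-frame (SOF) delimiter, the untouched FC frame,
    then an end-of-frame (EOF) delimiter plus reserved padding. The exact
    SOF/EOF encodings in FC-BB-5 are more involved; this is illustrative.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])   # version/reserved bits + SOF
    fcoe_trailer = bytes([eof]) + bytes(3)   # EOF + reserved padding
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A maximum-size FC frame (2148 bytes) still fits in an Ethernet jumbo frame,
# so the FC frame is never modified or fragmented on its way through.
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                             b"\x0e\xfc\x00\x00\x00\x02",
                             fc_frame=bytes(2148))
print(len(frame))  # 2148 + 14 (Ethernet) + 14 (FCoE header) + 4 (trailer) = 2180
```

The point of the sketch is simply that the FC frame rides inside the Ethernet frame byte-for-byte, which is why jumbo frames are required.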
FCoE itself takes FC layers 2-4 and maps them onto Ethernet layers 1-2, replacing the FC-0 physical layer and the FC-1 encoding layer. This mapping between Ethernet and Fibre Channel is done through a Logical End-Point (LEP), which can be thought of as a translator between the two protocols. The LEP is responsible for providing the appropriate encoding and physical access for frames traveling from FC nodes to Ethernet nodes and vice versa. Two devices typically act as FCoE LEPs: Fibre Channel Forwarders (FCFs), which are switches capable of both Ethernet and Fibre Channel, and Converged Network Adapters (CNAs), which provide the server-side connection for an FCoE network. Additionally, the LEP operation can be done using a software initiator and a traditional 10GE NIC, but this places extra workload on the server processor rather than offloading it to adapter hardware.
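As a rough mental model of what the LEP preserves and what it replaces, here is an illustrative summary; the layer descriptions are paraphrased, not quoted from the standard.

```python
# Rough sketch of which layer handles what once FC is mapped onto Ethernet.
# FC-0/FC-1 (physical + encoding) are replaced by Ethernet's PHY/MAC;
# FC-2 and above ride across unchanged inside the FCoE frame.
fc_layers = {
    "FC-4": "ULP mapping (e.g. SCSI)     -> unchanged, carried in FCoE",
    "FC-3": "common services             -> unchanged, carried in FCoE",
    "FC-2": "framing and flow control    -> unchanged, carried in FCoE",
    "FC-1": "encoding (8b/10b)           -> replaced by Ethernet encoding/MAC",
    "FC-0": "physical (optics, signaling)-> replaced by the 10GE physical layer",
}

for layer, role in fc_layers.items():
    print(f"{layer}: {role}")
```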
One of the major advantages of replacing FC layers 0-1 when mapping onto 10GE is the encoding overhead. 8 Gb Fibre Channel uses 8b/10b encoding, which adds 25% protocol overhead; 10GE uses 64b/66b encoding, which adds roughly 3% overhead, dramatically reducing the protocol overhead and increasing usable throughput. The second major advantage is that FCoE maintains FC layers 2-4, which allows seamless integration with existing FC devices and preserves the Fibre Channel tool set such as zoning, LUN masking, etc. In order to provide FC login capabilities, multi-hop FCoE networks, and FC zoning enforcement on 10GE networks, FCoE relies on another standard known as the FCoE Initialization Protocol (FIP), which I will discuss in a later post.
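The encoding math is easy to sanity-check: 8b/10b puts 10 bits on the wire for every 8 bits of data, while 64b/66b puts 66 on the wire for every 64. A quick back-of-the-envelope calculation, assuming the nominal 8.5 and 10.3125 Gbaud line rates, shows the difference in usable throughput:

```python
# Back-of-the-envelope comparison of encoding overhead (nominal line rates assumed).
def encoding_overhead(data_bits: int, coded_bits: int) -> float:
    """Extra bits on the wire relative to the data carried."""
    return (coded_bits - data_bits) / data_bits

# 8 Gb FC: 8.5 Gbaud line rate with 8b/10b encoding.
fc_8g_payload = 8.5e9 * (8 / 10)          # ~6.8 Gb/s of data

# 10GE: 10.3125 Gbaud line rate with 64b/66b encoding.
ge_10_payload = 10.3125e9 * (64 / 66)     # 10.0 Gb/s of data

print(f"8b/10b  adds {encoding_overhead(8, 10):.1%} overhead")    # 25.0%
print(f"64b/66b adds {encoding_overhead(64, 66):.1%} overhead")   # ~3.1%
print(f"8G FC usable : {fc_8g_payload / 1e9:.2f} Gb/s")
print(f"10GE usable  : {ge_10_payload / 1e9:.2f} Gb/s")
```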
Overall, FCoE is one protocol to choose from when designing converged networks, or cable-once architectures. The most important thing to remember is that a true cable-once architecture doesn’t make you choose your Upper Layer Protocol (ULP), such as FCoE, only your underlying transport infrastructure. If you choose 10GE, the tools are now in place to layer any protocol of your choice on top, when and if you require it.
Thanks to my colleagues who recently provided a great discussion on protocol overhead and frame encoding…
Joe…Many enterprises do not use FC, so FCoE is really a moot protocol!
@Mike,
Mike, that is a great point and you’re absolutely right: many customers are not using FC in the first place, so why FCoE? FCoE is a fairly logical next step for FC customers who want to consolidate, but why go FCoE if I’m not using FC to begin with?
The first part of this is that Fibre Channel has a cost-based barrier to adoption. Initially that barrier was very real, so shared storage was the domain of larger enterprises that could afford the initial investment in an FC infrastructure in order to gain the administration and TCO benefits of shared storage. Fast forward to current times and the cost is more perception than reality. A small data center can deploy a best-of-breed SAN (switches, storage, HBAs, etc.) for under $30K with the right bundles; I’ve seen/deployed small implementations at $20K. That’s not a big cost barrier.
The second part is that many companies have no major issues with using Direct Attached Storage (DAS) until they decide to virtualize. Once you decide to virtualize, you need shared storage to gain the key features (I’m now talking about VMware specifically): vMotion, HA, FT, and DRS all require shared disk between physical servers. Now that the company needs shared disk, they have three options: FC, iSCSI, or NFS.
NFS can be great, but it can’t do block data; this means no boot from SAN and limitations for some clustering apps, databases, etc. Because of that, most companies look to FC or iSCSI. Say what you will about iSCSI, but it would be hard to argue that the TCP/IP overhead is worth it, that the implementations are straightforward, or that the final architecture is that much cheaper than FC. I’m happy to have that discussion with anyone, but before we engage, step back and ask: ‘Given the choice of FC or iSCSI at the exact same cost and TCO, which would you choose?’ If you say iSCSI, you probably need to crack a few protocol books and get to studying.
So now let’s assume a good majority of customers looking to use shared storage for VMware will choose Fibre Channel. They may also be thinking about 10GE to increase throughput and reduce cable count. Why not combine the two and save myself the cost, administration, and overhead? If I’m about to dive into the virtualization market, I’m going with FCoE.
FCoE is not the end-all be-all of protocols any more than FC or iSCSI is. That being said, there isn’t a better protocol out there for block data on shared infrastructure. For FC shops expanding their FC attachment rate, and for shops moving into or increasing their server virtualization rate, FCoE is a great option.
For all the iSCSI fans: yes, I know you can make iSCSI behave beautifully and perform well in the right configuration and the right environment. That doesn’t mean it’s the best choice, but it also doesn’t mean it’s the wrong choice. FCoE, NFS, and iSCSI are three options that don’t have to be mutually exclusive and can all run on the same wire…as long as it’s the right wire.
If you do FCoE, don’t you ADD FC and Ethernet overhead? You write your article as if FCoE were a measure to reduce overhead. Besides, less overhead means less error correction. Anyway, well written!