Data Center Bridging (DCB) is a group of IEEE standard protocols designed to support I/O consolidation. DCB enables multiple protocols with very different requirements to run over the same Layer 2 10 Gigabit Ethernet infrastructure. Because DCB is currently discussed alongside Fibre Channel over Ethernet (FCoE), it’s not uncommon for people to think of DCB as part of FCoE. This is not the case: while FCoE relies on DCB for proper treatment on a shared network, DCB enhancements can be applied to any protocol on the network. DCB support is being built into data center hardware and software from multiple vendors and is fully backwards compatible with legacy systems (no forklift upgrades). For more information on FCoE see my post on the subject (http://www.definethecloud.net/?p=80).
Network protocols typically have unique requirements with regard to latency, packet/frame loss, bandwidth, etc. These differences have a large impact on a protocol’s performance in a shared environment. Differences such as flow control and tolerance for frame loss are the reason Fibre Channel networks have traditionally been built on physical infrastructure separate from Ethernet networks. DCB is the set of tools that allows us to converge these networks without sacrificing performance or reliability.
Let’s take a look at the DCB suite:
Priority Flow Control (PFC) 802.1Qbb:
PFC is a flow control mechanism designed to eliminate frame loss for specific traffic types on Ethernet networks. Protocols such as Small Computer System Interface (SCSI), which is used for block data storage, are very sensitive to data loss. SCSI is the heart of Fibre Channel, which extends SCSI from internal disk to centralized storage across a network. In its native form on dedicated networks, Fibre Channel has tools (buffer-to-buffer credits) to ensure that frames are not lost as long as the network is stable. In order to move Fibre Channel across Ethernet networks that same ‘lossless’ behavior must be guaranteed, and PFC is the tool that provides it.
PFC uses a pause mechanism that allows a receiving device to signal a pause to the directly connected sending device prior to buffer overflow and packet loss. While Ethernet has had a tool for this for some time (802.3x pause), it has always operated at the link level, meaning all traffic on the link would be paused rather than just a selected traffic type. Pausing an entire link carrying various I/O types would be a bad thing, especially for traffic such as IP telephony and streaming video. Rather than pause the whole link, PFC sends a pause signal for a single Class of Service (CoS), carried in the 3-bit priority field of the 802.1Q Ethernet header. This allows up to 8 classes to be defined and paused independently of one another.
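To make the mechanism concrete, here is a minimal Python sketch of what a PFC pause frame carries on the wire: a MAC Control frame with the PFC opcode, a vector marking which priorities are paused, and a per-priority pause timer. The field layout follows 802.1Qbb; the source MAC and quanta values below are purely illustrative.

```python
import struct

PFC_DEST_MAC = bytes.fromhex("0180c2000001")  # reserved multicast address for MAC Control frames
MAC_CONTROL_ETHERTYPE = 0x8808                # MAC Control EtherType, shared with 802.3x pause
PFC_OPCODE = 0x0101                           # Priority-based Flow Control opcode

def build_pfc_frame(src_mac: bytes, pause_quanta: dict[int, int]) -> bytes:
    """Build a PFC (802.1Qbb) pause frame.

    pause_quanta maps a priority (0-7) to a pause time in quanta
    (one quantum = 512 bit times). Only the listed priorities are
    paused; the other traffic classes keep flowing.
    """
    enable_vector = 0
    timers = [0] * 8
    for priority, quanta in pause_quanta.items():
        enable_vector |= 1 << priority
        timers[priority] = quanta

    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)
    return PFC_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

# Pause only priority 3 (the CoS commonly used for FCoE) for the maximum
# time, leaving the other seven classes untouched.
frame = build_pfc_frame(bytes.fromhex("001122334455"), {3: 0xFFFF})
print(frame.hex())
```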
Congestion Management (802.1Qau):
When we begin pausing traffic in a network we have the potential to spread congestion by creating choke points. Imagine trying to drive past a football stadium (football or American football, pick your flavor) when the game is about to start: you’re stuck in gridlock even though you’re not going to the game. If you’ve got that image, you’re on the right track. Congestion management is a set of signaling tools used to push that congestion out of the network core to the network edge (if you’re thinking old-school FECN and BECN, you’re not far off).
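As a rough illustration of the idea, here is a toy Python model of the 802.1Qau mechanism (also known as Quantized Congestion Notification): a congestion point in the core computes a feedback value from its queue depth and growth rate, and a negative value triggers a congestion notification message toward the edge, where the sender slows down. The constants and queue numbers here are illustrative choices, not values lifted from the standard.

```python
# Toy model of an 802.1Qau congestion point. The feedback formula
# follows the QCN proposal: Fb = -(Q_off + w * Q_delta).
W = 2       # weight on the rate of queue growth (illustrative)
Q_EQ = 26   # desired equilibrium queue length in frames (illustrative)

def congestion_feedback(q_len: int, q_old: int) -> int:
    """Negative feedback means congestion: notify the traffic source."""
    q_off = q_len - Q_EQ       # how far above the setpoint the queue sits
    q_delta = q_len - q_old    # how fast the queue is growing
    return -(q_off + W * q_delta)

def reaction_point(current_rate: float, fb: int, gd: float = 1 / 128) -> float:
    """Multiplicative rate decrease at the network edge when a
    congestion notification message (CNM) arrives."""
    if fb < 0:
        current_rate *= max(1 - gd * abs(fb), 0.5)  # cut at most 50%
    return current_rate

fb = congestion_feedback(q_len=40, q_old=30)   # queue above setpoint and growing
print(fb)                                      # negative -> send a CNM toward the edge
print(reaction_point(10e9, fb))                # edge NIC slows its injection rate
```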
Bandwidth Management (802.1Qaz):
Bandwidth management, also known as Enhanced Transmission Selection (ETS), is a tool for simple, consistent application of bandwidth controls at Layer 2 on a DCB network. It allows a specific traffic type to be guaranteed a percentage of available bandwidth based on its CoS. For instance, on a 10GE network access port carrying FCoE you could guarantee 40% of the bandwidth to FCoE. This provides a 4Gb tunnel for FCoE when needed, but allows other traffic types to utilize that bandwidth when FCoE is not using it.
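The sharing behavior is easy to see in a short sketch. This Python function (the class names and demand figures are my own, purely for illustration) allocates a 10GE link by guaranteed percentage and then hands any unused guarantee to classes that still have demand, which is exactly the behavior described above:

```python
# Sketch of ETS-style bandwidth sharing on a 10GE port: each traffic
# class gets its guaranteed share, and whatever a class doesn't use is
# redistributed, by weight, to classes that still have demand.
LINK_GBPS = 10.0

def ets_share(guarantees: dict[str, float], demand: dict[str, float]) -> dict[str, float]:
    """guarantees: class -> fraction of the link (sums to 1.0).
    demand: class -> offered load in Gbps. Returns allocated Gbps."""
    alloc = {c: min(demand[c], guarantees[c] * LINK_GBPS) for c in guarantees}
    spare = LINK_GBPS - sum(alloc.values())
    hungry = {c: guarantees[c] for c in guarantees if demand[c] > alloc[c]}
    while spare > 1e-9 and hungry:
        total_w = sum(hungry.values())
        for c, w in list(hungry.items()):
            extra = min(spare * w / total_w, demand[c] - alloc[c])
            alloc[c] += extra
        spare = LINK_GBPS - sum(alloc.values())
        hungry = {c: guarantees[c] for c in hungry if demand[c] - alloc[c] > 1e-9}
    return alloc

# FCoE is guaranteed 40% (4Gb) but currently idle, so LAN traffic borrows it.
print(ets_share({"fcoe": 0.4, "lan": 0.6}, {"fcoe": 0.0, "lan": 10.0}))
# {'fcoe': 0.0, 'lan': 10.0}
```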
Data Center Bridging Exchange (DCBX):
DCBX is a Layer 2 communication protocol, built on the Link Layer Discovery Protocol (LLDP), that allows DCB-capable devices to communicate and to discover the edge of the DCB network, i.e. legacy devices. DCBX not only passes information but also provides tools for passing configuration, which is key to the consistent configuration of DCB networks. For instance, a DCB switch acting as a Fibre Channel over Ethernet Forwarder (FCF) can let an attached Converged Network Adapter (CNA) on a server know to tag FCoE frames with a specific CoS and to enable pause for that traffic type.
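Here is a conceptual Python sketch of that exchange, well above the wire level: a “willing” CNA simply adopts whatever the switch advertises. The class and field names are my own inventions for illustration (real DCBX carries this information in LLDP TLVs), and CoS 3 is used because it is the value commonly assigned to FCoE.

```python
# Conceptual model of the DCBX exchange between an FCF and a CNA,
# not a wire-level implementation.
from dataclasses import dataclass
from typing import Optional

FCOE_ETHERTYPE = 0x8906  # FCoE frames are identified by this EtherType

@dataclass
class DcbConfig:
    app_priority: dict   # EtherType -> CoS value (0-7)
    pfc_enabled: set     # CoS values given lossless (pause) behavior

@dataclass
class Cna:
    willing: bool = True               # DCBX "willing" bit: accept the peer's config
    config: Optional[DcbConfig] = None

    def receive_dcbx(self, advertised: DcbConfig) -> None:
        if self.willing:
            self.config = advertised   # adopt the switch's configuration

    def cos_for(self, ethertype: int) -> int:
        return self.config.app_priority.get(ethertype, 0)

# The FCF advertises: tag FCoE with CoS 3 and treat CoS 3 as lossless.
switch_advertisement = DcbConfig(app_priority={FCOE_ETHERTYPE: 3}, pfc_enabled={3})

cna = Cna()
cna.receive_dcbx(switch_advertisement)
print(cna.cos_for(FCOE_ETHERTYPE))    # 3 -> FCoE frames tagged with CoS 3
print(3 in cna.config.pfc_enabled)    # True -> pause enabled for that class
```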
All in all, the DCB features are key enablers for true consolidated I/O. They provide a tool set that lets each traffic type be handled properly, independent of the other protocols on the wire. For more information on consolidated I/O see my previous post Consolidated IO (http://www.definethecloud.net/?p=67).