There were quite a few announcements this week at the HP Technology Forum in Vegas. Several of them were extremely interesting; the ones that resonated the most with me were:
Superdome 2:
I'm not familiar with the Superdome 1, nor am I in any way an expert on non-x86 architectures. In fact, that's exactly what struck me as excellent about this product announcement. It allows the mission-critical servers that a company chooses to (or must) run on non-x86 hardware to sit right alongside the more common x86 architecture in the same chassis. This further consolidates the datacenter and reduces infrastructure for customers with mixed environments, of which there are many. While some customers are currently pushing to migrate all data center applications onto x86-based platforms, that migration is not fast, cheap, or a good fit for every use case. Superdome 2 provides a common infrastructure for both the mission-critical applications and the x86-based applications.
For a more technical description see Kevin Houston’s Superdome 2 blog: http://bladesmadesimple.com/2010/04/its-a-bird-its-a-plane-its-superdome-2-on-a-blade-server/.
Note: As stated, I'm no expert in this space and I have no technical knowledge of the Superdome platform, but conceptually it makes a lot of sense and seems like a move in the right direction.
Common Infrastructure:
There was a lot of talk in some of the keynotes about a common look, feel, and infrastructure across the separate HP systems (storage, servers, etc.). At first I laughed this off as a 'who cares,' but then I started to think about it. If HP takes this message seriously and standardizes rail kits, cable management, components (where possible), etc., it will have big benefits for the administration and deployment of equipment.
If you've never done a good deal of racking/stacking of data center gear you may not see the value here, but I spent a lot of time on the integration side with this as part of my job. Within a single vendor (or sometimes a single product line), rail kits for servers/storage, rack mounting hardware, etc. can all be different. This adds time and complexity to integrating systems and can sometimes lead to less-than-ideal builds. For example, the first vBlock I helped a partner configure (for demo purposes only) had the two UCS systems stacked on top of one another at the bottom of the rack with no mounting hardware, because the EMC racks being used had different rail mounts than the UCS system was designed for. Issues like this can cause problems and delays, especially when the people in charge of infrastructure aren't properly engaged during purchasing (very common).
Overall I can see this as a very good thing for the end user.
HP FlexFabric:
This is the piece that really grabbed my attention while watching the constant Twitter stream of HP announcements. HP FlexFabric brings network consolidation to the HP blade chassis. I specifically say network consolidation, because HP got this piece right. Yes, it does FCoE, but that doesn't mean you have to. FlexFabric provides the converged networking tools to carry any protocol you want over 10GE to the blades and split that traffic out to separate networks at the chassis level. Here's a picture of the switch from Kevin Houston's blog: http://bladesmadesimple.com/2010/06/first-look-hps-new-blade-servers-and-converged-switch-hptf/.
The first thing to note when looking at this device is that all the front-end uplink ports look the same, so how do they split out Fibre Channel and Ethernet? The answer is that QLogic (the manufacturer of the switch) has been doing some heavy lifting on the engineering side: they've designed the front-end ports to accept optics for either Fibre Channel or 10GE. This means you've got flexibility in how you use your bandwidth. Doing this per port is an industry first; the Cisco Nexus 5000 hardware ASIC has been capable of it since FCS, but there it is implemented on a per-module basis rather than per port as on this switch.
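To make that per-port flexibility a bit more concrete, here's a minimal sketch in plain Python of how I think about it: each uplink port takes on whichever role matches the optic you plug into it. This is purely my own illustration of the concept; the class and names are hypothetical, not any QLogic or HP configuration interface.

```python
# Conceptual sketch only: each front-end uplink port takes on a Fibre Channel
# or Ethernet personality based on the optic installed in it. This is my own
# illustration of the idea, not the switch's actual configuration model.

SUPPORTED_OPTICS = {"10GE", "FC"}

class UplinkPort:
    def __init__(self, number):
        self.number = number
        self.personality = None  # no personality until an optic is installed

    def install_optic(self, optic_type):
        if optic_type not in SUPPORTED_OPTICS:
            raise ValueError(f"unsupported optic: {optic_type}")
        self.personality = optic_type  # the port now carries that protocol

ports = [UplinkPort(n) for n in range(1, 5)]
ports[0].install_optic("FC")    # uplink toward the Fibre Channel SAN
ports[1].install_optic("10GE")  # uplink toward the Ethernet LAN

for p in ports:
    print(f"port {p.number}: {p.personality or 'empty'}")
```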
The next piece that is quite interesting, and really provides flexibility and choice in the HP FlexFabric concept, is the decision to use Emulex's OneConnect adapter as the LAN on Motherboard (LOM). This was a very smart decision by HP. Emulex's OneConnect is a product that has impressed me from square one; it shows a traditionally Fibre Channel company embracing the fact that Ethernet is the future of storage without locking the decision into a single upper-layer protocol (ULP). OneConnect provides 10GE connectivity, TCP offload, iSCSI offload/boot, and FCoE capability all on the same card; now that's a converged network! HP seems to have seen the value there as well and built it into the system board.
Take a step back and soak that in: LOM has been owned by Intel, Broadcom, and the other traditional NIC vendors since the beginning, and until last year Emulex was looked at as one of two solid FC HBA vendors. This week HP announced the ousting of a traditional NIC vendor in favor of a traditional FC vendor on its system board. That's a big win for Emulex. Kudos to Emulex for the technology (and the business decisions behind it) and to HP for recognizing that value.
Looking a little deeper, the next big piece of the overall architecture is that the whole FlexFabric system supports HP's FlexConnect technology, which allows a server admin to carve up a single physical 10GE link into four logical links that are presented to the OS as individual NICs.
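To illustrate the carving idea, here's a minimal sketch in plain Python of splitting one 10GE link's bandwidth across four logical NICs. The NIC names and allocations are my own examples, not HP defaults or an HP API; the point is only that the four NICs the OS sees all share one physical 10 Gb pipe.

```python
# Conceptual sketch only: models carving one physical 10GE link into four
# logical NICs. Not an HP API; the names and allocations are made up.

PHYSICAL_LINK_GBPS = 10.0  # one physical 10GE port on the adapter

# Each logical NIC gets a slice of the physical bandwidth; the OS sees four
# separate NICs, but they all ride the same 10GE wire.
flex_nics = {
    "nic_mgmt":    0.5,  # management traffic
    "nic_migrate": 2.0,  # VM migration traffic
    "nic_vm":      4.5,  # virtual machine traffic
    "nic_storage": 3.0,  # iSCSI/FCoE storage traffic
}

assert sum(flex_nics.values()) <= PHYSICAL_LINK_GBPS, \
    "logical allocations cannot exceed the physical link"

for name, gbps in flex_nics.items():
    share = gbps / PHYSICAL_LINK_GBPS * 100
    print(f"{name:12s} {gbps:4.1f} Gbps ({share:4.1f}% of the 10GE link)")
```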
The only drawback I see in the FlexFabric picture is that FCoE is only used within the chassis and is split into separate networks from there. This can definitely increase the required infrastructure depending on the architecture. I'll wait to go too deep into that until I hear a few good lines of thinking on why that direction was taken.
Summary:
HP had a strong week in Vegas. These were only a few of the announcements; several others, including mind-blowing stuff from HP Labs (start protecting John Connor now), can be found on blogs and HP's website. Of all of the announcements, FlexFabric was the one that really caught my attention. It embraces the idea of I/O consolidation without clinging to FCoE as the only way to do it, and it greatly increases the competition in that market, which always benefits the end-user/customer.
Comments, corrections, bitches, moans, gripes, and complaints all welcome.
HP sounds like they have some interesting stuff coming out. I enjoyed your summary very much; I just stumbled upon your site and this was the first post that I read. While I don't have too much experience with data centers yet, it's something I've always wanted to get into, and I'm currently heading down that route as I try to get my CCNA and beyond. I'm definitely looking forward to reading more posts on your site. Thanks!
Blake,
Thanks for stopping by and for the comment. HP definitely had some great announcements this week. Stay tuned for a post or two about announcements from Cisco Live in Vegas this week.
Hi,
I have currently been assigned the project of testing this new product, HP FlexFabric, but I have no idea where to start.
Has anybody tested the NPIV feature on HP FlexFabric? I would like to know the test plans used and the results of those tests.
I would also like to know more about the setup that was used.
MS, Thanks for reading. I haven’t personally tested the FlexFabric configuration yet, and it is not yet shipping. I’ll see if I can dig up some answers for you from colleagues and provide more detail when I have it.
Joe