
Summary Bullets:
- Does server vendors’ increasing focus on higher-density, multi-node server platforms reflect a genuine and growing need in the typical enterprise, or is it just a response to the IT industry’s fascination with high-profile, mega-scale data centers?
- Many of the new ‘multi-node’ servers now appearing come across as blade servers ‘lite,’ but it remains to be seen whether they offer the same degree of flexibility, component redundancy and economy of scale as traditional blade systems.
I’ve been watching with great interest the new modular server systems offered by major server vendors such as HP, Dell and Cisco, as well as a number of third-tier vendors, and I cannot help but be intrigued by their value proposition. Most are based on the extremely popular 2U server form factor and offer space for two to eight server modules, along with aggregated networking and a fairly wide range of onboard storage options – all features that sound surprisingly similar to existing blade systems, but on a smaller scale.
Some vendors, such as Cisco with its new UCS M-Series, are using the form factor primarily as a density platform, cramming up to 16 sockets of Intel Xeon into that 2U space. Dell’s recently announced FX2 system, on the other hand, takes it in a slightly different direction, extending the architecture to include a choice of server modules based on Intel Xeon processors or Intel Atom-based ‘microservers,’ flexibly targeting database, web, HPC and high-density applications. But none of this is new to HP, which has been producing multi-node systems like this all along, with offerings such as the DL1000 Multi-Node server, the Apollo 6000 rack-scale platform and the granddaddy of all multi-node systems, the HP Moonshot, which packs 45 server modules into a single 4.3U enclosure. With six different server cartridges spanning Xeon, Atom, ARM and Opteron processors, Moonshot offers the best combination of density and flexibility of the lot: customization suited to a number of specialized applications and enough density to support up to 1,600 servers per rack.
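For a rough sense of where figures like these come from, here is a back-of-the-envelope sketch in Python. The per-chassis numbers are the vendor claims above; the 42U rack height and the four-nodes-per-cartridge figure (Moonshot’s densest cartridge configuration) are my own assumptions for illustration, not vendor-verified specifications.

```python
# Back-of-the-envelope density math for the vendor claims above.
# Assumptions (mine, not vendor-verified): a standard 42U rack, and
# quad-node cartridges for Moonshot's maximum-density configuration.

RACK_UNITS = 42

# Cisco UCS M-Series claim: up to 16 Xeon sockets per 2U chassis
sockets_per_chassis = 16
m_series_sockets = (RACK_UNITS // 2) * sockets_per_chassis
print(f"M-Series: {m_series_sockets} Xeon sockets per rack")    # 336

# HP Moonshot claim: 45 cartridges per 4.3U enclosure
chassis_per_rack = int(RACK_UNITS / 4.3)    # 9 enclosures per rack
servers_per_chassis = 45 * 4                # assuming 4 server nodes per cartridge
print(f"Moonshot: {chassis_per_rack * servers_per_chassis} servers per rack")  # 1,620
```

Nine 4.3U enclosures of 180 servers each works out to roughly 1,620 servers, which squares with the ‘up to 1,600 per rack’ figure.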
These systems share a number of features with their blade system counterparts: power and cooling resources pooled at the chassis, I/O aggregated at the chassis, the ability to hot-swap server modules without disrupting chassis-level operations and, above all, an ‘internal fabric’ (as some companies call it) that negotiates the differing requirements of the server modules, allowing chassis-level management across multiple processor platforms. As I said earlier: blade systems lite. That said, what intrigues me is how the growth of these systems runs against the grain of a data center philosophy that has recently been focused on generic, virtualized computing pools. Although they could be, these new systems are not targeted at the virtualization market; rather, they are adaptable yet purpose-built hardware platforms that bring scale and flexibility to more specialized applications that are not necessarily well served by generic virtualization. I think the success of these new systems will be a good indicator of whether purpose-built server hardware remains relevant in the modern data center.