It Takes More Than Technology to Knock Down a Silo

Steven Hill

Summary Bullets:

  • The IT industry needs to acknowledge that institutional silos are a human construct rather than a technology problem, and realize that technology isn’t always the solution to purely human challenges.
  • To actually banish silos, a company needs to evolve its management philosophy and change its corporate culture from adversarial to cooperative, so that managers are no longer forced into wasteful turf wars over budgets and resources.

Many vendors are still using the ‘break down the IT silos’ message to sell convergence. Silos are not a new idea – I first learned of them over 20 years ago – and I believe they are not necessarily a byproduct of technology. Silos develop in every corporate environment through a combination of groups tasked with substantially different missions and driven by the competitive forces of budgeting, corporate power, bragging rights and other territorial pressures. It is a very human condition, and as such it is not a problem likely to be solved by technology. Continue reading “It Takes More Than Technology to Knock Down a Silo”

Docker Takes the Next Big Step in Hardening Its Container Ecosystem

Steven Hill

Summary Bullets:

  • Open source containers may actually live up to the hype of becoming a viable alternative technology for hosting distributed applications.
  • Docker’s container technology will only succeed if it continues to build out a hardened, reliable management framework equal to or greater than that of existing application environments.

Many IT decisions for enterprises are heavily based on reliability. They have to be, because the typical enterprise customer doesn’t have the time, budget, staff, or patience to deal with the fiddly bits of making a fledgling system run, especially if there’s a Tier 1 vendor solution already available that meets most of the criteria. This has always been the open source dilemma for enterprise IT managers: is it worth the risk of reaching for the shiny brass ring of low-cost open source software when it’s just as easy to stay safe on the vendor pony? Well, sometimes it is, and the brass ring of container technology is getting closer and closer at a surprising rate. Continue reading “Docker Takes the Next Big Step in Hardening Its Container Ecosystem”

Gonna Carve Me a (Hybrid) Mountain

Steven Hill

Summary Bullets:

  • The hybrid cloud could benefit from a strong, open-source model upon which to base future development.
  • Projects of this magnitude depend on a clear vision that can be shared across large groups.

For the last few weeks I’ve been deeply involved in research on open-source cloud, and in the process I ran across a quote from Jim Whitehurst of Red Hat comparing the challenges of OpenStack to those faced during the creation of the Interstate Highway System back in 1956. I know many of us weren’t around then, but his point was well taken: it was a massive undertaking that ultimately benefited the entire nation. It led me to consider other projects that could serve as an analogy for the creation of an open cloud framework. The Hoover Dam came to mind, as did the space program, but just as I was pondering this, a TV show came on about the carving of Mount Rushmore. BINGO! GENIUS! Continue reading “Gonna Carve Me a (Hybrid) Mountain”

Easy Usually Starts Out Being Hard

Steven Hill

Summary Bullets:

  • Vendors have been surprisingly receptive to getting on board with the OpenStack initiative.
  • OpenStack is still a major challenge for companies to adopt.

Over the last few weeks there have been a number of key acquisitions in the world of hosted and managed private cloud startups. First-tier vendors like IBM, EMC, and Cisco have all made significant investments in private cloud startups that have already built respectable businesses providing a simplified path to an OpenStack private cloud, either by hosting secure externalized private clouds (which seems like an oxymoron) or by offering a managed private cloud service that can take a lot of the pain out of building an OpenStack-based private cloud. But to me, this raises the obvious question of just why private cloud is so darn hard. Continue reading “Easy Usually Starts Out Being Hard”

A Data Center Oasis in the Sands of Las Vegas

Steven Hill

Summary Bullets:

  • Much attention is paid to the hardware infrastructure within your data center, but many of the long-term energy savings can be gained by optimizing the architecture of the data center itself.
  • There are a number of high-efficiency data center designs available for companies looking to build out an extremely scalable and efficient data center. The challenge lies in finding the best possible combination of location, security, fault protection, energy efficiency and availability of connectivity; and sometimes it really pays to think outside the box.

OK, I’ll admit it: I’m a huge nerd who really enjoys visiting data centers. There’s something about the rows and rows of humming systems, the graceful elegance of a well-crafted cable bundle, and the blinkenlights… the blinkenlights. Anyway, a few weeks back I had the opportunity to tour a data center hosting facility in Las Vegas called SUPERNAP, designed and run by a company named Switch. My first thought was, “Why, of all places, Las Vegas?” But this, along with every other question I could possibly think of, was answered during a two-hour tour of one of the most remarkable data center facilities I’ve ever seen.

The SUPERNAP hides in plain sight, tucked into a warehouse-y section of Las Vegas, but don’t try to look it up on Google Maps, because it won’t show up. That’s only part of the world-class security built around the facility, which starts with state-of-the-art intrusion prevention systems and carries over to the extremely polite but well-armed security guard who trailed my host and me throughout the visit. Over the top, you say? Not really, if you consider the expectations of the broad range of very serious customers that occupy the neatly caged rows upon rows upon rows of data center hardware it protects. In fact, I would say that almost everything about the SUPERNAP is over the top as part of an extremely well-planned and executed design; and that’s the way it should be.

So, what’s the big deal? Aren’t all mega data centers the same? In some ways, yes: the problems of powering, cooling and protecting large data centers are pretty much universal, but the difference lies in how Switch and its founder Rob Roy chose to manage those challenges. Rather than basing his design around existing technology, he approached the problem from a completely fresh angle. The resulting SUPERNAP is a combination of intelligent industrial design and common sense, a purpose-built facility that’s more of a data center hosting machine than a mere building. When Roy learned that existing cooling systems weren’t up to the task of handling a 400,000-square-foot facility with a 100-megawatt power envelope, Switch created and patented its own super-efficient and redundant 1,000-ton air handlers, plus a building-level cooling system that incorporates fans with an integrated 500-pound flywheel that will continue to spin for an hour after a loss of power. When traditional power system designs didn’t meet its strict criteria for stability and redundancy, Switch designed new ones that did. And when existing construction design and building techniques didn’t pass muster, Switch created new ones optimized for the specific nature and dynamic growth requirements of the data center facilities it envisioned.

The result is a data center environment that can easily support oversized racks with power consumption exceeding 30 kW per rack, which means that customers can fit far more gear in less floor space; but this is only the beginning of the benefits the SUPERNAP’s economy of scale offers to customers. Another catalyst for the SUPERNAP design came when Mr. Roy was smart enough to snap up the broadband contracts that were liquidated when Enron went belly-up. In the early 2000s, Enron was in the process of building out a communications network that it could market as another commodity and had convinced over 40 major carriers to hub in Las Vegas (ah-hah!) because of its geological stability and proximity to power resources. Switch was in the right place at the right time when Enron foundered. The growth of the SUPERNAP has also made it one of the top power consumers in Nevada, and as such the state has given it the ability to negotiate its own power contracts, which draw from a combination of traditional generation facilities as well as solar, wind, geothermal and hydroelectric sources. The result is that SUPERNAP customers gain the benefit of Switch’s combined $2 trillion buying power through cooperative programs for network, power and cloud services, available whether they are running a single rack or a thousand.
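For a sense of what that rack density buys, here’s a minimal back-of-the-envelope sketch. The 100-megawatt envelope and 30 kW racks come from the figures above; the 5 kW “typical” rack and the 25-square-foot footprint per rack are purely illustrative assumptions, not Switch’s actual design numbers.

```python
# Rough rack-density comparison for a fixed IT power budget.
# Assumed values: the 5 kW "typical" rack and 25 sq ft of floor per rack
# (including aisle space) are illustrative guesses, not Switch figures.

FACILITY_POWER_KW = 100_000    # 100-megawatt power envelope (from the post)
FLOOR_SQFT_PER_RACK = 25       # assumed footprint per rack, aisles included

def footprint(rack_kw: int) -> tuple[int, int]:
    """Racks and floor space needed to absorb the full power envelope."""
    racks = FACILITY_POWER_KW // rack_kw
    return racks, racks * FLOOR_SQFT_PER_RACK

for label, rack_kw in (("typical 5 kW rack", 5), ("high-density 30 kW rack", 30)):
    racks, sqft = footprint(rack_kw)
    print(f"{label}: ~{racks:,} racks over ~{sqft:,} sq ft")
```

Same power budget, roughly one-sixth the racks and white space: that’s the “more gear in less floor space” argument in numbers.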

I’ve had the pleasure of touring a number of really great data centers around the world, but none so far have compared to the remarkable combination of intelligent architectural design, technology innovation and attention to detail that I experienced on my tour of the SUPERNAP. I would guess I’m not alone in my appreciation of the facility: Switch has an impressive list of over 1,000 customers that includes logos like eBay, MGM, DreamWorks, Sony and Intuit, plus a number of high-security entities it would never divulge. And Switch does this while remaining fiercely independent and system vendor-agnostic; it’s obvious its key interest is in providing the best data center hosting facility possible. And, from all indications, in being extremely nice people to partner with. When the zombie apocalypse comes, I know where I wanna be.

Continue reading “A Data Center Oasis in the Sands of Las Vegas”

Stop the Budgeting Madness

Steven Hill

Summary Bullets:

  • It’s almost a universal tradition that at the end of every year, there’s a scramble to spend departmental budgets to ensure that the funds will be available for the following year.
  • Returning thoughtfully planned but ultimately unspent funds shouldn’t be punished by cutting the following year’s budget request.

One of the most wasteful practices that I recall from my corporate years was the rush spending that always occurred at the end of the year to ‘ensure’ our budget requests for the next year weren’t cut. It was the biggest and silliest non-secret that I had ever run into at the time, but the truth was always there: if you don’t use it, you lose it AND next year’s budget will be reduced. Everybody knew that this practice went on, year after year, because (for whatever reason) there was this basic presumption that if you could return money at the end of the year, then you just wouldn’t need it the following year. This was true of capital budgets, supply budgets, and perhaps most difficult of all, maintenance budgets. As a manager, I always worked towards a truthful representation of the financial needs of my department at budget time, but I was amazed to learn that it was just a given that you HAD to pad it out to cover unforeseen problems as well as ensure that there was room for some discretionary spending throughout the year. Continue reading “Stop the Budgeting Madness”

Humans: Both the Problem and the Solution

Steven Hill

Summary Bullets:

  • Increasing automation in the data center can be one of the best ways to reduce errors in a dynamic production environment.
  • Automation can also be a source of problems on a much greater scale because of the number of processes that can be affected by errors within a large and complex environment.

It’s highly unlikely that American sociologist Robert Merton was thinking about cloud computing when he proposed his “Law of Unintended Consequences” in 1936, but it seems particularly apt in light of Microsoft’s revelations regarding the major Azure cloud storage outage of November 2014. Just this week, Microsoft released its root cause analysis that pointed to simple human error as the cause of the 11-hour storage outage that also took down any associated VMs, some of which took more than a day to get back online. Now I’m not here to pile on Microsoft; its response in fixing such a massive system crash can’t really be faulted. What does interest me is how vulnerable our complex and automated systems can still be after years of automation designed to remove human error from the equation. Continue reading “Humans: Both the Problem and the Solution”

Honey, I Shrunk the Blade Server

Steven Hill

Summary Bullets:

  • Does server vendors’ increasing focus on higher-density, multi-node server platforms actually reflect a growing need for them in the typical enterprise, or is it just a response to the IT industry’s fascination with high-profile, mega data centers?
  • Many of the new ‘multi-node’ servers that are appearing now come across as blade servers ‘lite,’ but it remains to be seen if they offer the same degree of flexibility, component redundancy and economy of scale as traditional blade systems.

I’ve been watching with great interest the new modular server systems being offered by big server vendors such as HP, Dell and Cisco, as well as a number of third-tier vendors, and I cannot help but be intrigued by the value proposition for these modular systems. Most are based on the extremely popular 2U server form factor and offer space for between two and eight server modules as well as aggregated networking and a fairly wide gamut of onboard storage options – all features that sound surprisingly similar to existing blade systems, but on a smaller scale. Continue reading “Honey, I Shrunk the Blade Server”

The Old Guard: Out of the Frying Pan and into the Frying Pan

Steven Hill

Summary Bullets:

  • HP’s decision to split into separate consumer and enterprise companies is long overdue, and done correctly, it will allow both siblings to be more responsive to their respective markets.
  • By shedding low-margin business units, IBM is doing the right thing, allowing it to continue innovating without bogging itself down in manufacturing considerations.

No other industry moves as fast as IT, and every vendor faces the challenge of evolving to remain current with the changing nature of this business. But the challenges for old-school industry stalwarts like IBM and HP are a little different, in part because they’re still simply perceived as “old-school” (irony intended), plus they have a legacy of products that they must continue to sell and support. Does this mean I give them a pass on everything they do? Not on your life – but I certainly admire the commitment it takes to recognize their own weaknesses and make the tough choices. Continue reading “The Old Guard: Out of the Frying Pan and into the Frying Pan”

The OpenPOWER Initiative May Actually Chart a Smarter Path for 64-Bit Computing

Steven Hill

Summary Bullets:

  • The truly open development environment offered by the OpenPOWER Foundation makes the high-performance POWER platform much more accessible and will benefit from the input of participants from all facets of the system design community.
  • The high cost of midrange systems has always restricted them to high-performance, high-availability tasks, but IBM’s program opening up the POWER processor platform to the world could usher in the next generation of affordable 64-bit computing options.

Open is an extremely overused word these days. In the world of cloud in which we live, the primary buzzword is always “open,” with everyone falling over one another to prove just how open they are. But there’s open and then there’s OPEN, as evidenced by IBM’s creation and ongoing support of the OpenPOWER Foundation. Now, I’ll be the first to admit that I’ve become wary of vendor claims of “openness”; it usually means “we’ll expose our APIs so YOU can work with US.” But in the case of the POWER processor platform, IBM has pulled out all the stops. As a member of OpenPOWER, you can get access to everything – blueprints, code, anything you want, going back as far as you want – plus participation in a completely collaborative environment designed to inspire and embrace outside participation. Too good to be true? Not at all, and over 60 companies have signed on as partners so far, with hopefully more to follow. Continue reading “The OpenPOWER Initiative May Actually Chart a Smarter Path for 64-Bit Computing”