
Summary Bullets:
- Vendors have been surprisingly willing to get on board with the OpenStack initiative.
- OpenStack is still a major challenge for companies to adopt.
Over the last few weeks there have been a number of key acquisitions in the world of hosted and managed private cloud startups. First-tier vendors like IBM, EMC, and Cisco have all made significant investments in private cloud startups that have already built respectable businesses providing a simplified path to an OpenStack private cloud, either by hosting secure externalized private clouds (which seems like an oxymoron) or by offering a managed private cloud service that can take a lot of the pain out of building an OpenStack-based private cloud. But to me, this raises the obvious question: just why is private cloud so darn hard?
Over the years, I’ve taken to looking at the rise of the cloud as the “GUI-ization” of the data center, because to me it resembles the challenges that companies like Apple and Microsoft faced when they began abstracting away the underlying complexity of the personal computer decades ago, with the goal of making it more accessible to the non-technical user. Up until then, personal computer use was relegated to do-it-yourself hobbyists or hardcore business users who were more than happy to put up with the technological challenges, whether for fun or for the benefits of personal computing. It wasn’t until the rise of MacOS and Windows that personal computing finally became ubiquitous. Today, that premise has gone even further with the simplicity of the tablet and smartphone, and everyone can now tap the remarkable power of these devices regardless of their technology skills.
But ironically, while all of that simplicity became available to the end user, much of the technology in the data centers that supported all those users and ran all of our businesses remained pretty much the same old do-it-yourself science projects of days past. Granted, the growth of server virtualization has gone a long way toward changing production environments from static to dynamic, but that’s only part of the story. The remaining challenge lies in delivering the point-and-click, self-service capabilities of the cloud delivery model, and that’s turning out to be a far tougher nut to crack. Automation at that level demands a foundational framework that supports all that automation and abstraction, PLUS access control, dynamic expansion, workload portability and policy management that can span hosts regardless of their physical location. Yikes: it’s a HUGE task if you think about it, and all the permutations of legacy physical infrastructure don’t help much.
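To make that self-service idea a bit more concrete, here’s a minimal sketch of what programmatic provisioning on an OpenStack cloud can look like once the framework is in place. This is only an illustration, assuming the openstacksdk Python library and a clouds.yaml profile named “mycloud”; the image, flavor, and network names are placeholders:

```python
import openstack

# Connect using credentials from a clouds.yaml profile ("mycloud" is a placeholder name).
conn = openstack.connect(cloud="mycloud")

# Look up existing resources by name; these names are placeholders too.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private-net")

# Request a new server (VM) -- the self-service step a user or an
# automation pipeline triggers instead of filing a ticket.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the server reaches ACTIVE status.
server = conn.compute.wait_for_server(server)
print(server.status)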
We’re now experiencing the growing pains of extending the cloud delivery model to its logical conclusion: the ability for customers to dynamically place production workloads wherever they fit best, regardless of the infrastructure’s physical location. Technical nirvana, perhaps, but well within the capabilities of existing technology, provided vendors and customers can work together, making vendor-agnostic initiatives like OpenStack one of our best hopes for pulling this all together. But for all the effort that’s gone into building out the OpenStack framework available today, it’s still only a framework, and the nuances of putting it into production on a do-it-yourself basis remain a substantial challenge for all but the most technically savvy customers. Someday we’ll look back on this and wonder why we made it all so hard, but until that day customers and vendors alike should insist on the openness that will convert all this cloud confusion into simply IT as it should be. In the meantime, it’s entirely possible to start small and work up to it, and this growth in the managed OpenStack space gives customers the latitude to adopt the wonders of cloud automation on their own terms and grow into it as needs, technical capabilities and budget allow.