- Open source containers may actually live up to the hype of becoming a viable alternative technology for hosting distributed applications.
- Docker’s container technology will only succeed if it continues to build out a management framework whose reliability and robustness equal or exceed those of existing application environments.
Many IT decisions for enterprises are heavily based on reliability. They have to be, because the typical enterprise customer doesn’t have the time, budget, staff, or patience to deal with the fiddly bits of making a fledgling system run, especially if there’s a Tier 1 vendor solution already available that meets most of the criteria. This has always been the open source dilemma for enterprise IT managers: is it worth the risk of reaching for the shiny brass ring of low-cost open source software when it’s just as easy to stay safe on the vendor pony? Well, sometimes it is, and the brass ring of container technology is getting closer at a surprising rate.
Containers have taken the IT world by storm, and there are those who say container technology like that offered by current wunderkind Docker is set to go head to head with application environments like VMware, in part because the very nature of container technology provides an alternative to many of the capabilities that virtualization platforms like VMware were built to deliver. Well, in some contexts, yes, and in others, no. For example, instead of spinning up a whole new and complete OS instance for every application, multiple Docker containers can take advantage of common resources already available within an existing OS instance by simply tapping into the resources made available by the Docker Engine. This does make them far more lightweight than traditional virtual machines, but container platforms do not yet have the same rich management, security, and resilience capabilities as virtualization, so traditional virtualization technologies aren’t likely to be retired anytime soon.
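The difference is visible in how a container image is defined. A minimal Dockerfile (the base image and file names here are purely illustrative) packages only the application layer; there is no kernel or init system inside, because every container launched from it borrows the host’s kernel through the Docker Engine:

```dockerfile
# Illustrative sketch: no OS kernel, no init system -- just the app layer.
# Containers built from this image share the host kernel via the Docker Engine.
FROM alpine:3.2
COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

That is why a container can start in seconds where a full VM must boot an entire guest OS.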
On August 11, 2015, Docker announced the availability of Docker Content Trust, its next step in making Docker a stronger and more secure platform for distributed computing. This new capability creates a digitally signed environment for Docker images that can only help to address the security and reliability concerns of the enterprise customer. Docker has also found a friend in IBM, which has signed on big time to the Docker container platform and has been building out its own business-hardened Docker environment that can run on every IBM hardware platform, including the z Series mainframe. And knowing IBM, it’s not about to support anything the least bit sketchy on that particular system.
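Content Trust is opt-in on the client side: it is switched on with an environment variable, after which pull and push operations verify publisher signatures and refuse unsigned image tags. A quick sketch (the image name is illustrative, not a real repository):

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pulls now verify the publisher's signature;
# an unsigned or tampered tag is rejected instead of silently deployed
docker pull example-registry/myapp:1.0
```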
But there is still a lot of work for Docker to do, and the company knows it. Fortunately, a number of projects are underway to firm up the management and communication environment for containerized applications, providing a stronger framework for security and data access as well as increased monitoring of container status and performance. Plus, I’ve said it before and I’ll say it again: API management should be a key consideration for container users, in order to avoid the same version-based problems we all know and love from the days of Windows DLL hell back in the ’90s. Fortunately, IBM and a number of other vendors are taking this challenge very seriously, and Docker has promised the same; but forewarned is forearmed. That being said, there’s a lot to like in Docker, and as far as I can see, every step it has taken has been a step in the right direction.
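The DLL-hell analogy comes down to unversioned interfaces: a service upgrade silently breaks every container that depends on it. One common mitigation is to put an explicit version on every API and enforce a compatibility policy at the boundary. A minimal sketch in Python (the names and the major/minor policy here are my own illustration, not any specific product’s API):

```python
# Sketch of explicit API version checking between containerized services.
# Assumed policy: majors must match (a major bump is a breaking change);
# within a major, a newer server remains backward compatible.

def parse_version(v):
    """Parse 'MAJOR.MINOR' into a tuple of ints, e.g. '1.2' -> (1, 2)."""
    major, minor = v.split(".")
    return int(major), int(minor)

def compatible(client_version, server_version):
    """True if the client can safely talk to the server under the policy."""
    c_major, c_minor = parse_version(client_version)
    s_major, s_minor = parse_version(server_version)
    return c_major == s_major and s_minor >= c_minor

print(compatible("1.2", "1.4"))  # True: same major, newer server
print(compatible("1.2", "2.0"))  # False: major bump is a breaking change
```

Rejecting a mismatched major at connection time turns a mysterious runtime failure into an explicit, diagnosable error, which is exactly the discipline DLL hell lacked.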