For this New Year’s edition of Tune into the Cloud I spent a quiet, sunny morning sitting on the dock of a harbor bay – thinking of Otis Redding – watching the incoming and outgoing streams of our little country called cloud. But basically I saw only one thing go by, namely: containers!
No trend is currently as hot as containers, and particularly the popular container management system Docker. Although arguably still maturing, it has already reached almost mythical proportions of cloud hype. Docker, so the story goes, will make virtualization superfluous, replace PaaS altogether and, thanks to unparalleled portability, put a final end to decades of platform and vendor lock-in. As a result there currently is no start-up or cloud provider that has not incorporated Docker prominently in its 2015 strategy.
But is Docker really the cure that heals all diseases? To answer that, let’s look at the problems Docker itself says it addresses. Docker talks about “Build, Ship and Run, Any App, Anywhere”, analogous to physical shipping containers, which thanks to standardized dimensions and connectors can be used without any problems or adjustments on trucks and ships on every continent. In fact, the logo of Docker is a whale transporting a large number of containers on its back.
By offering a standard container format Docker addresses one of the most frequent discussions between developers and IT operations, namely: “But on my development machine it runs fine!” Organisations may have had time for such discussions when they only took new developments into production once a quarter or twice a year. But with new methods like agile, scrum and continuous deployment, the logistics really need to be a lot more efficient.
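To make that standard format concrete: a container image pins the application together with its exact runtime and libraries, so the same artifact that ran fine on the development machine runs identically in production. A minimal sketch of such an image definition (the base image, file names and commands below are illustrative assumptions, not taken from the post):

```dockerfile
# Pin the exact runtime version the developer tested against
FROM python:3.9-slim

WORKDIR /app

# Install dependencies from a locked requirements file, so every
# build resolves to exactly the same library versions
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY app.py .

# The same command runs on the laptop, the test server and in production
CMD ["python", "app.py"]
```

Because the image is built once and then shipped unchanged, the “it runs fine on my machine” discussion largely disappears: whatever ran in the container on the laptop is byte-for-byte what runs on the production host.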
The underlying problem of incompatibility is what the Linux community calls “Dependency Hell” – a condition Windows developers may still remember as “DLL Hell”. Any developer aims to write as little code as possible and does so by using standard libraries (DLL: Dynamic Link Library) wherever possible. But in reality many of these “standard” libraries are not that standard: there are often several versions, which causes problems when merging code from multiple developers or companies, or when moving code from a development or test environment to a shared production environment.
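The conflict can be sketched in a few lines: two applications each pin a different version of the same shared library, which is fine in isolation but impossible to satisfy in one shared environment. (The application and library names below are made up for illustration.)

```python
# Each app declares which version of a shared library it needs
app_requirements = {
    "billing-app":   {"libssl": "1.0"},
    "reporting-app": {"libssl": "2.0"},
}

def find_conflicts(requirements):
    """Return libraries that two apps pin to different versions."""
    pinned = {}      # library name -> (version, app that pinned it)
    conflicts = []
    for app, libs in requirements.items():
        for lib, version in libs.items():
            if lib in pinned and pinned[lib][0] != version:
                conflicts.append((lib, pinned[lib], (version, app)))
            else:
                pinned[lib] = (version, app)
    return conflicts

# In one shared environment only one libssl can be installed:
print(find_conflicts(app_requirements))
# -> [('libssl', ('1.0', 'billing-app'), ('2.0', 'reporting-app'))]
```

A container sidesteps the conflict entirely: each app ships its own libssl inside its own image, so there is no shared environment to fight over.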
In the Windows environment this was pragmatically “solved” by placing each application on its own server (one app per server). That caused utilization (and thus efficiency) to decrease significantly, which virtualization later helped us to somewhat address. But with virtualization we are still running a complete copy of the operating system for each application, and we still have to maintain, patch and update all these different libraries for each application separately. Back when companies had a small number of fairly large and relatively static applications, this was acceptable. But when moving to thousands of microservices that need to scale daily from a few hundred to hundreds of thousands or even millions of users, this method simply becomes too inefficient (it’s no coincidence that Google is one of the first and largest users of container technology).
The Docker approach (in combination with the container technology of Linux) offers a more elegant and efficient solution to this problem. Thanks to an ingenious (layered, read-only) file system, everything that is shared exists logically only once, and thus consumes scarce resources – such as memory – only once.
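The effect of those stacked read-only layers can be mimicked with Python's `collections.ChainMap`: reads fall through to shared lower layers, while writes land only in the container's own thin top layer. This is a conceptual model only, not how Docker's storage drivers are actually implemented:

```python
from collections import ChainMap

# Read-only layers, shared by every container built on this image
base_os_layer = {"/bin/sh": "shell v1", "/lib/libc": "libc 2.31"}
runtime_layer = {"/usr/bin/python": "python 3.9"}

def start_container():
    # Each container gets only a thin private writable layer on top;
    # the shared layers exist once, however many containers start.
    writable_layer = {}
    return ChainMap(writable_layer, runtime_layer, base_os_layer)

c1 = start_container()
c2 = start_container()

c1["/app/config"] = "container 1 settings"  # write goes to the top layer only

print(c1["/lib/libc"])      # read falls through to the shared base layer
print("/app/config" in c2)  # False: c2 is unaffected by c1's write
```

Starting a second container therefore costs almost nothing extra: only the small writable layer is new, which is why containers are so much cheaper than full virtual machines.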
Another advantage of this approach is that developers can focus on the application in their container, while the operations team ensures that at any time sufficient buoyancy (a.k.a. ships) is available for the containers in production. Docker is hereby used to distribute the various application containers conveniently and efficiently over the available vessels. Incidentally, those vessels are often still virtual machines, mainly for security reasons: putting individual containers afloat on the (public) ocean is still somewhat too sensitive to piracy.
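That distribution role can be sketched as a simple first-fit bin-packing problem: place each container on the first VM that still has enough free memory. This is a deliberately naive scheduler for illustration; real orchestrators also weigh CPU, affinity, failure domains and much more:

```python
def schedule(containers, hosts):
    """First-fit placement of containers onto hosts by free memory.

    containers: list of (name, memory_mb) tuples
    hosts: dict mapping host name -> free memory_mb
    Returns {container name: host name}, or raises if one does not fit.
    """
    free = dict(hosts)  # work on a copy, leave the caller's dict intact
    placement = {}
    for name, mem in containers:
        for host, avail in free.items():
            if avail >= mem:
                free[host] -= mem
                placement[name] = host
                break
        else:
            raise RuntimeError(f"no host has {mem} MB free for {name}")
    return placement

containers = [("web", 512), ("api", 1024), ("cache", 2048), ("worker", 512)]
hosts = {"vm-1": 2048, "vm-2": 2048}

print(schedule(containers, hosts))
# -> {'web': 'vm-1', 'api': 'vm-1', 'cache': 'vm-2', 'worker': 'vm-1'}
```

Note that the four containers fit on two VMs instead of four dedicated servers – the utilization gain the column describes, now with the VM rather than the app as the unit of capacity.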
The super relaxed “(Sittin’ On) The Dock of the Bay” is one of the last songs of soul legend Otis Redding; it was recorded just days before he and members of his backing band died in a tragic plane crash. Redding – analogous to Docker – was a singer who originally was especially popular in his own (soul / Linux) scene, but as he became better known his popularity also grew in the more conservative and traditional (pop / IT) market.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.