“You can think of Docker [a container technology] as a shipping container for the online universe, a tool that lets developers neatly package software and move it from machine to machine. Today, when running large online applications such as a Google or a Twitter or a Facebook, developers and businesses often spread software across dozens, hundreds, or even thousands of machines, and Docker provides a more efficient means of doing so.”
Cade Metz on Wired.
Container technologies have existed for over a decade but have recently gained widespread acceptance in Data Centre and Cloud circles, including at organisations such as Google, Microsoft and Amazon. Containers represent a more efficient virtualisation technique than traditional hypervisors because they share resources via a single underlying operating system on the physical server, rather than loading a separate operating system instance for every virtual machine running on that server. This efficiency applies where all application instances can run on the same operating system, making containers well suited to scalable, distributed application architectures. Containers, however, don't allow multiple operating systems to run on a single physical device.
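To make that resource-sharing point concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes the SDK is installed, a Docker daemon is running on a Linux host, and the image name is purely illustrative. Because a container shares the host operating system rather than booting its own, it reports the same kernel release as the host.

```python
# Minimal sketch: a container shares the host kernel rather than booting its own OS.
# Assumes the 'docker' Python SDK is installed and a local Docker daemon is running.
import platform

import docker

client = docker.from_env()

# Kernel release of the host machine.
host_kernel = platform.release()

# Run a throwaway container and ask it for its kernel release.
# 'alpine' is an illustrative image choice; any small Linux image would do.
container_kernel = client.containers.run(
    "alpine:3.19", "uname -r", remove=True
).decode().strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# On a Linux host the two values match; a hypervisor-based VM would instead
# boot and report its own guest kernel.
```

(On macOS or Windows, Docker itself runs inside a Linux VM, so the host value reported by Python would differ.)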
Containers have also been used to make applications and workloads more portable by improving the way applications are packaged, deployed and managed.
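As an illustration of that packaging and portability, the sketch below builds an image from a local directory and pushes it to a registry so it can be pulled and run unchanged on another machine. The directory path, image tag and registry address are assumptions for the example, not details from the original text.

```python
# Sketch: package an application as an image and move it between machines.
# Assumes a Dockerfile exists in ./myapp and that the registry address is reachable;
# both are illustrative placeholders.
import docker

client = docker.from_env()

# Build the image from the application directory (its Dockerfile describes the packaging).
image, build_logs = client.images.build(path="./myapp", tag="registry.example.com/myapp:1.0")

# Push it to a registry so any other Docker host can pull and run the same artefact.
client.images.push("registry.example.com/myapp", tag="1.0")

# On another machine, the identical, self-contained package is retrieved and started:
#   client.images.pull("registry.example.com/myapp", tag="1.0")
#   client.containers.run("registry.example.com/myapp:1.0", detach=True)
```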
Container technologies were originally envisaged for simple, portable applications, but they now represent an attractive opportunity for scalable, virtualised networks and the management systems / OSS that support them. The challenges currently being worked on to support this include composing more complex building blocks, such as clustering multiple containers across different hosts and establishing relationships between applications and the network.
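The networking side can be sketched on a single host with the same SDK: a user-defined network is created and two containers are attached to it so they can reach each other by name. Extending this across hosts is where the clustering work mentioned above comes in (overlay networks, orchestrators such as Kubernetes or Docker Swarm); the image and network names here are illustrative assumptions.

```python
# Sketch: express a relationship between two application containers via a named network.
# Single-host example using a bridge network; multi-host clustering would typically
# use an overlay network managed by an orchestrator. Names are illustrative.
import docker

client = docker.from_env()

# Create a user-defined network that both containers will join.
net = client.networks.create("appnet", driver="bridge")

# Start a backend container attached to the network under a known name.
db = client.containers.run("redis:7", name="db", network="appnet", detach=True)

# Start an application container on the same network; it can reach the backend as 'db'.
app = client.containers.run("alpine:3.19", "sleep 60", name="app",
                            network="appnet", detach=True)
exit_code, output = app.exec_run("ping -c 1 db")
print("app can reach db:", exit_code == 0)

# Clean up the illustrative resources.
for c in (app, db):
    c.remove(force=True)
net.remove()
```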
BTW, you may have noticed that Cisco recently acquired Embrane. Embrane has indicated that its heleos product/platform can bind links and elements together as a single SDN container.