Why Containerization?
A Lesson from History
Let's take a deep dive into the infrastructure of the past and explore the application deployment environments used at that time: huge data centers with hundreds of servers (HP, Sun, IBM, Dell, Cisco, etc.), each hosting just a single application and requiring its own external (cooling, power supply, access, etc.) and internal (RAM, CPU, disk, etc.) resources.
Typically: One Application on One Physical Server
This approach leads to many problems: slow deployments, difficult scaling, complicated migrations and, of course, unused system resources.
Then came the era of the hypervisor, and a technology called “Hypervisor-Based Virtualization” changed data center management completely.
Typically: One Physical Server Running Multiple Applications
Virtualization solved our problems of migration, scaling and resource pooling, but every application still needed a dedicated operating system and dedicated resources like CPU and RAM, and the big question of application portability remained.
Now comes the technology called “containers”. In our experience this technology existed in the past as well, but with many limitations (e.g., Solaris Zones). Container-based virtualization uses the kernel of the host’s operating system to run multiple guest instances without installing a guest operating system.
Each guest instance is called a “container” and has its own filesystem, processes, memory, devices and network ports.
“A container looks like a VM, but it is actually not a VM.”
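To make the kernel-sharing idea concrete, here is a minimal sketch in Go (Linux-only, run as root) of the primitive that containers are built on: before starting a command, the program asks the kernel for new UTS, PID and mount namespaces. This illustrates the namespace mechanism in general, not how Docker itself is implemented.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// run launches the requested command inside new UTS (hostname), PID and
// mount namespaces, so it sees its own hostname and its own process tree
// while still sharing the host's kernel. Requires root (CAP_SYS_ADMIN).
func run() {
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}

func main() {
	if len(os.Args) < 3 || os.Args[1] != "run" {
		fmt.Fprintln(os.Stderr, "usage: go run main.go run <cmd> [args...]")
		os.Exit(1)
	}
	run()
}
```

Running `go run main.go run /bin/bash` on a Linux host starts a shell that sees itself as PID 1 with its own hostname, yet no guest operating system was installed, which is exactly why a container looks like a VM without being one.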
In our next posts we will explore Docker: its terminology, architecture, images, the container process, actual implementation and much more.