“There has never been an unexpectedly short debugging period in the history of computers.” – Steven Levy
Computer programming and the art of writing algorithms, from COBOL to modern AI/ML/DL, have advanced enormously in recent decades, yet developers, operators, and testing teams still wrestle with the tricky works-on-my-machine phenomenon. When it comes to continuous delivery, works-on-my-machine is one of the most common and insidious hurdles. The underlying cause can be anything from a simple version mismatch in system libraries or compilers up to differences at the OS level. The phenomenon has become so common in the IT landscape that it even has its own badge.

Thanks to virtualization and containerization, this problem has been all but eliminated. That begets a few questions: Will containerization replace virtualization? Are they counterparts? If not, how do they go hand in hand? Let’s dive in to see how they serve different purposes.
Virtualization
Virtualization is an innovative technology that helps create useful IT services using resources traditionally bound to hardware. Its inception can be traced back to IBM in the 1960s, and it gained mainstream traction over the last two decades. Virtualization has transformed the way people compute.

As server processing power and capacity increased, bare-metal applications could not exploit the new abundance of resources, and thus virtual machines (VMs) were born. Virtualization is achieved by running software on top of physical servers to emulate a hardware system. In short, a hypervisor, or virtual machine monitor, is the software, firmware, or hardware that creates and runs VMs.
Hypervisors can be categorized into two broad types:
- Type 1 hypervisors (also called bare metal or native), which run directly on the host hardware, e.g. VMware ESXi, Microsoft Hyper-V, and Xen.
- Type 2 hypervisors (also known as hosted hypervisors), which run on top of a host operating system, e.g. Oracle VirtualBox and VMware Workstation.
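As a concrete illustration, here is a minimal sketch of launching a guest with QEMU backed by KVM, the hypervisor module built into the Linux kernel (usually classed as Type 1). The disk image name, ISO path, and resource sizes are illustrative assumptions, not prescriptions:

```
# Create a 20 GB copy-on-write virtual disk for the guest
qemu-img create -f qcow2 app-vm.qcow2 20G

# Boot the guest with 2 vCPUs and 2 GB RAM from an installer ISO;
# -enable-kvm delegates CPU virtualization to the in-kernel KVM hypervisor
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -hda app-vm.qcow2 -cdrom ubuntu-22.04.iso -boot d
```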
Containerization
Containerization is a methodology for encapsulating and isolating applications along with their required environment. In simpler terms, containers are a next generation of virtualization that packages an application’s code, configuration, and required dependencies into one portable unit.
Containers can handle both stateless and stateful applications, although containers themselves are ephemeral, so session state must be persisted outside the container (for example, in volumes or external data stores). Containers offer out-of-the-box scalability and high availability: multiple instances of a container image can run simultaneously, and a failed instance can be replaced by spinning up a new one in sub-seconds.
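To make the packaging idea concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python service; the base image, file names, and port are illustrative assumptions:

```
# Illustrative Dockerfile for a hypothetical Python service
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached across rebuilds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and configuration into the image
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Built once, the resulting image carries code, configuration, and dependencies as a single unit that runs identically wherever a container engine is available.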
Traditional Deployment
Before virtualization and containerization, deploying an application meant first expanding hardware capacity (or making do with an existing server), then installing the OS, then installing the necessary tools and dependencies such as compilers and interpreters, and only then running the software.

This type of deployment posed umpteen challenges.
- Expanding hardware capacity takes a long time, which in turn increases capital and operating expenditures.
- Resources are underutilized, since servers cannot easily be shared or scaled to demand.
- The inability to isolate faults and environments allows the works-on-my-machine phenomenon to thrive.
To resolve the above limitations, organizations began virtualizing physical machines.
Virtualized Deployment
To deploy any software, virtual machines (VMs) first have to be provisioned by virtualizing the underlying hardware; virtualization allows multiple VMs to run on a single server. An OS must be installed in each VM, followed by the necessary tools and dependencies such as compilers and interpreters. Finally, the application can be run. A minimal sketch of those in-guest steps follows.
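Assuming a Debian/Ubuntu guest and a hypothetical Python application (app.py and requirements.txt are illustrative names), the per-VM setup might look like this, repeated for every VM:

```
# Inside the freshly provisioned VM: install the runtime and tooling
sudo apt-get update
sudo apt-get install -y python3 python3-pip

# Install the application's library dependencies, then run it
pip3 install -r requirements.txt
python3 app.py
```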

Limitations
- Virtualization can isolate faults at the VM level, but it cannot isolate faults at the application level within a VM.
- Virtualization allows better utilization of resources, but in distributed computing, if one application in a VM is overtaxed, it can affect the other applications running in the same VM due to resource contention.
- For distributed computing, quick scalability is still a challenge, since booting a new VM takes minutes.
- Virtualization is heavyweight: each VM includes a full copy of an operating system plus the application and its required binaries and libraries, which takes up many gigabytes.
Containerized Deployment
To deploy an application as a container, only the OS and a container engine or runtime such as Docker, containerd, or CRI-O need to be installed. The application can then be spun up as a container, typically managed by an orchestrator such as Kubernetes, as sketched below.
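Continuing with the hypothetical myapp image built from the Dockerfile above, the workflow might look like this (image name, port, and replica count are illustrative):

```
# Build the image and run it locally with Docker
docker build -t myapp:1.0 .
docker run -d -p 8080:8080 --name myapp myapp:1.0

# Or hand the same image to Kubernetes: run three replicas and expose them
kubectl create deployment myapp --image=myapp:1.0
kubectl scale deployment myapp --replicas=3
kubectl expose deployment myapp --port=80 --target-port=8080
```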

Virtualization vs Containerization
- Containerization brings the works-on-my-machine phenomenon down to nearly zero: containers package an application’s code together with its dependencies, which makes them largely self-contained. This allows the same container image to travel from Dev to QA to Prod without environmental disparities (see the sketch after this list).
- Compared to VMs, containers are lightweight and require fewer resources, since they are isolated at the process level and do not need a hypervisor or a full guest OS.
- VMs are full-blown by nature, which means they take at least a few minutes to boot and be ready. Containers, by contrast, are fast: starting one takes milliseconds to seconds.
- Container-based distributed cloud services have become the de facto standard for modern microservices-based applications.
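As referenced above, promoting the identical image across environments is a matter of tagging and pushing it to a registry; the registry host and tag below are illustrative assumptions:

```
# Tag the image for a shared registry and push it once
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# QA and Prod pull and run the byte-for-byte identical image
docker pull registry.example.com/myapp:1.0
docker run -d -p 8080:8080 registry.example.com/myapp:1.0
```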

In general, virtualization and containerization are complementary rather than interchangeable. Virtualization solves infrastructure problems such as server processing and capacity planning, whereas containerization solves application problems by improving DevOps, enabling microservices, increasing portability, and further improving resource utilization. Big enterprises therefore tend to adopt a hybrid approach, running containers for applications alongside, and often on top of, traditional VMs.