Have you ever been in a situation where you built an amazing application that works perfectly on your machine, but as soon as you share it with a colleague, it breaks?
Before jumping straight into what docker is, let’s explore containers.
What are Containers?
Containers are a standard unit of software that packages code together with its dependencies, so the application runs smoothly as it moves from one computing environment to another. By isolating the application from its environment, a container ensures the application runs uniformly everywhere, regardless of the underlying infrastructure.
Basic Commands
$ docker run (creates and starts a container from an image)
$ docker ps (lists all running containers)
$ docker stop (stops a running container)
$ docker rm (removes a stopped container)
$ docker images (lists all available images on your local system)
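A quick session tying these commands together — a sketch that assumes Docker is installed and the daemon is running. hello-world is a tiny test image on Docker Hub, and <container-id> stands in for the ID printed by docker ps -a:

```
# Create and start a container from the hello-world image
$ docker run hello-world

# List all containers, including stopped ones (-a)
$ docker ps -a

# Stop a running container, then remove it (substitute a real ID)
$ docker stop <container-id>
$ docker rm <container-id>

# List the images now stored locally
$ docker images
```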
What is Docker?
Docker is a software platform that allows you to build, test, and deploy applications quickly. A programmer writes a Dockerfile providing the necessary configuration, dependencies, and setup for the application. From the Dockerfile, the developer builds a Docker image, which contains everything needed to run the application in any environment. The developer then runs this image, creating a container.
As mentioned earlier, the container is an isolated instance of the environment in which the application executes exactly as defined in the Dockerfile. Put simply, a container is a running instance of a Docker image.
To download and install Docker, visit https://docs.docker.com
Docker Architecture
Docker uses a client-server architecture in which the Docker client and the Docker daemon work together. The daemon is the core service that handles all the heavy lifting for Docker, making it possible to build, ship, and run applications in containers.
Docker’s architecture has these core components:
Docker Client
Docker Daemon
Docker Registries
Docker Client
The Docker client is the primary way most users interact with Docker. It provides a command-line interface through which users issue commands; the client then sends these commands to the daemon via REST APIs (the Docker Engine API). The Docker client can communicate with more than one daemon.
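You can see this client-server split directly: docker version reports a Client section (the CLI you type into) and a Server section (the daemon answering its API requests). The output below is abbreviated; the exact fields vary by Docker version:

```
$ docker version
Client:
 Version:     ...
 API version: ...
Server: Docker Engine
 Engine:
  Version:    ...
```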
Docker Daemon
The Docker daemon (dockerd) is the brain behind Docker. It runs in the background on the host machine, listens for API requests made by the Docker client, and talks to the underlying operating system to take the necessary actions, such as managing Docker containers, images, networks, and volumes.
Docker Registries
Registries are the storage system for Docker images, where different versions of an individual image are stored in Docker repositories. Registries come in two kinds - public and private - with Docker Hub being the default public registry. By default, when you pull an image, Docker searches the public registry and saves the image locally on the DOCKER_HOST. You can pull images from these registries, and with sufficient access (where needed) users can also push new images to a registry.
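A sketch of the pull-tag-push flow (myuser is a hypothetical Docker Hub account; docker push requires you to be logged in with push access to that repository):

```
# Pull an official image from Docker Hub (the default registry)
$ docker pull nginx:1.25

# Retag it under your own repository name
$ docker tag nginx:1.25 myuser/nginx:1.25

# Push the retagged image to the registry
$ docker push myuser/nginx:1.25
```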
Dockerfile
A Dockerfile contains the instructions for building a Docker image. These images can be stored on the internet using Docker Hub and pulled from there when needed by teams to create and run containers in any environment. As a result, the container ships with all the necessary dependencies, libraries, and so on.
Example:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
(Note: the LTS version is considered stable and receives long-term support, making it a good choice for production environments)
Explanation:
FROM node:18: Used Node.js LTS version as the base.
WORKDIR /app: Set the working directory.
COPY package*.json ./: Copy the dependency files first so the install step can be cached.
RUN npm install --production: Install production dependencies.
COPY . .: Copy the app code into the container.
EXPOSE 3000: Expose port 3000.
CMD ["npm", "start"]: Start the app using npm start.
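To turn the Dockerfile above into a running container, you would typically build and run it like this (my-node-app is simply a tag we choose for the image):

```
# Build an image from the Dockerfile in the current directory
$ docker build -t my-node-app .

# Run it, mapping port 3000 in the container to port 3000 on the host
$ docker run -p 3000:3000 my-node-app
```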
For updated documentation refer: https://docs.docker.com/reference/dockerfile/
Docker Image
A Docker image is a snapshot of an environment that captures the application together with all of its dependencies and libraries, so it can be used in other computing environments. It is a standalone, executable package used to build a container. A Docker image can be shared and deployed in multiple locations at once.
Basic Commands
$ docker images (lists all available images on your local system)
$ docker pull (downloads an image from a remote registry)
$ docker rmi (removes one or more images from your local system)
$ docker tag (tags an image with a new name)
$ docker build (builds an image from a Dockerfile)
For more info refer: https://docs.docker.com/reference/cli/docker/image/
How Docker works?
Docker manages containers, ensuring they remain isolated from each other while efficiently sharing the host OS resources. Docker uses a layered filesystem, where each change or addition (like installing software) is stored as a separate layer. This makes containers fast, as they can share common layers, reducing redundancy and speeding up startup times.
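You can inspect these layers yourself with docker history, which lists each layer of an image along with the instruction that created it (assuming you have already pulled the alpine image used later in this article):

```
# Show the layers that make up the alpine image
$ docker history alpine
```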
Containerization vs Virtualization
Containerization and Virtualization are both technologies that allow you to run multiple applications on the same physical hardware, but they do so in different ways. Virtualization involves creating multiple virtual machines (VMs) on a single server, with each VM running its own operating system on top of a hypervisor, which adds an extra layer and consumes more resources. In contrast, containerization packages applications along with only their essential libraries and dependencies, running on the host OS without needing a full OS for each instance making containers fast.
Why use Docker?
Some of the most important traits that made Docker so popular are:
Portability: Docker containers ensure that applications run consistently across any environment, eliminating the "works on my machine" issue.
Scalability: Docker allows applications to be scaled up or down, typically managed by orchestration tools like Kubernetes or Docker Swarm.
Reduced Risk of Downtime: Docker reduces conflicts between dependencies, promoting more reliable and consistent application performance.
OS and Platform Independent: Docker containers are platform-agnostic, enabling applications to run on any system supporting Docker.
Collaboration: Docker Hub enables teams to share containerized applications and images, similar to code collaboration with Git.
Demo
Pulling Alpine Image from Docker Hub
$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
da9db072f522: Pull complete
Digest: sha256:1e42bbe2508154c9126d48c2b8a75420c3544343bf86fd041fb7527e017a4b4a
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
Running the alpine container interactively using -it; the / # prompt shows that we are now in the alpine shell environment
$ docker run -it alpine sh
/ #
Let us write a simple shell script inside our container and redirect it into demo.sh
/ # echo -e '#!/bin/sh\n\n# shell script demo\n\necho "Getting started with Docker"' > demo.sh
Making the script executable and executing it
/ # chmod +x demo.sh
/ # ./demo.sh
Getting started with Docker
The script outputs Getting started with Docker.
Next Steps
This was all about Docker, a containerization platform. Up next, we'll dive into Kubernetes. Just as a large ship carries and manages many containers, Kubernetes is a tool for orchestrating containers, and it is the most popular container orchestration engine (COE).