18. Deploying Applications on Kubernetes: Understanding Deployments and Services

Fundamentals of Deployments and Services

Introduction

In our last article, we learned how to set up GKE clusters in two different modes. Having a Kubernetes cluster up and running is a major milestone, but it is like having a well-prepared stage with no performers. Deployments and Services are what let us put on the show. These core Kubernetes concepts are essential for running and managing our containerized applications on GKE.

In this article, we'll take our first steps into the world of application deployment. We'll cover:

  • Deployments, and how they manage the lifecycle of your application's pods.

  • Services, and how they provide reliable access to your application.

Deployments: Managing Your Pods

Think of your containerized application as needing multiple "performers" (pods) to function effectively. Deployments are like the stage manager in Kubernetes:

  • Replication: Deployments ensure that your desired number of pods is always running. If one fails, the Deployment automatically creates a replacement.

  • Updates: Need to roll out a new version of your application? Deployments orchestrate a seamless transition, updating pods in a controlled manner to minimize disruption.

  • Self-Healing: Deployments constantly monitor your pods. If issues arise, they'll attempt to restart them, ensuring your application remains available.

Walkthrough: Our First Deployment

Let's see this in action with a simplified "hello world" web application in seven simple steps. These steps cover creating the Hello World page, a Dockerfile, the Docker image, and a running Docker container, and then accessing the app.

  1. Create a new directory: In your console, create a new directory

     mkdir myapp
     cd myapp
    
  2. Create the web app

     echo "Hello, world!" > index.html
    
  3. Create the Dockerfile

     touch Dockerfile
    
  4. Open the Dockerfile and add the instructions

     FROM nginx
     COPY index.html /usr/share/nginx/html
    

    This Dockerfile defines a new Docker image that uses the official nginx image as its base and then copies the index.html file to the location in the image from which nginx serves static content.

  5. Start Docker and build the Docker image from the Dockerfile

     docker build -t myapp .
    

    This builds a new Docker image with the tag "myapp" using the Dockerfile in the current directory.

  6. Run a Docker container from the image

     docker run -p 8080:80 myapp
    

    This runs the myapp container and maps port 8080 on your local machine to port 80 in the container.

  7. Access the app: visit localhost:8080 (or [Public_IP]:8080 if you're working on a remote machine)
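The steps above give us a container image that runs locally. To run that image on GKE under a Deployment, and to give its pods the app: hello-world label that the Service in the next section selects, a manifest along these lines could be used. This is a sketch: the names and the image path are illustrative, and you would first push the myapp image to a registry your cluster can pull from, such as Artifact Registry.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3                  # the Deployment keeps three pods running
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world       # the label the Service selector will match
    spec:
      containers:
      - name: myapp
        # Illustrative registry path; replace with wherever you pushed myapp
        image: us-docker.pkg.dev/PROJECT_ID/myrepo/myapp:v1
        ports:
        - containerPort: 80    # nginx listens on port 80
```

Applying this with kubectl apply -f deployment.yaml gives you the replication, rolling updates, and self-healing described earlier, with no extra configuration.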

Services: Reaching Your Application

With our deployment in place, we have pods running our "hello world" web application. However, there's a catch:

  • Pods are Ephemeral: Pods can be rescheduled or relocated within the cluster with potentially different IP addresses. Relying on a single pod's IP for access leads to instability.

  • Enter the Service: Think of a Service like the theater's box office and ushers. It provides a consistent way to interact with our application, regardless of individual pod changes. Key things Services do:

    • Stable Endpoint: A Service gets its own IP address and DNS name within the cluster. This address remains reliable even if pods behind the scenes get replaced.

    • Load Balancing: Services distribute incoming traffic across all the healthy pods backing them.

Walkthrough: Exposing Our Deployment

Let's create a basic Service to make our "hello world" application accessible. We'll use a LoadBalancer type for simplicity in this demo:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  type: LoadBalancer 
  selector:
    app: hello-world
  ports:
  - port: 80 
    targetPort: 80

Explanation:

  • selector: Tells the Service to target pods labeled 'app: hello-world' (matching our Deployment).

  • type: LoadBalancer: Instructs GKE to provision a cloud load balancer (giving us an external IP address for access).

  • ports: Defines the port mapping between the Service and the pods: port is what clients connect to on the Service, and targetPort is the container port traffic is forwarded to.
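In our manifest, port and targetPort happen to be the same. As a sketch with hypothetical values: if the container listened on 8080 instead, the Service could still expose the standard port 80 to clients.

```yaml
# Illustrative fragment: Service port 80 forwards to container port 8080
ports:
- port: 80          # port clients use to reach the Service
  targetPort: 8080  # port the container actually listens on
```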

Testing It Out

After applying this manifest with kubectl apply -f service-file.yaml, it may take a few minutes for the external IP to be assigned. Once it's available, visiting that IP in a web browser should display our "hello world" message!
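As a sketch of checking this from the command line (assuming kubectl is pointed at your GKE cluster, and using a placeholder for the address GKE assigns):

```shell
# Watch the Service until EXTERNAL-IP changes from <pending> to a real address
kubectl get service hello-world-service --watch

# Once assigned, fetch the page (replace EXTERNAL_IP with the address shown)
curl http://EXTERNAL_IP
```

If everything is wired up, curl returns the "Hello, world!" page served by nginx.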