Deploy Application on Kubernetes Using Jenkins Multicloud- GKE & Route53

Rushabh Mahale
8 min read · Mar 31, 2023


Organizations can avoid being reliant on a single cloud vendor by adopting a multi-cloud strategy. It also makes it easier for customers to negotiate better rates and service-level agreements with providers. Each cloud provider's data centers are spread across different regions, so by splitting workloads among multiple providers, businesses can cut latency and improve the user experience for clients in different geographies.

Figure: Architecture diagram for the implementation of the proposed solution

In this architecture, we've shown how quickly we can deploy our application to multiple clouds using Jenkins. Today's businesses are also switching from monolithic to microservice architectures to improve their operations.

High-level Steps

  1. AWS and GCP infra by Aniket Kumavat link.
  2. EKS creation and Jenkins pipeline setup in AWS by Bhavesh Dhande (here)
  3. Jenkins pipeline setup in GCP by Siddhesh Patil (here)
  4. GKE creation and Route traffic using Route53 GKE and EKS by Rushabh Mahale

Note: This article covers step 4. Beginning there, I'll create a GKE cluster and map Route 53 to both EKS and GKE.

Prerequisites -

  1. VPCs in GCP and AWS need to be created link.
  2. Jenkins with ECR and GCR setup.

What is a GKE Cluster?

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped together to form a cluster.

  • Here, a set of Compute Engine instances combines to form a cluster, and each VM is called a node.
  • GKE is a Google-managed service.
  • It provides monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more.

Steps To create GKE Cluster

What is Autopilot?

GKE Autopilot is a mode of operation in GKE in which Google manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. Autopilot clusters are optimized to run most production workloads and provision compute resources based on your Kubernetes manifests. The streamlined configuration follows GKE best practices and recommendations for cluster and workload setup, scalability, and security.

What is a Standard Cluster?

A Standard cluster is a regular GKE cluster, which consists of components like:

  • Kubernetes Control Plane
    - Kube-apiserver
    - Kube-scheduler
    - kube-controller-manager
    - etcd
    - cloud-controller-manager
  • Kubernetes Worker Nodes
    - Nodes
    - Pods
    - Container Runtime Engine
    - kubelet
    - kube-proxy
    - Container Networking

Step 1.1 — Create a Standard cluster

Step 1.2 — Enter the cluster information, such as the name. Here I am creating a zonal cluster; select a regional cluster for high availability.

Step 1.3 — Enter the node details, such as the name, and set the node count to 1 to create a single-node cluster.

Step 1.4 — In the node section, select the machine configuration. Boot disk type: Balanced persistent disk, 20 GiB. Boot disk encryption: Google-managed encryption key.

Step 1.5 — In the cluster's Networking section, select a public cluster. As a best practice, make your cluster private.

Step 1.6 — In the cluster's Security section, enable Workload Identity.

Workload Identity

Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services.
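As a sketch of how Workload Identity is used once the cluster is up: you annotate a Kubernetes ServiceAccount with the Google service account it should impersonate. Both account names below are placeholders, not values from this setup.

```yaml
# Hypothetical ServiceAccount bound to a Google service account via
# Workload Identity. "gke-app-ksa" and the GSA email are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gke-app-ksa
  annotations:
    iam.gke.io/gcp-service-account: gke-app-gsa@my-project.iam.gserviceaccount.com
```

Pods that run under this ServiceAccount can then call Google Cloud APIs as the bound service account, without node-level keys.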

Create the cluster. Cluster creation will take some time.
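For reference, console steps 1.1–1.6 can be sketched as a single gcloud command. The cluster name, zone, and project ID below are placeholders, not values from this setup; run the printed command yourself once gcloud is authenticated.

```shell
# Sketch of console steps 1.1-1.6 as one gcloud invocation (placeholder names).
CLUSTER_NAME="gke-multicloud"     # hypothetical cluster name
ZONE="asia-south1-a"              # zonal cluster; pick your own zone
PROJECT_ID="my-gcp-project"       # replace with your project ID

# Build the command and save it for review; it is not executed here.
CMD="gcloud container clusters create $CLUSTER_NAME \
  --zone $ZONE --project $PROJECT_ID \
  --num-nodes 1 \
  --disk-type pd-balanced --disk-size 20 \
  --workload-pool ${PROJECT_ID}.svc.id.goog"
echo "$CMD" | tee create-cluster.cmd
```

The `--workload-pool` flag enables Workload Identity at creation time, matching step 1.6.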

Deploy Application on GKE

Steps to deploy the application on GKE

Step 1.7 — Go to Connect, and you will see the command below:

gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE> --project <PROJECT_ID>

Step 1.8 — Now let's create our Deployment with 2 replicas.

What is a Deployment?

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets or to remove existing Deployments and adopt all their resources with new Deployments.

To know more about Deployment refer to this link.

What is a ReplicaSet?

A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should maintain, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.

To know more about ReplicaSets, refer to this link.
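To make those three fields concrete, a minimal ReplicaSet might look like the sketch below. In practice the Deployment in the next step generates one for you, so you rarely write this by hand; the resource name is a placeholder.

```yaml
# Minimal ReplicaSet sketch showing selector, replicas, and pod template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: gke-app-rs        # placeholder name; Deployments generate this
spec:
  replicas: 2             # how many Pods to maintain
  selector:
    matchLabels:
      app: gke-app        # which Pods this ReplicaSet may acquire
  template:               # blueprint used when new Pods are needed
    metadata:
      labels:
        app: gke-app
    spec:
      containers:
      - name: gke-app
        image: asia.gcr.io/<PROJECT_ID>/gcp-app:latest
```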

Create a YAML file, replacing the project ID in the image field.

vi deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gke-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gke-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: gke-app
    spec:
      containers:
      - name: gke-app
        image: asia.gcr.io/<PROJECT_ID>/gcp-app:latest
        ports:
        - name: http
          containerPort: 80

kubectl apply -f deployment.yaml
kubectl get deployment

Step 1.9 Create Service

What is a Service?

A Service in Kubernetes exposes your application on a port, either within the cluster or to the outside world, so that users can access it.

Type Of Service in Kubernetes

  • Cluster IP: This is the default service in Kubernetes. This gives us a service inside the cluster that other applications inside the cluster can access. There is no external access.
  • Node Port: External traffic directly to our services. Opens a specific port on all nodes so that any traffic that is sent to this port is forwarded to the service.
  • Load Balancer: Becomes accessible externally through cloud provider load balancer.

In my case, I am using NodePort.

vi service.yaml

apiVersion: v1
kind: Service
metadata:
  name: gke-svc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: gke-app
  type: NodePort

Step 1.10 — Create Ingress

Ingress

Ingress is not a type of Service; it acts as a smart router. It spins up an HTTP(S) load balancer for us and lets us do both path-based and subdomain-based routing to backend services. The ingress controller is part of the Kubernetes cluster and typically runs as pods. In production, we use ingress to expose applications to the internet; it is an object that allows access to Kubernetes Services from outside the cluster.

There are two components of ingress:

  • Ingress Resource: Contains the rules to route the traffic.
  • Ingress Controller: Routes the traffic.

Creating the ingress will take some time. If you cannot see the website, go to the health check in the load balancer section, check your path, and update it from “/” to “/GCP”.

vi ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-ingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/GCP"
        backend:
          service:
            name: gke-svc
            port:
              number: 80

You can also go to the Load balancer section and view the ingress LB; this is the Google-managed ingress.

Now copy the IP, paste it into the browser, and you should see your Flask application being served on the /GCP path.
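If the health check mentioned earlier keeps failing on “/”, one GKE-specific way to pin it to the application's path is a BackendConfig attached to the Service. This is a sketch; the resource name is a placeholder.

```yaml
# Hypothetical BackendConfig that points the LB health check at /GCP.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: gke-app-hc        # placeholder name
spec:
  healthCheck:
    requestPath: /GCP     # path probed by the Google-managed health check
    port: 80
```

It is attached by annotating the Service with `cloud.google.com/backend-config: '{"default": "gke-app-hc"}'`.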

Route traffic using Route53 GKE and EKS

What is Route 53?

Route 53 is a DNS service that connects Internet traffic to the appropriate servers hosting the requested web application. Route 53 takes its name from port 53, the standard DNS port. Unlike traditional DNS management services, Route 53, together with other AWS services, enables scalable, flexible, secure, and manageable traffic routing.

You can register your domain and route DNS traffic among VMs and load balancers, managing domain names, hosted zones, and records.

Steps to map GKE and EKS DNS to Route53

Step 2.1 — Go to the AWS Application Load Balancer and copy its name. Then go to the Network Interfaces section and paste the name; you will see two network interfaces. Copy the IP of the first network interface.

Step 2.2 — Copy the IP and paste it into Browser

Note: Bhavesh Dhande has configured the ALB Load balancer with ingress.

Step 2.3 — Go to GKE Side and check Load Balancer

Copy the IP and paste it into Browser

Step 2.4 — Go to Freenom, or wherever your domain is registered (e.g. GoDaddy, BigRock, Hostinger).

Step 2.5 — Now go to the Route 53 service in AWS, register your domain, and create a hosted zone.

Step 2.6 — Now copy the name servers, go to the Freenom manage-domain section, and copy all the ns-xxx entries as shown in the screenshot below.

Use custom nameservers and enter the Route 53 name servers.

Step 2.7 — Create a record

  • Record name — www.learningcloud.tk
  • Record type — A — Routes traffic to an IPv4 address and some AWS resources
  • Value — Add the EKS Application Load Balancer IP and the GKE load balancer IP
  • Create record
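The record above can also be sketched with the AWS CLI. The hosted-zone ID and both IPs below are placeholders (not values from this setup), and the actual aws call is left commented out since it needs credentials.

```shell
# Build a Route 53 change batch with one A record holding both load
# balancer IPs (placeholder values), so DNS round-robins across clouds.
EKS_ALB_IP="203.0.113.10"   # placeholder: EKS ALB network-interface IP
GKE_LB_IP="198.51.100.20"   # placeholder: GKE load balancer IP

cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.learningcloud.tk",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [
        {"Value": "$EKS_ALB_IP"},
        {"Value": "$GKE_LB_IP"}
      ]
    }
  }]
}
EOF

# Apply it (requires AWS credentials and your hosted-zone ID):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id <HOSTED_ZONE_ID> --change-batch file://change-batch.json
```

With two values in one A record, resolvers receive both IPs and clients rotate between them, which is the round-robin behavior observed in step 2.8.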

Step 2.8 — Go to the browser, enter your domain www.learningcloud.tk, and refresh; you will see the two websites alternate, acting as a load balancer.

Here we go: the Flask application runs on both GKE and EKS across clouds, with Route 53 redirecting traffic between them.

Conclusion:

Managing DNS resolution for EKS and GKE clusters with Amazon Route 53 offers a highly available and flexible solution for delivering traffic to your Kubernetes apps. Both EKS and GKE support Route 53 integration, enabling you to control your cluster’s DNS using standard Route 53 features like alias entries and health checks.

In case of any questions regarding this article, please feel free to comment in the comments section or contact me via LinkedIn.

I want to thank my team at Guysinthecloud for all of their help.

Thank You
