Set Up a Jenkins Master on AWS to Dynamically Provision Agents on GKE
Introduction
In the era of multi-cloud adoption, Jenkins stands as a linchpin, orchestrating seamless CI/CD across Amazon Web Services (AWS) and Google Kubernetes Engine (GKE). Dynamically provisioning build agents from AWS to GKE not only optimizes resources but aligns with a robust multi-cloud strategy. This approach offers flexibility, redundancy, and the ability to choose the best services from different providers. Jenkins, in facilitating multi-cloud synergy, ensures agile and resilient software development, providing businesses a strategic edge by avoiding vendor lock-in and optimizing costs while enhancing performance.
Architecture Diagram
Prerequisites -
- Set up a VPN connection between AWS and GCP Link.
- Install Jenkins on an AWS EC2 instance Link.
- Create a private GKE cluster on the GCP side Link.
- Create a GCP VM to connect to the GKE cluster privately Link.
Steps to be followed -
Step 1 — Install the required plugins
- Kubernetes plugin
- Docker API
- Docker Commons Plugin
- Google OAuth Credentials
Step 2 — There are two methods to authenticate your GKE cluster with the Jenkins master
- Method 1: Download the kubeconfig file from ~/.kube/config and add it to Jenkins credentials. This is not a best practice, so we will follow the second method.
- Method 2: Create a service account in Kubernetes, bind the respective pod roles to it, then create a secret for that service account and pass it to Jenkins credentials. This is the best practice followed in the industry.
Step 3 — SSH into the GCP VM and create the files mentioned below; we will also see what each file does.
First of all, we will create a service account in the Kubernetes cluster, i.e., a Kubernetes Service Account (KSA).
- ksa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: test
automountServiceAccountToken: true
You should see in the output above that the jenkins service account has been created.
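The commands below are a minimal sketch of applying this manifest, assuming the test namespace does not exist yet:

```shell
# Create the namespace used throughout this guide, then apply the manifest
kubectl create namespace test
kubectl apply -f ksa.yaml

# Verify the service account exists
kubectl get serviceaccount jenkins -n test
```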
Now we will create roles
- Role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: full-pod-access
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
kubectl apply -f role.yaml
kubectl get role -n test
kubectl describe role full-pod-access -n test
You should see output like the one shown below.
- Rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: full-pod-access-binding
  namespace: test
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: test
roleRef:
  kind: Role
  name: full-pod-access
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rolebinding.yaml
kubectl get rolebinding -n test
kubectl describe rolebinding full-pod-access-binding -n test
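As an optional sanity check, you can impersonate the service account and ask the API server whether the binding took effect; a small sketch using kubectl auth can-i:

```shell
# Should print "yes" if the Role and RoleBinding were applied correctly
kubectl auth can-i create pods -n test \
  --as=system:serviceaccount:test:jenkins
```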
- secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secret
  namespace: test
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token
kubectl apply -f secrets.yaml
kubectl get secrets -n test
kubectl describe secrets -n test
Step 4 — Copy the token from the secret and paste it into Jenkins credentials. To do that, go to the Jenkins console → Manage Jenkins → Security → Configure Credentials → Add Credentials.
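To get the token value to paste into Jenkins (as a "Secret text" credential), you can decode it from the secret created in step 3. This is a sketch assuming the jenkins-secret name used above:

```shell
# Print the decoded service-account token; paste this value into the
# Jenkins credential
kubectl get secret jenkins-secret -n test \
  -o jsonpath='{.data.token}' | base64 --decode
```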
Step 5 — Go to the GKE cluster and add the Jenkins master IP to the control plane authorized networks.
As shown below; you can also refer to this document link.
Step 6 — Now let's go to the Jenkins console: Manage Jenkins → System Configuration → Cloud
Note — You will see this option only after installing the Kubernetes plugin.
Click New Cloud, select Kubernetes, and create a node.
Now, in Kubernetes Cloud details, enter the internal IP of the cluster as shown in the screenshot below.
- Kubernetes URL — https://172.23.1.2:443
- Disable https certificate check
- Kubernetes namespace — test
- Credentials — your Credentials that you have configured before in step 4
- WebSocket — enable
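Before testing the connection in Jenkins, you can sanity-check reachability from the Jenkins master itself; a small sketch, where 172.23.1.2:443 is the example control-plane endpoint used above:

```shell
# From the Jenkins master EC2 instance: any HTTP status code (even 401/403)
# means the control plane is reachable; a timeout indicates a network problem
curl -k -m 10 -o /dev/null -w '%{http_code}\n' https://172.23.1.2:443/version
```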
At this point you may face an issue; let's look at it and resolve it.
Step 7 — Now we have to whitelist the AWS VPC IP range in the firewall rules pre-created by GKE. If you create a public GKE cluster it is easy to connect to, but here my cluster is fully private, so we must also add the control plane IP range, which is managed by the GCP VPC.
Step 8 — To connect the private Jenkins with the private GKE cluster, go to the GCP VPN tunnels that have been established between AWS and GCP.
Inside the tunnel, go to the Cloud Router associated with it, edit the Cloud Router, create a custom route, and enable the option to advertise all subnets visible to the Cloud Router.
You will see all the IPs associated with the subnets; after that, scroll down and add custom ranges.
On the AWS side, add the control plane range to the security group. In my case the control plane range is 172.23.1.0/28; allow HTTPS traffic in the inbound rule as shown in the screenshot below.
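The same inbound rule can be added from the AWS CLI; a hedged sketch, where the security-group ID is a placeholder you must replace with the one attached to your Jenkins master instance:

```shell
# Hypothetical security-group ID; replace with the real one from your account
SG_ID=sg-0123456789abcdef0

# Allow HTTPS from the GKE control-plane range used in this setup
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 443 \
  --cidr 172.23.1.0/28
```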
OK, now let's check the connection and try to connect again; this time it should work.
If you face the issue shown in the screenshot below while testing the connection, re-create the secret and add it to the credentials again.
To learn more about the connectivity part in depth, follow this blog: Configuring GCP Partner Interconnect: A Comprehensive Guide Link.
Step 9 Let’s add pod labels
- Key: my-jenkins-agent
- Value: my-jenkins-agent
In the pod template, add a name and the namespace (the namespace here is test). In the container template, add a Docker image; I have a Docker image with me, but make sure to change the image depending on your application. I am using this image: gaurav0408/jenkinseks:1. Here is the sample Dockerfile:
# Use the official Jenkins JNLP agent image as the base
FROM jenkins/inbound-agent:latest

# Switch to the root user for installation
USER root

# Install necessary tools (adjust as needed; git and curl are shown as examples)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git curl && \
    rm -rf /var/lib/apt/lists/*

# Switch back to the Jenkins user
USER jenkins
Save the Configuration
Step 10 — Create a pipeline job. I am simply running a "Hello World" job; here is the script in Groovy.
pipeline {
    agent {
        label 'my-jenkins-agent'
    }
    stages {
        stage('Run on GKE') {
            steps {
                // Your build steps go here
                sh 'echo "Hello, World!"'
                sh 'date'
            }
        }
    }
}
In the console output, you should see that your job is running and that the pod agent was created successfully.
Also, if you run kubectl get po -n test, you will see your pod running; it gets terminated after the job is done.
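To watch the agent pod's full lifecycle while the job runs, you can keep a watch open on the GCP VM; a small convenience sketch:

```shell
# -w streams pod events: you should see the agent pod appear when the job
# starts and terminate when the job finishes
kubectl get pods -n test -w
```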
Conclusion
This setup aims at resource efficiency: build agents consume CPU and RAM only while a job is running. With the Jenkins master deployed on a virtual machine (VM), users can efficiently execute multiple tasks across Google Kubernetes Engine (GKE) pods. This approach enhances the optimization and scalability of the Jenkins environment by dynamically creating and terminating agent nodes, ensuring flexible and resource-conscious management of the Jenkins infrastructure.
In case of any questions regarding this article, please feel free to comment in the comments section or contact me via LinkedIn.
I want to thank my team at Guysinthecloud for all of their help.
Thank You