Netflix Clone CI/CD Pipeline | Jenkins | Docker | Kubernetes | Monitoring | DevSecOps

In this project, we build a Netflix clone (backed by the TMDB API) using a bunch of DevOps tools. We use Jenkins for CI/CD, Kubernetes for deployment, and Grafana and Prometheus to monitor everything.

Step 1: Launch an Ubuntu (22.04) t2.large instance

Step 2 — Install Jenkins, Docker and Trivy
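The commands below only cover Docker, but the heading also includes Jenkins. Here is a minimal sketch of installing Jenkins on Ubuntu 22.04 following the official Debian-stable repository instructions; it assumes OpenJDK 17 from the Ubuntu repos, and Jenkins will then listen on port 8080.

sudo apt-get update
sudo apt-get install -y fontconfig openjdk-17-jre
# add the Jenkins apt key and repository
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install -y jenkins
sudo systemctl enable jenkins
sudo systemctl start jenkins

Next, install Docker and add the ubuntu user to the docker group: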


sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker ubuntu
newgrp docker

When we run the pipeline, we will get a permission error, so we have to grant permission on docker.sock:

sudo chmod 777 /var/run/docker.sock

Install SonarQube as a Docker container:

docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
username admin
password admin
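Before opening port 9000 in the browser, you can confirm the container came up; these are plain Docker commands, nothing project-specific.

docker ps --filter name=sonar   # the sonar container should show as Up
docker logs sonar               # watch for "SonarQube is operational" in the startup log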

Now we install Trivy, which is used to scan the repository files and Docker images.

Install Trivy

Create a script named trivy.sh and paste the following commands into it:

vi trivy.sh
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
Then make the script executable and run it:

sudo chmod +x trivy.sh
bash trivy.sh
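As a quick sanity check once the script finishes, you can run the same sub-commands the pipeline uses later (trivy fs and trivy image); hello-world is just a small public image used here as a harmless test target.

trivy --version
trivy fs .                 # scan the files in the current directory
trivy image hello-world    # scan a small public image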

Step 3: Create a TMDB API Key

Sign in at https://www.themoviedb.org, open your profile Settings → API, and generate an API key. This key is passed to the Docker build later as the TMDB_V3_API_KEY build argument.

Step 4 — Install Prometheus and Grafana on a new server

Create a system user for Prometheus:

sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/false prometheus

You can use the curl or wget command to download Prometheus.

wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz

Then, we need to extract all Prometheus files from the archive.

tar -xvf prometheus-2.47.1.linux-amd64.tar.gz

Create the directories for the Prometheus data and configuration:

sudo mkdir -p /data /etc/prometheus

Now, let's change the directory to Prometheus and move some files.

cd prometheus-2.47.1.linux-amd64/

Let's move the prometheus binary and promtool to /usr/local/bin/. promtool is used to check configuration files and Prometheus rules.

sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/

Move the main Prometheus configuration file as well:

sudo mv prometheus.yml /etc/prometheus/prometheus.yml

To avoid permission issues, you need to set the correct ownership for the /etc/prometheus/ and /data/ directories.

sudo chown -R prometheus:prometheus /etc/prometheus/ /data/

You can delete the archive and the extracted Prometheus folder when you are done.

cd
rm -rf prometheus-2.47.1.linux-amd64.tar.gz prometheus-2.47.1.linux-amd64

Verify that you can execute the Prometheus binary by running the following command:


prometheus --version

To get more information and configuration options, run Prometheus with the --help flag.

prometheus --help

We're going to use Systemd, which is a system and service manager for Linux operating systems. For that, we need to create a Systemd unit configuration file.

sudo vim /etc/systemd/system/prometheus.service

prometheus.service

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target

To automatically start Prometheus after reboot, enable the service, then start it and check its status.

sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus

If you encounter any issues with Prometheus or are unable to start it, the easiest way to find the problem is to use the journalctl command and search for errors.

journalctl -u prometheus -f --no-pager

Now we can try to access it via the browser. I'm going to be using the IP address of the Ubuntu server. You need to append port 9090 to the IP.

<public-ip:9090>
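If you prefer the terminal, Prometheus also exposes built-in health and readiness endpoints; this assumes it is listening on the default port on the same host.

curl http://localhost:9090/-/healthy
curl http://localhost:9090/-/ready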

Install Node Exporter on Ubuntu 22.04

First, let's create a system user for Node Exporter by running the following command:

sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/false node_exporter

Use the wget command to download the Node Exporter binary.

wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

Extract the node exporter from the archive.

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

Move binary to the /usr/local/bin.

sudo mv \
  node_exporter-1.6.1.linux-amd64/node_exporter \
  /usr/local/bin/

Clean up by deleting the node_exporter archive and the extracted folder.

rm -rf node_exporter*

Verify that you can run the binary.

node_exporter --version

Node Exporter has a lot of collectors that we can enable. If you run node_exporter with --help, you will see all the options.

node_exporter --help

Next, create a similar systemd unit file.

sudo vim /etc/systemd/system/node_exporter.service

node_exporter.service

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter \
    --collector.logind

[Install]
WantedBy=multi-user.target

To automatically start Node Exporter after reboot, enable the service, then start it and check its status.

sudo systemctl enable node_exporter
sudo systemctl start node_exporter
sudo systemctl status node_exporter

If you have any issues, check the logs with journalctl:

journalctl -u node_exporter -f --no-pager

To create a static target, you need to add job_name with static_configs.

sudo vim /etc/prometheus/prometheus.yml

prometheus.yml

  - job_name: node_export
    static_configs:
      - targets: ["localhost:9100"]

By default, Node Exporter will be exposed on port 9100.
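For context, the new job goes under the existing scrape_configs key. After the edit, a trimmed-down prometheus.yml might look roughly like this; the global block and the prometheus job ship with the stock config, and only the node_export job is new.

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: node_export
    static_configs:
      - targets: ["localhost:9100"]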

Since we enabled lifecycle management via API calls, we can reload the Prometheus config without restarting the service and causing downtime.

Before reloading, check that the config is valid.

promtool check config /etc/prometheus/prometheus.yml

Then, you can use a POST request to reload the config.

curl -X POST http://localhost:9090/-/reload

Check the targets section

http://<ip>:9090/targets

Install Grafana on Ubuntu 22.04

First, let's make sure that all the dependencies are installed.

sudo apt-get install -y apt-transport-https software-properties-common

Next, add the GPG key.

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

Add this repository for stable releases.

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

After you add the repository, update the package lists and install Grafana.

sudo apt-get update
sudo apt-get -y install grafana

To automatically start Grafana after reboot, enable the service.

sudo systemctl enable grafana-server
sudo systemctl start grafana-server
sudo systemctl status grafana-server

Go to http://<ip>:3000 and log in to Grafana using the default credentials. The username is admin, and the password is admin as well.

username admin
password admin

To visualize metrics, you need to add a data source first.

Click Add data source and select Prometheus.

For the URL, enter localhost:9090, then click Save & test. You should see that the data source is working.

Let's add a dashboard for a better view.

Click on Import Dashboard, enter the dashboard ID 1860, and click on Load.

Select the data source and click on Import.

You will see this output

Step 5 — Install the Prometheus Plugin and Integrate it with the Prometheus server

Let's monitor the JENKINS SYSTEM.

You need a machine with Jenkins up and running.

Go to Manage Jenkins --> Plugins --> Available Plugins and install the Prometheus metrics plugin.

Once that is done, you will see in the system configuration that Prometheus is set to the /prometheus path. Nothing needs to change, so click on Apply and Save.

To create a static target, you need to add a job_name with static_configs. Go to the Prometheus server:

sudo vim /etc/prometheus/prometheus.yml

Paste below code

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<jenkins-ip>:8080']

Before reloading, check that the config is valid.

promtool check config /etc/prometheus/prometheus.yml

Then, you can use a POST request to reload the config.

curl -X POST http://localhost:9090/-/reload

Check the targets section

http://<ip>:9090/targets

Let's add Dashboard for a better view in Grafana

Click On Dashboard --> + symbol --> Import Dashboard

Use ID 9964 and click on Load.

Select the data source and click on Import

Now you will see a detailed overview of Jenkins.

Step 6 — Email Integration With Jenkins and Plugin Setup

Install Email Extension Plugin in Jenkins

Go to your Gmail and click on your profile

Then click on Manage Your Google Account --> click on the Security tab in the left side panel. Under 2-Step Verification, generate an App password for Jenkins and copy the generated password.

Once the plugin is installed in Jenkins, click on Manage Jenkins --> Configure System; there, under E-mail Notification, enter your Gmail address and the generated app password.

Click on Manage Jenkins --> Credentials and add your mail username and the generated password.

This is just to verify the mail configuration.

Now, under the Extended E-mail Notification section, configure the details as shown in the images below.

Click on Apply and save.

Step 7 — Install Plugins like JDK, Sonarqube Scanner, NodeJs, OWASP Dependency Check

7A — Install Plugin

Goto Manage Jenkins →Plugins → Available Plugins →

Install below plugins

1 → Eclipse Temurin Installer (Install without restart)

2 → SonarQube Scanner (Install without restart)

3 → NodeJs Plugin (Install Without restart)

Plugins to be installed on Jenkins:

Eclipse Temurin installer Plugin (version 1.5)
NodeJS Plugin (version 1.6.1)
SonarQube Scanner for Jenkins (version 2.16.1)

7B — Configure Java and Nodejs in Global Tool Configuration

Go to Manage Jenkins → Tools → install JDK (17) and NodeJS (16) → click on Apply and Save.

Step 8 — Configure Sonar Server in Manage Jenkins

Grab the public IP address of your EC2 instance. SonarQube runs on port 9000, so open <public-ip>:9000 to reach your SonarQube server. Click on Administration → Security → Users → click on Tokens and Update Token → give it a name → and click on Generate Token.

Click on Update Token.

Create a token with a name and generate it.

Copy the token.

Go to the Jenkins Dashboard → Manage Jenkins → Credentials → add the token as Secret text. It should look like this.

You will see this page once you click on Create.

Now, go to Dashboard → Manage Jenkins → System and add the SonarQube server (named sonar-server, matching the withSonarQubeEnv reference in the pipeline) like the below image.

Click on Apply and Save

The Configure System option is used in Jenkins to configure different servers.

Global Tool Configuration is used to configure different tools that we install using Plugins

We will add a SonarQube scanner installation under Tools; name it Sonar-Scanner to match the tool reference in the pipeline.

In the SonarQube dashboard, also add a webhook for the quality gate:

Administration --> Configuration --> Webhooks

Click on Create, give it a name, and set the URL to http://<jenkins-public-ip>:8080/sonarqube-webhook/ (the endpoint the SonarQube Scanner plugin listens on).

Let's go to our pipeline and add the following script in the Pipeline Script section.

pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME=tool 'Sonar-Scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/Bhushan0151/Netflix-clone.git'
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token' 
                }
            } 
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
    }
}

To see how the SonarQube scanner command is generated (it is already included in the above code):

Click on Set up -> Locally ->

Click on Build Now; you will see the stage view like this.

To see the report, you can go to Sonarqube Server and go to Projects.

You can see the report has been generated and the status shows as passed.

Step 9 — Install OWASP Dependency Check Plugins

First, we install the plugin, and next, we configure the tool.

Go to Dashboard → Manage Jenkins → Tools → add a Dependency-Check installation named DP-Check (the name the pipeline's odcInstallation refers to).

Click on Apply and Save here.

Now go to Configure → Pipeline, add this stage to your pipeline, and build.

stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.txt"
            }
        }

The stage view would look like this,

You will see that under Status, a graph of the vulnerabilities will also be generated.

Step 10 — Docker Image Build and Push

We need to install the Docker tooling in Jenkins. Go to Dashboard → Manage Plugins → Available plugins → search for Docker and install these plugins:

Docker

Docker Commons

Docker Pipeline

Docker API

docker-build-step

and click on install without restart

Now, go to Dashboard → Manage Jenkins → Tools → and add a Docker installation named docker (the toolName used in the pipeline).

Add your DockerHub username and password under Global Credentials with the ID docker (the credentialsId used in the pipeline).

Add this stage to Pipeline Script

stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                       sh "docker build --build-arg TMDB_V3_API_KEY=37695eddd52bd25c6f350861ace0d189-t netflix ."
                       sh "docker tag netflix bhushann11/netflix:latest "
                       sh "docker push bhushann11/netflix:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image bhushann11/netflix:latest > trivyimage.txt" 
            }
        }

You will see the output below, with a dependency trend.

When you log in to DockerHub, you will see that a new image has been created.

Now run the container to see if the app is coming up or not by adding the below stage.

stage('Deploy to container'){
            steps{
                sh 'docker run -d --name netflix -p 8081:80 bhushann11/netflix:latest'
            }
        }

stage view

<Jenkins-public-ip:8081>

You will get this output

Step 11 — Kubernetes Setup

Connect to your machines with PuTTY or MobaXterm.

Take two Ubuntu 20.04 instances: one for the K8s master and the other for the worker.

Install kubectl on the Jenkins machine as well.

sudo apt update -y
sudo apt install curl
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

Part 1 ----------Master Node------------

sudo hostnamectl set-hostname K8s-Master
exec bash

----------Worker Node------------

sudo hostnamectl set-hostname K8s-Worker
exec bash

Part 2 ------------Both Master & Node ------------

sudo apt-get update 

sudo apt-get install -y docker.io
sudo usermod -aG docker ubuntu
newgrp docker
sudo chmod 777 /var/run/docker.sock

sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo snap install kube-apiserver
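Optionally, pin the Kubernetes packages so an unattended upgrade does not move them to a different version; this is a common recommendation from the kubeadm install docs, not something the steps above strictly require.

sudo apt-mark hold kubelet kubeadm kubectl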

Part 3 --------------- Master ---------------

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# in case you're in root, exit from it and run the below commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

----------Worker Node------------

sudo kubeadm join <master-node-ip>:<master-node-port> --token <token> --discovery-token-ca-cert-hash <hash>
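If the join command printed by kubeadm init has scrolled away or the token has expired, you can regenerate it on the master with a standard kubeadm helper and run the output on the worker.

sudo kubeadm token create --print-join-command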

Copy the kubeconfig file to the Jenkins master or to your local file manager and save it.

Copy it and save it in Documents or another folder as secret-file.txt.

Note: create a secret-file.txt in your file explorer, save the config in it, and use this file in the Kubernetes credential section.
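One simple way to grab that config from the master, assuming the copy made right after kubeadm init above, is to print it and paste the output into secret-file.txt.

cat ~/.kube/config
# or, if you only have the root-owned copy:
sudo cat /etc/kubernetes/admin.conf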

Install the Kubernetes plugin. Once it's installed successfully, go to Manage Jenkins --> Manage Credentials --> click on Jenkins global --> Add Credentials, and add secret-file.txt as a secret file credential.

Install Node_exporter on both master and worker

Let's add Node_exporter on Master and Worker to monitor the metrics

First, let's create a system user for Node Exporter by running the following command:

sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/false node_exporter

You can download Node Exporter from the same page.

Use the wget command to download the binary.

wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

Extract the node exporter from the archive.

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

Move binary to the /usr/local/bin.

sudo mv \
  node_exporter-1.6.1.linux-amd64/node_exporter \
  /usr/local/bin/

Clean up by deleting the node_exporter archive and the extracted folder.

rm -rf node_exporter*

Verify that you can run the binary.

node_exporter --version

Node Exporter has a lot of collectors that we can enable. If you run node_exporter with --help, you will see all the options.

node_exporter --help

We're going to enable the logind collector (--collector.logind), just for the demo.

Next, create a similar systemd unit file.

sudo vim /etc/systemd/system/node_exporter.service

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter \
    --collector.logind

[Install]
WantedBy=multi-user.target

Compared to the Prometheus unit file, the user and group are node_exporter, and the ExecStart command points to the node_exporter binary.

To automatically start the Node Exporter after reboot, enable the service.

sudo systemctl enable node_exporter
sudo systemctl start node_exporter
sudo systemctl status node_exporter

If you have any issues, check the logs with journalctl:

journalctl -u node_exporter -f --no-pager

At this point, we have only a single target in our Prometheus. There are many different service discovery mechanisms built into Prometheus. For example, Prometheus can dynamically discover targets in AWS, GCP, and other clouds based on the labels. In the following tutorials, I'll give you a few examples of deploying Prometheus in a cloud-specific environment. For this tutorial, let's keep it simple and keep adding static targets. Also, I have a lesson on how to deploy and manage Prometheus in the Kubernetes cluster.

To create a static target, you need to add a job_name with static_configs. Go to the Prometheus server:

sudo vim /etc/prometheus/prometheus.yml

prometheus.yml

  - job_name: node_export_masterk8s
    static_configs:
      - targets: ["<master-ip>:9100"]

  - job_name: node_export_workerk8s
    static_configs:
      - targets: ["<worker-ip>:9100"]

Since we enabled lifecycle management via API calls, we can reload the Prometheus config without restarting the service and causing downtime.

Before reloading, check that the config is valid.

promtool check config /etc/prometheus/prometheus.yml

Then, you can use a POST request to reload the config.

curl -X POST http://localhost:9090/-/reload

Check the targets section

http://<ip>:9090/targets

The final step is to deploy to the Kubernetes cluster by adding this stage:

stage('Deploy to kubernetes'){
            steps{
                script{
                    dir('Kubernetes') {
                        withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                                sh 'kubectl apply -f deployment.yml'
                                sh 'kubectl apply -f service.yml'
                        }   
                    }
                }
            }
        }
        

How to generate the withKubeConfig block for the pipeline:

Go to Pipeline Syntax -> Sample Step -> withKubeConfig -> select the k8s credential created from secret-file.txt

-> generate the snippet and paste it into the pipeline.

Stage View

In the Kubernetes cluster(master) give this command

kubectl get all 
kubectl get svc   # use either one

Step 12: Access the app from a web browser using the node IP and the NodePort shown by kubectl get svc.

Output -

Monitoring -

Complete Pipeline

pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME=tool 'Sonar-Scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/Bhushan0151/Netflix-clone.git'
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token' 
                }
            } 
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.txt"
            }
        }
        
        stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                       sh "docker build --build-arg TMDB_V3_API_KEY=37695eddd52bd25c6f350861ace0d189 -t netflix ."
                       sh "docker tag netflix bhushann11/netflix:latest "
                       sh "docker push bhushann11/netflix:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image bhushann11/netflix:latest > trivyimage.txt" 
            }
        }
        
        stage('Deploy to container'){
            steps{
                sh 'docker run -d --name netflix -p 8081:80 bhushann11/netflix:latest'
            }
        }
        
        stage('Deploy to kubernetes'){
            steps{
                script{
                    dir('Kubernetes') {
                        withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'K8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                            sh 'kubectl apply -f deployment.yml'
                            sh 'kubectl apply -f service.yml'
                        }
                    }
                }
            }
        }
    }
}
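Step 6 wired up the Email Extension plugin, but the pipeline above never sends a notification. As a hedged sketch, a post block like the one below could be added inside the pipeline block, after stages; the recipient address is a placeholder, and the attachment names match the Trivy reports written by the earlier stages.

    post {
        always {
            emailext(
                attachLog: true,
                subject: "${env.JOB_NAME} - Build #${env.BUILD_NUMBER} - ${currentBuild.currentResult}",
                body: "Project: ${env.JOB_NAME}<br/>Build Number: ${env.BUILD_NUMBER}<br/>URL: ${env.BUILD_URL}",
                to: 'you@example.com',                          // placeholder recipient
                attachmentsPattern: 'trivyfs.txt,trivyimage.txt'
            )
        }
    }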


Step 13: Terminate instances.
