Deploying PHP/React/Vue Applications to AWS EC2 Using Jenkins Pipeline: A Practical Guide

In today’s continuous integration and deployment landscape, using Jenkins pipelines to manage the full lifecycle—from code checkout to build, test, and finally deployment—can greatly streamline your workflow. In this article, we’ll discuss how to create a robust Jenkins pipeline that deploys your PHP application or JavaScript-based front end (like React or Vue) directly to an AWS EC2 instance.

Overview

Using Jenkins Pipeline as Code allows you to version and manage your deployment logic along with your application code. In our example, we’ll cover a pipeline that:

  • Checks out your code from a repository.
  • Runs build processes (for instance, npm install and npm run build for front end apps, or appropriate steps for PHP).
  • Optionally runs tests.
  • Deploys the generated artifacts to an EC2 instance using secure methods (such as SSH).

Prerequisites

Before you begin, make sure you have:

  • Jenkins installed and configured with the required plugins (e.g., Pipeline, Credentials Binding, SSH Agent, etc.).
  • An AWS EC2 instance up and running, with proper SSH access.
  • Your application’s source code repository ready.
  • Credentials set up in Jenkins (for example, SSH keys stored in the Jenkins Credentials store) so that the deployment process can securely access the EC2 instance.
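
As a sketch of that last prerequisite, a dedicated deployment key pair can be generated like this (the key path and comment below are illustrative, not required names):

```shell
# Illustrative key path -- adjust to your environment.
KEY_FILE="/tmp/jenkins-deploy-key"

# Remove any stale copies so ssh-keygen does not prompt to overwrite.
rm -f "$KEY_FILE" "$KEY_FILE.pub"

# Generate a passphrase-less ed25519 key pair for non-interactive deploys.
ssh-keygen -t ed25519 -N "" -f "$KEY_FILE" -C "jenkins-deploy" -q

# ssh refuses private keys that are readable by other users.
chmod 600 "$KEY_FILE"
```

The private key file is then added to Jenkins as an "SSH Username with private key" credential, and the contents of the .pub file are appended to ~/.ssh/authorized_keys on the EC2 instance.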

Pipeline Architecture

Our deployment pipeline will consist of several stages:

  1. Checkout: Pull the latest code from your repository.
  2. Build: Execute the build steps. For a React or Vue app, this could be running npm install and npm run build; for a PHP application, you might run Composer or other build scripts.
  3. Test: (Optional) Run any unit or integration tests.
  4. Deploy: Copy the built artifacts to the EC2 instance over SSH (for example with scp or rsync), or trigger an Ansible playbook.

The idea is to keep the sensitive deployment logic out of your codebase by using Jenkins’s credential management and dynamic injection of secrets at runtime.

Example Pipeline Snippet

Below is a template Jenkinsfile that outlines the stages above. It uses placeholders for sensitive values so you can adapt it to your environment without revealing real details:

pipeline {
    agent any
    environment {
        // Replace these with real values for your environment.
        DEPLOY_SERVER = "ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com"  // Your EC2 instance address.
        DEPLOY_PATH   = "/var/www/html/your-app"                              // Deployment directory on EC2.
        BUILD_DIR     = "build"                                               // Directory containing built assets.
    }
    parameters {
        string(name: 'BRANCH', defaultValue: 'main', description: 'Git branch to build and deploy')
    }
    stages {
        stage('Checkout') {
            steps {
                echo "Checking out branch ${params.BRANCH}..."
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: params.BRANCH]],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [],
                    userRemoteConfigs: [[url: 'https://github.com/your-repo.git']]
                ])
            }
        }
        stage('Install Dependencies') {
            steps {
                script {
                    echo "Installing dependencies..."
                    // For a JavaScript front end (React/Vue): install Node packages.
                    // For PHP applications, you might use 'composer install' instead.
                    sh 'npm install'
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    echo "Building the application..."
                    // Build command for your application.
                    // Example: for React/Vue, run "npm run build".
                    sh 'npm run build'
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    echo "Running tests..."
                    // Run tests – adjust the command based on your project.
                    sh 'npm test'
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    echo "Starting deployment to ${DEPLOY_SERVER}..."
                    // Optionally archive the built artifacts.
                    archiveArtifacts artifacts: "${BUILD_DIR}/**", fingerprint: true
                    
                    // Use SSH credentials stored in Jenkins to securely deploy the code.
                    withCredentials([sshUserPrivateKey(credentialsId: 'YOUR_SSH_KEY_ID', keyFileVariable: 'SSH_KEY')]) {
                        // Use rsync (or scp) to copy the build directory to the target EC2 instance.
                        sh """
                            rsync -avz -e "ssh -i ${SSH_KEY} -o StrictHostKeyChecking=no" ${BUILD_DIR}/ ${DEPLOY_SERVER}:${DEPLOY_PATH}/
                        """
                    }
                }
            }
        }
    }
    post {
        success {
            echo "Deployment succeeded!"
        }
        failure {
            echo "Deployment failed. Check the logs for details."
        }
        always {
            // Clean the workspace after the build.
            cleanWs()
        }
    }
}

Key Points in the Pipeline:

  • Environment Variables:
    The DEPLOY_SERVER and DEPLOY_PATH variables define the target EC2 instance and deployment directory. You can also move these into Jenkins global configuration or credentials rather than hardcoding them in the Jenkinsfile.
  • Credentials Management:
    The withCredentials block safely injects your SSH key into the build environment. This avoids hardcoding secrets in your pipeline code.
  • Generic Deployment Step:
    The actual deployment command (rsync over SSH in this example) is kept minimal and generic. In a real-world implementation, you might also handle additional concerns like file backups, service restarts, or error handling.
  • Flexible Stages:
    You can adjust or add stages (such as additional testing or artifact archiving) depending on your specific application requirements.
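
For instance, the backup concern mentioned above could be handled by a small script run on the EC2 instance after the artifacts are uploaded. This is only a sketch with hypothetical paths (the /tmp defaults here would normally point into /var/www); adapt it to your layout:

```shell
# Sketch of a backup-and-swap release step. All paths are hypothetical
# defaults for illustration.
set -e
DEPLOY_PATH="${DEPLOY_PATH:-/tmp/your-app/current}"    # live release
STAGING_DIR="${STAGING_DIR:-/tmp/your-app/incoming}"   # freshly uploaded build
BACKUP_DIR="/tmp/your-app/backup-$(date +%Y%m%d%H%M%S)"

mkdir -p "$STAGING_DIR"

# Keep a timestamped copy of the live release before replacing it.
if [ -d "$DEPLOY_PATH" ]; then
    cp -a "$DEPLOY_PATH" "$BACKUP_DIR"
fi

# Swap the new build into place via rename.
rm -rf "$DEPLOY_PATH"
mkdir -p "$(dirname "$DEPLOY_PATH")"
mv "$STAGING_DIR" "$DEPLOY_PATH"

echo "deployed to $DEPLOY_PATH"
# A service restart (e.g. 'sudo systemctl reload nginx') would follow here.
```

The Deploy stage would rsync into the staging directory and then invoke this script over SSH, so a bad build can be rolled back by restoring the latest backup directory.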

Final Thoughts

Using Jenkins Pipeline as Code provides a clean, version-controlled way to manage your deployment processes. By leveraging Jenkins’s credentials and pipeline features, you can securely and efficiently deploy your PHP, React, or Vue applications to an AWS EC2 instance.

This guide should serve as a foundation—feel free to expand upon it with your custom logic, additional stages, or integration with other tools like Ansible or Kubernetes if needed. Happy deploying!

Elasticsearch: Cluster Setup

Elasticsearch requires very little configuration. It can run standalone right after installation, without any configuration changes. Setting up a cluster, however, requires a few minor changes.

Elastic.co provides comprehensive setup instructions in its official documentation.

Before we start:

Master node: Responsible for cluster management tasks such as discovering healthy nodes and adding them to, or removing them from, the cluster.

Data node: Stores data and runs search and aggregation operations.

Node name / cluster name: Elasticsearch assigns node and cluster names automatically, but it's better to set them explicitly for easier identification.

Configuring cluster:

Let's set up a cluster with 3 nodes: one dedicated master and two data nodes. Assume the master node's IP is 192.168.100.10 and the two data nodes' IPs are 192.168.100.20 and 192.168.100.30, respectively.

On Master node:

sudo vi /etc/elasticsearch/elasticsearch.yml

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_1"
 #path where data will be stored
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #dedicated master node: master-eligible, stores no data
 node.master: true
 node.data: false
 #default number of shards
 index.number_of_shards: 6
 #default number of replicas: 1 (total data nodes - 1)
 index.number_of_replicas: 1
 #IP address of this host
 network.host: 192.168.100.10
 #HTTP port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

save and exit

sudo service elasticsearch restart

Data node 1:

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_2"
 #path where data will be stored
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #data node: stores data, not master-eligible
 node.master: false
 node.data: true
 #default number of shards
 index.number_of_shards: 6
 #default number of replicas: 1 (total data nodes - 1)
 index.number_of_replicas: 1
 #IP address of this host
 network.host: 192.168.100.20
 #HTTP port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

save and exit

sudo service elasticsearch restart

Data node 2:

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_3"
 #path where data will be stored
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #data node: stores data, not master-eligible
 node.master: false
 node.data: true
 #default number of shards
 index.number_of_shards: 6
 #default number of replicas: 1 (total data nodes - 1)
 index.number_of_replicas: 1
 #IP address of this host
 network.host: 192.168.100.30
 #HTTP port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

save and exit

sudo service elasticsearch restart
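
Once all three nodes are restarted, cluster formation can be verified from any machine that can reach the master. The address below is the example master from this setup; the connect timeout and fallback message are just there to keep the check from hanging if the cluster is unreachable:

```shell
# Example master address from the setup above -- adjust to your environment.
ES_HOST="192.168.100.10:9200"

# Cluster-wide health; "status" should read "green" once all nodes have joined.
HEALTH=$(curl --connect-timeout 2 -s "http://$ES_HOST/_cluster/health?pretty" \
    || echo "cluster not reachable from this machine")
echo "$HEALTH"

# Per-node view, including each node's roles (m = master-eligible, d = data).
curl --connect-timeout 2 -s "http://$ES_HOST/_cat/nodes?v" || true
```

The health response should report "number_of_nodes": 3; if a node is missing, check its elasticsearch.log under path.logs for discovery errors.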