
Secure Pipeline

Table of contents

  1. Buildkite
  2. Travis
  3. Drone
  4. Tekton
    1. Sample Flow
    2. Cheatsheet
  5. Privacy as Code
  6. Continuous deployment security
    1. secureCodeBox
    2. ThreatMapper
  7. StackStorm
    1. Automated Vulnerability Scanning:
    2. Vulnerability Assessment:
    3. Incident Trigger:
    4. Incident Investigation:
    5. Notification and Alerting:
    6. Patching Vulnerable Systems:
    7. Network Isolation:
    8. User Account Lockout:
    9. Incident Status Update:
    10. Incident Resolution:
  8. Secure Pipeline Using Jenkins Declarative Pipeline
  9. References

A secure pipeline is a set of processes and tools used to build, test, and deploy software in a way that prioritizes security at every stage of the development lifecycle. The goal of a secure pipeline is to ensure that applications are thoroughly tested for security vulnerabilities and compliance with security standards before they are released into production.

A secure pipeline typically involves the following stages:

  1. Source Code Management: Developers use source code management tools, such as Git or SVN, to manage the code for the application.

  2. Build: The application code is built into executable code using a build tool, such as Maven or Gradle.

  3. Static Analysis: A static analysis tool, such as a SAST tool, is used to scan the code for security vulnerabilities.

  4. Unit Testing: Developers write unit tests to ensure that the application functions as expected and to catch any bugs or errors.

  5. Dynamic Analysis: A dynamic analysis tool, such as a DAST tool, is used to test the application in a running environment and identify any security vulnerabilities.

  6. Artifact Repository: The application and all its dependencies are stored in an artifact repository, such as JFrog or Nexus.

  7. Staging Environment: The application is deployed to a staging environment for further testing and validation.

  8. Compliance Check: A compliance tool is used to check that the application meets any regulatory or compliance requirements.

  9. Approval: The application is reviewed and approved for deployment to production.

  10. Deployment: The application is deployed to production using a deployment tool, such as Ansible or Kubernetes.

By implementing a secure pipeline, organizations can ensure that their applications are thoroughly tested for security vulnerabilities and compliance with security standards, reducing the risk of security breaches and ensuring that applications are more resilient to attacks.

Step 1: Set up version control

  • Use a version control system (VCS) such as Git to manage your application code.
  • Store your code in a private repository and limit access to authorized users.
  • Use strong authentication and authorization controls to secure access to your repository.

Step 2: Implement continuous integration

  • Use a continuous integration (CI) tool such as Jenkins or Travis CI to automate your build process.
  • Ensure that your CI tool is running in a secure environment.
  • Use containerization to isolate your build environment and prevent dependencies from conflicting with each other.
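
As an illustration of the containerization point above, a CI job can run inside a pinned container image so builds are isolated from the host runner and from each other. A minimal GitHub Actions-style sketch (the image and commands are assumptions about a Node.js project):

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: node:18-alpine   # pinned build image isolates dependencies from the host runner
    steps:
      - uses: actions/checkout@v2
      - run: npm ci           # reproducible install from the lockfile
      - run: npm run build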

Step 3: Perform automated security testing

  • Use SAST, DAST, and SCA tools to perform automated security testing on your application code.
  • Integrate these tools into your CI pipeline so that security testing is performed automatically with each build.
  • Configure the tools to report any security issues and fail the build if critical vulnerabilities are found.
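
For example, a container-scanning step can be configured to break the build when critical findings are reported. A minimal sketch using the Trivy GitHub Action (the image name is an assumption; severity and exit-code are inputs of that action):

      - name: Scan image and fail on critical findings
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:latest
          severity: CRITICAL
          exit-code: '1'   # a non-zero exit code fails the build when critical vulnerabilities are found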

Step 4: Implement continuous deployment

  • Use a continuous deployment (CD) tool such as Kubernetes or AWS CodeDeploy to automate your deployment process.
  • Implement a release process that includes thorough testing and review to ensure that only secure and stable code is deployed.
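
The deployment manifests themselves should also enforce security controls. A minimal Kubernetes Deployment sketch with a restrictive securityContext (image name and resource values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # deploy an immutable, versioned tag rather than :latest
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          resources:
            limits:
              cpu: "500m"
              memory: "256Mi"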

Step 5: Monitor and respond to security threats

  • Implement security monitoring tools to detect and respond to security threats in real-time.
  • Use tools such as intrusion detection systems (IDS) and security information and event management (SIEM) systems to monitor your infrastructure and applications.
  • Implement a security incident response plan to quickly respond to any security incidents that are detected.

Below is an example of a secure CI/CD pipeline, expressed as a GitHub Actions-style workflow:

# Define the pipeline stages
stages:
  - build
  - test
  - security-test
  - deploy

# Define the jobs for each stage
jobs:
  build:
    # Build the Docker image and tag it with the commit SHA
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ github.sha }} .
          docker tag myapp:${{ github.sha }} myapp:latest

  test:
    # Run unit and integration tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test

  security-test:
    # Perform automated security testing using SAST, DAST, and SCA tools
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Perform SAST
        uses: shiftleftio/scan-action@<version>   # replace <version> with the release of the SAST action you use
        with:
          scan-targets: .
          shiftleft-org-id: ${{ secrets.SHIFTLEFT_ORG_ID }}
          shiftleft-api-key: ${{ secrets.SHIFTLEFT_API_KEY }}
      - name: Perform DAST
        uses: aquasecurity/trivy-action@<version>   # replace <version> with the release you use
        with:
          image-ref: myapp:${{ github.sha }}
      - name: Perform SCA
        uses: snyk/actions@v1
        with:
          file: package.json
          args: --severity-threshold=high

  deploy:
    # Deploy the application to the production environment
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/master'
    steps:
      - name: Deploy to production
        uses: appleboy/ssh-action@master
        with:
          host: production-server.example.com
          username: ${{ secrets.SSH_USERNAME }}   # secret names are illustrative
          password: ${{ secrets.SSH_PASSWORD }}
          script: |
            docker pull myapp:latest
            docker stop myapp || true
            docker rm myapp || true
            docker run -d --name myapp -p 80:80 myapp:latest

In this example, the YAML file defines a CI/CD pipeline with four stages: build, test, security-test, and deploy. Each stage has a job that performs a specific set of tasks. The build job builds a Docker image for the application, the test job runs unit and integration tests, the security-test job performs automated security testing using SAST, DAST, and SCA tools, and the deploy job deploys the application to the production environment.

Each job is defined with a runs-on parameter that specifies the operating system that the job should run on. The steps for each job are defined with name and run parameters that specify the name of the step and the command to run. The uses parameter is used to specify external actions or packages that should be used in the step.

The if parameter is used to conditionally run a job based on a specific condition, such as the branch or tag that triggered the pipeline. Secrets are stored in the GitHub repository’s secrets store and referenced with the ${{ secrets.<NAME> }} syntax.

Buildkite

Within your pipeline configuration file (e.g., .buildkite/pipeline.yml), add a step for running the vulnerability scanning tool.

steps:
  - label: "Security Scan"
    command: |
      # Run the vulnerability scanning tool
      # Replace the command and options with the appropriate tool you're using
      my-vulnerability-scanner scan --output report.txt

      # Print the generated report
      cat report.txt

    # Define the conditions when this step should run (e.g., on specific branches or pull requests)
    branches: master

Travis

Open your project’s .travis.yml file for editing.

script:
  - |
    # Run the vulnerability scanning tool
    # Replace the command and options with the appropriate tool you're using
    my-vulnerability-scanner scan --output report.txt

    # Print the generated report
    cat report.txt

Drone

Open your project’s .drone.yml file for editing.

pipeline:
  security:
    image: your-vulnerability-scanner-image
    commands:
      - |
        # Run the vulnerability scanning tool
        # Replace the command and options with the appropriate tool you're using
        my-vulnerability-scanner scan --output report.txt

        # Print the generated report
        cat report.txt

Tekton

Sample Flow

1- Create a Dockerfile:

FROM golang:1.16-alpine
WORKDIR /app
COPY . .
RUN go build -o myapp
CMD ["./myapp"]

2- Create a Tekton Task (build-task.yaml):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-task
spec:
  steps:
    - name: build
      image: golang:1.16-alpine
      workingDir: /workspace/source
      command:
        - go
      args:
        - build
        - -o
        - /workspace/myapp
        - .
      volumeMounts:
        - name: workspace
          mountPath: /workspace
    - name: package
      image: alpine
      command:
        - tar
      args:
        - czf
        - /workspace/myapp.tar.gz
        - -C
        - /workspace
        - myapp
      volumeMounts:
        - name: workspace
          mountPath: /workspace
    - name: publish
      image: ubuntu
      command:
        - echo
      args:
        - "Publishing artifact: /workspace/myapp.tar.gz"
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      emptyDir: {}

3- Create a Tekton Pipeline (pipeline.yaml):

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: myapp-pipeline
spec:
  tasks:
    - name: build-task
      taskRef:
        name: build-task

4- Apply the Task and Pipeline:

kubectl apply -f build-task.yaml
kubectl apply -f pipeline.yaml

5- Create a Tekton PipelineRun (pipelinerun.yaml):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: myapp-pipelinerun
spec:
  pipelineRef:
    name: myapp-pipeline

6- Apply the PipelineRun:

kubectl apply -f pipelinerun.yaml
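
To turn this flow into a secure pipeline, a scanning step can be added as its own Task and wired into the Pipeline (for example with runAfter: [build-task]). A minimal sketch using the aquasec/trivy image; the image and arguments are assumptions, not part of the flow above:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: scan-task
spec:
  steps:
    - name: trivy-scan
      image: aquasec/trivy:latest
      command:
        - trivy
      args:
        - filesystem
        - --exit-code
        - "1"                  # fail the Task when findings meet the severity threshold
        - --severity
        - CRITICAL,HIGH
        - /workspace           # in a real pipeline, share a Workspace with the build Task so the scanner sees the built artifacts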

Cheatsheet

1- Install Tekton Pipelines

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

2- Create a Task

kubectl apply --filename <task-definition.yaml>

3- Create a Pipeline

kubectl apply --filename <pipeline-definition.yaml>

4- Create a PipelineRun

kubectl apply --filename <pipelinerun-definition.yaml>

5- List Pipelines

tkn pipeline list

6- Describe a Pipeline

tkn pipeline describe <pipeline-name>

7- List PipelineRuns

tkn pipelinerun list

8- Describe a PipelineRun

tkn pipelinerun describe <pipelinerun-name>

9- List Tasks

tkn task list

10- Describe a Task

tkn task describe <task-name>

11- List TaskRuns

tkn taskrun list

12- Describe a TaskRun

tkn taskrun describe <taskrun-name>

13- Create a TriggerBinding

kubectl apply --filename <triggerbinding-definition.yaml>

14- Create a TriggerTemplate

kubectl apply --filename <triggertemplate-definition.yaml>

15- Create a Trigger

kubectl apply --filename <trigger-definition.yaml>

16- List Triggers

tkn trigger list

17- Describe a Trigger

tkn trigger describe <trigger-name>

18- Delete a Pipeline

kubectl delete pipeline <pipeline-name>

19- Delete a PipelineRun

kubectl delete pipelinerun <pipelinerun-name>

20- Delete a Task

kubectl delete task <task-name>

Privacy as Code

Installs the Fides tool using pip, the Python package manager

pip install fides

Scans the specified directory for privacy-related issues and sensitive data

fides scan <directory_path>

Generates a detailed report of the scan results and saves it to the specified output file

fides report -o <output_file>

Specifies a pattern to exclude specific files or directories from the scan

fides scan --exclude <pattern>

Uses a custom ruleset file for the scan, allowing you to define specific privacy rules and checks

fides scan --ruleset <ruleset_file>

Ignores specific patterns or files from triggering false positive alerts during the scan.

fides scan --ignore <pattern>

Sets the output format for the generated report, such as JSON, CSV, or HTML

fides report --format <output_format>

Configures the scan to exit with a non-zero code if privacy issues are detected, enabling integration with CI/CD pipelines.

fides scan --exit-code
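
To wire these checks into a pipeline, the scan can run as its own job and rely on --exit-code to break the build. A minimal GitHub Actions-style sketch that mirrors the commands listed above (the job layout and report file name are assumptions):

jobs:
  privacy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Fides
        run: pip install fides
      - name: Scan for privacy issues
        run: fides scan . --exit-code   # non-zero exit fails the job when privacy issues are detected
      - name: Generate report
        run: fides report -o privacy-report.json --format json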

Continuous deployment security

secureCodeBox

  1. Install secureCodeBox:

kubectl apply -f https://raw.githubusercontent.com/secureCodeBox/secureCodeBox/master/deploy/complete.yaml

  2. Run a vulnerability scan:

kubectl apply -f https://raw.githubusercontent.com/secureCodeBox/secureCodeBox/master/demo/scan-job.yaml

  3. Monitor scan progress:

kubectl get scan -w

  4. View scan results:

kubectl describe scan <scan-name>

  5. Integrate secureCodeBox with other security tools:

securecodebox-cli scan start --target <target-url> --scan-type <scan-type> --integration <integration-name>

Example: securecodebox-cli scan start --target https://example.com --scan-type zap-scan --integration jira

  6. Schedule regular scans using Kubernetes CronJobs:

kubectl apply -f https://raw.githubusercontent.com/secureCodeBox/secureCodeBox/master/demo/scheduled-scan.yaml

  7. Integrate secureCodeBox with your CI/CD pipeline:

securecodebox-cli scan start --target <target-url> --scan-type <scan-type> --pipeline <pipeline-name>

Example: securecodebox-cli scan start --target https://example.com --scan-type nmap-scan --pipeline my-cicd-pipeline

  8. Customize result-processing hooks:

kubectl edit hook <hook-name>
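
Scans can also be declared as Kubernetes resources and applied like any other manifest. A minimal sketch of a Scan custom resource, assuming secureCodeBox v2+ (scan type and parameters are illustrative):

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "nmap-example-scan"
spec:
  scanType: "nmap"
  parameters:
    - "-p"
    - "80,443"
    - "example.com"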

ThreatMapper

  1. Install ThreatMapper:

git clone https://github.com/deepfence/ThreatMapper.git
cd ThreatMapper
./install.sh

  2. Perform a security assessment on a specific target:

threat-mapper scan <target-ip>

  3. View the scan results:

threat-mapper report <scan-id>

  4. Integrate ThreatMapper with your CI/CD pipeline:

threat-mapper scan --target <target-ip> --pipeline <pipeline-name>

Example: threat-mapper scan --target 192.168.0.1 --pipeline my-cicd-pipeline

  5. Customize scan policies by modifying the configuration files:

vim ~/.threat-mapper/config.yaml

  6. Enable notifications for scan results by configuring the desired notification settings, such as email notifications or Slack alerts, in the same configuration file:

vim ~/.threat-mapper/config.yaml

  7. Schedule regular scans with cron:

crontab -e

Add a cron job entry to execute the threat-mapper scan command at specified intervals.

  8. Integrate ThreatMapper with other security tools:

threat-mapper scan --target <target-ip> --integration <integration-name>

Example: threat-mapper scan --target 192.168.0.1 --integration jira

  9. Monitor and address security issues based on the scan results: regularly review the scan reports and take the necessary actions to remediate the identified issues.

  10. Generate visualizations and reports:

threat-mapper visualize <scan-id>

This command generates visualizations of the scan results, such as network diagrams and attack surface maps.

StackStorm

Automated Vulnerability Scanning:

Description: Schedule regular vulnerability scans using a scanning tool like Nessus or Qualys.

Command/Code: st2 run vulnerability_scanner.scan

To schedule regular vulnerability scans using a scanning tool like Nessus or Qualys with StackStorm (st2), you can create a custom StackStorm pack and define a Python action that invokes the vulnerability scanning tool’s API. Here’s an example code snippet:

  • Create a new StackStorm pack:
st2 pack create vulnerability_scanner
  • Create a new Python action file scan.py within the pack:
# vulnerability_scanner/actions/scan.py

import requests

from st2common.runners.base_action import Action

class VulnerabilityScanAction(Action):
    def run(self):
        # Code to invoke the vulnerability scanning tool's API
        # Example: Nessus API call to start a scan
        # Replace <nessus_api_url>, <access_token>, and <scan_id> with your actual values
        response = requests.post(
            url="<nessus_api_url>/scans/<scan_id>/launch",
            headers={"X-ApiKeys": "<access_token>"},
        )
        if response.status_code == 200:
            return True
        else:
            return False
  • Register the action in the pack.yaml file:
# vulnerability_scanner/pack.yaml

actions:
  - vulnerability_scanner/actions/scan.py

This code provides a basic structure for invoking a vulnerability scanning tool’s API. You would need to modify it to fit your specific scanning tool’s API and authentication method.
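
Note that StackStorm discovers an action through a YAML metadata file that points at the Python entry point. A minimal sketch of such a file (vulnerability_scanner/actions/scan.yaml, names are illustrative):

---
name: scan
pack: vulnerability_scanner
description: Launch a vulnerability scan via the scanner's API
enabled: true
runner_type: python-script
entry_point: scan.py
parameters: {}

With the metadata in place, the action can be invoked with st2 run vulnerability_scanner.scan, matching the command shown above.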

Vulnerability Assessment:

Description: Retrieve vulnerability scan results and analyze them for critical vulnerabilities.

Command/Code: st2 run vulnerability_scanner.analyze_scan

  • Create a new StackStorm pack:
st2 pack create vulnerability_assessment
  • Create a new Python action file analyze.py within the pack:
# vulnerability_assessment/actions/analyze.py

from st2common.runners.base_action import Action
import requests

class VulnerabilityAssessmentAction(Action):
    def run(self):
        # Code to fetch vulnerability scan results from the scanning tool's API
        # Example: Nessus API call to retrieve scan results
        # Replace <nessus_api_url>, <access_token>, and <scan_id> with your actual values
        response = requests.get(
            url="<nessus_api_url>/scans/<scan_id>/results",
            headers={"X-ApiKeys": "<access_token>"},
        )

        if response.status_code == 200:
            results = response.json()
            # Perform analysis on the scan results
            # Example: Check for critical vulnerabilities
            critical_vulnerabilities = []
            for result in results:
                if result["severity"] == "Critical":
                    critical_vulnerabilities.append(result["name"])
            return critical_vulnerabilities
        else:
            return None
  • Register the action in the pack.yaml file:
# vulnerability_assessment/pack.yaml

actions:
  - vulnerability_assessment/actions/analyze.py

This code provides a basic structure for fetching vulnerability scan results from a scanning tool’s API and performing analysis on them. You would need to modify it to fit your specific scanning tool’s API and authentication method. Additionally, you can customize the analysis logic to suit your specific requirements.

Incident Trigger:

Description: Detect a critical vulnerability and trigger an incident response workflow.

Command/Code: st2 run incident.trigger

  • Create a new StackStorm pack:
st2 pack create incident_investigation
  • Create a new Python action file gather_info.py within the pack:
# incident_investigation/actions/gather_info.py

from st2common.runners.base_action import Action
import requests

class IncidentInvestigationAction(Action):
    def run(self, vulnerability):
        # Code to gather additional information about the vulnerability
        # Example: Query relevant logs or systems
        # Replace <log_url> and <search_query> with your actual values
        response = requests.get(
            url=f"<log_url>/search?query={vulnerability}"
        )

        if response.status_code == 200:
            logs = response.json()
            # Perform further analysis or extract relevant information from logs
            # Example: Return the log entries related to the vulnerability
            return logs
        else:
            return None
  • Register the action in the pack.yaml file:
# incident_investigation/pack.yaml

actions:
  - incident_investigation/actions/gather_info.py
  • Run the incident investigation action:
st2 run incident_investigation.gather_info vulnerability=<vulnerability_name>

This code provides a basic structure for gathering additional information about a vulnerability by querying relevant logs or systems. You would need to modify it to fit your specific log sources or systems and the query syntax for retrieving the relevant information.
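
To actually fire the incident response workflow when a critical vulnerability is detected, a StackStorm rule can bind a trigger to the incident.trigger action. A minimal sketch using a webhook trigger; the payload field names (severity, name) are assumptions about your alert source:

---
name: critical_vulnerability_found
pack: incident
description: Trigger the incident response workflow when a critical vulnerability is reported
enabled: true
trigger:
  type: core.st2.webhook
  parameters:
    url: vulnerability_alerts
criteria:
  trigger.body.severity:
    type: equals
    pattern: "Critical"
action:
  ref: incident.trigger
  parameters:
    vulnerability: "{{ trigger.body.name }}"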

Incident Investigation:

Description: Gather additional information about the vulnerability by querying relevant logs or systems.

Command/Code: st2 run incident.investigate

  • Create a new StackStorm pack:
st2 pack create incident_investigation
  • Create a new integration file investigate_vulnerability.yaml within the pack:
# incident_investigation/integrations/investigate_vulnerability.yaml

name: investigate_vulnerability
description: Gather additional information about a vulnerability by querying relevant logs or systems.

actions:
  - name: query_logs
    description: Query logs to gather information about the vulnerability
    enabled: true
    entry_point: query_logs.py
    runner_type: "python-script"
  • Create a new Python script file query_logs.py within the pack:
# incident_investigation/actions/query_logs.py

import requests
from st2common.runners.base_action import Action

class QueryLogsAction(Action):
    def run(self, vulnerability):
        # Code to query relevant logs or systems
        # Replace <log_url> and <search_query> with your actual values
        response = requests.get(
            url=f"<log_url>/search?query={vulnerability}"
        )

        if response.status_code == 200:
            logs = response.json()
            # Perform further analysis or extract relevant information from logs
            # Example: Return the log entries related to the vulnerability
            return logs
        else:
            return None
  • Register the integration in the pack.yaml file:
# incident_investigation/pack.yaml

integrations:
  - integrations/investigate_vulnerability.yaml

Notification and Alerting:

Description: Send notifications to the incident response team or stakeholders via Slack, email, or other communication channels.

Command/Code: st2 run notification.send

  • Create a new StackStorm pack:
st2 pack create notification_alerting
  • Create a new integration file send_notification.yaml within the pack:
# notification_alerting/integrations/send_notification.yaml

name: send_notification
description: Send notifications to the incident response team or stakeholders

actions:
  - name: send_slack_notification
    description: Send a notification to a Slack channel
    enabled: true
    entry_point: send_slack_notification.py
    runner_type: "python-script"

  - name: send_email_notification
    description: Send a notification via email
    enabled: true
    entry_point: send_email_notification.py
    runner_type: "python-script"
  • Create a new Python script file send_slack_notification.py within the pack:
# notification_alerting/actions/send_slack_notification.py

import requests
from st2common.runners.base_action import Action

class SendSlackNotificationAction(Action):
    def run(self, message, channel):
        # Code to send Slack notification
        # Replace <slack_webhook_url> with your actual webhook URL
        webhook_url = "<slack_webhook_url>"
        payload = {
            "text": message,
            "channel": channel
        }
        response = requests.post(url=webhook_url, json=payload)

        if response.status_code == 200:
            return True
        else:
            return False
  • Create a new Python script file send_email_notification.py within the pack:
# notification_alerting/actions/send_email_notification.py

import smtplib
from email.mime.text import MIMEText
from st2common.runners.base_action import Action

class SendEmailNotificationAction(Action):
    def run(self, message, recipient, sender, subject):
        # Code to send email notification
        # Replace <smtp_server>, <smtp_port>, <smtp_username>, and <smtp_password> with your email server details
        smtp_server = "<smtp_server>"
        smtp_port = <smtp_port>
        smtp_username = "<smtp_username>"
        smtp_password = "<smtp_password>"

        email_message = MIMEText(message)
        email_message["Subject"] = subject
        email_message["From"] = sender
        email_message["To"] = recipient

        try:
            with smtplib.SMTP(smtp_server, smtp_port) as server:
                server.login(smtp_username, smtp_password)
                server.send_message(email_message)
            return True
        except Exception as e:
            return str(e)
  • Register the integrations in the pack.yaml file:
# notification_alerting/pack.yaml

integrations:
  - integrations/send_notification.yaml
  • Send a Slack notification:
st2 run notification_alerting.send_slack_notification message=<notification_message> channel=<slack_channel>
  • Send an email notification:
st2 run notification_alerting.send_email_notification message=<notification_message> recipient=<recipient_email> sender=<sender_email> subject=<email_subject>

Patching Vulnerable Systems:

Description: Automatically patch vulnerable systems by executing scripts or running configuration management tools like Ansible.

Command/Code: st2 run remediation.patch

  • Create a new StackStorm pack:
st2 pack create vulnerability_patching
  • Create a new action file patch_vulnerable_systems.yaml within the pack:
# vulnerability_patching/actions/patch_vulnerable_systems.yaml

name: patch_vulnerable_systems
description: Automatically patch vulnerable systems

runner_type: "remote-shell-script"
enabled: true
entry_point: patch_vulnerable_systems.sh
  • Create a new shell script file patch_vulnerable_systems.sh within the pack:
# vulnerability_patching/actions/patch_vulnerable_systems.sh

# Code to patch vulnerable systems using Ansible or other configuration management tools
ansible-playbook -i inventory.ini patch_vulnerable_systems.yml
  • Create an Ansible playbook file patch_vulnerable_systems.yml:
# vulnerability_patching/actions/patch_vulnerable_systems.yml

- name: Patch vulnerable systems
  hosts: vulnerable_hosts
  tasks:
    - name: Apply security patches
      apt:
        name: "*"
        state: latest
        update_cache: yes
  • Register the action in the pack.yaml file:
# vulnerability_patching/pack.yaml

actions:
  - actions/patch_vulnerable_systems.yaml

Network Isolation:

Description: Isolate compromised systems from the network to prevent further damage.

Command/Code: st2 run remediation.isolate

  • Create a new StackStorm pack:
st2 pack create network-isolation
  • Create a new action file
st2 action create network_isolation.yaml
  • Open the network_isolation.yaml file and add the following content:
name: network_isolation
description: Isolate compromised systems from the network
runner_type: local-shell-script
parameters:
  ip_address:
    description: IP address of the compromised system
    type: string
    required: true
entry_point: isolation.sh
  • Open the isolation.sh file and add the following content:
#!/bin/bash

ip_address="{{ ip_address }}"

# Execute commands to isolate the system
iptables -A INPUT -s $ip_address -j DROP
iptables -A OUTPUT -d $ip_address -j DROP
  • Set up the pack's virtual environment:
st2 run packs.setup_virtualenv packs=network-isolation
  • Test the action by running:
st2 run network-isolation.network_isolation ip_address=<ip_address>

User Account Lockout:

Description: Lock user accounts associated with the identified vulnerability to limit access.

Command/Code: st2 run remediation.lock_account

  • Create a new StackStorm pack:
st2 pack create user-account-lockout
  • Create a new action file:
st2 action create user_account_lockout.yaml
  • Open the user_account_lockout.yaml file and add the following content:
name: user_account_lockout
description: Lock user accounts associated with the identified vulnerability
runner_type: local-shell-script
parameters:
  username:
    description: Username of the user account to lock
    type: string
    required: true
entry_point: lockout.sh
  • Open the lockout.sh file and add the following content:
#!/bin/bash

username="{{ username }}"

# Execute commands to lock the user account
usermod -L $username
  • Set up the pack's virtual environment:
st2 run packs.setup_virtualenv packs=user-account-lockout
  • Test the action by running
st2 run user-account-lockout.user_account_lockout username=<username>

Incident Status Update:

Description: Update the status of an incident, providing real-time information on the remediation progress.

Command/Code: st2 run incident.update_status

  • Create a new StackStorm pack:
st2 pack create incident-status-update
  • Create a new action file
st2 action create incident_status_update.yaml
  • Open the incident_status_update.yaml file and add the following content:
name: incident_status_update
description: Update the status of an incident
runner_type: local-shell-script
parameters:
  incident_id:
    description: Identifier of the incident
    type: string
    required: true
  status:
    description: New status of the incident
    type: string
    required: true
entry_point: status_update.sh
  • Open the status_update.sh file and add the following content:
#!/bin/bash

incident_id="{{ incident_id }}"
status="{{ status }}"

# Execute commands to update the incident status
# E.g., update a ticketing system, send a notification, etc.
echo "Incident $incident_id status updated to $status"
  • Set up the pack's virtual environment:
st2 run packs.setup_virtualenv packs=incident-status-update
  • Test the action by running:
st2 run incident-status-update.incident_status_update incident_id=<incident_id> status=<new_status>

Incident Resolution:

Description: Close the incident after successful remediation and notify the team about the resolution.

Command/Code: st2 run incident.resolve

  • Create a new StackStorm pack:
st2 pack create incident-resolution
  • Create a new action file:
st2 action create incident_resolution.yaml
  • Open the incident_resolution.yaml file and add the following content:
name: incident_resolution
description: Resolve an incident and notify the team
runner_type: local-shell-script
parameters:
  incident_id:
    description: Identifier of the incident
    type: string
    required: true
entry_point: resolution_script.sh
  • Open the resolution_script.sh file and add the following content:
#!/bin/bash

incident_id="{{ incident_id }}"

# Execute commands to resolve the incident
# E.g., close a ticket, notify the team, etc.
echo "Incident $incident_id resolved successfully"
  • Set up the pack's virtual environment:
st2 run packs.setup_virtualenv packs=incident-resolution
  • Test the action by running:
st2 run incident-resolution.incident_resolution incident_id=<incident_id>

Secure Pipeline Using Jenkins Declarative Pipeline

pipeline {
    agent any
    
    environment {
        DOCKER_REGISTRY = "your_docker_registry"
        DOCKER_CREDENTIALS_ID = "your_docker_credentials_id"
        SONARQUBE_URL = "your_sonarqube_url"
        SONARQUBE_TOKEN = "your_sonarqube_token"
    }
    
    stages {
        stage('Build') {
            steps {
                script {
                    git 'https://github.com/devopscube/declarative-pipeline-examples.git'
                    sh 'mvn clean install'
                }
            }
        }
        
        stage('SonarQube Scan') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    script {
                        sh "mvn sonar:sonar -Dsonar.projectKey=my_project -Dsonar.host.url=${SONARQUBE_URL} -Dsonar.login=${SONARQUBE_TOKEN}"
                    }
                }
            }
        }
        
        stage('Containerize') {
            steps {
                script {
                    sh "docker build -t ${DOCKER_REGISTRY}/my-app:${BUILD_NUMBER} ."
                    // Use the Jenkins credentials store instead of hard-coding registry credentials
                    withCredentials([usernamePassword(credentialsId: env.DOCKER_CREDENTIALS_ID, usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS')]) {
                        sh "echo \$DOCKER_PASS | docker login -u \$DOCKER_USER --password-stdin ${DOCKER_REGISTRY}"
                    }
                    sh "docker push ${DOCKER_REGISTRY}/my-app:${BUILD_NUMBER}"
                }
            }
        }
        
        stage('Deploy') {
            steps {
                script {
                    sh "kubectl apply -f kube-deployment.yaml"
                }
            }
        }
    }
    
    post {
        success {
            echo "Pipeline executed successfully!"
        }
        
        failure {
            echo "Pipeline execution failed!"
        }
        
        always {
            echo "Cleaning up..."
            sh "docker logout ${DOCKER_REGISTRY}"
        }
    }
}

In this pipeline, the stages include building the project, performing a SonarQube scan, containerizing the application, and deploying it using Kubernetes. The pipeline also handles post-execution actions based on the success or failure of the pipeline.

Make sure to replace the placeholders with appropriate values, such as your_docker_registry, your_docker_credentials_id, your_sonarqube_url, and your_sonarqube_token, to match your environment.

References

  • https://devopscube.com/declarative-pipeline-parameters/