The nVisium Blog

Event-Driven Kubernetes Security: Bringing in the Brigade

Brigade provides event-based scripting for Kubernetes that allows you to build complex pipelines and workflows between your containers and other systems. An open-source project released by the Microsoft Azure team, Brigade itself is written in Go, while Brigade scripts are written in JavaScript with limited access to Node.js APIs. If you like serverless runtimes or Function as a Service (FaaS) technology, then you'll love Brigade. Its components run as native pods and services on your Kubernetes cluster. Using Brigade, you can chain together functions and sequences of logic that are triggered by events and executed across containers. The focus of this post is to demonstrate use cases for Brigade by performing basic event-driven tasks in your cluster.

How Brigade Works

Brigade has several components related to its architecture that run within your cluster and handle events as they are triggered. Scripts are written in JavaScript, but loading external modules is not currently supported. Scripts are registered as handlers for events, and when triggered, allow you to chain together function execution across containers.
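
To make the model concrete, here is a minimal, illustrative brigade.js sketch. The event name and images are placeholders ("exec" is the event that brig run fires by default), and it simply chains two jobs in order:

const { events, Job, Group } = require("brigadier")

// "exec" is the event that brig run fires by default
events.on("exec", function(e, project) {
  // Each job runs in its own container on the cluster
  var build = new Job("build", "alpine:3.6", ["echo building"])
  var notify = new Job("notify", "alpine:3.6", ["echo notifying"])
  // Run the jobs one after the other
  Group.runEach([build, notify])
})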

Brigade Architecture

Gateway

The Gateway is deployed as a Kubernetes Deployment behind an ingress or service. It receives external triggers (such as webhooks), looks up the associated project and its script, and converts the trigger into a Brigade event. GitHub and DockerHub webhooks are supported in the initial release, with other integrations on the roadmap.

Controller

Brigade's Controller runs as a Kubernetes Deployment. It listens for Brigade events and handles them by starting workers, operating as a queue that processes events in first-in, first-out (FIFO) order.

Worker

Workers execute scripts in pods, where each worker runs exactly one script. A worker executes its script and finishes when the script completes, fails, or times out. Workers create shared storage so that jobs can exchange data on a single filesystem, and they spawn one pod per job. When the build is complete, the worker attempts to destroy the resources it created.

API Server

The API server provided by Brigade offers a handful of endpoints used to query information on builds, projects, and jobs. The following endpoints are available:

  • /v1/projects
  • /v1/project/:id
  • /v1/project/:id/builds
  • /v1/build/:id
  • /v1/build/:id/jobs
  • /v1/job/:id
  • /v1/job/:id/logs
  • /v1/healthz

Please note that the API server does not require authentication, and thus you should either place an authenticating proxy in front of it or significantly limit access to the server's listening port. Additionally, ensure you require TLS encryption for communications with the API server.

Project

Projects are the context in which Brigade scripts are executed. A project encapsulates key information such as authorization to run scripts, authentication credentials, and the Version Control System (VCS) to provision code from, and it provides an interface to Kubernetes APIs such as secret management.

In some scenarios, a project will merely be a pointer to an external code repository, wherein code can dynamically be loaded from a proper versioning system. Brigade currently supports GitHub, and there are other integrations on the way.

Event

Events are the triggers that cause script execution. When an event fires that is registered in your brigade.js file, Brigade runs the corresponding handler and executes your pipeline. An event has the following attributes:

  • Project - nvisium-project
  • Event Name - push
  • Entity - dockerhub
  • Script - defaults to ./brigade.js in the VCS
  • Payload - contains event data
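
As a rough sketch (exact fields vary by gateway and Brigade version), a handler can inspect these attributes on the event and project objects it receives:

const { events } = require("brigadier")

events.on("push", function(e, project) {
  // e.type and e.provider identify the event and the gateway that fired it
  console.log("event: " + e.type + " from " + e.provider)
  // e.commit and e.payload carry the revision and the raw event data
  console.log("commit: " + e.commit)
  // project exposes project-level settings such as the repository
  console.log("repo: " + project.repo.cloneUrl)
})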

Job

Jobs are launched by workers and executed as pods. Each job runs a container until it completes, throws an error, or times out. A job reports its results back to the worker that spawned it, which is how it interacts with the script. Multiple tasks can run inside a single job.
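
For example, a single job can run several tasks in order inside one container. A small sketch (placed inside an event handler; the image and commands are just placeholders):

var audit = new Job("audit", "alpine:3.6")
// These tasks run sequentially inside the same container
audit.tasks = [
  "uname -a",
  "cat /etc/os-release",
  "echo 'all checks finished'"
]
audit.run()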

VCS Sidecar

The VCS Sidecar is a Docker image that understands the underlying VCS repository and makes its contents available to your builds and jobs. Currently, git-sidecar is the only supported module.

Getting Started

Brigade works on any Kubernetes implementation, whether on Azure, GCP, or bare metal. To test Brigade, you'll install the server and run your scripts locally using Minikube, a tool for locally testing and deploying your Kubernetes services. You'll be using Minikube with VirtualBox, and the instructions assume you're using OSX. Installing Minikube is easy with Homebrew:

brew cask install minikube

Once Minikube is installed, it will download the images the first time you start the service. To start Minikube, use this command:

minikube start

After you’ve started Minikube, you can manage your cluster through the kubectl command, the Dashboard, or via SSH. To start the dashboard, use:

minikube dashboard

The Dashboard will be running on your master node on TCP port 30000, for example http://<minikube-ip>:30000/

Next, you'll need to install Brigade. The preferred way to install Brigade is with Helm. Helm is a tool to manage Kubernetes workloads as packages. In a nutshell, you can deploy complex applications with the ease of a package manager. Helm uses Charts and their YAML format to define and configure applications. To get started, install Helm using Homebrew:

brew install kubernetes-helm

After installing Helm, install Brigade:

helm repo add brigade https://azure.github.io/brigade
helm install --namespace brigade brigade/brigade --name brigade-server

After you've installed the Brigade server, it's time to create a project and load scripts.

git clone https://github.com/Azure/brigade.git
cd brigade
helm install --name brigade ./nvisium-chart/brigade

The brigade-project Helm chart can be copied in order to create a new project. From the Git repository that you cloned in the previous step, execute the following sequence of commands:

cd nvisium-chart
helm inspect values ./brigade-project > values.yaml
helm install --name nvisium-project ./brigade-project -f values.yaml

The brig command line utility allows us to interact with the Brigade server to load scripts and control their execution. First, make sure your Go toolchain is installed and configured properly. Ensure that the $GOPATH variable is set in your ~/.bash_profile:

export GOPATH=/Users/yourusername/path-to-brigade

To build the client, make sure you are in the root of the cloned Brigade repository:

make bootstrap build-client
bin/brig --help

With brig installed, you can run scripts within your projects. From the local system, load a brigade.js file and run it on the server using the following syntax:

brig run -f brigade.js security/analyze-cluster

The Brigade repository contains over a dozen example scripts that you can run with the brig command to get a feel for what it's capable of: Example Scripts

Running Scripts

Now that you've set up your environment and understand what Brigade offers, let's create a first script, register it for an event, and trigger its execution. The source code for the examples in this post can be found on GitHub: brigade-security-scripts

Using Secrets

Kubernetes offers a secrets management API that lets you leverage underlying platform features to securely pass credentials to containers at runtime, exposing them only to the users, containers, and processes you want to access them. As of Kubernetes 1.7, encryption of secrets at rest is available as a feature, helping ensure that secrets arrive with their confidentiality preserved. Each secret used in our Brigade scripts requires its own secret entry. The following syntax can be used to create a secret:

kubectl create secret generic db-user-pw --from-file=./username.txt --from-file=./password.txt
secret "db-user-pw" created

You can also create a secret from a configuration:

kubectl create -f ./secret.yaml
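
Within a Brigade script, project-level secrets (defined in your project's Helm values) are exposed through project.secrets, and the cleanest way to hand one to a container is via the job's environment. A minimal sketch, assuming a hypothetical project secret named DB_PASSWORD:

const { events, Job } = require("brigadier")

events.on("push", function(e, project) {
  var migrate = new Job("db-migrate", "alpine:3.6")
  // Pass the secret as an environment variable instead of
  // hardcoding it in the script or the task string
  migrate.env = {
    DB_PASSWORD: project.secrets.DB_PASSWORD
  }
  migrate.tasks = ["echo 'running migration with injected credentials'"]
  migrate.run()
})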

Scanning for Vulnerable Components

Many software products use libraries and components with known vulnerabilities, which introduce security problems into your otherwise good code. Fortunately, there are some terrific open-source tools that can help you keep vulnerable libraries out of your code. OWASP Dependency Check is a well-maintained and mature tool that you can run every time code is pushed to GitHub. The sample script to run the tool can be found on GitHub.

First, you need to integrate GitHub with Brigade. You can achieve this using webhooks configured on the GitHub website. You'll need to make sure your gateway is publicly exposed and accessible over the internet, as GitHub pushes webhook events to your endpoint as they happen. For development purposes, ngrok works well for creating a tunnel from your local development box. You can create an endpoint with the following syntax:

ngrok tls -subdomain=encrypted 443

Follow the instructions here to set up the webhook integration: Configuring GitHub

After you've configured GitHub, you'll want to scan your code for vulnerable dependencies every time your developers push code. Create your brigade.js file in the repository's root directory, watch for push events, and trigger your pipeline on each commit. You also need to enable storage on each job (job.storage.enabled = true) so they can share their results under the /mnt/brigade/share directory.

To scan the code and consume the results, you'll split your workflow into three parts: clone, scan, and report. In the reporting phase, the results are sent to a Slack channel using webhooks.

const { events, Job, Group } = require("brigadier")

events.on("push", function(e, project) {
  var clone = new Job("clone", "alpine/git:latest")
  clone.tasks = [
    "cd /mnt/brigade/share",
    "git clone " + project.repo.cloneUrl"
  ]
  var scan = new Job("scan", "owasp/dependency-check:latest", ["./bin/dependency-check.sh --project brigade-push-scan --out . --scan /mnt/brigade/share/" + project.repo.name])
  var report = new Job("slack-notify", "technosophos/slack-notify:latest", ["/slack-notify"])
  report.env = {
    SLACK_WEBHOOK: project.secrets.SLACK_WEBHOOK,
    SLACK_USERNAME: "DependencyCheckBot",
    SLACK_TITLE: "Vulnerable Dependency Identified in " + project.repo.name,
    SLACK_MESSAGE: "A vulnerable dependency was found in " + project.repo.name + " and <parse what you want>"
  }
  clone.storage.enabled = true
  scan.storage.enabled = true
  var runGroup = new Group()
  runGroup.add(clone)
  runGroup.add(scan)
  runGroup.add(report)
  // Run the jobs one at a time so each finishes before the next starts
  runGroup.runEach()
})

After receiving the push event, you'll need to parse the repository information to determine what code to clone. In this example, we are using a public repository for simplicity's sake. In reality, you may need to pass in an SSH key in order to clone; use a Kubernetes Secret for this rather than hardcoding the key and persisting it in plain text (see the sketch after the task list below). We use the alpine/git image to handle cloning our repository.

var clone = new Job("clone", "alpine/git:latest")

Then, we add commands to the clone job with tasks:

 clone.tasks = [
    "cd /mnt/brigade/share",
    "git clone " + project.repo.cloneUrl"
  ]
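
If the repository were private, one hedged approach (the SSH_KEY secret name here is hypothetical) is to inject the key from the project's secrets as an environment variable and write it out before cloning:

// Hypothetical sketch: inject an SSH key stored as a project secret
// so the clone job can authenticate against a private repository
clone.env = {
  SSH_KEY: project.secrets.SSH_KEY
}
clone.tasks = [
  "mkdir -p ~/.ssh && echo \"$SSH_KEY\" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa",
  "cd /mnt/brigade/share",
  "git clone " + project.repo.cloneUrl
]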

Next, you'll pump the code into Dependency Check to scan your dependencies and build files for vulnerable components. Dependency Check is published to DockerHub so it's straightforward to use in your pipeline.

var scan = new Job("scan", "owasp/dependency-check:latest", ["./bin/dependency-check.sh --project brigade-push-scan --out . --scan /mnt/brigade/share/" + project.repo.name])

Once Dependency Check completes, you'll need to do something with your output. Although there is an enterprise version of Dependency Check with a reporting backend, we are simply going to publish our results to a Slack channel. First, we create the notification job:

var report = new Job("slack-notify", "technosophos/slack-notify:latest", ["/slack-notify"])

Then, we configure our environment variables for the Slack webhook:

  report.env = {
    SLACK_WEBHOOK: project.secrets.SLACK_WEBHOOK,
    SLACK_USERNAME: "DependencyCheckBot",
    SLACK_TITLE: "Vulnerable Dependency Identified in " + project.repo.name,
    SLACK_MESSAGE: "A vulnerable dependency was found in " + project.repo.name + " and <parse what you want>"
  }

Now, every time your developers push to a GitHub repository that you are monitoring, their project will automatically be scanned for vulnerable dependencies and your team will be notified in real-time.

Running Automated Tests

Brigade can run your test suite when code is committed or a new container is published. When code is pushed, an event is triggered that lets you fire off your test suite. Here is an example derived from Brigade samples and hosted on GitHub:

const { events, Job, Group } = require('brigadier')

events.on("push", function(e, project) {
    console.log("received push for commit " + e.commit)

    // Create a new job
    var node = new Job("python-test-runner")
    node.storage.enabled = false

    // We want our job to run the stock Docker Python 3 image
    node.image = "python:3"

    // Now we want it to run these commands in order:
    node.tasks = [
      "cd /src/",
      "ls -la",
      "pip install -r requirements.txt",
      "python setup.py test"
    ]

    // We're done configuring, so we run the job
    node.run()
  })

Additional Use Cases

We can do a lot more than just scan our dependencies and run tests. If you're curious about implementing your own webhook handlers to extend Brigade, take a look at how the DockerHub handler is built: DockerHub webhook handler

Hardening Brigade

Given the power of Brigade and the privileged access it has to receive events and execute code within containers in your cluster, it is important that you deploy it securely. The Brigade wiki has security best-practices to follow. From a Brigade and Kubernetes perspective, you need to ensure you're securely implementing the following controls:

  • Run Brigade in its own namespace - install it with helm install --namespace brigade brigade/brigade --name brigade-server and don't run more than one per namespace.
  • Use Role-Based Access Control (RBAC) - add the --set rbac-enabled=true flag when installing Brigade.
  • Avoid multi-tenancy - Brigade's access controls do not permit clean isolation between users.
  • Utilize TLS for gateway webhooks and API communications.

From a programmatic perspective, Brigade allows us to run scripts inside of our containers as well as execute arbitrary JavaScript. We need to be careful that we’re safely handling untrusted data passed into our scripts via event triggers. For example, the following would allow for command injection if an attacker were able to control the payload passed into our script:

events.on("push", function(e, project) {
  var node = new Job("run-the-bad", "alpine:3.6", ["eval " + e.payload])
  node.run()
})

Ensure that you're handling untrusted data safely if you are consuming it within your JavaScript. This also applies to other dangerous functions such as JavaScript's eval() and other commonly exploited attack vectors. While scripts only have access to a limited subset of Node's APIs for security reasons, there are still ways to get yourself in trouble.
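
One way to reduce that risk, sketched below under the assumption of a GitHub-style push payload, is to treat the payload strictly as data: parse it, validate the fields you need, and pass them to the container through environment variables rather than concatenating them into a shell command:

const { events, Job } = require("brigadier")

events.on("push", function(e, project) {
  // Parse the payload as data rather than executing it
  var payload = JSON.parse(e.payload)
  var repoName = payload.repository && payload.repository.name
  // Illustrative allow-list check before the value is used anywhere
  if (!repoName || !/^[A-Za-z0-9._-]+$/.test(repoName)) {
    throw new Error("unexpected repository name in payload")
  }
  var job = new Job("run-the-good", "alpine:3.6")
  // The value reaches the shell as an environment variable, not as code
  job.env = { REPO_NAME: repoName }
  job.tasks = ["echo \"scanning $REPO_NAME\""]
  job.run()
})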

Conclusion

As we've seen, the use cases for Brigade are nearly endless, and we hope you are as excited as we are about its potential. The shift toward running serverless models on container orchestration frameworks is accelerating, and Brigade both simplifies that shift and makes advanced event-driven tasks easier to implement. We hope the project continues to evolve and gains widespread adoption once people catch on and see how awesome it is.