Vault Containers as a service

Ensuring the security of sensitive data within development workflows is paramount, yet managing secrets and credentials securely poses a significant challenge. In this blog, let's address this issue by exploring how to set up HashiCorp Vault inside a Docker container, enabling seamless integration with GitHub workflows.

If you want to know more about setting up Vault and its configuration, check here.

Creating Container from Dockerfile

Let's start by creating a container with HashiCorp Vault installed. The filesystem storage backend stores Vault's data on the filesystem using a standard directory structure. It can be used for durable single-server situations, or for local development where durability is not critical. Following is the Dockerfile to dockerize Vault with the filesystem backend.

# base image
FROM alpine:3.14

# set vault version
ENV VAULT_VERSION 1.16.0

# create a new directory
RUN mkdir /vault

# download dependencies
RUN apk --no-cache add \
      bash \
      ca-certificates \
      wget \
      curl \
      unzip

# download and set up vault
RUN wget --quiet --output-document=/tmp/vault.zip https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip && \
    unzip /tmp/vault.zip -d /vault && \
    rm -f /tmp/vault.zip && \
    chmod +x /vault/vault

# update PATH
ENV PATH="$PATH:/vault"

# add the config file
COPY ./vault-config.hcl /vault/config/vault-config.hcl

# copy jwt config.json file
COPY ./jwt-config.json /vault/jwt-config.json

# make a data dir and create vault.db file
RUN mkdir -p /vault/data && touch /vault/data/vault.db

# expose port 8200
EXPOSE 8200

# run vault
ENTRYPOINT ["vault"]

Following is the vault-config.hcl file. It configures Vault to use the filesystem storage backend, defines the TCP listener, disables TLS, and enables the Vault UI. Read more about configuring Vault in the docs.

listener "tcp" {
    address = "0.0.0.0:8200"
    # No TLS Certificate Generated
    tls_disable = true
}

# Filesystem storage
storage "file" {
    path = "./vault/data"
}

# mlock prevents Vault's memory from being swapped to disk.
# Uncomment only for local development; never disable mlock in production.
# disable_mlock = true

api_addr = "http://localhost:8200"
cluster_addr = "http://127.0.0.1:8201"
ui = true

We will be using JWT/OIDC authentication so that specific workflows are automatically authorized to retrieve secrets. The following jwt-config.json defines the role:

{
    "name": "<your_role>",
    "role_type": "jwt",
    "user_claim": "repository",
    "bound_audiences": [
        "https://github.<enterprise_domain>.com/<org>"
    ],
    "bound_claims_type": "string",
    "bound_claims": {
        "job_workflow_ref": ["<OWNER>/<REPO>/.github/workflows/<WORKFLOW_FILE>.yaml@refs/heads/<ref>"]
    },
    "ttl": "100",
    "token_policies": ["quality-check-policy"]
}
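Under the hood, Vault admits a workflow only if the claims inside GitHub's OIDC token satisfy the role's `bound_audiences` and `bound_claims` (the `user_claim` merely names the claim used as the entity alias). The following Python sketch shows just that matching step with made-up token and role values; a real token is also signature-verified by Vault against the OIDC discovery endpoint.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.
    Vault additionally verifies the signature via the OIDC discovery URL."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def role_matches(claims: dict, role: dict) -> bool:
    """Sketch of the bound_audiences / bound_claims checks only."""
    if not set(role["bound_audiences"]) & set(claims.get("aud", [])):
        return False
    return all(
        claims.get(name) in allowed
        for name, allowed in role["bound_claims"].items()
    )

# Illustrative payload resembling GitHub's OIDC claims (values are made up)
claims = {
    "aud": ["https://github.example.com/my-org"],
    "repository": "my-org/my-repo",
    "job_workflow_ref": "my-org/my-repo/.github/workflows/ci.yaml@refs/heads/main",
}
role = {
    "bound_audiences": ["https://github.example.com/my-org"],
    "bound_claims": {
        "job_workflow_ref": [
            "my-org/my-repo/.github/workflows/ci.yaml@refs/heads/main"
        ]
    },
}
print(role_matches(claims, role))  # True
```

A token from any other workflow file or branch produces a different `job_workflow_ref` and is rejected, which is what scopes secret access to one specific workflow.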

Build the image from the Dockerfile

docker build -t vault-service /path/to/build-context

Note that the final argument is the build context directory (which contains the Dockerfile), not the path to the Dockerfile itself.

Following is the docker-compose.yaml deployment for the Vault service. The container will attempt to lock memory to prevent sensitive values from being swapped to disk, so it must be granted the IPC_LOCK capability (the equivalent of --cap-add=IPC_LOCK on docker run). In the official Vault image, where the binary runs as a non-root user, setcap is used to give it the ability to lock memory. The VAULT_API_ADDR variable defines the HTTP API address that Vault advertises.

version: '3.8'

services:
  vault-service:
    image: vault-service:latest
    container_name: vault-service-container
    ports:
      - 8200:8200
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
      - VAULT_API_ADDR=http://127.0.0.1:8200
    command: server -config=/vault/config/vault-config.hcl
    cap_add:
      - IPC_LOCK

This spins up a container that you can exec into to configure Vault:

docker exec -it vault-service-container /bin/sh

Let’s speedrun through the commands to initialize Vault and enable JWT authentication:

# Initialize
vault operator init -key-shares=1 -key-threshold=1

# Unseal
vault operator unseal <unseal_key>

# Login
vault login <root_token>

# Enable Audit Logging
mkdir logs
vault audit enable file file_path=./logs/vault_audit.log

# Enable kv version 2 engine (the policy below uses the v2 data/ path)
vault secrets enable -path=secret kv-v2

# Enable JWT
vault auth enable jwt

# Configure the JWT auth method
vault write auth/jwt/config \
     oidc_discovery_url="https://github.<enterprise_domain>.com/_services/token" \
     bound_issuer="https://github.<enterprise_domain>.com/_services/token"

# Login to the Vault UI at http://127.0.0.1:8200/ui and insert your secrets

# Create access policy
vault policy write quality-check-policy - << EOF
path "secret/data/<secret_path>" {
    capabilities = ["read"]
}
EOF

# jwt-config.json is already present in the /vault directory
# Write your jwt role
vault write auth/jwt/role/<your_role> @/vault/jwt-config.json

# Check policy and role
vault read auth/jwt/role/<your_role>
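One subtlety worth calling out: with KV version 2, secrets are written and read at the logical path secret/<secret_path>, but policies and the HTTP API address them as secret/data/<secret_path> (and secret/metadata/<secret_path> for listing and version info), which is why the policy above includes data/. A small illustrative helper (the function name is mine, not part of Vault) shows the mapping:

```python
def kv2_api_path(mount: str, secret_path: str, op: str = "data") -> str:
    """Map a logical KV v2 path to the API path used in policies and requests.
    op is "data" for reads/writes, "metadata" for listing and version info."""
    return f"{mount.strip('/')}/{op}/{secret_path.strip('/')}"

# "apikeys" is an illustrative secret path, not one from this setup
print(kv2_api_path("secret", "apikeys"))              # secret/data/apikeys
print(kv2_api_path("secret", "apikeys", "metadata"))  # secret/metadata/apikeys
```

Forgetting the data/ segment in a policy is a common cause of "permission denied" errors when a workflow tries to read a KV v2 secret.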

TIP:

If you are copying sensitive data into your image during the build, you can use a multi-stage build so that it does not persist in the final image:

# this is our first build stage, it will not persist in the final image
FROM ubuntu as intermediate

# add sensitive configuration and secrets at build time
ARG <KEY>
COPY ...

FROM ubuntu
# copy the files from the previous stage
COPY --from=intermediate /src_dir /dest_dir
...

Create an Image from a Container

  1. To save the configured container as an image, use the docker commit command:
    ~ docker commit vault-service-container
    sha256:0c17f0798823c7febc5a67d5432b48f525320d671beb2e6f04303f3da2f10432
    
  2. Tag the Image:
    docker tag 0c17f0798823 ghcr.io/owner/repository_name/image_name:latest
    
    NOTE: The new image name must follow the naming convention of the registry being used. In this example the GitHub Container Registry (ghcr.io) was used.
  3. Push: Ensure that you are logged in to your registry.
    docker push ghcr.io/owner/repository_name/image_name:latest
    

Using service containers in GitHub workflows

You can use service containers to connect databases, web services, memory caches, and other tools to your workflow. Service containers are Docker containers that provide a simple and portable way for you to host services that you might need to test or operate your application in a workflow. For example, your workflow might need to run integration tests that require access to a database and memory cache.

You can configure service containers for each job in a workflow. GitHub creates a fresh Docker container for each service configured in the workflow, and destroys the service container when the job completes. Steps in a job can communicate with all service containers that are part of the same job.

name: Promotion / Deployment
on: push

jobs:
  ... 

  # Label of the container job
  quality-check-job:
    # Containers must run in Linux based operating systems
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    # Docker Hub image that `quality-check-job` executes in
    container: node:latest

    # Service containers to run with `quality-check-job`
    services:
      # Label used to access the service container
      quality-check:
        # Docker Hub image
        image: ghcr.io/owner/repository_name/image_name:latest
        ports:
          # Maps TCP port 8200 on the service container to the host
          - 8200:8200

    steps:
    # <-----  USING VAULT ACTION  ----->
    - name: Retrieve Secrets From Vault
      id: retrieve-secret-from-vault
      uses: hashicorp/vault-action@v2.7.4
      with:        
        method: jwt
        url: http://quality-check:8200 # Service label is the hostname when the job runs in a container
        role: <your_role>
        jwtGithubAudience: "https://github.<enterprise_domain>.com/<org>"
        tlsSkipVerify: true # If TLS is disabled in your vault server
        exportEnv: true
        # Each entry exports the secret to an environment variable
        secrets: |
          <your_secret_engine>/data/apikeys <your_secret> | YOUR_SECRET ;
          ... other secrets
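Each entry in the secrets input follows the pattern <path> <field> | <OUTPUT_NAME>, with entries separated by semicolons. The following Python sketch mimics that parsing so you can sanity-check your entries; it is illustrative only, not vault-action's actual implementation (for instance, the fallback output-name rule here is simplified to uppercasing the field).

```python
def parse_secret_specs(spec: str) -> list[tuple[str, str, str]]:
    """Parse 'path field | ENV_NAME' entries separated by ';', in the style
    of hashicorp/vault-action's `secrets` input. Illustrative sketch only."""
    out = []
    for entry in spec.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        selector, _, env_name = entry.partition("|")
        path, field = selector.split()
        # Fall back to an uppercased field name when no output name is given
        out.append((path, field, env_name.strip() or field.upper()))
    return out

# Illustrative paths and fields, not ones from this setup
specs = "secret/data/apikeys api_key | API_KEY; secret/data/db password | DB_PASS"
print(parse_secret_specs(specs))
```

With exportEnv: true, each resolved value becomes an environment variable under the given output name for the remaining steps of the job.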