Mastering Configuration as Code with Configu

In the ever-evolving landscape of DevOps and cloud computing, the principle of Configuration as Code (CaC) has emerged as a cornerstone, revolutionizing how we manage and provision our software environments. Today, I’m thrilled to introduce you to Configu, a powerful tool that epitomizes the essence of CaC, offering a unified solution for managing configurations seamlessly across various environments. This comprehensive guide will delve into the intricacies of Configu, demonstrating its utility through a real-world application.



Unveiling the Power of Configu

At its core, Configu is an open-source tool designed to streamline, test, and automate the application of configurations across different environments. It embodies the principles of Configuration as Code, allowing teams to manage their configuration settings as if they were source code in a version control system.

But why is Configu becoming an indispensable tool in the DevOps toolkit? The answer lies in its ability to centralize configurations in one accessible location, enhancing visibility across teams and significantly reducing the margin for human error.

Why Embrace Configuration as Code with Configu?

The adoption of Configuration as Code practices, facilitated by tools like Configu, brings a multitude of benefits:

  • Centralization of Configurations: Configu consolidates configurations, enabling teams to have a single source of truth for all environment settings. This centralization fosters better collaboration and understanding across development, staging, and production environments. Since this is configuration as code, your configuration schemas live in version control systems.

  • Enhanced Visibility and Control: With all configurations stored centrally, teams gain unprecedented visibility into the configurations applied across environments, making it easier to track and audit changes.

  • Reduction of Human Error: By automating the application of configurations, Configu minimizes the risks associated with manual configuration processes, such as typos or omissions, thereby enhancing reliability.
  • Versatile Exporting Capabilities: Configu stands out with its ability to export configurations in multiple formats, catering to diverse deployment needs, from Kubernetes ConfigMaps to Terraform variable files.

Getting Started with Configu: A Step-by-Step Demonstration

Embarking on your journey with Configu is straightforward. This section will guide you through the initial setup; later, we will showcase how to leverage Configu to manage a monitoring stack that includes Prometheus, Grafana, and Loki.

Linux Install

The first step involves installing Configu on your system. Linux users, for instance, can utilize a curl script for installation.

curl | sh

Create an Example Schema

Once installed, initializing Configu is as simple as running a command that generates a schema file. This JSON-formatted file serves as a blueprint for your configurations, defining the structure and default values.

configu init --getting-started

You’ll get a file called start.cfgu.json that looks like this:

{
  "GREETING": {
    "type": "RegEx",
    "pattern": "^(hello|hey|welcome|hola|salute|bonjour|shalom|marhabaan)$",
    "default": "hello"
  },
  "SUBJECT": {
    "type": "String",
    "default": "world"
  },
  "MESSAGE": {
    "type": "String",
    "template": "{{GREETING}}, {{SUBJECT}}!",
    "description": "Generates a full greeting message"
  }
}
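The RegEx constraint on GREETING means any upsert with a non-matching value is rejected before it ever reaches a store. You can get a feel for what the pattern accepts locally with plain `grep -E` (this just mimics the check; Configu performs the real validation):

```shell
# The GREETING pattern from start.cfgu.json, checked locally with grep -E
pattern='^(hello|hey|welcome|hola|salute|bonjour|shalom|marhabaan)$'

echo 'bonjour' | grep -qE "$pattern" && echo 'valid'    # matches the schema
echo 'goodbye' | grep -qE "$pattern" || echo 'invalid'  # would fail validation
```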

Login to Configu SaaS

Configu offers multiple ConfigStores. We will use the Configu SaaS store, which requires creating an account on the Configu platform first.

Now we can log in from the CLI.

configu login

Set Config Values

The Upsert command is used to create, update, or delete Configs in a ConfigStore. Let’s add some values as shown below.

configu upsert \
  --store 'configu' --set 'Staging' --schema './start.cfgu.json' \
  --config 'GREETING=bonjour'

configu upsert \
  --store 'configu' --set 'QA' --schema './start.cfgu.json' \
  --config 'SUBJECT=Sam'

Get Configuration Files

Now let’s see how to get some config files in multiple formats.

JSON Config Files

The Eval command fetches and validates Configs from a ConfigStore, and the Export command writes them out, in this case as a .json file.

configu eval \
  --store 'configu' --set 'QA' --schema './start.cfgu.json' \
  | configu export \
  --format 'JSON' \
  > 'greeting.json'

and here is what our greeting.json configuration file looks like:

{
  "GREETING": "hello",
  "SUBJECT": "Sam",
  "MESSAGE": "hello, Sam!"
}
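A JSON export like this is easy to consume from scripts. Here is a small sketch that recreates the file inline (so you can try it without a Configu account) and pulls a single value out with `sed`, avoiding a `jq` dependency:

```shell
# Recreate the exported greeting.json (values from the QA set above)
cat > greeting.json <<'EOF'
{
  "GREETING": "hello",
  "SUBJECT": "Sam",
  "MESSAGE": "hello, Sam!"
}
EOF

# Pull a single value out for use in a script
message=$(sed -n 's/.*"MESSAGE": "\(.*\)".*/\1/p' greeting.json)
echo "$message"   # hello, Sam!
```

In real pipelines `jq -r .MESSAGE greeting.json` does the same job more robustly.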

.env Config Files

Let’s now export to a .env file.

configu eval \
  --store 'configu' --set 'QA' --schema './start.cfgu.json' \
  | configu export \
  --format "Dotenv" \
  > ".env"

and here is our .env file:

GREETING="hello"
SUBJECT="Sam"
MESSAGE="hello, Sam!"
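Dotenv files are valid shell assignments, so the simplest way to consume one locally is to source it. A minimal sketch, recreating the exported file inline so it runs standalone:

```shell
# Recreate the exported .env (values from the QA set) and load it into the shell
cat > .env <<'EOF'
GREETING="hello"
SUBJECT="Sam"
MESSAGE="hello, Sam!"
EOF

set -a      # auto-export every variable the file defines
. ./.env
set +a

echo "$MESSAGE"   # hello, Sam!
```

Tools like docker compose read the same file directly via `env_file`, with no sourcing needed.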

Kubernetes ConfigMap File

Time to now export to a Kubernetes ConfigMap file.

configu eval \
  --store 'configu' --set 'QA' --schema './start.cfgu.json' \
  | configu export \
  --format 'KubernetesConfigMap' \
  > "kubeconfigmap.yaml"

as you guessed, we get a properly formatted Kubernetes ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: '2024-02-21T15:06:46.280Z'
  name: configs-1708528006280
data:
  GREETING: hello
  SUBJECT: Sam
  MESSAGE: hello, Sam!

Terraform Configuration Files

Finally let’s export to a Terraform tfvars file.

configu eval \
  --store 'configu' --set 'QA' --schema './start.cfgu.json' \
  | configu export \
  --format 'TerraformTfvars' \
  > ""

and the result is a properly formatted .tfvars file.

greeting = "hello"
subject = "Sam"
message = "hello, Sam!"
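For Terraform to accept these tfvars, the configuration needs matching variable declarations. A minimal sketch of what a corresponding variables file might look like (names assumed to mirror the exported keys; not taken from the repo):

```hcl
# Hypothetical variables.tf entries matching the exported tfvars keys
variable "greeting" {
  type    = string
  default = "hello"
}

variable "subject" {
  type = string
}

variable "message" {
  type = string
}
```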

Real-World Application: Building a Monitoring Stack

With Configu at our disposal, let’s tackle a practical use case: deploying a monitoring stack. Our toolchain includes GitHub Actions for CI/CD, Terraform for infrastructure provisioning, Ansible for configuration management, and Docker Compose for running our application code.

This is our Repo structure:

├── Ansible
│   ├── ansible.cfg
│   ├── inventory
│   └── monitoringPlaybook.yaml
├── Intro
│   ├──
│   ├── greeting.json
│   ├── kubeconfigmap.yaml
│   ├── start.cfgu.json
│   └──
├── MonitoringStack
│   ├── alertmanager
│   │   └── config.yml
│   ├── docker-compose.yml
│   ├── grafana
│   │   ├── config.monitoring
│   │   └── provisioning
│   │       └── datasources
│   │           └── datasource.yml
│   ├── loki
│   │   └── config
│   │       └── loki-config.yaml
│   ├── prometheus
│   │   ├── alert.rules
│   │   └── prometheus.yml
│   └── promtail
│       └── config.yml
├── Terraform
│   ├── development.hcl
│   ├──
│   ├──
│   ├──
│   ├── production.hcl
│   ├── terraform.tfstate
│   ├── terraform.tfstate.backup
│   └──
└── monitoring.cfgu.json
12 directories, 27 files


Let’s set up our configurations ahead of time. You can do this via the UI or the CLI. We will do it once for dev and once for prod using the CLI commands below.

configu upsert --store 'configu' --set 'Development/Monitoring' --schema './monitoring.cfgu.json' \
  -c 'prefix=configu' \
  -c 'region=us-east-1' \
  -c 'address_space=' \
  -c 'subnet_prefix=' \
  -c 'instance_type=t2.micro' \
  -c 'my_aws_key=mykey.pem'

configu upsert --store 'configu' --set 'Production/Monitoring' --schema './monitoring.cfgu.json' \
  -c 'prefix=configu' \
  -c 'region=us-west-1' \
  -c 'address_space=' \
  -c 'subnet_prefix=' \
  -c 'instance_type=t2.micro' \
  -c 'my_aws_key=mykey_prod.pem'
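The monitoring.cfgu.json schema itself isn’t reproduced in this post, but based on the keys we upsert, it plausibly declares something like the following (the types here are assumptions, not the repo’s actual file):

```json
{
  "prefix": { "type": "String" },
  "region": { "type": "String" },
  "address_space": { "type": "String" },
  "subnet_prefix": { "type": "String" },
  "instance_type": { "type": "String" },
  "my_aws_key": { "type": "String" },
  "ip": { "type": "String" }
}
```

The ip key is the one the GitHub Actions workflow writes back after Terraform provisions the instance.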

Check the ConfigSets in the Configu UI:

ConfigSets in the Configu UI

Drilling into our Production/Monitoring ConfigSet we see all the configs.

Production/Monitoring ConfigSet

Note that the ip key won’t show up at first; it will appear once we run the GitHub Actions workflow.

Similarly, we can see the same keys with different values for the Development/Monitoring ConfigSet.

Development/Monitoring ConfigSet

We can now retrieve these values from within the pipeline.

GitHub Actions

We’ve set up a GitHub Actions workflow that is triggered manually and requires a “Deployment Environment” input to run.

Run the GitHub Actions Workflow

Go ahead and run the workflow once for production and once for development.

Once everything is green, you can go back to Configu to get the IP address of the EC2 instance for both prod and dev.

GitHub Actions Workflow Success

Back in Configu we can see our IP address for dev and prod:

IP for Dev
IP for Prod

Now if you open a browser and go to <ip>:3000 you’ll get the Grafana dashboard, and at <ip>:9090 you’ll get the Prometheus dashboard.

Here’s a screenshot:

Grafana Dashboard

You can use the credentials:

username: admin
password: foobar

and here’s a screenshot from Prometheus:

Prometheus Dashboard

Moreover, below are a couple screenshots from the AWS console showing first the Production EC2 instance followed by the Development EC2 instance:

Production EC2 Instance
Development EC2 Instance

Now let’s take a look at the GitHub Actions Workflow which is heavily commented for your benefit.

# Define the name of the GitHub Actions workflow
name: Monitoring Stack
# Trigger the workflow manually with options for 'environment'
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Deployment Environment' # Description for the manual input
        required: true # Makes input mandatory
        default: 'development' # Sets the default environment to 'development'
        type: choice # Allows selection between predefined options
        options:
          - production
          - development
# Define the jobs to be run
jobs:
  deploy:
    runs-on: ubuntu-latest # Specifies the runner environment
    env:
      TF_LOG: ERROR # Sets the Terraform log level to ERROR
      CONFIGU_ORG: ${{ secrets.CONFIGU_ORG }} # Sets the Configu organization from GitHub secrets
      CONFIGU_TOKEN: ${{ secrets.CONFIGU_TOKEN }} # Sets the Configu token from GitHub secrets
      TF_TOKEN_app_terraform_io: ${{ secrets.TF_TOKEN_app_terraform_io }} # Terraform Cloud token for authentication
    steps:
    # Checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it
    - name: Checkout code
      uses: actions/checkout@v2
    # Sets up Configu CLI for use in actions
    - name: Setup Configu CLI
      uses: configu/setup-cli-action@v1
    # Conditionally exports configuration for Development environment
    - name: Export configurations for Development
      if: ${{ github.event.inputs.environment == 'development' }}
      run: configu eval --store 'configu' --set 'Development/Monitoring' --schema './monitoring.cfgu.json' | configu export --format 'TerraformTfvars' > "./Terraform/"
    # Conditionally exports configuration for Production environment
    - name: Export configurations for Production
      if: ${{ github.event.inputs.environment == 'production' }}
      run: configu eval --store 'configu' --set 'Production/Monitoring' --schema './monitoring.cfgu.json' | configu export --format 'TerraformTfvars' > "./Terraform/"
    # Sets up Python using version 3.x
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    # Installs a specific version of Terraform
    - name: Install Terraform
      uses: hashicorp/setup-terraform@v2
      with:
        terraform_version: '1.5.7'
    # Initializes Terraform with dynamic backend configuration based on the selected environment
    - name: Initialize Terraform with Dynamic Backend Config
      run: terraform init
      working-directory: ./Terraform
      env:
        TF_CLI_ARGS_init: -backend-config=${{ github.event.inputs.environment }}.hcl
    # Applies the Terraform configuration
    - name: Apply Terraform Configuration
      run: terraform apply -auto-approve
      working-directory: ./Terraform
    # Generates an SSH key file from Terraform output
    - name: Generate SSH Key File
      run: terraform output -raw private_key > /tmp/myKey.pem
      working-directory: ./Terraform
    # Sets the correct permissions for the SSH key
    - name: Set Permission for SSH Key
      run: chmod 400 /tmp/myKey.pem
    # Sets the public IP address as an environment variable for later use
    - name: Set Public IP as Env Var
      run: echo "PUBLIC_IP=$(terraform-bin output -raw public_ip)" >> $GITHUB_ENV
      working-directory: ./Terraform
    # Debug step to print the content of GITHUB_ENV to the log
    - name: Debug GITHUB_ENV Content
      run: cat $GITHUB_ENV
    # Conditionally adds the Terraform IP to Configu for the Development environment
    - name: Add Terraform IP to Configu for Development
      if: ${{ github.event.inputs.environment == 'development' }}
      run: configu upsert --store 'configu' --set 'Development/Monitoring' --schema './monitoring.cfgu.json' -c "ip=${PUBLIC_IP}"
    # Conditionally adds the Terraform IP to Configu for the Production environment
    - name: Add Terraform IP to Configu for Production
      if: ${{ github.event.inputs.environment == 'production' }}
      run: configu upsert --store 'configu' --set 'Production/Monitoring' --schema './monitoring.cfgu.json' -c "ip=${PUBLIC_IP}"
    # Updates the Ansible inventory with the public IP obtained from Terraform
    - name: Update Ansible Inventory with Public IP
      run: sed -i "s/<placeholder_app>/$PUBLIC_IP/g" Ansible/inventory
    # Installs Ansible using pip
    - name: Install Ansible
      run: pip3 install --user ansible
    # Debug step to show the updated Ansible inventory file
    - name: Show Ansible Inventory (Debugging)
      run: cat Ansible/inventory
    # Executes the Ansible playbook to deploy the monitoring stack
    - name: Run Ansible Playbook
      run: ansible-playbook --private-key /tmp/myKey.pem -i inventory monitoringPlaybook.yaml
      working-directory: ./Ansible


The Terraform configuration files are fairly straightforward, and you can see them in the GitHub repository. Let’s hit the highlights.

The Foundation: Terraform Setup

Our Terraform configuration begins with specifying the required Terraform version and the necessary providers, ensuring compatibility and a smooth execution process. We’re leveraging the AWS provider for resource management within AWS and the TLS provider for generating secure SSH keys, crucial for secure access to our instances.

Architecting the AWS Infrastructure

  • Virtual Private Cloud (VPC): We kick off by creating an AWS VPC with a custom CIDR block, establishing a private network space where our resources will reside. This VPC is the first step towards a modular and secure infrastructure.

  • Subnet Creation: Within our VPC, we carve out a subnet using a specified CIDR block. This subnet defines a sub-section of our network, tailored for deploying our resources.
  • Security Group Setup: Security is paramount, hence the configuration of a security group. This virtual firewall defines the rules controlling traffic to and from the resources within our VPC, specifically allowing SSH (port 22), Grafana (port 3000), and Prometheus (port 9090) access.
  • Internet Gateway: To connect our VPC to the internet, we provision an internet gateway. This component is essential for our instances to communicate with the outside world, facilitating monitoring data visibility.
  • Routing Table: The routing table configuration ensures that traffic from our subnet can reach the internet through the internet gateway, enabling outbound internet access.
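The networking pieces above translate to Terraform resources roughly like the following (a sketch only; resource names and attribute wiring are illustrative, not necessarily those in the repo):

```hcl
# Illustrative sketch of the VPC, subnet, gateway, and route wiring
resource "aws_vpc" "main" {
  cidr_block = var.address_space # CIDR from Configu
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.subnet_prefix # Subnet CIDR from Configu
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0" # Send all outbound traffic to the internet gateway
    gateway_id = aws_internet_gateway.gw.id
  }
}
```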

Deploying the Compute Resource

  • EC2 Instance: The core of our monitoring stack is an AWS EC2 instance, selected to run Ubuntu 20.04. This instance will host our monitoring tools (Prometheus, Grafana, Loki) within Docker containers.

  • Elastic IP (EIP) Association: To maintain a consistent IP address for our instance, we allocate and associate an Elastic IP. This static IP simplifies access to our monitoring dashboard.
  • SSH Key Pair: Security remains a top priority, so we generate an RSA private key and create an AWS key pair. This key pair is crucial for SSH access, ensuring that only authorized users can connect to the EC2 instance.
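The key-pair bullet, sketched in Terraform (again, names are illustrative; what matters is that the private key is exposed as an output, which is what the workflow’s `terraform output -raw private_key` step reads to write /tmp/myKey.pem):

```hcl
# Illustrative sketch: generate an RSA key and register it with AWS
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated" {
  key_name   = var.my_aws_key # e.g. mykey.pem, pulled from Configu
  public_key = tls_private_key.ssh.public_key_openssh
}

output "private_key" {
  value     = tls_private_key.ssh.private_key_pem
  sensitive = true
}
```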


Now take a look at our Ansible inventory file below:

monitoringvm ansible_host=<placeholder_app>

As mentioned in the GitHub Actions section, we replace the <placeholder_app> with the public IP address of the EC2 instance from the Terraform output.
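Outside of CI you can sanity-check that substitution with plain sed; the IP below is an example value standing in for the Terraform output:

```shell
# Recreate the inventory and perform the same substitution locally
cat > inventory <<'EOF'
monitoringvm ansible_host=<placeholder_app>
EOF

PUBLIC_IP=203.0.113.10   # example address; the pipeline reads this from Terraform output
sed -i "s/<placeholder_app>/$PUBLIC_IP/g" inventory
cat inventory   # monitoringvm ansible_host=203.0.113.10
```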

Below is the Ansible playbook with inline comments.

# Define the target hosts and privilege escalation details
- hosts: all
  become_user: root # Execute tasks as the root user
  become: true # Enable privilege escalation
  # Begin defining the tasks to be executed on the target hosts
  tasks:
    # Task to install pip3 and unzip, essential tools for managing Python packages and extracting archives
    - name: Install pip3 and unzip
      apt:
        update_cache: yes # Update the package cache to ensure we have the latest package versions
        pkg: # List of packages to install
        - python3-pip # The Python package installer
        - unzip # A utility to unpack compressed files
      register: result # Store the task result in a variable for later checks
      until: result is not failed # Retry the task until it succeeds
      retries: 5 # Number of retries
      delay: 5 # Delay between retries in seconds
    # Task to add the official Docker GPG key to the apt keyring to ensure package integrity
    - name: Add Docker GPG apt Key
      apt_key:
        url: # URL to the Docker GPG key
        state: present # Ensure the key is present
    # Task to add the Docker repository to the system's software repository list
    - name: Add Docker Repository
      apt_repository:
        repo: deb focal stable # Docker repo to add
        state: present # Ensure the repository is present
    # Task to update apt cache and install Docker Community Edition
    - name: Update apt and install docker-ce
      apt:
        name: docker-ce # The Docker package to install
        state: latest # Ensure the latest version is installed
        update_cache: true # Update the package cache
    # Task to install the Docker Python module, which Ansible uses to manage Docker containers
    - name: Install Docker module for Python
      pip:
        name: docker # The Docker module for Python
    # Task to copy the MonitoringStack directory from the control node to the target host
    - name: Copy MonitoringStack directory to remote
      copy:
        src: ../MonitoringStack/ # Source directory on the control node
        dest: /home/ubuntu/MonitoringStack/ # Destination on the target host
        directory_mode: 0755 # Set permissions for the copied directory
      become: true # Use privilege escalation
    # Task to deploy the monitoring stack using Docker Compose
    - name: Deploy monitoring stack
      shell: docker compose -f /home/ubuntu/MonitoringStack/docker-compose.yml up -d # Command to launch Docker Compose
      args:
        chdir: "/home/ubuntu/MonitoringStack" # Change to this directory before executing the command
      become: true # Use privilege escalation
      become_user: root # Execute the command as the root user
      register: docker_compose_output # Store the command output for retries
      retries: 5 # Number of retries if the command fails
      delay: 10 # Delay between retries in seconds
      until: docker_compose_output.rc == 0 # Repeat the command until it succeeds


Finally, we get to the docker-compose file that runs our monitoring application stack. I’ve also added inline comments here.

version: '3.8' # Specifies the Docker Compose file format version
volumes:
  prometheus_data: {} # Defines a volume for Prometheus data persistence
  grafana_data: {} # Defines a volume for Grafana data persistence
services:
  prometheus:
    image: prom/prometheus # Uses the official Prometheus image
    restart: always # Ensures Prometheus service restarts automatically
    volumes:
      - ./prometheus:/etc/prometheus/ # Mounts the Prometheus config directory
      - prometheus_data:/prometheus # Mounts the volume for data persistence
    command: # Overrides the default command to specify config and storage paths
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090 # Exposes Prometheus on port 9090
    depends_on:
      - cadvisor # Ensures cadvisor service is started before Prometheus
  node-exporter:
    image: prom/node-exporter # Uses the official Node Exporter image
    volumes: # Mounts system directories for Node Exporter to monitor host metrics
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command: # Configures Node Exporter to ignore certain mount points
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - '^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)'
    ports:
      - 9100:9100 # Exposes Node Exporter on port 9100
    restart: always
    deploy:
      mode: global # Deploys Node Exporter globally across all nodes in the swarm
  alertmanager:
    image: prom/alertmanager # Uses the official Alertmanager image
    restart: always
    ports:
      - 9093:9093 # Exposes Alertmanager on port 9093
    volumes:
      - ./alertmanager/:/etc/alertmanager/ # Mounts the Alertmanager config directory
    command: # Specifies the Alertmanager config file
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
  cadvisor:
    image: # Uses the cAdvisor image for container metrics
    volumes: # Mounts necessary directories for cAdvisor to monitor container metrics
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - 8080:8080 # Exposes cAdvisor on port 8080
    restart: always
    deploy:
      mode: global # Deploys cAdvisor globally across all nodes in the swarm
  grafana:
    image: grafana/grafana # Uses the official Grafana image
    user: '472' # Runs Grafana as a specified user
    restart: always
    environment: # Installs additional Grafana plugins
      GF_INSTALL_PLUGINS: 'grafana-clock-panel,grafana-simple-json-datasource'
    volumes:
      - grafana_data:/var/lib/grafana # Mounts the volume for Grafana data persistence
      - ./grafana/provisioning/:/etc/grafana/provisioning/ # Mounts the Grafana provisioning config directory
    env_file:
      - ./grafana/config.monitoring # Specifies the environment file for Grafana
    ports:
      - 3000:3000 # Exposes Grafana on port 3000
    depends_on:
      - prometheus # Ensures Prometheus service is started before Grafana
  loki:
    image: grafana/loki:2.8.0 # Uses a specific version of the Loki image
    volumes:
      - ./loki/config:/mnt/config # Mounts the Loki config directory
    ports:
      - 3100:3100 # Exposes Loki on port 3100
    command: -config.file=/etc/loki/local-config.yaml # Specifies the Loki config file
    restart: "always" # Ensures Loki service restarts automatically

Clean Up

I’ve created a clean-up GitHub Actions workflow that tears everything down by simply running terraform destroy. This workflow also takes an input for whether you want to destroy the dev or prod environment. Go ahead and run it when you’re done.

Embracing Configu in Your DevOps Workflow

Configu transcends being merely a tool; it represents a paradigm shift towards more efficient, reliable, and collaborative configuration management. By adopting Configu, teams can harness the full potential of Configuration as Code, streamlining their DevOps practices and elevating their deployment strategies.

Final Thoughts

Configuration as Code is set to transform the landscape of software deployment and management. Configu stands at the forefront of this revolution, offering a robust, user-friendly platform for implementing CaC practices. Whether you’re looking to enhance collaboration, reduce errors, or simply streamline your configuration management process, Configu offers the tools and flexibility needed to achieve your objectives.

As we conclude this guide, I encourage you to explore Configu further and consider integrating it into your DevOps toolkit. The journey towards efficient configuration management begins with a single step, and Configu is here to guide you every step of the way.

Remember, in the world of DevOps, innovation is key, and with Configu, you’re well-equipped to navigate the complexities of configuration management with confidence and ease. Happy configuring!
