
Webblog App Part 3 – Consul Connect Service Mesh


Overview

We’ve reached Part 3 of our Webblog app series. In this post, we show how to use Consul Connect Service Mesh with our Webblog app. As a reminder, my goal was to learn the different HashiCorp tools by developing a web app called the Webblog app. In Part 1, we discussed Infrastructure as Code and Policy as Code concepts, and we showed how to prepare the infrastructure for the app using Terraform. We built a Vault cluster backed by a Consul cluster for storage, all on Google Kubernetes Engine (GKE) using Terraform providers. In Part 2, we looked at the typical crawl, walk, run phases of adopting HashiCorp Vault as a centralized Secrets Management solution.

In this post, we will move our simple application onto a service mesh, using it to connect the Python app to the MongoDB database.

tl;dr: you can find the code for this part in the Webblog Demo repo, and there is a video explanation below.

Video Explanation

Pre-requisites

The following accounts are required to follow along:

Moving to Consul Connect Service Mesh

Before the Service Mesh

If you recall, the Webblog app is a simple Python/Flask app that talks to a MongoDB database. Both the Python app and MongoDB are containerized and orchestrated via Kubernetes. So far, the app has talked to MongoDB through a native K8s service definition, relying on CoreDNS for service discovery. While this is fine for a simple app like ours, an environment with hundreds of microservices calls for a service mesh solution.
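For reference, the pre-mesh database configuration looked roughly like this, with the app resolving MongoDB by its K8s service DNS name (the service and namespace names here are illustrative, not necessarily the demo’s exact values):

# .env (before the service mesh) - service and namespace names are hypothetical
DB_SERVER="mongodb.default.svc.cluster.local"
DB_PORT=27017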

After the Service Mesh

We will now move the app to use the Consul Connect Service Mesh. Below is a diagram showing the control plane and the data plane for Consul Connect. You can also reference the Understand Consul Service Mesh guide to learn more about Consul Connect.

Webblog App using Consul Connect Service Mesh
Webblog App using Consul Connect Service Mesh

Consul Connect Configuration

Let’s take a look at how to configure Consul Connect for our application.

Consul Connect Injector

We need to add K8s annotations to allow the Consul Connect Injector to inject the Envoy proxy as a sidecar into our pods. This injector is basically a K8s mutating admission controller webhook. Below is an image showing the pods running in my Consul namespace in K8s. You can see a Consul server, two Consul clients, and the Consul Connect Injector Webhook.
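The injector itself is deployed through the Consul Helm chart. Below is a minimal sketch of the relevant values.yaml settings based on the consul-helm chart; the values shown are assumptions, not necessarily the demo’s exact configuration:

# values.yaml (consul-helm chart) - illustrative settings
connectInject:
  enabled: true    # deploys the mutating admission webhook that injects Envoy sidecars
client:
  enabled: true    # Consul clients run on every node
server:
  replicas: 1      # a single Consul server, as in the demo
  connect: true    # enables the built-in Connect CA on the servers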

Consul Connect Injector Webhook

K8s Annotations for the Consul Connect Injector

The configuration needed to allow Consul Connect to inject the Envoy proxy into our Python Flask app is very simple.

  • First, we enable injection by setting the connect-inject annotation to “true”.
  • Second, we tell our app how to connect to MongoDB by specifying the upstream annotation (see the sketch below).

The format for the upstream looks like this:

<Consul Service to connect to>:<Port>:<Consul Data Centre>
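Putting the two annotations together, the pod template for the Flask app might look roughly like this (the service name mongodb, port 27017, and datacenter dc1 are illustrative):

# Deployment pod template (excerpt) - annotation values are illustrative
template:
  metadata:
    annotations:
      # Inject the Envoy sidecar proxy into this pod
      "consul.hashicorp.com/connect-inject": "true"
      # <service>:<port>:<datacenter> - the sidecar listens on localhost:27017
      # and forwards traffic to the mongodb service
      "consul.hashicorp.com/connect-service-upstreams": "mongodb:27017:dc1"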

Below is an image showing the configuration:

Python Flask App K8s Annotations for the Consul Injector

On the MongoDB side, we only need to add the annotation to enable connect-inject. Since MongoDB doesn’t initiate connections to anything else, we don’t need the upstream annotation. Below is an example of that in the MongoDB Helm chart’s values.yaml file.
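A minimal sketch of what that could look like, assuming the chart exposes a podAnnotations value that is passed through to the MongoDB pod:

# values.yaml (MongoDB Helm chart) - assumes a podAnnotations passthrough
podAnnotations:
  # Inject the Envoy sidecar; no upstreams needed since MongoDB only accepts connections
  "consul.hashicorp.com/connect-inject": "true"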

MongoDB K8s Annotations for the Consul Injector

As a result, we now have multiple sidecars as shown in the image below.

Multiple Sidecars Added to Our Pods

App Code Changes

All we need to do now is make a simple change to our Python app to tell it how to communicate with its proxy. The proxies then route the traffic between themselves seamlessly. In our .env file, we add the following:

DB_SERVER="127.0.0.1"
DB_PORT=27017

As you can see, the app simply points at localhost (127.0.0.1) to talk to its local proxy, which in turn connects to MongoDB through the mesh. As a developer, Consul Connect really simplifies connectivity since I don’t have to worry about embedding the destination IP in my code.
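For illustration, here is a minimal sketch of how the Flask app could build its MongoDB connection from those environment variables, assuming pymongo (the database name and defaults are hypothetical):

import os
from pymongo import MongoClient

# With Connect, the app talks to its local Envoy sidecar on 127.0.0.1,
# and the sidecar forwards the connection to MongoDB over mTLS.
db_server = os.getenv("DB_SERVER", "127.0.0.1")
db_port = int(os.getenv("DB_PORT", "27017"))

client = MongoClient(db_server, db_port)
db = client["webblog"]  # hypothetical database name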

Consul Connect Service Mesh in Action

Envoy Proxies Injected

We can now check our Consul dashboard. As you can see in the image below, we have our services registered along with their proxy sidecars.

Consul Dashboard Showing Registered Services with their Sidecar Proxies

Intentions

Think of Intentions as a way to define access control for services via Connect: they control which services may establish connections to which other services. You can read more about Intentions in the official documentation. In the image below, we explicitly allow the frontend Python app to access the MongoDB service. The default intention behaviour is defined by the default ACL policy: if the default ACL policy is allow all, then all Connect connections are allowed by default; if it is deny all, then all Connect connections are denied by default. We will see this behaviour in Part 4 of this series when we enable ACLs. In our case, we don’t have any ACLs enabled, so the default behaviour for Intentions is to allow all. Still, I like to explicitly define intentions so things are clear.
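Intentions can be created in the Consul UI (as shown in the image below) or via the CLI. Here is a minimal sketch assuming the services are registered as “frontend” and “mongodb” (the demo’s actual service names may differ):

# Explicitly allow the frontend service to connect to mongodb
consul intention create frontend mongodb

# Check whether a connection from frontend to mongodb would be authorized
consul intention check frontend mongodb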

Allowing the Python App Frontend to talk to MongoDB via Intentions

Envoy Mutual Transport Layer Security (mTLS)

A very important feature of Consul Connect’s Service Mesh is securing the communication between microservices via mTLS. Consul Connect handles this for us out of the box with the built-in CA. There is also an option to use HashiCorp Vault for certificate management. To learn more, check out the documentation for certificate management with Consul Connect.
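If you do want Vault as the Connect CA, the provider is switched through the CA configuration. A minimal sketch follows; the Vault address, token, and PKI paths below are placeholders:

# ca.json - Vault CA provider configuration (placeholder values)
{
  "Provider": "vault",
  "Config": {
    "Address": "https://vault.example.com:8200",
    "Token": "<vault-token>",
    "RootPKIPath": "connect-root",
    "IntermediatePKIPath": "connect-intermediate"
  }
}

# Apply the new CA configuration
consul connect ca set-config -config-file=ca.json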

In the image below we see the Envoy certificate. It’s worth noting the certificate’s short expiration date. This is great security-wise, and the added benefit is that we don’t need to manage the life-cycle of these certificates ourselves; Consul rotates them automatically.

mTLS Certificate used for Communication between Services

Conclusion

There is a lot to cover when it comes to Consul Connect Service Mesh. We only scratched the surface by showing how easy and secure it is to connect two services. To learn more about Consul Connect’s features such as Layer 7 Traffic Management, the different available gateways, observability, and other topics, please refer to the documentation.

Up to this point, we’ve seen our Webblog app running in a Consul Connect Service Mesh on K8s, making use of Vault, and built via Terraform. Join us in Part 4 of this journey as we explore how to use the entire HashiStack to run our Webblog app, replacing K8s with Nomad as the orchestrator.
