
HashiCorp Packer to Build a Ubuntu 20.04 Image Template in VMware


Automate the creation of Ubuntu 20.04 Image Templates in VMware using HashiCorp Packer

IaC, HashiCorp, Terraform, Packer, CI/CD, VMware
Created: September 27, 2021 | Updated: November 30, 2021

Overview

Tools such as Packer and Terraform from HashiCorp have been widely used in cloud environments. Lately, however, VMware has also been getting a lot of attention. In speaking with multiple customers, we've found that private cloud teams recognize the benefits of the Infrastructure as Code (IaC) workflows used in cloud environments and, as a result, are implementing IaC on-premises as well. The goal of this blog post is to give those private cloud teams an example of how to automate the creation of an Ubuntu 20.04 VMware template with Packer. In a subsequent blog post, we will see how to use Terraform to provision VMs by cloning this VMware template.

UPDATE: This is a shout-out to my colleague Kalen Arndt for his excellent work on Packer with cloud-init.

Code

Subscribe to my newsletter to get access to the source code.

Video

Below is a video explanation and demo.

HashiCorp Packer to Build a Ubuntu 20.04 Template in VMware

Video Chapters

You can skip to the relevant chapters below:

  • 00:00 - Introduction
  • 01:30 - Setup
  • 02:29 - Demo Starts
  • 04:59 - Configuration Walk-Through Starts
  • 12:37 - Main Packer File
  • 16:43 - Packer Variables Files
  • 19:00 - Image Build Completes
  • 19:25 - Conclusion

Prerequisites

The following is required to follow along:

  • Packer (tested on Packer v1.6.6)
  • Access to a vSphere instance (tested on vSphere v6.7)

Setup

Below is our setup diagram.

Setup Diagram

Configuration

Let's take a look at the most important configuration pieces needed.

Folder Structure

Below is the structure of the repo folder.

Folder Structure
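The exact layout ships with the source code, but purely as an illustration based on the files referenced throughout this post, the repo could look something like this (the scripts folder placement and root folder name are assumptions):

packer-vmware-ubuntu2004/
    http/
        meta-data
        user-data
    scripts/
        setup_ubuntu2004_withDocker.sh
    ubuntu-20.04.pkr.hcl
    ubuntu-20.04.json
    variables.pkrvars100GBdisk.hcl
    variables.pkrvars650GBdisk.hcl
    variables-100GBdisk.json
    variables-secrets.json
    vsphere.pkrvars-example.hcl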

Ubuntu Server Installer for 20.04 LTS

Ubuntu 20.04 is installed with the new subiquity live server installer; the classic debian-installer-based server installer has been discontinued. Therefore, we can't rely on the preseed file that we used in the past. Instead, we will rely on cloud-init. Notice that we used a preseed file for Ubuntu 18.04 in our HashiCorp Packer for VMware Ubuntu 18.04 templates video.

Here is the announcement:

With 20.04 LTS, we will be completing the transition to the live server installer and discontinuing the classic server installer based on debian-installer (d-i), allowing us to focus our engineering efforts on a single codebase. The next-generation subiquity server installer brings the comfortable live session and speedy install of Ubuntu Desktop to server users.

Cloud-Init

cloud-init comes pre-installed in the official Ubuntu 20.04 live server image. It uses a user-data file to configure things such as the following:

  • Setting a default locale
  • Creating a hostname
  • Generating ssh private keys
  • Adding ssh keys to a user's .ssh/authorized_keys so they can log in
  • Setting up ephemeral mount points

More information is available in the cloud-init documentation.
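As a minimal, standalone illustration of plain cloud-config covering a couple of the items above (every value here is a placeholder and not part of this build; the actual file we use is the autoinstall user-data shown in the next section):

#cloud-config
hostname: demo-host
locale: en_US.UTF-8
ssh_authorized_keys:
    - ssh-rsa AAAA...placeholder-public-key... user@example.local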

User-Data File

The user-data file lives in the http folder along with an empty file called meta-data. The meta-data file is required; it's used for cloud deployments, but since we are not deploying to the cloud, we can leave it empty. Let's take a look at what the user-data file looks like.
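But first, if you want to stage these files by hand, the following is all it takes (assuming the folder is named http, as it is in this repo):

mkdir -p http
touch http/meta-data    # required by the installer, intentionally left empty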

User-Data File Content

Below is the content of the file. Notice that we can install packages here. We also put in the public key so we can ssh into the machine later. We have the option to run both early and late commands. We stop the ssh service as an early command because it interferes with Packer; otherwise Packer thinks the process has timed out, which may result in an error.

#cloud-config
autoinstall:
    version: 1
    early-commands:
        # workaround to stop ssh for packer as it thinks it timed out
        - sudo systemctl stop ssh
    locale: en_US
    keyboard:
        layout: en
        variant: us
    packages: [open-vm-tools, openssh-server, net-tools, perl, open-iscsi, ntp, curl, vim, ifupdown, zip, unzip, gnupg2, software-properties-common, apt-transport-https, ca-certificates, lsb-release, python3-pip, jq]
    network:
        network:
            version: 2
            ethernets:
                ens192:
                    dhcp4: true
    identity:
        hostname: ubuntu-server
        username: ubuntu
        password: "$6$rounds=4096$ntlX/dlo6b$HXaLN4RcLIGaEDdQdR2VTYi9pslSeXWL131MqaakqE285Nv0kW9KRontQYivCbycZerUMcjVsuLl2V8bbdadI1"
    ssh:
        install-server: yes
        allow-pw: yes
        authorized-keys:
            - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb7fcDZfIG+SxuP5UsZaoHPdh9MNxtEL5xRI71hzMS5h4SsZiPGEP4shLcF9YxSncdOJpyOJ6OgumNSFWj2pCd/kqg9wQzk/E1o+FRMbWX5gX8xMzPig8mmKkW5szhnP+yYYYuGUqvTAKX4ua1mQwL6PipWKYJ1huJhgpGHrvSQ6kuywJ23hw4klcaiZKXVYtvTi8pqZHhE5Kx1237a/6GRwnbGLEp0UR2Q/KPf6yRgZIrCdD+AtOznSBsBhf5vqcfnnwEIC/DOnqcOTahBVtFhOKuPSv3bUikAD4Vw7SIRteMltUVkd/O341fx+diKOBY7a8M6pn81HEZEmGsr7rT ubuntu@ubuntu.local
    storage:
        layout:
            name: lvm
    user-data:
        disable_root: false
    late-commands:
        - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
        - curtin in-target --target=/target -- chmod 440 /etc/sudoers.d/ubuntu
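Since YAML indentation mistakes are easy to make, a quick sanity check that the file at least parses can save a failed build. This one-liner assumes python3 and PyYAML (the python3-yaml package) are available on your workstation:

python3 -c 'import yaml; yaml.safe_load(open("http/user-data")); print("user-data: OK")'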

User-Data Considerations

We need to generate a hashed password for the identity section of the user-data file. We use the mkpasswd utility on Ubuntu; it ships with the whois package, so we install that first as shown below.

apt-get install whois
mkpasswd -m sha-512 --rounds=4096

We run the command above, type the desired password at the Password: prompt, and hit Enter to get the hash.

Example:

Password:
$6$KU2P9m78xF3n$noEN/CV.0R4qMLdDh/TloUplmJ0DLnqi6/cP7hHgfwUu.D0hMaD2sAfxDT3eHP5BQ3HdgDkKuIk8zBh0mDLzO1
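If you prefer a non-interactive variant (for example, to script it), mkpasswd can read the password from standard input with -s. The password below is just a placeholder, and keep in mind that echoing it this way leaves it in your shell history:

echo 'MyPlaceholderPassword' | mkpasswd -m sha-512 --rounds=4096 -s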

Main Packer Files

I've included both the HCL and the JSON configuration for Packer. Either one works. My preference is to move towards HCL to stay consistent with the rest of the HashiCorp tools, such as Terraform and Vault.

In this section, let's focus on the HCL configuration. The file is called ubuntu-20.04.pkr.hcl, and it's pretty straightforward.

Notice below that the user-data and meta-data files are attached to the VM as a CD-ROM labeled cidata, which is the label cloud-init's NoCloud datasource looks for:

source "vsphere-iso" "linux-ubuntu-server" {
  ...truncated
  http_directory = var.http_directory
  boot_order = "disk,cdrom"
  boot_wait = var.vm_boot_wait
  cd_files = [
        "./${var.http_directory}/meta-data",
        "./${var.http_directory}/user-data"]
  cd_label = "cidata"
  ...truncated
}

Also notice in the build section how we use a shell provisioner to execute scripts.

build {
  sources = [
    "source.vsphere-iso.linux-ubuntu-server"]
  provisioner "shell" {
    execute_command = "echo '${var.ssh_password}' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
    environment_vars = [
      "BUILD_USERNAME=${var.ssh_username}",
    ]
    scripts = var.shell_scripts
    expect_disconnect = true
  }
 }

We also feed in two variables files: one is vCenter-specific, called vsphere.pkrvars-example.hcl, and the other is VM-specific, called variables.pkrvars650GBdisk.hcl.
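The actual variable names and values ship with the source code. Purely as an illustration, a vCenter-specific variables file could look something like the following, where every name and value is a hypothetical placeholder:

vcenter_server     = "vcenter.example.local"
vcenter_username   = "administrator@vsphere.local"
vcenter_password   = "SuperSecretPassword"
vcenter_datacenter = "Datacenter"
vcenter_cluster    = "Cluster01"
vcenter_datastore  = "datastore1"
vm_network         = "VM Network"

The VM-specific file typically carries the sizing values (CPUs, memory, the 650 GB disk) along with the shell_scripts list consumed by the provisioner above.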

Provisioning Scripts

Below is the script that runs during the provisioning phase, setup_ubuntu2004_withDocker.sh. Cleaning the machine-id is very important to make the template reusable when cloning it later to generate VMs with Terraform.

#!/usr/bin/bash

echo '> Cleaning apt-get ...'
apt-get clean
# Cleans the machine-id.
echo '> Cleaning the machine-id ...'
rm /etc/machine-id
touch /etc/machine-id
# Start iscsi and ntp
echo '> Start iscsi and ntp ...'
systemctl restart iscsid
systemctl restart ntp
# Cleanup for linux customization in Terraform
mkdir /etc/dhcp3
# Fix VMware Customization Issues KB56409
sed -i '/^\[Unit\]/a After=dbus.service' /lib/systemd/system/open-vm-tools.service
awk 'NR==11 {$0="#D /tmp 1777 root root -"} 1' /usr/lib/tmpfiles.d/tmp.conf | tee /usr/lib/tmpfiles.d/tmp.conf
# Disable Cloud Init
touch /etc/cloud/cloud-init.disabled
# Install docker
apt update -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt update -y
apt-cache policy docker-ce
apt install docker-ce -y
groupadd docker
usermod -aG docker ubuntu

echo '> Packer Template Build -- Complete'

Build with Packer

Now that we have configured our files, it's time to build with Packer. As mentioned earlier, you can use JSON files or HCL.

Running Packer Build with HCL

packer build -force -on-error=ask -var-file variables.pkrvars100GBdisk.hcl -var-file vsphere.pkrvars.hcl ubuntu-20.04.pkr.hcl
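Before kicking off a build, you can also have Packer validate the template with the same variable files; this step is optional but catches syntax and variable errors early:

packer validate -var-file variables.pkrvars100GBdisk.hcl -var-file vsphere.pkrvars.hcl ubuntu-20.04.pkr.hcl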

Running Packer Build with JSON

packer build -force -on-error=ask -var-file variables-100GBdisk.json -var-file variables-secrets.json ubuntu-20.04.json

Follow the Packer output logs to see the image get generated successfully. You can also check what's going on with the VM inside your vSphere client.
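If you use VMware's govc CLI, you can also confirm the resulting template from the command line. The template name below is a placeholder, and the command assumes the usual GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables are set:

govc vm.info ubuntu-2004-template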

Troubleshooting Tips

  • If Packer gets stuck on Waiting for IP, check your DHCP server. I'm using a home router, and it had accumulated too many leases from running Packer many times. I had to flush the inactive DHCP clients or reboot the router, which is faster.
  • Open the vSphere web console and take a look at the output as the VM is getting created. This can give you some hints as to what is going on.
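When the cause isn't obvious from the console output, enabling Packer's verbose logging usually helps. Setting PACKER_LOG=1 turns it on, and PACKER_LOG_PATH writes the output to a file:

PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build -force -on-error=ask -var-file variables.pkrvars100GBdisk.hcl -var-file vsphere.pkrvars.hcl ubuntu-20.04.pkr.hcl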

Conclusion

In this blog post, we demonstrated how to create an Ubuntu 20.04 image in VMware using HashiCorp Packer. As you saw, our configuration is all defined in code; there is no need to click around inside the vSphere client to generate this image. We've automated the task of building gold images in VMware, which makes the process repeatable and self-documented, just a couple of the benefits of IaC. The next step is to use Terraform to provision VMs by cloning this Packer-generated image. You can find out more in our Build a Kubernetes k3s Cluster in vSphere with Terraform and Packer post.
