
4. Containerlab


Here I will set up a dedicated VM for Containerlab, install vrnetlab to build images, deploy 2 Cat9kv nodes, and verify SSH reachability from the Ansible control node.

What I Will Be Completing In This Part

  • Provision a dedicated VM for Containerlab with nested virtualization and KVM passthrough
  • Install Docker and Containerlab
  • Transfer the Cat9kv .qcow2 image and build a vrnetlab Docker image from it
  • Write a Containerlab topology file deploying wan-r1 and wan-r2
  • Deploy the topology and verify console access to both devices
  • Configure management network routing so ansible-ctrl can reach both nodes over SSH

01 VM Specifications

Containerlab runs network device images as containers, but vrnetlab images boot a full virtual machine inside each container using QEMU/KVM. Booting a full IOS-XE VM per node is resource-hungry, so the Containerlab VM needs generous CPU and RAM.

Specs
OS:        Ubuntu Server 22.04 LTS
Hostname:  clab
CPU:       16 vCPU
RAM:       64 GB
Disk:      100 GB
Network:   1 NIC (bridged to management network)
CPU Type:  host

I assigned the VM a static IP and added a DNS entry for clab.
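Before installing anything, it is worth confirming that nested virtualization actually reached the guest: /dev/kvm must exist and the CPU flags must include vmx (Intel) or svm (AMD). A throwaway helper of my own (not part of any tool) for the check:

```shell
# Pre-flight check that KVM is usable inside this VM.
check_kvm() {
  if [ -e /dev/kvm ] && grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "kvm-ok"        # nested virt reached the guest
  else
    echo "kvm-missing"   # fix CPU type / nested virt on the hypervisor first
  fi
}
check_kvm
```

If this prints kvm-missing, the vrnetlab containers will fail to boot their QEMU VMs, so it is worth fixing before going further.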


02 Docker Installation

I installed Docker the same way I installed it on my Gitea VM.

First, I updated the packages

Bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg

Add Docker GPG key

Bash
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Add Docker repo

Bash
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
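For reference, on Ubuntu 22.04 (codename jammy) on amd64, the generated /etc/apt/sources.list.d/docker.list should contain a single line like:

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable
```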

Then install Docker

Bash
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Add user to docker group

Bash
sudo usermod -aG docker $USER
newgrp docker

Then verify

Bash
docker version

03 Containerlab Installation

I used the provided script from the official website to install Containerlab.

Bash
bash -c "$(curl -sL https://get.containerlab.dev)"

03a Cat9kv Image

I created a directory on the clab VM to store the images.

Bash
sudo mkdir -p /opt/vrnetlab-images
sudo chown $USER:$USER /opt/vrnetlab-images

Then I used WinSCP to transfer the image to that directory.

03b VRNetLab

vrnetlab is a project that packages network OS images into Docker containers. Each container runs QEMU internally to boot the actual network OS, but Containerlab manages it like any other container.

I installed the build dependencies and cloned the vrnetlab repo:

Bash
sudo apt install -y make git qemu-utils
cd /opt
git clone https://github.com/hellt/vrnetlab.git
cd vrnetlab
Line 1:
make is the build tool used by vrnetlab’s Makefiles.
Line 3:
This is the hellt/vrnetlab fork, which is the actively maintained version and the one Containerlab officially supports.

I then copied the Cat9kv .qcow2 image into the Cat9kv build directory.

Bash
cp /opt/vrnetlab-images/cat9kv-prd-17.15.01.qcow2 /opt/vrnetlab/cat9kv/
Info Each network OS has its own directory in the vrnetlab repo. The Makefile in each directory knows how to build a Docker image from the .qcow2 file placed inside it.

Then I ran the build:

Bash
cd /opt/vrnetlab/cat9kv
make

The build takes several minutes since it creates a Docker image that packages the .qcow2 file with QEMU and a launch script.

To verify the image was created:

Bash
docker images | grep cat9kv
Expected Output
vrnetlab/vr-cat9kv   17.15.01   abc123def456   2 minutes ago   1.8GB

04 Topology File

Next, I created a directory for the topology files and wrote the first one that will deploy 2 Cat9kv nodes as WAN routers.

Bash
mkdir -p /opt/clab/topologies
cd /opt/clab/topologies
/opt/clab/topologies/wan.clab.yml
---
name: wan

mgmt:
  network: mgmt-net
  ipv4-subnet: 172.20.20.0/24

topology:
  nodes:
    wan-r1:
      kind: cisco_cat9kv
      image: vrnetlab/vr-cat9kv:17.15.01
      mgmt-ipv4: 172.20.20.11
      startup-config: ../configs/wan-r1.cfg

    wan-r2:
      kind: cisco_cat9kv
      image: vrnetlab/vr-cat9kv:17.15.01
      mgmt-ipv4: 172.20.20.12
      startup-config: ../configs/wan-r2.cfg

  links:
    - endpoints: ["wan-r1:GigabitEthernet2", "wan-r2:GigabitEthernet2"]
Line 2:
The topology name. Containerlab prefixes all container names with clab-.
Lines 4-6:
The management network configuration. Containerlab creates a Docker bridge network named mgmt-net and attaches each node’s management interface to it.
Lines 11, 17:
kind: cisco_cat9kv tells Containerlab this is a vrnetlab-based Cat9kv node.
Lines 13, 19:
Static management IP.
Lines 14, 20:
Startup configuration files that Containerlab pushes to the devices on first boot.
Lines 22-23:
A point-to-point link between the 2 routers.
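The clab- prefix mentioned above means the two containers will be named clab-wan-wan-r1 and clab-wan-wan-r2. A trivial illustration of the clab-&lt;topology&gt;-&lt;node&gt; convention (my own helper, not a Containerlab command):

```shell
# Containerlab names containers clab-<topology-name>-<node-name>.
clab_name() { printf 'clab-%s-%s\n' "$1" "$2"; }
clab_name wan wan-r1   # -> clab-wan-wan-r1
clab_name wan wan-r2   # -> clab-wan-wan-r2
```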

I then created the startup config files that will be applied on boot.

Bash
mkdir -p /opt/clab/configs
/opt/clab/configs/wan-r1.cfg
hostname wan-r1
!
username admin privilege 15 secret admin
!
ip domain-name lab.local
crypto key generate rsa modulus 2048
!
interface GigabitEthernet1
 ip address dhcp
 no shutdown
!
ip ssh version 2
ip scp server enable
!
line vty 0 4
 login local
 transport input ssh
!
enable secret admin
!
Line 1:
Sets the device hostname to match the node name in the topology file.
Line 3:
Creates the local admin user that Ansible will use to log in.
Lines 5-6:
Set a domain name and generate the RSA key pair, both prerequisites for SSH.
Lines 8-10:
The management interface picks up its address via DHCP.
Lines 15-17:
Restrict the VTY lines to SSH with local authentication.
/opt/clab/configs/wan-r2.cfg
hostname wan-r2
!
username admin privilege 15 secret admin
!
ip domain-name lab.local
crypto key generate rsa modulus 2048
!
interface GigabitEthernet1
 ip address dhcp
 no shutdown
!
ip ssh version 2
ip scp server enable
!
line vty 0 4
 login local
 transport input ssh
!
enable secret admin
!
Warning The credentials in the startup configs are intentionally simple bootstrap credentials so that Ansible can connect immediately.
Tip Make sure the credentials in the startup configs match what’s in the Ansible vault files.

04a Deploying the Topology

Bash
cd /opt/clab/topologies
sudo clab deploy -t wan.clab.yml
-t:
Specifies the topology file to deploy.

Once Containerlab reports the deployment is complete, I checked the status:

Bash
sudo clab inspect -t wan.clab.yml
Expected Output
+---+------------------+--------------+---------------------+------+---------+----------------+------+
| # |       Name       | Container ID |        Image        | Kind |  State  |  IPv4 Address  | IPv6 |
+---+------------------+--------------+---------------------+------+---------+----------------+------+
| 1 | clab-wan-wan-r1  | abc123...    | vrnetlab/vr-cat9kv  | ...  | running | 172.20.20.11/24|      |
| 2 | clab-wan-wan-r2  | def456...    | vrnetlab/vr-cat9kv  | ...  | running | 172.20.20.12/24|      |
+---+------------------+--------------+---------------------+------+---------+----------------+------+

05 Accessing Devices

I verified console and SSH access from the clab host directly before setting up cross-VM connectivity.

Console access via Containerlab

Bash
sudo docker exec -it clab-wan-wan-r1 telnet localhost 5000
telnet localhost 5000:
vrnetlab containers expose the device console on port 5000 inside the container, so this connection is the equivalent of plugging in a console cable. Use Ctrl+] then quit to exit telnet.

I should see the IOS-XE prompt:

Expected Output
wan-r1#

SSH access from the clab host

Bash
ssh [email protected]

After accepting the host key and entering the password, I should land on the IOS-XE privileged EXEC prompt.


06 Management Network Connectivity

Now I need routing between the two subnets, since the Docker network inside the clab VM must be reachable from the Ansible control VM.

The approach I took is to make the clab VM act as a router between the physical management network and the Containerlab management network. I began by enabling IP forwarding on the clab VM and added a static route on ansible-ctrl.

1. Enable IP forwarding on clab

Enable it:

Bash
sudo sysctl -w net.ipv4.ip_forward=1
This enables the Linux kernel’s IP forwarding capability.

Then made it persistent:

Bash
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
Writes the setting to sysctl.conf so it survives reboots.
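The current value can be read back straight from /proc to confirm the setting took effect:

```shell
cat /proc/sys/net/ipv4/ip_forward   # 1 means the kernel will forward packets
```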

2. Add an iptables rule for NAT

Docker’s default networking applies NAT to outgoing traffic from containers, but it doesn’t automatically allow incoming routed traffic to reach containers. I added an iptables rule to allow forwarded traffic to the Containerlab management bridge:

Bash
sudo iptables -I DOCKER-USER -d 172.20.20.0/24 -j ACCEPT
sudo iptables -I DOCKER-USER -s 172.20.20.0/24 -j ACCEPT
DOCKER-USER:
Docker inserts its own iptables rules that can block forwarded traffic. The DOCKER-USER chain is specifically designed for user rules that should be evaluated before Docker’s own filtering.

Then made it persistent:

Bash
sudo apt install -y iptables-persistent
sudo netfilter-persistent save

3. Add a static route on ansible-ctrl

Added the route:

Bash
sudo ip route add 172.20.20.0/24 via 10.33.99.61
This tells ansible-ctrl that the 172.20.20.0/24 network is reachable through the clab VM.

Then made it persistent by adding it to the Netplan configuration:

/etc/netplan/00-installer-config.yaml
routes:
  - to: 172.20.20.0/24
    via: <clab-ip>
Bash
sudo netplan apply
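As a sanity check that this single /24 route covers both node management IPs while excluding the clab host itself, here is a throwaway pure-bash helper (illustrative only, not part of any tool):

```shell
# Check whether an IPv4 address falls inside a CIDR prefix using integer math.
in_subnet() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local IFS=.
  set -- $ip;  local a=$(( ($1<<24) | ($2<<16) | ($3<<8) | $4 ))
  set -- $net; local n=$(( ($1<<24) | ($2<<16) | ($3<<8) | $4 ))
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( a & mask )) -eq $(( n & mask )) ] && echo yes || echo no
}
in_subnet 172.20.20.11 172.20.20.0/24   # yes  (wan-r1)
in_subnet 172.20.20.12 172.20.20.0/24   # yes  (wan-r2)
in_subnet 10.33.99.61  172.20.20.0/24   # no   (clab itself, reached directly)
```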

06a Verification

I then verified connectivity from the Ansible control node.

Ping test

Bash
(.venv) $ ping -c 3 172.20.20.11
(.venv) $ ping -c 3 172.20.20.12

SSH test

Bash
(.venv) $ ssh [email protected]
Expected Output
Password:
wan-r1#

07 Commit & Push

The Containerlab topology and startup configs are infrastructure-as-code so they belong in the Git repo. I copied them to the project directory on ansible-ctrl and committed them.

Bash
(.venv) $ cd ~/network-automation-lab
(.venv) $ mkdir -p containerlab/topologies
(.venv) $ mkdir -p containerlab/configs

I used scp to transfer the files from the clab VM to the project directory:

Bash
(.venv) $ scp clab:/opt/clab/topologies/wan.clab.yml containerlab/topologies/
(.venv) $ scp clab:/opt/clab/configs/wan-r1.cfg containerlab/configs/
(.venv) $ scp clab:/opt/clab/configs/wan-r2.cfg containerlab/configs/

Then I committed using the feature branch workflow:

Bash
(.venv) $ git checkout -b feat/containerlab-wan
(.venv) $ git add -A
(.venv) $ git commit -m "feat: add containerlab topology for wan-r1 and wan-r2"
(.venv) $ git push -u origin feat/containerlab-wan

Then approved on Gitea.

After merging:

Bash
(.venv) $ git checkout main
(.venv) $ git pull origin main

Now I have a Containerlab VM running two Cat9kv WAN routers that are reachable over SSH from the Ansible control node.

Last updated on • Ernesto Diaz