pod01

Lab Task 3 - Building CI/CD pipelines

It is time to start building our first pipeline!

This guide will walk you through setting up and testing key tools within a GitLab CI/CD environment. If you are new to GitLab, it is recommended to start with the introduction section to understand the fundamentals before diving into the implementation.

If you are already experienced with GitLab and CI/CD pipelines, you can skip ahead and start working on the hands-on tasks.

The goal of this task is to validate the integration and functionality of the following tools within a GitLab pipeline:

  • Catalyst Center – Centralized automation and assurance for Cisco networks.
  • NDFC (Nexus Dashboard Fabric Controller) – Network fabric automation and operations at scale.
  • SCC (Security Cloud Control) – Cloud-based security policy management.
  • HashiCorp Vault – Secure secrets management and access control.
  • NetBox – Network source of truth for infrastructure automation.
If you are ready, let’s go beyond!

Introduction

GitLab

It is time to start building our first pipeline.

If you are new to GitLab and have not worked with it much so far, please spend some time reading the following part. If you are used to GitLab and pipelines, feel free to skip the introduction and move on to the implementation part.

Before you get started with the practical implementation of the pipeline, let’s zoom in a bit and discover the capabilities of the GitLab-CI pipeline and the possible workflows. 

The general workflow of a typical pipeline involves the following steps:

  1. A network engineer takes a current copy of the code base and works on the change.

  2. A network engineer submits the change to the repository.

  3. The repository can notify fellow network engineers or other personnel of the change. These parties can then either approve or reject it. 
    This enforces a Four-Eyes Principle for any change.

  4. The GitLab-CI system can continuously poll the repository for changes, or the repository can send a notification to the GitLab-CI system when changes happen. Either way, the GitLab-CI system will pull the latest version of the code.

  5. The GitLab-CI system will run automated tests to try to catch any breakage.

  6. If there are no faults found, the GitLab-CI system can choose to merge the change into the main code and optionally deploy it to the production system.

This is a generalized list of possible steps. The process can be different for each organization. For example, automated tests can be run as soon as the delta code is checked instead of after code review. Sometimes, the organization might choose to have a human engineer involved for sanity checks in between the steps.

In summary, pipelines are our vehicle to do this in an automated fashion.
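To make the workflow above concrete, steps 4–6 could map onto a pipeline file roughly like this. This is an illustrative sketch only: the stage names, job names, and commands are hypothetical and not part of this lab.

```yaml
# Illustrative sketch — stage/job names and commands are hypothetical.
stages:
  - test
  - deploy

run_tests:
  stage: test
  script:
    - ansible-lint playbooks/          # step 5: automated checks on every change

deploy_config:
  stage: deploy
  script:
    - ansible-playbook deploy.yml      # step 6: roll out the approved change
  rules:
    - if: $CI_COMMIT_BRANCH == "main"  # only after the change reaches main
      when: manual                     # optional human sanity check in between
```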

Please have a look at the pipeline flow below, which includes testing and notification.

GitLab is a powerful, all-in-one tool to handle the end-to-end DevOps collaboration. It hosts the code repository and handles the code testing, deployment and verification. It is one of the most popular DevOps tools used in the field today.

CI/CD in a nutshell:

Continuous Integration: Keeps your code (e.g. network configs/templates) in a shared repository, so you can test, collaborate and address conflicts early.

Continuous Delivery / Deployment: You can release new code (= services) more often and in an incremental fashion, automatically.

GitLab covers a wide variety of different features and continuously implements new features.
For the full list of features check out the following website from GitLab: https://about.gitlab.com/features/?stage=plan 

Alright, let’s get into the details of how a pipeline is structured to get a better overview.

The .gitlab-ci.yml file contains the instructions for the GitLab CI/CD pipeline in YAML format.
It contains two main sections: the stage and job definitions.

In the example above, we defined three stages with the keyword ‘stages’. In the execution section, we define the Docker base image to be pulled, the name of the job, the stage it corresponds to, and the steps to be executed under ‘script’. There can be optional instructions, such as ‘artifacts’ under the build job and ‘when’ under the deployment job.
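As an additional illustration, a minimal three-stage .gitlab-ci.yml using these keywords might look like the following. The job names, image, and commands here are illustrative placeholders, not part of the lab.

```yaml
# Minimal illustrative sketch — job names, image, and commands are hypothetical.
stages:
  - build
  - test
  - deploy

build_job:
  image: ubuntu:22.04          # Docker base image pulled for the job
  stage: build
  script:
    - echo "compile or render configs here"
  artifacts:
    paths:
      - output/                # optional: keep files for later stages

deploy_job:
  image: ubuntu:22.04
  stage: deploy
  script:
    - echo "push configs to production"
  when: manual                 # optional: require a human to trigger deployment
```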

A comprehensive overview of the syntax of the .gitlab-ci.yml file can be found here: https://docs.gitlab.com/ee/ci/yaml/

Please take a look and make yourself aware of the possibilities.

Concept of GitLab Runner

GitLab Runner is a tool that is used to run a set of commands, called “jobs,” in a GitLab CI/CD pipeline. These jobs can include tasks such as building and testing code, deploying applications and performing other automated tasks.

You can choose to install the GitLab Runner application on the infrastructure that you own or manage.

If you do, you should install GitLab Runner on a machine that is separate from the one that hosts the GitLab instance for security and performance reasons.
(We will ignore this fact in the lab because of consumed resources 😉).

By using separate machines, it is possible to have different operating systems and tools, such as Kubernetes or Docker, on each.

Here are the most important concepts you need to understand:

  • GitLab Job: The smallest component of a pipeline, which contains one or more commands that need to be executed.
  • GitLab Runner: This is an agent installed on a different server from the GitLab server. The GitLab Runner receives instructions from the GitLab server regarding which jobs to run. Each runner must be registered with the GitLab server.
  • Runner Executor: Each Runner will define at least one executor. An executor is essentially the environment where the job will be performed.
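The link between a job and a runner is typically made with tags: a runner is registered with one or more tags, and a job declares which tag it requires. A hypothetical example (the job name and tag are illustrative — this lab’s actual tag appears later in the task):

```yaml
# Hypothetical job — only runners registered with the 'docker-runner' tag pick it up.
build_job:
  tags:
    - docker-runner
  script:
    - echo "running inside this runner's executor"
```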

GitLab Runner has the following features (just to name a few):

  • Run multiple jobs concurrently.
  • Limit the number of concurrent jobs per token.
  • It is written in Go and distributed as a single binary without any other requirements.
  • Supports Bash, PowerShell Core, and Windows PowerShell.
  • Works on GNU/Linux, macOS, and Windows (pretty much anywhere you can run Docker).

With GitLab, you can use different executors, depending on your needs:

  • Shell
  • SSH
  • VirtualBox
  • Parallels
  • Docker
  • Docker Machine
  • Kubernetes

In this lab, we are using Shell and Docker runners, which we have already deployed in advance as shared project runners to serve all PODs.

Docker container

For each domain, we use a predefined Docker container. These containers are already cached on the GitLab runner to optimize bandwidth, so you don’t need to build them yourself or upload them to the GitLab registry. The contents of each container are listed below so you can see exactly what is available inside them.

Catalyst Center
docker
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y apt-utils && \
    apt-get install -y gcc git curl python3-venv python3-pip ssh jq gnupg software-properties-common
RUN python3 -m venv /root/ansible
RUN . /root/ansible/bin/activate && \
    pip install --upgrade pip && \
    pip install requests-toolbelt && \
    pip install urllib3 && \
    pip install ansible && \
    pip install dnacentersdk && \
    pip install pynetbox && \
    pip install jmespath && \
    pip install ansible-lint && \
    ansible-galaxy collection install cisco.dnac && \
    ansible-galaxy collection install netbox.netbox
Nexus Dashboard Fabric Controller (NDFC)
docker
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y apt-utils && \
    apt-get install -y gcc git curl python3-venv python3-pip ssh jq gnupg software-properties-common
RUN python3 -m venv /root/ansible
RUN . /root/ansible/bin/activate && \
    pip install --upgrade pip && \
    pip install requests-toolbelt && \
    pip install urllib3 && \
    pip install ansible && \
    pip install jmespath
Security Cloud Control (SCC)
docker
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y apt-utils && \
    apt-get install -y gcc git curl python3-venv python3-pip ssh jq gnupg software-properties-common
RUN python3 -m venv /root/ansible
RUN . /root/ansible/bin/activate && \
    pip install --upgrade pip && \
    pip install requests-toolbelt && \
    pip install urllib3 && \
    pip install ansible && \
    pip install jmespath

Step 1: Catalyst Center validation

First you need to create the folder/file structure. Everything marked with a star (*) needs to be created:

markup
pod01 (repository folder on your desktop)
├── .gitlab-ci.yml (*File)
└── dnac (*Folder)
    ├── hosts (*File)
    ├── data (*Folder)
    └── playbooks (*Folder)
        └── 00_get-devices.yml (*File)

You will end up with a file structure like this. You can create the structure in VS Code; next to the POD01 folder name are symbols for creating a new file and a new folder:
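If you prefer the terminal to the VS Code icons, the same structure can also be created with a few commands, run from inside the pod01 repository folder (names taken from the tree above):

```shell
# Create the dnac skeleton inside the pod01 repository folder
mkdir -p dnac/data dnac/playbooks
touch .gitlab-ci.yml dnac/hosts dnac/playbooks/00_get-devices.yml
```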

Complete the files as follows:

hosts
yaml
[dnac_servers]
dnac_server
00_get-devices.yml
yaml
---
- hosts: dnac_servers
  gather_facts: false
  tasks:
    - name: Get all devices
      cisco.dnac.network_device_info:
      register: result

    - debug: var=result
.gitlab-ci.yml

Change all pod01 references (highlighted in red) to your correct pod ID!

yaml
stages:
  - check_dnac

get_devices:
  stage: check_dnac
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:dnac
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    DNAC_HOST:
      vault: DNAC/DNAC_HOST@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_VERIFY:
      vault: DNAC/DNAC_VERIFY@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_USERNAME:
      vault: DNAC/DNAC_USERNAME@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_PASSWORD:
      vault: DNAC/DNAC_PASSWORD@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 dnac
    - cd dnac
  script:
    - ansible-playbook -i hosts playbooks/00_get-devices.yml

Since this is your first pipeline file, a brief explanation is needed:

This GitLab CI/CD pipeline is defined in the .gitlab-ci.yml file and consists of a single stage named check_dnac.

Within this stage, there is a job called get_devices.

This job is configured to run on a runner tagged with docker-runner and utilizes the Docker image cbeye592/ltrato-2600:dnac.

For secure access, it retrieves several secrets — such as DNAC_HOST, DNAC_VERIFY, DNAC_USERNAME, and DNAC_PASSWORD — from a Vault, authenticated using an ID token. Before executing the main script, the job activates a Python virtual environment, modifies permissions for the dnac directory, and navigates into it.

The primary task performed by the job is running the Ansible playbook 00_get-devices.yml via the command ansible-playbook -i hosts playbooks/00_get-devices.yml.

This setup ensures that the job executes in a controlled environment with the necessary credentials and configurations to interact with Cisco DNA Center (DNAC).

Save all files and commit and sync the changes!

Remember to add a commit message!

Validation

Validate the successful execution in GitLab and check whether you get an output of all devices from the Catalyst Center.

Log in to GitLab and click on Jobs (Build > Jobs):

Step 2: NDFC validation

Additionally, you need to create the folder/file structure for NDFC. Everything marked with a star (*) needs to be created:

markup
pod01 (repository folder on your desktop)
└── ndfc (*Folder)
    ├── hosts (*File)
    ├── ansible.cfg (*File)
    ├── data (*Folder)
    └── playbooks (*Folder)
        └── 00_get-fabric.yml (*File)
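As before, the structure can also be created from the terminal, run from inside the pod01 repository folder (names taken from the tree above):

```shell
# Create the ndfc skeleton inside the pod01 repository folder
mkdir -p ndfc/data ndfc/playbooks
touch ndfc/hosts ndfc/ansible.cfg ndfc/playbooks/00_get-fabric.yml
```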
00_get-fabric.yml

Create a new file called 00_get-fabric.yml in the ndfc/playbooks directory and add the following content:

yaml
---
- hosts: ndfc_servers
  gather_facts: false
  tasks:
    - name: Get all fabrics
      cisco.dcnm.dcnm_rest:
        method: GET
        path: "/appcenter/cisco/ndfc/api/v1/lan-fabric/rest/control/fabrics"
      register: result
    - debug:
        var: result
hosts

Create a new file called hosts in the ndfc directory and add the following content:

yaml
[ndfc_servers]
198.18.133.100

[all:vars]
ansible_connection=ansible.netcommon.httpapi
ansible_network_os=cisco.dcnm.dcnm
ansible_httpapi_validate_certs=False
ansible_httpapi_use_ssl=True
ansible_httpapi_use_proxy=False
ansible_httpapi_login_domain=DefaultAuth
ansible.cfg

Create a new file called ansible.cfg in the ndfc directory and add the following content:

yaml
[defaults]
interpreter_python = /usr/bin/env python
deprecation_warnings = False
host_key_checking = False
inventory = hosts
default_timeout = 180

[persistent_connection]
connect_timeout = 1000
command_timeout = 1000
.gitlab-ci.yml

Replace the content of the .gitlab-ci.yml file with the following (it adds a second stage that validates the NDFC connection as well).

Alternatively, just copy the second stage, “get_fabric,” from the file and append it to your current .gitlab-ci.yml.

yaml
stages:
  - check_dnac
  - check_ndfc

get_devices:
  stage: check_dnac
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:dnac
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    DNAC_HOST:
      vault: DNAC/DNAC_HOST@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_VERIFY:
      vault: DNAC/DNAC_VERIFY@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_USERNAME:
      vault: DNAC/DNAC_USERNAME@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_PASSWORD:
      vault: DNAC/DNAC_PASSWORD@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 dnac
    - cd dnac
  script:
    - ansible-playbook -i hosts playbooks/00_get-devices.yml

get_fabric:
  stage: check_ndfc
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:ndfc
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    ansible_user:
      vault: NDFC/ansible_user@pod01
      file: false
      token: $VAULT_ID_TOKEN
    ansible_password:
      vault: NDFC/ansible_password@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 ndfc
    - cd ndfc
    - echo "" >> hosts
    - echo "ansible_user=$ansible_user" >> hosts
    - echo "ansible_password=$ansible_password" >> hosts
  script:
    - ansible-playbook playbooks/00_get-fabric.yml
Validation

Check the pipeline status again in GitLab and see the output of the second job in the pipeline:

Step 3: Security Cloud Control validation

Now create the files needed for the Security Cloud Control:

markup
pod01 (repository folder)
└── scc (*Folder)
    ├── data (*Folder)
    └── playbooks (*Folder)
        └── 00_get-devices.yml (*File)
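From the terminal, the same structure can be created with (run from inside the pod01 repository folder, names taken from the tree above):

```shell
# Create the scc skeleton inside the pod01 repository folder
mkdir -p scc/data scc/playbooks
touch scc/playbooks/00_get-devices.yml
```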

The end result should look something like this:


00_get-devices.yml

Create a new file called 00_get-devices.yml in the scc/playbooks directory and add the following content:

yaml
---
- name: Retrieve Devices from Cloud-Delivered FMC
  hosts: localhost
  gather_facts: no
  vars:
    api_token: "{{ lookup('env', 'api_token') }}"

  tasks:

    - name: Retrieve Device List
      uri:
        url: "https://eu.manage.security.cisco.com/api/rest/v1/inventory/devices?limit=50&offset=0"
        method: GET
        headers:
          Authorization: "Bearer {{ api_token }}"
          Content-Type: "application/json"
        validate_certs: yes
      register: device_response

    - name: API Response
      debug:
        var: device_response.json.count
.gitlab-ci.yml

Update the pipeline file .gitlab-ci.yml with the following content: 

yaml
stages:
  - check_dnac
  - check_ndfc
  - check_scc

get_devices:
  stage: check_dnac
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:dnac
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    DNAC_HOST:
      vault: DNAC/DNAC_HOST@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_VERIFY:
      vault: DNAC/DNAC_VERIFY@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_USERNAME:
      vault: DNAC/DNAC_USERNAME@pod01
      file: false
      token: $VAULT_ID_TOKEN
    DNAC_PASSWORD:
      vault: DNAC/DNAC_PASSWORD@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 dnac
    - cd dnac
  script:
    - ansible-playbook -i hosts playbooks/00_get-devices.yml

get_fabric:
  stage: check_ndfc
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:ndfc
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    ansible_user:
      vault: NDFC/ansible_user@pod01
      file: false
      token: $VAULT_ID_TOKEN
    ansible_password:
      vault: NDFC/ansible_password@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 ndfc
    - cd ndfc
    - echo "" >> hosts
    - echo "ansible_user=$ansible_user" >> hosts
    - echo "ansible_password=$ansible_password" >> hosts
  script:
    - ansible-playbook playbooks/00_get-fabric.yml

get_ftd_devices:
  stage: check_scc
  tags:
    - docker-runner
  image: cbeye592/ltrato-2600:scc
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://198.18.133.99:8200
  secrets:
    api_token:
      vault: SCC/api_token@pod01
      file: false
      token: $VAULT_ID_TOKEN
    cdo_host:
      vault: SCC/cdo_host@pod01
      file: false
      token: $VAULT_ID_TOKEN
  before_script:
    - source /root/ansible/bin/activate
    - chmod -R 700 scc
    - cd scc
  script:
    - ansible-playbook playbooks/00_get-devices.yml
Validation

Check the pipeline status in GitLab. You should be able to see all three stages executed successfully:

Congratulations! You have successfully set the stage for Task 4 by validating your ability to authenticate and retrieve information from all three domains!