Update Your Ansible Nautobot Environment & Helm Chart

With the release of Nautobot 2.1.9 and 1.6.16 came a new requirement for pynautobot to include an authentication token on some initial API calls that previously did not require one. To make sure that pynautobot (and subsequently Nautobot Ansible) and the Nautobot Helm chart work with the most recent versions of Nautobot, new versions have been released.

pynautobot & Nautobot Ansible

First, to check what version of pynautobot you have, run pip list in that environment. Here is an example of using grep to filter the output to pynautobot.

❯ pip list | grep pynautobot
pynautobot         2.0.2

Nautobot 1.6 Environments

If you are continuing on the LTM release train of 1.6, your pynautobot needs to be upgraded to 1.5.2 in order to continue using the Ansible modules (4.5.0). No update to the Ansible modules is required; only the underlying pynautobot version needs to change. Complete this with:

pip install pynautobot==1.5.2

Accidental Upgrade to 2.x of pynautobot?

If you accidentally upgraded to the latest version of pynautobot but intended to be on 1.x, just issue the same command as above and you will get the right version. Nothing further needs to be done; no harm done.

pip install pynautobot==1.5.2

Nautobot 2.1 Environments

For those on the latest Nautobot application version of 2.1.9, please upgrade the pynautobot package in your Ansible environment to the latest release, 2.1.1:

pip install --upgrade pynautobot
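
After upgrading, you can verify the installed version the same way as before; the output should now show 2.1.1 (or newer):

❯ pip list | grep pynautobot
pynautobot         2.1.1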

Nautobot Helm Chart

First, to check what version of the Nautobot Helm chart you have configured, run helm show chart nautobot/nautobot to get the full information about the configured chart. You will see multiple versions in the output; the chart version that matters is the version key at the root of the YAML output, shown on the last line.

❯ helm show chart nautobot/nautobot
annotations:

... Truncated for brevity ...

sources:
- https://github.com/nautobot/nautobot
- https://github.com/nautobot/helm-charts
version: 2.0.5

Warning – READ BEFORE PROCEEDING

The latest version of the Helm chart sets the default Nautobot version to 2.1.9. If you are NOT providing a custom image or statically declaring the version, you WILL be upgraded to 2.1.9. For more information on using a custom image, please see the documentation here; if you are using the Network to Code maintained images with a specific version, please ensure nautobot.image.tag is set to the tagged version you expect to use. Below are some examples of values.yaml provided to a Helm release.

If you are on a 1.X.X version of the helm chart please review the upgrade guide here before proceeding.

Custom Image

nautobot:
  image:
    registry: "ghcr.io"
    repository: "my-namespace/nautobot"
    tag: "1.6.16-py3.11"
    pullPolicy: "Always"
    pullSecrets:
      - ghcr-pull-secret

Network to Code Image

nautobot:
  image:
    tag: "1.6.16-py3.11"

Update Helm Repo

Before you can use the new version of the Helm chart, you must update the Helm repo.

❯ helm repo update nautobot
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nautobot" chart repository
Update Complete. ⎈Happy Helming!⎈

Update Helm Release

Now you can proceed to update your helm release with the latest helm chart version.

❯ helm upgrade <name of helm release> nautobot/nautobot -f values.yaml --version 2.1.0
Release "nautobot" has been upgraded. Happy Helming!
NAME: nautobot
LAST DEPLOYED: Wed Mar 27 20:09:47 2024
NAMESPACE: default
STATUS: deployed
REVISION: 3
NOTES:
*********************************************************************
*** PLEASE BE PATIENT: Nautobot may take a few minutes to install ***
*********************************************************************

... Truncated for brevity ...

Conclusion

When issues arise in playbooks that were previously working fine, a good first step is to give your dependency packages a quick update. Hope that this helps. Happy automating.

-Josh, Jeremy




Deploying Nautobot to Kubernetes – Part 3

In the previous two blog posts (Part 1 and Part 2), I discussed Nautobot Helm charts and how to deploy these Helm charts to Kubernetes following a GitOps approach. I used the minimal parameters needed to achieve a Nautobot deployment.

In the default deployment with Helm charts, the Nautobot version is bound to the Helm chart version: chart version X.Y.Z will always deploy the specific Nautobot version A.B.C pinned to it. That is suitable for simple deployments and testing, but you usually want to add a custom configuration file, additional plugins, jobs, or other extensions to Nautobot.

The Managed Services team deals with this daily, as every customer has different requirements for their Nautobot deployment. With Kubernetes deployments, we must prepare Nautobot Docker images in advance. Following the “automate everything” approach, we also use automated procedures to build and deploy Nautobot images to Kubernetes.

In this blog post, I will show you a procedure you can use to automate the release process for your Nautobot deployment.

Prepare the Basic Dockerfile

In the first step, I will prepare the basic Dockerfile used to build the custom Nautobot image.

I already have a Git repository I created in the previous blog post explaining the GitOps approach to deploying Nautobot. I added Kubernetes objects and other files required for Kubernetes deployment in the Git repository. The structure of the Git repository looks like this:

.
└── nautobot-kubernetes
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    └── kubernetes
        ├── helmrelease.yaml
        ├── kustomization.yaml
        ├── namespace.yaml
        ├── nautobot-helmrepo.yaml
        └── values.yaml

5 directories, 10 files

I will use the same repository to add files required for a custom image and the automation needed to build and deploy the new image.

Let’s create a base Dockerfile in the top directory in my repository.

.
└── nautobot-kubernetes
    ├── Dockerfile
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    └── kubernetes
        ├── helmrelease.yaml
        ├── kustomization.yaml
        ├── namespace.yaml
        ├── nautobot-helmrepo.yaml
        └── values.yaml

5 directories, 11 files

Now that I have a Dockerfile, I will add some content.

ARG NAUTOBOT_VERSION=1.4.2
ARG PYTHON_VERSION=3.9
FROM ghcr.io/nautobot/nautobot:${NAUTOBOT_VERSION}-py${PYTHON_VERSION}

In this case, Docker will pull the base Nautobot image and assign a new tag to the image. I know this looks very simple, but I think this is a great first step. I can now test whether I can build my image.

~ docker build -t ghcr.io/networktocode/nautobot-kubernetes:dev .
[+] Building 75.3s (6/6) FINISHED                                                                                                                  
 => [internal] load build definition from Dockerfile                                                                                          0.1s
 => => transferring dockerfile: 164B                                                                                                          0.0s
 => [internal] load .dockerignore                                                                                                             0.0s
 => => transferring context: 2B                                                                                                               0.0s
 => [internal] load metadata for ghcr.io/nautobot/nautobot:1.4.2-py3.9                                                                       34.1s
 => [auth] nautobot/nautobot:pull token for ghcr.io                                                                                           0.0s
 => [1/1] FROM ghcr.io/nautobot/nautobot:1.4.2-py3.9@sha256:59f4d8338a1e6025ebe0051ee5244d4c0e94b0223079f806eb61eb63b6a04e62                 41.0s
 => => resolve ghcr.io/nautobot/nautobot:1.4.2-py3.9@sha256:59f4d8338a1e6025ebe0051ee5244d4c0e94b0223079f806eb61eb63b6a04e62                  0.0s
 => => sha256:7a6db449b51b92eac5c81cdbd82917785343f1664b2be57b22337b0a40c5b29d 31.38MB / 31.38MB                                             15.6s
 => => sha256:b94fc7ac342a843369c0eaa335613ab9b3761ff5ddfe0217a65bfd3678614e22 11.59MB / 11.59MB                                              3.8s
<.. Omitted ..>
 => => extracting sha256:8a4f3d60582c68bbdf8beb6b9d5fe1b0d159f2722cf07938ca9bf290dbfaeb6e                                                     0.0s
 => exporting to image                                                                                                                        0.0s
 => => exporting layers                                                                                                                       0.0s
 => => writing image sha256:a164511865f73bf08eb2a30a62ad270211b544006708f073efddcd7ef6a10830                                                  0.0s
 => => naming to ghcr.io/networktocode/nautobot-kubernetes:dev                                                                                0.0s

I can see that the image is successfully built:

~ docker image ls | grep nautobot-kubernetes
ghcr.io/networktocode/nautobot-kubernetes                                 dev                        a164511865f7   8 days ago      580MB

To make this process a bit easier, I will also create a Makefile with some basic targets to simplify building, pushing, etc. I will add targets such as build and push, so instead of using the docker build -t ghcr.io/networktocode/nautobot-kubernetes:dev . command to build the image, I can simply run make build.

# Get current branch by default
tag := $(shell git rev-parse --abbrev-ref HEAD)

build:
	docker build -t ghcr.io/networktocode/nautobot-kubernetes:$(tag) .

push:
	docker push ghcr.io/networktocode/nautobot-kubernetes:$(tag)

pull:
	docker pull ghcr.io/networktocode/nautobot-kubernetes:$(tag)

I added three targets for now: build, push, and pull. The default tag will be the current branch, but I can pass a custom tag to the make build command if I want. Now that this is ready, I can test my Makefile.

~ make build
docker build -t ghcr.io/networktocode/nautobot-kubernetes:main .
[+] Building 1.0s (5/5) FINISHED                                                                                                                   
 => [internal] load build definition from Dockerfile                                                                                          0.0s
 => => transferring dockerfile: 36B                                                                                                           0.0s
 => [internal] load .dockerignore                                                                                                             0.0s
 => => transferring context: 2B                                                                                                               0.0s
 => [internal] load metadata for ghcr.io/nautobot/nautobot:1.4.2-py3.9                                                                        0.8s
 => CACHED [1/1] FROM ghcr.io/nautobot/nautobot:1.4.2-py3.9@sha256:59f4d8338a1e6025ebe0051ee5244d4c0e94b0223079f806eb61eb63b6a04e62           0.0s
 => exporting to image                                                                                                                        0.0s
 => => exporting layers                                                                                                                       0.0s
 => => writing image sha256:a164511865f73bf08eb2a30a62ad270211b544006708f073efddcd7ef6a10830                                                  0.0s
 => => naming to ghcr.io/networktocode/nautobot-kubernetes:main                                                                               0.0s

Let me also test whether I can push the image to the Docker repository hosted on ghcr.io.

~ make push
docker push ghcr.io/networktocode/nautobot-kubernetes:main
The push refers to repository [ghcr.io/networktocode/nautobot-kubernetes]
3cec5ea1ba13: Mounted from nautobot/nautobot 
5f70bf18a086: Mounted from nautobot/nautobot 
4078cbb0dac2: Mounted from nautobot/nautobot 
28330db6782d: Mounted from nautobot/nautobot 
46b0ede2b6bc: Mounted from nautobot/nautobot 
f970a3b06182: Mounted from nautobot/nautobot 
50f757c5b291: Mounted from nautobot/nautobot 
34ada2d2351f: Mounted from nautobot/nautobot 
2fe7c3cac96a: Mounted from nautobot/nautobot 
ba48a538e919: Mounted from nautobot/nautobot 
639278003173: Mounted from nautobot/nautobot 
294d3956baee: Mounted from nautobot/nautobot 
5652b0fe3051: Mounted from nautobot/nautobot 
782cc2d2412a: Mounted from nautobot/nautobot 
1d7e8ad8920f: Mounted from nautobot/nautobot 
81514ea14697: Mounted from nautobot/nautobot 
630337cfb78d: Mounted from nautobot/nautobot 
6485bed63627: Mounted from nautobot/nautobot 
main: digest: sha256:c9826f09ba3277300a3e6d359a2daebf952485097383a10c37d6e239dbac0713 size: 4087

Great, this is working as well.

Deploy the Custom Image to Kubernetes

I have my initial image in the repository. Before automating the deployment, I will test whether I can deploy my custom image. To do that, I need to update the ./kubernetes/values.yaml file. There are a couple of things you need to add to your values. Let me first show the current content of the file:

---
postgresql:
  postgresqlPassword: "SuperSecret123"
redis:
  auth:
    password: "SuperSecret456"
celeryWorker:
  replicaCount: 4

I added those values in my previous blog post. If I want to specify the custom image, I must define the nautobot.image section in my values. Check Nautobot Helm charts documentation for more details.

---
postgresql:
  postgresqlPassword: "SuperSecret123"
redis:
  auth:
    password: "SuperSecret456"
celeryWorker:
  replicaCount: 4
nautobot:
  image:
    registry: "ghcr.io"
    repository: "networktocode/nautobot-kubernetes"
    tag: "main"
    pullSecrets:
      - "ghcr.io"

I think the parameters are self-explanatory. I defined the Docker repository, the image, and the image tag. As this is a private repository, I must define pullSecrets as well. This section describes the Kubernetes secret used to pull the image from a private registry. I will create this secret manually. Of course, there are options to automate this step, using HashiCorp Vault, for example. But that is out of scope for this blog post. Well, let’s create the Kubernetes secret now. To do this, you need a token, which you can generate under your GitHub profile.

~ kubectl create secret docker-registry --docker-server=ghcr.io --docker-username=ubajze --docker-password=<TOKEN> -n nautobot ghcr.io
secret/ghcr.io created

Now that I have the Kubernetes secret and have updated values.yaml, I can commit and push the changes. Remember, Flux will do the rest for me, meaning the new image will be deployed automatically. Let’s do that and observe the process.

I must wait a few minutes for Flux to sync the Git repository. After that, Flux will reconcile the current Helm release and apply new values from values.yaml. The output below shows the intermediate state, where a new container is being created.

~ kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nautobot-544c88f9b8-gb652                 0/1     ContainerCreating   0          0s
nautobot-577c89f9c7-fzs2s                 1/1     Running             0          2d
nautobot-577c89f9c7-hczbx                 1/1     Running             1          2d
nautobot-celery-beat-554fb6fc7c-n847n     0/1     ContainerCreating   0          0s
nautobot-celery-beat-7d9f864c58-2c9r7     1/1     Running             2          2d
nautobot-celery-worker-647cc6d8dd-564p6   1/1     Running             2          47h
nautobot-celery-worker-647cc6d8dd-7xwtb   1/1     Running             2          47h
nautobot-celery-worker-647cc6d8dd-npx42   1/1     Terminating         2          2d
nautobot-celery-worker-647cc6d8dd-plmjq   1/1     Running             2          2d
nautobot-celery-worker-84bf689ff-k2dph    0/1     ContainerCreating   0          0s
nautobot-celery-worker-84bf689ff-tp92c    0/1     Pending             0          0s
nautobot-postgresql-0                     1/1     Running             0          2d
nautobot-redis-master-0                   1/1     Running             0          2d

After a few minutes, I have a new deployment with a new image.

~ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nautobot-544c88f9b8-cwdnj                1/1     Running   0          2m21s
nautobot-544c88f9b8-gb652                1/1     Running   0          4m1s
nautobot-celery-beat-554fb6fc7c-n847n    1/1     Running   1          4m1s
nautobot-celery-worker-84bf689ff-5wzmh   1/1     Running   0          117s
nautobot-celery-worker-84bf689ff-dwjdt   1/1     Running   0          106s
nautobot-celery-worker-84bf689ff-k2dph   1/1     Running   0          4m1s
nautobot-celery-worker-84bf689ff-tp92c   1/1     Running   0          4m1s
nautobot-postgresql-0                    1/1     Running   0          2d
nautobot-redis-master-0                  1/1     Running   0          2d

I can prove that by describing one of the pods.

~ kubectl describe pod nautobot-544c88f9b8-cwdnj | grep Image
    Image:          ghcr.io/networktocode/nautobot-kubernetes:main
    Image ID:       ghcr.io/networktocode/nautobot-kubernetes@sha256:c9826f09ba3277300a3e6d359a2daebf952485097383a10c37d6e239dbac0713

As you can see, the image was pulled from my new repository, and the main tag is used. Great, this means my automated deployment is working. If I want to deploy a new image, I can build the new image and push the image to the Docker repository. Then I must update the tag in the ./kubernetes/values.yaml file, commit and push changes. Flux will automatically re-deploy the new image.
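
For reference, a manual version of that flow might look something like this, using a hypothetical v0.0.1 tag:

~ make tag=v0.0.1 build
~ make tag=v0.0.1 push
# Update the tag in ./kubernetes/values.yaml to "v0.0.1", then:
~ git commit -am "Deploy image v0.0.1"
~ git push origin main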

Automate the Deployment Process

The next step is to automate the deployment process. My goal is to deploy a new image every time I create a new release in GitHub. To do this automatically, I will use GitHub Actions CI/CD, triggered when a new release is created.

I want to have the following steps in my CI/CD workflow:

  • Lint
  • Build
  • Test
  • Deploy

I will simulate the Lint and Test steps, as they are out of scope for this blog post. But there is value in linting your code and then testing the build.

Before specifying the CI/CD workflow, I will add some more targets to my Makefile. I will use these commands in the workflow. So let me first update the Makefile.

# Get current branch by default
tag := $(shell git rev-parse --abbrev-ref HEAD)
values := "./kubernetes/values.yaml"

build:
	docker build -t ghcr.io/networktocode/nautobot-kubernetes:$(tag) .

push:
	docker push ghcr.io/networktocode/nautobot-kubernetes:$(tag)

pull:
	docker pull ghcr.io/networktocode/nautobot-kubernetes:$(tag)

lint:
	@echo "Linting..."
	@sleep 1
	@echo "Done."

test:
	@echo "Testing..."
	@sleep 1
	@echo "Done."

update-tag:
	sed -i 's/tag: \".*\"/tag: \"$(tag)\"/g' $(values)

I added three more targets in my Makefile. The lint and test targets simulate linting and testing. The update-tag target is more interesting. It replaces the current tag in ./kubernetes/values.yaml with the new tag specified when running the command. This command will be used in the CI/CD workflow to update the tag in a file. Apart from that, I will also commit and push the changes to the main branch directly from the CI/CD workflow. Flux will detect a change in the main branch and redeploy Nautobot using a new image specified in the values.yaml. Of course, this process is just one approach. Other options include updating the ConfigMap with the image tag directly from your CI/CD workflow. Choosing the correct approach depends on your use case.
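
For example, running the new target locally with a hypothetical tag rewrites the tag line in values.yaml, which you can confirm with grep:

~ make tag=v0.0.1 update-tag
~ grep "tag:" kubernetes/values.yaml
    tag: "v0.0.1"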

Now that I have the basics, I can create a CI/CD definition for GitHub Actions. The workflows are defined in YAML files and must be stored in the ./.github/workflows directory. Any YAML file in this directory will be picked up by GitHub Actions and executed. I will not go into the details of GitHub Actions; that’s not the purpose of this blog post. You can achieve the same results on other CI/CD platforms.

So, let me create a file with the following content:

---
name: "CI/CD"
on:
  push:
    branches:
      - "*"
  pull_request:
  release:
    types:
      - "created"

permissions:
  packages: "write"
  contents: "write"
  id-token: "write"

jobs:
  lint:
    runs-on: "ubuntu-20.04"
    steps:
      - name: "Check out repository code"
        uses: "actions/checkout@v3"
      - name: "Linting"
        run: "make lint"
  build:
    runs-on: "ubuntu-20.04"
    needs:
      - "lint"
    steps:
      - name: "Check out repository code"
        uses: "actions/checkout@v3"
      - name: "Build the image"
        run: "make tag=${{ github.ref_name }} build"
      - name: "Login to ghcr.io"
        run: "echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u USERNAME --password-stdin"
      - name: "Push the image to the repository"
        run: "make tag=${{ github.ref_name }} push"
  test:
    runs-on: "ubuntu-20.04"
    needs:
      - "build"
    steps:
      - name: "Check out repository code"
        uses: "actions/checkout@v3"
      - name: "Run tests"
        run: "make test"
  deploy:
    runs-on: "ubuntu-20.04"
    needs:
      - "test"
    if: "${{ github.event_name == 'release' }}"
    steps:
      - name: "Check out repository code"
        uses: "actions/checkout@v3"
        with:
          ref: "main"
      - name: "Update the image tag"
        run: "make tag=${{ github.ref_name }} update-tag"
      - name: "Commit changes"
        run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git commit -am "Updating the Docker image tag"
          git push origin main

I will give you just an overall description of the file, as the GitHub Actions syntax above is out of scope for this blog post. I created four jobs, and GitHub Actions executes each job only after the previous job finishes successfully. The first job to run is the lint job. In the build job, I build the image, assign a tag (the name of the Git tag for the release) to the image, and push the image to the Docker repository. The test job tests the image that was built previously. And the deploy job updates the image tag in the ./kubernetes/values.yaml file; after that, it commits and pushes the changes back to the main branch of the repository. The “if” statement means this job will only be executed if the CI/CD workflow is triggered by creating a new release in GitHub.

So, now I can give it a try. I will create a new release on GitHub.
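
If you prefer the command line over the GitHub web UI, the GitHub CLI can create the release as well (assuming gh is installed and authenticated; the tag name is illustrative):

~ gh release create v0.0.1 --title "v0.0.1" --notes "Initial release"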

Creating a release triggers a new workflow.

If I pull the latest changes from the GitHub repository, I can see that the tag was updated in the ./kubernetes/values.yaml file. The new value is now v0.0.1.

---
postgresql:
  postgresqlPassword: "SuperSecret123"
redis:
  auth:
    password: "SuperSecret456"
celeryWorker:
  replicaCount: 4
nautobot:
  image:
    registry: "ghcr.io"
    repository: "networktocode/nautobot-kubernetes"
    tag: "v0.0.1"
    pullSecrets:
      - "ghcr.io"

After a few minutes, the new image is deployed to Kubernetes.

~ kubectl describe pod nautobot-5b99dd5cb-5pcwp | grep Image
    Image:          ghcr.io/networktocode/nautobot-kubernetes:v0.0.1
    Image ID:       ghcr.io/networktocode/nautobot-kubernetes@sha256:3ca8699ed1ed970889d026d684231f1d1618e5adeeb383e418082b8f3e27d6ee

Great, my workflow is working as expected.

Release a New Nautobot Image

Remember, I created a very simple Dockerfile, which only pulls the Nautobot image and applies a new tag. Usually, that is not a realistic use case, so I will make it a bit more complex. I will install the Golden Config plugin and add a custom Nautobot configuration.

I must tell Docker how to install the plugins. So I will create a requirements.txt file and specify all the plugins I want to install in my image. In the Dockerfile, I will install the requirements from this file using the pip command.

So, let me first create the requirements.txt file to define the plugins and dependencies I want to install in my image. I will specify the Golden Config plugin, but I need to add the Nautobot Nornir plugin, as this is a requirement for the Golden Config plugin.

~ cat requirements.txt
nautobot_plugin_nornir==1.0.0
nautobot-golden-config==1.2.0

Just installing the plugins is not enough. I must also enable the plugins in the configuration and add the configuration parameters required for plugins. So I will take the base config from my current Nautobot deployment. I will store the configuration in the ./configuration/nautobot_config.py file. Then I will update the configuration file with the required plugin settings.
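
As a rough sketch (not a complete configuration), the plugin-related additions to nautobot_config.py would look something like the following; the exact PLUGINS_CONFIG keys depend on each plugin's documentation and your environment:

# Enable the plugins installed from requirements.txt
PLUGINS = ["nautobot_plugin_nornir", "nautobot_golden_config"]

# Plugin-specific settings; consult each plugin's documentation for the
# required keys (dispatcher settings, Git repository mappings, etc.)
PLUGINS_CONFIG = {
    "nautobot_plugin_nornir": {},
    "nautobot_golden_config": {},
}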

My repository contains the following files:

.
└── nautobot-kubernetes
    ├── Dockerfile
    ├── Makefile
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    ├── configuration
    │   └── nautobot_config.py
    ├── kubernetes
    │   ├── helmrelease.yaml
    │   ├── kustomization.yaml
    │   ├── namespace.yaml
    │   ├── nautobot-helmrepo.yaml
    │   └── values.yaml
    └── requirements.txt

6 directories, 14 files

Now I must add additional instructions to the Dockerfile.

ARG NAUTOBOT_VERSION=1.4.2
ARG PYTHON_VERSION=3.9
FROM ghcr.io/nautobot/nautobot:${NAUTOBOT_VERSION}-py${PYTHON_VERSION}

COPY requirements.txt /tmp/

RUN pip install -r /tmp/requirements.txt

COPY ./configuration/nautobot_config.py /opt/nautobot/

I can now commit and push all changes and create a new release in GitHub to deploy the new image.

The release triggers a new CI/CD workflow, which updates the image tag in ./kubernetes/values.yaml to v0.0.2. I have to wait for Flux to sync the Git repository. After a few minutes, the new image is deployed to Kubernetes.

~ kubectl describe pod nautobot-55f4cfc777-82qvr | grep Image
    Image:          ghcr.io/networktocode/nautobot-kubernetes:v0.0.2
    Image ID:       ghcr.io/networktocode/nautobot-kubernetes@sha256:2bd861b5b6b74cf0f09a34fefbcca264c22f3df7440320742012568a0046917b

If I now connect to my Nautobot instance, I can see the plugins are installed and enabled.

As you can see, I updated the Nautobot deployment without even touching Kubernetes. All I have to do is update my repository and create a new release. Automation enables every developer to update the Nautobot deployment, even without having an understanding of Kubernetes.


Conclusion

In a series of three blog posts, I wanted to show how the Network to Code Managed Services team manages Nautobot deployments. We have several Nautobot deployments, so we must ensure that as many steps as possible are automated. The examples in these blog posts were quite simple. Of course, our deployments are much more complex. Usually we have multiple environments (production and integration at minimum), with many integrations with other systems, such as HashiCorp Vault, S3 backend storage, etc. Configuration in those cases is much more complex.

Anyway, I hope you understand how we are dealing with Nautobot deployments. If you have any questions regarding this, do not hesitate to reach out on Slack or contact us directly.

-Uros




Deploying Nautobot to Kubernetes – Part 2

In the previous blog post in the series, I discussed how to use Helm charts to deploy Nautobot to Kubernetes. Deploying Nautobot manually using Helm charts is a great first step. But if we want to move towards the GitOps paradigm, we should go a step further. In this blog post, I will show you how our managed services team manages Nautobot deployments using the GitOps approach.

Introduction to GitOps

What is GitOps? The idea is that the infrastructure is defined declaratively, meaning all infrastructure parameters are defined in Git. Then all infrastructure is deployed from a repository, serving as a single source of truth. Usually, the main branch defines the production infrastructure, but there can also be a version-tagged commit that deploys a specific version of an infrastructure. For every change, the infrastructure developer has to create a branch, push changes and open a pull request. This process allows for peer code reviews and CI testing to catch any problems early on. After the pull request is reviewed, the changes are merged to the main branch. The infrastructure defined in the main branch deploys the production environment automatically.

To deploy an application to Kubernetes, we need to define Kubernetes objects. Kubernetes allows us to define these objects in YAML files. Because infrastructure is defined in files, we can put these files under version control and use the GitOps paradigm to manage infrastructure.

We have YAML files on one side and a Kubernetes cluster on the other. To bridge the gap, we need a tool that automatically detects the changes in a tracked branch and deploys them to the infrastructure. We could develop a custom script, but instead we use an existing tool, Flux, which is intended to do precisely that. We chose Flux because of its ease of use and because it does the heavy lifting for us.

Flux 101

Flux is a tool that, among other things, tracks a Git repository to ensure that a Kubernetes cluster’s state matches the YAML definitions in a specific tag or branch. Flux will revert any accidental change that does not match the repository. So everything is declaratively described in a Git repository that serves as a source of truth for your infrastructure.

Flux also provides tooling for deploying Helm charts with specified values via additional Kubernetes objects defined with YAML in the same Git repository.

You can find more details on the Flux webpage.

Installing Flux to the Cluster

Before you start with Flux, you must install the Flux CLI tool. The installation is out of scope for this blog post, but it should be easy to install by following the official documentation.
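
For reference, the Flux documentation provides an install script (Homebrew and other package managers are also supported); treat the commands below as one possible route rather than the canonical method:

~ curl -s https://fluxcd.io/install.sh | sudo bash
~ flux --version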

After installing the CLI tool, you must bootstrap a cluster, which is done with the flux bootstrap command. This command installs Flux on a Kubernetes cluster. You need a Git repository to be used by Flux, as Flux pushes its configuration to a specified repository. Also, ensure you have credentials to authenticate to your Git repository.

At this point, I have to warn you. If you have multiple Kubernetes clusters in your Kube configuration file, ensure the current context is correctly set to the cluster you want to configure. You can check the current context with the kubectl config current-context command; otherwise, you could break an existing cluster by overriding its Flux configuration.

~ kubectl config current-context
minikube

I created an empty repository for this blog post containing only the README.md file.

.
└── nautobot-kubernetes
    └── README.md

1 directory, 1 file

I will now go ahead and bootstrap Flux. There are a couple of parameters that you need to pass as inputs to the command:

  • The URL of the Git repository (HTTPS or SSH)
  • Credentials, if using HTTPS
  • The branch name
  • The path in the repository where Flux stores its Kubernetes objects
~ flux bootstrap git \
  --url=https://github.com/networktocode/nautobot-kubernetes \
  --username=ubajze \
  --password=<TOKEN> \
  --token-auth=true \
  --branch=main \
  --path=clusters/minikube

► cloning branch "main" from Git repository "https://github.com/networktocode/nautobot-kubernetes"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed sync manifests to "main" ("61d425fc2aa2ca9ca973cc15c244bb94741cf468")
► pushing component manifests to "https://github.com/networktocode/nautobot-kubernetes"
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("bc4b896ac2da2264bf126e357e2b491a8de01644")
► pushing sync manifests to "https://github.com/networktocode/nautobot-kubernetes"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy

After a few minutes, Flux is installed. If you list your Pods, you should see additional pods created in the flux-system namespace.

~ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
flux-system   helm-controller-7f4bb54ddf-4wxn8           1/1     Running   0          4m47s
flux-system   kustomize-controller-5b9955f9c7-lghgq      1/1     Running   0          4m47s
flux-system   notification-controller-6c9b987cd8-z6w5s   1/1     Running   0          4m47s
flux-system   source-controller-656d7789f7-62gbm         1/1     Running   0          4m47s
kube-system   coredns-558bd4d5db-lz8wd                   1/1     Running   0          3d6h
kube-system   etcd-minikube                              1/1     Running   0          3d6h
kube-system   kube-apiserver-minikube                    1/1     Running   0          3d6h
kube-system   kube-controller-manager-minikube           1/1     Running   2          3d6h
kube-system   kube-proxy-sq54f                           1/1     Running   0          3d6h
kube-system   kube-scheduler-minikube                    1/1     Running   0          3d6h
kube-system   storage-provisioner                        1/1     Running   1          3d6h

Also, the repository now contains additional files to define the Flux configuration in the cluster.

.
└── nautobot-kubernetes
    ├── README.md
    └── clusters
        └── minikube
            └── flux-system
                ├── gotk-components.yaml
                ├── gotk-sync.yaml
                └── kustomization.yaml

4 directories, 4 files

Flux also installs some Custom Resource Definitions (CRDs) used for various purposes, as you will see later in this post. One of the definitions is called GitRepository, used to define a Git repository that Flux will track. Flux defines one GitRepository object for itself, and you can create more GitRepository objects for other applications.

~ kubectl get GitRepository -A
NAMESPACE     NAME          URL                                                    AGE     READY   STATUS
flux-system   flux-system   https://github.com/networktocode/nautobot-kubernetes   8m26s   True    stored artifact for revision 'main/bc4b896ac2da2264bf126e357e2b491a8de01644'
~ kubectl get GitRepository -n flux-system flux-system -o yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  creationTimestamp: "2022-09-09T13:24:52Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 1
  labels:
    kustomize.toolkit.fluxcd.io/name: flux-system
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: flux-system
  namespace: flux-system
  resourceVersion: "67765"
  uid: f7882fdf-0bfb-4825-92ad-780619d2d790
spec:
  gitImplementation: go-git
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system
  timeout: 60s
  url: https://github.com/networktocode/nautobot-kubernetes
<... Output omitted ...>

You can see that the spec section contains the URL, branch, and a couple of other parameters. Flux will track the repository https://github.com/networktocode/nautobot-kubernetes, and it will use the main branch, in my case. Of course, you can set up a different branch if you need to.

Your repository can contain more than just Kubernetes objects. You can have application code, tests, CI/CD definitions, etc. So, you have to tell Flux which files in your repository are Kubernetes deployments. To do that, Flux introduces a CRD called Kustomization. This CRD bridges the gap between the Git repository and a path within that repository. Apart from that, you can also specify an interval for reconciliation and some other parameters. A Flux Kustomization may point to a directory of Kubernetes objects or to a directory containing a Kubernetes kustomization.yaml. Unfortunately, the term kustomization is overloaded here and can be confusing; more on this later.

There is one Kustomization object for Flux in the Git repository.

~ kubectl get kustomization -n flux-system flux-system
NAME          AGE   READY   STATUS
flux-system   36m   True    Applied revision: main/bc4b896ac2da2264bf126e357e2b491a8de01644
~ kubectl get kustomization -n flux-system flux-system -o yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  creationTimestamp: "2022-09-09T13:24:52Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 1
  labels:
    kustomize.toolkit.fluxcd.io/name: flux-system
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: flux-system
  namespace: flux-system
  resourceVersion: "71920"
  uid: 764d3ceb-8512-47fa-a9c9-32434fa3587c
spec:
  force: false
  interval: 10m0s
  path: ./clusters/minikube
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
<... Output omitted ...>

You can see from the output that there is a sourceRef in the spec section, and this section defines the GitRepository object tracked. The path in the repository is defined with the path parameter. Flux will check the ./clusters/minikube directory in the Git repository for changes every 10 minutes (as defined with the interval parameter). All changes detected will be applied to a Kubernetes cluster.
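
Throughout this post I simply wait for the next sync interval, but the Flux CLI can also trigger a reconciliation on demand if you do not want to wait, for example:

~ flux reconcile kustomization flux-system --with-source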

At this point, I have Flux up and running.

Set Up Git Repository

Now that I have explained the basics, I can show you how to deploy Nautobot using Flux. I will create a new directory called ./kubernetes in my Git repository, where I will store files required for Nautobot deployment.

.
└── nautobot-kubernetes
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           └── kustomization.yaml
    └── kubernetes

I have a directory, but how will Flux know where to look for Kubernetes objects? Remember, I already have the GitRepository object, which is used to sync data from Git to Kubernetes. And I can use the Kustomization object to map the GitRepository object to the path in that repository.

So let’s create the Kustomization object in the ./clusters/minikube/flux-system directory. I will call it nautobot-kustomization.yaml.

.
└── nautobot-kubernetes
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    └── kubernetes

5 directories, 5 files

The nautobot-kustomization.yaml file contains the following:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: nautobot-kustomization
  namespace: flux-system
spec:
  force: false
  interval: 1m0s
  path: ./kubernetes
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system

This object tells Flux to track changes in the GitRepository under the ./kubernetes directory. In my case, I will reuse the GitRepository object flux-system, but usually you would create another GitRepository object just for this purpose. You would probably ask me, what is the point of doing that? You get flexibility. Suppose you want to test deployment from a different branch. In that case, you can change the branch you track with the GitRepository, and Flux will automatically deploy Nautobot from a different branch.
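
For illustration, a dedicated GitRepository object tracking a hypothetical develop branch could look like this (modeled on the flux-system object shown earlier):

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: nautobot-kubernetes
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: develop
  secretRef:
    name: flux-system
  url: https://github.com/networktocode/nautobot-kubernetes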

At this point, I have to introduce another kind called Kustomization. Wait, what? You already told us about Kustomization. Unfortunately, this is a different kind with the same name. If you look carefully at the apiVersion, you can see that the value is different. This one uses kustomize.config.k8s.io/v1beta1, while the “first” Kustomization uses kustomize.toolkit.fluxcd.io/v1beta2.

You can read more about the Kustomization definition in the official documentation. It can be used to compose a collection of resources or generate resources from other sources, such as ConfigMap objects from files.

Take a look at the repository carefully. There is already one kustomization.yaml file in the ./clusters/minikube/flux-system. It contains the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gotk-components.yaml
- gotk-sync.yaml

You can see a list of resources deployed for this Kustomization. There are currently two files included in the deployment. So, I will add another file called nautobot-kustomization.yaml under the resources object.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gotk-components.yaml
- gotk-sync.yaml
- nautobot-kustomization.yaml

I can commit and push changes. After a couple of minutes, the Flux Kustomization object is applied:

~ kubectl get kustomization -n flux-system
NAME                     AGE     READY   STATUS
flux-system              2d21h   True    Applied revision: main/561d36ff58a7a0fa647e3206faf5ba2aa1cab149
nautobot-kustomization   4m48s   False   kustomization path not found: stat /tmp/kustomization-2081943599/kubernetes: no such file or directory

You can see that I have a new Kustomization object, but it is not ready: because the ./kubernetes directory is empty, Git does not include it in the repository, so the path is not found. I have to add some files to make the deployment successful.

I will first create the ./kubernetes/namespace.yaml file, where I will define the Namespace object.

---
apiVersion: v1
kind: Namespace
metadata:
  name: nautobot

Next, I will create the ./kubernetes/kustomization.yaml file, where I will specify Nautobot resources. The first resource will be the namespace.yaml.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nautobot

resources:
  - namespace.yaml

Right now, my directory structure looks like this:

.
└── nautobot-kubernetes
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    └── kubernetes
        ├── kustomization.yaml
        └── namespace.yaml

5 directories, 7 files

Let’s commit and push the changes and wait for Flux to sync the Git repository in a few minutes. After the Git repository is synced, I can check the status of the Kustomizations.

~ kubectl get kustomization -n flux-system
NAME                     AGE     READY   STATUS
flux-system              2d22h   True    Applied revision: main/2208cac47b74342c7b86120fb55fe34f47c87b7b
nautobot-kustomization   17m     True    Applied revision: main/2208cac47b74342c7b86120fb55fe34f47c87b7b

You can see that the Nautobot Kustomization has been successfully applied. I can see a namespace called nautobot, meaning the Namespace object was installed successfully.

~ kubectl get namespace nautobot
NAME       STATUS   AGE
nautobot   Active   113s

Introduction to HelmRelease

In the previous blog post, I showed how to deploy Nautobot using Helm charts, which is a nice approach if you want to simplify Nautobot deployment to Kubernetes. I explained how to pass the input parameters (inline or from a YAML file) to the helm command. This is a simple way to deploy Nautobot manually. However, as we want to follow GitOps principles, we want to deploy everything automatically from a Git repository.

For this purpose, Flux has a special CRD called HelmRelease. If you take a look at the list of Pods installed by Flux, you can notice that there is a Pod called helm-controller. This Pod looks for any HelmRelease objects deployed to Kubernetes, and if there is one, it installs Helm charts. In principle, the helm-controller runs the helm install command.

If you remember from my previous blog post, I needed a Helm repository containing the Helm charts for a particular application. When you manually add a repository with the helm command, the repository is configured on your local machine, and the helm install command then deploys the chart’s Kubernetes objects to a cluster. For Flux, we need to specify the repository for the Helm controller instead. That is why there is another CRD called HelmRepository.

Deploy Using HelmRelease

Now that you are familiar with HelmReleases, I can start adding Kubernetes objects to the ./kubernetes directory.

I will first define the Nautobot HelmRepository. I will create a file called ./kubernetes/nautobot-helmrepo.yaml with the following content:

---
apiVersion: "source.toolkit.fluxcd.io/v1beta2"
kind: "HelmRepository"
metadata:
  name: "nautobot"
  namespace: "nautobot"
spec:
  url: "https://nautobot.github.io/helm-charts/"
  interval: "10m"

I must update resources in the ./kubernetes/kustomization.yaml file to apply this file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nautobot

resources:
  - namespace.yaml
  - nautobot-helmrepo.yaml

A few minutes after pushing the changes, I can see that the HelmRepository object was created.

~ kubectl get helmrepository -A
NAMESPACE   NAME       URL                                       AGE   READY   STATUS
nautobot    nautobot   https://nautobot.github.io/helm-charts/   9s    True    stored artifact for revision 'efa67ddff2b22097e642cc39918b7f7a27c53042e19ba19466693711fe7dd80e'

To deploy Nautobot using Helm charts, I need two things:

  • ConfigMap containing values for Helm charts
  • HelmRelease object connecting values and Helm charts

I will first create a ConfigMap object. The easiest way is to create a file with values in YAML format. The same format is used if you deploy Helm charts manually, using the --values option. I will go ahead and create a file called ./kubernetes/values.yaml with the following content:

---
postgresql:
  postgresqlPassword: "SuperSecret123"
redis:
  auth:
    password: "SuperSecret456"

The Kustomization object (kustomize.config.k8s.io/v1beta1) supports the ConfigMap generator, meaning you can generate a ConfigMap from a file directly. So I will update the ./kubernetes/kustomization.yaml file to generate a ConfigMap.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nautobot

resources:
  - namespace.yaml
  - nautobot-helmrepo.yaml

generatorOptions:
  disableNameSuffixHash: true

configMapGenerator:
  - name: "nautobot-values"
    files:
      - "values=values.yaml"

The configMapGenerator will create a ConfigMap with the name nautobot-values and the key called values containing the content of the file values.yaml. I will go ahead and push the changes.

After a few minutes, you can see the new ConfigMap created:

~ kubectl get configmaps -n nautobot nautobot-values -o yaml
apiVersion: v1
data:
  values: |
    ---
    postgresql:
      postgresqlPassword: "SuperSecret123"
    redis:
      auth:
        password: "SuperSecret456"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-12T12:02:29Z"
  labels:
    kustomize.toolkit.fluxcd.io/name: nautobot-kustomization
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: nautobot-values
  namespace: nautobot
  resourceVersion: "107230"
  uid: 7cbcd132-faca-4b09-8af5-f2f68feb8dbb

Now that the ConfigMap is created, I can finally create a HelmRelease object. I will put the following content into a file ./kubernetes/helmrelease.yaml:

---
apiVersion: "helm.toolkit.fluxcd.io/v2beta1"
kind: "HelmRelease"
metadata:
  name: "nautobot"
spec:
  interval: "30s"
  chart:
    spec:
      chart: "nautobot"
      version: "1.3.12"
      sourceRef:
        kind: "HelmRepository"
        name: "nautobot"
        namespace: "nautobot"
      interval: "20s"
  valuesFrom:
    - kind: "ConfigMap"
      name: "nautobot-values"
      valuesKey: "values"

This Kubernetes object defines the chart that will be installed. The sourceRef specifies the HelmRepository used for this HelmRelease. The valuesFrom section describes where to take values from.

I must add this file to resources in the ./kubernetes/kustomization.yaml.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nautobot

resources:
  - namespace.yaml
  - nautobot-helmrepo.yaml
  - helmrelease.yaml

generatorOptions:
  disableNameSuffixHash: true

configMapGenerator:
  - name: "nautobot-values"
    files:
      - "values=values.yaml"

This is how my repo structure looks at the end:

.
└── nautobot-kubernetes
    ├── README.md
    ├── clusters
    │   └── minikube
    │       └── flux-system
    │           ├── gotk-components.yaml
    │           ├── gotk-sync.yaml
    │           ├── kustomization.yaml
    │           └── nautobot-kustomization.yaml
    └── kubernetes
        ├── helmrelease.yaml
        ├── kustomization.yaml
        ├── namespace.yaml
        ├── nautobot-helmrepo.yaml
        └── values.yaml

5 directories, 10 files

I can now commit and push all changes. A few minutes after the push, you can see the HelmRelease object created in the nautobot namespace.

~ kubectl get helmreleases -n nautobot
NAME       AGE   READY     STATUS
nautobot   81s   Unknown   Reconciliation in progress

After a couple more minutes, I can also see Nautobot pods up and running.

~ kubectl get pods -n nautobot
NAME                                      READY   STATUS    RESTARTS   AGE
nautobot-577c89f9c7-fzs2s                 1/1     Running   1          2m28s
nautobot-577c89f9c7-hczbx                 1/1     Running   1          2m28s
nautobot-celery-beat-7d9f864c58-2c9r7     1/1     Running   3          2m28s
nautobot-celery-worker-647cc6d8dd-npx42   1/1     Running   2          2m28s
nautobot-celery-worker-647cc6d8dd-plmjq   1/1     Running   2          2m28s
nautobot-postgresql-0                     1/1     Running   0          2m28s
nautobot-redis-master-0                   1/1     Running   0          2m28s

So, what’s nice about this approach is that when changes are merged (or pushed) to the main branch, everything is deployed automatically.

Let’s say I want to scale up the Nautobot Celery workers to four. I can update the default value in the ./kubernetes/values.yaml to four:

---
postgresql:
  postgresqlPassword: "SuperSecret123"
redis:
  auth:
    password: "SuperSecret456"
celeryWorker:
  replicaCount: 4

I can commit and push the changes, and after the Git repository is synced, the changes are applied automatically. In the real GitOps paradigm, you would probably create a new branch, add your changes, push them, and open a Pull Request. After a colleague reviews the changes and they are merged, they are applied automatically to the production environment. In my case, I will push changes to the main branch directly, but a rough sketch of the pull-request flow is shown below for reference.
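
A sketch of that pull-request flow (branch name and commit message are illustrative):

~ git checkout -b scale-celery-workers
~ git add kubernetes/values.yaml
~ git commit -m "Scale Celery workers to 4"
~ git push origin scale-celery-workers
# Open a pull request, get it reviewed, and merge it into main.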

Waiting a few minutes for the Git repository to sync, I can now see four worker replicas:

~ kubectl get pods -n nautobot
NAME                                      READY   STATUS    RESTARTS   AGE
nautobot-577c89f9c7-fzs2s                 1/1     Running   1          14m
nautobot-577c89f9c7-hczbx                 1/1     Running   1          14m
nautobot-celery-beat-7d9f864c58-2c9r7     1/1     Running   3          14m
nautobot-celery-worker-647cc6d8dd-564p6   1/1     Running   0          2m58s
nautobot-celery-worker-647cc6d8dd-7xwtb   1/1     Running   0          2m58s
nautobot-celery-worker-647cc6d8dd-npx42   1/1     Running   2          14m
nautobot-celery-worker-647cc6d8dd-plmjq   1/1     Running   2          14m
nautobot-postgresql-0                     1/1     Running   0          14m
nautobot-redis-master-0                   1/1     Running   0          14m

Conclusion

In this blog post, I showed how to use a GitOps approach to deploy Nautobot using Helm charts. There are multiple advantages of using this approach. First, the infrastructure configuration is defined in a Git repository, serving as a source of truth for infrastructure. Second, you can introduce software development principles in your process, such as code reviews and others.

I encourage you to test this approach, and if you have any questions, reach out to us on our Slack channel.

~ Uros


