Day 17: Mastering Essential Python Libraries for DevOps

Adrian Rubico | Dec 2, 2024, 02:34 AM GMT+8

DevOps workflows rely heavily on Python because of its simplicity and the wide range of libraries suited to automation and deployment management. In this post, we will explore Python libraries that are essential for DevOps, with examples and tasks to help you solidify your understanding of each one.

Python Libraries Commonly Used in DevOps

Below is a list of Python libraries frequently used in DevOps for tasks such as automation, configuration management, testing, and cloud integration (a short example using a few of them follows the list):

  • json: Parses, reads, and writes JSON data, crucial in configuration and API interactions.
  • pyyaml: Facilitates working with YAML files for configurations and infrastructure-as-code tasks.
  • requests: Makes HTTP requests simple and efficient, aiding in API integration.
  • os: Provides functions to interact with the operating system for file handling, process management, and more.
  • subprocess: Enables running shell commands directly from Python scripts.
  • paramiko: Allows SSH-based automation, making it ideal for managing remote servers.
  • fabric: Simplifies executing shell commands on multiple remote machines over SSH.
  • ansible: Automates configuration management and application deployment using Python modules.
  • jinja2: A templating engine for generating dynamic configuration files (e.g., Ansible playbooks).
  • boto3: Python SDK for AWS, enabling seamless interaction with AWS services such as EC2, S3, and Lambda.
  • docker: Manages Docker containers directly from Python scripts, useful for container orchestration.
  • django: A web framework often used to build internal DevOps tools and dashboards.
  • python-terraform: Provides programmatic interaction with Terraform configurations, enabling infrastructure automation.
  • pytest: A powerful testing framework for writing unit, functional, and integration tests.
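
To give a feel for how a few of these fit together, here is a minimal sketch that checks disk usage with subprocess and polls a service health endpoint with requests. The URL is a placeholder, not a real service, so adjust it to an endpoint in your own environment.

import subprocess

import requests

# Run a shell command and capture its output (disk usage summary for /)
result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True, check=True)
print("Disk usage:")
print(result.stdout)

# Poll a placeholder health endpoint and report its status
HEALTH_URL = "https://example.com/health"  # placeholder URL, replace with your service
try:
    response = requests.get(HEALTH_URL, timeout=5)
    response.raise_for_status()
    print(f"Service healthy: HTTP {response.status_code}")
except requests.RequestException as exc:
    print(f"Health check failed: {exc}")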

Tasks

Create and Read a JSON with Azure Service Details

This task involves creating a dictionary containing detailed information about Azure services, writing it to a JSON file, and then reading and printing only the services of type Compute.

Steps and Example Code:

  • Create a dictionary with Azure service details, including service name and type.
  • Write the dictionary to a JSON file using the json module.
  • Read the JSON file and use a for loop with a condition to print only services of type Compute.

create-azureservice.py

import json

# Create a dictionary with Azure service details
azure_services = {
    "1": {"name": "Virtual Machines", "type": "Compute"},
    "2": {"name": "App Services", "type": "Compute"},
    "3": {"name": "Azure Kubernetes Service", "type": "Container"},
    "4": {"name": "Blob Storage", "type": "Storage"}
}

# Write the dictionary to a JSON file
with open("azure_services_details.json", "w") as json_file:
    json.dump(azure_services, json_file, indent=4)
    print("Service details written to azure_services_details.json")

compute-azureservice.py

import json

# Read the JSON file and print services of type 'Compute'
with open("azure_services_details.json", "r") as file:
    data = json.load(file)
    print("Compute Services:")
    for service in data.values():  # the JSON file is a dict keyed by service ID
        if service["type"] == "Compute":
            print(f"- {service['name']}")

Explanation:

  • The dictionary includes multiple services, each identified by a key, with details about the name and type.
  • The for loop iterates over the dictionary's values, and the if condition filters and prints only the services with type == "Compute".
  • This approach demonstrates how to filter specific types of data from a JSON file, a common task in DevOps workflows; a more compact list-comprehension variant is shown below.
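
If you prefer a more compact style, the same filter can be expressed as a list comprehension. This is just an alternative sketch of the loop above, with identical behaviour.

import json

# Load the service details and collect the names of all Compute services
with open("azure_services_details.json", "r") as file:
    data = json.load(file)

compute_services = [service["name"] for service in data.values() if service["type"] == "Compute"]
print("Compute Services:", ", ".join(compute_services))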

Create and Read a YAML File with Azure Pipeline Configuration

In this task, we'll define a CI/CD pipeline using YAML, a popular format in DevOps, and then use Python to read and process the configuration file. This practical exercise shows how to extract specific details from YAML configurations, a skill that carries over directly to working with Infrastructure as Code (IaC).

  • Create a sample Azure Pipeline YAML file that includes multiple steps with script and displayName fields.
  • Read the YAML file using the yaml module (if not installed, pip install pyyaml).
  • Iterate through the steps in the pipeline, count the scripts, and print the displayName of each script.

azure-pipelines.yml

trigger: none

pool:
  vmImage: ubuntu-latest

jobs:
  - job: SetupDockerRepository
    displayName: "Set up Docker's APT Repository"
    steps:
      - script: |
          sudo apt-get update
          sudo apt-get install -y ca-certificates curl
          sudo install -m 0755 -d /etc/apt/keyrings
          sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
          sudo chmod a+r /etc/apt/keyrings/docker.asc

          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
          sudo apt-get update
        displayName: "Add Docker's Official GPG Key and Repository"

  - job: InstallDocker
    displayName: "Install Docker"
    dependsOn: SetupDockerRepository
    steps:
      - script: |
          sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
        displayName: "Install Docker Engine and Plugins"

  - job: CheckDockerVersion
    displayName: "Check Docker Version"
    dependsOn: InstallDocker
    steps:
      - script: |
          docker --version
        displayName: "Verify Docker Installation"

displayname-count.py

import yaml

# Open the YAML file
with open("azure-pipelines.yml", "r") as file:
    data = yaml.safe_load(file)

# Count the scripts and display displayName
script_count = 0
for job in data['jobs']:
    for step in job['steps']:
        if 'script' in step:
            script_count += 1
            print(f"Step {script_count}: {step['displayName']}")

Explanation:

  • The script starts by importing the yaml library and loading the content of the azure-pipelines.yml file.
  • The jobs section is retrieved, and the script iterates through each job's steps.
  • For every step that contains a script, the counter is incremented and the step's displayName is printed, giving a running count of script steps in the pipeline. A small extension that summarizes each job follows below.
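
As a small extension, assuming the same azure-pipelines.yml, here is a sketch that also summarizes each job by its displayName and step count. The field names match the pipeline above; adjust them if your pipeline uses a different structure.

import yaml

# Load the pipeline definition
with open("azure-pipelines.yml", "r") as file:
    data = yaml.safe_load(file)

# Summarize each job: display name (falling back to the job ID) and step count
for job in data["jobs"]:
    name = job.get("displayName", job.get("job", "unnamed"))
    steps = job.get("steps", [])
    print(f"Job '{name}' has {len(steps)} step(s)")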

Wrapping Up

Throughout this blog post, we explored Python libraries widely used in DevOps and demonstrated their practical applications. From managing structured data in JSON to parsing YAML files for pipeline configurations, these tasks illustrate how Python simplifies automation and boosts productivity. Reading and analyzing YAML files in particular is a crucial skill for managing infrastructure as code, and these fundamentals are the building blocks for automating more advanced DevOps processes.

Stay tuned for the next blog, where we will dive into Mastering Docker to learn about containerization, managing Docker images, and deploying containerized applications effectively.
