Chat with Your Zabbix: A Practical Guide to Integrating AI with Zabbix AI MCP Server

Unlocking Zabbix with AI: A Look at the Zabbix AI MCP Server

Good morning everyone, Dimitri Bellini here, back on my channel Quadrata! Today, we’re diving into something truly interesting, a bit experimental, and as always, involving our good friend Zabbix. This exploration comes thanks to a member of the Italian Zabbix community, Matteo Peirone, who reached out on LinkedIn to share a fascinating project he developed. I was immediately intrigued and knew I had to show it to you.

So, what are we talking about? It’s called the Zabbix AI MCP Server, and it lets us drive operations within Zabbix using artificial intelligence. Let’s break down what this means and how it works.

What is the Zabbix AI MCP Server?

At its core, the Zabbix AI MCP Server acts as an intermediary, bridging the gap between artificial intelligence and the Zabbix server’s APIs. Many of you might already be familiar with Zabbix APIs, which allow us to consult data or perform actions within our Zabbix environment. This project aims to simplify these interactions significantly, especially for those not deeply versed in API scripting.

To get started, we need a few key components:

  • An inference engine: This can be cloud-based or local (via Ollama or vLLM). I’ve been experimenting with a few.
  • An adequate AI model compatible with the engine.
  • The Zabbix AI MCP Server itself.
  • A small, yet crucial, project called mcp-to-openapi-proxy.
  • In my setup, I’m using Open Web UI as a chat interface, similar to ChatGPT, to interact with the AI.

Understanding MCP: Model Context Protocol

Before we go further, it’s important to understand what “MCP” stands for. It means Model Context Protocol. This protocol, invented by Anthropic (the creators of Claude), is designed to allow AI models to interact with external “tools.” These tools can be anything from platform functionalities to specific software features.

Essentially, MCP provides a standardized way for an AI to:

  1. Discover available tools and their capabilities (e.g., functions, resources).
  2. Understand how to use these tools, including descriptions and invocation methods.

This is particularly relevant for AI agents, which are sophisticated prompts instructed to perform tasks that might require external interactions, like research or system operations. MCP helps standardize these tool interactions, which matters because not all LLMs handle function calling equally well.
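The discovery step can be pictured with a short sketch. The dictionary below mimics the shape of an MCP tool listing (a name, a description, and a JSON-Schema for the inputs); the exact wire format is defined by the protocol specification, and the tool name here is hypothetical, not necessarily what the Zabbix MCP Server exposes:

```python
# Illustrative shape of an MCP tool listing (not the literal wire format;
# the tool name "problem_get" is hypothetical).
zabbix_tools = [
    {
        "name": "problem_get",
        "description": "Retrieve current problems from Zabbix.",
        "inputSchema": {
            "type": "object",
            "properties": {"limit": {"type": "integer"}},
        },
    },
]

def discover(tools):
    """Step 1: the model learns which tools exist and what they do."""
    return {t["name"]: t["description"] for t in tools}

catalog = discover(zabbix_tools)
# Step 2: armed with the inputSchema, the model knows how to invoke each tool.
```

With a catalog like this in its context, the model can decide on its own when a user question should become a tool call.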

How the Zabbix AI MCP Server Works

The Zabbix AI MCP Server, developed by Matteo Peirone, leverages this MCP framework. It exposes Zabbix’s API functionalities as “tools” that an AI can understand and use. This means you can:

  • Consult data: Ask for the latest problems, analyze triggers, or get details about a host.
  • Perform actions: Create or update objects within Zabbix (if not in read-only mode).

All of this can be done without needing to write complex API scripts yourself!
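For contrast, here is roughly what such an API script looks like when written by hand: a JSON-RPC 2.0 payload for problem.get, ready to POST to api_jsonrpc.php. The sketch only builds the payload; the token is a placeholder, and note that older Zabbix releases accept the token in the auth field as shown, while newer ones pass it in an HTTP Authorization header instead:

```python
import json

# What the MCP server spares you from writing by hand: a raw Zabbix
# JSON-RPC 2.0 call to problem.get (the token is a placeholder).
def build_problem_get(token, limit=5):
    return {
        "jsonrpc": "2.0",
        "method": "problem.get",
        "params": {"recent": True, "limit": limit,
                   "sortfield": "eventid", "sortorder": "DESC"},
        "auth": token,
        "id": 1,
    }

payload = build_problem_get("your_zabbix_api_token")
body = json.dumps(payload)  # POST this to http://your-zabbix-server/api_jsonrpc.php
```

With MCP, the AI assembles the equivalent of this payload for you from a natural-language request.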

The Architecture in My Setup:

Here’s how the pieces connect in my demonstration:

  1. Open Web UI: This is my chat interface where I type my requests in natural language.
  2. mcp-to-openapi-proxy: This acts as a bridge. Open Web UI is instructed to look for tools here. This proxy converts MCP functions into REST API endpoints (normalized in the OpenAPI standard) that Open Web UI can consume. It essentially acts as an MCP client.
  3. Zabbix AI MCP Server: This is the star of the show. The mcp-to-openapi-proxy communicates with this server. The Zabbix AI MCP Server is configured with the details of my Zabbix instance (URL, authentication token or credentials).
  4. Zabbix Server: The Zabbix AI MCP Server then interacts with the actual Zabbix server APIs to fetch data or perform actions based on the AI’s request.
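Putting the chain together: once the proxy is up, each MCP tool becomes a plain HTTP endpoint that Open Web UI calls on the model’s behalf. The sketch below only assembles such a request without sending it; the host, the per-tool path layout, and the tool name are assumptions based on my setup:

```python
import json

PROXY = "http://my_server_ip:8000"   # where mcp-to-openapi-proxy listens
API_KEY = "top-secret"               # the value passed via --api-key

def build_tool_call(tool_name, arguments):
    """Open Web UI does the equivalent of this for every tool invocation."""
    return {
        "url": f"{PROXY}/{tool_name}",   # one endpoint per tool (assumption)
        "headers": {"Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "application/json"},
        "body": json.dumps(arguments),
    }

req = build_tool_call("problem_get", {"limit": 5})
```

The proxy translates this REST call back into an MCP invocation, and the Zabbix AI MCP Server turns it into the actual Zabbix API request.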

Getting Started: Installation and Setup Guide

Here’s a brief rundown of how I got this up and running. It might seem a bit involved, but following these steps should make it manageable:

  1. Clone the Zabbix AI MCP Server Repository:

    git clone https://github.com/mpeirone/zabbix-mcp-server.git

    (You’ll find the repository on Matteo Peirone’s GitHub)

  2. Navigate into the directory and install dependencies:

    cd zabbix-mcp-server
    uv sync

    (I’m using uv here, which is a fast Python package installer and resolver).

  3. Configure the Zabbix AI MCP Server:
    Copy the example configuration file:

    cp config/.env.example .env

    Then, edit the .env file to include your Zabbix server URL, authentication method (token or user/password), and set READ_ONLY=false if you want to test creation/update functionalities (use with caution!).

    ZABBIX_URL="http://your-zabbix-server/api_jsonrpc.php"
    ZABBIX_TOKEN="your_zabbix_api_token"
    # or
    # ZABBIX_USER="your_user"
    # ZABBIX_PASSWORD="your_password"
    READ_ONLY=false

  4. Install and Run the mcp-to-openapi-proxy:
    This component exposes the MCP server over HTTP.

    pipx install uv   # skip this if uv is already installed
    uvx mcpo --port 8000 --api-key "top-secret" -- uv run python3.11 src/zabbix_mcp_server.py

    This command starts the proxy on port 8000 and, in turn, launches the Zabbix MCP server application. Note that the API key isn’t generated for you: it is whatever you pass via --api-key (here “top-secret”), and you’ll need the same value later when configuring Open Web UI.

  5. Set up Open Web UI:
    Deploy Open Web UI (e.g., via Docker). I’ve configured mine to connect to a local Ollama instance, but you can also point it to other LLM providers.
  6. Configure Tools in Open Web UI:

    • In Open Web UI, navigate to the Admin Panel -> Settings -> Connections to set up your LLM connection (e.g., Ollama, OpenAI, OpenRouter).
    • Then, go to Tools and add a new tool server:

      • URL: Point it to where your `mcp-to-openapi-proxy` is running (e.g., `http://my_server_ip:8000/`).
      • Authentication: Use “Bearer Token” and provide the API key you passed to `mcp-to-openapi-proxy` via --api-key (e.g., “top-secret”).
      • Give it a name (e.g., “Zabbix MCP Proxy”) and ensure it’s enabled.

Putting It to the Test: Demo Highlights

In my video, I demonstrated a few queries:

  • “Give me the latest five Zabbix problems in a nice table.”
    Using a local Mistral model via vLLM, the system successfully called the Zabbix MCP Server and retrieved the problem data, formatting it into a table. The accuracy was good, matching the problems shown in my Zabbix dashboard.
  • Fetching Host Details:
    I asked, “Give me the details of the host called Zabbix server.” Initially, with the local model, the phrasing needed to be precise. Switching to a more powerful model like Gemini Pro (via OpenRouter) and specifying “hostname equal to Zabbix server” yielded the correct host ID and details. This highlights how the LLM’s capability plays a role in interpreting requests and using the tools.

One challenge observed is that sometimes, for more complex information (like correlating event IDs with host names that aren’t returned by the initial problem.get API call), the AI might need more sophisticated tool definitions or better prompting to make multiple, related API calls. However, the beauty of MCP is that you could create a custom “tool” within the Zabbix MCP Server that performs these multiple queries internally and presents a consolidated result.
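Such a consolidated tool could look like the sketch below: one function that chains problem.get and event.get server-side and returns a single merged answer. The api argument stands in for whatever Zabbix client the MCP server uses internally; the method and field names follow the Zabbix API, but the helper itself is hypothetical:

```python
def problems_with_hosts(api, limit=5):
    """Hypothetical consolidated tool: one call for the AI,
    several Zabbix API calls under the hood."""
    problems = api("problem.get", {"limit": limit, "recent": True})
    merged = []
    for p in problems:
        # event.get resolves the hosts behind each problem's event ID
        events = api("event.get", {"eventids": p["eventid"],
                                   "selectHosts": ["host"]})
        hosts = [h["host"] for e in events for h in e.get("hosts", [])]
        merged.append({"problem": p["name"], "hosts": hosts})
    return merged

# Tiny stub standing in for a real Zabbix API client, for illustration:
def fake_api(method, params):
    if method == "problem.get":
        return [{"eventid": "101", "name": "High CPU"}]
    return [{"hosts": [{"host": "Zabbix server"}]}]

result = problems_with_hosts(fake_api)
```

Exposed as a single MCP tool, this would let even a modest local model answer “which hosts have problems?” in one shot, instead of having to plan two dependent API calls itself.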

The Potential and Why This is Exciting

This approach is incredibly versatile. For those not comfortable with APIs, it’s a game-changer. But even for seasoned users, it opens up possibilities for quick queries and potentially for building more complex AI-driven automation around Zabbix.

The Zabbix AI MCP Server is an experiment, something new and fresh. It’s a fantastic starting point that can be refined and improved, perhaps with your help and ideas! The fact that it’s built on an open standard like MCP means it could integrate with a growing ecosystem of AI tools and agents.

Join the Conversation!

This is just the beginning. It’s fascinating to think about how we can use methodologies like the MCP server not just within Zabbix, but across many other applications. The ability to automate and interact with complex systems using natural language is a powerful concept.

What do you think about this? Can you see yourself using something like this? What other use cases come to mind? Let me know in the comments below – I’m really keen to hear your thoughts and start a discussion on this topic.

If you found this interesting, please give the video a big thumbs up, and if you haven’t already, subscribe to Quadrata for more explorations into the world of open source and IT!

That’s all for today. See you next week!

A big thank you again to Matteo Peirone for this innovative project!


The Ultimate Guide to Open Source Backups: Kopia vs. Minarca

Good morning, everyone! It’s Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open source. A few weeks ago, I had one of those heart-stopping moments we all dread: a chunk of my data suddenly became inaccessible. While I had some backup systems in place, they weren’t as reliable as I thought. That scare sent me on a mission to find the best, most robust open-source backup solution out there.

We often don’t think about backups until it’s too late. Whether it’s a broken hard drive, corrupted files, or the dreaded ransomware attack, having a solid backup strategy is the only thing that stands between you and disaster. But remember, as the saying goes, “The backup is guaranteed, but the restore is another story.” A good tool needs to handle both flawlessly.

My Hunt for the Perfect Backup Tool: The Checklist

Before diving into the options, I set some clear requirements for my ideal solution. It needed to be more than just a simple file-copying utility. Here’s what I was looking for:

  • Truly Open Source: The solution had to be freely available and transparent.
  • Cross-Platform: It must work seamlessly across Windows, Linux, and macOS.
  • Powerful Features: I was looking for advanced capabilities like:

    • Deduplication: A smart feature that saves significant space by not storing the same file or block of data more than once. This is different from compression, which just shrinks individual files.
    • Encryption: End-to-end encryption is a must, especially if I’m sending my data to a cloud provider. My data should be for my eyes only.

  • Flexible & Versatile: It needed a Command Line Interface (CLI) for automation and a Graphical User Interface (GUI) for ease of use. It also had to support a wide range of storage backends, from a local NAS (using protocols like SSH, NFS, SFTP) to cloud storage like Amazon S3 and Backblaze B2.
  • Simplicity: Finally, it couldn’t be a convoluted mess of scripts. The setup and daily management had to be straightforward.
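Block-level deduplication, mentioned in the checklist above, can be illustrated in a few lines: split the data into blocks, key each block by its hash, and store each unique block only once. This is a deliberately simplified sketch with fixed-size blocks; real tools like Kopia use content-defined chunking, which survives insertions much better:

```python
import hashlib

def dedup_store(data, block_size=4):
    """Store each unique block once, keyed by its SHA-256 hash."""
    store = {}   # hash -> block bytes (stored once)
    index = []   # ordered hashes needed to rebuild the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        index.append(h)
    return store, index

data = b"AAAABBBBAAAABBBB"          # four blocks, but only two distinct ones
store, index = dedup_store(data)
rebuilt = b"".join(store[h] for h in index)   # restore = follow the index
```

Here 16 bytes of input need only 2 stored blocks; across many machines backing up the same OS files, this is where the big space savings come from.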

The Contenders: Kopia vs. Minarca

My research led me to several interesting projects like Restic, Borg, and UrBackup, but two stood out for meeting almost all my criteria: Kopia and Minarca. They represent two fundamentally different philosophies in the backup world, and I put both to the test.

Deep Dive #1: Kopia – The Power User’s Choice

Kopia is a modern, feature-rich, agent-based backup tool. This means all the intelligence—deduplication, encryption, scheduling—resides in the agent you install on your computer. It directly manages the data in your chosen storage repository without needing a server component on the other end.

What I Love About Kopia (Pros):

  • Advanced Features: It offers fantastic cross-device deduplication, strong end-to-end encryption, compression, and error correction to ensure your data is always valid.
  • Snapshot-Based: Kopia creates point-in-time “snapshots” of your directories. This is great for versioning and ensures a consistent state for each backup.
  • Incredibly Versatile: It supports virtually every storage backend you can think of.
  • Resilient by Design: All your configuration policies are stored within the remote repository itself. If your computer dies, you can install Kopia on a new machine, connect to your repository, and all your settings and backup history are immediately available for a restore.

Potential Hurdles (Cons):

  • Learning Curve: While the GUI simplifies things, the underlying policy system for managing snapshot retention, scheduling, and exclusions is incredibly detailed and can be a bit overwhelming for beginners.
  • No Simultaneous Multi-Repository: It doesn’t natively support backing up to multiple repositories at the exact same time in a single job.

A Quick Kopia Walkthrough

Using Kopia involves three main steps:

  1. Connect to a Repository: You start by pointing the Kopia UI to your storage location, whether it’s a local folder, an SFTP server, or a cloud bucket like Backblaze B2. You’ll set a password that encrypts everything.
  2. Create a Snapshot: You then select a folder you want to back up, and Kopia creates a snapshot. The first run uploads everything, but subsequent snapshots are incredibly fast, as only the changes are processed and uploaded, thanks to deduplication.
  3. Restore: To restore, you browse your snapshots by date and time, find the files or folders you need, and either download them or “mount” the entire snapshot as a local drive to browse through it.

Deep Dive #2: Minarca – Simplicity and Centralized Control

Minarca takes a more traditional client-server approach. You install a lightweight agent on your machines and a server component on a central server (which can be a simple Linux box in your home or office). This architecture is fantastic for managing multiple devices.

Where Minarca Shines (Pros):

  • Incredibly User-Friendly: Both the agent and the server have beautiful, simple interfaces. It’s very intuitive to set up and manage.
  • Centralized Management: The web-based server dashboard is the star of the show. It gives you a complete overview of all your users and devices, with stats on backup duration, space used, and new or modified files. You can even restore files for any user directly from the web interface!
  • Multi-User/Multi-Device: It’s built from the ground up to support multiple users, each with multiple devices, all backing up to one central location.
  • Easy Installation: Setting up the Minarca server on an Ubuntu or Debian machine takes just a few simple commands.

What’s Missing (Cons):

  • No Built-in Advanced Features: Minarca currently leaves features like deduplication and encryption to the underlying storage system (for example, using a ZFS filesystem on your server).
  • File-Based, Not Snapshot-Based: It works by copying files (incrementally) rather than creating atomic snapshots like Kopia.

A Quick Minarca Walkthrough

Getting started with Minarca is a breeze:

  1. Set Up the Server: Install the Minarca server package on a Linux machine.
  2. Configure the Agent: On your client PC, install the Minarca agent and point it to your server’s address with the credentials you created.
  3. Create a Backup Job: In the agent, you can easily select which folders to include or exclude and set a simple backup schedule.
  4. Restore: The agent’s restore interface presents a calendar. You just click a date to see the available backup and browse the files you want to recover. Or, as an admin, you can do this from the central web dashboard.

Kopia vs. Minarca: Which One Is for You?

After testing both extensively, it’s clear they serve different needs.

Choose Kopia if: You are a power user who wants the most advanced features like end-to-end encryption and high-efficiency deduplication built right in. You are comfortable with a more technical setup and prefer a decentralized, agent-only approach.

Choose Minarca if: You value simplicity, ease of use, and centralized control. It’s the perfect solution if you need to manage backups for your family or a small office, and you want a clean dashboard to monitor everything at a glance.

My Final Thoughts and Your Turn

Both Kopia and Minarca are fantastic, robust open-source solutions that put you in control of your data. They are both miles ahead of just hoping for the best. The most important takeaway is this: don’t be like me and wait for a scare. Set up your backups today!

I’d love to hear from you. Which of these tools sounds more appealing to you? Are you using another open-source solution that you love? Let me know in the comments below. Your feedback is always welcome!


Thank you for reading, and don’t forget to subscribe to my YouTube channel for more deep dives into the world of open source.

Until next time, this is Dimitri. Ciao!

SigNoz: A Powerful Open Source APM and Observability Tool

Diving Deep into SigNoz: A Powerful Open Source APM and Observability Tool

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata, the channel where we explore the fascinating world of open source and IT. As I always say, I hope you enjoy these videos, and if you haven’t already, please consider subscribing and hitting that like button if you find the content valuable!

While Zabbix always holds a special place in our hearts for monitoring, today I want to introduce something different. I’ve been getting requests from customers about how to monitor their applications, and for that, you typically need an Application Performance Monitor (APM), or as it’s sometimes fancily called, an “Observability Tool.”

Introducing SigNoz: Your Open Source Observability Hub

The tool I’m excited to share with you today is called SigNoz. It’s an open-source solution designed for comprehensive observability, which means it helps you monitor metrics, traces (the calls made within your application), and even logs. This last part is a key feature of SigNoz, as it aims to incorporate everything you might need to keep a close eye on your applications.

One of its core strengths is that it’s built natively on OpenTelemetry. OpenTelemetry is becoming an industry standard for collecting telemetry data (metrics, traces, logs) from your applications and transmitting it to a backend like SigNoz. We’ll touch on the advantages of this later.

Why Consider SigNoz?

SigNoz positions itself as an open-source alternative to paid, proprietary solutions like Datadog or New Relic, which can be quite expensive. Of course, choosing open source isn’t just about avoiding costs; it’s also about flexibility and community. For home labs, small projects, or even just for learning, SigNoz can be incredibly useful.

Key Features of SigNoz

  • Application Performance Monitoring (APM): Out-of-the-box, you get crucial metrics like P99 latency, error rates, requests per second, all neatly presented in dashboards.
  • Distributed Tracing: This allows you to follow the path of a request as it travels through your application, helping you pinpoint bottlenecks and errors.
  • Log Management: A relatively recent but powerful addition, SigNoz can ingest logs, allowing you to search and analyze them, similar to tools like Graylog (though perhaps with fewer advanced log-specific features for now).
  • Metrics and Dashboards: SigNoz provides a user-friendly interface with customizable dashboards and widgets.
  • Alerting: You can set up alerts, much like triggers in Zabbix, to get notified via various channels when something goes wrong.

Under the Hood: The Architecture of SigNoz

Understanding how SigNoz is built is fundamental to appreciating its capabilities:

  • OpenTelemetry: As mentioned, this is the core component for collecting and transmitting data from your applications.
  • ClickHouse: This is the database SigNoz uses. ClickHouse is an open-source, column-oriented database management system that’s incredibly efficient for handling and querying millions of data points very quickly. It also supports high availability and horizontal scaling even in its open-source version, which isn’t always the case with other databases.
  • SigNoz UI: The web interface that allows you to visualize and interact with the data collected by OpenTelemetry and stored in ClickHouse.
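The column-oriented layout that makes ClickHouse so fast for this workload is easy to picture: instead of scanning whole rows, an aggregation touches only the one column it needs. A toy comparison in plain Python (purely illustrative, nothing ClickHouse-specific):

```python
# Row-oriented: every record carries all of its fields together.
rows = [
    {"ts": 1, "service": "fastapp", "latency_ms": 12},
    {"ts": 2, "service": "fastapp", "latency_ms": 48},
    {"ts": 3, "service": "fastapp", "latency_ms": 30},
]

# Column-oriented: one contiguous array per field.
columns = {
    "ts": [1, 2, 3],
    "service": ["fastapp"] * 3,
    "latency_ms": [12, 48, 30],
}

# An aggregation like avg(latency_ms) reads a single array,
# skipping the ts and service data entirely.
avg_latency = sum(columns["latency_ms"]) / len(columns["latency_ms"])
```

On millions of data points, reading one compact column instead of every full row is exactly why these queries stay fast.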

For those wanting to try it out at home, you can easily get this all running with Docker.

The Power of OpenTelemetry

OpenTelemetry is a game-changer. It’s becoming a de facto standard, with even tools like Dynatrace now able to use OpenTelemetry as a data source. The community around it is very active, making it a solid foundation for a product like SigNoz.

Key advantages of OpenTelemetry include:

  • Standardization: It provides a consistent way to instrument applications.
  • Libraries and Agents: It offers out-of-the-box libraries and agents for most major programming languages, simplifying instrumentation.
  • Auto-Instrumentation (Monkey Patching): Theoretically, OpenTelemetry can automatically inject the necessary code into your application to capture telemetry data without you needing to modify your application’s source code significantly. You just invoke your application with certain environment parameters. I say “theoretically” because while I tried it with one of my Python applications, I couldn’t get it to trace anything. Let me know in the comments if you’d like a dedicated video on this; I’m curious to dig deeper into why it didn’t work for me straight away!
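Conceptually, that “monkey patching” boils down to wrapping your functions at startup so every call is timed and recorded, without touching your source code. Here is a stdlib-only toy version of the idea; the real OpenTelemetry SDK does this per framework, with proper span context propagation and OTLP export:

```python
import functools
import time

spans = []  # collected "telemetry", in place of an OTLP exporter

def instrument(func):
    """Wrap a function so each call records a span with its duration."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            spans.append({"name": func.__name__,
                          "duration_s": time.perf_counter() - start})
    return wrapper

# What auto-instrumentation effectively does to your request handlers:
@instrument
def handle_request():
    return "hello world"

response = handle_request()
```

opentelemetry-instrument applies this kind of wrapping to known libraries (FastAPI, requests, database drivers, …) at launch time, which is why it can sometimes fail silently when it doesn’t recognize how your app is started.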

Getting Started: Installing SigNoz with Docker and a Demo App

For my initial tests, I used a demo application suggested by the SigNoz team. Here’s a rundown of how you can get started with a standalone Docker setup:

1. Install SigNoz

It’s straightforward:

  1. Clone the SigNoz repository: git clone https://github.com/SigNoz/signoz.git (or the relevant path from their docs).
  2. Navigate into the directory and run Docker Compose. This will pull up four containers:

    • SigNoz OTel Collector (OpenTelemetry Collector): Gathers data from OpenTelemetry agents.
    • SigNoz Query Service/Frontend: The graphical interface.
    • ClickHouse Server: The database.
    • Zookeeper: Manages ClickHouse instances (similar to etcd).

You can usually find the exact commands in the official SigNoz documentation under the Docker deployment section.

2. Set Up the Sample FastAPI Application

To see SigNoz in action, I used their “Sample FastAPI App”:

  1. Clone the demo app repository: (You’ll find this on the SigNoz GitHub or documentation).
  2. Create a Python 3 virtual environment: It’s always good practice to isolate dependencies.
    python3 -m venv .venv
    source .venv/bin/activate

  3. Install dependencies:
    pip install -r requirements.txt

  4. Install OpenTelemetry components for auto-instrumentation:
    pip install opentelemetry-distro opentelemetry-exporter-otlp

  5. Bootstrap OpenTelemetry (optional, for auto-instrumentation):
    opentelemetry-bootstrap --action=install

    This attempts to find requirements for your specific application.

  6. Launch the application with OpenTelemetry instrumentation:

    You’ll need to set a few environment variables:

    • OTEL_RESOURCE_ATTRIBUTES: e.g., service.name=MyFastAPIApp (This name will appear in SigNoz).
    • OTEL_EXPORTER_OTLP_ENDPOINT: The address of your SigNoz collector (e.g., http://localhost:4317 if running locally).
    • OTEL_TRACES_EXPORTER: Set to otlp.
    • OTEL_EXPORTER_OTLP_PROTOCOL: Can be grpc or http/protobuf.

    Then, run your application using the opentelemetry-instrument command:

    OTEL_RESOURCE_ATTRIBUTES=service.name=FastApp OTEL_EXPORTER_OTLP_ENDPOINT="http://<SigNoz-IP>:4317" OTEL_TRACES_EXPORTER=otlp OTEL_EXPORTER_OTLP_PROTOCOL=grpc opentelemetry-instrument uvicorn main:app --host 0.0.0.0 --port 8000

    (Replace with the actual IP where SigNoz is running).
    The opentelemetry-instrument part is what attempts the “monkey patching” or auto-instrumentation. The application itself (uvicorn main:app...) starts as it normally would.
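The OTEL_RESOURCE_ATTRIBUTES value is a comma-separated list of key=value pairs, per the OpenTelemetry environment-variable specification. As a stdlib-only sketch of how such a string is parsed (the deployment.environment attribute is just an illustrative addition; this is not the SDK's actual parser):

```python
import os

# Illustrative value; "deployment.environment" is an extra example attribute.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=FastApp,deployment.environment=lab"

def parse_resource_attributes(raw: str) -> dict:
    # Comma-separated key=value pairs; split each pair on the first "=".
    return dict(pair.split("=", 1) for pair in raw.split(",") if pair)

attrs = parse_resource_attributes(os.environ["OTEL_RESOURCE_ATTRIBUTES"])
print(attrs["service.name"])  # → FastApp
```

Whatever you set as service.name is the label under which your application appears in the SigNoz services list.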

A Quick Look at SigNoz in Action

Once the demo app was running and sending data, I could see traces appearing in my terminal (thanks to console exporter settings). To generate some load, I used Locust with a simple configuration to hit the app’s HTTP endpoint. This simulated about 10 users.
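My Locust configuration isn't shown here, but the same kind of load can be sketched with only the Python standard library. The snippet below spins up a stand-in HTTP server (taking the place of the demo FastAPI app, which is an assumption of this sketch) and hits it with roughly ten concurrent "users":

```python
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny test server standing in for the demo FastAPI app.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

# Simulate ~10 concurrent "users", each issuing one request.
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(hit, range(10)))

server.shutdown()
print(statuses.count(200))  # → 10
```

In practice you would point the requests at your instrumented app's endpoint instead, so each request produces a trace in SigNoz.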

Navigating to the SigNoz UI (the frontend typically listens on port 3301 in the Docker setup, while the collector listens on 4317/4318), the dashboard immediately showed my “FastApp” service. Clicking on it revealed:

  • Latency, request rate, and error rate graphs.
  • A list of endpoints called.

Drilling down into the traces, I could see individual requests. For this simple “Hello World” app, the trace was trivial, just showing the HTTP request. However, if the application were more complex, accessing a database, for example, OpenTelemetry could trace those interactions too, showing you the queries and time taken. This is where it gets really interesting for debugging and performance analysis.
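To make the parent/child span idea concrete, here is a toy, stdlib-only sketch of the trace shape SigNoz visualizes: an HTTP request span with a nested database query span. This is purely illustrative and is not the OpenTelemetry SDK:

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    start: float = 0.0
    end: float = 0.0
    children: list = field(default_factory=list)

    @property
    def duration_ms(self) -> float:
        return (self.end - self.start) * 1000

class Tracer:
    def __init__(self):
        self.stack, self.roots = [], []

    @contextmanager
    def span(self, name):
        # Attach to the currently open span, or start a new root trace.
        s = Span(name, start=time.monotonic())
        (self.stack[-1].children if self.stack else self.roots).append(s)
        self.stack.append(s)
        try:
            yield s
        finally:
            s.end = time.monotonic()
            self.stack.pop()

tracer = Tracer()
with tracer.span("GET /items"):               # parent: the HTTP request
    with tracer.span("SELECT * FROM items"):  # child: the DB query
        time.sleep(0.01)                      # simulate query time

root = tracer.roots[0]
print(f"{root.name} -> {root.children[0].name}")
```

A real trace carries much more (trace/span IDs, attributes, status codes), but the nesting and per-span timings are exactly what the SigNoz flame-graph view renders.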

The SigNoz interface felt responsive and well-designed. I was quite impressed with how smoothly it all worked.

Final Thoughts and What’s Next

I have to say, SigNoz seems like a very capable and well-put-together tool. It’s definitely worth trying out, especially if you’re looking for an open-source observability solution.

I plan to test it further with a more complex application, perhaps one involving a database, to see how it handles more intricate call graphs and to really gauge if it can be a strong contender against established players for more demanding scenarios.

It’s also interesting to note that Zabbix has APM features on its roadmap, potentially for version 8. So, the landscape is always evolving! But for now, SigNoz is a noteworthy project, especially for those interested in comprehensive observability that includes metrics, traces, AND logs in one package. This log management capability could make it a simpler alternative to setting up a separate, more complex logging stack for many use cases, particularly in home labs or smaller environments.

So, what do you think? Have you tried SigNoz or other APM tools? Let me know in the comments below! If there’s interest, I can certainly make more videos exploring its features or trying out more complex scenarios.

Thanks for watching, and I’ll see you next week. A greeting from me, Dimitri!

Stay Connected with Quadrata:

📺 Subscribe to Quadrata on YouTube

💬 Join the Zabbix Italia Telegram Channel (Also great for general monitoring discussions!)


Red Hat Enterprise Linux 10 is Here: A Look at the New Release and Its Impact on Clones

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open source and IT. This week, we’re shifting focus from our usual Zabbix discussions to something that’s recently made waves in the Linux world: the release of Red Hat Enterprise Linux 10 (RHEL 10)!

The arrival of a new RHEL version is always significant, especially for the enterprise sector. So, let’s dive into what RHEL 10 brings to the table and, crucially, how its popular clones – AlmaLinux, Rocky Linux, and Oracle Linux – are adapting to this new landscape. If you find this content useful, don’t forget to put a like and subscribe to the channel if you haven’t already!

What’s New in Red Hat Enterprise Linux 10?

Red Hat Enterprise Linux is renowned as the most important distribution in the enterprise world, where companies pay for official support and long-term stability. RHEL versions are designed to be stable over time, offer excellent hardware and software support, and typically come with at least 10 years of support. This makes it ideal for server and enterprise environments, rather than your everyday desktop.

With RHEL 10, Red Hat continues to build on this foundation while embracing new IT trends. Here are some of the key highlights:

Key Highlights of RHEL 10:

  • Kernel 6.12: RHEL 10 ships with kernel 6.12. As usual for its enterprise releases, Red Hat prioritizes stability: it selects a mature kernel and then backports bug fixes and selected feature advancements from newer kernels to that stable base.
  • Image Mode with BootC: This is an interesting development that transforms the standard operating system concept towards an immutable distro model. Instead of traditional package management like RPM for updates, you create an OS image with everything needed. Updates involve deploying a new image, allowing for easy rollback to a previous version upon reboot. This simplifies OS updates significantly in certain contexts.
  • Wayland by Default: RHEL 10 moves decisively to Wayland, with Xorg support removed, at least as the default desktop session.
  • RDP for Remote Access: For remote access to RHEL 10 workstations, the system has transitioned from VNC to RDP (Remote Desktop Protocol), the well-known protocol from the Windows world, aiming for a more standard and efficient solution.
  • Architecture Upgrade to x86-64-v3: RHEL 10 raises its baseline micro-architecture level from x86-64-v2 to x86-64-v3. This means it takes advantage of newer instruction sets in modern AMD and Intel processors but, importantly, drops support for older CPUs that don’t meet the v3 specification. If you’re running older hardware, this is a critical change.
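Whether a CPU meets a given micro-architecture level boils down to a feature-flag check. The sketch below tests a representative subset of the features x86-64-v3 requires (AVX/AVX2, BMI1/2, FMA, and friends); the authoritative required set is defined in the x86-64 psABI, so treat this flag list as illustrative:

```python
# Representative subset of the CPU features the x86-64-v3 level requires;
# see the x86-64 psABI for the complete, authoritative list.
V3_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "lzcnt", "movbe", "xsave"}

def supports_v3(cpu_flags: set) -> bool:
    # v3 is satisfied only if every required feature is present.
    return V3_FLAGS <= cpu_flags

# Example flag sets, as they might appear for a modern vs. an older CPU.
modern = {"fpu", "sse", "sse2", "avx", "avx2", "bmi1", "bmi2",
          "f16c", "fma", "lzcnt", "movbe", "xsave"}
old = {"fpu", "sse", "sse2", "sse4_2", "popcnt"}  # a v2-era CPU

print(supports_v3(modern), supports_v3(old))  # → True False
```

On a Linux box you would feed this the flags line from /proc/cpuinfo (flag spellings there can differ slightly from the psABI names, so cross-check before relying on it).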

The RHEL Clones: Navigating the Landscape After CentOS

As many of you know, Red Hat’s decision to shift CentOS from a direct RHEL rebuild to CentOS Stream (a rolling-release version that’s upstream of RHEL) led to the rise of several community-driven distributions aiming to fill the gap. The most prominent among these are AlmaLinux, Rocky Linux, and Oracle Linux, each striving to provide an enterprise-grade, RHEL-compatible experience.

With RHEL 10 out, these clones are now releasing their own version 10s. Let’s see how they’re approaching it.

AlmaLinux 10 “Purple Lion” – The Swift Innovator

AlmaLinux was quick to respond, releasing its version 10 (codename Purple Lion) just about a week after RHEL 10’s announcement. AlmaLinux is community-driven and has made some distinct choices:

  • Shift in Philosophy: While previously focused on bug-for-bug compatibility, AlmaLinux 10 aims for Application Binary Interface (ABI) compatibility with RHEL. This ensures applications compiled for RHEL will run on AlmaLinux, while allowing AlmaLinux to introduce its own improvements rather than remain 100% identical to RHEL.
  • Key Differentiators:

    • Kernel with Frame Pointers: AlmaLinux 10 tunes its kernel by enabling Frame Pointers. This can simplify debugging and profiling of the OS and applications, though it might introduce a slight performance overhead (around 1-2%).
    • Broader Hardware Support (x86-64-v2 and v3): Unlike RHEL 10’s default, AlmaLinux 10 provides options for both x86-64-v2 and x86-64-v3 architectures, offering kernels for older CPUs.
    • Continued SPICE Support: They continue to support the SPICE protocol for remote access to virtual machines, which was dropped in RHEL 9 and 10.
    • Additional Device Drivers: AlmaLinux 10 includes over 150 device drivers that Red Hat has dropped in its current version.
    • EPEL for v2: AlmaLinux is taking on the task of compiling EPEL (Extra Packages for Enterprise Linux) packages for the x86-64-v2 architecture, given their continued support for it.

  • Release Strategy: Aims to release major and minor versions as quickly as possible, gathering sources from various places.

Rocky Linux 10 – The Staunch Loyalist

Rocky Linux is known for its purist approach, aiming for 100% bug-for-bug compatibility with RHEL.

  • The Purist’s Choice: If you want a distribution that is as close to RHEL as possible, Rocky Linux is your go-to. The packages are intended to be bit-for-bit identical.
  • Release Strategy: More conservative. Rocky Linux typically waits for the general availability (GA) of the RHEL version before releasing its corresponding version to ensure full compatibility. As of this writing, Rocky Linux 10 has nightly builds available but is not yet officially released.
  • Kernel: The kernel is not altered and remains the same as in RHEL 10.
  • Architecture Support: Following RHEL’s lead, Rocky Linux 10 will focus on the x86-64-v3 architecture, meaning no default support for older v2 CPUs unless they decide to provide an alternative kernel.

Oracle Linux 10 – The Enterprise Powerhouse

Oracle Linux also maintains bug-for-bug compatibility with RHEL and is a strong contender, especially in enterprise environments.

  • Enterprise Focus: Offers 100% RHEL compatibility and the backing of a major vendor, Oracle, which also provides official support options.
  • Unbreakable Enterprise Kernel (UEK): A key differentiator is the option to use Oracle’s UEK. This is an Oracle-tuned kernel designed for better performance and efficiency, particularly for enterprise workloads and, naturally, Oracle’s own database products. Users can choose between the RHEL-compatible kernel or the UEK.
  • Release Timing: Oracle Linux releases typically follow RHEL’s official upstream release, as they need the sources to compile and verify. Version 10 is not yet available.

OpenELA: A Collaborative Effort for Enterprise Linux

After Red Hat changed how it provided sources, a new initiative called OpenELA (Open Enterprise Linux Association) was formed. This non-profit collaboration involves CIQ (the company behind Rocky Linux), Oracle, and SUSE. Their primary goal is to work together to obtain RHEL sources and continue to provide free and open enterprise Linux versions based on RHEL. Notably, AlmaLinux has chosen its own path and is not part of OpenELA.

Choosing Your Champion: AlmaLinux vs. Rocky Linux vs. Oracle Linux

So, with these options, which one should you choose?

  • AlmaLinux 10: Might be your pick if you appreciate faster release cycles, need support for older hardware (x86-64-v2), value features like enabled Frame Pointers, or require specific drivers/software (like SPICE) that RHEL has dropped. You’re okay with a distribution that’s binary compatible but not strictly bug-for-bug identical to RHEL.
  • Rocky Linux 10: If your priority is unwavering stability and 100% bug-for-bug compatibility with Red Hat Enterprise Linux, and you prefer a purely community-driven approach, Rocky Linux is likely the best fit.
  • Oracle Linux 10: If you’re operating in a large enterprise, might need commercial support, or could benefit from the potentially optimized performance of the Unbreakable Enterprise Kernel (UEK) for specific workloads, Oracle Linux is a strong contender.

The speed of release is also a factor for some. AlmaLinux tends to be the fastest, while Rocky Linux and Oracle Linux are a bit more measured. However, whether a release comes out a few weeks sooner or later might not be critical for many, as long as it’s timely.

My Perspective: Why Standardization Matters in the Enterprise

Personally, I’ve decided to stabilize on Red Hat and its derivatives for enterprise environments because standardization is fundamental. When you’re working in complex systems, introducing too many variables can lead to unpredictable problems.

I encountered a situation a while ago that illustrates this. We were setting up synthetic monitoring using a Selenium Docker container. It worked perfectly in our environment. However, for our client, who was using Podman (common in RHEL environments) instead of Docker as the container engine, the same setup was incredibly slow after just a few concurrent connections – the CPU would max out, and it was a complete mess. Understanding that subtle difference was key to resolving the issue.

This is why, for certain enterprise scenarios, I lean towards RHEL-based systems. They are often more rigorously tested and standardized for such environments, which can save a lot of headaches down the line.

Conclusion

The Linux distribution landscape is ever-evolving, and the release of RHEL 10 alongside new versions from its clones is a testament to this dynamism. Each distribution we’ve discussed offers a solid RHEL-compatible experience but caters to slightly different needs and philosophies.

I hope this overview has been interesting and helps you understand the current state of play. The choice ultimately depends on your specific requirements, whether it’s cutting-edge innovation, purist compatibility, or enterprise-grade support.

What are your thoughts on RHEL 10 and its clones? Which distribution are you leaning towards, or currently using, and why? Let me know in the comments below! The battle of distributions is always a hot topic!

Thanks for reading, and if you enjoyed this, please share it and make sure you’re subscribed to Quadrata on YouTube for more content like this. You can also join the conversation on our ZabbixItalia Telegram Channel.

A big greeting from me, Dimitri Bellini. Bye everyone, and I’ll see you next week!

