NetBox and Zabbix: Creating the Ultimate Source of Truth for Your IT Infrastructure

Good morning everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the world of open source and all the IT that I, and hopefully you, find fascinating.

This week, we’re diving into a powerful solution that addresses a common and persistent challenge in IT management: the lack of a single, reliable source of information. In large infrastructures, it’s often difficult to know what objects exist, where they are, what their relationships are, and how to even begin monitoring them. Many turn to a Configuration Management Database (CMDB), but keeping it manually updated is a struggle. What if we could automate this process?

That’s where a fantastic open-source project called NetBox comes in. And thanks to our friends at Open Source ICT Solutions, there’s a brilliant integration that connects it directly to Zabbix. Let’s explore how to build a true “source of truth” for our network.

What is NetBox and Why Do You Need It?

For those who may not know it, NetBox is a well-established and solid open-source tool designed to be the central repository for your entire IT environment. It’s more than just a spreadsheet; it’s a structured database for everything from your network devices to your data center cabling. It’s designed to be the single source of truth.

NetBox helps you model and document your infrastructure with incredible detail. Its core functionalities include:

  • IP Address Management (IPAM): A robust module for managing all your IP spaces, prefixes, and addresses.
  • Data Center Infrastructure Management (DCIM): Model your physical infrastructure, including data center layouts, rack elevations, and the exact placement of devices within them.
  • Cabling and Connections: Document and visualize every cable connection between your devices, allowing you to trace the entire path of a circuit.
  • Automation and Integration: With a powerful REST API and support for custom scripts, NetBox is built for automation, allowing you to streamline processes and integrate with other tools—like Zabbix!

While maintaining this level of documentation might seem daunting, the benefits are immense, especially when you can automate parts of the workflow.

The NetBox-Zabbix Integration: How It Works

The concept behind this integration is simple yet crucial to understand. The flow of information is one-way: from NetBox to Zabbix.

This is fundamental. NetBox acts as the source of truth, the master record. When you add or update a device in NetBox, the plugin provisions that device in Zabbix. It creates the host, assigns templates, sets up interfaces, and applies tags. It is not Zabbix sending information back to NetBox. This ensures your documentation remains the authoritative source.

I got all my inspiration for this setup from a fantastic blog post on Zabbix.com by the team at Open Source ICT Solutions. They created this plugin and provided an excellent guide, so a huge thank you to them!

Getting Started: A Step-by-Step Guide

For my setup, I wanted to keep things simple, so I used an unofficial Docker repository designed for NetBox plugin development. It makes getting up and running much faster.

Setting Up the Environment

Here are the commands I used to get the environment ready:

# 1. Clone the unofficial Docker repo for NetBox plugin development
> git clone https://github.com/dkraklan/netbox-plugin-development-env.git
> cd netbox-plugin-development-env

# 2. Clone the nbxsync plugin from OpensourceICTSolutions into the plugins directory
> cd plugins/
> git clone https://github.com/OpensourceICTSolutions/nbxsync.git
> cd ..

# 3. Add the plugin to the NetBox configuration
> vi configuration/configuration.py

# Add 'nbxsync' to the PLUGINS list:
PLUGINS = [
    'nbxsync',
]

# 4. Build and launch the Docker containers
> docker-compose build
> docker-compose up -d

And just like that, you should have a running NetBox instance with the Zabbix plugin installed!
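
Before moving on, a quick sanity check doesn't hurt. The commands below only assume a standard docker-compose setup; the exact service names depend on the compose file in that repository:

# Confirm all containers came up
docker-compose ps

# If something looks wrong, inspect the recent logs (add a service name to narrow it down)
docker-compose logs --tail=50

If the frontend container is healthy, the new "Zabbix" menu described in the next section should appear in the NetBox UI.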

Configuring the Zabbix Plugin in NetBox

Once NetBox is up, the configuration is straightforward.

  1. Add Your Zabbix Server: In the NetBox UI, you’ll see a new “Zabbix” menu at the bottom. Navigate there and add your Zabbix server. You’ll need to provide a name, the server URL (just the base URL, e.g., http://zabbix.example.com), and an API token from a Zabbix user with sufficient permissions. A quick way to sanity-check that token is sketched right after this list.
  2. Sync Templates: After adding the server, you can click “Sync Templates.” The plugin will connect to your Zabbix instance and pull in all available templates, proxies, macros, and host groups. This is incredibly useful for later steps.
  3. Define Your Infrastructure: Before adding a device, you need to define some core components in NetBox. This is standard NetBox procedure:

    • Create a Site (e.g., your main office or data center).
    • Define a Manufacturer (e.g., Cisco).
    • Create a Device Role (e.g., Core Switch).
    • Create a Device Type, which is a specific model (e.g., Cisco CBS350-24P). Here, you can go to the “Zabbix” tab and pre-assign a default Zabbix template for this device type, which is a huge time-saver!
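
Before pasting the API token into NetBox, it can be worth confirming that it actually works against the Zabbix API. Here is a minimal, hedged check assuming Zabbix 6.4 or newer (which accepts the token in an Authorization header) and the example URL from step 1; replace YOUR_API_TOKEN with the token you generated:

# Check the API endpoint is reachable (no authentication required)
curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json-rpc' \
  -d '{"jsonrpc":"2.0","method":"apiinfo.version","params":[],"id":1}'

# Check the token can read templates (what "Sync Templates" will need)
curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json-rpc' \
  -H 'Authorization: Bearer YOUR_API_TOKEN' \
  -d '{"jsonrpc":"2.0","method":"template.get","params":{"output":["host"],"limit":1},"id":2}'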

Provisioning a New Device to Zabbix

Now for the magic. Let’s add a new switch and watch it appear in Zabbix.

  1. Create the Device: Create a new device in NetBox, assigning the Site, Device Role, and Device Type you created earlier.
  2. Add an IP Address: Go to the IPAM section and create an IP address that you will assign to the switch’s management interface.
  3. Configure the Zabbix Interface: Navigate back to your newly created device and click on the “Zabbix” tab.

    • Add a Host Interface. Select your Zabbix server, the interface type (e.g., SNMP), and assign the IP address you just created.
    • Add a Host Group. Assign the Zabbix host group where you want this device to appear.
    • Add any Tags you want. I created a “netbox” tag for easy identification.

  4. Sync to Zabbix: With all the information in place, simply click the “Sync to Zabbix” button. A background job will be queued.

If you switch over to your Zabbix frontend, you’ll see the new host has been created automatically, complete with the correct IP address, assigned to the right host group, linked to the correct template, and with the tags we defined. It’s that simple!
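
If you prefer to double-check outside the UI, the same verification can be done against the Zabbix API by filtering on the tag we just assigned; this reuses the example URL and token placeholder from the configuration section:

# List hosts carrying the "netbox" tag applied by the plugin
curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json-rpc' \
  -H 'Authorization: Bearer YOUR_API_TOKEN' \
  -d '{"jsonrpc":"2.0","method":"host.get","params":{"output":["host","status"],"selectTags":"extend","tags":[{"tag":"netbox"}]},"id":1}'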

Even better, the integration also pulls some data back for viewing. In the “Zabbix Operation” tab within NetBox, you can see the latest problems for that specific device directly from Zabbix, giving you a unified view without leaving the NetBox interface.

Final Thoughts

I have to say, this is a truly impressive product. Of course, it requires a disciplined workflow to maintain the data in NetBox, but the payoff in consistency, automation, and having a single, reliable source of truth is enormous. From one dashboard, you can control your entire documented infrastructure and its monitoring configuration.

This project is actively developed, and the community is already making requests on GitHub. If you have ideas for new features or find any bugs, I encourage you to contribute. This is a tool that can be incredibly useful for anyone managing a Zabbix instance, from a small lab to a large server farm.

That’s all for today! I hope you found this overview interesting. It’s a powerful combination that can really level up your infrastructure management game.

What do you think? Have you used NetBox before? Let me know your thoughts on this integration in the comments below. As always, if you enjoyed the video and this post, please give it a thumbs up and subscribe to the channel if you haven’t already. See you next week!


Follow my work and join the community:

Back from Zabbix Summit: An Exclusive Look at the Future with Zabbix 8.0 LTS

Good morning everyone, Dimitri Bellini here! It’s great to be back with you on Quadrata, my channel dedicated to the world of open source and IT. It’s been a little while since my last video, as I’ve been quite busy traveling between the Zabbix Summit in Riga and Gitex in Dubai. The energy at the Summit was incredible, and I returned not just with great memories but with a clear vision of the future of monitoring. And today, I want to share that vision with you.

So, grab a coffee, and let’s dive into a recap of the fantastic Zabbix Summit and, more importantly, the groundbreaking features coming in the next major release: Zabbix 8.0 LTS.

The Zabbix Summit Experience: A Global Community United

This year’s Zabbix Summit in Riga, Latvia, was special. Not only did it mark the 20th anniversary of Zabbix, but it brought together a massive, passionate community from all corners of the globe—from Japan to South America to the Middle East. It’s always amazing to connect with so many people, share use cases, and discuss what we all love about Zabbix.

We had the chance to tour the new Zabbix headquarters—a beautiful, modern space with floors dedicated to development, support, and commercial teams. I even had the pleasure of presenting a use case with my colleague Francesco, which was a fantastic experience. But the main event, the moment we were all waiting for, was Alexei Vladishev’s keynote on what’s next for Zabbix.

The Main Event: What’s Coming in Zabbix 8.0 LTS

Zabbix 8.0 is the next Long-Term Support (LTS) release, which means it’s designed for enterprise environments that need stability and support for up to five years. This release isn’t just an incremental update; it’s a monumental leap forward, addressing many of the current limitations and pushing Zabbix firmly into the realm of a full-fledged observability platform.

Let’s break down the most exciting pillars of this upcoming release.

Revolutionizing Event Management with Complex Event Processing (CEP)

One of the biggest game-changers is the introduction of a native Complex Event Processing (CEP) engine. This is designed to drastically reduce monitoring noise and help us focus on the root cause of issues, not just the symptoms. The CEP will operate on two levels:

  • Single Event Processing: This allows for fine-grained control over individual events as they come in. We’ll be able to perform operations like filtering irrelevant events, normalizing data to a standard format, manipulating tags and severity on the fly, and even anonymizing sensitive information.
  • Multiple Event Processing: This is where the real magic happens. By analyzing events over specific time windows, Zabbix will be able to perform advanced correlation, deduplicate redundant alerts, and distinguish between a root cause and its resulting symptoms.

Best of all, we’ll be able to implement custom logic using JavaScript. Imagine enriching an incoming event with data from your CMDB before it even becomes a problem in Zabbix. The possibilities are endless!

Embracing Observability: APM and OpenTelemetry Integration

Zabbix is officially stepping into the world of Application Performance Monitoring (APM). To do this, it’s fundamentally changing how it handles data. Instead of simple time-series data, Zabbix 8.0 will be built to handle complex, structured JSON data natively.

This architectural shift opens the door for seamless integration with modern observability standards like OpenTelemetry. We will finally be able to ingest traces, logs, and metrics from applications directly into Zabbix and visualize them. During the presentation, we saw a mockup of a request trace, broken down step-by-step, allowing for deep root cause analysis right within the Zabbix UI. This is a massive step forward.

A New Engine for Unprecedented Scale

With all this new data, how will Zabbix scale? While standard databases like PostgreSQL and MySQL will still be supported for smaller setups, the focus for large-scale deployments is shifting to high-performance backends. The star of the show here is ClickHouse.

The new architecture will offload the ingestion process to Zabbix Proxies, which will write data directly to ClickHouse. The Zabbix Server will then query this data for visualization and processing. This design allows Zabbix to handle millions of values per second, making it suitable for even the most demanding environments.

A Fresh Face and Enhanced Usability

Let’s be honest, the Zabbix UI, while functional, could use a modern touch. The Zabbix team knows this, and a complete UI overhaul is planned for 8.0! We saw mockups of a cleaner, fresher, and more intuitive interface.

But perhaps one of the most requested features of all time is finally coming: customizable table views. In the “Problems” view and other tables, you will be able to show, hide, reorder, and sort columns as you see fit. It might seem like a small change, but it’s a huge quality-of-life improvement that we’ve been waiting for.

Monitoring on the Go: The Official Zabbix Mobile App

Finally, Zabbix is developing an official mobile application! This will bring essential monitoring capabilities right to your phone, including:

  • Push notifications for alerts.
  • Problem management and collaboration tools.
  • Aggregated views from multiple Zabbix servers.
  • Integration with both on-premise and Zabbix Cloud instances.

A Glimpse into the Future

Zabbix 8.0 LTS is shaping up to be the most significant release in the product’s 20-year history. It’s evolving from a best-in-class monitoring tool into a comprehensive observability platform ready to meet the challenges of modern IT infrastructures. The expected release date is around mid-2026, and I, for one, cannot wait.

I’ll be keeping a close eye on the public roadmap and will keep you updated as these features move through development. But now, I want to hear from you!

What feature are you most excited about? Is there something else you’d love to see in Zabbix? Let me know in the comments below!

That’s all for today. Thanks for joining me, and I’ll see you in the next video. Bye everyone!


Stay Connected:

Unleashing Your Inner Artist: A Deep Dive into AI Image Generation with ComfyUI

Good morning, everyone! Dimitri Bellini here, back on my channel, Quadrata, where we explore the fascinating world of open source and IT. If you’ve been following along, you know we’ve spent a lot of time diving into Large Language Models (LLMs) for tasks like summarizing topics and answering questions. But today, we’re venturing into a different, more visual side of artificial intelligence: the creation of images.

We’re going to explore a powerful, node-based graphical interface called ComfyUI. This open-source tool allows you to build sophisticated workflows for generating AI images using Stable Diffusion models. Forget complex code; we’re talking about a visual playground for your creativity.

LLMs vs. Stable Diffusion: Understanding the AI Playground

Before we jump into ComfyUI, it’s crucial to understand the two different families of AI models we’re dealing with. They might both fall under the “AI” umbrella, but they function in fundamentally different ways.

Large Language Models (LLMs)

Think of models like GPT-4, Google Gemini, or Llama. Their world is text.

  • Purpose: To generate human-like text, answer questions, translate languages, and even write code.
  • How it works: At its core, an LLM is a master of prediction. It analyzes a sequence of words and predicts the most statistically probable next word to continue the sentence or paragraph. We can think of it as a super-intelligent person who excels at writing and conversation.
  • Tools: We often use engines like Ollama to run these models locally.

Stable Diffusion Models

This category is all about visuals. Models like Stable Diffusion 1.5 or the powerful Flux.1 are designed to be digital artists.

  • Purpose: To create complex, detailed images based on text descriptions (prompts).
  • How it works: The process is fascinating. It starts with a canvas of pure random noise—like the static on an old TV. Then, guided by your text prompt, the model gradually removes the noise (a process called denoising diffusion), adding details step-by-step until a coherent image emerges. It’s like an artist taking our instructions and sculpting a masterpiece from a block of marble.
  • Tools: This is where ComfyUI shines, providing the framework to control this artistic process.

Introducing ComfyUI: Your Visual Gateway to AI Art

So, why do we need a tool like ComfyUI? Because creating the perfect image isn’t always straightforward. ComfyUI provides a graphical interface that transforms the complex process of AI image generation into a manageable, visual workflow.

Why a Node-Based Interface?

Instead of writing lines of code, you connect different functional blocks, or “nodes,” together. Each node performs a specific task—loading a model, defining a prompt, sampling the image, upscaling the result, etc. You connect the output of one node to the input of another, creating a visual pipeline. This modular approach gives you incredible flexibility and granular control over every single step of the image generation process.

My Setup: Docker and NVIDIA Power

To keep things clean and avoid dependency headaches with Python versions, I prefer to run everything in a Docker container. For this demonstration, I’m using a fantastic community-built Docker image for ComfyUI (I’ll leave the link in my YouTube video description!). The heavy lifting is handled by my NVIDIA RTX 8000 GPU, which is essential for getting results in a reasonable amount of time.
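
For reference, launching such an image generally boils down to a docker run of this shape. The image name below is a pure placeholder (use the one linked in the video description), and the container paths for models and outputs depend on the image you pick; the GPU flag and ComfyUI's default port 8188 are the parts that usually stay the same:

# Placeholder image and container paths; requires the NVIDIA Container Toolkit for --gpus
docker run -d --name comfyui \
  --gpus all \
  -p 8188:8188 \
  -v $(pwd)/models:/path/to/models-in-container \
  -v $(pwd)/output:/path/to/output-in-container \
  your-chosen/comfyui-image:latest

Once it is up, the web interface is reachable on http://localhost:8188.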

A Practical Tour: 3 Amazing Things You Can Do with ComfyUI

Talk is cheap, so let’s dive into some practical examples to see what ComfyUI is capable of. I’ve set up a few different workflows to showcase its power.

1. Breathing Life into Old Photos: Upscaling with AI

First up, let’s tackle a common problem: low-resolution images. I took a tiny photo, just 300×345 pixels. By running it through an upscaling workflow in ComfyUI, I was able to increase its size by four times while adding incredible detail. When you zoom in on the original, it’s a blurry mess. But the upscaled version is sharp and clear. The AI didn’t just enlarge the pixels; it intelligently interpreted the image to add detail that wasn’t there before. It’s not perfect, as a better model would yield even cleaner results, but the difference is still night and day.

2. From Black and White to Vibrant Color: AI Colorization

Next, I took a classic black-and-white still from the historic film Metropolis. The image is iconic but lacks the vibrancy of color. Using a specific colorization model, ComfyUI analyzed the image and made an educated guess about the original colors. The result is a beautifully colored image that brings a whole new dimension to the scene. This is an amazing tool for restoring and reimagining historical photos and videos.

3. Text to Reality: Generating Images from Scratch

This is the most common use case and where the magic really happens. I used the Flux.1 Schnell model, an open-source powerhouse, to generate an image from a simple text prompt: “a computer technician with his penguin next to him in a server room.”

Watching the process is captivating. ComfyUI’s interface shows you which node is currently working, and you can see your system’s resource usage spike. My GPU hit 100%, and VRAM usage climbed to nearly 40 GB! After a few moments, the result appeared: a stunningly detailed, high-quality image of a technician and his penguin companion. Just a year ago, achieving this level of quality with open-source models at home was almost impossible. Today, it’s a reality.

Final Thoughts and Your Turn to Create

ComfyUI is an incredibly rich and powerful tool that puts professional-grade AI image generation into your hands. I’ll be honest—there’s a learning curve. The sheer number of nodes and settings can be intimidating at first. But the ability to build, customize, and share workflows makes it one of the most versatile platforms out there.

With a solution that is completely open source, you can have your own AI art studio running directly at home. I highly encourage you to give it a try. Play around with it, download different models, and see what you can create!

On a final note, I’ll be heading to the Zabbix Summit in Riga next week, so I might not be able to post a full video. However, I’m excited to discover the new features coming in Zabbix 8.0 and will be sure to share the highlights with you all!

What do you think? Have you tried ComfyUI or other Stable Diffusion tools? What kind of images would you like to create? Let me know in the comments below! Your feedback helps shape future content.

A big greeting from me, Dimitri, and see you next week!


Connect with me and the community:

Kestra 1.0: The Open-Source Orchestrator Embraces the AI Revolution

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the world of open source and IT. If you’re a regular viewer, you know how much I love exploring powerful, community-driven tools. And if you’re new, please consider subscribing to join our growing community!

This week, we’re revisiting a product I’m incredibly excited about: Kestra. I covered it before, but now it has hit a major milestone with its 1.0 release, and the new features are too good not to share. Kestra has officially reached maturity, evolving into a tool that’s not just powerful but also incredibly intelligent. Let’s dive in!

What is Kestra? A Quick Refresher

For those who might be new to it, Kestra is an open-source, self-hosted solution for orchestrating and automating complex processes. Think of it as the central nervous system for your IT operations. It solves a problem we’ve all faced: managing countless scripts written in different languages, scattered across various machines. After a few months, it becomes a nightmare to remember where everything is, how it works, and who has access to it.

Kestra brings order to this chaos by providing:

  • A centralized platform to manage all your automation workflows.
  • A declarative language (YAML) to define tasks, making them easy to version control with tools like Git.
  • Flexible task management, allowing you to run jobs sequentially, in parallel, or based on dependencies.
  • A massive library of pre-built plugins for seamless integration with databases, cloud platforms, notification systems, and more.
  • An event-driven architecture that can be triggered manually, via API calls (webhooks), or on a schedule, just like a crontab.

Essentially, it’s a language-agnostic powerhouse that allows different teams—whether they prefer Bash, Python, or Perl—to collaborate on a single, intuitive platform.

The Game-Changers in Kestra 1.0

The 1.0 release isn’t just an update; it’s an evolution. Kestra is boldly stepping into the “agentic world,” integrating AI in ways that genuinely enhance its capabilities.

Stepping into the Agentic World with AI

The headline feature is, of course, AI. Kestra 1.0 introduces an AI Copilot designed to help you generate the YAML code for your tasks. While I found it to be a bit hit-or-miss in its current state (it uses a simple version of Gemini), the concept is promising. For more reliable results, I actually recommend using the “Ask Kestra AI” feature on their official documentation website—it works much better!

What’s truly exciting is Kestra’s ability to be controlled by AI agents and, in turn, use agents to perform tasks. This opens up a world of possibilities for creating dynamic, intelligent automation that can adapt and respond to complex triggers. You can even integrate with self-hosted models using the Ollama plugin, keeping your entire stack private and self-sufficient.

Developer Experience and Usability Boosts

Beyond AI, version 1.0 brings several quality-of-life improvements:

  • Playground: You can now test individual tasks or small segments of your workflow without having to run the entire thing. This is a massive time-saver during development and debugging.
  • Flow Level SLA: For more business-oriented needs, you can now define and monitor Service Level Agreements (SLAs) for your flows. If a task that should take an hour is running longer, Kestra can alert you.
  • Plugin Versioning: In complex enterprise environments, you can now pin specific versions of plugins to a workflow, ensuring stability and preventing unexpected breakages from updates.
  • No-Code Editor for Apps: This is a standout feature, though currently for the Enterprise version. It allows you to create simple, interactive web UIs (Kestra Apps) for your workflows. Instead of exposing complex options, you can give users a clean form with input fields to launch a job. It’s a fantastic way to democratize your automation.

A Guided Tour of the Kestra Interface

I set up my Kestra instance easily using a simple container setup. The first thing you see is a comprehensive dashboard showing the status of all your jobs: successes, failures, and currently running tasks. It’s your mission control center.
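
For completeness, the container setup I mention is essentially Kestra's documented quick-start one-liner, give or take a flag; treat this as a sketch and check the current docs, since the exact command can change between releases:

# Standalone Kestra for testing (embedded repository, not a production setup)
docker run --pull=always --rm -it \
  -p 8080:8080 --user=root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp:/tmp \
  kestra/kestra:latest server local

The dashboard is then available on http://localhost:8080.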

Crafting Your First Flow: Code, No-Code, and AI Assistance

Workflows in Kestra are organized into Namespaces (think of them as projects), and each workflow is called a Flow. When you edit a flow, you’re presented with a powerful interface.

On one side, you have the YAML editor where you define your tasks. But here’s the magic: as you work, a documentation panel appears right next to your code, providing examples, properties, and definitions for the specific task type you’re using. No more switching tabs to look up syntax!

And if you’re not a fan of YAML, Kestra 1.0 introduces a fantastic no-code wizard. This form-based interface guides you through creating each step of your workflow, simplifying the process immensely. You can build complex automation without writing a single line of code, and the YAML is generated for you in the background.

Monitoring and Control

Once your flow is running, Kestra provides incredible visibility:

  • Topology View: A visual graph of your workflow, showing how tasks connect and the real-time progress of an execution.
  • Revisions: Kestra automatically versions every change you make to a flow. If something breaks, you can easily compare versions and restore a previous working state.
  • Logs: A powerful, searchable log interface (similar to Elasticsearch) lets you drill down to find exactly what happened during an execution.
  • Metrics: You can expose metrics from your flows to monitoring tools like Zabbix or Prometheus to track performance over time.

My Final Thoughts

Kestra 1.0 is a truly impressive release. It has matured from a powerful orchestrator into an intelligent automation platform that is both developer-friendly and accessible to those who prefer a no-code approach. The focus on AI, combined with major usability enhancements, makes it a compelling choice for anyone looking to streamline their IT processes.

As it’s open-source, you can try it out at home without any cost. I’m personally considering using it to automate parts of my video creation workflow! It’s that versatile.

I highly encourage you to give it a try. Explore the official documentation, check out the pre-made “Blueprints” to get started quickly, and see how it can simplify your work.


What do you think of Kestra 1.0? Are there other automation tools you love? Let me know in the comments below—your opinion is incredibly valuable! If you found this overview helpful, please give the video a thumbs up and subscribe for more content on open-source technology.

See you next week!

– Dimitri Bellini

Stay Connected:

Pangolin VPN: Secure Your Internal Services with Zero Open Ports

Good morning and welcome, everyone! I’m Dimitri Bellini, and you’re here again with me on Quadrata, my channel dedicated to the world of open source and IT. This week, we’re diving into something new and exciting: a truly noteworthy tool that can help you in very specific situations.

We’re going to talk about Pangolin VPN, and its promise is right in the name: “Zero Open Ports.” While the concept of a secure tunnel isn’t new, Pangolin offers a unique, simplified approach. It’s an open-source, self-hosted solution that lets you create a reverse tunnel to your internal servers, all managed through a centralized, user-friendly platform. Let’s explore what makes it so special.

What is Pangolin VPN?

At its core, Pangolin is an open-source solution that allows you to install a complete secure access platform on your own machines. It’s built on top of WireGuard, but it’s not a classic VPN. Instead of manually configuring clients and punching holes in your firewalls, Pangolin centralizes everything. It acts as a secure gateway, protecting your internal web services and applications from direct exposure to the internet.

You essentially need two things to start:

  1. A machine with a public IP address (like a cheap VPS) to act as the central concentrator.
  2. A domain name to point to that machine.

From there, Pangolin handles the rest, creating a secure, elegant bridge to your private network without you having to mess with complex NAT or firewall rules.

Key Features That Make Pangolin Stand Out

Pangolin simplifies secure access by bundling several powerful features into one platform. Here are the most important ones:

  • Enhanced Security with Zero Exposure: This is the headline feature. You don’t expose any ports for your internal services (like Zabbix, Proxmox, or a custom web app) to the public internet. Everything is hidden behind the Pangolin platform and accessed securely over HTTPS.
  • Centralized Authentication and Permissions: Pangolin provides a robust system for managing user access. You can use simple password authentication, enable two-factor authentication (2FA), or integrate with an external Identity Provider (IDP) for Single Sign-On (SSO) with services like Google, Azure, and more.
  • Role-Based Access Control (RBAC): You have granular control over who can see what. Based on user roles, which can be pulled directly from your IDP, you can define policies to ensure users only have access to the specific applications they need.
  • Simplified Networking: Forget about complex firewall configurations. You simply install a lightweight agent on a machine inside your network, and it establishes a secure outbound connection to your public Pangolin server. That’s it.
  • Clientless Access for Users: For accessing web-based applications, your users don’t need to install any client software. All they need is a web browser. Pangolin acts as a reverse proxy, authenticates the user, and seamlessly connects them to the internal resource.
  • Full Control and Privacy: Since you host it yourself, you have complete control over your data and infrastructure. No third-party dependencies or data passing through external services.

How It Works: The Architecture

Pangolin is a suite of open-source tools working in harmony. The entire platform is packaged with Docker, making deployment a breeze. Here are the core components:

  • Pangolin: The central management console where you configure sites, resources, users, and policies.
  • Gerbil: A WireGuard management server developed by the Pangolin team to handle the underlying secure connections.
  • Traefik: A modern and powerful reverse proxy that handles incoming requests and routes them to the correct internal service.
  • Newt: A user-space WireGuard client. This is the agent you install on your internal network. The beauty of Newt is that it doesn’t require root privileges or kernel modules, and it runs on Linux, Windows, macOS, and more.

The workflow is simple: a user accesses a specific URL in their browser. The request hits your public Pangolin server, which uses Traefik to handle it. Pangolin checks the user’s authentication and permissions. If authorized, it routes the request through the secure WireGuard tunnel established by the Newt client to the correct service on your private network.

Getting Started: A Quick Installation Guide

Installing Pangolin is surprisingly straightforward. Here’s what you’ll need first.

Prerequisites

  • A host with Docker or Podman installed and a public IP address.
  • A domain name (e.g., yourdomain.com).
  • DNS records configured to point your domain and a wildcard subdomain (e.g., *.yourdomain.com) to your public host’s IP.
  • An email address for Let’s Encrypt SSL certificate generation.
  • The following ports open on your public host’s firewall: TCP 80, TCP 443, and the necessary UDP ports for WireGuard.

Installation Steps

The installation is handled by a simple script. Just run these commands on your public server:

curl -fsSL https://digpangolin.com/get-installer.sh | bash
sudo bash ./install.sh

The script will ask you a few questions:

  1. Your main domain: (e.g., quadrata.dev)
  2. The subdomain for the Pangolin service: It will suggest one (e.g., pg.quadrata.dev).
  3. Your email for Let’s Encrypt.
  4. Whether to use Gerbil to manage connections (say yes).
  5. A few other simple questions about email notifications and IPv6.

Once you answer, it will pull the necessary Docker containers and set everything up. At the end of the process, it will give you a registration token. Use this token to create your first admin user and password.

Configuring Your First Services

Once you’re logged into the Pangolin dashboard, the process is logical.

1. Create a “Site”

A “Site” in Pangolin represents your internal network. You’ll give it a name, and Pangolin will provide you with the command to deploy the Newt client agent inside that network. You can easily copy the docker run or Docker Compose configuration and deploy it on a machine within your LAN (I used my container management tool, Komodo, for this). Once the agent is running, it will connect to your Pangolin server, and the site will show as active.
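
For illustration only, the generated command has roughly this shape; the image name, variable names, and values below are assumptions on my part, because the dashboard produces the exact command pre-filled with your site's ID and secret, and that is the one you should actually run:

# Illustrative sketch: copy the real command from the Pangolin dashboard instead
docker run -d --name newt \
  -e PANGOLIN_ENDPOINT=https://pg.quadrata.dev \
  -e NEWT_ID=<site-id-from-dashboard> \
  -e NEWT_SECRET=<secret-from-dashboard> \
  fosrl/newt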

2. Create a “Resource”

Next, you define the services you want to expose. Click on “Add Resource” and select “HTTPS Resource.”

  • Give it a name (e.g., “Ollama”). This will also become its subdomain (e.g., ollama.pg.quadrata.dev).
  • Select the “Site” you just created.
  • Enter the internal IP address and port of the service (e.g., 192.168.1.50:3000).

3. Assign Permissions

After creating the resource, you must define who can access it. In the resource’s “Authentication” tab, you can assign it to specific roles (like “Member”) or individual users. You can also enforce SSO for that specific application. Save your changes, and you’re done!

Now, when an authorized user navigates to ollama.pg.quadrata.dev, they will be prompted to log in via Pangolin and will then be seamlessly redirected to your internal Ollama service. It’s that simple!
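
A nice way to see the “zero open ports” idea in action is to hit that URL anonymously from outside: you would expect the response to come from Pangolin (typically a redirect to its login page) rather than from the internal service itself.

# Unauthenticated request: Pangolin/Traefik should answer, not the internal app
curl -sI https://ollama.pg.quadrata.dev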

What About a Full VPN?

Pangolin has recently introduced a beta feature for a more traditional VPN experience. You can create a “Client” in the dashboard, which is similar to creating a “Site.” This provides a configuration to run the Newt client directly on your laptop. Once connected, your machine becomes part of the secure network, allowing you to access any resource (not just web services) based on the permissions you define. You can even create “Client Resources” to open specific TCP/UDP ports for SSH, RDP, or other protocols, giving you fine-grained control.

Conclusion

Pangolin VPN is a fantastic and incredibly interesting product. It’s not trying to be a replacement for every VPN use case, but it excels at simplifying secure access to self-hosted web services. The combination of zero-exposure security, centralized SSO authentication, and role-based access control makes it a powerful tool for small businesses, homelab enthusiasts, and anyone looking to share internal applications without the headache of complex network configurations.

It’s a project that simplifies life in many circumstances, and I highly recommend giving it a try. The fact that it’s open source and self-hostable gives you the ultimate control and privacy.

Have you tried Pangolin or a similar tool? Let me know your thoughts and experiences in the comments below! I’d love to hear your opinion.


For more content on open-source and IT, make sure to subscribe to my channel!

➡️ YouTube Channel: Quadrata

➡️ Join the conversation on Telegram: Zabbix Italia Community

Thanks for reading, and see you next week. A greeting from Dimitri!

Revolutionize Your Zabbix Dashboards: RME Essential Custom Widgets

Good morning and welcome, everyone! It’s Dimitri Bellini, back again on Quadrata, my channel dedicated to the open-source world and the IT that I love. It’s been a little while since we talked about our good friend Zabbix, and I’m excited to share something I stumbled upon that I think you’re going to love.

While browsing the Zabbix support portal, I came across a community member, Ryan Eberle, who has developed an incredible set of custom widgets. His GitHub repository is a goldmine of enhancements that bring a whole new level of functionality and clarity to our Zabbix dashboards. These aren’t just minor tweaks; they are game-changing improvements that address many of the limitations we’ve all faced.

So, let’s dive in and see how you can supercharge your monitoring setup!

Getting Started: How to Install These Custom Widgets

Installing these widgets is surprisingly simple. Just follow these steps, and you’ll be up and running in no time.

Important Note: These modules are designed for Zabbix 7.2 and 7.4. They leverage new functions not available in the 7.0 LTS version, so they are not backward compatible.

  1. Clone the Repository: First, head over to the developer’s GitHub repository. Find the widget you want to install (for example, the Graph widget), click on “Code,” and copy the clone URL.
  2. Download to Your Server: SSH into your Zabbix server console. In a temporary directory, use the `git clone` command to download the widget files (steps 2 and 3 are also sketched as shell commands after this list). For example:
    git clone [paste the copied URL here]
  3. Copy to the Zabbix Modules Directory: This is a crucial step. In recent Zabbix versions, the path for UI modules has changed. You need to copy the downloaded widget directory into:
    /usr/share/zabbix/ui/modules/
  4. Scan for New Modules: Go to your Zabbix frontend and navigate to Administration → General → Modules. Click the “Scan directory” button. This is a step many people forget! If you don’t do this, Zabbix won’t see the new widgets you just added.
  5. Enable the Widgets: Once the scan is complete, you will see the new modules listed, authored by Ryan Eberle. By default, they will be disabled. Simply click to enable each one you want to use.
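
Putting steps 2 and 3 together, the shell session looks roughly like this; the repository URL and the widget directory name are placeholders, since they depend on which widget you cloned:

# Work from a temporary directory
cd /tmp
git clone <URL copied from the widget's GitHub page>

# Copy the widget into the Zabbix UI modules path (note the "ui" segment)
cp -r <widget-directory> /usr/share/zabbix/ui/modules/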

A Deep Dive into the New Widget Capabilities

Now for the fun part! Let’s explore what these new widgets bring to the table. I’ve been testing the enhanced Graph, Table, and Host/Group Navigator widgets, and they are phenomenal.

The Graph Widget We’ve Always Wanted

The default vector graph in Zabbix is good, but Ryan’s version is what it should have been. It introduces features that dramatically improve usability.

  • Interactive Legend: You can now click on a metric in the legend to toggle its visibility on the graph. Want to focus on just one or two data series? Simply click to hide the others. Hold the Ctrl key to select multiple items. This is fantastic for decluttering complex graphs.
  • Sorted Tooltip/Legend: No more hunting through a messy tooltip! The legend now automatically sorts metrics, placing the ones with the highest current value at the top. When you hover over the graph, you get a clean, ordered list, making it instantly clear which metric is which.
  • Hide Zero-Value Metrics: You can configure the widget to automatically hide any metrics that have a value of zero. This cleans up the tooltip immensely, allowing you to focus only on the data that matters.
  • Advanced Label Customization: Using built-in macros and regular expressions, you can customize the data set labels. If you have very long item names, you can now extract just the part you need to keep your graphs clean and readable.
  • Data Multiplication: Need to convert a value on the fly? You can now apply a multiplier directly within the widget’s data set configuration. This is perfect for when you need to change units of measurement for display purposes without creating a new calculated item.

The difference is night and day. A cluttered, hard-to-read Zabbix graph becomes a clean, interactive, and insightful visualization.

The Ultimate Table Widget

While Zabbix has widgets like “Top hosts,” they’ve always felt a bit rigid. The new Table widget is incredibly flexible and allows you to build the exact views you need for any scenario.

One of my favorite features is the “Column per pattern” mode. Imagine you want to see the incoming and outgoing traffic for all network interfaces on a host, side-by-side. With this widget, you can!

Here’s how it works:

  • You define an item pattern for your rows (e.g., the interface name using tags).
  • You then define a pattern for each column (e.g., one for `bits.sent` and another for `bits.recv`).
  • The widget intelligently organizes the data into a clean table with interfaces as rows and your metrics as columns.

You can also add a footer row to perform calculations like sum or average. This is incredibly useful for getting an overview of a cluster. For instance, you can display the average CPU and memory utilization across all nodes in a single, elegant table.

Improved Navigation Widgets

The new Host/Group Navigator and Item Navigator also bring welcome improvements. The Host Navigator provides better filtering and a more intuitive way to navigate through host group hierarchies, which is especially helpful for complex environments. The Item Navigator includes a search box that works on tags, allowing you to quickly find and display specific groups of metrics in another widget, like our new super-graph!

Final Thoughts and a Call to Action

These custom widgets have genuinely enhanced my Zabbix experience. They add a layer of polish, usability, and power that was sorely missing from the default dashboards. It’s a testament to the strength of the open-source community, and I hope the Zabbix team takes inspiration from this work for future official releases.

Now, I want to hear from you. What do you think of these widgets? Are there any features you’ve been desperately wanting for your Zabbix dashboards? Let me know in the comments below! Perhaps if we gather enough feedback, we can share it with the developer and help make these tools even better.

If you enjoyed this video and found it helpful, please give it a nice like and subscribe for more content. See you next week!


Stay Connected with Quadrata:

Gartner’s Magic Quadrant: A Crystal Ball for IT or an Illusion?

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata. I know I’ve been away for a few weeks—I managed to get a bit of a vacation in—and I’ve come back with a ton of ideas for new open-source software to share with you.

But today, I want to take a step back and have a more general chat. This is for anyone who works in a company that has to deal with much larger enterprise clients, or for anyone involved in the high-stakes decision of choosing which software to invest in. In the world of IT, there’s a powerful and often mysterious force that guides these decisions: the Gartner Magic Quadrant.

It’s often treated like a crystal ball, a tool that can predict the future of your tech empire and tell you exactly where to step next. While it’s certainly a useful instrument, it’s crucial to understand what it is, how it works, and most importantly, its limitations.

What Exactly Is the Gartner Magic Quadrant?

Simply put, the Magic Quadrant is a series of market research reports that provide a visual snapshot of a specific tech market. Whether it’s cloud computing, observability, or storage, Gartner maps out the main competitors, helping you understand the landscape at a glance. For a top manager who doesn’t have time to research hundreds of solutions, it simplifies the immense complexity of the IT world into a single, digestible chart.

Decoding the Four Squares

The “magic” happens within a four-square grid, where vendors are placed based on their “Ability to Execute” and “Completeness of Vision.” Here’s what each quadrant means:

  • Leaders (Top Right): These are the champions. They have a strong vision that aligns with Gartner’s view of the future and the resources to execute it. They are well-established, reliable, and considered the top players in their field.
  • Challengers (Top Left): These vendors dominate the market today and have a strong ability to execute, but their vision for the future might not be as clear or innovative. They are strong performers but may be less prepared for tomorrow’s shifts.
  • Visionaries (Bottom Right): These are the innovators. They understand where the market is going and have a compelling vision, but they may not have the resources or market presence to execute on that vision at scale just yet.
  • Niche Players (Bottom Left): These vendors focus successfully on a small segment of the market. They might be very good at what they do, but they lack either a broad vision or the ability to outperform others across the board.

Why the Magic Quadrant Is So Influential

If you’ve ever tried to sell a product to a large enterprise, you’ve likely been asked, “Are you in the Gartner Magic Quadrant?” If the answer is yes, the doors magically open. Why? Because it represents a safe choice.

There’s an old saying in IT: “No one ever got fired for buying IBM.” The Magic Quadrant works on a similar principle. A manager can point to it and say, “I chose a Leader. It was the best on the market according to the experts. If it doesn’t work out, what more could I have done?” It provides a shield of justification.

For vendors, being placed in the quadrant—especially as a Leader—is a powerful marketing tool. It validates their position in the market and instantly gives them credibility. It works for both the buyer and the seller.

The Hamlet-like Doubt: Is the Leader Always the Best Choice?

And here is where the critical thinking comes in. Just because a product is in the “Leaders” quadrant, does that automatically make it the right choice for your company? This is the fundamental question every manager should ask.

The process to get into the quadrant is incredibly complex and resource-intensive. It requires detailed reports on financials, sales strategy, customer feedback, marketing, and innovation. This creates a few potential issues:

1. It Favors the Already Favored

Large, multinational corporations have the money, specialized staff, and massive structures needed to provide Gartner with the exhaustive data required. This creates a high barrier to entry for small-to-medium-sized companies or innovative startups that might have a superior product but lack the corporate machinery to prove it according to Gartner’s specific methodology.

2. The Open Source Blind Spot

Open source solutions often don’t fit neatly into these corporate boxes. A powerful open-source tool might require more initial customization and “handiwork,” but in return, it offers unparalleled flexibility. The Magic Quadrant’s model can struggle to properly evaluate this trade-off, often overlooking solutions that could be a perfect fit for a company willing to invest in configuration over out-of-the-box features.

3. It’s Based on the Past, Not the Future

The analysis relies heavily on past performance and existing data. A truly disruptive, game-changing technology that doesn’t fit the standard parameters might not even make it onto the chart. By the time it does, it might be too late.

Conclusion: Use It as a Map, Not a Destination

So, what’s the takeaway? The Gartner Magic Quadrant is an excellent starting point. If you know nothing about a particular market, it gives you a fantastic overview of the key players. But your work doesn’t end there. The most critical step is due diligence.

You must dive deeper to understand your company’s unique, real-world needs. No two businesses are exactly alike, even if they’re in the same industry. To stay ahead of the curve, you need a tool molded to your specific workflows, not a one-size-fits-all solution that’s beautiful and feature-packed but of which you’ll use barely a fifth. Think about it: if you want the ultimate performance car, do you buy the best-selling Volkswagen, or do you seek out a niche masterpiece like a Ferrari or a Bugatti?

Choosing the Leader is the easy path. But putting in the passion and the effort to analyze, think, and then decide on the truly *right* tool—that’s what makes a great manager. Don’t just follow the chart; understand your needs, explore all options (even the niche ones!), and make an informed decision that will genuinely drive your business forward.


That’s all for today! I hope this discussion was useful. What are your thoughts on the Gartner Magic Quadrant? Have you used it to make decisions? Let me know in the comments below!

If you liked this post and the accompanying video, please give it a like and subscribe to the channel if you haven’t already. I’ll be back next week with a very interesting—and yes, niche—tool that I think you’ll love.

Bye everyone!

– Dimitri

Connect with me and the community:

Finally! OpenAI Enters the Open-Source Arena with Two New Models

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata. For a while now, I’ve been waiting for something genuinely new to discuss in the world of artificial intelligence. The on-premise, open-source scene has been buzzing, but largely dominated by excellent models from the East. I was waiting for a major American player to make a move, and finally, the moment has arrived. OpenAI, the minds behind ChatGPT, have released not one, but two completely open-source models. This is a big deal, and in this post, I’m going to break down what they are, what they can do, and put them to the test myself.

What’s New from OpenAI? A Revolution in the Making

OpenAI has released two “open-weight” models, which means the trained weights themselves are published for anyone to download, run, and fine-tune (the training data, however, remains private). This is fantastic news for developers, researchers, and hobbyists like us, as it allows for deep customization. The two new models are:

  • GPT-OSS-120B: A massive 120-billion parameter model.
  • GPT-OSS-20B: A more accessible 20-billion parameter model.

This move is a significant step, especially with a permissive Apache 2.0 license, which allows for commercial use. You can build on top of these models, fine-tune them with your own data, and deploy them in your applications without the heavy licensing restrictions we often see.

Key Features That Matter

So, what makes these models stand out? Here are the highlights:

  • Truly Open License: The Apache 2.0 license gives you immense freedom to innovate and even commercialize your work.
  • Designed for Agentic Tasks: These models are built to be “agents” that can interact with tools and perform complex, multi-step tasks. While the term “agentic” is a bit of a buzzword lately, the potential is there.
  • Deeply Customizable: With open weights, you can perform post-training to tailor the model to your specific needs, creating a specialized LLM for your unique use case.
  • Full Chain of Thought: A major point of contention with closed models is their “black box” nature. You get an answer but can’t see the reasoning. These models expose their entire thought process, allowing you to understand why they reached a certain conclusion. This transparency is crucial for debugging and trust.

Choosing Your Model: Hardware and Performance

The two models cater to very different hardware capabilities.

The Powerhouse: GPT-OSS-120B

This is the star of the show, with performance comparable to OpenAI’s closed o4-mini model. However, running it is no small feat. You’ll need some serious hardware, like an NVIDIA H100 GPU with at least 80GB of VRAM. This is not something most of us have at home, but it’s a game-changer for businesses and researchers with the right infrastructure.

The People’s Model: GPT-OSS-20B

This is the model most of us can experiment with. It’s designed to be more “human-scale” and offers performance roughly equivalent to the `o3-mini` model. The hardware requirements are much more reasonable:

  • At least 16GB of VRAM on a dedicated NVIDIA GPU.
  • A tool like Ollama or vLLM to run it (at the time of writing, Ollama already has full support!).

This is the model I’ll be focusing my tests on today.
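
If you want to follow along at home, getting the model running under Ollama is a one-liner; this assumes a recent Ollama release, since support for these models arrived around their launch:

# Download the 20B model and open an interactive prompt
ollama run gpt-oss:20b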

My Hands-On Test: Putting GPT-OSS-20B to Work with Zabbix

Benchmarks are one thing, but real-world performance is what truly counts. I decided to throw a few complex, Zabbix-related challenges at the 20B model to see how it would handle them. I used LM Arena to compare its output side-by-side with another strong model of a similar size, Qwen2.

Test 1: Zabbix JavaScript Preprocessing

My first test was a niche one: I asked the model to write a Zabbix JavaScript preprocessing script to modify the output of a low-level discovery rule by adding a custom user macro. This isn’t a simple “hello world” prompt; it requires an understanding of Zabbix’s specific architecture, LLD, and JavaScript context.

The Result: I have to say, both models did an impressive job. They understood the context of Zabbix, preprocessing, and discovery macros. The JavaScript they generated was coherent and almost perfect. The GPT-OSS model’s code needed a slight tweak—it wrapped the code in a function, which isn’t necessary in Zabbix, and made a small assumption about input parameters. However, with a minor correction, the code worked. Not bad at all for a model running locally!

Test 2: Root Cause Analysis of IT Events

Next, I gave the model a set of correlated IT events with timestamps and asked it to identify the root cause. The events were:

  1. Filesystem full on a host
  2. Database instance down
  3. CRM application down
  4. Host unreachable

The Result: This is where the model’s reasoning really shone. It correctly identified that the “Filesystem full” event was the most likely root cause. It reasoned that a full disk could cause the database to crash, which in turn would bring down the CRM application that depends on it. It correctly identified the chain of dependencies. Both GPT-OSS and Qwen2 passed this test with flying colors, demonstrating strong logical reasoning.

Test 3: The Agentic Challenge

For my final test, I tried to push the “agentic” capabilities. I provided the model with a tool to interact with the Zabbix API and asked it to fetch a list of active problems. Unfortunately, this is where it stumbled. While it understood the request and even defined the tool it needed to use, it failed to actually execute the API call, instead getting stuck or hallucinating functions. This shows that while the potential for tool use is there, the implementation isn’t quite seamless yet, at least in my initial tests.

Conclusion: A Welcome and Necessary Step Forward

So, what’s my final verdict? The release of these open-source models by OpenAI is a fantastic and much-needed development. It provides a powerful, transparent, and highly customizable alternative from a Western company in a space that was becoming increasingly dominated by others. The 20B model is a solid performer, capable of impressive reasoning and coding, even if it has some rough edges with more advanced agentic tasks.

For now, it stands as another great option alongside models from Mistral and others. The true power here lies in the community. With open weights and an open license, I’m excited to see how developers will improve, fine-tune, and build upon this foundation. This is a very interesting time for local and on-premise AI.

What do you think? Have you tried the new models? What are your impressions? Let me know your thoughts in the comments below!


Stay Connected with Me:

Copyparty: The Lightweight, Powerful File Server You Didn’t Know You Needed

Good morning, everyone, and welcome back to Quadrata! This is my corner of the internet dedicated to the open-source world and the IT solutions that I—and hopefully you—find exciting. If you enjoy this kind of content, don’t forget to leave a like on the video and subscribe to the channel!

This week, we’re diving back into the world of open-source solutions. I stumbled upon a truly stunning tool in the file-sharing space that has a wonderful nostalgic feel, reminiscent of the BBS days of the 90s. It’s called Copyparty, and its charm lies not just in its retro vibe but in its incredible versatility. You can install it almost anywhere, making it a fantastic utility to have in your toolkit.

So, let’s take a closer look together.

What Exactly is Copyparty?

At its core, Copyparty is a web file server that allows you to share and exchange files. What makes it special is that it’s all contained within a single Python file. This makes it incredibly lightweight and portable. While you can run it directly, I prefer using it inside a Docker container for easier management and deployment.

But why use it? The answer is simplicity and performance. If you’ve ever needed to quickly move files between your PC and your NAS, or share a large file with a friend without jumping through hoops, Copyparty could be the perfect, high-performing solution for you.

A Surprising Number of Features in a Tiny Package

I was genuinely impressed by the sheer number of features packed into this tool. It’s highly customizable and offers much more than simple file transfers. Here’s a condensed list of its most interesting capabilities:

  • Smart Uploads & Downloads: When you upload a large file, Copyparty can intelligently split it into smaller chunks. This maximizes your bandwidth and, more importantly, allows for resumable transfers. If your connection drops halfway through, you can pick up right where you left off.
  • File Deduplication: To save precious disk space, Copyparty uses file hashes to identify and avoid storing duplicate files.
  • On-the-fly Compression: You can have files automatically zipped and compressed on the fly, which is another great space-saving feature.
  • Batch Renaming & Tagging: If you have a large collection of photos or, like in the old days, a folder full of MP3s, you can quickly rename them based on a specific pattern.
  • Extensive Protocol Support: It’s not just limited to HTTP. Copyparty supports a whole suite of protocols, including WebDAV, FTPS, TFTP, and Samba, making it a complete hub for file communication.
  • Truly Cross-Platform: It runs virtually everywhere: Linux, macOS, Windows, Android, and even on a Raspberry Pi, thanks to its optimized nature. Yes, you can install it directly on your phone!
  • Built-in Media Tools: Copyparty includes a surprisingly nice music player that can read metadata from your audio files (like BPM and duration) and a compact image browser for viewing your photos.
  • Powerful Command Line (CLI): For those who need to automate or optimize file transfers, there’s a full-featured command-line interface.

Tailor It to Your Needs: Configuration and Security

One of Copyparty’s greatest strengths is its customizability via a single configuration file, copyparty.conf. Here, you can enable or disable features, block connections from specific IP ranges, set upload limits based on disk space, and even change the UI theme.

For user management, you have a couple of options. You can use a simple user/password file or integrate with an external Identity Provider (IDP). The permission system is also very granular. Using a system of flags (like RW for read/write, MDA, etc.), you can define exactly what each user can do on specific paths. It might seem a bit “primordial” compared to modern web GUIs, but for a compact solution, it’s incredibly fast and effective to manage.
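
To give you an idea of what this looks like in practice, here is a rough sketch of a copyparty.conf with one account and one shared volume. Treat it as illustrative only: the account name, paths, and permission letters are my own example, so double-check the exact syntax against the official copyparty documentation before relying on it.

[accounts]
  dimitri: mysecretpassword     # username: password

[/share]                        # the URL path where the volume is exposed
  /mnt/data                     # the folder on disk to share
  accs:
    r: *                        # everyone can read
    rwmd: dimitri               # this account can read, write, move and delete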

How to Install Copyparty with Docker

As I mentioned, my preferred method is using Docker. Copyparty’s developers provide a straightforward Docker Compose file that makes getting started a breeze. I use a GUI tool like Portainer to manage my containers, which simplifies the process even further.

Here’s a look at a basic docker-compose.yml structure:


services:
  copyparty:
    image: 9001/copyparty
    ports:
      - "3923:3923"
    volumes:
      # Volume for the configuration file (copyparty.conf)
      - /path/to/your/config:/cfg
      # Volume for the files you want to share
      - /path/to/your/data:/mnt
    # ... other docker-specific configurations

In this setup, I’ve defined two key volumes:

  1. A volume for the configuration, where the copyparty.conf file lives.
  2. A mount point for the actual data I want to share or upload to.

Once you run docker-compose up -d, your service will be up and running!

A Walkthrough of the Web Interface

The official GitHub page has a wealth of information and even a live demo, but let me show you my installation. The interface has a fantastic vintage feel, but it’s packed with functionality.

Uploading and Sharing

Uploading a file is as simple as dragging and dropping. First, Copyparty hashes the file to check for duplicates. Then, it begins streaming the upload in a highly optimized way. Once uploaded, you’ll see details like the IP address it was uploaded from and the timestamp.

Sharing is just as easy. You can select a file, create a share link with a custom name, set a password, and even define an expiration date. It generates both a URL and a QR code, making it incredibly convenient to share with others.

Management and Media

The UI includes several helpful tools:

  • Control Center: See active clients, current uploads/downloads, and active shares.
  • Recent Uploads (Extinguisher Icon): Quickly view the latest files added to your share, which is useful for moderation in a multi-user environment.
  • Advanced Search (Lens Icon): A powerful search tool with a wide array of filters to find exactly what you’re looking for.
  • Settings (Gear Icon): Customize the UI, change the language, and tweak how files are displayed.

And don’t forget the built-in media player and image gallery, which turn your file share into a simple media server.

Monitoring

For advanced users, Copyparty can even export its metrics, allowing you to monitor its performance and status with tools like Zabbix. This is a testament to its professional-grade design.

Final Thoughts: Is Copyparty Right for You?

I think Copyparty is a fantastic and interesting product. It’s a very nice solution to try, especially because it’s so lightweight and can be installed almost anywhere. There are many situations where a fast, simple, and self-hosted file-sharing tool is exactly what you need.

Its blend of retro simplicity and modern, powerful features makes it a unique and valuable tool in the open-source world.

That’s all for this week! I’m always eager to hear your thoughts. Have you used Copyparty before? Or do you use another solution that you find more interesting? Let me know in the comments below—perhaps we can discuss it in a future video!

A big greeting from me, Dimitri, and see you next week. Bye everyone!


Follow my work and join the community:

Read More
My Deep Dive into NetLockRMM: The Open-Source RMM You’ve Been Waiting For

My Deep Dive into NetLockRMM: The Open-Source RMM You’ve Been Waiting For

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the fantastic world of open source and IT. If you’re managing multiple systems, you know the challenge: finding a reliable, centralized way to monitor and control everything without breaking the bank. Proprietary solutions can be costly, and the open-source landscape for this has been somewhat limited.

That’s why this week, I’m excited to show you a new product that tackles this problem head-on. It’s an open-source tool called NetLockRMM, and it’s designed to solve the exact problem of remote device management.

What is NetLockRMM?

The RMM in NetLockRMM stands for Remote Monitoring and Management. It’s a self-hosted solution that gives you a single web portal to manage and control your remote hosts. Whether you’re dealing with servers or desktops running Windows, Linux, or macOS, this tool aims to bring them all under one roof. For those of us who use tools like Zabbix to manage numerous proxies or server installations, the idea of a single point of control is incredibly appealing.

Here are some of the key features it offers:

  • Cross-Platform Support: Agents are available for Windows, Linux, and macOS, covering most use cases.
  • System Monitoring: Keep an eye on vital parameters like CPU, RAM, and disk usage. While it’s not a full-fledged monitoring system like Zabbix, it provides a great overview for standard requirements.
  • Remote Control: Access a remote shell, execute commands, and even get full remote desktop access to your Windows machines directly from your browser.
  • File Transfer: Easily upload or download files to and from your managed hosts.
  • Automation: Schedule tasks and run scripts across your fleet of devices to automate maintenance and checks.
  • Multi-Tenancy: Manage different clients or departments from within the same instance.

Getting Started: The Installation and Setup Guide

One of the best parts about NetLockRMM is how simple it is to get up and running. Here’s a step-by-step guide to get you started.

Prerequisites

All you really need is a system with Docker installed. The entire application stack runs in containers, making deployment clean and isolated. If you plan to access the portal from the internet, you’ll also need a domain name (FQDN).

Step 1: The Initial Installation

The development team has made this incredibly easy. The official documentation points to a single Bash script that automates the setup.

  1. Download the installation script from their repository (https://docs.netlockrmm.com/en/server-installation-docker).
  2. Make it executable (e.g., chmod +x /home/docker-compose-quick-setup.sh).
  3. Run the script. It will ask you a few questions to configure your environment, such as the FQDN you want to use and the ports for the services.
  4. The script will then generate the necessary docker-compose.yml file and, if you choose, deploy the containers for you (see the command sketch just after this list).
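
If you prefer to see it as plain commands, the whole procedure boils down to something like the sketch below. The script name and prompts come from the steps above, while the download URL is deliberately left as a placeholder, so grab the real one from the documentation page linked in step 1.

# download the quick-setup script (use the exact URL from the NetLockRMM docs)
wget -O docker-compose-quick-setup.sh "<script URL from the NetLockRMM docs>"
chmod +x docker-compose-quick-setup.sh
# the script asks for your FQDN and service ports, then writes docker-compose.yml
# and can optionally bring the containers up for you
sudo ./docker-compose-quick-setup.sh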

While you can easily manage the deployment from the command line, I’m getting quite fond of using a handy container management tool to deploy my stacks, which makes the process even more convenient.

Step 2: Activating Your Instance

Here’s an important point. While NetLockRMM is open-source, the developers have a fair model to support their work. To fully unlock its features, you need to get a free API key.

  1. Go to the official NetLockRMM website and sign up for an account.
  2. Choose the On-Premise Open Source plan. It’s free and allows you to manage up to 25 devices, which is very generous for home labs or small businesses.
  3. In your portal dashboard, navigate to “My Product” to find your API key.
  4. In your self-hosted NetLockRMM instance, go to Settings > System and paste the Member Portal API Key.

Without this step, the GUI will work, but you won’t be able to add any new hosts. So, make sure you do this first!

Step 3: Deploying Your First Agent

With the server running and activated, it’s time to add your first machine.

  1. In the NetLockRMM dashboard, click the deployment icon in the top navigation bar.
  2. Create a new agent configuration or use the default. This is where you’ll tell the agent how to connect back to your server.
  3. This is critical: For the API, App, and other URLs, make sure you enter the full FQDN, including the port number (e.g., https://rmm.yourdomain.com:443). The agent won’t assume the default port, and it won’t work without it.
  4. Select the target operating system (Windows, Linux, etc.) and download the customized installer.
  5. Run the installer on your target machine.
  6. Back in the NetLockRMM dashboard, the new machine will appear in the Unauthorized Hosts list. Simply authorize it to add it to your fleet of managed devices.

Exploring the Key Features in Action

Once an agent is authorized, you can click on it to see a wealth of information and tools. You get a summary of the OS, hardware specs, firewall status, and uptime. You can also browse running processes in the task manager and see a list of services.

Powerful Remote Control

The remote control features are where NetLockRMM truly shines. For Windows, the remote desktop access is fantastic. It launches a session right in your browser, giving you full GUI control without needing any other software. It’s fast, responsive, and incredibly useful.

For Linux, the remote terminal is currently more of a command-execution tool than a fully interactive shell, but it’s perfect for running scripts or a series of commands. You can also browse the file system and transfer files on all supported platforms.

Automation and Scripting

The automation section allows you to create policies and jobs that run on a schedule. You can define checks for disk space, running services, or even script your own checks. There’s also a growing library of community scripts you can use for common tasks, like running system updates on Ubuntu.

My Final Thoughts: A Promising Future

NetLockRMM is a young but incredibly promising project. It’s under very active development—when I checked their GitHub, the last release was just a few days ago! This shows a dedicated team working to improve the product.

It fills a significant gap in the open-source ecosystem, providing a powerful, modern, and easy-to-use RMM solution that can compete with paid alternatives. While there are a few cosmetic bugs and rough edges, the core functionality is solid and works well.

I believe that with community support—through feedback, bug reports, and contributions—this tool could become something truly special. I’ve already given them a star on GitHub, and I encourage you to check them out too.


I hope I’ve shown you something new and interesting today. This is exactly the kind of project we love to see in the open-source world.

But what do you think? Have you tried NetLockRMM, or do you use another open-source alternative for remote management? I’d love to hear your thoughts and recommendations in the comments below. Every comment helps me and the rest of the community learn.

And as always, if you enjoyed this deep dive, please subscribe to the channel for more content like this. See you next week with another episode!

Bye everyone, from Dimitri.

Stay in touch with Quadrata:

Read More
Taming Your Containers: A Deep Dive into Komodo, the Ultimate Open-Source Management GUI

Taming Your Containers: A Deep Dive into Komodo, the Ultimate Open-Source Management GUI

Hello everyone, Dimitri Bellini here, and welcome back to Quadrata, my corner of the internet dedicated to the world of open-source and IT. If you’re like me, you love the power and flexibility of containers. But let’s be honest, managing numerous containers and multiple hosts purely through the command line can quickly become overwhelming. It’s easy to lose track of what’s running, which services need attention, and how your host resources are holding up.

This week, I stumbled upon a solution that genuinely changed my mood and simplified my workflow: Komodo. It’s an open-source container management platform that is so well-made, I just had to share it with you.

What is Komodo?

At its core, Komodo is an open-source graphical user interface (GUI) designed for the management of containers like Docker, Podman, and others. It provides a centralized dashboard to deploy, monitor, and manage all your containerized applications, whether they are running locally or on remote hosts. The goal is to give you back control and visibility, turning a complex mess of shell commands into a streamlined, intuitive experience.

Key Features That Make Komodo Shine

  • Unified Dashboard: Get a bird’s-eye view of all your hosts and the containers running on them. Komodo elegantly displays resource usage (CPU, RAM, Disk Space), operating system details, and more, all in one place.
  • Multi-Host Management: Komodo uses a core-periphery architecture. You install the main Komodo instance on one server and a lightweight agent on any other hosts you want to manage. This allows you to control a “cluster” of machines from a single, clean web interface.
  • Effortless Deployments: You can deploy applications (which Komodo calls “stacks”) directly from Docker Compose files. Whether you paste the code into the UI, point to files on the server, or link a Git repository, Komodo handles the rest.
  • Automation and CI/CD: Komodo includes features for building images directly from your source code repositories and creating automated deployment procedures that can be triggered by webhooks.
  • Advanced User Management: You can create multiple users and groups, and even integrate with external authentication providers like GitHub, Google, or any OIDC provider.

How Does Komodo Compare to the Competition?

Many of you are probably familiar with Portainer. It has been a fantastic solution for years, but its focus has shifted towards Kubernetes, and the free Community Edition has become somewhat limited compared to its commercial offerings. Portainer pioneered the agent-based multi-host model, which Komodo has adopted and refined.

On the other end of the spectrum is Dockge, a much simpler tool focused on managing Docker Compose files on a single host. It’s a great, lightweight option, but Komodo offers a far more comprehensive suite of features for those managing a more complex environment.

Getting Started with Komodo: A Step-by-Step Guide

One of the best things about Komodo is how easy it is to set up. All you need is a machine with Docker installed.

1. Installation

The official documentation makes this incredibly simple (https://komo.do/docs/setup/mongo). The installation is, fittingly, container-based.

  1. Create a dedicated directory for your Komodo installation.
  2. Download the docker-compose.yml and .env files provided on the official Komodo website. Komodo uses a database to store its configuration, giving you the choice between MongoDB (the long-standing default) or FerretDB (a PostgreSQL-based alternative). For a simple start, the default files work perfectly.
  3. Run the following command in your terminal:
    docker-compose --env-file ./.env up -d

And that’s it! Komodo, its agent (periphery), and its database will start up as containers on your machine.

2. First-Time Setup

Once the containers are running, navigate to your server’s IP address on port 9120 (e.g., http://YOUR_SERVER_IP:9120). The first time you access the UI, it will prompt you to create an administrator account. Simply enter your desired username and password, and you’ll be logged into the main dashboard.

Exploring the Komodo Dashboard and Deploying an App

The dashboard is clean and intuitive. You’ll see your server(s) listed. The host where you installed Komodo is automatically added. You can easily add more remote hosts by installing the Komodo agent on them using a simple command provided in the UI.

Deploying Your First Stack (Draw.io)

Let’s deploy a simple application to see Komodo in action. A stack is essentially a project defined by a Docker Compose file.

  1. From the main dashboard, navigate to Stacks and click New Stack.
  2. Give your stack a name, for example, drawio-app.
  3. Click Configure. Select the server you want to deploy to.
  4. For the source, choose UI Defined. This allows you to paste your compose file directly.
  5. In the Compose file editor, paste the configuration for a Draw.io container. Here’s a simple example:
    services:
      drawio:
        image: jgraph/drawio
        ports:
          - "8080:8080"
          - "8443:8443"
        restart: unless-stopped

  6. Click Save and then Update.
  7. The stack is now defined, but not yet running. Click the Deploy button and confirm. Komodo will pull the image and start your container.

You can now see your running service, view its logs, and access the application by clicking the port link—all from within the Komodo UI. It’s incredibly slick!

Modifying a Stack

Need to change a port or an environment variable? It’s just as easy. Simply edit the compose file in the UI, save it, and hit Redeploy. Komodo will gracefully stop the old container and start the new one with the updated configuration.

Final Thoughts

I have to say, I’m thoroughly impressed with Komodo. It strikes a perfect balance between simplicity and power. It provides the deep visibility and control that power users need without a steep learning curve. The interface is polished, the feature set is rich, and the fact that it’s a thriving open-source project makes it even better.

I’ll definitely be adopting Komodo to manage the entropy on my own servers. It’s a fantastic piece of software that I can wholeheartedly recommend to anyone working with containers.

But that’s my take. What do you think? Have you tried Komodo, or do you use another tool for managing your containers? I’d love to hear your thoughts and suggestions in the comments below. Your ideas might even inspire a future video!

That’s all for today. A big salute from me, Dimitri, and I’ll see you next week!


Don’t forget to subscribe to my YouTube channel for more open-source content:

Quadrata on YouTube

Join our community on Telegram:

ZabbixItalia Telegram Channel

Read More
Unlock Your Servers from Anywhere: A Deep Dive into Apache Guacamole

Unlock Your Servers from Anywhere: A Deep Dive into Apache Guacamole

Good morning everyone, Dimitri Bellini here! Welcome back to Quadrata, my channel dedicated to the open-source world and the IT that I love—and that you, my viewers, clearly enjoy too.

In this post, we’re diving into a tool that’s a bit esoteric but incredibly powerful, something I first used years ago and have recently had the chance to rediscover: Apache Guacamole. No, it’s not a recipe for your next party; it’s a fantastic open-source tool that allows you to connect to your applications, shells, and servers using nothing more than a web browser.

What is Apache Guacamole?

At its core, Guacamole is a clientless remote desktop gateway. This means you can access your remote machines—whether they use RDP, SSH, VNC, or even Telnet—directly from Chrome, Firefox, or any modern HTML5 browser. Imagine needing to access a server while you’re away from your primary workstation. Instead of fumbling with VPNs and installing specific client software, you can just open a browser on your laptop, tablet, or even your phone and get full access. It’s a game-changer for convenience and accessibility.

The architecture is straightforward but robust. Your browser communicates with a web application (running on Tomcat), which in turn talks to the Guacamole daemon (`guacd`). This daemon acts as a translator, establishing the connection to your target machine using its native protocol (like RDP or SSH) and streaming the display back to your browser as simple HTML5.

Key Features That Make Guacamole Stand Out

Guacamole isn’t just a simple proxy; it’s packed with enterprise-grade features that make it suitable for a wide range of use cases:

  • Broad Protocol Support: It natively supports VNC, RDP, SSH, and Telnet, covering most of your remote access needs.
  • Advanced Authentication: You can integrate it with various authentication systems, including Active Directory, LDAP, and even Two-Factor Authentication (2FA), to secure access.
  • Granular Permissions: As an administrator, you can define exactly which users or groups can access which connections.
  • Centralized Logging & Screen Recording: This is a huge feature for security and compliance. Guacamole can log all activity and even record entire user sessions as videos, providing a complete audit trail of who did what and when.
  • Screen Sharing: Need to collaborate on a problem? You can share your active session with a colleague by simply sending them a link. You can both work in the same shell or desktop environment simultaneously.

Surprising Powerhouse: Where You’ve Already Seen Guacamole

While it might not be a household name, you’ve likely used Guacamole without even realizing it. It’s the powerful engine behind several major commercial products, including:

  • Microsoft Azure Bastion
  • FortiGate SSL Web VPN
  • CyberArk PSM Gateway

The fact that these major security and cloud companies build their products on top of Guacamole is a massive testament to its stability and power.

Getting Started: Installation with Docker

The easiest and most recommended way to get Guacamole up and running is with Docker. In the past, this meant compiling various components, but today, it’s a much simpler process. You’ll need three containers:

  1. guacamole/guacd: The native daemon that handles the protocol translations.
  2. guacamole/guacamole: The web application front-end.
  3. A Database: PostgreSQL or MySQL to store user and connection configurations.

Important Note: I found that the official docker-compose.yml file in the documentation can be problematic. The following method is based on a community-provided configuration that works flawlessly with the latest versions.

Step 1: Create a Directory and Initialize the Database

First, create the directory that will hold the generated database schema; the compose file below expects it at /opt/guacamole/db-init. Then, run the following command to have the Guacamole image generate the initialization SQL. The PostgreSQL container will execute this file on its first start and build the initial database structure.


mkdir -p /opt/guacamole/db-init
docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --postgres > /opt/guacamole/db-init/initdb.sql

Step 2: Create Your Docker Compose File

Inside your main directory, create a file named docker-compose.yml. This file will define the three services we need to run.


services:
  guacd:
    container_name: guacd
    image: guacamole/guacd
    volumes:
      - /opt/guacamole/drive:/drive:rw
      - /opt/guacamole/record:/record:rw
    networks:
      - guacamole
    restart: always

  guacdb:
    container_name: guacdb
    image: postgres:15.2-alpine
    environment:
      PGDATA: /var/lib/postgresql/data/guacamole
      POSTGRES_DB: guacamole_db
      POSTGRES_USER: guacamole_user
      POSTGRES_PASSWORD: guacpass
    volumes:
      - /opt/guacamole/db-init:/docker-entrypoint-initdb.d:z
      - /opt/guacamole/data:/var/lib/postgresql/data:Z
    networks:
      - guacamole
    restart: always

  guacamole:
    container_name: guac-guacamole
    image: guacamole/guacamole
    depends_on:
      - guacd
      - guacdb
    environment:
      GUACD_HOSTNAME: guacd
      POSTGRESQL_HOSTNAME: guacdb
      POSTGRESQL_DATABASE: guacamole_db
      POSTGRESQL_USER: guacamole_user
      POSTGRESQL_PASSWORD: guacpass
      RECORDING_SEARCH_PATH: /record
      # uncomment if you're behind a reverse proxy
      # REMOTE_IP_VALVE_ENABLED: true
      # uncomment to disable brute-force protection entirely
      # BAN_ENABLED: false
      # https://guacamole.apache.org/doc/gug/guacamole-docker.html#running-guacamole-behind-a-proxy
    volumes:
      - /opt/guacamole/record:/record:rw
    networks:
      - guacamole
    ports:
      - 8080:8080/tcp
    restart: always

networks:
  guacamole:
    name: guacamole

Be sure to change the guacpass password (used by both the guacdb and guacamole services) to a strong one of your own!

Step 3: Launch Guacamole

Now, from your terminal in the same directory, simply run:


docker-compose up -d

Docker will pull the images and start the three containers. In a minute or two, your Guacamole instance will be ready!

Your First Connection: A Quick Walkthrough

Once it’s running, open your browser and navigate to http://YOUR_SERVER_IP:8080/guacamole/.

The default login credentials are:

  • Username: guacadmin
  • Password: guacadmin

After logging in, head to Settings > Connections to add your first remote machine. Click “New Connection” and fill out the details. For an SSH connection, you’ll set the protocol to SSH and enter the hostname/IP, username, and password. For Windows RDP, you’ll do the same but may also need to check the “Ignore Server Certificate” box under the Parameters section if you’re using a self-signed certificate.

Once saved, your new connection will appear on your home screen. Just click it, and you’ll be dropped right into your remote session, all within your browser tab. You can have multiple sessions open at once and switch between them like browser tabs. To access features like the clipboard or file transfers, use the Ctrl+Alt+Shift key combination to open the Guacamole side menu.

A True Game-Changer for Remote Access

As you can see, Apache Guacamole is an incredibly versatile and powerful tool. Whether you’re a system administrator who needs a centralized access point, a developer working remotely, or a company looking to enhance security with a bastion host and session recording, it’s a solution that is both elegant and effective.

I highly recommend giving it a try. It’s one of those open-source gems that can fundamentally improve your workflow.

What are your thoughts? Have you used Guacamole or a similar tool before? Let me know in the comments below! And if you found this guide helpful, don’t forget to share it.


Thank you for reading! For more content on open-source and IT, make sure to subscribe to the channel.

YouTube Channel: Quadrata

Join our community on Telegram: ZabbixItalia Telegram Channel

See you in the next one!

Read More
Zabbix 7.4 is Here! A Deep Dive into the Game-Changing New Features

Zabbix 7.4 is Here! A Deep Dive into the Game-Changing New Features

Good morning, everyone! It’s Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open-source IT. It’s an exciting week because our good friend, Zabbix, has just rolled out a major new version: Zabbix 7.4! After months of hard work from the Zabbix team, this release is packed with features that will change the way we monitor our infrastructure. So, let’s dive in together and explore what’s new.

The Headline Feature: Nested Low-Level Discovery

Let’s start with what I consider the most mind-bending new feature: nested low-level discovery (LLD). Until now, LLD was fantastic for discovering objects like file systems or network interfaces on a host. But we couldn’t go deeper. If you discovered a database, you couldn’t then run another discovery *within* that database to find all its tablespaces dynamically.

With Zabbix 7.4, that limitation is gone! I’ve set up a demo to show you this in action. I created a discovery rule that finds all the databases on a host. From the output of that first discovery, a new “discovery prototype” of type “Nested” can now be created. This second-level discovery can then parse the data from the first one to find all the tablespaces specific to each discovered database.

The result? Zabbix first discovers DB1 and DB2, and then it automatically runs another discovery for each of them, creating items for every single tablespace (like TS1 for DB1, TS2 for DB1, etc.). This allows for an incredible level of granularity and automation, especially in complex environments like database clusters or containerized applications. This is a true game-changer.
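
To picture how this fits together, imagine the first discovery returning something like the JSON below; the macro and field names are my own illustration, not the output of any specific template. A nested discovery prototype can then be pointed at the tablespaces array of each row, producing a second level of {#TSNAME}-style macros for every database.

[
  { "{#DBNAME}": "DB1", "tablespaces": [ { "name": "TS1" }, { "name": "TS2" } ] },
  { "{#DBNAME}": "DB2", "tablespaces": [ { "name": "TS1" } ] }
]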

And it doesn’t stop there. We can now also have host prototypes within host prototypes. Previously, if you discovered a VMware host and it created new hosts for each virtual machine, those new VM hosts couldn’t run their own discovery to create *more* hosts. Now they can, opening the door for multi-layered infrastructure discovery.

A Smarter Way to Onboard: The New Host Wizard

How many times have new users felt a bit lost when adding their first host to Zabbix? What hostname do I use? How do I configure the agent? The new Host Wizard solves this beautifully.

Found under Data Collection -> Host Wizard, this feature provides a step-by-step guide to get your monitoring up and running. Here’s a quick walkthrough:

  1. Select a Template: You start by searching for the type of monitoring you need (e.g., “Linux”). The wizard will show you compatible templates. Note that not all templates are updated for the wizard yet, but the main ones for Linux, Windows, AWS, Azure, databases, and more are already there.
  2. Define the Host: You provide a hostname and assign it to a host group, just like before, but in a much more guided way.
  3. Configure the Agent: This is where the magic happens. For an active agent, for example, you input your Zabbix server/proxy IP and configure security (like a pre-shared key). The wizard then generates a complete installation script for you to copy and paste directly into your Linux or Windows shell! This script handles everything—installing the agent, configuring the server address, and setting up the keys. It’s incredibly convenient.
  4. Fine-Tune and Deploy: The final step shows you the configurable macros for the template in a clean, human-readable format, making it easy to adjust thresholds before you create the host.

A quick heads-up: I did notice a small bug where the wizard’s script currently installs Zabbix Agent 7.2 instead of 7.4. I’ve already opened a ticket, and I’m sure the Zabbix team will have it fixed in a patch release like 7.4.1 very soon.

Dashboard and Visualization Upgrades

Real-Time Editing and a Fresh Look

Dashboards have received a major usability boost. You no longer have to click “Edit,” make a change to a widget, save it, and then see the result. Now, all changes are applied in real-time as you configure the widget. If you thicken a line in a graph, you see it thicken instantly. This makes dashboard creation so much faster and more intuitive.

Furthermore, Zabbix has introduced color palettes for graphs. Gone are the days of having multiple metrics on a graph with nearly identical shades of the same color. You can now choose a palette that assigns distinct, pleasant, and easily recognizable colors to each item, making your graphs far more readable.

The New Item Card Widget

There’s a new widget in town called the Item Card. When used with something like the Host Navigator widget, you can select a host, then select a specific item (like CPU Utilization), and the Item Card will populate with detailed information about that item: its configuration, recent values, a mini-graph, and any associated triggers. It’s a fantastic way to get a quick, focused overview of a specific metric.

Powerful Enhancements for Maps and Monitoring

Maps Get a Major Overhaul

Maps are now more powerful and visually appealing than ever. Here are the key improvements:

  • Element Ordering: Finally, we can control the Z-index of map elements! You can bring elements to the front or send them to the back. This means you can create a background image of a server rack and place your server icons perfectly on top of it, which was impossible to do reliably before.
  • Auto-Hiding Labels: To clean up busy maps, labels can now be set to appear only when you hover your mouse over an element.
  • Dynamic Link Indicators: The lines connecting elements on a map are no longer just tied to trigger status. You can now have their color or style change based on an item’s value, allowing you to visualize things like link bandwidth utilization directly on your map.

More Control with New Functions and Security

Zabbix 7.4 also brings more power under the hood:

  • OAuth 2.0 Support: You can now easily configure email notifications using Gmail and Office 365, as Zabbix provides a wizard to handle the OAuth 2.0 authentication.
  • Frontend-to-Server Encryption: For security-conscious environments, you can now enable encryption for the communication between the Zabbix web frontend and the Zabbix server.
  • New Time-Based Functions: Functions like first.clock and last.clock have been added, giving us more power to correlate events based on their timestamps, especially when working with logs.

Small Changes, Big Impact: Quality of Life Improvements

Sometimes it’s the little things that make the biggest difference in our day-to-day work. Zabbix 7.4 is full of them:

  • Inline Form Validation: When creating an item or host, Zabbix now instantly highlights any required fields you’ve missed, preventing errors before you even try to save.
  • Copy Button for Test Output: When you test a preprocessing step and get a large JSON output, there’s now a simple “Copy” button. No more struggling to select all the text in the small window!
  • New Templates: The library of official templates continues to grow, with notable additions for enterprise hardware like Pure Storage.

Final Thoughts

Zabbix 7.4 is a massive step forward. From the revolutionary nested discovery to the user-friendly Host Wizard and the countless usability improvements, this release offers something for everyone. It makes Zabbix both more powerful for seasoned experts and more accessible for newcomers.

What do you think of this new release? Is there a feature you’re particularly excited about, or something you’d like me to cover in more detail? The nested discovery part can be complex, so I’m happy to discuss it further. Let me know your thoughts in the comments below!

And with that, that’s all for today. See you next week!


Don’t forget to engage with the community:

  • Subscribe to my YouTube Channel: Quadrata
  • Join the discussion on the Zabbix Italia Telegram Channel: ZabbixItalia

Read More
Chat with Your Zabbix: A Practical Guide to Integrating AI with Zabbix AI MCP Server

Unlocking Zabbix with AI: A Look at the Zabbix AI MCP Server

Good morning everyone, Dimitri Bellini here, back on my channel Quadrata! Today, we’re diving into something truly interesting, a bit experimental, and as always, involving our good friend Zabbix. This exploration comes thanks to a member of the Italian Zabbix community, Matteo Peirone, who reached out on LinkedIn to share a fascinating project he developed. I was immediately intrigued and knew I had to show it to you.

So, what are we talking about? It’s called the Zabbix AI MCP Server, and it allows us to drive operations within Zabbix using artificial intelligence. Let’s break down what this means and how it works.

What is the Zabbix AI MCP Server?

At its core, the Zabbix AI MCP Server acts as an intermediary, bridging the gap between artificial intelligence and the Zabbix server’s APIs. Many of you might already be familiar with Zabbix APIs, which allow us to consult data or perform actions within our Zabbix environment. This project aims to simplify these interactions significantly, especially for those not deeply versed in API scripting.

To get started, we need a few key components:

  • An inference engine: This can be cloud-based or local (via Ollama or vLLM). I’ve been experimenting with a few.
  • An adequate AI model compatible with the engine.
  • The Zabbix AI MCP Server itself.
  • A small, yet crucial, project called mcp-to-openapi-proxy.
  • In my setup, I’m using Open Web UI as a chat interface, similar to ChatGPT, to interact with the AI.

Understanding MCP: Model Context Protocol

Before we go further, it’s important to understand what “MCP” stands for. It means Model Context Protocol. This protocol, invented by Anthropic (the creators of Claude), is designed to allow AI models to interact with external “tools.” These tools can be anything from platform functionalities to specific software features.

Essentially, MCP provides a standardized way for an AI to:

  1. Discover available tools and their capabilities (e.g., functions, resources).
  2. Understand how to use these tools, including descriptions and invocation methods.

This is particularly relevant for AI agents, which are sophisticated prompts instructed to perform tasks that might require external interactions, like research or system operations. MCP helps standardize these tool interactions, which can be a challenge as not all LLM models handle function calls equally well.
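
As a rough illustration of what that discovery step returns, an MCP server answers a tools/list request with entries such as the hedged example below. The tool name and schema are invented for this illustration, but the name, description, and inputSchema structure is what the protocol defines.

{
  "tools": [
    {
      "name": "problem_get",
      "description": "Retrieve the current problems from the Zabbix API",
      "inputSchema": {
        "type": "object",
        "properties": {
          "limit": { "type": "integer", "description": "Maximum number of problems to return" }
        }
      }
    }
  ]
}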

How the Zabbix AI MCP Server Works

The Zabbix AI MCP Server, developed by Matteo Peirone, leverages this MCP framework. It exposes Zabbix’s API functionalities as “tools” that an AI can understand and use. This means you can:

  • Consult data: Ask for the latest problems, analyze triggers, or get details about a host.
  • Perform actions: Create or update objects within Zabbix (if not in read-only mode).

All of this can be done without needing to write complex API scripts yourself!

The Architecture in My Setup:

Here’s how the pieces connect in my demonstration:

  1. Open Web UI: This is my chat interface where I type my requests in natural language.
  2. mcp-to-openapi-proxy: This acts as a bridge. Open Web UI is instructed to look for tools here. This proxy converts MCP functions into REST API endpoints (normalized in the OpenAPI standard) that Open Web UI can consume. It essentially acts as an MCP client.
  3. Zabbix AI MCP Server: This is the star of the show. The mcp-to-openapi-proxy communicates with this server. The Zabbix AI MCP Server is configured with the details of my Zabbix instance (URL, authentication token or credentials).
  4. Zabbix Server: The Zabbix AI MCP Server then interacts with the actual Zabbix server APIs to fetch data or perform actions based on the AI’s request.

Getting Started: Installation and Setup Guide

Here’s a brief rundown of how I got this up and running. It might seem a bit involved, but following these steps should make it manageable:

  1. Clone the Zabbix AI MCP Server Repository:

    git clone https://github.com/mpeirone/zabbix-mcp-server.git

    (You’ll find the repository on Matteo Peirone’s GitHub)

  2. Navigate into the directory and install dependencies:

    cd zabbix-mcp-server
    uv sync

    (I’m using uv here, which is a fast Python package installer and resolver).

  3. Configure the Zabbix AI MCP Server:
    Copy the example configuration file:

    cp config/.env.example .env

    Then, edit the .env file to include your Zabbix server URL, authentication method (token or user/password), and set READ_ONLY=false if you want to test creation/update functionalities (use with caution!).

    ZABBIX_URL="http://your-zabbix-server/api_jsonrpc.php"
    ZABBIX_TOKEN="your_zabbix_api_token"
    # or
    # ZABBIX_USER="your_user"
    # ZABBIX_PASSWORD="your_password"
    READ_ONLY=false

  4. Install and Run the mcp-to-openapi-proxy:
    This component exposes the MCP server over HTTP.

    pipx install uv
    uvx mcpo --port 8000 --api-key "top-secret" -- uv run python3.11 src/zabbix_mcp_server.py

    This command starts the proxy on port 8000 and, in turn, launches your Zabbix MCP server application. The value you pass with --api-key (e.g., “top-secret”) is the token you’ll need when configuring Open Web UI.

  5. Set up Open Web UI:
    Deploy Open Web UI (e.g., via Docker). I’ve configured mine to connect to a local Ollama instance, but you can also point it to other LLM providers.
  6. Configure Tools in Open Web UI:

    • In Open Web UI, navigate to the Admin Panel -> Settings -> Connections to set up your LLM connection (e.g., Ollama, OpenAI, OpenRouter).
    • Then, go to Tools and add a new tool server:

      • URL: Point it to where your `mcp-to-openapi-proxy` is running (e.g., `http://my_server_ip:8000/`).
      • Authentication: Use “Bearer Token” and provide the API key you passed to `mcp-to-openapi-proxy` (e.g., “top-secret”).
      • Give it a name (e.g., “Zabbix MCP Proxy”) and ensure it’s enabled.

Putting It to the Test: Demo Highlights

In my video, I demonstrated a few queries:

  • “Give me the latest five Zabbix problems in a nice table.”
    Using a local Mistral model via vLLM, the system successfully called the Zabbix MCP Server and retrieved the problem data, formatting it into a table. The accuracy was good, matching the problems shown in my Zabbix dashboard.
  • Fetching Host Details:
    I asked, “Give me the details of the host called Zabbix server.” Initially, with the local model, the phrasing needed to be precise. Switching to a more powerful model like Gemini Pro (via OpenRouter) and specifying “hostname equal to Zabbix server” yielded the correct host ID and details. This highlights how the LLM’s capability plays a role in interpreting requests and using the tools.

One challenge observed is that sometimes, for more complex information (like correlating event IDs with host names that aren’t returned by the initial problem.get API call), the AI might need more sophisticated tool definitions or better prompting to make multiple, related API calls. However, the beauty of MCP is that you could potentially create a custom “tool” within the Zabbix MCP Server that performs these multiple queries internally and presents a consolidated result.
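
As a sketch of what such a consolidated tool could do internally, it might simply chain two standard Zabbix API calls: a problem.get to fetch the open problems, followed by an event.get on the returned event IDs with selectHosts to resolve the host names. The parameter values below are illustrative and authentication is omitted for brevity.

{ "jsonrpc": "2.0", "method": "problem.get",
  "params": { "recent": true, "limit": 5, "sortfield": "eventid", "sortorder": "DESC" },
  "id": 1 }

{ "jsonrpc": "2.0", "method": "event.get",
  "params": { "eventids": [ "12345" ], "selectHosts": [ "host", "name" ] },
  "id": 2 }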

The Potential and Why This is Exciting

This approach is incredibly versatile. For those not comfortable with APIs, it’s a game-changer. But even for seasoned users, it opens up possibilities for quick queries and potentially for building more complex AI-driven automation around Zabbix.

The Zabbix AI MCP Server is an experiment, something new and fresh. It’s a fantastic starting point that can be refined and improved, perhaps with your help and ideas! The fact that it’s built on an open standard like MCP means it could integrate with a growing ecosystem of AI tools and agents.

Join the Conversation!

This is just the beginning. It’s fascinating to think about how we can use methodologies like the MCP server not just within Zabbix, but across many other applications. The ability to automate and interact with complex systems using natural language is a powerful concept.

What do you think about this? Can you see yourself using something like this? What other use cases come to mind? Let me know in the comments below – I’m really keen to hear your thoughts and start a discussion on this topic.

If you found this interesting, please give the video a big thumbs up, and if you haven’t already, subscribe to Quadrata for more explorations into the world of open source and IT!

That’s all for today. See you next week!

A big thank you again to Matteo Peirone for this innovative project!


Connect with me and the community:

Read More