Posts Tagged zabbix

Why RAGFlow might be the Definitive Open Source AI Knowledge Base Solution

Good morning everyone! I’m Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the open-source world and technology that I love—and that I hope you love too.

If you’ve been following my recent experiments, you might remember my attempt to build a Telegram bot using a software called Clawdbot/Moltbot (or similar variants). To be honest? It was a bit of a glamorous failure. But in the world of technology, a failed experiment is just a stepping stone to a better solution.

That failure led me back to the drawing board and back to a tool I looked at about a year ago to see how much it had matured. I’m talking about RAGFlow. In this post, I’m going to walk you through why I believe RAGFlow is currently the best open-source solution for Retrieval-Augmented Generation (RAG), and I’ll show you how I used it to build a highly accurate AI assistant for the Zabbix Italia community.

What is RAGFlow and Why Do We Need It?

We are talking about RAG (Retrieval-Augmented Generation)—a paradigm in the AI world that allows us to extract information from documents or texts and use that specific data to generate answers via an inference engine.

My first thought was to use Dify, which is fantastic for creating agents and pipelines. However, for deep document understanding, it didn’t quite hit the mark for my specific needs. RAGFlow, on the other hand, excels at solving the “Garbage In, Garbage Out” problem.

Here is why RAGFlow stands out:

  • Deep Document Understanding: Unlike basic text splitters, RAGFlow uses visual models (OCR & Layout Analysis) to understand the structure of a document—identifying titles, tables, and figures.
  • Template-Based Chunking: It classifies data intelligently. Whether you are uploading a resume, a manual, or an Excel table, RAGFlow tunes the ingestion process to fit the format.
  • Grounded Citations: This is the killer feature. When the AI answers, it provides a reference (and even an image snippet) showing exactly where in the document it found the information. This eliminates the fear of AI “hallucinations” or invented answers.
  • Heterogeneous Data Support: From Markdown and PDFs to Excel and Word documents, it handles it all.

The Goal: A Zabbix Help Bot

My objective was practical: I wanted to create a Telegram bot for the Zabbix Italia community. The idea was to feed the AI the official Zabbix documentation (specifically version 7.0.6) so that when a user asks a technical question, the bot provides a coherent, accurate answer based only on that documentation.

Using RAGFlow, I didn’t just want a full-text search; I wanted an assistant that could reply in a conversational tone while strictly adhering to the technical facts provided in the PDFs.

Installation: Getting RAGFlow Running

RAGFlow is a comprehensive stack. It’s not just a single container; it involves MySQL, Elasticsearch, MinIO, and more. Therefore, the best way to run it is via Docker Compose.

System Requirements

Be warned, this isn’t a lightweight tool. To run this effectively in your lab, you will need:

  • CPU: Minimum 4 vCPU
  • RAM: At least 16GB (Recommended)
  • GPU (Optional but recommended): I used an NVIDIA RTX 8000 for local inference, which provided excellent results.

Step-by-Step Setup

The installation is straightforward if you are familiar with Docker:

  1. Prepare the System: You must increase the virtual memory map count for Elasticsearch (to make the change persistent across reboots, also add vm.max_map_count=262144 to /etc/sysctl.conf). Run:

    sudo sysctl -w vm.max_map_count=262144

  2. Clone the Repository:

    git clone https://github.com/infiniflow/ragflow.git

  3. Launch the Stack: Navigate to the docker directory and run:

    docker compose -f docker-compose.yml up -d

Once up, you can access the interface via your browser (HTTP on port 80 by default). I recommend setting up Nginx as a reverse proxy for security if you plan to expose this.

Configuration and The “100-Page” Lesson

Inside RAGFlow, I configured Ollama as my model provider, using a local 30B-parameter Qwen model, which offered a great balance of performance and quality.
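
Before pointing RAGFlow at Ollama, it's worth verifying the Ollama endpoint is reachable and the model is actually installed. Here's a quick sketch using Ollama's /api/tags endpoint (which lists locally installed models) on its default port; the host is an assumption for your own setup:

```python
# Quick health check of a local Ollama instance before wiring it
# into RAGFlow. /api/tags returns the locally installed models.
import json
from urllib.request import urlopen

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port; adjust for your host


def list_models(base_url=OLLAMA_URL):
    """Return the names of models installed in the Ollama instance."""
    with urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```

If your Qwen model doesn't appear in the list, RAGFlow's model-provider configuration will fail, so this saves a debugging round-trip.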

A Critical Lesson Learned: During my testing, I initially tried to upload a massive 500-page PDF of the Zabbix documentation. This led to failures in the parsing stage. The issue likely stemmed from context window limits (my local model handles 32k tokens) or the chunking logic struggling with such a large file.

The solution? I split the documentation into smaller PDFs of about 100 pages each. Once I did that, the ingestion worked perfectly. If you are using local engines, keep your document sizes manageable!
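
I split the PDFs by hand, but the same chunking can be scripted. Here's a small Python sketch (the splitting itself assumes the third-party pypdf library, imported lazily; the page-range logic is plain Python):

```python
# Split a large PDF into ~100-page chunks before RAGFlow ingestion.
# Requires: pip install pypdf (imported lazily inside split_pdf).

def chunk_ranges(total_pages, chunk_size=100):
    """Return (start, end) page ranges covering the whole document."""
    return [(start, min(start + chunk_size, total_pages))
            for start in range(0, total_pages, chunk_size)]


def split_pdf(path, chunk_size=100):
    """Write path_part1.pdf, path_part2.pdf, ... of chunk_size pages each."""
    from pypdf import PdfReader, PdfWriter
    reader = PdfReader(path)
    for i, (start, end) in enumerate(chunk_ranges(len(reader.pages), chunk_size), 1):
        writer = PdfWriter()
        for page in reader.pages[start:end]:
            writer.add_page(page)
        with open(f"{path.rsplit('.', 1)[0]}_part{i}.pdf", "wb") as f:
            writer.write(f)


print(chunk_ranges(500))  # five chunks of 100 pages each
```

For my 500-page Zabbix manual this produces five evenly sized parts, each comfortably inside the ingestion limits I hit.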

The Results: Accuracy and Citations

The interface allows you to create a “Chat” application linked to your Knowledge Base. I set up a chat specifically for Zabbix 7.0.6.

When I asked, “Do you know Zabbix?” or requested “What’s new in version 7.0.6?”, the results were impressive. It didn’t just give me a generic summary; it listed specific technical details like TimescaleDB 2.17 support and PostgreSQL 17 compatibility.

Most importantly, it provided citations. I could click on a reference, and RAGFlow showed me the exact image crop of the PDF where it found that data. This is essential for professional environments—you can verify that the AI isn’t making things up.

Integrating with Telegram

RAGFlow provides a robust API, which made the final step of my project surprisingly easy. I wrote a small piece of software to act as a bridge between the Telegram API and RAGFlow.

Now, when a user in the Zabbix Italia channel sends a command, the bot forwards the query to RAGFlow, retrieves the answer (and potentially images), and sends it back to Telegram. It’s a seamless flow that adds immense value to the community.
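
To give you an idea of how thin the bridge is, here's a minimal sketch. The endpoint path, chat ID, and token below are assumptions for illustration based on RAGFlow's chat-completion style API; check the API reference of your own instance before relying on them:

```python
# Sketch of the Telegram -> RAGFlow bridge's RAGFlow side.
# The endpoint path and payload shape are assumptions; the host,
# API key, and chat ID are placeholders for your own instance.

RAGFLOW_URL = "http://ragflow.local"   # hypothetical host
API_KEY = "your-ragflow-api-key"       # hypothetical token
CHAT_ID = "zabbix-7-0-6"               # hypothetical chat app id


def build_request(question, session_id=None):
    """Build URL, headers, and JSON body for a RAGFlow chat call."""
    url = f"{RAGFLOW_URL}/api/v1/chats/{CHAT_ID}/completions"
    headers = {"Authorization": f"Bearer {API_KEY}",
               "Content-Type": "application/json"}
    payload = {"question": question, "stream": False}
    if session_id:  # keep conversation context per Telegram user
        payload["session_id"] = session_id
    return url, headers, payload


def ask_ragflow(question, session_id=None):
    import requests  # third-party: pip install requests
    url, headers, payload = build_request(question, session_id)
    resp = requests.post(url, headers=headers, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()
```

The Telegram side is just a handler that calls ask_ragflow() with the user's message and relays the answer back to the chat.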

Conclusion

In my opinion, RAGFlow is currently the definitive engine for open-source knowledge bases. It has matured significantly over the last few months, refining its imperfections and offering enterprise-grade features like deep document parsing and grounded citations.

Whether you are a developer looking to build a specialized agent or a company needing to organize internal documentation, this is a tool worth trying. It requires some hardware resources, but the payoff in accuracy and capability is worth it.

I’m curious to hear your thoughts. Have you tried RAGFlow? Do you think this “Grounded Citation” feature is as game-changing as I do? Let me know in the comments!

If you enjoyed this deep dive, please leave a like and subscribe. It really helps the channel.

A greeting from Dimitri, and see you next weekend!


A Christmas Gift for Zabbix 7.0 Users: The Ultimate Table Widget is Here!

Good morning everyone, and welcome back to Quadrata! Given the special time of year, I want to wish you all a very Merry Christmas. This week, I have something truly special to share—a real game-changer for anyone using Zabbix, particularly the 7.0 Long-Term Support (LTS) release.

As many of you know, Zabbix 7.0 is the go-to version for production environments where stability and long-term support are critical. While it’s fantastic, I’ve always felt there were a few missing pieces in the dashboarding experience, especially when compared to some features in the newer 7.4 release. Today, that changes. Thanks to the incredible work of our friend Ryan Eberly, we now have a powerful, feature-rich Table widget backported to version 7.0!

Why This Widget is a Game-Changer for Zabbix 7.0

Dashboards are our window into the health of our infrastructure, but creating clean, dynamic, and truly useful tables in Zabbix 7.0 could sometimes be a challenge, especially when dealing with item prototypes from Low-Level Discovery (LLD). The default widgets are good, but they lack the flexibility needed for complex scenarios. Ryan’s “Zabbix Widget Table” fills this gap perfectly.

This isn’t just any table; it’s a highly customizable tool that allows you to present data in ways that were previously difficult or impossible to achieve in the 7.0 LTS version. It’s the Christmas gift we Zabbix users have been waiting for!

How to Install the Zabbix Table Widget

Getting this widget up and running is straightforward, but it requires one crucial step to ensure you get the correct version. Let’s walk through it.

Step 1: Connect to Your Zabbix Server

First, you’ll need to SSH into your Zabbix frontend server and navigate to the correct modules directory. For Zabbix 7.0, the path is:

/usr/share/zabbix/modules/

Note: This path was changed in Zabbix 7.4, so make sure you’re in the right place for 7.0.

Step 2: Clone the GitHub Repository

This is the most important part. You must clone the specific 7.0 branch from Ryan’s repository. If you don’t, you’ll get the latest version (for 7.4), and it won’t work.

Run the following command inside the modules directory:

git clone --branch 7.0 https://github.com/gryan337/zabbix-widgets-table.git

This will create a new directory named zabbix-widgets-table containing all the necessary files.

Step 3: Activate the Module in Zabbix

Once the files are in place, head over to your Zabbix UI.

  1. Navigate to Administration > General > Modules.
  2. Click the “Scan directory” button. Zabbix will discover the new widget.
  3. You will see a new module, likely named “Table” (Flexible Table Widget for Zabbix Dashboard). By default, it will be disabled.
  4. Simply click the status to Enable it.

That’s it! The widget is now ready to be used in your dashboards.

Exploring the Powerful Features

Now for the fun part. I’ve spent some time exploring what this widget can do, and the possibilities are immense. Here are a few examples to show you its power.

Example 1: Dynamic Network Interface Monitoring

Imagine you want a clean table showing the bits received and sent for all network interfaces on a router, and you want this table to update dynamically when you select a different device. With this widget, it’s easy!

  • Clean Interface Names: The widget can group items by a tag (e.g., “interface”), using the tag’s value as the row name. This gets rid of cluttered item names, leaving you with just the interface name.
  • Column per Pattern: You can define columns based on an item key pattern (like `* bits received`). This is perfect for handling item prototypes from LLD.
  • Interactive & Sortable: Every column is sortable and filterable right from the dashboard. Plus, you can configure the table to interact with other widgets, so clicking a row can update a graph to show that specific interface’s data.
  • Quick Links: Add a direct link to the “Latest data” history for any metric, which is incredibly useful for troubleshooting.

Example 2: Horizontal View for Disk Space

Want to monitor disk space across multiple dynamic filesystems (C:, D:, /root, /boot) for Windows or Linux hosts? You can create a compact, horizontal table.

  • Pattern Matching: Use a pattern to pull in all relevant filesystem items automatically.
  • Clean Up Labels with Regex: The item names for filesystems can be long (e.g., `fs/space/available`). The widget allows you to use Zabbix macros with regular expressions to clean these up on the fly, displaying just `/` or `/boot`.
  • Visual Indicators: Display data as numbers, bars, or indicators with custom thresholds for a quick visual status check.
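
The regex clean-up trick is easy to picture in Python (the item names below are hypothetical; in the widget itself you'd express the same pattern through Zabbix macro functions rather than code):

```python
# Demonstrates the kind of regular expression used to shorten long
# filesystem item names down to just the mount point.
# Sample names are hypothetical illustrations.
import re

items = [
    "FS [/]: Space utilization",
    "FS [/boot]: Space utilization",
    "FS [C:]: Space utilization",
]

# Capture just the path between the square brackets.
labels = [re.search(r"\[(.+?)\]", name).group(1) for name in items]
print(labels)  # ['/', '/boot', 'C:']
```

The same capture-group idea, applied in the widget configuration, turns a wall of verbose item names into a compact, readable row of mount points.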

Example 3: Three-Column Interface Status View

Another great use case is creating a simple list showing the operational status of all interfaces across multiple hosts. I configured a three-column layout to show the Host Name, Interface Name, and Status.

  • Trigger Severity Display: This is one of my favorite features. If a trigger is active for an item (e.g., an interface is down), the widget will display the trigger’s severity color right in the table. This provides an immediate, at-a-glance view of problems.
  • Host Context Menu: The widget also provides a quick-access menu for each host, letting you jump directly to its inventory, latest data, or problems.

A Big Thank You and a Call to Action

I am genuinely excited about what this widget brings to the Zabbix community. It adds a layer of professionalism and flexibility to our dashboards that was sorely needed in the 7.0 LTS release. The fact that Ryan Eberly took the time to backport this for us is a testament to the power of the open-source community.

I strongly encourage you to do two things:

  1. Try it out! Add it to your dashboards and see how it can improve your monitoring workflows.
  2. Show your appreciation. Head over to his GitHub repository and give it a star. If you find any bugs or have ideas for features, open an issue. He is actively developing these tools and values our feedback.

With this table widget, and hopefully more to come (like his custom graph widget!), our Zabbix 7.0 dashboards are on their way to becoming more powerful than ever.

Let me know in the comments what you think and what creative dashboards you build with it!

Until next week, happy monitoring, and once again, Merry Christmas!


Dimitri Bellini

My YouTube Channel: Quadrata

Join the conversation on Telegram: ZabbixItalia

NetBox and Zabbix: Creating the Ultimate Source of Truth for Your IT Infrastructure

Good morning everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the world of open source and all the IT that I, and hopefully you, find fascinating.

This week, we’re diving into a powerful solution that addresses a common and persistent challenge in IT management: the lack of a single, reliable source of information. In large infrastructures, it’s often difficult to know what objects exist, where they are, what their relationships are, and how to even begin monitoring them. Many turn to a Configuration Management Database (CMDB), but keeping it manually updated is a struggle. What if we could automate this process?

That’s where a fantastic open-source project called NetBox comes in. And thanks to our friends at Open Source ICT Solutions, there’s a brilliant integration that connects it directly to Zabbix. Let’s explore how to build a true “source of truth” for our network.

What is NetBox and Why Do You Need It?

For those who may not know it, NetBox is a well-established and solid open-source tool designed to be the central repository for your entire IT environment. It’s more than just a spreadsheet; it’s a structured database for everything from your network devices to your data center cabling. It’s designed to be the single source of truth.

NetBox helps you model and document your infrastructure with incredible detail. Its core functionalities include:

  • IP Address Management (IPAM): A robust module for managing all your IP spaces, prefixes, and addresses.
  • Data Center Infrastructure Management (DCIM): Model your physical infrastructure, including data center layouts, rack elevations, and the exact placement of devices within them.
  • Cabling and Connections: Document and visualize every cable connection between your devices, allowing you to trace the entire path of a circuit.
  • Automation and Integration: With a powerful REST API and support for custom scripts, NetBox is built for automation, allowing you to streamline processes and integrate with other tools—like Zabbix!

While maintaining this level of documentation might seem daunting, the benefits are immense, especially when you can automate parts of the workflow.

The NetBox-Zabbix Integration: How It Works

The concept behind this integration is simple yet crucial to understand. The flow of information is one-way: from NetBox to Zabbix.

This is fundamental. NetBox acts as the source of truth, the master record. When you add or update a device in NetBox, the plugin provisions that device in Zabbix. It creates the host, assigns templates, sets up interfaces, and applies tags. It is not Zabbix sending information back to NetBox. This ensures your documentation remains the authoritative source.

I got all my inspiration for this setup from a fantastic blog post on Zabbix.com by the team at Open Source ICT Solutions. They created this plugin and provided an excellent guide, so a huge thank you to them!

Getting Started: A Step-by-Step Guide

For my setup, I wanted to simplify things, so I used a non-official Docker repository designed for NetBox plugin development. It makes getting up and running much faster.

Setting Up the Environment

Here are the commands I used to get the environment ready:

# 1. Clone the unofficial Docker repo for NetBox plugin development
git clone https://github.com/dkraklan/netbox-plugin-development-env.git
cd netbox-plugin-development-env

# 2. Clone the nbxsync plugin from OpensourceICTSolutions into the plugins directory
cd plugins/
git clone https://github.com/OpensourceICTSolutions/nbxsync.git
cd ..

# 3. Add the plugin to the NetBox configuration
vi configuration/configuration.py

# Add 'nbxsync' to the PLUGINS list:
PLUGINS = [
    'nbxsync',
]

# 4. Build and launch the Docker containers
docker-compose build
docker-compose up -d

And just like that, you should have a running NetBox instance with the Zabbix plugin installed!

Configuring the Zabbix Plugin in NetBox

Once NetBox is up, the configuration is straightforward.

  1. Add Your Zabbix Server: In the NetBox UI, you’ll see a new “Zabbix” menu at the bottom. Navigate there and add your Zabbix server. You’ll need to provide a name, the server URL (just the base URL, e.g., http://zabbix.example.com), and an API token from a Zabbix user with sufficient permissions.
  2. Sync Templates: After adding the server, you can click “Sync Templates.” The plugin will connect to your Zabbix instance and pull in all available templates, proxies, macros, and host groups. This is incredibly useful for later steps.
  3. Define Your Infrastructure: Before adding a device, you need to define some core components in NetBox. This is standard NetBox procedure:

    • Create a Site (e.g., your main office or data center).
    • Define a Manufacturer (e.g., Cisco).
    • Create a Device Role (e.g., Core Switch).
    • Create a Device Type, which is a specific model (e.g., Cisco CBS350-24P). Here, you can go to the “Zabbix” tab and pre-assign a default Zabbix template for this device type, which is a huge time-saver!

Provisioning a New Device to Zabbix

Now for the magic. Let’s add a new switch and watch it appear in Zabbix.

  1. Create the Device: Create a new device in NetBox, assigning the Site, Device Role, and Device Type you created earlier.
  2. Add an IP Address: Go to the IPAM section and create an IP address that you will assign to the switch’s management interface.
  3. Configure the Zabbix Interface: Navigate back to your newly created device and click on the “Zabbix” tab.

    • Add a Host Interface. Select your Zabbix server, the interface type (e.g., SNMP), and assign the IP address you just created.
    • Add a Host Group. Assign the Zabbix host group where you want this device to appear.
    • Add any Tags you want. I created a “netbox” tag for easy identification.

  4. Sync to Zabbix: With all the information in place, simply click the “Sync to Zabbix” button. A background job will be queued.

If you switch over to your Zabbix frontend, you’ll see the new host has been created automatically, complete with the correct IP address, assigned to the right host group, linked to the correct template, and with the tags we defined. It’s that simple!
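
Since NetBox exposes everything through its REST API, the device-creation step can also be scripted instead of clicked. Here's a minimal sketch against the standard POST /api/dcim/devices/ endpoint (URL, token, and numeric IDs are placeholders; the nbxsync plugin's own sync endpoint is not shown, as it may differ between versions):

```python
# Sketch: creating a device in NetBox via its REST API.
# The host, token, and IDs below are placeholders for illustration.

NETBOX_URL = "http://netbox.local"
TOKEN = "your-netbox-api-token"  # hypothetical token


def device_payload(name, device_type_id, role_id, site_id):
    """Build the JSON body for POST /api/dcim/devices/.

    Recent NetBox versions use the 'role' field; older releases
    called it 'device_role'.
    """
    return {
        "name": name,
        "device_type": device_type_id,
        "role": role_id,
        "site": site_id,
        "status": "active",
    }


def create_device(payload):
    import requests  # third-party: pip install requests
    resp = requests.post(
        f"{NETBOX_URL}/api/dcim/devices/",
        headers={"Authorization": f"Token {TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Combined with the plugin's background sync, this opens the door to fully automated onboarding: a script creates the device in NetBox, and the nbxsync job provisions it into Zabbix.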

Even better, the integration also pulls some data back for viewing. In the “Zabbix Operation” tab within NetBox, you can see the latest problems for that specific device directly from Zabbix, giving you a unified view without leaving the NetBox interface.

Final Thoughts

I have to say, this is a truly impressive product. Of course, it requires a disciplined workflow to maintain the data in NetBox, but the payoff in consistency, automation, and having a single, reliable source of truth is enormous. From one dashboard, you can control your entire documented infrastructure and its monitoring configuration.

This project is actively developed, and the community is already making requests on GitHub. If you have ideas for new features or find any bugs, I encourage you to contribute. This is a tool that can be incredibly useful for anyone managing a Zabbix instance, from a small lab to a large server farm.

That’s all for today! I hope you found this overview interesting. It’s a powerful combination that can really level up your infrastructure management game.

What do you think? Have you used NetBox before? Let me know your thoughts on this integration in the comments below. As always, if you enjoyed the video and this post, please give it a thumbs up and subscribe to the channel if you haven’t already. See you next week!


Back from Zabbix Summit: An Exclusive Look at the Future with Zabbix 8.0 LTS

Good morning everyone, Dimitri Bellini here! It’s great to be back with you on Quadrata, my channel dedicated to the world of open source and IT. It’s been a little while since my last video, as I’ve been quite busy traveling between the Zabbix Summit in Riga and Gitex in Dubai. The energy at the Summit was incredible, and I returned not just with great memories but with a clear vision of the future of monitoring. And today, I want to share that vision with you.

So, grab a coffee, and let’s dive into a recap of the fantastic Zabbix Summit and, more importantly, the groundbreaking features coming in the next major release: Zabbix 8.0 LTS.

The Zabbix Summit Experience: A Global Community United

This year’s Zabbix Summit in Riga, Latvia, was special. Not only did it mark the 20th anniversary of Zabbix, but it brought together a massive, passionate community from all corners of the globe—from Japan to South America to the Middle East. It’s always amazing to connect with so many people, share use cases, and discuss what we all love about Zabbix.

We had the chance to tour the new Zabbix headquarters—a beautiful, modern space with floors dedicated to development, support, and commercial teams. I even had the pleasure of presenting a use case with my colleague Francesco, which was a fantastic experience. But the main event, the moment we were all waiting for, was Alexei Vladishev’s keynote on what’s next for Zabbix.

The Main Event: What’s Coming in Zabbix 8.0 LTS

Zabbix 8.0 is the next Long-Term Support (LTS) release, which means it’s designed for enterprise environments that need stability and support for up to five years. This release isn’t just an incremental update; it’s a monumental leap forward, addressing many of the current limitations and pushing Zabbix firmly into the realm of a full-fledged observability platform.

Let’s break down the most exciting pillars of this upcoming release.

Revolutionizing Event Management with Complex Event Processing (CEP)

One of the biggest game-changers is the introduction of a native Complex Event Processing (CEP) engine. This is designed to drastically reduce monitoring noise and help us focus on the root cause of issues, not just the symptoms. The CEP will operate on two levels:

  • Single Event Processing: This allows for fine-grained control over individual events as they come in. We’ll be able to perform operations like filtering irrelevant events, normalizing data to a standard format, manipulating tags and severity on the fly, and even anonymizing sensitive information.
  • Multiple Event Processing: This is where the real magic happens. By analyzing events over specific time windows, Zabbix will be able to perform advanced correlation, deduplicate redundant alerts, and distinguish between a root cause and its resulting symptoms.

Best of all, we’ll be able to implement custom logic using JavaScript. Imagine enriching an incoming event with data from your CMDB before it even becomes a problem in Zabbix. The possibilities are endless!

Embracing Observability: APM and OpenTelemetry Integration

Zabbix is officially stepping into the world of Application Performance Monitoring (APM). To do this, it’s fundamentally changing how it handles data. Instead of simple time-series data, Zabbix 8.0 will be built to handle complex, structured JSON data natively.

This architectural shift opens the door for seamless integration with modern observability standards like OpenTelemetry. We will finally be able to ingest traces, logs, and metrics from applications directly into Zabbix and visualize them. During the presentation, we saw a mockup of a request trace, broken down step-by-step, allowing for deep root cause analysis right within the Zabbix UI. This is a massive step forward.

A New Engine for Unprecedented Scale

With all this new data, how will Zabbix scale? While standard databases like PostgreSQL and MySQL will still be supported for smaller setups, the focus for large-scale deployments is shifting to high-performance backends. The star of the show here is ClickHouse.

The new architecture will offload the ingestion process to Zabbix Proxies, which will write data directly to ClickHouse. The Zabbix Server will then query this data for visualization and processing. This design allows Zabbix to handle millions of values per second, making it suitable for even the most demanding environments.

A Fresh Face and Enhanced Usability

Let’s be honest, the Zabbix UI, while functional, could use a modern touch. The Zabbix team knows this, and a complete UI overhaul is planned for 8.0! We saw mockups of a cleaner, fresher, and more intuitive interface.

But perhaps one of the most requested features of all time is finally coming: customizable table views. In the “Problems” view and other tables, you will be able to show, hide, reorder, and sort columns as you see fit. It might seem like a small change, but it’s a huge quality-of-life improvement that we’ve been waiting for.

Monitoring on the Go: The Official Zabbix Mobile App

Finally, Zabbix is developing an official mobile application! This will bring essential monitoring capabilities right to your phone, including:

  • Push notifications for alerts.
  • Problem management and collaboration tools.
  • Aggregated views from multiple Zabbix servers.
  • Integration with both on-premise and Zabbix Cloud instances.

A Glimpse into the Future

Zabbix 8.0 LTS is shaping up to be the most significant release in the product’s 20-year history. It’s evolving from a best-in-class monitoring tool into a comprehensive observability platform ready to meet the challenges of modern IT infrastructures. The expected release date is around mid-2026, and I, for one, cannot wait.

I’ll be keeping a close eye on the public roadmap and will keep you updated as these features move through development. But now, I want to hear from you!

What feature are you most excited about? Is there something else you’d love to see in Zabbix? Let me know in the comments below!

That’s all for today. Thanks for joining me, and I’ll see you in the next video. Bye everyone!


Revolutionize Your Zabbix Dashboards: RME Essential Custom Widgets

Good morning and welcome, everyone! It’s Dimitri Bellini, back again on Quadrata, my channel dedicated to the open-source world and the IT that I love. It’s been a little while since we talked about our good friend Zabbix, and I’m excited to share something I stumbled upon that I think you’re going to love.

While browsing the Zabbix support portal, I came across a community member, Ryan Eberly, who has developed an incredible set of custom widgets. His GitHub repository is a goldmine of enhancements that bring a whole new level of functionality and clarity to our Zabbix dashboards. These aren’t just minor tweaks; they are game-changing improvements that address many of the limitations we’ve all faced.

So, let’s dive in and see how you can supercharge your monitoring setup!

Getting Started: How to Install These Custom Widgets

Installing these widgets is surprisingly simple. Just follow these steps, and you’ll be up and running in no time.

Important Note: These modules are designed for Zabbix 7.2 and 7.4. They leverage new functions not available in the 7.0 LTS version, so they are not backward compatible.

  1. Clone the Repository: First, head over to the developer’s GitHub repository. Find the widget you want to install (for example, the Graph widget), click on “Code,” and copy the clone URL.
  2. Download to Your Server: SSH into your Zabbix server console. In a temporary directory, use the `git clone` command to download the widget files. For example:
    git clone [paste the copied URL here]
  3. Copy to the Zabbix Modules Directory: This is a crucial step. In recent Zabbix versions, the path for UI modules has changed. You need to copy the downloaded widget directory into:
    /usr/share/zabbix/ui/modules/
  4. Scan for New Modules: Go to your Zabbix frontend and navigate to Administration → General → Modules. Click the “Scan directory” button. This is a step many people forget! If you don’t do this, Zabbix won’t see the new widgets you just added.
  5. Enable the Widgets: Once the scan is complete, you will see the new modules listed, authored by Ryan Eberly. By default, they will be disabled. Simply click to enable each one you want to use.

A Deep Dive into the New Widget Capabilities

Now for the fun part! Let’s explore what these new widgets bring to the table. I’ve been testing the enhanced Graph, Table, and Host/Group Navigator widgets, and they are phenomenal.

The Graph Widget We’ve Always Wanted

The default vector graph in Zabbix is good, but Ryan’s version is what it should have been. It introduces features that dramatically improve usability.

  • Interactive Legend: You can now click on a metric in the legend to toggle its visibility on the graph. Want to focus on just one or two data series? Simply click to hide the others. Hold the Ctrl key to select multiple items. This is fantastic for decluttering complex graphs.
  • Sorted Tooltip/Legend: No more hunting through a messy tooltip! The legend now automatically sorts metrics, placing the ones with the highest current value at the top. When you hover over the graph, you get a clean, ordered list, making it instantly clear which metric is which.
  • Hide Zero-Value Metrics: You can configure the widget to automatically hide any metrics that have a value of zero. This cleans up the tooltip immensely, allowing you to focus only on the data that matters.
  • Advanced Label Customization: Using built-in macros and regular expressions, you can customize the data set labels. If you have very long item names, you can now extract just the part you need to keep your graphs clean and readable.
  • Data Multiplication: Need to convert a value on the fly? You can now apply a multiplier directly within the widget’s data set configuration. This is perfect for when you need to change units of measurement for display purposes without creating a new calculated item.

The difference is night and day. A cluttered, hard-to-read Zabbix graph becomes a clean, interactive, and insightful visualization.

The Ultimate Table Widget

While Zabbix has widgets like “Top hosts,” they’ve always felt a bit rigid. The new Table widget is incredibly flexible and allows you to build the exact views you need for any scenario.

One of my favorite features is the “Column per pattern” mode. Imagine you want to see the incoming and outgoing traffic for all network interfaces on a host, side-by-side. With this widget, you can!

Here’s how it works:

  • You define an item pattern for your rows (e.g., the interface name using tags).
  • You then define a pattern for each column (e.g., one for `bits.sent` and another for `bits.recv`).
  • The widget intelligently organizes the data into a clean table with interfaces as rows and your metrics as columns.

You can also add a footer row to perform calculations like sum or average. This is incredibly useful for getting an overview of a cluster. For instance, you can display the average CPU and memory utilization across all nodes in a single, elegant table.

Improved Navigation Widgets

The new Host/Group Navigator and Item Navigator also bring welcome improvements. The Host Navigator provides better filtering and a more intuitive way to navigate through host group hierarchies, which is especially helpful for complex environments. The Item Navigator includes a search box that works on tags, allowing you to quickly find and display specific groups of metrics in another widget, like our new super-graph!

Final Thoughts and a Call to Action

These custom widgets have genuinely enhanced my Zabbix experience. They add a layer of polish, usability, and power that was sorely missing from the default dashboards. It’s a testament to the strength of the open-source community, and I hope the Zabbix team takes inspiration from this work for future official releases.

Now, I want to hear from you. What do you think of these widgets? Are there any features you’ve been desperately wanting for your Zabbix dashboards? Let me know in the comments below! Perhaps if we gather enough feedback, we can share it with the developer and help make these tools even better.

If you enjoyed this video and found it helpful, please give it a nice like and subscribe for more content. See you next week!



Read More
Finally! OpenAI Enters the Open-Source Arena with Two New Models


Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata. For a while now, I’ve been waiting for something genuinely new to discuss in the world of artificial intelligence. The on-premise, open-source scene has been buzzing, but largely dominated by excellent models from the East. I was waiting for a major American player to make a move, and finally, the moment has arrived. OpenAI, the minds behind ChatGPT, have released not one, but two completely open-source models. This is a big deal, and in this post, I’m going to break down what they are, what they can do, and put them to the test myself.

What’s New from OpenAI? A Revolution in the Making

OpenAI has released two “open-weight” models, which means the trained weights themselves are published for anyone to download, inspect, and modify (the training data remains private). This is fantastic news for developers, researchers, and hobbyists like us, as it allows for deep customization. The two new models are:

  • GPT-OSS-120B: A massive 120-billion parameter model.
  • GPT-OSS-20B: A more accessible 20-billion parameter model.

This move is a significant step, especially with a permissive Apache 2.0 license, which allows for commercial use. You can build on top of these models, fine-tune them with your own data, and deploy them in your applications without the heavy licensing restrictions we often see.

Key Features That Matter

So, what makes these models stand out? Here are the highlights:

  • Truly Open License: The Apache 2.0 license gives you immense freedom to innovate and even commercialize your work.
  • Designed for Agentic Tasks: These models are built to be “agents” that can interact with tools and perform complex, multi-step tasks. While the term “agentic” is a bit of a buzzword lately, the potential is there.
  • Deeply Customizable: With open weights, you can perform post-training to tailor the model to your specific needs, creating a specialized LLM for your unique use case.
  • Full Chain of Thought: A major point of contention with closed models is their “black box” nature. You get an answer but can’t see the reasoning. These models expose their entire thought process, allowing you to understand why they reached a certain conclusion. This transparency is crucial for debugging and trust.

Choosing Your Model: Hardware and Performance

The two models cater to very different hardware capabilities.

The Powerhouse: GPT-OSS-120B

This is the star of the show, with performance that OpenAI reports as comparable to its closed o4-mini model. However, running it is no small feat. You’ll need some serious hardware, like an NVIDIA H100 GPU with at least 80GB of VRAM. This is not something most of us have at home, but it’s a game-changer for businesses and researchers with the right infrastructure.

The People’s Model: GPT-OSS-20B

This is the model most of us can experiment with. It’s designed to be more “human-scale” and offers performance roughly equivalent to the `o3-mini` model. The hardware requirements are much more reasonable:

  • At least 16GB of VRAM on a dedicated NVIDIA GPU.
  • A tool like Ollama or vLLM to run it (at the time of writing, Ollama already has full support!).

This is the model I’ll be focusing my tests on today.

My Hands-On Test: Putting GPT-OSS-20B to Work with Zabbix

Benchmarks are one thing, but real-world performance is what truly counts. I decided to throw a few complex, Zabbix-related challenges at the 20B model to see how it would handle them. I used LM Arena to compare its output side-by-side with another strong model of a similar size, Qwen2.

Test 1: Zabbix JavaScript Preprocessing

My first test was a niche one: I asked the model to write a Zabbix JavaScript preprocessing script to modify the output of a low-level discovery rule by adding a custom user macro. This isn’t a simple “hello world” prompt; it requires an understanding of Zabbix’s specific architecture, LLD, and JavaScript context.

The Result: I have to say, both models did an impressive job. They understood the context of Zabbix, preprocessing, and discovery macros. The JavaScript they generated was coherent and almost perfect. The GPT-OSS model’s code needed a slight tweak—it wrapped the code in a function, which isn’t necessary in Zabbix, and made a small assumption about input parameters. However, with a minor correction, the code worked. Not bad at all for a model running locally!
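
As a sketch of what such a script looks like (the function wrapper and the `{#BACKUP.ENABLED}` macro name below are my own illustrative assumptions, not the exact code from the test; in a real Zabbix preprocessing step the body operates on a built-in variable named `value` and returns the result directly):

```javascript
// Illustrative sketch: add a custom user-macro field to every entity in a
// low-level discovery (LLD) JSON array. In Zabbix, the body of this function
// would be the preprocessing script itself, working on the built-in `value`.
function addCustomMacro(value) {
  var lld = JSON.parse(value);
  var rows = Array.isArray(lld) ? lld : lld.data; // LLD may be a bare array or {"data": [...]}
  rows.forEach(function (row) {
    row["{#BACKUP.ENABLED}"] = "1"; // hypothetical custom macro, not from the video
  });
  return JSON.stringify(lld);
}

// Example: two discovered filesystems
var input = JSON.stringify([{ "{#FSNAME}": "/" }, { "{#FSNAME}": "/boot" }]);
console.log(addCustomMacro(input));
```

Stripping the function wrapper and the final example lines is exactly the kind of minor correction the generated code needed.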

Test 2: Root Cause Analysis of IT Events

Next, I gave the model a set of correlated IT events with timestamps and asked it to identify the root cause. The events were:

  1. Filesystem full on a host
  2. Database instance down
  3. CRM application down
  4. Host unreachable

The Result: This is where the model’s reasoning really shone. It correctly identified that the “Filesystem full” event was the most likely root cause. It reasoned that a full disk could cause the database to crash, which in turn would bring down the CRM application that depends on it. It correctly identified the chain of dependencies. Both GPT-OSS and Qwen2 passed this test with flying colors, demonstrating strong logical reasoning.
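
The chain the model reconstructed can be sketched in a few lines; the event names, timestamps, and dependency map below are illustrative assumptions, not data from the test itself:

```javascript
// Sketch of the dependency reasoning: given timestamped events and a
// "depends on" map, start from the earliest symptom and walk the dependency
// chain upward until an event with no further dependency is found.
var events = [
  { name: "filesystem_full",  ts: 100 },
  { name: "db_down",          ts: 130 },
  { name: "crm_down",         ts: 160 },
  { name: "host_unreachable", ts: 190 }
];
var dependsOn = { crm_down: "db_down", db_down: "filesystem_full" };

function rootCause(evts, deps) {
  var byTime = evts.slice().sort(function (a, b) { return a.ts - b.ts; });
  var cause = byTime[0].name;        // earliest event is the first suspect
  while (deps[cause]) cause = deps[cause]; // follow any remaining dependencies
  return cause;
}

console.log(rootCause(events, dependsOn)); // "filesystem_full"
```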

Test 3: The Agentic Challenge

For my final test, I tried to push the “agentic” capabilities. I provided the model with a tool to interact with the Zabbix API and asked it to fetch a list of active problems. Unfortunately, this is where it stumbled. While it understood the request and even defined the tool it needed to use, it failed to actually execute the API call, instead getting stuck or hallucinating functions. This shows that while the potential for tool use is there, the implementation isn’t quite seamless yet, at least in my initial tests.
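
For context, the tool the model was asked to drive boils down to a JSON-RPC 2.0 call to the Zabbix API’s `problem.get` method. This is a hedged sketch, not the exact tool definition from my test; the endpoint URL and token are placeholders:

```javascript
// Build the JSON-RPC 2.0 payload for the Zabbix "problem.get" API method.
// Note: newer Zabbix versions can pass the token via an Authorization header
// instead of the legacy "auth" field.
function buildProblemGetRequest(token) {
  return {
    jsonrpc: "2.0",
    method: "problem.get",
    params: { output: "extend", recent: true, sortfield: ["eventid"], sortorder: "DESC" },
    id: 1,
    auth: token
  };
}

// Sending it (Node 18+ global fetch); commented out so the sketch stays offline:
// fetch("https://zabbix.example.com/api_jsonrpc.php", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildProblemGetRequest("YOUR_API_TOKEN"))
// }).then(function (r) { return r.json(); }).then(console.log);

console.log(JSON.stringify(buildProblemGetRequest("token")));
```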

Conclusion: A Welcome and Necessary Step Forward

So, what’s my final verdict? The release of these open-source models by OpenAI is a fantastic and much-needed development. It provides a powerful, transparent, and highly customizable alternative from a Western company in a space that was becoming increasingly dominated by others. The 20B model is a solid performer, capable of impressive reasoning and coding, even if it has some rough edges with more advanced agentic tasks.

For now, it stands as another great option alongside models from Mistral and others. The true power here lies in the community. With open weights and an open license, I’m excited to see how developers will improve, fine-tune, and build upon this foundation. This is a very interesting time for local and on-premise AI.

What do you think? Have you tried the new models? What are your impressions? Let me know your thoughts in the comments below!



Read More
Zabbix 7.4 is Here! A Deep Dive into the Game-Changing New Features


Good morning, everyone! It’s Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open-source IT. It’s an exciting week because our good friend, Zabbix, has just rolled out a major new version: Zabbix 7.4! After months of hard work from the Zabbix team, this release is packed with features that will change the way we monitor our infrastructure. So, let’s dive in together and explore what’s new.

The Headline Feature: Nested Low-Level Discovery

Let’s start with what I consider the most mind-bending new feature: nested low-level discovery (LLD). Until now, LLD was fantastic for discovering objects like file systems or network interfaces on a host. But we couldn’t go deeper. If you discovered a database, you couldn’t then run another discovery *within* that database to find all its tablespaces dynamically.

With Zabbix 7.4, that limitation is gone! I’ve set up a demo to show you this in action. I created a discovery rule that finds all the databases on a host. From the output of that first discovery, a new “discovery prototype” of type “Nested” can now be created. This second-level discovery can then parse the data from the first one to find all the tablespaces specific to each discovered database.

The result? Zabbix first discovers DB1 and DB2, and then it automatically runs another discovery for each of them, creating items for every single tablespace (like TS1 for DB1, TS2 for DB1, etc.). This allows for an incredible level of granularity and automation, especially in complex environments like database clusters or containerized applications. This is a true game-changer.

And it doesn’t stop there. We can now also have host prototypes within host prototypes. Previously, if you discovered a VMware host and it created new hosts for each virtual machine, those new VM hosts couldn’t run their own discovery to create *more* hosts. Now they can, opening the door for multi-layered infrastructure discovery.

A Smarter Way to Onboard: The New Host Wizard

How many times have new users felt a bit lost when adding their first host to Zabbix? What hostname do I use? How do I configure the agent? The new Host Wizard solves this beautifully.

Found under Data Collection -> Host Wizard, this feature provides a step-by-step guide to get your monitoring up and running. Here’s a quick walkthrough:

  1. Select a Template: You start by searching for the type of monitoring you need (e.g., “Linux”). The wizard will show you compatible templates. Note that not all templates are updated for the wizard yet, but the main ones for Linux, Windows, AWS, Azure, databases, and more are already there.
  2. Define the Host: You provide a hostname and assign it to a host group, just like before, but in a much more guided way.
  3. Configure the Agent: This is where the magic happens. For an active agent, for example, you input your Zabbix server/proxy IP and configure security (like a pre-shared key). The wizard then generates a complete installation script for you to copy and paste directly into your Linux or Windows shell! This script handles everything—installing the agent, configuring the server address, and setting up the keys. It’s incredibly convenient.
  4. Fine-Tune and Deploy: The final step shows you the configurable macros for the template in a clean, human-readable format, making it easy to adjust thresholds before you create the host.

A quick heads-up: I did notice a small bug where the wizard’s script currently installs Zabbix Agent 7.2 instead of 7.4. I’ve already opened a ticket, and I’m sure the Zabbix team will have it fixed in a patch release like 7.4.1 very soon.

Dashboard and Visualization Upgrades

Real-Time Editing and a Fresh Look

Dashboards have received a major usability boost. You no longer have to click “Edit,” make a change to a widget, save it, and then see the result. Now, all changes are applied in real-time as you configure the widget. If you thicken a line in a graph, you see it thicken instantly. This makes dashboard creation so much faster and more intuitive.

Furthermore, Zabbix has introduced color palettes for graphs. Gone are the days of having multiple metrics on a graph with nearly identical shades of the same color. You can now choose a palette that assigns distinct, pleasant, and easily recognizable colors to each item, making your graphs far more readable.

The New Item Card Widget

There’s a new widget in town called the Item Card. When used with something like the Host Navigator widget, you can select a host, then select a specific item (like CPU Utilization), and the Item Card will populate with detailed information about that item: its configuration, recent values, a mini-graph, and any associated triggers. It’s a fantastic way to get a quick, focused overview of a specific metric.

Powerful Enhancements for Maps and Monitoring

Maps Get a Major Overhaul

Maps are now more powerful and visually appealing than ever. Here are the key improvements:

  • Element Ordering: Finally, we can control the Z-index of map elements! You can bring elements to the front or send them to the back. This means you can create a background image of a server rack and place your server icons perfectly on top of it, which was impossible to do reliably before.
  • Auto-Hiding Labels: To clean up busy maps, labels can now be set to appear only when you hover your mouse over an element.
  • Dynamic Link Indicators: The lines connecting elements on a map are no longer just tied to trigger status. You can now have their color or style change based on an item’s value, allowing you to visualize things like link bandwidth utilization directly on your map.

More Control with New Functions and Security

Zabbix 7.4 also brings more power under the hood:

  • OAuth 2.0 Support: You can now easily configure email notifications using Gmail and Office 365, as Zabbix provides a wizard to handle the OAuth 2.0 authentication.
  • Frontend-to-Server Encryption: For security-conscious environments, you can now enable encryption for the communication between the Zabbix web frontend and the Zabbix server.
  • New Time-Based Functions: Functions like first.clock and last.clock have been added, giving us more power to correlate events based on their timestamps, especially when working with logs.

Small Changes, Big Impact: Quality of Life Improvements

Sometimes it’s the little things that make the biggest difference in our day-to-day work. Zabbix 7.4 is full of them:

  • Inline Form Validation: When creating an item or host, Zabbix now instantly highlights any required fields you’ve missed, preventing errors before you even try to save.
  • Copy Button for Test Output: When you test a preprocessing step and get a large JSON output, there’s now a simple “Copy” button. No more struggling to select all the text in the small window!
  • New Templates: The library of official templates continues to grow, with notable additions for enterprise hardware like Pure Storage.

Final Thoughts

Zabbix 7.4 is a massive step forward. From the revolutionary nested discovery to the user-friendly Host Wizard and the countless usability improvements, this release offers something for everyone. It makes Zabbix both more powerful for seasoned experts and more accessible for newcomers.

What do you think of this new release? Is there a feature you’re particularly excited about, or something you’d like me to cover in more detail? The nested discovery part can be complex, so I’m happy to discuss it further. Let me know your thoughts in the comments below!

And with that, that’s all for today. See you next week!


Don’t forget to engage with the community:

  • Subscribe to my YouTube Channel: Quadrata
  • Join the discussion on the Zabbix Italia Telegram Channel: ZabbixItalia

Read More
Mastering SNMP in Zabbix: A Deep Dive into Modern Monitoring Techniques


Good morning, everyone! It’s Dimitri Bellini, and welcome back to my channel, Quadrata, where we explore the fascinating world of open source and IT. This week, I’m revisiting a topic that frequently comes up in my work: SNMP monitoring with Zabbix. There have been some significant updates in recent Zabbix versions, especially regarding how SNMP is handled, so I wanted to share a recap and dive into these new features.

If you enjoy this content, don’t forget to subscribe to the channel or give this video a thumbs up. Your support means a lot!

What Exactly is SNMP? A Quick Refresher

SNMP stands for Simple Network Management Protocol. It’s an internet standard protocol designed for collecting and organizing information about managed devices on IP networks. Think printers, switches, routers, servers, and even more specialized hardware. Essentially, it allows us to query these devices for valuable operational data.

Why Bother with SNMP?

You might wonder why we still rely on such an “old” protocol. The answer is simple:

  • Ubiquity: Almost every network-enabled device supports SNMP out of the box.
  • Simplicity (in concept): It provides a standardized way to access a wealth of internal device information without needing custom agents for every device type.

SNMP Fundamentals You Need to Know

Before diving into Zabbix specifics, let’s cover some SNMP basics:

  • Protocol Type: SNMP primarily uses UDP. This means it’s connectionless, which can sometimes make testing connectivity (like with Telnet for TCP) a bit tricky.
  • Components:

    • Manager: This is the entity that requests information. In our case, it’s the Zabbix Server or Zabbix Proxy.
    • Agent: This is the software running on the managed device that listens for requests from the manager and sends back the requested data.

  • Versions:

    • SNMPv1: The original, very basic.
    • SNMPv2c: The most commonly used version. It introduced improvements like the “GetBulk” operation and enhanced error handling. “c” stands for community-based.
    • SNMPv3: Offers significant security enhancements, including encryption and authentication. It’s more complex to configure but essential for secure environments.

  • Key Operations:

    • GET: Retrieves the value of a specific OID (Object Identifier).
    • GETNEXT: Retrieves the value of the OID following the one specified – useful for “walking” a MIB tree.
    • SET: (Rarely used in Zabbix for monitoring) Allows modification of a device’s configuration parameter via SNMP.
    • GETBULK: (Available in SNMPv2c and v3) Allows the manager to request a large block of data with a single request, making it much more efficient than multiple GET or GETNEXT requests. This is key for modern Zabbix performance!

The `GETBULK` operation is particularly important. Imagine querying a switch with 100 interfaces, and for each interface, you want 10 metrics. Without bulk requests, Zabbix would make 1000 individual requests. This can flood the device and cause its SNMP process to consume excessive CPU, especially on devices with less powerful processors. `GETBULK` significantly reduces this overhead.
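
Using the numbers above, the saving is easy to sketch (the max-repetitions value is an illustrative assumption, matching the common Zabbix default of 10):

```javascript
// Back-of-the-envelope comparison of per-value GET requests versus GETBULK
// PDUs for the scenario described above: 100 interfaces x 10 metrics.
var interfaces = 100, metrics = 10, maxRepetitions = 10;

var individualGets = interfaces * metrics;                              // one GET per value
var bulkRequests   = Math.ceil((interfaces * metrics) / maxRepetitions); // values batched per GETBULK

console.log(individualGets, bulkRequests); // 1000 100
```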

Understanding OIDs and MIBs

You’ll constantly hear about OIDs and MIBs when working with SNMP.

  • OID (Object Identifier): This is a unique, numeric address that identifies a specific piece of information on a managed device. It’s like a path in a hierarchical tree structure. For example, an OID might point to a specific network interface’s operational status or its traffic counter.
  • MIB (Management Information Base): A MIB is essentially a database or a “dictionary” that describes the OIDs available on a device. It maps human-readable names (like `ifDescr` for interface description) to their numeric OID counterparts and provides information about the data type, access rights (read-only, read-write), and meaning of the data. MIBs can be standard (e.g., IF-MIB for network interfaces, common across vendors) or vendor-specific.

To navigate and understand MIBs and OIDs, I highly recommend using a MIB browser. A great free tool is the iReasoning MIB Browser. It’s a Java application that you can download and run without extensive installation. You can load MIB files (often downloadable from vendor websites or found via Google) into it, visually explore the OID tree, see the numeric OID for a human-readable name, and get descriptions of what each OID represents.

For example, in a MIB browser, you might find that `ifOperStatus` (interface operational status) returns an integer. The MIB will tell you that `1` means “up,” `2` means “down,” `3` means “testing,” etc. This information is crucial for creating value mappings in Zabbix to display human-friendly statuses.
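
As a sketch, the MIB description translates directly into the kind of value map you would configure in Zabbix (the numeric codes below come from the standard IF-MIB definition of `ifOperStatus`):

```javascript
// Value mapping for IF-MIB::ifOperStatus, as documented in the standard MIB.
var ifOperStatus = {
  1: "up",
  2: "down",
  3: "testing",
  4: "unknown",
  5: "dormant",
  6: "notPresent",
  7: "lowerLayerDown"
};

console.log(ifOperStatus[2]); // "down"
```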

SNMP Monitoring in Zabbix: The Evolution

Zabbix has supported SNMP for a long time, but the way we implement it has evolved, especially with recent versions.

The “Classic” Approach (Pre-Zabbix 6.4)

Traditionally, SNMP monitoring in Zabbix involved:

  • SNMP Agent Items: You’d create an item of type “SNMP agent” and provide the specific numeric OID (or its textual representation if the MIB was installed on the Zabbix server/proxy) in the “SNMP OID” field.
  • Discovery Rules: For discovering multiple instances (like network interfaces), you’d use a discovery rule, again specifying OIDs. The key would often look like `discovery[{#SNMPVALUE},oid1,{#IFDESCR},oid2,…]`. Each OID would populate a Low-Level Discovery (LLD) macro.

Limitations of the Classic Approach:

  • No True Bulk Requests: Even if “Use bulk requests” was checked in the host interface settings, it was more of an optimization for multiple *items* rather than fetching multiple OID values for a *single* item or discovery rule efficiently. Each OID in a discovery rule often meant a separate query.
  • Synchronous Polling: Each SNMP check would typically occupy a poller process until it completed.
  • Potential Device Overload: As mentioned earlier, many individual requests could strain the monitored device.

The Modern Approach with Zabbix 6.4+

Zabbix 6.4 brought a significant game-changer with new SNMP item types:

  • `snmp.get[OID]`: For fetching a single OID value.
  • `snmp.walk[OID1,OID2,…]`: This is the star of the show! It allows you to “walk” one or more OID branches and retrieve all underlying data in a single operation. The output is a large text block containing all the fetched OID-value pairs.

Key Benefits of the `snmp.walk` Approach:

  • True Bulk SNMP Requests: The `snmp.walk` item inherently uses bulk SNMP operations (for SNMPv2c and v3), making data collection much more efficient.
  • Asynchronous Polling Support: These new item types work with Zabbix’s asynchronous polling capabilities, meaning a poller can initiate many requests without waiting for each one to complete, freeing up pollers for other tasks.
  • Reduced Load on Monitored Devices: Fewer, more efficient requests mean less stress on your network devices.
  • Master/Dependent Item Architecture: The `snmp.walk` item is typically used as a “master item.” It collects a large chunk of data once. Then, multiple “dependent items” (including discovery rules and item prototypes) parse the required information from this master item’s output without making additional SNMP requests.

Implementing the Modern SNMP Approach in Zabbix

Let’s break down how to set this up:

1. Configure the SNMP Interface on the Host

In Zabbix, when configuring a host for SNMP monitoring:

  • Add an SNMP interface.
  • Specify the IP address or DNS name.
  • Choose the SNMP version (v1, v2, or v3). For v2c, you’ll need the Community string (e.g., “public” or whatever your devices are configured with). For v3, you’ll configure security name, levels, protocols, and passphrases.
  • Max repetitions: This setting (default is often 10) applies to `snmp.walk` items and controls how many “repeats” are requested in a single SNMP GETBULK PDU. It influences how much data is retrieved per underlying bulk request.
  • Use combined requests: This is the *old* “Use bulk requests” checkbox. When using the new `snmp.walk` items, this is generally not needed and can sometimes interfere. I usually recommend unchecking it if you’re fully embracing the `snmp.walk` methodology. The `snmp.walk` item itself handles the efficient bulk retrieval.

2. Create the Master `snmp.walk` Item

This item will fetch all the data you need for a set of related metrics or a discovery process.

  • Type: SNMP agent
  • Key: `snmp.walk[oid.branch.1, oid.branch.2, …]`

    Example: `snmp.walk[IF-MIB::ifDescr, IF-MIB::ifOperStatus, IF-MIB::ifAdminStatus]` or using numeric OIDs.

  • Type of information: Text (as it returns a large block of text).
  • Set an appropriate update interval.

This item will collect data like:


IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.2 = INTEGER: down(2)
...and so on for all OIDs specified in the key.

3. Create a Discovery Rule (Dependent on the Master Item)

If you need to discover multiple instances (e.g., network interfaces, storage volumes):

  • Type: Dependent item
  • Master item: Select the `snmp.walk` master item created above.
  • Preprocessing Steps: This is where the magic happens!

    • Add a preprocessing step: SNMP walk to JSON.

      • Parameters: This is where you define your LLD macros and map them to the OIDs from the snmp ‘walk’ output.


        {#IFDESCR} => IF-MIB::ifDescr
        {#IFOPERSTATUS} => IF-MIB::ifOperStatus
        {#IFADMINSTATUS} => IF-MIB::ifAdminStatus
        // or using numeric OIDs:
        {#IFDESCR} => .1.3.6.1.2.1.2.2.1.2
        {#IFOPERSTATUS} => .1.3.6.1.2.1.2.2.1.8

      • This step transforms the flat text output of snmp ‘walk’ into a JSON structure that Zabbix LLD can understand. It uses the SNMP index (the number after the last dot in the OID, e.g., `.1`, `.2`) to group related values for each discovered instance. Zabbix automatically makes `{#SNMPINDEX}` available.

The `SNMP walk to JSON` preprocessor will generate JSON like this, which LLD uses to create items based on your prototypes:


[
{ "{#SNMPINDEX}":"1", "{#IFDESCR}":"lo", "{#IFOPERSTATUS}":"1", ... },
{ "{#SNMPINDEX}":"2", "{#IFDESCR}":"eth0", "{#IFOPERSTATUS}":"2", ... }
]
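
A rough sketch of what this transformation does under the hood may help. This is a simplified stand-in for the built-in preprocessor, handling only the simple `NAME.INDEX = TYPE: value` lines shown above, not the full range of SNMP data types:

```javascript
// Simplified "SNMP walk to JSON": group snmp.walk output lines by their
// trailing SNMP index and emit one JSON object per discovered instance.
function walkToJson(walkText, macroMap) {
  var rows = {};
  walkText.trim().split("\n").forEach(function (line) {
    // e.g. "IF-MIB::ifDescr.2 = STRING: eth0"
    var m = line.match(/^(\S+)\.(\d+) = \w+: (.*)$/);
    if (!m) return;
    var oid = m[1], index = m[2], value = m[3];
    var macro = macroMap[oid];
    if (!macro) return; // skip OIDs we were not asked to map
    rows[index] = rows[index] || { "{#SNMPINDEX}": index };
    rows[index][macro] = value.replace(/^(\w+)\((\d+)\)$/, "$2"); // "down(2)" -> "2"
  });
  return Object.keys(rows).map(function (k) { return rows[k]; });
}

var walk = [
  "IF-MIB::ifDescr.1 = STRING: lo",
  "IF-MIB::ifDescr.2 = STRING: eth0",
  "IF-MIB::ifOperStatus.1 = INTEGER: up(1)",
  "IF-MIB::ifOperStatus.2 = INTEGER: down(2)"
].join("\n");

console.log(JSON.stringify(walkToJson(walk, {
  "IF-MIB::ifDescr": "{#IFDESCR}",
  "IF-MIB::ifOperStatus": "{#IFOPERSTATUS}"
})));
```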

4. Create Item Prototypes (Dependent on the Master Item)

Within your discovery rule, you’ll create item prototypes:

  • Type: Dependent item
  • Master item: Select the same `snmp.walk` master item.
  • Key: Give it a unique key, often incorporating LLD macros, e.g., `if.operstatus[{#IFDESCR}]`
  • Preprocessing Steps:

    • Add a preprocessing step: SNMP walk value.

      • Parameters: Specify the OID whose value you want to extract for this specific item prototype, using `{#SNMPINDEX}` to get the value for the correct discovered instance.


        IF-MIB::ifOperStatus.{#SNMPINDEX}
        // or numeric:
        .1.3.6.1.2.1.2.2.1.8.{#SNMPINDEX}

    • Add other necessary preprocessing steps (e.g., “Change per second,” “Multiply by 8” to convert bytes to bits, custom scripts, value mapping).

For static items (not discovered) that should also use the data from the `snmp.walk` master item, you’d create them as dependent items directly under the host, also using the “SNMP walk value” preprocessor, but you’d specify the full OID including the static index (e.g., `IF-MIB::ifOperStatus.1` if you always want the status of the interface with SNMP index 1).

Practical Tips & Troubleshooting

  • Use `snmpwalk` Command-Line Tool: Before configuring in Zabbix, test your OIDs and community strings from your Zabbix server or proxy using the `snmpwalk` command (part of `net-snmp-utils` or similar packages on Linux).

    Example: `snmpwalk -v2c -c public your_device_ip IF-MIB::interfaces`

    Use the `-On` option (`snmpwalk -v2c -c public -On your_device_ip .1.3.6.1.2.1.2`) to see numeric OIDs, which can be very helpful.

  • Check Zabbix Server/Proxy Logs: If things aren’t working, the logs are your best friend. Increase debug levels if necessary.
  • Consult Zabbix Documentation: The official documentation is a valuable resource for item key syntax and preprocessing options.
  • Test Preprocessing Steps: Zabbix allows you to test your preprocessing steps. For dependent items, you can copy the output of the master item and paste it as input for testing the dependent item’s preprocessing. This is invaluable for debugging `SNMP walk to JSON` and `SNMP walk value`.

Wrapping Up

The introduction of `snmp.walk` and the refined approach to SNMP in Zabbix 6.4+ is a massive improvement. It leads to more efficient polling, less load on your monitored devices, and a more streamlined configuration once you grasp the master/dependent item concept with preprocessing.

While it might seem a bit complex initially, especially the preprocessing steps, the benefits in performance and scalability are well worth the learning curve. Many of the newer official Zabbix templates are already being converted to use this `snmp.walk` method, but always check, as some older ones might still use the classic approach.

That’s all for today! I hope this deep dive into modern SNMP monitoring with Zabbix has been helpful. I got a bit long, but there was a lot to cover!


What are your experiences with SNMP in Zabbix? Have you tried the new `snmp.walk` items? Let me know in the comments below!

Don’t forget to check out my YouTube channel for more content:

Quadrata on YouTube

And join the Zabbix Italia community on Telegram:

ZabbixItalia Telegram Channel

See you next week, perhaps talking about something other than Zabbix for a change! Bye everyone, from Dimitri Bellini.

Read More
Deep Dive into Zabbix Low-Level Discovery & the Game-Changing 7.4 Update


Good morning, everyone, and welcome back to Quadrata! This is Dimitri Bellini, and on this channel, we explore the fascinating world of open source and IT. I’m thrilled you’re here, and if you enjoy my content, please give this video a like and subscribe if you haven’t already!

I apologize for missing last week; work had me on the move. But I’m back, and with the recent release of Zabbix 7.4, I thought it was the perfect time to revisit a powerful feature: Low-Level Discovery (LLD). There’s an interesting new function in 7.4 that I want to discuss, but first, let’s get a solid understanding of what LLD is all about.

What Exactly is Zabbix Low-Level Discovery?

Low-Level Discovery is a fantastic Zabbix feature that automates the creation of items, triggers, and graphs. Think back to the “old days” – or perhaps your current reality if you’re not using LLD yet. Manually creating monitoring items for every CPU core, every file system, every network interface on every host… it’s a painstaking and error-prone process, especially in dynamic environments.

Imagine:

  • A new mount point is added to a server. If you forget to add it to Zabbix, you won’t get alerts if it fills up. Disaster!
  • A network switch with 100 ports. Manually configuring monitoring for each one? A recipe for headaches.

LLD, introduced way back in Zabbix 2.0, came to rescue us from this. It allows Zabbix to automatically discover resources on a host or device and create the necessary monitoring entities based on predefined prototypes.

Why Do We Need LLD?

  • Eliminate Manual Toil: Say goodbye to the tedious task of manually creating items, triggers, and graphs.
  • Dynamic Environments: Automatically adapt to changes like new virtual machines, extended filesystems, or added network ports.
  • Consistency: Ensures that all similar resources are monitored in the same way.
  • Accuracy: Reduces the risk of human error and forgotten resources.

How Does Low-Level Discovery Work?

The core principle is quite straightforward:

  1. Discovery Rule: You define a discovery rule on a host or template. This rule specifies how Zabbix should find the resources.
  2. Data Retrieval: Zabbix (or a Zabbix proxy) queries the target (e.g., a Zabbix agent, an SNMP device, an HTTP API) for a list of discoverable resources.
  3. JSON Formatted Data: The target returns the data in a specific JSON format. This JSON typically contains an array of objects, where each object represents a discovered resource and includes key-value pairs. A common format uses macros like {#FSNAME} for a filesystem name or {#IFNAME} for an interface name.


    {
      "data": [
        { "{#FSNAME}": "/", "{#FSTYPE}": "ext4" },
        { "{#FSNAME}": "/boot", "{#FSTYPE}": "ext4" },
        { "{#FSNAME}": "/var/log", "{#FSTYPE}": "xfs" }
      ]
    }

  4. Prototype Creation: Based on the received JSON data, Zabbix uses prototypes (item prototypes, trigger prototypes, graph prototypes, and even host prototypes) to automatically create actual items, triggers, etc., for each discovered resource. For example, if an item prototype uses {#FSNAME} in its key, Zabbix will create a unique item for each filesystem name returned by the discovery rule.

The beauty of this is its continuous nature. Zabbix periodically re-runs the discovery rule, automatically creating entities for new resources and, importantly, managing resources that are no longer found.

Out-of-the-Box vs. Custom Discoveries

Zabbix comes with several built-in LLD rules, often found in default templates:

  • File systems: Automatically discovers mounted file systems (e.g., on Linux and Windows).
  • Network interfaces: Discovers network interfaces.
  • SNMP OIDs: Discovers resources via SNMP.
  • Others like JMX, ODBC, Windows services, and host interfaces.

But what if you need to discover something specific to your custom application or a unique device? That’s where custom LLD shines. Zabbix is incredibly flexible, allowing almost any item type to become a source for discovery:

  • Zabbix agent (system.run[]): Execute a script on the agent that outputs the required JSON.
  • External checks: Similar to agent scripts but executed on the Zabbix server/proxy.
  • HTTP agent: Perfect for querying REST APIs that return lists of resources.
  • JavaScript items: Allows for complex logic, multiple API calls, and data manipulation before outputting the JSON.
  • SNMP agent: For custom SNMP OID discovery.

The key is that your custom script or check must output data in the LLD JSON format Zabbix expects.
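For example, a minimal agent-side discovery script could assemble that payload like this (a sketch; the macro names mirror the JSON shown earlier, and the static list stands in for real enumeration):

```python
import json

def build_lld(resources):
    """Wrap discovered (fsname, fstype) pairs in the Zabbix LLD JSON format."""
    return json.dumps({
        "data": [
            {"{#FSNAME}": name, "{#FSTYPE}": fstype}
            for name, fstype in resources
        ]
    })

# A real check would enumerate /proc/mounts or query an API here;
# this static list is illustrative only.
print(build_lld([("/", "ext4"), ("/var/log", "xfs")]))
```

Whatever produces the string (a shell script via system.run, an external check, a JavaScript item), Zabbix only cares that the final output follows this structure.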

Configuring a Discovery Rule: Key Components

When you set up a discovery rule, you’ll encounter several important configuration tabs:

  • Discovery rule (main tab): Define the item type (e.g., Zabbix agent, HTTP agent), key, update interval, etc. This is where you also configure how Zabbix handles “lost” resources.
  • Preprocessing: Crucial for custom discoveries! You can take the raw output from your discovery item and transform it. For example, convert CSV to JSON, use regular expressions, or apply JSONPath to extract specific parts of a complex JSON.
  • LLD macros: Here, you map the keys from your discovery JSON (e.g., {#FSNAME}) to JSONPath expressions that tell Zabbix where to find the corresponding values in the JSON output from the preprocessing step.
  • Filters: Include or exclude discovered resources based on regular expressions matching LLD macro values.
  • Overrides: A more advanced feature allowing you to change specific attributes (like item status, severity of triggers, tags) for discovered objects that match certain criteria.
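To make the LLD macros tab concrete, here is a rough sketch of the mapping it performs: each macro is paired with a JSONPath expression that is evaluated against every discovered object. This handles only the trivial `$.field` subset, and the field names are invented:

```python
import json

# Hypothetical raw output of a custom discovery item: a plain JSON array.
raw = json.dumps([
    {"fsname": "/", "fstype": "ext4"},
    {"fsname": "/var/log", "fstype": "xfs"},
])

# LLD macros tab: macro -> JSONPath relative to each discovered object.
lld_macros = {"{#FSNAME}": "$.fsname", "{#FSTYPE}": "$.fstype"}

def apply_macros(obj, macros):
    """Resolve trivial '$.field' JSONPath expressions against one
    discovered object, the way the LLD macros tab maps values."""
    return {macro: obj[path[2:]] for macro, path in macros.items()}

for obj in json.loads(raw):
    print(apply_macros(obj, lld_macros))
```

This is why your custom item does not have to emit `{#MACRO}` keys directly: preprocessing plus the LLD macros tab can adapt almost any JSON shape.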

Managing Lost Resources: A Welcome Improvement

A critical aspect of LLD is how it handles resources that were previously discovered but are no longer present. For a long time, we had the “Keep lost resources period” setting. If a resource disappeared, Zabbix would keep its associated items, triggers, etc., for a specified duration (e.g., 7 days) before deleting them. During this period, the items would often go into an unsupported state as Zabbix tried to query non-existent resources, creating noise.

Starting with Zabbix 7.0, a much smarter option was introduced: “Disable lost resources.” Now, you can configure Zabbix to immediately (or after a period) disable items for lost resources. This is fantastic because:

  • It stops Zabbix from trying to poll non-existent resources, reducing load and unsupported item noise.
  • The historical data for these items is preserved until they are eventually deleted (if configured to do so via “Keep lost resources period”).
  • If the resource reappears, the items can be automatically re-enabled.

You can use these two settings in combination: for example, disable immediately but delete after 7 days. This offers great flexibility and a cleaner monitoring environment.

Prototypes: The Blueprints for Monitoring

Once LLD discovers resources, it needs templates to create the actual monitoring entities. These are called prototypes:

  • Item prototypes: Define how items should be created for each discovered resource. You use LLD macros (e.g., {#FSNAME}) in the item name, key, etc.
  • Trigger prototypes: Define how triggers should be created.
  • Graph prototypes: Define how graphs should be created.
  • Host prototypes: This is a particularly powerful one, allowing LLD to create *new hosts* in Zabbix based on discovered entities (e.g., discovering VMs from a hypervisor).
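To illustrate how a prototype becomes a concrete entity, here is a sketch of the macro substitution Zabbix performs per discovered row (`vfs.fs.size` is a real agent key; the row values are examples):

```python
def expand_prototype(key_template, lld_row):
    """Substitute LLD macros into an item prototype key, producing the
    per-resource item key Zabbix creates."""
    for macro, value in lld_row.items():
        key_template = key_template.replace(macro, value)
    return key_template

row = {"{#FSNAME}": "/var/log", "{#FSTYPE}": "xfs"}
print(expand_prototype("vfs.fs.size[{#FSNAME},pused]", row))
# -> vfs.fs.size[/var/log,pused]
```

One prototype, one discovery rule, and Zabbix stamps out an item per filesystem, interface, or VM.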

The Big News in Zabbix 7.4: Nested Host Prototypes!

Host prototypes have been around for a while, evolving significantly from Zabbix 6.0 to 7.0, gaining features like customizable interfaces, tags, and macro assignments for the discovered hosts. However, there was a significant limitation: a template assigned to a host created by a host prototype could not, itself, contain another host prototype to discover further hosts. In essence, nested host discovery wasn’t supported.

Imagine trying to monitor a virtualized environment:

  1. You discover your vCenter.
  2. You want the vCenter discovery to create host objects for each ESXi hypervisor. (Possible with host prototypes).
  3. Then, you want each discovered ESXi hypervisor host (using its assigned template) to discover all the VMs running on it and create host objects for those VMs. (This was the roadblock!).

With Zabbix 7.4, this limitation is GONE! Zabbix now supports nested host prototypes. This means a template applied to a discovered host *can* indeed contain its own host prototype rules, enabling multi-level, chained discoveries. This is a game-changer for complex environments like Kubernetes, container platforms, or any scenario with layered applications.

A Quick Look at How It Works (Conceptual Demo)

In the video, I demonstrated this with a custom LLD setup:

  1. Initial Discovery: I used a simple system.run item that read a CSV file. This CSV contained information about “parent” entities (simulating, say, hypervisors).
  2. Preprocessing: A “CSV to JSON” preprocessing step converted this data into the LLD JSON format.
  3. LLD Macros: I defined LLD macros like {#HOST} and {#HOSTGROUP}.
  4. Host Prototype (Level 1): A host prototype used these macros to create new hosts in Zabbix and assign them a specific template (let’s call it “Template A”).
  5. The Change in 7.4:

    • In Zabbix 7.0 (and earlier): If “Template A” itself contained a host prototype (e.g., to discover “child” entities like VMs), that nested host prototype would simply not appear or function on the hosts created by the Level 1 discovery. The Zabbix documentation even explicitly stated this limitation.
    • In Zabbix 7.4: “Template A” *can* now have its own discovery rules and host prototypes. So, when the Level 1 discovery creates a host and assigns “Template A”, “Template A” can then kick off its *own* LLD process to discover and create further hosts (Level 2).

This allows for a much more dynamic and hierarchical approach to discovering and monitoring complex infrastructures automatically.
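The demo's first two steps (reading a CSV via system.run, then the "CSV to JSON" preprocessing) can be approximated like this; the column names and host values are hypothetical:

```python
import csv
import io
import json

def csv_to_lld(csv_text):
    """Approximate the 'CSV to JSON' preprocessing step, then map columns
    onto the LLD macros used by the host prototype."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return json.dumps([
        {"{#HOST}": row["host"], "{#HOSTGROUP}": row["group"]}
        for row in rows
    ])

sample = "host,group\nhv-01,Hypervisors\nhv-02,Hypervisors"
print(csv_to_lld(sample))
```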

Conclusion: Embrace the Automation!

Low-Level Discovery is an indispensable Zabbix feature for anyone serious about efficient and comprehensive monitoring. It saves incredible amounts of time, reduces errors, and keeps your monitoring setup in sync with your ever-changing IT landscape.

The introduction of “Disable lost resources” in Zabbix 7.0 was a great step forward, and now, with nested host prototypes in Zabbix 7.4, the power and flexibility of LLD have reached new heights. This opens up possibilities for automating the discovery of deeply layered applications and infrastructure in a way that wasn’t easily achievable before.

I encourage you to explore LLD in your Zabbix environment. Start with the out-of-the-box discoveries, and then don’t be afraid to dive into custom LLDs to tailor Zabbix perfectly to your needs.

What are your thoughts on Low-Level Discovery or this new Zabbix 7.4 feature? Are there any specific LLD scenarios you’d like me to cover in a future video? Let me know in the comments below! Your feedback is always appreciated.

Thanks for watching, and I’ll see you next week!

All the best,
Dimitri Bellini



Read More
Spice Up Your Zabbix Dashboards: Exploring the E-Chart Widget Module

Spice Up Your Zabbix Dashboards: Exploring the E-Chart Widget Module

Good morning everyone, and welcome back! It’s Dimitri Bellini here from Quadrata, your go-to channel for the open-source and IT world that I love – and hopefully, you do too!

This week, we’re diving back into our good friend, Zabbix. Why? Because a contact from the vibrant Brazilian Zabbix community reached out to me. He shared a fascinating project he’s been working on, aimed at enhancing the visual appeal of Zabbix dashboards. I was intrigued, and after taking a look, I knew I had to share it with you.

The project is called E-Chart Zabbix, and as the name suggests, it leverages the powerful Apache ECharts library to bring fresh, dynamic visualizations into our Zabbix frontend.

Discovering Apache ECharts: A Hidden Gem for Data Visualization

Before we jump into the Zabbix module itself, let’s talk about the foundation: Apache ECharts. Honestly, I was blown away. I wasn’t aware of such a rich, well-crafted open-source ecosystem for graphic libraries. We’ve often searched for good charting solutions for client projects, and I wish we’d found these sooner!

ECharts offers a fantastic array of chart types, far beyond the standard Zabbix offerings. Just look at some of the demos:

  • Interactive charts with smooth animations when filtering data series.
  • Easy export to PNG format – a simple but often crucial feature.
  • A vast selection including various pie charts, heatmaps, geomaps, and even radar charts (great for visualizing multi-dimensional performance metrics).

It’s a treasure trove of inspiration for anyone needing to present data effectively. I definitely plan to explore these libraries more myself.

Introducing the E-Chart Zabbix Module by Monzphere

The E-Chart Zabbix module, developed by the folks at Monzphere (a Brazilian company creating both paid and open-source Zabbix extensions), takes a selection of these ECharts visualizations and integrates them as widgets directly into the Zabbix dashboard.

Here are some of the widgets included:

  • Low Level Discovery (LLD) Table: This is a standout feature! It addresses a common request: displaying LLD items (like network interface stats) in a structured table format. It cleverly uses item name patterns (e.g., *:bits received and *:bits sent) to automatically create columns for related metrics. This is incredibly useful for seeing RX/TX, errors, or other stats side-by-side for multiple interfaces.
  • Interface Load Visualization: A graph where the line width dynamically represents the load or traffic volume.
  • Treemap: Excellent for quickly identifying the most significant metrics in a dataset by representing values as proportionally sized rectangles.
  • Horizontal Bar Chart: A familiar format, but enhanced with the ability to use wildcards (*) in item patterns to easily include all metrics from an LLD rule.
  • Funnel Chart: Another great way to visually compare the magnitude of different metrics.
  • Water Effect Chart: A visually appealing gauge-like chart.
  • Sunburst/Donut Chart: A hierarchical visualization, useful for nested data.
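The LLD table's wildcard column matching described above can be sketched with `fnmatch`-style globbing (how the widget matches internally is an assumption; the item names are invented):

```python
import fnmatch

# Invented item names following the "*:bits received" / "*:bits sent" idea.
items = [
    "eth0: bits received", "eth0: bits sent",
    "eth1: bits received", "eth1: bits sent",
]

def select_items(names, pattern):
    """Wildcard selection of item names, roughly what the widget's
    'Item pattern' field appears to do with '*'."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

print(select_items(items, "*bits received"))
```

Each pattern then becomes a column, so every interface's RX/TX metrics line up in one row of the table.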

Important Note: While testing (on Zabbix 7.0), I noticed a couple of things that seem to be works in progress. For instance, the LLD table didn’t automatically convert BPS values to KBPS/MBPS, and some charts like the Pie and Water Effect seemed to only display one metric even when configured with multiple. It’s a new project, so some rough edges are expected, but the potential is definitely there!

Installation Guide (Zabbix 7.0)

Ready to try it out? Here’s how to install the module (I tested this on Zabbix 7.0, compatibility with older versions isn’t specified):

  1. Download the Module:
    Go to the E-Chart Zabbix GitHub repository. Click the ‘Code’ button and copy the repository URL or download the ZIP file.
  2. Access Your Zabbix Server Console:
    SSH into your Zabbix server.
  3. Clone or Extract Files:
    Navigate to a temporary directory. If you copied the URL, use git clone:
    git clone https://github.com/monzphere/echart-zabbix.git
    Or, if you downloaded the ZIP, upload and unzip it:
    unzip echart-zabbix-main.zip (The exact filename might vary).
  4. Move Module Files:
    Copy the contents of the downloaded/cloned directory (it should contain files like `Module.php`, directories like `Widget`, etc.) into your Zabbix frontend modules directory. The target directory is typically:
    /usr/share/zabbix/modules/echart-zabbix
    You might need to create the `echart-zabbix` directory first. Use a command like:
    sudo mkdir /usr/share/zabbix/modules/echart-zabbix
    sudo cp -r echart-zabbix-main/* /usr/share/zabbix/modules/echart-zabbix/
    (Adjust paths and use `sudo` if necessary).
  5. Activate in Zabbix Frontend:
    Log in to your Zabbix web interface.
    Navigate to: Administration -> General -> Modules.
  6. Scan and Enable:
    Click the “Scan directory” button. You should see the “ECharts” module listed, likely authored by Monzphere. By default, it will be disabled. Click on the “Disabled” status to enable it.

Using the New Widgets

Once enabled, you can add these new widgets to any dashboard:

  1. Edit your desired dashboard.
  2. Click “Add widget”.
  3. For the “Type”, select “ECharts”.
  4. A new dropdown will appear allowing you to choose the specific ECharts widget type (Treemap, LLD Table, Funnel, etc.).
  5. Configure the widget, primarily by defining the “Item pattern” to select the data you want to display. You can use wildcards (*) here.
  6. Save the widget and the dashboard.

While the configuration options are currently quite basic, the results can already add a nice visual touch and, in the case of the LLD table, significant functional value.

Context and Final Thoughts

It’s true that Zabbix itself has made strides in visualization, especially with recent improvements to maps (as I covered in a previous video – check it out!), honeycomb views, and gauges. Playing with map backgrounds, icons, and color themes can drastically improve the user experience.

However, the E-Chart Zabbix module offers *different* kinds of visualizations that aren’t natively available. It complements the existing Zabbix features, providing more tools for specific data presentation needs. The LLD table alone is a compelling reason to check it out.

This project is a great example of the community extending Zabbix’s capabilities. While it needs refinement, I believe it’s worth supporting. Trying it out and providing constructive feedback to the developers via GitHub issues is the best way to help it mature.

A big thank you to the Monzphere team for this contribution!

What do you think? Have you tried the E-Chart Zabbix module or Apache ECharts? Do you have other favorite ways to enhance your Zabbix dashboards? Let me know in the comments below!

Don’t forget to join the conversation in the Zabbix Italia Telegram channel – it’s a great community (if you didn’t know it existed, now you do!).

And of course, if you found this useful, please give the video a thumbs up and subscribe to Quadrata for more open-source and IT content.

Thanks for watching/reading, and I’ll see you next week!

– Dimitri Bellini

Read More
Vibe Coding with AI: Building an Automatic Zabbix Service Map

Vibe Coding with AI: Building an Automatic Zabbix Service Map

Good morning everyone! Dimitri Bellini here, back on the Quadrata channel, your spot for diving into the open-source world and IT topics I love – and hopefully, you do too!

If you enjoy these explorations, please give this video a thumbs up and subscribe if you haven’t already. Today, we’re venturing into the exciting intersection of coding, Artificial Intelligence, and Zabbix. I wanted to tackle a real-world challenge I faced: using AI to enhance Zabbix monitoring, specifically for event correlation.

Enter “Vibe Coding”: A Different Approach

Lately, there’s been a lot of buzz around “Vibe Coding.” What is it exactly? Honestly, it feels like a rather abstract concept! The general idea is to write code, perhaps in any language, by taking a more fluid, iterative approach. Instead of meticulous planning, designing flowcharts, and defining every function upfront, you start by explaining your goal to an AI and then, through a process of trial, error, and refinement (“kicks and punches,” as I jokingly put it), you arrive at a solution.

It’s an alternative path, potentially useful for those less familiar with specific languages, although I believe some foundational knowledge is still crucial to avoid ending up with a complete mess. It’s the method I embraced for the project I’m sharing today.

What You’ll Need for Vibe Coding:

  • Time and Patience: It’s an iterative process, sometimes frustrating!
  • Money: Accessing powerful AI models often involves API costs.
  • Tools: I used VS Code along with an AI coding assistant plugin. The transcript mentions Cline and Roo Code, two popular open-source VS Code extensions (Roo Code began as a fork of Cline) that integrate AI directly into the IDE.

The Zabbix Challenge: Understanding Service Dependencies

My initial goal was ambitious: leverage AI within Zabbix for better event correlation to pinpoint the root cause of problems faster. However, Zabbix presents some inherent challenges:

  • Standard Zabbix configurations (hosts, groups, tags) don’t automatically define the intricate dependencies *between* services running on different hosts.
  • Knowledge about these dependencies is often siloed within different teams in an organization, making manual mapping difficult and often incomplete.
  • Zabbix, by default, doesn’t auto-discover applications and their communication pathways across hosts.
  • Existing correlation methods (time-based, host groups, manually added tags) are often insufficient for complex scenarios.

Creating and maintaining a service map manually is incredibly time-consuming and struggles to keep up with dynamic environments. My objective became clear: find a way to automate service discovery and map the communications between them automatically.

My Goal: Smarter Event Correlation Through Auto-Discovery

Imagine a scenario with multiple Zabbix alerts. My ideal outcome was to automatically enrich these events with tags that reveal their relationships. For example, an alert on a CRM application could be automatically tagged as dependent on a specific database instance (DB Instance) because the system detected a database connection, or perhaps linked via an NFS share. This context is invaluable for root cause analysis.

My Vibe Coding Journey: Tools and Process

To build this, I leaned heavily on VS Code and the AI assistant plugin (Cline, per the transcript). The real power came from the Large Language Model (LLM) behind it.

AI Model Choice: Claude 3 Sonnet

While local, open-source models like Llama variants exist, I found they often lack the scale or require prohibitive resources for complex coding tasks. The most effective solution for me was using Claude 3 Sonnet via its API (provided by Anthropic). It performed exceptionally well, especially with the “tool use” features supported by the plugin, which seemed more effective than with other models I considered.

I accessed the API via OpenRouter, a handy service that acts as a broker for various AI models. This provides flexibility, allowing you to switch models without managing separate accounts and billing with each provider (like Anthropic, Google, OpenAI).

Lessons Learned: Checkpoints and Context Windows

  • Use Checkpoints! The plugin offers a “Checkpoint” feature. Vibe coding can lead you down wrong paths. Checkpoints let you revert your codebase. I learned the hard way that this feature relies on Git. I didn’t have it set up initially and had to restart significant portions of work. My advice: Enable Git and use checkpoints!
  • Mind the Context Window: When interacting with the AI, the entire conversation history (the context) is crucial. If the context window of the model is too small, it “forgets” earlier parts of the code or requirements, leading to errors and inconsistencies. Claude 3 Sonnet has a reasonably large context window, which was essential for this project’s success.

The Result: A Dynamic Service Map Application

After about three hours of work and roughly $10-20 in API costs (it might have been more due to some restarts!), I had a working proof-of-concept application. Here’s what it does:

  1. Connects to Zabbix: It fetches the list of hosts monitored by my Zabbix server.
  2. Discovers Services & Connections: For selected hosts, it retrieves information about running services and their network connections.
  3. Visualizes Dependencies: It generates a dynamic, interactive map showing the hosts and the communication links between them.
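Step 1 talks to the Zabbix API, which is plain JSON-RPC 2.0 over HTTP. A minimal payload builder for the hosts call might look like this (actually POSTing it to `/api_jsonrpc.php` with an API token is left out of the sketch):

```python
import json

def jsonrpc_request(method, params, req_id=1):
    """Build a Zabbix JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# Fetch the list of monitored hosts (step 1 of the app).
print(jsonrpc_request("host.get", {"output": ["hostid", "host"]}))
```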

The “Magic Trick”: Using Netstat

How did I achieve the automatic discovery? The core mechanism is surprisingly simple, albeit a bit brute-force. I configured a Zabbix item on all relevant hosts to run the command:

netstat -ltunpa

This command provides a wealth of information about listening ports (services) and established network connections, including the programs associated with them. I added some preprocessing steps (initially aiming for CSV, though the core data comes from netstat) to make the data easier for the application to parse.
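A rough sketch of the parsing side: pull out the protocol, local/remote address, state, and owning program from each netstat line (the sample output below is invented, but the column layout matches `netstat -ltunpa` on Linux):

```python
def parse_netstat(output):
    """Extract the interesting columns from `netstat -ltunpa` output."""
    entries = []
    for line in output.splitlines():
        parts = line.split()
        if not parts or parts[0] not in ("tcp", "tcp6", "udp", "udp6"):
            continue  # skip the header lines and unix sockets
        proto, local, remote = parts[0], parts[3], parts[4]
        # UDP has no State column, so the program lands one field earlier.
        state = parts[5] if proto.startswith("tcp") else "-"
        prog = parts[6] if proto.startswith("tcp") and len(parts) > 6 else parts[-1]
        entries.append({"proto": proto, "local": local,
                        "remote": remote, "state": state, "prog": prog})
    return entries

sample = (
    "Active Internet connections (servers and established)\n"
    "Proto Recv-Q Send-Q Local Address   Foreign Address  State       PID/Program name\n"
    "tcp        0      0 0.0.0.0:10050   0.0.0.0:*        LISTEN      812/zabbix_agentd\n"
    "tcp        0      0 10.0.0.5:22     10.0.0.9:51514   ESTABLISHED 990/sshd\n"
)
for e in parse_netstat(sample):
    print(e["proto"], e["local"], "->", e["remote"], e["state"], e["prog"])
```

LISTEN rows identify the services a host offers; ESTABLISHED rows give you the edges of the dependency map.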

Live Demo Insights

In the video, I demonstrated the application live. It correctly identified:

  • My Zabbix server host.
  • Another monitored host (Graph Host).
  • My own machine connecting via SSH to the Zabbix host (shown as an external IP since it’s not monitored by Zabbix).
  • Connections between the hosts, such as the Zabbix agent communication (port 10050) and web server access (port 80).
  • Clicking on hosts or connections reveals more details, like specific ports involved.

While the visual map is impressive (despite some minor graphical glitches that are typical of the rapid Vibe Coding process!), the truly valuable output is the underlying relationship data. This data is the key to achieving the original goal: enriching Zabbix events.

Next Steps and Your Thoughts?

This application is a proof-of-concept, demonstrating the feasibility of automatic service discovery using readily available data (like netstat output) collected by Zabbix. The “wow effect” of the map is nice, but the real potential lies in feeding this discovered dependency information back into Zabbix.

My next step, time permitting, is to tackle the event correlation phase – using these discovered relationships to automatically tag Zabbix problems, making root cause analysis much faster and more intuitive.

What do you think? I’d love to hear your thoughts, ideas, and suggestions in the comments below!

  • Have you tried Vibe Coding or similar AI-assisted development approaches?
  • Do you face similar challenges with service dependency mapping in Zabbix or other monitoring tools?
  • Are there specific use cases you’d like me to explore further?

Don’t forget to like this post if you found it interesting, share it with others who might benefit, and subscribe to the Quadrata YouTube channel for more content like this!

You can also join the discussion on the ZabbixItalia Telegram Channel.

Thanks for reading, have a great week, and I’ll see you in the next video!

Bye from Dimitri.

Read More
Exploring Zabbix 7.4 Beta 1: What’s New and What I’m Hoping For

Exploring Zabbix 7.4 Beta 1: What’s New and What I’m Hoping For

Good morning everyone! Dimitri Bellini here, back on the Quadrata channel – your spot for everything Open Source and the IT topics I find fascinating (and hopefully, you do too!). Thanks for tuning in each week. If you haven’t already, please consider subscribing and hitting that like button; it really helps the channel!

This week, we’re diving into something exciting: the latest Zabbix 7.4 Beta 1 release. This is a short-term support (STS) version, meaning it’s packed with new features that pave the way for the next Long-Term Support (LTS) release, expected later this year. With Beta 1 out and release candidates already tagged in the repositories, the official 7.4 release feels very close – likely within Q2 2025. So, let’s break down what’s new based on this first beta.

Key Features Introduced in Zabbix 7.4 Beta 1

While we don’t have that dream dashboard I keep showing (maybe one day, Zabbix team!), Beta 1 brings several practical and technical improvements.

Performance and Internals: History Cache Management

A significant technical improvement is the enhanced management of the history cache. Sometimes, items become disabled (manually or via discovery) but still occupy space in the cache, potentially causing issues. Zabbix 7.4 introduces:

  • Automatic Aging: Zabbix will now automatically clean up these inactive items from the cache.
  • Manual Aging Command: For environments with many frequently disabled objects, you can now manually trigger this cleanup at runtime using a command. This helps free up resources and maintain stability.
  • Cache Content Analysis: For troubleshooting, there are now better tools to analyze cache content and adjust log verbosity in real-time, which is invaluable in critical production environments.

UI and Widget Enhancements

  • Item History Widget Sorting: The Item History widget (introduced in 7.0) gets a much-needed update. When displaying logs, you can now choose to show the newest entries first, making log analysis much more intuitive than the old default (oldest first).
  • Test Item Value Copy Button: A small but incredibly useful UI tweak! When testing items, especially those returning large JSON payloads, you no longer need to manually select the text. There’s now a dedicated copy icon. Simple, but effective!
  • User Notification Management: Finally! Users can now manage their own notification media types (like email addresses) directly from their user settings via a dedicated menu. Previously, this required administrator intervention.

New Monitoring Capabilities

  • ICMP Ping `retry` Option: The `icmpping` item now supports a crucial `retry` parameter. This helps reduce noise and potential engine load caused by transient network issues. Instead of immediately flagging an object as unreachable/reachable and potentially triggering unnecessary actions or internal retries, you can configure it to try, say, 3 times before marking the item state as unreachable. This should lead to more stable availability monitoring.
  • New Timestamp Functions & Macros: We have new functions (like `item.history.first_clock`) that return timestamps of the oldest/newest values within an evaluation period. While the exact use case isn’t immediately obvious to me (perhaps related to upcoming event correlation or specific Windows monitoring scenarios?), having more tools for time-based analysis is interesting. Additionally, new built-in timestamp macros are available for use in notifications.

Major Map Enhancements

Maps receive some fantastic updates in 7.4, making them much more powerful and visually appealing:

  • Item Value Link Indicators: This is huge! Previously, link status (color/style) could only be tied to triggers. Now, you can base link appearance on:

    • Static: Just a simple visual link.
    • Trigger Status: The classic method.
    • Item Value: Define thresholds for numeric item values (e.g., bandwidth usage) or specific strings for text items (e.g., “on”/”off”) to change the link’s color and line style. This opens up possibilities for visualizing performance directly on the map without relying solely on triggers.

  • Auto-Hiding Labels: Tired of cluttered maps with overlapping labels? You can now set labels to hide by default and only appear when you hover over the element. This drastically improves readability for complex maps.
  • Scalable Background Images: Map background images will now scale proportionally to fit the map widget size, preventing awkward cropping or stretching.

One thing I’d still love to see, maybe before the final 7.4 release, is the ability to have multiple links between two map objects (e.g., representing aggregated network trunks).

New Templates and Integrations

Zabbix continues to expand its out-of-the-box monitoring:

  • Pure Storage FlashArray Template: Monitoring for this popular enterprise storage solution is now included.
  • Microsoft SQL for Azure Template: Enhanced cloud monitoring capabilities.
  • MySQL/Oracle Agent 2 Improvements: Simplifications for running custom queries directly via the Zabbix Agent 2 plugins.

What I’m Hoping For (Maybe 7.4, Maybe Later?)

Looking at the roadmap and based on some code movements I’ve seen, here are a couple of features I’m particularly excited about and hope to see soon, possibly even in 7.4:

  • Nested Low-Level Discovery (LLD): This would be a game-changer for dynamic environments. Imagine discovering databases, and then, as a sub-task, discovering the tables within each database using discovery prototypes derived from the parent discovery. This structured approach would simplify complex auto-discovery scenarios (databases, Kubernetes, cloud resources). I have a strong feeling this might make it into 7.4.
  • Event Correlation: My big dream feature! The ability to intelligently link related events, identifying the root cause (e.g., a failed switch) and suppressing the symptoms (all the hosts behind it becoming unreachable). This would significantly reduce alert noise and help focus on the real problem. It’s listed on the roadmap, but whether it lands in 7.4 remains to be seen.
  • Alternative Backend Storage: Also on the roadmap is exploring alternative backend solutions beyond traditional SQL databases (like potentially TimescaleDB alternatives, though not explicitly named). This is crucial groundwork for Zabbix 8.0 and beyond, especially for handling the massive data volumes associated with full observability (metrics, logs, traces).
  • New Host Wizard: A guided wizard for adding new hosts is also in development, which should improve the user experience.

Wrapping Up

Zabbix 7.4 is shaping up to be another solid release, bringing valuable improvements to maps, performance, usability, and monitoring capabilities. The map enhancements based on item values and the history cache improvements are particularly noteworthy from this Beta 1.

I’ll definitely keep you updated as we get closer to the final release and if features like Nested LLD or Event Correlation make the cut!

What do you think? Are these features useful for you? What are you hoping to see in Zabbix 7.4 or the upcoming Zabbix 8.0? Let me know in the comments below – I’m always curious to hear your thoughts and often pass feedback along (yes, I’m known for being persistent with the Zabbix team, just ask Alexei Vladishev!).

Don’t forget to check out the Quadrata YouTube channel for more content like this.

And if you’re not already there, join the conversation in the ZabbixItalia Telegram Channel – it’s a great place to connect with other Italian Zabbix users.

That’s all for today. Thanks for reading, and I’ll catch you in the next one!

– Dimitri Bellini

Read More
Unlock Powerful Visualizations: Exploring Zabbix 7.0 Dashboards

Unlock Powerful Visualizations: Exploring Zabbix 7.0 Dashboards

Good morning everyone! Dimitri Bellini here, back with another episode on Quadrata, my channel dedicated to the world of open source and IT. Today, we’re diving back into our favorite monitoring tool, Zabbix, but focusing on an area we haven’t explored much recently: **data visualization and dashboarding**, especially with the exciting improvements in Zabbix 7.0.

For a long time, many of us (myself included!) might have leaned towards Grafana for sophisticated dashboards, and rightly so – it set a high standard. However, Zabbix has been working hard, taking inspiration from the best, and Zabbix 7.0 introduces some fantastic new widgets and capabilities that significantly boost its native dashboarding power, pushing towards better observability of our collected metrics.

Why Zabbix Dashboards Now Deserve Your Attention

Zabbix 7.0 marks a significant step forward in visualization. The web interface’s dashboard section has received substantial upgrades, introducing new widgets that make creating informative and visually appealing dashboards easier than ever. Forget needing a separate tool for basic visualization; Zabbix now offers compelling options right out of the box.

Some of the key additions in 7.0 include:

  • Gauge Widgets: For clear, immediate visualization of single metrics against thresholds.
  • Pie Chart / Donut Widgets: Classic ways to represent proportions.
  • Honeycomb Widget: Excellent for compactly displaying the status of many items (like host availability).
  • Host Navigator & Item Navigator: Powerful tools for creating dynamic, interactive dashboards where you can drill down into specific hosts and their metrics.
  • Item Value Widget: Displays a single metric’s value with trend indication.

Building a Dynamic Dashboard in Zabbix 7.0: A Walkthrough

In the video, I demonstrated how to leverage some of these new features. Let’s recap the steps to build a more dynamic and insightful dashboard:

Step 1: Creating Your Dashboard

It all starts in the Zabbix interface under Dashboards -> All dashboards. Click “Create dashboard”, give it a meaningful name (I used “test per video”), and you’ll be presented with an empty canvas, ready for your first widget.

Step 2: Adding the Powerful Graph Widget

The standard graph widget, while not brand new, has become incredibly flexible.

  • Host/Item Selection: You can use wildcards (*) for both hosts (e.g., Linux server*) and items (e.g., Bits received*) to aggregate data from multiple sources onto a single graph.
  • Aggregation: Easily aggregate data over time intervals (e.g., show the sum or average traffic every 3 minutes).
  • Stacking: Use the “Stacked” option combined with aggregation to visualize total resource usage (like total bandwidth across multiple servers).
  • Multiple Datasets: Add multiple datasets (like ‘Bits received’ and ‘Bits sent’) to the same graph for comprehensive views.
  • Customization: Control line thickness, fill transparency, handling of missing data, axis limits (e.g., setting a max bandwidth), legend display, and even overlay trigger information or working hours.

This allows for creating dense, informative graphs showing trends across groups of systems or interfaces.

Step 3: Introducing Interactivity with Navigators

This is where Zabbix 7.0 dashboards get really dynamic!

Host Navigator Setup

Add the “Host Navigator” widget. Configure it to target a specific host group (e.g., Linux Servers). You can further filter by host status (enabled/disabled), maintenance status, or tags. This widget provides a clickable list of hosts.

Item Navigator Setup

Next, add the “Item Navigator” widget. The key here is to link it to the Host Navigator:

  • In the “Host” selection, choose “From widget” and select your Host Navigator widget.
  • Specify the host group again.
  • Use “Item tags” to filter the list of items shown (e.g., show only items with the tag component having the value network).
  • Use “Group by” (e.g., group by the component tag) to organize the items logically within the navigator. (Note: in the video, the UI appeared to label tag-value filtering as tag name, something to keep an eye on.)

Now, clicking a host in the Host Navigator filters the items shown in the Item Navigator – the first step towards interactive drill-down!

Step 4: Visualizing Single Metrics (Gauge & Item Value)

With the navigators set up, we can add widgets that react to our selections:

Gauge Widget

Add a “Gauge” widget. Configure its “Item” setting to inherit “From widget” -> “Item Navigator”. Now, when you select an item in the Item Navigator (after selecting a host), this gauge will automatically display that metric’s latest value. Customize it with:

  • Min/Max values and units (e.g., %, BPS).
  • Thresholds (defining ranges for Green, Yellow, Red) for instant visual feedback.
  • Appearance options (angles, decimals).

Item Value Widget

Similarly, add an “Item Value” widget, also inheriting its item from the “Item Navigator”. This provides a simple text display of the value, often with a trend indicator (up/down arrow). You can customize:

  • Font size and units.
  • Whether to show the timestamp.
  • Thresholds that can change the background color of the widget for high visibility.

Step 5: Monitoring Multiple Hosts with Honeycomb

The “Honeycomb” widget is fantastic for a compact overview of many similar items across multiple hosts.

  • Configure it to target a host group (e.g., Linux Servers).
  • Select a specific item pattern relevant to status (e.g., agent.ping).
  • Set thresholds (e.g., Red if value is 0, Green if value is 1) to color-code each host’s icon based on the item value.

This gives you an immediate “at-a-glance” view of the health or status of all hosts in the group regarding that specific metric. The icons automatically resize to fit the widget space.

Putting It All Together

By combining these widgets – graphs for trends, navigators for interactivity, and gauges/item values/honeycombs for specific states – you can build truly powerful and informative dashboards directly within Zabbix 7.0. The ability to dynamically filter and drill down without leaving the dashboard is a massive improvement.

Join the Conversation!

So, that’s a first look at the enhanced dashboarding capabilities in Zabbix 7.0. There’s definitely a lot to explore, and these new tools significantly improve how we can visualize our monitoring data.

What do you think? Have you tried the new Zabbix 7.0 dashboards? Are there specific widgets or features you’d like me to cover in more detail? Let me know in the comments below!

If you found this useful, please give the video a thumbs up and consider subscribing to the Quadrata YouTube channel for more content on open source and IT.

And don’t forget to join the conversation in the ZabbixItalia Telegram Channel – it’s a great place to ask questions and share knowledge with fellow Zabbix users.

Thanks for reading, and I’ll see you in the next one!

– Dimitri Bellini

Read More
Zabbix 7.0 Synthetic Monitoring: A Game Changer for Web Performance Testing

Zabbix 7.0 Synthetic Monitoring: A Game Changer for Web Performance Testing

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata! This is my channel dedicated to the open-source world and the IT topics I’m passionate about – and hopefully, you are too.

In today’s episode, we’re diving back into our friend Zabbix, specifically version 7.0, to explore a feature that I genuinely believe is a game-changer in many contexts: Synthetic Monitoring. This powerful capability allows us to simulate and test complex user interaction scenarios on our websites, corporate pages, and web applications. Ready to see how it works? Let’s jump in!

What Exactly is Synthetic Monitoring in Zabbix 7.0?

As I briefly mentioned, synthetic monitoring is a method used to simulate how a real user interacts with a web page or application. This isn’t just about checking if a page loads; it’s about mimicking multi-step journeys – like logging in, navigating through menus, filling out forms, or completing a checkout process.

While this concept might seem straightforward, having it seamlessly integrated into a monitoring solution like Zabbix is incredibly valuable and not always a given in other tools.

The Key Ingredients: Zabbix & Selenium

To make this magic happen, we need a couple of key components:

  • Zabbix Server or Proxy (Version 7.0+): This is our central monitoring hub.
  • Selenium: This is the engine that drives the browser simulation. I strongly recommend running Selenium within a Docker container, ideally on a machine separate from your Zabbix server for better resource management.

Selenium is a well-established framework (known for decades!) that allows us to automate browsers. One of its strengths is the ability to test using different browsers like Chrome, Edge, Firefox, and even Safari, ensuring your site works consistently across platforms. Zabbix interacts with Selenium via the WebDriver API, which essentially acts as a remote control for the browser, allowing us to send commands without writing complex, browser-specific code.

For our purposes, we’ll focus on the simpler Selenium Standalone setup, specifically using the Chrome browser container, as Zabbix currently has the most robust support for it.

How the Architecture Works

The setup is quite logical:

  1. Your Zabbix Server (or Zabbix Proxy) needs to know where the Selenium WebDriver is running.
  2. Zabbix communicates with the Selenium container (typically on port 4444) using the WebDriver protocol.
  3. Selenium receives instructions from Zabbix, executes them in a real browser instance (running inside the container), and sends the results back.

If you need to scale your synthetic monitoring checks, using Zabbix Proxies is an excellent approach. You can dedicate specific proxies to handle checks for different environments.

The Selenium Docker container also provides useful endpoints:

  • Port 4444: Besides being the WebDriver endpoint, it often hosts a web UI to view the status and current sessions.
  • Port 7900 (often): Provides a VNC/web interface to visually watch the browser automation in real-time – fantastic for debugging!
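
If you want to sanity-check port 4444 from a script, the /status endpoint returns a small JSON document (its shape comes from the W3C WebDriver specification). Here is a sketch of parsing it; the response is canned inline since the point is the JSON shape, not the HTTP call:

```javascript
// Canned response in the shape the WebDriver /status endpoint returns
// (in practice you would GET http://<selenium-host>:4444/status).
var status = JSON.parse('{"value":{"ready":true,"message":"Selenium Grid ready."}}');

if (status.value.ready) {
  console.log('Selenium is ready to accept sessions');
} else {
  console.log('Selenium is NOT ready: ' + status.value.message);
}
```

When `value.ready` is false (e.g., all browser slots are busy), new checks will queue or fail, so this is a handy thing to watch.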

Setting Up Your Environment

Getting started involves a couple of configuration steps:

1. Zabbix Configuration

You’ll need to edit your zabbix_server.conf or zabbix_proxy.conf file and set these parameters:

  • WebDriverURL=http://<selenium-host>:4444/wd/hub (replace <selenium-host> with the actual IP/DNS of your Selenium host)
  • StartBrowserPollers=1 (Start with 1 and increase based on workload)

Remember to restart your Zabbix server or proxy after making these changes.

2. Installing the Selenium Docker Container

Running the Selenium Standalone Chrome container is straightforward using Docker. Here’s a typical command:

docker run -d -p 4444:4444 -p 7900:7900 --shm-size="2g" --name selenium-chrome selenium/standalone-chrome:latest

  • -d: Run in detached mode.
  • -p 4444:4444: Map the WebDriver port.
  • -p 7900:7900: Map the VNC/debug view port.
  • --shm-size="2g": Allocate shared memory (important for browser stability, especially Chrome).
  • --name selenium-chrome: Give the container a recognizable name.
  • selenium/standalone-chrome:latest: The Docker image to use. You can specify older versions if needed.

Crafting Your Monitoring Scripts with JavaScript

The heart of synthetic monitoring in Zabbix lies in JavaScript. Zabbix utilizes its familiar JavaScript engine, now enhanced with a new built-in object: browser.

This browser object provides methods to interact with the web page via Selenium:

  • browser.Navigate('https://your-target-url.com'): Opens a specific URL.
  • browser.FindElement(by, target): Locates an element on the page. The by parameter can be methods like browser.By.linkText('Click Me'), browser.By.tagName('button'), browser.By.xpath('//div[@id="login"]'), browser.By.cssSelector('.submit-button').
  • element.Click(): Clicks on a previously found element.
  • browser.CollectPerfEntries(): Gathers performance metrics and, crucially, takes a screenshot of the current page state.
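
Putting those methods together, a minimal scenario might look like the sketch below. The real browser object is supplied by the Zabbix engine; here a tiny stub stands in for it so the flow can be read (and run) on its own, and the URL and link text are made up for illustration:

```javascript
// A tiny stub standing in for the Zabbix-provided 'browser' object,
// so the scenario below can be followed outside the Zabbix engine.
var browser = {
  url: null,
  By: { linkText: function (t) { return { by: 'linkText', target: t }; } },
  Navigate: function (url) { this.url = url; },
  FindElement: function (selector) { return { Click: function () {} }; },
  CollectPerfEntries: function () { return { url: this.url, steps: [] }; }
};

// The scenario itself: open a page, click a link, collect metrics.
browser.Navigate('https://example.com');
var docsLink = browser.FindElement(browser.By.linkText('Docs'));
docsLink.Click();
var result = browser.CollectPerfEntries();
console.log(JSON.stringify(result)); // → {"url":"https://example.com","steps":[]}
```

Inside Zabbix, the same sequence runs against a real browser session, and CollectPerfEntries() returns the full performance payload described below.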

The output of these scripts is a JSON object containing performance data (response times, status codes, page weight) and the screenshot encoded in Base64.

Testing Your Scripts

You can test your JavaScript code before deploying it in Zabbix:

  • Zabbix Web Interface: The item configuration page has a “Test” button.
  • zabbix_js Command-Line Tool: Useful for quick checks and debugging.

    zabbix_js --webdriver http://<selenium-host>:4444/wd/hub -i your_script.js -p 'dummy_input' -t 60

    (Remember to provide an input parameter -p even if your script doesn’t use it, and set a reasonable timeout -t. Piping the output to jq (| jq .) makes the JSON readable.)

Visualizing the Results in Zabbix

Once your main “Browser” item is collecting data (including the Base64 screenshot), you can extract specific pieces of information using:

  • Dependent Items: Create items that depend on your main Browser item.
  • Preprocessing Steps: Use JSONPath preprocessing rules within the dependent items to pull out specific metrics (e.g., $.steps[0].responseTime).
  • Binary Item Type: Zabbix 7.0 introduces a binary item type specifically designed to handle data like Base64 encoded images.
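
To illustrate what a JSONPath step like $.steps[0].responseTime pulls out, here is the equivalent lookup in plain JavaScript against a made-up payload (the field names are assumptions for illustration, not the exact schema Zabbix emits):

```javascript
// Hypothetical Browser item output; field names are illustrative only.
var payload = {
  steps: [
    { name: 'open login page', responseTime: 0.42, statusCode: 200 },
    { name: 'submit form',     responseTime: 0.87, statusCode: 200 }
  ],
  screenshot: Buffer.from('fake-image-bytes').toString('base64')
};

// Equivalent of a JSONPath preprocessing rule: $.steps[0].responseTime
var firstStepTime = payload.steps[0].responseTime;
console.log(firstStepTime); // → 0.42

// The Base64 screenshot decodes back to binary (here, our fake bytes).
var image = Buffer.from(payload.screenshot, 'base64');
console.log(image.toString()); // → fake-image-bytes
```

In Zabbix you never decode the screenshot by hand, of course; the binary item type and the Item History widget handle that for you.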

This allows you to not only graph performance metrics but also display the actual screenshots captured during the check directly in your Zabbix dashboards using the new Item History widget. Seeing a visual snapshot of the page, especially when an error occurs, is incredibly powerful for troubleshooting!

Looking Ahead

Synthetic monitoring in Zabbix 7.0 is a fantastic addition, opening up new possibilities for ensuring application availability and performance from an end-user perspective.

Here at Quadrata, we’re actively exploring ways to make creating these JavaScript scenarios even easier for everyone. We might even have something to share at the Zabbix Summit this year, so stay tuned!

I hope this overview gives you a solid start with Zabbix 7’s synthetic monitoring. It’s a feature with immense potential.

What do you think? Have you tried it yet, or do you have specific use cases in mind? Let me know in the comments below!

If you found this video helpful, please give it a thumbs up and consider subscribing to the Quadrata channel for more open-source and IT content.

Don’t forget to join our Italian Zabbix community on Telegram: ZabbixItalia!

Thanks for watching, and see you next week. Bye everyone!

Read More
Visualizing Your Infrastructure: A Deep Dive into Zabbix Maps

Visualizing Your Infrastructure: A Deep Dive into Zabbix Maps

Good morning everyone, Dimitri Bellini here, and welcome back to Quadrata – your go-to channel for open source and IT solutions! Today, I want to dive into a feature of our good friend Zabbix that we haven’t explored much yet: Zabbix Maps.

Honestly, I was recently working on some maps, and while it might not always be the most glamorous part of Zabbix, it sparked an idea: why not share what these maps are truly capable of, and perhaps more importantly, what they aren’t?

What Zabbix Maps REALLY Are: Your Digital Synoptic Panel

Think of Zabbix Maps as the modern, digital equivalent of those old-school synoptic panels with blinking lights. They provide a powerful graphical way to represent your infrastructure and its status directly within Zabbix. Here’s what you can achieve:

  • Real-time Host Status: Instantly see the overall health of your hosts based on whether they have active problems.
  • Real-time Event Representation: Visualize specific problems (triggers) directly on the map. Imagine a specific light turning red only when a critical service fails.
  • Real-time Item Metrics: Display actual data values (like temperature, traffic throughput, user counts) directly on your map, making data much more intuitive and visually appealing.

The core idea is to create a custom graphical overview tailored to your specific infrastructure, giving you an immediate understanding of what’s happening at a glance.

Clearing Up Misconceptions: What Zabbix Maps Are NOT

It’s crucial to understand the limitations to use maps effectively. Often, people hope Zabbix Maps will automatically generate network topology diagrams.

  • They are NOT Automatic Network Topology Maps: While you *could* manually build something resembling a network diagram, Zabbix doesn’t automatically discover devices and map their connections (who’s plugged into which switch port, etc.). Tools that attempt this often rely on protocols like Cisco’s CDP or the standard LLDP (both usually SNMP-based), which aren’t universally available across all devices. Furthermore, in large environments (think thousands of hosts and hundreds of switches), automatically generated topology maps quickly become an unreadable mess of tiny icons and overlapping lines. They might look cool initially but offer little practical value day-to-day.
  • They are NOT Application Performance Monitoring (APM) Relationship Maps (Yet!): Zabbix Maps don’t currently visualize the intricate relationships and data flows between different application components in the way dedicated APM tools do. While Zabbix is heading towards APM capabilities, the current map function isn’t designed for that specific purpose.

For the nitty-gritty details, I always recommend checking the official Zabbix documentation – it’s an invaluable resource.

Building Blocks of a Zabbix Map

When constructing your map, you have several element types at your disposal:

  • Host: Represents a monitored device. Its appearance can change based on problem severity.
  • Trigger: Represents a specific problem condition. You can link an icon’s appearance directly to a trigger’s state.
  • Map: Allows you to create nested maps. The icon for a sub-map can reflect the most severe status of the elements within it – great for drilling down!
  • Image: Use custom background images or icons to make your map visually informative and appealing.
  • Host Group: Automatically display all hosts belonging to a specific group within a defined area on the map.
  • Shape: Geometric shapes (rectangles, ellipses) that can be used for layout, grouping, or, importantly, displaying text and real-time data.
  • Link: Lines connecting elements. These can change color or style based on a trigger’s status, often used to represent connectivity or dependencies.

Zabbix also provides visual cues like highlighting elements with problems or showing small triangles to indicate a recent status change, helping you focus on what needs attention.

Bringing Maps to Life with Real-Time Data

One of the most powerful features is embedding live data directly onto your map. Instead of just seeing if a server is “up” or “down,” you can see its current CPU load, network traffic, or application-specific metrics.

This is typically done using Shapes and a specific syntax within the shape’s label. In Zabbix 6.x and later, the syntax looks something like this:

{?last(/Your Host Name/your.item.key)}

This tells Zabbix to display the last received value for the item your.item.key on the host named Your Host Name. You can add descriptive text around it, like:

CPU Load: {?last(/MyWebServer/system.cpu.load[,avg1])}

Zabbix is smart enough to often apply the correct unit (like Bps, %, °C) automatically if it’s defined in the item configuration.

Let’s Build a Simple Map (Quick Guide)

Here’s a condensed walkthrough based on what I demonstrated in the video (using Zabbix 6.4):

  1. Navigate to Maps: Go to Monitoring -> Maps.
  2. Create New Map: Click “Create map”. Give it a name (e.g., “YouTube Test”), set dimensions, and optionally choose a background image.

    • Tip: You can upload custom icons and background images under Administration -> General -> Images. I uploaded custom red/green icons and a background for the demo.

  3. Configure Map Properties: Decide on options like “Icon highlighting” (the colored border around problematic hosts) and “Mark elements on trigger status change” (the triangles for recent changes). You can also filter problems by severity or hide labels if needed. Click “Add”.
  4. Enter Constructor Mode: Open your newly created map and click “Constructor”.
  5. Add a Trigger-Based Icon:

    • Click “Add element” (defaults to a server icon).
    • Click the new element. Change “Type” to “Trigger”.
    • Under “Icons”, select your custom “green” icon for the “Default” state and your “red” icon for the “Problem” state.
    • Click “Add” next to “Triggers” and select the specific trigger you want this icon to react to.
    • Click “Apply”. Position the icon on your map.

  6. Add Real-Time Data Display:

    • Click “Add element” and select “Shape” (e.g., Rectangle).
    • Click the new shape. In the “Label” field, enter your data syntax, e.g., Temp: {?last(/quadrata-test-host/test.item)} (replace with your actual host and item key).
    • Customize font size, remove the border (set Border width to 0), etc.
    • Click “Apply”. Position the shape.
    • Important: In the constructor toolbar, toggle “Expand macros” ON to see the live data instead of the syntax string.

  7. Refine and Save: Adjust element positions (you might want to turn off “Snap to grid” for finer control). Remove default labels if they clutter the view (Map Properties -> Map element label type -> Nothing). Click “Update” to save your changes.

Testing with `zabbix_sender`

A fantastic tool for testing maps (especially with trapper items) is the zabbix_sender command-line utility. It lets you manually push data to Zabbix items.

Install the `zabbix-sender` package if you don’t have it. The basic syntax is:

zabbix_sender -z <zabbix-server> -s <host-name> -k <item-key> -o <value>

For example:

zabbix_sender -z 192.168.1.100 -s quadrata-test-host -k test.item -o 25

Sending a value that crosses a trigger threshold will change your trigger-linked icon on the map. Sending a different value will update the real-time data display.

Wrapping Up

So, there you have it – a look into Zabbix Maps. They aren’t magic topology generators, but they are incredibly flexible and powerful tools for creating meaningful, real-time visual dashboards of your infrastructure’s health and performance. By combining different elements, custom icons, backgrounds, and live data, you can build truly informative synoptic views.

Don’t be afraid to experiment! Start simple and gradually add complexity as you get comfortable.

What are your thoughts on Zabbix Maps? Have you created any cool visualizations? Share your experiences or ask questions in the comments below!

If you found this helpful, please give the video a thumbs up, share it, and subscribe to Quadrata for more content on Zabbix and open source solutions.

Also, feel free to join the conversation in the Zabbix Italia Telegram channel – it’s a great community!

Thanks for reading, and I’ll see you in the next post!

– Dimitri Bellini

Read More