
OpenObserve: A High-Performance Modern Observability Platform

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the open-source world and the IT I love. As you know, I’m a big fan of my friend Zabbix, but it’s crucial to keep our eyes on the horizon, understand where the world is moving, and explore new solutions that meet the demands of our customers and the community.

That’s why today, I want to introduce you to a solution I’ve had the pleasure of getting to know: OpenObserve. It’s another powerful tool in the observability space, but it approaches the task in a refreshingly different way.

What is OpenObserve and Why Should You Care?

OpenObserve is a cloud-native, open-source observability platform designed to be a unified backend for your logs, metrics, and traces. Think of it as a lightweight yet powerful alternative to heavyweights like Elasticsearch, Splunk, or Datadog. It tackles a key challenge many of us face: consolidating different monitoring tools into a single, cohesive platform.

Instead of juggling separate tools like Prometheus for metrics, Loki for logs, and Jaeger for traces, OpenObserve brings everything under one roof. This unified approach simplifies your workflow and provides a single pane of glass to view the health of your entire infrastructure.

The Game-Changing Features

What really caught my attention are the core functionalities that make OpenObserve stand out:

  • Massive Cost Reduction: This is a big one. By using a specific format called Parquet and a stateless architecture that leverages object storage (like S3, MinIO, or even a local disk), OpenObserve can drastically reduce storage costs. They claim it can be up to 140 times lower than Elasticsearch! For anyone managing hundreds of gigabytes of data per day, this is a revolutionary benefit.
  • Blazing-Fast Performance: The entire engine is written in Rust. We’ve heard a lot about Rust, especially in the Linux kernel world, and for good reason. It’s an incredibly optimized and efficient language. This means OpenObserve can ingest a massive amount of data with a significantly lower memory and CPU footprint compared to Java-based solutions.
  • Simplified Querying: If you’re comfortable with SQL, you’ll feel right at home. OpenObserve allows you to query your logs using standard SQL-based syntax, which dramatically lowers the learning curve. For metrics, it also supports PromQL, giving you the best of both worlds.
  • Native OpenTelemetry Support: It seamlessly integrates with OpenTelemetry, the emerging standard for collecting traces and metrics. This makes it incredibly easy to instrument your applications, whether they’re written in Go, Python, or another language, and start sending data to OpenObserve.
  • Real-time Alerting: Right from the UI, you can define alerts based on log patterns or metric thresholds, similar to what you might do in Prometheus.
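
To make the "unified backend" idea concrete, here is a minimal sketch of pushing a JSON log line into OpenObserve over plain HTTP. It assumes OpenObserve is running locally on port 5080 with a root user you configured at install time; the organization (`default`), stream name (`app_logs`), and credentials are placeholders for your own values.

```shell
# Push one JSON log record into a stream via the HTTP ingestion API.
# Org, stream, and credentials below are placeholders -- adjust them.
O2_URL="http://localhost:5080/api/default/app_logs/_json"
PAYLOAD='[{"level":"info","message":"user login ok","service":"auth"}]'

curl -s -u "admin@example.com:Complexpass#123" \
     -H "Content-Type: application/json" \
     -d "$PAYLOAD" \
     "$O2_URL" \
  || echo "ingest skipped (no local OpenObserve running)"
```

Once a few records are in, the stream shows up immediately in the log explorer, ready for SQL queries.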

Under the Hood: The Technology Stack

I always believe it’s fundamental to understand the components of a solution to appreciate its engineering. OpenObserve is built on a stack of impressive open-source technologies:

  • Rust: The core language, providing memory safety and high performance.
  • Apache Arrow DataFusion: A powerful query engine that enables the SQL support on top of Parquet files.
  • Apache Parquet: A columnar storage format developed by the Apache Foundation that allows for incredible data compression and efficient querying.
  • NATS: A lightweight and high-performance messaging system used for communication and coordination between nodes in a clustered setup.
  • Vue.js: The framework used to build the modern and reactive web interface.
  • SQLite / PostgreSQL: SQLite is used for metadata in simple, standalone deployments, while PostgreSQL is recommended for robust, high-availability production environments.

Getting Started with OpenObserve

One of the best parts is how easy it is to get started. For testing and simple use cases, you just need Docker. The architecture is straightforward: collectors like FluentBit, Vector, or OpenTelemetry send data to your OpenObserve container, which writes to a local disk. This simple setup can already handle an impressive ingestion rate of over 2TB of data per day on a single machine.

For high-availability (HA) production environments, the architecture scales out using Kubernetes, with distinct roles for routers, ingesters, queriers, and more, all coordinated by NATS and backed by object storage.

A Quick Tutorial: Installation with Docker

You can get a test environment running in minutes. It’s as simple as running a single Docker command. Here is the command I used, which you can customize with your own user and password:


docker run -d --name openobserve \
-p 5080:5080 \
-e ZO_ROOT_USER_EMAIL="admin@example.com" \
-e ZO_ROOT_USER_PASSWORD="Complexpass#123" \
-v /opt/openobserve-data:/data \
public.ecr.aws/zinclabs/openobserve:latest

I manage my containers with a tool that simplifies deployment, where I just fill in the image, ports, environment variables, and volume. It’s incredibly straightforward!

A Look at the Dashboard and Final Thoughts

Once you log in, you’re greeted with a clean dashboard showing key stats like ingested events and storage size. The “Data Sources” section is fantastic, providing you with ready-to-use instructions for ingesting data from Kubernetes, Linux, Windows, various databases, and more. This makes the initial setup a breeze.

The log exploration interface will feel familiar to anyone who has used Splunk, with powerful SQL-based querying and on-the-fly filtering. You can visualize metrics, build custom dashboards, analyze application traces with service maps, and even dive into real user monitoring.
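
The same SQL you type in the UI can also be sent over the HTTP search API. The sketch below is my reading of the endpoint shape, so verify it against your OpenObserve version; the time range fields are epoch microseconds, left at 0 here purely as placeholders.

```shell
# Hedged sketch: run a SQL aggregation over a log stream via the search
# API. Stream name ("app_logs") and credentials are placeholders.
SQL='SELECT level, COUNT(*) AS hits FROM app_logs GROUP BY level'
BODY="{\"query\":{\"sql\":\"$SQL\",\"start_time\":0,\"end_time\":0}}"

curl -s -u "admin@example.com:Complexpass#123" \
     -H "Content-Type: application/json" \
     -d "$BODY" \
     "http://localhost:5080/api/default/_search" \
  || echo "search skipped (no local OpenObserve running)"
```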

What truly impressed me, however, is their licensing model. For self-hosted deployments, you can use the full enterprise version for free for up to 200GB of data ingestion per day. This includes features like single sign-on (SSO) and role-based access control (RBAC). This is a brilliant move that allows smaller teams and environments to leverage the full power of the platform without a cost barrier. A big round of applause to the OpenObserve team for that!

Conclusion: Keep a Close Eye on This One

So, is OpenObserve an interesting solution? Absolutely. It’s a project to watch closely. It takes a smart approach: a lightweight solution, nothing elephantine about it, built with exciting technologies like Rust and Parquet. It has a finesse that sets it apart from the many other open-source observability tools out there.

I encourage you to take a look at it. The project is moving fast, and it offers a compelling combination of performance, cost-efficiency, and user-friendliness.

That’s all for today! Let me know your thoughts in the comments below. Do you find these all-in-one observability solutions useful? I’d love to hear from you.

A greeting from Dimitri, see you next week!


Don’t forget to like this video and subscribe to my channel for more open-source content:

My YouTube Channel: Quadrata

Join the conversation on Telegram: Zabbix Italia

Wren AI: Can You Really Talk to Your Database? An Open-Source Deep Dive

Good morning everyone, Dimitri Bellini here, back on Quadrata, my channel dedicated to the world of open source and IT. This week, we’re diving back into artificial intelligence, but with a practical twist that could change the game for many, especially in the world of business analytics.

I stumbled upon a fascinating open-source solution called Wren AI. Its promise is simple yet powerful: to let you explore complex databases and extract insights using plain, natural language. No more wrestling with intricate SQL queries just to get a simple answer. Intrigued? Let’s take a look at what it can do.

What is Wren AI? The Dawn of Generative Business Intelligence

At its core, Wren AI is an open-source tool for what’s being called Generative Business Intelligence (GenBI). Imagine asking your database, “What was our total revenue by product category last year?” and getting not just a table of numbers, but also the SQL query that generated it and a ready-to-use chart. That’s the magic of Wren AI. It acts as a translator between your human questions and the structured language of your database.

The goal is to empower users who aren’t necessarily SQL wizards—like business analysts or managers—to combine data from various sources and get sensible answers. Anyone who has ever tried to navigate the complex relationships in a database like Zabbix knows that finding the right connections between tables is far from trivial.

Key Features at a Glance

  • Natural Language to SQL & Charts: The main event. Ask questions in English (or other languages) and get back precise SQL queries and visualizations.
  • Broad Database Support: It connects to a wide range of data sources, including Postgres, MySQL, Microsoft SQL Server, CSV files, and more.
  • AI-Powered Insights: It uses Large Language Models (LLMs) to understand your request, analyze the database schema, and generate answers.
  • Semantic Layer: An intelligent layer that analyzes your database schema and relationships, ensuring the LLM has the correct context to generate accurate queries.

My Setup: Going Local with Ollama

To get started, you just need Docker. For the AI brain, you can connect to a cloud service like OpenAI or Gemini, but to complicate things (and for the fun of it!), I decided to run everything locally. I used Ollama to host a powerful inference engine right within my own infrastructure, running the Qwen3 32-billion parameter model. While it’s not the fastest setup, it keeps all my data in-house and proves the concept works without relying on external APIs.

The installation involves running a script they provide, but since I was using Ollama, I had to do a bit of manual configuration. This involved downloading a specific configuration file for Ollama, customizing it, and setting up some environment variables before launching the installer. Once it’s up, it leaves you with a standard Docker Compose file, so managing the stack of containers becomes straightforward.

Wren AI in Action: From Question to Dashboard

Once installed, the first step is connecting Wren AI to your data source. After providing the credentials for my Postgres test database (a sample database of DVD sales), Wren AI immediately got to work.

1. Schema Discovery

The tool automatically scanned the database, identified all the tables, and even mapped out the relationships between them. This visual representation of the schema is the foundation for everything that follows. You can even add relationships manually if needed before deploying the model.

2. Asking Questions

This is where the fun begins. I started with a simple business question:

“What is the revenue generated by films per category in 2022?”

My local Ollama instance kicked into gear (I could see my GPU usage spike to over 80%!), and after a short wait, Wren AI returned a complete answer. It didn’t just give me a number; it provided a full breakdown by category.

3. Visualizing the Data

The real power lies in the output tabs:

  • Answer: A clear, text-based summary of the findings.
  • View SQL: The exact SQL query it generated. This is a fantastic learning tool and a great starting point for further optimization.
  • Chart: An automatically generated bar chart visualizing the revenues per category. This chart can be pinned to a dashboard directly within Wren AI or exported as an SVG or PNG file—perfect for dropping into a presentation or report.
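
For context, this is not the exact SQL Wren AI produced for me, but a hand-written equivalent showing the kind of join chain it has to discover on its own. Table and column names follow the common "dvdrental" sample schema; the connection string is a placeholder, and the query is runnable with `psql` if you have that database loaded.

```shell
# Hypothetical equivalent of the generated query for "revenue per film
# category in 2022" on the dvdrental sample database.
QUERY=$(cat <<'SQL'
SELECT c.name AS category, SUM(p.amount) AS revenue
FROM payment p
JOIN rental r         ON p.rental_id    = r.rental_id
JOIN inventory i      ON r.inventory_id = i.inventory_id
JOIN film_category fc ON i.film_id      = fc.film_id
JOIN category c       ON fc.category_id = c.category_id
WHERE EXTRACT(YEAR FROM p.payment_date) = 2022
GROUP BY c.name
ORDER BY revenue DESC;
SQL
)
# Connection string is a placeholder:
echo "$QUERY" | psql "postgresql://user:pass@localhost/dvdrental" 2>/dev/null \
  || echo "no local dvdrental database to run against"
```

Five joins for one business question: exactly the kind of work you are happy to delegate to the tool.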

The Big Catch: A Controversial Open-Source Model

Now, for the part that didn’t sit right with me. As powerful as Wren AI is, the open-source version has a significant limitation: you can only connect to one data source. Once you configure it, the interface prevents you from adding another source or switching to a different database. The only way to switch is to completely reset the entire project.

This feels less like a feature-limited version and more like a deliberately hobbled one. I’m a firm believer in the open-source philosophy, and while I understand companies need to make money, crippling such a core function feels like using the “open source” label primarily as a marketing tool. Allowing users to connect to at least a few data sources would make it a genuinely usable product for home labs or small projects, while still leaving enterprise features like single sign-on or advanced security for the paid version.

Final Thoughts

Despite my criticism of its licensing model, Wren AI is an incredibly interesting and powerful solution. The ability to simplify data analysis for large, complex databases is a massive value-add, potentially saving countless hours and making data more accessible to everyone in an organization.

It’s a fantastic proof-of-concept for the future of business intelligence. The technology works, and with a decent local hardware setup or a cloud LLM, it can deliver real insights quickly. A pity, then, that the open-source version is held back by what seems to be an artificial limitation.

What do you think? Is this a fair model for an open-source project, or does it go against the spirit of the community? Have you tried any similar GenBI tools? Let me know your thoughts in the comments below!

If you enjoyed this deep dive, please give the video a like and subscribe to the Quadrata channel for more content on open source and IT.

A greeting from Dimitri, bye everyone!



Unleashing Your Inner Artist: A Deep Dive into AI Image Generation with ComfyUI

Good morning, everyone! Dimitri Bellini here, back on my channel, Quadrata, where we explore the fascinating world of open source and IT. If you’ve been following along, you know we’ve spent a lot of time diving into Large Language Models (LLMs) for tasks like summarizing topics and answering questions. But today, we’re venturing into a different, more visual side of artificial intelligence: the creation of images.

We’re going to explore a powerful, node-based graphical interface called ComfyUI. This open-source tool allows you to build sophisticated workflows for generating AI images using Stable Diffusion models. Forget complex code; we’re talking about a visual playground for your creativity.

LLMs vs. Stable Diffusion: Understanding the AI Playground

Before we jump into ComfyUI, it’s crucial to understand the two different families of AI models we’re dealing with. They might both fall under the “AI” umbrella, but they function in fundamentally different ways.

Large Language Models (LLMs)

Think of models like GPT-4, Google Gemini, or Llama. Their world is text.

  • Purpose: To generate human-like text, answer questions, translate languages, and even write code.
  • How it works: At its core, an LLM is a master of prediction. It analyzes a sequence of words and predicts the most statistically probable next word to continue the sentence or paragraph. We can think of it as a super-intelligent person who excels at writing and conversation.
  • Tools: We often use engines like Ollama to run these models locally.

Stable Diffusion Models

This category is all about visuals. Models like Stable Diffusion 1.5 or the powerful Flux.1 are designed to be digital artists.

  • Purpose: To create complex, detailed images based on text descriptions (prompts).
  • How it works: The process is fascinating. It starts with a canvas of pure random noise—like the static on an old TV. Then, guided by your text prompt, the model gradually removes the noise (a process called denoising diffusion), adding details step-by-step until a coherent image emerges. It’s like an artist taking our instructions and sculpting a masterpiece from a block of marble.
  • Tools: This is where ComfyUI shines, providing the framework to control this artistic process.

Introducing ComfyUI: Your Visual Gateway to AI Art

So, why do we need a tool like ComfyUI? Because creating the perfect image isn’t always straightforward. ComfyUI provides a graphical interface that transforms the complex process of AI image generation into a manageable, visual workflow.

Why a Node-Based Interface?

Instead of writing lines of code, you connect different functional blocks, or “nodes,” together. Each node performs a specific task—loading a model, defining a prompt, sampling the image, upscaling the result, etc. You connect the output of one node to the input of another, creating a visual pipeline. This modular approach gives you incredible flexibility and granular control over every single step of the image generation process.

My Setup: Docker and NVIDIA Power

To keep things clean and avoid dependency headaches with Python versions, I prefer to run everything in a Docker container. For this demonstration, I’m using a fantastic community-built Docker image for ComfyUI (I’ll leave the link in my YouTube video description!). The heavy lifting is handled by my NVIDIA RTX 8000 GPU, which is essential for getting results in a reasonable amount of time.
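
For reference, that container setup has roughly the shape below. The image name is a placeholder for whichever community image you pick, 8188 is ComfyUI's default port, and `--gpus all` passes the NVIDIA card through to the container.

```shell
# Sketch only: image name and model path are placeholders for your setup.
COMFY_IMAGE="yourname/comfyui:latest-cuda"
docker run -d --name comfyui --gpus all \
  -p 8188:8188 \
  -v /opt/comfyui/models:/workspace/models \
  "$COMFY_IMAGE" 2>/dev/null \
  || echo "docker or GPU support not available on this machine"
```

Mounting the models directory from the host means downloaded checkpoints survive container rebuilds.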

A Practical Tour: 3 Amazing Things You Can Do with ComfyUI

Talk is cheap, so let’s dive into some practical examples to see what ComfyUI is capable of. I’ve set up a few different workflows to showcase its power.

1. Breathing Life into Old Photos: Upscaling with AI

First up, let’s tackle a common problem: low-resolution images. I took a tiny photo, just 300×345 pixels. By running it through an upscaling workflow in ComfyUI, I was able to increase its size by four times while adding incredible detail. When you zoom in on the original, it’s a blurry mess. But the upscaled version is sharp and clear. The AI didn’t just enlarge the pixels; it intelligently interpreted the image to add detail that wasn’t there before. It’s not perfect, as a better model would yield even cleaner results, but the difference is still night and day.

2. From Black and White to Vibrant Color: AI Colorization

Next, I took a classic black-and-white still from the historic film Metropolis. The image is iconic but lacks the vibrancy of color. Using a specific colorization model, ComfyUI analyzed the image and made an educated guess about the original colors. The result is a beautifully colored image that brings a whole new dimension to the scene. This is an amazing tool for restoring and reimagining historical photos and videos.

3. Text to Reality: Generating Images from Scratch

This is the most common use case and where the magic really happens. I used the Flux.1 Schnell model, an open-source powerhouse, to generate an image from a simple text prompt: “a computer technician with his penguin next to him in a server room.”

Watching the process is captivating. ComfyUI’s interface shows you which node is currently working, and you can see your system’s resource usage spike. My GPU hit 100%, and VRAM usage climbed to nearly 40 GB! After a few moments, the result appeared: a stunningly detailed, high-quality image of a technician and his penguin companion. Just a year ago, achieving this level of quality with open-source models at home was almost impossible. Today, it’s a reality.

Final Thoughts and Your Turn to Create

ComfyUI is an incredibly rich and powerful tool that puts professional-grade AI image generation into your hands. I’ll be honest—there’s a learning curve. The sheer number of nodes and settings can be intimidating at first. But the ability to build, customize, and share workflows makes it one of the most versatile platforms out there.

With a solution that is completely open source, you can have your own AI art studio running directly at home. I highly encourage you to give it a try. Play around with it, download different models, and see what you can create!

On a final note, I’ll be heading to the Zabbix Summit in Riga next week, so I might not be able to post a full video. However, I’m excited to discover the new features coming in Zabbix 8.0 and will be sure to share the highlights with you all!

What do you think? Have you tried ComfyUI or other Stable Diffusion tools? What kind of images would you like to create? Let me know in the comments below! Your feedback helps shape future content.

A big greeting from me, Dimitri, and see you next week!


Connect with me and the community:

Kestra 1.0: The Open-Source Orchestrator Embraces the AI Revolution

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the world of open source and IT. If you’re a regular viewer, you know how much I love exploring powerful, community-driven tools. And if you’re new, please consider subscribing to join our growing community!

This week, we’re revisiting a product I’m incredibly excited about: Kestra. I covered it before, but now it has hit a major milestone with its 1.0 release, and the new features are too good not to share. Kestra has officially reached maturity, evolving into a tool that’s not just powerful but also incredibly intelligent. Let’s dive in!

What is Kestra? A Quick Refresher

For those who might be new to it, Kestra is an open-source, self-hosted solution for orchestrating and automating complex processes. Think of it as the central nervous system for your IT operations. It solves a problem we’ve all faced: managing countless scripts written in different languages, scattered across various machines. After a few months, it becomes a nightmare to remember where everything is, how it works, and who has access to it.

Kestra brings order to this chaos by providing:

  • A centralized platform to manage all your automation workflows.
  • A declarative language (YAML) to define tasks, making them easy to version control with tools like Git.
  • Flexible task management, allowing you to run jobs sequentially, in parallel, or based on dependencies.
  • A massive library of pre-built plugins for seamless integration with databases, cloud platforms, notification systems, and more.
  • An event-driven architecture that can be triggered manually, via API calls (webhooks), or on a schedule, just like a crontab.

Essentially, it’s a language-agnostic powerhouse that allows different teams—whether they prefer Bash, Python, or Perl—to collaborate on a single, intuitive platform.
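
To make the "declarative YAML" point concrete, here is a minimal hypothetical flow with two tasks. The namespace and IDs are placeholders, and the two plugin types are the core log task and the shell-commands task as I know them from the docs; written out as a file like this, it is trivially version-controlled with Git.

```shell
# Write a minimal two-task Kestra flow to a file (IDs are placeholders).
cat > hello-flow.yml <<'YAML'
id: hello-flow
namespace: quadrata.demo
tasks:
  - id: say-hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from Kestra
  - id: run-commands
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - echo "runs after say-hello completes"
YAML
```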

The Game-Changers in Kestra 1.0

The 1.0 release isn’t just an update; it’s an evolution. Kestra is boldly stepping into the “agentic world,” integrating AI in ways that genuinely enhance its capabilities.

Stepping into the Agentic World with AI

The headline feature is, of course, AI. Kestra 1.0 introduces an AI Copilot designed to help you generate the YAML code for your tasks. While I found it to be a bit hit-or-miss in its current state (it uses a simple version of Gemini), the concept is promising. For more reliable results, I actually recommend using the “Ask Kestra AI” feature on their official documentation website—it works much better!

What’s truly exciting is Kestra’s ability to be controlled by AI agents and, in turn, use agents to perform tasks. This opens up a world of possibilities for creating dynamic, intelligent automation that can adapt and respond to complex triggers. You can even integrate with self-hosted models using the Ollama plugin, keeping your entire stack private and self-sufficient.

Developer Experience and Usability Boosts

Beyond AI, version 1.0 brings several quality-of-life improvements:

  • Playground: You can now test individual tasks or small segments of your workflow without having to run the entire thing. This is a massive time-saver during development and debugging.
  • Flow Level SLA: For more business-oriented needs, you can now define and monitor Service Level Agreements (SLAs) for your flows. If a task that should take an hour is running longer, Kestra can alert you.
  • Plugin Versioning: In complex enterprise environments, you can now pin specific versions of plugins to a workflow, ensuring stability and preventing unexpected breakages from updates.
  • No-Code Editor for Apps: This is a standout feature, though currently for the Enterprise version. It allows you to create simple, interactive web UIs (Kestra Apps) for your workflows. Instead of exposing complex options, you can give users a clean form with input fields to launch a job. It’s a fantastic way to democratize your automation.

A Guided Tour of the Kestra Interface

I set up my Kestra instance easily using a simple container setup. The first thing you see is a comprehensive dashboard showing the status of all your jobs: successes, failures, and currently running tasks. It’s your mission control center.
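
For completeness, the single-container setup I mean is roughly the quickstart from the Kestra docs; check their documentation for the current flags. Mounting the Docker socket lets Kestra launch task containers of its own.

```shell
# Single-node Kestra quickstart sketch (verify flags against the docs).
KESTRA_IMAGE="kestra/kestra:latest"
docker run -d --name kestra --user=root \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/kestra-wd:/tmp/kestra-wd \
  "$KESTRA_IMAGE" server local 2>/dev/null \
  || echo "docker not available on this machine"
```

The UI then answers on http://localhost:8080.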

Crafting Your First Flow: Code, No-Code, and AI Assistance

Workflows in Kestra are organized into Namespaces (think of them as projects), and each workflow is called a Flow. When you edit a flow, you’re presented with a powerful interface.

On one side, you have the YAML editor where you define your tasks. But here’s the magic: as you work, a documentation panel appears right next to your code, providing examples, properties, and definitions for the specific task type you’re using. No more switching tabs to look up syntax!

And if you’re not a fan of YAML, Kestra 1.0 introduces a fantastic no-code wizard. This form-based interface guides you through creating each step of your workflow, simplifying the process immensely. You can build complex automation without writing a single line of code, and the YAML is generated for you in the background.

Monitoring and Control

Once your flow is running, Kestra provides incredible visibility:

  • Topology View: A visual graph of your workflow, showing how tasks connect and the real-time progress of an execution.
  • Revisions: Kestra automatically versions every change you make to a flow. If something breaks, you can easily compare versions and restore a previous working state.
  • Logs: A powerful, searchable log interface (similar to Elasticsearch) lets you drill down to find exactly what happened during an execution.
  • Metrics: You can expose metrics from your flows to monitoring tools like Zabbix or Prometheus to track performance over time.

My Final Thoughts

Kestra 1.0 is a truly impressive release. It has matured from a powerful orchestrator into an intelligent automation platform that is both developer-friendly and accessible to those who prefer a no-code approach. The focus on AI, combined with major usability enhancements, makes it a compelling choice for anyone looking to streamline their IT processes.

As it’s open-source, you can try it out at home without any cost. I’m personally considering using it to automate parts of my video creation workflow! It’s that versatile.

I highly encourage you to give it a try. Explore the official documentation, check out the pre-made “Blueprints” to get started quickly, and see how it can simplify your work.


What do you think of Kestra 1.0? Are there other automation tools you love? Let me know in the comments below—your opinion is incredibly valuable! If you found this overview helpful, please give the video a thumbs up and subscribe for more content on open-source technology.

See you next week!

– Dimitri Bellini


Pangolin VPN: Secure Your Internal Services with Zero Open Ports

Good morning and welcome, everyone! I’m Dimitri Bellini, and you’re here again with me on Quadrata, my channel dedicated to the world of open source and IT. This week, we’re diving into something new and exciting: a truly noteworthy tool that can help you in very specific situations.

We’re going to talk about Pangolin VPN, and its promise is right in the name: “Zero Open Ports.” While the concept of a secure tunnel isn’t new, Pangolin offers a unique, simplified approach. It’s an open-source, self-hosted solution that lets you create a reverse tunnel to your internal servers, all managed through a centralized, user-friendly platform. Let’s explore what makes it so special.

What is Pangolin VPN?

At its core, Pangolin is an open-source solution that allows you to install a complete secure access platform on your own machines. It’s built on top of WireGuard, but it’s not a classic VPN. Instead of manually configuring clients and punching holes in your firewalls, Pangolin centralizes everything. It acts as a secure gateway, protecting your internal web services and applications from direct exposure to the internet.

You essentially need two things to start:

  1. A machine with a public IP address (like a cheap VPS) to act as the central concentrator.
  2. A domain name to point to that machine.

From there, Pangolin handles the rest, creating a secure, elegant bridge to your private network without you having to mess with complex NAT or firewall rules.

Key Features That Make Pangolin Stand Out

Pangolin simplifies secure access by bundling several powerful features into one platform. Here are the most important ones:

  • Enhanced Security with Zero Exposure: This is the headline feature. You don’t expose any ports for your internal services (like Zabbix, Proxmox, or a custom web app) to the public internet. Everything is hidden behind the Pangolin platform and accessed securely over HTTPS.
  • Centralized Authentication and Permissions: Pangolin provides a robust system for managing user access. You can use simple password authentication, enable two-factor authentication (2FA), or integrate with an external Identity Provider (IDP) for Single Sign-On (SSO) with services like Google, Azure, and more.
  • Role-Based Access Control (RBAC): You have granular control over who can see what. Based on user roles, which can be pulled directly from your IDP, you can define policies to ensure users only have access to the specific applications they need.
  • Simplified Networking: Forget about complex firewall configurations. You simply install a lightweight agent on a machine inside your network, and it establishes a secure outbound connection to your public Pangolin server. That’s it.
  • Clientless Access for Users: For accessing web-based applications, your users don’t need to install any client software. All they need is a web browser. Pangolin acts as a reverse proxy, authenticates the user, and seamlessly connects them to the internal resource.
  • Full Control and Privacy: Since you host it yourself, you have complete control over your data and infrastructure. No third-party dependencies or data passing through external services.

How It Works: The Architecture

Pangolin is a suite of open-source tools working in harmony. The entire platform is packaged with Docker, making deployment a breeze. Here are the core components:

  • Pangolin: The central management console where you configure sites, resources, users, and policies.
  • Gerbil: A WireGuard management server developed by the Pangolin team to handle the underlying secure connections.
  • Traefik: A modern and powerful reverse proxy that handles incoming requests and routes them to the correct internal service.
  • Newt: A user-space WireGuard client. This is the agent you install on your internal network. The beauty of Newt is that it doesn’t require root privileges or kernel modules, and it runs on Linux, Windows, macOS, and more.

The workflow is simple: a user accesses a specific URL in their browser. The request hits your public Pangolin server, which uses Traefik to handle it. Pangolin checks the user’s authentication and permissions. If authorized, it routes the request through the secure WireGuard tunnel established by the Newt client to the correct service on your private network.

Getting Started: A Quick Installation Guide

Installing Pangolin is surprisingly straightforward. Here’s what you’ll need first.

Prerequisites

  • A host with Docker or Podman installed and a public IP address.
  • A domain name (e.g., yourdomain.com).
  • DNS records configured to point your domain and a wildcard subdomain (e.g., *.yourdomain.com) to your public host’s IP.
  • An email address for Let’s Encrypt SSL certificate generation.
  • The following ports open on your public host’s firewall: TCP 80, TCP 443, and the necessary UDP ports for WireGuard.
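As a sketch, the two DNS records from the list above would look roughly like this in BIND zone-file notation (the domain and IP address are placeholders — substitute your own):

```
; example zone snippet -- replace names and IP with your own
yourdomain.com.    IN  A  203.0.113.10
*.yourdomain.com.  IN  A  203.0.113.10
```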

Installation Steps

The installation is handled by a simple script. Just run these commands on your public server:

curl -fsSL https://digpangolin.com/get-installer.sh | bash
sudo bash ./install.sh

The script will ask you a few questions:

  1. Your main domain: (e.g., quadrata.dev)
  2. The subdomain for the Pangolin service: It will suggest one (e.g., pg.quadrata.dev).
  3. Your email for Let’s Encrypt.
  4. Whether to use Gerbil to manage connections (say yes).
  5. A few other simple questions about email notifications and IPv6.

Once you answer, it will pull the necessary Docker containers and set everything up. At the end of the process, it will give you a registration token. Use this token to create your first admin user and password.

Configuring Your First Services

Once you’re logged into the Pangolin dashboard, the process is logical.

1. Create a “Site”

A “Site” in Pangolin represents your internal network. You’ll give it a name, and Pangolin will provide you with the command to deploy the Newt client agent inside that network. You can easily copy the docker run or Docker Compose configuration and deploy it on a machine within your LAN (I used my container management tool, Komodo, for this). Once the agent is running, it will connect to your Pangolin server, and the site will show as active.
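The dashboard generates the exact deployment snippet with your credentials filled in; as a rough sketch, the Compose configuration looks something like the following (the image name, variable names, and values here are illustrative — copy the real ones from your Pangolin UI):

```
services:
  newt:
    image: fosrl/newt        # image name as published by the Pangolin project
    restart: unless-stopped
    environment:
      # These values are generated per-site by the Pangolin dashboard:
      - PANGOLIN_ENDPOINT=https://pg.yourdomain.com
      - NEWT_ID=<generated-by-pangolin>
      - NEWT_SECRET=<generated-by-pangolin>
```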

2. Create a “Resource”

Next, you define the services you want to expose. Click on “Add Resource” and select “HTTPS Resource.”

  • Give it a name (e.g., “Ollama”). This will also become its subdomain (e.g., ollama.pg.quadrata.dev).
  • Select the “Site” you just created.
  • Enter the internal IP address and port of the service (e.g., 192.168.1.50:3000).

3. Assign Permissions

After creating the resource, you must define who can access it. In the resource’s “Authentication” tab, you can assign it to specific roles (like “Member”) or individual users. You can also enforce SSO for that specific application. Save your changes, and you’re done!

Now, when an authorized user navigates to ollama.pg.quadrata.dev, they will be prompted to log in via Pangolin and will then be seamlessly redirected to your internal Ollama service. It’s that simple!

What About a Full VPN?

Pangolin has recently introduced a beta feature for a more traditional VPN experience. You can create a “Client” in the dashboard, which is similar to creating a “Site.” This provides a configuration to run the Newt client directly on your laptop. Once connected, your machine becomes part of the secure network, allowing you to access any resource (not just web services) based on the permissions you define. You can even create “Client Resources” to open specific TCP/UDP ports for SSH, RDP, or other protocols, giving you fine-grained control.

Conclusion

Pangolin is a fantastic and genuinely interesting product. It’s not trying to be a replacement for every VPN use case, but it excels at simplifying secure access to self-hosted web services. The combination of zero-exposure security, centralized SSO authentication, and role-based access control makes it a powerful tool for small businesses, homelab enthusiasts, and anyone looking to share internal applications without the headache of complex network configurations.

It’s a project that simplifies life in many circumstances, and I highly recommend giving it a try. The fact that it’s open source and self-hostable gives you the ultimate control and privacy.

Have you tried Pangolin or a similar tool? Let me know your thoughts and experiences in the comments below! I’d love to hear your opinion.


For more content on open-source and IT, make sure to subscribe to my channel!

➡️ YouTube Channel: Quadrata

➡️ Join the conversation on Telegram: Zabbix Italia Community

Thanks for reading, and see you next week. A greeting from Dimitri!

Read More
Gartner’s Magic Quadrant: A Crystal Ball for IT or an Illusion?

Gartner’s Magic Quadrant: A Crystal Ball for IT or an Illusion?

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata. I know I’ve been away for a few weeks—I managed to get a bit of a vacation in—and I’ve come back with a ton of ideas for new open-source software to share with you.

But today, I want to take a step back and have a more general chat. This is for anyone who works in a company that has to deal with much larger enterprise clients, or for anyone involved in the high-stakes decision of choosing which software to invest in. In the world of IT, there’s a powerful and often mysterious force that guides these decisions: the Gartner Magic Quadrant.

It’s often treated like a crystal ball, a tool that can predict the future of your tech empire and tell you exactly where to step next. While it’s certainly a useful instrument, it’s crucial to understand what it is, how it works, and most importantly, its limitations.

What Exactly Is the Gartner Magic Quadrant?

Simply put, the Magic Quadrant is a series of market research reports that provide a visual snapshot of a specific tech market. Whether it’s cloud computing, observability, or storage, Gartner maps out the main competitors, helping you understand the landscape at a glance. For a top manager who doesn’t have time to research hundreds of solutions, it simplifies the immense complexity of the IT world into a single, digestible chart.

Decoding the Four Squares

The “magic” happens within a four-square grid, where vendors are placed based on their “Ability to Execute” and “Completeness of Vision.” Here’s what each quadrant means:

  • Leaders (Top Right): These are the champions. They have a strong vision that aligns with Gartner’s view of the future and the resources to execute it. They are well-established, reliable, and considered the top players in their field.
  • Challengers (Top Left): These vendors dominate the market today and have a strong ability to execute, but their vision for the future might not be as clear or innovative. They are strong performers but may be less prepared for tomorrow’s shifts.
  • Visionaries (Bottom Right): These are the innovators. They understand where the market is going and have a compelling vision, but they may not have the resources or market presence to execute on that vision at scale just yet.
  • Niche Players (Bottom Left): These vendors focus successfully on a small segment of the market. They might be very good at what they do, but they lack either a broad vision or the ability to outperform others across the board.

Why the Magic Quadrant Is So Influential

If you’ve ever tried to sell a product to a large enterprise, you’ve likely been asked, “Are you in the Gartner Magic Quadrant?” If the answer is yes, the doors magically open. Why? Because it represents a safe choice.

There’s an old saying in IT: “No one ever got fired for buying IBM.” The Magic Quadrant works on a similar principle. A manager can point to it and say, “I chose a Leader. It was the best on the market according to the experts. If it doesn’t work out, what more could I have done?” It provides a shield of justification.

For vendors, being placed in the quadrant—especially as a Leader—is a powerful marketing tool. It validates their position in the market and instantly gives them credibility. It works for both the buyer and the seller.

The Hamlet-like Doubt: Is the Leader Always the Best Choice?

And here is where the critical thinking comes in. Just because a product is in the “Leaders” quadrant, does that automatically make it the right choice for your company? This is the fundamental question every manager should ask.

The process to get into the quadrant is incredibly complex and resource-intensive. It requires detailed reports on financials, sales strategy, customer feedback, marketing, and innovation. This creates a few potential issues:

1. It Favors the Already Favored

Large, multinational corporations have the money, specialized staff, and massive structures needed to provide Gartner with the exhaustive data required. This creates a high barrier to entry for small-to-medium-sized companies or innovative startups that might have a superior product but lack the corporate machinery to prove it according to Gartner’s specific methodology.

2. The Open Source Blind Spot

Open source solutions often don’t fit neatly into these corporate boxes. A powerful open-source tool might require more initial customization and “handiwork,” but in return, it offers unparalleled flexibility. The Magic Quadrant’s model can struggle to properly evaluate this trade-off, often overlooking solutions that could be a perfect fit for a company willing to invest in configuration over out-of-the-box features.

3. It’s Based on the Past, Not the Future

The analysis relies heavily on past performance and existing data. A truly disruptive, game-changing technology that doesn’t fit the standard parameters might not even make it onto the chart. By the time it does, it might be too late.

Conclusion: Use It as a Map, Not a Destination

So, what’s the takeaway? The Gartner Magic Quadrant is an excellent starting point. If you know nothing about a particular market, it gives you a fantastic overview of the key players. But your work doesn’t end there. The most critical step is due diligence.

You must dive deeper to understand your company’s unique, real-world needs. No two businesses are exactly alike, even if they’re in the same industry. To stay on the crest of the wave, you need a tool that is molded to your specific workflows, not a one-size-fits-all solution that’s beautiful and feature-packed but of which you’ll use barely a fifth of the capabilities. Think about it: if you want the ultimate performance car, do you buy the best-selling Volkswagen, or do you seek out a niche masterpiece like a Ferrari or a Bugatti?

Choosing the Leader is the easy path. But putting in the passion and the effort to analyze, think, and then decide on the truly *right* tool—that’s what makes a great manager. Don’t just follow the chart; understand your needs, explore all options (even the niche ones!), and make an informed decision that will genuinely drive your business forward.


That’s all for today! I hope this discussion was useful. What are your thoughts on the Gartner Magic Quadrant? Have you used it to make decisions? Let me know in the comments below!

If you liked this post and the accompanying video, please give it a like and subscribe to the channel if you haven’t already. I’ll be back next week with a very interesting—and yes, niche—tool that I think you’ll love.

Bye everyone!

– Dimitri

Connect with me and the community:

Read More
Copyparty: The Lightweight, Powerful File Server You Didn’t Know You Needed

Copyparty: The Lightweight, Powerful File Server You Didn’t Know You Needed

Good morning, everyone, and welcome back to Quadrata! This is my corner of the internet dedicated to the open-source world and the IT solutions that I—and hopefully you—find exciting. If you enjoy this kind of content, don’t forget to leave a like on the video and subscribe to the channel!

This week, we’re diving back into the world of open-source solutions. I stumbled upon a truly stunning tool in the file-sharing space that has a wonderful nostalgic feel, reminiscent of the BBS days of the 90s. It’s called Copyparty, and its charm lies not just in its retro vibe but in its incredible versatility. You can install it almost anywhere, making it a fantastic utility to have in your toolkit.

So, let’s take a closer look together.

What Exactly is Copyparty?

At its core, Copyparty is a web file server that allows you to share and exchange files. What makes it special is that it’s all contained within a single Python file. This makes it incredibly lightweight and portable. While you can run it directly, I prefer using it inside a Docker container for easier management and deployment.

But why use it? The answer is simplicity and performance. If you’ve ever needed to quickly move files between your PC and your NAS, or share a large file with a friend without jumping through hoops, Copyparty could be the perfect, high-performing solution for you.

A Surprising Number of Features in a Tiny Package

I was genuinely impressed by the sheer number of features packed into this tool. It’s highly customizable and offers much more than simple file transfers. Here’s a condensed list of its most interesting capabilities:

  • Smart Uploads & Downloads: When you upload a large file, Copyparty can intelligently split it into smaller chunks. This maximizes your bandwidth and, more importantly, allows for resumable transfers. If your connection drops halfway through, you can pick up right where you left off.
  • File Deduplication: To save precious disk space, Copyparty uses file hashes to identify and avoid storing duplicate files.
  • On-the-fly Compression: You can have files automatically zipped and compressed on the fly, which is another great space-saving feature.
  • Batch Renaming & Tagging: If you have a large collection of photos or, like in the old days, a folder full of MP3s, you can quickly rename them based on a specific pattern.
  • Extensive Protocol Support: It’s not just limited to HTTP. Copyparty supports a whole suite of protocols, including WebDAV, FTPS, TFTP, and Samba, making it a complete hub for file communication.
  • Truly Cross-Platform: It runs virtually everywhere: Linux, macOS, Windows, Android, and even on a Raspberry Pi, thanks to its optimized nature. Yes, you can install it directly on your phone!
  • Built-in Media Tools: Copyparty includes a surprisingly nice music player that can read metadata from your audio files (like BPM and duration) and a compact image browser for viewing your photos.
  • Powerful Command Line (CLI): For those who need to automate or optimize file transfers, there’s a full-featured command-line interface.

Tailor It to Your Needs: Configuration and Security

One of Copyparty’s greatest strengths is its customizability via a single configuration file, copyparty.conf. Here, you can enable or disable features, block connections from specific IP ranges, set upload limits based on disk space, and even change the UI theme.

For user management, you have a couple of options. You can use a simple user/password file or integrate with an external Identity Provider (IDP). The permission system is also very granular. Using a system of single-letter flags (like r for read, w for write, plus flags for move, delete, and admin), you can define exactly what each user can do on specific paths. It might seem a bit “primordial” compared to modern web GUIs, but for a compact solution, it’s incredibly fast and effective to manage.
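As a rough sketch of that flag system, a minimal copyparty.conf might look like this (account names, passwords, and paths are placeholders — check the project’s documentation for the full flag list):

```
# copyparty.conf -- minimal example
[accounts]
  dimitri: s3cret        # username: password

[/share]                 # URL path of the volume
  /mnt/data              # filesystem path it maps to
  accs:
    r: *                 # everyone may read
    rw: dimitri          # dimitri may read and write
```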

How to Install Copyparty with Docker

As I mentioned, my preferred method is using Docker. Copyparty’s developers provide a straightforward Docker Compose file that makes getting started a breeze. I use a GUI tool like Portainer to manage my containers, which simplifies the process even further.

Here’s a look at a basic docker-compose.yml structure:


services:
  copyparty:
    image: 9001/copyparty
    ports:
      - "3923:3923"
    volumes:
      # Volume for configuration file (copyparty.conf)
      - /path/to/your/config:/cfg
      # Volume for the files you want to share
      - /path/to/your/data:/mnt
    # ... other docker-specific configurations

In this setup, I’ve defined two key volumes:

  1. A volume for the configuration, where the copyparty.conf file lives.
  2. A mount point for the actual data I want to share or upload to.

Once you run docker-compose up -d, your service will be up and running!

A Walkthrough of the Web Interface

The official GitHub page has a wealth of information and even a live demo, but let me show you my installation. The interface has a fantastic vintage feel, but it’s packed with functionality.

Uploading and Sharing

Uploading a file is as simple as dragging and dropping. First, Copyparty hashes the file to check for duplicates. Then, it begins streaming the upload in a highly optimized way. Once uploaded, you’ll see details like the IP address it was uploaded from and the timestamp.

Sharing is just as easy. You can select a file, create a share link with a custom name, set a password, and even define an expiration date. It generates both a URL and a QR code, making it incredibly convenient to share with others.

Management and Media

The UI includes several helpful tools:

  • Control Center: See active clients, current uploads/downloads, and active shares.
  • Recent Uploads (Extinguisher Icon): Quickly view the latest files added to your share, which is useful for moderation in a multi-user environment.
  • Advanced Search (Lens Icon): A powerful search tool with a wide array of filters to find exactly what you’re looking for.
  • Settings (Gear Icon): Customize the UI, change the language, and tweak how files are displayed.

And don’t forget the built-in media player and image gallery, which turn your file share into a simple media server.

Monitoring

For advanced users, Copyparty can even export its metrics, allowing you to monitor its performance and status with tools like Zabbix. This is a testament to its professional-grade design.

Final Thoughts: Is Copyparty Right for You?

I think Copyparty is a fantastic and interesting product. It’s a very nice solution to try, especially because it’s so lightweight and can be installed almost anywhere. There are many situations where a fast, simple, and self-hosted file-sharing tool is exactly what you need.

Its blend of retro simplicity and modern, powerful features makes it a unique and valuable tool in the open-source world.

That’s all for this week! I’m always eager to hear your thoughts. Have you used Copyparty before? Or do you use another solution that you find more interesting? Let me know in the comments below—perhaps we can discuss it in a future video!

A big greeting from me, Dimitri, and see you next week. Bye everyone!


Follow my work and join the community:

Read More
My Deep Dive into NetLockRMM: The Open-Source RMM You’ve Been Waiting For

My Deep Dive into NetLockRMM: The Open-Source RMM You’ve Been Waiting For

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the fantastic world of open source and IT. If you’re managing multiple systems, you know the challenge: finding a reliable, centralized way to monitor and control everything without breaking the bank. Proprietary solutions can be costly, and the open-source landscape for this has been somewhat limited.

That’s why this week, I’m excited to show you a new product that tackles this problem head-on. It’s an open-source tool called NetLockRMM, and it’s designed to solve the exact problem of remote device management.

What is NetLockRMM?

NetLockRMM stands for Remote Monitoring and Management. It’s a self-hosted solution that gives you a single web portal to manage and control your remote hosts. Whether you’re dealing with servers or desktops running Windows, Linux, or macOS, this tool aims to bring them all under one roof. For those of us who use tools like Zabbix to manage numerous proxies or server installations, the idea of a single point of control is incredibly appealing.

Here are some of the key features it offers:

  • Cross-Platform Support: Agents are available for Windows, Linux, and macOS, covering most use cases.
  • System Monitoring: Keep an eye on vital parameters like CPU, RAM, and disk usage. While it’s not a full-fledged monitoring system like Zabbix, it provides a great overview for standard requirements.
  • Remote Control: Access a remote shell, execute commands, and even get full remote desktop access to your Windows machines directly from your browser.
  • File Transfer: Easily upload or download files to and from your managed hosts.
  • Automation: Schedule tasks and run scripts across your fleet of devices to automate maintenance and checks.
  • Multi-Tenancy: Manage different clients or departments from within the same instance.

Getting Started: The Installation and Setup Guide

One of the best parts about NetLockRMM is how simple it is to get up and running. Here’s a step-by-step guide to get you started.

Prerequisites

All you really need is a system with Docker installed. The entire application stack runs in containers, making deployment clean and isolated. If you plan to access the portal from the internet, you’ll also need a domain name (FQDN).

Step 1: The Initial Installation

The development team has made this incredibly easy. The official documentation points to a single Bash script that automates the setup.

  1. Download the installation script from their repository (https://docs.netlockrmm.com/en/server-installation-docker).
  2. Make it executable (e.g., chmod +x /home/docker-compose-quick-setup.sh).
  3. Run the script. It will ask you a few questions to configure your environment, such as the FQDN you want to use and the ports for the services.
  4. The script will then generate the necessary docker-compose.yml file and, if you choose, deploy the containers for you.

While you can easily manage the deployment from the command line, I’m getting quite fond of using a handy container management tool to deploy my stacks, which makes the process even more convenient.

Step 2: Activating Your Instance

Here’s an important point. While NetLockRMM is open-source, the developers have a fair model to support their work. To fully unlock its features, you need to get a free API key.

  1. Go to the official NetLockRMM website and sign up for an account.
  2. Choose the On-Premise Open Source plan. It’s free and allows you to manage up to 25 devices, which is very generous for home labs or small businesses.
  3. In your portal dashboard, navigate to “My Product” to find your API key.
  4. In your self-hosted NetLockRMM instance, go to Settings > System and paste the Member Portal API Key.

Without this step, the GUI will work, but you won’t be able to add any new hosts. So, make sure you do this first!

Step 3: Deploying Your First Agent

With the server running and activated, it’s time to add your first machine.

  1. In the NetLockRMM dashboard, click the deployment icon in the top navigation bar.
  2. Create a new agent configuration or use the default. This is where you’ll tell the agent how to connect back to your server.
  3. This is critical: For the API, App, and other URLs, make sure you enter the full FQDN, including the port number (e.g., https://rmm.yourdomain.com:443). The agent won’t assume the default port, and it won’t work without it.
  4. Select the target operating system (Windows, Linux, etc.) and download the customized installer.
  5. Run the installer on your target machine.
  6. Back in the NetLockRMM dashboard, the new machine will appear in the Unauthorized Hosts list. Simply authorize it to add it to your fleet of managed devices.

Exploring the Key Features in Action

Once an agent is authorized, you can click on it to see a wealth of information and tools. You get a summary of the OS, hardware specs, firewall status, and uptime. You can also browse running processes in the task manager and see a list of services.

Powerful Remote Control

The remote control features are where NetLockRMM truly shines. For Windows, the remote desktop access is fantastic. It launches a session right in your browser, giving you full GUI control without needing any other software. It’s fast, responsive, and incredibly useful.

For Linux, the remote terminal is currently more of a command-execution tool than a fully interactive shell, but it’s perfect for running scripts or a series of commands. You can also browse the file system and transfer files on all supported platforms.

Automation and Scripting

The automation section allows you to create policies and jobs that run on a schedule. You can define checks for disk space, running services, or even script your own checks. There’s also a growing library of community scripts you can use for common tasks, like running system updates on Ubuntu.
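As a sketch of the kind of check you might schedule as a job, here is a simple disk-space script — the threshold and output format are my own illustration, not NetLockRMM’s built-in check syntax:

```shell
#!/usr/bin/env bash
# Hypothetical scheduled check: warn when the root filesystem crosses a threshold.
THRESHOLD=90
# Extract the usage percentage of / as a bare number (e.g. "42").
USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "CRITICAL: root filesystem at ${USAGE}% (threshold ${THRESHOLD}%)"
else
  echo "OK: root filesystem at ${USAGE}% (threshold ${THRESHOLD}%)"
fi
```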

My Final Thoughts: A Promising Future

NetLockRMM is a young but incredibly promising project. It’s under very active development—when I checked their GitHub, the last release was just a few days ago! This shows a dedicated team working to improve the product.

It fills a significant gap in the open-source ecosystem, providing a powerful, modern, and easy-to-use RMM solution that can compete with paid alternatives. While there are a few cosmetic bugs and rough edges, the core functionality is solid and works well.

I believe that with community support—through feedback, bug reports, and contributions—this tool could become something truly special. I’ve already given them a star on GitHub, and I encourage you to check them out too.


I hope I’ve shown you something new and interesting today. This is exactly the kind of project we love to see in the open-source world.

But what do you think? Have you tried NetLockRMM, or do you use another open-source alternative for remote management? I’d love to hear your thoughts and recommendations in the comments below. Every comment helps me and the rest of the community learn.

And as always, if you enjoyed this deep dive, please subscribe to the channel for more content like this. See you next week with another episode!

Bye everyone, from Dimitri.

Stay in touch with Quadrata:

Read More
Taming Your Containers: A Deep Dive into Komodo, the Ultimate Open-Source Management GUI

Taming Your Containers: A Deep Dive into Komodo, the Ultimate Open-Source Management GUI

Hello everyone, Dimitri Bellini here, and welcome back to Quadrata, my corner of the internet dedicated to the world of open-source and IT. If you’re like me, you love the power and flexibility of containers. But let’s be honest, managing numerous containers and multiple hosts purely through the command line can quickly become overwhelming. It’s easy to lose track of what’s running, which services need attention, and how your host resources are holding up.

This week, I stumbled upon a solution that genuinely changed my mood and simplified my workflow: Komodo. It’s an open-source container management platform that is so well-made, I just had to share it with you.

What is Komodo?

At its core, Komodo is an open-source graphical user interface (GUI) designed for the management of containers like Docker, Podman, and others. It provides a centralized dashboard to deploy, monitor, and manage all your containerized applications, whether they are running locally or on remote hosts. The goal is to give you back control and visibility, turning a complex mess of shell commands into a streamlined, intuitive experience.

Key Features That Make Komodo Shine

  • Unified Dashboard: Get a bird’s-eye view of all your hosts and the containers running on them. Komodo elegantly displays resource usage (CPU, RAM, Disk Space), operating system details, and more, all in one place.
  • Multi-Host Management: Komodo uses a core-periphery architecture. You install the main Komodo instance on one server and a lightweight agent on any other hosts you want to manage. This allows you to control a “cluster” of machines from a single, clean web interface.
  • Effortless Deployments: You can deploy applications (which Komodo calls “stacks”) directly from Docker Compose files. Whether you paste the code into the UI, point to files on the server, or link a Git repository, Komodo handles the rest.
  • Automation and CI/CD: Komodo includes features for building images directly from your source code repositories and creating automated deployment procedures that can be triggered by webhooks.
  • Advanced User Management: You can create multiple users and groups, and even integrate with external authentication providers like GitHub, Google, or any OIDC provider.

How Does Komodo Compare to the Competition?

Many of you are probably familiar with Portainer. It has been a fantastic solution for years, but its focus has shifted towards Kubernetes, and the free Community Edition has become somewhat limited compared to its commercial offerings. Portainer pioneered the agent-based multi-host model, which Komodo has adopted and refined.

On the other end of the spectrum is Dockge, a much simpler tool focused on managing Docker Compose files on a single host. It’s a great, lightweight option, but Komodo offers a far more comprehensive suite of features for those managing a more complex environment.

Getting Started with Komodo: A Step-by-Step Guide

One of the best things about Komodo is how easy it is to set up. All you need is a machine with Docker installed.

1. Installation

The official documentation makes this incredibly simple (https://komo.do/docs/setup/mongo). The installation is, fittingly, container-based.

  1. Create a dedicated directory for your Komodo installation.
  2. Download the docker-compose.yml and .env files provided on the official Komodo website. Komodo uses a database to store its configuration, giving you the choice between MongoDB (the long-standing default) or FerretDB (a Postgres-based alternative). For a simple start, the default files work perfectly.
  3. Run the following command in your terminal:
    docker-compose --env-file ./.env up -d

And that’s it! Komodo, its agent (periphery), and its database will start up as containers on your machine.

2. First-Time Setup

Once the containers are running, navigate to your server’s IP address on port 9120 (e.g., http://YOUR_SERVER_IP:9120). The first time you access the UI, it will prompt you to create an administrator account. Simply enter your desired username and password, and you’ll be logged into the main dashboard.

Exploring the Komodo Dashboard and Deploying an App

The dashboard is clean and intuitive. You’ll see your server(s) listed. The host where you installed Komodo is automatically added. You can easily add more remote hosts by installing the Komodo agent on them using a simple command provided in the UI.

Deploying Your First Stack (Draw.io)

Let’s deploy a simple application to see Komodo in action. A stack is essentially a project defined by a Docker Compose file.

  1. From the main dashboard, navigate to Stacks and click New Stack.
  2. Give your stack a name, for example, drawio-app.
  3. Click Configure. Select the server you want to deploy to.
  4. For the source, choose UI Defined. This allows you to paste your compose file directly.
  5. In the Compose file editor, paste the configuration for a Draw.io container. Here’s a simple example:
    services:
      drawio:
        image: jgraph/drawio
        ports:
          - "8080:8080"
          - "8443:8443"
        restart: unless-stopped

  6. Click Save and then Update.
  7. The stack is now defined, but not yet running. Click the Deploy button and confirm. Komodo will pull the image and start your container.

You can now see your running service, view its logs, and access the application by clicking the port link—all from within the Komodo UI. It’s incredibly slick!

Modifying a Stack

Need to change a port or an environment variable? It’s just as easy. Simply edit the compose file in the UI, save it, and hit Redeploy. Komodo will gracefully stop the old container and start the new one with the updated configuration.

Final Thoughts

I have to say, I’m thoroughly impressed with Komodo. It strikes a perfect balance between simplicity and power. It provides the deep visibility and control that power users need without a steep learning curve. The interface is polished, the feature set is rich, and the fact that it’s a thriving open-source project makes it even better.

I’ll definitely be adopting Komodo to manage the entropy on my own servers. It’s a fantastic piece of software that I can wholeheartedly recommend to anyone working with containers.

But that’s my take. What do you think? Have you tried Komodo, or do you use another tool for managing your containers? I’d love to hear your thoughts and suggestions in the comments below. Your ideas might even inspire a future video!

That’s all for today. A big salute from me, Dimitri, and I’ll see you next week!


Don’t forget to subscribe to my YouTube channel for more open-source content:

Quadrata on YouTube

Join our community on Telegram:

ZabbixItalia Telegram Channel


Zabbix 7.4 is Here! A Deep Dive into the Game-Changing New Features

Good morning, everyone! It’s Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open-source IT. It’s an exciting week because our good friend, Zabbix, has just rolled out a major new version: Zabbix 7.4! After months of hard work from the Zabbix team, this release is packed with features that will change the way we monitor our infrastructure. So, let’s dive in together and explore what’s new.

The Standout Feature: Nested Low-Level Discovery

Let’s start with what I consider the most mind-bending new feature: nested low-level discovery (LLD). Until now, LLD was fantastic for discovering objects like file systems or network interfaces on a host. But we couldn’t go deeper. If you discovered a database, you couldn’t then run another discovery *within* that database to find all its tablespaces dynamically.

With Zabbix 7.4, that limitation is gone! I’ve set up a demo to show you this in action. I created a discovery rule that finds all the databases on a host. From the output of that first discovery, a new “discovery prototype” of type “Nested” can now be created. This second-level discovery can then parse the data from the first one to find all the tablespaces specific to each discovered database.

The result? Zabbix first discovers DB1 and DB2, and then it automatically runs another discovery for each of them, creating items for every single tablespace (like TS1 for DB1, TS2 for DB1, etc.). This allows for an incredible level of granularity and automation, especially in complex environments like database clusters or containerized applications. This is a true game-changer.

And it doesn’t stop there. We can now also have host prototypes within host prototypes. Previously, if you discovered a VMware host and it created new hosts for each virtual machine, those new VM hosts couldn’t run their own discovery to create *more* hosts. Now they can, opening the door for multi-layered infrastructure discovery.
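To make this more concrete, here is a sketch of what a first-level discovery could return for the database example above. The macro names and JSON structure are purely illustrative (not taken from a real template); the point is that each discovered object can carry nested data that a second-level, "Nested" discovery prototype then parses to create per-tablespace items:

```json
[
  {
    "{#DBNAME}": "DB1",
    "tablespaces": [
      { "{#TSNAME}": "TS1" },
      { "{#TSNAME}": "TS2" }
    ]
  },
  {
    "{#DBNAME}": "DB2",
    "tablespaces": [
      { "{#TSNAME}": "TS1" }
    ]
  }
]
```

The first discovery iterates over the databases, while the nested prototype iterates over each database's `tablespaces` array, so every tablespace gets its own items without any manual configuration.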

A Smarter Way to Onboard: The New Host Wizard

How many times have new users felt a bit lost when adding their first host to Zabbix? What hostname do I use? How do I configure the agent? The new Host Wizard solves this beautifully.

Found under Data Collection -> Host Wizard, this feature provides a step-by-step guide to get your monitoring up and running. Here’s a quick walkthrough:

  1. Select a Template: You start by searching for the type of monitoring you need (e.g., “Linux”). The wizard will show you compatible templates. Note that not all templates are updated for the wizard yet, but the main ones for Linux, Windows, AWS, Azure, databases, and more are already there.
  2. Define the Host: You provide a hostname and assign it to a host group, just like before, but in a much more guided way.
  3. Configure the Agent: This is where the magic happens. For an active agent, for example, you input your Zabbix server/proxy IP and configure security (like a pre-shared key). The wizard then generates a complete installation script for you to copy and paste directly into your Linux or Windows shell! This script handles everything—installing the agent, configuring the server address, and setting up the keys. It’s incredibly convenient.
  4. Fine-Tune and Deploy: The final step shows you the configurable macros for the template in a clean, human-readable format, making it easy to adjust thresholds before you create the host.
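The generated script essentially automates what you would otherwise configure by hand in the agent's configuration file. For an active agent secured with a pre-shared key, the relevant directives in `zabbix_agent2.conf` look roughly like this (all values below are placeholders, not output from the actual wizard):

```ini
Server=192.0.2.10            # Zabbix server or proxy allowed for passive checks
ServerActive=192.0.2.10      # Zabbix server or proxy for active checks
Hostname=my-linux-host       # must match the host name defined in the wizard
TLSConnect=psk               # encrypt outgoing connections with a pre-shared key
TLSAccept=psk                # accept only PSK-encrypted incoming connections
TLSPSKIdentity=wizard-psk-01
TLSPSKFile=/etc/zabbix/zabbix_agent2.psk
```

Having the wizard produce a ready-to-run script that writes this for you removes one of the classic stumbling blocks for newcomers.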

A quick heads-up: I did notice a small bug where the wizard’s script currently installs Zabbix Agent 7.2 instead of 7.4. I’ve already opened a ticket, and I’m sure the Zabbix team will have it fixed in a patch release like 7.4.1 very soon.

Dashboard and Visualization Upgrades

Real-Time Editing and a Fresh Look

Dashboards have received a major usability boost. You no longer have to click “Edit,” make a change to a widget, save it, and then see the result. Now, all changes are applied in real-time as you configure the widget. If you thicken a line in a graph, you see it thicken instantly. This makes dashboard creation so much faster and more intuitive.

Furthermore, Zabbix has introduced color palettes for graphs. Gone are the days of having multiple metrics on a graph with nearly identical shades of the same color. You can now choose a palette that assigns distinct, pleasant, and easily recognizable colors to each item, making your graphs far more readable.

The New Item Card Widget

There’s a new widget in town called the Item Card. When used with something like the Host Navigator widget, you can select a host, then select a specific item (like CPU Utilization), and the Item Card will populate with detailed information about that item: its configuration, recent values, a mini-graph, and any associated triggers. It’s a fantastic way to get a quick, focused overview of a specific metric.

Powerful Enhancements for Maps and Monitoring

Maps Get a Major Overhaul

Maps are now more powerful and visually appealing than ever. Here are the key improvements:

  • Element Ordering: Finally, we can control the Z-index of map elements! You can bring elements to the front or send them to the back. This means you can create a background image of a server rack and place your server icons perfectly on top of it, which was impossible to do reliably before.
  • Auto-Hiding Labels: To clean up busy maps, labels can now be set to appear only when you hover your mouse over an element.
  • Dynamic Link Indicators: The lines connecting elements on a map are no longer just tied to trigger status. You can now have their color or style change based on an item’s value, allowing you to visualize things like link bandwidth utilization directly on your map.

More Control with New Functions and Security

Zabbix 7.4 also brings more power under the hood:

  • OAuth 2.0 Support: You can now easily configure email notifications using Gmail and Office 365, as Zabbix provides a wizard to handle the OAuth 2.0 authentication.
  • Frontend-to-Server Encryption: For security-conscious environments, you can now enable encryption for the communication between the Zabbix web frontend and the Zabbix server.
  • New Time-Based Functions: Functions like first.clock and last.clock have been added, giving us more power to correlate events based on their timestamps, especially when working with logs.

Small Changes, Big Impact: Quality of Life Improvements

Sometimes it’s the little things that make the biggest difference in our day-to-day work. Zabbix 7.4 is full of them:

  • Inline Form Validation: When creating an item or host, Zabbix now instantly highlights any required fields you’ve missed, preventing errors before you even try to save.
  • Copy Button for Test Output: When you test a preprocessing step and get a large JSON output, there’s now a simple “Copy” button. No more struggling to select all the text in the small window!
  • New Templates: The library of official templates continues to grow, with notable additions for enterprise hardware like Pure Storage.

Final Thoughts

Zabbix 7.4 is a massive step forward. From the revolutionary nested discovery to the user-friendly Host Wizard and the countless usability improvements, this release offers something for everyone. It makes Zabbix both more powerful for seasoned experts and more accessible for newcomers.

What do you think of this new release? Is there a feature you’re particularly excited about, or something you’d like me to cover in more detail? The nested discovery part can be complex, so I’m happy to discuss it further. Let me know your thoughts in the comments below!

And with that, that’s all for today. See you next week!


Don’t forget to engage with the community:

  • Subscribe to my YouTube Channel: Quadrata
  • Join the discussion on the Zabbix Italia Telegram Channel: ZabbixItalia
