
My Deep Dive into NetLockRMM: The Open-Source RMM You’ve Been Waiting For

Good morning everyone, I’m Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the fantastic world of open source and IT. If you’re managing multiple systems, you know the challenge: finding a reliable, centralized way to monitor and control everything without breaking the bank. Proprietary solutions can be costly, and the open-source landscape for this has been somewhat limited.

That’s why this week, I’m excited to show you a new product that tackles this problem head-on. It’s an open-source tool called NetLockRMM, and it’s designed to solve the exact problem of remote device management.

What is NetLockRMM?

The "RMM" in NetLockRMM stands for Remote Monitoring and Management. It's a self-hosted solution that gives you a single web portal to manage and control your remote hosts. Whether you're dealing with servers or desktops running Windows, Linux, or macOS, this tool aims to bring them all under one roof. For those of us who use tools like Zabbix to manage numerous proxies or server installations, the idea of a single point of control is incredibly appealing.

Here are some of the key features it offers:

  • Cross-Platform Support: Agents are available for Windows, Linux, and macOS, covering most use cases.
  • System Monitoring: Keep an eye on vital parameters like CPU, RAM, and disk usage. While it’s not a full-fledged monitoring system like Zabbix, it provides a great overview for standard requirements.
  • Remote Control: Access a remote shell, execute commands, and even get full remote desktop access to your Windows machines directly from your browser.
  • File Transfer: Easily upload or download files to and from your managed hosts.
  • Automation: Schedule tasks and run scripts across your fleet of devices to automate maintenance and checks.
  • Multi-Tenancy: Manage different clients or departments from within the same instance.

Getting Started: The Installation and Setup Guide

One of the best parts about NetLockRMM is how simple it is to get up and running. Here’s a step-by-step guide to get you started.

Prerequisites

All you really need is a system with Docker installed. The entire application stack runs in containers, making deployment clean and isolated. If you plan to access the portal from the internet, you’ll also need a domain name (FQDN).

Step 1: The Initial Installation

The development team has made this incredibly easy. The official documentation points to a single Bash script that automates the setup.

  1. Download the installation script from their repository (https://docs.netlockrmm.com/en/server-installation-docker).
  2. Make it executable (e.g., chmod +x /home/docker-compose-quick-setup.sh).
  3. Run the script. It will ask you a few questions to configure your environment, such as the FQDN you want to use and the ports for the services.
  4. The script will then generate the necessary docker-compose.yml file and, if you choose, deploy the containers for you.

While you can easily manage the deployment from the command line, I’m getting quite fond of using a handy container management tool to deploy my stacks, which makes the process even more convenient.

Step 2: Activating Your Instance

Here’s an important point. While NetLockRMM is open-source, the developers have a fair model to support their work. To fully unlock its features, you need to get a free API key.

  1. Go to the official NetLockRMM website and sign up for an account.
  2. Choose the On-Premise Open Source plan. It’s free and allows you to manage up to 25 devices, which is very generous for home labs or small businesses.
  3. In your portal dashboard, navigate to “My Product” to find your API key.
  4. In your self-hosted NetLockRMM instance, go to Settings > System and paste the Member Portal API Key.

Without this step, the GUI will work, but you won’t be able to add any new hosts. So, make sure you do this first!

Step 3: Deploying Your First Agent

With the server running and activated, it’s time to add your first machine.

  1. In the NetLockRMM dashboard, click the deployment icon in the top navigation bar.
  2. Create a new agent configuration or use the default. This is where you’ll tell the agent how to connect back to your server.
  3. This is critical: For the API, App, and other URLs, make sure you enter the full FQDN, including the port number (e.g., https://rmm.yourdomain.com:443). The agent won’t assume the default port, and it won’t work without it.
  4. Select the target operating system (Windows, Linux, etc.) and download the customized installer.
  5. Run the installer on your target machine.
  6. Back in the NetLockRMM dashboard, the new machine will appear in the Unauthorized Hosts list. Simply authorize it to add it to your fleet of managed devices.

Exploring the Key Features in Action

Once an agent is authorized, you can click on it to see a wealth of information and tools. You get a summary of the OS, hardware specs, firewall status, and uptime. You can also browse running processes in the task manager and see a list of services.

Powerful Remote Control

The remote control features are where NetLockRMM truly shines. For Windows, the remote desktop access is fantastic. It launches a session right in your browser, giving you full GUI control without needing any other software. It’s fast, responsive, and incredibly useful.

For Linux, the remote terminal is currently more of a command-execution tool than a fully interactive shell, but it’s perfect for running scripts or a series of commands. You can also browse the file system and transfer files on all supported platforms.

Automation and Scripting

The automation section allows you to create policies and jobs that run on a schedule. You can define checks for disk space, running services, or even script your own checks. There’s also a growing library of community scripts you can use for common tasks, like running system updates on Ubuntu.
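To give a feel for what one of those checks might look like, here is a minimal disk-space check in the spirit of the community script library. This is my own sketch, not an official NetLockRMM script; the function name, threshold, and output format are invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical disk-space check, similar in spirit to a NetLockRMM
# community script. Prints OK or WARNING based on a usage threshold.

check_disk() {
  local mount="${1:-/}" threshold="${2:-90}"
  # df -P gives a stable, single-line-per-filesystem format;
  # column 5 is the "Use%" value, e.g. "42%"
  local used
  used=$(df -P "$mount" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $mount is ${used}% full (threshold ${threshold}%)"
    return 1
  fi
  echo "OK: $mount is ${used}% full"
}

# Run the check for the root filesystem; the exit code signals the result.
check_disk "${1:-/}" "${2:-90}"
```

Scheduled as a recurring job in the automation section, a small script like this turns a one-liner into a fleet-wide check.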

My Final Thoughts: A Promising Future

NetLockRMM is a young but incredibly promising project. It’s under very active development—when I checked their GitHub, the last release was just a few days ago! This shows a dedicated team working to improve the product.

It fills a significant gap in the open-source ecosystem, providing a powerful, modern, and easy-to-use RMM solution that can compete with paid alternatives. While there are a few cosmetic bugs and rough edges, the core functionality is solid and works well.

I believe that with community support—through feedback, bug reports, and contributions—this tool could become something truly special. I’ve already given them a star on GitHub, and I encourage you to check them out too.


I hope I’ve shown you something new and interesting today. This is exactly the kind of project we love to see in the open-source world.

But what do you think? Have you tried NetLockRMM, or do you use another open-source alternative for remote management? I’d love to hear your thoughts and recommendations in the comments below. Every comment helps me and the rest of the community learn.

And as always, if you enjoyed this deep dive, please subscribe to the channel for more content like this. See you next week with another episode!

Bye everyone, from Dimitri.


Taming Your Containers: A Deep Dive into Komodo, the Ultimate Open-Source Management GUI

Hello everyone, Dimitri Bellini here, and welcome back to Quadrata, my corner of the internet dedicated to the world of open-source and IT. If you’re like me, you love the power and flexibility of containers. But let’s be honest, managing numerous containers and multiple hosts purely through the command line can quickly become overwhelming. It’s easy to lose track of what’s running, which services need attention, and how your host resources are holding up.

This week, I stumbled upon a solution that genuinely changed my mood and simplified my workflow: Komodo. It’s an open-source container management platform that is so well-made, I just had to share it with you.

What is Komodo?

At its core, Komodo is an open-source graphical user interface (GUI) designed for the management of containers like Docker, Podman, and others. It provides a centralized dashboard to deploy, monitor, and manage all your containerized applications, whether they are running locally or on remote hosts. The goal is to give you back control and visibility, turning a complex mess of shell commands into a streamlined, intuitive experience.

Key Features That Make Komodo Shine

  • Unified Dashboard: Get a bird’s-eye view of all your hosts and the containers running on them. Komodo elegantly displays resource usage (CPU, RAM, Disk Space), operating system details, and more, all in one place.
  • Multi-Host Management: Komodo uses a core-periphery architecture. You install the main Komodo instance on one server and a lightweight agent on any other hosts you want to manage. This allows you to control a “cluster” of machines from a single, clean web interface.
  • Effortless Deployments: You can deploy applications (which Komodo calls “stacks”) directly from Docker Compose files. Whether you paste the code into the UI, point to files on the server, or link a Git repository, Komodo handles the rest.
  • Automation and CI/CD: Komodo includes features for building images directly from your source code repositories and creating automated deployment procedures that can be triggered by webhooks.
  • Advanced User Management: You can create multiple users and groups, and even integrate with external authentication providers like GitHub, Google, or any OIDC provider.

How Does Komodo Compare to the Competition?

Many of you are probably familiar with Portainer. It has been a fantastic solution for years, but its focus has shifted towards Kubernetes, and the free Community Edition has become somewhat limited compared to its commercial offerings. Portainer pioneered the agent-based multi-host model, which Komodo has adopted and refined.

On the other end of the spectrum is Dockge, a much simpler tool focused on managing Docker Compose files on a single host. It’s a great, lightweight option, but Komodo offers a far more comprehensive suite of features for those managing a more complex environment.

Getting Started with Komodo: A Step-by-Step Guide

One of the best things about Komodo is how easy it is to set up. All you need is a machine with Docker installed.

1. Installation

The official documentation makes this incredibly simple (https://komo.do/docs/setup/mongo). The installation is, fittingly, container-based.

  1. Create a dedicated directory for your Komodo installation.
  2. Download the docker-compose.yml and .env files provided on the official Komodo website. Komodo uses a database to store its configuration, giving you the choice between MongoDB (the long-standing default) or FerretDB (a PostgreSQL-based alternative). For a simple start, the default files work perfectly.
  3. Run the following command in your terminal:
    docker-compose --env-file ./.env up -d

And that’s it! Komodo, its agent (periphery), and its database will start up as containers on your machine.

2. First-Time Setup

Once the containers are running, navigate to your server’s IP address on port 9120 (e.g., http://YOUR_SERVER_IP:9120). The first time you access the UI, it will prompt you to create an administrator account. Simply enter your desired username and password, and you’ll be logged into the main dashboard.

Exploring the Komodo Dashboard and Deploying an App

The dashboard is clean and intuitive. You’ll see your server(s) listed. The host where you installed Komodo is automatically added. You can easily add more remote hosts by installing the Komodo agent on them using a simple command provided in the UI.
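For reference, deploying the agent on a remote host with Docker might look roughly like the compose sketch below. Treat this as an assumption-laden illustration: the image name and port are based on Komodo's documented defaults at the time of writing, so double-check them against the official docs before using it.

```yaml
# Hypothetical periphery (agent) deployment on a remote host.
services:
  periphery:
    image: ghcr.io/moghtech/komodo-periphery:latest  # image name assumed, verify in the docs
    ports:
      - "8120:8120"  # default periphery port, verify in the docs
    volumes:
      # The agent needs the Docker socket to manage containers on this host.
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```

Once the agent is reachable, you register the host in the Komodo UI by pointing it at that address and port.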

Deploying Your First Stack (Draw.io)

Let’s deploy a simple application to see Komodo in action. A stack is essentially a project defined by a Docker Compose file.

  1. From the main dashboard, navigate to Stacks and click New Stack.
  2. Give your stack a name, for example, drawio-app.
  3. Click Configure. Select the server you want to deploy to.
  4. For the source, choose UI Defined. This allows you to paste your compose file directly.
  5. In the Compose file editor, paste the configuration for a Draw.io container. Here’s a simple example:
    services:
      drawio:
        image: jgraph/drawio
        ports:
          - "8080:8080"
          - "8443:8443"
        restart: unless-stopped

  6. Click Save and then Update.
  7. The stack is now defined, but not yet running. Click the Deploy button and confirm. Komodo will pull the image and start your container.

You can now see your running service, view its logs, and access the application by clicking the port link—all from within the Komodo UI. It’s incredibly slick!

Modifying a Stack

Need to change a port or an environment variable? It’s just as easy. Simply edit the compose file in the UI, save it, and hit Redeploy. Komodo will gracefully stop the old container and start the new one with the updated configuration.

Final Thoughts

I have to say, I’m thoroughly impressed with Komodo. It strikes a perfect balance between simplicity and power. It provides the deep visibility and control that power users need without a steep learning curve. The interface is polished, the feature set is rich, and the fact that it’s a thriving open-source project makes it even better.

I’ll definitely be adopting Komodo to manage the entropy on my own servers. It’s a fantastic piece of software that I can wholeheartedly recommend to anyone working with containers.

But that’s my take. What do you think? Have you tried Komodo, or do you use another tool for managing your containers? I’d love to hear your thoughts and suggestions in the comments below. Your ideas might even inspire a future video!

That’s all for today. A big salute from me, Dimitri, and I’ll see you next week!


Don’t forget to subscribe to my YouTube channel for more open-source content:

Quadrata on YouTube

Join our community on Telegram:

ZabbixItalia Telegram Channel


Unlock Your Servers from Anywhere: A Deep Dive into Apache Guacamole

Good morning everyone, Dimitri Bellini here! Welcome back to Quadrata, my channel dedicated to the open-source world and the IT that I love—and that you, my viewers, clearly enjoy too.

In this post, we’re diving into a tool that’s a bit esoteric but incredibly powerful, something I first used years ago and have recently had the chance to rediscover: Apache Guacamole. No, it’s not a recipe for your next party; it’s a fantastic open-source tool that allows you to connect to your applications, shells, and servers using nothing more than a web browser.

What is Apache Guacamole?

At its core, Guacamole is a clientless remote desktop gateway. This means you can access your remote machines—whether they use RDP, SSH, VNC, or even Telnet—directly from Chrome, Firefox, or any modern HTML5 browser. Imagine needing to access a server while you’re away from your primary workstation. Instead of fumbling with VPNs and installing specific client software, you can just open a browser on your laptop, tablet, or even your phone and get full access. It’s a game-changer for convenience and accessibility.

The architecture is straightforward but robust. Your browser communicates with a web application (running on Tomcat), which in turn talks to the Guacamole daemon (`guacd`). This daemon acts as a translator, establishing the connection to your target machine using its native protocol (like RDP or SSH) and streaming the display back to your browser as simple HTML5.
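Sketched as a pipeline, that flow looks like this:

```text
Browser (any HTML5 browser)
      |  HTTP / WebSocket
Guacamole web app (Tomcat)
      |  Guacamole protocol
guacd daemon
      |  native protocol (RDP / SSH / VNC / Telnet)
Target machine
```

Because only the web app is exposed to the user, the target machines never need to be directly reachable from the client.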

Key Features That Make Guacamole Stand Out

Guacamole isn’t just a simple proxy; it’s packed with enterprise-grade features that make it suitable for a wide range of use cases:

  • Broad Protocol Support: It natively supports VNC, RDP, SSH, and Telnet, covering most of your remote access needs.
  • Advanced Authentication: You can integrate it with various authentication systems, including Active Directory, LDAP, and even Two-Factor Authentication (2FA), to secure access.
  • Granular Permissions: As an administrator, you can define exactly which users or groups can access which connections.
  • Centralized Logging & Screen Recording: This is a huge feature for security and compliance. Guacamole can log all activity and even record entire user sessions as videos, providing a complete audit trail of who did what and when.
  • Screen Sharing: Need to collaborate on a problem? You can share your active session with a colleague by simply sending them a link. You can both work in the same shell or desktop environment simultaneously.

Surprising Powerhouse: Where You’ve Already Seen Guacamole

While it might not be a household name, you’ve likely used Guacamole without even realizing it. It’s the powerful engine behind several major commercial products, including:

  • Microsoft Azure Bastion
  • FortiGate SSL Web VPN
  • CyberArk PSM Gateway

The fact that these major security and cloud companies build their products on top of Guacamole is a massive testament to its stability and power.

Getting Started: Installation with Docker

The easiest and most recommended way to get Guacamole up and running is with Docker. In the past, this meant compiling various components, but today, it’s a much simpler process. You’ll need three containers:

  1. guacamole/guacd: The native daemon that handles the protocol translations.
  2. guacamole/guacamole: The web application front-end.
  3. A Database: PostgreSQL or MySQL to store user and connection configurations.

Important Note: I found that the official docker-compose.yml file in the documentation can be problematic. The following method is based on a community-provided configuration that works flawlessly with the latest versions.

Step 1: Create a Directory and Initialize the Database

First, create a directory for your Guacamole configuration and data. Then, run the following command to have Guacamole’s container initialize the database schema for you. This script will pull the necessary SQL files and set up the initial database structure.


mkdir -p /opt/guacamole/db-init
docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --postgres > /opt/guacamole/db-init/initdb.sql

Step 2: Create Your Docker Compose File

Inside your main directory, create a file named docker-compose.yml. This file will define the three services we need to run.


services:
  guacd:
    container_name: guacd
    image: guacamole/guacd
    volumes:
      - /opt/guacamole/drive:/drive:rw
      - /opt/guacamole/record:/record:rw
    networks:
      - guacamole
    restart: always

  guacdb:
    container_name: guacdb
    image: postgres:15.2-alpine
    environment:
      PGDATA: /var/lib/postgresql/data/guacamole
      POSTGRES_DB: guacamole_db
      POSTGRES_USER: guacamole_user
      POSTGRES_PASSWORD: YourStrongPasswordHere
    volumes:
      - /opt/guacamole/db-init:/docker-entrypoint-initdb.d:z
      - /opt/guacamole/data:/var/lib/postgresql/data:Z
    networks:
      - guacamole
    restart: always

  guacamole:
    container_name: guac-guacamole
    image: guacamole/guacamole
    depends_on:
      - guacd
      - guacdb
    environment:
      GUACD_HOSTNAME: guacd
      POSTGRESQL_HOSTNAME: guacdb
      POSTGRESQL_DATABASE: guacamole_db
      POSTGRESQL_USER: guacamole_user
      POSTGRESQL_PASSWORD: YourStrongPasswordHere
      RECORDING_SEARCH_PATH: /record
      # uncomment if you're behind a reverse proxy
      # REMOTE_IP_VALVE_ENABLED: true
      # uncomment to disable brute-force protection entirely
      # BAN_ENABLED: false
      # https://guacamole.apache.org/doc/gug/guacamole-docker.html#running-guacamole-behind-a-proxy
    volumes:
      - /opt/guacamole/record:/record:rw
    networks:
      - guacamole
    ports:
      - 8080:8080/tcp
    restart: always

networks:
  guacamole:
    name: guacamole

Be sure to change YourStrongPasswordHere to a secure password!

Step 3: Launch Guacamole

Now, from your terminal in the same directory, simply run:


docker-compose up -d

Docker will pull the images and start the three containers. In a minute or two, your Guacamole instance will be ready!

Your First Connection: A Quick Walkthrough

Once it’s running, open your browser and navigate to http://YOUR_SERVER_IP:8080/guacamole/.

The default login credentials are:

  • Username: guacadmin
  • Password: guacadmin

After logging in, head to Settings > Connections to add your first remote machine. Click “New Connection” and fill out the details. For an SSH connection, you’ll set the protocol to SSH and enter the hostname/IP, username, and password. For Windows RDP, you’ll do the same but may also need to check the “Ignore Server Certificate” box under the Parameters section if you’re using a self-signed certificate.

Once saved, your new connection will appear on your home screen. Just click it, and you’ll be dropped right into your remote session, all within your browser tab. You can have multiple sessions open at once and switch between them like browser tabs. To access features like the clipboard or file transfers, use the Ctrl+Alt+Shift key combination to open the Guacamole side menu.

A True Game-Changer for Remote Access

As you can see, Apache Guacamole is an incredibly versatile and powerful tool. Whether you’re a system administrator who needs a centralized access point, a developer working remotely, or a company looking to enhance security with a bastion host and session recording, it’s a solution that is both elegant and effective.

I highly recommend giving it a try. It’s one of those open-source gems that can fundamentally improve your workflow.

What are your thoughts? Have you used Guacamole or a similar tool before? Let me know in the comments below! And if you found this guide helpful, don’t forget to share it.


Thank you for reading! For more content on open-source and IT, make sure to subscribe to the channel.

YouTube Channel: Quadrata

Join our community on Telegram: ZabbixItalia Telegram Channel

See you in the next one!


Zabbix 7.4 is Here! A Deep Dive into the Game-Changing New Features

Good morning, everyone! It’s Dimitri Bellini, and welcome back to Quadrata, my channel dedicated to the world of open-source IT. It’s an exciting week because our good friend, Zabbix, has just rolled out a major new version: Zabbix 7.4! After months of hard work from the Zabbix team, this release is packed with features that will change the way we monitor our infrastructure. So, let’s dive in together and explore what’s new.

The Headline Feature: Nested Low-Level Discovery

Let’s start with what I consider the most mind-bending new feature: nested low-level discovery (LLD). Until now, LLD was fantastic for discovering objects like file systems or network interfaces on a host. But we couldn’t go deeper. If you discovered a database, you couldn’t then run another discovery *within* that database to find all its tablespaces dynamically.

With Zabbix 7.4, that limitation is gone! I’ve set up a demo to show you this in action. I created a discovery rule that finds all the databases on a host. From the output of that first discovery, a new “discovery prototype” of type “Nested” can now be created. This second-level discovery can then parse the data from the first one to find all the tablespaces specific to each discovered database.

The result? Zabbix first discovers DB1 and DB2, and then it automatically runs another discovery for each of them, creating items for every single tablespace (like TS1 for DB1, TS2 for DB1, etc.). This allows for an incredible level of granularity and automation, especially in complex environments like database clusters or containerized applications. This is a true game-changer.
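To make that concrete, imagine the first discovery rule returning a payload shaped like the one below. The macro names and structure here are purely illustrative, not taken from a real template; the point is that each database object carries the nested data the second-level discovery will parse:

```json
[
  {
    "{#DBNAME}": "DB1",
    "tablespaces": [
      { "{#TSNAME}": "TS1" },
      { "{#TSNAME}": "TS2" }
    ]
  },
  {
    "{#DBNAME}": "DB2",
    "tablespaces": [
      { "{#TSNAME}": "TS1" }
    ]
  }
]
```

The first-level rule iterates over the database objects, while a nested discovery prototype points at each entry's embedded tablespaces array, creating items per tablespace without any extra polling of the host.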

And it doesn’t stop there. We can now also have host prototypes within host prototypes. Previously, if you discovered a VMware host and it created new hosts for each virtual machine, those new VM hosts couldn’t run their own discovery to create *more* hosts. Now they can, opening the door for multi-layered infrastructure discovery.

A Smarter Way to Onboard: The New Host Wizard

How many times have new users felt a bit lost when adding their first host to Zabbix? What hostname do I use? How do I configure the agent? The new Host Wizard solves this beautifully.

Found under Data Collection -> Host Wizard, this feature provides a step-by-step guide to get your monitoring up and running. Here’s a quick walkthrough:

  1. Select a Template: You start by searching for the type of monitoring you need (e.g., “Linux”). The wizard will show you compatible templates. Note that not all templates are updated for the wizard yet, but the main ones for Linux, Windows, AWS, Azure, databases, and more are already there.
  2. Define the Host: You provide a hostname and assign it to a host group, just like before, but in a much more guided way.
  3. Configure the Agent: This is where the magic happens. For an active agent, for example, you input your Zabbix server/proxy IP and configure security (like a pre-shared key). The wizard then generates a complete installation script for you to copy and paste directly into your Linux or Windows shell! This script handles everything—installing the agent, configuring the server address, and setting up the keys. It’s incredibly convenient.
  4. Fine-Tune and Deploy: The final step shows you the configurable macros for the template in a clean, human-readable format, making it easy to adjust thresholds before you create the host.

A quick heads-up: I did notice a small bug where the wizard’s script currently installs Zabbix Agent 7.2 instead of 7.4. I’ve already opened a ticket, and I’m sure the Zabbix team will have it fixed in a patch release like 7.4.1 very soon.

Dashboard and Visualization Upgrades

Real-Time Editing and a Fresh Look

Dashboards have received a major usability boost. You no longer have to click “Edit,” make a change to a widget, save it, and then see the result. Now, all changes are applied in real-time as you configure the widget. If you thicken a line in a graph, you see it thicken instantly. This makes dashboard creation so much faster and more intuitive.

Furthermore, Zabbix has introduced color palettes for graphs. Gone are the days of having multiple metrics on a graph with nearly identical shades of the same color. You can now choose a palette that assigns distinct, pleasant, and easily recognizable colors to each item, making your graphs far more readable.

The New Item Card Widget

There’s a new widget in town called the Item Card. When used with something like the Host Navigator widget, you can select a host, then select a specific item (like CPU Utilization), and the Item Card will populate with detailed information about that item: its configuration, recent values, a mini-graph, and any associated triggers. It’s a fantastic way to get a quick, focused overview of a specific metric.

Powerful Enhancements for Maps and Monitoring

Maps Get a Major Overhaul

Maps are now more powerful and visually appealing than ever. Here are the key improvements:

  • Element Ordering: Finally, we can control the Z-index of map elements! You can bring elements to the front or send them to the back. This means you can create a background image of a server rack and place your server icons perfectly on top of it, which was impossible to do reliably before.
  • Auto-Hiding Labels: To clean up busy maps, labels can now be set to appear only when you hover your mouse over an element.
  • Dynamic Link Indicators: The lines connecting elements on a map are no longer just tied to trigger status. You can now have their color or style change based on an item’s value, allowing you to visualize things like link bandwidth utilization directly on your map.

More Control with New Functions and Security

Zabbix 7.4 also brings more power under the hood:

  • OAuth 2.0 Support: You can now easily configure email notifications using Gmail and Office 365, as Zabbix provides a wizard to handle the OAuth 2.0 authentication.
  • Frontend-to-Server Encryption: For security-conscious environments, you can now enable encryption for the communication between the Zabbix web frontend and the Zabbix server.
  • New Time-Based Functions: Functions like first.clock and last.clock have been added, giving us more power to correlate events based on their timestamps, especially when working with logs.

Small Changes, Big Impact: Quality of Life Improvements

Sometimes it’s the little things that make the biggest difference in our day-to-day work. Zabbix 7.4 is full of them:

  • Inline Form Validation: When creating an item or host, Zabbix now instantly highlights any required fields you’ve missed, preventing errors before you even try to save.
  • Copy Button for Test Output: When you test a preprocessing step and get a large JSON output, there’s now a simple “Copy” button. No more struggling to select all the text in the small window!
  • New Templates: The library of official templates continues to grow, with notable additions for enterprise hardware like Pure Storage.

Final Thoughts

Zabbix 7.4 is a massive step forward. From the revolutionary nested discovery to the user-friendly Host Wizard and the countless usability improvements, this release offers something for everyone. It makes Zabbix both more powerful for seasoned experts and more accessible for newcomers.

What do you think of this new release? Is there a feature you’re particularly excited about, or something you’d like me to cover in more detail? The nested discovery part can be complex, so I’m happy to discuss it further. Let me know your thoughts in the comments below!

And with that, that’s all for today. See you next week!


Don’t forget to engage with the community:

  • Subscribe to my YouTube Channel: Quadrata
  • Join the discussion on the Zabbix Italia Telegram Channel: ZabbixItalia
