OpenObserve: A High-Performance, Modern Observability Platform

Good morning, everyone! Dimitri Bellini here, and welcome back to Quadrata, my channel dedicated to the open-source world and the IT I love. As you know, I’m a big fan of my friend Zabbix, but it’s crucial to keep our eyes on the horizon, understand where the world is moving, and explore new solutions that meet the demands of our customers and the community.

That’s why today, I want to introduce you to a solution I’ve had the pleasure of getting to know: OpenObserve. It’s another powerful tool in the observability space, but it approaches the task in a refreshingly different way.

What is OpenObserve and Why Should You Care?

OpenObserve is a cloud-native, open-source observability platform designed to be a unified backend for your logs, metrics, and traces. Think of it as a lightweight yet powerful alternative to heavyweights like Elasticsearch, Splunk, or Datadog. It tackles a key challenge many of us face: consolidating different monitoring tools into a single, cohesive platform.

Instead of juggling separate tools like Prometheus for metrics, Loki for logs, and Jaeger for traces, OpenObserve brings everything under one roof. This unified approach simplifies your workflow and provides a single pane of glass to view the health of your entire infrastructure.

The Game-Changing Features

What really caught my attention are the core functionalities that make OpenObserve stand out:

  • Massive Cost Reduction: This is a big one. By using a specific format called Parquet and a stateless architecture that leverages object storage (like S3, MinIO, or even a local disk), OpenObserve can drastically reduce storage costs. They claim it can be up to 140 times lower than Elasticsearch! For anyone managing hundreds of gigabytes of data per day, this is a revolutionary benefit.
  • Blazing-Fast Performance: The entire engine is written in Rust. We’ve heard a lot about Rust, especially in the Linux kernel world, and for good reason. It’s an incredibly optimized and efficient language. This means OpenObserve can ingest a massive amount of data with a significantly lower memory and CPU footprint compared to Java-based solutions.
  • Simplified Querying: If you’re comfortable with SQL, you’ll feel right at home. OpenObserve allows you to query your logs using standard SQL-based syntax, which dramatically lowers the learning curve. For metrics, it also supports PromQL, giving you the best of both worlds.
  • Native OpenTelemetry Support: It seamlessly integrates with OpenTelemetry, the emerging standard for collecting traces and metrics. This makes it incredibly easy to instrument your applications, whether they’re written in Go, Python, or another language, and start sending data to OpenObserve.
  • Real-time Alerting: Right from the UI, you can define alerts based on log patterns or metric thresholds, similar to what you might do in Prometheus.
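
To make the SQL querying concrete, here is a minimal sketch of what a log search looks like over OpenObserve's HTTP search API. The request shape follows the published REST docs, but the host, organization ("default"), stream name, and credentials below are assumptions for illustration:

```python
import base64
import json

# Sketch: building a SQL log-search request for OpenObserve's HTTP API.
# Endpoint path and payload shape follow the published REST docs; the
# host, org ("default"), stream name, and credentials are assumptions.
ZO_URL = "http://localhost:5080"
ORG = "default"

def build_search_request(sql, start_us, end_us,
                         user="admin@example.com",
                         password="Complexpass#123"):
    """Return (url, headers, body) for a SQL search over ingested logs."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "query": {
            "sql": sql,
            "start_time": start_us,  # microseconds since epoch
            "end_time": end_us,
        }
    })
    return f"{ZO_URL}/api/{ORG}/_search", headers, body

url, headers, body = build_search_request(
    "SELECT count(*) AS hits FROM nginx_logs WHERE status >= 500",
    1700000000000000, 1700003600000000,
)
```

Notice that the query itself is plain SQL: anyone who can write a `SELECT ... WHERE ... GROUP BY` can search their logs, which is exactly the lowered learning curve mentioned above.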

Under the Hood: The Technology Stack

I always believe it’s fundamental to understand the components of a solution to appreciate its engineering. OpenObserve is built on a stack of impressive open-source technologies:

  • Rust: The core language, providing memory safety and high performance.
  • Apache Arrow DataFusion: A powerful query engine that enables the SQL support on top of Parquet files.
  • Apache Parquet: A columnar storage format developed by the Apache Foundation that allows for incredible data compression and efficient querying.
  • NATS: A lightweight and high-performance messaging system used for communication and coordination between nodes in a clustered setup.
  • Vue.js: The framework used to build the modern and reactive web interface.
  • SQLite / PostgreSQL: SQLite is used for metadata in simple, standalone deployments, while PostgreSQL is recommended for robust, high-availability production environments.

Getting Started with OpenObserve

One of the best parts is how easy it is to get started. For testing and simple use cases, you just need Docker. The architecture is straightforward: collectors like FluentBit, Vector, or OpenTelemetry send data to your OpenObserve container, which writes to a local disk. This simple setup can already handle an impressive ingestion rate of over 2TB of data per day on a single machine.

For high-availability (HA) production environments, the architecture scales out using Kubernetes, with distinct roles for routers, ingesters, queriers, and more, all coordinated by NATS and backed by object storage.

A Quick Tutorial: Installation with Docker

You can get a test environment running in minutes. It’s as simple as running a single Docker command. Here is the command I used, which you can customize with your own user and password:


docker run -d --name openobserve \
-p 5080:5080 \
-e ZO_ROOT_USER_EMAIL="admin@example.com" \
-e ZO_ROOT_USER_PASSWORD="Complexpass#123" \
-v /opt/openobserve-data:/data \
public.ecr.aws/zinclabs/openobserve:latest

I manage my containers with a tool that simplifies deployment, where I just fill in the image, ports, environment variables, and volume. It’s incredibly straightforward!
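
Once the container is up, you can push a first test event through the JSON ingestion endpoint to confirm everything works. This sketch uses only the Python standard library; the `/api/{org}/{stream}/_json` path follows the OpenObserve quickstart, while the stream name and credentials mirror the docker run example above:

```python
import base64
import json
import urllib.request
import urllib.error

# Sketch: pushing a test event into OpenObserve's JSON ingestion endpoint.
# The /api/{org}/{stream}/_json path follows the quickstart docs; the
# stream name ("demo_stream") and credentials are assumptions matching
# the docker run example above.
url = "http://localhost:5080/api/default/demo_stream/_json"
token = base64.b64encode(b"admin@example.com:Complexpass#123").decode()
events = [{"level": "info", "service": "checkout", "message": "order placed"}]

req = urllib.request.Request(
    url,
    data=json.dumps(events).encode(),
    headers={"Authorization": f"Basic {token}",
             "Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("ingest status:", resp.status)
except urllib.error.URLError as exc:
    # No local instance running -- the request itself is still well-formed.
    print("could not reach OpenObserve:", exc.reason)
```

A few seconds after a successful POST, the events should appear under the stream in the Logs view.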

A Look at the Dashboard and Final Thoughts

Once you log in, you’re greeted with a clean dashboard showing key stats like ingested events and storage size. The “Data Sources” section is fantastic, providing you with ready-to-use instructions for ingesting data from Kubernetes, Linux, Windows, various databases, and more. This makes the initial setup a breeze.

The log exploration interface will feel familiar to anyone who has used Splunk, with powerful SQL-based querying and on-the-fly filtering. You can visualize metrics, build custom dashboards, analyze application traces with service maps, and even dive into real user monitoring.

What truly impressed me, however, is their licensing model. For self-hosted deployments, you can use the full enterprise version for free for up to 200GB of data ingestion per day. This includes features like single sign-on (SSO) and role-based access control (RBAC). This is a brilliant move that allows smaller teams and environments to leverage the full power of the platform without a cost barrier. A big round of applause to the OpenObserve team for that!

Conclusion: Keep a Close Eye on This One

So, is OpenObserve an interesting solution? Absolutely. It’s a project to watch closely. It has a smart approach: a lightweight solution, free of the bloat of the heavyweights, built with exciting technologies like Rust and Parquet. It seems to have a polish that sets it apart from the many other open-source observability tools out there.

I encourage you to take a look at it. The project is moving fast, and it offers a compelling combination of performance, cost-efficiency, and user-friendliness.

That’s all for today! Let me know your thoughts in the comments below. Do you find these all-in-one observability solutions useful? I’d love to hear from you.

A greeting from Dimitri, see you next week!


Don’t forget to like this video and subscribe to my channel for more open-source content:

My YouTube Channel: Quadrata

Join the conversation on Telegram: Zabbix Italia

Zabbix 8.0 Alpha1 Is Here: A First Look at the Future of Monitoring

Good morning, everyone! Dimitri Bellini here, back with another episode on Quadrata, my channel dedicated to the world of open source and IT. This week, we’re diving into something I know many of you have been eagerly anticipating: the first alpha release of Zabbix 8.0!

This is a major Long-Term Support (LTS) release, and the roadmap is packed with exciting features that promise to reshape how we approach monitoring and observability. So, let’s explore what’s new, what’s coming, and what you can already get your hands on.

The Vision for Zabbix 8.0: A Focus on Observability

Before we get into the specifics of the alpha, it’s worth looking at the grand vision for Zabbix 8.0. The development is heavily focused on expanding into the realm of full-fledged observability. This means more than just collecting metrics; it’s about gaining deeper insights into our systems.

Key areas of development include:

  • OpenTelemetry and Log Ingestion: A huge step forward will be the native handling of OpenTelemetry data and enhanced log ingestion. This requires a robust backend, and Zabbix is exploring solutions like ClickHouse or OpenSearch to manage the massive amount of JSON-structured data that comes with it.
  • Event Correlation: A feature I’m personally very excited about is the advanced event correlation engine. This will be a game-changer for reducing message entropy and alert noise, allowing us to pinpoint root causes more effectively.
  • Enhanced Network Monitoring: We’re also seeing a big push in network monitoring, with support for data streaming via NetFlow and sFlow, tying directly into the broader observability goals.

What’s New in the First Alpha Release?

While the full vision will unfold over the coming months, the first alpha already delivers some fantastic quality-of-life improvements and new functionalities. Here are the highlights that stood out to me.

Finally! Inherited Tags in Latest Data

This is a big one. For a long time, the “Latest Data” page has been a source of frustration because it didn’t inherit tags from templates. If you’ve been in the Zabbix world for a while, you know that since the removal of “Applications,” filtering data for a specific component, like MySQL, became a bit of a chore. The community has been vocal about this, and I’m thrilled to say Zabbix has listened. Now, tags from your templates are visible directly in the Latest Data view, making it incredibly easy to filter and segregate items from the OS versus a specific application.

Streamlined SAML Authentication

For anyone working in an enterprise environment, managing SAML certificates used to be a rather clunky affair: you had to manually place certificate files into the server’s file system. Zabbix 8.0 introduces a much more professional solution: you can now upload and manage SAML certificates directly through the web interface under Administration -> Authentication. This is a small but significant change that simplifies setup, reduces errors, and makes the whole process much cleaner.

New and Improved Templates

Zabbix continues to deliver excellent out-of-the-box templates. This release brings new additions for networking gear from Aruba, Cisco, StormShield, and Vyatta. Furthermore, the Proxmox template has received a much-needed overhaul. With the recent shifts in the virtualization landscape (looking at you, VMware), many are turning to Proxmox, and it’s great to see Zabbix providing a more modern, robust template for it. There are also improvements to the MySQL template, specifically around replication monitoring, and native monitoring support for Ceph storage, which is heavily used in Proxmox environments.

A Fresh Coat of Paint: New Font and UI Tweaks

Zabbix has moved on from the trusty Arial font to a new, more refined typeface. While the difference is subtle, it gives the interface a slightly more modern and elegant feel. You might not notice it at first glance, but it’s part of a continuous effort to improve the user experience.

New Visualization Power: The Scatterplot Widget

Version 8.0 introduces a brand-new dashboard widget: the scatterplot. This might not be for every use case, but it’s incredibly powerful for visualizing the relationship between two different metrics across multiple hosts. For example, you could plot the signal-to-noise ratio for dozens of access points, allowing you to instantly identify outliers and potential issues. It’s a fantastic tool for spotting correlations and anomalies that would be lost in a standard time-series graph.

Under the Hood Improvements

There are also some important changes that improve stability and performance, particularly in large-scale environments:

  • Improved Event Cleanup: When a trigger is deleted, the associated events are now cleaned up immediately, rather than waiting for the housekeeper process.
  • Smarter Proxy Throttling: The logic for how proxies send data to the server when the history cache is under pressure has been revised. This helps prevent data storms from proxies overwhelming the Zabbix server and avoids getting stuck in loops, which could happen in large installations with heavy log monitoring.

What’s Next? Alpha 2 and the Road to Release

The journey to Zabbix 8.0 LTS, expected around mid-2026, is just beginning. Work is already underway for Alpha 2, which is slated to introduce the JSON item type and the ClickHouse backend support—foundational pieces for the observability features we discussed. These additions will be critical for handling streaming data from OpenTelemetry and other sources, truly pushing Zabbix into the next era of monitoring.

I am incredibly excited to see these features develop and to test how they transform our ability to monitor complex, application-centric environments.

What Are You Most Excited About?

That’s a wrap for my first look at Zabbix 8.0 Alpha! From my perspective, the moves toward observability and better event correlation are the most exciting developments. But I want to hear from you!

What features are you most looking forward to? Is it the OpenTelemetry integration, the advanced event correlation, or perhaps the network topology improvements? Let me know in the comments below!

And if you’re not already part of our community, I invite you to join the conversation.

Thanks for tuning in. A big greeting from me, Dimitri, and I’ll see you next week. Bye everyone!

Wren AI: Can You Really Talk to Your Database? An Open-Source Deep Dive

Good morning everyone, Dimitri Bellini here, back on Quadrata, my channel dedicated to the world of open source and IT. This week, we’re diving back into artificial intelligence, but with a practical twist that could change the game for many, especially in the world of business analytics.

I stumbled upon a fascinating open-source solution called Wren AI. Its promise is simple yet powerful: to let you explore complex databases and extract insights using plain, natural language. No more wrestling with intricate SQL queries just to get a simple answer. Intrigued? Let’s take a look at what it can do.

What is Wren AI? The Dawn of Generative Business Intelligence

At its core, Wren AI is an open-source tool for what’s being called Generative Business Intelligence (GenBI). Imagine asking your database, “What was our total revenue by product category last year?” and getting not just a table of numbers, but also the SQL query that generated it and a ready-to-use chart. That’s the magic of Wren AI. It acts as a translator between your human questions and the structured language of your database.

The goal is to empower users who aren’t necessarily SQL wizards, like business analysts or managers, to combine data from various sources and get sensible answers. Anyone who has ever tried to navigate the complex relationships in a database like Zabbix knows that finding the right connections between tables is far from trivial.

Key Features at a Glance

  • Natural Language to SQL & Charts: The main event. Ask questions in English (or other languages) and get back precise SQL queries and visualizations.
  • Broad Database Support: It connects to a wide range of data sources, including Postgres, MySQL, Microsoft SQL Server, CSV files, and more.
  • AI-Powered Insights: It uses Large Language Models (LLMs) to understand your request, analyze the database schema, and generate answers.
  • Semantic Layer: An intelligent layer that analyzes your database schema and relationships, ensuring the LLM has the correct context to generate accurate queries.

My Setup: Going Local with Ollama

To get started, you just need Docker. For the AI brain, you can connect to a cloud service like OpenAI or Gemini, but to complicate things (and for the fun of it!), I decided to run everything locally. I used Ollama to host a powerful inference engine right within my own infrastructure, running the Qwen3 32-billion parameter model. While it’s not the fastest setup, it keeps all my data in-house and proves the concept works without relying on external APIs.
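
Before pointing Wren AI at a local Ollama instance, it is worth checking that Ollama is reachable and the model has actually been pulled. The `/api/tags` endpoint is part of Ollama's documented REST API; the `qwen3:32b` tag is my assumption for the 32-billion parameter model mentioned above:

```python
import json
import urllib.request
import urllib.error

# Sketch: checking that a local Ollama instance is up and has the model
# pulled. /api/tags is part of Ollama's documented REST API; the
# "qwen3:32b" model tag is an assumption for the 32B model used here.
OLLAMA_URL = "http://localhost:11434"

def model_available(name):
    """Return True if `name` appears among the locally pulled models."""
    try:
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
            models = json.load(resp).get("models", [])
    except urllib.error.URLError:
        return False  # Ollama not running (or not reachable)
    return any(m.get("name", "").startswith(name) for m in models)

print("qwen3:32b pulled:", model_available("qwen3:32b"))
```

If this prints False, pull the model first (`ollama pull` with the tag you intend to use) before running the Wren AI installer.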

The installation involves running a script they provide, but since I was using Ollama, I had to do a bit of manual configuration. This involved downloading a specific configuration file for Ollama, customizing it, and setting up some environment variables before launching the installer. Once it’s up, it leaves you with a standard Docker Compose file, so managing the stack of containers becomes straightforward.

Wren AI in Action: From Question to Dashboard

Once installed, the first step is connecting Wren AI to your data source. After providing the credentials for my Postgres test database (a sample database of DVD sales), Wren AI immediately got to work.

1. Schema Discovery

The tool automatically scanned the database, identified all the tables, and even mapped out the relationships between them. This visual representation of the schema is the foundation for everything that follows. You can even add relationships manually if needed before deploying the model.

2. Asking Questions

This is where the fun begins. I started with a simple business question:

“What is the revenue generated by films per category in 2022?”

My local Ollama instance kicked into gear (I could see my GPU usage spike to over 80%!), and after a short wait, Wren AI returned a complete answer. It didn’t just give me a number; it provided a full breakdown by category.

3. Visualizing the Data

The real power lies in the output tabs:

  • Answer: A clear, text-based summary of the findings.
  • View SQL: The exact SQL query it generated. This is a fantastic learning tool and a great starting point for further optimization.
  • Chart: An automatically generated bar chart visualizing the revenues per category. This chart can be pinned to a dashboard directly within Wren AI or exported as an SVG or PNG file—perfect for dropping into a presentation or report.
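
To give a feel for what the “View SQL” tab produces, here is the rough shape of a query answering the revenue-per-category question, run against a miniature in-memory copy of the DVD-rental sample schema. The table and column names follow that well-known sample database; the handful of rows below are made up purely for illustration, and the actual SQL Wren AI generates may differ:

```python
import sqlite3

# Miniature in-memory copy of the DVD-rental sample schema, with a few
# made-up rows, to exercise the kind of query Wren AI generates for
# "revenue per film category in 2022".
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category (category_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE film_category (film_id INTEGER, category_id INTEGER);
CREATE TABLE inventory (inventory_id INTEGER PRIMARY KEY, film_id INTEGER);
CREATE TABLE rental (rental_id INTEGER PRIMARY KEY, inventory_id INTEGER);
CREATE TABLE payment (payment_id INTEGER PRIMARY KEY, rental_id INTEGER,
                      amount REAL, payment_date TEXT);
INSERT INTO category VALUES (1, 'Action'), (2, 'Comedy');
INSERT INTO film_category VALUES (10, 1), (20, 2);
INSERT INTO inventory VALUES (100, 10), (200, 20);
INSERT INTO rental VALUES (1000, 100), (2000, 200);
INSERT INTO payment VALUES (1, 1000, 4.99, '2022-03-01'),
                           (2, 2000, 2.99, '2022-07-15'),
                           (3, 1000, 9.99, '2023-01-02');  -- outside 2022
""")

rows = con.execute("""
SELECT c.name AS category, ROUND(SUM(p.amount), 2) AS revenue
FROM payment p
JOIN rental r         ON r.rental_id = p.rental_id
JOIN inventory i      ON i.inventory_id = r.inventory_id
JOIN film_category fc ON fc.film_id = i.film_id
JOIN category c       ON c.category_id = fc.category_id
WHERE p.payment_date >= '2022-01-01' AND p.payment_date < '2023-01-01'
GROUP BY c.name
ORDER BY revenue DESC
""").fetchall()

for category, revenue in rows:
    print(category, revenue)
```

The five-way join is exactly the kind of plumbing a business analyst would rather not write by hand, which is where the natural-language interface earns its keep.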

The Big Catch: A Controversial Open-Source Model

Now, for the part that didn’t sit right with me. As powerful as Wren AI is, the open-source version has a significant limitation: you can only connect to one data source. Once you configure it, the interface locks you out from adding or changing to another database. The only way to switch is to completely reset the entire project.

This feels less like a feature-limited version and more like a “castrated” one. I’m a firm believer in the open-source philosophy, and while I understand companies need to make money, crippling such a core function feels like using the “open source” label primarily as a marketing tool. Allowing users to connect to at least a few data sources would make it a genuinely usable product for home labs or small projects, while still leaving enterprise features like single sign-on or advanced security for the paid version.

Final Thoughts

Despite my criticism of its licensing model, Wren AI is an incredibly interesting and powerful solution. The ability to simplify data analysis for large, complex databases is a massive value-add, potentially saving countless hours and making data more accessible to everyone in an organization.

It’s a fantastic proof-of-concept for the future of business intelligence. The technology works, and with a decent local hardware setup or a cloud LLM, it can deliver real insights quickly. A pity, then, that the open-source version is held back by what seems to be an artificial limitation.

What do you think? Is this a fair model for an open-source project, or does it go against the spirit of the community? Have you tried any similar GenBI tools? Let me know your thoughts in the comments below!

If you enjoyed this deep dive, please give the video a like and subscribe to the Quadrata channel for more content on open source and IT.

A greeting from Dimitri, bye everyone!

