I Switched from Linux to macOS: My First Month Experience

Good morning everyone, Dimitri Bellini here! Welcome back to my channel, Quadrata, where we dive into the world of open source and IT. If you’ve been following along, you’ll know I recently discussed my decision to potentially move from my trusty Linux setup to the world of Apple laptops.

Well, the temptation won. I bought it. After over 20 years deeply rooted in the Linux ecosystem, primarily using Fedora on various ThinkPads for the last decade, I took the plunge and acquired a MacBook Pro M4. It was a step I took with some apprehension, mainly driven by a quest for increased productivity and hardware that felt truly integrated and powerful, especially after my ThinkPad T14S started showing its age with battery life and overheating issues during video calls and rendering.

The Machine: Why the MacBook Pro M4?

I didn’t go for the Air as initially considered. I found a fantastic deal on a MacBook Pro M4 (around €1650), which made the decision easier. Here’s what I got:

  • CPU/GPU: 12-core CPU (4 performance, 8 efficiency cores), plus the integrated GPU and Neural Engine
  • RAM: 24GB Unified Memory (great for VMs and local AI models)
  • Storage: 512GB SSD (the compromise for the price – 1TB was significantly more expensive)
  • Display: A stunning high-resolution display with excellent brightness.

Coming from years of using used ThinkPads – which were workhorses, true Swiss Army knives – this felt like a significant hardware upgrade, especially considering the price point I managed to secure.

Hardware Impressions: The Good, The Bad, and The Workarounds

Hardware: The Good Stuff

  • Battery Life: This is a game-changer. I’m easily getting 10-12 hours of normal use (coding, web browsing, conferences, shell usage). Standby time is phenomenal; I barely turn it off anymore. This was simply unattainable on my previous x86 Linux laptops.
  • Display: Truly gorgeous. It’s hard to find anything comparable among laptops in the price range I paid for this Mac.
  • Touchpad: Exceptional. The haptic feedback and precision are on another level. It genuinely enhances the daily user experience.
  • Speakers: Finally! Decent built-in audio. I can actually listen to music with bass and clarity. This also translates to much better web call experiences – the microphone and speaker combination works so well I often don’t need a headset.
  • Performance (CPU/GPU/Neural Engine): It handles my workload smoothly. The real surprise was running local AI models. I tested Gemma 3 (around 10GB, 12 billion parameters) and got around 23 tokens/second. This opens up possibilities for local AI experimentation without needing a dedicated GPU rig (a quick way to reproduce this test is sketched just after this list).
  • USB-C Flexibility: Having three USB-C/Thunderbolt ports is adequate, and the ability to charge via any of them using a Power Delivery hub (which also connects my peripherals) is incredibly convenient. One cable does it all.
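
If you want to try that quick benchmark yourself, here’s a minimal sketch using Ollama. The cask name and model tag are the ones published when I checked and may change, and the reported speed will depend on the exact quantization you pull:

```bash
# Install Ollama (Homebrew cask name at the time of writing), then pull a ~12B Gemma 3 build.
brew install --cask ollama
ollama pull gemma3:12b

# --verbose prints timing stats after the reply, including the eval rate in tokens/second.
ollama run gemma3:12b --verbose "Explain in two sentences why unified memory helps local LLM inference."
```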

Hardware: The Not-So-Good

  • Keyboard: This is my biggest hardware gripe. The key travel is very shallow. While better than some cheap laptops, it feels like typing on plastic compared to the ThinkPad keyboards I’m used to.
  • Weight: At 1.6kg, the MacBook Pro is noticeably heavier than my old T14S. Quality materials add weight, I suppose.
  • Non-Upgradeable Storage: 512GB isn’t huge, and knowing I can’t upgrade it later means careful storage management is essential. You *must* choose your storage size wisely at purchase.

Hardware Workarounds

To address the storage limitation for non-critical files, I found a neat gadget: the BaseQi MicroSD Card Adapter. It sits flush in the SD card slot, allowing me to add a high-capacity MicroSD card (I used a SanDisk Extreme Pro) for documents and media. It’s not fast enough for active work or applications due to latency, but perfect for expanding storage for less performance-sensitive data. I sync these documents to the cloud as a backup.
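
I won’t go into my exact sync setup here, but as an illustration, a single rclone command against a pre-configured remote is enough to keep those documents mirrored to the cloud. The remote name and volume path below are placeholders, not my real configuration:

```bash
# Mirror the documents folder on the MicroSD card to a cloud remote.
# "gdrive" is a placeholder remote created beforehand with `rclone config`;
# the volume name depends on how the card is labelled.
rclone sync "/Volumes/SDCARD/Documents" gdrive:backup/documents --progress
```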

For the keyboard, since I mostly work docked, I bought an external mechanical keyboard: the Epomaker EK68 (or a similar model such as the AJAZZ K820 Pro). It’s a 75% layout keyboard with great tactile feedback that I got for around €50 – a worthwhile investment for comfortable typing.

Diving into macOS: A Linux User’s Perspective

Okay, let’s talk software. Coming from the freedom and structure of Linux, macOS feels… different. Sometimes simple, sometimes restrictive.

The Frustrations

  • No Native Package Manager: This was jarring. Hunting for software online, downloading DMG files – it felt archaic compared to `apt` or `dnf`. The App Store exists, but often has weird pricing discrepancies compared to direct downloads, and doesn’t have everything.
  • The Dock: I used to admire it from afar on Linux. Now that I have it? I find it mostly useless. It takes up space and doesn’t offer the workflow benefits I expected.
  • Finder (File Manager): Oh, Finder. It feels incredibly basic. Simple tasks like moving files often default to copy-paste. Customizing it to show path bars or folder info requires digging into options. Searching defaults to the entire Mac instead of the current folder, which is maddening. And because Finder underpins file handling throughout the OS, it’s hard to escape.
  • Application Closing: Clicking the ‘X’ often just minimizes the app instead of closing it. You need to explicitly Quit (Cmd+Q) or Force Quit. It’s a different paradigm I’m still adjusting to.
  • Monotasking Feel: The OS seems optimized for focusing on one application at a time. While this might benefit single-app workflows (video editing, music production), it feels less efficient for my typical multi-tasking sysadmin/developer style. The strong single-core performance seems to reflect this philosophy.
  • System Settings/Control Center: There are *so many* options, and finding the specific setting you need can feel like a maze.

The Silver Linings & Essential Tools

It’s not all bad, of course. The UI, while sometimes frustrating, is generally coherent. And thankfully, the community has provided solutions:

  • Homebrew: This is **ESSENTIAL**. It brings a proper package manager experience to macOS (`brew install <package>`). It makes installing and updating software (especially open-source tools) sane. Install this first! (See the install sketch just after this list.)
  • iTerm2: A vastly superior terminal emulator compared to the default Terminal. Highly customizable and brings back a familiar Linux-like terminal experience.
  • Oh My Zsh (or Oh My Bash): Customizes the shell environment for a better look, feel, and useful shortcuts/plugins. Works great with iTerm2.
  • Forklift: A paid, dual-pane file manager. It’s my current replacement for Finder, offering features like tabs, sync capabilities (Google Drive, etc.), and a more productive interface. Still evaluating, but much better than Finder for my needs.
  • Zed: A fast, modern code and text editor. It starts quickly and handles my text editing needs well.
  • LibreOffice: My go-to office suite. Works perfectly via Homebrew (`brew install libreoffice`).
  • Inkscape & GIMP: Open-source staples for vector and raster graphics. Both easily installable via Homebrew (`brew install inkscape gimp`) and cover my needs perfectly.
  • Latest: A handy utility (installable via Brew) that scans your applications (even those not installed via Brew or the App Store) and notifies you of available updates. Helps manage the entropy of different installation methods.
  • WireGuard & Tunnelblick: Essential VPN clients. WireGuard has an official client, and Tunnelblick is my preferred choice for OpenVPN connections on macOS.
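
If you want to skip the hunting, here’s roughly how I’d bootstrap the open-source pieces of that list from a fresh terminal. The cask names are the ones Homebrew used when I checked and may change (verify with `brew search`); Forklift and the official WireGuard client come from their own channels (the developer’s site and the App Store respectively), so I’ve left them out of the sketch:

```bash
# Homebrew itself (official install script from brew.sh).
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# GUI apps are Homebrew casks; names may differ on your machine.
brew install --cask iterm2 zed libreoffice inkscape gimp tunnelblick latest

# Oh My Zsh (official install script).
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```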

Key System Setting Tweaks

After watching countless videos, here are a few settings I changed immediately:

  • Window Tiling (Sequoia+): Enabled tiling but *removed the margins* between windows to maximize screen real estate.
  • Touchpad: Enabled “Secondary Click” (two-finger tap) for right-click functionality.
  • Menu Bar & Dock: Enabled “Show battery percentage” in the menu bar. Set the Dock to “Automatically hide and show”, and removed unused default app icons and recent application suggestions to minimize clutter.
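
I made these changes by clicking through System Settings, but if you prefer the terminal, here is a hedged sketch of roughly equivalent `defaults` commands. The Dock and trackpad keys are long-standing; the battery-percentage and tiling-margin keys are ones reported for recent macOS releases, so double-check them on your version:

```bash
# Auto-hide the Dock and restart it so the change takes effect.
defaults write com.apple.dock autohide -bool true && killall Dock

# Two-finger tap as secondary click on the built-in trackpad.
defaults write com.apple.AppleMultitouchTrackpad TrackpadRightClick -bool true

# Show battery percentage in the menu bar (Control Center preference; key reported for recent releases).
defaults -currentHost write com.apple.controlcenter BatteryShowPercentage -bool true

# Remove the margins between tiled windows (Sequoia window tiling; key name as reported, may vary).
defaults write com.apple.WindowManager EnableTiledWindowMargins -bool false
```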

The Verdict After One Month

So, am I happy? It’s complicated.

The hardware is undeniably premium. The performance, display, battery life, and touchpad are fantastic. For the price I paid, it feels like great value in that regard.

However, productivity isn’t magically perfect. macOS has its own quirks, bugs, and limitations. Using third-party (especially open-source) applications doesn’t always feel as seamless as on Linux. The “it just works” mantra isn’t universally true.

The software experience requires adaptation and, frankly, installing several third-party tools to replicate the workflow I was comfortable with on Linux. Homebrew is the saving grace here.

Overall, it’s a high-quality machine with some frustrating software paradigms for a long-time Linux user. The experience is *coherent*, which is better than the sometimes fragmented feel of Windows, but coherence doesn’t always mean *better* for my specific needs.

Will I stick with it? Time will tell. Maybe in another month, I’ll be fully converted. Or maybe I’ll be cheering even louder for the Asahi Linux project to bring full Linux support to the M4 chips!

What Are Your Thoughts?

This is just my experience after one month. I’m still learning! What are your tips for a Linux user transitioning to macOS? What essential apps or settings have I missed? Let me know in the comments below!

If you found this useful, please give the video a thumbs up, share it, and subscribe to the Quadrata YouTube channel if you haven’t already!

Also, feel free to join the discussion on the ZabbixItalia Telegram Channel.

Thanks for reading, and see you next week!

– Dimitri Bellini

Automating My Video Workflow with N8N and AI: A Real-World Test

Good morning everyone, Dimitri Bellini here! Welcome back to Quadrata, my channel dedicated to the open-source world and the IT topics I find fascinating – and hopefully, you do too.

This week, I want to dive back into artificial intelligence, specifically focusing on a tool we’ve touched upon before: N8N. But instead of just playing around, I wanted to tackle a real problem I face every week: automating the content creation that follows my video production.

The Challenge: Bridging the Gap Between Video and Text

Making videos weekly for Quadrata is something I enjoy, but the work doesn’t stop when the recording ends. There’s the process of creating YouTube chapters, writing blog posts, crafting LinkedIn announcements, and more. These tasks, while important, can be time-consuming. My goal was to see if AI, combined with a powerful workflow tool, could genuinely simplify these daily (or weekly!) activities.

Could I automatically generate useful text content directly from my video’s subtitles? Let’s find out.

The Toolkit: My Automation Stack

To tackle this, I assembled a few key components:

  • N8N: An open-source workflow automation tool that uses a visual, node-based interface. It’s incredibly versatile and integrates with countless services. We’ll run this using Docker/Docker Compose (a minimal `docker run` sketch follows this list).
  • AI Models: I experimented with two approaches:

    • Local AI with Ollama: Using Ollama to run models locally, specifically testing Gemma 3 (27B parameters). The Ollama release current at the time of recording (0.1.60) added better support for models like Gemma.
    • Cloud AI with Google AI Studio: Leveraging the power of Google’s models via their free API tier, primarily focusing on Gemini 2.5 Pro due to its large context window and reasoning capabilities.

  • Video Transcripts: The raw material – the subtitles generated for my YouTube videos.
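
Getting N8N running locally takes a couple of commands. This is essentially the Docker quick start from the n8n documentation (image name, port, and data path as published when I wrote this), with a named volume so workflows and credentials survive container restarts:

```bash
# Persist n8n's data (credentials, workflows) across container restarts.
docker volume create n8n_data

# Run n8n and expose its web UI on http://localhost:5678.
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```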

Putting it to the Test: Automating Video Tasks with N8N

I set up an N8N workflow designed to take my video transcript and process it through AI to generate different outputs. Here’s how it went:

1. Getting the Transcript

The first step was easy thanks to the N8N community. I used a community node called “YouTube Transcript” which, given a video URL, automatically fetches the subtitles. You can find and install community nodes easily via the N8N settings.

2. Generating YouTube Chapters

This was my first major test. I needed the AI to analyze the transcript and identify logical sections, outputting them in the standard YouTube chapter format (`00:00:00 - Chapter Title`).

  • Local Attempt (Ollama + Gemma 3): I configured an N8N “Basic LLM Chain” node to use my local Ollama instance running Gemma 3. I set the context length to 8000 tokens and the temperature very low (0.1) to prevent creativity and stick to the facts. The prompt was carefully crafted to explain the desired format, including examples. (The equivalent raw API call is sketched at the end of this section.)

    Result: Disappointing. While it generated *some* chapters, it stopped very early in the video (around the 6-minute mark of a 25+ minute video), missing the vast majority of the content. Despite the model’s theoretical capabilities, it failed this task at this transcript length on my hardware (RTX 8000 GPUs – capable cards, but perhaps not enough here, or a limitation of Ollama or the model itself).

  • Cloud Attempt (Google AI Studio + Gemini 2.5 Pro): I switched the LLM node to use the Google Gemini connection, specifically targeting Gemini 2.5 Pro with a temperature of 0.2.

    Result: Much better! Gemini 2.5 Pro processed the entire transcript and generated accurate, well-spaced chapters covering the full length of the video. Its larger context window and potentially more advanced reasoning capabilities handled the task effectively.

For chapter generation, the cloud-based Gemini 2.5 Pro was the clear winner in my tests.
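
For reference, outside of N8N the local attempt boils down to a call like this against Ollama’s REST API, mirroring the node settings above (temperature 0.1, an ~8K context window). Treat it as a sketch: the model tag and the prompt text are placeholders rather than my exact node configuration:

```bash
# Ask the local Gemma 3 model for chapters in the 00:00:00 - Chapter Title format.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma3:27b",
  "prompt": "Create YouTube chapters in the format 00:00:00 - Chapter Title.\nTimed transcript follows:\n<transcript here>",
  "stream": false,
  "options": { "temperature": 0.1, "num_ctx": 8192 }
}'
```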

3. Crafting the Perfect LinkedIn Post

Next, I wanted to automate the announcement post for LinkedIn. Here, the prompt engineering became even more crucial. I didn’t just want a generic summary; I wanted it to sound like *me*.

  • Technique: I fed the AI (Gemini 2.5 Pro again, given the success with chapters) a detailed prompt that included:

    • The task description (create a LinkedIn post).
    • The video transcript as context.
    • Crucially: Examples of my previous LinkedIn posts. This helps the AI learn and mimic my writing style and tone.
    • Instructions on formatting and including relevant hashtags.
    • Using N8N variables to insert the specific video link dynamically.

  • Result: Excellent! The generated post was remarkably similar to my usual style, captured the video’s essence, included relevant tags, and was ready to be published (with minor review).
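
For the curious, the N8N Gemini node wraps a call shaped like the one below to Google’s Generative Language API. It’s a sketch, not my exact workflow: the request structure and `generationConfig` field come from Google’s public docs, while the model ID string and prompt contents are placeholders (in N8N, values like the video link are injected with expressions such as `{{ $json.videoUrl }}`, where `videoUrl` is a hypothetical field name):

```bash
# GOOGLE_API_KEY comes from Google AI Studio. The prompt bundles the task, a few of my
# previous LinkedIn posts as style examples, the transcript, and the video link.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=${GOOGLE_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "generationConfig": { "temperature": 0.2 },
    "contents": [{
      "parts": [{
        "text": "Write a LinkedIn post announcing my new Quadrata video.\n\nStyle examples (my previous posts):\n<past posts here>\n\nVideo transcript:\n<transcript here>\n\nVideo link: <video URL here>\n\nInclude relevant hashtags."
      }]
    }]
  }'
```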

4. Automating Blog Post Creation

The final piece was generating a draft blog post directly from the transcript.

  • Technique: Similar to the LinkedIn post, but with different requirements. The prompt instructed Gemini 2.5 Pro to:

    • Generate content in HTML format for easy pasting into my blog.
    • Avoid certain elements (like quotation marks unless necessary).
    • Recognize and correctly format specific terms (like “Quadrata”, my name “Dimitri Bellini”, or the “ZabbixItalia Telegram Channel” – https://t.me/zabbixitalia).
    • Structure the text logically with headings and paragraphs.
    • Include basic SEO considerations.

  • Result: Success again! While it took a little longer to generate (likely due to the complexity and length), the AI produced a well-structured HTML blog post draft based on the video content. It correctly identified and linked the channels mentioned and formatted the text as requested. This provides a fantastic starting point, saving significant time.

Key Takeaways and Challenges

This experiment highlighted several important points:

  • Prompt Engineering is King: The quality of the AI’s output is directly proportional to the quality and detail of your prompt. Providing examples, clear formatting instructions, and context is essential. Using AI itself (via web interfaces) to help refine prompts is a valid strategy!
  • Cloud vs. Local AI Trade-offs:

    • Cloud (Gemini 2.5 Pro): Generally more powerful, handled long contexts better in my tests, easier setup (API key). However, subject to API limits (even free tiers have them, especially for frequent/heavy use) and potential costs.
    • Local (Ollama/Gemma 3): Full control, no API limits/costs (beyond hardware/electricity). However, requires capable hardware (especially GPU RAM for large contexts/models), and smaller models might struggle with complex reasoning or very long inputs. Performance was insufficient for my chapter generation task in this test.

  • Model Capabilities Matter: Gemini 2.5 Pro’s large context window and reasoning seemed better suited for processing my lengthy video transcripts compared to the 27B parameter Gemma 3 model run locally (though further testing with different local models or configurations might yield different results).
  • Temperature Setting: Keeping the temperature low (e.g., 0.1-0.2) is vital for tasks requiring factual accuracy and adherence to instructions, minimizing AI “creativity” or hallucination.
  • N8N is Powerful: It provides the perfect framework to chain these steps together, handle variables, connect to different services (local or cloud), and parse outputs (like the Structured Output Parser node for forcing JSON).

Conclusion and Next Steps

Overall, I’m thrilled with the results! Using N8N combined with a capable AI like Google’s Gemini 2.5 Pro allowed me to successfully automate the generation of YouTube chapters, LinkedIn posts, and blog post drafts directly from my video transcripts. While the local AI approach didn’t quite meet my needs for this specific task *yet*, the cloud solution provided a significant time-saving and genuinely useful outcome.

The next logical step is to integrate the final publishing actions directly into N8N using its dedicated nodes for YouTube (updating descriptions with chapters) and LinkedIn (posting the generated content). This would make the process almost entirely hands-off after the initial video upload.

This is a real-world example of how AI can move beyond novelty and become a practical tool for automating tedious tasks. It’s not perfect, and requires setup and refinement, but the potential to streamline workflows is undeniable.

What do you think? Have you tried using N8N or similar tools for AI-powered automation? What are your favourite use cases? Let me know in the comments below! And if you found this interesting, give the video a thumbs up and consider subscribing to Quadrata for more content on open source and IT.

Thanks for reading, and see you next week!

Bye everyone,
Dimitri
