Everything posted by Blogger
-
The Range Syntax Has Come to Container Style Queries and if()
by: Daniel Schwarz Thu, 13 Nov 2025 15:00:20 +0000 The range syntax isn’t a new thing. We‘re already able to use it with media queries to query viewport dimensions and resolutions, as well as container size queries to query container dimensions. Being able to use it with container style queries — which we can do starting with Chrome 142 — means that we can compare literal numeric values as well as numeric values tokenized by custom properties or the attr() function. In addition, this feature comes to the if() function as well. Here’s a quick demo that shows the range syntax being used in both contexts to compare a custom property (--lightness) to a literal value (50%): #container { /* Choose any value 0-100% */ --lightness: 10%; /* Applies it to the background */ background: hsl(270 100% var(--lightness)); color: if( /* If --lightness is less than 50%, white text */ style(--lightness < 50%): white; /* If --lightness is more than or equal to 50%, black text */ style(--lightness >= 50%): black ); /* Selects the children */ * { /* Specifically queries parents */ @container style(--lightness < 50%) { color: white; } @container style(--lightness >= 50%) { color: black; } } } Again, you’ll want Chrome 142 or higher to see this work: CodePen Embed Fallback Both methods do the same thing but in slightly different ways. Let’s take a closer look. Range syntax with custom properties In the next demo coming up, I’ve cut out the if() stuff, leaving only the container style queries. What’s happening here is that we’ve created a custom property called --lightness on the #container. Querying the value of an ordinary property isn’t possible, so instead we save it (or a part of it) as a custom property, and then use it to form the HSL-formatted value of the background. #container { /* Choose any value 0-100% */ --lightness: 10%; /* Applies it to the background */ background: hsl(270 100% var(--lightness)); } After that we select the container’s children and conditionally declare their color using container style queries. Specifically, if the --lightness property of #container (and, by extension, the background) is less than 50%, we set the color to white. Or, if it’s more than or equal to 50%, we set the color to black. #container { /* etc. */ /* Selects the children */ * { /* Specifically queries parents */ @container style(--lightness < 50%) { color: white; } @container style(--lightness >= 50%) { color: black; } } } CodePen Embed Fallback /explanation Note that we wouldn’t be able to move the @container at-rules to the #container block, because then we’d be querying --lightness on the container of #container (where it doesn’t exist) and then beyond (where it also doesn’t exist). Prior to the range syntax coming to container style queries, we could only query specific values, so the range syntax makes container style queries much more useful. By contrast, the if()-based declaration would work in either block: #container { --lightness: 10%; background: hsl(270 100% var(--lightness)); /* --lightness works here */ color: if( style(--lightness < 50%): white; style(--lightness >= 50%): black ); * { /* And here! */ color: if( style(--lightness < 50%): white; style(--lightness >= 50%): black ); } } CodePen Embed Fallback So, given that container style queries only look up the cascade (whereas if() also looks for custom properties declared within the same CSS rule) why use container style queries at all? 
Well, personal preference aside, container queries allow us to define a specific containment context using the container-name CSS property: #container { --lightness: 10%; background: hsl(270 100% var(--lightness)); /* Define a named containment context */ container-name: myContainer; * { /* Specify the name here */ @container myContainer style(--lightness < 50%) { color: white; } @container myContainer style(--lightness >= 50%) { color: black; } } } With this version, if the @container at-rule can’t find --lightness on myContainer, the block doesn’t run. If we wanted @container to look further up the cascade, we’d only need to declare container-name: myContainer further up the cascade. The if() function doesn’t allow for this, but container queries allow us to control the scope. Range syntax with the attr() CSS function We can also pull values from HTML attributes using the attr() CSS function. In the HTML below, I’ve created an element with a data attribute called data-notifs whose value represents the number of unread notifications that a user has: <div data-notifs="8"></div> We want to select [data-notifs]::after so that we can place the number inside [data-notifs] using the content CSS property. In turn, this is where we’ll put the @container at-rules, with [data-notifs] serving as the container. I’ve also included a height and matching border-radius for styling: [data-notifs]::after { height: 1.25rem; border-radius: 1.25rem; /* Container style queries here */ } Now for the container style query logic. In the first one, it’s fairly obvious that if the notification count is 1-2 digits (or, as it’s expressed in the query, less than or equal to 99), then content: attr(data-notifs) inserts the number from the data-notifs attribute while aspect-ratio: 1 / 1 ensures that the width matches the height, forming a circular notification badge. In the second query, which matches if the number is more than 99, we switch to content: "99+" because I don’t think that a notification badge could handle four digits. We also include some inline padding instead of a width, since not even three characters can fit into the circle. To summarize, we’re basically using this container style query logic to determine both content and style, which is really cool: [data-notifs]::after { height: 1.25rem; border-radius: 1.25rem; /* If notification count is 1-2 digits */ @container style(attr(data-notifs type(<number>)) <= 99) { /* Display count */ content: attr(data-notifs); /* Make width equal the height */ aspect-ratio: 1 / 1; } /* If notification count is 3 or more digits */ @container style(attr(data-notifs type(<number>)) > 99) { /* After 99, simply say "99+" */ content: "99+"; /* Instead of width, a little padding */ padding-inline: 0.1875rem; } } CodePen Embed Fallback But you’re likely wondering why, when we read the value in the container style queries, it’s written as attr(data-notifs type(<number>) instead of attr(data-notifs). Well, the reason is that when we don’t specify a data type (or unit, you can read all about the recent changes to attr() here), the value is parsed as a string. This is fine when we’re outputting the value with content: attr(data-notifs), but when we’re comparing it to 99, we must parse it as a number (although type(<integer>) would also work). In fact, all range syntax comparatives must be of the same data type (although they don’t have to use the same units). Supported data types include <length>, <number>, <percentage>, <angle>, <time>, <frequency>, and <resolution>. 
In the earlier example, we could actually express the lightness without units since the modern hsl() syntax supports that, but we’d have to be consistent with it and ensure that all comparatives are unit-less too: #container { /* 10, not 10% */ --lightness: 10; background: hsl(270 100 var(--lightness)); color: if( /* 50, not 50% */ style(--lightness < 50): white; style(--lightness >= 50): black ); * { /* 50, not 50% */ @container style(--lightness < 50) { color: white; } @container style(--lightness >= 50) { color: black; } } } Note: This notification count example doesn’t lend itself well to if(), as you’d need to include the logic for every relevant CSS property, but it is possible and would use the same logic. Range syntax with literal values We can also compare literal values, for example, 1em to 32px. Yes, they’re different units, but remember, they only have to be the same data type and these are both valid CSS <length>s. In the next example, we set the font-size of the <h1> element to 31px. The <span> inherits this font-size, and since 1em is equal to the font-size of the parent, 1em in the scope of <span> is also 31px. With me so far? According to the if() logic, if 1em is equal to less than 32px, the font-weight is smaller (to be exaggerative, let’s say 100), whereas if 1em is equal to or greater than 32px, we set the font-weight to a chunky 900. If we remove the font-size declaration, then 1em computes to the user agent default of 32px, and neither condition matches, leaving the font-weight to also compute to the user agent default, which for all headings is 700. Basically, the idea is that if we mess with the default font-size of the <h1>, then we declare an optimized font-weight to maintain readability, preventing small-fat and large-thin text. <h1> <span>Heading 1</span> </h1> h1 { /* The default value is 32px, but we overwrite it to 31px, causing the first if() condition to match */ font-size: 31px; span { /* Here, 1em is equal to 31px */ font-weight: if( style(1em < 32px): 100; style(1em > 32px): 900 ); } } CodePen Embed Fallback CSS queries have come a long way, haven’t they? In my opinion, the range syntax coming to container style queries and the if() function represents CSS’s biggest leap in terms of conditional logic, especially considering that it can be combined with media queries, feature queries, and other types of container queries (remember to declare container-type if combining with container size queries). In fact, now would be a great time to freshen up on queries, so as a little parting gift, here are some links for further reading: Media queries Container queries Feature queries The Range Syntax Has Come to Container Style Queries and if() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
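As one last sketch tying the pieces above together (the container name, width threshold, and specific values here are made up for illustration), a style query and a container size query can be combined in a single condition; note the container-type declaration that size queries require:

#container {
  --lightness: 10%;
  background: hsl(270 100% var(--lightness));

  /* Size queries need a container-type; style queries do not */
  container-type: inline-size;
  container-name: myContainer;

  * {
    /* White text only when the container is dark AND reasonably wide */
    @container myContainer (width > 30rem) and style(--lightness < 50%) {
      color: white;
    }
  }
}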
-
Why I Stopped Hating Systemd
by: Umair Khurshid Thu, 13 Nov 2025 18:02:00 +0530 Every few years, the Linux world finds something to fight about. Sometimes it is about package managers, sometimes about text editors, but nothing in recent memory split the community quite like systemd. What began as an init replacement quietly grew into a full-blown identity crisis for Linux itself, complete with technical manifestos, emotional arguments, and more mailing list drama than I ever thought possible. I did not plan to take a side in that debate. Like most users, I just wanted my server to boot, my logs to make sense, and my scripts to run. Yet systemd had a way of showing up uninvited. It replaced the old startup scripts I had trusted for years and left me wondering whether the Linux I loved was changing into something else entirely. Over time, I learned that this was not simply a story about software. It was a story about culture, trust, and how communities handle change. What follows is how I got pulled into that argument and what I learned when I finally stopped resisting. I have even created a course on advanced automation with systemd; that is how much I use systemd now: Advanced Automation with systemd: Take Your Linux Automation Beyond Cron (Linux Handbook, Umair Khurshid).

How I Got Pulled into the Systemd Debate

My introduction to systemd was not deliberate. It arrived uninvited with an update, quietly replacing the familiar tangle of /etc/init.d scripts on Manjaro. The transition was abrupt enough to feel like a betrayal: one morning, my usual service apache2 restart returned a polite message telling me to use systemctl instead. That was the first sign that something fundamental had changed. I remember the tone of Linux forums around that time was half technical and half existential. Lennart Poettering, systemd’s creator, had become a lightning rod for criticism. To some, he was the architect of a modern, unified boot system; to others, he was dismantling the very ethos that made Unix elegant. I was firmly in the second camp! Back then, my world revolved around small workstations and scripts I could trace line by line. The startup process was tangible: you just had to open /etc/rc.local, add a command, and know it would run. When Fedora first adopted systemd in 2011, followed later by Debian, I watched from a distance with the comfort of someone on a “safe” distribution, but it was only a matter of time. Ian Jackson of Debian called the decision to make systemd the default a failure of pluralism, and Devuan was born soon after as a fork of Debian, built specifically to keep sysvinit alive. On the other side, Poettering argued that systemd was never meant to violate Unix principles, but to reinterpret them for modern systems where concurrency and dependency tracking actually mattered. I followed those arguments closely, sometimes nodding along with Jackson’s insistence on modularity, other times feeling curious about Poettering’s idea of “a system that manages systems.” Linus Torvalds chimed in occasionally, not against systemd itself but against its developers’ attitudes, which kept the controversy alive far longer than it might have lasted. At that point, I saw systemd as something that belonged to other distributions, an experiment that might stabilize one day but would never fit into the quiet predictability of my setups. That illusion lasted until I switched to a new version of Manjaro in 2013, and the first boot greeted me with the unmistakable parallel startup messages of systemd.
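For readers who never used the old tooling, here is roughly how those day-to-day commands map; apache2 is simply the example service from the story above, and the SysV-era lines assume a Debian or Red Hat style layout:

# The old habits
service apache2 restart        # SysV service wrapper
chkconfig apache2 on           # enable at boot (Red Hat style)
update-rc.d apache2 defaults   # enable at boot (Debian style)

# The systemd equivalents
systemctl restart apache2.service
systemctl enable apache2.service
systemctl status apache2.service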
The Old World: init, Scripts, and PredictabilityBefore systemd, Linux startup was beautifully transparent. When the machine booted, you could almost watch it think. The bootloader passed control to the kernel, the kernel mounted the root filesystem, and init took over from there. It read a single configuration file, /etc/inittab, and decided which runlevel to enter. Each runlevel had its own directory under /etc/rc.d/, filled with symbolic links to shell scripts. The naming convention S01network, S20syslog, K80apache2was primitive but logical. The S stood for “start,” K for “kill,” and the numbers determined order. The whole process was linear, predictable, and very, very readable. If something failed, you could open the script and see exactly what went wrong, and debugging was often as simple as adding an echo statement or two. On a busy day, I would edit /etc/init.d/apache2, add logging to a temporary file, and restart the service manually. A typical init script looked something like this: #!/usr/bin/env sh ### BEGIN INIT INFO # Provides: myapp # Required-Start: $network # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start myapp daemon ### END INIT INFO case "$1" in start) echo "Starting myapp" /usr/local/bin/myapp & ;; stop) echo "Stopping myapp" killall myapp ;; restart) $0 stop $0 start ;; *) echo "Usage: /etc/init.d/myapp {start|stop|restart}" exit 1 esac exit 0Crude, yes, but understandable. Even if you did not write it, you could trace what happened as it was pure shell, running in sequence, and every decision it made was visible to you. This simplicity was based on the Unix principle of small, composable parts. You could swap out one script without affecting the rest and even bypass init entirely by editing /etc/rc.local, placing commands there for one-off startup tasks. The problem, however, was because everything ran in a fixed order, startup could be painfully slow in comparison to systemd. If a single service stalled, the rest of the boot process might hang and dependencies were implied rather than declared. I could declare that a service “required networking,” but the system had no reliable way to verify that the network was fully up. Also, parallel startup was almost if not completely impossible. Distributions like Ubuntu experimented with alternatives such as Upstart, which tried to make init event driven rather than sequential, but it never fully replaced the traditional init scripts. When systemd appeared, it looked like another attempt in the same category, a modernization effort destined to break the things I liked most. From my perspective, init was not broken. It was slow, yes, but there was total control, which I did not like getting replaced by a binary I could not open in a text editor. The early systemd adopters claimed that it was faster, cleaner, and more consistent, but the skeptics (which included me) saw it as a power grab by Red Hat. The subsequent years have proven this fear somewhat justified, as maintaining non-systemd configurations has become increasingly difficult but the standardization has also made Linux more predictable across distributions. Looking back, I now realize that I had mistaken predictability for transparency. The old init world felt clear because it was simple, not because it was necessarily better designed. As systems grew more complex with services that had to interact asynchronously, containers starting in milliseconds, and dependency chains that spanned multiple layers, the cracks began to show. 
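To make the ordering scheme above concrete, here is a minimal sketch of how a script like the myapp example was typically wired into the runlevels on a Debian-style system; the ordering number 95 is arbitrary and the paths are illustrative:

# Install the script and make it executable
cp myapp /etc/init.d/myapp
chmod +x /etc/init.d/myapp

# Let the distribution create the S/K symlinks from the LSB header...
update-rc.d myapp defaults

# ...or create one by hand for runlevel 2
ln -s ../init.d/myapp /etc/rc2.d/S95myapp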
What Systemd Tried to FixIt took me a while to understand that systemd was not an act of defiance against the Unix philosophy but a response to a real set of technical problems. Once I started reading Lennart Poettering’s early blog posts, a clearer picture emerged. His goal was not to replace init for its own sake but to make Linux startup faster, more reliable, and more predictable. Poettering and Kay Sievers began designing systemd in 2010 at Red Hat, with Fedora as its first proving ground (now my daily driver). Their idea was to build a parallelized, dependency-aware init system that could handle modern workloads gracefully. Instead of waiting for one service to finish before starting the next, systemd would launch multiple services concurrently based on declared dependencies. At its heart, systemd introduced unit files, small declarative configurations that replaced the hand-written shell scripts of SysV init. Each unit described a service, socket, target, or timer in a consistent, machine-readable format. The structure was both simple and powerful: [Unit] Description=My Custom Service After=network.target [Service] ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf Restart=on-failure User=myuser [Install] WantedBy=multi-user.targetTo start it, you no longer edited symlinks or runlevels. You simply ran: sudo systemctl start myapp.service sudo systemctl enable myapp.serviceThe old init scripts had no formal way to express “start only after this other service has finished successfully.” They relied on arbitrary numbering and human intuition. With systemd, relationships were declared using After=, Before=, Requires=, and Wants=. [Unit] Description=Web Application After=network.target database.service Requires=database.serviceThis meant systemd could construct a dependency graph at boot and launch services in optimal order, improving startup times dramatically. Systemd also integrated timers (replacing cron for system tasks), socket activation (starting services only when needed), and cgroup management (to isolate processes cleanly). The critics called it “scope creep,” but Poettering’s argument was that the components it replaced were fragmented and poorly integrated, and building them into a single framework reduced complexity overall. That claim divided the Linux world! On one side were distributions like Fedora, Arch, and openSUSE, which adopted systemd quickly. They saw its promise in boot speed, unified tooling, and clear dependency tracking. On the other side stood Debian and its derivatives, which valued independence and simplicity (some of you new folks might find it odd given their current Rust adoption). Debian’s Technical Committee vote in 2014 was one of the most contentious in its history. When systemd was chosen as the default, Ian Jackson resigned from the committee, citing an erosion of choice and the difficulty of maintaining alternative inits. That decision directly led to the birth of Devuan whose developers described systemd as “an intrusive monolith,” a phrase that captured the mood of the opposition. Yet, beneath the politics, systemd was solving problems that were not just theoretical. Race conditions during boot were common and service dependencies often failed silently. On embedded devices and containerized systems, startup order mattered in ways SysV init could not reliably enforce. 
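The timers mentioned above are just more unit files. As a hedged sketch (the file names, paths, and backup script are hypothetical), a nightly job that would otherwise live in a crontab might look like this:

# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run the backup service once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enabling the timer rather than the service (systemctl enable --now backup.timer) is what puts the job on the schedule.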
The Real Arguments (and What They Miss)The longer I followed the systemd controversy, the more I realized that the arguments around it were not always about code and were also about identity. Every debate thread eventually drifted toward the same philosophical divide that should Linux remain a collection of loosely coupled programs, or should it evolve into a cohesive, centrally managed system? When I read early objections from Slackware and Debian maintainers, they were rarely technical complaints about bugs or performance. They were about trust and the Unix philosophy “do one thing well” that had guided decades of design. init was primitive but modular, and its limitations could be fixed piecemeal. systemd, by contrast, felt like a comprehensive replacement that tied everything together under a single logic (the current debate around C being memory unsafe and Rust adoption are quite similar in the form). Devuan’s founders said that once core packages like GNOME began depending on systemd-logind, users effectively lost the ability to choose another init system. That interdependence was viewed as a form of lock-in at the architecture level. Meanwhile, Lennart Poettering maintained that systemd was modular internally, just not in the fragmented Unix sense. He described systemd as an effort to build coherence into an environment that had historically resisted it. I remember reading Linus’ comments on the matter around 2014. He was not against systemd per se but his frustration (and it has not changed much) was about developer behavior which called out what he saw as unnecessary hostility from both sides, maintainers blocking patches, developers refusing to accommodate non-systemd setups, and the cultural rigidity that had turned a design debate into a purity contest. His opinion was pragmatic that as long as systemd worked well and did not break things needlessly, it was fine. The irony was that both camps were right in their own way. The anti-systemd camp correctly foresaw that once GNOME and major distributions adopted it, alternatives would fade and the pro-systemd side correctly argued that modern systems needed unified control and reliable dependency management. As someone who would later get into Sysadmin and DevOps, now I feel like the conversation missed the fact that Linux itself had changed. By the early 2010s, most servers were running dozens of services, containers were replacing bare-metal deployments, and hardware initialization had become vastly more complex. Boot time was no longer the slow, linear dance it used to be and was a more of a network of parallelized events that had to interact safely and recover from failure automatically. I once tried to build a stripped-down Debian container without systemd, thinking I could recreate the old init world but it was an enlightening failure. I spent hours wiring together shell scripts and custom supervision loops, all to mimic what a single Restart=on-failure directive did automatically in systemd. That experience showed me what most arguments missed that the problem was not that systemd did too much, but that the old approach expected the user to do everything manually. For instance, consider a classic SysV approach to restarting a service on crash. You would write something like this: #!/bin/sh while true; do /usr/local/bin/myapp status=$? if [ $status -ne 0 ]; then echo "myapp crashed with status $status, restarting..." >&2 sleep 2 else break fi doneIt worked, but it was a hack. 
systemd gave you the same reliability with a single configuration line:

Restart=on-failure
RestartSec=2

The simplicity of that design was hard to deny. Yet to many administrators, it still felt like losing both control and familiarity. The cultural resistance was amplified by how fast systemd grew. Each release seemed to absorb another subsystem: udevd, logind, networkd, and later resolved. Critics accused it of “taking over userland,” but the more I examined those claims, the more I saw a different pattern. Each of those tools replaced a historically fragile or inconsistent component that distributions had struggled to maintain separately. Critics also pointed to the technical risk of consolidating so much functionality in one project, fearing a single regression could break the entire ecosystem. The defensive tone of Poettering’s posts did not help, and over time, his name became synonymous with the debate itself. But even among the loudest critics, there was a reluctant recognition that systemd had improved startup speed, service supervision, and logging consistency; what they feared was not its functionality but its dominance. The most productive discussions I saw were not about whether systemd was “good” or “bad,” but about whether Linux had space for diversity anymore. In a sense, systemd’s arrival forced the community to confront its own maturity. You could no longer treat Linux as a loose federation of components; it had become a unified operating system in practice, even if the philosophy still insisted otherwise. By the time I reached that conclusion, the debate had already cooled. Most distributions had adopted systemd by default, Devuan had carved out its niche, and the rest of us were learning to live with the new landscape. I began to see that the real question was not whether systemd broke the Unix philosophy, but whether the old Unix philosophy could still sustain the complexity of modern systems.

What I Learned After Actually Using It

At some point, resistance gave way to necessity. As often happens, I started managing servers (CentOS) that already used systemd, so learning it was no longer optional. What surprised me most was how quickly frustration turned into familiarity once I stopped fighting it. The commands that had felt alien at first began to make sense. The first time I ran systemctl status nginx.service, I understood what Poettering had been talking about. Instead of a terse message like “nginx is running,” I saw a complete summary including the process ID, uptime, memory usage, recent logs, and the exact command used to start it. It was the kind of insight that had previously required grepping through /var/log/syslog and several ps invocations. A typical status output made this immediately practical: I could see that the service was running, its exact configuration file path, and its dependencies all in one place. When a service failed, systemd logged everything automatically. Instead of checking multiple files, I could simply run:

journalctl -u nginx.service -b

That -b flag restricted the logs to the current boot, saving me from wading through old entries. It was efficient in a way the traditional logging setup never was. Then there was dependency introspection. I could visualize the startup tree with:

systemctl list-dependencies nginx.service

This command revealed the entire boot relationship graph, showing what started before and after Nginx.
For anyone who had ever debugged slow boots by adding echo statements to init scripts, this was revolutionary. Over time, I began writing my own unit files. They were simple once you got used to the syntax. I remember converting a small Python daemon I had once managed with a hand-rolled init script. The old version had been about thirty lines of conditional shell logic. The new one was six lines: [Unit] Description=Custom Python Daemon After=network.target [Service] ExecStart=/usr/bin/python3 /opt/daemon.py Restart=always RestartSec=5 [Install] WantedBy=multi-user.targetThat was all it took to handle startup order, failure recovery, and clean shutdown without any custom scripting. The first time I watched systemd automatically restart the process after a crash, I felt a mix of admiration and reluctant respect. Some of my early complaints persisted such as the binary log format of journald still felt unnecessary. I understood why it existed, structured logs allowed richer metadata but it broke the old habit of inspecting logs with less and grep. I eventually learned that you could still pipe output: journalctl -u myapp.service | grep ERRORSo even that compromise turned out to be tolerable. I also began to appreciate how much time I saved not having to worry about service supervision. Previously, I had used supervisord or custom shell loops to keep processes alive but with systemd, it was built-in. When a process crashed, I could rely on Restart=on-failure or Restart=always. If I needed to ensure that something ran only after a network interface was up, I could just declare: After=network-online.target Wants=network-online.targetAlso, one thing that most discussions about systemd missed was built-in service sandboxing. For all the arguments about boot speed and complexity, few people talked about how deeply systemd reshaped security at the service level. The [Service] section of a unit file is not just about start commands, it can define isolation boundaries in a way that old init scripts never could. Directives like PrivateTmp, ProtectSystem, RestrictAddressFamilies, and NoNewPrivileges can drastically reduce the attack surface of a service. A web server, for instance, can be locked into its own temporary directory with PrivateTmp=true and denied access to the host’s filesystem with ProtectSystem=full. Even if compromised, it cannot modify critical paths or open new network sockets. Still, I had to get past a subtle psychological barrier as for years, I had believed that understanding the system meant being able to edit its scripts directly and add to it the social pressure. With systemd, much of that transparency moved behind declarative configuration and binary logs. It felt at first like a loss of intimacy but as I learned more about how systemd used cgroups to track processes, I realized it was not hiding complexity but just managing it. A perfect example came when I started using systemd-nspawn to spin up lightweight containers. The simplicity of systemd-nspawn -D /srv/container was eye-opening as it showed how systemd was not just an init system but a general process manager, capable of running containers, user sessions, and even virtual environments with consistent supervision. At that point, I began reading the source code and documentation rather than Reddit threads. I discovered how deeply it integrated with Linux kernel features like control groups and namespaces and what had seemed like unnecessary overreach began to look more like a natural evolution of process control. 
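To make the sandboxing point above concrete, here is a minimal sketch of those directives applied to the hypothetical Python daemon from the earlier unit file; which restrictions are safe to enable depends entirely on what the service actually needs:

[Service]
ExecStart=/usr/bin/python3 /opt/daemon.py
Restart=always
# Run as an unprivileged user (name is hypothetical) and forbid privilege escalation
User=daemon-user
NoNewPrivileges=true
# Private /tmp and a mostly read-only view of the rest of the system
PrivateTmp=true
ProtectSystem=full
ProtectHome=true
# Only allow IPv4/IPv6 and local sockets
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX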
The resentment faded, and in its place came something more complicated: an understanding that my dislike of systemd had been rooted in nostalgia as much as principle. In a modern environment with hundreds of interdependent services, the manual approach simply did not scale, though I respect people who shoot for things like building their own AWS alternative. Systemd was not perfect; it was opinionated and sometimes too aggressive in unifying tools. Yet once I accepted it as a framework rather than an ideology, it stopped feeling oppressive and just became another tool, powerful when used wisely, irritating when misunderstood. By then, I had moved from avoidance to proficiency and could write units, debug services, and configure dependencies with ease. I no longer missed the old init scripts, as maintenance time had become important to me.

Why the Systemd Controversy Still Matters

By now, most major distributions have adopted systemd, and the initial outrage has faded into the background. Yet the debate never truly disappeared; it just changed form. It became less about startup speed or PID 1 design, and more about philosophy. What kind of control should users have over their systems? How much abstraction is too much? The systemd debate persists because it touches something deeper than process management: the identity of Linux itself. The traditional Unix model prized minimalism and composability, one tool for one job. systemd, by contrast, represents a coordinated platform that integrates logging, device management, service supervision, and even containerization. To people like me, that integration feels awesome; to others, it feels like betrayal. For administrators who grew up writing init scripts and manipulating processes by hand, systemd signaled a loss of transparency: it replaced visible shell logic with declarative files and binary logs, and assumed responsibility for things that used to belong to the user. But for newer users, especially those managing cloud-scale systems, it offered a coherent framework that actually worked the same everywhere. I am not a huge fan of the word "trade-off," but unfortunately it defines most of modern computing. The more complexity we hide, the less friction we face in day-to-day tasks, but the more we depend on the hidden layer behaving correctly. It is the same tension that runs through all abstraction, from container orchestration to AI frameworks. Even now, forks and alternatives appear from time to time, such as runit, s6, and OpenRC, each promising a return to simplicity, but few large distributions switch back, because the practical benefits of systemd outweigh nostalgia. Still, I think the discomfort matters, as it reminds us that simplicity is not just a technical virtue but a cultural one. The fear of losing control keeps the ecosystem diverse. Projects like Devuan exist not because they expect to overtake Debian, but because they preserve the possibility of a different path. The real lesson, for me, is not about whether systemd is good or bad. It is about what happens when evolution in open source collides with emotion. Change in infrastructure is not just a matter of better code; it is also a negotiation between habits, values, and trust. When I type systemctl now, I no longer feel resistance; I just see a tool that grew faster than we were ready for, one that forced a conversation the Linux world was reluctant to have.
The controversy still matters because it captures the moment when Linux stopped being a loose federation of ideas and started becoming an operating system in the full sense of the word. That transition was messy, and it probably had to be. If you have come this far, you likely see that systemd is more than just an init system; it’s a complete automation framework. If you want to explore that side of it, my course Advanced Automation with systemd walks through how to replace fragile cron jobs with powerful, dependency-aware timers, sandboxed tasks, and resource-controlled services. It’s hands-on and practical! Advanced Automation with systemd: Take Your Linux Automation Beyond Cron (Linux Handbook, Umair Khurshid).
-
Nitrux 5.0.0 Released: A 'New Beginning' That's Not for Everyone (By Design)
by: Sourav Rudra Thu, 13 Nov 2025 12:13:58 GMT Nitrux is a Debian-based Linux distribution that has always stood out for its bold design choices. It even made our list of the most beautiful Linux distributions. Earlier this year, the project made a significant announcement: it discontinued its custom NX Desktop and the underlying KDE Plasma base, prioritizing a Hyprland desktop experience combined with its in-house app distribution methods. Now, the first major release reflecting this redefined approach is finally here.

🆕 Nitrux 5.0.0: What's New?

The release uses OpenRC 0.63 as its init system instead of systemd. This is paired with either Liquorix kernel 6.17.7 or a CachyOS-patched kernel, depending on your hardware, and the desktop experience is Wayland-only. KDE Plasma, KWin, and SDDM are gone. In their place, you get Hyprland with Waybar for the panel, Crystal Dock for application launching, greetd as the login manager, and QtGreet as the display manager. Wofi serves as the application launcher, while wlogout handles logout actions. Nitrux 5.0.0 ships with an immutable root filesystem powered by NX Overlayroot. This provides system stability and rollback capabilities through the Nitrux Update Tool System (nuts). Plus, there is Nitrux's new approach to software management. NX AppHub and AppBoxes are now the primary methods for installing applications. Flatpak and Distrobox remain available as complementary options. There are many updated apps and tooling in this release too:

Podman 5.6.1
Docker 26.1.5
Git 2.51.0
Python 3.13.7
OpenRazer 3.10.3
MESA 25.2.3
BlueZ 5.84
PipeWire 1.4.8

The developers are clear about who Nitrux is for. It is designed for users who see configuration as empowerment, not inconvenience. This isn't a distribution trying to please everyone. The team put it this way in their announcement, emphasizing that Nitrux targets modern, powerful hardware: "These are not additions for the sake of novelty, but extensions of the same philosophy. Tuned for real machines: a track weapon, not a city commuter—built for those who drive, not spectate."

📥 Download Nitrux 5.0.0

The nitrux-contemporary-cachy-nvopen ISO is designed for NVIDIA hardware. It includes the NVIDIA Open Kernel Module and uses the CachyOS-patched kernel. The nitrux-contemporary-liquorix-mesa ISO targets AMD and Intel graphics. It ships with the Liquorix kernel and MESA drivers. Both versions are also available through SourceForge (Nitrux 5.0 on SourceForge). A fresh installation is strongly recommended for this release. Updates from Nitrux 3.9.1 to 5.0.0 are not supported. Future updates will be delivered through the Nitrux Update Tool System. Also, virtual machines are not supported natively, as the team removed many VM-specific components. You can learn more in the release notes.

Suggested Read 📖: Here are the Most Beautiful Linux Distributions in 2025 (It's FOSS, Ankush Das). Aesthetically pleasing? Customized out of the box? You get the best of both worlds in this list.
-
Graph RAG: Elevating AI with Dynamic Knowledge Graphs
by: Nitij Taneja Thu, 13 Nov 2025 09:50:29 GMT Introduction In the rapidly evolving landscape of Artificial Intelligence, Retrieval-Augmented Generation (RAG) has emerged as a pivotal technique for enhancing the factual accuracy and relevance of Large Language Models (LLMs). By enabling LLMs to retrieve information from external knowledge bases before generating responses, RAG mitigates common issues such as hallucination and outdated information. However, traditional RAG approaches often rely on vector-based similarity searches, which, while effective for broad retrieval, can sometimes fall short in capturing the intricate relationships and contextual nuances present in complex data. This limitation can lead to the retrieval of fragmented information, hindering the LLM's ability to synthesize truly comprehensive and contextually appropriate answers. Enter Graph RAG, a groundbreaking advancement that addresses these challenges by integrating the power of knowledge graphs directly into the retrieval process. Unlike conventional RAG systems that treat information as isolated chunks, Graph RAG dynamically constructs and leverages knowledge graphs to understand the interconnectedness of entities and concepts. This allows for a more intelligent and precise retrieval mechanism, where the system can navigate relationships within the data to fetch not just relevant information, but also the surrounding context that enriches the LLM's understanding. By doing so, Graph RAG ensures that the retrieved knowledge is not only accurate but also deeply contextual, leading to significantly improved response quality and a more robust AI system. This article will delve into the core principles of Graph RAG, explore its key features, demonstrate its practical applications with code examples, and discuss how it represents a significant leap forward in building more intelligent and reliable AI applications. Key Features of Graph RAG Graph RAG distinguishes itself from traditional RAG architectures through several innovative features that collectively contribute to its enhanced retrieval capabilities and contextual understanding. These features are not merely additive but fundamentally reshape how information is accessed and utilized by LLMs. Dynamic Knowledge Graph Construction One of the most significant advancements of Graph RAG is its ability to construct a knowledge graph dynamically during the retrieval process. Traditional knowledge graphs are often pre-built and static, requiring extensive manual effort or complex ETL (Extract, Transform, Load) pipelines to maintain and update. In contrast, Graph RAG builds or expands the graph in real time based on the entities and relationships identified from the input query and initial retrieval results. This on-the-fly construction ensures that the knowledge graph is always relevant to the immediate context of the user's query, avoiding the overhead of managing a massive, all-encompassing graph. This dynamic nature allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or graph reconstruction. For instance, if a query mentions a newly discovered scientific concept, Graph RAG can incorporate this into its temporary knowledge graph, linking it to existing related entities, thereby providing up-to-date and relevant information. Intelligent Entity Linking At the heart of dynamic graph construction lies intelligent entity linking. 
As information is processed, Graph RAG identifies key entities (e.g., people, organizations, locations, concepts) and establishes relationships between them. This goes beyond simple keyword matching; it involves understanding the semantic connections between different pieces of information. For example, if a document mentions "GPT-4" and another mentions "OpenAI," the system can link these entities through a "developed by" relationship. This linking process is crucial because it allows the RAG system to traverse the graph and retrieve not just the direct answer to a query, but also related information that provides richer context. This is particularly beneficial in domains where entities are highly interconnected, such as medical research, legal documents, or financial reports. By linking relevant entities, Graph RAG ensures a more comprehensive and interconnected retrieval, enhancing the depth and breadth of the information provided to the LLM. Contextual Decision-Making with Graph Traversal Unlike vector search, which retrieves information based on semantic similarity in an embedding space, Graph RAG leverages the explicit relationships within the knowledge graph for contextual decision-making. When a query is posed, the system doesn't just pull isolated documents; it performs graph traversals, following paths between nodes to identify the most relevant and contextually appropriate information. This means the system can answer complex, multi-hop questions that require connecting disparate pieces of information. For example, to answer "What are the main research areas of the lead scientist at DeepMind?", a traditional RAG might struggle to connect "DeepMind" to its "lead scientist" and then to their "research areas" if these pieces of information are in separate documents. Graph RAG, however, can navigate these relationships directly within the graph, ensuring that the retrieved information is not only accurate but also deeply contextualized within the broader knowledge network. This capability significantly improves the system's ability to handle nuanced queries and provide more coherent and logically structured responses. Confidence Score Utilization for Refined Retrieval To further optimize the retrieval process and prevent the inclusion of irrelevant or low-quality information, Graph RAG utilizes confidence scores derived from the knowledge graph. These scores can be based on various factors, such as the strength of relationships between entities, the recency of information, or the perceived reliability of the source. By assigning confidence scores, the framework can intelligently decide when and how much external knowledge to retrieve. This mechanism acts as a filter, helping to prioritize high-quality, relevant information while minimizing the addition of noise. For instance, if a particular relationship has a low confidence score, the system might choose not to expand retrieval along that path, thereby avoiding the introduction of potentially misleading or unverified data. This selective expansion ensures that the LLM receives a compact and highly relevant set of facts, improving both efficiency and response accuracy by maintaining a focused and pertinent knowledge graph for each query. How Graph RAG Works: A Step-by-Step Breakdown Understanding the theoretical underpinnings of Graph RAG is essential, but its true power lies in its practical implementation. 
This section will walk through the typical workflow of a Graph RAG system, illustrating each stage with conceptual code examples to provide a clearer picture of its operational mechanics. While the exact implementation may vary depending on the chosen graph database, LLM, and specific use case, the core principles remain consistent.

Step 1: Query Analysis and Initial Entity Extraction
The process begins when a user submits a query. The first step for the Graph RAG system is to analyze this query to identify key entities and potential relationships. This often involves Natural Language Processing (NLP) techniques such as Named Entity Recognition (NER) and dependency parsing.

Conceptual Code Example (Python):

import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

# Load spaCy
nlp = spacy.load("en_core_web_sm")

# Step 1: Extract entities
def extract_entities(query):
    doc = nlp(query)
    return [(ent.text.strip(), ent.label_) for ent in doc.ents]

query = "Who is the CEO of Google and what is their net worth?"
extracted_entities = extract_entities(query)
print(f"🧠 Extracted Entities: {extracted_entities}")

Step 2: Initial Retrieval and Candidate Document Identification
Once entities are extracted, the system performs an initial retrieval from a vast corpus of documents. This can be done using traditional vector search (e.g., cosine similarity on embeddings) or keyword matching. The goal here is to identify a set of candidate documents that are potentially relevant to the query.

Conceptual Code Example (Python - simplified vector search):

# Step 2: Retrieve candidate documents
corpus = [
    "Sundar Pichai is the CEO of Google.",
    "Google is a multinational technology company.",
    "The net worth of many tech CEOs is in the billions.",
    "Larry Page and Sergey Brin founded Google."
]

vectorizer = TfidfVectorizer()
corpus_embeddings = vectorizer.fit_transform(corpus)

def retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings, top_k=2):
    query_embedding = vectorizer.transform([query])
    similarities = cosine_similarity(query_embedding, corpus_embeddings).flatten()
    top_indices = similarities.argsort()[-top_k:][::-1]
    return [corpus[i] for i in top_indices]

candidate_docs = retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings)
print(f"📄 Candidate Documents: {candidate_docs}")

Step 3: Dynamic Knowledge Graph Construction and Augmentation
This is the core of Graph RAG. The extracted entities from the query and the content of the candidate documents are used to dynamically construct or augment a knowledge graph. This involves identifying new entities and relationships within the text and adding them as nodes and edges to the graph. If a base knowledge graph already exists, this step augments it; otherwise, it builds a new graph from scratch for the current query context.
Conceptual Code Example (Python - using NetworkX for graph representation): # Step 3: Build or augment graph def build_or_augment_graph(graph, entities, documents): for entity, entity_type in entities: graph.add_node(entity, type=entity_type) for doc in documents: doc_nlp = nlp(doc) person = None org = None for ent in doc_nlp.ents: if ent.label_ == "PERSON": person = ent.text.strip().strip(".") elif ent.label_ == "ORG": org = ent.text.strip().strip(".") if person and org and "CEO" in doc: graph.add_node(person, type="PERSON") graph.add_node(org, type="ORG") graph.add_edge(person, org, relation="CEO_of") return graph # Create and populate the graph knowledge_graph = nx.Graph() knowledge_graph = build_or_augment_graph(knowledge_graph, extracted_entities, candidate_docs) print("🧩 Graph Nodes:", knowledge_graph.nodes(data=True)) print("🔗 Graph Edges:", knowledge_graph.edges(data=True)) Step 4: Graph Traversal and Contextual Information Retrieval With the dynamic knowledge graph in place, the system performs graph traversals starting from the query entities. It explores the relationships (edges) and connected entities (nodes) to retrieve contextually relevant information. This step is where the "graph" in Graph RAG truly shines, allowing for multi-hop reasoning and the discovery of implicit connections. Conceptual Code Example (Python - graph traversal): # Step 4: Graph traversal def traverse_graph_for_context(graph, start_entity, depth=2): contextual_info = set() visited = set() queue = [(start_entity, 0)] while queue: current_node, current_depth = queue.pop(0) if current_node in visited or current_depth > depth: continue visited.add(current_node) contextual_info.add(current_node) for neighbor in graph.neighbors(current_node): edge_data = graph.get_edge_data(current_node, neighbor) if edge_data: relation = edge_data.get("relation", "unknown") contextual_info.add(f"{current_node} {relation} {neighbor}") queue.append((neighbor, current_depth + 1)) return list(contextual_info) context = traverse_graph_for_context(knowledge_graph, "Google") print(f"🔍 Contextual Information from Graph: {context}") Step 5: Confidence Score-Guided Expansion (Optional but Recommended) As mentioned in the features, confidence scores can be used to guide the graph traversal. This ensures that the expansion of retrieved information is controlled and avoids pulling in irrelevant or low-quality data. This can be integrated into Step 4 by assigning scores to edges or nodes and prioritizing high-scoring paths. Step 6: Information Synthesis and LLM Augmentation The retrieved contextual information from the graph, along with the original query and potentially the initial candidate documents, is then synthesized into a coherent prompt for the LLM. This enriched prompt provides the LLM with a much deeper and more structured understanding of the user's request. Conceptual Code Example (Python): def synthesize_prompt(query, contextual_info, candidate_docs): return "\n".join([ f"User Query: {query}", "Relevant Context from Knowledge Graph:", "\n".join(contextual_info), "Additional Information from Documents:", "\n".join(candidate_docs) ]) final_prompt = synthesize_prompt(query, context, candidate_docs) print(f"\n📝 Final Prompt for LLM:\n{final_prompt}") Step 7: LLM Response Generation Finally, the LLM processes the augmented prompt and generates a response. Because the prompt is rich with contextual and interconnected information, the LLM is better equipped to provide accurate, comprehensive, and coherent answers. 
Conceptual Code Example (Python - using a placeholder LLM call):

# Step 7: Simulated LLM response
def generate_llm_response(prompt):
    if "Sundar" in prompt and "CEO of Google" in prompt:
        return "Sundar Pichai is the CEO of Google. He oversees the company and has a significant net worth."
    return "I need more information to answer that accurately."

llm_response = generate_llm_response(final_prompt)
print(f"\n💬 LLM Response: {llm_response}")

# Optional: visualize the knowledge graph built in Step 3
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 3))
pos = nx.spring_layout(knowledge_graph)
nx.draw(knowledge_graph, pos, with_labels=True, node_color='skyblue',
        node_size=2000, font_size=12, font_weight='bold')
edge_labels = nx.get_edge_attributes(knowledge_graph, 'relation')
nx.draw_networkx_edge_labels(knowledge_graph, pos, edge_labels=edge_labels)
plt.title("Graph RAG: Knowledge Graph")
plt.show()

This step-by-step process, particularly the dynamic graph construction and traversal, allows Graph RAG to move beyond simple keyword or semantic similarity, enabling a more profound understanding of information and leading to superior response generation. The integration of graph structures provides a powerful mechanism for contextualizing information, which is a critical factor in achieving high-quality RAG outputs.

Practical Applications and Use Cases of Graph RAG
Graph RAG is not just a theoretical concept; its ability to understand and leverage relationships within data opens up a myriad of practical applications across various industries. By providing LLMs with a richer, more interconnected context, Graph RAG can significantly enhance performance in scenarios where traditional RAG might fall short. Here are some compelling use cases:

1. Enhanced Enterprise Knowledge Management
Large organizations often struggle with vast, disparate knowledge bases, including internal documents, reports, wikis, and customer support logs. Traditional search and RAG systems can retrieve individual documents, but they often fail to connect related information across different silos. Graph RAG can build a dynamic knowledge graph from these diverse sources, linking employees to projects, projects to documents, documents to concepts, and concepts to external regulations or industry standards. This allows for:

Intelligent Q&A for Employees: Employees can ask complex questions like "What are the compliance requirements for Project X, and which team members are experts in those areas?" Graph RAG can traverse the graph to identify relevant compliance documents, link them to specific regulations, and then find the employees associated with those regulations or Project X.
Automated Report Generation: By understanding the relationships between data points, Graph RAG can gather all necessary information for comprehensive reports, such as project summaries, risk assessments, or market analyses, significantly reducing manual effort.
Onboarding and Training: New hires can quickly get up to speed by querying the knowledge base and receiving contextually rich answers that explain not just what something is, but also how it relates to other internal processes, tools, or teams.

2. Advanced Legal and Regulatory Compliance
The legal and regulatory domains are inherently complex, characterized by vast amounts of interconnected documents, precedents, and regulations. Understanding the relationships between different legal clauses, case laws, and regulatory frameworks is critical.
Graph RAG can be a game-changer here: Contract Analysis: Lawyers can use Graph RAG to analyze contracts, identify key clauses, obligations, and risks, and link them to relevant legal precedents or regulatory acts. A query like "Show me all clauses in this contract related to data privacy and their implications under GDPR" can be answered comprehensively by traversing the graph of legal concepts. Regulatory Impact Assessment: When new regulations are introduced, Graph RAG can quickly identify all affected internal policies, business processes, and even specific projects, providing a holistic view of the compliance impact. Litigation Support: By mapping relationships between entities in case documents (e.g., parties, dates, events, claims, evidence), Graph RAG can help legal teams quickly identify connections, uncover hidden patterns, and build stronger arguments. 3. Scientific Research and Drug Discovery Scientific literature is growing exponentially, making it challenging for researchers to keep up with new discoveries and their interconnections. Graph RAG can accelerate research by creating dynamic knowledge graphs from scientific papers, patents, and clinical trial data: Hypothesis Generation: Researchers can query the system about potential drug targets, disease pathways, or gene interactions. Graph RAG can connect information about compounds, proteins, diseases, and research findings to suggest novel hypotheses or identify gaps in current knowledge. Literature Review: Instead of sifting through thousands of papers, researchers can ask questions like "What are the known interactions between Protein A and Disease B, and which research groups are actively working on this?" The system can then provide a structured summary of relevant findings and researchers. Clinical Trial Analysis: Graph RAG can link patient data, treatment protocols, and outcomes to identify correlations and insights that might not be apparent through traditional statistical analysis, aiding in drug development and personalized medicine. 4. Intelligent Customer Support and Chatbots While many chatbots exist, their effectiveness is often limited by their inability to handle complex, multi-turn conversations that require deep contextual understanding. Graph RAG can power next-generation customer support systems: Complex Query Resolution: Customers often ask questions that require combining information from multiple sources (e.g., product manuals, FAQs, past support tickets, user forums). A query like "My smart home device isn't connecting to Wi-Fi after the latest firmware update; what are the troubleshooting steps and known compatibility issues with my router model?" can be resolved by a Graph RAG-powered chatbot that understands the relationships between devices, firmware versions, router models, and troubleshooting procedures. Personalized Recommendations: By understanding a customer's past interactions, preferences, and product usage (represented in a graph), the system can provide highly personalized product recommendations or proactive support. Agent Assist: Customer service agents can receive real-time, contextually relevant information and suggestions from a Graph RAG system, significantly improving resolution times and customer satisfaction. These use cases highlight Graph RAG's potential to transform how we interact with information, moving beyond simple retrieval to true contextual understanding and intelligent reasoning. 
By focusing on the relationships within data, Graph RAG unlocks new levels of accuracy, efficiency, and insight in AI-powered applications. Conclusion Graph RAG represents a significant evolution in the field of Retrieval-Augmented Generation, moving beyond the limitations of traditional vector-based retrieval to harness the power of interconnected knowledge. By dynamically constructing and leveraging knowledge graphs, Graph RAG enables Large Language Models to access and synthesize information with unprecedented contextual depth and accuracy. This approach not only enhances the factual grounding of LLM responses but also unlocks the potential for more sophisticated reasoning, multi-hop question answering, and a deeper understanding of complex relationships within data. The practical applications of Graph RAG are vast and transformative, spanning enterprise knowledge management, legal and regulatory compliance, scientific research, and intelligent customer support. In each of these domains, the ability to navigate and understand the intricate web of information through a graph structure leads to more precise, comprehensive, and reliable AI-powered solutions. As data continues to grow in complexity and interconnectedness, Graph RAG offers a robust framework for building intelligent systems that can truly comprehend and utilize the rich tapestry of human knowledge. While the implementation of Graph RAG may involve overcoming challenges related to graph construction, entity extraction, and efficient traversal, the benefits in terms of enhanced LLM performance and the ability to tackle real-world problems with greater efficacy are undeniable. As research and development in this area continue, Graph RAG is poised to become an indispensable component in the architecture of advanced AI systems, paving the way for a future where AI can reason and respond with a level of intelligence that truly mirrors human understanding. Frequently Asked Questions 1. What is the primary advantage of Graph RAG over traditional RAG? The primary advantage of Graph RAG is its ability to understand and leverage the relationships between entities and concepts within a knowledge graph. Unlike traditional RAG, which often relies on semantic similarity in vector space, Graph RAG can perform multi-hop reasoning and retrieve contextually rich information by traversing explicit connections, leading to more accurate and comprehensive responses. 2. How does Graph RAG handle new information or evolving knowledge? Graph RAG employs dynamic knowledge graph construction. This means it can build or augment the knowledge graph in real-time based on the entities identified in the user query and retrieved documents. This on-the-fly capability allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or manual graph updates. 3. Is Graph RAG suitable for all types of data? Graph RAG is particularly effective for data where relationships between entities are crucial for understanding and answering queries. This includes structured, semi-structured, and unstructured text that can be transformed into a graph representation. While it can work with various data types, its benefits are most pronounced in domains rich with interconnected information, such as legal documents, scientific literature, or enterprise knowledge bases. 4. What are the main components required to build a Graph RAG system? Key components typically include: **LLM (Large Language Model): **For generating responses. 
Graph Database (or Graph Representation Library): To store and manage the knowledge graph (e.g., Neo4j, Amazon Neptune, NetworkX). Information Extraction Module: For Named Entity Recognition (NER) and Relation Extraction (RE) to populate the graph. Retrieval Module: To perform initial document retrieval and then graph traversal. Prompt Engineering Module: To synthesize the retrieved graph context into a coherent prompt for the LLM. 5. What are the potential challenges in implementing Graph RAG? Challenges can include: Complexity of Graph Construction: Accurately extracting entities and relations from unstructured text can be challenging. Scalability: Managing and traversing very large knowledge graphs efficiently can be computationally intensive. Data Quality: The quality of the generated graph heavily depends on the quality of the input data and the extraction models. Integration: Seamlessly integrating various components (LLM, graph database, NLP tools) can require significant engineering effort. 6. Can Graph RAG be combined with other RAG techniques? Yes, Graph RAG can be combined with other RAG techniques. For instance, initial retrieval can still leverage vector search to narrow down the relevant document set, and then Graph RAG can be applied to these candidate documents to build a more precise contextual graph. This hybrid approach can offer the best of both worlds: the broad coverage of vector search and the deep contextual understanding of graph-based retrieval. 7. How does confidence scoring work in Graph RAG? Confidence scoring in Graph RAG involves assigning scores to nodes and edges within the dynamically constructed knowledge graph. These scores can reflect the strength of a relationship, the recency of information, or the reliability of its source. The system uses these scores to prioritize paths during graph traversal, ensuring that only the most relevant and high-quality information is retrieved and used to augment the LLM prompt, thereby minimizing irrelevant additions. References Graph RAG: Dynamic Knowledge Graph Construction for Enhanced Retrieval Note: This is a conceptual article based on the principles of Graph RAG. Specific research papers on "Graph RAG" as a unified concept are emerging, but the underlying ideas draw from knowledge graphs, RAG, and dynamic graph construction. Original Jupyter Notebook (for code examples and base content) Retrieval-Augmented Generation (RAG) Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv preprint arXiv:2005.11401. https://arxiv.org/abs/2005.11401 Knowledge Graphs Ehrlinger, L., & Wöß, W. (2016). Knowledge Graphs: An Introduction to Their Creation and Usage. In Semantic Web Challenges (pp. 1-17). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-38930-1_1 Named Entity Recognition (NER) and Relation Extraction (RE) Nadeau, D., & Sekine, S. (2007). A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1), 3-26. https://www.researchgate.net/publication/220050800_A_survey_of_named_entity_recognition_and_classification NetworkX (Python Library for Graph Manipulation) https://networkx.org/ spaCy (Python Library for NLP) https://spacy.io/ scikit-learn (Python Library for Machine Learning) https://scikit-learn.org/
-
FOSS Weekly #25.46: sudo-rs Issues, Kaspersky on Linux, Flathub Troubles, Homelab Starter and More Linux Stuff
by: Abhishek Prakash Thu, 13 Nov 2025 04:29:03 GMT Here is the news. It's FOSS News (news.itsfoss.com) doesn't exist anymore, at least not as a separate entity. All news articles are now located under the main website: https://itsfoss.com/news/ I merged the two portals into one. Now, you just have to log into one portal to enjoy your membership benefits. I hope it simplifies things for you, specially if you are a Plus member. Let's see what else you get in this edition of FOSS Weekly: A new ODF document standard release.Open source alternative to Help Scout.YouTube clamping down on tech YouTubers.Fixing thumbnail issues in Fedora 43Ubuntu's Rust transition hitting yet another hurdle.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by Internxt. SPONSORED You cannot ignore the importance of cloud storage these days, especially when it is encrypted. Internxt is offering 1 TB of lifetime, encrypted cloud storage for a single payment. Make it part of your 3-2-1 backup strategy and use it for dumping data. At least, that's what I use it for. Get Internxt Lifetime Cloud Storage 📰 Linux and Open Source NewsA new Rust-related problem has cropped up in the land of Ubuntu.ODF 1.4 is here as the next evolution for the open document standard.You can now play classic D3D7 games on Linux with this new project.YouTube recently deleted some Windows 11 bypass tutorials with some absurd claims.Kaspersky antivirus software is now available for Linux users. Personally, I don't use any such software on Linux.Big Tech being Big Tech. A creator claimed that his videos about bypassing Windows 11's mandatory online account were removed by YouTube. YouTube Goes Bonkers, Removes Windows 11 Bypass Tutorials, Claims ‘Risk of Physical Harm’When will these Big Tech platforms learn?It's FOSSSourav Rudra🧠 What We’re Thinking AboutCould GNOME Office be a thing? Roland has some convincing points: It’s Time to Bring Back GNOME Office (Hope You Remember It)Those who used GNOME 2 in the 2000’s would remember the now forgotten GNOME Office. I think it’s time to revive that project.It's FOSSRoland TaylorOn a side note, I found out that Flathub is ranking on Google for NSFW keywords. What a Shame! FlatHub is Ranking on Google for Po*nHub DownloadsAnd it’s not Google’s fault this time.It's FOSSAbhishek Prakash🧮 Linux Tips, Tutorials, and LearningsYou can fix that annoying issue of GNOME Files not showing image thumbnails on Fedora, btw. Fixing Image Thumbnails Not Showing Up in GNOME Files on Fedora LinuxTiny problem but not good for the image of Fedora Linux, pun intended.It's FOSSAbhishek PrakashTheena suggests some ways to reclaim your data privacy. Switching to a private email service like Proton is one of the recommendations. If you are absolutely new to the Linux commands, we have a hands-on series to help you out. Linux Command Tutorials for Absolute BeginnersNever used Linux commands before? No worries. This tutorial series is for absolute beginners to the Linux terminal.It's FOSS👷 AI, Homelab and Hardware CornerOwnership of digital content is an illusion, until you take matters into your own hands. Our self-hosting starter pack should be a good starting point. 
The Self-Hosting Starter Pack: 5 Simple Tools I Recommend To Get Started With Your HomelabSelf-hosting isn’t rocket science—if I can do it, so can you!It's FOSSTheena Kumaragurunathan🛍️ Linux eBook bundleThis curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative! Explore the Humble offer here✨ Project HighlightsDon't let its name fool you. Calcurse is a powerhouse of a tool that can be your go-to for any calendar management needs (like a boon, almost). Command Your Calendar: Inside the Minimalist Linux Productivity Tool CalcurseA classic way to stay organized in the Linux terminal with a classic CLI tool.It's FOSSRoland TaylorHelp Scout is known for abrupt pricing changes; why not switch to a platform that actually cares? Tired of Help Scout Pulling the Rug from Under You? Try This Free, Open Source AlternativeDiscover how FreeScout lets you run your own help desk without vendor lock-in or surprise price hikes.It's FOSSSourav Rudra📽️ Videos I Am Creating for YouThe latest video shows my recommendations for Kitty terminal configuration changes. Subscribe to It's FOSS YouTube Channel Linux is the most used operating system in the world. but on servers. Linux on desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides to help use Linux on their personal computer. We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials. If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription. Join It's FOSS Plus 💡 Quick Handy TipIn the Konsole terminal emulator, you can use the right-click context menu to open any folder with a specific tool. For example, if you are inside a directory, right-click and select the "Open Folder With" option. From the list, select an application. So, for instance, if Dolphin is selected, the location will be opened in the file manager. If Kate is selected, that location is opened in the editor. Other than that, if you enable the "Underline Files" option in Configure Konsole →Profiles → Edit Profile → Mouse → Miscellaneous, you can even right-click and open files in GUI tools right from the terminal. 🎋 Fun in the FOSSverseCan you get all the answers to this Linux distro logo quiz? Guess the Distro from its LogoThere is a logo and four distro names. Guess which one it belongs to. It’s that simple.It's FOSSAbhishek Prakash🤣 Meme of the Week: Such words can hurt the soul, you know. 😵 🗓️ Tech Trivia: On November 9, 2004, Mozilla Firefox 1.0 was released, introducing a faster, safer web-browsing experience with features like tabbed browsing and popup blocking, marking a major challenge to Microsoft’s Internet Explorer dominance. 🧑🤝🧑 From the Community: One of the developers of antiX Linux has announced that the first beta release of antiX 25 is now live! antiX 25 Beta 1 Available for Public TestingantiX-25-full-beta1available for public testing November 5, 2025 by anticapitalista Here is the first beta iso of antiX-25 (64bit). Bullet point notes for now. 
based on Debian 13 ‘trixie’ 4 modern systemd-free init systems – runit (default), s6-rc, s6-66 and dinit new default look usual ‘antiX magic’ you should be able to boot live in the non-default init and it should then become the default after install. Please note that user intervention will be required more than previous versions o…It's FOSS CommunityProwlerGr❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
IBM Joins OpenSearch Software Foundation to Advance AI-Powered Search and RAG
by: Sourav Rudra Wed, 12 Nov 2025 17:12:36 GMT The OpenSearch Software Foundation is a vendor-neutral organization under the Linux Foundation that hosts the OpenSearch Project. It recently appointed a new Executive Director, and the project itself has already seen over 1 billion software downloads since launch. If you didn't know, OpenSearch focuses on search, analytics, observability, and vector database capabilities. What's Happening: During this year's KubeCon + CloudNativeCon North America conference, the foundation announced that IBM has joined as a Premier Member. This move comes at a time when enterprises are increasingly adopting retrieval-augmented generation (RAG) for AI applications. The membership costs $150,000 annually, where IBM joins existing Premier Members, including AWS, SAP, and Uber. IBM currently uses OpenSearch in production through DataStax, its subsidiary. The company integrated JVector with OpenSearch for high-performance vector search at billion-vector scale. During the announcement, Ed Anuff, the VP of Data and AI Platforms Strategy at IBM, added that: As part of IBM’s work in the evolution of AI, we’re thrilled to contribute to the development of OpenSearch. By joining the Foundation, we are helping ensure that production generative Al can be built on a robust open source foundation.What to Expect: IBM will contribute enterprise-grade enhancements to OpenSearch's security and observability features. The company plans to share high-availability patterns tested through IBM Cloud deployments. The focus areas include vector search performance improvements and multimodal document ingestion. IBM also aims to advance the developer experience for building AI agents. Plus, the company is on track to announce a new open source project featuring OpenSearch at the OpenRAG Summit on November 13. Reflecting on the partnership's significance, Bianca Lewis remarked: IBM’s commitment to the OpenSearch Software Foundation is a testament to the role open source search and analytics play in AI-enabled enterprises of the future. Our member organizations help shape and develop the tools and technology that make intelligent operations a reality, and we are thrilled that IBM has joined the Foundation, strengthening our community and mission.Suggested Read 📖 OpenSearch Foundation Strengthens Leadership with New Executive DirectorBianca Lewis becomes executive director of the OpenSearch Foundation.It's FOSSSourav Rudra
-
Kaspersky Antivirus is Now Available for Linux. Will You Use it?
by: Sourav Rudra Wed, 12 Nov 2025 15:11:51 GMT The Linux ecosystem is facing increasing pressure from threat actors, who are getting more clever day-by-day, threatening critical infrastructure worldwide. Servers powering essential services, industrial control systems, and enterprise networks all rely on Linux, and these attackers know it. What was once considered a relatively safe ecosystem is now a lucrative target. 🥲 This brings us to Kaspersky, the Russian cybersecurity firm with a reputation. The company was banned from selling its antivirus software and cybersecurity products in the U.S. back in July 2024. But for users outside the U.S., Kaspersky just announced something interesting. They are bringing antivirus protection to home Linux users. Though, it remains to be seen, whether this addresses genuine security needs or if it's just security theater for worried penguins. 🚧This piece of software is not FOSS. We covered it because it is available for Linux!Kaspersky for Linux: What Does it Offer?Kaspersky has expanded its consumer security lineup to include Linux. This marks the first time their home user products officially support the platform. The company adapted their existing business security solution for home users. Support covers major 64-bit distributions, including Debian, Ubuntu, Fedora, and RED OS. Depending on the plan you opt for, the feature set includes real-time monitoring of files, folders, and applications to detect and eliminate malware. Behavioral analysis detects malware on the device for proactive defense. Removable media like USB drives and external hard drives get scanned automatically upon connection. This prevents the spread of viruses across devices and networks. Anti-phishing alerts users when attempting to follow phishing links in emails and on websites. Online payment protection verifies the security of bank websites and online stores before financial transactions. Anti-cryptojacking prevents unauthorized crypto mining on devices to protect system performance, and AI-powered scanning blocks infected files, folders, and applications upon detecting viruses, ransomware trojans, password stealers, and other malware. Though, there is one important thing to consider: Kaspersky for Linux isn't GDPR-ready, so keep this in mind if you are an EU-based user concerned about data protection compliance. Get Kaspersky for LinuxAn active paid subscription is required to download and use Kaspersky for Linux. A 30-day free trial is available for users who want to test before committing to a paid plan. Both DEB and RPM packages are provided for easy installation. The official installation guide contains detailed setup instructions. Kaspersky for LinuxVia: Phoronix
-
Ubuntu's Rust Transition Hits Another Bump as sudo-rs Security Vulnerabilities Show Up
by: Sourav Rudra Wed, 12 Nov 2025 13:29:24 GMT Ubuntu's move to Rust-based system utilities has hit some bumps. Earlier, a bug in the Rust-based date command broke automatic updates. The command returned the current time instead of file modification timestamps, causing Ubuntu 25.10 systems to stop automatically checking for software updates. That issue was quickly fixed, but now two security vulnerabilities have been found in sudo-rs.

Better Now than Later

The first vulnerability involves password exposure during timeouts. When users type a password but don't press enter, the timeout causes those keystrokes to replay onto the console. This could reveal partial passwords in shell history or on screen. The second issue affects timestamp authentication. When the Defaults targetpw or Defaults rootpw options are enabled, sudo-rs recorded the wrong user ID in timestamps. This allowed authentication to be bypassed by reusing cached credentials even when policy required a different password. Patches for both issues have been released in sudo-rs 0.2.10. Ubuntu is set to push the fixes through a Stable Release Update (SRU). Catching these bugs in Ubuntu 25.10 is actually a good sign. The interim release serves as a testing ground before Ubuntu 26.04 LTS arrives in April 2026. Finding critical security flaws now allows developers ample time to address them.

Here's the Fix!

At the time of writing, the updated sudo-rs package had not yet arrived in the Ubuntu 25.10 repositories. But it should be available soon. Once the update is live, you can get the fix using the graphical Software Updater tool by launching it from your application menu and installing any available security updates. sudo-rs' upgrade process on Ubuntu 25.10. Alternatively, you can use the terminal. Run these commands one after the other to get the patch:

sudo-rs apt update
sudo-rs apt upgrade

PS: Using sudo instead of sudo-rs also works the same. Via: Phoronix Suggested Read 📖 sudo vs sudo-rs: What You Need to Know. sudo-rs is poised to take over. Here's what you should know about sudo-rs as a sudo user.It's FOSSAbhishek Prakash
-
Role Model Blog: Chandni Sharma, Neomore
by: Ani Wed, 12 Nov 2025 11:14:36 +0000 The only constant in life is change. About meI’m Chandni, currently working as a UX Designer at Neomore, where I create user experiences for SAP applications. My work begins with understanding workflows through user research and interviews. From there, I create wireframes, prototypes, and user flows, collaborating closely with consultants, developers, and stakeholders to ensure that every design is both technically feasible and genuinely user-friendly. Starting my career in the IT field My career began as a software developer, which gave me a strong foundation in how digital products are built. Over time, I realized that what truly fascinated me was the human side of technology, the way people interact with systems, and how design can make that interaction possible. Being naturally people-oriented, I transitioned into UX design to focus on understanding users’ needs and creating experiences that make complex systems user-friendly and enjoyable. Chandni Sharma, Consultant, UX & Application Innovation, Neomore My Background I earned my Bachelor of Technology degree in India. Later, I moved to Finland when my husband was relocated here, and I quickly fell in love with the country. What started as a visit soon became a long-term decision to build both my life and career here. When I first arrived, it was a challenging time to find a job. It was right after Nokia’s major economic decline, and many experienced professionals had entered the job market. Each time I applied, I often got a reply that the position had been filled by someone who had just left Nokia. Being young and less experienced, it was difficult to compete. Instead of giving up, I decided to focus on learning. I pursued a bachelor’s degree in user experience at Haaga-Helia and later completed my master’s in computer science at Aalto University, specializing in Service Design and UX. These studies not only deepened my technical knowledge but also boosted my confidence. They helped me successfully transition from software development to becoming a UX designer and researcher, an expert in creating meaningful user experiences. My path to Neomore When I was studying, I had heard of SAP but didn’t really understand what the field was about. Later, while searching for jobs, I came across openings for UX roles in SAP and became curious to know how user experience fits into SAP. I decided to apply, and during the interviews, the team explained the work in detail. I realized it would be both a challenging and rewarding learning journey. That’s how I joined Neomore. I had some prior experience from another company, and now, this December, I’ll be completing three years at Neomore. Working at NeomoreWhat truly motivates me to go to the office every day is the people and the culture at Neomore. The supportive and inspiring environment they’ve built makes a huge difference, which keeps me motivated no matter how challenging a project or task might be. I have to admit, it’s the people and the culture that make Neomore such a great place to work. Enjoying my work The best part of my job is the constant learning that comes with working in the SAP industry. Every day, I gain new insights into different processes and how our clients operate, areas I never imagined I’d explore. For example, I’ve learned how manufacturing works, what kinds of machines are used in woodworking, and the challenges people face in their daily routines. 
Understanding these real-world contexts and discovering how technology can help them is both motivating and exciting. This continuous learning keeps me energized, no matter how challenging the work gets. The necessary skills in the IT fieldProblem-solving and curiosity are some of the most important skills to have. In my work, curiosity drives me to ask questions and explore different perspectives. When I listen to people, my curious nature helps me go deeper into their stories and uncover hidden insights. I’m not afraid to ask questions—though I’m always mindful of how and when to ask them. This openness allows me to gather valuable answers, identify the real problems, and map out effective solutions. Ultimately, curiosity and the courage to ask are key to meaningful problem-solving in any field. Other important skills to have are good communication skills, being empathetic towards users, and having knowledge of software development. Being skilled in technology, I can bridge the technology and human needs into something that can really make a difference. Overcoming challengesTo overcome challenges, I’ve learned that having patience with oneself is essential. You need to give time to keep yourself updated on technologies so that you can level up with developers, stakeholders, and users. To keep yourself up-to-date, you must embrace continuous learning to help yourself bridge the gaps, and I’m grateful that Neomore strongly supports professional growth. They regularly encourage us to learn and discuss ways to develop further. For me, patience and lifelong learning are the keys to overcoming challenges in this field. The key to solving problemsWhenever I get stuck on a problem and cannot find a solution, I pause for a moment instead of forcing myself to continue. At the office, we have a pool table, so I often take a short break to play a quick game, sometimes alone, sometimes with a colleague. That brief change of focus helps clear my mind. It is a simple routine, but it really helps me get back into the right state of mind to solve the problems effectively. Sources of energyI get a lot of energy by being surrounded by people, either friends or family. This has been a good source of energy for me. After becoming a mother, my children became my greatest source of energy. Playing with my kids, listening to their stories, and doing whatever they want to do brings my mind to balance and provides me with a lot of energy to continue and thrive in any situation I am in. Always Start with WhyStart with Why, written by Simon Sinek, has had a profound impact on my work and, more broadly, on my life. For example, when I talk to a client, I listen to their needs and always ask why they want to have what they want. This allows me to go deeper into their needs, and I get a clearer idea of what they are requesting, which helps me to help them get their solutions. About the impact of AII strongly believe that AI is a powerful tool designed to help us. There’s a saying that the only constant in life is change, and AI is a part of that change. Instead of fearing it, we should learn, understand, and use it to our advantage. Every technology has both positive and negative sides, and AI is no different. The key is to understand both aspects and use them responsibly. Personally, I actively explore and learn from different AI tools, finding ways they can support my work and growth. While some worry that AI might take jobs or have negative effects, I see it as an opportunity to evolve and work smarter. 
Note: Between the time of writing this blog post and publishing it, Chandni Sharma’s employment at Neomore ended. Neomore, Chandni and Women in Tech decided to publish this blog post regardless since Chandni will always be a role model to our communities. The post Role Model Blog: Chandni Sharma, Neomore first appeared on Women in Tech Finland.
-
The Self-Hosting Starter Pack: 5 Simple Tools I Recommend To Get Started With Your Homelab
by: Theena Kumaragurunathan Wed, 12 Nov 2025 07:21:41 GMT In my last column, Ownership is an illusion, unless you self-host, I encouraged readers to go down the self-hosting path. My thesis was simple: ownership of digital assets (movies, music, games, books, software) is an illusion, and that the only way to move away from this make-believe was to embrace self-hosting. For people like me, non-programmer types, this is easier said than done: Free and Open Source (FOSS) can seem intimidating because often (not always) FOSS asks you to embrace granular control over convenience and ease-of-use. The author's server, a repurposed 14 year old ThinkPad, ©Theena Kumaragurunathan, 2025When non-tech people see my server (an old ThinkPad T420) nestled in my book-shelf, running ‘bpytop’ of all things, they assume that I am engaged in some hackery: ‘What is this Matrix shit?’, a friend once wondered. When I told him that it was nothing more than a file-server for my media (movies, music and books), and then showed him my Jellyfin instance running inside my browser, I could see he was having a lightbulb moment: ‘Can you do this for me?’ Sure, I told him, but I offered him a better choice: ‘I’ll show you what you need to know in order to do this yourself, and then we will create a media server for you together.’ Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime, right? That journey with my friend took us from basics Linux commands to installation of Plex/Jellyfin, which is well beyond the scope of this article (let us know if that is something you are interested in, non-techie/programmer readers). Instead in this column, I will offer an abdridged version. Ask yourself why you need thisI had a clear motivation to go down this path during the pandemic: I wanted to backup my media collection of music and run it off Plex Server. My friend wanted to self-host his movies so that he didn’t have to wade through his hard-disks when selecting a movie to watch. What is your motivation? What you need to get started on the homelab bandwagonAn old computer or laptop. This needs to be in working order. Mine is an old ThinkPad T420 (which is 14 years old, and I am its 3rd owner). Anything from the last decade and a half ought to do. You can also get a Raspberry Pi. I would also prefer an older machine with an ethernet port; connection stability is better when your server has a wired connection to your network in my experience. Pick an operating system: I chose Debian server. You can host many of the applications listed below on a Windows install too, but your mileage may vary. If you want an even easier way, try the YunoHost Linux distro. The author's self-hosting stack, ©Theena Kumaragurunathan, 2025Start your homelab by self-hosting these softwareYou don't have to deploy all the recommendations. Think about which one would fit your requirements the most. Select it and then deploy it. Once that is successful, try next. One project at a time. 📋I am not going to include installation and set-up instructions. Those things may differ based on the choice of your operating system as well as hardware. These are just the recommendations to put you on the right track.Jellyfin: Your own NetflixEnjoying local movies on TV with JellyfinJellyfin is your home theater. It organizes movies and shows, fetches artwork, and streams to TV, browser, and phone. I chose Jellyfin media server software because setup is simple. 
On Debian or Ubuntu, you can use the official guide, or run it with Docker and point it at your media folders. It has no subscriptions and no tracking. 💡Keep your server on wired ethernet for stable playback, and enable hardware transcoding only if your CPU or GPU supports it.Kavita: Your own Kindle libraryThe author's instance of Kavita running on his local server, ©Theena Kumaragurunathan, 2025 Kavita is a self hosted library for books, PDFs, comics, and manga. It has a fast reader, rich metadata, OPDS, and good user management. I use it to keep my EPUBs and essays in one place with clean reading progress across devices. 💡Sort files into clear folders, let Kavita watch those folders, and enable OPDS if you read on third party apps.Nextcloud: Your own Google driveNextcloud is your personal file cloud. Sync your files, share links, and extend it with Notes, Calendar, and Contacts. It feels like a private Dropbox that runs on your hardware. The server has regular releases and clear upgrade docs. If you are new, use the web installer or Docker and start with Files before adding apps. 💡Keep it simple. Install Files first, set up the desktop client, and only add one or two apps after you are comfortable.Immich: Your own Google PhotosImmich is a private photo and video backup with mobile apps on Android and iOS. It does face recognition, search, albums, and multi user support. It is fast and designed for large libraries. Installation is straightforward with Docker Compose. Begin with the official site, then the server and apps. 💡Turn on automatic mobile backup, keep originals on the server, and use albums for curation.Navidrome: Your own SpotifyNavidrome turns your music collection into a streaming service. It indexes quickly, supports Subsonic clients, and runs well on modest hardware. You can use a single binary or Docker and attach your music folder. 💡Install ffmpeg for transcoding, clean your tags for better library browsing, and test a few clients until one fits your flow.Putting It TogetherA practical starter map looks like this. Jellyfin for movies and shows. Kavita for books and PDFs. Nextcloud for files and sharing. Immich for your photos. Navidrome for music. Run all five on Debian server or YunoHost or on Docker if you prefer containers. Keep your server on wired Ethernet. Back up the data folders in your home network. Start with one service, get comfortable, then add another. The point is not perfection. It is owning your library and making it available to the people you care about, without asking permission from a platform that can lock you out at a whim. Enjoy your home lab 🏠🥼
-
You Can Play Classic D3D7 Games on Linux With This New Project, But Don’t Expect Perfection
by: Sourav Rudra Tue, 11 Nov 2025 17:23:36 GMT D7VK is a new Vulkan-based translation layer for Direct3D 7. It relies on DXVK’s Direct3D 9 backend and works with Wine on Linux. The project is open source and actively maintained. The developer behind it is WinterSnowfall, who has also worked on D8VK between 2023 and 2024. That project has since been merged into the larger DXVK project that's extensively used by Linux users. You have to understand that D7VK is not meant to run every Direct3D 7 game. Titles that mix D3D7 with older DirectDraw or GDI calls may fail to launch or show graphical glitches. So, compatibility is experimental and limited. It works by translating Direct3D 7 calls to Direct3D 9 through DXVK, allowing Vulkan-based 3D application rendering on Linux. Sadly, there is no official list of supported games yet. Some games work well, others have issues. Missing textures, crashes, and black screens are common. The issues page on the project's GitHub repo shows which games are behaving poorly. It is a good way to see what currently works. 📋PCGamingWiki's list of Direct3D 2-7 games is also a handy resource to have if you want to test a specific Direct3D 7 game.What’s nice is how the developer sets expectations right from the start. They are upfront about the experimental nature of the project. This clarity makes it easier to test games without getting disappointed. For fans of late 90s and early 2000s games, D7VK could be handy. It won’t fix everything, but it opens the door to running older Direct3D 7 games on Linux. Want to Check it Out?The D7VK GitHub repository has the source code. You can manually compile it and place it in your Wine prefix directory to try it out. D7VK supports a HUD overlay and frame rate limiting through DXVK. These features will help you track performance and debug graphical issues. D7VK (GitHub)Suggested Read 📖 Is Linux Ready For Mainstream Gaming In 2025?Linux is quietly gaining ground on Windows in the gaming space. But how well does it actually perform? Here’s what I experienced.It's FOSSSourav Rudra
-
Common Networking Issues Every DevOps Engineer Encounters
by: Umair Khurshid Tue, 11 Nov 2025 20:03:47 +0530 Networking problems rarely announce themselves clearly. A deployment fails, a pod cannot reach its database, or a service responds intermittently. The logs look clean, yet something feels wrong. Most engineers eventually learn one painful truth: when everything else seems fine, it is usually the network. From misrouted traffic to invisible firewalls, let me walk you through the most frequent networking issues that DevOps engineers encounter in Linux environments. I also explain how to investigate, diagnose, and fix each class of problem using real commands and reasoning. All of this comes from the experience I have gained over the years. The same experience also yielded this Linux networking microcourse that you should definitely check out. Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair Khurshid

It's Almost Always the Network

When an application behaves unpredictably, the first instinct is to look at the code. Developers dig through logs, restart containers, or roll back deployments. In many cases, the application is not the culprit; it's the network. Early in my career, I used to dread these moments, as application logs would show nothing but retries and timeouts. The developers would swear nothing changed and the operations team would swear they touched nothing. Yet packets were vanishing into the void, and that is how I began to take networking seriously: not because I wanted to, but because I had to. A good troubleshooting approach begins by proving that connectivity works at every layer. Start simple:

ping -c 4 8.8.8.8
ping -c 4 example.com

If the first command succeeds but the second fails, DNS is the culprit. If both fail, it is a routing or firewall issue. This baseline test should always come before looking into application-level logs. Then, verify whether the local host can reach its gateway and whether packets are returning:

ip route show
traceroute 8.8.8.8

The Routing Rabbit Hole

Routing problems are deceptively subtle: traffic flows one way but not the other, or only some destinations are reachable. The root cause often hides in Linux's routing tables or in policies added by container frameworks. Start by displaying the active routes:

ip route

This shows the kernel's routing decisions. For more detailed analysis, especially in multi-interface or container setups, check which route a particular destination would take:

ip route get 1.1.1.1

If a host has multiple network interfaces or is part of a VPN or overlay, verify that the correct table is being used. Linux supports multiple routing tables, and policy routing determines which one applies. Check the rules:

ip rule show

Misconfigured rules can cause asymmetric routing, where packets leave through one interface but return on another. Firewalls often drop these replies because they appear invalid. One reliable fix is to assign separate routing tables for each interface and use ip rule add with from or fwmark selectors to control the path. For example, to route traffic from 192.168.10.0/24 through a specific gateway:

ip route add default via 192.168.10.1 dev eth1 table 10
ip rule add from 192.168.10.0/24 table 10

Always check for reverse path filtering:

sysctl net.ipv4.conf.all.rp_filter

Set it to 2 (loose mode) with sysctl -w net.ipv4.conf.all.rp_filter=2 on multi-homed hosts to prevent dropped packets due to asymmetric routes. Routing issues rarely announce themselves clearly.
The key is to map how packets should travel, then prove it with ip route get, traceroute, and tcpdump.

DNS: The Eternal Suspect

No other component gets blamed as frequently, or as incorrectly, as DNS. Even the recent AWS outage that took down half of the internet was reportedly caused by DNS. When an application cannot reach its dependency, the first guess is always "maybe DNS is broken." Sometimes it is, but often the problem is caching, misconfiguration, or unexpected resolution order. Start by checking the configured resolvers:

cat /etc/resolv.conf

Most distros these days use systemd-resolved, so the file may point to a stub resolver at 127.0.0.53. To see the active DNS servers:

resolvectl status

If resolution is inconsistent between services, the problem may be namespace isolation. Containers often have their own /etc/resolv.conf, copied at startup. If the host's DNS changes later, containers keep using outdated resolvers. Test resolution directly:

dig example.com
dig @8.8.8.8 example.com

Compare responses from the default resolver and a public one. If only the latter works, the issue lies in internal DNS or local caching. A subtle but common failure arises from nsswitch.conf. The order of resolution methods (files dns myhostname) determines whether /etc/hosts entries or mDNS override DNS queries. In container-heavy environments, this can lead to confusing name collisions.

💡 DNS problems are not always network failures, but they produce identical symptoms. That is why verifying DNS resolution early saves hours of debugging.

Even when DNS works, it can still mislead you. I remember spending an hour debugging a connection issue that turned out to be caused by an unexpected IPv6 AAAA record. The application preferred IPv6, but the route to that subnet was broken. The fix was as simple as setting precedence ::ffff:0:0/96 100 in /etc/gai.conf.

MTU and Fragmentation Headaches

The Maximum Transmission Unit, or MTU, defines how large a packet can be before it needs fragmentation. When this number mismatches between interfaces, tunnels, or virtual networks, packets vanish without a trace. You get intermittent timeouts, partial uploads, and mysterious hangs in SSH sessions. To check the MTU on an interface:

ip link show eth0

To test path MTU discovery, use ping with increasing packet sizes:

ping -s 1472 8.8.8.8

Regular ICMP echoes may succeed even when TCP traffic fails. To detect MTU issues, you need to force the "do not fragment" flag:

ping -M do -s 1472 8.8.8.8

If it fails, lower the size until it succeeds. The MTU equals the ping payload plus 28 bytes (ICMP and IP headers). In virtualized or overlay environments (VXLAN, WireGuard, GRE, eBPF), encapsulation overhead reduces the effective MTU. For example, VXLAN adds 50 bytes. Setting the MTU to 1450 instead of 1500 avoids fragmentation. Adjust the interface MTU safely:

ip link set dev eth0 mtu 1450

Applications sensitive to latency often experience erratic behavior because of hidden fragmentation. Once MTU mismatches are corrected, those mysterious slowdowns vanish. In container environments, MTU mismatches become especially painful. Overlay networks such as Flannel or Calico encapsulate packets inside UDP tunnels, reducing available space. If the MTU is not adjusted inside the container, performance plummets. A single missing ip link set dev eth0 mtu 1450 can make a cluster look broken.

Overlay Networks and Ghost Packets

Modern clusters rely on overlays to connect containers across hosts.
VXLAN, WireGuard, and similar technologies encapsulate traffic into tunnels, creating virtual networks. They are convenient but introduce new failure modes that are invisible to traditional tools. A common symptom is "ghost packets": traffic that appears to leave one node but never arrives at another. The tunnel endpoint logs nothing, yet connectivity fails. The first step is to confirm that the tunnel interfaces exist and are up:

ip link show type vxlan

Check if the remote endpoint is reachable outside the tunnel:

ping <remote_host_ip>

If that fails, the problem is not the overlay but the underlay, the physical or cloud network below it. Next, verify that encapsulated traffic is not filtered. VXLAN uses UDP port 4789 by default, and WireGuard uses 51820. Ensure that firewalls on both ends allow those ports. To inspect whether encapsulation is functioning:

tcpdump -i eth0 udp port 4789

If packets appear here but not on the remote host, NAT or routing between the nodes is rewriting source addresses in a way that breaks return traffic. WireGuard adds its own layer of complexity. Its peers are identified by public keys, not IP addresses, so if the endpoint's IP changes (for example, in cloud autoscaling), you must update its Endpoint in the configuration:

wg set wg0 peer <public-key> endpoint <new-ip>:51820

Overlay debugging requires seeing both worlds at once: the logical (tunnel) and physical (underlay) networks. Always verify that encapsulated packets can travel freely and that the MTU accommodates the overhead. Most ghost packets die because of either firewall drops or fragmentation within the tunnel.

When Firewalls and Conntrack Betray You

Firewalls are supposed to protect systems, but when they fail silently, they create some of the hardest problems to diagnose. Linux's connection tracking layer (conntrack) manages the state of every connection for NAT and stateful inspection. When its table fills or rules conflict, packets disappear with no visible error. Start by checking the current number of tracked connections:

cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

I have debugged a number of microservice clusters where outbound connections failed intermittently, and the culprit was an overloaded conntrack table. Each NAT-ed connection consumes an entry, and the table silently drops new connections once full. The solution is simply increasing the limit:

sysctl -w net.netfilter.nf_conntrack_max=262144

For persistent tuning, add it to /etc/sysctl.conf. State timeouts can also cause intermittent loss: long-lived connections often expire in conntrack while still active on the application side. Adjust the TCP established timeout:

sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600

Firewalls configured with nftables or iptables can complicate debugging when NAT or DNAT rules are applied incorrectly. Always inspect the active NAT table:

nft list table nat

Make sure destination NAT and source NAT are paired correctly, because asymmetric NAT produces connection resets or silence. In high-throughput environments, offloading some rules to nftables sets with maps improves performance and reduces conntrack pressure. This is one of the areas where modern Linux firewalls significantly outperform legacy setups. Conntrack issues are often invisible until you look directly into its state tables. Once you learn to monitor them, many "random" connectivity problems turn out to be predictable and fixable.
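If you want to catch that failure mode before it bites, the counters above are easy to expose to monitoring. Here is a minimal Python sketch that reads the same /proc files and warns when the table approaches its limit; the 80% threshold is an arbitrary example, and it assumes the nf_conntrack module is loaded so those files exist.

#!/usr/bin/env python3
# Minimal conntrack usage check; wire the output into your monitoring system.
from pathlib import Path

def read_int(path):
    return int(Path(path).read_text().strip())

def conntrack_usage():
    count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
    limit = read_int("/proc/sys/net/netfilter/nf_conntrack_max")
    return count, limit, count / limit

if __name__ == "__main__":
    count, limit, ratio = conntrack_usage()
    print(f"conntrack: {count}/{limit} ({ratio:.1%} used)")
    if ratio > 0.8:  # example threshold, tune for your environment
        print("WARNING: conntrack table above 80%, new connections may soon be dropped")

Run it from cron or a node agent and alert on the warning; together with the sysctl tuning above, it turns a silent drop into a visible metric.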
Lessons I Wish I Learned EarlierEvery engineer eventually learns that networking failures tend to follow recognizable patterns, and identifying those patterns early can save hours of unnecessary panic. 1. Always check the local host first. Half of network incidents begin with something as simple as a down interface, a missing route, or an outdated /etc/resolv.conf. 2. Validate one layer at a time. Use ping for reachability, dig for DNS, traceroute for routing, tcpdump for packet visibility, and nft list ruleset for firewalls and never skip steps. 3. Document assumptions. When debugging, write down what you believe should happen before testing. Networking surprises often come from assumptions no one verified. 4. Monitor the invisible. Connection tracking, queue lengths, and interface drops are invisible in standard metrics. Expose them to your monitoring system to catch silent failures early. 5. Learn how Linux really routes. Most complex issues trace back to misunderstood routing tables, policy rules, or namespaces. Understanding these mechanisms transforms troubleshooting from guessing to knowing. Wrapping UpThe more you troubleshoot Linux networking, the more you realize it is not about memorizing commands. It is about building mental models of how packets move, how policies influence paths, and how the kernel’s view of the network differs from yours. For DevOps engineers managing modern infrastructure, from bare metal to Kubernetes that understanding becomes essential. Once you have fixed enough DNS loops, routing asymmetries, and conntrack overflows, the next logical step is to study how Linux handles these problems at scale: multiple routing tables, virtual routing instances, nftables performance tuning, encrypted overlays, and traffic shaping. The Linux Networking at Scale course builds directly on these foundations. It goes deeper into policy routing, nftables, overlays, and QoS, the exact skills that turn network troubleshooting into design. I highly recommend checking it out. Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair Khurshid
-
Tired of Help Scout Pulling the Rug from Under You? Try This Free, Open Source Alternative
by: Sourav Rudra Tue, 11 Nov 2025 14:22:16 GMT Having a reliable help desk solution is a must for any consumer-facing business in today's digital age. Whether you handle customer emails, support tickets, or live chat, a good help desk system keeps your communication organized and your customers happy. Sadly, many companies take advantage of this need. They push users into walled gardens where access to basic features can change on a whim and key tools get locked behind paywalls. Help Scout's pricing as of November 11, 2025.One such case has been of Help Scout, which switched to a more expensive pricing plan. After customer backlash, the company reverted to a revised plan that was slightly cheaper than the one that sparked the outrage. But, what if I told you there was an alternative that does not make you anxious about sudden pricing changes? Something that lets you build your own setup, keep your data close, and pay only for what you actually need. FreeScout Doesn't Lock You InFreeScout is an open source help desk and shared mailbox built with PHP and Laravel. It is licensed under AGPL 3.0, which means the code is freely available, and you can self-host it on your own server without having to pay any user-based costs. You only pay for hosting and optional paid modules that expand functionality. Modules cover integrations, push notifications, and specialized features. Everything else, from ticket handling to automation, works out of the box once you install FreeScout. Other than the usual help desk features like shared inboxes, agent collision detection, canned responses, and user management, FreeScout offers flexibility that few platforms can match. FreeScout goes a step further with self-hosting, custom domains, API access, and full database control. You decide how your data is stored, backed up, and secured. For organizations that care about privacy and sovereignty, this makes a real difference. It also supports mobile apps for Android and iOS. Push notifications require a paid server-side module, but once configured, your team can manage tickets directly from their phones with no extra cloud dependencies. If you want integrations, FreeScout connects with Slack, Telegram, and other services. There are modules for CRM tools, customer portals, and even AI-assisted responses (via Community modules). Some Things to Keep in MindRunning FreeScout does need some technical setup. You will manage hosting, updates, and backups. Adding advanced features like AI-powered replies or analytics will take extra configuration and can add costs over time. Depending on your setup, you may still rely on FreeScout modules or community support. That means moving away later could take planning, though you always keep your data since it lives on your own server. In contrast, Help Scout and Zendesk provide everything under a single roof. They handle hosting, maintenance, and scaling for you but limit backend customization and control. You use what they provide within their rules. Overall, what FreeScout offers beats any walled garden solution, especially for people running small businesses or larger teams that value data ownership and predictable costs over convenience that comes with lock-in. Want to Deploy It?You can try FreeScout in your browser using its live demo. If you would like hosting it yourself, the official installation guide covers every step for various kinds of setups. Plus, there are apps for both Android and iOS. 
However, for them to work with your FreeScout instance, you must do some additional configuration work. FreeScout🚀Run your own instance of FreeScout effortlessly in the cloud with PikaPods! Start free with $5 welcome credit 😎If you are considering a move from another help desk like Help Scout or Zendesk, you should check out the official migration guide, and if you are interested in the source code, you can visit the project's GitHub repository. Suggested Read 📖 5 Signs Your Proprietary Workflow Is Stifling Your Creativity (And What You Can Do About It)If these signs feel familiar, your creativity may be stifled by proprietary constraints.It's FOSSTheena Kumaragurunathan
-
What a Shame! FlatHub is Ranking on Google for Po*nHub Downloads
by: Abhishek Prakash Tue, 11 Nov 2025 11:07:18 GMT Imagine that one of the most prestigious open source software websites starts showing up in top results for "pornhub downloader". This is actually happening with Flathub, the official web-based app store for Flatpak packages. Here's a demo I made while risking to spoil my relationship with Google: 0:00 /0:20 1× And no, I was not particularly looking for a one-handy utility like this 👼 I was using Ahrefs, a SEO tool used for monitoring web rankings, among other things. This is when I noticed that FlatHub was ranking for terms it should not have been. Flathub ranking for words it wouldn't wantNot just that, out of the top 10 ranked pages, at least 2 of them are NSFW tag pages. Top ranking pages are not something Flathub would be proud ofShady developer piggybacking on Flathub's reputationThis would have been one innovative, fun way to make more people use open source software only the application that uses these tags is not open source software at all. There are actually three of these applications, all of which were created by the same developer, called Warlord Software. I am not going to link out to this website out of spite. Similar kinds of apps, from the same developerIf you visit the Flathub page of these applications, nothing seems extra ordinary, just a regular downloader app for Linux. Seems like a regular downloader app until it is notBut when you scroll down to the tag section, this is where you see the root cause of the problem. All three apps are using those NSFW tags. This is a deliberate act of exploiting the good reputation of Flathub to get more people to dowload these applications before getting them on the paid version. Yes, all these three apps have premium licenses as well. Before you say that this is all no-issue and there is nothing wrong with offering an app for downloading videos from adult websites, let me tell you that you will find no such tags or words mentioned anywhere on the developer's website: No NSFW words on developer's own website where these apps are offeredHere's what's going on...See, it is nearly impossible for a new website or application to rank for popular but highly competitive keywords like 'xyz downloader'. There are numerous websites and tools that let you download online videos from x number (orn XXX number) of websites. So this developer created a few downloader apps that have no special features, offered Flatpak versions for Linux users, published them on Flathub and tagged them with the NSFW keywords. With verified tag, the app looks more legit and tempting to download. It is easy for a highly reputed website like Flathub to rank highly for those terms. This way, a shrewd developer who would have never been able to get even 100 downloads on his own got more than 250,000 downloads. There are tons of good downloader applications for Linux. They can also use these keywords, but we only have apps made by a certain developer doing this. This is pure exploitation of the Flathub ecosystem. Flathub is not to be blamed hereIt's not entirely their fault that someone added NSFW words and used it to sell shady properitary apps. Although they should be more careful about such clear exploitation of their web reputation. Now, it may seem like I am making an issue out of nothing. Perhaps. I actually noticed this a few months ago. I wanted to write about it but then I decided to ignore a 'non-issue'. 
A few months later, Flathub was still ranking for all kinds of this-hub, that-tube, xyz-hamster downloaders, and I could not tolerate it anymore. Lovely folks at Flatpak/Flathub/Fedora, please take note. My rant ends here.
-
Chris’ Corner: Browser Feature Testing
by: Chris Coyier Mon, 10 Nov 2025 18:00:39 +0000 It's interesting to me to think about how, during a lot of the web's evolution, there were many different browser engines (more than there are now) and they mostly just agreed on paper to do the same stuff. We focus on how different things could be cross-browser back then, which is true, but mostly it all worked pretty well. A miracle, really, considering how unbelievably complicated browsers are. Then we got standards and specifications and that was basically the greatest thing that could have happened to the web. So we put on our blue beanies and celebrate that, which also serves as a reminder to protect these standards. Don't let browsers go rogue, people! Then, still later, we actually got tests. In retrospect, yes, obviously, we need tests. These are now web-platform-tests (WPT), and they help all the browser engines make sure they are all doing the right thing. Amazing. (Side note: isn't it obnoxious how many billions of dollars go into newfangled browsers without any of them contributing to or funding actual browser engine work?) I only recently saw browserscore.dev by Lea Verou as well. Yet another tool to keep browsers honest. Frankly, I'm surprised how low all browsers score on those tests. I read in one of Lea's commit messages "We're not WPT, we're going for breadth not depth," which I found interesting. The Browser Score tests run in the browser, and pretty damn fast. I haven't run them myself, but I have a feeling WPT tests take… a while. How can we improve on all this? Well, a gosh-darn excellent way to do it is what the companies that make browsers have already been doing for a number of years: Interop. Interop is a handshake deal from these companies that they are going to get together, pick some great things that need better testing and fixed-up implementations, and then actually do that work. Interop 2025 looks like it went great again. It's that time again now, and these browser companies are asking for ideas for Interop 2026. If you have something that bugs you about how it works cross-browser, now is a great time to say so. Richard has some great ideas that seem like perfect fits for the task. Godspeed, y'all. We can't all be like Keith and just do it ourselves.
-
How Developers Use Proxies to Test Geo Targeted APIs?
by: Neeraj Mishra Mon, 10 Nov 2025 16:40:16 +0000 Creating and updating geo targeted APIs may seem easy, but there are countless challenges involved. Every country, every city, and every mobile network can respond differently and will require distinct adjustments. When pricing endpoints contain location-based compliance features and payment options, testing them requires more than one physical location. Proxies are a crucial part of the developer's toolkit: they let you virtually "stand" in another country and observe what the users there see. Developers run into many problems when testing geo targeted APIs, and proxies are what address them. In this article, we will outline the proxy use case and its benefits, the different proxy types, and potential challenges. We will keep the approach practical, so you can pass it to a QA engineer or a backend developer and they will be able to use it directly.

What Are Geo Targeted APIs and Why Do They Matter?

A geo targeted API is an API that customizes its response according to the client's geographical location. That location is primarily determined by the IP address, sometimes by headers, and in specific situations by account data. Streaming services provide different content to different countries, hotel booking systems adjust prices based on geographical location, ride-hailing apps change currency according to the local clientele, and fintech apps restrict the visible payment services based on local payment regulations.

Why are developers so focused on this? Such APIs need to be consistent, compliant, and predictable, and for good reason. When users in Poland see prices in USD instead of the local PLN, or people in the UK see services that are not legally available to them, the likely result is customer dissatisfaction, failed transactions, or, in the worst case, regulatory trouble. Making sure the geo logic is accurately tested is not optional; for anything that concerns money, content, or the law, it is essential built-in QA.

If a team is based in a single location, every request they make comes from that location. Mocking the API is an option, but that will not tell you what the real upstream service will return, and that's critical information. What's needed is a way to make requests look as if they come from a different geographical location, and that is exactly the role a proxy plays here.

Why Proxies Are the Easiest Way to Test Location-Based Responses

A proxy server acts as an intermediary that conveys your request to the target API and returns the response. The important detail is that the API only sees the proxy's IP address, not yours. If the proxy is in Germany, the API thinks the request is coming from Germany; the same applies to Brazil, where the API will see Brazil. With a good proxy pool, a developer can send the same API request from 10 different countries and check whether the API behaves correctly in each one.

You also don't have to set up test infrastructure in different regions. No cloud instances have to be spun up in various geographies every time you want to test. You don't have to ask colleagues in different countries to take part in "just a quick check." Simply route the request through a different IP address and analyze the results.

Another reason proxies are popular for this task is that they work at the network level. There is no need to alter the API code itself; only the API caller needs to be changed.
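To make the idea concrete, here is a minimal TypeScript sketch of the "same request, different exit IP, compare output" pattern, using axios and its proxy request option. The proxy hosts, the API host, and the expected currencies are placeholder assumptions for illustration, not values from any particular provider, and depending on your environment you may need an agent-based proxy setup instead.

// Minimal sketch: send the same pricing request through different country proxies
// and compare the responses. The proxy hosts, credentials, API host, and the
// expected currencies below are placeholders; substitute your own provider and endpoint.
import axios from "axios";

// Hypothetical mapping of country code -> proxy exit node from your provider.
const proxiesByCountry: Record<string, { host: string; port: number }> = {
  DE: { host: "de.proxy.example.com", port: 8080 },
  GB: { host: "uk.proxy.example.com", port: 8080 },
  US: { host: "us.proxy.example.com", port: 8080 },
};

// What we expect the API to return for each exit country.
const expectedCurrency: Record<string, string> = { DE: "EUR", GB: "GBP", US: "USD" };

async function checkPricingByCountry(): Promise<void> {
  for (const [country, proxy] of Object.entries(proxiesByCountry)) {
    // Same endpoint call every time; only the exit IP changes.
    const response = await axios.get("https://api.example.com/v1/pricing?product=123", {
      // axios' built-in proxy option; some setups prefer an https agent instead
      // (e.g. the https-proxy-agent package), so adjust to your environment.
      proxy: { protocol: "http", host: proxy.host, port: proxy.port },
      headers: { "Accept-Language": "en" }, // keep non-IP signals constant
      timeout: 10_000,
    });

    const currency = response.data?.currency;
    const ok = currency === expectedCurrency[country];
    console.log(`${country}: got currency=${currency}, expected ${expectedCurrency[country]} ${ok ? "OK" : "MISMATCH"}`);
  }
}

checkPricingByCountry().catch((err) => {
  console.error("Geo check failed:", err.message);
  process.exit(1);
});

In a CI pipeline, the same loop becomes a parameterised test: one assertion per country, and the build fails if any exit location gets the wrong currency.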
This enables QA engineers and backend developers to test production-like behavior without changing the production logic.

Typical Workflow: How Developers Actually Use Proxies in Testing

Let's break down a realistic workflow you'd see in a team that regularly tests geo targeted APIs.

1. Define the geo scenarios. First, the team decides which locations they need to test: EU vs US, specific countries like the UK, Canada, Germany, UAE, or mobile-only markets. This list often mirrors the business logic in the API.
2. Choose or rotate proxies for those locations. The tester or developer picks proxy endpoints that match those locations. A good provider will offer a large choice of countries so you don't have gaps in testing.
3. Send the same API request through different proxies. The team sends the same endpoint call – say, /v1/pricing?product=123 – but with the client configured to use different proxy IPs. The API should return different currencies, prices, availability, language, or content depending on the location.
4. Capture and compare responses. Responses are saved and compared either manually or with automated tests. If Germany and France receive the same content but they were supposed to differ, that's a bug.
5. Automate for regression. Once the pattern is confirmed, the team bakes it into CI/CD or scheduled tests. Every time the API is deployed, the test suite calls it from multiple countries via proxies to ensure nothing broke.

That's the core idea: same request, different exit IP, compare output.

Which Types of Proxies Are Best for Geo API Testing?

Not all proxies are equal, and developers learn this quickly once they start hitting real services. Some APIs are strict, some are lenient, and some are downright suspicious of automated traffic. So choosing the right proxy type matters. Here is a simple comparison to help decide:

Datacenter proxies: best for fast functional testing across many countries. Pros: high speed, good for automation, cheaper. Cons: some services detect them as non-residential.
Residential proxies: best for testing real-user conditions and stricter APIs. Pros: high trust, looks like normal user traffic. Cons: slower, often more expensive.
Mobile proxies: best for testing mobile-only features and app endpoints. Pros: seen as mobile users, great for app testing. Cons: most expensive, limited availability.
Rotating proxies: best for large-scale, multi-geo automated testing. Pros: IP freshness, less blocking over many calls. Cons: harder to debug than a single fixed IP.

For most backend teams, datacenter proxies are enough to verify logic: does the API return EUR to a German IP and GBP to a UK IP? For QA teams testing production-like flows, residential or mobile proxies are better, because many modern APIs personalise content or apply security rules based on the perceived "realness" of the IP. If you need a flexible source of geo IPs for dev and QA, using a provider like proxys.io is convenient because you can pick locations on demand and plug them into your scripts without an overcomplicated setup.

Key Things Developers Test with Proxies

Developers don't use proxies for fun; they use them to answer very specific questions about how a geo targeted API behaves. Here are the most common areas they validate:

Currency and localisation (USD vs EUR vs GBP, date formats, language headers)
Regional availability (is this product/service actually shown in this market?)
Compliance-based hiding (is restricted content hidden in specific countries?)
Pricing tiers (do high-income regions get different price ladders?)
Payment gateways (is a certain payment method visible in that country?)
Feature flags tied to geography (e.g. features rolled out in 3 markets only)

By running the exact same call through 5–10 different country proxies, the developer immediately sees whether the business rules are correctly encoded in the API.

One Practical List: Best Practices for Using Proxies in API Testing

Use HTTPS for all proxy traffic to avoid tampering and to mirror real-world usage.
Keep a mapping of "country → proxy endpoint" in your test repo so tests are reproducible.
Log the IP and country used for each test run – it makes debugging much easier.
Don't rely on just one IP per country; some APIs will cache responses per IP.
Add assertions per country in automated tests ("if country=DE, expect currency=EUR").
Rotate or refresh proxies periodically to avoid stale or blocked IPs.
Document test coverage so product owners know which countries are actually being tested.

This is the kind of hygiene that turns proxies from an ad-hoc trick into a stable part of your QA pipeline.

How to Integrate Geo Proxy Testing into Automated Pipelines

A lot of teams start by testing manually with a proxy in Postman, Insomnia, or curl. That's fine for discovery, but not enough for long-term reliability. The real win is when you add multi-geo tests to CI/CD so every deployment checks location-based behaviour automatically. The pattern is straightforward:

Your test suite has a list of target countries.
For each country, the test runner sets the proxy configuration.
The runner calls the API and captures the response.
The test compares the response to the expected shape/content for that country.
If even one country fails (for example, Canada doesn't get CAD), the pipeline fails.

Because proxies work at the network level, the approach is compatible with virtually any language or testing framework, be it JavaScript (Axios, node-fetch), Python (requests), Java (HttpClient), Go (http.Client with a custom transport), or even a cURL-based Bash script. It is just a matter of setting the proxy for each request. This is extremely useful for teams implementing progressive geo-release features. Suppose the marketing team wants to release a feature in the UK and Germany, but not in the US. Your continuous integration system can enforce this rule: if the US suddenly gets the feature, the build fails. That is control.

Common Pitfalls and How to Avoid Them

While proxy-based testing is simple in principle, developers do hit some recurring issues:

1. The API uses more than the IP to detect location. Some APIs also look at Accept-Language, SIM/carrier data (for mobile), or account settings. If you only change the IP, you might not trigger all geographic branches. Solution: mirror headers and user-profile conditions where possible.
2. Caching hides differences. If the upstream service caches by URL only (not by IP), you might get the same response even when changing country. Solution: add cache-busting query params or ensure the API is configured to vary by IP.
3. Using free or low-quality proxies. Unreliable proxies cause false negatives – timeouts, blocked IPs, or wrong countries. For testing business logic, stable and correctly geo-located IPs matter more than saving a dollar.
4. Forgetting about time zones. Some services couple geo logic with local time. If you test only the IP but not the time window, you might think the feature is missing. Document time-based rules separately.
5. Not logging proxy usage. When someone reports "Germany didn't get the right prices", you need to know which IP you used.
Always log the proxy endpoint and country for traceability.

Avoiding these mistakes makes geo testing with proxies extremely reliable.

Why Proxies Beat Manual Remote Testing

You could ask a colleague in Spain to click your link. You could set up cloud instances in 12 regions. You could even travel. But those options are slow, expensive, and not repeatable. Proxies, on the other hand:

Work instantly from your current location
Scale to as many countries as your provider supports
Can be run in CI/CD, not just manually
Are independent of your personal device or IP
Are easy to rotate if one IP is blocked

From an engineering point of view, they're simply the most automatable way to emulate different user geographies.

Conclusion: Proxies Turn Geo Testing into a Repeatable Process

There are geo-targeted APIs everywhere – commerce, content, fintech, mobility, gaming, SaaS. Any product you operate in multiple countries will eventually have to answer the question, "What does this look like for users in X?" Proxies give developers the cleanest way to answer that question programmatically. By sending the same API call through different country IPs, developers can check whether prices, currencies, languages, availability, and compliance rules behave as expected. With a good proxy provider, you can turn this from a one-off debugging technique into a standard check in your testing process. The conclusion is straightforward: if the API logic depends on the user's location, so must the testing. Proxies are the way to achieve this from your desk.

The post How Developers Use Proxies to Test Geo Targeted APIs? appeared first on The Crazy Programmer.
-
22 Linux Books for $25: This Humble Bundle Is Absurdly Good Value
by: Sourav Rudra Mon, 10 Nov 2025 14:59:22 GMT Humble Bundle has a Linux collection (partner link) running right now that's kind of hard to ignore. Twenty-two books covering everything from "how do I even install this" to Kubernetes orchestration and ARM64 reverse engineering. All from Apress and Springer; this means proper technical publishers, not some random self-published stuff.

Humble Tech Book Bundle: Linux for Professionals by Apress/Springer: "Unlock essential resources for Linux. Get a professional edge on the competition with a little help from the experts at Apress & Springer!" (Humble Bundle)

If you decide to go ahead with this bundle, your money will go to support Room to Read, a non-profit that focuses on girls' literacy and education in low-income communities. ⏲️ The last date for the deal is November 24, 2025. 📋 This article contains affiliate links. Please read our affiliate policy for more information.

So, What's in The Bundle?

First off, the "Zero to SysAdmin" trilogy. Using and Administering Linux: Volume 1 covers installation and basic command line usage. Volume 2 goes into file systems, scripting, and system management. Volume 3 focuses on network services like DNS, DHCP, and email servers. The Kubernetes coverage includes three books. Deploy Container Applications Using Kubernetes covers microk8s and AWS EKS implementations. Ansible for Kubernetes by Example shows cluster automation. Kubernetes Recipes provides solutions for common deployment scenarios. Plus Certified Kubernetes Administrator Study Companion if you're prepping for the CKA exam. systemd for Linux SysAdmins explains the init system and service manager used in modern distributions. It covers unit files, service management, and systemd components. For low-level work, there's Assembly Language Reimagined for Intel x64 programming on Linux. Foundations of Linux Debugging, Disassembling, and Reversing covers x64 architecture analysis. Foundations of ARM64 Linux Debugging, Disassembling, and Reversing does the same for ARM64. Linux Containers and Virtualization covers container implementation using Rust. Oracle on Docker explains running Oracle databases in containers. Supercomputers for Linux SysAdmins covers HPC cluster management and hardware. Yocto Project Customization for Linux is for building custom embedded Linux distributions. Pro Bash is a shell scripting reference. Introduction to Ansible Network Automation covers network device automation. The Enterprise Linux Administrator and Linux System Administration for the 2020s both cover current sysadmin practices. Practical Linux DevOps focuses on building development labs. CompTIA Linux+ Certification Companion is exam preparation material. Linux for Small Business Owners covers deploying Linux in small business environments.

What Do You Get for Your Money?

All 22 books are available as eBooks in PDF and ePub formats. They should work on most modern devices, ranging from computers and smartphones to tablets and e-readers. Here's the complete collection.
👇

CompTIA Linux+ Certification Companion
Introduction to Ansible Network Automation
Certified Kubernetes Administrator Study Companion
Pro Bash
Yocto Project Customization for Linux
Linux Containers and Virtualization
Using and Administering Linux: Volume 1
Foundations of ARM64 Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 2
Foundations of Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 3
Deploy Container Applications Using Kubernetes
systemd for Linux SysAdmins
Ansible for Kubernetes by Example
Assembly Language Reimagined
Linux for Small Business Owners
Kubernetes Recipes
Linux System Administration for the 2020s
Oracle on Docker
Practical Linux DevOps
Supercomputers for Linux SysAdmins
The Enterprise Linux Administrator

There are three pricing tiers here:

$1 tier: Two books, Linux System Administration for the 2020s and Practical Linux DevOps. Both focus on current practices. Not bad for a dollar.
$18 tier: Adds three more books covering Kubernetes, Ansible automation, and DevOps stuff. Five books total.
$25 tier: All 22 books. This is where you get the whole bundle.

These books are yours to keep with no DRM restrictions. Head over to Humble Bundle (partner link) to grab the collection before the deal expires. Get The Deal (partner link)
-
Headings: Semantics, Fluidity, and Styling — Oh My!
by: Geoff Graham Mon, 10 Nov 2025 14:44:13 +0000 A few links about headings that I've had stored under my top hat.

"Page headings don't belong in the header" Martin Underhill: A classic conundrum! I've seen the main page heading (<h1>) placed in all kinds of places, such as: The site <header> (wrapping the site title) A <header> nested in the <main> content A dedicated <header> outside the <main> content Aside from that first one — the site title serves a different purpose than the page title — Martin pokes at the other two structures, describing how the implicit semantics impact the usability of assistive tech, like screen readers. A <header> is a wrapper for introductory content that may contain a heading element (in addition to other types of elements). Similarly, a heading might be considered part of the <main> content rather than its own entity. So: <!-- 1️⃣ --> <header> <!-- Header stuff --> <h1>Page heading</h1> </header> <main> <!-- Main page content --> </main> <!-- 2️⃣ --> <main> <header> <!-- Header stuff --> <h1>Page heading</h1> </header> <!-- Main page content --> </main> Like many of the decisions we make in our work, there are implications: If the heading is in a <header> that is outside of the <main> element, it's possible that a user will completely miss the heading if they jump to the main content using a skip link. Or, a screenreader user might miss it when navigating by landmark. Of course, it's possible that there's no harm done if the first user sees the heading prior to skipping, or if the screenreader user is given the page <title> prior to jumping landmarks. But, at worst, the screenreader will announce additional information about reaching the end of the banner (<header> maps to role="banner") before getting to the main content. If the heading is in a <header> that is nested inside the <main> element, the <header> loses its semantics, effectively becoming a generic <div> or <section>, thus introducing confusion as far as where the main page header landmark is when using a screenreader. All of which leads Martin to a third approach, where the heading should be directly in the <main> content, outside of the <header>: <!-- 3️⃣ --> <header> <!-- Header stuff --> </header> <main> <h1>Page heading</h1> <!-- Main page content --> </main> This way: The <header> landmark is preserved (as well as its role). The <h1> is connected to the <main> content. Navigating between the <header> and <main> is predictable and consistent. As Martin notes: "I'm really nit-picking here, but it's important to think about things beyond the visually obvious." Read article

"Fluid Headings" Donnie D'Amato: To recap, we're talking about text that scales with the viewport size. That's usually done with the clamp() function, which sets an "ideal" font size that's locked between a minimum value and a maximum value it can't exceed. .article-heading { font-size: clamp(<min>, <ideal>, <max>); } As Donnie explains, it's common to base the minimum and maximum values on actual font sizing: .article-heading { font-size: clamp(18px, <ideal>, 36px); } …and the middle "ideal" value in viewport units for fluidity between the min and max values: .article-heading { font-size: clamp(18px, 4vw, 36px); } But the issue here, as explained by Maxwell Barvian on Smashing Magazine, is that this muffs up accessibility if the user applies zooming on the page. Maxwell's idea is to use a non-viewport unit for the middle "ideal" value so that the font size scales to the user's settings.
Donnie's idea is to calculate the middle value as the difference between the min and max values and make it relative to the difference between the maximum number of characters per line (something between 40-80 characters) and the smallest viewport size you want to support (likely 320px, which is what we traditionally associate with smaller mobile devices), converted to rem units: .article-heading { --heading-smallest: 2.5rem; --heading-largest: 5rem; --m: calc( (var(--heading-largest) - var(--heading-smallest)) / (30 - 20) /* 30rem - 20rem */ ); font-size: clamp( var(--heading-smallest), var(--m) * 100vw, var(--heading-largest) ); } I couldn't get this working. It did work when I swapped in unit-less values in place of the rem ones, but only in Chrome and Safari. Firefox must not like dividing units by other units… which makes sense because that matches what's in the spec. Anyway, here's how that looks when it works, at least in Chrome and Safari. CodePen Embed Fallback Read article

Style :headings

Speaking of Firefox, here's something that recently landed in Nightly, but nowhere else just yet. Alvaro Montoro: :heading: Selects all <h*> elements. :heading(): Same deal, but can select certain headings instead of all. I scratched my head wondering why we'd need either of these. Alvaro says right in the intro they select headings in a cleaner, more flexible way. So, sure, this: :heading { } …is much cleaner than this: h1, h2, h3, h4, h5, h6 { } Just as: :heading(2, 3) {} …is a little cleaner (but no shorter) than this: h2, h3 { } But Alvaro clarifies further, noting that both of these are scoped tightly to heading elements, ignoring any other element that might be heading-like using HTML attributes and ARIA. Very good context that's worth reading in full. Read article

Headings: Semantics, Fluidity, and Styling — Oh My! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
ODF 1.4 Release Marks 20 Years of OpenDocument Format
by: Sourav Rudra Mon, 10 Nov 2025 12:35:00 GMT Microsoft's proprietary formats like .doc and .docx dominate the office productivity landscape. Most people and organizations rely on these formats for daily document work. This creates a predatory situation where vendor lock-in is the norm and compatibility issues are taken as a sign that moving away from Microsoft Office is a bad idea. OpenDocument Format (ODF) offers an open alternative. It is an ISO-standard, XML-based format for text documents, spreadsheets, presentations, and graphics. ODF works across multiple office suites, including LibreOffice, Collabora Online, and Microsoft Office itself. The format operates under the OASIS Open umbrella, a nonprofit consortium that develops open standards and open source projects. It brings together individuals, organizations, and governments to solve technical challenges through collaboration. Coming after four years of development work, OASIS Open has introduced ODF 1.4, marking a major milestone during ODF's 20th anniversary as an OASIS Standard.

ODF 1.4 Packs in Many Upgrades

The development involved contributions from multiple organizations. Engineers from Collabora, The Document Foundation, IBM, Nokia, Microsoft, and KDE participated. Community members from the LibreOffice project also made significant contributions. As for the major improvements of this release, tables can now be placed inside shapes, breaking free from the textbox-only limitation. This bridges a compatibility gap with Microsoft's OOXML and other file formats, making cross-format workflows smoother. Accessibility gets meaningful upgrades through decorative object marking. Images and shapes can be flagged as decorative, instructing screen readers to skip them. This eliminates clutter for assistive technology users navigating complex documents. A new overlap prevention property helps manage document layout. Anchored objects can now specify whether they need to avoid sitting on top of other elements. This gives users finer control over how images and shapes interact on a page. Text direction support improves with 90-degree counter-clockwise rotation. Content can now flow left to right, then top to bottom, in this rotated orientation. The addition complements the existing clockwise direction commonly used for Japanese text layouts.

Michael Stahl, Senior Software Engineer at Collabora Productivity, explained the development approach: "Over the last four years, since ODF 1.3 was approved, engineers from Collabora Productivity and LibreOffice community members have worked with the Technical Committee to standardise many important interoperability features. The feature freeze for ODF 1.4 was over two years ago, so while the list of changes is extensive, the focus here is not on 'new' features that contemporary office suite users haven't seen before, but improvements to bring ODF more in-line with current expectations."

For a Closer Look

The complete ODF 1.4 specification is available on the OASIS Open documentation website. The specification consists of four numbered documents covering different aspects of the standard. Part 1 provides the introduction and master table of contents. Part 2 defines the package format language. Part 3 contains the XML schema definitions. Part 4 specifies the formula language for spreadsheet calculations.
ODF 1.4

Suggested Read 📖 Ownership of Digital Content Is an Illusion—Unless You Self‑Host: Prices are rising across Netflix, Spotify, and their peers, and more people are quietly returning to the oldest playbook of the internet: piracy. Is the golden age of streaming over? (It's FOSS, Theena Kumaragurunathan)
-
Command Your Calendar: Inside the Minimalist Linux Productivity Tool Calcurse
by: Roland Taylor Mon, 10 Nov 2025 05:30:39 GMT If you love working in the terminal or just want something fast and lightweight for calendar management, Calcurse gives you a full organiser you can use right in your shell. As its name suggests, Calcurse uses ncurses to deliver a complex command-line interface that rivals some GUI apps in features and efficiency. If you don't need automated reminders and/or the overhead of a database, it's great for keeping track of your appointments and to-do lists. Being lightweight, it works well in server environments over SSH, and is a great candidate for those using low-powered devices.

Understanding Calcurse at a glance

The standard Calcurse interface in action

Calcurse is written in C, and boasts robust support for scripting and helper tools. It supports many of the features you'd expect in a GUI calendar app, including iCalendar (.ics) import/export, as well as some you may never have thought of. It should bring back some nostalgia if you were around during the early days of computing (DOS, early Unix, etc.), when text-based user interface (TUI) apps were predominant and complex, keyboard-driven interfaces were actually the norm.

📋 I can't cover everything about Calcurse here, since it's got way too many features for a single article. If you're interested in trying it out, check out the documentation.

Calcurse operates in three main forms:

An interactive ncurses interface: the standard Calcurse interface that you get by running the calcurse command with no arguments or flags.
A non-interactive mode: prints output according to the parameters you pass (flags like --status) and exits.
A background daemon: must first be enabled from the ncurses interface or run with --daemon; it can be stopped by starting the interactive interface or by using pkill calcurse.

Most actions are a single keystroke away, with on-screen prompts and a simple help/config menu when you need it. Once the shortcuts click, navigation is quick and predictable. Where most calendar apps store your data in a database, Calcurse uses plain text files on the backend. This choice keeps it snappy, easy to back up, and instantly responsive to your changes. At this time, Calcurse can only show one calendar per instance, so if you'd like to have multiple calendars, you'll need to run different instances, each connected to a different calendar file (with -c) and data directory (with -D).

Notifications and sync? Check!

Calcurse supports notifications within its ncurses UI or by running a custom command (such as your system's mailer or your desktop environment's own notification system). By default, Calcurse does not run as a daemon (background process), so as long as you're not actively running it, it uses no additional system resources. However, being as versatile as it is, you can enable daemon mode so Calcurse can deliver notifications even after you quit the UI. Launching the UI typically stops the daemon to avoid conflicting instances, unless you use the --status flag. To avoid this, you can run Calcurse as a separate instance or query it using the appropriate flags without bringing up the UI. If you'd prefer a more hands-on approach, you can set up cron jobs and scripting to interact with the non-interactive mode for the same purposes. iCalendar import/export is built into the native app itself and can be invoked with "i" (for import) or "x" (for export). CalDAV sync is also supported, but requires a third-party helper (calcurse-caldav).
It's still considered alpha-quality software, and it does require its own database, so syncing between Calcurse instances may be a little trickier here.

Going deeper on syncing

Perhaps one of the coolest parts of using a tool like Calcurse is that, since everything is kept in plain text, you can use version control for just about everything: from configurations to schedules. If you have a certain schedule you'd like to sync between your devices, you'd just need to store your ~/.config/calcurse and ~/.local/share/calcurse folders in a Git repo or on your personal Nextcloud server, for instance. You could have the actual folder stored in your sync location and have Calcurse read from a symlink. This way, you could manually edit your configuration from anywhere, and have it automatically sync to every device where you use Calcurse. Pretty handy for power users with many devices to manage.

Customisation and quality-of-life

Customizing the colour theme in Calcurse is easy

With how many advanced features Calcurse offers, you may not be too surprised to learn that it supports a degree of customisation (in interactive mode), accessible through the config menu. You can change the colours and layout, or choose the first day of the week. You can also enable various quality-of-life features, like autosave and confirmations. If you don't like the standard key bindings, you can set your own, which is quite handy for those who may have certain preferences. For example, you can bind a custom key for jumping between views. If you're running Calcurse in a terminal emulator under Wayland, this is especially useful: you won't need to worry about running into conflicts over hotkeys in your desktop environment.

Changing views

Calcurse with the calendar in week view

If you'd like to change how the calendar is displayed, you can change the appearance.calendarview option in the config between monthly and weekly. In weekly view, the number of the week is shown in the top-right corner of the calendar. There's no way to enable this in the monthly view; it shows the day of the year instead.

Creating an appointment with the calendar in month view

If you'd like to show notifications in Calcurse itself, you can toggle the notification bar with the appearance.notifybar option. I didn't test notifications in this way, as I'd prefer to set up system integration.

Where Calcurse might not be for you

Of course, as powerful as it is, Calcurse does have some quirks and shortcomings that may be an issue for some users. For instance, it does not support any fancy views or month-grid editing like many GUI calendar tools. To be fair, the default interface is simple enough to be comfortable to use once you get used to it, but if you need these additional features, you're out of luck. One other quirk is that the 12-hour time format is not globally supported throughout the app. The interactive list uses the hh:mm format, whereas the notification bar and CLI output can be switched to the 12-hour format. The rest of the app displays its time in the 24-hour format. Changing the format where you are allowed to isn't trivial, so be prepared to consult the documentation for this one. The format quirks also show up in how you choose certain display units for dates. Unless you're well versed in these, you might find yourself consulting the documentation often. This could be off-putting for some users, even terminal lovers who prefer the TUI over everything else.
It's also inconsistent in this way, since format.inputdate uses simple numbers in the config, whereas format.dayheading uses the less familiar "%-letter" format. Overall, even if you like working on the command line, the learning curve for Calcurse can be a little steep. That said, once you get acclimated, the key-driven TUI is actually comfortable to work with, and the wide range of features would make it a great tool for those who like to build custom solutions on top of headless apps.

Getting Calcurse on your distro

Calcurse is packaged for many distros, including Debian/Ubuntu, Arch, Fedora, and others, as well as their derivatives, of course. You can search for calcurse in your software manager (if it supports native packages) or use your standard installation commands to install it:

Debian/Ubuntu/Mint: sudo apt install calcurse
Fedora: sudo dnf install calcurse
Arch: sudo pacman -S calcurse

However, if you're looking to build from source, you can grab up-to-date source releases from the Calcurse downloads page, or pull the latest code from the project's GitHub page.

📋 Calcurse does not track releases on its GitHub page. If you pull from Git, you're essentially pulling the development branch.

Conclusion

Calcurse is a rare gem: a powerful, straightforward TUI calendar management app with support for iCal import/export, CalDAV sync, and scriptable reports. If you live in the shell, manage servers over SSH, or want plain-text data you can version, it's a reasonable solution. Sure, there are real trade-offs: no month-grid editing, a slight learning curve, and 12-hour time relegated to the notification bar and output. For terminal-first users, it is an easy recommendation.
-
7 Privacy Wins You Can Get This Weekend (Linux-First)
by: Theena Kumaragurunathan Sun, 09 Nov 2025 03:44:40 GMT Privacy is a practice. I treat it like tidying my room. A little attention every weekend keeps the mess from becoming a monster. Here are seven wins you can stack in a day or two, all with free and open source tools.

1. Harden your browser

Firefox is still the easiest place to start. Install uBlock Origin, turn on strict tracking protection, and only whitelist what you truly need. Add NoScript if you want to control which sites can run scripts.

Why it matters: Most tracking starts in the browser. Blocking it reduces profiling and drive-by nasties.
How to do it: In Firefox settings, set Enhanced Tracking Protection to Strict. Install uBlock Origin. If you're comfortable, install NoScript and allow scripts only on trusted sites.
Trade-off: Some pages break until you tweak permissions. You'll learn quickly which sites respect you.

2. Search without surveillance

Shift your default search to privacy-respecting frontends and engines. SearXNG is a self-hostable metasearch engine. Startpage is an option if you want something similar to Google, although the excessive ads on its search page are a turn-off.

Why it matters: Your searches reveal intent and identity. Reducing data capture lowers your footprint.
How to do it: Set your browser's default search to DuckDuckGo, Startpage, or a trusted SearXNG instance. Consider hosting SearXNG later if you enjoy tinkering.
Trade-off: Results can feel slightly different from Google. For most queries, they're more than enough.

📋 The article contains some partnered affiliate links. Please read our affiliate policy.

3. Block ads and trackers on your network

A Pi-hole or AdGuard Home (partner link) box filters ads for every device behind your router. It's set-and-forget once configured. AdGuard is not open source but a trusted mainstream service.

Why it matters: Network-level filtering catches junk your browser misses and protects smart TVs and phones.
How to do it: Install Pi-hole or AdGuard Home on a Raspberry Pi or a spare machine. Point your router's DNS to the box.
Trade-off: Some services rely on ad domains and may break. You can whitelist specific domains when needed.

4. Private DNS and a lightweight VPN

Encrypt DNS with DNS-over-HTTPS and use WireGuard for a fast, modern VPN. Even if you only use it on public Wi-Fi, it's worth it.

Why it matters: DNS queries can expose your browsing. A VPN adds another layer of transport privacy.
How to do it: In Firefox, turn on DNS-over-HTTPS. Set up WireGuard with a reputable provider or self-host if you have a server.
Trade-off: A tiny speed hit. Misconfiguration can block certain services. Keep a fallback profile handy.

5. Secure messaging that respects you

Signal is my default for personal chats. It's simple, secure, and widely adopted. The desktop app keeps conversations synced without drama.

Why it matters: End-to-end encryption protects content even if servers are compromised.
How to do it: Install Signal on your phone, then link the desktop app. Encourage your inner circle to join.
Trade-off: Not everyone will switch. That's fine. Use it where you can.

6. Passwords and 2FA, properly

Store strong, unique passwords in KeePassXC and use time-based one-time codes. You'll never reuse a weak password again. Use ProtonPass if you want a more mainstream option.

Why it matters: Credential stuffing is rampant. Unique passwords and 2FA stop it cold.
How to do it: Create a KeePassXC vault, generate 20-plus character passwords, and enable TOTP for accounts that support it.
Back up the vault securely.
Trade-off: A small setup hurdle. After a week, it becomes second nature.

Top 6 Best Password Managers for Linux [2024]: Linux Password Managers to the rescue! (It's FOSS, Ankush Das)

7. Email with privacy in mind

Use ProtonMail for personal email. Add aliasing to keep your main address clean. For newsletters, pipe them into an RSS reader so your inbox isn't a tracking playground.

Why it matters: Email carries identity. Aliases cut spam, and RSS limits pixel tracking.
How to do it: Create a Proton account. Use aliases for sign-ups. Subscribe to newsletters via RSS feeds if available, or use a privacy-friendly digest service.
Trade-off: Some newsletters force email only. Accept a separate alias or unsubscribe.

Good, Better, Best

Browser
Good: Firefox with uBlock Origin. Better: Add NoScript and tweak site permissions. Best: Harden about:config and use containers for logins.

Search
Good: Startpage as default. Better: Use a trusted SearXNG instance. Best: Self-host SearXNG and monitor queries.

Network filtering
Good: Pi-hole or AdGuard Home on a spare device. Better: Add curated blocklists and per-client rules. Best: Run on a reliable server with automatic updates and logging.

DNS and VPN
Good: Browser DNS-over-HTTPS. Better: System-wide DoH or DoT. Best: WireGuard with your own server or a vetted provider.

Messaging
Good: Signal for core contacts. Better: Encourage groups to adopt. Best: Use disappearing messages and safety numbers.

Passwords and 2FA
Good: KeePassXC vault and TOTP for key accounts. Better: Unique passwords everywhere and hardware-encrypted backups. Best: Hardware tokens where supported plus KeePassXC.

Email
Good: Proton for personal mail. Better: Aliases per service. Best: RSS for newsletters and strict filtering rules.

Time to implement

Quick wins: Browser hardening, search swap, Signal setup. About 60 to 90 minutes.
Medium: KeePassXC vault, initial 2FA rollout. About 90 minutes.
Weekend projects: Pi-hole or AdGuard Home, WireGuard. About 3 to 5 hours depending on your comfort.

Conclusion

Start with what you control. The browser, your passwords, your default search. Privacy is cumulative. One small change today makes the next change easier tomorrow. If you keep going, the internet feels calmer, like you finally opened a window in a stuffy room.
-
Pentora Box: Pen-Test Practice Labs
by: Abhishek Prakash Sat, 08 Nov 2025 17:52:50 +0530 Learn by doing, not just reading or watching. Pen-testing can't be mastered by watching videos or reading blogs alone. You need to get your hands dirty. Pentora Box turns each Linux Handbook tutorial into a self-try exercise. Every lab gives you a realistic, safe environment where you can explore reconnaissance, scanning, exploitation, and post-exploitation, step by step.

How to use it?

Curious how you can get started with ethical hacking and pen-testing for free with these hands-on labs? It's easy. Here's what you need to do:

Step 1: Pick a lab to practice

Choose from a curated list of hands-on pen-testing exercises, from OSINT to exploitation. The labs are not in a particular order, but it would be good practice to follow:

🧭 Reconnaissance Track: Scout the target for attack surface and vulnerabilities.
⚔️ Exploitation Track: Simulate attacks after finding vulnerabilities.
🛡️ Defense Track: Monitor your system and network and harden up your defenses.

Step 2: Set up locally

Each lab includes setup instructions. It's good to use Kali Linux, as it often includes the required tools. You can also use Debian or Ubuntu based distributions, as the package installation commands will work the same. Sure, you can try it on any Linux distro as long as you manage to install the packages. Labs are safe to perform, as they are run against VulnHub, a platform dedicated to pen-testing exercises.

Step 3: Execute and learn

Run commands, observe output, fix errors, and build muscle memory, the hacker way. The tutorials explain the output so that you can understand what's going on and what you should be focusing on after running the commands.

💡 Each lab is designed for localhost or authorized test targets. No external attacks. Always hack responsibly.

Before you start: Setting up your practice environment

You don't need a dedicated server or paid sandbox to begin. All labs can be practiced on your Linux system or a virtual machine. Recommended setup:

🐧 Kali Linux/ParrotOS/Debian/Ubuntu + tools
🐳 Docker (for local vulnerable targets)
⚙️ VS Code or a terminal-based editor
🔒 Good ethics: always test in legal environments

🚧 These labs are designed for educational use on local or authorized environments only. Never attempt to exploit real systems without permission. Always respect the principles of responsible disclosure and digital ethics.

Stay in touch for future labs

New labs are added regularly. Subscribe to get notified when a new tool, challenge, or lab goes live. You can also share your results or request new topics in our community forum or newsletter.
-
LHB Linux Digest #25.34: CNCF Project Hands-on, Better split Command, Local AWS Cloud Stack and More
by: Abhishek Prakash Fri, 07 Nov 2025 18:12:51 +0530 After publishing Linux Networking at Scale, and while we work on the new course, I am proud to present a super long but immensely helpful hands-on guide that shows you the steps from creating an open source project to submitting it to CNCF. The guide is accessible to members of all levels.

Building and Publishing an Open Source Project to CNCF: A hands-on guide to creating, documenting, and submitting an open source project to the CNCF Landscape. (Linux Handbook, Sachin H R)

Sachin, author of our Kubernetes Operators course, faced a lack of organized documentation when he worked on his project, KubeReport. He shared his personal notes in the form of a guide with some sample code. Please note that this is more suitable for Kubernetes and Cloud Native projects.

Here's why you should get LHB Pro membership:

✅ Get access to Linux for DevOps, Docker, Ansible, Systemd and other text courses
✅ Get access to the Kubernetes Operator and SSH video courses
✅ Get 6 premium books on Bash, Linux and Ansible for free
✅ No ads on the website

Get Pro Membership

This post is for subscribers only.
-
A to Z Hands-on Guide to Building and Publishing an Open Source Project to CNCF
by: Sachin H R Fri, 07 Nov 2025 17:49:50 +0530 The idea for a practical guide to building an open source project and publishing it to CNCF came to me when I was working on KubeReport, an open source tool that automatically generates PDF/CSV deployment reports from your Kubernetes cluster. It is designed for DevOps teams, QA, and managers. It can auto-email reports, integrate with Jira, and track exactly what got deployed and when. I noticed that there was not enough clear documentation on how to create a project that adheres to CNCF standards. And thus I created this guide from the experience I gained with KubeReport.

💡 I have created a small project, KubePRC (Pod Restart Counter), for you to practice hands-on Kubernetes concepts before building your own open source products in any programming language. I presume that you are familiar with some sort of coding and GitHub. Explaining those things is out of scope for this guide.

Step 0: Ask yourself first: Why are you building the project?

Before you start building anything, be clear about why you are doing it. Think about these three points:

market gap
market trend
long-term vision

Let me take the example of my existing project KubeReport again.

🌉 Market Gap - Automatic reports post deployment

In fast-moving environments with 40–50 clients, deployments happen every day. After deployment, we often rely on manual smoke tests before involving QA. But issues like missed service failures, no tracking of deployment counts, and no visibility into the images or teams involved often surface only after clients report problems. This is not just a company-specific issue. These gaps are common across the DevOps world. KubeReport fills that gap. It provides a centralized, auditable report after every deployment — in the form of downloadable PDFs or CSVs — sent automatically to managers, clients, Jira tickets and email groups.

📈 Market Trend – Rising demand for AI-driven automation

As DevOps matures, there's an increasing demand for:

Lightweight, CLI-based tools integrated into pipelines
Immediate post-deployment health visibility
Intelligent automation and alerting systems

🤖 Future Scope – AI-powered task automation

In the long term, the goal is to reduce manual intervention by integrating AI to:

Detect anomalies in restart counts based on historical deployment trends
Automatically classify failures (e.g., infra-related vs. app-related)
Generate intelligent deployment health reports
Recommend or trigger self-healing actions (like auto-restart, scaling, or rollback)

These enhancements will empower teams to act faster with minimal manual input — reducing human error and increasing confidence in every release. This is how I outlined it before creating the KubeReport tool. You get the gist. You should build a tool that not only solves real problems but also has scope for future improvements.

🔍 Step 1: Check if your idea already exists

Before building KubeReport, we asked: Is there already something like this out there? If an idea already exists — for example, a MySQL Operator — you have three options:

Don't build it
Build a better version
Solve it for a different target (e.g., MongoDB or Postgres)

In our case, there was no specific open source tool that automated Kubernetes deployment reports like this — so we started building.

💻 Step 2: Language & tech stack selection

Kubernetes is written in Go, which makes it a strong choice for any native integration.
Our goals were:

Fast performance
Access to client libraries
Ease of deployment

So we used:

Go for core logic
Kubernetes APIs to fetch pod/deployment data
Go PDF/CSV libraries for report generation

📋 You can adapt your stack based on your needs. Choose what offers good performance, community support, and personal comfort.

🧩 Step 3: Design the Architecture

If your project has multiple components (like frontend, backend, APIs, and DB), architecture diagrams can be very useful. I recommend:

Miro or Whimsical for quick architecture and flow diagrams
Weekly planning: what to build this week, next week, and this month

Breaking down work into phases keeps the project manageable.

🛠️ Step 4: Start Small – "Hello World" First

Always begin with a small, functional unit. For example, the first version of KubeReport:

Listed pods from Kubernetes
Generated a simple PDF

That was it. Later, we added:

CSV format
Deployment filters
Auto-email feature
Cloud storage

✅ One step at a time. Build a small working thing, then grow. Let's see all this with a sample project. Feel free to replicate the steps.

Building kubeprc (Kube Pod Restart Counter)

Let's take kubeprc as an example project for hands-on practice. kubeprc (short for Kube Pod Restart Counter) is a lightweight open source tool that scans your Kubernetes cluster and reports on pod restart counts. It's ideal for DevOps and SRE teams who need to monitor crash loops and container stability — either in real time or as part of automated checks.

Why are we building this?

As part of early discovery (Step 1 & Step 2), we identified a clear market gap. There was no simple, focused tool that:

Counted pod restarts across a cluster
Worked both locally and in-cluster
Could be deployed cleanly via Helm
Was lightweight and customizable

While 2–3 similar tools exist, they either lack flexibility or are too heavy. Our aim is to build:

A focused tool with a clean CLI
Extra features based on real-world DevOps use cases
A Helm chart for seamless integration in CI/CD pipelines or monitoring stacks

Feature Planning

I used Miro to lay out the feature plan. You can use any project management tool of your choice.

Tech Stack

Kubernetes is written in Go, so client libraries and API access are very well supported. Go offers great performance, concurrency, and portability.

Go: core CLI logic and Kubernetes client
Docker: containerization for portability
Helm: Kubernetes deployment automation
Minikube / Cloud (Azure/GCP/AWS): local and cloud testing environments

This post is for subscribers only.
-
Explaining the Accessible Benefits of Using Semantic HTML Elements
by: Geoff Graham Thu, 06 Nov 2025 15:57:49 +0000 Here's something you'll spot in the wild: <div class="btn" role="button">Custom Button</div> This is one of those code smells that makes me stop in my tracks because we know there's a semantic <button> element that we can use instead. There's a whole other thing about conflating anchors (e.g., <a class="btn">) and buttons, but that's not exactly what we're talking about here, and we have a great guide on it. A semantic <button> element makes a lot more sense than reaching for a <div> because, well, semantics. At least that's what the code smell triggers for me. I can generically name some of the semantic benefits we get from a <button> off the top of my head:

Interactive states
Focus indicators
Keyboard support

But I find myself unable to explicitly define those benefits. They're more like talking points I've retained than clear arguments for using <button> over <div>. But as I've made my way through Sara Soueidan's Practical Accessibility course, I'm getting a much clearer picture of why <button> is a best practice. Let's compare the two approaches: CodePen Embed Fallback Did you know that you can inspect the semantics of these directly in DevTools? I'm ashamed to admit that I didn't before watching Sara's course. There's clearly a difference between the two "buttons" and it's more than visual. Notice a few things: The <button> gets exposed as a button role while the <div> is a generic role. We already knew that. The <button> gets an accessible label that's equal to its content. The <button> is focusable and gets a click listener right out of the box. I'm not sure exactly why someone would reach for a <div> over a <button>. But if I had to wager a guess, it's probably because styling <button> is tougher than styling a <div>. You've got to reset all those user agent styles, which feels like an extra step in the process when a <div> comes with no styling opinions whatsoever, save for it being a block-level element as far as document flow goes. I don't get that reasoning when all it takes to reset a button's styles is a CSS one-liner: CodePen Embed Fallback From here, we can use the exact same class to get the exact same appearance: CodePen Embed Fallback What seems like more work is the effort it takes to re-create the same built-in benefits we get from a semantic <button> specifically for a <div>. Sara's course has given me the exact language to put words to the code smells:

The div does not have Tab focus by default. It is not recognized by the browser as an interactive element, even after giving it a button role. The role does not add behavior, only how it is presented to screen readers. We need to give it a tabindex.
But even then, we can't operate the button on Space or Return. We need to add that interactive behavior as well, likely using a JavaScript listener for a button press to fire a function.
Did you know that the Space and Return keys do different things? Adrian Roselli explains it nicely, and it was a big TIL moment for me. Probably need different listeners to account for both interactions.
And, of course, we need to account for a disabled state. All it takes is a single HTML attribute on a <button>, but a <div> probably needs yet another function that looks for some sort of data-attribute and then sets disabled on it.

Oh, but hey, we can slap <div role=button> on there, right? It's super tempting to go there, but all that does is expose the <div> as a button to assistive technology.
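To put that checklist in concrete terms, here is a rough TypeScript sketch of the wiring a <div class="btn"> needs before it even approximates the native element. The fakeButton helper, the data-disabled attribute, and the handleActivate callback are hypothetical names for illustration; none of this comes from Sara's course or the CodePen demos above.

// A rough sketch of manually re-creating button behavior on a <div class="btn">.
// Everything here (the .btn selector, fakeButton, data-disabled, handleActivate)
// is a hypothetical illustration of the checklist above, not a drop-in utility.
function fakeButton(el: HTMLElement, handleActivate: () => void): void {
  el.setAttribute("role", "button");   // exposes it to assistive tech...
  el.setAttribute("tabindex", "0");    // ...and puts it in the Tab order

  const isDisabled = () => el.dataset.disabled === "true";

  // Pointer activation
  el.addEventListener("click", () => {
    if (!isDisabled()) handleActivate();
  });

  // Return activates on keydown, like a native button
  el.addEventListener("keydown", (event: KeyboardEvent) => {
    if (isDisabled()) return;
    if (event.key === "Enter") handleActivate();
    // Stop Space from scrolling the page while it is "pressed"
    if (event.key === " ") event.preventDefault();
  });

  // Space activates on keyup, which is the other half of the Space/Return difference
  el.addEventListener("keyup", (event: KeyboardEvent) => {
    if (!isDisabled() && event.key === " ") handleActivate();
  });
}

// Usage: all of this stands in for what <button> and its disabled attribute give us for free.
const div = document.querySelector<HTMLElement>("div.btn");
if (div) fakeButton(div, () => console.log("Activated"));

And even that sketch leaves out things the native element handles on its own, like form participation and announcing the disabled state to assistive tech.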
A <div role="button"> is announced as a button, but the role alone does nothing to recreate the interactions needed for the complete user experience a <button> provides. And no amount of styling will fix those semantics, either. We can make a <div> look like a button, but it's not one despite appearances. Anyway, that's all I wanted to share. Using semantic elements where possible is one of those "best practice" statements we pick up along the way. I teach it to my students, but am guilty of relying on the high-level "it helps accessibility" reasoning that is just as generic as a <div>. Now I have specific talking points for explaining why that's the case, as well as a "new-to-me" weapon in my DevTools arsenal to inspect and confirm those points. Thanks, Sara! This is merely the tip of the iceberg as far as what I'm learning (and will continue to learn) from the course. Explaining the Accessible Benefits of Using Semantic HTML Elements originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.