
All Activity


  1. Today
  2. by: Neville Ondara Sun, 16 Nov 2025 00:47:33 GMT If you're like me, you probably grew up with the classic Linux command-line tools such as ls, cat, and du. These commands have carried me through countless scripts and late-night debugging sessions. Here's the thing: while these tools do their job, they can look plain and be clumsy for certain tasks. Take the du command, for example. It shows disk usage on the system, but run it without any options and the output is a mess. Terminals today support color, Unicode icons, and live previews, all things our old favorites weren't designed for. And the Rust revolution has quietly reshaped the command-line landscape. The result is a wave of Rust-based CLI tools that don't just replicate the traditional ones; they modernize them. They're fast, (claim to be) memory-safe, polished, and often come with thoughtful UX touches that make daily terminal work noticeably smoother. I've been tinkering with these tools lately, and I thought it'd be fun to share a list of my favorites.

🚧 If you are a sysadmin managing servers, you should not rely on these alternatives. You might not get these fancy new tools on every system, and installing them on every Linux server you log in to is not feasible. The alternative tools are good when you are using a personal computer and have full control over the environment.

exa: Alternative to ls
If there's one tool that convinced me Rust CLI apps were worth exploring, it's exa. It feels familiar but adds what the original ls has always lacked: sensible colors, icons, and Git awareness. Highlights:
Beautiful color themes
Git integration
Optional tree view
Clearer permissions formatting
Installation: cargo install exa
Usage: exa -al --git
You can instantly see which files are new, which are modified, and which are pure chaos.

bat: Alternative to cat
cat is great for quick checks, but reading config files or code in raw plain text gets tedious. bat fixes that with syntax highlighting, Git integration, line numbers, and automatic paging, without losing cat compatibility.
Installation: cargo install bat
Example usage: bat ~/.bashrc
It's basically cat with a glow-up ✨. When I first used it, I found myself opening random config files just to admire the colors.

dust: Alternative to du
du always dumps a mountain of numbers on your screen. dust turns that into a compact, visual representation of disk usage that you can parse at a glance. It's instantly more readable than the old command. The output is clean, easy to parse, and shows relative sizes visually. I swear my hard drive has never looked this friendly. 😎
Install dust: cargo install du-dust
Usage: dust

fd: Alternative to find
Remember spending 10 minutes crafting the perfect find command? Yeah… me too. fd makes this easier. It has simple syntax, ignores hidden files by default, and it is super fast.
Install fd: cargo install fd-find
Examples: fd main.rs
fd fossnews
Its speed and simplicity make find feel outdated. After switching, you'll rarely look back.

ripgrep (rg): Alternative to grep
Rust-based ripgrep has become a must-have for developers. It's dramatically faster and gives clear, highlighted search results.
Install ripgrep: cargo install ripgrep
Example usage: rg TODO src/
It respects your .gitignore and outputs results with color highlighting. I use it every day for searching TODOs and bug reports.

duf: Alternative to df
df is useful, but let's be honest: the output looks like something printed from a 90s dot-matrix printer 😆. duf fixes that.
It takes the same disk-usage information and turns it into a clean, colorful, structured table you can actually understand at a glance. duf gives you a clean dashboard with grouped filesystems, readable sizes, clear partition labels, and a quick view of what's healthy vs. what's nearly full.
Installation: sudo apt install duf
Usage: duf

procs: Alternative to ps
While ps aux works, it can feel visually overwhelming. procs gives you a more structured, color-coded view of your system processes, letting you quickly see what's running without launching a full TUI tool like htop. It's like a personal dashboard for your processes. I use it every day to keep tabs on what's running without feeling buried in a wall of text.
Installation: cargo install procs
Usage: procs

tldr: Alternative to man
tldr makes navigating manual pages painless by offering clear examples, highlighting essential flags, and keeping things short (no scrolling forever).
Installation: cargo install tldr
Usage: tldr tar
Honestly, I wish this existed when I was learning Linux; it's a lifesaver for newbies and veterans alike.

broot: Alternative to tree
If you've ever used tree, you know it can quickly become overwhelming in large directories. broot upgrades the concept: it lets you navigate directories interactively, collapse or expand folders on the fly, and search as you go.
Installation: cargo install broot
Usage: broot
I've ditched my old ls -R habit entirely. broot makes exploring directories feel interactive and satisfying, turning a messy filesystem into something you can actually enjoy navigating.

zoxide: Alternative to cd
How many times have you typed cd ../../../../some/long/path? Too many, right? z (or zoxide) solves that by tracking your most visited directories and letting you jump to them with a single command, saving your fingers and making navigation effortless.
Installation: cargo install zoxide
You also need to initialize it in your shell:
# Bash
eval "$(zoxide init bash)"
# Zsh
eval "$(zoxide init zsh)"
# Fish
zoxide init fish | source
Usage: z code
It keeps track of your frequently used directories and lets you jump to them instantly.

lsd: Alternative to ls
If you're tired of the plain, monochrome output of ls, lsd is here to make your directory listings not just readable, but enjoyable. With built-in icons and vibrant colors, it instantly helps you distinguish between files, directories, and executables at a glance.
Installation: cargo install lsd
You can run it just like a normal ls command: lsd -la
lsd organizes information clearly and highlights key file attributes, making navigation faster and more intuitive.

bottom: Alternative to top
The classic top command shows system usage, but let's face it, it can feel like you're looking at a terminal snapshot from 1995 😆. bottom (or btm) brings a modern, clean, and highly visual experience to monitoring your system. It provides:
Color-coded CPU, memory, and disk usage
Real-time graphs directly in the terminal
An organized layout that's easy to read and navigate
Installation: cargo install bottom
You can launch it simply with: btm
Once you start using bottom, it's hard to go back. Watching CPU spikes, memory usage, and disk activity while compiling Rust projects feels strangely satisfying. It's both functional and fun, giving you the insights you need without the clutter of older tools.

hyperfine: Alternative to time and other benchmarking commands
Ever wondered which of your commands is truly the fastest? Stop guessing and start measuring with hyperfine.
This Rust-based benchmarking tool makes it effortless to compare commands side by side. hyperfine runs each command multiple times, calculates averages, and gives you a clear, color-coded comparison of execution times. Beyond simple comparisons, it also supports warm-up runs, statistical analysis, and custom command setups, making it a powerful addition to any developer's toolkit.
Installation: cargo install hyperfine
Usage example: hyperfine "exa -al" "ls -al"
Watching exa obliterate ls in mere milliseconds is oddly satisfying ⚡. If you love optimization, efficiency, and a little nerdy satisfaction, hyperfine is your new best friend.

xplr: Alternative to nnn
Now, I don't know if I can call nnn a classic Linux tool, but I liked xplr so much that I decided to include it here. xplr takes the idea of a terminal file explorer to the next level. If you loved broot, xplr will blow your mind with these features:
Navigate directories using arrow keys or Vim-style bindings
Preview files directly inside the terminal
Launch commands on files without leaving the app
Fully customizable layouts and keybindings for power users
Installation: cargo install xplr
Usage: xplr

Wrapping Up
Switching to new commands might feel like extra effort at first, but Rust-based CLI tools are more than just a trend: they're fast, modern, and designed to make your workflow enjoyable.
They handle colors, syntax highlighting, and Git integration right out of the box.
They save keystrokes, reduce frustration, and make complex tasks simpler.
They make your terminal feel alive and engaging.
On top of that, using them makes you look extra cool in front of fellow Linux nerds. Trust me, it's a subtle flex 💪
Start small, maybe install exa and bat first, and gradually expand your toolkit. Soon, your terminal will feel futuristic, your workflow smoother, and your projects easier to manage.
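One low-risk way to start small, and to respect the sysadmin caveat above, is to alias the classic names only when the replacement is actually installed, so scripts and unfamiliar servers keep their stock behaviour. Here is a minimal sketch for ~/.bashrc; binary names vary by distribution (Debian and Ubuntu ship bat as batcat and fd as fdfind), so adjust accordingly:

# ~/.bashrc: prefer the Rust tools when present, quietly fall back to the classics otherwise
command -v exa  >/dev/null 2>&1 && alias ls='exa --icons'
command -v bat  >/dev/null 2>&1 && alias cat='bat --paging=never'
command -v dust >/dev/null 2>&1 && alias du='dust'
command -v duf  >/dev/null 2>&1 && alias df='duf'
command -v rg   >/dev/null 2>&1 && alias grep='rg'
command -v zoxide >/dev/null 2>&1 && eval "$(zoxide init bash)"   # enables the z command

Because aliases only apply to interactive shells, existing scripts that call ls, cat, or grep keep using the original binaries, which is exactly what you want on shared or production machines.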
  3. Last week
  4. by: Ryan Trimble Fri, 14 Nov 2025 15:32:50 +0000 A few weeks ago, Quiet UI made the rounds when it was released as an open source user interface library, built with JavaScript web components. I had the opportunity to check out the documentation and it seemed like a solid library. I’m always super excited to see more options for web components out in the wild. Unfortunately, before we even had a chance to cover it here at CSS-Tricks, Quiet UI has disappeared. When visiting the Quiet UI website, there is a simple statement: The repository for Quiet UI is no longer available on Quiet UI’s GitHub, and its social accounts seem to have been removed as well. The creator, Cory LaViska, is a veteran of UI libraries and most known for work on Shoelace. Shoelace joined Font Awesome in 2022 and was rebranded as Web Awesome. The latest version of Web Awesome was released around the same time Quiet UI was originally announced. According to the Quiet UI site, Cory will be continuing to work on it as a personal creative outlet, but hopefully we’ll be able to see what he’s cooking up again, someday. In the meantime, you can get a really good taste of what the project is/was all about in Dave Rupert’s fantastic write-up. Quiet UI Came and Went, Quiet as a Mouse originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  5. by: Sourav Rudra Fri, 14 Nov 2025 13:37:14 GMT Firefox has been pushing AI features for a while now. Over the past year, they've added AI chatbots in the sidebar, automatic alt text generation, and AI-enhanced tab grouping. It is basically their way of keeping up with Chrome and Edge, both of which have gone all-in on AI. Of course not everyone is thrilled about AI creeping into their web browsers, and Mozilla (the ones behind Firefox) seems to understand that. Every AI feature in Firefox is opt-in. You can keep using the browser as you always have, or flip on AI tools when you actually need them. Now, they are taking this approach a step further with something called AI Window. Firefox AI Window: What's Cooking?Mozilla has announced it's working on AI Window, a new browsing mode that comes with a built-in AI assistant. Think of it as a third option alongside the Classic browsing mode and Private Window mode. Before you get angry, know that it will be fully optional. Switch to AI Window when you want help, or just ignore it entirely. Try it, hate it, disable it. Mozilla's whole pitch is that you stay in control. On the transparency front, they are making three commitments: A fully opt-in experience.Features that protect your choice.More transparency around how your data is used.Why bother with all this, you ask? Mozilla sees AI as part of the web's future and wants to shape it their way. They figure ignoring AI while it reshapes the web doesn't help anyone, so they want to steer it toward user control rather than watch browsers from AI companies (read: Big Tech) lock people in. Ajit Varma, the Vice President and Head of Product at Firefox, put it like this: We believe standing still while technology moves forward doesn’t benefit the web or humanity. That’s why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less.The feature isn't live. Mozilla's building it "in the open" and wants feedback to shape how it turns out. If you want early access, there's a waitlist at firefox.com/ai to get updates and first dibs on testing. Suggested Read 📖 Exploring Firefox Tab Groups: Has Mozilla Redeemed Itself?Firefox’s Tab Groups help you organize tabs efficiently. But how efficiently? Let me share my experience.It's FOSSSourav Rudra
  6. by: Abhishek Prakash Fri, 14 Nov 2025 17:08:50 +0530 Feels like 2025 is ending sooner than expected. I know that's not the case, but it just feels like that 😄 On that note, we plan to publish at least two more courses for you before the year ends. They are likely to be on Terraform and Kubernetes. I am also planning a microcourse on 'automated backups with cron and rsync'. These classic Linux tools are always reliable. In the meantime, we are also working on expanding our collection of hands-on practice labs so that you can improve your skills by doing. Lots of things planned. Stay tuned, stay subscribed. Here's why you should get LHB Pro membership:
✅ Get access to Linux for DevOps, Docker, Ansible, Systemd and other text courses
✅ Get access to the Kubernetes Operator and SSH video courses
✅ Get 6 premium books on Bash, Linux and Ansible for free
✅ No ads on the website
Get Pro Membership
This post is for subscribers only.
  7. by: Sourav Rudra Fri, 14 Nov 2025 10:44:14 GMT Ubuntu is Canonical's flagship Linux distribution that powers a significant portion of today's information technology infrastructure. It comes in two release types: interim releases with nine months of support, and long-term support (LTS) releases with five years of standard support that is extensible via Ubuntu Pro. If you didn't know, Canonical introduced Ubuntu Pro in 2022 as a subscription service that extends LTS coverage beyond the standard five years. It includes Expanded Security Maintenance (ESM), which provides an additional five years of security patching, bringing the total coverage to 10 years for LTS releases. Similarly, back in 2024, Canonical launched the Legacy add-on for Ubuntu Pro, which initially provided two additional years of support beyond ESM, bringing total coverage to 12 years. And now they have announced an expansion that brings 15 years of support for LTS releases.

15 Years of Support Sounds Great
The expanded Legacy add-on now offers five additional years of support after the 10-year ESM window ends. This means Ubuntu LTS releases receive:
5 years of standard security maintenance.
5 years of Expanded Security Maintenance.
5 years of Legacy add-on support.
Ubuntu 14.04 LTS, which reached the end of its 10-year ESM coverage in April 2024 and entered the Legacy phase, will now be maintained until April 2029. This gives it a full 15-year lifecycle from its initial release. The Legacy add-on kicks in after the first 10 years and costs 50% more than the standard Ubuntu Pro subscription. All subsequent LTS releases, including 16.04, 18.04, 20.04, and beyond, are eligible for the same 15-year coverage when they reach the Legacy phase.

Get Ubuntu Pro (Legacy add-on)
The Legacy add-on becomes available after an LTS release completes 10 years of coverage and, as mentioned earlier, costs 50% more than the standard Ubuntu Pro subscription. To activate the Legacy add-on, Canonical asks users to contact its sales team or reach out to their assigned account manager.
Ubuntu Pro (Legacy add-on)
Suggested Read 📖
IBM Joins OpenSearch Software Foundation to Advance AI-Powered Search and RAG
Pledges enterprise-grade enhancements as Premier Member. It's FOSS, Sourav Rudra
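If you want to check where an existing machine stands before worrying about the Legacy phase, Canonical's Ubuntu Pro client (the pro command shipped in the ubuntu-advantage-tools package) already reports standard and ESM coverage from the terminal; the Legacy add-on itself is arranged through sales, as the article notes. A minimal sketch of that workflow, with a placeholder token:

# Attach the machine to your Ubuntu Pro subscription (free for personal use on up to five machines)
sudo pro attach YOUR_TOKEN_HERE   # placeholder; use a real token from your Ubuntu Pro account
# List which services (esm-infra, esm-apps, livepatch, ...) are enabled on this system
pro status
# Summarize how many installed packages are covered by standard updates vs. ESM
pro security-status
# Enable Expanded Security Maintenance for base repository packages if it is not already on
sudo pro enable esm-infra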
  8. by: Ani Fri, 14 Nov 2025 09:53:36 +0000 Steve Jobs famously said, “Design is not just what it looks like and feels like — design is how it works.” Design is so much more than the visual layer; it goes far deeper than that. About meI’m Petra Tarkkala, and I’m the Head of Design at Tietoevry Create Finland. I have 25 years of experience in service design, UX, and digital transformation, making me one of the pioneers in digital design in Finland. When I started, the field of service design was still quite small, and it’s been inspiring to witness its growth. In many ways, I’ve evolved together with the industry. My team has about 20 designers in Finland and collaborates with international design teams across the Nordics, Central Europe, and the US. Most of my work involves consultative projects, which are mainly in public services and large enterprises based in Finland. My approach is very hands-on and grounded in understanding real user needs. We always base our work on insights, so it’s essential for me first to understand the actual context and what users truly need before trying to solve any problem. About my roleAs Head of Design, I lead our design team, grow our competence, recruit new talent, and help shape our project portfolio. I also stay hands-on with design projects. This keeps my skills sharp and my thinking fresh. Working directly with clients not only inspires new ideas, but also makes me a better design leader. Service design is fundamentally about understanding people and creating services that are accessible, intuitive, and genuinely valuable, whether that means digital solutions, better face-to-face experiences, or entirely new ways of working. The process always starts with a deep dive into user and business needs, followed by ideation, prototyping, and testing with real users. It’s iterative: we refine and test concepts until we find what truly works. In a nutshell, we co-create solutions that make a positive difference for both organizations and the people they serve. For example, in healthcare projects, service design might mean ensuring digital tools support, not replace, human interaction, or making sure vulnerable groups aren’t left behind. In Finland, service design can help make limited resources go further by tailoring services to different needs: some people are happy with digital consultations, while others—like many older adults—prefer face-to-face encounters. The key is designing with empathy and flexibility, so everyone gets the support they need. Petra Tarkkala, Head of Design, Tietoevry Create The beginning of my career I was always quite good at math and strong in the natural sciences, and I was also very creative. Still, I didn’t have a clear idea of what I wanted to do. I didn’t dream of being a doctor or a teacher. I just knew I wanted to do something meaningful that would let me use my strengths. Since I had studied a lot of math and physics in high school, I decided to apply to the Helsinki University of Technology (now known as Aalto University) to study computer science. I got accepted right away, in 1996. Building my own pathI feel incredibly lucky to have followed this path. I could have never planned it. Back in high school, this kind of career didn’t even exist. That’s something I often tell young people, including my own kids: don’t stress too much about deciding exactly what you want to be, because your future job might not even exist yet. At the time, I just believed that having a master’s degree would open doors, and I truly got lucky. 
I made my choices somewhat randomly, but by following my strengths, I found work that motivates me and makes me happy. Working at Tietoevry I joined Tietoevry in 2018, and I’ve genuinely loved the journey ever since. At heart, I’m a creative problem-solver—I thrive at the intersection of business, design, and technology, and I honestly can’t imagine doing anything else. With my technical background, creativity, and strong sense of user empathy, my role fits me perfectly. I also value meaningful work: helping businesses succeed while creating real impact. I feel lucky that it’s been so easy to balance my work with my personal life. The value of AIAI enables us to focus on more meaningful and valuable work by automating the mundane tasks. AI frees up time and resources. For example, previously, part of our project’s budget had to be used for routine tasks, such as transcribing user interviews. Now, AI tools can generate transcripts for us and even help identify key insights from those interviews. I use AI as a sparring partner. When I need to produce material for a client or develop something for a project, I check AI’s findings, compare it with my own, and then create a synthesis. It’s like having a very smart colleague always available, who provides valuable input, but one you can’t trust 100%. Keeping myself motivatedAs a consultant, receiving genuine gratitude from clients at the end of a challenging design project is highly motivating. Another key source of motivation for me is the community I work with. My team is fun, energetic, and truly passionate about what we do. What motivates us is the belief that our work matters, that we’re solving real problems and making a difference. Being surrounded by people who care deeply about the impact of their work is incredibly motivating. My advice to women in techI think that for women in tech is especially important to remember that we should be bold in our ideas and confident in our abilities. If we have the skills and the foundation, we shouldn’t wait to be guided; we should step forward and take the lead ourselves. I encourage my team to be proactive and speak up. I often remind them: “Don’t wait for permission to lead — just start leading.” Design is not always well understood; being clear, assertive, and confident is necessary to move ideas forward. My favourite quoteSteve Jobs famously said, “Design is not just what it looks like and feels like — design is how it works.” Design is a powerful tool for change. Design is not just about making things look good—it’s about making things work better for people, systems, and the planet. I believe in creativity as a force for transformation, and I’m always looking for ways to bring creative problem solving and user empathy into the work I do. The post Role Model Blog: Petra Tarkkala, Tietoevry Create first appeared on Women in Tech Finland.
  9. by: Hangga Aji Sayekti Fri, 14 Nov 2025 07:49:11 +0530 Did you know that many security breaches happen through assets companies didn't even know they had? Subdomains like staging.company.com or test.api.company.com are frequently overlooked yet can expose your entire infrastructure. OWASP Amass solves this by automatically discovering all your subdomains, giving you a complete picture of your attack surface. In this guide, we'll show you how to use it like a pro.

What is OWASP Amass?
OWASP Amass is an open-source tool designed for in-depth Attack Surface Mapping and Asset Discovery. In simpler terms, it's a subdomain enumeration powerhouse. It doesn't just use one method; it combines data from over 80 different sources, including:
Certificate Transparency Logs: It looks at public records of SSL certificates issued for a domain.
Search Engines: It scrapes results from Google, Bing, and others.
DNS Databases: It queries massive DNS data archives.
Brute Forcing: It intelligently guesses common subdomain names.
The result is a comprehensive list of subdomains you might not have even known existed.
📋 A crucial reminder: only use Amass on domains you own or have explicit permission to test. Unauthorized scanning can be considered hostile and may violate terms of service or laws. vulnweb.com is a safe and legal playground for this purpose.

Step 1: Installing OWASP Amass
The easiest way to install Amass on most Linux distributions is via a package manager. Amass is bundled with Kali, so you are already covered there; just drop the command in the terminal and let the enumeration do the work. For Debian/Ubuntu-based systems:
sudo apt install amass
To verify your installation, run:
amass -version
If it returns a version number, you're all set!

Understanding the Basic Syntax of Amass
The amass command is powerful because of its various flags and options. Here's a quick reference table for the flags we'll use in this guide:
Flag / Option | Description | Example
enum | The subcommand for subdomain enumeration. | amass enum
-d | Specifies the target domain. (Required) | -d vulnweb.com
-passive | Uses only passive data sources (no direct DNS queries). | -passive
-brute | Performs a brute-force attack using wordlists. | -brute
-o | Saves the results to a specified file. | -o results.txt
-json | Saves detailed results in JSON format. | -json output.json
-list | Shows the data sources used in enumeration. | amass enum -list
-help | Shows the help menu for the enum subcommand. | amass enum -help

Step 2: Your First Subdomain Hunt: Passive Reconnaissance
Let's start with the safest and most common method: passive reconnaissance. This means Amass will only query its numerous data sources. It won't send any traffic directly to the target's servers, making it stealthy and non-intrusive. For this tutorial, we'll use vulnweb.com, a site intentionally created for security testing. Open your terminal and type:
amass enum -passive -d vulnweb.com
Let's break this down with our new syntax knowledge:
enum: This is the subcommand for enumeration (discovery).
-passive: This flag tells Amass to stick to passive methods.
-d vulnweb.com: Specifies our target domain.
Within seconds, you'll see a list of subdomains start to populate your terminal. For vulnweb.com, you should see entries like testphp.vulnweb.com, testasp.vulnweb.com, and testhtml5.vulnweb.com. This is your initial map! You've just discovered multiple "entrances" to the vulnweb.com infrastructure.

Step 3: Digging Deeper: Active Reconnaissance and Brute Forcing
Passive mode is great, but sometimes you need to be more thorough.
This is where active reconnaissance comes in. It involves directly interacting with the target's DNS servers. This method can be louder but often reveals subdomains that aren't listed in any public database. To perform an active DNS enumeration, simply remove the -passive flag:
amass enum -d vulnweb.com
If you compare the two runs, you will probably notice that both find the same entries; the only difference is the order in which the results are printed. -passive tells Amass to gather information quietly from public sources (certificate logs, public DNS, search engines) without touching the target, while running it without -passive allows noisier, active checks such as direct DNS queries. For vulnweb.com, the public sources already contain everything, so the active run doesn't discover anything new; it just lists the same entries in a different sequence.

Taking it Up a Notch: Brute Forcing
What about subdomains that are completely hidden? Think dev, staging, ftp, cpanel. Amass can perform a "brute force" attack by trying a massive list of common subdomain names. We'll combine this with passive mode to be efficient and respectful.
amass enum -passive -brute -d vulnweb.com
Let Amass complete the enumeration...
hangga@hangga-kali ~ amass enum -passive -brute -d vulnweb.com
vulnweb.com (FQDN) --> ns_record --> ns2.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns3.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns4.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns1.eurodns.com (FQDN)
ns2.eurodns.com (FQDN) --> a_record --> 104.37.178.107 (IPAddress)
ns2.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b001::107 (IPAddress)
ns3.eurodns.com (FQDN) --> a_record --> 199.167.66.108 (IPAddress)
ns3.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b002::108 (IPAddress)
ns4.eurodns.com (FQDN) --> a_record --> 104.37.178.108 (IPAddress)
ns4.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b001::108 (IPAddress)
ns1.eurodns.com (FQDN) --> a_record --> 199.167.66.107 (IPAddress)
ns1.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b002::107 (IPAddress)
rest.vulnweb.com (FQDN) --> a_record --> 18.215.71.186 (IPAddress)
testasp.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
testaspnet.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
localhost.vulnweb.com (FQDN) --> a_record --> 127.0.0.1 (IPAddress)
104.37.176.0/21 (Netblock) --> contains --> 104.37.178.108 (IPAddress)
104.37.176.0/21 (Netblock) --> contains --> 104.37.178.107 (IPAddress)
199.167.64.0/22 (Netblock) --> contains --> 199.167.66.108 (IPAddress)
199.167.64.0/22 (Netblock) --> contains --> 199.167.66.107 (IPAddress)
44.224.0.0/11 (Netblock) --> contains --> 44.238.29.244 (IPAddress)
2610:1c8:b001::/48 (Netblock) --> contains --> 2610:1c8:b001::108 (IPAddress)
2610:1c8:b001::/48 (Netblock) --> contains --> 2610:1c8:b001::107 (IPAddress)
127.0.0.0/8 (Netblock) --> contains --> 127.0.0.1 (IPAddress)
23393 (ASN) --> managed_by --> NUCDN (RIROrganization)
23393 (ASN) --> announces --> 104.37.176.0/21 (Netblock)
23393 (ASN) --> announces --> 199.167.64.0/22 (Netblock)
23393 (ASN) --> managed_by --> NUCDN, US (RIROrganization)
23393 (ASN) --> announces --> 2610:1c8:b001::/48 (Netblock)
16509 (ASN) --> managed_by --> AMAZON-02 - Amazon.com, Inc. (RIROrganization)
16509 (ASN) --> announces --> 44.224.0.0/11 (Netblock)
0 (ASN) --> managed_by --> Reserved Network Address Blocks (RIROrganization)
0 (ASN) --> announces --> 127.0.0.0/8 (Netblock)
2610:1c8:b002::/48 (Netblock) --> contains --> 2610:1c8:b002::108 (IPAddress)
2610:1c8:b002::/48 (Netblock) --> contains --> 2610:1c8:b002::107 (IPAddress)
18.208.0.0/13 (Netblock) --> contains --> 18.215.71.186 (IPAddress)
23393 (ASN) --> announces --> 2610:1c8:b002::/48 (Netblock)
14618 (ASN) --> managed_by --> AMAZON-AES - Amazon.com, Inc. (RIROrganization)
14618 (ASN) --> announces --> 18.208.0.0/13 (Netblock)
The enumeration has finished

Wow! Your Amass scan just uncovered the complete infrastructure blueprint of vulnweb.com! The scan revealed not just the obvious subdomains like rest.vulnweb.com and testasp.vulnweb.com, but also uncovered that testaspnet.vulnweb.com shares the same IP address, suggesting shared hosting. Interestingly, it even found localhost.vulnweb.com pointing to 127.0.0.1, which might indicate some misconfiguration. Beyond subdomains, Amass mapped out the entire network topology: EuroDNS handling the nameservers, with the actual services distributed across Amazon AWS and NUCDN cloud infrastructure. This level of detail gives you the complete attack surface in a single scan, perfect for both security assessment and documentation. Ready to dive deeper into any of these findings?
Next, to explore Amass's extensive data sources, run:
amass enum -list
This shows you all the available data sources that Amass queries during enumeration.

Step 4: Getting Detailed Output and Understanding the Results
To get more detailed information about the discovered subdomains, save the results to a text file:
amass enum -passive -d vulnweb.com -o vulnweb_subdomains.txt
Let's make sure the output was saved:
cat vulnweb_subdomains.txt
💡 Action required: always export Amass results to a text file. It is critical for pentest documentation.

Final Thoughts
OWASP Amass is an indispensable tool in your Linux toolkit. It transforms the daunting task of asset discovery from a manual, error-prone process into an automated, comprehensive one. By knowing your entire attack surface, not just subdomains but also infrastructure relationships, you can patch vulnerabilities, close unused access points, and build a much more robust defense. So go ahead, fire up that terminal, and start mapping. Your future, more secure self will thank you for it.
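The flag table above also lists -json, which is the easier route when you want to feed results into other tools rather than read them by eye. A small sketch, assuming jq is installed; note that the exact JSON field names can differ between Amass versions, so inspect one line of the output first:

# Save newline-delimited JSON alongside the plain-text list
amass enum -passive -d vulnweb.com -json vulnweb.json -o vulnweb_subdomains.txt
# Inspect one record to confirm the field layout of your Amass version
head -n 1 vulnweb.json | jq .
# Pull out just the hostnames (the "name" field in recent releases) and de-duplicate
jq -r '.name' vulnweb.json | sort -u > hosts.txt
wc -l hosts.txt

From there, hosts.txt can be handed to whatever comes next in your workflow, while the JSON file keeps source and address metadata for the report.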
  10. by: Sourav Rudra Fri, 14 Nov 2025 01:53:05 GMT FFmpeg maintainers have publicly criticized Google after its AI tool reported a security bug in code for a 1995 video game. The maintainers called the finding "CVE slop" and questioned whether trillion-dollar corporations should use AI to find security issues in volunteer code without providing fixes. Unchecked Automation is Not an Answer So what happened is, Google's AI agent Big Sleep found a bug in FFmpeg's code for decoding LucasArts Smush codec. The issue affected the first 10-20 frames of Rebel Assault II, a game from 1995. If you didn't know, Big Sleep is Google's AI-powered vulnerability detection tool developed by its Project Zero and DeepMind divisions. It is supposed to find security vulnerabilities in software before attackers can exploit them. But there's an issue here: under Google's "Reporting Transparency" policy, the tech giant publicly announces it has found a vulnerability within one week of reporting it. A 90-day disclosure clock then starts regardless of whether a patch is available. You see the problem now? 🤔 FFmpeg developers patched the bug but weren't happy about it. They tweeted in late October that "We take security very seriously but at the same time is it really fair that trillion-dollar corporations run AI to find security issues in people's hobby code? Then expect volunteers to fix." Beyond that, you have to understand that FFmpeg is an important piece of digital infrastructure that is used in Google Chrome, Firefox, YouTube, VLC, Kodi, and many other platforms. The project is written almost exclusively by volunteers. Much of the code is in assembly language, which is difficult to work with. This situation basically highlights the ongoing tensions over how corporations use volunteer-maintained open source software that powers their commercial products and expect them to fix any obscure issues that crop up. Via: The New Stack Suggested Reads 📖 Open Source Infrastructure is Breaking Down Due to Corporate FreeloadingAn unprecedented threat looms over open source.It's FOSSSourav RudraFFmpeg Receives $100K in Funding from India’s FLOSS/fund InitiativeIt is one of the world’s most widely used multimedia frameworks today.It's FOSSSourav Rudra
  11. by: Daniel Schwarz Thu, 13 Nov 2025 15:00:20 +0000 The range syntax isn’t a new thing. We‘re already able to use it with media queries to query viewport dimensions and resolutions, as well as container size queries to query container dimensions. Being able to use it with container style queries — which we can do starting with Chrome 142 — means that we can compare literal numeric values as well as numeric values tokenized by custom properties or the attr() function. In addition, this feature comes to the if() function as well. Here’s a quick demo that shows the range syntax being used in both contexts to compare a custom property (--lightness) to a literal value (50%): #container { /* Choose any value 0-100% */ --lightness: 10%; /* Applies it to the background */ background: hsl(270 100% var(--lightness)); color: if( /* If --lightness is less than 50%, white text */ style(--lightness < 50%): white; /* If --lightness is more than or equal to 50%, black text */ style(--lightness >= 50%): black ); /* Selects the children */ * { /* Specifically queries parents */ @container style(--lightness < 50%) { color: white; } @container style(--lightness >= 50%) { color: black; } } } Again, you’ll want Chrome 142 or higher to see this work: CodePen Embed Fallback Both methods do the same thing but in slightly different ways. Let’s take a closer look. Range syntax with custom properties In the next demo coming up, I’ve cut out the if() stuff, leaving only the container style queries. What’s happening here is that we’ve created a custom property called --lightness on the #container. Querying the value of an ordinary property isn’t possible, so instead we save it (or a part of it) as a custom property, and then use it to form the HSL-formatted value of the background. #container { /* Choose any value 0-100% */ --lightness: 10%; /* Applies it to the background */ background: hsl(270 100% var(--lightness)); } After that we select the container’s children and conditionally declare their color using container style queries. Specifically, if the --lightness property of #container (and, by extension, the background) is less than 50%, we set the color to white. Or, if it’s more than or equal to 50%, we set the color to black. #container { /* etc. */ /* Selects the children */ * { /* Specifically queries parents */ @container style(--lightness < 50%) { color: white; } @container style(--lightness >= 50%) { color: black; } } } CodePen Embed Fallback /explanation Note that we wouldn’t be able to move the @container at-rules to the #container block, because then we’d be querying --lightness on the container of #container (where it doesn’t exist) and then beyond (where it also doesn’t exist). Prior to the range syntax coming to container style queries, we could only query specific values, so the range syntax makes container style queries much more useful. By contrast, the if()-based declaration would work in either block: #container { --lightness: 10%; background: hsl(270 100% var(--lightness)); /* --lightness works here */ color: if( style(--lightness < 50%): white; style(--lightness >= 50%): black ); * { /* And here! */ color: if( style(--lightness < 50%): white; style(--lightness >= 50%): black ); } } CodePen Embed Fallback So, given that container style queries only look up the cascade (whereas if() also looks for custom properties declared within the same CSS rule) why use container style queries at all? 
Well, personal preference aside, container queries allow us to define a specific containment context using the container-name CSS property: #container { --lightness: 10%; background: hsl(270 100% var(--lightness)); /* Define a named containment context */ container-name: myContainer; * { /* Specify the name here */ @container myContainer style(--lightness < 50%) { color: white; } @container myContainer style(--lightness >= 50%) { color: black; } } } With this version, if the @container at-rule can’t find --lightness on myContainer, the block doesn’t run. If we wanted @container to look further up the cascade, we’d only need to declare container-name: myContainer further up the cascade. The if() function doesn’t allow for this, but container queries allow us to control the scope. Range syntax with the attr() CSS function We can also pull values from HTML attributes using the attr() CSS function. In the HTML below, I’ve created an element with a data attribute called data-notifs whose value represents the number of unread notifications that a user has: <div data-notifs="8"></div> We want to select [data-notifs]::after so that we can place the number inside [data-notifs] using the content CSS property. In turn, this is where we’ll put the @container at-rules, with [data-notifs] serving as the container. I’ve also included a height and matching border-radius for styling: [data-notifs]::after { height: 1.25rem; border-radius: 1.25rem; /* Container style queries here */ } Now for the container style query logic. In the first one, it’s fairly obvious that if the notification count is 1-2 digits (or, as it’s expressed in the query, less than or equal to 99), then content: attr(data-notifs) inserts the number from the data-notifs attribute while aspect-ratio: 1 / 1 ensures that the width matches the height, forming a circular notification badge. In the second query, which matches if the number is more than 99, we switch to content: "99+" because I don’t think that a notification badge could handle four digits. We also include some inline padding instead of a width, since not even three characters can fit into the circle. To summarize, we’re basically using this container style query logic to determine both content and style, which is really cool: [data-notifs]::after { height: 1.25rem; border-radius: 1.25rem; /* If notification count is 1-2 digits */ @container style(attr(data-notifs type(<number>)) <= 99) { /* Display count */ content: attr(data-notifs); /* Make width equal the height */ aspect-ratio: 1 / 1; } /* If notification count is 3 or more digits */ @container style(attr(data-notifs type(<number>)) > 99) { /* After 99, simply say "99+" */ content: "99+"; /* Instead of width, a little padding */ padding-inline: 0.1875rem; } } CodePen Embed Fallback But you’re likely wondering why, when we read the value in the container style queries, it’s written as attr(data-notifs type(<number>) instead of attr(data-notifs). Well, the reason is that when we don’t specify a data type (or unit, you can read all about the recent changes to attr() here), the value is parsed as a string. This is fine when we’re outputting the value with content: attr(data-notifs), but when we’re comparing it to 99, we must parse it as a number (although type(<integer>) would also work). In fact, all range syntax comparatives must be of the same data type (although they don’t have to use the same units). Supported data types include <length>, <number>, <percentage>, <angle>, <time>, <frequency>, and <resolution>. 
In the earlier example, we could actually express the lightness without units since the modern hsl() syntax supports that, but we’d have to be consistent with it and ensure that all comparatives are unit-less too: #container { /* 10, not 10% */ --lightness: 10; background: hsl(270 100 var(--lightness)); color: if( /* 50, not 50% */ style(--lightness < 50): white; style(--lightness >= 50): black ); * { /* 50, not 50% */ @container style(--lightness < 50) { color: white; } @container style(--lightness >= 50) { color: black; } } } Note: This notification count example doesn’t lend itself well to if(), as you’d need to include the logic for every relevant CSS property, but it is possible and would use the same logic. Range syntax with literal values We can also compare literal values, for example, 1em to 32px. Yes, they’re different units, but remember, they only have to be the same data type and these are both valid CSS <length>s. In the next example, we set the font-size of the <h1> element to 31px. The <span> inherits this font-size, and since 1em is equal to the font-size of the parent, 1em in the scope of <span> is also 31px. With me so far? According to the if() logic, if 1em is equal to less than 32px, the font-weight is smaller (to be exaggerative, let’s say 100), whereas if 1em is equal to or greater than 32px, we set the font-weight to a chunky 900. If we remove the font-size declaration, then 1em computes to the user agent default of 32px, and neither condition matches, leaving the font-weight to also compute to the user agent default, which for all headings is 700. Basically, the idea is that if we mess with the default font-size of the <h1>, then we declare an optimized font-weight to maintain readability, preventing small-fat and large-thin text. <h1> <span>Heading 1</span> </h1> h1 { /* The default value is 32px, but we overwrite it to 31px, causing the first if() condition to match */ font-size: 31px; span { /* Here, 1em is equal to 31px */ font-weight: if( style(1em < 32px): 100; style(1em > 32px): 900 ); } } CodePen Embed Fallback CSS queries have come a long way, haven’t they? In my opinion, the range syntax coming to container style queries and the if() function represents CSS’s biggest leap in terms of conditional logic, especially considering that it can be combined with media queries, feature queries, and other types of container queries (remember to declare container-type if combining with container size queries). In fact, now would be a great time to freshen up on queries, so as a little parting gift, here are some links for further reading: Media queries Container queries Feature queries The Range Syntax Has Come to Container Style Queries and if() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: Umair Khurshid Thu, 13 Nov 2025 18:02:00 +0530 Every few years, the Linux world finds something to fight about. Sometimes it is about package managers, sometimes about text editors, but nothing in recent memory split the community quite like systemd. What began as an init replacement quietly grew into a full-blown identity crisis for Linux itself, complete with technical manifestos, emotional arguments, and more mailing list drama than I ever thought possible. I did not plan to take a side in that debate. Like most users, I just wanted my server to boot, my logs to make sense, and my scripts to run. Yet systemd had a way of showing up uninvited. It replaced the old startup scripts I had trusted for years and left me wondering whether the Linux I loved was changing into something else entirely. Over time, I learned that this was not simply a story about software. It was a story about culture, trust, and how communities handle change. What follows is how I got pulled into that argument, what I learned when I finally stopped resisting. I have even created a course on using advanced automation with systemd. That much I use systemd now. Advanced Automation with systemdTake Your Linux Automation Beyond CronLinux HandbookUmair KhurshidHow I Got Pulled into the Systemd DebateMy introduction to systemd was not deliberate and arrived uninvited with an update, quietly replacing the familiar tangle of /etc/init.d scripts on Manjaro. The transition was abrupt enough to feel like a betrayal as one morning, my usual service apache2 restart returned a polite message: Use systemctl instead. That was the first sign that something fundamental had changed. I remember the tone of Linux forums around that time was half technical and half existential. Lennart Poettering, systemd’s creator, had become a lightning rod for criticism. To some, he was the architect of a modern, unified boot system; to others, he was dismantling the very ethos that made Unix elegant. I was firmly in the second camp! Back then, my world revolved around small workstations and scripts I could trace line by line. The startup process was tangible, you just had to open /etc/rc.local, add a command, and know it would run. When Fedora first adopted systemd in 2011, followed later by Debian, I watched from a distance with the comfort of someone on a “safe” distribution, but it was only a matter of time. Ian Jackson of Debian called the decision to make systemd the default a failure of pluralism and Devuan was born soon after as a fork of Debian, built specifically to keep sysvinit alive. On the other side, Poettering argued that systemd was never meant to violate Unix principles, but to reinterpret them for modern systems where concurrency and dependency tracking actually mattered. I followed those arguments closely, sometimes nodding along with Jackson’s insistence on modularity, other times feeling curious about Poettering’s idea of “a system that manages systems.” Linus Torvalds chimed in occasionally, not against systemd itself but against its developers’ attitudes, which kept the controversy alive far longer than it might have lasted. At that point, I saw systemd as something that belonged to other distributions, an experiment that might stabilize one day but would never fit into the quiet predictability of my setups. That illusion lasted until I switched to a new version of Manjaro in 2013, and the first boot greeted me with the unmistakable parallel startup messages of systemd.  
The Old World: init, Scripts, and PredictabilityBefore systemd, Linux startup was beautifully transparent. When the machine booted, you could almost watch it think. The bootloader passed control to the kernel, the kernel mounted the root filesystem, and init took over from there. It read a single configuration file, /etc/inittab, and decided which runlevel to enter. Each runlevel had its own directory under /etc/rc.d/, filled with symbolic links to shell scripts. The naming convention S01network, S20syslog, K80apache2was primitive but logical. The S stood for “start,” K for “kill,” and the numbers determined order. The whole process was linear, predictable, and very, very readable. If something failed, you could open the script and see exactly what went wrong, and debugging was often as simple as adding an echo statement or two. On a busy day, I would edit /etc/init.d/apache2, add logging to a temporary file, and restart the service manually.  A typical init script looked something like this: #!/usr/bin/env sh ### BEGIN INIT INFO # Provides: myapp # Required-Start: $network # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start myapp daemon ### END INIT INFO case "$1" in start) echo "Starting myapp" /usr/local/bin/myapp & ;; stop) echo "Stopping myapp" killall myapp ;; restart) $0 stop $0 start ;; *) echo "Usage: /etc/init.d/myapp {start|stop|restart}" exit 1 esac exit 0Crude, yes, but understandable. Even if you did not write it, you could trace what happened as it was pure shell, running in sequence, and every decision it made was visible to you. This simplicity was based on the Unix principle of small, composable parts. You could swap out one script without affecting the rest and even bypass init entirely by editing /etc/rc.local, placing commands there for one-off startup tasks. The problem, however, was because everything ran in a fixed order, startup could be painfully slow in comparison to systemd. If a single service stalled, the rest of the boot process might hang and dependencies were implied rather than declared. I could declare that a service “required networking,” but the system had no reliable way to verify that the network was fully up. Also, parallel startup was almost if not completely impossible. Distributions like Ubuntu experimented with alternatives such as Upstart, which tried to make init event driven rather than sequential, but it never fully replaced the traditional init scripts. When systemd appeared, it looked like another attempt in the same category, a modernization effort destined to break the things I liked most. From my perspective, init was not broken. It was slow, yes, but there was total control, which I did not like getting replaced by a binary I could not open in a text editor. The early systemd adopters claimed that it was faster, cleaner, and more consistent, but the skeptics (which included me) saw it as a power grab by Red Hat. The subsequent years have proven this fear somewhat justified, as maintaining non-systemd configurations has become increasingly difficult but the standardization has also made Linux more predictable across distributions. Looking back, I now realize that I had mistaken predictability for transparency. The old init world felt clear because it was simple, not because it was necessarily better designed. 
As systems grew more complex with services that had to interact asynchronously, containers starting in milliseconds, and dependency chains that spanned multiple layers, the cracks began to show.  What Systemd Tried to FixIt took me a while to understand that systemd was not an act of defiance against the Unix philosophy but a response to a real set of technical problems. Once I started reading Lennart Poettering’s early blog posts, a clearer picture emerged. His goal was not to replace init for its own sake but to make Linux startup faster, more reliable, and more predictable. Poettering and Kay Sievers began designing systemd in 2010 at Red Hat, with Fedora as its first proving ground (now my daily driver). Their idea was to build a parallelized, dependency-aware init system that could handle modern workloads gracefully. Instead of waiting for one service to finish before starting the next, systemd would launch multiple services concurrently based on declared dependencies. At its heart, systemd introduced unit files, small declarative configurations that replaced the hand-written shell scripts of SysV init. Each unit described a service, socket, target, or timer in a consistent, machine-readable format. The structure was both simple and powerful: [Unit] Description=My Custom Service After=network.target [Service] ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf Restart=on-failure User=myuser [Install] WantedBy=multi-user.targetTo start it, you no longer edited symlinks or runlevels. You simply ran: sudo systemctl start myapp.service sudo systemctl enable myapp.serviceThe old init scripts had no formal way to express “start only after this other service has finished successfully.” They relied on arbitrary numbering and human intuition. With systemd, relationships were declared using After=, Before=, Requires=, and Wants=. [Unit] Description=Web Application After=network.target database.service Requires=database.serviceThis meant systemd could construct a dependency graph at boot and launch services in optimal order, improving startup times dramatically. Systemd also integrated timers (replacing cron for system tasks), socket activation (starting services only when needed), and cgroup management (to isolate processes cleanly). The critics called it “scope creep,” but Poettering’s argument was that the components it replaced were fragmented and poorly integrated, and building them into a single framework reduced complexity overall. That claim divided the Linux world! On one side were distributions like Fedora, Arch, and openSUSE, which adopted systemd quickly. They saw its promise in boot speed, unified tooling, and clear dependency tracking. On the other side stood Debian and its derivatives, which valued independence and simplicity (some of you new folks might find it odd given their current Rust adoption). Debian’s Technical Committee vote in 2014 was one of the most contentious in its history. When systemd was chosen as the default, Ian Jackson resigned from the committee, citing an erosion of choice and the difficulty of maintaining alternative inits. That decision directly led to the birth of Devuan whose developers described systemd as “an intrusive monolith,” a phrase that captured the mood of the opposition. Yet, beneath the politics, systemd was solving problems that were not just theoretical. Race conditions during boot were common and service dependencies often failed silently. 
On embedded devices and containerized systems, startup order mattered in ways SysV init could not reliably enforce. The Real Arguments (and What They Miss)The longer I followed the systemd controversy, the more I realized that the arguments around it were not always about code and were also about identity. Every debate thread eventually drifted toward the same philosophical divide that should Linux remain a collection of loosely coupled programs, or should it evolve into a cohesive, centrally managed system? When I read early objections from Slackware and Debian maintainers, they were rarely technical complaints about bugs or performance. They were about trust and the Unix philosophy “do one thing well” that had guided decades of design. init was primitive but modular, and its limitations could be fixed piecemeal. systemd, by contrast, felt like a comprehensive replacement that tied everything together under a single logic (the current debate around C being memory unsafe and Rust adoption are quite similar in the form). Devuan’s founders said that once core packages like GNOME began depending on systemd-logind, users effectively lost the ability to choose another init system. That interdependence was viewed as a form of lock-in at the architecture level. Meanwhile, Lennart Poettering maintained that systemd was modular internally, just not in the fragmented Unix sense. He described systemd as an effort to build coherence into an environment that had historically resisted it. I remember reading Linus’ comments on the matter around 2014. He was not against systemd per se but his frustration (and it has not changed much) was about developer behavior which called out what he saw as unnecessary hostility from both sides, maintainers blocking patches, developers refusing to accommodate non-systemd setups, and the cultural rigidity that had turned a design debate into a purity contest. His opinion was pragmatic that as long as systemd worked well and did not break things needlessly, it was fine. The irony was that both camps were right in their own way. The anti-systemd camp correctly foresaw that once GNOME and major distributions adopted it, alternatives would fade and the pro-systemd side correctly argued that modern systems needed unified control and reliable dependency management. As someone who would later get into Sysadmin and DevOps, now I feel like the conversation missed the fact that Linux itself had changed. By the early 2010s, most servers were running dozens of services, containers were replacing bare-metal deployments, and hardware initialization had become vastly more complex. Boot time was no longer the slow, linear dance it used to be and was a more of a network of parallelized events that had to interact safely and recover from failure automatically. I once tried to build a stripped-down Debian container without systemd, thinking I could recreate the old init world but it was an enlightening failure. I spent hours wiring together shell scripts and custom supervision loops, all to mimic what a single Restart=on-failure directive did automatically in systemd. That experience showed me what most arguments missed that the problem was not that systemd did too much, but that the old approach expected the user to do everything manually. For instance, consider a classic SysV approach to restarting a service on crash. You would write something like this: #!/bin/sh while true; do /usr/local/bin/myapp status=$? 
    if [ $status -ne 0 ]; then
        echo "myapp crashed with status $status, restarting..." >&2
        sleep 2
    else
        break
    fi
done

It worked, but it was a hack. systemd gave you the same reliability with a couple of configuration lines:

Restart=on-failure
RestartSec=2

The simplicity of that design was hard to deny. Yet to many administrators, it still felt like losing both control and familiarity. The cultural resistance was amplified by how fast systemd grew. Each release seemed to absorb another subsystem: udevd, logind, networkd, and later resolved. Critics accused it of “taking over userland,” but the more I examined those claims, the more I saw a different pattern. Each of those tools replaced a historically fragile or inconsistent component that distributions had struggled to maintain separately. Critics also pointed to the technical risk of consolidating so much functionality in one project, fearing a single regression could break the entire ecosystem. The defensive tone of Poettering’s posts did not help, and over time, his name became synonymous with the debate itself. But even among the loudest critics, there was a reluctant recognition that systemd had improved startup speed, service supervision, and logging consistency; what they feared was not its functionality but its dominance. The most productive discussions I saw were not about whether systemd was “good” or “bad,” but about whether Linux had space for diversity anymore. In a sense, systemd’s arrival forced the community to confront its own maturity. You could no longer treat Linux as a loose federation of components; it had become a unified operating system in practice, even if the philosophy still insisted otherwise. By the time I reached that conclusion, the debate had already cooled. Most distributions had adopted systemd by default, Devuan had carved out its niche, and the rest of us were learning to live with the new landscape. I began to see that the real question was not whether systemd broke the Unix philosophy, but whether the old Unix philosophy could still sustain the complexity of modern systems.

What I Learned After Actually Using It

At some point, resistance gave way to necessity. As often happens, I started managing servers (CentOS) that already used systemd, so learning it was no longer optional. What surprised me most was how quickly frustration turned into familiarity once I stopped fighting it. The commands that had felt alien at first began to make sense. The first time I ran systemctl status nginx.service, I understood what Poettering had been talking about. Instead of a terse message like “nginx is running,” I saw a complete summary including the process ID, uptime, memory usage, recent logs, and the exact command used to start it. It was the kind of insight that had previously required grepping through /var/log/syslog and several ps invocations. The typical status output was immediately practical: I could see that the service was running, its exact configuration file path, and its dependencies, all in one place. When a service failed, systemd logged everything automatically. Instead of checking multiple files, I could simply run:

journalctl -u nginx.service -b

That -b flag restricted the logs to the current boot, saving me from wading through old entries. It was efficient in a way the traditional logging setup never was. Then there was dependency introspection. 
I could visualize the startup tree with:

systemctl list-dependencies nginx.service

This command revealed the entire boot relationship graph, showing what started before and after Nginx. For anyone who had ever debugged slow boots by adding echo statements to init scripts, this was revolutionary. Over time, I began writing my own unit files. They were simple once you got used to the syntax. I remember converting a small Python daemon I had once managed with a hand-rolled init script. The old version had been about thirty lines of conditional shell logic. The new one was six lines of actual configuration:

[Unit]
Description=Custom Python Daemon
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/daemon.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

That was all it took to handle startup order, failure recovery, and clean shutdown without any custom scripting. The first time I watched systemd automatically restart the process after a crash, I felt a mix of admiration and reluctant respect. Some of my early complaints persisted; the binary log format of journald, for instance, still felt unnecessary. I understood why it existed (structured logs allow richer metadata), but it broke the old habit of inspecting logs with less and grep. I eventually learned that you could still pipe output:

journalctl -u myapp.service | grep ERROR

So even that compromise turned out to be tolerable. I also began to appreciate how much time I saved not having to worry about service supervision. Previously, I had used supervisord or custom shell loops to keep processes alive, but with systemd it was built-in. When a process crashed, I could rely on Restart=on-failure or Restart=always. If I needed to ensure that something ran only after a network interface was up, I could just declare:

After=network-online.target
Wants=network-online.target

Also, one thing that most discussions about systemd missed was built-in service sandboxing. For all the arguments about boot speed and complexity, few people talked about how deeply systemd reshaped security at the service level. The [Service] section of a unit file is not just about start commands; it can define isolation boundaries in a way that old init scripts never could. Directives like PrivateTmp, ProtectSystem, RestrictAddressFamilies, and NoNewPrivileges can drastically reduce the attack surface of a service. A web server, for instance, can be locked into its own temporary directory with PrivateTmp=true and denied access to the host’s filesystem with ProtectSystem=full. Even if compromised, it cannot modify critical paths or open new network sockets (a minimal sketch of such a hardened unit follows a little further below). Still, I had to get past a subtle psychological barrier: for years, I had believed that understanding the system meant being able to edit its scripts directly, and on top of that came the social pressure. With systemd, much of that transparency moved behind declarative configuration and binary logs. It felt at first like a loss of intimacy, but as I learned more about how systemd used cgroups to track processes, I realized it was not hiding complexity but just managing it. A perfect example came when I started using systemd-nspawn to spin up lightweight containers. The simplicity of systemd-nspawn -D /srv/container was eye-opening: it showed how systemd was not just an init system but a general process manager, capable of running containers, user sessions, and even virtual environments with consistent supervision. At that point, I began reading the source code and documentation rather than Reddit threads. 
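To make those sandboxing directives concrete, here is a minimal sketch of a hardened unit. The service name, user, and binary path are hypothetical, and the exact set of directives available depends on your systemd version, so treat this as an illustration rather than a drop-in configuration:

[Unit]
Description=Example hardened web service (illustrative only)
After=network-online.target
Wants=network-online.target

[Service]
# Hypothetical binary and unprivileged user
ExecStart=/usr/local/bin/mywebapp
User=webapp
Restart=on-failure
# Isolation boundaries discussed above
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=full
ProtectHome=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

[Install]
WantedBy=multi-user.target

Running systemd-analyze security against a unit like this gives a rough exposure score and suggests further directives worth tightening.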
I discovered how deeply it integrated with Linux kernel features like control groups and namespaces, and what had seemed like unnecessary overreach began to look more like a natural evolution of process control. The resentment faded, and in its place came something more complicated: an understanding that my dislike of systemd had been rooted in nostalgia as much as principle. In a modern environment with hundreds of interdependent services, the manual approach simply did not scale, though I respect people who shoot for things like building their own AWS alternative. Systemd was not perfect; it was opinionated and sometimes too aggressive in unifying tools. Yet once I accepted it as a framework rather than an ideology, it stopped feeling oppressive and just became another tool, powerful when used wisely, irritating when misunderstood. By then, I had moved from avoidance to proficiency and could write units, debug services, and configure dependencies with ease. I no longer missed the old init scripts once maintenance time became important to me.

Why the Systemd Controversy Still Matters

By now, most major distributions have adopted systemd, and the initial outrage has faded into the background. Yet the debate never truly disappeared; it just changed form. It became less about startup speed or PID 1 design, and more about philosophy. What kind of control should users have over their systems? How much abstraction is too much? The systemd debate persists because it touches something deeper than process management: the identity of Linux itself. The traditional Unix model prized minimalism and composability, one tool for one job. systemd, by contrast, represents a coordinated platform that integrates logging, device management, service supervision, and even containerization. To people like me, that integration feels awesome; to others, it feels like betrayal. For administrators who grew up writing init scripts and manipulating processes by hand, systemd signaled a loss of transparency: it replaced visible shell logic with declarative files and binary logs, and assumed responsibility for things that used to belong to the user. For newer users, especially those managing cloud-scale systems, it offered a coherent framework that actually worked the same everywhere. I’m not a huge fan of the word "trade-off," but unfortunately it defines most of modern computing. The more complexity we hide, the less friction we face in day-to-day tasks, but the more we depend on the hidden layer behaving correctly. It is the same tension that runs through all abstraction, from container orchestration to AI frameworks. Even now, forks and alternatives appear from time to time, such as runit, s6, and OpenRC, each promising a return to simplicity, but few large distributions switch back, because the practical benefits of systemd outweigh nostalgia. Still, I think the discomfort matters, as it reminds us that simplicity is not just a technical virtue but a cultural one. The fear of losing control keeps the ecosystem diverse. Projects like Devuan exist not because they expect to overtake Debian, but because they preserve the possibility of a different path. The real lesson, for me, is not about whether systemd is good or bad. It is about what happens when evolution in open source collides with emotion. Change in infrastructure is not just a matter of better code; it is also a negotiation between habits, values, and trust. 
When I type systemctl now, I no longer feel resistance; I just see a tool that grew faster than we were ready for, one that forced a conversation the Linux world was reluctant to have. The controversy still matters because it captures the moment when Linux stopped being a loose federation of ideas and started becoming an operating system in the full sense of the word. That transition was messy, and it probably had to be. If you have come this far, you likely see that systemd is more than just an init system; it’s a complete automation framework. If you want to explore that side of it, my course Advanced Automation with systemd walks through how to replace fragile cron jobs with powerful, dependency-aware timers, sandboxed tasks, and resource-controlled services (a small timer sketch follows below to give you a taste). It’s hands-on and practical! Advanced Automation with systemdTake Your Linux Automation Beyond CronLinux HandbookUmair Khurshid
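As that taste of dependency-aware timers, here is a minimal sketch that replaces a nightly cron entry. The unit names and the backup script path are hypothetical; only the general shape of a service/timer pair is standard systemd:

# /etc/systemd/system/nightly-backup.service
[Unit]
Description=Nightly backup job (illustrative)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/nightly-backup.timer
[Unit]
Description=Run the nightly backup at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now nightly-backup.timer and check upcoming runs with systemctl list-timers. Unlike a plain cron line, Persistent=true makes the job catch up on a run that was missed while the machine was powered off.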
  13. by: Sourav Rudra Thu, 13 Nov 2025 12:13:58 GMT Nitrux is a Debian-based Linux distribution that has always stood out for its bold design choices. It even made our list of the most beautiful Linux distributions. Earlier this year, the project made a significant announcement. They discontinued its custom NX Desktop and the underlying KDE Plasma base, prioritizing a Hyprland desktop experience combined with their in-house developed app distribution methods. Now, the first major release reflecting this redefined approach is finally here. 🆕 Nitrux 5.0.0: What's New? The release uses OpenRC 0.63 as its init system instead of systemd. This is paired with either Liquorix kernel 6.17.7 or a CachyOS-patched kernel, depending on your hardware, and the desktop experience is Wayland-only. KDE Plasma, KWin, and SDDM are gone. In their place, you get Hyprland with Waybar for the panel, Crystal Dock for application launching, and greetd as the login manager, and QtGreet as the display manager. Wofi serves as the application launcher, while wlogout handles logout actions. Nitrux 5.0.0 ships with an immutable root filesystem powered by NX Overlayroot. This provides system stability and rollback capabilities through the Nitrux Update Tool System (nuts). Plus, there is Nitrux's new approach to software management. NX AppHub and AppBoxes are now the primary methods for installing applications. Flatpak and Distrobox remain available as complementary options. There are many updated apps and tooling in this release too: Podman 5.6.1Docker 26.1.5Git 2.51.0Python 3.13.7OpenRazer 3.10.3MESA 25.2.3BlueZ 5.84PipeWire 1.4.8The developers are clear about who Nitrux is for. It is designed for users who see configuration as empowerment, not inconvenience. This isn't a distribution trying to please everyone. The team put it this way in their announcement: These are not additions for the sake of novelty, but extensions of the same philosophy—emphasizing that Nitrux targets modern, powerful hardware. Tuned for real machines: a track weapon, not a city commuter—built for those who drive, not spectate.📥 Download Nitrux 5.0.0The nitrux-contemporary-cachy-nvopen ISO is designed for NVIDIA hardware. It includes the NVIDIA Open Kernel Module and uses the CachyOS-patched kernel. The nitrux-contemporary-liquorix-mesa ISO targets AMD and Intel graphics. It ships with the Liquorix kernel and MESA drivers. Both versions are also available through SourceForge. Nitrux 5.0 (SourceForge)A fresh installation is strongly recommended for this release. Updates from Nitrux 3.9.1 to 5.0.0 are not supported. Future updates will be delivered through the Nitrux Update Tool System. Also, virtual machines are not supported natively, as the team removed many VM-specific components. You can learn more in the release notes. Suggested Read 📖 Here are the Most Beautiful Linux Distributions in 2025Aesthetically pleasing? Customized out of the box? You get the best of both worlds in this list.It's FOSSAnkush Das
  14. by: Nitij Taneja Thu, 13 Nov 2025 09:50:29 GMT Introduction In the rapidly evolving landscape of Artificial Intelligence, Retrieval-Augmented Generation (RAG) has emerged as a pivotal technique for enhancing the factual accuracy and relevance of Large Language Models (LLMs). By enabling LLMs to retrieve information from external knowledge bases before generating responses, RAG mitigates common issues such as hallucination and outdated information. However, traditional RAG approaches often rely on vector-based similarity searches, which, while effective for broad retrieval, can sometimes fall short in capturing the intricate relationships and contextual nuances present in complex data. This limitation can lead to the retrieval of fragmented information, hindering the LLM's ability to synthesize truly comprehensive and contextually appropriate answers. Enter Graph RAG, a groundbreaking advancement that addresses these challenges by integrating the power of knowledge graphs directly into the retrieval process. Unlike conventional RAG systems that treat information as isolated chunks, Graph RAG dynamically constructs and leverages knowledge graphs to understand the interconnectedness of entities and concepts. This allows for a more intelligent and precise retrieval mechanism, where the system can navigate relationships within the data to fetch not just relevant information, but also the surrounding context that enriches the LLM's understanding. By doing so, Graph RAG ensures that the retrieved knowledge is not only accurate but also deeply contextual, leading to significantly improved response quality and a more robust AI system. This article will delve into the core principles of Graph RAG, explore its key features, demonstrate its practical applications with code examples, and discuss how it represents a significant leap forward in building more intelligent and reliable AI applications. Key Features of Graph RAG Graph RAG distinguishes itself from traditional RAG architectures through several innovative features that collectively contribute to its enhanced retrieval capabilities and contextual understanding. These features are not merely additive but fundamentally reshape how information is accessed and utilized by LLMs. Dynamic Knowledge Graph Construction One of the most significant advancements of Graph RAG is its ability to construct a knowledge graph dynamically during the retrieval process. Traditional knowledge graphs are often pre-built and static, requiring extensive manual effort or complex ETL (Extract, Transform, Load) pipelines to maintain and update. In contrast, Graph RAG builds or expands the graph in real time based on the entities and relationships identified from the input query and initial retrieval results. This on-the-fly construction ensures that the knowledge graph is always relevant to the immediate context of the user's query, avoiding the overhead of managing a massive, all-encompassing graph. This dynamic nature allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or graph reconstruction. For instance, if a query mentions a newly discovered scientific concept, Graph RAG can incorporate this into its temporary knowledge graph, linking it to existing related entities, thereby providing up-to-date and relevant information. Intelligent Entity Linking At the heart of dynamic graph construction lies intelligent entity linking. 
As information is processed, Graph RAG identifies key entities (e.g., people, organizations, locations, concepts) and establishes relationships between them. This goes beyond simple keyword matching; it involves understanding the semantic connections between different pieces of information. For example, if a document mentions "GPT-4" and another mentions "OpenAI," the system can link these entities through a "developed by" relationship. This linking process is crucial because it allows the RAG system to traverse the graph and retrieve not just the direct answer to a query, but also related information that provides richer context. This is particularly beneficial in domains where entities are highly interconnected, such as medical research, legal documents, or financial reports. By linking relevant entities, Graph RAG ensures a more comprehensive and interconnected retrieval, enhancing the depth and breadth of the information provided to the LLM. Contextual Decision-Making with Graph Traversal Unlike vector search, which retrieves information based on semantic similarity in an embedding space, Graph RAG leverages the explicit relationships within the knowledge graph for contextual decision-making. When a query is posed, the system doesn't just pull isolated documents; it performs graph traversals, following paths between nodes to identify the most relevant and contextually appropriate information. This means the system can answer complex, multi-hop questions that require connecting disparate pieces of information. For example, to answer "What are the main research areas of the lead scientist at DeepMind?", a traditional RAG might struggle to connect "DeepMind" to its "lead scientist" and then to their "research areas" if these pieces of information are in separate documents. Graph RAG, however, can navigate these relationships directly within the graph, ensuring that the retrieved information is not only accurate but also deeply contextualized within the broader knowledge network. This capability significantly improves the system's ability to handle nuanced queries and provide more coherent and logically structured responses. Confidence Score Utilization for Refined Retrieval To further optimize the retrieval process and prevent the inclusion of irrelevant or low-quality information, Graph RAG utilizes confidence scores derived from the knowledge graph. These scores can be based on various factors, such as the strength of relationships between entities, the recency of information, or the perceived reliability of the source. By assigning confidence scores, the framework can intelligently decide when and how much external knowledge to retrieve. This mechanism acts as a filter, helping to prioritize high-quality, relevant information while minimizing the addition of noise. For instance, if a particular relationship has a low confidence score, the system might choose not to expand retrieval along that path, thereby avoiding the introduction of potentially misleading or unverified data. This selective expansion ensures that the LLM receives a compact and highly relevant set of facts, improving both efficiency and response accuracy by maintaining a focused and pertinent knowledge graph for each query. How Graph RAG Works: A Step-by-Step Breakdown Understanding the theoretical underpinnings of Graph RAG is essential, but its true power lies in its practical implementation. 
This section will walk through the typical workflow of a Graph RAG system, illustrating each stage with conceptual code examples to provide a clearer picture of its operational mechanics. While the exact implementation may vary depending on the chosen graph database, LLM, and specific use case, the core principles remain consistent.

Step 1: Query Analysis and Initial Entity Extraction

The process begins when a user submits a query. The first step for the Graph RAG system is to analyze this query to identify key entities and potential relationships. This often involves Natural Language Processing (NLP) techniques such as Named Entity Recognition (NER) and dependency parsing.

Conceptual Code Example (Python):

import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

# Load spaCy
nlp = spacy.load("en_core_web_sm")

# Step 1: Extract entities
def extract_entities(query):
    doc = nlp(query)
    return [(ent.text.strip(), ent.label_) for ent in doc.ents]

query = "Who is the CEO of Google and what is their net worth?"
extracted_entities = extract_entities(query)
print(f"🧠 Extracted Entities: {extracted_entities}")

Step 2: Initial Retrieval and Candidate Document Identification

Once entities are extracted, the system performs an initial retrieval from a vast corpus of documents. This can be done using traditional vector search (e.g., cosine similarity on embeddings) or keyword matching. The goal here is to identify a set of candidate documents that are potentially relevant to the query.

Conceptual Code Example (Python - simplified vector search):

# Step 2: Retrieve candidate documents
corpus = [
    "Sundar Pichai is the CEO of Google.",
    "Google is a multinational technology company.",
    "The net worth of many tech CEOs is in the billions.",
    "Larry Page and Sergey Brin founded Google."
]

vectorizer = TfidfVectorizer()
corpus_embeddings = vectorizer.fit_transform(corpus)

def retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings, top_k=2):
    query_embedding = vectorizer.transform([query])
    similarities = cosine_similarity(query_embedding, corpus_embeddings).flatten()
    top_indices = similarities.argsort()[-top_k:][::-1]
    return [corpus[i] for i in top_indices]

candidate_docs = retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings)
print(f"📄 Candidate Documents: {candidate_docs}")

Step 3: Dynamic Knowledge Graph Construction and Augmentation

This is the core of Graph RAG. The extracted entities from the query and the content of the candidate documents are used to dynamically construct or augment a knowledge graph. This involves identifying new entities and relationships within the text and adding them as nodes and edges to the graph. If a base knowledge graph already exists, this step augments it; otherwise, it builds a new graph from scratch for the current query context. 
Conceptual Code Example (Python - using NetworkX for graph representation):

# Step 3: Build or augment graph
def build_or_augment_graph(graph, entities, documents):
    for entity, entity_type in entities:
        graph.add_node(entity, type=entity_type)
    for doc in documents:
        doc_nlp = nlp(doc)
        person = None
        org = None
        for ent in doc_nlp.ents:
            if ent.label_ == "PERSON":
                person = ent.text.strip().strip(".")
            elif ent.label_ == "ORG":
                org = ent.text.strip().strip(".")
        if person and org and "CEO" in doc:
            graph.add_node(person, type="PERSON")
            graph.add_node(org, type="ORG")
            graph.add_edge(person, org, relation="CEO_of")
    return graph

# Create and populate the graph
knowledge_graph = nx.Graph()
knowledge_graph = build_or_augment_graph(knowledge_graph, extracted_entities, candidate_docs)
print("🧩 Graph Nodes:", knowledge_graph.nodes(data=True))
print("🔗 Graph Edges:", knowledge_graph.edges(data=True))

Step 4: Graph Traversal and Contextual Information Retrieval

With the dynamic knowledge graph in place, the system performs graph traversals starting from the query entities. It explores the relationships (edges) and connected entities (nodes) to retrieve contextually relevant information. This step is where the "graph" in Graph RAG truly shines, allowing for multi-hop reasoning and the discovery of implicit connections.

Conceptual Code Example (Python - graph traversal):

# Step 4: Graph traversal (breadth-first, limited by depth)
def traverse_graph_for_context(graph, start_entity, depth=2):
    contextual_info = set()
    visited = set()
    queue = [(start_entity, 0)]
    while queue:
        current_node, current_depth = queue.pop(0)
        if current_node in visited or current_depth > depth:
            continue
        visited.add(current_node)
        contextual_info.add(current_node)
        for neighbor in graph.neighbors(current_node):
            edge_data = graph.get_edge_data(current_node, neighbor)
            if edge_data:
                relation = edge_data.get("relation", "unknown")
                contextual_info.add(f"{current_node} {relation} {neighbor}")
            queue.append((neighbor, current_depth + 1))
    return list(contextual_info)

context = traverse_graph_for_context(knowledge_graph, "Google")
print(f"🔍 Contextual Information from Graph: {context}")

Step 5: Confidence Score-Guided Expansion (Optional but Recommended)

As mentioned in the features, confidence scores can be used to guide the graph traversal. This ensures that the expansion of retrieved information is controlled and avoids pulling in irrelevant or low-quality data. This can be integrated into Step 4 by assigning scores to edges or nodes and prioritizing high-scoring paths (a minimal sketch of this idea appears at the end of this article).

Step 6: Information Synthesis and LLM Augmentation

The retrieved contextual information from the graph, along with the original query and potentially the initial candidate documents, is then synthesized into a coherent prompt for the LLM. This enriched prompt provides the LLM with a much deeper and more structured understanding of the user's request.

Conceptual Code Example (Python):

def synthesize_prompt(query, contextual_info, candidate_docs):
    return "\n".join([
        f"User Query: {query}",
        "Relevant Context from Knowledge Graph:",
        "\n".join(contextual_info),
        "Additional Information from Documents:",
        "\n".join(candidate_docs)
    ])

final_prompt = synthesize_prompt(query, context, candidate_docs)
print(f"\n📝 Final Prompt for LLM:\n{final_prompt}")

Step 7: LLM Response Generation

Finally, the LLM processes the augmented prompt and generates a response. Because the prompt is rich with contextual and interconnected information, the LLM is better equipped to provide accurate, comprehensive, and coherent answers. 
Conceptual Code Example (Python - using a placeholder LLM call):

# Step 7: Simulated LLM response
def generate_llm_response(prompt):
    if "Sundar" in prompt and "CEO of Google" in prompt:
        return "Sundar Pichai is the CEO of Google. He oversees the company and has a significant net worth."
    return "I need more information to answer that accurately."

llm_response = generate_llm_response(final_prompt)
print(f"\n💬 LLM Response: {llm_response}")

# Optional: visualize the constructed knowledge graph
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 3))
pos = nx.spring_layout(knowledge_graph)
nx.draw(knowledge_graph, pos, with_labels=True, node_color='skyblue',
        node_size=2000, font_size=12, font_weight='bold')
edge_labels = nx.get_edge_attributes(knowledge_graph, 'relation')
nx.draw_networkx_edge_labels(knowledge_graph, pos, edge_labels=edge_labels)
plt.title("Graph RAG: Knowledge Graph")
plt.show()

This step-by-step process, particularly the dynamic graph construction and traversal, allows Graph RAG to move beyond simple keyword or semantic similarity, enabling a more profound understanding of information and leading to superior response generation. The integration of graph structures provides a powerful mechanism for contextualizing information, which is a critical factor in achieving high-quality RAG outputs.

Practical Applications and Use Cases of Graph RAG

Graph RAG is not just a theoretical concept; its ability to understand and leverage relationships within data opens up a myriad of practical applications across various industries. By providing LLMs with a richer, more interconnected context, Graph RAG can significantly enhance performance in scenarios where traditional RAG might fall short. Here are some compelling use cases:

1. Enhanced Enterprise Knowledge Management

Large organizations often struggle with vast, disparate knowledge bases, including internal documents, reports, wikis, and customer support logs. Traditional search and RAG systems can retrieve individual documents, but they often fail to connect related information across different silos. Graph RAG can build a dynamic knowledge graph from these diverse sources, linking employees to projects, projects to documents, documents to concepts, and concepts to external regulations or industry standards. This allows for:

Intelligent Q&A for Employees: Employees can ask complex questions like "What are the compliance requirements for Project X, and which team members are experts in those areas?" Graph RAG can traverse the graph to identify relevant compliance documents, link them to specific regulations, and then find the employees associated with those regulations or Project X.
Automated Report Generation: By understanding the relationships between data points, Graph RAG can gather all necessary information for comprehensive reports, such as project summaries, risk assessments, or market analyses, significantly reducing manual effort.
Onboarding and Training: New hires can quickly get up to speed by querying the knowledge base and receiving contextually rich answers that explain not just what something is, but also how it relates to other internal processes, tools, or teams.

2. Advanced Legal and Regulatory Compliance

The legal and regulatory domains are inherently complex, characterized by vast amounts of interconnected documents, precedents, and regulations. Understanding the relationships between different legal clauses, case laws, and regulatory frameworks is critical. 
Graph RAG can be a game-changer here: Contract Analysis: Lawyers can use Graph RAG to analyze contracts, identify key clauses, obligations, and risks, and link them to relevant legal precedents or regulatory acts. A query like "Show me all clauses in this contract related to data privacy and their implications under GDPR" can be answered comprehensively by traversing the graph of legal concepts. Regulatory Impact Assessment: When new regulations are introduced, Graph RAG can quickly identify all affected internal policies, business processes, and even specific projects, providing a holistic view of the compliance impact. Litigation Support: By mapping relationships between entities in case documents (e.g., parties, dates, events, claims, evidence), Graph RAG can help legal teams quickly identify connections, uncover hidden patterns, and build stronger arguments. 3. Scientific Research and Drug Discovery Scientific literature is growing exponentially, making it challenging for researchers to keep up with new discoveries and their interconnections. Graph RAG can accelerate research by creating dynamic knowledge graphs from scientific papers, patents, and clinical trial data: Hypothesis Generation: Researchers can query the system about potential drug targets, disease pathways, or gene interactions. Graph RAG can connect information about compounds, proteins, diseases, and research findings to suggest novel hypotheses or identify gaps in current knowledge. Literature Review: Instead of sifting through thousands of papers, researchers can ask questions like "What are the known interactions between Protein A and Disease B, and which research groups are actively working on this?" The system can then provide a structured summary of relevant findings and researchers. Clinical Trial Analysis: Graph RAG can link patient data, treatment protocols, and outcomes to identify correlations and insights that might not be apparent through traditional statistical analysis, aiding in drug development and personalized medicine. 4. Intelligent Customer Support and Chatbots While many chatbots exist, their effectiveness is often limited by their inability to handle complex, multi-turn conversations that require deep contextual understanding. Graph RAG can power next-generation customer support systems: Complex Query Resolution: Customers often ask questions that require combining information from multiple sources (e.g., product manuals, FAQs, past support tickets, user forums). A query like "My smart home device isn't connecting to Wi-Fi after the latest firmware update; what are the troubleshooting steps and known compatibility issues with my router model?" can be resolved by a Graph RAG-powered chatbot that understands the relationships between devices, firmware versions, router models, and troubleshooting procedures. Personalized Recommendations: By understanding a customer's past interactions, preferences, and product usage (represented in a graph), the system can provide highly personalized product recommendations or proactive support. Agent Assist: Customer service agents can receive real-time, contextually relevant information and suggestions from a Graph RAG system, significantly improving resolution times and customer satisfaction. These use cases highlight Graph RAG's potential to transform how we interact with information, moving beyond simple retrieval to true contextual understanding and intelligent reasoning. 
By focusing on the relationships within data, Graph RAG unlocks new levels of accuracy, efficiency, and insight in AI-powered applications. Conclusion Graph RAG represents a significant evolution in the field of Retrieval-Augmented Generation, moving beyond the limitations of traditional vector-based retrieval to harness the power of interconnected knowledge. By dynamically constructing and leveraging knowledge graphs, Graph RAG enables Large Language Models to access and synthesize information with unprecedented contextual depth and accuracy. This approach not only enhances the factual grounding of LLM responses but also unlocks the potential for more sophisticated reasoning, multi-hop question answering, and a deeper understanding of complex relationships within data. The practical applications of Graph RAG are vast and transformative, spanning enterprise knowledge management, legal and regulatory compliance, scientific research, and intelligent customer support. In each of these domains, the ability to navigate and understand the intricate web of information through a graph structure leads to more precise, comprehensive, and reliable AI-powered solutions. As data continues to grow in complexity and interconnectedness, Graph RAG offers a robust framework for building intelligent systems that can truly comprehend and utilize the rich tapestry of human knowledge. While the implementation of Graph RAG may involve overcoming challenges related to graph construction, entity extraction, and efficient traversal, the benefits in terms of enhanced LLM performance and the ability to tackle real-world problems with greater efficacy are undeniable. As research and development in this area continue, Graph RAG is poised to become an indispensable component in the architecture of advanced AI systems, paving the way for a future where AI can reason and respond with a level of intelligence that truly mirrors human understanding. Frequently Asked Questions 1. What is the primary advantage of Graph RAG over traditional RAG? The primary advantage of Graph RAG is its ability to understand and leverage the relationships between entities and concepts within a knowledge graph. Unlike traditional RAG, which often relies on semantic similarity in vector space, Graph RAG can perform multi-hop reasoning and retrieve contextually rich information by traversing explicit connections, leading to more accurate and comprehensive responses. 2. How does Graph RAG handle new information or evolving knowledge? Graph RAG employs dynamic knowledge graph construction. This means it can build or augment the knowledge graph in real-time based on the entities identified in the user query and retrieved documents. This on-the-fly capability allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or manual graph updates. 3. Is Graph RAG suitable for all types of data? Graph RAG is particularly effective for data where relationships between entities are crucial for understanding and answering queries. This includes structured, semi-structured, and unstructured text that can be transformed into a graph representation. While it can work with various data types, its benefits are most pronounced in domains rich with interconnected information, such as legal documents, scientific literature, or enterprise knowledge bases. 4. What are the main components required to build a Graph RAG system? Key components typically include: **LLM (Large Language Model): **For generating responses. 
Graph Database (or Graph Representation Library): To store and manage the knowledge graph (e.g., Neo4j, Amazon Neptune, NetworkX). Information Extraction Module: For Named Entity Recognition (NER) and Relation Extraction (RE) to populate the graph. Retrieval Module: To perform initial document retrieval and then graph traversal. Prompt Engineering Module: To synthesize the retrieved graph context into a coherent prompt for the LLM. 5. What are the potential challenges in implementing Graph RAG? Challenges can include: Complexity of Graph Construction: Accurately extracting entities and relations from unstructured text can be challenging. Scalability: Managing and traversing very large knowledge graphs efficiently can be computationally intensive. Data Quality: The quality of the generated graph heavily depends on the quality of the input data and the extraction models. Integration: Seamlessly integrating various components (LLM, graph database, NLP tools) can require significant engineering effort. 6. Can Graph RAG be combined with other RAG techniques? Yes, Graph RAG can be combined with other RAG techniques. For instance, initial retrieval can still leverage vector search to narrow down the relevant document set, and then Graph RAG can be applied to these candidate documents to build a more precise contextual graph. This hybrid approach can offer the best of both worlds: the broad coverage of vector search and the deep contextual understanding of graph-based retrieval. 7. How does confidence scoring work in Graph RAG? Confidence scoring in Graph RAG involves assigning scores to nodes and edges within the dynamically constructed knowledge graph. These scores can reflect the strength of a relationship, the recency of information, or the reliability of its source. The system uses these scores to prioritize paths during graph traversal, ensuring that only the most relevant and high-quality information is retrieved and used to augment the LLM prompt, thereby minimizing irrelevant additions. References Graph RAG: Dynamic Knowledge Graph Construction for Enhanced Retrieval Note: This is a conceptual article based on the principles of Graph RAG. Specific research papers on "Graph RAG" as a unified concept are emerging, but the underlying ideas draw from knowledge graphs, RAG, and dynamic graph construction. Original Jupyter Notebook (for code examples and base content) Retrieval-Augmented Generation (RAG) Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv preprint arXiv:2005.11401. https://arxiv.org/abs/2005.11401 Knowledge Graphs Ehrlinger, L., & Wöß, W. (2016). Knowledge Graphs: An Introduction to Their Creation and Usage. In Semantic Web Challenges (pp. 1-17). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-38930-1_1 Named Entity Recognition (NER) and Relation Extraction (RE) Nadeau, D., & Sekine, S. (2007). A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1), 3-26. https://www.researchgate.net/publication/220050800_A_survey_of_named_entity_recognition_and_classification NetworkX (Python Library for Graph Manipulation) https://networkx.org/ spaCy (Python Library for NLP) https://spacy.io/ scikit-learn (Python Library for Machine Learning) https://scikit-learn.org/
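Step 5 and FAQ 7 above describe confidence-score-guided expansion only in prose. As a complement to the conceptual examples earlier in the article, here is a minimal sketch of that idea built on a NetworkX-style graph; the confidence values, the 0.5 threshold, and the helper name traverse_with_confidence are illustrative assumptions rather than part of any standard API:

import networkx as nx

# A sketch of the confidence-guided expansion described in Step 5.
# Edges carry an optional "confidence" attribute; anything below the
# threshold is pruned instead of being expanded further.
def traverse_with_confidence(graph, start_entity, depth=2, min_confidence=0.5):
    facts = []
    visited = set()
    queue = [(start_entity, 0)]
    while queue:
        node, current_depth = queue.pop(0)
        if node in visited or current_depth > depth:
            continue
        visited.add(node)
        for neighbor in graph.neighbors(node):
            if neighbor in visited:
                continue
            edge = graph.get_edge_data(node, neighbor) or {}
            confidence = edge.get("confidence", 1.0)
            if confidence < min_confidence:
                continue  # skip low-confidence paths entirely
            relation = edge.get("relation", "related_to")
            facts.append((confidence, f"{node} {relation} {neighbor}"))
            queue.append((neighbor, current_depth + 1))
    # Highest-confidence facts first, so the prompt budget goes to the best evidence
    return [fact for _, fact in sorted(facts, reverse=True)]

# Usage with a toy graph and hypothetical scores
g = nx.Graph()
g.add_edge("Sundar Pichai", "Google", relation="CEO_of", confidence=0.9)
g.add_edge("Google", "Alphabet", relation="subsidiary_of", confidence=0.8)
g.add_edge("Google", "Android Inc.", relation="acquired", confidence=0.4)
print(traverse_with_confidence(g, "Google"))

With the threshold at 0.5, the low-confidence "acquired" edge is pruned, and the remaining facts come back in descending order of confidence, ready to be synthesized into the LLM prompt as in Step 6.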
  15. by: Abhishek Prakash Thu, 13 Nov 2025 04:29:03 GMT Here is the news. It's FOSS News (news.itsfoss.com) doesn't exist anymore, at least not as a separate entity. All news articles are now located under the main website: https://itsfoss.com/news/ I merged the two portals into one. Now, you just have to log into one portal to enjoy your membership benefits. I hope it simplifies things for you, specially if you are a Plus member. Let's see what else you get in this edition of FOSS Weekly: A new ODF document standard release.Open source alternative to Help Scout.YouTube clamping down on tech YouTubers.Fixing thumbnail issues in Fedora 43Ubuntu's Rust transition hitting yet another hurdle.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by Internxt. SPONSORED You cannot ignore the importance of cloud storage these days, especially when it is encrypted. Internxt is offering 1 TB of lifetime, encrypted cloud storage for a single payment. Make it part of your 3-2-1 backup strategy and use it for dumping data. At least, that's what I use it for. Get Internxt Lifetime Cloud Storage 📰 Linux and Open Source NewsA new Rust-related problem has cropped up in the land of Ubuntu.ODF 1.4 is here as the next evolution for the open document standard.You can now play classic D3D7 games on Linux with this new project.YouTube recently deleted some Windows 11 bypass tutorials with some absurd claims.Kaspersky antivirus software is now available for Linux users. Personally, I don't use any such software on Linux.Big Tech being Big Tech. A creator claimed that his videos about bypassing Windows 11's mandatory online account were removed by YouTube. YouTube Goes Bonkers, Removes Windows 11 Bypass Tutorials, Claims ‘Risk of Physical Harm’When will these Big Tech platforms learn?It's FOSSSourav Rudra🧠 What We’re Thinking AboutCould GNOME Office be a thing? Roland has some convincing points: It’s Time to Bring Back GNOME Office (Hope You Remember It)Those who used GNOME 2 in the 2000’s would remember the now forgotten GNOME Office. I think it’s time to revive that project.It's FOSSRoland TaylorOn a side note, I found out that Flathub is ranking on Google for NSFW keywords. What a Shame! FlatHub is Ranking on Google for Po*nHub DownloadsAnd it’s not Google’s fault this time.It's FOSSAbhishek Prakash🧮 Linux Tips, Tutorials, and LearningsYou can fix that annoying issue of GNOME Files not showing image thumbnails on Fedora, btw. Fixing Image Thumbnails Not Showing Up in GNOME Files on Fedora LinuxTiny problem but not good for the image of Fedora Linux, pun intended.It's FOSSAbhishek PrakashTheena suggests some ways to reclaim your data privacy. Switching to a private email service like Proton is one of the recommendations. If you are absolutely new to the Linux commands, we have a hands-on series to help you out. Linux Command Tutorials for Absolute BeginnersNever used Linux commands before? No worries. This tutorial series is for absolute beginners to the Linux terminal.It's FOSS👷 AI, Homelab and Hardware CornerOwnership of digital content is an illusion, until you take matters into your own hands. Our self-hosting starter pack should be a good starting point. 
The Self-Hosting Starter Pack: 5 Simple Tools I Recommend To Get Started With Your HomelabSelf-hosting isn’t rocket science—if I can do it, so can you!It's FOSSTheena Kumaragurunathan🛍️ Linux eBook bundleThis curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative! Explore the Humble offer here✨ Project HighlightsDon't let its name fool you. Calcurse is a powerhouse of a tool that can be your go-to for any calendar management needs (like a boon, almost). Command Your Calendar: Inside the Minimalist Linux Productivity Tool CalcurseA classic way to stay organized in the Linux terminal with a classic CLI tool.It's FOSSRoland TaylorHelp Scout is known for abrupt pricing changes; why not switch to a platform that actually cares? Tired of Help Scout Pulling the Rug from Under You? Try This Free, Open Source AlternativeDiscover how FreeScout lets you run your own help desk without vendor lock-in or surprise price hikes.It's FOSSSourav Rudra📽️ Videos I Am Creating for YouThe latest video shows my recommendations for Kitty terminal configuration changes. Subscribe to It's FOSS YouTube Channel Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides to help people use Linux on their personal computers. We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader-supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials. If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription. Join It's FOSS Plus 💡 Quick Handy TipIn the Konsole terminal emulator, you can use the right-click context menu to open any folder with a specific tool. For example, if you are inside a directory, right-click and select the "Open Folder With" option. From the list, select an application. So, for instance, if Dolphin is selected, the location will be opened in the file manager. If Kate is selected, that location is opened in the editor. Other than that, if you enable the "Underline Files" option in Configure Konsole → Profiles → Edit Profile → Mouse → Miscellaneous, you can even right-click and open files in GUI tools right from the terminal. 🎋 Fun in the FOSSverseCan you get all the answers to this Linux distro logo quiz? Guess the Distro from its LogoThere is a logo and four distro names. Guess which one it belongs to. It’s that simple.It's FOSSAbhishek Prakash🤣 Meme of the Week: Such words can hurt the soul, you know. 😵 🗓️ Tech Trivia: On November 9, 2004, Mozilla Firefox 1.0 was released, introducing a faster, safer web-browsing experience with features like tabbed browsing and popup blocking, marking a major challenge to Microsoft’s Internet Explorer dominance. 🧑‍🤝‍🧑 From the Community: One of the developers of antiX Linux has announced that the first beta release of antiX 25 is now live! antiX 25 Beta 1 Available for Public TestingantiX-25-full-beta1 available for public testing November 5, 2025 by anticapitalista Here is the first beta iso of antiX-25 (64bit). Bullet point notes for now. 
based on Debian 13 ‘trixie’
4 modern systemd-free init systems – runit (default), s6-rc, s6-66 and dinit
new default look
usual ‘antiX magic’
you should be able to boot live in the non-default init and it should then become the default after install.
Please note that user intervention will be required more than previous versions o…It's FOSS CommunityProwlerGr
❤️ With love
Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  16. by: Sourav Rudra Wed, 12 Nov 2025 17:12:36 GMT The OpenSearch Software Foundation is a vendor-neutral organization under the Linux Foundation that hosts the OpenSearch Project. It recently appointed a new Executive Director, and the project itself has already seen over 1 billion software downloads since launch. If you didn't know, OpenSearch focuses on search, analytics, observability, and vector database capabilities. What's Happening: During this year's KubeCon + CloudNativeCon North America conference, the foundation announced that IBM has joined as a Premier Member. This move comes at a time when enterprises are increasingly adopting retrieval-augmented generation (RAG) for AI applications. The membership costs $150,000 annually, where IBM joins existing Premier Members, including AWS, SAP, and Uber. IBM currently uses OpenSearch in production through DataStax, its subsidiary. The company integrated JVector with OpenSearch for high-performance vector search at billion-vector scale. During the announcement, Ed Anuff, the VP of Data and AI Platforms Strategy at IBM, added that: As part of IBM’s work in the evolution of AI, we’re thrilled to contribute to the development of OpenSearch. By joining the Foundation, we are helping ensure that production generative Al can be built on a robust open source foundation.What to Expect: IBM will contribute enterprise-grade enhancements to OpenSearch's security and observability features. The company plans to share high-availability patterns tested through IBM Cloud deployments. The focus areas include vector search performance improvements and multimodal document ingestion. IBM also aims to advance the developer experience for building AI agents. Plus, the company is on track to announce a new open source project featuring OpenSearch at the OpenRAG Summit on November 13. Reflecting on the partnership's significance, Bianca Lewis remarked: IBM’s commitment to the OpenSearch Software Foundation is a testament to the role open source search and analytics play in AI-enabled enterprises of the future. Our member organizations help shape and develop the tools and technology that make intelligent operations a reality, and we are thrilled that IBM has joined the Foundation, strengthening our community and mission.Suggested Read 📖 OpenSearch Foundation Strengthens Leadership with New Executive DirectorBianca Lewis becomes executive director of the OpenSearch Foundation.It's FOSSSourav Rudra
  17. by: Sourav Rudra Wed, 12 Nov 2025 15:11:51 GMT The Linux ecosystem is facing increasing pressure from threat actors, who are getting more clever day-by-day, threatening critical infrastructure worldwide. Servers powering essential services, industrial control systems, and enterprise networks all rely on Linux, and these attackers know it. What was once considered a relatively safe ecosystem is now a lucrative target. 🥲 This brings us to Kaspersky, the Russian cybersecurity firm with a reputation. The company was banned from selling its antivirus software and cybersecurity products in the U.S. back in July 2024. But for users outside the U.S., Kaspersky just announced something interesting. They are bringing antivirus protection to home Linux users. Though, it remains to be seen, whether this addresses genuine security needs or if it's just security theater for worried penguins. 🚧This piece of software is not FOSS. We covered it because it is available for Linux!Kaspersky for Linux: What Does it Offer?Kaspersky has expanded its consumer security lineup to include Linux. This marks the first time their home user products officially support the platform. The company adapted their existing business security solution for home users. Support covers major 64-bit distributions, including Debian, Ubuntu, Fedora, and RED OS. Depending on the plan you opt for, the feature set includes real-time monitoring of files, folders, and applications to detect and eliminate malware. Behavioral analysis detects malware on the device for proactive defense. Removable media like USB drives and external hard drives get scanned automatically upon connection. This prevents the spread of viruses across devices and networks. Anti-phishing alerts users when attempting to follow phishing links in emails and on websites. Online payment protection verifies the security of bank websites and online stores before financial transactions. Anti-cryptojacking prevents unauthorized crypto mining on devices to protect system performance, and AI-powered scanning blocks infected files, folders, and applications upon detecting viruses, ransomware trojans, password stealers, and other malware. Though, there is one important thing to consider: Kaspersky for Linux isn't GDPR-ready, so keep this in mind if you are an EU-based user concerned about data protection compliance. Get Kaspersky for LinuxAn active paid subscription is required to download and use Kaspersky for Linux. A 30-day free trial is available for users who want to test before committing to a paid plan. Both DEB and RPM packages are provided for easy installation. The official installation guide contains detailed setup instructions. Kaspersky for LinuxVia: Phoronix
  18. by: Sourav Rudra Wed, 12 Nov 2025 13:29:24 GMT Ubuntu's move to Rust-based system utilities has hit some bumps. Earlier, a bug in the Rust-based date command broke automatic updates. The command returned the current time instead of file modification timestamps, causing Ubuntu 25.10 systems to stop automatically checking for software updates. That issue was quickly fixed, but now, two security vulnerabilities have been found in sudo-rs.

Better Now than Later

The first vulnerability involves password exposure during timeouts. When users type a password but don't press enter, the timeout causes those keystrokes to replay onto the console. This could reveal partial passwords in shell history or on screen. The second issue affects timestamp authentication. When the Defaults targetpw or Defaults rootpw options are enabled, sudo-rs recorded the wrong user ID in its authentication timestamps. This allowed bypassing authentication by reusing cached credentials even when policy required a different password. Patches for both issues have been released in sudo-rs 0.2.10. Ubuntu is set to push the fixes through a Stable Release Update (SRU). These bugs being caught in Ubuntu 25.10 is actually a good sign. The interim release serves as a testing ground before Ubuntu 26.04 LTS arrives in April 2026. Finding critical security flaws now allows developers ample time to address them.

Here's the Fix!

At the time of writing, the updated sudo-rs package had not yet arrived in the Ubuntu 25.10 repositories. But it should be available soon. Once the update is live, you can get the fix using the graphical Software Updater tool by launching it from your application menu and installing any available security updates.
sudo-rs' upgrade process on Ubuntu 25.10.
Alternatively, you can use the terminal. Run these commands one after the other to get the patch:

sudo-rs apt update
sudo-rs apt upgrade

PS: Using sudo instead of sudo-rs also works the same. Via: Phoronix Suggested Read 📖 sudo vs sudo-rs: What You Need to Knowsudo-rs is poised to take over. Here’s what you should know about sudo-rs as a sudo user.It's FOSSAbhishek Prakash
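If you want to confirm that the patched package has actually landed on your system, checking the installed and candidate versions is enough. apt policy is standard on Ubuntu; the version flag below assumes sudo-rs mirrors sudo's behavior:

apt policy sudo-rs
# Look for 0.2.10 or later under "Installed" / "Candidate"
# Assumption: sudo-rs accepts the same --version flag as sudo
sudo-rs --version

If the installed version is still older, re-run the update and upgrade commands above once the SRU reaches the 25.10 repositories.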
  19. by: Ani Wed, 12 Nov 2025 11:14:36 +0000 The only constant in life is change. About meI’m Chandni, currently working as a UX Designer at Neomore, where I create user experiences for SAP applications. My work begins with understanding workflows through user research and interviews. From there, I create wireframes, prototypes, and user flows, collaborating closely with consultants, developers, and stakeholders to ensure that every design is both technically feasible and genuinely user-friendly. Starting my career in the IT field My career began as a software developer, which gave me a strong foundation in how digital products are built. Over time, I realized that what truly fascinated me was the human side of technology, the way people interact with systems, and how design can make that interaction possible. Being naturally people-oriented, I transitioned into UX design to focus on understanding users’ needs and creating experiences that make complex systems user-friendly and enjoyable. Chandni Sharma, Consultant, UX & Application Innovation, Neomore My Background I earned my Bachelor of Technology degree in India. Later, I moved to Finland when my husband was relocated here, and I quickly fell in love with the country. What started as a visit soon became a long-term decision to build both my life and career here. When I first arrived, it was a challenging time to find a job. It was right after Nokia’s major economic decline, and many experienced professionals had entered the job market. Each time I applied, I often got a reply that the position had been filled by someone who had just left Nokia. Being young and less experienced, it was difficult to compete. Instead of giving up, I decided to focus on learning. I pursued a bachelor’s degree in user experience at Haaga-Helia and later completed my master’s in computer science at Aalto University, specializing in Service Design and UX. These studies not only deepened my technical knowledge but also boosted my confidence. They helped me successfully transition from software development to becoming a UX designer and researcher, an expert in creating meaningful user experiences. My path to Neomore When I was studying, I had heard of SAP but didn’t really understand what the field was about. Later, while searching for jobs, I came across openings for UX roles in SAP and became curious to know how user experience fits into SAP. I decided to apply, and during the interviews, the team explained the work in detail. I realized it would be both a challenging and rewarding learning journey. That’s how I joined Neomore. I had some prior experience from another company, and now, this December, I’ll be completing three years at Neomore. Working at NeomoreWhat truly motivates me to go to the office every day is the people and the culture at Neomore. The supportive and inspiring environment they’ve built makes a huge difference, which keeps me motivated no matter how challenging a project or task might be. I have to admit, it’s the people and the culture that make Neomore such a great place to work. Enjoying my work The best part of my job is the constant learning that comes with working in the SAP industry. Every day, I gain new insights into different processes and how our clients operate, areas I never imagined I’d explore. For example, I’ve learned how manufacturing works, what kinds of machines are used in woodworking, and the challenges people face in their daily routines. 
Understanding these real-world contexts and discovering how technology can help them is both motivating and exciting. This continuous learning keeps me energized, no matter how challenging the work gets. The necessary skills in the IT fieldProblem-solving and curiosity are some of the most important skills to have. In my work, curiosity drives me to ask questions and explore different perspectives. When I listen to people, my curious nature helps me go deeper into their stories and uncover hidden insights. I’m not afraid to ask questions, though I’m always mindful of how and when to ask them. This openness allows me to gather valuable answers, identify the real problems, and map out effective solutions. Ultimately, curiosity and the courage to ask are key to meaningful problem-solving in any field. Other important skills to have are good communication, empathy towards users, and knowledge of software development. Being skilled in technology, I can bridge technology and human needs in a way that really makes a difference. Overcoming challengesTo overcome challenges, I’ve learned that having patience with oneself is essential. You need to give yourself time to stay updated on technologies so that you can keep pace with developers, stakeholders, and users. Staying up-to-date means embracing continuous learning to bridge those gaps, and I’m grateful that Neomore strongly supports professional growth. They regularly encourage us to learn and discuss ways to develop further. For me, patience and lifelong learning are the keys to overcoming challenges in this field. The key to solving problemsWhenever I get stuck on a problem and cannot find a solution, I pause for a moment instead of forcing myself to continue. At the office, we have a pool table, so I often take a short break to play a quick game, sometimes alone, sometimes with a colleague. That brief change of focus helps clear my mind. It is a simple routine, but it really helps me get back into the right state of mind to solve problems effectively. Sources of energyI get a lot of energy from being surrounded by people, whether friends or family. This has been a good source of energy for me. After becoming a mother, my children became my greatest source of energy. Playing with my kids, listening to their stories, and doing whatever they want to do brings my mind to balance and gives me a lot of energy to continue and thrive in any situation I am in. Always Start with WhyStart with Why, written by Simon Sinek, has had a profound impact on my work and, more broadly, on my life. For example, when I talk to a client, I listen to their needs and always ask why they want what they want. This allows me to go deeper into their needs and get a clearer idea of what they are requesting, which helps me guide them toward the right solution. About the impact of AII strongly believe that AI is a powerful tool designed to help us. There’s a saying that the only constant in life is change, and AI is a part of that change. Instead of fearing it, we should learn, understand, and use it to our advantage. Every technology has both positive and negative sides, and AI is no different. The key is to understand both aspects and use them responsibly. Personally, I actively explore and learn from different AI tools, finding ways they can support my work and growth. While some worry that AI might take jobs or have negative effects, I see it as an opportunity to evolve and work smarter. 
Note: Between the time of writing this blog post and publishing it, Chandni Sharma’s employment at Neomore ended. Neomore, Chandni and Women in Tech decided to publish this blog post regardless since Chandni will always be a role model to our communities. The post Role Model Blog: Chandni Sharma, Neomore first appeared on Women in Tech Finland.
  20. by: Theena Kumaragurunathan Wed, 12 Nov 2025 07:21:41 GMT In my last column, Ownership is an illusion, unless you self-host, I encouraged readers to go down the self-hosting path. My thesis was simple: ownership of digital assets (movies, music, games, books, software) is an illusion, and the only way to move away from this make-believe was to embrace self-hosting. For people like me, non-programmer types, this is easier said than done: Free and Open Source Software (FOSS) can seem intimidating because often (not always) FOSS asks you to embrace granular control over convenience and ease-of-use. The author's server, a repurposed 14-year-old ThinkPad, ©Theena Kumaragurunathan, 2025When non-tech people see my server (an old ThinkPad T420) nestled in my book-shelf, running ‘bpytop’ of all things, they assume that I am engaged in some hackery: ‘What is this Matrix shit?’, a friend once wondered. When I told him that it was nothing more than a file-server for my media (movies, music and books), and then showed him my Jellyfin instance running inside my browser, I could see he was having a lightbulb moment: ‘Can you do this for me?’ Sure, I told him, but I offered him a better choice: ‘I’ll show you what you need to know in order to do this yourself, and then we will create a media server for you together.’ Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime, right? That journey with my friend took us from basic Linux commands to installing Plex/Jellyfin, which is well beyond the scope of this article (let us know if that is something you are interested in, non-techie/programmer readers). Instead, in this column, I will offer an abridged version. Ask yourself why you need thisI had a clear motivation to go down this path during the pandemic: I wanted to back up my music collection and run it off a Plex server. My friend wanted to self-host his movies so that he didn’t have to wade through his hard-disks when selecting a movie to watch. What is your motivation? What you need to get started on the homelab bandwagonAn old computer or laptop. This needs to be in working order. Mine is an old ThinkPad T420 (which is 14 years old, and I am its 3rd owner). Anything from the last decade and a half ought to do. You can also get a Raspberry Pi. I would also prefer an older machine with an ethernet port; in my experience, connection stability is better when your server has a wired connection to your network. Pick an operating system: I chose Debian server. You can host many of the applications listed below on a Windows install too, but your mileage may vary. If you want an even easier way, try the YunoHost Linux distro. The author's self-hosting stack, ©Theena Kumaragurunathan, 2025Start your homelab by self-hosting these applicationsYou don't have to deploy all the recommendations. Think about which one would fit your requirements the most. Select it and then deploy it. Once that is successful, try the next one. One project at a time. 📋I am not going to include installation and set-up instructions. Those may differ based on your choice of operating system as well as hardware. These are just recommendations to put you on the right track.Jellyfin: Your own NetflixEnjoying local movies on TV with JellyfinJellyfin is your home theater. It organizes movies and shows, fetches artwork, and streams to TV, browser, and phone. I chose the Jellyfin media server because setup is simple. 
On Debian or Ubuntu, you can use the official guide, or run it with Docker and point it at your media folders. It has no subscriptions and no tracking. 💡Keep your server on wired ethernet for stable playback, and enable hardware transcoding only if your CPU or GPU supports it.Kavita: Your own Kindle libraryThe author's instance of Kavita running on his local server, ©Theena Kumaragurunathan, 2025 Kavita is a self-hosted library for books, PDFs, comics, and manga. It has a fast reader, rich metadata, OPDS, and good user management. I use it to keep my EPUBs and essays in one place with clean reading progress across devices. 💡Sort files into clear folders, let Kavita watch those folders, and enable OPDS if you read on third-party apps.Nextcloud: Your own Google driveNextcloud is your personal file cloud. Sync your files, share links, and extend it with Notes, Calendar, and Contacts. It feels like a private Dropbox that runs on your hardware. The server has regular releases and clear upgrade docs. If you are new, use the web installer or Docker and start with Files before adding apps. 💡Keep it simple. Install Files first, set up the desktop client, and only add one or two apps after you are comfortable.Immich: Your own Google PhotosImmich is a private photo and video backup service with mobile apps on Android and iOS. It does face recognition, search, albums, and multi-user support. It is fast and designed for large libraries. Installation is straightforward with Docker Compose. Begin with the official site, then the server and apps. 💡Turn on automatic mobile backup, keep originals on the server, and use albums for curation.Navidrome: Your own SpotifyNavidrome turns your music collection into a streaming service. It indexes quickly, supports Subsonic clients, and runs well on modest hardware. You can use a single binary or Docker and attach your music folder. 💡Install ffmpeg for transcoding, clean your tags for better library browsing, and test a few clients until one fits your flow.Putting It TogetherA practical starter map looks like this. Jellyfin for movies and shows. Kavita for books and PDFs. Nextcloud for files and sharing. Immich for your photos. Navidrome for music. Run all five on a Debian server, on YunoHost, or in Docker if you prefer containers (a minimal Docker sketch for one of them follows below). Keep your server on wired Ethernet. Back up the data folders in your home network. Start with one service, get comfortable, then add another. The point is not perfection. It is owning your library and making it available to the people you care about, without asking permission from a platform that can lock you out at a whim. Enjoy your home lab 🏠🥼
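As a rough illustration of how small that Docker footprint can be, here is a minimal sketch for Jellyfin; the host paths are assumptions, so point them at your own config, cache, and media folders:
# minimal Jellyfin container (adjust the host paths to your own setup)
docker run -d --name jellyfin \
  --network host --restart unless-stopped \
  -v /srv/jellyfin/config:/config \
  -v /srv/jellyfin/cache:/cache \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
The same pattern, one container plus a couple of volumes, carries over to Kavita and Navidrome; Immich officially ships a multi-container Docker Compose setup, so follow its guide for that one.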
  21. by: Sourav Rudra Tue, 11 Nov 2025 17:23:36 GMT D7VK is a new Vulkan-based translation layer for Direct3D 7. It relies on DXVK’s Direct3D 9 backend and works with Wine on Linux. The project is open source and actively maintained. The developer behind it is WinterSnowfall, who also worked on D8VK between 2023 and 2024. That project has since been merged into the larger DXVK project that's extensively used by Linux users. You have to understand that D7VK is not meant to run every Direct3D 7 game. Titles that mix D3D7 with older DirectDraw or GDI calls may fail to launch or show graphical glitches. So, compatibility is experimental and limited. It works by translating Direct3D 7 calls to Direct3D 9 through DXVK, allowing Vulkan-based 3D application rendering on Linux. Sadly, there is no official list of supported games yet. Some games work well; others have issues. Missing textures, crashes, and black screens are common. The issues page on the project's GitHub repo shows which games are behaving poorly. It is a good way to see what currently works and what doesn't. 📋PCGamingWiki's list of Direct3D 2-7 games is also a handy resource to have if you want to test a specific Direct3D 7 game.What’s nice is how the developer sets expectations right from the start. They are upfront about the experimental nature of the project. This clarity makes it easier to test games without getting disappointed. For fans of late 90s and early 2000s games, D7VK could be handy. It won’t fix everything, but it opens the door to running older Direct3D 7 games on Linux. Want to Check it Out?The D7VK GitHub repository has the source code. You can manually compile it and place it in your Wine prefix directory to try it out. D7VK supports a HUD overlay and frame rate limiting through DXVK. These features will help you track performance and debug graphical issues. D7VK (GitHub)Suggested Read 📖 Is Linux Ready For Mainstream Gaming In 2025?Linux is quietly gaining ground on Windows in the gaming space. But how well does it actually perform? Here’s what I experienced.It's FOSSSourav Rudra
  22. by: Umair Khurshid Tue, 11 Nov 2025 20:03:47 +0530 Networking problems rarely announce themselves clearly. A deployment fails, a pod cannot reach its database, or a service responds intermittently. The logs look clean, yet something feels wrong. Most engineers eventually learn one painful truth: when everything else seems fine, it is usually the network. From misrouted traffic to invisible firewalls, let me walk you through the most frequent networking issues that DevOps engineers encounter in Linux environments. I also explain how to investigate, diagnose, and fix each class of problem using real commands and reasoning. All this comes from the experience I have gained over the years. The same experience also yielded this Linux networking microcourse that you should definitely check out. Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair KhurshidIt’s Almost Always the NetworkWhen an application behaves unpredictably, the first instinct is to look at the code. Developers dig through logs, restart containers, or roll back deployments. In many cases, the application is not the culprit; it's the network. Early in my career, I used to dread these moments, as application logs would show nothing but retries and timeouts. The developers would swear nothing changed and the operations team would swear they touched nothing. Yet packets were vanishing into the void, and that is how I began to take networking seriously: not because I wanted to, but because I had to. A good troubleshooting approach begins by proving that connectivity works at every layer. Start simple: ping -c 4 8.8.8.8 ping -c 4 example.com If the first command succeeds but the second fails, DNS is the culprit. If both fail, it is a routing or firewall issue. This baseline test should always come before looking into application-level logs. Then, verify whether the local host can reach its gateway and whether packets are returning: ip route show traceroute 8.8.8.8The Routing Rabbit HoleRouting problems are deceptively subtle: traffic flows one way but not the other, or only some destinations are reachable. The root cause often hides in Linux’s routing tables or in policies added by container frameworks. Start by displaying the active routes: ip routeThis shows the kernel’s routing decisions. For more detailed analysis, especially in multi-interface or container setups, check which route a particular destination would take: ip route get 1.1.1.1If a host has multiple network interfaces or is part of a VPN or overlay, verify that the correct table is being used. Linux supports multiple routing tables, and policy routing determines which one applies. Check the rules: ip rule showMisconfigured rules can cause asymmetric routing, where packets leave through one interface but return on another. Firewalls often drop these replies because they appear invalid. One reliable fix is to assign separate routing tables for each interface and use ip rule add with from or fwmark selectors to control the path. For example, to route traffic from 192.168.10.0/24 through a specific gateway: ip route add default via 192.168.10.1 dev eth1 table 10 ip rule add from 192.168.10.0/24 table 10Always check the reverse path filtering setting:
sysctl net.ipv4.conf.all.rp_filter
Set it to 2 (loose mode) on multi-homed hosts to prevent dropped packets due to asymmetric routes:
sysctl -w net.ipv4.conf.all.rp_filter=2
Routing issues rarely announce themselves clearly. 
The key is to map how packets should travel, then prove it with ip route get, traceroute, and tcpdump. DNS: The Eternal SuspectNo other component gets blamed as frequently, or as incorrectly, as DNS. Even the recent AWS outage that took down half of the internet was reportedly caused by DNS. When an application cannot reach its dependency, the first guess is always “maybe DNS is broken.” Sometimes it is, but often the problem is caching, misconfiguration, or unexpected resolution order. Start by checking the configured resolvers: cat /etc/resolv.conf Most distros these days use systemd-resolved, so the file may point to a stub resolver at 127.0.0.53. To see the active DNS servers: resolvectl statusIf resolution is inconsistent between services, the problem may be namespace isolation. Containers often have their own /etc/resolv.conf, copied at startup. If the host’s DNS changes later, containers keep using outdated resolvers. Test resolution directly: dig example.com dig @8.8.8.8 example.comCompare responses from the default resolver and a public one. If only the latter works, the issue lies in internal DNS or local caching. A subtle but common failure arises from nsswitch.conf. The order of resolution methods (files dns myhostname) determines whether /etc/hosts entries or mDNS override DNS queries. In container-heavy environments, this can lead to confusing name collisions. 💡DNS problems are not always network failures, but they produce identical symptoms. That is why verifying DNS resolution early saves hours of debugging.Even when DNS works, it can still mislead you. I remember spending an hour debugging a connection issue that turned out to be caused by an unexpected IPv6 AAAA record. The application preferred IPv6 but the route to that subnet was broken. The fix was as simple as setting precedence ::ffff:0:0/96 100 in /etc/gai.conf. MTU and Fragmentation HeadachesThe Maximum Transmission Unit, or MTU, defines how large a packet can be before it needs fragmentation. When this number mismatches between interfaces, tunnels, or virtual networks, packets vanish without a trace. You get intermittent timeouts, partial uploads, and mysterious hangs in SSH sessions. To check the MTU on an interface: ip link show eth0To test path MTU discovery, use ping with increasing packet sizes: ping -s 1472 8.8.8.8Regular ICMP echoes may succeed even when TCP traffic fails. To detect MTU issues, you need to force the “do not fragment” flag: ping -M do -s 1472 8.8.8.8If it fails, lower the size until it succeeds. The path MTU equals the payload plus 28 bytes (ICMP and IP headers). In virtualized or overlay environments (VXLAN, WireGuard, GRE, eBPF), encapsulation overhead reduces the effective MTU. For example, VXLAN adds 50 bytes. Setting the MTU to 1450 instead of 1500 avoids fragmentation. Adjust the interface MTU safely: ip link set dev eth0 mtu 1450Applications sensitive to latency often experience erratic behavior because of hidden fragmentation. Once MTU mismatches are corrected, those mysterious slowdowns vanish. In container environments, MTU mismatches become especially painful. Overlay networks such as Flannel or Calico encapsulate packets inside UDP tunnels, reducing available space. If the MTU is not adjusted inside the container, performance plummets. A single missing ip link set dev eth0 mtu 1450 can make a cluster look broken. Overlay Networks and Ghost PacketsModern clusters rely on overlays to connect containers across hosts. 
VXLAN, WireGuard, and similar technologies encapsulate traffic into tunnels, creating virtual networks. They are convenient but introduce new failure modes that look invisible to traditional tools. A common symptom is “ghost packets”: traffic that appears to leave one node but never arrives at another. The tunnel endpoint logs nothing, yet connectivity fails. The first step is to confirm that the tunnel interfaces exist and are up: ip link show type vxlanCheck if the remote endpoint is reachable outside the tunnel: ping <remote_host_ip>If that fails, the problem is not the overlay but the underlay, the physical or cloud network below it. Next, verify that encapsulated traffic is not filtered. VXLAN uses UDP port 4789 by default, and WireGuard uses 51820. Ensure that firewalls on both ends allow those ports. To inspect whether encapsulation is functioning: tcpdump -i eth0 udp port 4789If packets appear here but not on the remote host, NAT or routing between the nodes is rewriting source addresses in a way that breaks return traffic. WireGuard adds its own layer of complexity. Its peers are identified by public keys, not IP addresses, so if the endpoint’s IP changes (for example, in cloud autoscaling), you must update its Endpoint in the configuration: wg set wg0 peer <public-key> endpoint <new-ip>:51820 Overlay debugging requires seeing both worlds at once: the logical (tunnel) and physical (underlay) networks. Always verify that encapsulated packets can travel freely and that the MTU accommodates the overhead. Most ghost packets die because of either firewall drops or fragmentation within the tunnel. When Firewalls and Conntrack Betray YouFirewalls are supposed to protect systems, but when they fail silently, they create some of the hardest problems to diagnose. Linux’s connection tracking layer (conntrack) manages the state of every connection for NAT and stateful inspection. When its table fills or rules conflict, packets disappear with no visible error. Start by checking the current number of tracked connections: cat /proc/sys/net/netfilter/nf_conntrack_count cat /proc/sys/net/netfilter/nf_conntrack_maxI have debugged a number of microservice clusters where outbound connections failed intermittently, and the culprit was an overloaded conntrack table. Each NAT-ed connection consumes an entry, and the table silently drops new connections once full. The solution to this issue is simply increasing the limit: sysctl -w net.netfilter.nf_conntrack_max=262144For persistent tuning, add it to /etc/sysctl.conf. State timeouts can also cause intermittent loss: long-lived connections often expire in conntrack while still active on the application side. Adjust the TCP established timeout: sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600Firewalls configured with nftables or iptables can complicate debugging when NAT or DNAT rules are applied incorrectly. Always inspect the active NAT table: nft list table natMake sure destination NAT and source NAT are paired correctly, because asymmetric NAT produces connection resets or silence. In high-throughput environments, offloading some rules to nftables sets with maps improves performance and reduces conntrack pressure. This is one of the areas where modern Linux firewalls significantly outperform legacy setups. Conntrack issues are often invisible until you look directly into the state tables. Once you learn to monitor them, many “random” connectivity problems turn out to be predictable and fixable. 
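To make that monitoring concrete, here is a minimal sketch, built on the same /proc paths shown above, that reports conntrack table usage as a percentage; feed the number into whatever monitoring agent you already run:
# conntrack usage check: compare tracked connections against the table limit
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack: $count of $max entries in use ($(( 100 * count / max ))%)"
Run it under watch -n 5 while reproducing an issue, or wire it into a cron job that alerts once usage crosses, say, 80 percent.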
Lessons I Wish I Learned EarlierEvery engineer eventually learns that networking failures tend to follow recognizable patterns, and identifying those patterns early can save hours of unnecessary panic. 1. Always check the local host first. Half of network incidents begin with something as simple as a down interface, a missing route, or an outdated /etc/resolv.conf. 2. Validate one layer at a time. Use ping for reachability, dig for DNS, traceroute for routing, tcpdump for packet visibility, and nft list ruleset for firewalls, and never skip steps (a minimal sketch of such a layered check follows at the end of this article). 3. Document assumptions. When debugging, write down what you believe should happen before testing. Networking surprises often come from assumptions no one verified. 4. Monitor the invisible. Connection tracking, queue lengths, and interface drops are invisible in standard metrics. Expose them to your monitoring system to catch silent failures early. 5. Learn how Linux really routes. Most complex issues trace back to misunderstood routing tables, policy rules, or namespaces. Understanding these mechanisms transforms troubleshooting from guessing to knowing. Wrapping UpThe more you troubleshoot Linux networking, the more you realize it is not about memorizing commands. It is about building mental models of how packets move, how policies influence paths, and how the kernel’s view of the network differs from yours. For DevOps engineers managing modern infrastructure, from bare metal to Kubernetes, that understanding becomes essential. Once you have fixed enough DNS loops, routing asymmetries, and conntrack overflows, the next logical step is to study how Linux handles these problems at scale: multiple routing tables, virtual routing instances, nftables performance tuning, encrypted overlays, and traffic shaping. The Linux Networking at Scale course builds directly on these foundations. It goes deeper into policy routing, nftables, overlays, and QoS, the exact skills that turn network troubleshooting into design. I highly recommend checking it out. Linux Networking at ScaleMaster advanced networking on Linux — from policy routing to encrypted overlays.Linux HandbookUmair Khurshid
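For reference, the layered check from point 2 can be as simple as the sketch below; the target hostname is a placeholder, so substitute your own service and run the firewall step as root:
#!/bin/bash
# Layer-by-layer connectivity check (illustrative; adjust the target to your own service)
target="example.com"
echo "1. Raw IP reachability:";    ping -c 2 8.8.8.8
echo "2. DNS resolution:";         dig +short "$target"
echo "3. Route the kernel picks:"; ip route get 8.8.8.8
echo "4. Path to the target:";     traceroute -n "$target"
echo "5. Firewall ruleset:";       nft list ruleset | head -n 40
Working top to bottom like this tells you which layer to blame before you ever open an application log.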
  23. by: Sourav Rudra Tue, 11 Nov 2025 14:22:16 GMT Having a reliable help desk solution is a must for any consumer-facing business in today's digital age. Whether you handle customer emails, support tickets, or live chat, a good help desk system keeps your communication organized and your customers happy. Sadly, many companies take advantage of this need. They push users into walled gardens where access to basic features can change on a whim and key tools get locked behind paywalls. Help Scout's pricing as of November 11, 2025.One such case is Help Scout, which switched to a more expensive pricing plan. After customer backlash, the company reverted to a revised plan that was slightly cheaper than the one that sparked the outrage. But what if I told you there was an alternative that does not make you anxious about sudden pricing changes? Something that lets you build your own setup, keep your data close, and pay only for what you actually need. FreeScout Doesn't Lock You InFreeScout is an open source help desk and shared mailbox built with PHP and Laravel. It is licensed under AGPL 3.0, which means the code is freely available, and you can self-host it on your own server without having to pay any user-based costs. You only pay for hosting and optional paid modules that expand functionality. Modules cover integrations, push notifications, and specialized features. Everything else, from ticket handling to automation, works out of the box once you install FreeScout. Other than the usual help desk features like shared inboxes, agent collision detection, canned responses, and user management, FreeScout offers flexibility that few platforms can match. FreeScout goes a step further with self-hosting, custom domains, API access, and full database control. You decide how your data is stored, backed up, and secured. For organizations that care about privacy and sovereignty, this makes a real difference. It also supports mobile apps for Android and iOS. Push notifications require a paid server-side module, but once configured, your team can manage tickets directly from their phones with no extra cloud dependencies. If you want integrations, FreeScout connects with Slack, Telegram, and other services. There are modules for CRM tools, customer portals, and even AI-assisted responses (via Community modules). Some Things to Keep in MindRunning FreeScout does need some technical setup. You will manage hosting, updates, and backups. Adding advanced features like AI-powered replies or analytics will take extra configuration and can add costs over time. Depending on your setup, you may still rely on FreeScout modules or community support. That means moving away later could take planning, though you always keep your data since it lives on your own server. In contrast, Help Scout and Zendesk provide everything under a single roof. They handle hosting, maintenance, and scaling for you but limit backend customization and control. You use what they provide within their rules. Overall, what FreeScout offers beats any walled garden solution, especially for people running small businesses or larger teams that value data ownership and predictable costs over the convenience that comes with lock-in. Want to Deploy It?You can try FreeScout in your browser using its live demo. If you would like to host it yourself, the official installation guide covers every step for various kinds of setups. Plus, there are apps for both Android and iOS. 
However, in order for them to work with your FreeScout instance, you must do some additional configuration work. FreeScout🚀Run your own instance of FreeScout effortlessly in the cloud with PikaPods! Start free with $5 welcome credit 😎If you are considering a move from another help desk like Help Scout or Zendesk, you should check out the official migration guide, and if you are interested in the source code, then you can visit the project's GitHub repository. Suggested Read 📖 5 Signs Your Proprietary Workflow Is Stifling Your Creativity (And What You Can Do About It)If these signs feel familiar, your creativity may be stifled by proprietary constraints.It's FOSSTheena Kumaragurunathan
  24. by: Abhishek Prakash Tue, 11 Nov 2025 11:07:18 GMT Imagine that one of the most prestigious open source software websites starts showing up in top results for "pornhub downloader". This is actually happening with Flathub, the official web-based app store for Flatpak packages. Here's a demo I made at the risk of spoiling my relationship with Google. And no, I was not particularly looking for a one-handy utility like this 👼 I was using Ahrefs, an SEO tool used for monitoring web rankings, among other things. This is when I noticed that Flathub was ranking for terms it should not have been. Flathub ranking for words it wouldn't wantNot just that, out of the top 10 ranked pages, at least 2 of them are NSFW tag pages. Top ranking pages are not something Flathub would be proud ofShady developer piggybacking on Flathub's reputationThis would have been one innovative, fun way to make more people use open source software, except that the applications using these tags are not open source software at all. There are actually three of these applications, all of which were created by the same developer, called Warlord Software. I am not going to link out to this website out of spite. Similar kinds of apps, from the same developerIf you visit the Flathub page of these applications, nothing seems extraordinary, just a regular downloader app for Linux. Seems like a regular downloader app until it is notBut when you scroll down to the tag section, this is where you see the root cause of the problem. All three apps are using those NSFW tags. This is a deliberate act of exploiting the good reputation of Flathub to get more people to download these applications and then upsell them to the paid version. Yes, all three of these apps have premium licenses as well. Before you say that this is all a non-issue and there is nothing wrong with offering an app for downloading videos from adult websites, let me tell you that you will find no such tags or words mentioned anywhere on the developer's website: No NSFW words on developer's own website where these apps are offeredHere's what's going on...See, it is nearly impossible for a new website or application to rank for popular but highly competitive keywords like 'xyz downloader'. There are numerous websites and tools that let you download online videos from x number (or XXX number) of websites. So this developer created a few downloader apps that have no special features, offered Flatpak versions for Linux users, published them on Flathub and tagged them with the NSFW keywords. With the verified tag, the app looks more legit and tempting to download. It is easy for a highly reputed website like Flathub to rank highly for those terms. This way, a shrewd developer who would never have been able to get even 100 downloads on his own got more than 250,000 downloads. There are tons of good downloader applications for Linux. They could also use these keywords, but we only see apps made by a certain developer doing this. This is pure exploitation of the Flathub ecosystem. Flathub is not to be blamed hereIt's not entirely their fault that someone added NSFW words and used them to sell shady proprietary apps. Although they should be more careful about such clear exploitation of their web reputation. Now, it may seem like I am making an issue out of nothing. Perhaps. I actually noticed this a few months ago. I wanted to write about it but then I decided to ignore a 'non-issue'. 
A few months later, Flathub was still ranking for all kinds of this-hub, that-tube, xyz-hamster downloaders, and I could not tolerate it anymore. Lovely folks at Flatpak/Flathub/Fedora, please take note. My rant ends here.
  25. by: Chris Coyier Mon, 10 Nov 2025 18:00:39 +0000 It’s interesting to me to think about how, during a lot of the web’s evolution, there were many different browser engines (more than there are now) and they mostly just agreed, on paper, to do the same stuff. We focus on how different things could be cross-browser back then, which is true, but mostly it all worked pretty well. A miracle, really, considering how unbelievably complicated browsers are. Then we got standards and specifications and that was basically the greatest thing that could have happened to the web. So we put on our blue beanies and celebrate that, which also serves as a reminder to protect these standards. Don’t let browsers go rogue, people! Then, still later, we actually got tests. In retrospect, yes, obviously, we need tests. These are now web-platform-tests (WPT), and they help all the browser engines make sure they are all doing the right thing. Amazing. (Side note: isn’t it obnoxious how many billions of dollars go into newfangled browsers without any of them contributing or funding actual browser engine work?) I only recently saw browserscore.dev by Lea Verou as well. Yet another tool to keep browsers honest. Frankly I’m surprised how low all browsers score on those tests. I read in one of Lea’s commit messages “We’re not WPT, we’re going for breadth not depth.” which I found interesting. The Browser Score tests run in the browser, and pretty damn fast at that. I haven’t run them myself, but I have a feeling WPT tests take… a while. How can we improve on all this? Well a gosh-darn excellent way to do it is what the companies that make browsers have already been doing for a number of years: Interop. Interop is a handshake deal from these companies that they are going to get together and pick some great things that need better testing and fixed-up implementations and then actually do that work. Interop 2025 looks like it went great again. It’s that time again now, and these browser companies are asking for ideas for Interop 2026. If you have something that bugs you about how it works cross-browser, now is a great time to say so. Richard has some great ideas that seem like perfect fits for the task. Godspeed, y’all. We can’t all be like Keith and just do it ourselves.
  26. by: Neeraj Mishra Mon, 10 Nov 2025 16:40:16 +0000 Creating and updating geo targeted APIs may seem easy, but there are countless challenges involved. Every country, every city, and every mobile network can respond differently and will require distinct adjustments. When pricing endpoints contain location-based compliance features and payment options, testing them will require more than one physical location. Proxies are a crucial part of the developer’s toolkit: they enable you to virtually “stand” in another country to observe what the users see. Developers encounter many problems when it comes to testing geo targeted APIs, and it is the use of proxies that addresses this concern. In this article, we will outline the proxy use case and its benefits, the different proxy types, and potential challenges. We will maintain a practical approach so that you can pass it to a QA engineer or a backend developer and they will be able to use it directly. What Are Geo Targeted APIs and Why Do They Matter? A geo targeted API is an API that customizes its response according to the client’s geographical location. Such locations are primarily determined by an IP address, sometimes by headers, and in specific situations by account data. Streaming services provide different content to different countries, hotel booking systems adjust prices based on geographical location, ride-hailing apps change currency according to local clientele, and fintech apps restrict viewable payment services based on geographical payment regulations. Why are developers so focused on this? Such APIs need to be consistent, compliant, and predictable, and for good reason. When users in Poland see prices in USD instead of the local PLN, or people in the UK see services that are not legally available to them, the likely results are customer dissatisfaction, transaction failures, or, in the worst case, regulatory issues. Ensuring that geo logic is accurately tested is not optional; for anything that concerns money, content, or the law, it is essential built-in QA. If a team is based in a single location, every request they make predictably originates from that location. Mocking the API is an option, but that will not give you enough information about what the real upstream service will return, and that’s critical information. A way to disguise requests as if they come from a different geographical location is necessary; that is the function of a proxy in this situation. Why Proxies Are the Easiest Way to Test Location-Based Responses? A proxy server acts as an intermediary that conveys your request to the target API and returns the response. One important element is that the API only sees the proxy’s IP address and not yours. Assuming the proxy is in Germany, the API will think the request is coming from Germany; the same applies to Brazil, where the API will see Brazil. A developer can use a good proxy pool to send an API request from 10 different countries and check if the API is working correctly. You also don’t have to set up test infrastructure in different regions. No cloud instances have to be set up in various geographies every time you want to test. You don’t have to rely on colleagues from different countries to participate in a “just a check” test. Simply route the request through a different IP address and analyze the results. Another reason for the popularity of proxies in this task is that they work on the network level. There is no need to alter the API code itself; only the API caller needs to be changed. 
This enables QA engineers and backend developers to test production-like behavior without changing the production logic. Typical Workflow: How Developers Actually Use Proxies in Testing Let’s break down a realistic workflow you’d see in a team that regularly tests geo targeted APIs. Define the geo scenarios First, the team decides which locations they need to test: EU vs US, specific countries like the UK, Canada, Germany, UAE, or mobile-only markets. This list often mirrors business logic in the API. Choose or rotate proxies for those locations The tester/developer picks proxy endpoints that match those locations. A good provider will offer a large choice of countries so you don’t have gaps in testing. Send the same API request through different proxies The team sends the same endpoint call – say, /v1/pricing?product=123 – but with the client configured to use different proxy IPs. The API should return different currencies, prices, availability, language, or content depending on the location. Capture and compare responses Responses are saved and compared either manually or with automated tests. If Germany and France receive the same content but were supposed to differ, that’s a bug. Automate for regression Once the pattern is confirmed, the team bakes it into CI/CD or scheduled tests. Every time the API is deployed, the test suite calls it from multiple countries via proxies to ensure nothing broke. That’s the core idea: same request, different exit IP, compare output. Which Types of Proxies Are Best for Geo API Testing? Not all proxies are equal, and developers learn this quickly once they start hitting real services. Some APIs are strict, some are lenient, and some are downright suspicious of automated traffic. So choosing the right proxy type matters. Here is a simple comparison to help decide:
Proxy Type | Best Use Case | Pros | Cons
Datacenter proxies | Fast functional testing across many countries | High speed, good for automation, cheaper | Some services detect them as non-residential
Residential proxies | Testing real-user conditions and stricter APIs | High trust, looks like normal user traffic | Slower, often more expensive
Mobile proxies | Testing mobile-only features and app endpoints | Seen as mobile users, great for app testing | Most expensive, limited availability
Rotating proxies | Large-scale multi-geo automated testing | IP freshness, less blocking over many calls | Harder to debug single fixed IP behaviour
For most backend teams, datacenter proxies are enough to verify logic: does the API return EUR to a German IP and GBP to a UK IP? For QA teams testing production-like flows, residential or mobile proxies are better, because many modern APIs personalise content or apply security rules based on the perceived “realness” of the IP. If you need a flexible source of geo IPs for dev and QA, using a provider like proxys.io is convenient because you can pick locations on demand and plug them into your scripts without overcomplicated setup. Key Things Developers Test with Proxies Developers don’t use proxies for fun; they use them to answer very specific questions about how a geo targeted API behaves. Here are the most common areas they validate: Currency and localisation (USD vs EUR vs GBP, date formats, language headers) Regional availability (is this product/service actually shown in this market?) Compliance-based hiding (is restricted content hidden in specific countries?) Pricing tiers (do high-income regions get different price ladders?) Payment gateways (is a certain payment method visible in that country?) 
Feature flags tied to geography (e.g. features rolled out in 3 markets only) By running the exact same call through 5–10 different country proxies, the developer immediately sees if business rules are correctly encoded in the API. One Practical List: Best Practices for Using Proxies in API Testing Use HTTPS for all proxy traffic to avoid tampering and to mirror real-world usage. Keep a mapping of “country → proxy endpoint” in your test repo so tests are reproducible. Log the IP and country used for each test run – it makes debugging much easier. Don’t rely on just one IP per country; some APIs will cache responses per IP. Add assertions per country in automated tests (“if country=DE, expect currency=EUR”). Rotate or refresh proxies periodically to avoid stale or blocked IPs. Document test coverage so product owners know which countries are actually being tested. This is the kind of hygiene that turns proxies from an ad-hoc trick into a stable part of your QA pipeline. How to Integrate Geo Proxy Testing into Automated Pipelines A lot of teams start by testing manually with a proxy in Postman, Insomnia, or curl. That’s fine for discovery, but not enough for long-term reliability. The real win is when you add multi-geo tests into CI/CD so every deployment checks location-based behaviour automatically. The pattern is straightforward (a minimal curl-based sketch of this loop appears at the end of this article): Your test suite has a list of target countries. For each country, the test runner sets the proxy configuration. The runner calls the API and captures the response. The test compares the response to the expected shape/content for that country. If even one country fails (for example, Canada doesn’t get CAD), the pipeline fails. Because proxies work at the network level through a simple interface, the approach is compatible with virtually any language or testing framework, be it JavaScript (Axios, node-fetch), Python (requests), Java (HttpClient), Go (http.Client with transport), or even a cURL-based Bash script. It is a matter of setting the proxy for each request. This is extremely useful for teams implementing progressive geo-release features. Suppose the marketing team wants to release a feature in the UK and Germany, but not in the US. Your continuous integration system could enforce this rule. If the US suddenly gets the feature, the build fails. That is control. Common Pitfalls and How to Avoid Them While proxy-based testing is simple in principle, developers do hit some recurring issues: 1. API uses more than IP to detect location Some APIs also look at Accept-Language, SIM/Carrier data (for mobile), or account settings. If you only change the IP, you might not trigger all geographic branches. Solution: mirror headers and user profile conditions where possible. 2. Caching hides differences If the upstream service caches by URL only (not by IP), you might get the same response even when changing country. Solution: add cache-busting query params or ensure the API is configured to vary by IP. 3. Using free or low-quality proxies Unreliable proxies cause false negatives – timeouts, blocked IPs, or wrong countries. For testing business logic, stable and correctly geo-located IPs matter more than saving a dollar. 4. Forgetting about time zones Some services couple geo logic with local time. If you test only the IP but not the time window, you might think the feature is missing. Document time-based rules separately. 5. Not logging proxy usage When someone reports “Germany didn’t get the right prices”, you need to know which IP you used. 
Always log the proxy endpoint and country for traceability. Avoiding these mistakes makes geo testing with proxies extremely reliable. Why Proxies Beat Manual Remote Testing You could ask a colleague in Spain to click your link. You could set up cloud instances in 12 regions. You could even travel. But those options are slow, expensive, and not repeatable. Proxies, on the other hand: Work instantly from your current location Scale to as many countries as your provider supports Can be run in CI/CD, not just manually Are independent from your personal device or IP Are easy to rotate if one IP is blocked From an engineering point of view, they’re simply the most automatable way to emulate different user geographies. Conclusion: Proxies Turn Geo Testing into a Repeatable Process There are geo-targeted APIs everywhere – commerce, content, fintech, mobility, gaming, SaaS. Any product you operate in multiple countries will eventually have to solve the question, “What does this look like for users in X?” Proxies give developers the cleanest way to answer this question programmatically. Developers can check whether prices, currencies, languages, availability, and compliance rules behave as expected by routing the same API call through different country IPs. With a good proxy provider, you can turn this from a one-off debugging technique into a standard check in your testing process. The conclusion is straightforward: if the API logic is based on the user’s location, so must the testing be. Proxies are the way to achieve this from your desk. The post How Developers Use Proxies to Test Geo Targeted APIs? appeared first on The Crazy Programmer.
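As promised above, here is a minimal Bash sketch of that multi-country loop; the proxy endpoints, API URL, expected currencies, and the assumption that the response is JSON with a currency field are all placeholders to adapt to your own setup:
#!/bin/bash
# Same request, different exit IPs: fail the build if any country gets the wrong currency
declare -A proxies=( [DE]="http://de.proxy.example:8080" [GB]="http://gb.proxy.example:8080" )
declare -A expected=( [DE]="EUR" [GB]="GBP" )
url="https://api.example.com/v1/pricing?product=123"
status=0
for country in "${!proxies[@]}"; do
    currency=$(curl -s --proxy "${proxies[$country]}" "$url" | jq -r '.currency')
    if [ "$currency" = "${expected[$country]}" ]; then
        echo "OK: $country returned $currency"
    else
        echo "FAIL: $country returned $currency, expected ${expected[$country]}"
        status=1
    fi
done
exit $status
Dropped into a CI job, a non-zero exit code is all the pipeline needs to block a deployment that breaks a country-specific rule.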
  27. by: Sourav Rudra Mon, 10 Nov 2025 14:59:22 GMT Humble Bundle has a Linux collection (partner link) running right now that's kind of hard to ignore. Twenty-two books covering everything from "how do I even install this" to Kubernetes orchestration and ARM64 reverse engineering. All from Apress and Springer; this means proper technical publishers, not some random self-published stuff. Humble Tech Book Bundle: Linux for Professionals by Apress/SpringerUnlock essential resources for Linux—get a professional edge on the competition with a little help from the experts at Apress & Springer!Humble BundleIf you decide to go ahead with this bundle, your money will go to support Room to Read, a non-profit that focuses on girls' literacy and education in low-income communities. ⏲️ The last date for the deal is November 24, 2025. 📋This article contains affiliate links. Please read our affiliate policy for more information.So, What's in The Bundle?First off, the "Zero to SysAdmin" trilogy. Using and Administering Linux: Volume 1 covers installation and basic command line usage. Volume 2 goes into file systems, scripting, and system management. Volume 3 focuses on network services like DNS, DHCP, and email servers. The Kubernetes coverage includes three books. Deploy Container Applications Using Kubernetes covers microk8s and AWS EKS implementations. Ansible for Kubernetes by Example shows cluster automation. Kubernetes Recipes provides solutions for common deployment scenarios. Plus Certified Kubernetes Administrator Study Companion if you're prepping for the CKA exam. systemd for Linux SysAdmins explains the init system and service manager used in modern distributions. It covers unit files, service management, and systemd components. For low-level work, there's Assembly Language Reimagined for Intel x64 programming on Linux. Foundations of Linux Debugging, Disassembling, and Reversing covers x64 architecture analysis. Foundations of ARM64 Linux Debugging, Disassembling, and Reversing does the same for ARM64. Linux Containers and Virtualization covers container implementation using Rust. Oracle on Docker explains running Oracle databases in containers. Supercomputers for Linux SysAdmins covers HPC cluster management and hardware. Yocto Project Customization for Linux is for building custom embedded Linux distributions. Pro Bash is a shell scripting reference. Introduction to Ansible Network Automation covers network device automation. The Enterprise Linux Administrator and Linux System Administration for the 2020s both cover current sysadmin practices. Practical Linux DevOps focuses on building development labs. CompTIA Linux+ Certification Companion is exam preparation material. Linux for Small Business Owners covers deploying Linux in small business environments. What Do You Get for Your Money?All 22 books are available as eBooks in PDF and ePub formats. They should work on most modern devices, ranging from computers and smartphones to tablets and e-readers. Here's the complete collection. 
👇
CompTIA Linux+ Certification Companion
Introduction to Ansible Network Automation
Certified Kubernetes Administrator Study Companion
Pro Bash
Yocto Project Customization for Linux
Linux Containers and Virtualization
Using and Administering Linux: Volume 1
Foundations of ARM64 Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 2
Foundations of Linux Debugging, Disassembling, and Reversing
Using and Administering Linux: Volume 3
Deploy Container Applications Using Kubernetes
systemd for Linux SysAdmins
Ansible for Kubernetes by Example
Assembly Language Reimagined
Linux for Small Business Owners
Kubernetes Recipes
Linux System Administration for the 2020s
Oracle on Docker
Practical Linux DevOps
Supercomputers for Linux SysAdmins
The Enterprise Linux Administrator
There are three pricing tiers here: $1 tier: Two books: Linux System Administration for the 2020s and Practical Linux DevOps. Both focus on current practices. Not bad for a dollar. $18 tier: Adds three more books covering Kubernetes, Ansible automation, and DevOps stuff. Five books total. $25 tier: All 22 books. This is where you get the whole bundle. These books are yours to keep with no DRM restrictions. Head over to Humble Bundle (partner link) to grab the collection before the deal expires. Get The Deal (partner link)
