Blog Entries posted by Blogger

  1. by: Sourav Rudra
    Thu, 20 Nov 2025 11:55:33 GMT

    There is an ongoing OSS maintainer burnout crisis. A new report reveals that a significant portion of developers have experienced burnout, with most of them being unpaid and very close to quitting.
    Luckily, all's not lost. With proper funding, support, and recognition, there is a chance this crisis can be handled.
    Alongside releasing that report, Sentry and the Open Source Pledge also announced something very wholesome.
    Celebrate Thanksgiving, The Open Source Way
    Cranberry sOSS is a real jar of cranberry sauce that Vlad-Stefan Harbuz, Studio 404, and the Open Source Pledge team created to help open source maintainers. All proceeds made from this will go directly to developers.
Funding distribution is handled systematically, with thanks.dev's dependency analysis model being used to identify the most depended-upon open source maintainers globally. The complete list of beneficiaries is public on the funding distribution page.
    The creators acknowledge "no model is perfect". But they believe the weighted distribution approach makes sense for supporting foundational maintainers who keep the internet running.
    As for the sauce, it contains fresh cranberries sourced locally from Cape Cod. Other ingredients include organic cane sugar and water. It is gluten-free, sodium-free, and certified organic too.
With Thanksgiving just a week away, this can be a lovely addition to your table; it should go well with sandwiches, charcuterie boards, and turkey dinners.
    Let the Celebrations Begin
    A jar of Cranberry sOSS costs $13.37 excluding shipping. You can order through the Sentry Shop. Cranberry sOSS ships internationally, though some countries are excluded, and thanks.dev handles all payment disbursement to the selected open source maintainers.
PS: Keep an eye out for some genius copywriting on the product page. The pricing itself is an obvious example.
Cranberry sOSS
Suggested Read 📖
Open Source Developers Are Exhausted, Unpaid, and Ready to Walk Away
The foundation of modern software is cracking under the weight of burnout. (It's FOSS, Sourav Rudra)
  2. by: Sourav Rudra
    Thu, 20 Nov 2025 08:32:58 GMT

    Blender is a free and open source 3D creation suite used across film, animation, game development, and VFX production. Organizations like Ubisoft, NVIDIA, AMD, and others rely on it for their commercial projects.
    With a recent announcement, the Blender team has released Blender 5.0, introducing major improvements to color management, video editing, and rendering workflows.
    🆕 Blender 5.0: What's New?
    Blender 5.0 adds support for ACES 1.3 and 2.0 workflows. You can now use the ACEScg working space and ACES 2.0 view transforms. OpenEXR images can be saved in ACES2065-1 and ACEScg color spaces, with files now enabling custom working color spaces, like Linear Rec.2020.
    The Compositor also gets a new Convert Colorspace node, and tooltips now show descriptions for displays, views, and color spaces from your OpenColorIO config.
    The color pipeline now supports HDR and wide gamut colors for images and video. You get new display options like Rec.2100-PQ, Rec.2100-HLG, and AgX HDR. Using HDR on Blender requires a compatible monitor, Vulkan backend on Windows and Linux, and Wayland on Linux. Apple Silicon Macs work out of the box.
    Source: Blender
    Then there are the color pickers, which have a new Linear/Perceptual toggle, where the Linear option (formerly called RGB) uses the scene linear working color space and the Perceptual option uses the color picking space (sRGB by default) that matches the visual color picking widgets.
    Similarly, the Sky Texture node now uses multiple scattering for delivering more realistic skies and can be used to create day-to-night cycles by animating just a single parameter.
    The new Radial Tiling node allows you to create circular patterns and rounded polygons with adjustable roundness, and Grease Pencil objects now support motion blur with adjustable quality steps.
    We wrap up this section with the Video Sequencer getting a major overhaul. The Strip properties have been moved to the Properties Editor with dedicated tabs.
    You can now pick different scenes for the Sequencer, and the biggest addition here is the Compositor Modifier, which brings node-based effects into the Sequencer. Plus, playback controls are now built into the editor.
    🛠️ Other Changes and Improvements
    There are plenty of other quality-of-life upgrades too; some notable ones include:
- Better denoising quality with the OptiX denoiser.
- Metallic materials support thin-film iridescence effects like oil slicks.
- Material compilation is up to 4x faster on NVIDIA GPUs with Vulkan.
- Subsurface scattering now uses multi-bounce rendering for more realistic results.
- Volumes render faster with NanoVDB and new filtering (up to 3x speedup on GPU).
📥 Download Blender 5.0
You will find the necessary binaries for this release on the official website for Linux, Windows, and macOS. Alternatively, on Linux, you can get this release from Snapcraft and Steam.
Before you download, though, take note of the updated GPU requirements; for an in-depth look at Blender 5.0, you can go through the changelog.
    The source code lives in the project's Gitea instance.
    Blender 5.0
  3. by: Abhishek Prakash
    Thu, 20 Nov 2025 03:31:34 GMT

Nitrux has released a major new version, and it now uses Hyprland instead of KDE Plasma. Hyprland's popularity is soaring, and I predict that more distros will start offering their own Hyprland editions soon. Are we entering a Hyprland era of desktop Linux?
Nitrux 5.0.0 Released: A 'New Beginning' That's Not for Everyone (By Design)
The Debian-based distro goes all-in on Hyprland, immutability, and intentional design. (It's FOSS, Sourav Rudra)
Let's see what else you get in this edition of FOSS Weekly:
- FFmpeg being unhappy with Google.
- Mastodon seeing a leadership change.
- Tool for keeping GNOME Panel clean.
- Snapchat dipping their toes in the open source pond.
- Video review of a beautiful Raspberry Pi mini PC case.
- And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by Internxt.
SPONSORED: Free Webinar | SOC Leader's Playbook: 3 Steps to Faster MTTR

    SOCs are dealing with tighter timelines, rising noise, and fast-moving threats.
    In this session, you’ll get a clear 3-step playbook designed to help teams:
- Cut MTTR by 21 minutes per incident
- Detect new attacks earlier with intel from 15,000 organizations
- Achieve a 3× performance boost by reducing false positives
Tune in to the LIVE webinar on November 25 at 3 PM UTC.
📰 Linux and Open Source News
- Snapchat open-sources the Valdi framework.
- RustDesk has pulled ahead of its competitors.
- FFmpeg has called out Google over CVE slop.
- IBM has joined the OpenSearch Software Foundation.
- Mozilla has unveiled a new 'AI Window' browsing mode for Firefox.
- Ubuntu now offers 15 years of support for long-lived enterprise systems.
- Eugen Rochko has stepped down from his role as CEO at Mastodon and looks forward to his new advisory role.
After Nearly 10 Years of Building Mastodon, Eugen Rochko Steps Into Advisory Role
Mastodon's creator steps back from CEO role, transfers assets to non-profit organization. (It's FOSS, Sourav Rudra)
🧠 What We're Thinking About
    Open source developers are burning out fast, and if concrete steps aren't taken, they will quit.
Open Source Developers Are Exhausted, Unpaid, and Ready to Walk Away
The foundation of modern software is cracking under the weight of burnout. (It's FOSS, Sourav Rudra)
Microsoft's new AI feature for Windows can be tricked into installing malware.
    🧮 Linux Tips, Tutorials, and Learnings
- Some suggestions to follow to keep your Linux systems secure.
- Ever heard the term "LUKS"? If not, then we have an explainer on it.
- Here are some Rust-based alternatives to classic CLI tools on Linux.
- If you see an 'apt-key is deprecated' warning, here's what you can do about it.
[Fixed] apt-key is deprecated. Manage keyring files in trusted.gpg.d
The apt-key tool is being deprecated and your system knows that. But do you know what you should do to get packages from external repositories? (It's FOSS, Abhishek Prakash)
I mentioned the Hyprland window manager at the beginning. On a related topic, here's an explainer on what a tiling window manager is.
Explained: What is a Tiling Window Manager in Linux?
Learn what a tiling window manager is, and the benefits that come along with it. (It's FOSS, Ankush Das)
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus
👷 AI, Homelab and Hardware Corner
    Your Raspberry Pi Pico can be a DIY powerhouse with these project ideas.
9 Projects Ideas to Get into DIY Mode With Raspberry Pi Pico
Got a Raspberry Pi Pico? Here are some examples of what you can do with this tiny but versatile microcontroller. (It's FOSS, Abhishek Kumar)
🛍️ Linux eBook bundle
    This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative!
Explore the Humble offer here
✨ Project Highlights
    Veil is a neat extension that can tidy up the top panel on GNOME-equipped systems.
Clean Up Your GNOME Panel With This New Extension
Hide unwanted panel icons automatically with Veil. (It's FOSS, Sourav Rudra)
📽️ Videos I Am Creating for You
    A beautiful mini PC case for Raspberry Pi. The latest video shows its positives and negatives in action.
Subscribe to It's FOSS YouTube Channel
Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides that help people use Linux on their personal computers.
We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader-supported publication helping Linux users worldwide with timely news coverage and in-depth guides and tutorials.
    If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription.
Join It's FOSS Plus
💡 Quick Handy Tip
In Firefox's recent versions (143 and above), you can copy and share links to highlighted text. Select a piece of text on a website that you want to bring attention to, then right-click the selection and click "Copy Link to Highlight."

    🎋 Fun in the FOSSverse
    This crossword will test your knowledge of popular shortforms in FOSS.
Expand the Short form: Crossword
It's time for you to solve a crossword! (It's FOSS, Ankush Das)
🤣 Meme of the Week: Yeah, visiting some Linux forums can be brutal, but not ours!
    🗓️ Tech Trivia: On November 21, 1969, the first ARPANET link went live between UCLA and the Stanford Research Institute, demonstrating packet-switched communication and laying the groundwork for what would eventually become the modern Internet.
    🧑‍🤝‍🧑 From the Community: One of our regular Pro FOSSers, Xander, is looking for solutions to backup only the Home folder in their Linux Mint system. Can you provide any helpful pointers?
How to best make a backup of ONLY my home folder?
"Today I'm expecting a brand new computer. A Tuxedo computer. Now, on my current computer I'm running Linux Mint 22.2. How do I best backup my HOME directory completely, including all the dotfiles and dotdirectories. I'm especially interested in keeping my firefox configuration, but there's also quite some documents on there I wish to keep. I'd either need to import it in TuxedoOS or a new install of Linux Mint, depending on how used I got to cinnamon." (It's FOSS Community, xahodo)
❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  4. by: Sourav Rudra
    Wed, 19 Nov 2025 11:14:02 GMT

    If you ask me, Microsoft has been one of the biggest driving forces behind Linux adoption in recent years. The way they've been handling Windows, with its forced updates, aggressive telemetry, and questionable AI features, has sent more people to Linux than any marketing campaign ever could.
    And they are at it again with a new AI feature that could be tricked into installing malware on your system.
    Isn't This Too Much?
    Microsoft is rolling out a new experimental feature called "Copilot Actions" to Windows Insiders. They pitch it as an AI agent that handles tasks you describe to it. Organize vacation photos, sort your Downloads folder, extract info from PDFs, that sort of thing.
    It is currently available in Windows 11 Insider builds (version 26220.7262) as part of Copilot Labs and is off by default, requiring admin access to set it up.
But here's the catch. Copilot Actions isn't just suggesting what to do. It runs in a separate environment called "Agent Workspace" with its own user account, clicking through apps and working on your files.
    Microsoft says it has "capabilities like its own desktop" and can "interact with apps in parallel to your own session." And that's where the problems start.
    In a support document (linked above), Microsoft admits that features like Copilot Actions introduce "novel security risks." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions.
    The result? "Unintended actions like data exfiltration or malware installation."
    Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system.
    Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications."
    When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. 👇
    Source: MicrosoftEven with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders.
Windows Latest notes that, unlike Windows Sandbox, which runs in complete isolation and gets wiped clean when you close it, Copilot Actions operates in "Agent Workspace" with persistent user accounts that keep access to these folders across sessions. Also keep in mind that the feature can access any apps installed for all users on a system.
    Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity.
    But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files.
    Closing Thoughts
    There is a pattern here. Microsoft seems obsessed with shoving AI into every corner of Windows, whether users want it or not, whether it's ready or not, while simultaneously playing around with the data of its users.
    This is why Linux keeps gaining traction. No AI experiments that could install malware, and no fighting against features you never asked for. Plus, the most likely way you will nuke your installation is if you deliberately run something like rm -rf yourself, not because Copilot got confused by a malicious PDF.
    If Microsoft's AI experiments are making you uncomfortable, then there are plenty of Linux distributions that respect your privacy and put you in control.
    Suggested Reads 📖
Best Linux Distributions For Everyone in 2025
Looking for the best Linux distribution that suits everyone? Take a look at our comprehensive list. (It's FOSS, Ankush Das)
Microsoft Recall Exposes Passwords and Banking Data!
New tests reveal Microsoft Recall still screenshots sensitive data. (It's FOSS, Sourav Rudra)
  5. by: Sourav Rudra
    Wed, 19 Nov 2025 08:10:35 GMT

    Mastodon is a decentralized social network built on the ActivityPub protocol. Unlike Big Tech platforms, it operates as a federated network where users can choose or host their own servers.
    The key advantage is that no single entity controls your data or content. We already have an active presence on the instance owned and operated by the Mastodon non-profit, so you can follow us there if you have not already.
    Now let’s move on to the topic at hand.
    What's Happening: Eugen Rochko is stepping down as CEO after nearly ten years, transferring the trademark and other assets to the Mastodon non‑profit. He first announced the transition plan just two weeks into 2025, and after a series of quiet behind-the-scenes preparations, the change is now complete.
In his announcement, Eugen explained the thinking behind the move.
    He further elaborates on how the role has taken a toll on him, particularly the public visibility that comes with leading a social platform. Constant scrutiny, comparisons to tech billionaires, and the emotional weight of community expectations contributed to mounting stress.
    Over time, it was clear that continuing in such a prominent and demanding position was no longer healthy for him.
    What to Expect: According to TechCrunch, Mastodon is setting up a new Belgian AISBL (international non-profit association) to replace its former German gGmbH, which lost non-profit status last year.
    The Belgian structure offers more governance flexibility and international recognition. Meanwhile, a US-based 501(c)(3) non-profit currently holds the trademark and assets until the Belgian entity is established.
    Felix Hlatky is the new Executive Director, and the board includes Twitter co-founder Biz Stone, Karien Bezuidenhout, and Esra'a Al Shafei. Other members include Renaud Chaput as the Technical Director, Andy Piper as the Head of Communications, and Philip Schröpel as the Strategy & Product Advisor.
    Funding was secured from Jeff Atwood (€2.2 million), Biz Stone, AltStore (€260,000), GCC (€65,000), and Craigslist founder Craig Newmark. It is also worth noting that Eugen received €1 million in recognition of his years of service at a salary below market rates.
    And all of this makes sense. He built something bigger than himself and is now ensuring it remains that way. By handing control to a non-profit, he protects Mastodon’s values and frees himself to focus on building rather than managing all the noise that comes with the job.
    Suggested Read 📖
Mastodon: The Decentralized, Open Source Alternative to Twitter [to Resist Big Tech Monopoly]
I don't know about you, but I have long yearned for a social network that I can truly call home. Facebook is no good as it's full of pictures of people's cats and their dinner (probably not for me). Twitter is full of trolls and rude people. (It's FOSS Community)
  6. by: Sourav Rudra
    Wed, 19 Nov 2025 05:42:55 GMT

    Your favorite apps run on code maintained by exhausted volunteers. The databases powering your company? Built by developers working double shifts. Those JavaScript frameworks everyone depends on? Often shepherded by a single person, unpaid, drowning in demands.
    A new report reveals just how bad things have gotten. Sentry funded this research through their Open Source Pledge initiative. Miranda Heath, a psychologist and PhD student at The University of Edinburgh, conducted the study.
    She reviewed academic literature, analyzed 57 community materials, and talked to seven OSS developers directly. Some had burned out. Others managed to avoid it. Some walked away entirely.
    Her findings track with open source infrastructure breaking down. The pressure points are nearly identical.
    Before we dive in, you have to know there is one major limitation with this report. Most analyzed materials came from white male developers. Miranda notes that marginalized groups likely experience additional burnout factors the research missed.
Burnout in Open Source: A Structural Problem We Can Fix Together
Burnout is affecting the entire Open Source ecosystem. Here's what we could do to make things better. (Open Source Pledge)
The Three Faces of Burnout
    Firstly, you have to understand that burnout isn't just being tired. It has three distinct characteristics that feed off each other.
    The motivational component hits first. Developers lose the ability to push through tasks. What once felt manageable becomes impossible to start. They avoid work entirely.
    Then comes the affective breakdown. Emotional regulation fails. Developers become easily frustrated, irritated, and overwhelmed. They snap at users. They withdraw from communities.
    The cognitive shift follows. People mentally distance themselves from their work. They express negativity and cynicism towards it. Dark humor becomes a coping mechanism. "Fix it, fork it, f*ck off" becomes the phrase of choice.
The numbers are brutal. A 2023 survey of 26,348 developers found that 73% had experienced burnout at some point. Another survey showed 60% of OSS maintainers had considered leaving entirely.
    Burnout is a predictor of quitting. When developers burn out, they walk away.
    Burnout is Slow Death
    Miranda found six interconnected factors driving maintainers to the edge.
    Difficulty Getting Paid: Sixty percent of OSS maintainers receive no payment whatsoever (according to the Tidelift survey). They work full-time jobs, then maintain critical infrastructure for free. The double shift wrecks their mental and physical health and steals time from friends/family. Loneliness follows.
    Crushing Workload: Popular package maintainers drown in requests. They are often solo. Finding quality contributors is nearly impossible. Email overload alone can trigger burnout.
    Maintenance Feels Unrewarding: Developers love creating. They hate the repetitive, mind-numbing maintenance work. It takes time away from what they actually enjoy (coding). There is no creativity, no learning, just repetitive work.
    Toxic Community Behavior: Users demand features like customers. They shame maintainers publicly when bugs appear. Good work goes unrecognized. Mistakes get amplified. The entitlement exhausts them.
    Toxicity exists between developers too. The majority of OSS collaboration happens remotely. No face-to-face contact. No conflict resolution training. No formal support structures or governance unless teams build them.
    This makes team toxicity both more likely and harder to fix, and the isolation aspect only makes everything worse.
    Hyper-responsibility: Developers feel crushing obligation to their communities. They can't say no, and stepping back feels like betrayal. The guilt compounds the stress.
    Pressure to Prove Oneself: Developers need portfolios for jobs. They constantly prove themselves to the community and potential employers. The performance pressure never stops. Fear of losing reputation keeps them working past healthy limits.
    GitHub makes it worse. Achievements, badges, contribution graphs. It gamifies the work. Developers feel compelled to maintain streaks and numbers. The metrics become the measure of worth.
    These factors reinforce each other. No pay for OSS means working a full-time job on top of it. The double shift means longer hours. Longer hours kill patience. Less patience breeds toxicity. Toxicity drives contributors away. Fewer contributors means more work.
    What Needs to Change
    The report offers four clear recommendations.
    Pay OSS developers reliably. Not donations or tips. Predictable income through decentralized funding that preserves maintainer autonomy. Foster recognition and respect too.
    Community leaders must encourage better behavior, and platforms like GitHub should educate users about the humans behind the code.
    Grow the community through better education and mentorship programs. Make it easier for newcomers to contribute quality work. Financial support helps here too.
    And finally, advocate for maintainers. OSS powers critical infrastructure. Burnout puts that at risk. Advocacy bodies need to make governments aware. That awareness can bring funding and real solutions.
    And, I will be honest, this hits close to home. I fully understand what's happening. Burnout literally robs you of any motivation or energy to do the things you love. It doesn't just slow you down. It kills the joy entirely.
    The fix isn't complicated. Treat maintainers like the humans they are, not free infrastructure. Companies profiting from open source need to contribute financially (at the very least).
    Employers should give developers dedicated time for OSS work. Users must remember there is a person on the other end of that issue thread. Fellow developers need to call out toxicity when they see it.
    Burnout prevention starts with basic human decency.
    Suggested Read 📖
Open Source Infrastructure is Breaking Down Due to Corporate Freeloading
An unprecedented threat looms over open source. (It's FOSS, Sourav Rudra)
  7. by: Chris Coyier
    Tue, 18 Nov 2025 23:11:32 +0000

There was a day not long ago when a Google Chrome browser update left any page with a CodePen Embed on it throwing a whole big pile of red JavaScript errors in the console. Not ideal, obviously.
    The change was related to how the browser handles allow attributes on iframes (i.e. <iframe allow="...">). CodePen was calculating the appropriate values inside an iframe for a nested iframe. That must have been a security issue of sorts, as now those values need to be present on the outside iframe as well.
    We documented all this in a blog post so hopefully we could get some attention from Chrome on this, and for other browser makers as well since it affects all of us.
    And I posted it on the ol’ social media:
    Huge thanks to Bramus Van Damme who saw this, triaged it at Chrome, and had a resolution within a day:
I think the patch is a great change, so hats off to everyone involved for getting it done so quickly. It's already in Canary, and I don't really know when it'll get to stable, but that sure will be good. It follows how Safari does things, where values that aren't understood are just ignored (which we think is fine and in line with how HTML normally works).
Fortunately we were able to mitigate the problem a little until then. For most Embedded Pens, a <script> is loaded on the page embedding it, and we dynamically create the <iframe> for you. This is nice, as it makes an accessible fallback easier to provide and gives you access to API-ish features for the embeds. We were able to augment that script to do a little browser user-agent sniffing and apply the correct set of allow attributes on the iframe, so as to avoid those JavaScript errors we were seeing.
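A rough sketch of that mitigation might look like the following. To be clear, this is a hypothetical illustration, not CodePen's actual embed code: the permission names, UA checks, and function names are all made up for the example.

```javascript
// Hypothetical sketch: pick an `allow` list per browser family before
// creating the embed iframe. Permission names and UA checks here are
// illustrative assumptions, not CodePen's real values.
const FULL_ALLOW = "camera; microphone; geolocation; midi; clipboard-write";

function allowListFor(userAgent) {
  // Safari ignores `allow` values it doesn't understand, so it can
  // safely receive the full list without console noise.
  const isSafari =
    /Safari/.test(userAgent) && !/Chrome|Chromium/.test(userAgent);
  if (isSafari) return FULL_ALLOW;
  // Other engines get a trimmed list to avoid errors/warnings for
  // unsupported values.
  return "clipboard-write";
}

function createEmbedIframe(src) {
  const iframe = document.createElement("iframe");
  iframe.src = src;
  iframe.setAttribute("allow", allowListFor(navigator.userAgent));
  return iframe;
}
```

The downside, as the next paragraph notes, is exactly this kind of per-browser branching: UA strings lie, and every new engine quirk means another case.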
    But there’s the rub: we’d rather not do any user-agent sniffing at all.
    If we could just put all the possible allow attributes we want on there, and not be terribly concerned if any particular browser didn’t support any particular value, that would be ideal. We just can’t have the scary console errors, out of concern for our users who may not understand them.
    Where we’re at in the saga now is that:
    We’re waiting for the change to Chrome to get to stable. We’re hoping Safari stays the way it is. OH HI FIREFOX. On that last point, if we put all the allow attributes we would want to on an <iframe> in Firefox, we also get console-bombed. This time not with red-errors but with yellow-warnings.
    So yes, hi Firefox, if you could also not display these warnings (unless a reporting URL is set up) that would be great. We’d be one less website out there relying on user-agent sniffing.
  8. by: Sourav Rudra
    Tue, 18 Nov 2025 09:09:02 GMT

    RustDesk has positioned itself as a compelling open source alternative to proprietary remote desktop solutions like TeamViewer and AnyDesk. Built with Rust and licensed under AGPL 3.0, it offers cross-platform support across Linux, Android, Windows, macOS, and iOS.
    The project has now announced a major update for Linux users. RustDesk's latest nightly build introduces support for multiple monitors with different scaling factors on Wayland sessions, specifically targeting KDE and GNOME desktop environments.
    RustDesk Levels Up
    This update addresses a well-known issue across the Linux desktop space, where users running multiple monitors with different resolutions and scaling levels, such as a 4K display at 200% scaling alongside a standard 1080p monitor, often struggled with proper display handling.
    The most common problem was pointer misalignment. Users would click in one location, but the input would register elsewhere on the remote machine. This made multi-monitor setups with mixed scaling practically unusable for remote work.
    The developers claim that their implementation now makes them the only remote desktop solution with this capability on Wayland.
    This puts RustDesk ahead of its commercial rivals. TeamViewer, AnyDesk, and Splashtop have been relatively slow to address Wayland-specific challenges, particularly around complex multi-monitor configurations.
    Get RustDesk
    This improvement is currently available in RustDesk's nightly builds on GitHub. These pre-release versions get updated daily with the latest code and features for early testing.
    Once testing completes, the multi-scaled display support will roll out to the stable version available on the official website.
    We tested RustDesk back in 2024 and found it impressive even then. This latest update only solidifies its position as a serious TeamViewer alternative.
RustDesk
Suggested Read 📖
RustDesk: I Found This Open-Source TeamViewer Alternative Impressive!
RustDesk is a fantastic secure remote desktop tool. Let's take it for a spin! (It's FOSS, Sourav Rudra)
  9. Chris’ Corner: Cursors

    by: Chris Coyier
    Mon, 17 Nov 2025 18:00:37 +0000

    CSS has a bunch of cursors already. Chances are, you’re not using them as much as you should be. Well, should is a strong word. I can’t cite any evidence offhand that special cursors is some absolute boon to user experience or accessibility. But it certainly seems like a nice touch. Like:
.copy-button { cursor: copy; }
Or:
[disabled] { cursor: not-allowed; }
These cursors are actually supplied by your OS, and thus can be altered by the OS. That's a good thing, as some OSs let you bump up the size of the cursor, for example, which is good for accessibility. You can set custom cursors as well (with a url() value), which won't get bumped up, which is bad for accessibility.
    Looking around at our 2.0 Beta editor, I can see lots of CSS-provided cursor changes.
    I’m pretty pleased with those!
    An interesting aspect of “custom” cursors is that they are only obviously a problem if you replace the actual cursor itself. That doesn’t rule out doing things in addition or next to the cursor. Our own Rachel Smith’s site has rainbow paint splotches shoot out from behind the cursor, just for fun, but the cursor itself isn’t changed.
Kyle Lambert has a good article about doing useful things with the cursor, with a particular focus on things Figma does. Here are some excerpts of good ideas:
    Just interesting stuff! Not sure we’re seeing quite as much cursor innovation elsewhere.
  10. by: Juan Diego Rodríguez
    Mon, 17 Nov 2025 14:47:54 +0000

    This is a series! It all started a couple of articles ago, when we found out that, according to the State of CSS 2025 survey, trigonometric functions were the “Most Hated” CSS feature.
    I’ve been trying to change that perspective, so I showcased several uses for trigonometric functions in CSS: one for sin() and cos() and another on tan(). However, that’s only half of what trigonometric functions can do. So today, we’ll poke at the inverse world of trigonometric functions: asin(), acos(), atan() and atan2().
    CSS Trigonometric Functions: The “Most Hated” CSS Feature
sin() and cos()
tan()
asin(), acos(), atan() and atan2() (You are here!)
Inverse functions?
Recapping things a bit, given an angle, the sin(), cos() and tan() functions return a ratio representing the sine, cosine, and tangent of that angle, respectively. And if you read the last two parts of the series, then you already know what each of those quantities represents.
    What if we wanted to go the other way around? If we have a ratio that represents the sine, cosine or tangent of an angle, how can we get the original angle? This is where inverse trigonometric functions come in! Each inverse function asks what the necessary angle is to get a given value for a specific trigonometric function; in other words, it undoes the original trigonometric function. So…
acos() is the inverse of cos(), asin() is the inverse of sin(), and atan() and atan2() are the inverses of tan(). They are also called “arcus” functions and written as arccos(), arcsin() and arctan() in most places. This is because, in a circle, each angle corresponds to an arc in the circumference.
    CodePen Embed Fallback The length of this arc is the angle times the circle’s radius. Since trigonometric functions live in a unit circle, where the radius is equal to 1, the arc length is also the angle, expressed in radians.
    Their mathy definitions are a little boring, to say the least, but they are straightforward:
y = acos(x) such that x = cos(y)
y = asin(x) such that x = sin(y)
y = atan(x) such that x = tan(y)
acos() and asin()
    Using acos() and asin(), we can undo cos(θ) and sin(θ) to get the starting angle, θ. However, if we try to graph them, we’ll notice something odd:
    The functions are only defined from -1 to 1!
    Remember, cos() and sin() can take any angle, but they will always return a number between -1 and 1. For example, both cos(90°) and cos(270°) (not to mention others) return 0, so which value should acos(0) return? To answer this, both acos() and asin() have their domain (their input) and range (their output) restricted:
acos() can only take numbers between -1 and 1 and return angles between 0° and 180°.
asin() can only take numbers between -1 and 1 and return angles between -90° and 90°.
This limits a lot of the situations where we can use acos() and asin(), since something like asin(1.2) doesn’t work in CSS — according to the spec, going outside the acos() and asin() domain returns NaN — which leads us to our next inverse function…
    atan() and atan2()
    Similarly, using atan(), we can undo tan(θ) to get θ. But, unlike asin() and acos(), if we graph it, we’ll notice a big difference:
    This time it is defined on the whole number line! This makes sense since tan() can return any number between -Infinity and Infinity, so atan() is defined in that domain.
atan() can take any number between -Infinity and Infinity and returns angles between -90° and 90°.
This makes atan() incredibly useful to find angles in all kinds of situations, and a lot more versatile than acos() and asin(). That’s why we’ll be using it, along with atan2(), going forward. Although don’t worry about atan2() for now, we’ll get to it later.
    Finding the perfect angle
In the last article, we worked a lot with triangles. Specifically, we used the tan() function to find one of the sides of a right-angled triangle from the following relationship:
tan(θ) = opposite / adjacent
To make it work, we needed to know one of its sides and the angle, and by solving the equation, we would get the other side. However, in most cases, we do know the lengths of the triangle’s sides and what we are actually looking for is the angle. In that case, the last equation becomes:
θ = atan(opposite / adjacent)
    Triangles and Conic Gradients
    Finding the angle comes in handy in lots of cases, like in gradients, for instance. In a linear gradient, for example, if we want it to go from corner to corner, we’ll have to match the gradient’s angle depending on the element’s dimensions. Otherwise, with a fixed angle, the gradient won’t change if the element gets resized:
.gradient {
  background: repeating-linear-gradient(ghostwhite 0px 25px, darkslategray 25px 50px);
}
CodePen Embed Fallback
This may be the desired look, but I think that more often than not, you want it to match the element’s dimensions.
    Using linear-gradient(), we can easily solve this using to top right or to bottom left values for the angle, which automatically sets the angle so the gradient goes from corner to corner.
.gradient {
  background: repeating-linear-gradient(to top right, ghostwhite 0px 25px, darkslategray 25px 50px);
}
CodePen Embed Fallback
However, we don’t have that type of syntax for other gradients, like a conic-gradient(). For example, the next conic gradient has a fixed angle and won’t change upon resizing the element.
.gradient {
  background: conic-gradient(from 45deg, #84a59d 180deg, #f28482 180deg);
}
CodePen Embed Fallback
Luckily, we can fix this using atan()! We can look at the gradient as a right-angled triangle, where the width is the adjacent side and the height the opposite side:
    Then, we can get the angle using this formula:
.gradient {
  --angle: atan(var(--height-gradient) / var(--width-gradient));
}
Since conic-gradient() starts from the top edge — conic-gradient(from 0deg) — we’ll have to shift it by 90deg to make it work.
.gradient {
  --rotation: calc(90deg - var(--angle));
  background: conic-gradient(from var(--rotation), #84a59d 180deg, #f28482 180deg);
}
CodePen Embed Fallback
You may be wondering: can’t we do that with a linear gradient? And the answer is, yes! But this was just an example to showcase atan(). Let’s move on to more interesting stuff that’s unique to conic gradients.
    I got the next example from Ana Tudor’s post on “Variable Aspect Ratio Card With Conic Gradients”:
CodePen Embed Fallback
Pretty cool, right? Sadly, Ana’s post is from 2021, a time when trigonometric functions were specced out but not implemented. As she mentions in her article, it wasn’t possible to create these gradients using atan(). Luckily, we live in the future! Let’s see how simple they become with trigonometry and CSS.
    We’ll use two conic gradients, each of them covering half of the card’s background.
    To save time, I’ll gloss over exactly how to make the original gradient, so here is a quick little step-by-step guide on how to make one of those gradients in a square-shaped element:
    CodePen Embed Fallback Since we’re working with a perfect square, we can fix the --angle and --rotation to be 45deg, but for a general use case, each of the conic-gradients would look like this in CSS:
.gradient {
  background:
    /* one below */
    conic-gradient(
      from var(--rotation) at bottom left,
      #b9eee1 calc(var(--angle) * 1 / 3),
      #79d3be calc(var(--angle) * 1 / 3) calc(var(--angle) * 2 / 3),
      #39b89a calc(var(--angle) * 2 / 3) calc(var(--angle) * 3 / 3),
      transparent var(--angle)
    ),
    /* one above */
    conic-gradient(
      from calc(var(--rotation) + 180deg) at top right,
      #fec9d7 calc(var(--angle) * 1 / 3),
      #ff91ad calc(var(--angle) * 1 / 3) calc(var(--angle) * 2 / 3),
      #ff5883 calc(var(--angle) * 2 / 3) calc(var(--angle) * 3 / 3),
      transparent var(--angle)
    );
}
And we can get those --angle and --rotation variables the same way we did earlier — using atan(), of course!
.gradient {
  --angle: atan(var(--height-gradient) / var(--width-gradient));
  --rotation: calc(90deg - var(--angle));
}
CodePen Embed Fallback
What about atan2()?
The last example was all about atan(), but I told you we would also look at the atan2() function. With atan(), we get the angle when we divide the opposite side by the adjacent side and pass that value as the argument. On the flip side, atan2() takes them as separate arguments:
atan(opposite / adjacent)
atan2(opposite, adjacent)
What’s the difference? To explain, let’s backtrack a bit.
    We used atan() in the context of triangles, meaning that the adjacent and opposite sides were always positive. This may seem like an obvious thing since lengths are always positive, but we won’t always work with lengths.
Imagine we are in an x-y plane and pick a random point on the graph. Just by looking at its position, we can know its x and y coordinates, which can be negative or positive. What if we wanted its angle instead? Measuring it, of course, from the positive x-axis.
Well, remember from the last article in this series that we can also define tan() as the quotient between sin() and cos():
tan(θ) = sin(θ) / cos(θ)
Also recall that when we measure the angle from the positive x-axis, then sin() returns the y-coordinate and cos() returns the x-coordinate. So, the last formula becomes:
tan(θ) = y / x
And applying atan(), we can directly get the angle!
θ = atan(y / x)
    This formula has one problem, though. It should work for any point in the x-y plane, and since both x and y can be negative, we can confuse some points. Since we are dividing the y-coordinate by the x-coordinate, in the eyes of atan(), the negative y-coordinate looks the same as the negative x-coordinate. And if both coordinates are negative, it would look the same as if both were positive.
    To compensate for this, we have atan2(), and since it takes the y-coordinate and x-coordinate as separate arguments, it’s smart enough to return the angle everywhere in the plane!
    Let’s see how we can put it to practical use.
    Following the mouse
    Using atan2(), we can make elements react to the mouse’s position. Why would we want to do that? Meet my friend Helpy, Clippy’s uglier brother from Microsoft.
    CodePen Embed Fallback Helpy wants to always be looking at the user’s mouse, and luckily, we can help him using atan2(). I won’t go into too much detail about how Helpy is made, just know that his eyes are two pseudo-elements:
.helpy::before,
.helpy::after {
  /* eye styling */
}
To help Helpy, we first need to let CSS know the mouse’s current x-y coordinates. And while I may not like using JavaScript here, it’s needed in order to pass the mouse coordinates to CSS as two custom properties that we’ll call --m-x and --m-y.
const body = document.querySelector("body");

// listen for the mouse pointer
body.addEventListener("pointermove", (event) => {
  // set variables for the pointer's current coordinates
  let x = event.clientX;
  let y = event.clientY;
  // assign those coordinates to CSS custom properties in pixel units
  body.style.setProperty("--m-x", `${Math.round(x)}px`);
  body.style.setProperty("--m-y", `${Math.round(y)}px`);
});

Helpy is currently looking away from the content, so we’ll first move his eyes so they align with the positive x-axis, i.e., to the right.
.helpy::before,
.helpy::after {
  rotate: 135deg;
}
Once there, we can use atan2() to find the exact angle Helpy has to turn so he sees the user’s mouse. Since Helpy is positioned at the top-left corner of the page, and the x and y coordinates are measured from there, it’s time to plug those coordinates into our function: atan2(var(--m-y), var(--m-x)).
.helpy::before,
.helpy::after {
  /* rotate the eyes by its starting position, plus the atan2 of the coordinates */
  rotate: calc(135deg + atan2(var(--m-y), var(--m-x)));
}
We can make one last improvement. You’ll notice that if the mouse goes on the little gap behind Helpy, he is unable to look at the pointer. This happens because we are measuring the coordinates exactly from the top-left corner, and Helpy is positioned a little bit away from that.
    To fix this, we can translate the origin of the coordinate system directly on Helpy by subtracting the padding and half its size:
    Which looks like this in CSS:
.helpy::before,
.helpy::after {
  rotate: calc(
    135deg +
    atan2(
      var(--m-y) - var(--spacing) - var(--helpy-size) / 2,
      var(--m-x) - var(--spacing) - var(--helpy-size) / 2
    )
  );
}
CodePen Embed Fallback
This is a somewhat minor improvement, but moving the coordinate origin will be vital if we want to place Helpy in any other place on the screen.
    Extra: Getting the viewport (and anything) in numbers
    I can’t finish this series without mentioning a trick to typecast different units into simple numbers using atan2() and tan(). It isn’t directly related to trigonometry but it’s still super useful. It was first described amazingly by Jane Ori in 2023, and goes as follows.
    If we want to get the viewport as an integer, then we can…
@property --100vw {
  syntax: "<length>";
  initial-value: 0px;
  inherits: false;
}

:root {
  --100vw: 100vw;
  --int-width: calc(10000 * tan(atan2(var(--100vw), 10000px)));
}

And now: the --int-width variable holds the viewport width as an integer. This looks like magic, so I really recommend reading Jane Ori’s post to understand it. I also have an article using it to create animations as the viewport is resized!
    CodePen Embed Fallback What about reciprocals?
    I noticed that we are still lacking the reciprocals for each trigonometric function. The reciprocals are merely 1 divided by the function, so there’s a total of three of them:
The secant, or sec(x), is the reciprocal of cos(x), so it’s 1 / cos(x).
The cosecant, or csc(x), is the reciprocal of sin(x), so it’s 1 / sin(x).
The cotangent, or cot(x), is the reciprocal of tan(x), so it’s 1 / tan(x).
The beauty of sin(), cos() and tan() and their reciprocals is that they all live in the unit circle we’ve looked at in other articles in this series. I decided to put everything together in the following demo that shows all of the trigonometric functions covered on the same unit circle:
    CodePen Embed Fallback That’s it!
Welp, that’s it! I hope you learned and had fun with this series just as much as I enjoyed writing it. And thanks so much to those of you who have shared your own demos. I’ll be rounding them up on my Bluesky page.
    CSS Trigonometric Functions: The “Most Hated” CSS Feature
sin() and cos()
tan()
asin(), acos(), atan() and atan2() (You are here!)
The “Most Hated” CSS Feature: asin(), acos(), atan() and atan2() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Sourav Rudra
    Mon, 17 Nov 2025 12:49:10 GMT

    I rely heavily on GNOME extensions for my daily workflow. From Dash to Dock for quick app launching to Tiling Shell to effortlessly manage app windows while working. These basically turn the vanilla GNOME experience into something that truly fits my needs.
    While browsing through the latest This Week in GNOME post, I stumbled upon something interesting. A developer announced Veil, describing it as a cleaner and more modern way than Hide Items to manage applets in the GNOME panel.
    It sounded promising. So I decided to take it for a spin and see what it brings to the table.
    Veil: Overview ⭐

    Veil comes from Dagim G. Astatkie, a software professional based out of Ethiopia. This extension addresses a common frustration among GNOME users. If you are a power user, then your top panel can quickly fill up with system indicators and status icons.
    It gets messy fast, and Veil gives you control over what stays visible and what gets hidden away.
It offers many handy features, like auto-hiding items on a timer, slick animations when showing or hiding items, and the ability to selectively choose which panel icons stay visible.
    Initial Impressions 👨‍💻
    I installed it using Extension Manager on a Ubuntu 25.10 system, and I found it straightforward from start to finish. First, I enabled a few other extensions to properly test how Veil handles multiple panel items. Once that was done, everything clicked into place.
    A single click on the sharp-looking arrow at the top right of the panel did the trick. My network stats indicator disappeared. The Tiling Shell layout switcher vanished. System Monitor went away too. A clean top panel, just like that.
    Veil's General and Panel Items page.
    If I wanted to tweak things further, I could easily do so by heading into the "General" tab of the extension settings. There I got to play around with options like save state, default visibility, changing the arrow icon to something else for open and close actions, configuring auto-hide timing, and deciding which items stay visible at all times.
    This level of freedom should be enough for most people who want a clean top panel and some peace of mind.
    📥 Get Veil
    If you already have GNOME extensions set up on your system, installation is straightforward. Visit the extensions website or open Extension Manager and search for "Veil" by author "JD".
    If you haven't configured extensions yet, our complete guide on GNOME shell extensions will walk you through the entire setup process. The source code for Veil lives on GitHub for those interested in contributing or building from source.
Veil
Suggested Read 📖
    How to Use GNOME Shell Extensions [Complete Guide]Step-by-step detailed guide to show you how to install GNOME Shell Extensions manually or easily via a browser.It's FOSSAbhishek Prakash
  12. by: Sourav Rudra
    Mon, 17 Nov 2025 09:30:47 GMT

    Snap Inc., the company behind Snapchat, has open-sourced Valdi, a cross-platform mobile UI framework. The social media company typically keeps its technology in-house, but this marks a surprising move into open source territory.
While there was no dedicated announcement for this on their news portal, The New Stack was the first to report on it; I am assuming they received a press release.
    Anyhow, let's dive into this interesting development.
    Valdi is Now Open Source
Valdi is now available on GitHub under the MIT license. The framework has powered Snapchat's production features for eight years. With the MIT license in place, developers can use, modify, and distribute the code freely, with no restrictions on commercial use.
    Valdi compiles TypeScript code directly into native views for Android, iOS, and macOS. It does not use web views or JavaScript bridges. The framework claims 2x faster time-to-first-render and uses 1/4 the memory compared to competitors. These benchmarks were shared during Valdi's initial beta phase, when Snapchat first announced Valdi in August 2025 on Hacker News.
    Back then, the company sought beta testers and required NDAs for private repository access. The initial beta lasted three months before the public release, and Snapchat seems to have used this window to refine documentation and developer tooling.
    The current repository includes instant hot reload, full VSCode debugging support, and automatic view recycling. It also features a C++ layout engine and FlexBox layout system support, and developers can embed Valdi components in existing native apps.
    You can visit Valdi's GitHub repository for access to the source code and the documentation. There is also a Discord server for community support and developer discussions.
Valdi
Not Everyone is Convinced
    Developer reception has been mixed. Reddit netizens are questioning Valdi's advantages over React Native. One of them, SamsungProgrammer, asked:
    To which another redditor, idkhowtocallmyacc, responded with skepticism. They pointed out that React Native's new architecture has also eliminated JavaScript bridges, potentially negating Valdi's main selling point.
And that does make sense, to be honest. Ending that comment thread, a redditor called balder1993 responded by saying that:
    Only time will tell if Valdi can escape Snapchat's shadow and find a broader developer audience.
  13. by: Neville Ondara
    Sun, 16 Nov 2025 00:47:33 GMT

If you’re like me, you probably grew up with the classic Linux command-line tools such as ls, cat, and du. These commands have carried me through countless scripts and late-night debugging sessions.
    Here's the thing. While these tools do their job, they can be plain looking and difficult to use for certain tasks.
Take the du command, for example. It shows the disk usage on the system, but use it without any options and the output is a mess.
    Terminals today support color, Unicode icons, live previews, all things our old favorites weren’t designed for. And the Rust revolution has quietly reshaped the command-line landscape. So there is a wave of Rust-based CLI tools that don’t just replicate the traditional ones; they modernize them. They’re fast, (claim to be) memory-safe, polished, and often come with thoughtful UX touches that make daily terminal work noticeably smoother.
    I’ve been tinkering with these tools lately, and I thought it’d be fun to share a list of my favorites.
🚧If you are a sysadmin managing servers, you should not rely on these alternatives. You might not get these fancy new tools on every system, and installing them on every Linux server you log into is not feasible. The alternative tools are good when you are using a personal computer and have full control over the development environment.
exa: Alternative to ls
    If there’s one tool that convinced me Rust CLI apps were worth exploring, it’s exa. It feels familiar but adds what the original ls has always lacked: sensible colors, icons, and Git awareness.
    Highlights:
Beautiful color themes
Git integration
Optional tree view
Clearer permissions formatting
Installation:
cargo install exa
Usage:
exa -al --git
You can instantly see which files are new, which are modified, and which are pure chaos.
    bat: Alternative to cat
cat is great for quick checks, but reading config files or code in raw plain text gets tedious. bat fixes that with syntax highlighting, Git integration, line numbers, and automatic paging, without losing cat compatibility.
    Installation:
cargo install bat
Example usage:
bat ~/.bashrc
It’s basically cat with a glow-up ✨. When I first used it, I found myself opening random config files just to admire the colors.
    dust: Alternative to du
    du always dumps a mountain of numbers on your screen. dust turns that into a compact, visual representation of disk usage that you can parse at a glance.
    It’s instantly more readable than the old command. The output is clean, easy to parse, and shows relative sizes visually. I swear my hard drive has never looked this friendly. 😎
    Install dust:
cargo install du-dust
Usage:
dust
fd: Alternative to find
    Remember spending 10 minutes crafting the perfect find command? Yeah… me too. fd makes this easier. It has simple syntax, ignores hidden files by default and it is super-fast.
    Install fd:
cargo install fd-find
Example:
fd main.rs
fd fossnews
Its speed and simplicity make find feel outdated. After switching, you’ll rarely look back.
    ripgrep (rg): Alternative to grep
    Rust-based ripgrep has become a must-have for developers. It’s dramatically faster and gives clear, highlighted search results.
    Install ripgrep:
cargo install ripgrep
Example usage:
rg TODO src/
It respects your .gitignore and outputs results with color highlighting. I use it every day for searching TODOs and bug reports.
    duf: Alternative to df
    df is useful, but let’s be honest: the output looks like something printed from a 90s dot-matrix printer😆. duf fixes that. It takes the same disk-usage information and turns it into a clean, colorful, structured table you can actually understand at a glance.
    duf gives you a clean dashboard with grouped filesystems, readable sizes, clear partition labels, and a quick view of what’s healthy vs. what’s nearly full.
    Installation:
sudo apt install duf
Usage:
duf
procs: Alternative to ps
    While ps aux works, it can feel visually overwhelming. procs gives you a more structured, color-coded view of your system processes, letting you quickly see what’s running without the need to launch a full TUI tool like htop.
    It’s like a personal dashboard for your processes. I use it every day to keep tabs on what’s running without feeling buried in a wall of text.
    Installation:
cargo install procs
Usage:
procs
tldr: Alternative to man
    tldr makes navigating manual pages painless by offering clear examples, highlighting essential flags, and keeping things short (no scrolling forever).
    Installation:
cargo install tldr
Usage:
tldr tar
Honestly, I wish this existed when I was learning Linux; it's a lifesaver for newbies and veterans alike.
    broot: Alternative to tree
If you’ve ever used tree, you know it can quickly become overwhelming in large directories. broot upgrades the concept: it lets you navigate directories interactively, collapse or expand folders on the fly, and search as you go.
    Installation:
cargo install broot
Usage:
broot
I’ve ditched my old ls -R habit entirely; broot makes exploring directories feel interactive and satisfying, turning a messy filesystem into something you can actually enjoy navigating.
    zoxide: Alternative to cd
    How many times have you typed cd ../../../../some/long/path? Too many, right? z (or zoxide) solves that by tracking your most visited directories and letting you jump to them with a single command, saving your fingers and making navigation effortless.
    Installation:
cargo install zoxide
You also need to initialize it in your shell:
# Bash
eval "$(zoxide init bash)"
# Zsh
eval "$(zoxide init zsh)"
# Fish
zoxide init fish | source
Usage:
z code
It keeps track of your frequently used directories and lets you jump to them instantly.
    lsd: Alternative to ls
    If you’re tired of the plain, monochrome output of ls, lsd is here to make your directory listings not just readable, but enjoyable. With built-in icons and vibrant colors, it instantly helps you distinguish between files, directories, and executables at a glance.
    Installation:
cargo install lsd
You can run it just like a normal ls command:
lsd -la
lsd organizes information clearly and highlights key file attributes, making navigation faster and more intuitive.
    bottom: Alternative to top
    The classic top command shows system usage, but let’s face it, it can feel like you’re looking at a terminal snapshot from 1995 😆. bottom (or btm) brings a modern, clean, and highly visual experience to monitoring your system. It provides:
Color-coded CPU, memory, and disk usage
Real-time graphs directly in the terminal
An organized layout that’s easy to read and navigate
Installation:
cargo install bottom
You can launch it simply with:
    btm Once you start using bottom, it’s hard to go back. Watching CPU spikes, memory usage, and disk activity while compiling Rust projects feels strangely satisfying. It’s both functional and fun, giving you the insights you need without the clutter of older tools.
    hyperfine: Alternative to time and other benchmarking commands
    Ever wondered which of your commands is truly the fastest? Stop guessing and start measuring with hyperfine. This Rust-based benchmarking tool makes it effortless to compare commands side by side.
    hyperfine runs each command multiple times, calculates averages, and gives you a clear, color-coded comparison of execution times. Beyond simple comparisons, it also supports warm-up runs, statistical analysis, and custom command setups, making it a powerful addition to any developer’s toolkit.
    Installation:
cargo install hyperfine
Usage example:
hyperfine "exa -al" "ls -al"
Watching exa obliterate ls in mere milliseconds is oddly satisfying ⚡. If you love optimization, efficiency, and a little nerdy satisfaction, hyperfine is your new best friend.
    xplr: Alternative to nnn
    Now, I don't know if I can call nnn a classic Linux tool but I liked xplr so much that I decided to include it here.
    xplr takes the idea of a terminal file explorer to the next level. If you loved broot, xplr will blow your mind with these features:
Navigate directories using arrow keys or Vim-style bindings
Preview files directly inside the terminal
Launch commands on files without leaving the app
Fully customizable layouts and keybindings for power users
Installation:
cargo install xplr
Usage:
xplr
Wrapping Up
Switching to new commands might feel like extra effort at first, but Rust-based CLI tools are more than just a trend: they’re fast, modern, and designed to make your workflow enjoyable.
    They handle colors, syntax highlighting, and Git integration right out of the box. They save keystrokes, reduce frustration, and make complex tasks simpler. They make your terminal feel alive and engaging. On top of that, using them makes you look extra cool in front of fellow Linux nerds. Trust me, it’s a subtle flex 💪
    Start small, maybe install exa and bat first, and gradually expand your toolkit. Soon, your terminal will feel futuristic, your workflow smoother, and your projects easier to manage.
  14. by: Ryan Trimble
    Fri, 14 Nov 2025 15:32:50 +0000

    A few weeks ago, Quiet UI made the rounds when it was released as an open source user interface library, built with JavaScript web components. I had the opportunity to check out the documentation and it seemed like a solid library. I’m always super excited to see more options for web components out in the wild.
Unfortunately, before we even had a chance to cover it here at CSS-Tricks, Quiet UI disappeared. When visiting the Quiet UI website, there is a simple statement:
    The repository for Quiet UI is no longer available on Quiet UI’s GitHub, and its social accounts seem to have been removed as well.
    The creator, Cory LaViska, is a veteran of UI libraries and most known for work on Shoelace. Shoelace joined Font Awesome in 2022 and was rebranded as Web Awesome. The latest version of Web Awesome was released around the same time Quiet UI was originally announced.
    According to the Quiet UI site, Cory will be continuing to work on it as a personal creative outlet, but hopefully we’ll be able to see what he’s cooking up again, someday. In the meantime, you can get a really good taste of what the project is/was all about in Dave Rupert’s fantastic write-up.
    Quiet UI Came and Went, Quiet as a Mouse originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  15. by: Sourav Rudra
    Fri, 14 Nov 2025 13:37:14 GMT

    Firefox has been pushing AI features for a while now. Over the past year, they've added AI chatbots in the sidebar, automatic alt text generation, and AI-enhanced tab grouping. It is basically their way of keeping up with Chrome and Edge, both of which have gone all-in on AI.
    Of course not everyone is thrilled about AI creeping into their web browsers, and Mozilla (the ones behind Firefox) seems to understand that. Every AI feature in Firefox is opt-in. You can keep using the browser as you always have, or flip on AI tools when you actually need them.
    Now, they are taking this approach a step further with something called AI Window.
    Firefox AI Window: What's Cooking?
    Mozilla has announced it's working on AI Window, a new browsing mode that comes with a built-in AI assistant. Think of it as a third option alongside the Classic browsing mode and Private Window mode.
    Before you get angry, know that it will be fully optional. Switch to AI Window when you want help, or just ignore it entirely. Try it, hate it, disable it. Mozilla's whole pitch is that you stay in control.
    On the transparency front, they are making three commitments:
- A fully opt-in experience.
- Features that protect your choice.
- More transparency around how your data is used.

Why bother with all this, you ask? Mozilla sees AI as part of the web's future and wants to shape it their way. They figure ignoring AI while it reshapes the web doesn't help anyone, so they want to steer it toward user control rather than watch browsers from AI companies (read: Big Tech) lock people in.
    Ajit Varma, the Vice President and Head of Product at Firefox, put it like this:
The feature isn't live yet. Mozilla is building it "in the open" and wants feedback to shape how it turns out. If you want early access, there's a waitlist at firefox.com/ai to get updates and first dibs on testing.
    Suggested Read 📖
Exploring Firefox Tab Groups: Has Mozilla Redeemed Itself?
Firefox’s Tab Groups help you organize tabs efficiently. But how efficiently? Let me share my experience.
It's FOSS | Sourav Rudra
  16. by: Abhishek Prakash
    Fri, 14 Nov 2025 17:08:50 +0530

    Feels like 2025 is ending sooner than expected. I know that's not the case but it just feels like that 😄
    On that note, we plan to publish at least two more courses for you before the year ends. They are likely to be on Terraform and Kubernetes. I am also planning a microcourse on 'automated backups with cron and rsync'. These classic Linux tools are always reliable.
In the meantime, we are also working on expanding our collection of hands-on practice labs so that you can improve your skills by doing.
    Lots of things planned. Stay tuned, stay subscribed.
    Here's why you should get LHB Pro membership:
    ✅ Get access to Linux for DevOps, Docker, Ansible, Systemd and other text courses
    ✅ Get access to Kubernetes Operator and SSH video course
    ✅ Get 6 premium books on Bash, Linux and Ansible for free
    ✅ No ads on the website
    Get Pro Membership  
     
      This post is for subscribers only
    Subscribe now Already have an account? Sign in
  17. by: Sourav Rudra
    Fri, 14 Nov 2025 10:44:14 GMT

Ubuntu is Canonical's flagship Linux distribution that powers a significant portion of today's information technology infrastructure. It comes in two release types: interim releases with nine months of support, and long-term support (LTS) releases with five years of standard support that is extensible via Ubuntu Pro.
    If you didn't know, Canonical introduced Ubuntu Pro in 2022 as a subscription service that extends LTS coverage beyond the standard five years. It includes Expanded Security Maintenance (ESM), which provides an additional five years of security patching, bringing the total coverage to 10 years for LTS releases.
    Similarly, back in 2024, Canonical launched the Legacy add-on for Ubuntu Pro, which initially provided two additional years of support beyond ESM, bringing total coverage to 12 years.
    And now they have announced an expansion that brings 15 years of support for LTS releases.
    15 Years of Support Sounds Great
    The expanded Legacy add-on now offers five additional years of support after the 10-year ESM window ends. This means Ubuntu LTS releases receive:
- 5 years of standard security maintenance.
- 5 years of Expanded Security Maintenance.
- 5 years of Legacy add-on support.

Ubuntu 14.04 LTS, which entered the Legacy support phase in April 2024, will now be maintained until April 2029. This gives it a full 15-year lifecycle from its initial release. The Legacy add-on kicks in after the first 10 years and costs 50% more than the standard Ubuntu Pro subscription.
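Those phases stack neatly on top of the release date. As a quick sanity check of the arithmetic, here is a small GNU date loop (just an illustrative sketch, not a Canonical tool) that maps Ubuntu 14.04's release date to the end of each support phase:

```shell
# Sketch: compute when each support phase of an Ubuntu LTS release ends.
# Uses GNU date; the release date below is Ubuntu 14.04's (April 2014).
release="2014-04-01"
for years in 5 10 15; do
  # 5 = standard support, 10 = ESM, 15 = Legacy add-on
  date -d "$release +$years years" +"%Y-%m"
done
# Prints: 2019-04, 2024-04, 2029-04
```

The last line matches the April 2029 end-of-life date Canonical now promises for 14.04.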
All LTS releases from 16.04 onward, including 18.04, 20.04, and beyond, are eligible for the same 15-year coverage when they reach the Legacy phase.
    Get Ubuntu Pro (Legacy add-on)
    The Legacy add-on becomes available after an LTS release completes 10 years of coverage, and as I mentioned earlier, costs 50% more than the standard Ubuntu Pro subscription.
    To activate the Legacy add-on support, Canonical asks users to contact their sales team or reach out to their assigned account manager.
Ubuntu Pro (Legacy add-on)

Suggested Read 📖
IBM Joins OpenSearch Software Foundation to Advance AI-Powered Search and RAG
Pledges enterprise-grade enhancements as Premier Member.
It's FOSS | Sourav Rudra
  18. by: Ani
    Fri, 14 Nov 2025 09:53:36 +0000

    Steve Jobs famously said, “Design is not just what it looks like and feels like — design is how it works.” Design is so much more than the visual layer; it goes far deeper than that.
    About me
    I’m Petra Tarkkala, and I’m the Head of Design at Tietoevry Create Finland. I have 25 years of experience in service design, UX, and digital transformation, making me one of the pioneers in digital design in Finland. When I started, the field of service design was still quite small, and it’s been inspiring to witness its growth. In many ways, I’ve evolved together with the industry.
    My team has about 20 designers in Finland and collaborates with international design teams across the Nordics, Central Europe, and the US. Most of my work involves consultative projects, which are mainly in public services and large enterprises based in Finland.
    My approach is very hands-on and grounded in understanding real user needs. We always base our work on insights, so it’s essential for me first to understand the actual context and what users truly need before trying to solve any problem.
    About my role
    As Head of Design, I lead our design team, grow our competence, recruit new talent, and help shape our project portfolio. I also stay hands-on with design projects. This keeps my skills sharp and my thinking fresh. Working directly with clients not only inspires new ideas, but also makes me a better design leader.
    Service design is fundamentally about understanding people and creating services that are accessible, intuitive, and genuinely valuable, whether that means digital solutions, better face-to-face experiences, or entirely new ways of working. The process always starts with a deep dive into user and business needs, followed by ideation, prototyping, and testing with real users. It’s iterative: we refine and test concepts until we find what truly works. In a nutshell, we co-create solutions that make a positive difference for both organizations and the people they serve. 
    For example, in healthcare projects, service design might mean ensuring digital tools support, not replace, human interaction, or making sure vulnerable groups aren’t left behind. In Finland, service design can help make limited resources go further by tailoring services to different needs: some people are happy with digital consultations, while others—like many older adults—prefer face-to-face encounters. The key is designing with empathy and flexibility, so everyone gets the support they need.
    Petra Tarkkala, Head of Design, Tietoevry Create
    The beginning of my career

    I was always quite good at math and strong in the natural sciences, and I was also very creative. Still, I didn’t have a clear idea of what I wanted to do. I didn’t dream of being a doctor or a teacher. I just knew I wanted to do something meaningful that would let me use my strengths.
    Since I had studied a lot of math and physics in high school, I decided to apply to the Helsinki University of Technology (now known as Aalto University) to study computer science. I got accepted right away, in 1996.
    Building my own path
    I feel incredibly lucky to have followed this path. I could have never planned it. Back in high school, this kind of career didn’t even exist. That’s something I often tell young people, including my own kids: don’t stress too much about deciding exactly what you want to be, because your future job might not even exist yet.
    At the time, I just believed that having a master’s degree would open doors, and I truly got lucky. I made my choices somewhat randomly, but by following my strengths, I found work that motivates me and makes me happy.
    Working at Tietoevry

    I joined Tietoevry in 2018, and I’ve genuinely loved the journey ever since. At heart, I’m a creative problem-solver—I thrive at the intersection of business, design, and technology, and I honestly can’t imagine doing anything else. With my technical background, creativity, and strong sense of user empathy, my role fits me perfectly. I also value meaningful work: helping businesses succeed while creating real impact. I feel lucky that it’s been so easy to balance my work with my personal life.
    The value of AI
    AI enables us to focus on more meaningful and valuable work by automating the mundane tasks. AI frees up time and resources. For example, previously, part of our project’s budget had to be used for routine tasks, such as transcribing user interviews. Now, AI tools can generate transcripts for us and even help identify key insights from those interviews.
I use AI as a sparring partner. When I need to produce material for a client or develop something for a project, I check AI's findings, compare them with my own, and then create a synthesis. It's like having a very smart colleague who is always available and provides valuable input, but one you can't trust 100%.
    Keeping myself motivated
    As a consultant, receiving genuine gratitude from clients at the end of a challenging design project is highly motivating. Another key source of motivation for me is the community I work with. My team is fun, energetic, and truly passionate about what we do. What motivates us is the belief that our work matters, that we’re solving real problems and making a difference. Being surrounded by people who care deeply about the impact of their work is incredibly motivating.
    My advice to women in tech
I think it is especially important for women in tech to remember that we should be bold in our ideas and confident in our abilities. If we have the skills and the foundation, we shouldn’t wait to be guided; we should step forward and take the lead ourselves. I encourage my team to be proactive and speak up. I often remind them: “Don’t wait for permission to lead — just start leading.” Design is not always well understood; being clear, assertive, and confident is necessary to move ideas forward.
    My favourite quote
    Steve Jobs famously said, “Design is not just what it looks like and feels like — design is how it works.”
    Design is a powerful tool for change.  Design is not just about making things look good—it’s about making things work better for people, systems, and the planet. I believe in creativity as a force for transformation, and I’m always looking for ways to bring creative problem solving and user empathy into the work I do.
    The post Role Model Blog: Petra Tarkkala, Tietoevry Create first appeared on Women in Tech Finland.
  19. by: Hangga Aji Sayekti
    Fri, 14 Nov 2025 07:49:11 +0530

    Did you know that many security breaches happen through assets companies didn't even know they had? Subdomains like staging.company.com or test.api.company.com are frequently overlooked yet can expose your entire infrastructure.
    OWASP Amass solves this by automatically discovering all your subdomains, giving you a complete picture of your attack surface. In this guide, we'll show you how to use it like a pro.
    What is OWASP Amass?
    OWASP Amass is an open-source tool designed for in-depth Attack Surface Mapping and Asset Discovery. In simpler terms, it's a subdomain enumeration powerhouse. It doesn't just use one method; it combines data from over 80 different sources, including:
- Certificate Transparency Logs: It looks at public records of SSL certificates issued for a domain.
- Search Engines: It scrapes results from Google, Bing, and others.
- DNS Databases: It queries massive DNS data archives.
- Brute Forcing: It intelligently guesses common subdomain names.

The result is a comprehensive list of subdomains you might not have even known existed.
📋 A crucial reminder: Only use Amass on domains you own or have explicit permission to test. Unauthorized scanning can be considered hostile and may violate terms of service or laws. vulnweb.com is a safe and legal playground for this purpose.

Step 1: Installing OWASP Amass
    The easiest way to install Amass on most Linux distributions is via a package manager.
Amass comes preinstalled on Kali Linux, so if you are on Kali you can skip installation and jump straight to the enumeration.
    For Debian/Ubuntu-based systems:
sudo apt install amass

To verify your installation, run:

amass -version

If it returns a version number, you're all set!
    Understanding the Basic Syntax of Amass
    The amass command is powerful because of its various flags and options. Here's a quick reference table for the flags we'll use in this guide:
Flag / Option: Description (Example)

- enum: The subcommand for subdomain enumeration (example: amass enum)
- -d: Specifies the target domain; required (example: -d vulnweb.com)
- -passive: Uses only passive data sources, no direct DNS queries (example: -passive)
- -brute: Performs a brute-force attack using wordlists (example: -brute)
- -o: Saves the results to a specified file (example: -o results.txt)
- -json: Saves detailed results in JSON format (example: -json output.json)
- -list: Shows the data sources used in enumeration (example: amass enum -list)
- -help: Shows the help menu for the enum subcommand (example: amass enum -help)

Step 2: Your First Subdomain Hunt: Passive Reconnaissance
    Let's start with the safest and most common method: passive reconnaissance. This means Amass will only query its numerous data sources. It won't send any traffic directly to the target's servers, making it stealthy and non-intrusive.
    For this tutorial, we'll use vulnweb.com, a site intentionally created for security testing.
    Open your terminal and type:
amass enum -passive -d vulnweb.com

Let's break this down with our new syntax knowledge:

- enum: This is the subcommand for enumeration (discovery).
- -passive: This flag tells Amass to stick to passive methods.
- -d vulnweb.com: Specifies our target domain.

Within seconds, you'll see a list of subdomains start to populate your terminal. For vulnweb.com, you should see entries like testphp.vulnweb.com, testasp.vulnweb.com, and testhtml5.vulnweb.com.
    This is your initial map! You've just discovered multiple "entrances" to the vulnweb.com infrastructure.
    Step 3: Digging Deeper: Active Reconnaissance and Brute Forcing
    Passive mode is great, but sometimes you need to be more thorough. This is where active reconnaissance comes in. It involves directly interacting with the target's DNS servers. This method can be louder but often reveals subdomains that aren't listed in any public database.
    To perform an active DNS enumeration, simply remove the -passive flag:
amass enum -d vulnweb.com

In my test, both runs surfaced the same subdomains; the only difference was the order in which the results were printed. As explained above, -passive tells Amass to gather information quietly from public sources (certificate logs, public DNS records, search engines) without touching the target, while running without it allows noisier, active checks such as direct DNS queries. Here, the public sources already contained everything, so the active run didn't discover anything new.
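If you want to verify this yourself rather than eyeball the terminal, save each run with -o and diff the sorted results. Here is a self-contained sketch using comm; the file names and the sample subdomains are purely illustrative stand-ins for real saved Amass output:

```shell
# Stand-ins for two saved Amass runs, e.g. from:
#   amass enum -passive -d vulnweb.com -o passive.txt
#   amass enum -d vulnweb.com -o active.txt
printf 'testphp.vulnweb.com\ntestasp.vulnweb.com\n' > passive.txt
printf 'testphp.vulnweb.com\ntestasp.vulnweb.com\ndev.vulnweb.com\n' > active.txt

# comm requires sorted input; -13 prints lines unique to the second file,
# i.e. subdomains that only the active run found.
sort -u passive.txt -o passive.txt
sort -u active.txt -o active.txt
comm -13 passive.txt active.txt
# Prints: dev.vulnweb.com
```

If the two runs really found the same things, comm prints nothing at all.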
    Taking it Up a Notch: Brute Forcing
    What about subdomains that are completely hidden? Think dev, staging, ftp, cpanel. Amass can perform a "brute force" attack by trying a massive list of common subdomain names.
    We'll combine this with passive mode to be efficient and respectful.
amass enum -passive -brute -d vulnweb.com

Let Amass complete the enumeration...
hangga@hangga-kali ~ amass enum -passive -brute -d vulnweb.com
vulnweb.com (FQDN) --> ns_record --> ns2.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns3.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns4.eurodns.com (FQDN)
vulnweb.com (FQDN) --> ns_record --> ns1.eurodns.com (FQDN)
ns2.eurodns.com (FQDN) --> a_record --> 104.37.178.107 (IPAddress)
ns2.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b001::107 (IPAddress)
ns3.eurodns.com (FQDN) --> a_record --> 199.167.66.108 (IPAddress)
ns3.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b002::108 (IPAddress)
ns4.eurodns.com (FQDN) --> a_record --> 104.37.178.108 (IPAddress)
ns4.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b001::108 (IPAddress)
ns1.eurodns.com (FQDN) --> a_record --> 199.167.66.107 (IPAddress)
ns1.eurodns.com (FQDN) --> aaaa_record --> 2610:1c8:b002::107 (IPAddress)
rest.vulnweb.com (FQDN) --> a_record --> 18.215.71.186 (IPAddress)
testasp.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
testaspnet.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
localhost.vulnweb.com (FQDN) --> a_record --> 127.0.0.1 (IPAddress)
104.37.176.0/21 (Netblock) --> contains --> 104.37.178.108 (IPAddress)
104.37.176.0/21 (Netblock) --> contains --> 104.37.178.107 (IPAddress)
199.167.64.0/22 (Netblock) --> contains --> 199.167.66.108 (IPAddress)
199.167.64.0/22 (Netblock) --> contains --> 199.167.66.107 (IPAddress)
44.224.0.0/11 (Netblock) --> contains --> 44.238.29.244 (IPAddress)
2610:1c8:b001::/48 (Netblock) --> contains --> 2610:1c8:b001::108 (IPAddress)
2610:1c8:b001::/48 (Netblock) --> contains --> 2610:1c8:b001::107 (IPAddress)
127.0.0.0/8 (Netblock) --> contains --> 127.0.0.1 (IPAddress)
23393 (ASN) --> managed_by --> NUCDN (RIROrganization)
23393 (ASN) --> announces --> 104.37.176.0/21 (Netblock)
23393 (ASN) --> announces --> 199.167.64.0/22 (Netblock)
23393 (ASN) --> managed_by --> NUCDN, US (RIROrganization)
23393 (ASN) --> announces --> 2610:1c8:b001::/48 (Netblock)
16509 (ASN) --> managed_by --> AMAZON-02 - Amazon.com, Inc. (RIROrganization)
16509 (ASN) --> announces --> 44.224.0.0/11 (Netblock)
0 (ASN) --> managed_by --> Reserved Network Address Blocks (RIROrganization)
0 (ASN) --> announces --> 127.0.0.0/8 (Netblock)
2610:1c8:b002::/48 (Netblock) --> contains --> 2610:1c8:b002::108 (IPAddress)
2610:1c8:b002::/48 (Netblock) --> contains --> 2610:1c8:b002::107 (IPAddress)
18.208.0.0/13 (Netblock) --> contains --> 18.215.71.186 (IPAddress)
23393 (ASN) --> announces --> 2610:1c8:b002::/48 (Netblock)
14618 (ASN) --> managed_by --> AMAZON-AES - Amazon.com, Inc. (RIROrganization)
14618 (ASN) --> announces --> 18.208.0.0/13 (Netblock)
The enumeration has finished

Wow! Your Amass scan just uncovered the complete infrastructure blueprint of vulnweb.com!
    The scan revealed not just the obvious subdomains like rest.vulnweb.com and testasp.vulnweb.com, but also uncovered that testaspnet.vulnweb.com shares the same IP address—suggesting shared hosting. Interestingly, it even found localhost.vulnweb.com pointing to 127.0.0.1, which might indicate some misconfiguration.
    Beyond subdomains, Amass mapped out the entire network topology: EuroDNS handling nameservers, with actual services distributed across Amazon AWS and NUCDN cloud infrastructure. This level of detail gives you the complete attack surface in a single scan—perfect for both security assessment and documentation.
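The relationship-style output is great for reading, but sometimes you just want a flat list of hostnames to feed into other tools. A small, self-contained sketch; the sample lines are copied from the scan output above, and amass_output.txt is an arbitrary file name:

```shell
# Sketch: reduce Amass's relationship-style output to a flat list of
# unique subdomains. The sample lines mirror the scan output above.
cat > amass_output.txt <<'EOF'
rest.vulnweb.com (FQDN) --> a_record --> 18.215.71.186 (IPAddress)
testasp.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
testaspnet.vulnweb.com (FQDN) --> a_record --> 44.238.29.244 (IPAddress)
EOF

# Pull out anything that looks like "<name>.vulnweb.com (FQDN)",
# strip the label, and de-duplicate.
grep -oE '[A-Za-z0-9.-]+\.vulnweb\.com \(FQDN\)' amass_output.txt \
  | sed 's/ (FQDN)//' \
  | sort -u
```

On the sample data this prints rest.vulnweb.com, testasp.vulnweb.com, and testaspnet.vulnweb.com, one per line.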
    Ready to dive deeper into any of these findings? Next, to explore Amass's extensive data sources, run:
amass enum -list

This shows you all the available data sources that Amass queries during enumeration.
    Step 4: Getting Detailed Output and Understanding the Results
To get more detailed information about the discovered subdomains, save the results to a text file:

amass enum -passive -d vulnweb.com -o vulnweb_subdomains.txt

Let's make sure the output is saved:

cat vulnweb_subdomains.txt

💡 Action Required: Always export Amass results to a text file. This is critical for pentest documentation.

Final Thoughts
    OWASP Amass is an indispensable tool in your Linux toolkit. It transforms the daunting task of asset discovery from a manual, error-prone process into an automated, comprehensive one. By knowing your entire attack surface—not just subdomains but also infrastructure relationships—you can patch vulnerabilities, close unused access points, and build a much more robust defense.
    So go ahead, fire up that terminal, and start mapping. Your future, more secure self will thank you for it.
  20. by: Sourav Rudra
    Fri, 14 Nov 2025 01:53:05 GMT

    FFmpeg maintainers have publicly criticized Google after its AI tool reported a security bug in code for a 1995 video game.
    The maintainers called the finding "CVE slop" and questioned whether trillion-dollar corporations should use AI to find security issues in volunteer code without providing fixes.
    Unchecked Automation is Not an Answer
So what happened is, Google's AI agent Big Sleep found a bug in FFmpeg's code for decoding the LucasArts Smush codec. The issue affected the first 10-20 frames of Rebel Assault II, a game from 1995.
    If you didn't know, Big Sleep is Google's AI-powered vulnerability detection tool developed by its Project Zero and DeepMind divisions. It is supposed to find security vulnerabilities in software before attackers can exploit them.
    But there's an issue here: under Google's "Reporting Transparency" policy, the tech giant publicly announces it has found a vulnerability within one week of reporting it. A 90-day disclosure clock then starts regardless of whether a patch is available.
    You see the problem now? 🤔
    FFmpeg developers patched the bug but weren't happy about it. They tweeted in late October that "We take security very seriously but at the same time is it really fair that trillion-dollar corporations run AI to find security issues in people's hobby code? Then expect volunteers to fix."
    Beyond that, you have to understand that FFmpeg is an important piece of digital infrastructure that is used in Google Chrome, Firefox, YouTube, VLC, Kodi, and many other platforms.
The project is written almost exclusively by volunteers. Much of the code is in assembly language, which is difficult to work with. This situation highlights the ongoing tension over how corporations use volunteer-maintained open source software that powers their commercial products, while expecting the volunteers to fix any obscure issues that crop up.
    Via: The New Stack
    Suggested Reads 📖
Open Source Infrastructure is Breaking Down Due to Corporate Freeloading
An unprecedented threat looms over open source.
It's FOSS | Sourav Rudra

FFmpeg Receives $100K in Funding from India’s FLOSS/fund Initiative
It is one of the world’s most widely used multimedia frameworks today.
It's FOSS | Sourav Rudra
  21. by: Daniel Schwarz
    Thu, 13 Nov 2025 15:00:20 +0000

The range syntax isn’t a new thing. We’re already able to use it with media queries to query viewport dimensions and resolutions, as well as container size queries to query container dimensions. Being able to use it with container style queries — which we can do starting with Chrome 142 — means that we can compare literal numeric values as well as numeric values tokenized by custom properties or the attr() function.
    In addition, this feature comes to the if() function as well.
    Here’s a quick demo that shows the range syntax being used in both contexts to compare a custom property (--lightness) to a literal value (50%):
#container {
  /* Choose any value 0-100% */
  --lightness: 10%;

  /* Applies it to the background */
  background: hsl(270 100% var(--lightness));

  color: if(
    /* If --lightness is less than 50%, white text */
    style(--lightness < 50%): white;
    /* If --lightness is more than or equal to 50%, black text */
    style(--lightness >= 50%): black
  );

  /* Selects the children */
  * {
    /* Specifically queries parents */
    @container style(--lightness < 50%) { color: white; }
    @container style(--lightness >= 50%) { color: black; }
  }
}

Again, you’ll want Chrome 142 or higher to see this work:
Both methods do the same thing but in slightly different ways.
    Let’s take a closer look.
    Range syntax with custom properties
    In the next demo coming up, I’ve cut out the if() stuff, leaving only the container style queries. What’s happening here is that we’ve created a custom property called --lightness on the #container. Querying the value of an ordinary property isn’t possible, so instead we save it (or a part of it) as a custom property, and then use it to form the HSL-formatted value of the background.
#container {
  /* Choose any value 0-100% */
  --lightness: 10%;

  /* Applies it to the background */
  background: hsl(270 100% var(--lightness));
}

After that we select the container’s children and conditionally declare their color using container style queries. Specifically, if the --lightness property of #container (and, by extension, the background) is less than 50%, we set the color to white. Or, if it’s more than or equal to 50%, we set the color to black.
#container {
  /* etc. */

  /* Selects the children */
  * {
    /* Specifically queries parents */
    @container style(--lightness < 50%) { color: white; }
    @container style(--lightness >= 50%) { color: black; }
  }
}

Note that we wouldn’t be able to move the @container at-rules to the #container block, because then we’d be querying --lightness on the container of #container (where it doesn’t exist) and then beyond (where it also doesn’t exist).
    Prior to the range syntax coming to container style queries, we could only query specific values, so the range syntax makes container style queries much more useful.
    By contrast, the if()-based declaration would work in either block:
#container {
  --lightness: 10%;
  background: hsl(270 100% var(--lightness));

  /* --lightness works here */
  color: if(
    style(--lightness < 50%): white;
    style(--lightness >= 50%): black
  );

  * {
    /* And here! */
    color: if(
      style(--lightness < 50%): white;
      style(--lightness >= 50%): black
    );
  }
}

So, given that container style queries only look up the cascade (whereas if() also looks for custom properties declared within the same CSS rule), why use container style queries at all? Well, personal preference aside, container queries allow us to define a specific containment context using the container-name CSS property:
#container {
  --lightness: 10%;
  background: hsl(270 100% var(--lightness));

  /* Define a named containment context */
  container-name: myContainer;

  * {
    /* Specify the name here */
    @container myContainer style(--lightness < 50%) { color: white; }
    @container myContainer style(--lightness >= 50%) { color: black; }
  }
}

With this version, if the @container at-rule can’t find --lightness on myContainer, the block doesn’t run. If we wanted @container to look further up the cascade, we’d only need to declare container-name: myContainer further up the cascade. The if() function doesn’t allow for this, but container queries allow us to control the scope.
    Range syntax with the attr() CSS function
    We can also pull values from HTML attributes using the attr() CSS function.
    In the HTML below, I’ve created an element with a data attribute called data-notifs whose value represents the number of unread notifications that a user has:
<div data-notifs="8"></div>

We want to select [data-notifs]::after so that we can place the number inside [data-notifs] using the content CSS property. In turn, this is where we’ll put the @container at-rules, with [data-notifs] serving as the container. I’ve also included a height and matching border-radius for styling:
[data-notifs]::after {
  height: 1.25rem;
  border-radius: 1.25rem;

  /* Container style queries here */
}

Now for the container style query logic. In the first one, it’s fairly obvious that if the notification count is 1-2 digits (or, as it’s expressed in the query, less than or equal to 99), then content: attr(data-notifs) inserts the number from the data-notifs attribute while aspect-ratio: 1 / 1 ensures that the width matches the height, forming a circular notification badge.
    In the second query, which matches if the number is more than 99, we switch to content: "99+" because I don’t think that a notification badge could handle four digits. We also include some inline padding instead of a width, since not even three characters can fit into the circle.
    To summarize, we’re basically using this container style query logic to determine both content and style, which is really cool:
[data-notifs]::after {
  height: 1.25rem;
  border-radius: 1.25rem;

  /* If notification count is 1-2 digits */
  @container style(attr(data-notifs type(<number>)) <= 99) {
    /* Display count */
    content: attr(data-notifs);
    /* Make width equal the height */
    aspect-ratio: 1 / 1;
  }

  /* If notification count is 3 or more digits */
  @container style(attr(data-notifs type(<number>)) > 99) {
    /* After 99, simply say "99+" */
    content: "99+";
    /* Instead of width, a little padding */
    padding-inline: 0.1875rem;
  }
}

But you’re likely wondering why, when we read the value in the container style queries, it’s written as attr(data-notifs type(<number>)) instead of attr(data-notifs). Well, the reason is that when we don’t specify a data type (or unit; you can read all about the recent changes to attr() here), the value is parsed as a string. This is fine when we’re outputting the value with content: attr(data-notifs), but when we’re comparing it to 99, we must parse it as a number (although type(<integer>) would also work).
    In fact, all range syntax comparatives must be of the same data type (although they don’t have to use the same units). Supported data types include <length>, <number>, <percentage>, <angle>, <time>, <frequency>, and <resolution>. In the earlier example, we could actually express the lightness without units since the modern hsl() syntax supports that, but we’d have to be consistent with it and ensure that all comparatives are unit-less too:
#container {
  /* 10, not 10% */
  --lightness: 10;
  background: hsl(270 100 var(--lightness));

  color: if(
    /* 50, not 50% */
    style(--lightness < 50): white;
    style(--lightness >= 50): black
  );

  * {
    /* 50, not 50% */
    @container style(--lightness < 50) { color: white; }
    @container style(--lightness >= 50) { color: black; }
  }
}

Note: This notification count example doesn’t lend itself well to if(), as you’d need to include the logic for every relevant CSS property, but it is possible and would use the same logic.
    Range syntax with literal values
    We can also compare literal values, for example, 1em to 32px. Yes, they’re different units, but remember, they only have to be the same data type and these are both valid CSS <length>s.
    In the next example, we set the font-size of the <h1> element to 31px. The <span> inherits this font-size, and since 1em is equal to the font-size of the parent, 1em in the scope of <span> is also 31px. With me so far?
According to the if() logic, if 1em is less than 32px, the font-weight is smaller (to be exaggerative, let’s say 100), whereas if 1em is greater than 32px, we set the font-weight to a chunky 900. If we remove the font-size declaration, then 1em computes to the user agent default of 32px, and neither condition matches, leaving the font-weight to also compute to the user agent default, which for all headings is 700.
    Basically, the idea is that if we mess with the default font-size of the <h1>, then we declare an optimized font-weight to maintain readability, preventing small-fat and large-thin text.
<h1>
  <span>Heading 1</span>
</h1>

h1 {
  /* The default value is 32px, but we overwrite it to 31px, causing the first if() condition to match */
  font-size: 31px;

  span {
    /* Here, 1em is equal to 31px */
    font-weight: if(
      style(1em < 32px): 100;
      style(1em > 32px): 900
    );
  }
}

CSS queries have come a long way, haven’t they?
    In my opinion, the range syntax coming to container style queries and the if() function represents CSS’s biggest leap in terms of conditional logic, especially considering that it can be combined with media queries, feature queries, and other types of container queries (remember to declare container-type if combining with container size queries). In fact, now would be a great time to freshen up on queries, so as a little parting gift, here are some links for further reading:
    - Media queries
    - Container queries
    - Feature queries

    The Range Syntax Has Come to Container Style Queries and if() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. Why I Stopped Hating Systemd

    by: Umair Khurshid
    Thu, 13 Nov 2025 18:02:00 +0530

    Every few years, the Linux world finds something to fight about. Sometimes it is about package managers, sometimes about text editors, but nothing in recent memory split the community quite like systemd. What began as an init replacement quietly grew into a full-blown identity crisis for Linux itself, complete with technical manifestos, emotional arguments, and more mailing list drama than I ever thought possible.
    I did not plan to take a side in that debate. Like most users, I just wanted my server to boot, my logs to make sense, and my scripts to run. Yet systemd had a way of showing up uninvited. It replaced the old startup scripts I had trusted for years and left me wondering whether the Linux I loved was changing into something else entirely.
    Over time, I learned that this was not simply a story about software. It was a story about culture, trust, and how communities handle change. What follows is how I got pulled into that argument and what I learned when I finally stopped resisting. I use systemd so much now that I have even created a course on advanced automation with it.
    Advanced Automation with systemd: Take Your Linux Automation Beyond Cron (Linux Handbook, Umair Khurshid)

    How I Got Pulled into the Systemd Debate
    My introduction to systemd was not deliberate; it arrived uninvited with an update, quietly replacing the familiar tangle of /etc/init.d scripts on Manjaro. The transition was abrupt enough to feel like a betrayal: one morning, my usual service apache2 restart returned a polite message: Use systemctl instead. That was the first sign that something fundamental had changed.
    I remember the tone of Linux forums around that time was half technical and half existential. Lennart Poettering, systemd’s creator, had become a lightning rod for criticism. To some, he was the architect of a modern, unified boot system; to others, he was dismantling the very ethos that made Unix elegant. I was firmly in the second camp!
    Back then, my world revolved around small workstations and scripts I could trace line by line. The startup process was tangible: you just had to open /etc/rc.local, add a command, and know it would run. When Fedora first adopted systemd in 2011, followed later by Debian, I watched from a distance with the comfort of someone on a “safe” distribution, but it was only a matter of time.
    Ian Jackson of Debian called the decision to make systemd the default a failure of pluralism and Devuan was born soon after as a fork of Debian, built specifically to keep sysvinit alive. On the other side, Poettering argued that systemd was never meant to violate Unix principles, but to reinterpret them for modern systems where concurrency and dependency tracking actually mattered.
    I followed those arguments closely, sometimes nodding along with Jackson’s insistence on modularity, other times feeling curious about Poettering’s idea of “a system that manages systems.” Linus Torvalds chimed in occasionally, not against systemd itself but against its developers’ attitudes, which kept the controversy alive far longer than it might have lasted.
    At that point, I saw systemd as something that belonged to other distributions, an experiment that might stabilize one day but would never fit into the quiet predictability of my setups. That illusion lasted until I switched to a new version of Manjaro in 2013, and the first boot greeted me with the unmistakable parallel startup messages of systemd. 
    The Old World: init, Scripts, and Predictability
    Before systemd, Linux startup was beautifully transparent. When the machine booted, you could almost watch it think. The bootloader passed control to the kernel, the kernel mounted the root filesystem, and init took over from there. It read a single configuration file, /etc/inittab, and decided which runlevel to enter.
    Each runlevel had its own directory under /etc/rc.d/, filled with symbolic links to shell scripts. The naming convention (S01network, S20syslog, K80apache2) was primitive but logical. The S stood for “start,” K for “kill,” and the numbers determined order. The whole process was linear, predictable, and very, very readable.
    If something failed, you could open the script and see exactly what went wrong, and debugging was often as simple as adding an echo statement or two. On a busy day, I would edit /etc/init.d/apache2, add logging to a temporary file, and restart the service manually. 
    A typical init script looked something like this:
```sh
#!/usr/bin/env sh
### BEGIN INIT INFO
# Provides:          myapp
# Required-Start:    $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start myapp daemon
### END INIT INFO

case "$1" in
  start)
    echo "Starting myapp"
    /usr/local/bin/myapp &
    ;;
  stop)
    echo "Stopping myapp"
    killall myapp
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: /etc/init.d/myapp {start|stop|restart}"
    exit 1
esac

exit 0
```

Crude, yes, but understandable. Even if you did not write it, you could trace what happened: it was pure shell, running in sequence, and every decision it made was visible to you.
    This simplicity was based on the Unix principle of small, composable parts. You could swap out one script without affecting the rest and even bypass init entirely by editing /etc/rc.local, placing commands there for one-off startup tasks.
    The problem, however, was that because everything ran in a fixed order, startup could be painfully slow in comparison to systemd. If a single service stalled, the rest of the boot process might hang, and dependencies were implied rather than declared. I could declare that a service “required networking,” but the system had no reliable way to verify that the network was fully up.
    Also, parallel startup was practically impossible. Distributions like Ubuntu experimented with alternatives such as Upstart, which tried to make init event-driven rather than sequential, but it never fully replaced the traditional init scripts.
    When systemd appeared, it looked like another attempt in the same category, a modernization effort destined to break the things I liked most. From my perspective, init was not broken. It was slow, yes, but there was total control, which I did not like getting replaced by a binary I could not open in a text editor.
    The early systemd adopters claimed that it was faster, cleaner, and more consistent, but the skeptics (myself included) saw it as a power grab by Red Hat. The subsequent years have proven this fear somewhat justified, as maintaining non-systemd configurations has become increasingly difficult, but the standardization has also made Linux more predictable across distributions.
    Looking back, I now realize that I had mistaken predictability for transparency. The old init world felt clear because it was simple, not because it was necessarily better designed. As systems grew more complex with services that had to interact asynchronously, containers starting in milliseconds, and dependency chains that spanned multiple layers, the cracks began to show. 
    What Systemd Tried to Fix
    It took me a while to understand that systemd was not an act of defiance against the Unix philosophy but a response to a real set of technical problems. Once I started reading Lennart Poettering’s early blog posts, a clearer picture emerged. His goal was not to replace init for its own sake but to make Linux startup faster, more reliable, and more predictable.
    Poettering and Kay Sievers began designing systemd in 2010 at Red Hat, with Fedora as its first proving ground (now my daily driver). Their idea was to build a parallelized, dependency-aware init system that could handle modern workloads gracefully. Instead of waiting for one service to finish before starting the next, systemd would launch multiple services concurrently based on declared dependencies.
    At its heart, systemd introduced unit files, small declarative configurations that replaced the hand-written shell scripts of SysV init. Each unit described a service, socket, target, or timer in a consistent, machine-readable format. The structure was both simple and powerful:
```ini
[Unit]
Description=My Custom Service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myuser

[Install]
WantedBy=multi-user.target
```

To start it, you no longer edited symlinks or runlevels. You simply ran:
```sh
sudo systemctl start myapp.service
sudo systemctl enable myapp.service
```

The old init scripts had no formal way to express “start only after this other service has finished successfully.” They relied on arbitrary numbering and human intuition. With systemd, relationships were declared using After=, Before=, Requires=, and Wants=.
```ini
[Unit]
Description=Web Application
After=network.target database.service
Requires=database.service
```

This meant systemd could construct a dependency graph at boot and launch services in optimal order, improving startup times dramatically.
    Systemd also integrated timers (replacing cron for system tasks), socket activation (starting services only when needed), and cgroup management (to isolate processes cleanly). The critics called it “scope creep,” but Poettering’s argument was that the components it replaced were fragmented and poorly integrated, and building them into a single framework reduced complexity overall. That claim divided the Linux world!
    On one side were distributions like Fedora, Arch, and openSUSE, which adopted systemd quickly. They saw its promise in boot speed, unified tooling, and clear dependency tracking. On the other side stood Debian and its derivatives, which valued independence and simplicity (some of you new folks might find it odd given their current Rust adoption). Debian’s Technical Committee vote in 2014 was one of the most contentious in its history. When systemd was chosen as the default, Ian Jackson resigned from the committee, citing an erosion of choice and the difficulty of maintaining alternative inits.
    That decision directly led to the birth of Devuan whose developers described systemd as “an intrusive monolith,” a phrase that captured the mood of the opposition.
    Yet, beneath the politics, systemd was solving problems that were not just theoretical. Race conditions during boot were common and service dependencies often failed silently. On embedded devices and containerized systems, startup order mattered in ways SysV init could not reliably enforce.
    The Real Arguments (and What They Miss)
    The longer I followed the systemd controversy, the more I realized that the arguments around it were not always about code; they were also about identity. Every debate thread eventually drifted toward the same philosophical divide: should Linux remain a collection of loosely coupled programs, or should it evolve into a cohesive, centrally managed system?
    When I read early objections from Slackware and Debian maintainers, they were rarely technical complaints about bugs or performance. They were about trust and the Unix philosophy “do one thing well” that had guided decades of design. init was primitive but modular, and its limitations could be fixed piecemeal. systemd, by contrast, felt like a comprehensive replacement that tied everything together under a single logic (the current debate around C being memory unsafe and Rust adoption are quite similar in the form).
    Devuan’s founders said that once core packages like GNOME began depending on systemd-logind, users effectively lost the ability to choose another init system. That interdependence was viewed as a form of lock-in at the architecture level.
    Meanwhile, Lennart Poettering maintained that systemd was modular internally, just not in the fragmented Unix sense. He described systemd as an effort to build coherence into an environment that had historically resisted it.
    I remember reading Linus’ comments on the matter around 2014. He was not against systemd per se; his frustration (and it has not changed much) was with developer behavior. He called out what he saw as unnecessary hostility from both sides: maintainers blocking patches, developers refusing to accommodate non-systemd setups, and the cultural rigidity that had turned a design debate into a purity contest. His opinion was pragmatic: as long as systemd worked well and did not break things needlessly, it was fine.
    The irony was that both camps were right in their own way. The anti-systemd camp correctly foresaw that once GNOME and major distributions adopted it, alternatives would fade, and the pro-systemd side correctly argued that modern systems needed unified control and reliable dependency management.
    As someone who would later move into sysadmin and DevOps work, I now feel the conversation missed the fact that Linux itself had changed. By the early 2010s, most servers were running dozens of services, containers were replacing bare-metal deployments, and hardware initialization had become vastly more complex. Boot was no longer the slow, linear dance it used to be; it was more of a network of parallelized events that had to interact safely and recover from failure automatically.
    I once tried to build a stripped-down Debian container without systemd, thinking I could recreate the old init world, but it was an enlightening failure. I spent hours wiring together shell scripts and custom supervision loops, all to mimic what a single Restart=on-failure directive did automatically in systemd.
    That experience showed me what most arguments missed: the problem was not that systemd did too much, but that the old approach expected the user to do everything manually.
    For instance, consider a classic SysV approach to restarting a service on crash. You would write something like this:
```sh
#!/bin/sh
while true; do
    /usr/local/bin/myapp
    status=$?
    if [ $status -ne 0 ]; then
        echo "myapp crashed with status $status, restarting..." >&2
        sleep 2
    else
        break
    fi
done
```

It worked, but it was a hack. systemd gave you the same reliability with a line or two of configuration:
```ini
Restart=on-failure
RestartSec=2
```

The simplicity of that design was hard to deny. Yet to many administrators, it still felt like losing both control and familiarity. The cultural resistance was amplified by how fast systemd grew. Each release seemed to absorb another subsystem: udevd, logind, networkd, and later resolved. Critics accused it of “taking over userland,” but the more I examined those claims, the more I saw a different pattern.
    Each of those tools replaced a historically fragile or inconsistent component that distributions had struggled to maintain separately. Critics also pointed to the technical risk of consolidating so much functionality in one project, fearing a single regression could break the entire ecosystem. The defensive tone of Poettering’s posts did not help, and over time, his name became synonymous with the debate itself.
    But even among the loudest critics, there was a reluctant recognition that systemd had improved startup speed, service supervision, and logging consistency; what they feared was not its functionality but its dominance.
    The most productive discussions I saw were not about whether systemd was “good” or “bad,” but about whether Linux had space for diversity anymore. In a sense, systemd’s arrival forced the community to confront its own maturity. You could no longer treat Linux as a loose federation of components; it had become a unified operating system in practice, even if the philosophy still insisted otherwise.
    By the time I reached that conclusion, the debate had already cooled. Most distributions had adopted systemd by default, Devuan had carved out its niche, and the rest of us were learning to live with the new landscape. I began to see that the real question was not whether systemd broke the Unix philosophy, but whether the old Unix philosophy could still sustain the complexity of modern systems.
    What I Learned After Actually Using It
    At some point, resistance gave way to necessity. As often happens, I started managing servers (CentOS) that already used systemd, so learning it was no longer optional. What surprised me most was how quickly frustration turned into familiarity once I stopped fighting it. The commands that had felt alien at first began to make sense.
    The first time I ran systemctl status nginx.service, I understood what Poettering had been talking about. Instead of a terse message like “nginx is running,” I saw a complete summary including the process ID, uptime, memory usage, recent logs, and the exact command used to start it. It was the kind of insight that had previously required grepping through /var/log/syslog and several ps invocations.
    The typical status output was immediately practical: I could see that the service was running, its exact configuration file path, and its dependencies, all in one place.
    When a service failed, systemd logged everything automatically. Instead of checking multiple files, I could simply run:
```sh
journalctl -u nginx.service -b
```

That -b flag restricted the logs to the current boot, saving me from wading through old entries. It was efficient in a way the traditional logging setup never was.
    Then there was dependency introspection. I could visualize the startup tree with:
```sh
systemctl list-dependencies nginx.service
```

This command revealed the entire boot relationship graph, showing what started before and after Nginx. For anyone who had ever debugged slow boots by adding echo statements to init scripts, this was revolutionary.
    Over time, I began writing my own unit files. They were simple once you got used to the syntax. I remember converting a small Python daemon I had once managed with a hand-rolled init script. The old version had been about thirty lines of conditional shell logic. The new one was six lines:
```ini
[Unit]
Description=Custom Python Daemon
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/daemon.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

That was all it took to handle startup order, failure recovery, and clean shutdown without any custom scripting. The first time I watched systemd automatically restart the process after a crash, I felt a mix of admiration and reluctant respect.
    Some of my early complaints persisted; the binary log format of journald still felt unnecessary. I understood why it existed (structured logs allow richer metadata), but it broke the old habit of inspecting logs with less and grep. I eventually learned that you could still pipe the output:
```sh
journalctl -u myapp.service | grep ERROR
```

So even that compromise turned out to be tolerable. I also began to appreciate how much time I saved not having to worry about service supervision. Previously, I had used supervisord or custom shell loops to keep processes alive, but with systemd, it was built in. When a process crashed, I could rely on Restart=on-failure or Restart=always. If I needed to ensure that something ran only after a network interface was up, I could just declare:
```ini
After=network-online.target
Wants=network-online.target
```

Also, one thing that most discussions about systemd missed was built-in service sandboxing. For all the arguments about boot speed and complexity, few people talked about how deeply systemd reshaped security at the service level. The [Service] section of a unit file is not just about start commands; it can define isolation boundaries in a way that old init scripts never could.
    Directives like PrivateTmp, ProtectSystem, RestrictAddressFamilies, and NoNewPrivileges can drastically reduce the attack surface of a service. A web server, for instance, can be locked into its own temporary directory with PrivateTmp=true and denied access to the host’s filesystem with ProtectSystem=full. Even if compromised, it cannot modify critical paths or open new network sockets. 
    Still, I had to get past a subtle psychological barrier: for years, I had believed that understanding the system meant being able to edit its scripts directly, and the social pressure reinforced that belief. With systemd, much of that transparency moved behind declarative configuration and binary logs. It felt at first like a loss of intimacy, but as I learned more about how systemd used cgroups to track processes, I realized it was not hiding complexity, just managing it.
    A perfect example came when I started using systemd-nspawn to spin up lightweight containers. The simplicity of systemd-nspawn -D /srv/container was eye-opening: it showed how systemd was not just an init system but a general process manager, capable of running containers, user sessions, and even virtual environments with consistent supervision.
    At that point, I began reading the source code and documentation rather than Reddit threads. I discovered how deeply it integrated with Linux kernel features like control groups and namespaces and what had seemed like unnecessary overreach began to look more like a natural evolution of process control.
    The resentment faded, and in its place came something more complicated: an understanding that my dislike of systemd had been rooted in nostalgia as much as principle. In a modern environment with hundreds of interdependent services, the manual approach simply did not scale, though I respect people who shoot for things like building their own AWS alternative.
    Systemd was not perfect; it was opinionated and sometimes too aggressive in unifying tools. Yet once I accepted it as a framework rather than an ideology, it stopped feeling oppressive and became just another tool: powerful when used wisely, irritating when misunderstood. By then, I had moved from avoidance to proficiency and could write units, debug services, and configure dependencies with ease. I no longer missed the old init scripts; maintenance time had become important to me.
    Why Systemd Controversy Still Matters
    By now, most major distributions have adopted systemd, and the initial outrage has faded into the background. Yet the debate never truly disappeared; it just changed form. It became less about startup speed or PID 1 design, and more about philosophy. What kind of control should users have over their systems? How much abstraction is too much?
    The systemd debate persists because it touches something deeper than process management: the identity of Linux itself. The traditional Unix model prized minimalism and composability, one tool for one job. systemd, by contrast, represents a coordinated platform that integrates logging, device management, service supervision, and even containerization. To people like me, that integration feels awesome; to others, it feels like betrayal.
    For administrators who grew up writing init scripts and manipulating processes by hand, systemd signaled a loss of transparency: it replaced visible shell logic with declarative files and binary logs, and assumed responsibility for things that used to belong to the user. For newer users, especially those managing cloud-scale systems, it offered a coherent framework that actually worked the same everywhere. I am not a huge fan of the word "trade-off," but unfortunately it defines most of modern computing. The more complexity we hide, the less friction we face in day-to-day tasks, but the more we depend on the hidden layer behaving correctly. It is the same tension that runs through all abstraction, from container orchestration to AI frameworks.
    Even now, forks and alternatives appear from time to time, such as runit, s6, and OpenRC, each promising a return to simplicity, but few large distributions switch back, because the practical benefits of systemd outweigh nostalgia.
    Still, I think the discomfort matters as it reminds us that simplicity is not just a technical virtue but a cultural one. The fear of losing control keeps the ecosystem diverse. Projects like Devuan exist not because they expect to overtake Debian, but because they preserve the possibility of a different path.
    The real lesson, for me, is not about whether systemd is good or bad. It is about what happens when evolution in open source collides with emotion. Change in infrastructure is not just a matter of better code, it is also a negotiation between habits, values, and trust.
    When I type systemctl now, I no longer feel resistance as I just see a tool that grew faster than we were ready for, one that forced a conversation the Linux world was reluctant to have. The controversy still matters because it captures the moment when Linux stopped being a loose federation of ideas and started becoming an operating system in the full sense of the word. That transition was messy, and it probably had to be.
    If you have come this far, you likely see that systemd is more than just an init system, it’s a complete automation framework. If you want to explore that side of it, my course Advanced Automation with systemd walks through how to replace fragile cron jobs with powerful, dependency-aware timers, sandboxed tasks, and resource-controlled services. It’s hands-on and practical!
    Advanced Automation with systemd: Take Your Linux Automation Beyond Cron (Linux Handbook, Umair Khurshid)
  23. by: Sourav Rudra
    Thu, 13 Nov 2025 12:13:58 GMT

    Nitrux is a Debian-based Linux distribution that has always stood out for its bold design choices. It even made our list of the most beautiful Linux distributions.
    Earlier this year, the project made a significant announcement: it discontinued its custom NX Desktop and the underlying KDE Plasma base, prioritizing a Hyprland desktop experience combined with its in-house developed app distribution methods.
    Now, the first major release reflecting this redefined approach is finally here.
    🆕 Nitrux 5.0.0: What's New?

    The release uses OpenRC 0.63 as its init system instead of systemd. This is paired with either Liquorix kernel 6.17.7 or a CachyOS-patched kernel, depending on your hardware, and the desktop experience is Wayland-only. KDE Plasma, KWin, and SDDM are gone.
    In their place, you get Hyprland with Waybar for the panel, Crystal Dock for application launching, greetd as the login manager, and QtGreet as its greeter. Wofi serves as the application launcher, while wlogout handles logout actions.
    Nitrux 5.0.0 ships with an immutable root filesystem powered by NX Overlayroot. This provides system stability and rollback capabilities through the Nitrux Update Tool System (nuts).
    Plus, there is Nitrux's new approach to software management. NX AppHub and AppBoxes are now the primary methods for installing applications. Flatpak and Distrobox remain available as complementary options.
    There are many updated apps and tooling in this release too:
    - Podman 5.6.1
    - Docker 26.1.5
    - Git 2.51.0
    - Python 3.13.7
    - OpenRazer 3.10.3
    - MESA 25.2.3
    - BlueZ 5.84
    - PipeWire 1.4.8

    The developers are clear about who Nitrux is for. It is designed for users who see configuration as empowerment, not inconvenience. This isn't a distribution trying to please everyone.
    📥 Download Nitrux 5.0.0
    The nitrux-contemporary-cachy-nvopen ISO is designed for NVIDIA hardware. It includes the NVIDIA Open Kernel Module and uses the CachyOS-patched kernel.
    The nitrux-contemporary-liquorix-mesa ISO targets AMD and Intel graphics. It ships with the Liquorix kernel and MESA drivers. Both versions are also available through SourceForge.
    Nitrux 5.0 (SourceForge)

    A fresh installation is strongly recommended for this release. Updates from Nitrux 3.9.1 to 5.0.0 are not supported. Future updates will be delivered through the Nitrux Update Tool System.
    Also, virtual machines are not supported natively, as the team removed many VM-specific components. You can learn more in the release notes.
    Suggested Read 📖
    Here are the Most Beautiful Linux Distributions in 2025: Aesthetically pleasing? Customized out of the box? You get the best of both worlds in this list. (It's FOSS, Ankush Das)
  24. by: Nitij Taneja
    Thu, 13 Nov 2025 09:50:29 GMT

    Introduction
    In the rapidly evolving landscape of Artificial Intelligence, Retrieval-Augmented Generation (RAG) has emerged as a pivotal technique for enhancing the factual accuracy and relevance of Large Language Models (LLMs). By enabling LLMs to retrieve information from external knowledge bases before generating responses, RAG mitigates common issues such as hallucination and outdated information.
    However, traditional RAG approaches often rely on vector-based similarity searches, which, while effective for broad retrieval, can sometimes fall short in capturing the intricate relationships and contextual nuances present in complex data. This limitation can lead to the retrieval of fragmented information, hindering the LLM's ability to synthesize truly comprehensive and contextually appropriate answers.
    Enter Graph RAG, a groundbreaking advancement that addresses these challenges by integrating the power of knowledge graphs directly into the retrieval process. Unlike conventional RAG systems that treat information as isolated chunks, Graph RAG dynamically constructs and leverages knowledge graphs to understand the interconnectedness of entities and concepts.
    This allows for a more intelligent and precise retrieval mechanism, where the system can navigate relationships within the data to fetch not just relevant information, but also the surrounding context that enriches the LLM's understanding. By doing so, Graph RAG ensures that the retrieved knowledge is not only accurate but also deeply contextual, leading to significantly improved response quality and a more robust AI system.
    This article will delve into the core principles of Graph RAG, explore its key features, demonstrate its practical applications with code examples, and discuss how it represents a significant leap forward in building more intelligent and reliable AI applications.
    Key Features of Graph RAG
    Graph RAG distinguishes itself from traditional RAG architectures through several innovative features that collectively contribute to its enhanced retrieval capabilities and contextual understanding. These features are not merely additive but fundamentally reshape how information is accessed and utilized by LLMs.
    Dynamic Knowledge Graph Construction
    One of the most significant advancements of Graph RAG is its ability to construct a knowledge graph dynamically during the retrieval process.
    Traditional knowledge graphs are often pre-built and static, requiring extensive manual effort or complex ETL (Extract, Transform, Load) pipelines to maintain and update. In contrast, Graph RAG builds or expands the graph in real time based on the entities and relationships identified from the input query and initial retrieval results.
    This on-the-fly construction ensures that the knowledge graph is always relevant to the immediate context of the user's query, avoiding the overhead of managing a massive, all-encompassing graph. This dynamic nature allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or graph reconstruction.
    For instance, if a query mentions a newly discovered scientific concept, Graph RAG can incorporate this into its temporary knowledge graph, linking it to existing related entities, thereby providing up-to-date and relevant information.
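As a toy illustration of this on-the-fly augmentation, here is a dependency-free sketch using plain dictionaries instead of a graph library; the entities and relations are invented for illustration only:

```python
# Sketch: augmenting a per-query knowledge graph on the fly.
# The graph is a plain dict of adjacency maps; entity names and
# relations here are illustrative, not from a real extraction model.

def add_fact(graph, subject, relation, obj):
    """Add a (subject, relation, object) triple to the graph."""
    graph.setdefault(subject, {})[obj] = relation
    graph.setdefault(obj, {})[subject] = relation  # undirected view

# Base graph built from earlier retrievals
kg = {}
add_fact(kg, "CRISPR", "used_in", "gene editing")

# A query mentions a (hypothetical) newer concept: link it immediately,
# without re-indexing the whole corpus.
add_fact(kg, "prime editing", "refines", "CRISPR")

print(kg["prime editing"])  # {'CRISPR': 'refines'}
```

Because the graph lives only for the duration of the query, it stays small and relevant; a production system would back this with a graph database rather than an in-memory dict.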
    Intelligent Entity Linking
    At the heart of dynamic graph construction lies intelligent entity linking.
    As information is processed, Graph RAG identifies key entities (e.g., people, organizations, locations, concepts) and establishes relationships between them. This goes beyond simple keyword matching; it involves understanding the semantic connections between different pieces of information.
    For example, if a document mentions "GPT-4" and another mentions "OpenAI," the system can link these entities through a "developed by" relationship. This linking process is crucial because it allows the RAG system to traverse the graph and retrieve not just the direct answer to a query, but also related information that provides richer context.
    This is particularly beneficial in domains where entities are highly interconnected, such as medical research, legal documents, or financial reports. By linking relevant entities, Graph RAG ensures a more comprehensive and interconnected retrieval, enhancing the depth and breadth of the information provided to the LLM.
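A minimal sketch of such linking, using invented documents and hand-written cue-phrase rules as a stand-in for a real NER and relation-extraction pipeline:

```python
# Sketch: linking co-occurring entities with typed relations.
# A real system would use NER + relation extraction models; the
# rule table below is a toy heuristic for illustration.

DOCS = [
    "GPT-4 was developed by OpenAI.",
    "OpenAI is based in San Francisco.",
]

# (entity_a, cue phrase, entity_b, relation) rules -- illustrative only
RULES = [
    ("GPT-4", "developed by", "OpenAI", "developed_by"),
    ("OpenAI", "based in", "San Francisco", "based_in"),
]

def link_entities(docs, rules):
    """Emit (subject, relation, object) edges for rules a document satisfies."""
    edges = []
    for doc in docs:
        for a, cue, b, rel in rules:
            if a in doc and cue in doc and b in doc:
                edges.append((a, rel, b))
    return edges

print(link_entities(DOCS, RULES))
# [('GPT-4', 'developed_by', 'OpenAI'), ('OpenAI', 'based_in', 'San Francisco')]
```

The resulting typed edges are exactly what lets later traversal steps hop from "GPT-4" to "OpenAI" to "San Francisco" rather than treating each document as an island.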
    Contextual Decision-Making with Graph Traversal
    Unlike vector search, which retrieves information based on semantic similarity in an embedding space, Graph RAG leverages the explicit relationships within the knowledge graph for contextual decision-making.
    When a query is posed, the system doesn't just pull isolated documents; it performs graph traversals, following paths between nodes to identify the most relevant and contextually appropriate information.
    This means the system can answer complex, multi-hop questions that require connecting disparate pieces of information.
    For example, to answer "What are the main research areas of the lead scientist at DeepMind?", a traditional RAG might struggle to connect "DeepMind" to its "lead scientist" and then to their "research areas" if these pieces of information are in separate documents. Graph RAG, however, can navigate these relationships directly within the graph, ensuring that the retrieved information is not only accurate but also deeply contextualized within the broader knowledge network.
    This capability significantly improves the system's ability to handle nuanced queries and provide more coherent and logically structured responses.
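The multi-hop idea can be sketched in a few lines; the entities below, including the scientist's name, are hypothetical placeholders, not real data:

```python
# Sketch: answering a multi-hop question by walking typed edges.
# Entities and relations are made up to mirror the DeepMind example.

EDGES = {
    ("DeepMind", "lead_scientist"): "Dr. Example",          # hypothetical person
    ("Dr. Example", "research_areas"): "reinforcement learning",
}

def multi_hop(start, hops, edges):
    """Follow a chain of relations from a start entity; None if a hop is missing."""
    node = start
    for rel in hops:
        node = edges.get((node, rel))
        if node is None:
            return None
    return node

answer = multi_hop("DeepMind", ["lead_scientist", "research_areas"], EDGES)
print(answer)  # reinforcement learning
```

A vector-only retriever would need both facts to co-occur in one chunk; the graph lets the system compose them even when they come from separate documents.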
    Confidence Score Utilization for Refined Retrieval
    To further optimize the retrieval process and prevent the inclusion of irrelevant or low-quality information, Graph RAG utilizes confidence scores derived from the knowledge graph.
    These scores can be based on various factors, such as the strength of relationships between entities, the recency of information, or the perceived reliability of the source. By assigning confidence scores, the framework can intelligently decide when and how much external knowledge to retrieve.
    This mechanism acts as a filter, helping to prioritize high-quality, relevant information while minimizing the addition of noise.
    For instance, if a particular relationship has a low confidence score, the system might choose not to expand retrieval along that path, thereby avoiding the introduction of potentially misleading or unverified data.
    This selective expansion ensures that the LLM receives a compact and highly relevant set of facts, improving both efficiency and response accuracy by maintaining a focused and pertinent knowledge graph for each query.
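A small sketch of threshold-based pruning, with made-up confidence scores and an arbitrary 0.5 cutoff:

```python
# Sketch: pruning graph expansion with per-edge confidence scores.
# Scores, node names, and the 0.5 threshold are illustrative choices.
from collections import deque

# neighbor lists as (target, confidence) pairs
GRAPH = {
    "Google": [("Sundar Pichai", 0.95), ("rumored subsidiary", 0.2)],
    "Sundar Pichai": [("CEO role", 0.9)],
    "rumored subsidiary": [("unverified claim", 0.1)],
    "CEO role": [],
    "unverified claim": [],
}

def confident_bfs(graph, start, threshold=0.5):
    """Breadth-first expansion that never follows a low-confidence edge."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, score in graph.get(node, []):
            if score >= threshold and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(confident_bfs(GRAPH, "Google")))
# ['CEO role', 'Google', 'Sundar Pichai']
```

Note that the low-confidence "rumored subsidiary" branch is never expanded, so its downstream "unverified claim" node never reaches the LLM prompt.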
    How Graph RAG Works: A Step-by-Step Breakdown
    Understanding the theoretical underpinnings of Graph RAG is essential, but its true power lies in its practical implementation.
    This section will walk through the typical workflow of a Graph RAG system, illustrating each stage with conceptual code examples to provide a clearer picture of its operational mechanics.
    While the exact implementation may vary depending on the chosen graph database, LLM, and specific use case, the core principles remain consistent.
    Step 1: Query Analysis and Initial Entity Extraction
    The process begins when a user submits a query.
    The first step for the Graph RAG system is to analyze this query to identify key entities and potential relationships. This often involves Natural Language Processing (NLP) techniques such as Named Entity Recognition (NER) and dependency parsing.
    Conceptual Code Example (Python):
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

# Load the small English spaCy model
nlp = spacy.load("en_core_web_sm")

# Step 1: Extract entities from the user query
def extract_entities(query):
    doc = nlp(query)
    return [(ent.text.strip(), ent.label_) for ent in doc.ents]

query = "Who is the CEO of Google and what is their net worth?"
extracted_entities = extract_entities(query)
print(f"🧠 Extracted Entities: {extracted_entities}")

Step 2: Initial Retrieval and Candidate Document Identification
    Once entities are extracted, the system performs an initial retrieval from a vast corpus of documents.
    This can be done using traditional vector search (e.g., cosine similarity on embeddings) or keyword matching. The goal here is to identify a set of candidate documents that are potentially relevant to the query.
    Conceptual Code Example (Python - simplified vector search):
# Step 2: Retrieve candidate documents
corpus = [
    "Sundar Pichai is the CEO of Google.",
    "Google is a multinational technology company.",
    "The net worth of many tech CEOs is in the billions.",
    "Larry Page and Sergey Brin founded Google."
]

vectorizer = TfidfVectorizer()
corpus_embeddings = vectorizer.fit_transform(corpus)

def retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings, top_k=2):
    query_embedding = vectorizer.transform([query])
    similarities = cosine_similarity(query_embedding, corpus_embeddings).flatten()
    top_indices = similarities.argsort()[-top_k:][::-1]
    return [corpus[i] for i in top_indices]

candidate_docs = retrieve_candidate_documents(query, corpus, vectorizer, corpus_embeddings)
print(f"📄 Candidate Documents: {candidate_docs}")

Step 3: Dynamic Knowledge Graph Construction and Augmentation
    This is the core of Graph RAG.
    The extracted entities from the query and the content of the candidate documents are used to dynamically construct or augment a knowledge graph. This involves identifying new entities and relationships within the text and adding them as nodes and edges to the graph. If a base knowledge graph already exists, this step augments it; otherwise, it builds a new graph from scratch for the current query context.
    Conceptual Code Example (Python - using NetworkX for graph representation):
# Step 3: Build or augment the knowledge graph
def build_or_augment_graph(graph, entities, documents):
    # Add query entities as nodes
    for entity, entity_type in entities:
        graph.add_node(entity, type=entity_type)
    # Extract person/org pairs from documents and link them
    for doc in documents:
        doc_nlp = nlp(doc)
        person = None
        org = None
        for ent in doc_nlp.ents:
            if ent.label_ == "PERSON":
                person = ent.text.strip().strip(".")
            elif ent.label_ == "ORG":
                org = ent.text.strip().strip(".")
        if person and org and "CEO" in doc:
            graph.add_node(person, type="PERSON")
            graph.add_node(org, type="ORG")
            graph.add_edge(person, org, relation="CEO_of")
    return graph

# Create and populate the graph
knowledge_graph = nx.Graph()
knowledge_graph = build_or_augment_graph(knowledge_graph, extracted_entities, candidate_docs)
print("🧩 Graph Nodes:", knowledge_graph.nodes(data=True))
print("🔗 Graph Edges:", knowledge_graph.edges(data=True))

Step 4: Graph Traversal and Contextual Information Retrieval
    With the dynamic knowledge graph in place, the system performs graph traversals starting from the query entities. It explores the relationships (edges) and connected entities (nodes) to retrieve contextually relevant information.
    This step is where the "graph" in Graph RAG truly shines, allowing for multi-hop reasoning and the discovery of implicit connections.
    Conceptual Code Example (Python - graph traversal):
# Step 4: Graph traversal (breadth-first, bounded depth)
def traverse_graph_for_context(graph, start_entity, depth=2):
    contextual_info = set()
    visited = set()
    queue = [(start_entity, 0)]
    while queue:
        current_node, current_depth = queue.pop(0)
        if current_node in visited or current_depth > depth:
            continue
        visited.add(current_node)
        contextual_info.add(current_node)
        for neighbor in graph.neighbors(current_node):
            edge_data = graph.get_edge_data(current_node, neighbor)
            if edge_data:
                relation = edge_data.get("relation", "unknown")
                contextual_info.add(f"{current_node} {relation} {neighbor}")
            queue.append((neighbor, current_depth + 1))
    return list(contextual_info)

context = traverse_graph_for_context(knowledge_graph, "Google")
print(f"🔍 Contextual Information from Graph: {context}")

Step 5: Confidence Score-Guided Expansion (Optional but Recommended)
    As mentioned in the features, confidence scores can be used to guide the graph traversal.
    This ensures that the expansion of retrieved information is controlled and avoids pulling in irrelevant or low-quality data. This can be integrated into Step 4 by assigning scores to edges or nodes and prioritizing high-scoring paths.
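One way to sketch this integration is a best-first expansion that always follows the highest-confidence edge next, under a fixed node budget; the graph, scores, and budget below are illustrative, not part of the running example above:

```python
# Sketch: confidence-guided expansion for the Step 4 traversal.
# Instead of plain BFS, a best-first search pops the highest-confidence
# edge from a priority queue until a node budget is exhausted.
import heapq

# neighbor lists as (target, confidence) pairs -- illustrative scores
SCORED_GRAPH = {
    "Google": [("Sundar Pichai", 0.95), ("old rumor", 0.2)],
    "Sundar Pichai": [("net worth estimate", 0.7)],
    "old rumor": [],
    "net worth estimate": [],
}

def best_first_expand(graph, start, budget=3):
    """Visit up to `budget` nodes, always expanding the best edge next."""
    visited = []
    heap = [(-1.0, start)]            # max-heap via negated scores
    while heap and len(visited) < budget:
        neg_score, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.append(node)
        for neighbor, score in graph.get(node, []):
            heapq.heappush(heap, (-score, neighbor))
    return visited

print(best_first_expand(SCORED_GRAPH, "Google"))
# ['Google', 'Sundar Pichai', 'net worth estimate']
```

Under this budget the low-confidence "old rumor" branch is squeezed out entirely, which is the behavior the feature description above calls for.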
    Step 6: Information Synthesis and LLM Augmentation
    The retrieved contextual information from the graph, along with the original query and potentially the initial candidate documents, is then synthesized into a coherent prompt for the LLM.
    This enriched prompt provides the LLM with a much deeper and more structured understanding of the user's request.
    Conceptual Code Example (Python):
def synthesize_prompt(query, contextual_info, candidate_docs):
    return "\n".join([
        f"User Query: {query}",
        "Relevant Context from Knowledge Graph:",
        "\n".join(contextual_info),
        "Additional Information from Documents:",
        "\n".join(candidate_docs)
    ])

final_prompt = synthesize_prompt(query, context, candidate_docs)
print(f"\n📝 Final Prompt for LLM:\n{final_prompt}")

Step 7: LLM Response Generation
    Finally, the LLM processes the augmented prompt and generates a response.
    Because the prompt is rich with contextual and interconnected information, the LLM is better equipped to provide accurate, comprehensive, and coherent answers.
    Conceptual Code Example (Python - using a placeholder LLM call):
# Step 7: Simulated LLM response
def generate_llm_response(prompt):
    if "Sundar" in prompt and "CEO of Google" in prompt:
        return "Sundar Pichai is the CEO of Google. He oversees the company and has a significant net worth."
    return "I need more information to answer that accurately."

llm_response = generate_llm_response(final_prompt)
print(f"\n💬 LLM Response: {llm_response}")

# Optional: visualize the knowledge graph
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 3))
pos = nx.spring_layout(knowledge_graph)
nx.draw(knowledge_graph, pos, with_labels=True, node_color='skyblue',
        node_size=2000, font_size=12, font_weight='bold')
edge_labels = nx.get_edge_attributes(knowledge_graph, 'relation')
nx.draw_networkx_edge_labels(knowledge_graph, pos, edge_labels=edge_labels)
plt.title("Graph RAG: Knowledge Graph")
plt.show()

This step-by-step process, particularly the dynamic graph construction and traversal, allows Graph RAG to move beyond simple keyword or semantic similarity, enabling a more profound understanding of information and leading to superior response generation.
    The integration of graph structures provides a powerful mechanism for contextualizing information, which is a critical factor in achieving high-quality RAG outputs.
    Practical Applications and Use Cases of Graph RAG
    Graph RAG is not just a theoretical concept; its ability to understand and leverage relationships within data opens up a myriad of practical applications across various industries. By providing LLMs with a richer, more interconnected context, Graph RAG can significantly enhance performance in scenarios where traditional RAG might fall short. Here are some compelling use cases:
    1. Enhanced Enterprise Knowledge Management
    Large organizations often struggle with vast, disparate knowledge bases, including internal documents, reports, wikis, and customer support logs. Traditional search and RAG systems can retrieve individual documents, but they often fail to connect related information across different silos.
    Graph RAG can build a dynamic knowledge graph from these diverse sources, linking employees to projects, projects to documents, documents to concepts, and concepts to external regulations or industry standards. This allows for:
    Intelligent Q&A for Employees: Employees can ask complex questions like "What are the compliance requirements for Project X, and which team members are experts in those areas?" Graph RAG can traverse the graph to identify relevant compliance documents, link them to specific regulations, and then find the employees associated with those regulations or Project X.
    Automated Report Generation: By understanding the relationships between data points, Graph RAG can gather all necessary information for comprehensive reports, such as project summaries, risk assessments, or market analyses, significantly reducing manual effort.
    Onboarding and Training: New hires can quickly get up to speed by querying the knowledge base and receiving contextually rich answers that explain not just what something is, but also how it relates to other internal processes, tools, or teams.
    2. Advanced Legal and Regulatory Compliance
    The legal and regulatory domains are inherently complex, characterized by vast amounts of interconnected documents, precedents, and regulations. Understanding the relationships between different legal clauses, case laws, and regulatory frameworks is critical. Graph RAG can be a game-changer here:
    Contract Analysis: Lawyers can use Graph RAG to analyze contracts, identify key clauses, obligations, and risks, and link them to relevant legal precedents or regulatory acts. A query like "Show me all clauses in this contract related to data privacy and their implications under GDPR" can be answered comprehensively by traversing the graph of legal concepts.
    Regulatory Impact Assessment: When new regulations are introduced, Graph RAG can quickly identify all affected internal policies, business processes, and even specific projects, providing a holistic view of the compliance impact.
    Litigation Support: By mapping relationships between entities in case documents (e.g., parties, dates, events, claims, evidence), Graph RAG can help legal teams quickly identify connections, uncover hidden patterns, and build stronger arguments.
    3. Scientific Research and Drug Discovery
    Scientific literature is growing exponentially, making it challenging for researchers to keep up with new discoveries and their interconnections. Graph RAG can accelerate research by creating dynamic knowledge graphs from scientific papers, patents, and clinical trial data:
    Hypothesis Generation: Researchers can query the system about potential drug targets, disease pathways, or gene interactions. Graph RAG can connect information about compounds, proteins, diseases, and research findings to suggest novel hypotheses or identify gaps in current knowledge.
    Literature Review: Instead of sifting through thousands of papers, researchers can ask questions like "What are the known interactions between Protein A and Disease B, and which research groups are actively working on this?" The system can then provide a structured summary of relevant findings and researchers.
    Clinical Trial Analysis: Graph RAG can link patient data, treatment protocols, and outcomes to identify correlations and insights that might not be apparent through traditional statistical analysis, aiding in drug development and personalized medicine.
    4. Intelligent Customer Support and Chatbots
    While many chatbots exist, their effectiveness is often limited by their inability to handle complex, multi-turn conversations that require deep contextual understanding. Graph RAG can power next-generation customer support systems:
    Complex Query Resolution: Customers often ask questions that require combining information from multiple sources (e.g., product manuals, FAQs, past support tickets, user forums). A query like "My smart home device isn't connecting to Wi-Fi after the latest firmware update; what are the troubleshooting steps and known compatibility issues with my router model?" can be resolved by a Graph RAG-powered chatbot that understands the relationships between devices, firmware versions, router models, and troubleshooting procedures.
    Personalized Recommendations: By understanding a customer's past interactions, preferences, and product usage (represented in a graph), the system can provide highly personalized product recommendations or proactive support.
    Agent Assist: Customer service agents can receive real-time, contextually relevant information and suggestions from a Graph RAG system, significantly improving resolution times and customer satisfaction.
    These use cases highlight Graph RAG's potential to transform how we interact with information, moving beyond simple retrieval to true contextual understanding and intelligent reasoning. By focusing on the relationships within data, Graph RAG unlocks new levels of accuracy, efficiency, and insight in AI-powered applications.
    Conclusion
    Graph RAG represents a significant evolution in the field of Retrieval-Augmented Generation, moving beyond the limitations of traditional vector-based retrieval to harness the power of interconnected knowledge. By dynamically constructing and leveraging knowledge graphs, Graph RAG enables Large Language Models to access and synthesize information with unprecedented contextual depth and accuracy.
    This approach not only enhances the factual grounding of LLM responses but also unlocks the potential for more sophisticated reasoning, multi-hop question answering, and a deeper understanding of complex relationships within data.
    The practical applications of Graph RAG are vast and transformative, spanning enterprise knowledge management, legal and regulatory compliance, scientific research, and intelligent customer support. In each of these domains, the ability to navigate and understand the intricate web of information through a graph structure leads to more precise, comprehensive, and reliable AI-powered solutions. As data continues to grow in complexity and interconnectedness, Graph RAG offers a robust framework for building intelligent systems that can truly comprehend and utilize the rich tapestry of human knowledge.
    While the implementation of Graph RAG may involve overcoming challenges related to graph construction, entity extraction, and efficient traversal, the benefits in terms of enhanced LLM performance and the ability to tackle real-world problems with greater efficacy are undeniable.
    As research and development in this area continue, Graph RAG is poised to become an indispensable component in the architecture of advanced AI systems, paving the way for a future where AI can reason and respond with a level of intelligence that truly mirrors human understanding.
    Frequently Asked Questions
    1. What is the primary advantage of Graph RAG over traditional RAG?
    The primary advantage of Graph RAG is its ability to understand and leverage the relationships between entities and concepts within a knowledge graph. Unlike traditional RAG, which often relies on semantic similarity in vector space, Graph RAG can perform multi-hop reasoning and retrieve contextually rich information by traversing explicit connections, leading to more accurate and comprehensive responses.
    2. How does Graph RAG handle new information or evolving knowledge?
    Graph RAG employs dynamic knowledge graph construction. This means it can build or augment the knowledge graph in real-time based on the entities identified in the user query and retrieved documents. This on-the-fly capability allows the system to adapt to new information and evolving contexts without requiring constant re-indexing or manual graph updates.
    3. Is Graph RAG suitable for all types of data?
    Graph RAG is particularly effective for data where relationships between entities are crucial for understanding and answering queries. This includes structured, semi-structured, and unstructured text that can be transformed into a graph representation. While it can work with various data types, its benefits are most pronounced in domains rich with interconnected information, such as legal documents, scientific literature, or enterprise knowledge bases.
    4. What are the main components required to build a Graph RAG system?
    Key components typically include:
LLM (Large Language Model): For generating responses.
Graph Database (or Graph Representation Library): To store and manage the knowledge graph (e.g., Neo4j, Amazon Neptune, NetworkX).
Information Extraction Module: For Named Entity Recognition (NER) and Relation Extraction (RE) to populate the graph.
Retrieval Module: To perform initial document retrieval and then graph traversal.
Prompt Engineering Module: To synthesize the retrieved graph context into a coherent prompt for the LLM.
5. What are the potential challenges in implementing Graph RAG?
    Challenges can include:
Complexity of Graph Construction: Accurately extracting entities and relations from unstructured text can be challenging.
Scalability: Managing and traversing very large knowledge graphs efficiently can be computationally intensive.
Data Quality: The quality of the generated graph heavily depends on the quality of the input data and the extraction models.
Integration: Seamlessly integrating various components (LLM, graph database, NLP tools) can require significant engineering effort.
6. Can Graph RAG be combined with other RAG techniques?
    Yes, Graph RAG can be combined with other RAG techniques. For instance, initial retrieval can still leverage vector search to narrow down the relevant document set, and then Graph RAG can be applied to these candidate documents to build a more precise contextual graph. This hybrid approach can offer the best of both worlds: the broad coverage of vector search and the deep contextual understanding of graph-based retrieval.
    7. How does confidence scoring work in Graph RAG?
    Confidence scoring in Graph RAG involves assigning scores to nodes and edges within the dynamically constructed knowledge graph. These scores can reflect the strength of a relationship, the recency of information, or the reliability of its source. The system uses these scores to prioritize paths during graph traversal, ensuring that only the most relevant and high-quality information is retrieved and used to augment the LLM prompt, thereby minimizing irrelevant additions.
References
Graph RAG: Dynamic Knowledge Graph Construction for Enhanced Retrieval. Note: This is a conceptual article based on the principles of Graph RAG. Specific research papers on "Graph RAG" as a unified concept are emerging, but the underlying ideas draw from knowledge graphs, RAG, and dynamic graph construction.
Original Jupyter Notebook (for code examples and base content)
Retrieval-Augmented Generation (RAG): Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv preprint arXiv:2005.11401. https://arxiv.org/abs/2005.11401
Knowledge Graphs: Ehrlinger, L., & Wöß, W. (2016). Knowledge Graphs: An Introduction to Their Creation and Usage. In Semantic Web Challenges (pp. 1-17). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-38930-1_1
Named Entity Recognition (NER) and Relation Extraction (RE): Nadeau, D., & Sekine, S. (2007). A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1), 3-26. https://www.researchgate.net/publication/220050800_A_survey_of_named_entity_recognition_and_classification
NetworkX (Python library for graph manipulation): https://networkx.org/
spaCy (Python library for NLP): https://spacy.io/
scikit-learn (Python library for machine learning): https://scikit-learn.org/
  25. by: Abhishek Prakash
    Thu, 13 Nov 2025 04:29:03 GMT

    Here is the news. It's FOSS News (news.itsfoss.com) doesn't exist anymore, at least not as a separate entity. All news articles are now located under the main website: https://itsfoss.com/news/
I merged the two portals into one. Now you only have to log into one portal to enjoy your membership benefits. I hope it simplifies things for you, especially if you are a Plus member.
    Let's see what else you get in this edition of FOSS Weekly:
A new ODF document standard release.
Open source alternative to Help Scout.
YouTube clamping down on tech YouTubers.
Fixing thumbnail issues in Fedora 43.
Ubuntu's Rust transition hitting yet another hurdle.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by Internxt.

SPONSORED: You cannot ignore the importance of cloud storage these days, especially when it is encrypted. Internxt is offering 1 TB of lifetime, encrypted cloud storage for a single payment. Make it part of your 3-2-1 backup strategy and use it for dumping data. At least, that's what I use it for.
Get Internxt Lifetime Cloud Storage

📰 Linux and Open Source News

A new Rust-related problem has cropped up in the land of Ubuntu.
ODF 1.4 is here as the next evolution for the open document standard.
You can now play classic D3D7 games on Linux with this new project.
YouTube recently deleted some Windows 11 bypass tutorials with some absurd claims.
Kaspersky antivirus software is now available for Linux users. Personally, I don't use any such software on Linux.

Big Tech being Big Tech. A creator claimed that his videos about bypassing Windows 11's mandatory online account were removed by YouTube.
YouTube Goes Bonkers, Removes Windows 11 Bypass Tutorials, Claims ‘Risk of Physical Harm’. When will these Big Tech platforms learn? (It's FOSS, Sourav Rudra)

🧠 What We’re Thinking About
    Could GNOME Office be a thing? Roland has some convincing points:
It’s Time to Bring Back GNOME Office (Hope You Remember It). Those who used GNOME 2 in the 2000s would remember the now forgotten GNOME Office. (It's FOSS, Roland Taylor)

On a side note, I found out that Flathub is ranking on Google for NSFW keywords.

What a Shame! FlatHub is Ranking on Google for Po*nHub Downloads. And it’s not Google’s fault this time. (It's FOSS, Abhishek Prakash)

🧮 Linux Tips, Tutorials, and Learnings
    You can fix that annoying issue of GNOME Files not showing image thumbnails on Fedora, btw.
Fixing Image Thumbnails Not Showing Up in GNOME Files on Fedora Linux. Tiny problem but not good for the image of Fedora Linux, pun intended. (It's FOSS, Abhishek Prakash)

Theena suggests some ways to reclaim your data privacy. Switching to a private email service like Proton is one of the recommendations.
    If you are absolutely new to the Linux commands, we have a hands-on series to help you out.
Linux Command Tutorials for Absolute Beginners. Never used Linux commands before? No worries. This tutorial series is for absolute beginners to the Linux terminal. (It's FOSS)

👷 AI, Homelab and Hardware Corner
    Ownership of digital content is an illusion, until you take matters into your own hands. Our self-hosting starter pack should be a good starting point.
The Self-Hosting Starter Pack: 5 Simple Tools I Recommend To Get Started With Your Homelab. Self-hosting isn’t rocket science: if I can do it, so can you! (It's FOSS, Theena Kumaragurunathan)

🛍️ Linux eBook bundle
    This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative!
Explore the Humble offer here

✨ Project Highlights
    Don't let its name fool you. Calcurse is a powerhouse of a tool that can be your go-to for any calendar management needs (like a boon, almost).
Command Your Calendar: Inside the Minimalist Linux Productivity Tool Calcurse. A classic way to stay organized in the Linux terminal with a classic CLI tool. (It's FOSS, Roland Taylor)

Help Scout is known for abrupt pricing changes; why not switch to a platform that actually cares?
Tired of Help Scout Pulling the Rug from Under You? Try This Free, Open Source Alternative. Discover how FreeScout lets you run your own help desk without vendor lock-in or surprise price hikes. (It's FOSS, Sourav Rudra)

📽️ Videos I Am Creating for You
    The latest video shows my recommendations for Kitty terminal configuration changes.
Subscribe to It's FOSS YouTube Channel

Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it its mission to write helpful tutorials and guides that help people use Linux on their personal computers.
    We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials.
    If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription.
Join It's FOSS Plus

💡 Quick Handy Tip
    In the Konsole terminal emulator, you can use the right-click context menu to open any folder with a specific tool. For example, if you are inside a directory, right-click and select the "Open Folder With" option.
    From the list, select an application. So, for instance, if Dolphin is selected, the location will be opened in the file manager. If Kate is selected, that location is opened in the editor.
Other than that, if you enable the "Underline Files" option in Configure Konsole → Profiles → Edit Profile → Mouse → Miscellaneous, you can even right-click and open files in GUI tools right from the terminal.
    🎋 Fun in the FOSSverse
    Can you get all the answers to this Linux distro logo quiz?
Guess the Distro from its Logo. There is a logo and four distro names. Guess which one it belongs to. It’s that simple. (It's FOSS, Abhishek Prakash)

🤣 Meme of the Week: Such words can hurt the soul, you know. 😵
    🗓️ Tech Trivia: On November 9, 2004, Mozilla Firefox 1.0 was released, introducing a faster, safer web-browsing experience with features like tabbed browsing and popup blocking, marking a major challenge to Microsoft’s Internet Explorer dominance.
    🧑‍🤝‍🧑 From the Community: One of the developers of antiX Linux has announced that the first beta release of antiX 25 is now live!
antiX 25 Beta 1 Available for Public Testing. The first beta ISO of antiX-25 (64-bit) is based on Debian 13 ‘trixie’ and ships four modern systemd-free init systems (runit by default, plus s6-rc, s6-66, and dinit), a new default look, and the usual ‘antiX magic’. You should be able to boot live in a non-default init, which then becomes the default after install; note that more user intervention will be required than in previous versions. (It's FOSS Community, ProwlerGr)

❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
