Blog Entries posted by Blogger

  1. by: Sourav Rudra
    Mon, 03 Nov 2025 15:08:48 GMT

    Rust has been making waves in the information technology space. Its memory safety guarantees and compile-time error checking offer clear advantages over C and C++.
The language eliminates entire classes of bugs. Buffer overflows, null pointer dereferences, and data races can't happen in safe Rust code. But not everyone is sold. Critics point to the steep learning curve and the unnecessary complexity of some of its features.
    Despite criticism, major open source projects keep adopting it. The Linux kernel and Ubuntu have already made significant progress on this front. Now, Debian's APT package manager is set to join that growing list.
    What's Happening: Julian Andres Klode, an APT maintainer, has announced plans to introduce hard Rust dependencies into APT starting May 2026.
    The integration targets critical areas like parsing .deb, .ar, and tar files plus HTTP signature verification using Sequoia. Julian said these components "would strongly benefit from memory safe languages and a stronger approach to unit testing."
He also had a firm message for maintainers of Debian ports.
    The reasoning is straightforward. Debian wants to move forward with modern tools rather than being held back by legacy architecture.
    What to Expect: Debian ports running on CPU architectures without Rust compiler support have six months to add proper toolchains. If they can't meet this deadline, those ports will need to be discontinued. As a result, some obscure or legacy platforms may lose official support.
    For most users on mainstream architectures like x86_64 and ARM, nothing changes. Your APT will simply become more secure and reliable under the hood.
If done right, this could significantly strengthen APT's security and code quality. However, Ubuntu's oxidation efforts offer a reality check. A recent bug in Rust-based coreutils briefly broke automatic updates in Ubuntu 25.10.
    Via: Linuxiac
    Suggested Read 📖
Bug in Coreutils Rust Implementation Briefly Downed Ubuntu 25.10's Automatic Upgrade System
The fix came quickly, but this highlights the challenges of replacing core GNU utilities with Rust-based ones.
It's FOSS News | Sourav Rudra
  2. by: Pulkit Chandak
    Mon, 03 Nov 2025 14:18:57 GMT

    It is time to talk about the most important love-hate relationship that has ever been. It is Instagram and... you.
Instagram has become irreplaceable if you need to present your work to the world and follow others', be it in art, music, dance, science, tech, or modelling. Being one of the biggest platforms, you can't skip it if you want to keep up with the world and the lives of your friends. On the other hand, it is also one of the most distracting apps in existence, thanks to the addictive pull of doomscrolling your hours into nothingness.
Worry not, because we once again bring you a way to make your life better. The solution, unsurprisingly, lies in the Linux terminal (as most of them do), which will be your next Instagram client.
Well, actually it is not quite that, as you'll read in this article. But before you do, check out the It's FOSS Instagram account, as we are killing it with some really infotaining stuff. 92K+ followers are proof of that.
Follow It's FOSS on Insta for a daily dose of Linux memes and news.
Behold Instagram-CLI! And it's not from Meta
    Claiming to be the "ultimate weapon against brainrot", Instagram-CLI provides an exciting option to use Instagram through your terminal. Said mission is achieved by limiting possible actions to only three things: checking your messages, your notifications, and your feed (consisting only of the accounts that you have followed).
    Sliding into the DMs via CLI
    The command to access the chats is:
instagram-cli chat
Its interface looks like this:
The navigation is quite simple: use the j/k keys to scroll through the accounts you can chat with (J/K to jump to the very first or very last chat), then press Enter to open the chat you want. When chatting with someone, you can simply write your texts in the chat box and hit Enter to send them. But if you want to reply to, react to, or unsend a message, it all starts with the input:
:select
After writing that and pressing Enter, you can navigate through the messages using the j/k keys (again, J/K for the very first or very last message) and select one for an action. To send a reply saying "You have been replied to.", the input will look like:
:reply You have been replied to.
To embed an emoji in a normal text, you can do it like so:
You have been replied to :thumbsup:
To unsend the message, the input given is:
:unsend
And to react, say with a thumbs-up emoji, the input will look like:
:react thumbsup
To mention someone in a group chat, you can use "@" as usual, and you can even send files using a simple hashtag. It even supports autocomplete after the hashtag, similar to how it works in the terminal itself. So to send a file called "test.png" from your Downloads directory alongside a message, simply write:
This is image testing #Downloads/test.png
It does take a while for a file to be sent, though. I have demonstrated the process in this video:
However, to send the file on its own, you can use:
:upload #Downloads/test.png
🗒️ It is worth noting that the behavior of this chat is very inconsistent. In my personal experience, I have not been able to make the emoji reactions work even though I executed them exactly as shown, and while the messages with emojis do get sent, they don't show up in the chat window and disappear from the official Instagram app/website after reloading. The replying function is also hit or miss.
Gotta check the feed
    To access your feed, you can simply enter:
instagram-cli feed
This brings up your feed, where you can scroll through the posts using j/k and through the carousel of a single post using h/l. If you do it for the first time without much configuration, the images in your feed will look something like this:
    The graphics by default are ASCII, and that might not be something you want, considering the fact that nothing is quite clear (however cool it may be). So how do you fix that? You switch the image mode with the following command:
instagram-cli config image.protocol kitty
Now, the graphical media will look... well, graphical:
    If it doesn't work, try using a terminal like Ghostty or Kitty.
If you want to switch back, replace "kitty" in the command with "ascii". In total, Instagram-CLI provides these image protocol options: "ascii", "halfBlock", "braille", "kitty", "iterm2", "sixel", and "", but knowing just these two might suffice.
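For instance, switching back to the default ASCII rendering is just:

instagram-cli config image.protocol ascii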
🗒️ The feed is quite janky. It automatically scrolls through posts rather inconsistently and doesn't always respond well to scrolling input. Often, the images don't sit well within the boxes that contain them, making it feel a little rough around the edges.
Notify my terminal
    This simply requires one command, and there isn't much more to it:
instagram-cli notify
Authenticating in the CLI
    Logging in can be done with the simple username-password combination after entering the following command:
instagram-cli auth login --username
You can log into multiple accounts in this manner, and switch among them with this command:
instagram-cli auth switch <username>
In case you forget which account is currently active, you can ask it who you are:
instagram-cli auth whoami
And to finally log out of your currently active account, simply enter:
instagram-cli auth logout
🚧 This is perhaps the most important warning of all. I tried to log into my personal account on Instagram-CLI, and Instagram flagged it as suspicious behavior, calling it scraping. I was locked out of my account for a little while because of it, so log in at your own risk. We recommend using an expendable dummy account.
Config if you can
Since it offers a bunch of configuration options, it only makes sense to have a command that lists them all at once so you can keep track of everything:
instagram-cli config
Any of the values can be changed with:
instagram-cli config <key> <value>
But if you want to change multiple keys at once, you can simply edit the config file as text:
instagram-cli config edit
Try it (but perhaps without risking your main account)
The recommended method for installing the program uses npm, so make sure you have it preinstalled on your system. If not, you can install it using:
curl -qL https://www.npmjs.com/install.sh | sudo sh
And then, to install Instagram-CLI on your system, enter:
sudo npm install -g @i7m/instagram-cli
Alternatively, if you want to install it without npm, you can use Python:
sudo pip3 install instagram-cli
🚧 The project developers have specifically asked that you not use the same account if you have both clients installed.
💡 Bonus Banner
    If you want to recreate the banner at the beginning of the article (perhaps to show off the capabilities of your terminal), enter the command without any other parameters:
instagram-cli
Wrapping Up
Instagram-CLI is an interesting initiative because it reduces your screen time while still giving you an option to socialize. Not to forget, it helps you avoid Meta's trackers. It lets you improve your social media habits while also managing your FOMO.
The project is still clearly quite rough around the edges, which has more to do with Meta's policies than with the developers themselves. It is hit or miss, but it might just work for your account, so give it a shot. And if your account gets flagged, you know what you've got to do.
Let us know what you think about it in the comments. Cheers!
  3. by: Abhishek Prakash
    Sun, 02 Nov 2025 06:07:03 GMT

Do we need a separate, dedicated software center application for Flatpaks? I don't know, and I don't want to go into this debate anymore. For now, I am going to share this new marketplace that I came across and found intriguing.
Bazaar is a modern Flatpak app store designed with GNOME styling. It focuses on discovering and installing Flatpak apps, especially from Flathub. In case you did not know already, bazaar means market or marketplace. A suitable name, I would say.
    Bazaar: More than just a front end for Flathub
As you'll see in the later sections, Bazaar is not perfect. But then, nothing is perfect in this world. There is scope for improvement but overall, it provides a good experience if you are someone who frequently and heavily uses Flatpaks on the GNOME desktop. There is a third-party KRunner plugin for KDE Plasma users.
Let's explore Bazaar and see what features it offers. If you prefer videos, you can watch its features in our YouTube video.
Subscribe to It's FOSS YouTube Channel
Apps organized into categories
Like GNOME Software, several app categories are available in Bazaar. You can find them on the homepage itself. If you are just exploring new apps of interest, this helps a little.
App categories
Search and install an app
Of course, you can search for an application, too. Not only can you search by name, you can also search by type. See, Flathub allows tagging apps, and this helps 'categorize' apps in a way. So if you search for text editor, it will show the applications tagged with text editor.
Search apps
When you hit the install button, you can see a progress bar on the top-right. Click on it to open the entire progress bar as a sidebar.
    Progress barIt shows what items and runtimes are being installed. You can scroll down the page of the package to get more details, screenshots of the project, and more.
    Accent colors
The progress bar you saw above can be customized a little. Click the hamburger menu to access preferences, and then go to the Progress Bar section. You'll find options to choose a theme for the progress bar. These themes are accent colors representing LGBTQ flags and their subcategories.
Progress bar style settings
You can see an Aromantic flag applied to the progress bar in the screenshot below.
Progress bar style applied
Show only open source apps
    Flathub has both open source and proprietary software available. The licensing information is displayed on an individual application page.
Non-free apps in search results
Now, some people don't want to install proprietary software. For them, there is an option to only show open source software in Bazaar.
You can access this option by going to preferences from the hamburger menu and toggling on the "Show only free software" button.
Show only free software settings
📋 Repeated reminder: Free in FOSS means free as in freedom, not free as in beer.
Refresh the content using the shortcut Ctrl+R, and you should not see proprietary software anymore.
No non-free software in results
Application download statistics
On an app page, you can click on the Monthly Downloads section to get a chart view and a map view.
The map view shows the downloads per region for that app.
Download per location
The chart view gives you an overview of the download stats.
Download overview chart
Other than that, you can click on the download size of an application on the app page:
Click on download size
You can see a funny download size table, comparing the size of the Flatpak application with some facts.
Funny download size chart
Easily manage add-ons
    Some apps, like OBS Studio, have optional add-on packages. Bazaar indicates the availability of add-ons in the Installed view. Of course, the add-ons have to be in Flatpak format. This feature comes from Flathub.
    When you click the add-ons option, it will show the add-ons available for installation.
Manage add-ons
Removing installed Flatpak apps
    You can easily remove installed Flatpak apps from the Installed view.
Remove applications
This view shows all the installed Flatpak packages on your system, even the ones you did not install via Bazaar.
    More than just Flathub
By default, Bazaar includes applications from the Flathub repository. But if you have added additional remote Flatpak repositories to your system, Bazaar will include them as well.
It's possible that an application is available in more than one remote Flatpak repository. You can choose which one you want to use from the application page.
Select an installation repository
However, I would like to have the ability to filter applications by repository. This is something that could be added in future versions.
    Installing Bazaar on Linux
    No prizes for guessing that Bazaar is available as a Flatpak application from Flathub. Presuming that you have already added Flathub remote repo to your system, you can install it quickly with this command:
flatpak install flathub io.github.kolunmi.Bazaar
If you are using Fedora or Linux Mint, you can install Bazaar from the software center of the respective distribution as well.
    Wrapping Up
Overall, this is a decent application for Flatpak lovers. There is also a 'curated' option available for distributors, which means that if a new distro wants to ship Bazaar as its software center, it can offer a curated list of applications for specific purposes.
Is it worth using? That is debatable and really up to you. Fedora and Mint already provide Flatpak apps from their default software centers. It could, however, be a good fit for obscure window managers and DEs. That's just my opinion, and I would like to know yours. Please share it in the comment section.
  4. by: Sourav Rudra
    Sat, 01 Nov 2025 11:02:59 GMT

    Proton VPN (partner link) is one of the most trusted privacy-focused VPN services. It offers a free plan, strong no-logs policies, and open source apps for multiple platforms.
    The service is known for its focus on security and transparency, making it a popular choice for people who value privacy and control over their online activity.
    Linux users have long requested a proper command-line interface for it. While the earlier CLI was useful, recent development focused on GUI apps. Fortunately, their requests have now been addressed.
    Proton VPN CLI App (Beta): What to Expect?
    The new CLI app lets Linux users connect and disconnect from VPN servers and select servers by country, city, or specific server for paid plans. It is fast, lightweight, and removes the need to use the desktop GUI.
    The CLI is still in beta. Current limitations include only supporting the WireGuard protocol, no advanced features such as NetShield, Kill Switch, Split Tunneling, or Port Forwarding, and settings must be edited via config files. Proton is shipping the essentials first and plans to expand features according to user feedback.
    This was announced as part of the Proton VPN 2025-26 fall and winter roadmap. The update also mentions an upcoming auto-launch feature for Linux, allowing the VPN to start automatically at boot.
    Beyond the CLI, Proton VPN (partner link) is set to roll out a new network architecture designed for faster speeds, better reliability, stronger anti-censorship, and post-quantum encryption. Free-tier users gain new server locations in Mexico, Canada, Norway, Singapore, and more.
The best VPN for speed and security
Get fast, secure VPN service in 120+ countries. Download our free VPN now — or check out Proton VPN Plus for even more premium features.
Proton VPN
How Does it Hold Up?
    I configured it to run on an Ubuntu 25.10 system. The initial setup was a bit tricky, especially for a GUI-first user like me, but running protonvpn -h made it relatively simple to figure out how to sign in and connect to servers.
    Once I was connected to their Seattle server, I ran a speed test using fast.com and got speeds close to what my usual 300 Mbps fiber connection gives me (I am located in India, btw), which was impressive.
    You can try this early version of the Proton VPN CLI for Linux by following one of the official guides linked below:
Debian
Ubuntu
Fedora
Make sure you first install the "Beta" Linux app as described in the guides above. Once that's done, run the additional command listed below for your specific distro to get the CLI client.
    Debian/Ubuntu: sudo apt update && sudo apt install proton-vpn-cli
    Fedora: sudo dnf check-update --refresh && sudo dnf install proton-vpn-cli
    Use this command to launch: protonvpn
    If you are on a different distro, the CLI might work if it’s based on one of the above (e.g., an Ubuntu derivative), but Proton doesn’t officially guarantee compatibility. Test it and let me know in the comments below, maybe?
Proton VPN CLI (Beta)
Suggested Reads 📖
Proton Launches Data Breach Observatory to Track Dark Web Activity in Real-Time
A constantly updated dark web monitoring tool.
It's FOSS News | Sourav Rudra

VPNs With "No Logging Policy" You Can Use on Linux
The VPNs that the team and I have used on Linux in personal capacities. These services also claim to have a 'no log policy'.
It's FOSS | Sourav Rudra
  5. by: Abhishek Prakash
    Fri, 31 Oct 2025 17:16:28 +0530

    Good news! All modules of the new course 'Linux Networking at Scale' have been published. You can start learning all the advanced topics and complete the labs in the course.
Linux Networking at Scale
Master advanced networking on Linux — from policy routing to encrypted overlays.
Linux Handbook | Umair Khurshid
This course is only available for Pro members. This would be a good time to consider upgrading your membership, if you are not already a Pro member.
     
     
  6. by: Pulkit Chandak
    Fri, 31 Oct 2025 09:40:12 GMT

A desktop-wide search application can be the key to speeding up your workflow significantly, as anything you might look for will be almost at your fingertips at any given moment.
    Today, we'll be looking at a GUI desktop application that does exactly that.
    FSearch: Fast, Feature-rich GUI Search App
FSearch is a fast file search application, inspired by the Everything search engine on Windows.
    It works in an efficient way without slowing down your system, giving you results as you type the keywords in. The way it does this is by indexing the files from the directories in advance, updating them at a fixed interval, and storing that information to search through whenever the application is used.
It is written in C and based on GTK3, which is ideal for GNOME users but might not look as good on Qt-based desktop environments like KDE Plasma. Let's look at some of the features this utility offers.
    Index Inclusion/Exclusion
The first thing you need to do after installation, and the most crucial of all, is to tell the utility which directories you want it to search in. Besides the inclusion list, you can also specify which directories you want excluded from the search. Another extremely helpful option is to exclude hidden files from being searched, useful if you only want to search the files as you see them in your file explorer.
    Besides that, you can also configure how often the database needs to be refreshed and updated. This will depend on how often the relevant files on your system change, and hence should be your own choice.
    Wildcard and RegEx Support
The search input supports wildcard mode by default, using the patterns often seen on the command line. For example, if I want to search for all files that contain "Black" in the name, I can give the input as: *Black*
Here, "*" essentially means anything. So any file with anything at all before and after the word "Black" will be listed. There are many more wildcards like this, such as "?" for a single missing character and "[ ]" for specifying ranges. You can read more about them here.
The other option is to express the search with RegEx formatting, which is a different style in itself. It can be activated using Ctrl+R and toggled off with the same shortcut.
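For instance, a hypothetical pattern (my example, not from FSearch's documentation) to match files whose names start with "Black" and end in .png or .jpg would be:

^Black.*\.(png|jpg)$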
    Fast Sort
You can quickly sort the results by name, path, size, or last modification date right from the interface, as the results are shown with these details. All it takes is one click on the relevant column header (or two clicks if you want descending instead of ascending order).
    Filetype Filter
The searched files can be filtered by categories defined in the utility itself, based on the extensions of the files. There is a button on the right of the search bar where the search category can be specified, the default being "All". The categories are:
All
Files
Folders
Applications (such as .desktop)
Archives (such as .7z, .gzip, .bz)
Audio (such as .mp3, .aac, .flac)
Documents (such as .doc, .csv, .html)
Pictures (such as .png, .jpg, .webp)
Videos (such as .mp4, .mkv, .avi)
An excellent feature is that these categories and their lists of extensions are modifiable. You can add or change any of the options if they don't fit your needs.
    Search in Specific Path
Another interesting and important option is to also search within the paths of the files. This becomes relevant when you remember the approximate location of a file, or part of its path. It seems like a minor detail but can be a real savior when the time arises. Here's an example:
    This mode can be activated using the keyboard shortcut Ctrl+U.
    Other Features
    There are other minor features that help in the customization, such as toggling the case sensitivity of the search terms (which can also be done with the Ctrl+I keyboard shortcut), single-clicking to open files, pressing Esc to exit, remembering window size on closing, etc.
    Installing FSearch on Linux
FSearch is available on various distributions in multiple ways. First, to cover the distro-independent option: Flatpak. FSearch exists on Flathub and can be installed with a simple search on any distribution where Flathub is enabled in the app store, such as Fedora. If not from the store, you can find the .flatpakref file here and (assuming it is downloaded to the Downloads folder) install it with:
sudo flatpak install io.github.cboxdoerfer.FSearch.flatpakref
On Ubuntu-based distributions, there are two options: a stable release and a daily one. To add the repository for the stable version, enter this command:
sudo add-apt-repository ppa:christian-boxdoerfer/fsearch-stable
Whereas for the daily release:
sudo add-apt-repository ppa:christian-boxdoerfer/fsearch-daily
In either case, then enter the following commands to install the application:
sudo apt update
sudo apt install fsearch
On Arch-based distributions, use the following command:
sudo pacman -S fsearch
On Fedora, the installation can be done by entering:
sudo dnf copr enable cboxdoerfer/fsearch
sudo dnf install fsearch
If none of these apply, you can always install from source or find instructions on the official website.
    Final Thoughts
FSearch does what it claims to do without exceptions or hurdles. It is very fast, not very taxing on the hardware, has very sensible configuration options, and looks pretty good while doing its job. A huge recommendation from my side: add a keyboard shortcut to open FSearch (the process will depend on your distribution), something very accessible like Shift+S perhaps, so you can open the utility and use it immediately.
I know that for many Linux users, nothing replaces the find command clubbed with xargs and exec, but still, not all desktop Linux users are command line ninjas. That's why desktop search apps like FSearch, ANGRYsearch, and SearchMonkey exist. Nautilus' built-in file search works well, too.
Mastering Nautilus File Search in Linux Desktop
Become a pro finder with these handy tips to improve your file search experience with GNOME's Nautilus file search.
It's FOSS | Sreenath
Please let us know in the comments if this is an application you'd like to use, or if you have any other preferences. Cheers!
  7. by: Theena Kumaragurunathan
    Fri, 31 Oct 2025 04:07:42 GMT

    Previously on the Internet
I have a theory: most people from my generation and slightly older ones (early 80s kids) still remember the first time we went online unsupervised.
    It was late 2001, I was 18 years old, which was an admittedly belated entry into cyberspace compared to my peers, but the fact that I remember when and where it happened, and what websites I visited, should underscore my point, especially to younger readers: the internet felt like a revelation.
Why would I bestow such gravitas and import on that one hour over two decades ago, in a tiny internet cafe, on Internet Explorer of all things?
    This was when I had finally decided what I was going to do with my life: I wanted to be a filmmaker. But I was in Sri Lanka, and had little access to the resources I would need; what films and filmmakers to study, how films were made in the first place, such things were mysterious and secretive in my pre-internet life.
    On that day in 2001, in that one hour, I realized how wrong I was. Everything I wanted to learn about film was just a Yahoo! search away. The internet had lived up to its hype: it was the promised land for the insatiably curious. Today, the kids would call it a nerdgasm.
    I start this essay with this flashback because I want to carry out a thought experiment: All other things about me being equal, what would an 18 year old me dreaming of films and film-making, encounter on the internet in 2025? I encourage my younger readers (those born in the 2000s) to do the opposite: imagine if you were old enough to encounter the pre-social media, pre-SEO spam, pre-AI sludge filled internet.
    The Dead Internet
    In their paper The Dead Internet Theory: A Survey on Artificial Interactions and the Future of Social Media (Asian Journal of Research in Computer Science, 18(1) 67-73), Muzumdar, et al., trace the genesis of the theory to online communities in the late 2010s:
    "The origins of the Dead Internet Theory (DIT) can be traced back to the speculative discussions in online forums and communities in the late 2010s and early 2020s. It emerged as a response to the growing unease about the changing nature of the internet, particularly in how social media and content platforms operate. Early proponents of the theory observed that much of the internet no longer felt as vibrant or genuine as it had in its earlier days, where user-generated blogs, niche forums, and personal websites created spaces for online interaction."
In Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines, Bevendorff, J., et al. showed there was empirical evidence to back these observations.
    What does that look like at a macro level? On the surface, it means more than half of all internet traffic is bots.
Image credit: Bot Traffic report from Imperva, shared on Information Age
This seems almost inevitable.
    Around 2005, I was working as a copywriter for a web development firm that specialized in the hospitality sector. Our clients were some of the largest brands in the industry, and every week our job was to ensure their websites would rank above the competition. My employer was a well-known service provider to the entire sector, which meant we worked on brands that were competing against one another.
One half of the day would be spent ensuring Hotel X in New York City ranked higher than Hotel Y, the former's competitor in, say, the luxury hotel space for New York. The second half would be focused on—and I wish I was joking—ensuring Hotel Y would rank over Hotel X. This mercenary approach to winning Google search rankings for clients drove me to quit. When my boss at the time asked why I was quitting, I could not adequately express my misgivings. It only took me twenty years to crystallize my thoughts on the matter.
    The Costs of A Dead Internet
The research carried out by Bevendorff, et al. restricted itself mostly to websites focused on product reviews. We don't require an advanced comprehension of statistics to extrapolate these findings into more critical areas such as political and social discourse; as AI-generated news combines with SEO spam and bots, the stakes are enormous.
The evidence shows that AI misinformation is leading to an erosion of a common, shared truth. Is it any wonder that the last decade has seen increasing polarization in our societies?
    Reviving the Revelatory Internet
    The study by Campante et al., 2025 offers a way forward:
    "While exposure to AI-generated misinformation does make people more worried about the quality of information available online, it can also increase the value they attach to outlets with reputations for credibility, as the need for help in distinguishing between real and synthetic content becomes more pressing."
Reviving the internet has to be a collective fight. Every one of us can play a part in ensuring a more vibrant internet. Then we don't have to go into survival mode and opt for devices like Prepper Disk, which stores offline internet knowledge for a post-apocalyptic world. Excellent idea, by the way.
Prepper Disk Premium | Loaded with 512GB of Survival Content
Even without the grid, your knowledge stays online. A private hotspot with 512GB of survival information and maps, available on any device. Content included: the complete English Wikipedia (over 6 million articles and images), searchable and browsable just like the real site, plus North America, Europe, and Oceania street maps…
Prepper Disk | Prepper Disk Store
Here are some ways we can still resist for a more human internet:
    Spam Protection and Authenticity
mosparo: AI-powered open-source spam filtering for website forms, avoiding intrusive CAPTCHAs and preserving genuine user interactions.
ASSP (Anti-Spam SMTP Proxy): Open-source email firewall using Bayesian filtering, greylisting, and AI spam detection.
Anubis: Blocks AI scrapers with proof-of-work challenges, protecting self-hosted sites from bot scraping.
CAI SDK (Content Authenticity Initiative): Open-source tools for verifying content provenance and checking if media/news is authentic and unaltered.
Disinformation Detection and Curated Search
iVerify: Fact-checking and false-narrative alerting tool with transparent code, useful for journalists and regular users.
Disinfo Open Toolbox: Suite of open-source tools to verify news credibility and monitor fake news/disinformation sources.
Codesinfo: Set of open-source civic journalism tools for fact-checking, evidence gathering, and author attribution.
phpBB, Discourse: FOSS forum platforms for authentic, moderated human communities.
OSINT tools (Maltego & others): Free open-source tools to investigate online identities, emails, and website authenticity.
Building and Joining Authentic Communities
Fediverse platforms (e.g., Mastodon, Lemmy): Decentralized open-source social networks emphasizing moderation and organic growth.
Protect Your Browser
Browser privacy extensions and alternative search engines (Searx, DuckDuckGo): Reduce SEO spam and filter content farms.
RSS aggregators and curated open-source communities: Bypass algorithmic feeds for direct access to trusted sources.
FOSS moderation, spam filtering, fact-checking, and media verification: Ensuring content authenticity and reliable engagement.
Proton
Proton provides easy-to-use encrypted email, calendar, cloud storage, password manager, and VPN services, built on the principle of your data, your rules.
Next On the Internet
    The easy thing for someone like me—a writer of speculative fiction—is to veer this column towards the dystopian. I could, for instance, liken a future internet to a zombie apocalypse where AI powered spam and content bots bury thriving virtual communities run by actual people.
This doesn't even require a feat of imagination: just take a gander at blogging sites like Medium (which began with a promise to make writing and writers on the internet feel seen); almost all of the site's tech writing is clearly AI-generated, while some of its writers in the paid partnership program write repetitive pieces on how AI has supposedly allowed them to make six-figure incomes.
    In such a case, I should end this with a eulogy to an internet that I no longer recognize.
    Or I could write this note to the imaginary 18-year-old me using the internet in 2025. In which case, I would tell him: there is a better way, and that better way is within your grasp.
  8. by: Roland Taylor
    Thu, 30 Oct 2025 19:21:42 +0530

    Creating PDFs is one of the easiest tasks to take for granted on Linux, thanks to the robust PDF support provided by CUPS and Ghostscript. However, converting multiple files to this portable format can get tedious fast, especially for students, non-profits, and businesses that may have several files to handle on any given day. Fortunately, the Linux ecosystem gives you everything you need to fully automate this task, supporting several file formats and any number of files.
    This guide will show you how to use unoconv (powered by headless LibreOffice) to build a simple, reliable system that converts any supported document format into PDF, and optionally sorts your original files into subfolders for storage or further management.
    We’ll cover common open document formats, and show you how to expand the approach so you can drop in other types as needed. We’ll also use cron to automate execution, flock to prevent overlapping runs, and logrotate to handle log rotation automatically. The final result will be a lightweight, low-maintenance automation you can replicate on almost any Linux system.
    The methods here work on both desktop and server environments, which makes them a practical fit for organisations that need to handle regular PDF conversions. Once configured, the process is fully hands-free. We’ll keep things approachable and script-first, run everything as a non-privileged user, and focus on a clear folder layout you can adapt to your own workflow with no GUI required.
📋 Even if you do not need such a system, trying out tutorials like this helps sharpen your Linux skills. Try it and learn new things while having fun.
Our automation goals
    We’ll build a practical, approachable system that does the following:
Watch a single folder for new documents in any supported file format (ODF, DOCX, etc.).
Convert each file to PDF using unoconv.
Move converted PDFs into a dedicated folder.
Move original files into subfolders matching their extensions (e.g., originals/odt/).
Prevent overlapping runs using a lockfile.
Log all actions to /var/log/lo-unoconv.log with automatic log rotation.
This gives us a self-contained, resilient system that can handle everything from a trickle of invoices to hundreds of archived reports.
📋 By supported file formats, we're referring to any file type that we include in our script. LibreOffice supports many file formats that we are unlikely to need.
Where to use such automated PDF conversion?
Imagine this scenario: In a company or organization, there's a shared folder where staff (or automated systems) drop finished documents that need to be standardized for archival or distribution. Everyone can keep editing their working files in the usual place. When a document is ready for the day, it gets saved to the Document Inbox folder and synced to the file server.
Every few minutes, a conversion job runs automatically, checks this folder for any supported documents — ODT, ODS, ODP, DOCX, and so on — and converts them to PDF. The resulting PDFs are saved to "Reports-PDF", replacing any previous versions if necessary, and the processed copy of the source document is filed into a folder in "Originals", sorted by extension for traceability.
    There are no extra buttons to press and no manual exporting to remember. Anyone can drop a file and go on about their day, and the PDFs will be neatly arranged and waiting in the output directory minutes later. This lets the team keep a simple routine while ensuring consistent, ready-to-share PDFs appear on schedule. This is exactly the solution we’re aiming for in this tutorial.
    Understanding Unoconv
    Unoconv (short for UNO Converter) is a Python wrapper for LibreOffice’s Universal Network Objects (UNO) API. It interfaces directly with a headless instance of LibreOffice, either by launching a new instance or connecting to an existing one, and uses this to convert between supported file formats.
🚧 unoconv is available on most Linux distributions, but is no longer under development. Its replacement, unoserver, is under active development, but does not yet have all the features of unoconv.
Why Use Unoconv Instead of Headless LibreOffice Directly?
    You might wonder why we're not using LibreOffice directly, since it has a headless version that can even be used on servers. The answer lies in how headless LibreOffice works. It is designed to launch a new instance every time the libreoffice --headless command is run.
This works fine for one-off tasks, but it puts a strain on the system if the application must be loaded from storage and system resources must be reallocated every time you use it. By using unoconv as a wrapper, we can allow headless LibreOffice to run as a persistent listener, with predictable resource usage, and avoid overlap when multiple conversions are needed. This saves time and makes for an ideal solution for recurring jobs like ours.
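As a rough sketch of the difference (report.odt is a hypothetical file; later in this tutorial the listener is managed by systemd rather than backgrounded by hand):

# One-shot: every invocation spins up its own headless LibreOffice
unoconv -f pdf report.odt

# Persistent: start unoconv's built-in listener once, then reuse it
unoconv --listener &
unoconv -f pdf report.odt   # connects to the running listener instead of spawning LibreOffice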
    Installing the prerequisites
    You'll need to install LibreOffice, unoconv, and the UNO Python bindings (pyuno) for this setup to work. The Writer, Calc, and Impress components are also required, as they provide filters needed for file format conversions.
    However, we won't need any GUI add-ons — everything here is headless/server-friendly. Even if some small GUI-related libraries are installed as dependencies, everything you'll install will run fully headless; absolutely no display server required.
    Note: on desktops, some of these packages may already be installed. Running these commands will ensure you're not missing any dependencies, but will not cause any problems if the packages already exist.
    Debian / Ubuntu:
sudo apt update
sudo apt install unoconv libreoffice-core libreoffice-writer libreoffice-calc libreoffice-impress python3-uno fonts-dejavu fonts-liberation
RHEL/CentOS Stream
    First enable EPEL (often required for unoconv on RHEL and its derivatives, Fedora has it in the default repos):
sudo dnf install epel-release
Then install:
sudo dnf install unoconv libreoffice-writer libreoffice-calc libreoffice-impress libreoffice-pyuno python3-setuptools dejavu-sans-fonts liberation-fonts
openSUSE (Leap / Tumbleweed)
sudo zypper install unoconv libreoffice-writer libreoffice-calc libreoffice-impress python3-uno python3-setuptools dejavu-fonts liberation-fonts
Arch Linux (and Manjaro)
    Heads up: There’s no separate libreoffice-core/libreoffice-headless split on Arch, but the packages still run headless.
sudo pacman -S unoconv libreoffice-fresh python-setuptools ttf-dejavu ttf-liberation
Note: libreoffice-fresh includes pyuno on Arch; use libreoffice-still for the LTS track.
    Testing that everything works
    Once you've installed the prerequisites, I recommend checking to see that unoconv is working. To do this, you can try these instructions:
    First, create a sample text file:
cat > sample.txt << 'EOF'
Unoconv smoke test
==================

This is a plain-text file converted to PDF via LibreOffice (headless) and unoconv.

• Bullet 1
• Bullet 2
• Unicode check: café – 東京 – ½ – ✓
EOF
Next, run a test conversion with unoconv:
# Convert TXT → PDF
unoconv -f pdf sample.txt
You may run into this error on recent Debian/Ubuntu systems:
Traceback (most recent call last):
  File "/usr/bin/unoconv", line 19, in <module>
    from distutils.version import LooseVersion
ModuleNotFoundError: No module named 'distutils'
This occurs because unoconv still imports distutils, which was removed in Python 3.12. You can fix this with:
sudo apt install python3-packaging
sudo sed -i 's/from distutils.version import LooseVersion/from packaging.version import parse as LooseVersion/' /usr/bin/unoconv
You may get a similar error on Fedora, which looks something like this:
unoconv -f pdf sample.txt
/usr/bin/unoconv:828: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if product.ooName not in ('LibreOffice', 'LOdev') or LooseVersion(product.ooSetupVersion) <= LooseVersion('3.3'):
However, the conversion should still proceed.
    Verifying the conversion
If the command completed successfully, it's wise to verify that the output is valid before proceeding.
    You can verify and validate the PDF with these commands:
ls -lh sample.pdf
file sample.pdf
You should see output similar to this:
-rw-r--r--. 1 username username 26K Oct 29 12:44 sample.pdf
sample.pdf: PDF document, version 1.7, 1 page(s)
Verifying the PDF exists and is valid
Optionally, if you have poppler-utils installed, you can check the PDF metadata:
pdfinfo sample.pdf 2>/dev/null || true
This should give you output that looks something like this:
Creator:         Writer
Producer:        LibreOffice 25.2.2.2 (X86_64)
CreationDate:    Wed Oct 29 12:44:23 2025 AST
Custom Metadata: no
Metadata Stream: yes
Tagged:          yes
UserProperties:  no
Suspects:        no
Form:            none
JavaScript:      no
Pages:           1
Encrypted:       no
Page size:       612 x 792 pts (letter)
Page rot:        0
File size:       25727 bytes
Optimized:       no
PDF version:     1.7
Checking the PDF's info with poppler-utils
Finally, clean up the test files:
rm -f sample.txt sample.pdf
Setting up a persistent LibreOffice listener
    By default, unoconv starts a new LibreOffice instance for each conversion, which is fine for small workloads, but for our setup, we want it to run as a persistent headless listener. This way, your system doesn't have to fire up LibreOffice for every conversion, thus keeping resources predictable and enhancing system stability.
    To do this, we'll first create a dedicated profile for the headless instance to use. This is most critical on the desktop, since running a headless LibreOffice instance on a shared profile would block GUI functionality. On servers, you can skip this step if you are sure you will only need LibreOffice for this purpose or are otherwise fine with using a shared profile.
    Creating the LibreOffice profile
    To create the profile for your headless LibreOffice instance, run:
# Create the user with a proper home directory
sudo useradd --system --create-home --home-dir /var/lib/lo-svc --shell /bin/bash lo-svc

# Ensure the directory exists with correct permissions
sudo mkdir -p /var/lib/lo-svc
sudo chown -R lo-svc:lo-svc /var/lib/lo-svc
sudo chmod 755 /var/lib/lo-svc
You can choose any path you'd like; just be sure to remember this path for the next step.
    Setting Up the Folder Structure
    Now that we've installed all prerequisites and prepared the LibreOffice listener, we'll set up our system with a simple folder layout.
    🗒️ You can use any folder names you want, but you'll need to pay attention to their names and change the names in the scripts we'll create later.
/srv/convert/
├── inbox       # Drop documents here for conversion
├── PDFs        # Converted PDFs appear here
└── originals   # Originals moved here (grouped by extension)
Create these directories:
sudo mkdir -p /srv/convert/{inbox,PDFs,originals}
sudo chown -R lo-svc:lo-svc /srv/convert
sudo chmod 1777 /srv/convert/inbox      # World-writable with sticky bit
sudo chmod 755 /srv/convert/PDFs        # lo-svc can write, others can read
sudo chmod 755 /srv/convert/originals   # lo-svc can write, others can read
By using this folder configuration, anyone can drop files in the inbox folder, but only the script will have permission to write to the originals and PDFs folders. This is done for security purposes. However, you can set the permissions that you prefer, so long as you understand the risks and requirements.
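Optionally, you can sanity-check the permissions before moving on (a quick, disposable test; not strictly required):

# Confirm that the lo-svc user can write to the output directories
sudo -u lo-svc touch /srv/convert/PDFs/.write-test && echo "PDFs writable by lo-svc"
sudo -u lo-svc rm -f /srv/convert/PDFs/.write-test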

    You can also have this automation run on the same server where you've installed Nextcloud/Owncloud, and place these folders on a network share or Nextcloud/Owncloud directory to enable collaborative workflows. Just be sure to set the correct permissions so that Nextcloud/Owncloud can write to these folders.
    For the sake of brevity, we won't cover that additional setup in this tutorial.
    Setting up a persistent LibreOffice Listener with systemd
    The next step is to establish the headless LibreOffice instance, and use a systemd service to keep it running in the background every time the system is restarted. Even on servers this can be critical in case services fail for any reason.
    Option A: System-wide service (dedicated user)
    If you're planning to use this solution in a multiuser setup, then this method is highly recommended as it will save system resources and simplify management.
    Create /etc/systemd/system/libreoffice-listener.service:
sudo nano /etc/systemd/system/libreoffice-listener.service
Then enter the following:
[Unit]
Description=LibreOffice headless UNO listener
After=network.target

[Service]
User=lo-svc
Group=lo-svc
WorkingDirectory=/tmp
Environment=VCLPLUGIN=headless
ExecStart=/usr/bin/soffice --headless --nologo --nodefault --nofirststartwizard --norestore \
  --accept='socket,host=127.0.0.1,port=2002;urp;StarOffice.ComponentContext' \
  '-env:UserInstallation=file:///var/lib/lo-svc'
Restart=on-failure

# Optional hardening:
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=full
ProtectHome=true

[Install]
WantedBy=multi-user.target
Press CTRL+O and ENTER to save the file, and CTRL+X to exit nano.
    Enable and start the systemd service:
sudo systemctl daemon-reload
sudo systemctl enable --now libreoffice-listener
Ensuring the service is running correctly
    Once you've set up the system-wide systemd service, it's best practice to ensure that it's running smoothly and listening for connections. I'll show you how to do this below.
Check if the service is running properly:
sudo systemctl status libreoffice-listener
The LibreOffice listener running smoothly
Check the logs if it's failing:
sudo journalctl -u libreoffice-listener -f
Test the connection:
sudo -u lo-svc unoconv --connection="socket,host=127.0.0.1,port=2002;urp;StarOffice.ComponentContext" --show
Option B: Per-user service
    If you'd like to use this on a per-user basis, you'll need to set up a systemd service for each user that needs it. This service will run without the need for root permissions or a custom user.

To set this up, first create a folder in your home directory for the LibreOffice profile:
mkdir -p ~/.lo-headless
Create the service file:
mkdir -p ~/.config/systemd/user
nano ~/.config/systemd/user/libreoffice-listener.service
In nano, enter the following contents:
[Unit]
Description=LibreOffice headless UNO listener
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/soffice --headless --nologo --nodefault --nofirststartwizard --norestore \
  --accept='socket,host=127.0.0.1,port=2002;urp;' \
  '-env:UserInstallation=file://%h/.lo-headless'
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
Save the file with CTRL+O and ENTER on your keyboard, then exit as usual with CTRL+X.
    Then run the following commands:
systemctl --user daemon-reload
systemctl --user enable --now libreoffice-listener
systemctl --user status libreoffice-listener
For user services to start at boot, enable linger:
sudo loginctl enable-linger "$USER"
Building the conversion script
Now that we've set up the folders, we can move on to the heart of the system: the bash script that will call unoconv and direct conversions and sorting automatically.
    It will perform the following actions:
Loop through every file in the inbox
Use unoconv to convert it to PDF
Move or delete any original files
Log each operation
Prevent multiple conversions from running at once
First, let's create the script by running:
sudo nano /usr/local/bin/lo-autopdf.sh
Here's the full content of the script; we'll walk through the details below:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
shopt -s nullglob

INBOX="/srv/convert/inbox"
PDF_DIR="/srv/convert/PDFs"
ORIGINALS_DIR="/srv/convert/originals"

# Note: If using per-user service, change this to a user-accessible location like:
# LOG_FILE="$HOME/.lo-unoconv.log"
LOG_FILE="/var/log/lo-unoconv.log"
LOCK_FILE="/tmp/lo-unoconv.lock"
LIBREOFFICE_SOCKET="socket,host=127.0.0.1,port=2002;urp;StarOffice.ComponentContext"
DELETE_AFTER_CONVERT=false

timestamp() { date +"%Y-%m-%d %H:%M:%S"; }
log() { printf "[%s] %s\n" "$(timestamp)" "$*" | tee -a "$LOG_FILE"; }

for dir in "$INBOX" "$PDF_DIR" "$ORIGINALS_DIR"; do
    if [ ! -d "$dir" ]; then
        log "ERROR: Directory $dir does not exist"
        exit 1
    fi
done

# Global script lock - prevent multiple instances
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
    log "Another conversion process is already running. Exiting."
    exit 0
fi

log "Starting conversion run..."

for file in "$INBOX"/*; do
    [[ -f "$file" ]] || continue

    base="$(basename "$file")"
    ext="${base##*.}"
    lower_ext="${ext,,}"

    [[ "$base" == .~lock*# ]] && continue
    [[ "$base" == *.tmp ]] && continue
    [[ "$base" == *.swp ]] && continue

    # Optional: Check if file is busy (being written to)
    # Uncomment if you need to avoid processing files during large transfers
    #if ! flock -n "$file" true 2>/dev/null; then
    #    log "File $base is busy (being written to), skipping..."
    #    continue
    #fi

    log "Converting: $base"

    # Convert file - PDF will be created in same directory as input
    if unoconv --connection="$LIBREOFFICE_SOCKET" -f pdf "$file" >>"$LOG_FILE" 2>&1; then
        # Get the expected PDF filename
        pdf_name="${base%.*}.pdf"
        pdf_file="$INBOX/$pdf_name"

        # Check if PDF was created and move it to PDFs directory
        if [[ -f "$pdf_file" ]]; then
            mv -f -- "$pdf_file" "$PDF_DIR/"
            log "Converted successfully: $base → PDF"
        else
            log "❌ PDF was not created for $base"
            continue
        fi

        if $DELETE_AFTER_CONVERT; then
            rm -f -- "$file"
            log "Deleted original: $base"
        else
            dest_dir="$ORIGINALS_DIR/$lower_ext"
            mkdir -p "$dest_dir"
            mv -f -- "$file" "$dest_dir/"
            log "Moved original to: $dest_dir/"
        fi
    else
        log "❌ Conversion failed for $base"
    fi
done

log "Conversion run complete."
Feel free to copy this script as-is, if you've used the same directory structure as the tutorial. When you're ready, press CTRL+O followed by ENTER to save the file, and CTRL+X to exit.
    Make it executable and create the log file:
sudo chmod +x /usr/local/bin/lo-autopdf.sh
sudo touch /var/log/lo-unoconv.log
sudo chown lo-svc:lo-svc /var/log/lo-unoconv.log
sudo chmod 644 /var/log/lo-unoconv.log
Note: If you've created your directories elsewhere, you'll need to update the $INBOX, $PDF_DIR, and $ORIGINALS_DIR variables in the script to point to your chosen directories.
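Before wiring this up to cron, it's worth a quick manual smoke test (test.odt is a hypothetical document; the listener service must already be running):

cp ~/Documents/test.odt /srv/convert/inbox/   # the inbox is world-writable
sudo -u lo-svc /usr/local/bin/lo-autopdf.sh   # run one conversion pass by hand
ls /srv/convert/PDFs/                         # test.pdf should appear here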
    With that said, let’s take a closer look and break this all down.
    Error handling and safety
    Even for a simple script like this, it's best that we practice safety and avoid common problems. To this end, we've built the script with some safeguards in place.
    The first line:
set -euo pipefail
enforces certain strict behaviours in the script:
-e: exit immediately on any error
-u: treat unset variables as errors
-o pipefail: capture failures even inside pipelines
These three options will make the script more predictable, which is critical, as it will run unattended.
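To see why pipefail matters, here is a tiny demo you can paste into any interactive bash shell (illustrative only):

false | true; echo $?    # prints 0 — the failure of 'false' is masked by the last command
set -o pipefail
false | true; echo $?    # prints 1 — any failing stage now fails the whole pipeline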
    The second line:
IFS=$'\n\t'
is there to ensure filenames with spaces don't cause trouble.
    The third line:
shopt -s nullglob
prevents a literal wildcard (*) from appearing when no files are present in the inbox folder.
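A quick way to see the difference, using an empty directory as a stand-in for an empty inbox:

mkdir -p /tmp/empty-dir
for f in /tmp/empty-dir/*; do echo "got: $f"; done   # prints the literal pattern: got: /tmp/empty-dir/*
shopt -s nullglob
for f in /tmp/empty-dir/*; do echo "got: $f"; done   # prints nothing — the glob expands to an empty list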
    Variables and directory definitions
    The first three variables:
INBOX="/srv/convert/inbox"
PDF_DIR="/srv/convert/PDFs"
ORIGINALS_DIR="/srv/convert/originals"
define the directories the script will use. You can change these to your liking, if you'd like to use a different setup from what is demonstrated here.
The LOG_FILE variable:
LOG_FILE="/var/log/lo-unoconv.log"
is used for logging. This way, the script will keep track of every time it is run and any errors it encounters, for later troubleshooting.
    Note: if you're using a per-user service, change LOG_FILE to point to a user-accessible location, such as $HOME/.lo-unoconv.log.
    The LOCK_FILE variable:
LOCK_FILE="/tmp/lo-unoconv.lock"
is used by flock to prevent multiple instances of the script. This will prevent any potential conflicts that could arise from concurrent instances.
    The LIBREOFFICE_SOCKET variable:
LIBREOFFICE_SOCKET="socket,host=127.0.0.1,port=2002;urp;StarOffice.ComponentContext"
tells the script how and where to find and communicate with LibreOffice. If you ever change the location of your LibreOffice setup, whether the port or the host, you'll need to update this variable.
    The DELETE_AFTER_CONVERT variable:
DELETE_AFTER_CONVERT=false
controls whether the original file should be deleted upon conversion. If you'd like this to be the case in your setup, you can set this variable to "true".
    Timestamps & logging
    Next, we have two functions, timestamp() and log():
timestamp() { date +"%Y-%m-%d %H:%M:%S"; }
log() { printf "[%s] %s\n" "$(timestamp)" "$*" | tee -a "$LOG_FILE"; }
The log() function adds timestamps to messages using the output of the timestamp() function, and writes them both to stdout (what you'd see in the terminal) and to the log file (set in $LOG_FILE).
    This ensures you can always check what time something went wrong, if anything fails.
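For instance, a call like the one below (the message and timestamp are illustrative) produces the same line in the terminal and in $LOG_FILE:

log "Converting: report.odt"
# [2025-10-29 12:44:23] Converting: report.odt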
    Checking for the necessary directories
    The next part of our script checks that the right directories exist before proceeding:
for dir in "$INBOX" "$PDF_DIR" "$ORIGINALS_DIR"; do
    if [ ! -d "$dir" ]; then
        log "ERROR: Directory $dir does not exist"
        exit 1
    fi
done
This is especially useful if you decide to change the location of any of the directories listed in $INBOX, $PDF_DIR, or $ORIGINALS_DIR. Any errors will show up in the log file.
    Concurrency control with flock
    Next, the script needs to be able to handle two concurrency issues:
Multiple script instances: cron might trigger a job while another conversion is still in progress.
File access conflicts (optional): users might be writing to files when the script tries to process them.

The file-access check lives within the for loop (see "The heart of our script: the file loop" below). While this check would be useful to have by default, it has proved unreliable in some cases, due to quirks in flock itself that create false positives. For this reason, it's been made optional for this guide. To prevent multiple instances, we use flock with a global lock file:
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
  log "Another conversion process is already running. Exiting."
  exit 0
fi

This opens a file descriptor (9) tied to a lock file (defined by $LOCK_FILE). If there's already a conversion in progress, the script detects it, logs a message, and exits cleanly.
    If you'd like to include individual file checks, you can uncomment this section:
# Optional: Check if file is busy (being written to)
# Uncomment if you need to avoid processing files during large transfers
#if ! flock -n "$file" true 2>/dev/null; then
#  log "File $base is busy (being written to), skipping..."
#  continue
#fi

This can be found in the for loop after [[ "$base" == *.swp ]] && continue. If you choose to use this, be sure to test the script to ensure that no false positives are blocking conversions.
    The global flock check should be sufficient in most use cases. However, you may want to enable this secondary check if you are working in a high traffic environment with many users saving files simultaneously.
The heart of our script: the file loop
Now we come to the most critical part of this conversion script: the for loop that parses files in $INBOX and passes them to unoconv.
for file in "$INBOX"/*; do
    [[ -f "$file" ]] || continue

    base="$(basename "$file")"
    ext="${base##*.}"
    lower_ext="${ext,,}"

    [[ "$base" == .~lock*# ]] && continue
    [[ "$base" == *.tmp ]] && continue
    [[ "$base" == *.swp ]] && continue

    # Optional: Check if file is busy (being written to)
    # Uncomment if you need to avoid processing files during large transfers
    #if ! flock -n "$file" true 2>/dev/null; then
    #    log "File $base is busy (being written to), skipping..."
    #    continue
    #fi

    log "Converting: $base"

    # Convert file - PDF will be created in same directory as input
    if unoconv --connection="$LIBREOFFICE_SOCKET" -f pdf "$file" >>"$LOG_FILE" 2>&1; then
        # Get the expected PDF filename
        pdf_name="${base%.*}.pdf"
        pdf_file="$INBOX/$pdf_name"

        # Check if PDF was created and move it to PDFs directory
        if [[ -f "$pdf_file" ]]; then
            mv -f -- "$pdf_file" "$PDF_DIR/"
            log "Converted successfully: $base → PDF"
        else
            log "❌ PDF was not created for $base"
            continue
        fi

        if $DELETE_AFTER_CONVERT; then
            rm -f -- "$file"
            log "Deleted original: $base"
        else
            dest_dir="$ORIGINALS_DIR/$lower_ext"
            mkdir -p "$dest_dir"
            mv -f -- "$file" "$dest_dir/"
            log "Moved original to: $dest_dir/"
        fi
    else
        log "❌ Conversion failed for $base"
    fi
done

In simple terms, the first part of the loop:
[[ -f "$file" ]] || continue
base="$(basename "$file")"
ext="${base##*.}"
lower_ext="${ext,,}"
[[ "$base" == .~lock*# ]] && continue
[[ "$base" == *.tmp ]] && continue
[[ "$base" == *.swp ]] && continue
# Optional: Check if file is busy (being written to)
# Uncomment if you need to avoid processing files during large transfers
#if ! flock -n "$file" true 2>/dev/null; then
#  log "File $base is busy (being written to), skipping..."
#  continue
#fi

scans every file in $INBOX and skips over directories, LibreOffice lock files, and any temporary files that LibreOffice may produce during editing. As mentioned earlier, the optional flock check ensures that no file is processed while being saved. If everything is fine, the script continues.
    The next section performs the conversion, and logs what files are being converted:
# Convert file - PDF will be created in same directory as input
if unoconv --connection="$LIBREOFFICE_SOCKET" -f pdf "$file" >>"$LOG_FILE" 2>&1; then
    # Get the expected PDF filename
    pdf_name="${base%.*}.pdf"
    pdf_file="$INBOX/$pdf_name"

    # Check if PDF was created and move it to PDFs directory
    if [[ -f "$pdf_file" ]]; then
        mv -f -- "$pdf_file" "$PDF_DIR/"
        log "Converted successfully: $base → PDF"
    else
        log "❌ PDF was not created for $base"
        continue
    fi

The remainder of the script determines what happens to the files after conversion:
log "Converted successfully: $base → PDF"

    if $DELETE_AFTER_CONVERT; then
        rm -f -- "$file"
        log "Deleted original: $base"
    else
        dest_dir="$ORIGINALS_DIR/$lower_ext"
        mkdir -p "$dest_dir"
        mv -f -- "$file" "$dest_dir/"
        log "Moved original to: $dest_dir/"
    fi
else
    log "❌ Conversion failed for $base"
fi
done

If deletion is enabled ($DELETE_AFTER_CONVERT=true), the original files are deleted upon conversion. Otherwise, the script sorts the files into the folder corresponding to their file extension.
    For example:
originals/odt/
originals/ods/
originals/odp/

This organisation makes it easy to trace back where each PDF came from.
If any file fails, a log entry is written for that file. This gives you a clear history of all conversions.
The done keyword then closes the loop, and the script exits cleanly.
    Setting up cron
    Now that you've got everything set, you can set up cron to run the script periodically. For the purposes of this tutorial, we'll set it to run every five minutes, but you can choose any interval you prefer.
    First, open your crontab:
sudo crontab -u lo-svc -e

If you're using the per-user setup, use crontab -e instead.
Note: On Fedora and some other systems, editing the system crontab with sudo crontab -e may launch vim or vi, so the standard commands we've been using for nano won't apply. If that is the case, press ESC, then type :wq and press ENTER to save and exit.
    Then add this line:
*/5 * * * * /usr/local/bin/lo-autopdf.sh

If you need finer control, you can adjust the interval. For example, you can set it to run once every hour:

0 * * * * /usr/local/bin/lo-autopdf.sh

Setting up logging and rotation
We've set up our script to write detailed logs to /var/log/lo-unoconv.log. However, this file will grow over time, so to avoid it getting too large, we'll use logrotate to keep it in check.
To do this, first create a new file in logrotate.d:

sudo nano /etc/logrotate.d/lo-unoconv

In that file, add the following:
/var/log/lo-unoconv.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 644 lo-svc lo-svc
}

With this configuration, the system will keep four weeks of compressed logs, rotating them weekly. If no logs exist or they're empty, it skips the cycle.
    Verifying log rotation worked
    Now that you've set up log rotation, it's a good practice to ensure that it's working correctly.
    To do this, first run a rotation manually:
sudo logrotate -f /etc/logrotate.d/lo-unoconv

Since a successful logrotate run typically produces no output, we'll need to check for some indicators manually.
    First, check for rotated files:
ls -la /var/log/lo-unoconv*

You should see your original log file and a rotated version (e.g., lo-unoconv.log.1 or lo-unoconv.log.1.gz).
Rotated logs

Next, verify the log file still exists and is writable:
ls -la /var/log/lo-unoconv.log

This should show the file is owned by lo-svc:lo-svc and has 644 (-rw-r--r--) permissions.
Example output for a file with the right permissions

Now, check logrotate's status:
sudo logrotate -d /etc/logrotate.d/lo-unoconv

The -d flag runs in debug mode and shows what logrotate would normally do.
Example output from logrotate in debug mode

Test that logging works by running the script manually and reading the log:
sudo -u lo-svc /usr/local/bin/lo-autopdf.sh
tail -5 /var/log/lo-unoconv.log

Example output from the test run

If you see log entries, and your rotated files showed up correctly before, then your script is writing to the log correctly. The automated rotation will happen weekly in the background.
    Now you can run a test conversion.
    Testing your setup
    Now that you've got everything set up, you can test that it's all working correctly. To do this, you can try the following steps:
Create two test files:

# 1) Create a simple text file and convert to an ODT document
cat > sample.txt << 'EOF'
Weekly Report
=============
- Task A done
- Task B in progress
EOF
soffice --headless --convert-to odt --outdir . sample.txt # produces sample.odt

# 2) Create a simple CSV and convert to an ODS spreadsheet
cat > report.csv << 'EOF'
Name,Qty,Notes
Apples,3,Fresh
Bananas,5,Ripe
EOF
soffice --headless --convert-to ods --outdir . report.csv # produces report.ods

Move the test files into /srv/convert/inbox:

mv sample.odt /srv/convert/inbox/
mv report.ods /srv/convert/inbox/

Wait for the next cron cycle and check the contents of /srv/convert:

ls /srv/convert/PDFs
ls /srv/convert/originals

Review /var/log/lo-unoconv.log to see that logging is working. If all went well, you'll have a clean log with timestamps showing each conversion.
    Conclusion
You've just learned how to build a reliable automated PDF converter using unoconv with just one Bash script and a cron job. You can drop this into just about any setup, whether on a server or a personal computer. If you're feeling adventurous, feel free to modify the script to support other formats as needed.
  9. by: Abhishek Prakash
    Thu, 30 Oct 2025 07:49:18 GMT

Halloween is here. Some people carve pumpkins; I crafted a special setup for my Arch Linux 🎃
In this tutorial, I'll share all the steps I took to give my system a Halloween-inspired dark, spooky makeover with Hyprland. Since it is Hyprland, you can relatively easily replicate the setup by getting the dot files from our GitHub repository.
    🚧This specific setup was done with Hyprland window compositor on top of Arch Linux. If you are not using Hyprland and still want to try it, I advise installing Arch Linux in a virtual machine. If videos are your thing, you can watch all the steps in action in this video on our YouTube channel.
Subscribe to It's FOSS YouTube Channel

Step 1: Install Hyprland and necessary packages
    First, install all the essential Hyprland packages to get the system up and running:
sudo pacman -S hyprland xdg-desktop-portal-hyprland hyprpolkitagent kitty

The above will install Hyprland and necessary packages. Now, install other utility packages.
sudo pacman -S hyprpaper hyprpicker hyprlock waybar wofi dunst fastfetch bat eza starship nautilus

What do these packages do? Here's a quick overview:
hyprpaper: Hyprland wallpaper utility
hyprpicker: Color picker
hyprlock: Lock screen utility
waybar: Waybar is a Wayland panel
wofi: Rofi launcher alternative, but for Wayland. Rofi can be used; in fact, we have some preset config for Rofi in our GitHub repository, but Wofi was selected for this video.
dunst: Notification daemon
fastfetch: System information display utility
bat: Modern alternative to the cat command
eza: Modern ls command alternative
starship: Prompt customization tool
nautilus: The file manager from GNOME

Step 2: Install and enable display manager
You need a display manager to log in to the system. We use the SDDM display manager; GDM also works fine with Hyprland.
sudo pacman -S sddm

Once the SDDM package is installed, enable the display manager so it starts at boot:
sudo systemctl enable sddm.service

Enable SDDM
    Now, reboot the system. When login prompt appears, login to the system.
Login to Hyprland

Step 3: Install other utility packages
    Once essential Hyprland packages are installed and you are logged in, open a terminal in Hyprland using Super + Q. Now install Firefox browser using:
sudo pacman -S firefox

It's time to install theme packages. Hyprland is not a desktop environment in the sense that GNOME or KDE is, yet you may still use some apps developed for GNOME (GTK apps) or Qt apps.
To theme them, you need to install a theme manager for each toolkit:
nwg-look: To apply themes to GTK apps
qt5ct: To apply themes to Qt5 apps

Install these packages using the command:
sudo pacman -S qt5ct nwg-look

🚧If you are using a minimal installation of Arch Linux, you may need to install an editor like nano to edit files in the terminal.

Step 4: Change the monitor settings
    In most cases, Hyprland should recognize the monitor and load accordingly. But in case you are running it in a VM, it will not set the display size properly.
Even though we apply the full configuration at a later stage, if you want to fix the monitor now, add a line like this to your Hyprland config:
monitor=<Monitor-name>,1920x1080,auto,auto

Monitor settings

It is important to get the name of the monitor. Use this command:
    hyprctl monitors Remember the name of your monitor.
Get monitor name

Step 5: Download our custom Hyprland dot files
    Go to It's FOSS GitHub page and download the text-script-files repository.
Download Config Files

You can also clone the repo, if you want, using the command:
git clone https://github.com/itsfoss/text-script-files.git

Note that the above needs git installed.
    If you have downloaded the zip file, extract the archive file. Inside that, you will find a directory config/halloween-hyprland. This is what we need in this article.
    Step 6: Copy wallpaper to directory
    Copy the images in the wallpapers folder to a directory called ~/Pictures/Wallpapers. Create it if it does not exist, of course.
mkdir -p ~/Pictures/Wallpapers
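Then copy the images over. A sketch, assuming you extracted the repository into ~/Downloads:

cp ~/Downloads/text-script-files/config/halloween-hyprland/wallpapers/* ~/Pictures/Wallpapers/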
Copy wallpapers

Step 7: Download GTK theme, icons and fonts

Download the Everforest GTK theme, dark borderless macOS buttons variant.
Download Everforest GTK Theme

Download the Dominus Funeral icon theme, dark style.
Download Dominus Funeral Icon theme

Download the "Creepster" font from the Google Fonts website.
Download Creepster font

Next, create ~/.themes, ~/.icons, and ~/.fonts:
    mkdir -p ~/.themes ~/.icons ~/.fonts And we need to paste theme, icon, and font files in their respective locations:
Extract the "Creepster" font file and place it in ~/.fonts.
Extract the theme file and place it in ~/.themes.
Extract the icon file and place it in ~/.icons.
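If you prefer the terminal, here's the rough command-line equivalent with hypothetical archive names (substitute the files you actually downloaded):

unzip ~/Downloads/Creepster.zip -d ~/.fonts/
tar -xf ~/Downloads/everforest-gtk-theme.tar.xz -C ~/.themes/
tar -xf ~/Downloads/dominus-funeral-icons.tar.xz -C ~/.icons/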
Paste themes, icons, and fonts

Step 8: Install other nerd fonts

Install Nerd Fonts like:
Firacode Mono Nerd Font and Caskaydia Nerd Font: download from the Nerd Fonts website
Font Awesome free desktop fonts
JetBrains Mono

If you are on Arch Linux, open a terminal and run the command:
sudo pacman -S ttf-firacode-nerd ttf-cascadia-code-nerd ttf-cascadia-mono-nerd woff2-font-awesome ttf-jetbrains-mono

Step 9: Verify Waybar and Hyprland config
Open the config.jsonc file in the downloaded directory and replace any occurrence of Virtual-1 with your monitor name.
    For GNOME Box VM, it is Virtual-1. On my main system, I have two monitors connected. So, the names for my monitors are HDMI-A-1 and HDMI-A-2. Note the name of the monitors as we saw in Step 4:
hyprctl monitors

Now, in the Waybar config, change the monitor name from Virtual-1 to the name of your monitor. Change all such occurrences.
📋You can use any editor's find-and-replace feature: find the complete word Virtual-1 and replace it with your monitor name. If you are using nano, follow this guide to learn search and replace in the nano editor.

Also, take a look at the panel items. If you see any item that is not needed in the panel, you can remove it from the [modules-<position>] part.
👉 Similarly, open the Hyprland config in the downloaded directory. Change all references to Virtual-1 to your monitor name. Likewise, replace the monitor name in the hyprlock and hyprpaper config files.
    Step 10: Copy and paste config files
Copy the following directories (in the downloaded GitHub files) and paste them into the ~/.config folder.
waybar: Waybar panel configs and styles
wofi: Application launcher config
dunst: Customized dunst notification system
starship.toml: Customized starship prompt

If you are using a GUI file manager, copy all files/folders except hypr, wallpaper, and README.
Copy except hypr and wallpaper

Step 11: Replace Hyprland config
    We did not copy hypr folder, because there is already a folder called hypr in every Hyprland system, which contains the minimal config.
I don't want it to vanish; instead, let's keep it as a backup:
cp ~/.config/hypr/hyprland.conf ~/.config/hypr/hyprland.conf.bak

Now, swap the contents of the hyprland.conf on your system with the customized one. Luckily, the mv command has a convenient option called --exchange.
mv --exchange ~/.config/hypr/hyprland.conf /path/to/new/hyprland/config

🚧What the above command does is swap the contents of your default Hyprland config with the one we created.

Backup and replace Hyprland config
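Note that mv --exchange is a fairly recent addition (GNU coreutils 9.4 or newer, as far as I'm aware). If your mv doesn't support it, a plain copy after the backup achieves the same end result:

cp /path/to/new/hyprland/config ~/.config/hypr/hyprland.conf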
Step 12: Paste hyprlock and hyprpaper configs

Now, copy the hyprlock.conf and hyprpaper.conf files to the ~/.config/hypr directory.
Copy hyprlock and hyprpaper config files

Step 13: Change themes
    Open the NWG-Look app and set the GTK theme and font (Creepster font) for GTK apps:
Set GTK Theme and font

Now, change the icon theme:
Set icon theme for GTK apps

This app automatically adds the necessary file links in ~/.config/gtk-4.0. Thanks to this feature, you don't need to apply the theme manually to GTK4 apps.
    Open the Qt5ct app and change the theme to darker.
Apply Qt Darker theme

Now, apply the icon theme:
Qt icon theme

And change the normal font to "Creepster":
Qt font style

Step 14: Set Starship and aliases
    First, paste some cool command aliases for the normal ls and cat command, using the modern alternatives eza and bat respectively. This is optional, of course.
    Open ~/.bashrc in any editor and paste these lines at the bottom of this file:
alias ls='eza -lG --color always --icons'
alias la='eza -alG --color always --icons'
alias cat='bat --color always --theme="Dracula"'

Now, to enable the Starship prompt, add the starship eval line to ~/.bashrc and source the config:
Edit bashrc

eval "$(starship init bash)"
source ~/.bashrc

Customized starship prompt

Once all this is done, restart the system and log back in to see the Halloween-themed Hyprland.
    Hyprland Halloween Makeover
Enjoy the spooky Hyprland setup. Happy Halloween 🎃
  10. by: Abhishek Prakash
    Thu, 30 Oct 2025 04:30:16 GMT

    It's Halloween so time to talk spooky stuff 👻
If solving Linux mysteries sounds thrilling, SadServers will be your new haunted playground. I came across this online platform that gives you real, misconfigured servers to fix and real-world inspired situations to deal with. This is perfect for sharpening your troubleshooting skills, especially in the Halloween season 🎃
What LeetCode? I Found This Platform to Practice Linux Troubleshooting SkillsMove over theory and practice your Linux and DevOps skills by solving various challenges on this innovative platform. A good way to prepare for job interviews.It's FOSS NewsAbhishek Prakash

💬 Let's see what else you get in this edition:
A new KDE Plasma and Fedora 43 release.
An Austrian ministry kicking out Microsoft.
Ubuntu 25.10 users encountering another bug.
An app that gives you Pomodoro with task management.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by Proton Mail. Ghosts aren't the only ones watching 👀 — Big Tech is too. Protect your inbox from creepy trackers and invisible eyes with Proton Mail, the privacy-first, end-to-end encrypted email trusted by millions. Make the switch today and exorcize your inbox demons. 🕸️💌
Switch to Proton Mail

📰 Linux and Open Source News
KDE Plasma 6.5 has been released with some neat upgrades.
Ubuntu Unity maintainers have sounded the alarm for their survival.
Canonical Academy is here to make you an Ubuntu-certified Linux user.
Google Safe Browsing has managed to flag Immich URLs as dangerous.
Ubuntu 25.10 briefly introduced a bug that broke the automatic upgrade system.
Fedora 43 is finally out after a brief delay. It packs in many useful refinements.

Fedora 43 is Out with Wayland-Only Desktop, GNOME 49, and Linux 6.17RPM 6.0 security upgrades, X11 removal from Workstation, and many other changes.It's FOSS NewsSourav Rudra

🧠 What We're Thinking About
    Austria's BMWET has moved away from Microsoft in a well-organized migration to Nextcloud.
Good News! Austrian Ministry Kicks Out Microsoft in Favor of NextcloudThe BMWET migrates 1,200 employees to sovereign cloud in just four months.It's FOSS NewsSourav Rudra

🧮 Linux Tips, Tutorials, and Learnings
Ghostty is loaded with features; join me as I explore some of them.
    Forks happen when freedom matters more than control.
Community Strikes Back: 12 Open Source Projects Born from ResistanceFrom BSL license changes to abandoned codebases, see how the open source community struck back with powerful forks and fresh alternatives.It's FOSSPulkit Chandak

Don't forget to utilize the templates feature in LibreOffice and save some time.
    Comparing two of the best open source but mainstream password managers.
Bitwarden vs. Proton Pass: What's The Best Password Manager?What is your favorite open-source password manager?It's FOSSAnkush Das

👷 AI, Homelab and Hardware Corner
    Discover what’s next for tinkerers in the post-Qualcomm world.
Arduino Alternative Microcontroller Boards for Your DIY Projects in the Post-Qualcomm EraIf Arduino being acquired puts a bad taste in your mouth, or even if you just want to explore what the alternatives offer, this article is for you.It's FOSSPulkit Chandak

TerraMaster has launched two flagship-class hybrid NAS devices that pack a punch.
    🛍️ Deals You Should Not Miss
    The 16-book library also includes just-released editions of The Official Raspberry Pi Handbook 2026, Book of Making 2026, and much more! Whether you’re just getting into coding or want to deepen your knowledge about something more specific, this pay-what-you-want bundle has everything you need. And you support Raspberry Pi Foundation North America with your purchase!
Humble Tech Book Bundle: All Things Raspberry Pi by Raspberry Pi PressLearn the ins and outs of computer coding with this library from Raspberry Pi! Pay what you want and support the charity of your choice!Humble Bundle

Explore the Humble offer here

✨ Project Highlights
    An in-depth look at a super cool Pomodoro app for Linux.
Pomodoro With Super Powers: This Linux App Will Boost Your ProductivityPomodoro combined with task management and website blocking. This is an excellent tool for productivity seekers but there are some quirks worth noticing.It's FOSSRoland Taylor

📽️ Videos I Am Creating for You
    Giving a dark, menacing but fun Halloween makeover to my Arch Linux system.
Subscribe to It's FOSS YouTube Channel

Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS has made it a mission to write helpful tutorials and guides to help people use Linux on their personal computers.
    We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials.
    If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription.
Join It's FOSS Plus

💡 Quick Handy Tip
On the GNOME desktop, you can use the ArcMenu extension for a heavily customizable panel app menu. For instance, you can get 20+ menu layouts by going to Menu → Menu Layout → Pick a layout of your choice.
    🎋 Fun in the FOSSverse
    We have got a spooky crossword this time around. Can you identify all the FOSS ghosts?
Ghosts of Open Source [Halloween Special Crossword]A spooky crossword challenge for true FOSS enthusiasts!It's FOSSAbhishek Prakash

Actually, there is a whole bunch of Halloween-themed puzzles and quizzes for you to enjoy 😄🎃
Cyber boogeymen crossword
Spooky Linux Commands Quiz
Linux Halloween Quest
Pick up the Pieces of Halloween Tux

🤣 Meme of the Week: Yeah, my Windows partition feels left out.
    🗓️ Tech Trivia: On October 30, 2000, the last Multics system was shut down at the Canadian Department of National Defence in Halifax. Multics was a groundbreaking time-sharing operating system that inspired Unix and introduced ideas like hierarchical file systems, dynamic linking, and security rings that shaped modern computing.
    🧑‍🤝‍🧑 From the Community: Pro FOSSer Neville has shared a fascinating take on arithmetic.
Arithmetic and our Sharing CultureWe al learn to do division “If there are 6 cakes and 3 children, how many cakes does each child get” Division is about sharing But it does not always work “It there are 2 sharks and 8 people in a pool, how many people does each shark get?” Division can not answer that question. Because that example is not about sharing , it is about competition Whether division works depends on what are called the “Rules of Engagement” We all learnt to multiply “If 10 children each bring 2 apples, how m…It's FOSS Communitynevj

❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  11. by: Andy Clarke
    Wed, 29 Oct 2025 16:22:28 +0000

Over the past few months, I’ve explored how we can get creative using well-supported CSS properties. Each article is intended to nudge web design away from uniformity, toward designs that are more distinctive and memorable. One bit of feedback from Phillip Bagleg deserves a follow-up:
    Fair point well made, Phillip. So, let’s bust the myth that editorial-style web design is impractical on small screens.
    My brief: Patty Meltt is an up-and-coming country music sensation, and she needed a website to launch her new album and tour. She wanted it to be distinctive-looking and memorable, so she called Stuff & Nonsense. Patty’s not real, but the challenges of designing and developing sites like hers are.
    The problem with endless columns
    On mobile, people can lose their sense of context and can’t easily tell where a section begins or ends. Good small-screen design can help orient them using a variety of techniques.
When screen space is tight, most designers collapse their layouts into a single long column. That’s fine for readability, but it can negatively impact the user experience: hierarchy disappears, rhythm becomes monotonous, and content scrolls endlessly until it blurs. Then, nothing stands out, and pages turn from being designed experiences into content feeds.
    Like a magazine, layout delivers visual cues in a desktop environment, letting people know where they are and suggesting where to go next. This rhythm and structure can be as much a part of visual storytelling as colour and typography.
    But those cues frequently disappear on small screens. Since we can’t rely on complex columns, how can we design visual cues that help readers feel oriented within the content flow and stay engaged? One answer is to stop thinking in terms of one long column of content altogether. Instead, treat each section as a distinct composition, a designed moment that guides readers through the story.
    Designing moments instead of columns
    Even within a narrow column, you can add variety and reduce monotony by thinking of content as a series of meaningfully designed moments, each with distinctive behaviours and styles. We might use alternative compositions and sizes, arrange elements using different patterns, or use horizontal and vertical scrolling to create experiences and tell stories, even when space is limited. And fortunately, we have the tools we need to do that at our disposal:
@media and @container queries
CSS Grid and Flexbox
Scroll Snap
Orientation media features
Logical properties

These moments might move horizontally, breaking the monotony of vertical scrolling, giving a section its own rhythm, and keeping related content together.
    Make use of horizontal scrolling
    My desktop design for Patty’s discography includes her album covers arranged in a modular grid. Layouts like these are easy to achieve using my modular grid generator.
    But that arrangement isn’t necessarily going to work for small screens, where a practical solution is to transform the modular grid into a horizontal scrolling element. Scrolling horizontally is a familiar behaviour and a way to give grouped content its own stage, the way a magazine spread might.
    I started by defining the modular grid’s parent — in this case, the imaginatively named modular-wrap — as a container:
.modular-wrap {
  container-type: inline-size;
  width: 100%;
}

Then, I added grid styles to create the modular layout:
.modular {
  display: grid;
  gap: 1.5rem;
  grid-template-columns: repeat(3, 1fr);
  grid-template-rows: repeat(2, 1fr);
  overflow-x: visible;
  width: 100%;
}

It would be tempting to collapse those grid modules on small screens into a single column, but that would simply stack one album on top of another.
Collapsing grid modules on small screens into a single column

So instead, I used a container query to arrange the album covers horizontally and enable someone to scroll across them:
@container (max-width: 30rem) {
  #example-1 .modular {
    display: grid;
    gap: 1.5rem;
    grid-auto-columns: minmax(70%, 1fr);
    grid-auto-flow: column;
    grid-template-columns: none;
    grid-template-rows: 1fr;
    overflow-x: auto;
    -webkit-overflow-scrolling: touch;
  }
}

Album covers are arranged horizontally rather than vertically. See this example in my lab.

Now, Patty’s album covers are arranged horizontally rather than vertically, which forms a cohesive component while preventing people from losing their place within the overall flow of content.
    Push elements off-canvas
    Last time, I explained how to use shape-outside and create the illusion of text flowing around both sides of an image. You’ll often see this effect in magazines, but hardly ever online.
The illusion of text flowing around both sides of an image

Desktop displays have plenty of space available, but what about smaller ones? Well, I could remove shape-outside altogether, but if I did, I’d also lose much of this design’s personality and its effect on visual storytelling. Instead, I can retain shape-outside and place it inside a horizontally scrolling component where some of its content is off-canvas and outside the viewport.
    My content is split between two divisions: the first with half the image floating right, and the second with the other half floating left. The two images join to create the illusion of a single image at the centre of the design:
<div class="content">
  <div>
    <img src="img-left.webp" alt="">
    <p><!-- ... --></p>
  </div>
  <div>
    <img src="img-right.webp" alt="">
    <p><!-- ... --></p>
  </div>
</div>

I knew this implementation would require a container query because I needed a parent element whose width determines when the layout should switch from static to scrolling. So, I added a section outside that content so that I could reference its width for determining when its contents should change:
<section>
  <div class="content">
    <!-- ... -->
  </div>
</section>

section {
  container-type: inline-size;
  overflow-x: auto;
  position: relative;
  width: 100%;
}

My technique involves spreading content across two equal-width divisions, and these grid column properties will apply to every screen size:
.content {
  display: grid;
  gap: 0;
  grid-template-columns: 1fr 1fr;
  width: 100%;
}

Then, when the section’s width is below 48rem, I altered the width of my two columns:
@container (max-width: 48rem) {
  .content {
    grid-template-columns: 85vw 85vw;
  }
}

Setting the width of each column to 85vw — a little under the full viewport width — makes some of the right-hand column’s content visible, which hints that there’s more to see and encourages someone to scroll across to look at it.
Some of the right-hand column’s content is visible. See this example in my lab.

The same principle works at a larger scale, too. Instead of making small adjustments, we can turn an entire section into a miniature magazine spread that scrolls like a story in print.
    Build scrollable mini-spreads
    When designing for a responsive environment, there’s no reason to lose the expressiveness of a magazine-inspired layout. Instead of flattening everything into one long column, sections can behave like self-contained mini magazine spreads.
Sections can behave like self-contained mini magazine spreads.

My final shape-outside example flowed text between two photomontages. Parts of those images escaped their containers, creating depth and a layout with a distinctly editorial feel. My content contained the two images and several paragraphs:
<div class="content">
  <img src="left.webp" alt="">
  <img src="right.webp" alt="">
  <p><!-- ... --></p>
  <p><!-- ... --></p>
  <p><!-- ... --></p>
</div>

Two images float either left or right, each with shape-outside applied so text flows between them:
.content img:nth-of-type(1) {
  float: left;
  width: 45%;
  shape-outside: url("left.webp");
}

.spread-wrap .content img:nth-of-type(2) {
  float: right;
  width: 35%;
  shape-outside: url("right.webp");
}

That behaves beautifully at large screen sizes, but on smaller ones it feels cramped. To preserve the design’s essence, I used a container query to transform its layout into something different altogether.
    First, I needed another parent element whose width would determine when the layout should change. So, I added a section outside so that I could reference its width and gave it a little padding and a border to help differentiate it from nearby content:
<section>
  <div class="content">
    <!-- ... -->
  </div>
</section>

section {
  border: 1px solid var(--border-stroke-color);
  box-sizing: border-box;
  container-type: inline-size;
  overflow-x: auto;
  padding: 1.5rem;
  width: 100%;
}

When the section’s width is below 48rem, I introduced a horizontal Flexbox layout:
@container (max-width: 48rem) {
  .content {
    align-items: center;
    display: flex;
    flex-wrap: nowrap;
    gap: 1.5rem;
    scroll-snap-type: x mandatory;
    -webkit-overflow-scrolling: touch;
  }
}

And because this layout depends on a container query, I used container query units (cqi) for the width of my flexible columns:
.content > * {
  flex: 0 0 85cqi;
  min-width: 85cqi;
  scroll-snap-align: start;
}

On small screens, the layout flows from image to paragraphs to image. See this example in my lab.

Now, on small screens, the layout flows from image to paragraphs to image, with each element snapping into place as someone swipes sideways. This approach rearranges elements and, in doing so, slows someone’s reading speed by making each swipe an intentional action.
    To prevent my images from distorting when flexed, I applied auto-height combined with object-fit:
.content img {
  display: block;
  flex-shrink: 0;
  float: none;
  height: auto;
  max-width: 100%;
  object-fit: contain;
}

Before calling on the Flexbox order property to place the second image at the end of my small-screen sequence:
.content img:nth-of-type(2) {
  order: 100;
}

Mini-spreads like this add movement and rhythm, but orientation offers another way to shift perspective without scrolling. A simple rotation can become a cue for an entirely new composition.
    Make orientation-responsive layouts
    When someone rotates their phone, that shift in orientation can become a cue for a new layout. Instead of stretching a single-column design wider, we can recompose it entirely, making a landscape orientation feel like a fresh new spread.
Turning a phone sideways is an opportunity to recompose a layout.

Turning a phone sideways is an opportunity to recompose a layout, not just reflow it. When Patty’s fans rotate their phones to landscape, I don’t want the same stacked layout to simply stretch wider. Instead, I want to use that additional width to provide a different experience. This could be as easy as adding extra columns to a composition in a media query that applies when the device’s orientation is detected as landscape:
@media (orientation: landscape) {
  .content {
    display: grid;
    grid-template-columns: 1fr 1fr;
  }
}

For the long-form content on Patty Meltt’s biography page, text flows around a polygon clip-path placed over a large faux background image. This image is inline, floated, and has its width set to 100%:
<div class="content">
  <img src="patty.webp" alt="">
  <!-- ... -->
</div>

.content > img {
  float: left;
  width: 100%;
  max-width: 100%;
}

Then, I added shape-outside using the polygon coordinates and a shape-margin:
.content > img {
  shape-outside: polygon(...);
  shape-margin: 1.5rem;
}

I only want the text to flow around the polygon, and for the image to appear in the background, when a device is held in landscape, so I wrapped that rule in a query which detects the screen orientation:
@media (orientation: landscape) {
  .content > img {
    float: left;
    width: 100%;
    max-width: 100%;
    shape-outside: polygon(...);
    shape-margin: 1.5rem;
  }
}

See this example in my lab.

Those properties won’t apply when the viewport is in portrait mode.
    Design stories that adapt, not layouts that collapse
    Small screens don’t make design more difficult; they make it more deliberate, requiring designers to consider how to preserve a design’s personality when space is limited.
    Phillip was right to ask how editorial-style design can work in a responsive environment. It does, but not by shrinking a print layout. It works when we think differently about how content flexes, shifts, and scrolls, and when a design responds not just to a device, but to how someone holds it.
The goal isn’t to mimic miniature magazines on mobile, but to capture the energy, rhythm, and sense of discovery that print does so well. Design is storytelling, and just because there’s less space to tell a story, that doesn’t mean it should make any less of an impact.
    Getting Creative With Small Screens originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. by: Roland Taylor
    Wed, 29 Oct 2025 10:29:16 GMT

    There is no shortage of to-do apps in the Linux ecosystem, but few are designed to keep you focused while you work. Koncentro takes a direct approach by bundling a versatile task list, a Pomodoro-style timer, and a configurable website blocker into one tidy solution.
    What is Koncentro exactly?
    Koncentro is a free, open-source productivity tool, inspired by the likes of Super Productivity and Chomper. The project is actively developed by Bishwa Saha (kun-codes), with source code, issue tracking, and discussions hosted on GitHub. Built with a sleek Qt 6 interface echoing Microsoft’s Fluent Design language, this app pairs modern aesthetics with solid functionality.
    The latest release, version 1.1.0, arrived earlier this month with new features and quality-of-life improvements, including sub-tasks and a system-tray option.
    That said, it's not without quirks, and first-time users may hit a few bumps along the way. However, once you get past the initial hurdles and multistep setup, it becomes a handy companion for getting things done while blocking out common distractions.
    In this review, we examine what sets Koncentro apart from the to-do crowd and help you determine whether it is the right fit for your workflow.
    Bringing Koncentro’s methods into focus
    It is rare to find an app that gives you everything you need in one go without becoming overstuffed or cumbersome to use. Koncentro strikes a solid balance, offering more than to-do apps that stop at lists and due dates without veering into overwhelm.
The Pomodoro timer in Koncentro during a focus period

It combines the Pomodoro technique with timeblocking, emphasizing an economical approach where time is the primary unit of work. As such, it caters to an audience that aims to structure the day rather than the week.
    In fact, there is no option to add tasks with specific dates — only times. This omission is not a limitation so much as a design choice. It fits the Pomodoro philosophy of tackling work in short, focused intervals, encouraging you to act now rather than plan for later. It makes Koncentro perfect for day-to-day activities, but you may need to find another solution if you're looking for long-term task tracking.

    Backing up this standard functionality is a snazzy website blocker to help you stave off distractions while you get down to work.
    The hands-on experience
As someone who relies on similar apps in my daily life, I found Koncentro quite pleasant to use. In this section, I'll focus on the overall experience of using the app from a fresh install onward.
Using Koncentro

📋While Koncentro features a distinct Pomodoro timer, I will not discuss this feature in depth in this section.

First run
    On the first run, Koncentro will guide you through setting up its website blocking feature; the app's core function outside simple task management. In order for this to work, the system must temporarily disconnect from the internet, since the app must set up a proxy to facilitate website blocking. All filtering happens locally; no browsing data is sent anywhere outside your machine. I'll explain how this works when we get to the website blocker in detail.
The first of two setup dialogs in Koncentro

🚧Note: The proxy Koncentro relies on runs on port 8080, so it may conflict with other services using this port. Be sure to check for any conflicts before running the setup.

The second setup dialog in Koncentro

Once you've managed to set it up (or managed to bypass this step), Koncentro will walk you through an introductory tutorial, showing how its primary features work. Once the tutorial is completed, you can rename or remove the default workspace and tasks.
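Regarding the port-conflict note above: a quick way to check whether anything is already listening on port 8080, assuming the ss utility from iproute2 is available:

# List TCP listeners and look for port 8080
ss -tlnp | grep ':8080' || echo "port 8080 looks free"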
🚧Be aware that there is a known bug on X11: the tutorial traps focus and may not let you exit until the app is restarted.

Straightforward task management
    Koncentro follows a rather uncomplicated approach to task management. There are no tags, no due dates, and no folders. Also, tasks cannot overlap, since the timer for one task is automatically stopped if you start another. Furthermore, while tasks can have sub-tasks, parent tasks cannot be started on their own.
Adding a task in Koncentro

This approach may not be for everyone, but since the app is focused on streamlined productivity, it makes sense to arrange things in this way: with strict rules around time management, you're unlikely to lose track of any given task.
    Tasks must be timeboxed upon creation, meaning you have to select a maximum time for each task to be accomplished within. This is set as the "estimated time" value. When you start the timer on any task, "elapsed time" is recorded and contrasted against the estimated time. This comes in pretty handy if you want to measure your performance against a benchmark or goal.
Editing the time for a task in Koncentro

Active and uncompleted tasks are grouped into "To Do Tasks", and finished tasks into "Completed Tasks", though this doesn't happen automatically. Since there are no folders or tags, task organization is accomplished by simply dragging tasks between these two sections.
    Workspaces: a subtle power tool
    One of the standout features of Koncentro is the way it uses workspaces to manage not just tasks, but overall settings. While this implementation is still clearly in its infancy, I see the potential for even more powerful functionality in the future.
Managing Workspaces in Koncentro

Currently, workspaces serve to group your tasks, each paired with an optional website blocker to keep your attention on the present goal.
📋In order to access workspaces, you must first stop any timers on your tasks and ensure that "Current Task:" says "None" in the bottom left of the window. If the workspace button is greyed out, clicking the stop button will fix this.

The website blocker in depth
    Perhaps the most distinguishing feature of Koncentro is its website blocker. It's not something you find in most to-do list apps for Linux, yet its simplicity and versatility make it a truly standout addition. Plus, the fact that each workspace can have its own block list makes Koncentro especially useful for scoping your focus periods and break times.
The website blocker in Koncentro

In terms of usage, it's mostly seamless once you've passed the initial setup process, which isn't too tedious but could certainly be made smoother. Koncentro doesn't block any sites by default, so you'll need to manually add any sites you'd like to block to each workspace.
    Note: Website blocking is only active when there is an active task. If all tasks are stopped, website blocking will not be activated.
Editing the blocklist in Koncentro

Koncentro relies on a Man-in-the-Middle (MITM) proxy called mitmproxy to power this feature. Don't let the name throw you off: mitmproxy is a trusted open-source Python tool commonly used for network testing, repurposed here to handle local HTTPS interception for blocking rules. It's only activated when you're performing a task, and it can be disabled altogether in Koncentro's settings.
The mitmproxy home page

Part of the setup process involves installing its certificates if you wish to use the website blocker. You'll need to do this both for your system and for Firefox (if Firefox is your browser), since Firefox does not use the system's certificate store.
    Example usage scenario
    Let's say, for instance, you want to block all social media while you're working. You'd just need to add these sites to your "At-work space" (or whatever you'd like to call it) and get down to business.
Website blocking with Koncentro is simple and straightforward

Even if a friend sends you a YouTube video, you won't be distracted by thumbnails because that URL would be locked out for that time period. Once that stretch of work ends, you could switch to your "taking a break" workspace, where social media is allowed, and (if you like) all work-related URLs are blocked.
    But does it really work?
That's the real question here: whether this is actually effective in practice. If you're highly distractible, it might be just the thing to help you keep on track. However, if you're already quite disciplined in your work, it might not be particularly meaningful. It really depends on how you work as an individual.
    That said, I can definitely see a benefit for power users who know how to leverage the site blocker to prevent notifications in popular chat apps, which must still communicate with a central server to notify you.
    Sure, you can use "Do not disturb" in desktop environments that support it, but this doesn't consistently disable sound or notifications (if the chat app in question uses non-native notifications, for instance).
    A focus on aesthetics - Why it feels nice to use
    The choice to use Microsoft's Fluent design language may seem strange to many Linux users, but in fairness, Koncentro is a cross-platform application, and Windows still maintains the dominant position in the market.
The Fluent Design language home page in Microsoft Edge, which also uses this design language for its UI.

That being said, in practical usage it's similar enough to the UI libraries and UX principles popular within the Linux ecosystem. It's close enough in functionality to apps built with Kirigami and Libadwaita that it doesn't seem too out of place among them.
    Customization
Koncentro features a limited set of customization options, following the "just enough" principle that seems to be the trend in modern design. It walks a delicate line between the user's freedom to customize and the developer's intentions for how the app should look and behave across platforms.
Koncentro using the "Light" theme

You get the standard light and dark modes, plus the option to follow your system's preference. Using it on the GNOME desktop, it picked up my dark mode preference out of the box.
    System Integration
Koncentro integrates well with the system tray, using a standard app indicator with a simple menu.
The Koncentro indicator menu in the GNOME desktop on Ubuntu with Dash-To-Panel enabled

However, while you get the option to choose a theme colour, there's no option to follow your system's accent colour, unlike most modern Linux/open-source applications. It also does not feature rounded corners, which some users may find disappointing.
Koncentro with a custom accent colour selected

The quirks that still hold it back
    As mentioned earlier, Koncentro has a number of quirks that detract from the overall experience, though most of these are limited to its first-time run.
    Mandatory website blocker setup
Perhaps the most unconventional choice is that there's no way to start using Koncentro until its website blocker is set up. It will not allow you to use the app in any way (even to disable the website blocker) without first completing this step.
    While you can "fake it" by clicking "setup completed" in the second pop-up dialog, it creates a false sense of urgency, which could be especially confusing for less experienced users. This is perhaps where Koncentro would be better served by offering a smoother initial setup experience.
    No way to copy workspaces/settings
    While you can have multiple workspaces with their own settings, you can't duplicate workspaces or even copy your blocklists between them.
    This isn't a big deal if you're just using a couple of workspaces with simple block/allow lists, but if you're someone who wants to have a complex setup with shared lists on multiple workspaces, you'll need to add them to each workspace manually.
    No penalty for time overruns
    At this time, nothing happens when you go over time — no warnings, no sounds, no notifications. If you're trying to stay on task and run overtime, it would help to have some kind of "intervention" or warning.
No warning for a time overrun

I've gone ahead and made feature requests for possible solutions to these UX issues: export/import for lists, warnings or notifications for overruns, and copying workspace settings. These are all just small limitations in what is otherwise a remarkably cohesive early-stage project.
    Installing Koncentro on Linux
Since it's available on Flathub, Koncentro can be installed on any Linux distribution that supports Flatpaks. You can grab it through your preferred software manager, or run this command in the terminal:
flatpak install flathub com.bishwasaha.Koncentro

Alternatively, you can get official .deb or .rpm packages for your distro of choice (or the source code for compiling it yourself) from the project's releases page.
    Conclusion
    All told, Koncentro is a promising productivity tool that offers a blend of simplicity, aesthetic appeal, and smooth functionality. It's a great tool for anyone who likes to blend time management with structure. For Linux users who value open-source productivity tools that respect privacy and focus, it’s a refreshing middle ground between the more minimal to-do lists and full-blown productivity suites. It’s still young, but it already shows how open-source can combine focus and flexibility without unnecessary noise.
  13. 415: Babel Choices

    by: Chris Coyier
    Tue, 28 Oct 2025 18:07:00 +0000

    Robert and Chris hop on the show to talk about choices we’ve had to make around Babel.
    Probably the best way to use Babel is to just use the @babel/preset-env plugin so you get modern JavaScript features processed down to a level of browser support you find comfortable. But Babel supports all sorts of plugins, and in our Classic Editor, all you do is select “Babel” from a dropdown menu and that’s it. You don’t see the config nor can you change it, and that config we use does not use preset env.
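For readers who haven't used it outside CodePen, a minimal sketch of that kind of preset-env setup might look like this (the "defaults" browserslist target is just an example, not CodePen's actual config):

cat > .babelrc << 'EOF'
{
  "presets": [
    ["@babel/preset-env", { "targets": "defaults" }]
  ]
}
EOF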
    So we’re in an interesting position with the 2.0 editor. We want to give new Pens, which do support editable configs, a good modern config, and we want all converted Classic Pens a config that doesn’t break anything. There is some ultra-old cruft in that old config, and supporting all of it felt kinda silly. We could support a “legacy” Babel block that does support all of it, but so far, we’ve decided to just provide a config that handles the vast majority of old stuff, while using the same Babel block that everyone will get on day one.
We’re still in the midst of working on our conversion code and verifying the output of loads of Classic Pens, so we’ll see how it goes!
    Time Jumps
00:15 New editor and blocks at CodePen
04:10 Dealing with versioning in blocks
14:44 What the ‘tweener plugin does
19:31 What we did with Sass
22:10 Trying to understand the TC39 process
27:41 JavaScript and APIs
  14. by: Hangga Aji Sayekti
    Tue, 28 Oct 2025 18:46:16 +0530

    Want a fast XSS check? Dalfox does the heavy lifting. It auto-injects, verifies (headless/DOM checks included), and spits out machine-friendly results you can act on. Below: installing on Kali, core commands, handy switches, and a demo scan against a safe target. Copy, paste, profit. (lab-only.)
    Behind the Scenes: How Dalfox Works
    Dalfox is more than a simple payload injector. Its efficiency comes from a smart engine that:
Performs Parameter Analysis: Identifies all parameters and checks if input is reflected in the response
Uses a DOM Parser: Analyzes the Document Object Model to verify if a payload would truly execute in the browser
Applies Optimization: Eliminates unnecessary payloads based on context and uses abstraction to generate specific payloads
Leverages Parallel Processing: Sends requests concurrently, making the scanning process exceptionally fast

🚧testphp.vulnweb.com is a purposely vulnerable playground — safe to practice on. Always obtain explicit permission before scanning other domains.

1. Install dependencies
    Update packages and make sure Go (Golang) is installed:
sudo apt update && sudo apt upgrade -y
go version || sudo apt install golang-go -y

If go version prints a Go runtime version, you’re good.
    2. Install Dalfox
    Install the latest Dalfox binary using Go:
go install github.com/hahwul/dalfox/v2@latest
export PATH=$PATH:$(go env GOPATH)/bin # add GOPATH/bin to PATH if needed
dalfox version

That installs Dalfox into your Go bin folder so you can run dalfox directly.
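Before the full walkthrough below, here's the general shape of a basic scan using Dalfox's url mode; the target is the intentionally vulnerable demo site mentioned above (lab use only):

# Scan a single URL, probing its parameters for reflected XSS
dalfox url "http://testphp.vulnweb.com/listproducts.php?cat=1"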
     
     
      This post is for subscribers only
  15. by: Silvestar Bistrović
    Mon, 27 Oct 2025 14:33:17 +0000

    Making a tab interface with CSS is a never-ending topic in the world of modern web development. Are they possible? If yes, could they be accessible? I wrote how to build them the first time nine long years ago, and how to integrate accessible practices into them.
    Although my solution then could possibly still be applied today, I’ve landed on a more modern approach to CSS tabs using the <details> element in combination with CSS Grid and Subgrid.
    First, the HTML
    Let’s start by setting up the HTML structure. We will need a set of <details> elements inside a parent wrapper that we’ll call .grid. Each <details> will be an .item as you might imagine each one being a tab in the interface.
<div class="grid">
  <!-- First tab: set to open -->
  <details class="item" name="alpha" open>
    <summary class="subitem">First item</summary>
    <div><!-- etc. --></div>
  </details>
  <details class="item" name="alpha">
    <summary class="subitem">Second item</summary>
    <div><!-- etc. --></div>
  </details>
  <details class="item" name="alpha">
    <summary class="subitem">Third item</summary>
    <div><!-- etc. --></div>
  </details>
</div>

These don’t look like true tabs yet! But it’s the right structure we want before we get into CSS, where we’ll put CSS Grid and Subgrid to work.
    Next, the CSS
    Let’s set up the grid for our wrapper element using — you guessed it — CSS Grid. Basically what we’re making is a three-column grid, one column for each tab (or .item), with a bit of spacing between them.
We’ll also set up two rows in the .grid, one that’s sized to the content and one that maintains its proportion with the available space. The first row will hold our tabs, and the second row is reserved for displaying the active tab panel.
.grid {
  display: grid;
  grid-template-columns: repeat(3, minmax(200px, 1fr));
  grid-template-rows: auto 1fr;
  column-gap: 1rem;
}
Now we’re looking a little more tab-like:
    Next, we need to set up the subgrid for our tab elements. We want subgrid because it allows us to use the existing .grid lines without nesting an entirely new grid with new lines. Everything aligns nicely this way.
So, we’ll set each tab — the <details> elements — up as a grid and set their columns and rows to inherit the main .grid’s lines with subgrid.
details {
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
}
Additionally, we want each tab element to fill the entire .grid, so we set it up so that the <details> element takes up the entire available space horizontally and vertically using the grid-column and grid-row properties:
details {
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
  grid-column: 1 / -1;
  grid-row: 1 / span 3;
}
It looks a little wonky at first because the three tabs are stacked right on top of each other, but they cover the entire .grid which is exactly what we want.
    Next, we will place the tab panel content in the second row of the subgrid and stretch it across all three columns. We’re using ::details-content (good support, but not yet in WebKit at the time of writing) to target the panel content, which is nice because that means we don’t need to set up another wrapper in the markup simply for that purpose.
details::details-content {
  grid-row: 2; /* position in the second row */
  grid-column: 1 / -1; /* cover all three columns */
  padding: 1rem;
  border-bottom: 2px solid dodgerblue;
}
The thing about a tabbed interface is that we only want to show one open tab panel at a time. Thankfully, we can select the [open] state of the <details> elements and hide the ::details-content of any tab that is :not([open]) by using enabling selectors:
details:not([open])::details-content {
  display: none;
}
We still have overlapping tabs, but the only tab panel we’re displaying is the one that’s currently open, which cleans things up quite a bit:
    Turning <details> into tabs
Now on to the fun stuff! Right now, all of our tabs are visually stacked. We want to spread those out and distribute them evenly along the .grid’s top row. Each <details> element contains a <summary> providing both the tab label and the button that toggles each one open and closed.
Let’s place the <summary> element in the first subgrid row and apply light styling when a <details> tab is in an [open] state:
summary {
  grid-row: 1; /* First subgrid row */
  display: grid;
  padding: 1rem; /* Some breathing room */
  border-bottom: 2px solid dodgerblue;
  cursor: pointer; /* Update the cursor when hovered */
}

/* Style the <summary> element when <details> is [open] */
details[open] summary {
  font-weight: bold;
}
Our tabs are still stacked, but now we have some light styles applied when a tab is open:
We’re almost there! The last thing is to position the <summary> elements in the subgrid’s columns so they are no longer blocking each other. We’ll use the :nth-of-type() pseudo-class to select each one individually by its order in the HTML:
/* First item in first column */
details:nth-of-type(1) summary {
  grid-column: 1 / span 1;
}

/* Second item in second column */
details:nth-of-type(2) summary {
  grid-column: 2 / span 1;
}

/* Third item in third column */
details:nth-of-type(3) summary {
  grid-column: 3 / span 1;
}
Check that out! The tabs are evenly distributed along the subgrid’s top row:
    Unfortunately, we can’t use loops in CSS (yet!), but we can use variables to keep our styles DRY:
summary {
  grid-column: var(--n) / span 1;
}
Now we need to set the --n variable for each <details> element. I like to inline the variables directly in HTML and use them as hooks for styling:
<div class="grid">
  <details class="item" name="alpha" open style="--n: 1">
    <summary class="subitem">First item</summary>
    <div><!-- etc. --></div>
  </details>
  <details class="item" name="alpha" style="--n: 2">
    <summary class="subitem">Second item</summary>
    <div><!-- etc. --></div>
  </details>
  <details class="item" name="alpha" style="--n: 3">
    <summary class="subitem">Third item</summary>
    <div><!-- etc. --></div>
  </details>
</div>
Again, because loops aren’t a thing in CSS at the moment, I tend to reach for a templating language, specifically Liquid, to get some looping action. This way, there’s no need to explicitly write the HTML for each tab:
<div class="grid">
  {% for item in itemList %}
  <details class="item" name="alpha" style="--n: {{ forloop.index }}" {% if forloop.first %}open{% endif %}>
    <!-- etc. -->
  </details>
  {% endfor %}
</div>
You can roll with a different templating language, of course. There are plenty out there if you like keeping things concise!
    Final touches
    OK, I lied. There’s one more thing we ought to do. Right now, you can click only on the last <summary> element because all of the <details> pieces are stacked on top of each other in a way where the last one is on top of the stack.
    You might have already guessed it: we need to put our <summary> elements on top by setting z-index.
summary {
  z-index: 1;
}
Here’s the full working demo:
CodePen Embed Fallback
Accessibility
    The <details> element includes built-in accessibility features, such as keyboard navigation and screen reader support, for both expanded and collapsed states. I’m sure we could make it even better, but it might be a topic for another article. I’d love some feedback in the comments to help cover as many bases as possible.
    It’s 2025, and we can create tabs with HTML and CSS only without any hacks. I don’t know about you, but this developer is happy today, even if we still need a little patience for browsers to fully support these features.

    Pure CSS Tabs With Details, Grid, and Subgrid originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  16. by: Pulkit Chandak
    Mon, 27 Oct 2025 07:34:29 GMT

When open source is spoken about, it is usually treated as just a licensing model for software. But when you think about it, it often runs deeper than that. With the open source philosophy, developers make good software exist just for the sake of its existence. Sometimes this software is so good that it disrupts the established players in its area, tipping the balance entirely. In this article, we'll look at the most significant cases of that happening. So sit back and enjoy this casual read.
    1. Git decimates BitKeeper
    Imagine being the creator of Linux and yet people know you more for creating Git. That's the story of Linus Torvalds.
    Before Git, BitKeeper was the primary software used for distributed revision control of Linux kernel source code. And it was revolutionary because before that, according to Torvalds, the only good option was to manually check the patches and put them in.
While Stallman and some others criticized the use of a proprietary tool for the development of the open source Linux kernel project, BitKeeper remained the VCS tool of choice.
It was in 2005 that BitKeeper revoked the free license for the Linux kernel project. They blamed Andrew Tridgell, who had tried creating an open source version of BitKeeper by reverse engineering the existing product, the same way he had created Samba.
This move violated BitKeeper's terms, as Tridgell was employed by OSDL, the predecessor of the Linux Foundation, the non-profit organization pushing Linux kernel development.
After a public feud with Tridgell, Torvalds started working on his own source control software and released the first version before the month ended. And that's how Git was born, out of necessity, just like the Linux project itself.
Fun fact: this incident also led to the birth of Mercurial, another open source VCS, though Git's popularity eventually overshadowed it.
BitKeeper later went open source before eventually being discontinued. Git, however, remains the most popular version control tool, powering massive code hosting platforms like GitHub and GitLab that are used by everyone.
    2. X.Org takes on XFree86's advertising clause
X Window System, aka X11, is one of the graphical windowing systems still used in many Linux distributions, and it was used almost exclusively by all major distributions before Wayland came along.
The most popular implementation of X11 used to be XFree86. Things began to go sour when development started to stagnate, as the core team began to resist progress. Matters came to a head in 2004 when XFree86 included an advertising clause in its license, making it incompatible with the GPL. This caused tension within the community, with the developers of major distributions threatening to pull out.
As a response, the X.Org Foundation created the X.Org Server based on the last version of XFree86 under the old, GPL-compatible license. It became really popular really fast, replacing XFree86 in most major distributions within months. With a modular structure and transparent development, X.Org became integral to graphical Linux operating systems, only now starting to be slowly replaced by a different windowing system entirely: Wayland.
    3. Icinga takes on Nagios
    In an IT workplace, all the technological elements of the system need to be monitored well. This is what is done by a network and infrastructure monitoring system, like Nagios. It is an application that can watch over servers, devices, applications, computers, etc. over a network, and report errors, if there are any.
    Nagios dominated this area, being open-source and extensible. This modularity, however, became the reason for its downfall as the developers made the decision to move certain plugins and features behind paid tiers. Due to this increased commercialization and closed development, they started losing their users.
As a response, Icinga was forked from Nagios in 2009. It kept backward compatibility to keep systems from breaking, but took a step towards the future. Icinga offered a new web interface, a new configuration format, and improved scalability, essentially replacing Nagios as the preferred platform.
    4. Illumos carries the legacy of OpenSolaris
Sun Microsystems had been a major player in the tech world, both hardware- and software-wise, during the dot-com boom. Solaris was a proprietary, UNIX-based operating system they designed that became really important in the industry. They then released OpenSolaris, their daring attempt at open sourcing their powerful OS. Eventually, however, Oracle acquired Sun in 2010 and abruptly abandoned the OpenSolaris project, leaving a lot of people hanging in the process.
    The solution? Some of the former Sun engineers and the open-source community came together to build Illumos from the last open-source version of OpenSolaris. It aimed to carry forward the userbase and legacy of OpenSolaris, and to continually develop new features, keeping the OS relevant. It has retained the excellent and distinguishing features of OpenSolaris such as the ZFS filesystem and DTrace. It has since then been the basis for other operating systems as well, like OpenIndiana, OmniOS and SmartOS.
    5. OpenSearch when ElasticSearch went SSPL
ElasticSearch, soon after its release, became the preferred search engine of enterprises all across the world. Providing rich analytics and usage statistics, it seemed to fulfill all the needs. Initially open source under Apache 2.0, ElasticSearch was later moved to the SSPL (Server Side Public License), which is not a license recognized by the OSI. Amazon saw the opportunity and picked up the slack by forking the last open source release of ElasticSearch and adding its own spin to it, bringing about OpenSearch, which is open source.
    OpenSearch retains most of the important features ElasticSearch had along with the look and feel, and adds more on top such as easy AWS integration and cloud alignment, which proves to be a great advantage for most web service purposes.
ElasticSearch came back as an open source project again in 2024. But the damage was done, as big players like Amazon had already put OpenSearch at the forefront of their cloud offerings.
    6. VeraCrypt continues TrueCrypt
Disk encryption is one of the most, if not the most, important security features of an operating system. For a very long time, this job was reliably done by TrueCrypt, with automatic and on-the-fly encryption. Then, suddenly, in 2014, TrueCrypt announced that it would not be developed any further and that the program was "not secure". It is unclear what the real reasoning was (as no flaws that major were found), but in their message, the developers asked users to switch to Microsoft's BitLocker.
That didn't sit well with the open source community, which then proceeded to build VeraCrypt, forked from the last version of TrueCrypt. VeraCrypt carried on the existing features well while also improving various aspects, including stronger encryption algorithms, better key derivation functions, and rapid bug fixing. It is known for being transparent and community-driven, and hence very trusted.
    7. Rocky Linux born in the aftermath of CentOS fiasco
CentOS was an operating system from Red Hat based on the RHEL (Red Hat Enterprise Linux) source code, getting all of its features a few months after RHEL itself, only free of cost. CentOS was eventually transitioned into CentOS Stream, a rolling release. Features now arrived faster, but stability was significantly hindered. This made it unsuitable for development environments, commercial uses, or even personal usage.
    To resolve the situation, one of the original creators of CentOS created Rocky Linux in 2021, filling in the gap that CentOS left behind. It was, and ever since has been enterprise-ready and rock-solid stable. Being based on RHEL, it can be used in high-performance computing, cloud and hyperscale computing, as well as for smaller commercial systems.
8. OpenELA tackles RHEL's partially closed-source moves
Following up on the previous point, this one carries it further. Red Hat announced that the only publicly available source code related to RHEL would be CentOS Stream; for Red Hat customers, the rest would be available through the customer portal. Understandably, the developers of distributions based on RHEL were not pleased with the decision.
CIQ (the company backing Rocky Linux), SUSE, and Oracle responded by forming OpenELA (Open Enterprise Linux Association), with the goal of creating distributions compatible with RHEL while keeping the source code open to all. It was meant to be an answer to the hole left behind by enterprise operating systems' dependency on CentOS.
The group has automated the process of obtaining the source code through paid access and publishing it in a public repository, out for everyone to access and build an operating system from. Several build systems, like Rocky Linux's Peridot, the SUSE Open Build Service, Fedora's Koji, and the AlmaLinux Build System, serve this effort.
    9. OpenTofu fills the void after Terraform opted for Business Source License
The story starts with Terraform, a terrific open source tool for IaC (infrastructure-as-code) purposes. The idea is that it lets you visualize, manage, and plan your computing infrastructure (such as VMs, databases, VPNs, etc.) not manually, but as code, which then automatically executes the needed actions.
Terraform started as open source, was cross-cloud, and was very extensible, which made it the go-to choice for everyone, to the point where other services were being built around it. In 2023, however, the company decided to move from the open source MPL license to a BSL (Business Source License), which introduced several restrictions that put certain users at risk.
    Concerned about the problems that might occur in the future, open source developers forked the last open source version of Terraform and released OpenTofu, which then was backed by the Linux Foundation itself. Now after some time has passed, OpenTofu has not only proven successful in its mission, but has features that Terraform lacks. Listening to the community and its needs, OpenTofu has found great success.
    10. Valkey forked from Redis as it changed license
Redis (REmote DIctionary Server) was built to be an in-memory data store with blazing speed and utility. This means it can store and retrieve data from RAM (with optional persistence to disk) with microsecond latency. This has several essential uses, such as caching, session storage (like shopping carts), real-time analytics (likes, share counts, etc.), and so on. Initially open source under the BSD license, it became wildly popular and an integral part of the internet's infrastructure.
    In 2024, however, Redis announced a change in license which would restrict its use in commercial clouds, heavily affecting the users. In response, Valkey was created, which was born out of the last open source version of Redis. 100% Redis compatible and not governed by a single company, Valkey thrived as a drop-in replacement for Redis.
    11. LineageOS carries on after CyanogenMod's demise
    For a very long time, CyanogenMod had been the go-to option for Android users to install an alternative open-source operating system which could give them more control, customization and most importantly, freedom from any of the manufacturer's proprietary trackers, etc. Eventually, Cyanogen Inc. shifted its focus to more proprietary projects and discontinued the project.
The developers' response was to fork the last known version into LineageOS, successfully taking the place of CyanogenMod. It is still going strong as the best open source option for Android, with different ROMs for different devices, enhanced security, and customization. Not only that, but it offers extended software support to older devices that are no longer supported by their manufacturers.
    12. MariaDB, the OG Business Source Licensee
MySQL is an open source database management system that has been the biggest program of its kind, and for good reason. It has had amazing support and documentation, can be used for extremely large, read-heavy databases, and is very simple to use (so easy that it is taught to schoolchildren). It was acquired by Oracle (yet again), and the open source community feared that development might slow down, features might become proprietary, and it might lose its openness.
In response, the original creator of MySQL, Michael "Monty" Widenius, created MariaDB, keeping it under the GPL license. It acted as a drop-in alternative to MySQL while also introducing new and exciting features that set it apart. It has since become the preferred database in open source projects.
📋It is kind of ironic to include MariaDB in this list. While it was created as a modern continuation of MySQL, MariaDB was the one that introduced the Business Source License. This was done because cloud vendors like AWS and Azure were reaping the benefits of open source projects by offering their own hosted versions. This hurt those open source projects, as they were not getting enough enterprise customers to sustain development. And as you can see, whenever an open source project opted for the BSL, big players like AWS and Azure would just fork it and create an open source project they themselves govern. Decide for yourself who is the hero and who is the villain in this story.
Conclusion
Time and time again, the open source philosophy trumps rash business decisions, favoring openness and innovation. The twists and turns of these changes come from all sorts of directions, but more often than not, good open source software has existed and thrived solely because people wanted it to. Let us know if you enjoyed this article in the comments. Cheers!
  17. by: Abhishek Prakash
    Sun, 26 Oct 2025 14:37:47 GMT

When I first started using Linux, I did not care much about terminal applications. Not in the sense that I was not using the terminal, but more like I never cared about trying other terminal applications (or terminal emulators, if you want to use the correct technical term).
    I mean, why would I? The magic is in the commands you run, after all. How does it matter if it's the default terminal that comes with the system or something else?
Most terminals are pretty much the same, or so it feels. But still, there are numerous terminal emulators available for Linux. Perhaps there are more of them than Arch-based distros.
    Last year, HashiCorp founder Mitchell Hashimoto developed another new terminal called Ghostty. And it took the developer world by storm. It seemed like everyone was talking about it.
But that didn't bother me much. I attributed all the buzz around Ghostty to Hashimoto's stature and never cared about trying it until last month.
And when I tried it, I discovered a few features that I think make it a favorite for pro terminal dwellers. If videos are your thing, this video shows Ghostty features in action.
Subscribe to It's FOSS YouTube Channel
What makes Ghostty special?
Ghostty is a relatively new terminal emulator for Linux and macOS that provides a platform-native UI and GPU acceleration.
    Easy to use configuration
Ghostty does not require a configuration file to work, which is a cool thing for a terminal emulator that has no GUI-based settings manager.
    It's not that you cannot edit the config file. It's just that the defaults are so good, you can just get on with your commands.
For example, Ghostty supports nerd fonts by default. So, your glyph characters and fancy CLI tools like the Starship prompt will just work out of the box in Ghostty.
    Editing the configuration file of Ghostty is very simple; even for less tech-savvy people. The configuration file, usually stored at ~/.config/ghostty/config, is just a plain text file with a bunch of key-value pairs.
    Let's say you want to hide the mouse while typing. You just add this line to the config file:
mouse-hide-while-typing = true
Then reload the config with Ctrl+Shift+, or by choosing the option from the hamburger menu.
How will you know which key-value pairs you can use in Ghostty? Well, Ghostty maintains fantastic, easy-to-understand documentation.
    You can start reading this doc, understand what a key is all about, and then add it to the config. It's that simple!
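To get a feel for the format before diving into the docs, here is a small sample config. mouse-hide-while-typing is the key we just used, while font-family, font-size, and theme are standard keys from the Ghostty documentation; the font name is just an example, so use any font installed on your system:
# ~/.config/ghostty/config: plain key = value lines
font-family = JetBrainsMono Nerd Font
font-size = 12
theme = Adventure
mouse-hide-while-typing = true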
💡The documentation is also available locally on your system. Use the command: ghostty +show-config --default --docs | less
Windows, tabs, splits and overview
    If you have used Kitty, you probably are aware of the various windows and split options. Ghostty provides a very similar experience. I won't deny, Ghostty borrows a lot of features from Kitty.
    So, here, you have one main window, and can have multiple tabs. Almost every terminal has multiple tab options these days. But Ghostty also allows you to have multiple window splits.
Window splits in Ghostty
It's not as effective as using tmux or the screen command, but it is good if you want multiple terminals on the same screen. A feature that made Terminator a popular choice a decade ago.
Window splits are mostly aimed at power users who want to control multiple things at the same time. You can use keyboard shortcuts or the menu.
    Another interesting feature in this section is the tab overview. You can click on the overview button on the top bar.
Click on the overview button
This is convenient, as this intuitive look introduces some kind of organization to your terminal usage. Somewhat like the GNOME overview.
Tabs in Ghostty (Click to enlarge the image)
More importantly, you can search tabs as well! As you can see in the above screenshot, each tab gets a proper name that is automatically assigned based on the last command you ran. So, if you ever reach a point where, like browser tabs, you have numerous terminal tabs open, you can find the right one relatively easily ;)
    This overview feature is also available through keyboard shortcuts and that is my next favorite Ghostty feature in this list.
    Trigger Sequence Shortcuts
There are a whole lot of actions properly documented in the Ghostty documentation. These can be assigned to keybindings of your preference.
Ghostty keybindings allow you to assign trigger sequences, which Vim users will be familiar with. That is, you can press a trigger shortcut and then another key to complete the action. For example, in my Ghostty config, I have set:
keybind = ctrl+a>o=toggle_tab_overview
What this does is let me press Ctrl+A and then O to open the tab overview! How cool is that, having a familiar workflow everywhere!
Custom keybindings also go in the Ghostty config file.
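If you want to take this further, here is a sketch of a tmux-style prefix setup. It assumes the new_tab and new_split actions from the keybinding action reference linked below, so double-check the exact action names for your Ghostty version:
# More trigger sequences in ~/.config/ghostty/config (action names per the reference below)
keybind = ctrl+a>c=new_tab # Ctrl+A, then C: open a new tab
keybind = ctrl+a>v=new_split:right # Ctrl+A, then V: split to the right
keybind = ctrl+a>h=new_split:down # Ctrl+A, then H: split downward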
Action Reference - Keybindings: Reference of all Ghostty keybinding actions. (Ghostty docs)
Performable Keybindings
This is a new feature introduced in version 1.2.0. With performable keybindings, you can assign a keyboard shortcut to an action, but the keybinding is activated only if the action can actually be performed.
    The Ghostty team itself provides a convenient example of how this works:
keybind = performable:ctrl+c=copy_to_clipboard
What this does is make Ctrl+C copy text only when there is something selected and available to copy. Otherwise, it works as the interrupt signal! No more accidental interrupts when you try to copy something.
This is kind of difficult to show in a screenshot, so I'll skip adding an image for this section.
    Image support
Not all terminals come with image protocol support. Only a few do. One of them is Kitty, which developed its own image rendering protocol, the Kitty Image Protocol. Ghostty implements the same Kitty Image Protocol, so you can view images right in the terminal.
Now, a casual user may not find much use for image support in the terminal, but there are a few use cases. Simply speaking, image rendering lets Ghostty display images in fun tools like Fastfetch, or even lets you read manga right within the terminal.
    Watch our video on fun stuff you can do in Linux terminal.
Subscribe to It's FOSS YouTube Channel
Ligature and fancy fonts
Ghostty also has ligature support. Now, what is the purpose of ligatures, and what is their use within the terminal?
If you are into coding, there are symbols that are a combination of two characters. Take "not equal to", usually typed as != but mathematically displayed as ≠. With a ligature-supporting terminal, you get the proper symbol for this operation. See the difference for yourself.
    Terminals with NO ligature support and WITH ligature support. (Click to enlarge the image)
    This makes code more human readable and understandable.
Built-in themes with light and dark variants
With Ghostty, you have no reason to search the web for color schemes. There is a huge list of color schemes baked right into the application. All you have to do is note a theme's name and use it in the config.
    To list all the available color schemes/themes, use the command:
ghostty +list-themes
This new interface lists every theme available, along with a live preview. Note the name of a theme from the left sidebar. Use q to exit the preview.
    Let's say I want to use the Adventure dark theme. All I have to do is to add a line in the config:
theme = Adventure
There are light and dark variants of themes available to choose from. You can define themes for both light and dark mode, so if your system uses dark mode, the terminal theme will be the one you chose for dark mode, and vice versa.
theme = dark:Monokai Pro Machine,light:Catppuccin Latte
How does it matter? Well, operating systems these days come with a feature that automatically switches between dark and light modes based on the time of the day. If you opt for that feature, you'll have a better dark/light experience with Ghostty.
    Native UI
Many apps use the same framework on all operating systems, and that might not blend well. This is especially true for applications built on top of the Electron framework, which often look out of place on Linux.
Ghostty for Linux is developed using the GTK4 toolkit, which makes it look native in various Linux distributions. Popular distributions like Ubuntu and Fedora use GNOME as their default desktop, so you get a familiar look and feel for the window, along with overall system consistency.
On macOS, the Ghostty app is built using Swift, AppKit, and SwiftUI, with real native macOS components like native tabs, native splits, native windows, menu bars, and a proper settings GUI.
    Installing Ghostty on Linux
    If you are an Arch Linux user, Ghostty is available in the official repository. You can install it using the pacman command:
sudo pacman -Syu ghostty
For Ubuntu users, there is an unofficial, user-maintained repository offering deb files. You can download them from the releases page.
    You can check other official installation methods in the installation manual.
Ghostty
Wrapping Up
    If you are new to Ghostty and want to get an overview of the config file format, you can refer to our sample Ghostty configuration. Don't forget to read the README!
Get custom Ghostty config
Ghostty is indeed a worthy choice if you are looking for an all-rounder terminal emulator. But only if you are looking for one, because most of the time, the default terminal works just fine. With a little configuration tweaking, you could get many of the discussed Ghostty features elsewhere, too. Take KDE's Konsole terminal customization as an example.
    What's your take on Ghostty? Is it worth a try or would you rather stick with your current terminal choice? Share your views in the comments please.
  18. by: Preethi
    Fri, 24 Oct 2025 14:18:03 +0000

    Modern CSS has great ways to position and move a group of elements relative to each other, such as anchor positioning. That said, there are instances where it may be better to take up the old ways for a little animation, saving time and effort.
    We’ve always been able to affect an element’s structure, like resizing and rotating it. And when we change an element’s intrinsic sizing, its children are affected, too. This is something we can use to our advantage.
    Let’s say a few circles need to move towards and across one another. Something like this:
    Our markup might be as simple as a <main> element that contains four child .circle elements:
<main>
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
</main>
As far as rotating things, there are two options. We can (1) animate the <main> parent container, or (2) animate each .circle individually.
    Tackling that first option is probably best because animating each .circle requires defining and setting several animations rather than a single animation. Before we do that, we ought to make sure that each .circle is contained in the <main> element and then absolutely position each one inside of it:
main {
  contain: layout;
}

.circle {
  position: absolute;

  &:nth-of-type(1) {
    background-color: rgb(0, 76, 255);
  }
  &:nth-of-type(2) {
    background-color: rgb(255, 60, 0);
    right: 0;
  }
  &:nth-of-type(3) {
    background-color: rgb(0, 128, 111);
    bottom: 0;
  }
  &:nth-of-type(4) {
    background-color: rgb(255, 238, 0);
    right: 0;
    bottom: 0;
  }
}
If we rotate the <main> element that contains the circles, then we might create a specific .animate class just for the rotation:
/* Applied on <main> (the parent element) */
.animate {
  width: 0;
  transform: rotate(90deg);
  transition: width 1s, transform 1.3s;
}
…and then set it on the <main> element with JavaScript when the button is clicked:
const MAIN = document.querySelector("main");

function play() {
  MAIN.className = "";
  MAIN.offsetWidth; // reading offsetWidth forces a reflow so the animation can restart
  MAIN.className = "animate";
}
It looks like we’re animating four circles, but what we’re really doing is rotating the parent container and changing its width, which rotates and squishes all the circles in it as well:
    CodePen Embed Fallback Each .circle is fixed to a respective corner of the <main> parent with absolute positioning. When the animation is triggered in the parent element — i.e. <main> gets the .animate class when the button is clicked — the <main> width shrinks and it rotates 90deg. That shrinking pulls each .circle closer to the <main> element’s center, and the rotation causes the circles to switch places while passing through one another.
    This approach makes for an easier animation to craft and manage for simple effects. You can even layer on the animations for each individual element for more variations, such as two squares that cross each other during the animation.
/* Applied on <main> (the parent element) */
.animate {
  transform: skewY(30deg) rotateY(180deg);
  transition: 1s transform .2s;

  .square {
    transform: skewY(30deg);
    transition: inherit;
  }
}
CodePen Embed Fallback
See that? The parent <main> element makes a 30deg skew and flip along the Y-axis, while the two child .square elements counter that distortion with the same skew. The result is that you see the child squares flip positions while moving away from each other.
    If we want the squares to form a separation without the flip, here’s a way to do that:
/* Applied on <main> (the parent element) */
.animate {
  transform: skewY(30deg);
  transition: 1s transform .2s;

  .square {
    transform: skewY(-30deg);
    transition: inherit;
  }
}
CodePen Embed Fallback
This time, the <main> element is skewed 30deg, while the .square children cancel that with a -30deg skew.
    Setting skew() on a parent element helps rearrange the children beyond what typical rectangular geometry allows. Any change in the parent can be complemented, countered, or cancelled by the children depending on what effect you’re looking for.
Here’s an example where scaling is involved. Notice how the <main> element’s skewY() is negated by its children, which also scale() at a different value to offset the distortion a bit.
/* Applied on <main> (the parent element) */
.animate {
  transform: rotate(-180deg) scale(.5) skewY(45deg);
  transition: .6s .2s;
  transition-property: transform, border-radius;

  .squares {
    transform: skewY(-45deg) scaleX(1.5);
    border-radius: 10px;
    transition: inherit;
  }
}
CodePen Embed Fallback
The parent element (<main>) rotates counter-clockwise (rotate(-180deg)), scales down (scale(.5)), and skews vertically (skewY(45deg)). The two children (.square) cancel the parent’s distortion by using the negative value of the parent’s skew angle (skewY(-45deg)), and scale up horizontally (scaleX(1.5)) to change from a square to a horizontal bar shape.
    There are a lot of these combinations you can come up with. I’ve made a few more below where, instead of triggering the animation with a JavaScript interaction, I’ve used a <details> element that triggers the animation when it is in an [open] state once the <summary> element is clicked. And each <summary> contains an .icon child demonstrating a different animation when the <details> toggles between open and closed.
    Click on a <details> to toggle it open and closed to see the animations in action.
    CodePen Embed Fallback That’s all I wanted to share — it’s easy to forget that we get some affordances for writing efficient animations if we consider how transforming a parent element intrinsically affects the size, position, and orientation. That way, for example, there’s no need to write complex animations for each individual child element, but rather leverage what the parent can do, then adjust the behavior at the child level, as needed.
    CSS Animations That Leverage the Parent-Child Relationship originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Abhishek Prakash
    Fri, 24 Oct 2025 18:11:02 +0530

    I am happy to announce the release of our 14th course, Linux Networking at Scale. Okay, this is still a work in progress but I could not wait to reveal it to you 😀
    It's a 4-module micro-course that takes you into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs in each module.
    From sysadmins to DevOps to homelab enthusiasts, there is something for everyone in this course. Two modules are available now and the other two will be published in the coming two weeks. Enjoy upgrading your Linux skills 💪
Linux Networking at Scale (In Progress): Master advanced networking on Linux — from policy routing to encrypted overlays. (Linux Handbook, Umair Khurshid)
     
  20. Linux Networking at Scale

    by: Umair Khurshid
    Fri, 24 Oct 2025 16:57:28 +0530

    🚀 Why this course?
Modern infrastructure demands more than basic networking commands. When systems span containers, data centers, and cloud edges, you need to scale, isolate, and secure your network intelligently, all using the native power of Linux.
    This micro-course takes you beyond the basics and into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs at every step.
    🧑‍🎓 Who is this course for?
    This course is designed to help SysAdmins and DevOps engineers move from basic interface configuration to production-grade, resilient networking on Linux. Even aspiring network engineers may find some value in how Linux handles routing, policy decisions, and multi-network connectivity. Later modules will build upon this foundation to explore nftables for complex and optimized firewalls, VXLAN and WireGuard for secure overlays, and tc for traffic shaping and QoS.
    This is for:
Linux admins and DevOps engineers managing distributed systems
Network engineers expanding into Linux-based routing and firewalls
Homelab enthusiasts and advanced learners who want real mastery
📋Prerequisite: Familiarity with Linux command-line tools (ip, ping, systemctl) and basic TCP/IP concepts.
🧩 What you’ll learn in this micro-course?
    By the end of this course, you’ll be able to:
Design multi-path and multi-tenant routing using iproute2 and VRFs
Build high-performance firewall setups with nftables
Create secure overlay networks using VXLAN and WireGuard
Implement traffic shaping and QoS policies to control real-world bandwidth usage
🥼Every concept is paired with hands-on labs using network namespaces and containers, no expensive lab gear needed. You’ll build, break, and fix your network, exactly like in production. Well, maybe not exactly like production, but pretty close to that.
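For a taste of what that lab style looks like, here is a minimal sketch using plain iproute2 network namespaces (not course material, just an illustration of the approach; the namespace and interface names are arbitrary): two isolated namespaces wired together with a veth pair.
# Create two isolated network namespaces
sudo ip netns add red
sudo ip netns add blue
# Wire them together with a virtual ethernet (veth) pair
sudo ip link add veth-red type veth peer name veth-blue
sudo ip link set veth-red netns red
sudo ip link set veth-blue netns blue
# Assign addresses and bring the links up
sudo ip -n red addr add 10.0.0.1/24 dev veth-red
sudo ip -n blue addr add 10.0.0.2/24 dev veth-blue
sudo ip -n red link set veth-red up
sudo ip -n blue link set veth-blue up
# Verify connectivity between the namespaces
sudo ip netns exec red ping -c 3 10.0.0.2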
    What are you waiting for? Time to take your Linux networking knowledge to the next level.
  21. by: LHB Community
    Fri, 24 Oct 2025 11:21:17 +0530

    You've already seen how to monitor CPU and memory usage with top and htop. Now, let's take a look at two other tools you can use for monitoring your system: iotop and ntopng. These tools monitor disk I/O (Input/Output) and network traffic, respectively. This tutorial will show you how to install, configure, and use both tools.
    What are iotop and ntopng?
    iotop:
    Similar in appearance to top and htop, iotop is a real-time disk I/O monitoring utility that displays the current activity (reads, writes, and waiting) of each process or thread on a Linux system. It can also show total accumulated usage per process/thread. It's useful for identifying processes that are generating heavy I/O traffic (reads/writes) or causing bottlenecks and high latency.
    ntopng:
    As the name suggests, ntopng is the next-generation version of ntop, a tool for real-time network-traffic monitoring. It provides analytics, host statistics, protocol breakdowns, flow views, and geolocation, helping you spot abnormal usage. Unlike iotop (and the older ntop command), ntopng primarily serves its output through a web interface, so you interact with it in a browser. While this tutorial also covers basic console usage, do note that it's more limited on the CLI.
📋ntopng integrates with systemd on most distros by default, and this tutorial does not cover systems using other init systems.
Installing iotop and ntopng
    Both tools are available for installation on Ubuntu and most other distros in their standard repositories.
    For Debian/Ubuntu and their derivatives:
sudo apt update && sudo apt install -y iotop ntopng
To install ntopng, RHEL, CentOS, Rocky, and AlmaLinux users will need to enable the EPEL repository first:
sudo dnf install -y epel-release
sudo dnf install -y iotop ntopng
For Arch-based distros, use:
sudo pacman -Syu --noconfirm iotop ntopng
For openSUSE, run:
sudo zypper refresh && sudo zypper install -y iotop ntopng
📋On all systems, ntopng is installed as a systemd service, but it only runs by default on Debian/Ubuntu-based systems and on openSUSE/SUSE.
Enable ntopng if you'd like it to run constantly in the background:
sudo systemctl enable --now ntopng
If you'd like to disable this behavior and only use ntopng on demand, you can run:
sudo systemctl stop ntopng && sudo systemctl disable ntopng
Using iotop for monitoring disk I/O
    Much like top and htop, iotop runs solely as a CLI tool. It requires root permissions, but not to worry, it is only used for monitoring purposes and cannot access or control anything else on your system.
sudo iotop
You’ll see something like this:
    At the top, the following real-time readouts are displayed (all in Kilobytes):
Total DISK READ: cumulative amount of data read from disk since iotop started.
Total DISK WRITE: cumulative amount of data written to disk since start.
Current DISK READ: how much data is being read (per second).
Current DISK WRITE: how much data is being written (per second).
Below these outputs, there are several columns shown by default:
TID: Thread ID (unique identifier of the thread/process).
PRIO: I/O priority level (lower number = higher priority).
USER: The user owning the process/thread.
DISK READ: Data read from disk by this thread/process.
DISK WRITE: Data written to disk by this thread/process.
SWAPIN: Percentage of time spent swapping memory in/out.
IO> (I/O): Percentage of time the process waits on I/O operations.
COMMAND: The name or command of the running process/thread.
Useful options & key bindings
    You can control what iotop shows by default by passing various flags when launching the command. Here are some of the commonly used options:
-o (or --only): Only show processes with current I/O (filter idle processes).
-b (or --batch): Non-interactive mode (useful for logging).
-n <count>: Outputs several iterations, then exits (runs in batch mode).
-d <delay>: Delay between iterations (in seconds). For instance, use -d 5 for a 5-second delay, or -d 0.5 for a half-second delay. The default is one second.
When run without -b/--batch, iotop starts in interactive mode, where you can use the following keys to change various options:
o: toggles the view between showing only processes currently doing I/O and all processes running on the system.
p: toggles between displaying only processes or all threads. Changes "TID" (Thread ID) to "PID" (Process ID).
a: toggles accumulated I/O vs current I/O.
r: reverses the sort order (toggles ascending/descending).
left/right arrows: change the sort column (move between columns like DISK READ, COMMAND, etc.).
HOME: jumps to sorting by TID (Thread ID).
END: jumps to sorting by COMMAND (process name).
q: quits iotop.
💡Excessive disk I/O from unexpected processes is usually a sign of possible misconfiguration, runaway logs, a backup mis-schedule, or high database activity. If you're not sure about a process, it's best to investigate what purpose that process serves before taking action.
Practical example scenario where iotop helps you as a sysadmin
    Let's say you're working on your system and you notice that it's suddenly slowing down, but can't find the cause via the normal means (high CPU or memory usage). You might suspect disk I/O is the bottleneck, but this will not show up in most system monitoring tools, so you run "sudo iotop" and sort by DISK WRITE. There, you notice a process is constantly writing hundreds of MB/s, blocking other processes.
Using the "o" keybinding, you filter to only active writers. You may then throttle or stop that process with another tool (like htop), reschedule it to run at off-hours, or have it use another storage device. If you want a record of the incident, a batch-mode capture like the one sketched below is handy.
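Here is a minimal sketch of such a capture, combining the -o, -b, -n, and -d flags covered above with -t, which adds a timestamp to each line in batch mode (the log path is just an example):
# Batch mode: only active I/O, timestamped lines, 12 iterations, 5 seconds apart
sudo iotop -obtn 12 -d 5 > /tmp/iotop-incident.log
You can review the log after mitigating the immediate slowdown to see exactly which processes were hammering the disk.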
    iotop has its limitations
While it is a useful monitoring tool, iotop cannot control processes on its own. It only has read access to I/O activity, not control over it. Some other key things to note with this tool are:
On systems with many threads/processes doing I/O, sorting/filtering is key. It's recommended that you use "-o" when launching the command, or press "o" after you've started it.
iotop shows process-level I/O, but does not always give full hardware device stats (for that, tools like iostat or blktrace may be needed).
You should avoid running iotop on production systems for long intervals without caution, since iotop itself causes overhead when many processes are updating at the same time.
Exploring ntopng to get a graphical view of network traffic
    Unlike iotop and its older variant, ntop (which is no longer packaged on some distros), ntopng is primarily accessed via a web-based GUI at default port 3000.
For example: http://your-server-ip-address:3000, or if you're running it locally, http://localhost:3000.
    From the GUI, you can view hosts, traffic flows, protocols, top talkers, geolocation, alerts, etc. To keep things simple, we'll cover basic usage and features.
    Changing the default port
    Changing the port is a good idea if you already use port 3000 for other local web services.
    To change ntopng’s default web port, edit its configuration file and restart the service.
    sudo nano /etc/ntopng/ntopng.conf Then, change the line defining the web port. If it doesn't exist, add it:
-w=3001
You can use any unused port above 1024.
    Next, you'll need to restart ntopng:
sudo systemctl restart ntopng
You should now see ntopng listening on port 3001.
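To confirm the change took effect, you can inspect the listening sockets with ss from iproute2 (a generic check, nothing ntopng-specific):
sudo ss -tlnp | grep 3001
A LISTEN entry on port 3001 naming the ntopng process means the new port is active.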
    Dashboard overview
💡When you first load ntopng in your browser, you'll need to log in. The default username and password are both "admin". However, you'll be prompted to change the password on the first login.
Once you're logged in, you'll land on the main dashboard, which looks like this:
    This dashboard provides a real-time visual overview of network activity and is usually the first thing you see.
    By default, the dashboard includes:
Traffic summary (top left): shows live inbound and outbound traffic rates, number of active hosts, flows, and alerts. Clicking on any of these will take you to the relevant section.
Search bar (top center): lets you quickly find hosts, IPs, or ports.
Top Flow Talkers (main panel): a large visual block showing which hosts are generating or receiving the most traffic (e.g., your machine vs. external IPs).
Sidebar (left): navigation menu with access to:
  Dashboard: current view.
  Alerts: security or threshold-based notifications.
  Flows/Hosts/Ports/Applications: detailed breakdowns of network activity.
  Interfaces: network interfaces being monitored.
  Settings / System / Developer: configuration and data export options.
Refresh indicator (bottom): shows the live update frequency (default: 5 seconds).
Footer: version information, uptime, and system clock.
You can check each panel in the sidebar and dashboard individually to see what each displays. For this tutorial, we won't go into every detail, as there are too many to cover here.
    Using ntopng from the console
    Although ntopng is designed to be primarily web-based, you can still run it directly in the console for quick checks or lightweight monitoring. This can be useful on headless systems over SSH, or when you just want a quick snapshot of network activity without loading the web UI.
    First, stop the ntopng systemd service:
sudo systemctl stop ntopng
This is necessary to avoid any conflicts between the running service and your access via the CLI.
    Now you can launch ntopng directly:
sudo ntopng --disable-ui --verbose
This command will listen on all network interfaces that ntopng can find. If you'd like to restrict it to a certain interface, you can use the -i flag.
For example, to listen only on your Wi-Fi interface, first find its name (it usually begins with "wl") using either of the following commands:
    ip link | grep wl or
    nmcli device status | grep wl Then run ntopng, pointed to your wifi router:
    sudo ntopng --disable-ui --verbose -i wlp49s0 Replace "wlp49s0" with your device, of course.
    Basic logging with the ntopng CLI
    If you'd like to capture a basic log with ntopng from the console, you can run:
sudo ntopng --disable-ui -i wlp49s0 --dump-flows flows.log
Again, just remember to replace wlp49s0 with your device name. Note that the log will be saved to whichever folder is your current working directory. You can change the location of the log file by providing a path, for example:
sudo ntopng --disable-ui -i wlp49s0 --dump-flows path/to/save/to/flows.log
Practical example scenario where ntopng helps
    Say you suspect unusual network activity on your system. You log in to the ntopng dashboard and notice that one host on your network is sending a large amount of data to an external IP address over port 443 (HTTPS).
    Clicking on that host reveals its flows, showing that a specific application is continuously communicating with a remote server. Using this insight, you can then open another monitoring tool, such as top or htop, to identify and stop the offending process before investigating further.
    Even for less experienced users, ntopng is a great way to understand a system’s network usage at a glance. You can run it on a production server if resources allow, or dedicate a small monitoring host to watch other devices on your network (out of scope here).
    By combining real-time views with short-term history (e.g., spotting periodic traffic spikes), you can build a picture of network health. Used alongside a firewall and tools like fail2ban, ntopng helps surface anomalies quickly so you can investigate and respond.
ntopng has its limitations too
    While ntopng is powerful, capturing all network traffic at very high throughput can require serious resources (NICs, CPU, memory). If you're using it on a high-traffic network, it's probably best to use a separate server for monitoring. Here are some other important things to note:
If you are monitoring remote networks or VLANs, you may need an appropriate network setup (mirror ports, network taps). However, these are outside the scope of this tutorial.
For data retention, you only get a limited history out of the box. For long-term trends, you'll need to configure external storage or a database.
Most traffic (e.g., HTTPS) is encrypted, so ntopng can only show metadata (hosts, ports, volumes, and SNI (Server Name Indication) where available). In such cases, it cannot show the actual payloads.
Conclusion
    iotop and ntopng are two powerful free/open-source tools that can help you monitor, analyze, and troubleshoot critical subsystems on your Linux machine. By incorporating these into your arsenal, you'll get a better understanding of your system's baseline for normal operations and be better equipped to spot anomalies or bottlenecks quickly.
