Blog Entries posted by Blogger

  1. by: Adnan Shabbir
    Wed, 28 May 2025 05:55:55 +0000

Ubuntu 25.04, codenamed Plucky Puffin and released in April 2025, is an interim release supported for 9 months (until January 2026). It ships experimental features that will be tested ahead of the next LTS; those declared stable may be carried forward into Ubuntu 26.04, the next Ubuntu LTS in line.
    In today’s guide, I’ll give you a brief overview of Ubuntu 25.04, what it looks like, and what other features are included in the development and testing.
    Outline:
Outline:
What’s New in Ubuntu 25.04 Codenamed Plucky Puffin?
GNOME 48
Kernel 6.14
Security Center
Updated apt Installation Interface
Well-Being
HDR Display – High Dynamic Range
Other Notable Updates in Ubuntu 25.04
How to Upgrade to Ubuntu 25.04
Conclusion
What’s New in Ubuntu 25.04 Codenamed Plucky Puffin?
With every interim release like Ubuntu 25.04 comes a list of new features to test, along with improvements to existing functionality. Here are the major as well as minor updates in Ubuntu 25.04:
    GNOME 48
Ubuntu 24.04 is based on GNOME 46, whereas Ubuntu 25.04 ships with GNOME 48 at the time of writing. GNOME 48 brings a more modern look and better graphics handling; as usual, each new GNOME version aims to overcome the deficiencies of the previous one and improve over time.

    Kernel 6.14
The kernel is the core of Linux, acting as a bridge between the hardware and the software. Ubuntu 25.04 comes with kernel 6.14 (upstream), developed and maintained by Linus Torvalds and the Linux kernel maintainers.
The first official release of Ubuntu 24.04 contained kernel 6.8. However, Ubuntu 24.04.2 has since been updated to Linux kernel 6.11.
    Security Center
Ubuntu is an open-source OS and is generally considered more secure than most others. Still, modern applications request permissions that the user must grant for smooth functionality. To manage these permissions, Ubuntu has introduced a Security Center in this release, so that users can turn permissions on or off.
    Here’s the initial interface of the Security Center, where you can see that the feature is experimental at the moment.

If you enable it, strictly confined apps must request permissions. App permissions can be reviewed in the settings, i.e., “Settings > Apps”.
    Updated apt Installation Interface
    An interactive UI for the apt-based installations and uninstallations:

    Uninstalling:

    Well-Being
This feature is aimed at the well-being of Ubuntu users. Once enabled, the following can be set:
Screen Time: Set the average screen time usage.
Eyesight Reminder: A reminder to look away from the screen.
Movement Reminder: A reminder to move around.
    Document Viewer (PDF Viewer)
Ubuntu 25.04 is now equipped with a built-in Document Viewer that opens a variety of file types, i.e., PDF, comic book, DjVu, and TIFF documents. Here’s the document viewer:

    HDR Display – High Dynamic Range
High Dynamic Range (HDR) is a display technology that delivers a wider range of brightness and color. This is one of the significant additions in this release. If you have an HDR monitor, you can now attach it to your Ubuntu 25.04 system to experience an HDR display.
    Head over to “Settings > Display > HDR” to manage it.
    Other Notable Updates in Ubuntu 25.04
Color to Color Management:
The Color section in Settings has been replaced with Color Management.

    Timezone in Events:
    Ubuntu 25.04 provides timezone support while creating events in the calendar. Here’s how you can locate it in the Calendar app of Ubuntu 25.04:

    JPEG XL Image Support:
JPEG XL is an enhanced successor to the JPEG image format, and it is now supported by Ubuntu, providing a better experience for users.
    Notification Grouping:
Ubuntu 25.04 now offers notification grouping, making it easier for users to navigate through multiple notifications.
    Yaru Theme:
    The icon and theme experience is better than the previous releases. The icons are now more dynamic and are well integrated with the accent color support.
    Updated Network Manager:
    Ubuntu 25.04 has an updated Network Manager 1.52, whereas Ubuntu 24.04.2 (released parallel to Ubuntu 25.04) has 1.46. The significant change is that Network Manager 1.52 is more aligned towards IPv6 as compared to the 1.46 version.
    Chrony (Network Time Protocol):
Ubuntu 25.04 has adopted Chrony as its Network Time Protocol (NTP) client (as SUSE and RHEL already do), which synchronizes the system time against NTP servers or a reference clock such as a GPS receiver.
Until now, Ubuntu has been using “systemd-timesyncd” as its time-sync client, which implements SNTP (Simple Network Time Protocol). SNTP synchronizes the system clock with a remote server and is less accurate than Chrony (full NTP).
What is NTP? The purpose of NTP is to synchronize the clocks of systems over a network to ensure security, performance, event coordination, and logging. NTP keeps the time sync as accurate as possible, i.e., within milliseconds or sub-milliseconds.
Developer Tools and Libraries:
    Since Ubuntu is well-liked in the developer community, the Ubuntu contributors continuously work on providing updated tools. Ubuntu 25.04 is equipped with updated tools, i.e., Python, GCC, Rust, and Go.
Similarly, a few developer-oriented libraries are also upgraded, i.e., glibc, binutils, and OpenSSL.
    Gaming Support (NVIDIA Dynamic Boost):
NVIDIA Dynamic Boost lets the system shift power between the CPU and GPU for better gaming performance. It is now enabled by default in Ubuntu 25.04.
    System Monitor’s Interface:
Ubuntu’s system monitor shows information about the processes, resources, and file systems. In Ubuntu 25.04, there is a slight change in the interface of the System Monitor. For instance, the info inside the Processes tab is now limited to ID, CPU, Memory, Disk Write, and Disk Read. Here’s the interface where you can see this.

    However, in older versions, the Processes tab has some additional info for each process:

    That’s all from the notable features of Ubuntu 25.04.
    Would you like to upgrade your Ubuntu to Ubuntu 25.04?
    How to Upgrade to Ubuntu 25.04 Plucky Puffin
If you are using an earlier release of Ubuntu (Ubuntu 24.10 or Ubuntu 24.04), you can easily upgrade to Ubuntu 25.04. Let’s go through the steps to upgrade:
Important Note: If you are using an Ubuntu LTS release older than Ubuntu 24.04, you first have to upgrade to Ubuntu 24.04 (see: Upgrade Ubuntu to the Latest LTS). Once upgraded to Ubuntu 24.04, you are ready to follow the steps below and upgrade to Ubuntu 24.10.
    Step 1: Upgrade Your Ubuntu to Ubuntu 24.10
Since it is an interim release, you must have the previous release installed to get Ubuntu 25.04. Here’s how you can upgrade to Ubuntu 24.10:
Update and upgrade the system repositories:
sudo apt update && sudo apt upgrade
Note: It is recommended to run “sudo apt autoremove” after updating/upgrading, to clean out dependencies/packages that are no longer required.
If you are using Ubuntu 24.04 LTS, you have to enable non-LTS release upgrades. For that, open the release-upgrader file in an editor:
sudo nano /etc/update-manager/release-upgrades
Now, change the “Prompt” parameter’s value from “lts” to “normal”, as can be seen below:
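If you prefer a non-interactive edit over nano, the same change can be scripted with sed. This is a sketch, not part of the original guide: it first runs on a copy under /tmp so you can verify the substitution before touching the real file (the printf fallback only fabricates a sample file on systems that don't have the original).

```shell
# Try the substitution on a copy first (fallback creates a sample file).
cp /etc/update-manager/release-upgrades /tmp/release-upgrades 2>/dev/null \
  || printf '[DEFAULT]\nPrompt=lts\n' > /tmp/release-upgrades
sed -i 's/^Prompt=lts$/Prompt=normal/' /tmp/release-upgrades
grep '^Prompt' /tmp/release-upgrades
# Once satisfied, apply it for real:
# sudo sed -i 's/^Prompt=lts$/Prompt=normal/' /etc/update-manager/release-upgrades
```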

Start the upgrade using the do-release-upgrade command:
sudo do-release-upgrade
Here you go:

    Press “y” to proceed with the installation:

    While upgrading, you will be prompted several times asking for acceptance of the changes being processed:

    Step 2: Upgrade to Ubuntu 25.04
Once you are in Ubuntu 24.10, use the do-release-upgrade command again to upgrade to Ubuntu 25.04:
sudo do-release-upgrade
Note: If you get a prompt like “please install all updates available for your release”, run “sudo apt dist-upgrade” and reboot to fix it.
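After the final reboot, it's worth confirming which release you actually landed on. The /etc/os-release file is present on any modern Linux system:

```shell
# Print the distribution name and version; after a successful upgrade
# this should report Ubuntu 25.04.
. /etc/os-release
echo "Now running: $NAME ${VERSION_ID:-unknown}"
```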

    Here’s the Ubuntu 25.04:

    That’s all from this guide.
    Conclusion
    Ubuntu 25.04, codenamed “Plucky Puffin”, is an interim Ubuntu release supported for 9 months. Ubuntu 25.04, released in April 2025, features the updated GNOME (48), updated Kernel (6.14), an improved apt version (3.0), and a security center. Other features include the HDR display, enhanced color management, timezone support in events, etc.
    This post briefly lists the notable features of Ubuntu 25.04 and also explains the process to upgrade to Ubuntu 25.04.
FAQs
    How Long Will Ubuntu 25.04 be Supported?
Ubuntu 25.04 will be supported until January 2026, since it is an interim release and Ubuntu interim releases are supported for 9 months after release.
    Is Ubuntu 25.04 LTS?
No, Ubuntu 25.04 is an interim release, not an LTS. The current latest LTS is Ubuntu 24.04, codenamed Noble Numbat, and the next LTS in line is Ubuntu 26.04.
    How to Upgrade to Ubuntu 25.04?
First, upgrade to Ubuntu 24.04, then to 24.10, and from there you can upgrade to Ubuntu 25.04.
  2. by: Abhishek Prakash
    Wed, 28 May 2025 03:29:07 GMT

    There are two main choices for getting VS Code on Arch Linux:
Install Code - OSS from Arch repositories
Install Microsoft's VS Code from AUR
I know. It's confusing. Let me clear the air for you.
VS Code is an open source project, but the binaries Microsoft distributes are not open source, and they have telemetry enabled.
    Code - OSS is the actual open source version of VS Code.
    Think of Code - OSS as Chromium browser and VS Code as Google Chrome (which is based on Chromium browser).
Another thing here is that some extensions will only work in VS Code, not in the de-Microsofted Code - OSS.
This is why you should think through whether you want to use Microsoft's VS Code or its fully open-source counterpart.
Let me show you the steps for both installation methods.
    Method 1: Install Code - OSS
    ✅ Open source version of Microsoft VS Code
    ✅ Easy to install with a quick pacman command
❌ Some extensions may not work
This is simple. All you have to do is ensure that your Arch system is updated:
sudo pacman -Syu
And then install Code - OSS with:
sudo pacman -S code
It cannot be simpler than this, can it?
    As I mentioned earlier, you may find some extensions that do not work in the open source version of Code.
Also, I had noticed earlier that Ctrl+C and Ctrl+V were not working for copy-paste. Instead, it defaulted to Ctrl+Shift+C and Ctrl+Shift+V for reasons unknown to me. I had not made any changes to key bindings, nor had I opted for a Vim plugin.
    Removing Code OSS
    Removal is equally simple:
sudo pacman -R code
Method 2: Install the actual Microsoft VS Code
    ✅ Popular Microsoft VS Code that is used by most people
    ✅ Access to all proprietary features and extensions in the marketplace
❌ Installation may take effort if you don't have an AUR helper
If you don't care too much about ethics and open source principles and just want to code without overthinking it, go with VS Code.
    There are a couple of VS Code offerings available in the AUR but the official one is this.
Before installing it, you should remove Code - OSS:
sudo pacman -R code
If you have an AUR helper like yay already installed, use it like this:
yay -S visual-studio-code-bin
Otherwise, install yay first and then use it to install the desired package.
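For reference, the standard AUR bootstrap for yay is a git clone of its PKGBUILD followed by makepkg. Since that needs an Arch box and sudo, the sketch below only writes the steps to a script for you to review and run yourself:

```shell
# Write the usual yay bootstrap steps to a reviewable script.
cat > /tmp/install-yay.sh <<'EOF'
#!/bin/sh
set -e
sudo pacman -S --needed --noconfirm git base-devel
git clone https://aur.archlinux.org/yay.git /tmp/yay
cd /tmp/yay
makepkg -si --noconfirm
EOF
chmod +x /tmp/install-yay.sh
echo "Wrote /tmp/install-yay.sh; review it, then run it on your Arch system."
```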
    Don't be deceived by the pretty looking screenshot above. I was using a different theme in VS Code.
    Removal
    You can use your AUR helper or the super reliable pacman command to remove Microsoft VS Code from Arch Linux.
sudo pacman -R visual-studio-code-bin
I'll let you enjoy your preferred version of VS Code on Arch Linux. Please feel free to use the comment section if you have questions or suggestions.
3. by: Janus Atienza
    Tue, 27 May 2025 18:22:59 +0000

    In today’s world, managing shipments and packages has become an important milestone for both personal and business use. For Linux enthusiasts and those who prefer open source tools, there are several powerful ways to track packages effectively without the need for paid software. This approach provides flexibility, security, and control over the tracking process. Let’s take a look at how open source tools can help with shipping management and why Linux is a great platform for these tasks.

    Package Tracking Tools
    Linux, being a versatile and open operating system, offers a variety of package tracking tools that meet the different needs of users. These tools allow both individual users and businesses to track shipments across multiple carriers, all in one place, without having to pay for expensive tracking services.
    The first step in tracking packages is to be able to integrate with different carriers such as FedEx, UPS, DHL, or local postal services. Open source tools can help centralize tracking across these platforms, reducing the need to visit multiple websites for updates. Such tools often pull real-time shipment information from multiple carriers and present it in a user-friendly interface, with minimal user effort.
    Linux users can set up custom scripts to automate the package tracking process. Linux also allows you to set up regular checks and integrate package tracking with other business operations.
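As a minimal sketch of such an automation script (the carrier endpoint in the comment is a placeholder, not a real API; you would substitute your carrier's actual tracking URL and response parser, and the sample tracking numbers are made up):

```shell
#!/bin/sh
# Loop over a list of tracking numbers stored one per line.
cat > /tmp/tracking-numbers.txt <<'EOF'
1Z999AA10123456784
9400110200793123456781
EOF

while IFS= read -r number; do
  [ -z "$number" ] && continue
  # Placeholder endpoint; replace with your carrier's real tracking API:
  # status=$(curl -s "https://api.example-carrier.test/track/$number" | jq -r '.status')
  echo "would check: $number"
done < /tmp/tracking-numbers.txt
```

Dropping a script like this into cron gives you the regular checks mentioned above.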
    How Open Source Tools Improve Delivery Management
    Using open source tools for delivery management offers several benefits, especially when it comes to tracking multiple packages at once or managing logistics for a business. Here’s how these tools improve the process:
Transparency and Control: With open source tools, you have complete control over how your package data is processed.
Collaboration and Scalability: Open source tools make it easy to scale your package tracking system.
Cost-effective delivery management: Open source package tracking tools are usually free to use; the only cost is the time spent installing and configuring them.
    Tracking packages doesn’t have to be a complicated or expensive process. By leveraging the power of Linux and open source tools, users can gain complete control over their delivery management process. The right tools, combined with the flexibility of Linux, can transform the way you manage your deliveries, ensuring that your shipments are always on track.
    The post Parcel Tracking and Linux: Using Open-Source Tools for Delivery Management appeared first on Unixmen.
  4. YAML Validator

    by: Abhishek Prakash
    Tue, 27 May 2025 21:57:07 +0530

    Paste your YAML content or upload a file to validate syntax. Scroll down to see the details on the errors, if any.
  5. by: Daniel Schwarz
    Tue, 27 May 2025 13:02:32 +0000

    The reading-flow and reading-order proposed CSS properties are designed to specify the source order of HTML elements in the DOM tree, or in simpler terms, how accessibility tools deduce the order of elements. You’d use them to make the focus order of focusable elements match the visual order, as outlined in the Web Content Accessibility Guidelines (WCAG 2.2).
    To get a better idea, let’s just dive in!
    (Oh, and make sure that you’re using Chrome 137 or higher.)
    reading-flow
    reading-flow determines the source order of HTML elements in a flex, grid, or block layout. Again, this is basically to help accessibility tools provide the correct focus order to users.
    The default value is normal (so, reading-flow: normal). Other valid values include:
flex-visual
flex-flow
grid-rows
grid-columns
grid-order
source-order
Let’s start with the flex-visual value. Imagine a flex row with five links. Assuming that the reading direction is left-to-right (by the way, you can change the reading direction with the direction CSS property), that’d look something like this:
Now, if we apply flex-direction: row-reverse, the links are displayed 5-4-3-2-1. The problem, though, is that the focus order still starts from 1 (tab through them!), which is visually wrong for somebody who reads left-to-right.
But if we also apply reading-flow: flex-visual, the focus order also becomes 5-4-3-2-1, matching the visual order (which is an accessibility requirement!):

<div>
  <a>1</a>
  <a>2</a>
  <a>3</a>
  <a>4</a>
  <a>5</a>
</div>

div {
  display: flex;
  flex-direction: row-reverse;
  reading-flow: flex-visual;
}

To apply the default flex behavior, reading-flow: flex-flow is what you’re looking for. This is very akin to reading-flow: normal, except that the container remains a reading flow container, which is needed for reading-order (we’ll dive into this in a bit).
    For now, let’s take a look at the grid-y values. In the grid below, the grid items are all jumbled up, and so the focus order is all over the place.
We can fix this in two ways. One way is that reading-flow: grid-rows will, as you’d expect, establish a row-by-row focus order:

<div>
  <a>1</a>
  <a>2</a>
  <a>3</a>
  <a>4</a>
  <a>5</a>
  <a>6</a>
  <a>7</a>
  <a>8</a>
  <a>9</a>
  <a>10</a>
  <a>11</a>
  <a>12</a>
</div>

div {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-auto-rows: 100px;
  reading-flow: grid-rows;

  a:nth-child(2) {
    grid-row: 2 / 4;
    grid-column: 3;
  }

  a:nth-child(5) {
    grid-row: 1 / 3;
    grid-column: 1 / 3;
  }
}

Or, reading-flow: grid-columns will establish a column-by-column focus order:
reading-flow: grid-order will give us the default grid behavior (i.e., the jumbled up version). This is also very akin to reading-flow: normal (except that, again, the container remains a reading flow container, which is needed for reading-order).
    There’s also reading-flow: source-order, which is for flex, grid, and block containers. It basically turns containers into reading flow containers, enabling us to use reading-order. To be frank, unless I’m missing something, this appears to make the flex-flow and grid-order values redundant?
    reading-order
    reading-order sort of does the same thing as reading-flow. The difference is that reading-order is for specific flex or grid items, or even elements in a simple block container. It works the same way as the order property, although I suppose we could also compare it to tabindex.
    Note: To use reading-order, the container must have the reading-flow property set to anything other than normal.
    I’ll demonstrate both reading-order and order at the same time. In the example below, we have another flex container where each flex item has the order property set to a different random number, making the order of the flex items random. Now, we’ve already established that we can use reading-flow to determine focus order regardless of visual order, but in the example below we’re using reading-order instead (in the exact same way as order):
div {
  display: flex;
  reading-flow: source-order; /* Anything but normal */

  /* Features at the end because of the higher values */
  a:nth-child(1) {
    /* Visual order */
    order: 567;
    /* Focus order */
    reading-order: 567;
  }
  a:nth-child(2) { order: 456; reading-order: 456; }
  a:nth-child(3) { order: 345; reading-order: 345; }
  a:nth-child(4) { order: 234; reading-order: 234; }

  /* Features at the beginning because of the lower values */
  a:nth-child(5) { order: -123; reading-order: -123; }
}

Yes, those are some rather odd numbers. I’ve done this to illustrate how the numbers don’t represent the position (e.g., order: 3 or reading-order: 3 doesn’t make it third in the order). Instead, elements with lower numbers are more towards the beginning of the order and elements with higher numbers are more towards the end. The default value is 0. Elements with the same value will be ordered by source order.
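The ordering rule above amounts to a stable numeric sort on the values (lowest first, ties kept in source order), which you can sanity-check outside the browser with a quick sort -n:

```shell
# The five order/reading-order values from the example above,
# sorted the way the browser orders the items: lowest value first.
printf '567\n456\n345\n234\n-123\n' | sort -n
# -123 comes first, 567 last
```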
    In practical terms? Consider the following example:
div {
  display: flex;
  reading-flow: source-order;

  a:nth-child(1) { order: 1; reading-order: 1; }
  a:nth-child(5) { order: -1; reading-order: -1; }
}

Of the five flex items, the first one is the one with order: -1 because it has the lowest order value. The last one is the one with order: 1 because it has the highest order value. The ones with no declaration default to having order: 0 and are thus ordered in source order, but otherwise fit in-between the order: -1 and order: 1 flex items. And it’s the same concept for reading-order, which in the example above mirrors order.
However, when reversing the direction of flex items, keep in mind that order and reading-order work a little differently. For example, reading-order: -1 would, as expected, pull a flex item to the beginning of the focus order. Meanwhile, order: -1 would pull it to the end of the visual order because the visual order is reversed (so we’d need to use order: 1 instead, even if that doesn’t seem right!):
div {
  display: flex;
  flex-direction: row-reverse;
  reading-flow: source-order;

  a:nth-child(5) {
    /* Because of row-reverse, this actually makes it first */
    order: 1;
    /* However, this behavior doesn’t apply to reading-order */
    reading-order: -1;
  }
}

reading-order overrides reading-flow. If we, for example, apply reading-flow: flex-visual, reading-flow: grid-rows, or reading-flow: grid-columns (basically, any declaration that does in fact change the reading flow), reading-order overrides it. We could say that reading-order is applied after reading-flow.
    What if I don’t want to use flexbox or grid layout?
    Well, that obviously rules out all of the flex-y and grid-y reading-flow values; however, you can still set reading-flow: source-order on a block element and then manipulate the focus order with reading-order (as we did above).
    How does this relate to the tabindex HTML attribute?
    They’re not equivalent. Negative tabindex values make targets unfocusable and values other than 0 and -1 aren’t recommended, whereas a reading-order declaration can use any number as it’s only contextual to the reading flow container that contains it.
    For the sake of being complete though, I did test reading-order and tabindex together and reading-order appeared to override tabindex.
    Going forward, I’d only use tabindex (specifically, tabindex="-1") to prevent certain targets from being focusable (the disabled attribute will be more appropriate for some targets though), and then reading-order for everything else.
    Closing thoughts
    Being able to define reading order is useful, or at least it means that the order property can finally be used as intended. Up until now (or rather when all web browsers support reading-flow and reading-order, because they only work in Chrome 137+ at the moment), order hasn’t been useful because we haven’t been able to make the focus order match the visual order.
What We Know (So Far) About CSS Reading Order originally published on CSS-Tricks, which is part of the DigitalOcean family.
  6. by: Abhishek Prakash
    Tue, 27 May 2025 01:53:31 GMT

    How do you get help in the Linux command line?
    On Linux, there are man pages that come preloaded with any distribution. The man pages are basically help pages which you can access using the terminal.
    You get an instruction manual when you purchase a new gadget, right? It is just like that.
    If you want to know what a command does, just use the 'man' keyword followed by the command you would like to know about. While it may seem pretty straightforward, the user experience is a bit dull, as it is all a bunch of text without any decorations or any other features.
    There are some man page alternatives that have tried to modernize the user experience, or give a specific focus to the man pages for particular users. Let me share my quick experience with them.
1. Qman
    Qman is a modern manual page viewer with navigation, scrolling, hyperlink, and table of contents support.
It aims to be fast and feature-rich while remaining a terminal-focused tool.
Qman terminal interface
Key Features:
Index page that displays all manual pages available on the system, sorted alphabetically and organized by section.
Hyperlinks to other manual pages, URLs, and email addresses.
Table of contents for each man page.
Incremental search for manual pages and free page-text search.
Mouse support.
Navigation history.
Online help.
Fully configurable using an INI-style config file.
Qman Command Working
    Installation
On Arch Linux, it can be installed easily using the following command:
yay -Syu qman
For other systems, you need to build it from source.
Qman
2. TLDR
Love cheat sheets that save you from scrolling through a barrage of descriptions? That's what TLDR helps you with.
    It gives short and actionable information for commands to follow.
TLDR working
Key Features:
Community-maintained help pages.
Simpler, more approachable complement to traditional man pages.
Help pages focused on practical examples.
TL;DR stands for "Too Long; Didn't Read". It originated as Internet slang, where it is used to indicate that a long text (or parts of it) has been skipped as too lengthy.
Installation
🚧 You cannot have tldr and tealdeer installed at the same time.
To install tldr as a Snap package on Ubuntu, here is the command:
sudo snap install tldr
For Arch Linux and Fedora, the commands are (respectively):
sudo pacman -Syu tldr
sudo dnf install tldr
tldr
3. Tealdeer
If you want the TLDR tool but built in Rust, Tealdeer should be your pick. It offers simplified, example-based, and community-driven man pages.
Tealdeer working
I noticed an interesting thing about the project's name and I'll quote it here below from their GitHub page:
    Installation
It is available in the Debian, Arch Linux, and Fedora repos:
sudo apt install tealdeer
sudo pacman -Syu tealdeer
sudo dnf install tealdeer
There are static binary builds for Linux only. You can also install via cargo:
cargo install tealdeer
Once installed, run the command below to update the cache:
tldr --update
To get shell completion in bash:
cp completion/bash_tealdeer /usr/share/bash-completion/completions/tldr
Tealdeer
4. Navi Cheat Sheet
If you favor cheat sheets and want an interactive UI to complement them, Navi is the answer.
    Navi Interactive Cheat Sheet
    Key Features:
Browse through cheat sheets and execute commands.
Set up your own config file.
Change colors.
Can be used either as a command or as a shell widget (à la Ctrl-R).
Install
On Arch Linux and Fedora, use the commands below:
sudo pacman -Syu navi
sudo dnf install navi
You can also try using Homebrew:
brew install navi
Once installed, run navi. It will suggest a command to add a default set of cheat sheets. Run it:
navi repo add denisidoro/cheats
Add Default Set of Cheat Sheets in Navi
Navi GitHub
5. Cheat.sh
If your focus is only on cheat sheets and getting the best of community-driven input, Cheat.sh is the perfect terminal tool for you.
    Cheat.sh Working using Local Shell Instance
    Key Features:
Simple interface.
Covers 56 programming languages, several DBMSes, and more than 1000 of the most important UNIX/Linux commands.
No installation needed, but can be installed for offline usage.
Has a convenient command-line client, cht.sh.
Can be used directly from code editors.
Supports a special stealth mode where it can be used fully invisibly, without ever touching a key or making a sound.
Installation
You can use it via curl, with commands like the following:
curl cheat.sh/tar
curl cht.sh/curl
curl https://cheat.sh/rsync
curl https://cht.sh/tr
To install it locally, first install rlwrap and most, then:
PATH_DIR="$HOME/<a-directory-that-is-in-PATH>"
mkdir -p "$PATH_DIR"
curl https://cht.sh/:cht.sh > "$PATH_DIR/cht.sh"
chmod +x "$PATH_DIR/cht.sh"
Cheat.sh
6. The MOST Pager
    Alright, if you are like me, and probably not looking for anything fancy, but just a colorful man page, you can use the Most pager.
Most as Pager
MOST is a powerful paging program. It supports multiple windows and can scroll left and right. It keeps the same good old man page look, with added colors.
    Install
sudo apt install most
sudo dnf install most
sudo pacman -Syu most
Once installed, edit ~/.bashrc:
nano ~/.bashrc
Add the line:
export PAGER='most'
For the latest most versions, color may not appear by default. In that case, add the below line to ~/.bashrc:
export GROFF_NO_SGR=1
Most
7. Yelp or GNOME Help
Considering you are using a distribution powered by the GNOME desktop, you just need to search for the GNOME Help app in the menu. You can also access it via the terminal using the command yelp.
    Using GNOME Help (Yelp) to view man pages
When using the app, press CTRL to open the search bar and type the command that you want:
man:<command> # For example man:man
Or, if you are in a browser, go to the address bar (CTRL+L). Here, enter man:man. When asked to open the link in Help, click on it.
    Opening man page from a browser
GNOME Help (Yelp)
Bonus: Use a terminal with built-in AI
AI is everywhere, even in your terminal. Having AI built right into the tool lets you use it quickly.
There are a few terminals that come with built-in AI agents to help you get all sorts of help, from simple command suggestions to full-fledged deployment plans.
You may use them too if you are an AI aficionado. Warp is one such terminal, which is not open source but hugely popular among modern Linux users.
Get Warp (Partner link)
Wrapping Up
While you have It's FOSS along with the traditional man pages to learn what most commands do on Linux, there are alternatives to man pages that could enhance your learning experience.
    If you prefer a GUI, GNOME Help should be helpful or any similar equivalent pre-installed on your distribution. For terminal-based solutions, there are a couple you can try. Take a look at the feature set they offer, and install what you like the most.
    What do you prefer the most? Let me know in the comments below!
  7. by: Chris Coyier
    Mon, 26 May 2025 15:54:28 +0000

    This is a great story from Dan North about “The Worst Programmer I know”, Tim MacKinnon. It’s a story about measuring developer performance with metrics:
    Scared? Maybe you can guess. Tim was very bad at metrics.
    Why? Maybe you can guess again. Tim wasn’t playing that game, he was a true senior developer in the sense that he nurtured his team.
Every organization is different though. Mercifully in the situation above, Dan protected Tim. But we can all imagine a situation where Tim was fired because of this silly measurement system. (That always reminds me of Cathy O'Neil's Weapons of Math Destruction). Getting to know how the organization works, so you can work within it, is another approach that Cindy Sridharan advocates for. See: How to become a more effective engineer.
    Different organizations will have different paths to these answers. I’ll pluck off a few bullet points:
    Figure out who matters, what they care about, and how to effectively get things done. And don’t wait!
Another one I like in this realm of being a truly effective developer is Artem Zakirullin's Cognitive load is what matters. A good developer writes code that they themselves and others can read, without it being so complex that, well, you run out of mental capacity.
    That tracks for me. I start tracing how code works, and I’m all good and have it in my head, then it feels like right at the fifth logical jump, my brain just dumps it all out and I’m lost.
    I suspect it’s a talent of really great programmers that they can hold a bit more in their head, but it’s not smart to assume that of your fellow developers. And remember that even the very smart appreciate things that are very simple and clear, perhaps especially.
You know what strikes me as a smart developer move? When they work together, even across organizations. It's impressive to me to see Standard Schema, an effort by all the people who work on any library that deals with JavaScript/TypeScript schemas, to make them easier to use and implement.
There are heaps of libraries and tools that already support it, so I'd call that a big success. I see Zod released Mini recently, which uses functions instead of methods, making it tree-shakable, but otherwise works exactly the same. Likely a nod to Valibot, which was always the "Zod but smaller" choice.
Another thing I think is smart: seeing what developers are already doing and making that thing better. Like, I'm sure there are very fancy exotic ways to debug JavaScript in powerful ways. But we all know most of us just console.log() stuff. So I like how Microsoft is like, let's just make that better with console.context(). This allows for better filtering and styling of messages and such, which would surely be welcome. Might as well steal formatting strings from Node too.
  8. by: Abhishek Kumar
    Mon, 26 May 2025 14:31:56 +0530

    I see a lot of posts on my Twitter (or X) feed debating the merits of ditching cloud services in favor of homelab self-hosted setups just like I tried hosting Wikipedia and the Arch wiki. Some even suggest using bare-metal servers for professional environments.
Source: Fireship on X
While these posts often offer intriguing tools and perspectives, I can't help but notice a pattern: companies lean heavily on cloud services until something goes catastrophically wrong, like Google Cloud accidentally deleting customer data.
Source: Ars Technica
However, let’s be real, human error can occur anywhere. Whether in the cloud or on-premises, mistakes are universal.
    So, no, I’m not here to tell you to migrate your production services to a makeshift homelab server and become the next Antoine from Silicon Valley.
    But if you’re wondering why people even homelab in the era of AWS and Hetzner, I’m here to make the case: it’s fun, empowering, and yes, sometimes even practical.
    1. Cost control over time
Cloud services are undeniably convenient, but they often come with hidden costs. I still remember, during my initial days here at It's FOSS, Abhishek messaging me to remind me to delete any unused or improperly configured Linode instances.
    That’s the thing with cloud services, you pay for convenience, and if you’re not meticulous, it can snowball.
Source: Mike Shoebox on X
A homelab, on the other hand, is a one-time investment. You can repurpose an old PC or buy retired enterprise servers at a fraction of their original cost.
    Sure, there are ongoing power costs, but for many setups, especially with efficient hardware like Raspberry Pi clusters, this remains manageable.
    I'll take this opportunity to share my favorite AWS meme.
    2. Learning and experimentation
    If you’re in tech, be it as a sysadmin, developer, or DevOps engineer, having a homelab is like owning a personal playground.
    Want to deploy Kubernetes? Experiment with LXC containers? Test Ansible playbooks? You can break things, fix them, and learn along the way without worrying about production outages or cloud charges.
For me, nothing beats the thrill of running Proxmox on an old laptop with a Core i5, 8 GB of RAM, and a 1 TB hard drive.
    It’s modest (you might've seen that poor machine in several of my articles), but I’ve used it to spin up VMs, host Docker containers, and even test self-hosted alternatives to popular SaaS tools.
    3. Privacy and ownership
    When your data resides in the cloud, you trust the provider with its security and availability. But breaches happen, and privacy-conscious individuals might balk at the idea of sensitive information being out of their direct control.
    With a homelab, you own your data. Want a cloud backup? Use tools like Nextcloud. Need to share documents easily? Host your own FileBrowser. This setup isn’t just practical, it’s liberating.
    Sure, there’s a learning curve and it could be steep for many. Thankfully, we also have plug-and-play solutions like CasaOS, which we covered in a previous article. All you need to do is head to the app store, select 'install,' and everything will be taken care of for you.
    4. Practical home uses
    Homelabs aren’t just for tech experiments. They can serve real-world purposes, often replacing expensive commercial solutions:
Media servers: Host your own movie library with Jellyfin or Plex. No subscription fees, no geo-restrictions, and no third-party tracking, as long as you have the media on your computer.
Home security: Set up a CCTV network with open-source tools like ZoneMinder. Add AI-powered object detection, and you’ve built a system that rivals professional offerings at a fraction of the cost.
Family productivity: Create centralized backups with Nextcloud or run remote desktop environments for family members. You become the go-to tech person for your household, but in a rewarding way.
For my parents, I host an Immich instance for photo management and a Jellyfin instance for media streaming on an old family desktop. Since the server is already running, I also use it as my offsite backup solution, just to be safe. 😅
📋 If you are into self-hosting, always keep multiple backups of your data in different locations/systems/mediums. Follow the golden rule of 3-2-1 backup.
What about renting a VPS for cloud computing?
    I completely understand that renting a VPS can be a great choice for many. It offers flexibility, ease of use, and eliminates the need to manage physical hardware.
    These days, most VPS providers like AWS, Oracle, and others offer a 1-year free tier to attract users and encourage them to explore their platforms. This is a fantastic opportunity for beginners or those testing the waters with cloud hosting.
    I’ve also heard great things about Hetzner, especially their competitive pricing, which makes them an excellent option for those on a budget.
    In fact, I use Nanode myself to experiment with a DNS server. This setup spares me the hassle of port forwarding, especially since I’m behind CGNAT.
If you’re interested in hosting your own projects or services but face similar limitations, I’ve covered a guide on Cloudflare Tunnels that you can consult; it’s a handy workaround for these challenges.
    Personally, I believe gatekeeping is one of the worst things in the tech community. No one should dictate how or where you host your projects. Mix and match! Host some services on your own systems, build something from scratch, or rent a VPS for convenience.
Just be mindful of what you’re hosting, like ensuring no misconfigured recursive function is running loose, and keep exploring what suits your needs best.
    My journey into homelab
    My journey into homelabbing and self-hosting started out of necessity. When I was in university, I didn’t have the budget for monthly cloud server subscriptions, domain hosting, or cloud storage.
    That limitation pushed me to learn how to piece things together: finding free tools, configuring them to fit my needs, diving into forum discussions, helping others solve specific issues, and eagerly waiting for new releases and features.
    This constant tinkering became an endless cycle of learning, often without even realizing it. And that’s the beauty of it. Whether you’re self-hosting on a VPS or your homelab, every step is a chance to explore, experiment, and grow.
    So, don’t feel constrained by one approach. The freedom to create, adapt, and learn is what makes this space so exciting.
    Wrapping up
    At the end of the day, whether you choose to build a homelab, rent a VPS, or even dabble in both, it’s all about finding what works best for you. There’s no one-size-fits-all approach here.
    For me, homelabbing started as a necessity during my university days, but over time, it became a passion, a way to learn, experiment, and create without boundaries.
    Sure, there are challenges, misconfigured services, late nights debugging, and the occasional frustration, but those moments are where the real learning happens.
    Renting a VPS has its perks too. It’s quick, convenient, and often more practical for certain projects.
    I’ve come to appreciate the balance of using both approaches, hosting some things locally for the sheer fun of it and using VPS providers when it makes sense. It’s not about choosing sides; it’s about exploring the possibilities and staying curious.
    If you’re someone who enjoys tinkering, building, and learning by doing, I’d encourage you to give homelabbing a try. Start small, experiment, and let your curiosity guide you.
    And if you prefer the convenience of a VPS or a mix of both, that’s perfectly fine too. At the end of the day, it’s your journey, your projects, and your learning experience.
    So, go ahead, spin up that server, configure that service, or build something entirely new. The world of self-hosting is vast, and the possibilities are endless. Happy tinkering!
  9. by: Abhishek Prakash
    Fri, 23 May 2025 18:40:52 +0530

    Linux doesn't get easier. You just get better at it.
    This is what I always suggest to beginners. The best way to learn Linux (or any new skill for that matter) is to start using it. The more you use it, the better you get at it 💪
Here are the highlights of this edition:
Master splitting windows in Vim
Essential YAML concepts
Checkcle
And more tools, tips and memes for you
This edition of LHB Linux Digest newsletter is supported by PikaPods.
🚀 Self-hosting on autopilot
    PikaPods allows you to quickly deploy your favorite open source software. All future updates and backups are handled automatically by PikaPods while you enjoy using the software.
    Wondering if you should rely on it? You get $5 free credit, so try it out and test it yourself.
PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
     
      This post is for subscribers only
  10. by: Temani Afif
    Fri, 23 May 2025 13:02:32 +0000

Creating CSS Shapes is a classic and one of my favorite exercises. Indeed, I have one of the biggest collections of CSS Shapes, from where you can easily copy the code of any shape. I also wrote an extensive guide on how to create them: The Modern Guide For Making CSS Shapes.
    Even if I have detailed most of the modern techniques and tricks, CSS keeps evolving, and new stuff always emerges to simplify our developer life. Recently, clip-path was upgraded to have a new shape() value. A real game changer!
    Before we jump in, it’s worth calling out that the shape() function is currently only supported in Chrome 137+ and Safari 18.4+ as I’m writing this in May 2025.
    What is shape()?
    Let me quote the description from the official specification:
    In other words, we have the SVG features in the CSS side that we can combine with existing features such as var(), calc(), different units, etc. SVG is already good at drawing complex shapes, so imagine what is possible with something more powerful.
    If you keep reading the spec, you will find:
    And guess what? I already created an online converter from SVG to CSS. Save this tool because it will be very handy. If you are already good at creating SVG shapes or you have existing codes, no need to reinvent the wheel. You paste your code in the generator, and you get the CSS code that you can easily tweak later.
    Let’s try with the CSS-Tricks logo. Here is the SVG I picked from the website:
<svg width="35px" height="35px" viewBox="0 0 362.62 388.52" > <path d="M156.58,239l-88.3,64.75c-10.59,7.06-18.84,11.77-29.43,11.77-21.19,0-38.85-18.84-38.85-40C0,257.83,14.13,244.88,27.08,239l103.6-44.74L27.08,148.34C13,142.46,0,129.51,0,111.85,0,90.66,18.84,73,40,73c10.6,0,17.66,3.53,28.25,11.77l88.3,64.75L144.81,44.74C141.28,20,157.76,0,181.31,0s40,18.84,36.5,43.56L206,149.52l88.3-64.75C304.93,76.53,313.17,73,323.77,73a39.2,39.2,0,0,1,38.85,38.85c0,18.84-12.95,30.61-27.08,36.5L231.93,194.26,335.54,239c14.13,5.88,27.08,18.83,27.08,37.67,0,21.19-18.84,38.85-40,38.85-9.42,0-17.66-4.71-28.26-11.77L206,239l11.77,104.78c3.53,24.72-12.95,44.74-36.5,44.74s-40-18.84-36.5-43.56Z"></path> </svg>
You take the value inside the d attribute, paste it in the converter, and boom! You have the following CSS:
.shape { aspect-ratio: 0.933; clip-path: shape(from 43.18% 61.52%,line by -24.35% 16.67%,curve by -8.12% 3.03% with -2.92% 1.82%/-5.2% 3.03%,curve by -10.71% -10.3% with -5.84% 0%/-10.71% -4.85%,curve to 7.47% 61.52% with 0% 66.36%/3.9% 63.03%,line by 28.57% -11.52%,line to 7.47% 38.18%,curve to 0% 28.79% with 3.59% 36.67%/0% 33.33%,curve to 11.03% 18.79% with 0% 23.33%/5.2% 18.79%,curve by 7.79% 3.03% with 2.92% 0%/4.87% 0.91%,line by 24.35% 16.67%,line to 39.93% 11.52%,curve to 50% 0% with 38.96% 5.15%/43.51% 0%,smooth by 10.07% 11.21% with 11.03% 4.85%,line to 56.81% 38.48%,line by 24.35% -16.67%,curve to 89.29% 18.79% with 84.09% 19.7%/86.36% 18.79%,arc by 10.71% 10% of 10.81% 10.09% small cw,curve by -7.47% 9.39% with 0% 4.85%/-3.57% 7.88%,line to 63.96% 50%,line to 92.53% 61.52%,curve by 7.47% 9.7% with 3.9% 1.51%/7.47% 4.85%,curve by -11.03% 10% with 0% 5.45%/-5.2% 10%,curve by -7.79% -3.03% with -2.6% 0%/-4.87% -1.21%,line to 56.81% 61.52%,line by 3.25% 26.97%,curve by -10.07% 11.52% with 0.97% 6.36%/-3.57% 11.52%,smooth by -10.07% -11.21% with -11.03% -4.85%,close); }
Note that you don’t need to provide any viewBox data. The converter will automatically find the smallest rectangle for the shape and will calculate the coordinates of the points accordingly. No more viewBox headaches and no need to fight with overflow or extra spacing!
CodePen Embed Fallback
Here is another example where I am applying the shape to an image element. I am keeping the original SVG so you can compare both shapes.
CodePen Embed Fallback
When to use shape()
I would be tempted to say “all the time”, but in reality, no. In my guide, I distinguish between two types of shapes: the ones with only straight lines and the ones with curves. Each type can either have repetition or not. In the end, we have four categories of shapes.
    If we don’t have curves and we don’t have repetition (the easiest case), then clip-path: polygon() should do the job. If we have a repetition (with or without curves), then mask is the way to go. With mask, we can rely on gradients that can have a specific size and repeat, but with clip-path we don’t have such options.
    If you have curves and don’t have a repetition, the new shape() is the best option. Previously, we had to rely on mask since clip-path is very limited, but that’s no longer the case. Of course, these are not universal rules, but my own way to identify which option is the most suitable. At the end of the day, it’s always a case-by-case basis as we may have other things to consider, such as the complexity of the code, the flexibility of the method, browser support, etc.
    Let’s draw some shapes!
    Enough talking, let’s move to the interesting part: drawing shapes. I will not write a tutorial to explain the “complex” syntax of shape(). It will be boring and not interesting. Instead, we will draw some common shapes and learn by practice!
    Rectangle
    Take the following polygon:
clip-path: polygon( 0 0, 100% 0, 100% 100%, 0 100% );
Technically, this will do nothing since it will draw a rectangle that already follows the element shape, which is a rectangle, but it’s still the perfect starting point for us.
    Now, let’s write it using shape().
clip-path: shape( from 0 0, line to 100% 0, line to 100% 100%, line to 0 100% );
The code should be self-explanatory and we already have two commands. The from command is always the first command and is used only once. It simply specifies the starting point. Then we have the line command that draws a line to the next point. Nothing complex so far.
    We can still write it differently like below:
clip-path: shape( from 0 0, hline to 100%, vline to 100%, hline to 0 );
Between the points 0 0 and 100% 0, only the first value is changing, which means we are drawing a horizontal line from 0 0 to 100% 0, hence the use of hline to 100%, where you only need to specify the horizontal offset. It’s the same logic using vline, where we draw a vertical line between 100% 0 and 100% 100%.
    I won’t advise you to draw your shape using hline and vline because they can be tricky and are a bit difficult to read. Always start by using line and then if you want to optimize your code you can replace them with hline or vline when applicable.
    We have our first shape and we know the commands to draw straight lines:
CodePen Embed Fallback
Circular Cut-Out
    Now, let’s try to add a circular cut-out at the top of our shape:
    For this, we are going to rely on the arc command, so let’s understand how it works.
    If we have two points, A and B, there are exactly two circles with a given radius that intersect with both points like shown in the figure. The intersection gives us four possible arcs we can draw between points A and B. Each arc is defined by a size and a direction.
    There is also the particular case where the radius is equal to half the distance between A and B. In this case, only two arcs can be drawn and the direction will decide which one.
    The syntax will look like this:
clip-path: shape( from Xa Ya, arc to Xb Yb of R [large or small] [cw or ccw] );
Let’s add this to our previous shape. No need to think about the values. Instead, let’s use random ones and see what happens:
clip-path: shape( from 0 0, arc to 40% 0 of 50px, line to 100% 0, line to 100% 100%, line to 0 100% );
CodePen Embed Fallback
Not bad, we can already see the arc between 0 0 and 40% 0. Notice how I didn’t define the size and direction of the arc. By default, the browser will use small and ccw.
    Let’s explicitly define the size and direction to see the four different cases:
CodePen Embed Fallback
Hmm, it’s working for the first two blocks but not the other ones. Quite strange, right?
    Actually, everything is working fine. The arcs are drawn outside the element area so nothing is visible. If you add some box-shadow, you can see them:
CodePen Embed Fallback
Arcs can be tricky due to the size and direction thing, so get ready to be confused. If that happens, remember that you have four different cases, and trying all of them will help you find which one you need.
    Now let’s try to be accurate and draw half a circle with a specific radius placed at the center:
    We can define the radius as a variable and use what we have learned so far:
.shape { --r: 50px; clip-path: shape( from 0 0, line to calc(50% - var(--r)) 0, arc to calc(50% + var(--r)) 0 of var(--r), line to 100% 0, line to 100% 100%, line to 0 100% ); }
CodePen Embed Fallback
It’s working fine, but the code can still be optimized. We can replace all the line commands with hline and vline like below:
.shape { --r: 50px; clip-path: shape(from 0 0, hline to calc(50% - var(--r)), arc to calc(50% + var(--r)) 0 of var(--r), hline to 100%, vline to 100%, hline to 0 ); }
We can also replace the radius with 1%:
.shape { --r: 50px; clip-path: shape(from 0 0, hline to calc(50% - var(--r)), arc to calc(50% + var(--r)) 0 of 1%, hline to 100%, vline to 100%, hline to 0 ); }
When you define a small radius (smaller than half the distance between both points), no circle can meet the condition we explained earlier (an intersection with both points), so we cannot draw an arc. This case falls within an error handling where the browser will scale the radius until we can have a circle that meets the condition. Instead of considering this case as invalid, the browser will fix “our mistake” and draw an arc.
    This error handling is pretty cool as it allows us to simplify our shape() function. Instead of specifying the exact radius, I simply put a small value and the browser will do the job for me. This trick only works when the arc we want to draw is half a circle. Don’t try to apply it with any arc command because it won’t always work.
    Another optimization is to update the following:
arc to calc(50% + var(--r)) 0 of 1%,
…with this:
arc by calc(2 * var(--r)) 0 of 1%,
Almost all the commands can use either a to directive or a by directive. The first one defines absolute coordinates, like the ones we use with polygon(). It’s the exact position of the point we are moving to. The second defines relative coordinates, which means we need to consider the previous point to identify the coordinates of the next point.
    In our case, we are telling the arc to consider the previous point (50% - R) 0 and move by 2*R 0, so the final point will be (50% - R + 2R) (0 + 0), which is the same as (50% + R) 0.
.shape { --r: 50px; clip-path: shape(from 0 0, hline to calc(50% - var(--r)), arc by calc(2 * var(--r)) 0 of 1px, hline to 100%, vline to 100%, hline to 0 ); }
This last optimization is great because if we want to move the cutout from the center, we only need to update one value: the 50%.
.shape { --r: 50px; --p: 40%; clip-path: shape( from 0 0, hline to calc(var(--p) - var(--r)), arc by calc(2 * var(--r)) 0 of 1px, hline to 100%, vline to 100%, hline to 0 ); }
CodePen Embed Fallback
How would you adjust the above to have the cut-out at the bottom, left, or right? That’s your first homework assignment! Try to do it before moving to the next part.
    I will give my implementation so that you can compare with yours, but don’t cheat! If you can do this without referring to my work, you will be able to do more complex shapes more easily.
CodePen Embed Fallback
Rounded Tab
    Enough cut-out, let’s try to create a rounded tab:
    Can you see the puzzle of this one? Similar to the previous shape, it’s a bunch of arc and line commands. Here is the code:
.shape { --r: 26px; clip-path: shape( /* left part */ from 0 100%, arc by var(--r) calc(-1 * var(--r)) of var(--r), vline to var(--r), arc by var(--r) calc(-1 * var(--r)) of var(--r) cw, /* right part */ hline to calc(100% - 2 * var(--r)), arc by var(--r) var(--r) of var(--r) cw, vline to calc(100% - var(--r)), arc by var(--r) var(--r) of var(--r) ); }
It looks a bit scary, but if you follow it command by command, it becomes a lot clearer to see what’s happening. Here is a figure to help you visualize the left part of it.
    All the arc commands are using the by directive because, in all the cases, I always need to move by an offset equal to R, meaning I don’t have to calculate the coordinates of the points. And don’t try to replace the radius by 1% because it won’t work since we are drawing a quarter of a circle rather than half of a circle.
CodePen Embed Fallback
From this, we can easily achieve the left and right variations:
CodePen Embed Fallback
Notice how I am only using two arc commands instead of three. One rounded corner can be done with a classic border radius, so this can help us simplify the shape.
    Inverted Radius
    One last shape, the classic inner curve at the corner also called an inverted radius. How many arc commands do we need for this one? Check the figure below and think about it.
    If your answer is six, you have chosen the difficult way to do it. It’s logical to think about six arcs since we have six curves, but three of them can be done with a simple border radius, so only three arcs are needed. Always take the time to analyze the shape you are creating. Sometimes, basic CSS properties can help with creating the shape.
    What are you waiting for? This is your next homework and I won’t help you with a figure this time. You have all that you need to easily create it. If you are struggling, give the article another read and try to study the previous shapes more in depth.
    Here is my implementation of the four variations:
CodePen Embed Fallback
Conclusion
    That’s all for this first part. You should have a good overview of the shape() function. We focused on the line and arc commands which are enough to create most of the common shapes.
    Don’t forget to bookmark the SVG to CSS converter and keep an eye on my CSS Shape collection where you can find the code of all the shapes I create. And here is a last shape to end this article.
CodePen Embed Fallback
Better CSS Shapes Using shape() — Part 1: Lines and Arcs originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Geoff Graham
    Thu, 22 May 2025 14:43:09 +0000

    Clever, clever that Andy Bell. He shares a technique for displaying image alt text when the image fails to load. Well, more precisely, it’s a technique to apply styles to the alt when the image doesn’t load, offering a nice UI fallback for what would otherwise be a busted-looking error.
    The recipe? First, make sure you’re using alt in the HTML. Then, a little JavaScript snippet that detects when an image fails to load:
const images = document.querySelectorAll("img"); if (images) { images.forEach((image) => { image.onerror = () => { image.setAttribute("data-img-loading-error", ""); }; }); }
That slaps an attribute on the image — data-img-loading-error — that is selected in CSS:
img[data-img-loading-error] { --img-border-style: 0.25em solid color-mix(in srgb, currentColor, transparent 75%); --img-border-space: 1em; border-inline-start: var(--img-border-style); border-block-end: var(--img-border-style); padding-inline-start: var(--img-border-space); padding-block: var(--img-border-space); max-width: 42ch; margin-inline: auto; }
And what you get is a lovely little presentation of the alt that looks a bit like a blockquote and is only displayed when needed.
    CodePen Embed Fallback Andy does note, however, that Safari does not render alt text if it goes beyond a single line, which 🤷‍♂️.
    You can style alt text like any other text originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. awk Command in Linux

    by: Adnan Shabbir
    Thu, 22 May 2025 08:59:25 +0000

The awk command is not just a command; it’s a scripting language, just like bash. The awk command is used for advanced pattern scanning, data extraction, and text manipulation. Because of its scripting support, it is useful for Linux power users, whether an administrator, a developer, or a Linux enthusiast. For instance, a system administrator can swiftly handle tasks such as log processing/analysis, tracking network IPs, generating reports, and monitoring the system.
If you are looking to get a strong technical grip on awk, this guide provides a brief tutorial on the awk command, including its use cases with examples.
    awk Linux Command
    The awk command contains programming language features. Because of its diverse use cases, awk has left behind its command-line competitors, i.e., grep, cut, etc. So, before diving further into the awk command examples, let’s first discuss the following:
How is the awk command better than grep, cut, and other competitors?
Syntax of the awk Command
For what operations can you use the awk command
How is the awk command better than grep, cut, and other competitors?
    Let’s quickly have a look at why awk beats grep, cut, and other competitors.
The awk command has built-in variables that are used silently by default. For instance, “FS” allows you to split fields on some character, and you can access and perform operations on them.
awk supports loops, variables, conditions, arrays, and functions, which are not supported by any of awk’s competitors.
The data can be presented beautifully using the built-in variables that other commands do not have.
Syntax of the awk Command
awk 'pattern {action}' file
The single quotes keep the shell from expanding characters such as $1 or splitting the program on spaces; a trivial program may run unquoted, but in practice you should always quote it. Here’s the further breakdown of the awk command:
The “file” is the name of the file on which the “awk” is being applied.
The 'pattern {action}' is the core part of awk:
“pattern”: represents a condition, i.e., it can be a regex; it is usually left blank to match all lines.
“action”: the operation to be executed after a successful pattern match.
For What Operations can you use the awk Command
    The awk scripting language is broadly used in text processing. Let’s discuss what text processing operations can be performed using awk:
Pattern scanning and processing.
Text filtration, transformation, editing/modifying.
Extracting data from specific fields or records as a whole.
File processing, i.e., single or multiple files of different types.
Formatted output, i.e., writing output to another file, and print support.
Mathematical calculations, i.e., using conditions and looping support.
Controlled operations on the data, i.e., BEGIN and END blocks.
System monitoring, i.e., using awk with other Linux commands to get filtered results.
That’s all from understanding the basics of the awk command. Now, let’s dig into the examples to learn practically.
    awk Command Examples
    The awk examples provided here were experimented on two files, “students.txt” and “class.txt”.
    Content of the “students.txt”:

    Content of the “class.txt”:

    Let’s start with the basic use cases.
    Print with awk
Print is the primary operation of the awk command. It can be used at any time and in any form to output data.
    Example 1: Print All (in a single or multiple file)
    Use the print in the “{action}” part to print the content of the file:
    awk '{print}' file
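The article’s students.txt is shown only as a screenshot, so here is a minimal, runnable sketch with hypothetical sample data (columns: ID, name, department, batch):

```shell
# Hypothetical sample data standing in for the article's students.txt screenshot
printf '101 Alice Software FirstYear\n102 Bob Hardware SecondYear\n103 Carol Software ThirdYear\n' > students.txt

# Print every record of the file
awk '{print}' students.txt
```

The same command works identically on any text file you substitute for students.txt.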
    Example 2: Print Specific Columns
    Write the column number in place of the “n” to show the data of that column only:
    awk '{print $n}' file
    Example 3: Print a Specific Range of Data from the File
You can specify a record (row) range to retrieve only that specific data:
    awk 'NR==4, NR==8' file
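A quick sketch of the range pattern with a throwaway five-line file (the file and its contents are assumptions for illustration):

```shell
# Five numbered lines to demonstrate an NR (record) range
printf 'line1\nline2\nline3\nline4\nline5\n' > lines.txt

# Print only records 2 through 4
awk 'NR==2, NR==4' lines.txt
```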
    Example 4: Update any Field’s Value and Print it
    You need to specify the field that you want to update and its value. Then, you can print it or store the output in some other file.
    Here’s a demonstration:
    awk '{$3 = "New Value"; print}' file Just updates on the terminal, the original content remains the same.

    Example 5: Printing From Multiple Files
    The awk command can be applied to multiple files. Here’s how it works:
    awk '{print $1}' file1 file2 The command will print the first column/field of two different files:

    Matching a Specific Expression | awk With Regular Expressions
    Expressions are used with the “awk” command to match a specific pattern or a word in the file and perform any operation on it.
    Example 6: Matching the Exact Word
    The command below matches the expression and performs the operation based on that match. Here’s a simple example:
    awk '/Software/ {print}' file The command checks for the “Software” word and prints all the data records that contain the “Software” word.
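With the hypothetical students.txt data used throughout this sketch, only the matching records come through:

```shell
# Only records containing the word "Software" are printed
printf '101 Alice Software\n102 Bob Hardware\n103 Carol Software\n' > students.txt
awk '/Software/ {print}' students.txt
```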

    Example 7: Match the Start of the Character
The caret (“^”) symbol is used to match a pattern at the start of each line in the file, after which further operations can be performed. Here’s the practical:
    awk '/^110/ {print}' file Printing the lines that start with “110”:

    Example 8: Match the Endpoint Character
    Likewise, the end character can also be specified to match and print (or perform any other operation):
awk '/Year$/ {print}' file The above command matches lines that end with “Year” and prints those records.
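Both anchors together in one runnable sketch (the data is hypothetical sample content):

```shell
printf '110 Alice FirstYear\n210 Bob SecondYear\n111 Carol Graduated\n' > students.txt

# ^ anchors the match to the start of the line
awk '/^110/ {print}' students.txt   # only the line starting with 110

# $ anchors the match to the end of the line
awk '/Year$/ {print}' students.txt  # only lines ending with Year
```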

    Bonus: Other Regex Characters
    The following are the rest of the Regex operators that can be used with awk to get more refined output:
Regex Characters | Description
[ax] | Matches either of the characters a and x
[a-x] | Matches any character in the range a-x
\w | Matches a word character (a GNU awk extension)
\s | Matches a whitespace character (a GNU awk extension)
[0-9] or [[:digit:]] | Matches a digit (awk does not support Perl's \d)
Formatting the Data With awk | awk Variables (NR, NF, FS, FNR, OFS, ORS)
    The awk variables are the built-in variables to process and manipulate the data. Each variable has its own purpose and refers to a record or a field. Let me first introduce all these variables to you:
    NR (Number of Records):
    The number of records being processed, i.e., if a file has 10 records (rows), then the NR’s value will range from 1 to 10.
    Example 9: Print the Number of Records
    Printing the record number of the data in a file:
    awk '{print NR, $2}' filename
    Example 10: Getting the Disk Usage Report
    Based on NR, the administrator can analyze the disk usage report:
    df -h | awk 'NR>1 { print $1, $5 }' The command gets the input from the df -h command and then pipes it with awk to get the filesystem name ($1) and the percentage ($5) used by each filesystem.
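To make the pipeline reproducible, here is the same NR>1 trick against simulated df -h output (the filesystem names and figures are invented for illustration):

```shell
# Simulated `df -h` output, so the sketch doesn't depend on the live system
printf 'Filesystem Size Used Avail Use%% Mounted\n/dev/sda1 50G 20G 30G 40%% /\n/dev/sdb1 100G 90G 10G 90%% /data\n' |
  awk 'NR>1 { print $1, $5 }'
```

NR>1 simply skips the header record, which is why the column titles never appear in the output.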

    Similarly, other resources’ performance and progress can also be checked using the NR in awk.
    NF (Number of Fields):
    Denotes the number of fields in each record.
    Example 11: Getting Number of Fields in a File
    Let’s see it through the following:
    awk '{print NF}' file The command prints the “NF” number of fields in each record of the target file:
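A minimal sketch with records of different widths (demo.txt is a hypothetical file):

```shell
# Records with different field counts
printf 'a b c\nd e\n' > demo.txt
awk '{print NF}' demo.txt   # prints 3, then 2
```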

    FS (Field Separator):
This is the character used to separate input fields. It is whitespace by default; for a CSV file, it would be a comma. You can set it with the -F option or by assigning FS in a BEGIN block:
Example 12: Printing the field separator
awk '{print FS, $4}' file This command prints the field separator followed by the 4th field:
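In practice you usually set FS rather than print it. A sketch on a colon-separated, /etc/passwd-style line (the record itself is hypothetical sample data):

```shell
# Colon-separated record, passwd-style
printf 'root:x:0:0:root:/root:/bin/bash\n' > pw.txt

# -F sets the input field separator; print the login name and UID
awk -F ':' '{print $1, $3}' pw.txt   # prints: root 0
```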

    FNR (File Number of Record):
FNR counts the records of each file separately when multiple files are processed. NR starts at 1 and keeps incrementing across all the files, whereas FNR resets to 1 at the start of each new file.
Example 13: Printing the File Record Number with a Field
awk '{print FNR, $4}' file The command prints each record’s number within its file, along with the 4th field:
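The NR-versus-FNR distinction is easiest to see with two small throwaway files (both hypothetical):

```shell
printf 'a\nb\n' > f1.txt
printf 'c\nd\n' > f2.txt

# NR keeps counting across files; FNR restarts at 1 for each file
awk '{print FILENAME, NR, FNR}' f1.txt f2.txt
```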

    OFS (Output Field Separator):
This is the Output Field Separator, i.e., the character separating the fields in the output. The default output field separator is a single space (“ ”). However, you can change it by assigning OFS; note that it is only inserted between fields that are separated by commas in the print statement:
    Example 14: Changing the Output Field Separator
    Let’s understand it practically:
    awk 'BEGIN {OFS=" - "} {print $3, $4}' file The command will print the 3rd and 4th columns from the specified file and will set the “–” as the new OFS:

    ORS (Output Record Separator):
Like OFS, ORS represents the Output Record Separator, i.e., the character printed after each record. It is a newline by default; however, you can change it, as we are going to show you here practically.
    Example 15: Changing the Output Record Separator
    The following command will change the record separator to “|”:
    awk '{ORS =" | "} {print}' filename The record separator is now set to “ | ” for this specific output only.

    Advanced awk Examples
Until now, we have gone through some basic and intermediate use cases of the awk command. Since awk also incorporates programming language features, here we’ll discuss its functionality with some advanced use cases:
    Example 16: Find the Largest Field in a File
    The following command uses the if-else statement to give you the length of the longest line in the file. Here’s an example to do so:
awk '{if (length($0) > max) max = length($0)} END {print max}' file Here, “awk” is the command and “file” is the target of the operation; the rest of the program sits inside the quotes. The length($0) expression gets the length of the current line and checks whether it is greater than “max”. If the condition is true, length($0) is stored in “max”, and “max” is printed at the END.
Similarly, if you want to get the minimum length of a line, seed “min” from the first record and flip the comparison:
awk 'NR==1 {min = length($0)} length($0) < min {min = length($0)} END {print min}' file
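Both the longest- and shortest-line sketches on a hypothetical three-line file:

```shell
printf 'short\na much longer line\nmid\n' > demo.txt

# Longest line length
awk '{if (length($0) > max) max = length($0)} END {print max}' demo.txt   # 18

# Shortest line length: seed min from the first record, then compare with <
awk 'NR==1 {min = length($0)} length($0) < min {min = length($0)} END {print min}' demo.txt   # 3
```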
    Example 17: Get the Sum of the Field Values
    With awk, you can calculate the sum of any field. Here’s the practical demonstration:
awk '{sum += $1} END {print "Total:", sum}' file The sum variable starts at 0 (uninitialized variables are treated as 0 in awk). Each value of the first field ($1) is added to sum, and the total is printed in the END block.
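A quick sketch with a hypothetical numbers file:

```shell
printf '10\n20\n5\n' > nums.txt
awk '{sum += $1} END {print "Total:", sum}' nums.txt   # prints: Total: 35
```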
    Example 18: Finding a Max Value in a Column
    Here, m is the variable that stores the maximum value:
awk 'BEGIN{m=0} {if($2 > m) m=$2} END {print "Maximum Value:", m}' file Here’s the breakdown of the command:
A variable “m” is initialized to 0. Then, whenever the value of the 2nd column is greater than m, that value is stored in m. This repeats for every record, so at the END, m holds the maximum of the 2nd column.
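A runnable sketch, with the values in a hypothetical second column:

```shell
# Second column holds the values; track the running maximum in m
printf 'a 3\nb 9\nc 5\n' > demo.txt
awk 'BEGIN{m=0} {if($2 > m) m=$2} END {print "Maximum Value:", m}' demo.txt   # Maximum Value: 9
```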
    Example 19: Count Specific Occurrences of a Word
s is the looping variable, f is the frequency array, and “Year” is an example of a word whose occurrences will be counted:
awk '{for(s=1;s<=NF;s++) f[$s]++} END {for(Year in f) print Year, f[Year]}' file The “for loop” iterates over every field in each record using NF. The “f[$s]++” expression stores each word in the associative array “f” and counts how many times it appears. In the END block, the for loop prints each unique word and its frequency (the value) from the array.
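A sketch on a hypothetical two-line file; because awk's array traversal order is undefined, the output is piped through sort for a stable order:

```shell
printf 'Year one\nYear two one\n' > demo.txt

# Count every word, then sort the report so its order is deterministic
awk '{for(s=1;s<=NF;s++) f[$s]++} END {for(w in f) print w, f[w]}' demo.txt | LC_ALL=C sort
```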
    Example 20: Monitoring the Log Files
    To monitor the Apache access log file:
awk '{print $n}' /var/log/apache2/access.log Similarly, you can keep track of the iptables logs to see which IPs are triggering firewall rules, i.e., by printing the log available at /var/log/iptables.log.
    Example 21: Search and Analyze | AWK with grep
    The grep is known for its data filtering and searching, and the awk extracts and manipulates data. Let’s see the practical to check how these utilities work together:
    grep "Software" file | awk "{print $3}" Here’s the breakdown of the above command:
    grep “Software” will only filter and select the records containing the “Software” keyword. The awk command prints the columns containing the “Software” word.
    Similarly, grep and awk can be applied to other log files or system files to filter and analyze the specific log-related information.
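A runnable sketch of the grep-plus-awk pipeline on the hypothetical students.txt data:

```shell
printf '101 Alice Software\n102 Bob Hardware\n' > students.txt

# grep filters the records; awk (in single quotes!) extracts field 3
grep "Software" students.txt | awk '{print $3}'   # prints: Software
```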
    Example 22: Substituting | AWK with sed
    The “sed” command edits and manipulates the text. So, the output of the awk command can be piped with the sed to perform specific operations on the output, or individually:
Let’s see the practical, using the following simple command:
    awk '{print $3, $4}' file | sed 's/ /,/g' The awk command prints the 3rd and 4th columns of the file, and the sed command substitutes the “,” in place of the white spaces in the document file globally.
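The same pipeline on a hypothetical one-line record:

```shell
printf '101 Alice Software FirstYear\n' > students.txt

# awk picks fields 3 and 4; sed turns the separating spaces into commas
awk '{print $3, $4}' students.txt | sed 's/ /,/g'   # prints: Software,FirstYear
```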

    Functions in awk
Since awk is a scripting language, it has a long list of built-in functions for performing various tasks. Some of these were used in the examples above, e.g., length(). Here, we will elaborate on a few functions and their use cases.
    Substituting in awk with Functions
    The awk has “sub” and “gsub” as two substitution functions; let’s understand these through examples:
    Note: The file on which the “sub” and “gsub” functions will be experimented with.

    Example 23: Substitute the First Occurrence (in each record) Only | awk with sub
    The “sub” function substitutes the first occurrence of the matching word/expression in each record. Here is an example command to understand:
    awk '{sub("Year", "Y"); print}' file The first occurrence of the word “Year” will be replaced with the “Y”:

    Example 24: Substitute all the instances of a word/Expression | awk with gsub
    The “gsub” globally substitutes the matching keyword, i.e., all the occurrences of the matching keyword. Here’s an example:
    awk '{ gsub("Year", "Y"); print }' file Now, all the occurrences will be replaced:

    Example 25: Get the UNIX Timestamp | awk with systime()
GNU awk (gawk) provides the systime() function, which returns the current UNIX timestamp, i.e., the seconds elapsed since the epoch. Note that it reports the current system time, not a file’s modification time. Let’s practice it:
awk 'BEGIN {print systime()}'
Because systime() is called in the BEGIN block, awk prints the timestamp once before reading any input, so no file argument is needed.
    Other awk Functions:
    Because of its scripting support, the awk has a long list of supported functions. The following table lists some of these and also describes their purpose with syntax:
awk Function | Description/Purpose | Syntax
length() | Length of the current line | awk '{print length()}' file
substr(s, b, c) | Extracts a substring of s starting at position b, with length c | awk '{print substr($1, b, c)}' file
split(s, arr, sep) | Splits string s into array arr using separator sep | awk '{split($1, arr, ":"); print arr[1]}' file
tolower(s) | Converts s to lowercase | awk '{print tolower($1)}' file
toupper(s) | Converts s to uppercase | awk '{print toupper($1)}' file
int(n) | Truncates n to an integer | awk '{print int($1)}' file
sqrt(n) | Square root of n | awk '{print sqrt($1)}' file
srand() | Seeds the random number generator | awk 'BEGIN { srand(); print rand() }'
rand() | Returns a random number between 0 and 1 | awk 'BEGIN { print rand() }'
That’s all from this tutorial.
    Conclusion
    The awk utility is an effective command-line tool that serves as a scripting language, too. From a basic search and find to running it for advanced scripting, the awk is a full package.
    This post has a brief overview and explanation of the awk command in Linux, with advanced examples.
    FAQs
    Q 1: What does awk stand for?
awk is named after its creators, Aho, Weinberger, and Kernighan, who designed it at AT&T Bell Laboratories in 1977.
    Q 2: What is awk in Linux?
    Ans: The awk is a powerful command-line utility and a scripting language. The awk is used to: read the data, search and scan for patterns, print, format, calculate, analyze, and more. For these use cases, awk sometimes has to be used with grep, sed, and normal regular expressions.
    Q 3: What does awk ‘{print $2}’ mean?
The awk '{print $2}' command prints the 2nd (second) field of each record of the file on the terminal. If used with multiple files, the second field of each file will be printed.
    Q 4: What is the difference between awk and grep?
    The grep performs the searching and filtering up to some extent, while the awk utility extracts, manipulates, and analyzes the data. The awk and grep are used together for search and analysis purposes, i.e., grep provides the searching/filtering support, where the awk performs the analysis.
    Q 5: What is the difference between awk and bash?
Both awk and bash are scripting languages. Bash is recommended for general-purpose, professional scripting, whereas awk shines at text processing on the terminal; simpler text-manipulation tasks are often quicker with awk than with bash.
    Q6: How do I substitute using awk?
    The awk supports two functions for substituting, i.e., sub and gsub. The sub is used to substitute the first occurrence of the matching word, whereas the gsub is used for global substitution of the matching word.
     
  13. by: Abhishek Kumar
    Thu, 22 May 2025 07:40:49 GMT

    It took me way longer than I’d like to admit to wrap my head around MCP servers.
    At first glance, they sound like just another protocol in the never-ending parade of tech buzzwords decorated alongside AI.
    But trust me, once you understand what they are, you start to see why people are obsessed with them.
    This post isn’t meant to be the ultimate deep dive (I’ll link to some great resources for that at the end). Instead, consider it just a lil introduction or a starter on MCP servers.
    And no, I’m not going to explain MCP using USB-C as a metaphor, if you get that joke, congrats, you’ve clearly been Googling around like the rest of us. If not… well, give it time. 😛
Source: Norah Sakal's Blog
What even is an MCP Server?
    MCP stands for Model Context Protocol, an open standard introduced by Anthropic in November 2024.
    Its purpose is to improve how AI models interact with external systems, not by modifying the models themselves, but by providing them structured, secure access to real-world data, tools, and services.
    An MCP server is a standalone service that exposes specific capabilities such as reading files, querying databases, invoking APIs, or offering reusable prompts, in a standardized format that AI models can understand.
    Rather than building custom integrations for every individual data source or tool, developers can implement MCP servers that conform to a shared protocol.
    This eliminates the need for repetitive boilerplate and reduces complexity in AI applications.
    What can an MCP Server actually do?
Source: X
Quite a bit. Depending on how they’re set up, MCP servers can expose:
Resources – Stuff like files, documents, or database queries that an AI can read.
Tools – Actions like sending an email, creating a GitHub issue, or checking the weather.
Prompts – Predefined instructions or templates that guide AI behavior in repeatable ways.
Each of these is exposed through a JSON-RPC 2.0 interface, meaning AI clients can query what's available, call the appropriate function, and get clean, structured responses.
    So... how does an MCP server actually work?
    MCP servers follow a well-defined architecture intended to standardize how AI models access external tools, data, and services.
MCP client-server architecture | Source: modelcontextprotocol.io
Each part of the system has a clear role, contributing to a modular and scalable environment for AI integration.
    Host Applications
    These are the environments where AI agents operate, such as coding assistants, desktop apps, or conversational UIs.

They don’t interact with external systems directly, but instead rely on MCP clients to broker those connections.
MCP Clients
    The client is responsible for managing the connection between the AI agent and the MCP server. It handles protocol-level tasks like capability discovery, permissions, and communication state.

Clients maintain direct, persistent connections to the server, ensuring requests and responses are handled correctly.
MCP Servers
    The server exposes defined capabilities such as reading files, executing functions, or retrieving documents using the Model Context Protocol.

Each server is configured to present these capabilities in a standardized format that AI models can interpret without needing custom integration logic.
Underlying Data or Tooling
    This includes everything the server is connected to: file systems, databases, external APIs, or internal services.

The server mediates access, applying permission controls, formatting responses, and exposing only what the client is authorized to use.
This separation of roles between the model host, client, server, and data source allows AI applications to scale and interoperate cleanly.
    Developers can focus on defining useful capabilities inside a server, knowing that any MCP-compatible client can access them predictably and securely.
    Wait, so how are MCP Servers different from APIs?
    Fair question. It might sound like MCP is just a fancy wrapper around regular APIs, but there are key differences:
Feature | Traditional API | MCP Server
Purpose | General software communication | Feed AI models with data, tools, or prompts
Interaction | Requires manual integration and parsing | Presents info in a model-friendly format
Standardization | Varies wildly per service | Unified protocol (MCP)
Security | Must be implemented case-by-case | Built-in controls and isolation
Use Case | Backend services, apps, etc. | Enhancing AI agents like Claude, Copilot, or Cursor
Basically, APIs were made for apps. MCP servers were made for AI.
    Want to spin up your own self-hosted MCP Server?
    While building a custom MCP server from scratch is entirely possible, you don’t have to start there.
    There’s already a growing list of open-source MCP servers you can clone, deploy, and start testing with your preferred AI assistant like Claude, Cursor, or others.
mcpservers.org is an amazing website to find open-source MCP servers.
If you're interested in writing your own server or extending an existing one, stay tuned: we'll walk through the process step by step in a dedicated upcoming post, using the official Python SDK.
    Make sure you’re following or better yet, subscribe, so you don’t miss it.
    Want to learn more on MCP?
    Here are a few great places to start:
I personally found these good introductions to MCP servers:
How I Finally Understood MCP — and Got It Working in Real Life - Towards Data Science
What are MCP Servers And Why It Changes Everything - Hugging Face
Conclusion
    And there you have it, a foundational understanding of what MCP servers are, what they can do, and why they’re quickly becoming a cornerstone in the evolving landscape of AI.
    We’ve only just scratched the surface, but hopefully, this introduction has demystified some of the initial complexities and highlighted the immense potential these servers hold for building more robust, secure, and integrated AI applications.
    Stay tuned for our next deep dive, where we’ll try and build an MCP server and a client from scratch with the Python SDK. Because really, the best way to learn is to get your hands dirty.
    Until then, happy hacking. 🧛
  14. by: Abhishek Prakash
    Thu, 22 May 2025 04:36:31 GMT

    I have an interesting story to share. You are probably already aware that many products and services offer a trial version for a limited time.
    And some people try to take advantage of the trial period by creating new accounts with new email addresses.
    But imagine if a multi-million dollar enterprise does the same. And it does so for an open source software that they could have managed on their own.
Free as in Fraud? A $130M Aerospace Company Caught Exploiting Open Source Trial: An unnamed $130M company has been exploiting Xen Orchestra’s free trial for over a decade. (It's FOSS News, Sourav Rudra)
💬 Let's see what else you get in this edition:
More new default apps in Ubuntu 25.10.
Fedora's Wayland decision.
Systemd-free distros.
Making Bash beautiful.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by ANY.RUN.
💖 Grab ANY.RUN's 9th Birthday special offers
    Analyze, investigate, and detect cyber threats with unmatched speed and depth.
    Interactive Sandbox licenses for your team:
Subscription Plans - ANY.RUN: Interactive malware hunting service. Live testing of most types of threats in any environment. No installation and no waiting necessary. Speed up alert triage and response in your SOC.
The offers are active until May 31.
    📰 Linux and Open Source News
FFmpeg has taken a swipe at Rust.
VS Code is gearing up for an AI-first makeover.
WSL is now open source under the MIT license.
Microsoft has fixed a dual-boot issue that broke Linux.
Warp terminal (non-FOSS) now has experimental support for MCP.
2025 looks like a great year for open source, with a few corporate donations.
Ubuntu 25.10 will feature new terminal and image viewer apps.
Ubuntu 25.10 will Have a Brand New Terminal (and Image Viewer): Ubuntu 25.10 replaces its default terminal and image viewer with modern apps. (It's FOSS News, Sourav Rudra)
🧠 What We’re Thinking About
    Are we finally entering the Xorg-less era? Fedora has taken the bold move to go for Wayland-only desktop offering in the upcoming version 43.
No More Xorg! Fedora 43 Will Be Wayland-only: A bold move by Fedora. Will everyone be onboard? (It's FOSS News, Sourav Rudra)
🧮 Linux Tips, Tutorials and More
Imagine Oh My Zsh but for Bash. The Bash-it framework lets you enjoy a beautiful bash shell experience. I am just surprised that it is not called Oh My Bash 😜
Not trying to re-ignite the systemd vs sysvinit debate. Just sharing a list of systemd-free distros in an age where most distros are systemd-first.
Take advantage of multi-cursor editing in VS Code to simplify repetitive actions.
Try Sausage, and enjoy a classic Bookworm-like game experience in the terminal.
Play With Words in Linux Terminal With a Bookworm Style Game: Remember the classic Bookworm game? You can have similar fun in the terminal with Sausage. (It's FOSS, Abhishek Prakash)
Remember your favorite tech websites like AnandTech or magazines like Linux Voice? They don't exist anymore.
    In the age of AI Overview in search engines, more and more people are not even reaching the websites from where AI is 'copying' the text. As a result, your favorite websites continue to shut down.
    More than ever, now is the most crucial time to save your favorite websites from AI onslaught.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year, i.e., $2 a month. Even a burger costs more than $2. By skipping one burger a month, you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
    Join It's FOSS Plus 👷 Homelab and Maker's Corner
Troubleshooting your Pi? Don’t ignore those blinking LEDs! They are a powerful diagnostic tool that is super handy in headless setups.
Red and Green LED Lights on Raspberry Pi and Their Meaning: Raspberry Pi’s status LEDs are a surprisingly powerful diagnostic tool, especially for headless setups. (It's FOSS, Abhishek Kumar)
✨ Apps Highlight
    Doodle lets you pixelify your Android smartphone with its cool wallpaper collection.
Pixelify Your Android Smartphone with This Wallpaper App: Transform your Android smartphone’s home screen with battery-friendly live wallpapers. (It's FOSS News, Sourav Rudra)
How about an open source, decentralized alternative to the likes of Discord and Slack? Peersuite is a self-hostable peer-to-peer workspace that isn't hungry for user data.
    📽️ Videos I am Creating for You
Man pages are good but not easy to follow, especially for new Linux users. Here are some alternatives to the man pages in Linux. See them in action.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    Can you beat the Essential Ubuntu Shortcuts puzzle?
Essential Ubuntu Shortcuts: Puzzle: Do you know all the handy Ubuntu shortcuts? Solve this puzzle to find out! (It's FOSS, Ankush Das)
💡 Quick Handy Tip
    In file managers like Nemo, Nautilus, etc., you can easily create file duplicates by pressing the CTRL key and dragging the file to a blank space in the window.
    If you drop a file while pressing the CTRL key when in another folder, the file will be copied to that directory.
    Use CTRL+Z to undo the file duplication. During this, your file manager will ask you whether you want to delete the copied file.
    🤣 Meme of the Week
    The man's got a Debian-flavored beard. 😆
    🗓️ Tech Trivia
    On May 18, 1998, the U.S. Department of Justice sued Microsoft, alleging that the company was illegally monopolizing the web browser market by integrating its Internet Explorer browser into its Windows operating system.
    🧑‍🤝‍🧑 FOSSverse Corner
    Pro FOSSer Neville has compiled a table of Linux distros for beginners that is really well-made. I am so proud of our active community members 🙏
Table of Linux Distros by Difficulty for Beginners: The outcome of discussion in the forum topic was a table representing the collective experience of forum members regarding which Linux distros are easy or difficult for beginners. Last revision 17/5/25. (It's FOSS Community, nevj)
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  15. By: Janus Atienza
    Wed, 21 May 2025 15:49:37 +0000

    Prioritize a thorough code review. Engaging experienced developers familiar with decentralized frameworks can unearth vulnerabilities that might be overlooked. Employ automated tools for initial scans before transitioning to manual inspections, as human intuition can spot nuanced issues that algorithms may miss.
    Establish clear testing protocols. Integrate both black-box and white-box methodologies to ensure comprehensive coverage. It’s crucial that your team simulates real-world attack scenarios, as this offers insights into potential exploit paths attackers might utilize against your infrastructure.
    Continuously monitor smart contracts after deployment. Implement automated tracking systems that alert stakeholders to any suspicious activity. Regular updates to your contracts should follow a strict testing sequence to ensure that new features don’t inadvertently introduce weaknesses.
Build a robust incident response plan. Outline steps for communication, mitigation, and recovery in case of an identified breach. Regular drills should be conducted to keep the team well-prepared, reinforcing preparedness for real-world incidents.
    Maintain documentation throughout every phase. A well-maintained record not only aids future audits but supports compliance with industry standards. Additionally, gather feedback from all team members involved, creating a cycle of continuous improvement.
    Identifying Vulnerabilities in Smart Contracts
    Implement automated testing tools like MythX, Slither, and Oyente to uncover potential weaknesses in the code. These tools provide static analysis and can detect common issues such as reentrancy attacks, arithmetic overflows, and gas limit problems.
    Conduct thorough code reviews with a focus on the following aspects:
Access Control: Validate ownership checks and ensure restricted functions are not accessible to unauthorized users.
State Changes: Examine all state-modifying functions for possible vulnerabilities where state can be altered unexpectedly.
Fallback Functions: Assess the implementation of fallback methods to prevent abuse from unexpected transactions.
Integer Operations: Look for potential overflows and underflows in arithmetic operations.
External Calls: Identify areas where the contract interacts with other contracts and the potential risks associated with these interactions.
Utilize unit testing frameworks such as Truffle or Hardhat. Write tests for various scenarios, including edge cases, to ensure each function behaves as intended.
    Engage in pair programming sessions with another developer. This collaborative approach can provide new insights and help surface overlooked vulnerabilities.
    Consider using formal verification methods to mathematically prove the correctness of your contract. This is particularly useful for high-stakes applications.
    Partnering with a reputable Web3 audit company can provide a comprehensive review of your codebase, leveraging specialized tools and expertise to catch issues automated tools might miss.
    Monitor audit community channels for emerging threats and trends. Being aware of the latest vulnerabilities discovered in the ecosystem can enhance your assessment process.
    Incorporate a bounty program encouraging external white-hat hackers to identify flaws with financial incentives for their discoveries.
    The combination of these techniques creates a robust framework for detecting and mitigating vulnerabilities within your smart contract deployments.
    Leveraging Linux/Unix for a Secure Audit Environment
    Establishing a hardened and reliable development environment is foundational to effective Web3 security. Linux/Unix-based operating systems offer several advantages that make them ideal for conducting smart contract audits and related Web3 security assessments:
    Security by Design: Linux/Unix systems are inherently more secure than many alternatives due to their robust permission model and modular architecture. By using hardened distributions (e.g., Ubuntu LTS, Debian, Fedora, or security-focused distros like Qubes OS or Kali Linux), audit teams reduce attack surfaces and benefit from mature system-level controls.
    Command-Line Efficiency: Most audit tools—including MythX, Slither, Oyente, and static analysis scripts—are natively compatible with Linux command-line environments. This enhances workflow automation and enables deeper integration with CI/CD pipelines for continuous testing and monitoring.
    Customizable Firewall and Access Controls: Linux allows fine-grained control over firewall settings (e.g., using iptables or ufw) and system-level access control, which is crucial when handling sensitive smart contract code or deploying private blockchain nodes.
    Open-Source Transparency: The open-source nature of Linux/Unix promotes transparency and trust, allowing audit professionals to inspect and modify every layer of the OS if needed. This aligns with the transparency principles central to decentralized ecosystems.
    Process Isolation and Containerization: Utilizing tools like Docker on a Linux system enables environment isolation during testing and simulation of real-world attacks. Containerization helps ensure reproducible test conditions and segregates potentially risky processes.
    Best Practices for Linux/Unix Use in Web3 Audits:
    Regularly apply OS and package updates via secure repositories.
    Use ssh with key-based authentication instead of passwords.
    Audit logs with tools like auditd, logrotate, or centralized logging solutions like the ELK stack.
    Leverage SELinux or AppArmor to enforce additional security policies on critical audit tools.
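As a rough illustration of automating one of these checks, the Python sketch below scans sshd_config-style text for directives that weaken key-based authentication. The directive names are real OpenSSH settings, but the checker itself is a toy example of ours, not a standard tool.

```python
# Toy config checker (illustrative only): flag sshd_config directives
# that differ from the hardened values recommended above.
RISKY_SETTINGS = {
    "PasswordAuthentication": "no",
    "PermitRootLogin": "no",
}

def find_weak_ssh_settings(config_text: str) -> list[str]:
    """Return warnings for directives that differ from the hardened value."""
    warnings = []
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts
        expected = RISKY_SETTINGS.get(key)
        if expected is not None and value.lower() != expected:
            warnings.append(f"{key} is '{value}', expected '{expected}'")
    return warnings

sample = """
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication yes
PermitRootLogin no
"""
print(find_weak_ssh_settings(sample))  # → ["PasswordAuthentication is 'yes', expected 'no'"]
```

A check like this slots naturally into the CI/CD pipelines mentioned earlier, so drift from the hardened baseline is caught automatically.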
    Incorporating Linux/Unix best practices into your audit workflow not only strengthens your security posture but also creates a consistent, scalable foundation for future audit and deployment processes.
    Implementing Robust Access Controls and Permissions
Establish role-based access control (RBAC) to ensure users receive permissions aligned with their responsibilities. Define roles clearly and assign privileges accordingly. Maintain the principle of least privilege (PoLP) to minimize the risk of unauthorized access.
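A deny-by-default RBAC check can be sketched in a few lines of Python; the role and permission names below are invented for illustration.

```python
# Minimal RBAC sketch: each role gets only the permissions it needs,
# and anything not explicitly granted is denied (least privilege).
ROLE_PERMISSIONS = {
    "auditor":   {"read_code", "read_logs"},
    "developer": {"read_code", "write_code"},
    "admin":     {"read_code", "write_code", "read_logs", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or permissions grant nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("auditor", "read_logs")
assert not is_allowed("auditor", "write_code")  # least privilege
assert not is_allowed("intern", "read_code")    # unknown role -> deny
```

The key design choice is that the lookup fails closed: a typo in a role name results in no access rather than accidental access.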
    Utilize multifactor authentication (MFA) to fortify user verification processes. Incorporate biometric methods or time-sensitive codes in addition to passwords, enhancing the security framework against breaches.
    Implement granular permissions, allowing specific access to resources rather than broad, overarching permissions. Use attribute-based access control (ABAC) to create more refined rules based on user attributes and context.
    Regularly review and update access permissions. Conduct audits on user roles and their access to ensure no unnecessary privileges persist. This proactive approach mitigates the risk associated with stale permissions.
    Employ logging and monitoring to track access attempts and actions taken by users. Implement alerts for suspicious activities to enhance the response capabilities of the security team.
    Educate users on proper access protocols and the significance of maintaining strong password practices. Regularly conduct training sessions to reinforce awareness regarding potential threats and the importance of adhering to security measures.
    Utilize automated tools to manage and enforce access controls. These tools help streamline the permission management process, reducing human error and increasing oversight efficiency.
    Conducting Post-Audit Testing and Monitoring
    Implement continuous monitoring solutions to track smart contract performance after evaluation. Monitor transaction patterns closely to identify anomalies that may indicate security flaws or potential exploits.
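As a toy illustration of anomaly flagging (real monitoring stacks are far more sophisticated), the sketch below flags transaction values that dwarf the running average; the threshold factor is arbitrary and would need tuning in practice.

```python
# Illustrative anomaly flagger: mark values far above the running mean.
def flag_anomalies(values: list[float], factor: float = 10.0) -> list[int]:
    """Return indices of values more than `factor` times the running mean."""
    flagged = []
    total = 0.0
    for i, v in enumerate(values):
        if i > 0 and v > factor * (total / i):
            flagged.append(i)
        total += v
    return flagged

# A transfer 100x the typical size stands out immediately:
print(flag_anomalies([1.0, 1.2, 0.9, 110.0, 1.1]))  # → [3]
```

Flagged indices would then feed the alerting channel described below, rather than blocking transactions outright.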
    Utilize automated testing frameworks to simulate various attack scenarios. Regularly execute unit tests and integration tests on the codebase to ensure functionality remains intact following modifications.
    Establish a bug bounty program to encourage community-driven testing. This initiative incentivizes ethical hackers to find and report vulnerabilities, enhancing the overall robustness of your system.
    Incorporate logging mechanisms that capture detailed information regarding every transaction and event. Analyze these logs for signs of unauthorized access or unusual activity that could compromise integrity.
    Schedule routine assessments to reevaluate the system against the latest threat intelligence and vulnerabilities. Employ third-party services for impartial insights into your project, ensuring an unbiased evaluation.
    Engage in regular training sessions for your development team focused on secure coding practices. Knowledge improvement reduces the risk of introducing new vulnerabilities in future updates.
    Utilize decentralized monitoring tools to maintain transparency and community trust. Such solutions enable stakeholders to verify operational integrity, promoting accountability in the ecosystem.
Finally, comprehensively document all testing results and the countermeasures created. This record helps in analyzing trends and improving methodologies over time, fortifying defenses against future risks.
    The post Simple Strategies for Achieving End-to-End Security in Web3 Audits appeared first on Unixmen.
  16. SVG to CSS Shape Converter

    by: Geoff Graham
    Wed, 21 May 2025 15:09:29 +0000

    Shape master Temani Afif has what might be the largest collection of CSS shapes on the planet with all the tools to generate them on the fly. There’s a mix of clever techniques he’s typically used to make those shapes, many of which he’s covered here at CSS-Tricks over the years.
    Some of the more complex shapes were commonly clipped with the path() function. That makes a lot of sense because it literally accepts SVG path coordinates that you can draw in an app like Figma and export.
But Temani has gone all-in on the newly released shape() function, which recently rolled out in both Chromium browsers and Safari. That includes a brand-new generator that converts path() shapes into shape() commands instead.
    So, if we had a shape that was originally created with an SVG path, like this:
.shape { clip-path: path("M199.6,18.9c-4.3-8.9-12.5-16.4-22.3-17.8c-11.9-1.7-23.1,5.4-32.2,13.2c-9.1,7.8-17.8,16.8-29.3,20.3c-20.5,6.2-41.7-7.4-63.1-7.5C38.7,27,24.8,33,15.2,43.3c-35.5,38.2-0.1,99.4,40.6,116.2c32.8,13.6,72.1,5.9,100.9-15c27.4-19.9,44.3-54.9,47.4-88.6c0.2-2.7,0.4-5.3,0.5-7.9C204.8,38,203.9,27.8,199.6,18.9z"); }
…the generator will spit this out:
.shape {
  clip-path: shape(
    from 97.54% 10.91%,
    curve by -10.93% -10.76% with -2.11% -5.38%/-6.13% -9.91%,
    curve by -15.78% 7.98% with -5.83% -1.03%/-11.32% 3.26%,
    curve by -14.36% 12.27% with -4.46% 4.71%/-8.72% 10.15%,
    curve by -30.93% -4.53% with -10.05% 3.75%/-20.44% -4.47%,
    curve to 7.15% 25.66% with 18.67% 15.81%/11.86% 19.43%,
    curve by 19.9% 70.23% with -17.4% 23.09%/-0.05% 60.08%,
    curve by 49.46% -9.07% with 16.08% 8.22%/35.34% 3.57%,
    curve by 23.23% -53.55% with 13.43% -12.03%/21.71% -33.18%,
    curve by 0.25% -4.77% with 0.1% -1.63%/0.2% -3.2%,
    curve to 97.54% 10.91% with 100.09% 22.46%/99.64% 16.29%,
    close
  );
}
Pretty cool!
Honestly, I’m not sure how often I’ll need to convert path() to shape(). Seems like a stopgap sorta thing where the need for it dwindles over time as shape() is used more often — and it’s not like the existing path() function is broken or deprecated… it’s just different. But still, I’m using the generator a LOT as I try to wrap my head around shape() commands. Seeing the commands in context is invaluable, which makes it an excellent learning tool.
    SVG to CSS Shape Converter originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  17. Using Split Windows With Vim

    by: Abhishek Prakash
    Wed, 21 May 2025 16:16:09 +0530

    Have you ever watched a bearded sysadmin navigate their editor with lightning speed, jumping between multiple panes with the flick of a few keystrokes? That's the magic of Vim's split window feature!
Think of it as having multiple monitors inside a single screen. And you don't even need the screen command or tmux for this purpose. Vim does it on its own.
Split Windows in Vim Editor
You can split the screen horizontally as well as vertically. And all this is done with a few keystrokes, of course.
    Vim split window keyboard shortcuts
| Action | Keyboard Shortcut | Description |
| --- | --- | --- |
| Horizontal split | :split or :sp | Splits window horizontally |
| Vertical split | :vsplit or :vs | Splits window vertically |
| Close current window | :q or :close | Closes the active window |
| Close all except current | :only or :on | Closes all windows except active one |
| Navigate between windows | Ctrl-w + h/j/k/l | Move to left/down/up/right window |
| Navigate between windows | Ctrl-w + Ctrl-w | Cycle through all windows |
| Resize horizontally | Ctrl-w + < or > | Decrease/increase width |
| Resize vertically | Ctrl-w + - or + | Decrease/increase height |
| Equal size windows | Ctrl-w + = | Make all windows equal size |
| Maximize height | Ctrl-w + _ | Maximize current window height |
| Maximize width | Ctrl-w + \| | Maximize current window width |
| Move window | Ctrl-w + r | Rotate windows |
| Swap with next window | Ctrl-w + x | Exchange with next window |

You can also use the mouse for resizing and some other features if you have mouse support enabled in Vim.
    Creating Split windows
Let's see in detail how those magical keys work and what they look like in the editor.
    Horizontal splits
    Creating a horizontal split in Vim is like adding a floor to your house - you're stacking views on top of each other.
    When you're deep in code and need to reference something above or below your current focus, horizontal splits come to the rescue.
:split filename (or :sp filename)
Horizontal split in action
    Filename is optional here. If you don't specify a filename, Vim will split the window and show the current file in both panes.
💡 Use :set splitbelow to open the new windows below the current one.
Vertical splits
    Vertical splits open windows side-by-side. It's good for viewing documentation, or keeping an eye on multiple parts of your project. You can also use it for quickly comparing two files if you do not want to use the dedicated vimdiff.
:vsplit filename (or :vs filename)
Vertical split in action
💡 By default, the new windows are opened on the left of the current window. Use :set splitright to open the new windows on the right.
Moving between split windows
    Once you've created splits, hopping between them is where the real productivity starts. Think of Ctrl-w as your magic wand - when followed by a direction key, it teleports your cursor to that window.
Ctrl-w followed by w will switch to the next (below/right) window. Ctrl-w followed by W (uppercase W, i.e., Shift-w) will switch to the previous (above/left) window. I prefer the direction keys, though; it is easier to navigate this way, in my opinion.
Ctrl-w h # Move to the window on the left
Ctrl-w j # Move to the window below
Ctrl-w k # Move to the window above
Ctrl-w l # Move to the window on the right
Move between splits
    You can also use the arrow keys instead of the typical hjkl Vim movement keys.
💡 If remembering directions feels like a mental gymnastics routine, just cycle through all the windows by pressing Ctrl-w Ctrl-w. Press the pair repeatedly and you'll move from one split window to the next.
I'll be honest - when I first started using Vim, I kept forgetting these window navigation commands. So, I thought of Ctrl-w as "window" followed by the direction I wanted to go. After a few days of practice, my fingers remembered even when my brain forgot!
    Resizing split windows
Not all windows are created equal. Some need more space than others, depending on your needs. Vim lets you adjust your viewing space with a few keystrokes.
Ctrl-w + # Increase height by one line
Ctrl-w - # Decrease height by one line
Ctrl-w > # Increase width by one column
Ctrl-w < # Decrease width by one column
Resize split windows
    For faster resizing, prefix these commands with a number:
10 Ctrl-w + # Increase height by 10 lines
When things get too chaotic, there's always the great equalizer:
Ctrl-w = # Make all windows equal size
Just so that you know, you can also create splits with specific dimensions by adding a number before the command.
    For example, :10split creates a horizontal split with 10 lines of height, while :30vsplit creates a vertical split that's 30 characters wide.
    💡Need maximum space ASAP? Try these power moves:

    Ctrl-w _ maximizes the current window's height
    Ctrl-w | maximizes the current window's width
    Ctrl-w = equalizes all windows when you're ready to share again

I call this the "focus mode toggle" - perfect for when you need to temporarily zoom in on one particular section of a file!
Moving and rearranging Windows
    Sometimes you create the perfect splits but realize they're in the wrong order. Rather than closing and recreating them, you can rearrange your windows like furniture:
Ctrl-w r # Rotate windows downward/rightward
Ctrl-w R # Rotate windows upward/leftward
Ctrl-w x # Exchange current window with the next one
Rearrange split windows
    You can also completely move a window to a new position:
Ctrl-w H # Move current window to far left
Ctrl-w J # Move current window to very bottom
Ctrl-w K # Move current window to very top
Ctrl-w L # Move current window to far right
It's like playing Tetris with your editor layout. While I am a fan of the classic Tetris game, I am not a fan of moving and rearranging the windows unless it is really needed.
    💡 Few random but useful tips
    Let me share a few more tips that will help your workflow when you are dealing with split windows in Vim.
    Add terminal in the mix
    If you are in a situation where you want to look at your code and run it at the same time, like an IDE, you can add a terminal in your split. No more alt-tabbing between terminal and editor!
:sp | terminal opens a horizontal split with a terminal
:vs | terminal opens a vertical split with a terminal
Terminal in a split window in Vim
Start split with Vim
    Want to start Vim with splits already configured? You can do that from your command line:
# Open two files in horizontal splits
vim -o file1 file2

# Open two files in vertical splits
vim -O file1 file2

# Open three files in horizontal splits
vim -o3 file1 file2 file3

File explorer in split windows
    One of my favorite tricks is opening Vim's built-in file explorer (netrw) in a split:
:Sexplore # Open file explorer in horizontal split
:Vexplore # Open file explorer in vertical split
Vim file explorer in split window
It's like having a mini file manager right next to your code - perfect for quickly navigating project files without losing your place in the current file.
    Close all the split windows and exit Vim
When you are done with your tasks on a project, instead of closing split windows individually, you can close all of them together and exit Vim with :qa.
Save your work first, of course.
    Wrapping up
Splits aren't just a cool feature - they're a strategic tool. You can use them to edit files with a reference doc open on the side, or to watch log output in the terminal while debugging. It's up to you how you use this amazing feature.
It might seem like keyboard gymnastics at first, but it quickly becomes second nature. Like learning to touch type or ride a bike, the initial awkwardness gives way to fluid motions that you'll hardly think about.
    Start small - maybe just two vertical splits - and gradually incorporate more commands as you get comfortable. Before long, you'll be that expert terminal dweller others watch in amazement as you effortlessly dance between multiple files without ever touching your mouse. Happy splitting! 🚀
  18. by: Abhishek Kumar
    Wed, 21 May 2025 02:40:20 GMT

    Working with code often involves repetition, changing variable names, updating values, tweaking class names, or adding the same prefix across several lines.
    If you find yourself making the same changes again and again, line by line, then multi-cursor editing in Visual Studio Code can help simplify that process. In this part of our ongoing VS Code series, we’ll take a closer look at this feature and how it can make everyday tasks quicker and more manageable.
    Why use multiple cursors?
    Multi-cursor editing lets you place more than one cursor in your file so you can edit several lines at once.
    Instead of jumping between lines or writing the same change repeatedly, you can type once and apply that change across multiple places. Here are a few common situations where it comes in handy:
Renaming a variable or function in multiple places.
Adding or removing the same snippet of code across several lines.
Editing repeated structures (like object keys, class names, or attribute values).
Commenting out a bunch of lines quickly.
Once you start using it, you'll notice it helps reduce small repetitive tasks and keeps your focus on the code itself.
    Placing multiple cursors: mouse and keyboard
    There are two main ways to place multiple cursors in VS Code using the mouse or keyboard shortcuts.
    Let’s start with the mouse-based approach, which is more visual and straightforward for beginners.
    Then, we’ll move on to keyboard shortcuts, which are faster and more efficient once you’re comfortable.
    Method 1: Using the mouse
    To place cursors manually using your mouse: Hold down Alt (Windows/Linux) or Option (Mac), then click anywhere you want to insert a new cursor.
    Each click places a new blinking cursor. You can now type, delete, or paste, and the change will reflect at all cursor positions simultaneously. To cancel all active cursors and return to a single one, press Esc. This method is handy for quick edits where the lines aren’t aligned or when you want more control over cursor placement.
    Method 2: Using keyboard shortcuts
    The mouse method is a good starting point, but learning keyboard shortcuts can save more time in the long run.
    Below are a few keyboard-driven techniques to add and manage multiple cursors efficiently.
    Add Cursors Vertically in a Column
When you want to add cursors above or below the current line to edit a block of similar lines (like inserting or deleting the same code at the beginning of each line), use this shortcut: Ctrl + Alt + Up/Down arrow keys.
    This aligns cursors in a vertical column, making it easier to apply the same action to adjacent lines.
    Select the next occurrence of the current word
    To select and edit repeated words one by one such as variable names or function calls, place your cursor on the word and use: Ctrl + D
    Each press selects the next matching word and adds a cursor to it. You can press it repeatedly to continue selecting further matches.
    Select all occurrences of a word in the file
    If you want to update every instance of a word across the file at once, for example, replacing a class name or a repeated property, use: Ctrl + Shift + L
    This selects all matching words and places a cursor at each one. It’s powerful, but use with care in large files to avoid unintentional edits.
    Editing with multiple cursors
    Once your cursors are in place, editing works just like usual:
Type to insert text across all cursors.
Use Backspace or Delete to remove characters.
Paste snippets — they get applied to each cursor position.
Standard commands like cut, copy, undo, and redo all function as expected.
Just keep an eye on alignment. If cursors are placed unevenly across lines, your edits might not be consistent.
    Since you seem to be interested, check out some of the other VS Code keyboard shortcuts.
15 Best VS Code Keyboard Shortcuts to Increase Productivity
Do you want to be highly productive? Get familiar with and memorize these VS Code keyboard shortcuts for Linux, Windows, and macOS. (It's FOSS Community)
Wrapping Up
    Multi-cursor editing is one of those small but effective features in VS Code that can make repetitive tasks less of a chore.
    You don’t need to learn all the shortcuts right away. Start simple, try placing cursors with Ctrl + D or selecting multiple lines vertically and build from there. As you become more comfortable, these techniques will become second nature and help you focus more on writing logic and less on repeating edits.
  19. by: Abhishek Kumar
    Tue, 20 May 2025 03:07:08 GMT

    While setting up a Raspberry Pi 5 for a new project, I decided to go with a headless setup - no display, keyboard, or mouse. I flashed the SD card, connected power, and waited for the Pi to appear on my network.
    But nothing showed up. I scanned my network, double-checked the router’s client list, still no sign of the Pi. Without access to a display, I had no immediate way to see what was happening under the hood.
    Then I noticed something: the green status LED was blinking in a repeating pattern. It wasn’t random, it looked deliberate. That small detail led me down a rabbit hole, and what I found was surprisingly useful.
    The Raspberry Pi’s onboard LEDs aren’t just indicators, they’re diagnostic tools. When the Pi fails to boot, it can signal the cause through specific blink patterns.
    If you know how to read them, you can identify problems like missing boot files, SD card issues, or hardware faults without plugging in a monitor.
    In this guide, we’ll decode what those LED signals mean and how to use them effectively in your troubleshooting process.
📋 The placement, colors, and behavior of the status LEDs vary slightly across different Raspberry Pi models. In this guide, we'll go through the most popular models and explain exactly what each LED pattern means.
Raspberry Pi 5
    The Raspberry Pi 5 is a major step up in terms of power and architecture. It packs a 2.4GHz quad-core ARM Cortex-A76 CPU, supports up to 16GB of LPDDR4X RAM, and includes PCIe, RTC, and power button support.
Raspberry Pi 5
But when it comes to diagnostics, the big upgrade is in the STAT LED.
    On the Pi 5:
Red LED (PWR): Shows power issues (not always ON by default!)
Green LED (STAT): Shows SD card activity and blink codes
Ethernet LEDs: Show network status
Here’s what the green LED blink codes mean:
| Long Flashes | Short Flashes | Meaning |
| --- | --- | --- |
| 0 | 3 | Generic failure to boot |
| 0 | 4 | start.elf not found |
| 0 | 7 | kernel.img not found |
| 0 | 8 | SDRAM failure |
| 0 | 9 | Insufficient SDRAM |
| 0 | 10 | In HALT state |
| 2 | 1 | Boot device not FAT formatted |
| 2 | 2 | Failed to read boot partition |
| 2 | 3 | Extended partition not FAT |
| 2 | 4 | File signature/hash mismatch |
| 3 | 1 | SPI EEPROM error |
| 3 | 2 | SPI EEPROM write protected |
| 3 | 3 | I2C error |
| 3 | 4 | Invalid secure boot configuration |
| 4 | 3 | RP1 not found |
| 4 | 4 | Unsupported board type |
| 4 | 5 | Fatal firmware error |
| 4 | 6 | Power failure Type A |
| 4 | 7 | Power failure Type B |

Thanks to the bootloader residing on the onboard EEPROM (Electrically Erasable Programmable Read-Only Memory), the Raspberry Pi 5 can perform much more detailed self-checks right from the start.
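If you script your troubleshooting, the Pi 5 blink codes above drop naturally into a lookup table. This small Python sketch simply encodes the (long, short) patterns listed for the Pi 5 so you can decode a pattern without re-reading the table.

```python
# Lookup of Raspberry Pi 5 green-LED blink codes: (long, short) -> meaning.
BLINK_CODES = {
    (0, 3): "Generic failure to boot",
    (0, 4): "start.elf not found",
    (0, 7): "kernel.img not found",
    (0, 8): "SDRAM failure",
    (0, 9): "Insufficient SDRAM",
    (0, 10): "In HALT state",
    (2, 1): "Boot device not FAT formatted",
    (2, 2): "Failed to read boot partition",
    (2, 3): "Extended partition not FAT",
    (2, 4): "File signature/hash mismatch",
    (3, 1): "SPI EEPROM error",
    (3, 2): "SPI EEPROM write protected",
    (3, 3): "I2C error",
    (3, 4): "Invalid secure boot configuration",
    (4, 3): "RP1 not found",
    (4, 4): "Unsupported board type",
    (4, 5): "Fatal firmware error",
    (4, 6): "Power failure Type A",
    (4, 7): "Power failure Type B",
}

def decode_blinks(long_flashes: int, short_flashes: int) -> str:
    """Translate a counted blink pattern into its documented meaning."""
    return BLINK_CODES.get((long_flashes, short_flashes), "Unknown pattern")

print(decode_blinks(0, 4))  # → start.elf not found
```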
    Raspberry Pi 4 & 400
    The Raspberry Pi 4 and the keyboard-integrated Raspberry Pi 400 also feature sophisticated LED diagnostics, similar in many ways to the Pi 5.
Raspberry Pi 4B
They typically have:
Red LED (PWR): Indicates power status. On the Pi 4/400, this LED is solid ON when the board is receiving sufficient power. If it's off or flickering, suspect a power issue.
Green LED (ACT): The activity LED. Like on the Pi 5, besides showing SD card activity, it flashes specific patterns to indicate boot issues.
Ethernet LEDs: Found on the Ethernet port (Pi 4 only), showing network link and activity.
Like the Pi 5, the Raspberry Pi 4 and 400 boot from onboard EEPROM, enabling them to run more detailed diagnostics than older models.
    The flash codes for the green ACT LED on the Raspberry Pi 4 and 400 are identical to the Pi 5 codes listed above.
Raspberry Pi 3 Model B, B+, and A+
    Moving back a generation, the Raspberry Pi 3 models were popular for their performance and features.
Raspberry Pi 3B+
These boards typically have:
Red LED (PWR): Solid ON when receiving adequate power. Off or flickering suggests a power problem.
Green LED (ACT): Indicates SD card activity. It also flashes error codes if the boot process fails.
Ethernet LEDs: Found on the Ethernet port (Model B and B+), showing network link and activity. The slimline Model A+ lacks the Ethernet port and thus these LEDs.
Unlike the Pi 4 and 5, the Raspberry Pi 3 boards rely entirely on the SD card for the initial boot process (there's no onboard EEPROM bootloader).
    This means the diagnostic capabilities are slightly less extensive, but the green ACT LED still provides valuable clues about common boot problems.
    Here's what the green ACT LED flashes mean on the Raspberry Pi 3 models:
| Flashes | Meaning |
| --- | --- |
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized (bad image or damaged RAM) |
| Irregular | Normal read/write activity |

Raspberry Pi 2 and Pi 1 (Model B, B+, A, A+)
    This group covers some of the earlier but still widely used Raspberry Pi boards, including the Raspberry Pi 2 Model B, and the various iterations of the original Raspberry Pi 1 (Model B, Model B+, Model A, Model A+).
Raspberry Pi 1B+
Their LED setups are similar to the Pi 3:
Red LED (PWR): Solid ON for sufficient power. Off or flickering indicates a power problem.
Green LED (ACT): Shows SD card activity and signals boot errors.
Ethernet LEDs: Present on models with an Ethernet port (Pi 2 B, Pi 1 B, Pi 1 B+).
They lack advanced diagnostics and rely on the same basic LED flash codes as the Pi 3 series:
| Flashes | Meaning |
| --- | --- |
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized |
| Irregular | Normal SD card activity |

Raspberry Pi Zero and Zero W
    The incredibly compact Raspberry Pi Zero and Zero W models are known for their minimalist design, and this extends to their LEDs as well.
Raspberry Pi Zero W
The most significant difference here is the absence of the Red (PWR) LED. The Pi Zero series only features:
Green LED (ACT): This is the only status LED. It indicates SD card activity and, importantly, signals boot errors.

| Flashes | Meaning |
| --- | --- |
| 3 | start.elf not found |
| 4 | start.elf corrupt |
| 7 | kernel.img not found |
| 8 | SDRAM not recognized |
| Irregular | Normal SD activity |

Since there's no PWR LED, diagnosing power issues can be slightly trickier initially. If the green ACT LED doesn't light up at all, it could mean no power, an improperly inserted SD card, or a corrupted image preventing any activity.
Conclusion
    In conclusion, Raspberry Pi’s status LEDs are a surprisingly powerful diagnostic tool, especially for headless setups.
    They allow you to troubleshoot and pinpoint issues without needing a screen or direct access to the Pi.
    It’s an intriguing feature that makes the Pi even more versatile for remote projects, as long as you know what the blink codes mean.
After all, knowing the code is half the battle; without it, those flashing lights might as well be a mystery show.
You can take your debugging to the next level by adding a UART to your Pi and fetching the debugging data on your computer.
Using a USB Serial Adapter (UART) to Help Debug Raspberry Pi
A UART attached to your Raspberry Pi can help you troubleshoot issues with your Raspberry Pi. Here’s what you need to know. (It's FOSS, Pratham Patel)
In the same context, knowing the Raspberry Pi pinout is always helpful.
Understanding the Raspberry Pi 5 Pin Out
Let’s take a closer look at each pin in Raspberry Pi 5 and its specific function to ensure you’re well-prepared for your project. (It's FOSS, Abhishek Kumar)
What do you think? Have you ever used the Pi’s LEDs to diagnose an issue? Drop a comment below and share your experiences.
  20. by: Chris Coyier
    Mon, 19 May 2025 16:36:15 +0000

    I admit I’m a sucker for “do this; don’t do that” (can’t you read the sign) blog posts when it comes to design. Screw nuance, gimme answers. Anthony Hobday has a pretty good one in Visual design rules you can safely follow every time.
    Makes sense to me; ship it. Erik Kennedy does a pretty good job with posts in this vein, and I just read one about designing sidebars in his email newsletter. But he didn’t blog it so I can’t link to it. Instead I’ll link to his 15 Tips for Better Signup / Login UX which is the same kinda “do this” advice.
I perish each time I have to hunt manually for the @
Jon Yablonski’s Laws of UX site is a pretty good example of this too, except the “do this” advice is more like “think about this principle”. They are pretty straightforward though, like:
    Welp now that we’ve linked up a bunch of design related stuff I’d better keep going. My pockets are always hot with links. I’m like and old man except instead of Wether’s Originals I have great URLs.
    If I had to design some shirts and hoodies and coats and stuff, I’d definitely want some clean templates to use, so I think these 45 fully editable apparel templates from atipo is pretty sweet. (€30.00 with a bunch of fonts too)
    Not Boring software rules. They have some really nice apps that feel very designed. I use their Habits app every day. They’ve got a nice blog post on the role of sound in software design. It’s common to think that software that makes any sound is a mistake as it’s just obnoxious or superfluous. I like this one:
    Is “good” and “bad” web design subjective (like just our opinions) or objective (provable with data)? I mean, that’s subjective, lol. Remy Sharp was thinking about it recently and has developed his own set of criteria. A simple one:
Seems simple, but… not always. I was reviewing a site recently and the homepage had just a zillion things it was trying to say. It was a store, so there were locations, an invite to search, an invite to call, an invite to chat, discounts available, a current promotion, financing available, categories for browsing, upcoming events, etc., etc. The thing is, it’s easy to point at it and say Mess! — all that stuff is there because it’s important to somebody in the organization. Deciding on what even “the content” is can be tricky. I always think the homepage probably isn’t the best place to start a big debate like this. Clean up the more focused pages first.
    Let’s end with something beautiful, like these flowing WebGL gradients by Alex Harri. I loved the mathematical intro on doing all this pixel by pixel work manually, then how to shift the responsibility of that work:
    Shaders are a real journey, but I bet if you read every word of this post closely you’d be well on your way.
  21. by: Juan Diego Rodríguez
    Mon, 19 May 2025 12:32:22 +0000

    A couple of days back, among the tens of crypto-scams that flood our contact inbox, we found an interesting question on nested lists from one of our readers.
    Styling lists? Enough to catch my attention. After all, I just completed an entire guide about CSS counters. The message continues:
    Fair enough! So, what we are looking to achieve is a nested list, where each sublist marker/counter is of a different kind. The example linked in the message is the following:
8 The strata corporation must repair and maintain all of the following:
(a) common assets of the strata corporation;
(b) common property that has not been designated as limited common property;
(c) limited common property, but the duty to repair and maintain it is restricted to
(i) repair and maintenance that in the ordinary course of events occurs less often than once a year, and
(ii) the following, no matter how often the repair or maintenance ordinarily occurs:
(A) the structure of a building;
(B) the exterior of a building;
(C) chimneys, stairs, balconies and other things attached to the exterior of a building;
(D) doors, windows and skylights on the exterior of a building or that front on the common property;

While simple at first glance, it still has some nuance, so let’s try to come up with the most maintainable solution here.
    The ugly way
    My first approach to this problem was no approach at all; I just opened CodePen, wrote up the HTML, and tried to get my CSS to work towards the final result. After translating the Markdown into ol and li elements, and with no special styling on each list, the base list would look like the following:
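As a rough sketch, the translated markup could look something like this, trimmed to the first items of each level (the full list repeats the same pattern for the remaining items):

```html
<ol>
  <li>
    The strata corporation must repair and maintain all of the following:
    <ol>
      <li>common assets of the strata corporation;</li>
      <li>common property that has not been designated as limited common property;</li>
      <li>
        limited common property, but the duty to repair and maintain it is restricted to
        <ol>
          <li>repair and maintenance that in the ordinary course of events occurs less often than once a year, and</li>
          <li>
            the following, no matter how often the repair or maintenance ordinarily occurs:
            <ol>
              <li>the structure of a building;</li>
              <li>the exterior of a building;</li>
              <!-- ...and so on for the remaining items -->
            </ol>
          </li>
        </ol>
      </li>
    </ol>
  </li>
</ol>
```

Each nested ol is a direct child of the li it belongs to, which is what lets ancestor-based selectors target each level.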
CodePen Embed Fallback
Once there, my first instinct was to select each ol element and then change its list-style-type to the desired one. To target each level, I selected each ol depending on its number of ol ancestors, then let the specificity handle the rest:
ol { list-style-type: decimal; /* Unnecessary; just for demo */ }
ol ol { list-style-type: lower-alpha; }
ol ol ol { list-style-type: lower-roman; }
ol ol ol ol { list-style-type: upper-alpha; }

And as you can see, this works… But we can agree it’s an ugly way to go about it.
CodePen Embed Fallback
Nesting to the rescue
    Luckily, CSS nesting has been baseline for a couple of years now, so we could save ourselves a lot of ol selectors by just nesting each element inside the next one.
ol {
  list-style-type: decimal;

  ol {
    list-style-type: lower-alpha;

    ol {
      list-style-type: lower-roman;

      ol {
        list-style-type: upper-alpha;
      }
    }
  }
}

While too much nesting is usually frowned upon, I think that, for this case in particular, it makes the CSS clearer on what it intends to do — especially since the CSS structure matches the HTML itself, and it also keeps all the list styles in one place. All to the same result:
CodePen Embed Fallback
It’s legal
I don’t know anything about legal documents, nor do I intend to learn about them. However, I do know that the law, and by extension lawyers, are finicky about how documents are formatted because of legal technicalities and whatnot. The point is that, in a legal document, those parentheses surrounding each list marker, like (A) or (ii), are more than mere decoration and have to be included in our lists, which our current solution doesn’t do.
A couple of years back, we would have needed to set a counter for each list and then include the parentheses alongside the counter() output; repetitive and ugly. Nowadays, we can use the @counter-style at-rule, which, as its name implies, allows us to create custom counter styles that can be used (among other places) in the list-style-type property.
    In case you’re unfamiliar with the @counter-style syntax, what we need to know is that it can be used to extend predefined counter styles (like decimal or upper-alpha), and attach to them a different suffix or prefix. For example, the following counter style extends the common decimal style and adds a dash (-) as a prefix and a colon (:) as a suffix.
@counter-style my-counter-style {
  system: extends decimal;
  prefix: "- ";
  suffix: ": ";
}

ol {
  list-style-type: my-counter-style;
}

CodePen Embed Fallback
In our case, we’ll need four counter styles:
- A decimal marker without the ending dot. (The initial submission doesn’t make it clear whether the dot should be there, so let’s assume it’s without.)
- A lower alpha marker, enclosed in parentheses.
- A lower Roman marker, also enclosed in parentheses.
- An upper alpha marker, enclosed in parentheses as well.

All these would translate to the following @counter-style rules:
@counter-style trimmed-decimal {
  system: extends decimal;
  suffix: " ";
}

@counter-style enclosed-lower-alpha {
  system: extends lower-alpha;
  prefix: "(";
  suffix: ") ";
}

@counter-style enclosed-lower-roman {
  system: extends lower-roman;
  prefix: "(";
  suffix: ") ";
}

@counter-style enclosed-upper-alpha {
  system: extends upper-alpha;
  prefix: "(";
  suffix: ") ";
}

And then, we just gotta replace each with its equivalent in our initial ol declarations:
ol {
  list-style-type: trimmed-decimal;

  ol {
    list-style-type: enclosed-lower-alpha;

    ol {
      list-style-type: enclosed-lower-roman;

      ol {
        list-style-type: enclosed-upper-alpha;
      }
    }
  }
}

CodePen Embed Fallback
It should work without CSS!
Remember, though, it’s a legal document, so what happens if the connection is weak enough that only the HTML loads correctly, or if someone checks the page from an old browser that supports neither nesting nor @counter-style?
Thinking only about the list, on most websites it would be a mild annoyance: the markers go back to decimal, and you have to go by padding to know where each sublist starts. However, in a legal document, it can be a big deal. How big? I am no lawyer, so it beats me, but we can still make sure the list keeps its original numbering even without CSS.
    For the task, we can use the HTML type attribute. It’s similar to CSS list-style-type but with its own limited uses. First, its use with ul elements is deprecated, while it can be used in ol elements to keep the lists correctly numbered even without CSS, like in legal or technical documents such as ours. It has the following values:
- "1" for decimal numbers (default)
- "a" for lowercase alphabetic
- "A" for uppercase alphabetic
- "i" for lowercase Roman numerals
- "I" for uppercase Roman numerals

Inside our HTML list, we would assign the correct numbering for each ol level:
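A trimmed sketch of how those values could map onto each ol level (note that the type attribute restores the numbering style when CSS is absent, though not the parentheses):

```html
<ol type="1">
  <li>
    The strata corporation must repair and maintain all of the following:
    <ol type="a">
      <li>
        limited common property, but the duty to repair and maintain it is restricted to
        <ol type="i">
          <li>
            the following, no matter how often the repair or maintenance ordinarily occurs:
            <ol type="A">
              <li>the structure of a building;</li>
              <!-- ...remaining items follow the same pattern -->
            </ol>
          </li>
        </ol>
      </li>
    </ol>
  </li>
</ol>
```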
CodePen Embed Fallback
Depending on how long the document is, it may be more hassle than benefit, but it is still good to know. This kind of document doesn’t change constantly, though, so it wouldn’t hurt to add this extra safety net.
    Welp, that was kinda too much for a list! But that’s something intrinsic to legal documents. Still, I think it’s the simplest way to achieve the initial reader’s goal. Let me know in the comments if you think this is overengineered or if there is an easier way.
    More on lists!
- list-style (Almanac, Apr 23, 2021, by Sara Cope)
- @counter-style (Almanac, Jan 28, 2025, by Juan Diego Rodríguez)
- Styling Counters in CSS (Article, May 7, 2025, by Juan Diego Rodríguez)

A Reader’s Question on Nested Lists originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. by: Adnan Shabbir
    Sun, 18 May 2025 05:36:45 +0000

Linux has evolved over time, from a minimalist interface and tools to supporting state-of-the-art interfaces and applications. In today’s era, a browser is one of the most essential applications on any system. Linux distros that come with a GUI by default have some browsers pre-installed, e.g., Firefox or Chromium. Besides the default browser, Linux supports other competitive browsers that can be a better choice than the pre-installed one.
    Keeping this in view, I will discuss the top Linux browsers, including the GUI and text-based browsers.
    Top 12 Browsers for Linux
From a user’s point of view, several factors influence the choice of browser: some users prefer a resource-friendly browser, others want one full of features, and others a secure one. If you are unsure, go through this guide and you’ll find a browser that fits your requirements.
    Firefox
Firefox, because of its Free and Open-Source (FOSS) nature, comes by default in most Linux distributions, e.g., Ubuntu and Kali. It was introduced in 2004 with the aim of competing with Internet Explorer. Firefox offers various features that make it stand out among most browsers.
    Let’s see why anyone should use or opt for Firefox.
    Why Firefox?
Over time, Firefox evolved and attracted a large number of Linux users. Firefox is well-known for its privacy-oriented features, e.g., cookie protection, tracking protection, and support for DNS over HTTPS.
    Firefox is updated every 4 weeks, and the core focus is continuously evolving the privacy and security features.
    Firefox has a large number of extensions in its “Add-ons” store. Extensions assist the users in doing specific tasks with one click instead of spending a few minutes on a specific task.
    Firefox is highly customizable, which makes it favorable for those looking for some visual appeal in the browser.
    Limitations of Firefox:
Although Firefox is well-liked and is no doubt a fully loaded browser, it still has some limitations that I want to highlight:
- The processing can be slow and laggy, which is a red flag in this speedy tech era and keeps many younger users from adopting Firefox.
- It consumes relatively more memory than it should, as its tab process management is poor.

Want to give it a try? Let’s learn how it can be installed on various Linux distributions.
    Install Firefox on Linux
sudo apt install firefox    #Debian Derivatives
sudo pacman -S firefox    #Arch and its Derivatives

Google Chrome
    Google Chrome is also one of the leading browsers for Linux systems. It was introduced in 2008, and since then, it has been gaining popularity day by day because of its amazing strengths, which you might not see in any other browser.
    So, let’s dig into the “Why” part:
    Why Google Chrome?
    Google Chrome releases its stable version every 4 weeks (same as Firefox). Currently, Google Chrome 136 is the latest stable release with security updates in focus as well.
Google Chrome has a large extension store to integrate various tools and apps with your browser to save time. That’s why Google Chrome’s user experience is better than that of other browsers on the list.
    Google Chrome is a part of Google’s ecosystem, thus, you can integrate Google services with your Chrome profile. This way, multiple accounts can be integrated with multiple Chrome profiles.
Chrome offers some control over your data: protecting your list of passwords, autofill control, managing cookies and sessions to some extent, indicating if a password has been found in a data breach, asking before saving any password, and so on.
    Limitations of Google Chrome:
- High resource consumption, especially RAM.
- As it is integrated with Google’s ecosystem, Chrome tracks and collects data on the user’s behavior throughout the session.

Chrome has some serious limitations, as discussed above, but it is still among the most used and loved browsers.
    Install Google Chrome on Linux
    Let me take you through the installation methods of Chrome on Linux:
    Ubuntu and other Debian Distros:
    Chrome is not directly available on Ubuntu’s or Debian’s repository. You have to download the “deb” package file from the Official Website and use the following command:
    sudo apt install "./deb-package-name" Click here to read the complete Installation guide of Chrome on Ubuntu.
    Arch-Linux and Its Derivatives:
    Get the AUR helper, i.e., yay in this case:
    sudo pacman -S --needed base-devel git
    git clone https://aur.archlinux.org/yay-git.git
    cd yay
    makepkg -si Now, install Chrome on Arch using the following command:
    yay -S google-chrome Detailed insight into installing Google Chrome on Arch Linux.
    Fedora:
    sudo dnf install fedora-workstation-repositories
    sudo dnf install google-chrome-stable Click here to learn multiple methods of installing Chrome on Fedora.
    Opera
    Opera is a Chromium-based, partially Open-Source browser. It was first launched in 1995 with the aim of providing a state-of-the-art user experience at that time. Let’s have a look at the “Why”?
    Why Opera?
Opera uses the same rendering engine as Google Chrome (Blink, paired with the V8 JavaScript engine), and thus provides comparable speed while surfing.
    Opera provides built-in support for messengers of social platforms, providing a dedicated bar inside the browser.
Opera has built-in support for a VPN, which serves through only 3 locations.
    Opera supports a number of extensions that assist users in doing several tasks quickly, i.e., a single click to open/manage apps or tools. Apart from Opera’s own extensions, it supports Chrome-based extensions as well.
    Limitations of Opera:
- Resource consumption: although Chromium-based, it still utilizes high resources.
- Since it is only partially open source, the VPN’s source code is not public, which raises privacy concerns for users; the same concern applies to the ad and tracker blocker.

Install Opera on Linux
    Snap Supported Distros:
    sudo snap install opera Ubuntu and Other Debian Derivatives:
    Click here to see detailed installation methods of Opera on Ubuntu and other Debian derivatives.
    Arch and Its Derivatives:
    yay -S opera Brave
In 2016, a co-founder of the Mozilla project introduced Brave with privacy as its focus; it launched with a built-in ad and tracker blocker. Let’s dig into the core details of why this browser is one of the most used by Linux users:
    Why Brave?
Since Brave was introduced to ensure privacy, it integrates Tor browsing, one of the most secure ways of browsing. Routing through multiple IPs and blocking trackers is what makes Brave a secure browser.
    Brave has a unique Ad reward system, known as “Basic Attention Token”. Users can earn these tokens by watching the Ads and supporting the content creators.
    Brave is also a Chromium-based browser (equipped with the V8 JS engine), which makes it a fast browser.
    Brave supports a list of Chrome-based extensions to ensure the availability of maximum features to the users. Moreover, it has cross-platform support available, i.e., you can integrate the saved bookmarks, browsing history, and other settings to another platform.
    Limitations of Brave:
    The aggressive tracker and ad-blocking feature of Brave blocks various useful extensions and sites either completely or partially, which impacts the user experience. Now, let’s explore the ways to install Brave on Linux:
    Install Brave Browser on Linux
    You can get Brave on Linux using one shell script:
    curl -fsS https://dl.brave.com/install.sh | sh Ubuntu and Other Debian Derivatives:
    Click here to get detailed instructions for installing Brave on Ubuntu and Debian Derivatives
    Arch and Its Derivatives:
    Click here to install Brave on Arch and its derived distributions.
    Chromium
Chromium is a free and open-source browser developed and maintained by Google (under the Chromium project). It was first released in 2008 and named after the metal chromium, which is used to make chrome plating. Let’s see why Chromium is one of the best browser choices.
    Why Chromium?
Chromium is open source, which makes it favorable for Linux users, and since it is developed by Google, most Chrome-like features are already there.
    Chromium also allows you to get extensions from the Chrome Web Store and from external sources, resulting in a large number of extensions for a better user experience.
Chromium’s source code can be modified by anyone; however, changes to the official project are accepted only from Google-approved contributors.
Compared with Google Chrome, Chromium is more privacy-oriented: updates are installed manually, and it does not track or share user data.
    Limitations of Chromium:
    Chromium is also a resource-intensive browser, making it hard for people looking for a hardware-friendly browser, and the Chromium codebase is the major reason behind this.
    Install Chromium on Linux
    Ubuntu and Other Debian Derivatives:
    sudo apt install chromium-browser Read this guide for detailed installation instructions.
    Snap supported Distros:
    sudo snap install chromium Arch and Its Derivatives:
    sudo pacman -S chromium Fedora:
    sudo dnf install chromium Vivaldi
Vivaldi is another Chromium-based browser, introduced in 2015 by Vivaldi Technologies. It was developed as an alternative to the Opera browser.
    Why Vivaldi?
    Since Vivaldi is Chromium-based, its UI is customizable, and users have a variety of themes, layout options to experience a unique feel.
Vivaldi’s quick command support allows you to navigate between tabs, create Vivaldi notes, and scroll through browser history. Just write a keyword in the command search tab, and a list of commands is shown with their purpose.
    Vivaldi is equipped with a built-in mail client and RSS feed reader, which you may not get by default in other browsers.
    Limitations of Vivaldi:
- Vivaldi is not completely open source, with some of its features closed source.
- Although it offers a customizable UI, this can result in high hardware resource consumption.

Install Vivaldi on Linux
    Ubuntu and Other Debian Derivatives:
    Vivaldi is not directly available on the repositories of Ubuntu or other Debian derivatives. However, you can get the “.deb” package from the official Vivaldi site. Once the “deb” package is downloaded, you can use the following command to install it:
    sudo apt install "./path-to-deb-file" Note: Follow this guide for a detailed installation method.
    Snap Supported Distros:
    Users of those Linux distributions where the snap is functional can use the following command to install Vivaldi on the system:
    sudo snap install vivaldi Flatpak Supported Distros:
    Ensure that your system has Flatpak installed and it is connected to Flathub. Then, use the following command to install it:
    flatpak install flathub com.vivaldi.Vivaldi Tor (The Onion Router)
Tor is widely regarded as the most secure browser so far. The Tor network was introduced in 2002 with the aim of creating the first anonymous way to browse. It is managed and maintained under the Tor Project.
    Why Tor?
When a request is sent through Tor, it passes through multi-layered routing, one encrypted layer after another. This multi-layer routing makes it extremely difficult to detect and trace a user or the user’s location.
Tor also works with “.onion” links, which are reachable only through onion routing, i.e., Tor. This makes Tor a browser for specific uses (e.g., helping the government and corporate sector work anonymously to achieve specific goals).
    Limitations of Tor:
- The multi-layered routing puts extra load on the system resources, which is not good for users looking for resource-friendly browsers.
- Takes more time to load/start.
- Anonymity is often exploited for illegal activities (hacking, the dark web, etc.).

Install Tor on Linux
    Ubuntu and other Debian Derivatives:
    sudo apt install torbrowser-launcher Note: Follow this guide for detailed installation instructions for Tor on Ubuntu.
    Flatpak Supported Distros:
    flatpak install flathub org.torproject.torbrowser-launcher Fedora:
    To install Tor on Fedora, you have to first integrate the Tor project with Fedora’s package repository and then proceed with the installation. Get brief info on this at Tor’s official page.
    Falkon
Falkon was initially introduced in 2010 under the name “QupZilla”. Later, in 2017, KDE adopted it and renamed it from “QupZilla” to “Falkon”.
    Why Falkon?
Since it is a KDE-owned browser, it works well with and integrates into the KDE desktop environment on Linux.
    Falkon is a resource-friendly browser, making it well-liked among users working in a resource-constrained environment.
    It offers a built-in Ad blocker and some privacy controls, which are enough for a normal user and thus nullify the need to install any other service.
    Limitations of Falkon:
- Falkon is very straightforward and resource-friendly, with limited engine updates. Because of that, it sometimes behaves abnormally when it encounters modern web standards, e.g., dynamic sites, high-end graphic visuals, or JavaScript-heavy sites.
- It does not have as much extension support as other browsers.

Install Falkon on Linux
sudo apt install falkon    #Debian Derivatives
flatpak install flathub org.kde.falkon    #Flatpak Supported Distros
sudo snap install falkon    #Snap Supported Distros

Read this guide for detailed installation instructions using Snap.
    Midori
    Midori was introduced in 2007 as a part of the XFCE project and aimed to offer a simple, fast, and lightweight solution for Linux users.
    Why Midori?
Midori is not modern in visuals, but it is effective for hardware-conscious users, e.g., old hardware, low hardware specs, or embedded systems.
    It has a notably low memory consumption, which makes it boot and perform fast.
    It supports low-level tools for tracker and cookie blocking, providing essential privacy to users.
    Limitations of Midori:
- Low support for extensions and advanced customization.
- Neither intermediate nor advanced security measures are supported.

Install Midori on Linux:
    Ubuntu and Other Debian Derivatives:
    sudo apt install midori Snap Supported Distros:
    sudo snap install midori Note: Remember to configure and enable snapd, or else you will get an error while installing.
    Flatpak Supported Distros:
    flatpak install flathub org.midori_browser.Midori These were the most used and recommended GUI browsers for Linux users.
    Lynx | Text-Based Browser
Lynx is an open-source, command-line browser for Linux systems. It was introduced in 1992 by a group of researchers from the University of Kansas.
    Why Lynx?
    Lynx was aimed at command-line browsing and is still used in Linux servers to keep the GUI exposure as low as possible.
Lynx exposes only a limited number of operations that can track user data, and it provides control over cookies: users can manage whether cookies are allowed or disallowed.
    It is a preferred browser while communicating with a system through SSH, Telnet, or any other terminal-based connections.
Because it is command-line only, Lynx is well-supported and recommended for resource-constrained systems.
    Limitations of Lynx:
- Command-line operations only.
- Search results are provided as formatted text on the terminal screen, which might not suit all Linux users, especially those new to Linux.
- Recommended only when browsing is infrequent.

Install Lynx on Linux
sudo apt install lynx    #Ubuntu and Other Debian Distros
sudo dnf install lynx    #Fedora and other dnf-supported Distros
sudo pacman -S lynx    #Arch and its Derivatives

Browsh | Text-Based Browser
Browsh is another text-based browser for Linux. However, it is more modern than Lynx, as it supports a GUI in a controlled manner. For GUI rendering, the user must have Firefox installed.
    Why Browsh?
    Browsh supports a basic graphics element renderer (CSS/JS) to offer limited GUI support for the search results.
    Browsh does not allow sharing the user data. However, while processing web pages, the cookies need to be managed manually.
    Being a text-based browser, it is lightweight and supports old hardware or systems with low hardware resources.
Browsh is also used when communicating over remote connections to systems, e.g., through SSH or Telnet.
    Limitations of Browsh:
Although it supports a modern GUI renderer, advanced graphics are not yet supported inside Browsh. Thus, when displaying graphics, a few of them appear pixelated or do not show up properly.

Install Browsh on Linux
sudo apt install browsh    #Ubuntu and Other Debian Distros
sudo dnf install browsh    #Fedora and other dnf-supported Distros
sudo pacman -S browsh    #Arch and its Distros

W3m | Text-Based Browser
W3m was initially introduced in 1995 as a text-based browser for Unix-derived operating systems. Since then, it has been widely adopted by Linux users to browse in non-GUI environments.
    Why W3m?
    Like other text-based browsers, it is also resource-friendly, takes no time to start, and browses the data with an optimal speed.
    It is usually used in Linux servers and for remote browsing through remote connection protocols, i.e., SSH, Telnet.
    With time, W3m has been updated and now it provides a more user-friendly text-based interface, i.e., inline images and interactive results.
    Limitations of W3m:
    W3m needs to be configured to show inline images and SSL-encrypted pages. If not configured properly, it will show abnormal results in the terminal. Install W3m on Linux
sudo apt install w3m    #Ubuntu and Other Debian Distros
sudo dnf install w3m    #Fedora and Other dnf-supported Distros
sudo pacman -S w3m    #Arch and Its Derivatives

That’s all from the list of top Linux browsers.
    Comparison of the Browsers | Which one to choose?
Now that you have gone through the top browsers for Linux, let me provide you with a comparison chart. Here, I have considered notable parameters that a user should weigh before switching to another browser:
| Browser | System Resource Usage | Privacy | Customization | Extension Support | Rendering Engine | Updates | Source Code |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GUI Browsers | | | | | | | |
| Firefox | Medium | Medium | Medium | High | Gecko | Regular | FOSS |
| Chrome | High | Low | Low | High | Blink | Regular | Proprietary |
| Tor | High | High | Low | Medium | Gecko | Regular | FOSS |
| Opera | Medium | Ad and tracker blocker, VPN | High (sidebar, themes, workspaces) | High | Blink | Regular | Partially Open-Source |
| Brave | Low | Ad and tracker blocker | Medium | Medium | Blink | Regular | FOSS |
| Chromium | Low | Low | Low | Medium | Blink | Regular | FOSS |
| Vivaldi | Low | Tracker blocker | Low | High | Blink | Regular | Partially Open-Source |
| Falkon | Very Low | Low | Low | Medium | QtWebEngine | Not Regular | FOSS |
| Midori | Very Low | Low | Low | Medium | WebKit | Not Regular | FOSS |
| Terminal/Text-Based Browsers | | | | | | | |
| Lynx | Very Low | Low | Low | - | Internal | Not Regular | FOSS |
| w3m | Very Low | Low | Low | - | Internal | Not Regular | - |
| Browsh | Very Low | Low | Low | - | Gecko-based | Not Regular | - |

That’s all. Choose your browser wisely.
    Conclusion
The top Linux browsers for 2025 are: Google Chrome, Firefox, Opera, Brave, Chromium, Vivaldi, Tor, Falkon, Midori, Lynx, Browsh, and W3m. Each browser stands out for different factors: some offer advanced features, stronger security, lower resource consumption, or a text-based interface. You just need to see which browser fulfills your requirements and go for it.
    I have provided a list of the most used Browsers on Linux and demonstrated a brief comparison so that a user can easily pick a browser as per their requirements.
  23. by: Abhishek Kumar
    Sun, 18 May 2025 05:23:03 GMT

    Manually formatting code can be tedious, especially in fast-paced or collaborative development environments.
    While consistent formatting is essential for readability and maintainability, doing it by hand slows you down and sometimes leads to inconsistent results across a project.
    In this article, I’ll walk you through the steps to configure Visual Studio Code to automatically format your code each time you save a file.
    We'll use the VS Code extension called Prettier, one of the most widely adopted tools for enforcing code style in JavaScript, TypeScript, and many other languages.
    By the end of this guide, you'll have a setup that keeps your code clean with zero extra effort.
    Step 1: Install Prettier extension in VS Code
    To start, you'll need the Prettier - Code Formatter extension. This tool supports JavaScript, TypeScript, HTML, CSS, React, Vue, and more.
    Open VS Code, go to the Extensions sidebar (or press Ctrl + Shift + X), and search for Prettier.
    Click on Install and reload VS Code if prompted.
    Step 2: Enable format on save
    Now that Prettier is installed, let’s make it run automatically whenever you save a file.
    Open Settings via Ctrl + , or by going to File > Preferences > Settings.
In the search bar at the top, type format on save, then check the box for Editor: Format On Save.
    This tells VS Code to auto-format your code whenever you save a file, but that’s only part of the setup.
    Troubleshooting
    If saving a file doesn’t automatically format your code, it’s likely due to multiple formatters being installed in VS Code. Here’s how to make sure Prettier is set as the default:
    Open any file in VS Code and press Ctrl + Shift + P (or Cmd + Shift + P on Mac) to bring up the Command Palette. Type “Format Document” and select the option that appears. If multiple formatters are available, VS Code will prompt you to choose one. Select “Prettier - Code formatter” from the list. Now try saving your file again. If Prettier is correctly selected, it should instantly reformat the code on save.
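If you’d rather pin this down in configuration than click through prompts, the same choices can be made in settings.json (opened via the Command Palette entry “Preferences: Open User Settings (JSON)”). A minimal sketch, assuming Prettier’s usual extension ID, esbenp.prettier-vscode:

```json
{
  // Always format on save
  "editor.formatOnSave": true,
  // Use Prettier as the default formatter so VS Code doesn't have to pick one
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```

(VS Code’s settings file tolerates comments, so the annotations above are valid there.)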
    In some cases, you might want to save a file without applying formatting, for example, when working with generated code or temporary formatting quirks. To do that, open the Command Palette again and run “Save Without Formatting.”
    Optional: Advanced configuration
    Prettier works well out of the box, but you can customize how it formats your code by adding a .prettierrc configuration file at the root of your project.
    Here’s a simple example:
{
  "singleQuote": true,
  "trailingComma": "es5",
  "semi": false
}

This configuration tells Prettier to use single quotes, add trailing commas where valid in ES5 (like in objects and arrays), and omit semicolons at the end of statements.
    There are many other options available such as adjusting print width, tab width, or controlling how JSX and HTML are handled.
    You can find the full list of supported options in Prettier’s documentation, but for most projects, a few key settings in .prettierrc go a long way.
    Try It Out
    Create or open any file, JavaScript, TypeScript, HTML, etc. Add some poorly formatted code.
    <html><head><style>body{background:#fff;color:#333;font-family:sans-serif}</style></head><body><h1>Hello</h1><script>document.querySelector("h1").addEventListener("click",()=>{alert("Hello World!")})</script></body></html> Then simply save the file (Ctrl + S or Cmd + S), and watch Prettier instantly clean it up.
As you can see, Prettier neatly indents and spaces each part of the HTML code, even across different embedded languages.
    Wrapping Up
It doesn’t matter if you are vibe coding or doing everything on your own: proper formatting is a sign of good code.
We’ve already covered the fundamentals of writing clean, consistent code (indentation, spacing, and word wrap), and automatic formatting builds directly on top of those fundamentals.
    Once configured, it removes the need to think about structure while coding, letting you focus on the logic.
    If you're also wondering how to actually run JavaScript or HTML inside VS Code, we've covered that as well, so check those guides if you're setting up your workflow from scratch.
    If you’re not already using automatic formatting, it’s worth making part of your workflow.
And if you use a different tool or approach, I’d be interested to hear how you’ve set it up; let us know in the comments. 🧑‍💻
  24. by: Geoff Graham
    Fri, 16 May 2025 14:38:19 +0000

Some weekend reading on the heels of Global Accessibility Awareness Day (GAAD), which took place yesterday. The Email Markup Consortium (EMC) released its 2025 study on accessibility in HTML emails, and the TL;DR is not totally dissimilar from what we heard from WebAIM’s annual web report:
    The results come from an analysis of 443,585 emails collected from the past year. According to EMC, only 21 emails passed all accessibility checks — and they were all written by the same author representing two different brands. And, further, that author represents one of the companies that not only sponsors the study, but develops the automated testing tool powering the analysis.
    Automated testing is the key here. That’s needed for a project looking at hundreds of thousands of emails, but it won’t surface everything, as noted:
The most common issues relate to internationalization, like leaving out the lang (96% of emails) and dir (98% of emails) attributes. But you’ll be familiar with most of what rounds out the top 10, because it lines up with WebAIM’s findings:
Links must have discernible text
Element has insufficient color contrast
Images must have alternate text
Link text should be descriptive
Links must be distinguishable without relying on color

I appreciate that the report sheds light on which accessibility features are supported by specific email clients, such as Gmail. The report outlines a set of 20 HTML, CSS, and ARIA features they look for and found that only one email client (SFR Mail?) of the 44 evaluated supports all of the features. Apple Mail and Samsung Email are apparently close behind, but the other 41? Not so much.
So, yeah. Email has a ways to go, like a small microcosm of the web itself.
    HTML Email Accessibility Report 2025 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  25. by: Abhishek Prakash
    Fri, 16 May 2025 14:25:21 GMT

Sausage is a word-forming game inspired by the classic Bookworm. Written in Bash, it can be used on any Linux distribution.
Playing Sausage

The goal of the game is simple.
Earn points by spotting words.
Spotting longer words produces coloured letters.
Using coloured letters gives more points.
Spotting smaller words introduces red letters; if a red letter reaches the bottom, you lose the game.

Installation
✋ Since it's a terminal-based game, it requires a few commands for installation. I advise learning the command line essentials from our terminal basics series.

You need to have git installed on your system.
    Use git to clone the official Sausage repository:
git clone https://gitlab.com/christosangel/sausage.git

Switch to the cloned directory:
cd sausage

Give execution permission to the install.sh shell script.
chmod +x install.sh

Run the script:
./install.sh

Once the installation is finished, open Sausage in the same location using:
./sausage.sh

Essential commands and shortcuts
📋 Sausage needs a 60-column x 34-line terminal to work properly. The interface describes all the key combinations clearly; even the direction of motion is displayed.
To move without selecting any word, use the arrow keys.
Once you have decided on a starting letter, press the Space/Enter key to select it. Now, use the navigation keys to continue the selection.
Navigation           Key
↑ (Up)               k or Up Arrow
↓ (Down)             j or Down Arrow
↗ (Right and Up)     L or Shift + Right Arrow
↘ (Right and Down)   l or Right Arrow
↖ (Left and Up)      H or Shift + Left Arrow
↙ (Left and Down)    h or Left Arrow

To show all the words, press the b key in the game.
Show all words

To undo a letter selection, press the Backspace key. Undo a word selection with the Delete key.
Select/unselect letters

Press the r key in the game to reshuffle. Each reshuffle loses a turn and introduces multiple red cells, and existing red cells drop one cell down.
Reshuffle in Sausage

Configuration
Limited configuration is possible here. Either manually edit the ~/.config/sausage/sausage.config file or press the c key on the game's start page.
Sausage Config

You can find more gameplay details on its official GitLab page.
    Removing Sausage
Technically, you run Sausage from the script itself. Still, the installation initially created a few directories. This screenshot from the official repository shows them:
So, to 'uninstall' Sausage, you have to remove the cloned repository; if you also want to remove the game-related files, check the screenshot above and remove them.
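As a sketch, the cleanup boils down to two rm -rf calls: one for the cloned repository and one for the ~/.config/sausage directory mentioned in the Configuration section (other game files may exist, so do check the screenshot). The snippet below demonstrates this against a scratch HOME so it is safe to run as-is; adjust the paths to wherever you actually cloned the repository.

```shell
# Demonstration only: point HOME at a scratch directory so nothing real is deleted.
HOME="$(mktemp -d)"

# Pretend the repo and config dir exist, as they would after running install.sh
mkdir -p "$HOME/sausage" "$HOME/.config/sausage"

# Remove the cloned repository
rm -rf "$HOME/sausage"

# Remove the game's configuration directory
rm -rf "$HOME/.config/sausage"

echo "sausage removed"
```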
    Up for a (word) game?
If you ever played the classic Bookworm, Sausage will be pure nostalgia. And if you never played it before, it could still be fun to try if you like these kinds of games.
    It's one of those amusing things you can do in the terminal.
I'll let you leave a few words in the comments 😉
