Blog Entries posted by Blogger

  1. by: Geoff Graham
    Fri, 19 Sep 2025 13:58:37 +0000

I’m inclined to take a few notes on Eric Bailey’s grand post about the use of inclusive personas in user research. As someone who has been in roles that have both used and created user personas, there’s so much in here.
What’s the big deal, right? We’re often taught and encouraged to think about users early in the design process. It’s user-centric design, so let’s personify 3-4 of the people we think represent our target audiences so our work is aligned with their objectives and needs. My master’s program was big on that and went deep into different approaches, strategies, and templates for documenting that research.
    And, yes, it is research. The idea, in theory, is that by understanding the motivations and needs of specific users (gosh, isn’t “users” an awkward term?), we can “design backwards” so that the end goal is aligned to actions that get them there.
Eric sees holes in that process, particularly when it comes to research centered around inclusiveness. Why is that? Very good reasons that I’m compiling here so I can reference them later. There’s a lot to take in, so you’d do yourself a solid by reading Eric’s post in full. Your takeaways may be different than mine.
    Traditional vs. Inclusive user research
    First off, I love how Eric distinguishes what we typically refer to as the general type of user personas, like the ones I made to generalize an audience, from inclusive user personas that are based on individual experiences.
So, right off the bat we have to reframe what we’re talking about. There are blanket personas that are placeholders for abstracting what we think we know about specific groups of people, versus individual people who represent specific experiences that impact usability and access to content.
    Assistive technology is not exclusive to disabilities
    It’s so easy to assume that using assistive tools automatically means accommodating a disability or impairment, but that’s not always the case. Choice points from Eric:
First is that assistive technology is a means, and not an end.
Some disabled people use more than one form of assistive technology, both concurrently and switching them in and out as needed.
Some disabled people don’t use assistive technology at all.
Not everyone who uses assistive technology has also mastered it.
Disproportionate attention is placed on one kind of assistive technology at the expense of others.
It’s entirely possible to have a solution that is technically compliant, yet unintuitive or near-impossible to use in actual practice.
I like to keep in mind that assistive technologies are for everyone. I often think about examples in the physical world where everyone benefits from an accessibility enhancement, such as cutting curbs in sidewalks (great for skateboarders!), taking elevators (you don’t have to climb stairs in some cases), and using TV subtitles (I often have to keep the volume low for sleeping kids).
    That’s the inclusive part of this. Everyone benefits rather than a specific subset of people.
    Different personas, different priorities
    What happens when inclusive research is documented separately from general user research?
    In practice, that means:
Thinking of a slick new feature that will impress your users? Great! Let’s make sure it doesn’t step on the toes of other experiences in the process, because that’s antithetical to inclusiveness. I recognize this temptation in my own work, particularly if I land on a novel UI pattern that excites me. The excitement and tickle I get from a “clever” idea gives me a blind spot when it comes to evaluating its overall effectiveness.
    Radical participatory design
Gosh dang, why didn’t my schoolwork ever cover this? I had to spend a little time reading the Cambridge University Press article explaining radical participatory design (RPD) that Eric linked up.
Ah, a method for methodology! We’re talking about not only including community members in the internal design process, but making them equal stakeholders as well. They get the power to make decisions, something the article’s author describes as a form of decolonization.
    Or, as Eric nicely describes it:
    Bonus points for surfacing the model minority theory:
    It introduces exclusiveness in the quest to pursue inclusiveness — a stereotype within a stereotype.
    Thinking bigger
    Eric caps things off with a great compilation of actionable takeaways for avoiding the pitfalls of inclusive user personas:
Letting go of control leads to better outcomes.
Member checking: letting participants review, comment on, and correct the content you’ve created based on their input.
Take time to scrutinize the functions of our roles and how our organizations compel us to undertake them in order to be successful within them.
Organizations can turn inwards and consider the artifacts their existing design and research processes produce. They can then identify opportunities for participants to provide additional clarity and corrections along the way.
On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  2. by: Abhishek Prakash
    Fri, 19 Sep 2025 17:05:42 +0530

Before you see all the new tips and tutorials, allow me to share a few updates on what’s coming.
    So, we are working on two new microcourses: Git for DevOps and Advanced Automation With Systemd.
    I know that we already have a systemd course in place, but this one specifically focuses on automation and can be considered an advanced topic.
Other than that, we are working on Docker video tutorials. Stay tuned for more awesome Linux learning with Linux Handbook.
      This post is for subscribers only
  3. by: Hangga Aji Sayekti
    Fri, 19 Sep 2025 15:56:59 +0530

    When you start exploring a target website, the first question to ask is simple: what names exist out there?
    Before you think about vulnerabilities or exploits, you would want a map of subdomains. That map can reveal forgotten login pages, staging servers, or even entire apps that weren’t meant to be public.
    My preferred tool for this first step is subfinder. It’s simple, quiet, and quick. In this guide, we’ll walk through installing subfinder on Kali Linux, running the most useful commands, saving results, and experimenting with extra flags.
    We’ll practice together on vulnweb.com, a safe site for learning.
    What subfinder actually does
    Subfinder is a passive subdomain discovery tool. Instead of hammering DNS servers or brute-forcing names, it asks public sources: certificate transparency logs, DNS databases, GitHub, and more. That’s why it’s fast and low-noise.
    The catch: subfinder gives us names only. Some names may be stale and some may point to nothing. And that’s fine. At this stage, all we need is a clean list of possible subdomains. Later, we can resolve and probe them.
    Installing subfinder on Kali Linux
    You have two easy choices:
    Install via apt (fastest):
sudo apt update
sudo apt install subfinder
Install the latest version via Go:
go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
echo 'export PATH=$PATH:$HOME/go/bin' >> ~/.bashrc && source ~/.bashrc
If you are just starting out, the apt version is fine. If something feels buggy, you can switch to the Go install for the newest release.
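Either way, a quick sanity check confirms the install and prints the version you ended up with:
subfinder -version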
    Understanding the basic flags
-d chooses the domain
-silent prints one subdomain per line, no banners
-o saves to a text file
-oJ saves JSONL (structured data, one object per line)
-ls shows which sources subfinder can query
-s selects only certain sources
-all uses every available source
Using subfinder
    Subfinder has more options that are useful once you’re comfortable with the basics. Let’s walk through them with examples:
    Enumerate subdomains (basic):
subfinder -d vulnweb.com
Here’s what will happen next:
    Awesome, all the subdomains are now visible.
Enumerate multiple domains from a file:
subfinder -dL domains.txt
Runs against every domain listed in domains.txt.
Use all sources:
subfinder -d vulnweb.com -all
Queries every source subfinder knows about. Slower, but more complete.
Exclude specific sources:
subfinder -d vulnweb.com -es alienvault,zoomeyeapi
Skips certain sources if you don’t want to use them.
    Set concurrency (threads):
subfinder -d vulnweb.com -o results.txt -t 50
Runs with up to 50 concurrent tasks.
    Limit request rate:
subfinder -d vulnweb.com -rl 50
Keeps requests under 50 per second.
    Output to a plain file:
subfinder -d vulnweb.com -o results.txt
JSON output:
subfinder -d vulnweb.com -oJ vulnweb.jsonl
CSV output (via quick conversion):
subfinder -d vulnweb.com -oJ subfinder.jsonl
jq -r '.name' subfinder.jsonl | awk 'BEGIN{print "subdomain"}{print $0}' > subfinder.csv
The output has been saved to subfinder.csv. Open it to view the data. (Depending on your subfinder version, the JSON field holding each subdomain may be host rather than name; peek at a line of the JSONL and adjust the jq filter if needed.)
    Unique results only:
subfinder -d vulnweb.com -silent -o results.txt
sort -u results.txt > results_unique.txt
Recursive enumeration (find deeper subdomains):
subfinder -d vulnweb.com -recursive
Provider configuration (optional but powerful)
You can add API keys of premium services like SecurityTrails, Shodan, etc. They will give you richer results. You can add keys to ~/.config/subfinder/provider-config.yaml:
securitytrails:
  key: "YOUR_SECURITYTRAILS_API_KEY"
virustotal:
  key: "YOUR_VIRUSTOTAL_API_KEY"
shodan:
  key: "YOUR_SHODAN_API_KEY"
You can use subfinder without keys, but adding them usually gives you more coverage.
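Once keys are in place, you can lean on specific sources with the -s flag described earlier. For example, a quick sketch using two of the configured sources:
subfinder -d vulnweb.com -s securitytrails,shodan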
    Simple practice exercise with vulnweb.com
Let's get into practice mode and run a few simple commands to explore subfinder:
    Collect subdomains:
subfinder -d vulnweb.com -silent -o vulnweb_raw.txt
Clean up duplicates:
sort -u vulnweb_raw.txt > vulnweb_clean.txt
This creates vulnweb_clean.txt with only unique entries. Done! Go check it out.
    Next, save JSON output too (for reports):
subfinder -d vulnweb.com -oJ vulnweb.jsonl
Take a look at the first 10 results:
head -n 10 vulnweb_clean.txt
Now you have a tidy list of candidate subdomains, ready for the next step of vulnerability assessment.
    Final notes
    Subfinder is the quiet scout in the toolkit. It doesn’t overwhelm us with noise, it just hands out the names that exist out in the wild. With a few simple commands, we can build a reliable subdomain list for any domain we’re testing.
    For now, practice on vulnweb.com until you’re comfortable. Later, move on to checking which of those names are live and what services they’re running. But that’s another story for another day.
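If you want a head start on that next step, one common pattern (assuming you also have ProjectDiscovery’s httpx installed) is to pipe subfinder straight into it to see which names actually respond over HTTP:
subfinder -d vulnweb.com -silent | httpx -silent -status-code -title
Treat this as a sketch; flag names can vary slightly between httpx releases.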
  4. by: Sourav Rudra
    Fri, 19 Sep 2025 06:43:45 GMT

    Hyprland is a dynamic tiling Wayland compositor that has been gaining traction in the Linux community due to its modern aesthetics, smooth animations, and extensive configurability.
    Unlike traditional X11 window managers, Hyprland leverages Wayland's capabilities to provide a more fluid and visually appealing desktop experience. Its growing popularity is evident in discussions across forums and communities, where people have been praising its performance and customization options.
    But if you look at our Hyprland tutorial series, you'll realize that setting up Hyprland can be a huge challenge. And that's why I am listing a few options that lower the entry barrier by providing a preconfigured Hyprland option. Let's see them.
    1. Garuda Linux
    Garuda Linux offers a dedicated Hyprland edition, preconfigured with themes, wallpapers, and essential applications. It is designed for users who want a visually appealing and ready-to-use desktop without manually configuring Hyprland.
    The distribution includes performance-oriented buffs such as the Zen kernel, Btrfs snapshots, and optimized compositor settings. Users can enjoy a responsive system with minimal tweaking needed post-install.
    Garuda’s tools, like "Rani," simplify maintenance and system management. This ensures even users new to Linux can manage updates, drivers, and desktop settings efficiently.
    ⭐ Key Features
Preinstalled tools for system management.
Ready-to-use desktop layout with Hyprland.
Rolling release updates via Arch Linux repos.
2. ArchRiot
    ArchRiot is a community-driven, Arch-based distribution that comes with Hyprland preinstalled. It includes essential applications and cool themes for a ready-to-use desktop experience.
    The distribution provides a Go-based installer that automates setup and includes rollback support, reducing setup errors. Plus, the distro follows a rolling release model, allowing users to stay up to date with the latest packages and Hyprland features.
    Initially started as a fork of Omarchy (discussed later), it has evolved into a distinct project with custom developed tools.
    ⭐ Key Features
Go-based installer with rollback support.
Dependable community support for new users.
Preconfigured Hyprland with curated apps and themes.
3. CachyOS
    CachyOS is an Arch-based distribution focused on speed and ease of use. It offers a Hyprland option during installation, letting users start with a functional, preconfigured desktop.
    It includes a simple installer, and the post-install tools are helpful to manage packages, settings, and desktop customization without extra complexity. This is a suitable option for both beginners and experienced users who want a fast Arch-based system with Hyprland ready to go.
    ⭐ Key Features
GUI and CLI installation options.
Tools for hardware detection and system customization.
Optimized kernel with BORE scheduler for better performance.
    4. Omarchy (A Script for Arch Linux)
    Omarchy is a script for Arch Linux that automates the installation and configuration of Hyprland. It sets up themes, layouts, keybinds, and essential applications.
    The script reduces manual setup effort, allowing users to get a functional desktop with a single command. It supports optional packages for productivity and multimedia, letting users tailor the environment to their needs.
    Omarchy is ideal for users who want the flexibility of Arch Linux without configuring every component manually.
    ⭐ Key Features
Many preinstalled themes.
Automated Hyprland setup in one command.
Optional productivity and multimedia integrations.
5. KooL's Arch - Hyprland (Another Script for Arch Linux)
    KooL's Arch - Hyprland is an automated installation script that sets up a complete Hyprland desktop environment on minimal Arch Linux systems. The script installs Hyprland along with a curated collection of themes, applications, and preconfigured dotfiles from a centralized repository, creating a polished and functional desktop experience out of the box.
    While the setup is relatively opinionated and comes with various configurations, users still need to be comfortable with terminal usage and basic configuration file editing for system maintenance and minor adjustments.
    ⭐ Key Features
One-script setup with complete Hyprland environment installation.
Curated preconfigured dotfiles from an actively maintained repository.
Flexible display manager options, including GDM and SDDM support.
Conclusion
    If you want a distribution that boots directly into Hyprland with minimal setup, Garuda Linux (Hyprland edition), CachyOS, and ArchRiot are the best candidates. They provide preconfigured desktops, themes, and essential tools without requiring you to fiddle with anything.
    For Arch enthusiasts who want to stay close to vanilla, Omarchy with Arch Linux or Arch Linux combined with JaKooLit’s script (number 5) are strong alternatives. These do not qualify as full "Hyprland distros," but they automate the setup process and deliver a comparable experience.
And don't forget that there are several enthusiasts who have specific Hyprland setups that can be achieved with their dotfiles.
GitHub - msmafra/dotfiles: My Hyprland environment (dotfiles)
Suggested Read 📖
Getting Started With Hyprland (It's FOSS)
  5. by: Abhishek Prakash
    Thu, 18 Sep 2025 04:31:27 GMT

We hit a major milestone on our Mastodon account: we crossed the 40,000 mark. It's a pleasant surprise. We have a lot more people on Twitter, Facebook, Instagram, and even YouTube. But seeing this number on a non-mainstream platform like Mastodon gives a positive uplift 🕺

    💬 Let's see what you get in this edition:
Ubuntu making a major change.
A long-time KDE contributor leaving.
The Apache Software Foundation's rebranding.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by TigerData. TigerData, the creators of TimescaleDB, are on a mission to make Postgres the fastest database for modern workloads. See how Postgres can scale to 2 PB and 1.5 trillion metrics per day, all without proprietary black boxes or hidden tools. With Tiger Postgres, you get massive scale without sacrificing the SQL you already know and love.
📰 Linux and Open Source News
Jonathan Riddell has left KDE after 25 years.
The Apache Software Foundation has turned a new leaf.
SUSE's Agama installer recently received a major update.
CUDA will be directly offered via Ubuntu's repositories soon.
Ubuntu has made a move to Dracut, replacing initramfs-tools.
The OpenSearch Foundation has a new Executive Director leading it.
GNOME 49 is released. Ubuntu 25.10 and Fedora 43 will have it, and rolling distros like Arch should have it in a week or so, hopefully.
GNOME 49 Launches With New Apps, Nautilus Redesign, and GNOME Shell Upgrades (It's FOSS News)
🧠 What We’re Thinking About
    The Rustification of Ubuntu has some performance hurdles to tackle.
Rust Coreutils Are Performing Worse Than GNU Coreutils in Ubuntu (It's FOSS News)
🧮 Linux Tips, Tutorials, and Learnings
Compose Key in GNOME makes typing € ♥ © and more super easy.
Here's how the classic sudo compares to the Rust-based sudo-rs.
Avoid these 10 mistakes if you are a new Linux user: Top 10 Mistakes New Linux Users Make (It's FOSS).
👷 AI, Homelab and Hardware Corner
    Turn your Pi into a powerhouse with the Pironman 5 Max.
    Running local LLMs on your phone isn't science fiction! You can try running a local AI on your Android smartphone. Don't expect a superb experience, but it can help in some cases.
    And I tried my hands on a Raspberry Pi Pico 2 kit. It's a well-thought-out device primarily aiming to help children get into STEM.
Review: Elecrow’s All-in-one Starter Kit for Pico 2 (It's FOSS)
✨ Project Highlight
Readest is a solid eBook reader choice that runs great on Linux (but is not limited to it).
This Could Be My New Favorite eBook Reader App on Linux (It's FOSS News)
📽️ Videos I Am Creating for You
    Learn about using and managing AppImages in Linux in our latest YouTube video.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    What's in a Container? A lot, if you can solve it.
Crossword: What’s in the Container? (It's FOSS)
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus
💡 Quick Handy Tip
    You can easily save sessions in KDE Plasma. First, go into KDE Settings -> Session -> Desktop Session. Here, under the "Session Restore" section, toggle on the "When session was manually saved" button.
    This will add a new "Save Session" button to your Power Menu, as shown in the screenshot above (on the right). Click on it to make Plasma remember the apps that are open and restore them on the next login.
    To customize the behavior further, open the apps you need at login and click the button again to change the apps.
    🤣 Meme of the Week
    You never know when you might need them!
    🗓️ Tech Trivia
    The Association for Computing Machinery was founded on September 15, 1947. Today it has over 100,000 members worldwide and organizes conferences and workshops to advance computing knowledge and technology.
    🧑‍🤝‍🧑 FOSSverse Corner
    One of our readers has sent over a reimagination of what Tux, the mascot of Linux, can be.
Tux Redesign... Unofficial (It's FOSS Community): a reader, Michael Kolesidis, shared a redesigned, modern, and simplified version of our beloved Tux mascot, released under the Creative Commons Attribution-ShareAlike 4.0 International license. You can find the new designs on Wikimedia: https://commons.wikimedia.org/wiki/File:Tux_Redesign.svg
❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  6. Is it Time to Un-Sass?

    by: Jeff Bridgforth
    Wed, 17 Sep 2025 14:02:25 +0000

Several weeks ago, I participated in Front End Study Hall. Front End Study Hall is an HTML- and CSS-focused meeting held on Zoom every two weeks. It is an opportunity to learn from one another as we share our common interest in these two building blocks of the Web. Some weeks there is more focused discussion, while other weeks are more open-ended and members will ask questions or bring up topics of interest.
Joe, the moderator of the group, usually starts the discussion with something he has been thinking about. In this particular meeting, he asked us about Sass. He asked us if we used it, if we liked it, and then to share our experience with it. I had planned to answer the question, but the conversation drifted into another topic before I had the chance. I saw it as an opportunity to write and to share some of the things that I have been thinking about recently.
    Beginnings
    I started using Sass in March 2012. I had been hearing about it through different things I read. I believe I heard Chris Coyier talk about it on his then-new podcast, ShopTalk Show. I had been interested in redesigning my personal website and I thought it would be a great chance to learn Sass. I bought an e-book version of Pragmatic Guide to Sass and then put what I was learning into practice as I built a new version of my website. The book suggested using Compass to process my Sass into CSS. I chose to use SCSS syntax instead of indented syntax because SCSS was similar to plain CSS. I thought it was important to stay close to the CSS syntax because I might not always have the chance to use Sass, and I wanted to continue to build my CSS skills.
    It was very easy to get up and running with Sass. I used a GUI tool called Scout to run Compass. After some frustration trying to update Ruby on my computer, Scout gave me an environment to get up and going quickly. I didn’t even have to use the command line. I just pressed “Play” to tell my computer to watch my files. Later I learned how to use Compass through the command line. I liked the simplicity of that tool and wish that at least one of today’s build tools incorporated that same simplicity.
    I enjoyed using Sass out of the gate. I liked that I was able to create reusable variables in my code. I could set up colors and typography and have consistency across my code. I had not planned on using nesting much but after I tried it, I was hooked. I really liked that I could write less code and manage all the relationships with nesting. It was great to be able to nest a media query inside a selector and not have to hunt for it in another place in my code.
    Fast-forward a bit…
    After my successful first experience using Sass in a personal project, I decided to start using it in my professional work. And I encouraged my teammates to embrace it. One of the things I liked most about Sass was that you could use as little or as much as you liked. I was still writing CSS but now had the superpower that the different helper functions in Sass enabled.
I did not get as deep into Sass as I could have. I used the Sass @extend rule more in the beginning. There are a lot of features that I did not take advantage of, like placeholder selectors and for loops. I have never been one to rely much on shortcuts. I use very few of the shortcuts on my Mac. I have dabbled in things like Emmet but tend to quickly abandon them because I am just used to writing things out and have not developed the muscle memory of using shortcuts.
    Is it time to un-Sass?
    By my count, I have been using Sass for over 13 years. I chose Sass over Less.js because I thought it was a better direction to go at the time. And my bet paid off. That is one of the difficult things about working in the technical space. There are a lot of good tools but some end up rising to the top and others fall away. I have been pretty fortunate that most of the decisions I have made have gone the way that they have. All the agencies I have worked for have used Sass.
    At the beginning of this year, I finally jumped into building a prototype for a personal project that I have been thinking about for years: my own memory keeper. One of the few things that I liked about Facebook was the Memories feature. I enjoyed visiting that page each day to remember what I had been doing on that specific day in years past. But I felt at times that Facebook was not giving me all of my memories. And my life doesn’t just happen on Facebook. I also wanted a way to view memories from other days besides just the current date.
    As I started building my prototype, I wanted to keep it simple. I didn’t want to have to set up any build tools. I decided to write CSS without Sass.
Okay, so that was my intention. But I soon realized that I was using nesting. I had been working on it a couple of days before I realized it.
But my code was working. That is when I realized that native nesting in CSS works much the same as nesting in Sass. I had followed the discussion about implementing nesting in native CSS. At one point, the syntax was going to be very different. To be honest, I lost track of where things had landed because I was continuing to use Sass. Native CSS nesting was not a big concern to me right then.
    I was amazed when I realized that nesting works just the same way. And it was in that moment that I began to wonder:
    Is this finally the time to un-Sass?
    I want to give credit where credit is due. I’m borrowing the term “un-Sass” from Stu Robson, who is actually in the middle of writing a series called “Un-Sass’ing my CSS” as I started thinking about writing this post. I love the term “un-Sass” because it is easy to remember and so spot on to describe what I have been thinking about.
    Here is what I am taking into consideration:
    Custom Properties
    I knew that a lot about what I liked about Sass had started to make its way into native CSS. Custom properties were one of the first things. Custom properties are more powerful than Sass variables because you can assign a new value to a custom property in a media query or in a theming system, like light and dark modes. That’s something Sass is unable to do since variables become static once they are compiled into vanilla CSS. You can also assign and update custom properties with JavaScript. Custom properties also work with inheritance and have a broader scope than Sass variables.
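To make that concrete, here is a minimal sketch (my own example, not code from any particular project) of one custom property being reassigned for dark mode, plus how JavaScript can update it:
:root { --accent: #0066cc; }

@media (prefers-color-scheme: dark) {
  :root { --accent: #66aaff; }
}

/* And from JavaScript:
   document.documentElement.style.setProperty('--accent', 'rebeccapurple'); */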
    So, yeah. I found that not only was I already fairly familiar with the concept of variables, thanks to Sass, but the native CSS version was much more powerful.
    I first used CSS Custom Properties when building two different themes (light and dark) for a client project. I also used them several times with JavaScript and liked how it gave me new possibilities for using CSS and JavaScript together. In my new job, we use custom properties extensively and I have completely switched over to using them in any new code that I write. I made use of custom properties extensively when I redesigned my personal site last year. I took advantage of it to create a light and dark theme and I utilized it with Utopia for typography and spacing utilities.
    Nesting
When Sass introduced nesting, it simplified writing CSS because you could write style rules within another style rule (usually a parent). This meant that you no longer had to write out the full descendant selector as a separate rule. You could also nest media queries, feature queries, and container queries.
    This ability to group code together made it easier to see the relationships between parent and child selectors. It was also useful to have the media queries, container queries, or feature queries grouped inside those selectors rather than grouping all the media query rules together further down in the stylesheet.
    I already mentioned that I stumbled across native CSS nesting when writing code for my memory keeper prototype. I was very excited that the specification extended what I already knew about nesting from Sass.
    Two years ago, the nesting specification was going to require you to start the nested query with the & symbol, which was different from how it worked in Sass.
/* Sass-style nesting */
.footer {
  a { color: blue }
}

/* 2023 */
.footer {
  & a { color: blue } /* This was valid then */
}
But that changed sometime in the last two years and you no longer need the ampersand (&) symbol to write a nested query. You can write it just as you had been writing it in Sass. I am very happy about this change because it means native CSS nesting is just like I have been writing it in Sass.
/* 2025 */
.footer {
  a { color: blue } /* Today's valid syntax */
}
There are some differences in the native implementation of nesting versus Sass. One difference is that you cannot create concatenated selectors with CSS. If you love BEM, then you probably made use of this feature in Sass. But it does not work in native CSS.
.card {
  &__title {}
  &__body {}
  &__footer {}
}
If you are interested in reading a bit more about this, I would suggest Kevin Powell’s “Native CSS Nesting vs. Sass Nesting” from 2023. Just know that the information about having to use the & symbol before an element selector in native CSS nesting is out of date.
I never took advantage of concatenated selectors in my Sass code, so this will not have an impact on my work. For me, nesting in native CSS is equivalent to how I was using it in Sass and is one of the reasons to consider un-Sassing.
My advice is to be careful with nesting. I would suggest keeping your nested code to three levels at the most. Otherwise, you end up with very long selectors that may be more difficult to override in other places in your codebase. Keep it simple.
    The color-mix() function
    I liked using the Sass color module to lighten or darken a color. I would use this most often with buttons where I wanted the hover color to be different. It was really easy to do with Sass. (I am using $color to stand in for the color value).
background-color: darken($color, 20%);
The color-mix() function in native CSS allows me to do the same thing, and I have used it extensively in the past few months since learning about it from Chris Ferdinandi.
background-color: color-mix(in oklab, var(--color), #000000 20%);
Mixins and functions
    I know that a lot of developers who use Sass make extensive use of mixins. In the past, I used a fair number of mixins. But a lot of the time, I was just pasting mixins from previous projects. And many times, I didn’t make as much use of them as I could because I would just plain forget that I had them. They were always nice helper functions and allowed me to not have to remember things like clearfix or font smoothing. But those were also techniques that I found myself using less and less.
I also utilized functions in Sass and created several of my own, mostly to do some math on the fly. I mainly used them to convert pixels into ems because I liked being able to define my typography and spacing as relative and creating relationships in my code. I also had written a function to convert pixels to ems for custom media queries that did not fit within the breakpoints I normally used. I had learned that it was a much better practice to use ems in media queries so that layouts would not break when a user used page zoom.
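For reference, the px-to-em helper I describe boils down to just a few lines of Sass (a rough sketch; the function name and the 16px base are my assumptions):
@use "sass:math";

// Convert a pixel value to ems, assuming a 16px base font size.
@function em($px, $base: 16px) {
  @return math.div($px, $base) * 1em;
}

// Usage: @media (min-width: em(600px)) { ... }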
    Currently, we do not have a way to do mixins and functions in native CSS. But there is work being done to add that functionality. Geoff wrote about the CSS Functions and Mixins Module.
    I did a little experiment for the use case I was using Sass functions for. I wanted to calculate em units from pixels in a custom media query. My standard practice is to set the body text size to 100% which equals 16 pixels by default. So, I wrote a calc() function to see if I could replicate what my Sass function provided me.
@media (min-width: calc((600 / 16) * 1em)) { /* ... */ }
This custom media query is for a minimum width of 600px; (600 / 16) * 1em resolves to 37.5em. This works based on my setting the base font size to 100%. It could be modified.
    Tired of tooling
    Another reason to consider un-Sassing is that I am simply tired of tooling. Tooling has gotten more and more complex over the years, and not necessarily with a better developer experience. From what I have observed, today’s tooling is predominantly geared towards JavaScript-first developers, or anyone using a framework like React. All I need is a tool that is easy to set up and maintain. I don’t want to have to learn a complex system in order to do very simple tasks.
Another issue is dependencies. At my current job, I needed to add some new content and styles to an older WordPress site that had not been updated in several years. The site used Sass, and after a bit of digging, I discovered that the previous developer had used CodeKit to process the code. I renewed my CodeKit license so that I could add CSS to style the content I was adding. It took me a bit to get the settings correct because the settings in the repo were not saving the processed files to the correct location.
    Once I finally got that set, I continued to encounter errors. Dart Sass, the engine that powers Sass, introduced changes to the syntax that broke the existing code. I started refactoring a large amount of code to update the site to the correct syntax, allowing me to write new code that would be processed. 
    I spent about 10 minutes attempting to refactor the older code, but was still getting errors. I just needed to add a few lines of CSS to style the new content I was adding to the site. So, I decided to go rogue and write the new CSS I needed directly in the WordPress template. I have had similar experiences with other legacy codebases, and that’s the sort of thing that can happen when you’re super reliant on third-party dependencies. You spend more time trying to refactor the Sass code so you can get to the point where you can add new code and have it compiled.
All of this has left me tired of tooling. I am fortunate enough at my new position that the tooling is all set up through the Django CMS. But even with that system, I have run into issues. For example, I tried using a mixture of percentage and pixel values in a minmax() function, and Sass tried to evaluate it as a math function with incompatible units.
    grid-template-columns: repeat(auto-fill, minmax(min(200px, 100%), 1fr)); I needed to be able to escape and not have Sass try to evaluate the code as a math function:
grid-template-columns: repeat(auto-fill, minmax(unquote("min(200px, 100%)"), 1fr));
This is not a huge pain point, but it was something I had to take time to investigate, time I could have been using to write HTML or CSS. Thankfully, that is something Ana Tudor has written about.
    All of these different pain points lead me to be tired of having to mess with tooling. It is another reason why I have considered un-Sassing.
    Verdict
    So what is my verdict — is it time to un-Sass?
Please don’t hate me, but my conclusion is: it depends. Maybe that’s not the definitive answer you were looking for.
    But you probably are not surprised. If you have been working in web development even a short amount of time, you know that there are very few definitive ways of doing things. There are a lot of different approaches, and just because someone else solves it differently, does not mean you are right and they are wrong (or vice versa). Most things come down to the project you are working on, your audience, and a host of other factors.
    For my personal site, yes, I would like to un-Sass. I want to kick the build process to the curb and eliminate those dependencies. I would also like for other developers to be able to view source on my CSS. You can’t view source on Sass. And part of the reason I write on my site is to share solutions that might benefit others, and making code more accessible is a nice maintenance enhancement.
    My personal site does not have a very large codebase. I could probably easily un-Sass it in a couple of days or over a weekend.
But for larger sites, like the codebase I work with at my job, I wouldn’t suggest un-Sassing. There is way too much code that would have to be refactored, and I am unable to justify the cost of that kind of effort. And honestly, it is not something I feel motivated to tackle. It works just fine the way that it is. And Sass is still a very good tool to use. It’s not “breaking” anything.
    Your project may be different and there might be more gains from un-Sassing than the project I work on. Again, it depends.
    The way forward
    It is an exciting time to be a CSS developer. The language is continuing to evolve and mature. And every day, it is incorporating new features that first came to us through other third-party tools such as Sass. It is always a good idea to stop and re-evaluate your technology decisions to determine if they still hold up or if more modern approaches would be a better way forward.
    That does not mean we have to go back and “fix” all of our old projects. And it might not mean doing a complete overhaul. A lot of newer techniques can live side by side with the older ones. We have a mix of both Sass variables and CSS custom properties in our codebase. They don’t work against each other. The great thing about web technologies is that they build on each other and there is usually backward compatibility.
    Don’t be afraid to try new things. And don’t judge your past work based on what you know today. You did the best you could given your skill level, the constraints of the project, and the technologies you had available. You can start to incorporate newer ways right alongside the old ones. Just build websites!
    Is it Time to Un-Sass? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  7. by: Abhishek Prakash
    Wed, 17 Sep 2025 13:42:32 GMT

Raspberry Pi Pico 2 starter kit from Elecrow is an educational device that integrates multiple sensors and components onto a single board for learning electronics and programming. Built around the dual-core RP2350 chip of the Raspberry Pi Pico 2, the kit includes 17 sensors, 20 RGB LEDs, and a 2.4-inch TFT color touchscreen in a portable case format.
    The kit is designed to eliminate the need for breadboarding, wiring, and soldering, allowing users to focus on programming concepts and sensor functionality. It comes with 21 structured tutorials that progress from basic to advanced levels, using Arduino IDE as the programming environment.
    In this article, I'll share my experience with this starter kit.
📋 Elecrow sent me this kit for review. The opinion is purely mine, based on my experience.
Technical specification
The kit comes in the form of a briefcase-styled plastic case. It weighs less than 350 grams and measures 19.5 x 17 x 4.6 cm.
At the core of this kit lies the Raspberry Pi Pico 2 RP2350. There is a 2.4-inch TFT touchscreen surrounded by seventeen sensors. These sensors are already connected to the Pico 2, so you don't need to make any manual connections to access them. The kit is powered by a USB Type-C port, and the same port is used for transferring project files to the board.
Light Sensor
Hall Sensor
Gas Sensor (MQ2)
Sound Sensor
Temperature & Humidity Sensor
MPU-6050 Accelerometer & Gyro
2.0 Ultrasonic Ranging Sensor
Touch Sensor
Buzzer
Servo Motor
Vibration Motor
Relay
Individual LEDs
RGB LED
Buttons
Linear Potentiometer
Infrared
My experience with Elecrow Pico 2 Starter Kit
    The kit comes preloaded with a few games and a program that lets you enable the LED lights and change their patterns. The games are Dinosaur Jump (the one you see in Chrome) and Snake.
The games are not as interesting as I would want them to be. The dinosaur moves way too slowly in the first stage. Even my four-year-old didn't have enough patience to play this 'slow game'. While the Snake game is better, there is a slight delay between a button press and the response on screen.
    But this is not what the kit is for. It is for exploring programming all those sensors on the board.
    Easier if you are familiar with the Arduino ecosystem
Here's the thing. If you are familiar with Arduino boards and their ecosystem, things will be a lot easier for you. I have been using Raspberry Pi for years but never used an Arduino or another microcontroller like the Pico board here.
I learned a few things for sure. You have to 'burn' the project code onto the board, and you have to do it each time you have a new project. This means that if you ran a program that sounds the buzzer and next you want to try a program that interacts with the ultrasound sensor, you have to put the new code on the Pico 2.
Elecrow does provide more than one set of documentation, but they are inconsistent with each other. The getting started guide should be improved, especially for beginners. It took me some time to figure things out based on the two documents and some web searches.
The web-based documentation does not mention that version 4.2.0 of the Raspberry Pi Pico/RP2040/RP2350 board package has to be explicitly added to the board manager in Arduino IDE. It is mentioned in the user manual PDF, though.
Elecrow provides source code for around 15 projects. The wiki on the web mentions a different source code link than the PDF user manual, which points to the source code on GitHub.
It doesn't end here. Most of the sample project codes on GitHub have different names for their folders and .ino files. In the Arduino ecosystem, both the .ino code file and the folder that contains it must have the same name; otherwise, the sketchbook won't be visible in Arduino IDE.
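For example (the names here are hypothetical), a sketch only loads cleanly when the layout looks like this:
Buzzer/
└── Buzzer.ino
If the folder were named buzzer_sample instead, Arduino IDE would not pick up Buzzer.ino as a sketchbook entry.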
    In my opinion, things would have been smoother if I were familiar with Arduino and the documentation was a bit more straightforward.
    Sample projects are simple and fun
    I did manage to overcome the initial hurdle and was able to run several of the provided projects.
    Now, the provided user manual does an excellent job at explaining the sample projects. It explains the objective of the experiment, actions that should be performed, working principles, and key elements of the program.
The document is excellent for understanding the sample projects.
Projects are mostly simple and explore the various sensors present on the kit: simple things like controlling an LED with a button, oscillating the servo motor, showing room temperature and humidity, measuring obstacle distance with the ultrasound sensor, etc.
The projects that involved an infrared receiver didn't compile. I'll debug the issue later, and if I am unable to fix it, I'll perhaps open a bug report on Elecrow's GitHub repo.
    To experiment, I even changed the code slightly. I can see that there is potential to modify the existing code into something else. For example, if the room temperature reaches a certain level, the servo motor starts rotating. There is potential here to explore and have fun.
    Above all, exploring this device made me familiar with Arduino. New skill unlocked 💪
    Conclusion
    This is a suitable option for schools, as they can have a bunch of these kits in their STEM lab. Children can start working on modifying the codes for their lab projects instead of struggling with wiring and soldering. The briefcase-style case also makes it easier to store without worrying about disturbing the wire connections. Perhaps there could be a discount on bulk orders; I am just guessing.
    Parents who have a little bit of Arduino experience or the willingness to learn can also get this as a present for their children. With a little guidance, they can build new things upon the existing sample projects, and that will help them explore the exciting world of electronics and programming.
    To the makers, if they could improve their getting-started guide and provide code consistent with Arduino IDE requirements, it would surely flatten the learning curve.
This kit is available for $37.99, which is a fair price for what it offers. Do refer to the official manual before starting, if you purchase the kit.
Explore Elecrow All-in-one Starter Kit for Pico 2
  8. by: Chris Coyier
    Tue, 16 Sep 2025 15:41:29 +0000

Chris and Stephen talk about how we use a Cloudflare Worker & HTMLRewriter to inject a very special <script> tag into the previews of the Pens you work on. This script has a lot of important jobs, so its presence is crucial, and getting it in there reliably can be a bit of a challenge.
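The episode doesn’t walk through CodePen’s actual code, but the general shape of a Worker that injects a script tag with HTMLRewriter looks something like this (a minimal sketch; the script URL and handler structure are my assumptions):
export default {
  async fetch(request) {
    // Fetch the original preview document.
    const response = await fetch(request);
    // Stream-rewrite the HTML, appending a script tag to <head>.
    return new HTMLRewriter()
      .on("head", {
        element(head) {
          head.append('<script src="https://example.com/preview-helper.js"></script>', { html: true });
        },
      })
      .transform(response);
  },
};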
    Time Jumps
00:06 Injecting a script into your code
01:10 What we talked about previously that led up to this
02:45 What are the jobs of this script?
07:54 How do we account for HTML pages?
10:22 Preview page address
20:02 How do we get the script in?
  9. by: Chris Coyier
    Mon, 15 Sep 2025 17:18:42 +0000

I found myself saying “The Edge” in a recent podcast with Stephen. I was talking about some server-side JavaScript that executes during a web request, and that it was advantageous that it happens at CDN nodes around the world rather than at one location only, so that it’s fast. That was kinda the whole point of “The Edge”: speed.
    I don’t hear the term bandied about much anymore, but it’s still a useful architectural concept that many use. Salma Alam-Naylor has a good explainer post.
    It’s just interesting how terms kinda just chill out in usage over time. They feel like such big important things at the time, that everyone has a thought about, then they just fade away, even if we’re all still doing and using the thing we were talking about.
Even a term like “SPA” (Single Page App) seemed like all anyone wanted to argue about for quite a while there, and now I see it chilling out. All the nuance and distinctions between that and a website with regular ol’ links and reloads have come to bear. Concepts like paint holding and view transitions make regular sites feel much more like SPAs, and concepts like server-side rendering make SPAs work as regular sites anyway. It’s not a fight anymore; it’s just technology.
    The more you understand, the more rote (and, dare I say, boring) all this becomes. Dave Rupert says:
    Design, too, can be and benefits from being a bit boring. Fortunately we have grug to guide us.
  10. by: Juan Diego Rodríguez
    Mon, 15 Sep 2025 14:31:06 +0000

No feature is truly “the worst” in CSS, right? After all, it’s all based on opinion and personal experience, but if we had to reach a consensus, checking the State of CSS 2025 results would be a good starting point. I did exactly that, jumped into the awards section, and there I found it: the “Most Hated Feature,” a title no CSS feature should have to bear…
This shocks me, if I’m being honest. Are trigonometric functions really that hated? I know “hated” is not the same as saying something is the “worst,” but it still has an awful ring to it. And I know I’m being a little dramatic here, since only “9.1% of respondents truly hate trigonometry.” But that’s still too much shade being thrown for my taste.
    I want to eliminate that 9.1%. So, in this series, I want to look at practical uses for CSS trigonometric functions. We’ll tackle them in pieces because there’s a lot to take in and I find it easiest to learn and retain information when it’s chunked into focused, digestible pieces. And we’ll start with what may be the most popular functions of the “worst” feature: sin() and cos().
    CSS Trigonometric Functions: The “Most Hated” CSS Feature
sin() and cos() (You are here!)
Tackling the CSS tan() Function (coming soon)
Inverse functions: asin(), acos(), atan() and atan2() (coming soon)
What the heck are cos() and sin() anyway?
This section is for those for whom cos() and sin() don’t quite click yet, or who simply want a refresher. If you aced trigonometry quizzes in high school, feel free to skip ahead to the next section!
What I find funny about cos() and sin() — and also why I think there is confusion around them — is the many ways we can describe them. We don’t have to look too hard. A quick glance at this Wikipedia page reveals an eye-watering number of super nuanced definitions.
    This is a learning problem in the web development field. I feel like some of those definitions are far too general and lack detail about the essence of what trigonometric functions like sin() and cos() can do. Conversely, other definitions are overly complex and academic, making them tough to grok without an advanced degree.
    Let’s stick to the sweet middle spot: the unit circle.
    Meet the unit circle. It is a circle with a radius of one unit:
    Right now it’s alone… in space. Let’s place it on the Cartesian coordinate system (the classic chart with X and Y axes). We describe each point in space in Cartesian coordinates:
The X coordinate: The horizontal axis, plotting the point towards the left or right.
The Y coordinate: The vertical axis, plotting the point towards the top or bottom.
We can move through the unit circle by an angle, which is measured from the positive X-axis going counter-clockwise.
CodePen Embed Fallback
We can go in a clockwise direction by using negative angles. As my physics teacher used to say, “Time is negative!”
    Notice how each angle lands on a unique point in the unit circle. How else can we describe that point using Cartesian coordinates?
When the angle is 0°, the X and Y coordinates are 1 and 0 (1, 0), respectively. We can deduce the Cartesian coordinates for a few other angles just as easily, like 90°, 180° and 270°. But for any other angle, we don’t immediately know where the point is located on the unit circle.
    If only there were a pair of functions that take an angle and give us our desired coordinates…
    You guessed it, the CSS cos() and sin() functions do exactly that. And they’re very closely related, where cos() is designed to handle the X coordinate and sin() returns the Y coordinate.
    Play with the toggle slider in the following demo to see the relationship between the two functions, and notice how they form a right triangle with the initial point on the unit circle:
CodePen Embed Fallback
I think that’s all you really need to know about cos() and sin() for the moment. They’re mapped to Cartesian coordinates, which allows us to track a point along the unit circle with an angle, no matter what size that circle happens to be.
Let’s dive into what we can actually use cos() and sin() for in our everyday CSS work. It’s always good to put a little real-world context to theoretical concepts like math.
    Circular layouts
    If we go by the unit circle definition of cos() and sin(), then it’s easy to see how they might be used to create circular layouts in CSS. The initial setup is a single row of circular elements:
CodePen Embed Fallback
Say we want to place each circular item around the outline of a larger circle instead. First, we would let CSS know the total number of elements and also each element’s index (the order it’s in), something we can do with an inline CSS variable that holds each item’s position:
<ul style="--total: 9">
  <li style="--i: 0">0</li>
  <li style="--i: 1">1</li>
  <li style="--i: 2">2</li>
  <li style="--i: 3">3</li>
  <li style="--i: 4">4</li>
  <li style="--i: 5">5</li>
  <li style="--i: 6">6</li>
  <li style="--i: 7">7</li>
  <li style="--i: 8">8</li>
</ul>
Note: This step will become much easier and more concise when the sibling-index() and sibling-count() functions gain support (and they’re really neat). I’m hardcoding the indexes with inline CSS variables in the meantime.
To place the items around the outline of a larger circle, we have to space them evenly by a certain angle. And to get that angle, we can divide 360deg (a full turn around the circle) by the total number of items, which is 9 in this specific example. Then, to get each element’s specific angle, we can multiply the angle spacing by the element’s index (i.e., position):
    li { --rotation: calc(360deg / var(--total) * var(--i)); } We also need to push the items away from the center, so we’ll assign a --radius value for the circle using another variable.
    ul { --radius: 10rem; } We have the element’s angle and radius. What’s left is to calculate the X and Y coordinates for each item.
    That’s where cos() and sin() come into the picture. We use them to get the X and Y coordinates that place each item around the unit circle, then multiply each coordinate by the --radius value to get an item’s final position on the bigger circle:
li {
  /* ... */
  position: absolute;
  transform: translateX(calc(cos(var(--rotation)) * var(--radius)))
             translateY(calc(sin(var(--rotation)) * var(--radius)));
}
That’s it! We have a series of nine circular items placed evenly around the outline of a larger circle:
CodePen Embed Fallback
And we didn’t need to use a bunch of magic numbers to do it! All we provide CSS with is the unit circle’s radius, and then CSS does all the trigonometric gobbledygook that makes so many of us call this the “worst” CSS feature. Hopefully, I’ve convinced you to soften your opinions on them if that’s what was holding you back!
    We aren’t limited to full circles, though! We can also have a semicircular arrangement by choosing 180deg instead of 360deg.
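Only the rotation math changes, a quick sketch using the same variables as before:
li {
  --rotation: calc(180deg / var(--total) * var(--i));
}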
CodePen Embed Fallback
This opens up lots of layout possibilities. Like, what if we want a circular menu that expands from a center point by transitioning the radius of the circle? We can totally do that:
    CodePen Embed Fallback Click or hover the heading and the menu items form around the circle!
    Wavy layouts
    There’s still more we can do with layouts! If, say, we plot the cos() and sin() coordinates on a two-axis graph, notice how they give us a pair of waves that periodically go up and down. And notice they are offset from each other along the horizontal (X) axis:
Where do these waves come from? If we think back to the unit circle we talked about earlier, the values of cos() and sin() oscillate between -1 and 1 as the angle around the unit circle varies. If we graph that oscillation, then we’ll get our waves and see that they’re sorta like reflections of each other.
    ⚠️ Auto-playing media
    Can we place an element following one of these waves? Absolutely. Let’s start with the same single row layout of circular items we made earlier. This time, though, the length of that row spans beyond the viewport, causing overflow.
CodePen Embed Fallback We’ll assign an index position for each item like we did before, but this time we don’t need to know the total number of items. We had nine items last time, so let’s bump that up to eleven and pretend like we don’t know that:
<ul>
  <li style="--i: 0"></li>
  <li style="--i: 1"></li>
  <li style="--i: 2"></li>
  <li style="--i: 3"></li>
  <li style="--i: 4"></li>
  <li style="--i: 5"></li>
  <li style="--i: 6"></li>
  <li style="--i: 7"></li>
  <li style="--i: 8"></li>
  <li style="--i: 9"></li>
  <li style="--i: 10"></li>
</ul>

We want to vary each element’s vertical position along either a sin() or cos() wave, meaning we translate each item’s position based on its order in the index. We’ll multiply an item’s index by a certain angle that is passed into the sin() function, and that will return a ratio that describes how high or low the element should be on the wave. The final thing is to multiply that result by a length value, which I calculated as half an item’s total size.
    Here’s the math in CSS-y terms:
    li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } I’m using a 60deg value because the waves it produces are smoother than some other values, but we can vary it as much as we want to get cooler waves. Play around with the toggle in the next demo and watch how the wave’s intensity changes with the angle:
    CodePen Embed Fallback This is a great example to see what we’re working with, but how would you use it in your work? Imagine we have two of these wavy chains of circles, and we want to intertwine them together, kinda like a DNA strand.
    Let’s say we’re starting with the HTML structure for two unordered lists nested inside another unordered list. The two nested unordered lists represent the two waves that form the chain pattern:
    <ul class="waves"> <!-- First wave --> <li> <ul class="principal"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> <!-- Second wave --> <li> <ul class="secondary"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> </ul> Pretty similar to the examples we’ve seen so far, right? We’re still working with an unordered list where the items are indexed with a CSS variable, but now we’re working with two of those lists… and they’re contained inside a third unordered list. We don’t have to structure this as lists, but I decided to leave them so I can use them as hooks for additional styling later.
    To avoid any problems, we’ll ignore the two direct <li> elements in the outer unordered list that contain the other lists using display: contents.
    .waves > li { display: contents; } Notice how one of the chains is the “principal” while the other is the “secondary.” The difference is that the “secondary” chain is positioned behind the “principal” chain. I’m using slightly different background colors for the items in each chain, so it’s easier to distinguish one from the other as you scroll through the block-level overflow.
    CodePen Embed Fallback We can reorder the chains using a stacking context:
    .principal { position: relative; z-index: 2; } .secondary { position: absolute; } This positions one chain on top of the other. Next, we will adjust each item’s vertical position with the “hated” sin() and cos() functions. Remember, they’re sorta like reflections of one another, so the variance between the two is what offsets the waves to form two intersecting chains of items:
    .principal { /* ... */ li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } } .secondary { /* ... */ li { transform: translateY(calc(cos(60deg * var(--i)) * var(--shape-size) / 2)); } } We can accentuate the offset even more by shifting the .secondary wave another 60deg:
    .secondary { /* ... */ li { transform: translateY(calc(cos(60deg * var(--i) + 60deg) * var(--shape-size) / 2)); } } The next demo shows how the waves intersect at an offset angle of 60deg. Adjust the slider toggle to see how the waves intersect at different angles:
    CodePen Embed Fallback Oh, I told you this could be used in a practical, real-world way. How about adding a little whimsy and flair to a hero banner:
    CodePen Embed Fallback Damped oscillatory animations
    The last example got me thinking: is there a way to use sin() and cos()‘s back and forth movement for animations? The first example that came to mind was an animation that also went back and forth, something like a pendulum or a bouncing ball.
    This is, of course, trivial since we can do it in a single animation declaration:
    .element { animation: someAnimation 1s infinite alternate; } This “back and forth” animation is called oscillatory movement. And while cos() or sin() are used to model oscillations in CSS, it would be like reinventing the wheel (albeit a clunkier one).
    I’ve learned that perfect oscillatory movement — like a pendulum that swings back and forth in perpetuity, or a ball that never stops bouncing — doesn’t really exist. Movement tends to decay over time, like a bouncing spring:
    ⚠️ Auto-playing media
There’s a specific term that describes this: damped oscillatory movement. And guess what? We can model it in CSS with the cos() function! If we graph it over time, then we will see it goes back and forth while getting closer to the resting position.¹
    Wikipedia has another animated example that nicely demonstrates what damped oscillation looks like.
In general, we can describe damped oscillation over time as a mathematical function:

f(t) = a · e^(−γt) · cos(ωt − α)

It’s composed of three parts:

e^(−γt): Due to the negative exponent, this factor becomes exponentially smaller as time passes, bringing the movement to a gradual stop. The exponent includes a damping constant (γ) that specifies how quickly the movement should decay.
a: This is the initial amplitude of the oscillation, i.e., the element’s initial position.
cos(ωt − α): This gives the movement its oscillation as time passes. Time is multiplied by the frequency (ω), which determines the element’s oscillation speed.² We can also subtract an offset α from time, which we can use to offset the initial oscillation of the system.

Okay, enough with all the theory! How do we do it in CSS? We’ll set the stage with a single circle sitting all by itself.
    CodePen Embed Fallback We have a few CSS variables we can define that will come in handy since we already know the formula we’re working with:
    :root { --circle-size: 60px; --amplitude: 200px; /* The amplitude is the distance, so let's write it in pixels*/ --damping: 0.3; --frequency: 0.8; --offset: calc(pi/2); /* This is the same as 90deg! (But in radians) */ } Given these variables, we can peek at what the animation would look like on a graph using a tool like GeoGebra:
From the graph, we can see that the animation starts at 0px (thanks to our offset), then peaks around 140px, and dies out around 25s in. I, for one, won’t be waiting 25 seconds for the animation to end, so let’s create a --progress property that will animate from 0 to 25 and act as our “time” in the function.
    Remember that to animate or transition a custom property, we’ve gotta register it with the @property at-rule.
    @property --progress { syntax: "<number>"; initial-value: 0; inherits: true; } @keyframes movement { from { --progress: 0; } to { --progress: 25; } } What’s left is to implement the prior formula for the element’s movement, which, written in CSS terms, looks like this:
    .circle { --oscillation: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation)); animation: movement 1s linear infinite; } CodePen Embed Fallback This gives a pretty satisfying animation by itself, but the damped motion is only on the x-axis. What would it look like if, instead, we applied the damped motion on both axes? To do this, we can copy the same oscillation formula for x, but replace the cos() with sin().
    .circle { --oscillation-x: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); --oscillation-y: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * sin(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation-x)) translateY(var(--oscillation-y)); animation: movement 1s linear infinite; } CodePen Embed Fallback This is even more satisfying! A circular and damped motion, all thanks to cos() and sin(). Besides looking great, how could this be used in a real layout?
    We don’t have to look too hard. Take, for example, this sidebar I recently made where the menu items pop in the viewport with a damped motion:
    CodePen Embed Fallback Pretty neat, right?!
    More trigonometry to come!
    Well, finding uses for the “most hated CSS feature” wasn’t that hard; maybe we should start showing some love to trigonometric functions. But wait. There are still several trigonometric functions in CSS we haven’t talked about. In the following posts, we’ll keep exploring what trig functions (like tan() and inverse functions) can do in CSS.
CSS Trigonometric Functions: The “Most Hated” CSS Feature

1. sin() and cos() (You are here!)
2. Tackling the CSS tan() Function (coming soon)
3. Inverse functions: asin(), acos(), atan() and atan2() (coming soon)

Also, before I forget, here is another demo I made using cos() and sin() that didn’t make the cut in this article, but it is still worth checking out because it dials up the swirly-ness from the last example to show how wacky we can get.
    CodePen Embed Fallback Footnotes
1. This kind of damped oscillatory movement, where the back and forth is more visible, is called underdamped oscillation. There are also overdamped and critically damped oscillations, but we won’t focus on them here. ↪️
2. In reality, the damping constant and the frequency are closely related. You can read more about damped oscillation in this paper. ↪️
    The “Most Hated” CSS Feature: cos() and sin() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Community
    Mon, 15 Sep 2025 11:58:13 GMT

Like it or not, AI is here to stay. For those who are concerned about data privacy, there are several local AI options available. Tools like Ollama and LM Studio make things easier.
    Now those options are for the desktop user and require significant computing power.
    What if you want to use the local AI on your smartphone? Sure, one way would be to deploy Ollama with a web GUI on your server and access it from your phone.
    But there is another way and that is to use an application that lets you install and use LLMs (or should I say SLMs, Small Language Models) on your phone directly instead of relying on your local AI server on another computer.
    Allow me to share my experience with experimenting with LLMs on a phone.
📋 Smartphones these days have powerful processors, and some even have dedicated AI processors on board. Snapdragon 8 Gen 3, Apple’s A17 Pro, and Google Tensor G4 are some of them. Yet, the models that can be run on a phone are often vastly different from the ones you use on a proper desktop or server.

Here's what you'll need:
An app that allows you to download the language models and interact with them.
Suitable LLMs that have been specifically created for running on mobile devices.

Apps for running LLMs locally on a smartphone
After researching, I decided to explore the following applications for this purpose. Let me share their features and details.
    1. MLC Chat
MLC Chat supports top models like Llama 3.2, Gemma 2, Phi 3.5, and Qwen 2.5, offering offline chat, translation, and multimodal tasks through a sleek interface. Its plug-and-play setup with pre-configured models, NPU optimization (e.g., Snapdragon 8 Gen 2+), and beginner-friendly features makes it a good choice for on-device AI.
    You can download the MLC Chat APK from their GitHub release page.
    Android is looking to forbid sideloading of APK files. I don't know what would happen then, but you can use APK files for now.
    Put the APK file on your Android device, go into Files and tap the APK file to begin installation. Enable “Install from Unknown Sources” in your device settings if prompted. Follow on-screen instructions to complete the installation.
Enable APK installation
Once installed, open the MLC Chat app and select a model from the list, like Phi-2, Gemma 2B, Llama-3 8B, or Mistral 7B. Tap the download icon to install the model. I recommend opting for smaller models like Phi-2. Models are downloaded on first use and cached locally for offline use.
Click on the download button to download a model
Tap the Chat icon next to the downloaded model. Start typing prompts to interact with the LLM offline. Use the reset icon to start a new conversation if needed.
    2. SmolChat (Android)
    SmolChat is an open-source Android app that runs any GGUF-format model (like Llama 3.2, Gemma 3n, or TinyLlama) directly on your device, offering a clean, ChatGPT-like interface for fully offline chatting, summarization, rewriting, and more.
    Install SmolChat from Google's Play Store. Open the app, choose a GGUF model from the app’s model list or manually download one from Hugging Face. If manually downloading, place the model file in the app’s designated storage directory (check app settings for the path).
    3. Google AI Edge Gallery
    Google AI Edge Gallery is an experimental open-source Android app (iOS soon) that brings Google's on-device AI power to your phone, letting you run powerful models like Gemma 3n and other Hugging Face models fully offline after download. This application makes use of Google’s LiteRT framework.
    You can download it from Google Play Store. Open the app and browse the list of provided models or manually download a compatible model from Hugging Face.
    Select the downloaded model and start a chat session. Enter text prompts or upload images (if supported by the model) to interact locally. Explore features like prompt discovery or vision-based queries if available.
    Top Mobile LLMs to try out
    Here are the best ones I’ve used:
| Model | My Experience | Best For |
| --- | --- | --- |
| Google’s Gemma 3n (2B) | Blazing-fast for multimodal tasks including image captions, translations, even solving math problems from photos. | Quick, visual-based AI assistance |
| Meta’s Llama 3.2 (1B/3B) | Strikes the perfect balance between size and smarts. It’s great for coding help and private chats. The 1B version runs smoothly even on mid-range phones. | Developers & privacy-conscious users |
| Microsoft’s Phi-3 Mini (3.8B) | Shockingly good at summarizing long documents despite its small size. | Students, researchers, or anyone drowning in PDFs |
| Alibaba’s Qwen-2.5 (1.8B) | Surprisingly strong at visual question answering—ask it about an image, and it actually understands! | Multimodal experiments |
| TinyLlama-1.1B | The lightweight champ runs on almost any device without breaking a sweat. | Older phones or users who just need a simple chatbot |

All these models use aggressive quantization (GGUF/safetensors formats), so they’re tiny but still powerful. You can grab them from Hugging Face—just download, load into an app, and you’re set.
    Challenges I faced while running LLMs Locally on Android smartphone
    Getting large language models (LLMs) to run smoothly on my phone has been equally exhilarating and frustrating.
    On my Snapdragon 8 Gen 2 phone, models like Llama 3-4B run at a decent 8-10 tokens per second, which is usable for quick queries. But when I tried the same on my backup Galaxy A54 (6 GB RAM), it choked. Loading even a 2B model pushed the device to its limits. I quickly learned that Phi-3-mini (3.8B) or Gemma 2B are far more practical for mid-range hardware.
    The first time I ran a local AI session, I was shocked to see 50% battery gone in under 90 minutes. MLC Chat offers power-saving mode for this purpose. Turning off background apps to free up RAM also helps.
    I also experimented with 4-bit quantized models (like Qwen-1.5-2B-Q4) to save storage but noticed they struggle with complex reasoning. For medical or legal queries, I had to switch back to 8-bit versions. It was slower but far more reliable.
    Conclusion
I love the idea of having an AI assistant that works exclusively for me: no monthly fees, no data leaks. Need a translator in a remote village? A virtual assistant on a long flight? A private brainstorming partner for sensitive ideas? Your phone becomes all of these while staying offline and untraceable.
I won’t lie, it’s not perfect. Your phone isn’t a data center, so you’ll face challenges like battery drain and occasional overheating. But in return, you get total privacy, zero costs, and offline access.
    The future of AI isn’t just in the cloud, it’s also on your device.
    Author Info
Bhuwan Mishra is a fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
  12. by: Abhishek Prakash
    Sun, 14 Sep 2025 05:37:41 GMT

    The upcoming Ubuntu 25.10 release features a controversial move to replace the classic sudo command with its Rust-based implementation, sudo-rs.
This move could raise numerous questions for you. Like, why opt for this change? What's wrong with the original? How would you use this new sudo? What happens to the old one?
    I will answer all these questions in this article.
    📝TLDR;
If you are a regular end user who uses sudo to run commands with root privileges, nothing changes for you on the surface, except for some error and warning messages. You'll continue using sudo as you did before, and it will automatically use the Rust-based sudo underneath. However, if you are a sysadmin with a custom sudo configuration, you should start paying attention, as some features have changed.

What is sudo-rs?
sudo-rs is an implementation of the classic sudo and su written in the Rust programming language, which is known for its memory safety. sudo-rs is not 100% compatible with sudo, as it drops some features and implements a few of its own. The new tool is under heavy development and may implement some of the missing sudo features over time.
    Why sudo-rs?
Don't fix what's not broken, right? Perhaps not. The Ubuntu developer discussion cited these primary reasons for going with the Rust-based sudo:
Memory safety: Rust's borrow checker provides better memory management and prevents common security vulnerabilities.
Modern codebase: Easier to maintain and evolve compared to 30-year-old C code.
Better defaults: Removes outdated features that might now be considered security risks.
Younger contributor base: Young developers are opting for modern languages like Rust instead of C. Rust's safety features also make it easier for new developers to contribute confidently.

Basically, the 30-year-old codebase of sudo is complicated, which makes it difficult to patch or implement new features. Writing from scratch is easier, and the use of a modern, memory-safe language will also help attract contributions from a broader pool of developers.
Please note that the sudo-rs dev team is in touch with the maintainer of the original sudo, and this collaboration has surfaced issues that were fixed not only in the new Rust-based sudo but also in the original sudo.
    So from what it seems, sudo-rs is the natural evolution over the classic sudo.
    What changes between sudo and sudo-rs?
Not much, from a regular end user's perspective. You'll still be typing sudo as usual while it runs sudo-rs in the background. Some warning or error messages may have different text, but that's about it.
For sysadmins and advanced users, there are a few things missing for now, and some might not be implemented at all. For example, sudo-rs will not include the sendmail support of the original sudo, which was used for sending notifications about sudo usage.
sudo-rs always uses PAM for authentication, and thus your system must be set up for PAM. sudo-rs will use the sudo and sudo-i service configuration, meaning that resource limits, umasks, etc. have to be configured via PAM and not through the sudoers file.
    Wildcards are not supported in argument positions for a command to prevent common configuration mistakes in the sudoers file.
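To make that concrete, here is a hypothetical sudoers rule (my own illustration, not from the Ubuntu discussion) that classic sudo accepts but sudo-rs rejects, because the wildcard sits in the argument position:

```
# Hypothetical rule: the * in the argument position is what sudo-rs refuses
alice ALL=(root) /usr/bin/systemctl restart *
```

Argument wildcards like this are a well-known footgun in classic sudo, since a crafted argument list can often escape the intended restriction.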
    Using sudo or sudo-rs in Ubuntu
    In Ubuntu 25.10, the command sudo is softlinked to sudo-rs. So, you'll be using sudo as always but underneath, it will be running the new sudo-rs.
The original sudo is still there in the system as sudo-ws. The name refers to sudo.ws, the official website of the classic sudo project.
    If you want to use the OG sudo, you can just replace sudo with sudo-ws.
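For instance, a quick side-by-side (assuming the sudo-ws binary name described above):

```bash
# Ubuntu 25.10: both commands elevate privileges, via different implementations
sudo apt update      # resolves through the symlink to sudo-rs
sudo-ws apt update   # explicitly invokes the classic C sudo
```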
    As stated above, there are hardly any differences visible for regular users except for the slightly changed error and warning messages.
At least until Ubuntu 26.04, you can make the classic sudo the default by updating the alternatives, although I would advise against it. Unless you have a solid reason, there is no harm in using the Rust-based sudo. Clearly, this is the future anyway.
sudo update-alternatives --config sudo

💡 sudo-rs is available in the universe repository starting with Ubuntu 24.04. If you want to test it, you can type sudo-rs instead of sudo in your commands. Other distributions may also have this package available.

sudo-rs is not the only alternative to sudo
    Surprised? There are several alternatives to sudo that have been in existence for some years now.
    There is this doas command-line tool that can be considered a simplified, minimal version of sudo.
Another Rust-based implementation of sudo-like functionality is RootAsRole.
Some may even count run0 from systemd as an alternative to sudo. It's not in the same league in my opinion, but it serves a similar purpose.
    The official sudo website lists a few more alternatives, but I think not all of them are seeing active development.
    FAQ
    Let's summarize and answer some of your frequently asked questions on sudo-rs inclusion.
    What is sudo-rs?
sudo-rs is a re-implementation of the classic C-based sudo, written in the memory-safe Rust programming language.
    Do I have to use sudo-rs command instead of sudo?
No. Starting with Ubuntu 25.10, sudo is softlinked to sudo-rs, which means that while you continue using sudo as you did in previous versions, it will automatically run sudo-rs underneath.
    Can I remove sudo-rs and go back to original sudo?
Yes. The original sudo is available as the sudo-ws command, and you can use update-alternatives to set it as the default sudo. But this is only possible until Ubuntu 26.04; Canonical plans to test sudo-rs as the only sudo mechanism in 26.10.
    What changes between sudo and sudo-rs?
Nothing for common end users. However, advanced, sysadmin-oriented features like sendmail support and wildcards in the sudoers file have changed. Sysadmins should read the sudo-rs man page for more details.
    Conclusion
In my view, you don't have much to worry about if you are a regular user who has never touched the sudo config file. Managing servers with a custom sudo config? You should pay attention.
Now, was it a wise decision to replace a (perfectly?) working piece of software with a Rust rewrite? Is it another example of the 'let's do it in Rust' phenomenon sweeping the dev world? Share your opinion in the comments.
  13. Exploring Ansible Modules

    by: Abhishek Prakash
    Sat, 13 Sep 2025 10:55:42 +0530

    Ansible is a powerful automation tool that simplifies the management and configuration of systems.
    At the heart of Ansible's functionality are modules, which are reusable scripts designed to perform specific tasks on remote hosts.
    These modules allow users to automate a wide range of tasks, from installing packages to managing services, all with the aim of maintaining their systems' desired state.
    This article will explain what Ansible modules are, how to use them, and provide real-world examples to demonstrate their effectiveness.
    What is an Ansible Module?
    An Ansible module is a reusable, standalone script that performs a specific task or operation on a remote host. Modules can manage system resources like packages, services, files, and users, among other things. They are the building blocks for creating Ansible playbooks, which define the automation workflows for configuring and managing systems.
Ansible modules are designed to be idempotent, meaning they bring the system to the desired state without applying unnecessary changes if it is already in that state. This makes Ansible operations predictable and repeatable.
    Modules can be written in any programming language, but most are in Python. Ansible ships with a large number of built-in modules, and there are also many community-contributed modules available.
    Additionally, you can write custom modules to meet specific needs.
    Here's a simple syntax to get you started:
---
- name: My task name
  hosts: group_name  # Group of hosts to run the task on
  become: true       # Gain root privileges (if needed)
  tasks:
    - name: Task description
      module_name:
        arguments:   # Module-specific arguments

This is a basic template for defining tasks in your Ansible playbooks.
    Ansible Modules - Real-world examples
    Let's examine some real-world examples to understand how modules work in action.
    Example 1: Installing a package
Let's use the yum module to install the Apache web server on a Rocky Linux host.
---
- name: Install Apache web server
  hosts: webservers
  tasks:
    - name: Install httpd package
      yum:
        name: httpd
        state: present

In this playbook:
The hosts directive specifies that this playbook will run on hosts in the webservers group.
The yum module is used to ensure that the httpd package is installed.

Let's run the above playbook:
ansible-playbook playbook.yml

After a successful playbook execution, you will see output like the following:
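The original screenshot isn't reproduced here, but a successful run typically prints something along these lines (the host name web1 is hypothetical):

```
PLAY [Install Apache web server] *********************************************

TASK [Gathering Facts] *******************************************************
ok: [web1]

TASK [Install httpd package] *************************************************
changed: [web1]

PLAY RECAP *******************************************************************
web1 : ok=2  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
```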
    Example 2: Managing services
    Now, let's use the service module to ensure that the Apache web server is started and enabled to start on boot.
---
- name: Ensure Apache is running and enabled
  hosts: webservers
  tasks:
    - name: Start and enable httpd service
      service:
        name: httpd
        state: started
        enabled: yes

In this playbook:
The service module is used to start the httpd service and enable it to start at boot.

Now, run the above playbook:
ansible-playbook playbook.yml

Output:
    10 common Ansible modules and their usage
    In this section, I'll show you some of the most commonly used Ansible modules and their usage.
    1. ping
    The ping module is used to test the connection to the target hosts. It is often used to ensure that the target hosts are reachable and responsive. This module is particularly useful for troubleshooting connectivity issues.
---
- name: Test connectivity
  hosts: all
  tasks:
    - name: Ping all hosts
      ping:

This Ansible playbook, named Test connectivity, checks the network connectivity of all hosts in the inventory. It does so by running a single task: sending a ping request to each host. The task, named Ping all hosts, uses the built-in ping module to ensure that every host is reachable and responding to network requests.
2. copy
    The copy module copies files from the local machine to the remote host. It is used to transfer configuration files, scripts, or any other files that need to be present on the remote system. This module simplifies file distribution across multiple hosts.
---
- name: Copy a file to remote host
  hosts: webservers
  tasks:
    - name: Copy index.html
      copy:
        src: /tmp/index.html
        dest: /var/www/html/index.html

The above playbook targets hosts in the webservers group and includes a single task. This task uses the copy module to transfer a file named index.html from the local source path /tmp/index.html to the destination path /var/www/html/index.html on each remote host.
3. user
    The user module manages user accounts. It can create, delete, and manage the properties of user accounts on the remote system. This module is essential for ensuring that the correct users are present on the system with the appropriate permissions.
---
- name: Ensure a user exists
  hosts: all
  tasks:
    - name: Create a user
      user:
        name: johndoe
        state: present
        groups: sudo

This playbook contains a single task, which uses the user module to ensure that a user named johndoe exists on each host. Additionally, it assigns this user to the sudo group, granting administrative privileges.
    4. package
    The package module is a generic way to manage packages across different package managers. It abstracts the differences between package managers like yum, apt, and dnf, providing a consistent interface for package management tasks. This module helps streamline the installation and management of software packages.
---
- name: Install packages
  hosts: all
  tasks:
    - name: Ensure curl is installed
      package:
        name: curl
        state: present

The above playbook uses the package module to ensure that the curl package is installed on each host. The desired state of the curl package is set to present, meaning it will be installed if it is not already available on the host.
    5. shell
The shell module is used to execute commands on the remote hosts. It allows for running shell commands with the full capabilities of the shell, which is useful for executing ad-hoc commands and scripts on remote systems.
---
- name: Run shell commands
  hosts: all
  tasks:
    - name: Run a shell command
      shell: echo "Hello, World!"

This playbook uses the shell module to execute the command echo "Hello, World!" on each host. This command will output the text Hello, World! in the shell of each remote host.
6. git
    The git module manages Git repositories. It can clone repositories, check out specific branches or commits, and update the repositories. This module is essential for deploying code and configuration managed in Git repositories.
---
- name: Clone a Git repository
  hosts: all
  tasks:
    - name: Clone a repository
      git:
        repo: 'https://github.com/example/repo.git'
        dest: /tmp

This playbook uses the git module to clone the repository from https://github.com/example/repo.git into the /tmp directory on each host.
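A hedged variation: the git module also accepts a version parameter for pinning the checkout to a branch, tag, or commit, which is usually what you want for repeatable deployments. The repository URL and branch here are placeholders:

```yaml
- name: Clone a specific branch
  git:
    repo: 'https://github.com/example/repo.git'
    dest: /tmp/repo
    version: main
```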
    7. template
    The template module is used to copy and render Jinja2 templates. It allows for the dynamic creation of configuration files and scripts based on template files and variables. This module is crucial for creating customized and dynamic configuration files.
---
- name: Deploy a configuration file from template
  hosts: all
  tasks:
    - name: Copy template file
      template:
        src: /tmp/template.j2
        dest: /etc/nginx/nginx.conf

This playbook uses the template module to deploy a configuration file by copying the template file located at /tmp/template.j2 on the control machine to /etc/nginx/nginx.conf on each host. The template file can contain Jinja2 variables that are rendered with the appropriate values during the copy process.
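For illustration, a minimal sketch of what such a template might contain. The {{ ... }} placeholders are rendered per host; ansible_processor_vcpus and inventory_hostname are standard Ansible variables, but the nginx directives here are just an example:

```
# /tmp/template.j2 (hypothetical)
# Rendered per host when the playbook runs
user nginx;
worker_processes {{ ansible_processor_vcpus | default(2) }};
# e.g., name the server after the host Ansible is configuring:
# server_name {{ inventory_hostname }};
```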
    8. file
    The file module manages file and directory properties. It can create, delete, and manage the properties of files and directories. This module ensures that the correct file system structure and permissions are in place.
---
- name: Ensure a directory exists
  hosts: all
  tasks:
    - name: Create a directory
      file:
        path: /tmp/mydir
        state: directory

The above playbook uses the file module to ensure that a directory at the path /tmp/mydir exists on each host. If the directory does not already exist, it will be created.
    9. service
    The service module manages system services. It can start, stop, restart, and enable services on the remote system. This module is essential for ensuring that the necessary services are running and configured to start at boot.
---
- name: Ensure a service is running
  hosts: all
  tasks:
    - name: Start a service
      service:
        name: nginx
        state: started

This playbook uses the Ansible service module to ensure that the nginx service is running on each host. If the service is not already started, it will be initiated.
10. apt
    The apt module manages packages using the apt package manager (for Debian-based systems). It handles package installation, removal, and updating. This module is vital for managing software on systems using the Debian package management system.
---
- name: Install a package using apt
  hosts: all
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

This playbook uses the apt module, which is specific to Debian-based systems like Ubuntu, to manage packages. It specifies that the package nginx should be installed, ensuring Nginx is present and available on all targeted hosts after the playbook is executed.
Conclusion
    You explored the fundamental concept of Ansible modules, which are essential for automating tasks on remote hosts.
    I showed the basic syntax for using Ansible modules and provided real-world examples of installing packages and managing services. Additionally, I listed and described common and popular Ansible modules, demonstrating their usage and importance in automating various system tasks.
This is just a glimpse, as we have detailed tutorials on several Ansible modules with real-world examples.
    To further enhance your skills, explore Ansible's extensive documentation and community resources to discover additional modules and advanced configurations.
  14. by: Daniel Schwarz
    Fri, 12 Sep 2025 14:20:45 +0000

    When I first started messing around with code, rounded corners required five background images or an image sprite likely created in Photoshop, so when border-radius came onto the scene, I remember everybody thinking that it was the best thing ever. Web designs were very square at the time, so to have border-radius was super cool, and it saved us a lot of time, too.
    Chris’ border-radius article from 2009, which at the time of writing is 16 years old (wait, how old am I?!), includes vendor prefixes for older web browsers, including “old Konqueror browsers” (-khtml-border-radius). What a time to be alive!
    We’re much less excited about rounded corners nowadays. In fact, sharp corners have made a comeback and are just as popular now, as are squircles (square-ish circles or circle-y squares, take your pick), which is exactly what the corner-shape CSS property enables us to create (in addition to many other cool UI effects that I’ll be walking you through today).
At the time of writing, only Chrome 139 and above supports corner-shape, which must be used with the border-radius property and/or any of the related individual properties (i.e., border-top-left-radius, border-top-right-radius, border-bottom-right-radius, and border-bottom-left-radius):
    CodePen Embed Fallback Snipped corners using corner-shape: bevel
    These snipped corners are becoming more and more popular as UI designers embrace brutalist aesthetics.
    In the example above, it’s as easy as using corner-shape: bevel for the snipped corners effect and then border-bottom-right-radius: 16px for the size.
    corner-shape: bevel; border-bottom-right-radius: 16px; We can do the same thing and it really works with a cyberpunk aesthetic:
    CodePen Embed Fallback Slanted sections using corner-shape: bevel
Slanted sections are a visual effect that’s even more popular, probably isn’t going anywhere, and again, helps elements look a lot less like the boxes that they are.
Before we dive in though, it’s important to keep in mind that each border radius has two semi-major axes, a horizontal axis and a vertical axis, with a ‘point’ (to use vector terminology) on each axis. In the example above, both are set to 16px, so both points move along their respective axis by that amount, away from their corner of course, and then the beveled line is drawn between them. In the slanted section example below, however, we need to supply a different point value for each axis, like this:
    corner-shape: bevel; border-bottom-right-radius: 100% 50px; CodePen Embed Fallback The first point moves along 100% of the horizontal axis whereas the second point travels 50px of the vertical axis, and then the beveled line is drawn between them, creating the slant that you see above.
    By the way, having different values for each axis and border radius is exactly how those cool border radius blobs are made.
    Sale tags using corner-shape: round bevel bevel round
You’ve seen those sale tags on almost every e-commerce website, either as images or with rounded corners and without the pointy part (other techniques just aren’t worth the trouble). But now we can carve out the proper shape using two different types of corner-shape at once, as well as a whole set of border radius values:
    CodePen Embed Fallback You’ll need corner-shape: round bevel bevel round to start off. The order flows clockwise, starting from the top-left, as follows:
    top-left top-right bottom-right bottom-left Just like with border-radius. You can omit some values, causing them to be inferred from other values, but both the inference logic and resulting value syntax lack clarity, so I’d just avoid this, especially since we’re about to explore a more complex border-radius:
    corner-shape: round bevel bevel round; border-radius: 16px 48px 48px 16px / 16px 50% 50% 16px; Left of the forward slash (/) we have the horizontal-axis values of each corner in the order mentioned above, and on the right of the /, the vertical-axis values. So, to be clear, the first and fifth values correspond to the same corner, as do the second and sixth, and so on. You can unpack the shorthand if it’s easier to read:
    border-top-left-radius: 16px; border-top-right-radius: 48px 50%; border-bottom-right-radius: 48px 50%; border-bottom-left-radius: 16px; Up until now, we’ve not really needed to fully understand the border radius syntax. But now that we have corner-shape, it’s definitely worth doing so.
    As for the actual values, 16px corresponds to the round corners (this one’s easy to understand) while the 48px 50% values are for the bevel ones, meaning that the corners are ‘drawn’ from 48px horizontally to 50% vertically, which is why and how they head into a point.
    Regarding borders — yes, the pointy parts would look nicer if they were slightly rounded, but using borders and outlines on these elements yields unpredictable (but I suspect intended) results due to how browsers draw the corners, which sucks.
    Arrow crumbs using the same method
    Yep, same thing.
    CodePen Embed Fallback We essentially have a grid row with negative margins, but because we can’t create ‘inset’ arrows or use borders/outlines, we have to create an effect where the fake borders of certain arrows bleed into the next. This is done by nesting the exact same shape in the arrows and then applying something to the effect of padding-right: 3px, where 3px is the value of the would-be border. The code comments below should explain it in more detail (the complete code in the Pen is quite interesting, though):
    <nav> <ol> <li> <a>Step 1</a> </li> <li> <a>Step 2</a> </li> <li> <a>Step 3</a> </li> </ol> </nav> ol { /* Clip n’ round */ overflow: clip; border-radius: 16px; li { /* Arrow color */ background: hsl(270 100% 30%); /* Reverses the z-indexes, making the arrows stack */ /* Result: 2, 1, 0, ... (sibling-x requires Chrome 138+) */ z-index: calc((sibling-index() * -1) + sibling-count()); &:not(:last-child) { /* Arrow width */ padding-right: 3px; /* Arrow shape */ corner-shape: bevel; border-radius: 0 32px 32px 0 / 0 50% 50% 0; /* Pull the next one into this one */ margin-right: -32px; } a { /* Same shape */ corner-shape: inherit; border-radius: inherit; /* Overlay background */ background: hsl(270 100% 50%); } } } Tooltips using corner-shape: scoop
    CodePen Embed Fallback To create this tooltip style, I’ve used a popover, anchor positioning (to position the caret relative to the tooltip), and corner-shape: scoop. The caret shape is the same as the arrow shape used in the examples above, so feel free to switch scoop to bevel if you prefer the classic triangle tooltips.
    A quick walkthrough:
    <!-- Connect button to tooltip --> <button popovertarget="tooltip" id="button">Click for tip</button> <!-- Anchor tooltip to button --> <div anchor="button" id="tooltip" popover>Don’t eat yellow snow</div> #tooltip { /* Define anchor */ anchor-name: --tooltip; /* Necessary reset */ margin: 0; /* Center vertically */ align-self: anchor-center; /* Pin to right side + 15 */ left: calc(anchor(right) + 15px); &::after { /* Create caret */ content: ""; width: 5px; height: 10px; corner-shape: scoop; border-top-left-radius: 100% 50%; border-bottom-left-radius: 100% 50%; /* Anchor to tooltip */ position-anchor: --tooltip; /* Center vertically */ align-self: anchor-center; /* Pin to left side */ right: anchor(left); /* Popovers have this already (required otherwise) */ position: fixed; } } If you’d rather these were hover-triggered, the upcoming Interest Invoker API is what you’re looking for.
    Realistic highlighting using corner-shape: squircle bevel
The <mark> element, used for semantic highlighting, defaults to a yellow background, but it doesn’t exactly create a highlighter effect. By adding the following two lines of CSS, which admittedly I discovered by experimenting with completely random values, we can make it look more like a hand-waved highlight:
    mark { /* A...squevel? */ corner-shape: squircle bevel; border-radius: 50% / 1.1rem 0.5rem 0.9rem 0.7rem; /* Prevents background-break when wrapping */ box-decoration-break: clone; } CodePen Embed Fallback We can also use squircle by itself to create those fancy-rounded app icons, or use them on buttons/cards/form controls/etc. if you think the ‘old’ border radius is starting to look a bit stale:
    CodePen Embed Fallback CodePen Embed Fallback Hand-drawn boxes using the same method
    Same thing, only larger. Kind of looks like a hand-drawn box?
    CodePen Embed Fallback Admittedly, this effect doesn’t look as awesome on a larger scale, so if you’re really looking to wow and create something more akin to the Red Dead Redemption aesthetic, this border-image approach would be better.
    Clip a background with corner-shape: notch
Notched border radii are ugly and I won’t hear otherwise. I don’t think you’ll want to use them to create a visual effect, but I’ve learned that they’re useful for clipping a background: set the axis you don’t care about to 50%, and set the axis on the side you want to clip to the amount you want to clip. So if you wanted to clip 30px off the background from the left, for example, you’d choose 30px for the horizontal axes and 50% for the vertical axes (for the -left-radius properties only, of course).
    corner-shape: notch; border-top-left-radius: 30px 50%; border-bottom-left-radius: 30px 50%; CodePen Embed Fallback Conclusion
    So, corner-shape is actually a helluva lot of fun. It certainly has more uses than I expected, and no doubt with some experimentation you’ll come up with some more. With that in mind, I’ll leave it to you CSS-Tricksters to mess around with (remember though, you’ll need to be using Chrome 139 or higher).
    As a parting gift, I leave you with this very cool but completely useless CSS Tie Fighter, made with corner-shape and anchor positioning:
    CodePen Embed Fallback What Can We Actually Do With corner-shape? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  15. by: Abhishek Prakash
    Fri, 12 Sep 2025 17:02:47 +0530

    Another week, another batch of Linux goodies! 🎉
    Let me quickly summarize them for you.
    Spaces in filenames are still tripping people up, diff still scares beginners, and tcpdump still lets you spy on HTTP traffic like a hacker in a hoodie 🕵️‍♂️ (don’t worry, it’s for learning!).
    If containers are your thing, we’ve got a guide on checking Docker disk usage (before your server starts screaming for space) and some practical Ansible copy module examples to make automation less painful.
    Plus, our tool discovery section is stacked with ntfy, your new push-notification buddy, and Pods, the slick Podman GUI you didn’t know you needed.
    And of course, we wrap up with Linux news, from AlmaLinux updates to Linus reminding everyone to write better commit messages. Because good commits = good karma. ✨
     
     
      This post is for subscribers only
  16. by: LHB Community
    Fri, 12 Sep 2025 10:48:27 +0530

    You already know the basics of tcpdump from our guide. It helps you watch live traffic, spot misconfigurations, and check that sensitive data is handled safely.
    Let’s put tcpdump to some practical work. The skills you practice here also align with objectives in CompTIA Security+ or network security roles.
    In this hands-on tutorial, we’ll run examples against the test site http://testphp.vulnweb.com to filter GET, POST, and sensitive data.
    By focusing on high-value traffic, security engineers can efficiently audit network flows and identify potential risks without being overwhelmed by irrelevant packets.
    1. Observing Network Behaviour
sudo tcpdump -i eth0 host testphp.vulnweb.com

This captures traffic to and from testphp.vulnweb.com.
    Key observations you should focus on as a security engineer:
Identify backend infrastructure and exposed IPs
Check if sensitive data is transmitted in plaintext
Monitor response size and timing to detect anomalies
Ensure connection health is stable (ACKs, retransmits)

From the output above, let's zoom in on this part:
23:55:01.936700 IP 192.168.64.3.52526 > ec2-44-228-249-3.us-west-2.compute.amazonaws.com.http: Flags [P.], length 339: HTTP: GET / HTTP/1.1
23:55:02.133596 IP ec2-44-228-249-3.us-west-2.compute.amazonaws.com.http > 192.168.64.3.52526: Flags [P.], length 2559: HTTP: HTTP/1.1 200 OK
Flags [.], ack ..., length 0

Breaking it down:
| Line / Field | What It Shows |
| --- | --- |
| 192.168.64.3.52526 > ec2-... | Your local machine (source port 52526) talking to an AWS EC2 host on port 80 (HTTP). |
| Flags [P.] length 339 | PSH + ACK: this packet contains data, the HTTP GET request. |
| ec2-... > 192.168.64.3.52526 | The server’s response back to you on the same TCP session. |
| length 2559: HTTP/1.1 200 OK | 2.5 KB payload from the server, confirming the 200 OK response. |
| Flags [.], ack ..., length 0 | Plain ACK packets, no payload, normal TCP housekeeping. |

💡 Regularly monitor endpoints to detect unusual traffic spikes or misconfigured services early. Do not use this for unauthorized scanning.

2. Filter at the TCP Payload Level
Before you filter at the TCP payload level, you should first understand the TCP header.
    Each TCP segment has a header that contains the information needed for reliable transmission.
| Field | Offset (bytes) | Size (bytes) | Size (bits) | Purpose / Description |
| --- | --- | --- | --- | --- |
| Source Port | 0–1 | 2 bytes | 16 bits | Port number of the sending process on the source host |
| Destination Port | 2–3 | 2 bytes | 16 bits | Port number of the receiving process on the destination host |
| Sequence Number | 4–7 | 4 bytes | 32 bits | Indicates the order of bytes sent; required for reliable delivery |
| Acknowledgment Number | 8–11 | 4 bytes | 32 bits | Confirms which bytes have been received |
| Data Offset | 12 (bits 0–3) | — | 4 bits | Shows where the header ends and the payload begins |
| Reserved | 12 (bits 4–6) | — | 3 bits | Reserved for future use; normally zero |
| TCP Flags (NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN) | 12–13 (bits 7–15) | — | 9 bits | TCP control bits managing the TCP state machine |
| Window Size | 14–15 | 2 bytes | 16 bits | Flow control: how much data the receiver can accept |
| Checksum | 16–17 | 2 bytes | 16 bits | Integrity check over header and payload |
| Urgent Pointer | 18–19 | 2 bytes | 16 bits | Marks urgent data; rarely used today |
| Options (if present) | 20–59 | 0–40 bytes | 0–320 bits | Optional parameters; extend header beyond the minimum 20 bytes |

💡 Knowing the Data Offset lets you inspect payload start locations. This helps monitor HTTP methods and headers for auditing, without modifying traffic.

Let's take a look at this filter:
tcp[((tcp[12:1] & 0xf0) >> 2):4]

This extracts the first four bytes of the payload based on the Data Offset, which is key for monitoring GET/POST requests safely.
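Here is how that expression breaks down, piece by piece (my own annotation of the filter above):

```
tcp[12:1]            # 1 byte at offset 12: the Data Offset field lives in its high nibble
tcp[12:1] & 0xf0     # mask off the low nibble, keeping only the Data Offset bits
(...) >> 2           # >> 4 would give the offset in 32-bit words; shifting by
                     # only 2 also multiplies by 4, yielding the header length in bytes
tcp[<header>:4]      # read the 4 bytes immediately after the header: the payload start
```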
    Capturing HTTP GET Requests
The command below selects packets whose payload starts with 0x47455420, the hexadecimal encoding of 'GET ' (0x47, 0x45, 0x54, plus 0x20 for the trailing space).
sudo tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'

Capturing HTTP POST Requests
    The command below matches packets whose payload begins with 0x504f5354, the hex for 'POST'.
sudo tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354'

💡 Monitor GET/POST patterns to confirm normal traffic and detect misconfigurations. Avoid capturing other users' sensitive data without authorization.

3. Using grep and egrep to get passwords and cookies
    You can use egrep to search for text using patterns. Unlike grep, egrep supports extended regular expressions, so you can match multiple patterns at once using symbols like | (OR) or () for grouping.
💡 Use egrep to quickly filter output for lines that match any of your patterns, e.g., certain HTTP methods, headers, or parameter names.

Monitoring Sensitive POST Data
sudo tcpdump -s 0 -A -n -l | egrep -i "POST /|pwd=|passwd=|pass=|password=|Host:"

Use this command only in controlled lab environments or on traffic you are authorized to monitor. Regularly verify that credentials are never transmitted over HTTP.
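On the vulnweb test site, a matching capture looks roughly like this. This is a reconstructed, hypothetical packet using the site's public test credentials, not real user data:

```
POST /userinfo.php HTTP/1.1
Host: testphp.vulnweb.com
Content-Type: application/x-www-form-urlencoded

uname=test&pass=test
```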
    Observing HTTP Cookies
sudo tcpdump -nn -A -s0 -l | egrep -i 'Set-Cookie|Host:|Cookie:'

This is useful for:
Inspecting session IDs and cookies for secure transmission
Ensuring Secure and HttpOnly flags are used

Use this to audit cookie security and session handling policies. Never capture cookies from unauthorized users.
    Extracting HTTP User-Agents
    In this one, we only match one pattern, so just use grep:
sudo tcpdump -nn -A -s1500 -l | grep "User-Agent:"

Helpful for:
Identifying which clients or automated tools interact with your service
Spotting misconfigured scanners or unauthorized bots

Use this for traffic profiling and anomaly detection; it helps enforce internal security policies.
    Conclusion
    tcpdump is a lightweight yet powerful monitoring tool for security engineers. It lets you monitor data securely, spot anomalies, and see network activity without disrupting operations.
    Integrate tcpdump monitoring into SOC workflows or automated scripts to catch potential issues in real time. Always operate within authorized boundaries.
    ✍️Contributed by Hangga Aji Sayekti, a senior software engineer experimenting with pen-testing these days.
  17. by: Geoff Graham
    Thu, 11 Sep 2025 15:16:34 +0000

Stu Robson is on a mission to “un-Sass” his CSS. I see articles like this pop up every year, and for good reason, as CSS has grown so many new legs in recent years. So much so that many of the core features that may have prompted you to reach for Sass in the past are now baked directly into CSS. In fact, we have Jeff Bridgforth on tap with a related article next week.
    What I like about Stu’s stab at this is that it’s an ongoing journey rather than a wholesale switch. In fact, he’s out with a new post that pokes specifically at compiling multiple CSS files into a single file. Splitting and organizing styles into separate files is definitely the reason I continue to Sass-ify my work. I love being able to find exactly what I need in a specific file and updating it without having to dig through a monolith of style rules.
    But is that a real reason to keep using Sass? I’ve honestly never questioned it, perhaps due to a lizard brain that doesn’t care as long as something continues to work. Oh, I want partialized style files? Always done that with a Sass-y toolchain that hasn’t let me down yet. I know, not the most proactive path.
    Stu outlines two ways to compile multiple CSS files when you aren’t relying on Sass for it:
    Using PostCSS
    Ah, that’s right, we can use PostCSS both with and without Sass. It’s easy to forget that PostCSS and Sass are compatible, but not dependent on one another.
postcss main.css -o output.css

Stu’s post explains why this could be a nice way to toe-dip into un-Sass’ing your work.
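One note from me: the postcss CLI on its own doesn’t inline @import statements; that job is typically handled by the postcss-import plugin. A minimal config, assuming that’s the plugin in play here:

```js
// postcss.config.js: a minimal sketch assuming the postcss-import plugin
module.exports = {
  plugins: [require('postcss-import')],
};
```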
    Custom Script for Compilation
    The ultimate thing would be eliminating the need for any dependencies. Stu has a custom Node.js script for that:
const fs = require('fs');
const path = require('path');

// Function to read and compile CSS
function compileCSS(inputFile, outputFile) {
  const cssContent = fs.readFileSync(inputFile, 'utf-8');
  const imports = cssContent.match(/@import\s+['"]([^'"]+)['"]/g) || [];
  let compiledCSS = '';

  // Read and append each imported CSS file
  imports.forEach(importStatement => {
    const filePath = importStatement.match(/['"]([^'"]+)['"]/)[1];
    const fullPath = path.resolve(path.dirname(inputFile), filePath);
    compiledCSS += fs.readFileSync(fullPath, 'utf-8') + '\n';
  });

  // Write the compiled CSS to the output file
  fs.writeFileSync(outputFile, compiledCSS.trim());
  console.log(`Compiled CSS written to ${outputFile}`);
}

// Usage
const inputCSSFile = 'index.css'; // Your main CSS file
const outputCSSFile = 'output.css'; // Output file
compileCSS(inputCSSFile, outputCSSFile);

Not 100% free of dependencies, but geez, what a nice way to reduce the overhead and still combine files:
node compile-css.js

This approach is designed for a flat file directory. If you’re like me and prefer nested subfolders, see the recursive sketch below.
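Here’s my own hedged tweak (not Stu’s code) that follows @import chains recursively, resolving each import relative to the file that declared it, so nested subfolders work. It reuses the fs and path requires from the script above:

```js
// A recursive variant: inline @import chains, resolving paths per file
function compileCSSRecursive(file) {
  const css = fs.readFileSync(file, 'utf-8');
  return css.replace(/@import\s+['"]([^'"]+)['"];?/g, (match, importPath) =>
    compileCSSRecursive(path.resolve(path.dirname(file), importPath))
  );
}

// Usage:
// fs.writeFileSync('output.css', compileCSSRecursive('index.css').trim());
```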
    Very cool, thanks Stu! And check out the full post because there’s a lot of helpful context behind this, particularly with the custom script.
    Compiling Multiple CSS Files into One originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  18. by: Abhishek Prakash
    Thu, 11 Sep 2025 13:46:24 GMT

The last time I reviewed the Pironman 5, I thought it was the most amazing Raspberry Pi case that could be purchased. And that is saying something, because people have 3D printed Pi cases that set a high bar for awesomeness.
    Almost a year later, SunFounder came up with a new version, Pironman 5 Max. And they increased the awesomeness of an already outstanding product.
    Due to light reflection, the picture above doesn't show its beauty properly. Look at the image below and admire the beauty.
This gorgeous-looking Raspberry Pi case is the best $90 investment for your Raspberry Pi 5 setup. If money is not an issue, I think anyone who wants to use a Raspberry Pi 5 on their desktop should consider it, because it offers more than just its stunning looks.
Let me walk you through its features and share my experience and opinion of them.
    Pironman 5 Max specification
    But before that, let me share what you get with this case.
Dual NVMe M.2 slots: can be used for a RAID 0/1 setup or a single SSD plus an AI accelerator, powered by a PCIe Gen2 switch.
Tower cooler (for passive cooling) with a PWM fan that adjusts to the CPU temperature.
Two additional RGB fans that can be configured.
Tiny OLED display with tap-to-wake function.
Two full-sized HDMI ports.
RTC battery support.
All GPIO pins remain accessible through the built-in extender.
Sleek black looks with a part-metal, part-acrylic body.

Build your case
Like many other SunFounder products, this one has a DIY touch: the case needs to be assembled. It's not complicated, but take a look at the official assembly video to get a gist of what kind of effort it will take.
    I used the paper manual, as there were no assembly videos when I received it, and it took me nearly an hour to get it up and running.
Preparing to assemble the case

Assembly needs to be done carefully. If you put the wrong end of the FPC cable in, or if the attachments do not fit in properly, you’ll have to struggle with opening the case again to fix it.
In my case, I had the fan connection wire in front of the fan and it started making an awful noise. I quickly fixed it by tucking the wire in, but these things may happen.
    Cooling your Pi
Your Raspberry Pi 5 needs a cooling system, and the official, inexpensive active cooler does a decent job at that. But if you want to use the Raspberry Pi as a desktop or for intensive tasks, it starts running hot and eventually throttles.
SunFounder has been making accessories for the Raspberry Pi ecosystem for a long time, and the Pironman 5 Max handles cooling with a mix of passive and active methods.
The Pironman 5 Max has a tower cooler to passively cool your device, plus dual RGB fans for active cooling.
Surprisingly, the RGB fans were set to run by default, but you can easily configure them to kick in only when the temperature rises.
I set them to the cool mode, as the temperature hardly goes beyond that for casual computing, thanks to the effective passive cooling. You can set the RGB lighting on the fans to be always on, always off, or on only when the fans are running.
    There is a tiny lag between the lights of the two fans. Unless you have intense OCD, you won’t be bothered with that.
    Leveling up the ports
    Cooling is just one aspect of this magnificent Raspberry Pi case. It converts your barebone Pi 5 into a mini PC by adding extra ports.
    The Pi 5 still uses mini HDMI ports. But the Pironman 5 case converts them into full HDMI ports. Now you can use your regular HDMI cables. That’s a relief. All 4 USB ports are neatly accessible in the back.
    The micro SD card slot is conveniently located at the front along with a dedicated power button. You can press the power button to turn it on. While running, press it once to bring up the shutdown menu or double-press it quickly to turn it off immediately.
    There is this tiny OLED display that gives a quick overview of your system resources. You can see the IP address, disk storage, CPU temperature, and RAM consumption. This is also configurable from the handy dashboard.
The OLED screen supports tap-to-wake and shake-to-wake. It displays for a few seconds and goes back to sleep, saving a tiny amount of power. I find it convenient that it displays the IP address of the Pi; that helps a great deal when I want to SSH into it.
    It also has an IR receiver at the front for your experiments. You are not losing the versatility of your Pi as all 40 GPIO pins are easily accessible from the side. And they are neatly labeled too.
The Pironman 5 Max features a dual NVMe PIP board, an upgrade over the previous edition, which had only one NVMe slot. So, here, you can put in two SSDs and have a RAID setup, or you can have one SSD and one AI accelerator.
Keep in mind that this is a PCIe Gen2 switch, so you are not getting the PCIe Gen3 speed of the previous Pironman version. That should not be an issue, however, as it's good enough for random I/O operations.
    I have used two SSDs to experiment with a RAID setup. I will share that in a separate tutorial.
    Beautiful RGB lighting and more
The RGB lighting adds to the charm of the case. There are 4 LEDs located at the top that cast light downwards. By default, it is blue mood lighting. You can configure the color and lighting pattern to match your desk and room setup.
    You can also control its intensity, which is a good thing, as the semi-transparent dark glass may not always show the lights in their full glory.
A tiny but useful feature is the inclusion of an RTC battery, which gives your Raspberry Pi a real-time clock. Your Pi doesn't need to be connected to the internet to keep the correct time.
Subscribe to It's FOSS YouTube Channel

Remember...
    Pironman does not support all kinds of SSDs. Go through their list of supported SSDs first.
    Pironman also has a list of compatible operating systems. The script and dashboard that let you control the RGB lights and other behavior work only with these operating systems, and you have to install the scripts explicitly.
    Conclusion
    Ever since I started using these Pironman cases, my Raspberry Pi not only stays cool, it also looks super cool.
Now, a price tag of $95 could seem like a lot, but the Pironman 5 Max is not just a case: it transforms your Pi into a mini PC with a miniature gaming-rig look. You get full HDMI ports, a power button, an OLED display, and two SSD slots. It enhances the capabilities of your Pi.
    Another good thing is that they also take care of taxes and import duty. You can order it from their official website. The new version is not available on Amazon yet.
If your budget allows it, this is surely a worthwhile investment in your Raspberry Pi setup.
Get Pironman 5 Max from the Official Website
Order it from Amazon

Alternatively, if you are on a budget, explore some other tower cases for Raspberry Pi.
In fact, there is a new mini version of the Pironman in the making that costs half the price and offers half the features.
    Pironman 5 Mini
    The mini version has only one NVMe slot and one RGB fan. There is no OLED display or passive tower cooling. But it still adds value at half the cost.
Pironman 5 Mini

And that’s my opinion. What about you? The comment section is all yours.
  19. by: Abhishek Prakash
    Thu, 11 Sep 2025 04:29:02 GMT

    Linux Mint 22.2 Zara is available now. Existing Mint 22.1 users can choose to upgrade or stay with their current version.
    Ubuntu 25.10 is a month away. I tried it and shared the new features in the latest video. Among those features, I find the switch to Rust-based sudo the most intriguing. I am working on an article that takes a deeper look at it.
KDE's very own Arch-based distro has made its first alpha release, and Sourav already took it for testing.
    These were some of the highlights from this week.
    💬 Let's see what else you get in this edition
Microsoft open sources BASIC.
SSD factors to consider before buying.
Switzerland's new open source AI model.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by PrepperDisk.

PrepperDisk gives you a fully offline, private copy of the world’s most useful open-source knowledge—so your access doesn’t depend on big platforms, networks, or gatekeepers.
    Built on a Raspberry Pi, it bundles projects like Wikipedia, maps, and survival manuals with tools we’ve built and open-sourced ourselves. It’s a way to safeguard information freedom: your own secure, personal archive of open knowledge, ready anywhere—even without the internet.
Explore PrepperDisk

📰 Linux and Open Source News
PeerTube 7.3 promises a wide range of improvements.
Microsoft has open sourced the code for 6502 BASIC.
Signal has introduced a new paid feature called Secure Backups.
Firefox has pulled the plug on 32-bit support on Linux.
Warp has launched a new feature called Warp Code to help write and review code.
Apertus is Switzerland's new open source AI model, among Europe's largest.
Linux Mint 22.2 "Zara" has been released.
The Wait is Over! Linux Mint 22.2 “Zara” is Here. A fresh Linux Mint release with many refinements. (It's FOSS News, Sourav Rudra)

🧠 What We’re Thinking About
    GNOME has had to take a step back in its campaign to remove X11 support.
U Turn! X11 is Back in GNOME 49, For Now. A temporary move that gives people some breathing room. (It's FOSS News, Sourav Rudra)

KDE Linux is finally here, albeit in an unfinished alpha form.
KDE’s Very Own Linux Distro Just Hit Alpha. I am still livid that they didn’t name it KLinux or Kinux. (It's FOSS News, Sourav Rudra)

🧮 Linux Tips, Tutorials, and Learnings
These 10 shortcuts will make you a pro VLC user.
Learn how to effectively customize Hyprland to your liking.
A list of open source apps for the Windows users in the house. I know, that's unusual from us.
Users of the Ghostty terminal will appreciate these sleek themes.

👷 AI, Homelab and Hardware Corner
    Considering buying an SSD? Speed isn't everything.
Speed Isn’t Everything When Buying SSDs - Here’s What Really Matters! Remember this for the next time you’re shopping for an SSD. (It's FOSS, Sourav Rudra)

✨ Project Highlight
    Or, how about a Linux distribution that turns any machine into a retro gaming console?
This Linux Gaming Distro Uses SD Cards as Game Cartridges (Just Like the 90s). Insert cartridge, power on, play. No launchers or accounts required. (It's FOSS News, Sourav Rudra)

📽️ Videos I Am Creating for You
Ubuntu 25.10 Questing Quokka is less than a month away. A new terminal with container integration and the new sudo are among the main highlights. Watch them in action in this new video.
Subscribe to It's FOSS YouTube Channel

🧩 Quiz Time
    Can you spot all the errors with these Linux Commands?
Guess the Errors With These Linux Commands. Put your Linux command line knowledge to the test. (It's FOSS, Abhishek Prakash)

Why should you opt for It's FOSS Plus membership:
    ✅ Ad-free reading experience
    ✅ Badges in the comment section and forum
    ✅ Supporting creation of educational Linux materials
    ✅ Free Linux eBook
Join It's FOSS Plus

💡 Quick Handy Tip
    In GNOME, you can resize the window without placing the cursor at the edge or corner and dragging. Open GNOME Tweaks and go to the Windows section. Here, enable the "Resize with Secondary-Click" option. Also, remember to set a modifier key (it is the Super key by default).
    Now, in an active window, hold the modifier key and then right-click and drag anywhere in the window. Another thing to note is that this behavior is enabled by default in KDE Plasma, where the Super key is a modifier key.
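If you prefer the terminal, the same tweak appears to map to these gsettings keys (from the org.gnome.desktop.wm.preferences schema in recent GNOME releases; treat this as a sketch and verify on your version):

gsettings set org.gnome.desktop.wm.preferences resize-with-right-button true
gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Super>'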
    🤣 Meme of the Week
    Linux is very versatile! 😎
    🗓️ Tech Trivia
Source: CHM

On September 9, 1947, engineers working on the Harvard Mark II computer found a moth stuck in a relay, causing the system to malfunction. They taped it into the logbook with the note "First actual case of bug being found." Grace Hopper later shared the story, making it the most famous "computer bug" in history.
    🧑‍🤝‍🧑 FOSSverse Corner
    FOSSers are discussing what the most underrated Linux distro is. Got any in mind?
What is the most underrated Linux distribution? “There are some distros like Debian, Ubuntu and Mint that are commonly used and everyone knows how good they are. But there are others that are used only by a few people and perform equally well. Would you like to nominate your choice for the most underrated Linux distro? I will nominate Void Linux… it is No. 93 on DistroWatch and performs for me as well as MX Linux or Debian.” (It's FOSS Community, nevj)
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  20. by: Geoff Graham
    Wed, 10 Sep 2025 13:13:41 +0000

That’s what Donnie D’Amato asks in a recent post.
This really got the CSS-Tricks team talking. It’s the nerdy version of “if you could only take one album with you on a remote island…” And everyone had a different opinion, which is great because it demonstrates the messy, non-linear craft that is thinking like a front-end developer.
    Seems like a pretty straightforward thing to answer, right? But like Donnie says, this takes some strategy. Like, say spacing is high on your priority list. Are you going to use margin? padding? Perhaps you’re leaning into layout and go with gap as part of a flexbox direction… but then you’re committing to display as one of your options. That can quickly eat up your choices!
Our answers are pretty consistent, and they converged even more as the discussion wore on, even though all of us came at it with different priorities. I’ll share each person’s “gut” reaction because I like how raw it is. I think you’ll see that there’s always a compromise in the mix, but those compromises really reveal a person’s cards as far as what they think is most important in a situation with overly tight constraints.
    Juan Diego Rodriguez
    Juan and I came out pretty close to the same choices, as we’ll see in a bit:
font: Typography is a priority and we get a lot of constituent properties with this single shorthand.
padding: A little padding makes things breathe and helps with visual separation.
background: Another shorthand with lots of styling possibilities in a tiny package.
color: More visual hierarchy.

But he was debating with himself a bit in the process.
    Ryan Trimble
    Ryan’s all about that bass structure:
display: This opens up a world of layouts, but most importantly flex.
flex-direction: It’s a good idea to consider multi-directional layouts that are easily adjustable with media queries.
width: This helps constrain elements and text, as well as divide up flex containers.
margin: This is for spacing that’s a bit more versatile than gap, while also allowing us to center elements easily.

And Ryan couldn’t resist reaching a little out of bounds.
    Danny Schwarz
    Every team needs a wild card:
font: This isn’t a big surprise if you’re familiar with Danny’s writing.
padding: So far, Ryan’s the only one to eschew padding as a core choice!
color: Too bad this isn’t baked right into font!

I’ll also point out that Danny soon questioned his decision to use all four choices.
    Sunkanmi Fafowora
    This is the first list to lean squarely into CSS Grid, allowing the grid shorthand to take up a choice in favor of having a complete layout system:
font: This is a popular one, right?
display: Makes grid available.
grid: Required for this display approach.
color: For sprinkling in text color where it might help.

I love that Ryan and Sunkanmi are thinking in terms of structure, albeit in very different ways for different reasons!
    Zell Liew
    In Zell’s own words: “Really really plain and simple site here.”
font: Content is still the most important piece of information.
max-width: Ensures the type measure is okay.
margin: Lets me play around with spacing.
color: This ensures there’s no pure black/white contrast that hurts the eyes. I’d love background as well, but we only have four choices.

But there’s a little bit of nuance in those choices, as he explains: “But I’d switch up color for background on sites with more complex info that requires proper sectioning. In that case I’d also switch margin with padding.”
    Amit Sheen
    Getting straight to Amit’s selections:
font
color
background
color-scheme

The choices are largely driven by wanting to combat default user agent styles.
    Geoff Graham
    Alright, I’m not quite as exciting now that you’ve seen everyone else’s choices. You’ll see a lot of overlap here:
font: A shorthand for a whopping SEVEN properties for massaging text styles.
color: Seems like this would come in super handy for establishing a visual hierarchy and distinguishing one element from another.
padding: I can’t live without a little breathing room between an element’s content box and its inner edge.
color-scheme: Good minimal theming that’ll work nicely alongside color and support the light-dark() function.

Clearly, I’m all in on typography. That could be a very good thing or it could really constrain me when it comes to laying things out. I really had to fight the urge to use display because I always find it incredibly useful for laying things out side-by-side that wouldn’t otherwise be possible with block-level elements.
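For fun, here's a rough sketch of my own (not part of the team's discussion) of how a page might be styled under that four-property budget, using my picks:

/* An entire "stylesheet" on a four-property budget:
   font, color, padding, and color-scheme */
:root {
  color-scheme: light dark; /* let the browser paint light/dark surfaces */
}
body {
  font: 1.125rem/1.6 Georgia, serif;
  color: light-dark(oklch(25% 0 0), oklch(92% 0 0));
  padding: 2rem;
}
h1 {
  font: bold 2.5rem/1.1 Georgia, serif;
}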
    Your turn!
    Curious minds want to know! Which four properties would you take with you on a desert island?

    What’re Your Top 4 CSS Properties? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  21. by: Abhishek Prakash
    Wed, 10 Sep 2025 03:00:16 GMT

    Sometimes you discover things by accident, even if they were probably there for years.
That's what happened when I discovered that GNOME supports a compose key, available right from the keyboard settings. Eureka moment? Sort of.
    Allow me to share my 'discovery,' but before that, let me briefly tell you what a compose key is.
    What is a Compose Key?
    A compose key followed by two or more keystrokes lets you type special characters and symbols like ® (registered), © (copyright), and à. You do it directly with your keyboard without having to hunt them down online or dig through character maps.
This is particularly helpful for people who type in European languages like French or Swedish on a QWERTY keyboard.
    You'll have to enable the compose key first. I am using GNOME desktop environment in this article, but a similar feature should also be available in other desktop environments.
    Enable the Compose Key on GNOME
    Search and open settings from the GNOME Activities overview.
Open Settings

Inside the settings, go to the Keyboard section. Here, you can see an option for Compose Key.
    It is set to Layout default in my Ubuntu 24.04 installation using GNOME 46 and was turned off by default in my Arch installation using GNOME 48.
Select Compose Key

In any case, go inside the Compose Key option and either enable it (in case it is turned off) or disable the default layout.
Set another Compose key

As soon as you do this, you can see that you can now set another key as the compose key.
    I set the Right CTRL key as the compose key, as shown in the screenshot above.
🚧 If you are using VirtualBox, do not assign the Right Ctrl key, because in VirtualBox it is the host key with some special usage.

That's it. Whenever you need to type a special symbol, first press the compose key. This changes the cursor to a special look. Then enter the code for the character you want.
A small clip showing the compose key at work in GNOME.
    Essential compose key codes
    Yes, you need to know the character code. This may seem like an additional burden, but for frequent users, it will soon become muscle memory.
Press the compose key you set earlier, followed by the sequence of characters shown on the left, and it will output the character shown on the right:
' a → á
" a → ä
` a → à
a e → æ
o o → ° (degree symbol)
o c → ©
o r → ®
s o → §
t m → ™ (trademark)
> > → »
< < → «
# E → ♫ (beamed eighth notes)

You can check the official documentation for the X11 library's compose key sequences for a comprehensive list of keys and related characters.
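A side note that goes beyond the original post: on X11-style setups (and in GTK apps that honor it), you can define your own sequences in a ~/.XCompose file. A minimal sketch, with the custom sequences being my own examples:

# ~/.XCompose
include "%L"   # keep all the default sequences for your locale

# Compose, then -, then > types a right arrow
<Multi_key> <minus> <greater> : "→" U2192

# Compose, then c twice, types a copyright sign
<Multi_key> <c> <c> : "©"

You may need to log out and back in (or restart apps) for changes to take effect, and support varies by toolkit.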
    Can't Remember? No Worries
    The compose key is particularly useful for people who don't want to divert attention from typing and at the same time need to add symbols.
But this alone is not the only option. Most modern desktop environments have character and emoji apps, like the Characters app on GNOME.
Using Emojis on Ubuntu Linux. Ubuntu has a built-in emoji picker and you can use it to insert emoticons in native GTK apps quickly. Here’s how to use it. (It's FOSS, Abhishek Prakash)

If you don't use special characters frequently, you can simply search for them in the GNOME Activities overview.
For example, just search for "copyright" and, if the Characters app is enabled, the symbol pops up in the results. Click on it and it is copied to the clipboard, ready to be pasted wherever required.
    I highly recommend referring to the X11 library's compose key sequences where you can find all the key sequences, even for typing the obscure infinity symbol.
    Enjoy the compose key.
  22. by: Chris Coyier
    Tue, 09 Sep 2025 15:55:27 +0000

    Chris and Stephen hop on the podcast to discuss the concept of a proxy. Possibly the most “gray hat” thing that CodePen does. We use a third-party analytics tool called Fullres. We could just put a link to the <script> necessary to make that work directly to fullres.com, but being an analytics tool, it’s blocked by a ton of ad blocking browsers and browser extensions. We made the conscious choice to have that <script> point to a codepen.io URL instead (a proxy) so that we get (much) more accurate usage data on the app. Since there is nothing tracked that is an anonymity concern, and we do nothing with the data other than help inform ourselves on how to make a better app, we wear this gray hat.
    If you’d still like to block these requests, the path would be https://codepen.io/stats/fr/*
    Time Jumps
00:06 Proxies are my passion
00:57 Grey hat, white hat, or black hat
02:39 Small emoji and icons
04:45 Browsers and ad blockers
07:29 Where the grey hat part comes in
12:34 Why do ad blockers matter to us?
25:31 Podcast download tracking bug
26:49 New editor updates
  23. by: Nitij Taneja
    Mon, 08 Sep 2025 18:02:39 GMT

    Introduction
    In an era where data privacy is paramount and artificial intelligence continues to advance at an unprecedented pace, Federated Learning (FL) has emerged as a revolutionary paradigm. This innovative approach allows multiple entities to collaboratively train a shared prediction model without exchanging their raw data.
    Imagine scenarios where hospitals collectively build more accurate disease detection models without sharing sensitive patient records, or mobile devices improve predictive text capabilities by learning from user behavior without sending personal typing data to a central server. This is the core promise of federated learning.
    Traditional machine learning often centralizes vast amounts of data for training, which presents significant challenges related to data privacy, security, regulatory compliance (like GDPR and HIPAA), and logistical hurdles. Federated learning directly addresses these concerns by bringing the model to the data, rather than the data to the model. Instead of pooling raw data, only model updates—small, anonymized pieces of information about how the model learned from local data—are shared and aggregated. This decentralized approach safeguards sensitive information and unlocks AI development in scenarios where data sharing is restricted or impractical.
    This article will delve into the intricacies of federated learning, explaining its core concepts, how it operates, and its critical importance in today's data-conscious world. We will explore its diverse applications across various industries, from healthcare to mobile technology, and discuss the challenges that need to be addressed for its widespread adoption. Furthermore, we will provide a practical code demonstration, illustrating how to implement a federated learning setup, including a placeholder for integrating powerful inference engines like Groq. By the end, you will have a comprehensive understanding of federated learning and its transformative potential in building collaborative, privacy-preserving AI systems.
    What is Federated Learning?
    Federated Learning (FL) is a machine learning paradigm that enables multiple entities, often called 'clients' or 'nodes,' to collaboratively train a shared machine learning model without directly exchanging their raw data. Unlike traditional centralized machine learning, where all data is collected and processed in a single location, FL operates on a decentralized principle. The training data remains on the local devices or servers of each participant, ensuring data privacy and security.
    The core idea is to bring computation to the data, rather than moving data to a central server. This is crucial for sensitive information like medical records, financial transactions, or personal mobile device data, where privacy regulations and ethical considerations prohibit direct data sharing. By keeping data localized, FL significantly reduces risks associated with data breaches, unauthorized access, and compliance violations.
    FL involves an iterative process. A central server (or orchestrator) initializes a global model and distributes it to participating clients. Each client then trains this model locally using its own private dataset. Instead of sending raw data, clients compute and send only model updates (e.g., gradients or learned parameters) to the central server. These updates are typically aggregated, averaged, and used to improve the global model. This updated global model is then redistributed to clients for the next training round, and the cycle continues until the model converges.
    This collaborative yet privacy-preserving approach allows leveraging diverse datasets that would otherwise be inaccessible due to privacy concerns or logistical constraints. It fosters a new era of AI development where collective intelligence can be harnessed without compromising individual data sovereignty.
    How Does Federated Learning Work?
    Federated learning combines distributed computing with privacy-preserving machine learning. It typically involves a central orchestrator (server) and multiple participating clients (edge devices, organizations, or data silos). The process unfolds in several iterative steps:
    Initialization and Distribution: The central server initializes a global machine learning model (either pre-trained or randomly initialized). This model, along with training configurations (e.g., epochs, learning rate), is distributed to all participating clients.
    Local Training: Each client independently trains the model using its own local, private dataset. This data never leaves the client's device. The local training process is similar to traditional machine learning, where the model learns patterns from local data and updates its parameters.
    Model Update Transmission: After local training, clients send only the model updates (e.g., gradients, weight changes, or learned parameters) back to the central server, not their raw data. These updates are often compressed, encrypted, or anonymized to enhance privacy and reduce communication overhead. The specific method varies by federated learning algorithm (e.g., Federated Averaging, Federated SGD).
    Aggregation: The central server receives model updates from multiple clients and aggregates them to create an improved global model. Federated Averaging (FedAvg) is a common algorithm, where the server averages the received model parameters, often weighted by the size of each client's dataset. This step synthesizes knowledge from all clients without seeing their individual data.
    Global Model Update and Redistribution: The aggregated model becomes the new, improved global model. This updated model is then sent back to the clients, initiating the next training round. This iterative cycle continues until the global model converges to a satisfactory performance level.
    This iterative process ensures that collective intelligence is incorporated into the global model, leading to a robust and accurate model, while preserving the privacy and confidentiality of each client's local data. It enables learning from distributed data sources that would otherwise be isolated due to privacy or regulatory restrictions.
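To make the aggregation step concrete, here is a minimal sketch of weighted Federated Averaging in NumPy (an illustration for this article; the function and variable names are invented):

import numpy as np

def fedavg(client_weights, client_sizes):
    # Weight each client's parameters by its share of the total data,
    # as in FedAvg: larger local datasets pull the average harder.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    stacked = np.stack(client_weights)             # (n_clients, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Toy usage: three clients reporting two parameters each
updates = [np.array([3.1, 1.9]), np.array([2.8, 2.2]), np.array([3.0, 2.0])]
print(fedavg(updates, client_sizes=[100, 400, 250]))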
    Why is Federated Learning Important Now?
    Federated learning is a rapidly evolving field gaining immense importance due to several converging factors:
    Escalating Data Privacy Concerns and Regulations: Stringent regulations like GDPR and CCPA make centralizing sensitive user data challenging. FL offers a viable solution by allowing AI models to be trained on private data without it leaving its source, ensuring compliance and building user trust.
    Proliferation of Edge Devices: The exponential growth of IoT devices, smartphones, and wearables means vast amounts of data are generated at the network's periphery. Traditional cloud-centric AI models struggle with data transfer, latency, and bandwidth limitations. FL enables on-device AI, reducing reliance on constant cloud connectivity and improving real-time responsiveness.
    Addressing Data Silos: Many organizations possess valuable datasets that are siloed due to competitive reasons, regulations, or logistical complexities. FL provides a mechanism to unlock collective intelligence from these disparate data sources, fostering collaboration without compromising proprietary or sensitive information.
    Enhanced Security against Data Breaches: Centralized data repositories are attractive targets for cyberattacks. By distributing data and sharing only model updates, FL inherently reduces the attack surface. Even if a central server is compromised, raw, sensitive data remains secure on individual devices, significantly mitigating the impact of potential data breaches.
    Continual Learning and Personalization: FL facilitates continuous model improvement. As users interact with devices or new data is generated locally, models can be continuously updated and refined on the device itself. This enables highly personalized AI experiences, such as predictive keyboards that adapt to individual typing styles or recommendation systems that learn from unique user preferences, all while keeping personal data private.
    Ethical AI Development: Beyond compliance, FL promotes a more ethical approach to AI development. It aligns with principles of data minimization and privacy-by-design, ensuring AI systems are built with respect for individual data rights from the ground up. This proactive approach helps build more trustworthy and socially responsible AI applications.
    In essence, federated learning provides a powerful framework for developing advanced AI models in a world increasingly concerned with data privacy, distributed data sources, and the need for efficient, secure, and personalized AI experiences. It represents a significant step towards a future where AI can learn and evolve collaboratively, respecting individual data ownership.
    Use Cases of Federated Learning
    Federated learning's practical applications are rapidly expanding across various industries, offering innovative solutions where data privacy, security, and distributed data sources are critical. Here are some prominent use cases:
    Mobile Applications and On-Device AI
    One of the most intuitive and widely adopted applications of federated learning is in mobile devices. Features like next-word prediction, facial recognition, voice assistants, and personalized recommendation systems on smartphones heavily rely on user data. Traditionally, improving these models would necessitate sending vast amounts of personal user data to central servers for training. However, federated learning allows these models to be trained directly on the user's device.
    For instance, Google's Gboard uses federated learning to improve its predictive text capabilities by learning from how millions of users type, without ever sending individual keystrokes or sensitive data to Google's servers. This approach significantly enhances user privacy, reduces bandwidth consumption, and enables highly personalized AI experiences that adapt to individual usage patterns in real time.
    Healthcare and Medical Research
    The healthcare sector is a prime candidate for federated learning due to the highly sensitive nature of patient data and stringent privacy regulations like HIPAA. Federated learning enables multiple hospitals, clinics, or research institutions to collaboratively train robust diagnostic models for diseases (e.g., cancer detection from medical images, predicting disease progression) without sharing raw patient records.
    Each institution trains the model on its local dataset, and only the learned model parameters or updates are shared and aggregated. This allows for the creation of more accurate and generalizable models by leveraging a larger, more diverse patient population, while strictly adhering to privacy laws and maintaining patient confidentiality. It accelerates medical research and improves diagnostic capabilities across the healthcare ecosystem.
    Autonomous Vehicles
    Autonomous vehicles generate an enormous amount of data from various sensors (cameras, LiDAR, radar) crucial for training AI models for perception, navigation, and decision-making. Sharing all this raw data with a central cloud for training is impractical due to bandwidth limitations, latency, and privacy concerns. Federated learning offers a solution by allowing vehicles to train their AI models locally on their driving data.
    Only aggregated insights or model updates are then shared with a central server to improve a global model. This collaborative learning across a fleet of vehicles helps in developing more robust and safer self-driving systems, enabling them to learn from diverse driving conditions and scenarios encountered by different vehicles, without compromising the privacy of individual vehicle data or location.
    Smart Manufacturing and Industrial IoT
    In the realm of Industry 4.0, smart factories and industrial IoT devices generate vast datasets related to machine performance, product quality, and operational efficiency. Federated learning can be applied here for predictive maintenance, quality control, and anomaly detection. For example, different manufacturing plants can collaboratively train models to predict equipment failures or identify defects without sharing proprietary operational data.
    Each plant trains the model on its local sensor data, and only the model updates are shared. This allows for improved operational efficiency, reduced downtime, and enhanced product quality across a distributed manufacturing network, all while keeping sensitive production data within each facility.
    Financial Services and Fraud Detection
    The financial sector deals with highly sensitive transaction data, making privacy and security paramount. Federated learning can be instrumental in enhancing fraud detection, anti-money laundering (AML) efforts, and credit scoring. Multiple banks or financial institutions can collaboratively train models to identify fraudulent transactions or assess credit risk without directly sharing customer transaction histories.
    By exchanging only model updates, they can leverage a broader dataset of financial activities to build more accurate and robust fraud detection systems, which are more effective at identifying emerging fraud patterns. This approach strengthens the collective defense against financial crime while preserving customer privacy and complying with strict financial regulations.
    These examples underscore federated learning's versatility and its potential to unlock the value of distributed, sensitive data, fostering collaborative AI development in a privacy-preserving manner.
    Challenges and Limitations of Federated Learning
    While federated learning offers compelling advantages, it faces several challenges crucial for its successful adoption:
    Communication Overhead
    One significant bottleneck is communication cost. The iterative exchange of model updates between numerous clients and a central server can lead to substantial network traffic, especially with large models or many devices. Training a complex deep neural network across thousands of mobile phones could generate terabytes of data, straining bandwidth and increasing operational costs. Unstable network connections, common in mobile or IoT environments, can lead to dropped updates, delayed training, and slower convergence. Techniques like model compression and sparsification can mitigate this, but often involve trade-offs in model precision or convergence speed. Developers must balance communication efficiency with model quality, often requiring custom protocols.
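As a rough illustration of the sparsification idea (my own sketch, not a library API), a client could transmit only the top-k largest-magnitude entries of its update and zero out the rest:

import numpy as np

def topk_sparsify(update, k_ratio=0.1):
    # Keep only the largest-magnitude fraction of the update; zeroing
    # the rest shrinks what each client has to send per round.
    flat = update.flatten()
    k = max(1, int(k_ratio * flat.size))
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape)

u = np.array([0.02, -0.9, 0.05, 0.4, -0.01])
print(topk_sparsify(u, k_ratio=0.4))  # keeps only -0.9 and 0.4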
    Data Heterogeneity (Non-IID Data)
    In real-world federated settings, data distribution across clients is rarely independent and identically distributed (non-IID). For example, a hospital in one region might primarily treat certain diseases, leading to a skewed dataset compared to another. This heterogeneity challenges model convergence and performance. If the global model is trained on highly diverse local datasets, it might perform poorly on individual clients whose data deviates significantly from the aggregated average. Traditional aggregation methods like Federated Averaging (FedAvg) can struggle, potentially leading to slower convergence or divergence. Advanced techniques, such as personalized federated learning, are being developed but add complexity.
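A common way researchers simulate this non-IID setting is a Dirichlet label split; the sketch below (my illustration) gives each client a skewed mix of classes, with smaller alpha meaning stronger skew:

import numpy as np

def dirichlet_label_split(labels, n_clients, alpha=0.5, seed=0):
    # Partition sample indices across clients class by class, drawing
    # each class's client proportions from a Dirichlet distribution.
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        props = rng.dirichlet([alpha] * n_clients)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.random.default_rng(1).integers(0, 10, size=1000)
splits = dirichlet_label_split(labels, n_clients=5, alpha=0.3)
print([len(s) for s in splits])  # uneven, skewed client datasets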
    Security and Privacy Risks
    Despite keeping raw data local, federated learning is not entirely immune to security and privacy risks. Model updates (e.g., gradients) can inadvertently leak sensitive information. Gradient inversion attacks can reconstruct parts of original training data from shared gradients. Malicious actors could also inject poisoned updates, manipulating the global model to perform poorly or exhibit biased behavior (model poisoning attacks). Privacy-enhancing technologies like differential privacy (adding noise) and secure multi-party computation (encrypting updates) can enhance security, but often introduce trade-offs. Differential privacy can degrade accuracy, and cryptographic protocols can increase computational overhead. Implementing robust safeguards requires deep understanding of cryptographic techniques and careful balance between privacy, security, and model utility.
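In its simplest form, the differential-privacy recipe is clip-then-noise; this sketch (illustrative only; real deployments need calibrated noise scales and formal privacy accounting) shows the mechanism and its accuracy trade-off:

import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_std=0.1, seed=None):
    # Clip the update to a maximum L2 norm, then add Gaussian noise:
    # the basic mechanism behind differentially private FedAvg.
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update) + 1e-12  # avoid division by zero
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

print(dp_sanitize(np.array([3.0, -4.0]), clip_norm=1.0, noise_std=0.05, seed=0))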
    Resource Constraints and System Heterogeneity
    Many clients in federated learning, especially mobile phones and IoT devices, operate under significant resource constraints (limited computational power, memory, battery life, inconsistent network connectivity). These limitations impact the feasibility and efficiency of local model training. System heterogeneity—variations in hardware, operating systems, and network conditions—can lead to inconsistent training speeds and reliability. Some devices might complete training quickly, while others might take longer or drop out, complicating synchronization and aggregation. This requires robust fault tolerance and careful client selection strategies.
    Fairness and Bias
    If certain client data is underrepresented or inherently biased, the global model might not perform equally well across all client groups. This can lead to fairness issues, where the model performs suboptimally for minority groups whose data was not adequately represented. Ensuring fairness requires careful consideration of data distribution, client sampling, and potentially incorporating fairness-aware aggregation algorithms.
    Addressing these challenges is an active area of research. Innovations in communication efficiency, robust aggregation algorithms for non-IID data, advanced privacy-preserving techniques, and efficient resource management are continuously pushing the boundaries of federated learning, making it a more practical and reliable solution for collaborative AI.
    Practical Implementation: Federated Learning with Groq (Code Demo)
    To illustrate the core concepts of federated learning, we will walk through a simplified Python example. This demonstration simulates a federated learning setup with multiple clients and a central server. For this demo, we use a basic linear regression model and simulate data generation on each client. While a full-fledged federated learning framework involves complex communication protocols, secure aggregation, and robust error handling, this example aims to provide a clear understanding of the iterative local training and global model aggregation process.
    We will also include a placeholder for integrating with a powerful inference engine like Groq. Groq has developed a Language Processing Unit (LPU) inference engine, which can deliver incredibly fast inference for large language models. While our example uses simple linear regression, in more complex federated learning scenarios involving large models (e.g., for natural language processing), Groq's LPU could be leveraged by the central server or powerful edge devices for rapid local inference or model evaluation after receiving the global model.
    Dataset
For this simplified demonstration, we generate synthetic datasets for each client. Each client gets a small dataset that follows a linear relationship with some noise. In a real-world scenario, you would replace this with actual distributed datasets from various sources.
    Code Demo: Simplified Federated Learning
    First, let's create a Python file named federated_learning_demo.py:
import numpy as np

# --- Configuration ---
NUM_CLIENTS = 5
NUM_ROUNDS = 10
LOCAL_EPOCHS = 5
LEARNING_RATE = 0.01

# Placeholder for Groq API Key (replace with your actual key)
GROQ_API_KEY = "your_actual_api_key"

# --- Helper Classes ---
class Client:
    def __init__(self, client_id, data_size=100):
        self.client_id = client_id
        self.weights = np.random.rand(2)  # [slope, intercept]
        self.data_size = data_size
        self.X, self.y = self._generate_local_data()

    def _generate_local_data(self):
        np.random.seed(self.client_id)
        X = 2 * np.random.rand(self.data_size, 1)
        y = 3 * X + 2 + np.random.randn(self.data_size, 1) * 0.5
        return X, y

    def train_local_model(self, global_weights):
        self.weights = global_weights.copy()
        for epoch in range(LOCAL_EPOCHS):
            predictions = self.X.flatten() * self.weights[0] + self.weights[1]
            errors = predictions - self.y.flatten()
            gradient_slope = np.mean(errors * self.X.flatten())
            gradient_intercept = np.mean(errors)
            self.weights[0] -= LEARNING_RATE * gradient_slope
            self.weights[1] -= LEARNING_RATE * gradient_intercept
        return self.weights

    def evaluate_local_model(self):
        predictions = self.X.flatten() * self.weights[0] + self.weights[1]
        mse = np.mean((predictions - self.y.flatten()) ** 2)
        return mse

class Server:
    def __init__(self):
        self.global_weights = np.random.rand(2)

    def aggregate_models(self, client_weights_list):
        self.global_weights = np.mean(client_weights_list, axis=0)
        return self.global_weights

# --- Main Federated Learning Loop ---
def run_federated_learning():
    server = Server()
    clients = [Client(i) for i in range(NUM_CLIENTS)]
    print(f"Initial Global Weights: {server.global_weights}")

    for round_num in range(NUM_ROUNDS):
        print(f"\n--- Round {round_num + 1}/{NUM_ROUNDS} ---")
        client_updates = []
        for client in clients:
            updated_weights = client.train_local_model(server.global_weights)
            client_updates.append(updated_weights)
            mse = client.evaluate_local_model()
            print(f"Client {client.client_id} Local MSE: {mse:.4f}")
        server.aggregate_models(client_updates)
        print(f"Aggregated Global Weights: {server.global_weights}")

    print("\nFederated Learning complete!")
    print(f"Final Global Weights: {server.global_weights}")

    # --- Groq Integration Placeholder (Conceptual Only) ---
    print("\nGroq API Response (Conceptual):")
    print("Federated learning allows multiple devices to collaboratively train a global model without sharing raw data, ensuring data privacy.")

if __name__ == "__main__":
    run_federated_learning()

Output

Initial Global Weights: [0.47730663 0.489924  ]

--- Round 1/10 ---
Client 0 Local MSE: 14.8814
Client 1 Local MSE: 14.7992
Client 2 Local MSE: 14.3824
Client 3 Local MSE: 13.8879
Client 4 Local MSE: 15.4143
Aggregated Global Weights: [0.69928276 0.68107642]

--- Round 2/10 ---
Client 0 Local MSE: 12.1263
Client 1 Local MSE: 12.0160
Client 2 Local MSE: 11.7288
Client 3 Local MSE: 11.2319
Client 4 Local MSE: 12.5132
Aggregated Global Weights: [0.89917009 0.85277157]

--- Round 3/10 ---
Client 0 Local MSE: 9.8937
Client 1 Local MSE: 9.7640
Client 2 Local MSE: 9.5798
Client 3 Local MSE: 9.0877
Client 4 Local MSE: 10.1658
Aggregated Global Weights: [1.0791868 1.00696659]

--- Round 4/10 ---
Client 0 Local MSE: 8.0843
Client 1 Local MSE: 7.9418
Client 2 Local MSE: 7.8393
Client 3 Local MSE: 7.3570
Client 4 Local MSE: 8.2667
Aggregated Global Weights: [1.24132819 1.14542193]

--- Round 5/10 ---
Client 0 Local MSE: 6.6176
Client 1 Local MSE: 6.4675
Client 2 Local MSE: 6.4295
Client 3 Local MSE: 5.9605
Client 4 Local MSE: 6.7300
Aggregated Global Weights: [1.38738906 1.26972114]

--- Round 6/10 ---
Client 0 Local MSE: 5.4285
Client 1 Local MSE: 5.2745
Client 2 Local MSE: 5.2874
Client 3 Local MSE: 4.8340
Client 4 Local MSE: 5.4868
Aggregated Global Weights: [1.51898384 1.38128857]

--- Round 7/10 ---
Client 0 Local MSE: 4.4641
Client 1 Local MSE: 4.3093
Client 2 Local MSE: 4.3620
Client 3 Local MSE: 3.9256
Client 4 Local MSE: 4.4809
Aggregated Global Weights: [1.63756474 1.48140547]

--- Round 8/10 ---
Client 0 Local MSE: 3.6818
Client 1 Local MSE: 3.5282
Client 2 Local MSE: 3.6121
Client 3 Local MSE: 3.1934
Client 4 Local MSE: 3.6670
Aggregated Global Weights: [1.74443803 1.57122427]

--- Round 9/10 ---
Client 0 Local MSE: 3.0471
Client 1 Local MSE: 2.8963
Client 2 Local MSE: 3.0043
Client 3 Local MSE: 2.6034
Client 4 Local MSE: 3.0085
Aggregated Global Weights: [1.84077873 1.65178162]

--- Round 10/10 ---
Client 0 Local MSE: 2.5319
Client 1 Local MSE: 2.3849
Client 2 Local MSE: 2.5115
Client 3 Local MSE: 2.1283
Client 4 Local MSE: 2.4758
Aggregated Global Weights: [1.9276438 1.72400996]

Federated Learning complete!
Final Global Weights: [1.9276438 1.72400996]

Groq API Response (Conceptual):
Federated learning allows multiple devices to collaboratively train a global model without sharing raw data, ensuring data privacy.

How the Code Works
    Configuration: Defines parameters like the number of clients, training rounds, local epochs, and learning rate.
    Client Class:
Each client represents a local device. It initializes with a unique ID and generates its own synthetic linear data (_generate_local_data).
The train_local_model method simulates local training using gradient descent. It takes global_weights from the server, trains on its own data, and returns updated weights.
evaluate_local_model calculates the Mean Squared Error (MSE) on local data to measure performance.

Server Class:
The server initializes global_weights representing the shared model.
The aggregate_models method receives updated weights from all clients, averages them, and updates the global_weights.

run_federated_learning Function:
Orchestrates the federated learning process. It initializes the Server and Clients, then, for each training round:
The server's global_weights are sent conceptually to each client.
Clients train locally for LOCAL_EPOCHS and send updated weights back.
The server collects client_updates and aggregates them into new global_weights.
The process repeats for NUM_ROUNDS.

Groq Integration Placeholder:
Demonstrates conceptually where you might integrate the Groq API. In advanced scenarios, Groq can be used after training for rapid inference or enhanced AI functionality. Remember to replace the GROQ_API_KEY placeholder with your actual Groq API key and install the Groq library (pip install groq) to test a real integration.

How to Run the Code:
Save the code as federated_learning_demo.py.
Open your terminal or command prompt.
Navigate to the file's directory.
Run it with: python federated_learning_demo.py

You will observe the global weights converging over multiple rounds, demonstrating collaborative learning without sharing raw data directly. The local MSE values show each client's individual model improvement.
    This example provides a basic understanding of federated learning. Real-world implementations typically involve advanced models, optimization strategies, security protocols, and communication methods, but the fundamental principle remains collaborative learning on decentralized data.
    Conclusion
    Federated learning stands as a transformative paradigm in artificial intelligence, offering a powerful solution to escalating challenges of data privacy, security, and efficient utilization of distributed data. By enabling collaborative model training without centralizing raw, sensitive information, FL has opened new avenues for AI development in sectors previously constrained by regulatory hurdles, logistical complexities, or ethical considerations.
    We have explored how federated learning operates through an iterative cycle of local training, model update transmission, and global aggregation, ensuring data remains on the device while collective intelligence is harnessed. Its importance is underscored by the pervasive need for privacy-preserving AI, the explosion of edge devices, the imperative to unlock insights from data silos, and the continuous demand for personalized and secure AI experiences.
    The diverse range of applications, from enhancing mobile keyboard predictions and accelerating medical research to bolstering fraud detection in finance and enabling smarter autonomous vehicles, demonstrates FL's versatility and real-world impact. While challenges such as communication overhead, data heterogeneity, and inherent security risks persist, ongoing research and advancements are continuously refining FL techniques, making it more robust, efficient, and scalable.
    The future of AI is increasingly collaborative and privacy-aware. Federated learning is not just a niche solution but a fundamental shift towards building more responsible, ethical, and effective AI systems that respect data sovereignty. As technology evolves and privacy concerns deepen, federated learning will undoubtedly play a pivotal role in shaping the next generation of intelligent applications, fostering innovation while safeguarding the very data that fuels it.
    Frequently Asked Questions (FAQs)
    1. What is the main difference between Federated Learning and traditional Machine Learning?
    The main difference lies in data handling. Traditional machine learning centralizes all data on a single server for training, which can raise privacy and security concerns. Federated Learning, conversely, keeps data decentralized on local devices or servers. Only model updates (e.g., learned parameters or gradients), not raw data, are shared with a central server for aggregation, ensuring data privacy.
    2. Is Federated Learning completely secure and private?
    While federated learning significantly enhances privacy by keeping raw data on local devices, it is not entirely immune to security and privacy risks. Model updates can still potentially leak sensitive information through advanced attacks (e.g., gradient inversion attacks). Therefore, FL often incorporates additional privacy-enhancing technologies like differential privacy and secure multi-party computation to further strengthen security, though these can introduce trade-offs with model accuracy or computational overhead.
    3. What are some real-world applications of Federated Learning?
    Federated Learning is used in various domains. Prominent examples include: improving predictive text and voice recognition on mobile phones (e.g., Google Gboard), enabling collaborative medical research for disease detection across hospitals without sharing patient data, enhancing fraud detection in financial services, and training autonomous vehicle models from distributed driving data.
    4. What is data heterogeneity in Federated Learning, and why is it a challenge?
    Data heterogeneity refers to the non-uniform distribution of data across different participating clients in a federated learning setup. This means each client's local dataset might have unique characteristics or biases. It's a challenge because it can lead to slower model convergence, oscillations, or a global model that performs suboptimally on individual clients whose data significantly differs from the aggregated average. Advanced algorithms are needed to mitigate its effects.
    5. Can Federated Learning be used with any type of machine learning model?
    Federated Learning principles can be applied to a wide range of machine learning models, including linear regression, neural networks (for image classification, natural language processing), and more complex deep learning architectures. The core requirement is the ability to train a model locally and then extract and share its learned parameters or gradients for aggregation.
    6. What is the role of the central server in Federated Learning?
    The central server (or orchestrator) in federated learning plays a crucial role in coordinating the training process. It initializes and distributes the global model to clients, collects model updates from them, aggregates these updates to improve the global model, and then redistributes the updated model for the next training round. It acts as an aggregator and coordinator, not a data collector.
    7. What are the computational requirements for clients in Federated Learning?
    Clients in federated learning need sufficient computational resources (CPU, memory, battery) to train a machine learning model locally on their device. The exact requirements depend on the complexity of the model and the size of the local dataset. While some FL applications run on powerful servers, many are designed for edge devices like smartphones, which necessitates efficient model architectures and optimized training procedures to accommodate their limited resources.
  24. by: Chris Coyier
    Mon, 08 Sep 2025 17:01:16 +0000

    There’s a nice article by Enzo Manuel Mangano called Checkbox Interactions – The beauty of Layout Animations. In the end, you get some nicely animated checkboxes, essentially:
    I like it.
    It’s a modern-looking multiple-choice with very clear UX.
Enzo’s tutorial is all React Native-ified. I think Enzo is a React Native guy and that’s his thing. And that’s fine and makes sense. A lot of the time, UI like this is part of highly dynamic web apps that deserve that kind of treatment, and embracing that style of code to help it live there is fine.
But I wouldn’t want anyone to think it’s necessary to reach for a tool like React Native to create something like this. This is actually pretty darn simple HTML & CSS that can be quite performant, semantic, and accessible.
Each of those “buttons” is really this:
<label>
  Italian
  <input type="checkbox" class="screenreader-only">
  <svg>
    <path d="..." />
  </svg>
</label>

Each button is really a label, because then you are appropriately connecting the text of the label to the checkbox in an accessible way. I will call out Enzo for this one: his final demo outputs <div>s and <span>s, which you can’t even tab to, so it’s a pretty inaccessible final result.
    When it’s a label, the whole thing becomes “clickable” then, toggling the checkbox within.
We can hide the actual checkboxes (notice the UI doesn’t actually show them) by applying an accessible screenreader-only class. Each of them remains an interactive element, though, and can still be tabbed to. But because we can’t see them, we should do…
label {
  cursor: pointer;

  &:focus-within {
    outline: 3px solid orange;
  }
}

This will ensure there is a visual indicator when any of them are focused. And with that focus, the spacebar will work to activate them like any normal checkbox.
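The screenreader-only class itself isn’t shown here, so here’s an assumption of what it typically contains: the common visually-hidden pattern, which hides the checkbox from view without removing it from the accessibility tree or the tab order the way display: none would.

.screenreader-only {
  /* Visually hidden, but still focusable and still announced
     by screen readers (unlike display: none) */
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip-path: inset(50%);
  white-space: nowrap;
  border: 0;
}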
The fun part, though, is the neat interaction where activating one of the options animates the checkbox in and changes some colors. It’s easy! I promise! We just check if the label has a checked input and make those changes. All right in CSS.
label {
  --highlightColor: oklch(0.764 0.1924 65.38);
  cursor: pointer;
  border: 1px solid white;

  &:focus-within {
    outline: 3px solid var(--highlightColor);
  }

  svg {
    width: 0;
    transition: width 0.66s;
    fill: var(--highlightColor);
  }

  &:has(:checked) {
    border-color: var(--highlightColor);
    background: color-mix(in srgb, var(--highlightColor), transparent 80%);

    svg {
      width: 0.88lh;
    }
  }
}

The finished demo, I’d say, is near 100% the same experience as Enzo’s. Cheers!
  25. Composition in CSS

    by: Zell Liew
    Mon, 08 Sep 2025 13:55:26 +0000

    Tailwind and other utility libraries have been huge proponents of composition. But, to me, their version of composition has always carried a heavy sense of naïveté.
    I mean, utility composition is basically adding CSS values to the element, one at a time…
    <div class="p-4 border-2 border-blue-500"> ... </div> If we’re honest for a minute, how is this composition different from adding CSS rules directly into a class?
/* This is composition too! */
.card {
  padding: 1rem;
  border: 2px solid var(--color-blue-500);
}

That said, I can’t deny the fact that I’ve been thinking a lot more about composition ever since I began using Tailwind. So, here are a couple of notes that I’ve gathered together about CSS composition.
    It’s not a new concept
CSS is a composable language by nature; composition is built right into the cascade. Let’s say you’ve decided to style a button with a few properties:
.button {
  display: inline-flex;
  padding: 0.75em 1.5em;
  /* other styles... */
}

You can always tag on other classes to modify the button’s appearance:
    <button class="button primary"> ... </button> <button class="button secondary"> ... </button> .primary { background: orange; } .secondary { background: pink; } You can even change the appearance of other elements to a button by adding the .button class:
    <a href="#" class="button"> ... </a> Composition is happening in both cases:
We composed .button onto an <a>.
We composed .primary onto .button.

So, CSS composition has been in existence since forever. We simply don’t talk about composition as a Big Thing because it’s the nature of the language.
    Developers take a pretty narrow view of composition
When developers talk about composition in CSS, they always seem to restrict the definition of composition to the addition of classes in the HTML.
    <div class="one two"> ... </div> What’s interesting is that few people, if any, speak about composition within CSS files — from the angle of using Sass mixins or advanced Tailwind utilities.
    In these cases, we are also composing styles… just not directly in the HTML!
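For instance, Tailwind’s @apply directive composes utility styles inside a stylesheet (@apply is a real Tailwind feature; the specific utilities composed here are just an arbitrary illustration):

/* Composing utilities in the CSS file rather than the HTML */
.button {
  @apply inline-flex px-6 py-3;
}

And the Sass-mixin version of the same idea: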
@mixin button() {
  display: inline-flex;
  padding: 0.75em 1.5em;
  /* other styles ... */
}

.button {
  @include button;
}

What is composition?
    Composition comes from two possible words:
Compose: put together
Composite: made up of distinct parts or elements

Both words come from the same Latin root, componere, which means to put together or arrange.
    In other words… all work is put together in some way, so all work is composed. This makes me wonder why composition is used in such a limited context. 🤔
    Moving on…
    Composition doesn’t reduce bloat
    Class composition reduces CSS bloat only if you’re using utility classes. However, class composition with utility classes is likely to create HTML bloat.
    <div class="utility composition">...</div> <div class="one utility at a time">...</div> <div class="may create html bloat">...</div> On the other hand, class composition with selectors might not reduce CSS bloat. But they definitely introduce lesser HTML bloat.
    <div class="class composition">...</div> <div class="card primary">...</div> <div class="may override properties">...</div> <div class="less html bloat"> ... </div> Which is better? ¯\_(ツ)_/¯
    HTML bloat and CSS bloat are probably the least of your concerns
    We know this:
HTML can contain a huge amount of things and it doesn’t affect performance much.
CSS, too. 500 lines of CSS is approximately 12kb to 15kb (according to Claude).
An image typically weighs 150kb or perhaps even more.

For most projects, optimizing your use of images is going to net you better weight reduction than agonizing over utility vs. selector composition.
    Refactoring your codebase to decrease CSS bloat is not likely to increase performance much. Maybe a 2ms decrease in load times?
But refactoring your codebase so that developers can recognize patterns quickly and style things more easily? Much more worth it.
    So, I’d say:
HTML and CSS bloat are pretty inconsequential.
It’s worthwhile to focus on architecture, structure, and clarity instead.

Advanced compositions
    If we zoom out, we can see that all styles we write fall into four categories:
Layouts: affects how we place things on the page
Typography: everything font-related
Theming: everything color-related
Effects: nice-to-have stuff like gradients, shadows, etc.

Styles from each of these four categories don’t intersect with each other. For example:
font-weight belongs exclusively to the Typography category
color belongs exclusively to the Theming category

It makes sense to create composable classes per category. When that’s done, you can mix and match these classes to create the final output. Very much like Lego, for lack of a better example. (Alright, maybe Duplo for the kids?)
    So your HTML might end up looking like this, assuming you do class composition for these four categories:
<!-- These are all pseudo classes. Use your imagination for now! -->
<div class="layout-1 layout-2 effects-1">
  <h2 class="typography-1 theming-1"> ... </h2>
  <div class="typography-2"> ... </div>
</div>

A real example of this would be the following, if we used classes from Splendid Styles and Splendid Layouts:
    <div class="card vertical elevation-3"> <h2 class="inter-title"> ... </h2> <div class="prose"> ... </div> </div> I’m writing more about this four-category system and how I’m creating composable classes in my latest work: Unorthodox Tailwind. Give it a check if you’re interested!
    Wrapping up
    To sum up:
CSS is composable by nature.
Developers seem to be quite narrow-minded about what composition means in CSS.
You can do composition in the HTML or in the CSS.
Styles we write can be divided into four categories: layouts, typography, theming, and effects.

And finally: Splendid Styles contains classes that can aid composition in each of these four categories. Splendid Layouts handles the layout portion. And I’m writing more about how I create composable classes in my course Unorthodox Tailwind.
    Composition in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
