Blog Entries posted by Blogger

  1. by: Adnan Shabbir
    Mon, 23 Jun 2025 13:40:56 +0000

Bash (Bourne Again Shell) is a free and open-source shell and scripting language. Its journey started in the late 1980s, and since then, Bash has been adopted by everyday Linux users and Linux SysAdmins alike.
Bash automates the daily tasks of a Linux System Administrator, who would otherwise spend hours running scripts and commands by hand. And not only SysAdmins: the simplicity and easy-to-learn nature of Bash help automate the tasks of regular Linux users as well.
Inspired by this, we demonstrate here the 10 most useful Bash scripts for Linux SysAdmins. These are chosen based on the general work of any Linux System Administrator (from small scale to large).
10 Bash Scripts to Automate Daily Linux SysAdmin Tasks
Prerequisite 1: Running a Bash Script | To be Done Before Running Each Script in This Post
Prerequisite 2: Package Management Commands for Distros Other Than Debian/Ubuntu
Script 1: Update and Upgrade the System Repositories/Packages Index
Script 2: Install a Package on Linux
Script 3: Remove a Package
Script 4: Monitoring Systems Performance
Script 5: Log Monitoring
Script 6: User Management | Adding a New User, Adding a User to a Group
Script 7: Disk Management
Script 8: Service Management
Script 9: Process Management
Script 10: Allow or Deny Services Over the Firewall
Bonus: Automating the Scripts Using Cron
Conclusion
10 Bash Scripts to Automate Daily Linux SysAdmin Tasks
    A System Administrator can create as many scripts as required. We have automated some of the most common and most used tasks through Bash scripts. Let’s go through the prerequisites first and then the Scripts:
    Prerequisite 1: Running a Bash Script | To be Done Before Running Each Script in This Post
    Before we get into the scripts, let’s quickly go through the process to run a bash script.
    Step 1: Make the Script Executable
A Bash script cannot be run until it is made executable. Since these scripts are meant for the Linux SysAdmin, we use “u+x” with “sudo” to make the scripts executable for the admin (owner) only:
    sudo chmod u+x /path/to/script
    Step 2: Execute the Script
    Once the script is executable, it can now be run from the terminal using the command:
sudo /path/to/script
Click here to get more details on running a Bash script.
    Prerequisite 2: Package Management Commands for Distros Other Than Debian/Ubuntu
    To assist with Script 1, Script 2, and Script 3, we prepared a command cheat sheet for managing the packages on Linux distros other than Debian/Ubuntu and their derivatives. Here’s the table that lists the commands referring to each package manager of the Linux distro:
Package Manager | Update/Upgrade | Install | Remove
pacman (Arch-based) | sudo pacman -Syu | sudo pacman -S <package> | sudo pacman -R <package>
zypper (SUSE-based) | sudo zypper refresh / sudo zypper update | sudo zypper install <package> | sudo zypper remove <package>
dnf (Fedora/RHEL-based) | sudo dnf update / sudo dnf upgrade | sudo dnf install <package> | sudo dnf remove <package>
apt (Debian/Ubuntu-based) | sudo apt update / sudo apt upgrade | sudo apt install <package> | sudo apt remove <package>
Script 1: Update and Upgrade the System Repositories/Packages Index
“Update and upgrade” commands are among the most used commands for any Linux SysAdmin or everyday user.
The script below updates the repositories, upgrades the packages, and autoremoves unused dependencies:
#!/bin/bash

    #updating the system repositories

    sudo apt update -y

    #installing the updated packages from repositories

    sudo apt upgrade -y

    #auto removing the unused dependencies

sudo apt autoremove -y
Note: Please refer to the table (Prerequisite 2) for the package management commands of other Linux distros.

    Let’s make it executable:

Permission denied: Since the script belongs to the SysAdmin, we restricted the execute permission to the sudo user only:

    Here’s the update, upgrade, and autoremoval of packages:

    Script 2: Install a Package on Linux
    A Linux SysAdmin has to install and remove packages from the systems and keep an eye on this process. Each package installation requires a few commands to effectively install that package.
    Note: Please refer to the table (Prerequisites 2) for Linux package management commands.
    #!/bin/bash

    #update and upgrade system packages repositories

    sudo apt update && sudo apt upgrade

    #install any package

sudo apt install $1
Update and upgrade the package repositories, then install a specific package ($1; specify the package name while running the script):

    Here, we choose $1=ssh and run the script:

    Script 3: Remove a Package
    A complete removal of a package involves multiple commands. Let’s manage it through a single script:
    Note: Go through the table (Prerequisites 2) for the commands of other Linux package managers:
    #!/bin/bash

    #remove the package with only a few dependencies

    sudo apt remove $1

    #remove package and its data

    sudo apt purge $1

    #remove unused dependencies

sudo apt autoremove $1
Let’s execute it, i.e., “$1=ssh”:
    sudo ./removepack.sh ssh
    Script 4: Monitoring Systems Performance
    A Linux sysadmin has to monitor and keep an eye on measurable components (CPU, RAM) of the system. These preferences vary from organization to organization.
    Here’s the Bash script that checks the RAM status, Uptime, and CPU/memory stats, which are the primary components to monitor:
    #!/bin/bash

    echo "RAM Status"

    # free: RAM status

    free -h

    echo "Uptime"

    # uptime: how long the system has been running

    uptime

    echo "CPU/memory stats"

    # vmstat: Live CPU/memory stats

vmstat 2
free -h: RAM status in human-readable form.
uptime: how long the system has been running.
vmstat 2: live CPU/memory stats, recorded every 2 seconds.
Once we run the script, the output shows the “RAM Status”, the “Uptime”, and the “CPU/Memory” status:

    Script 5: Log Monitoring
    A Linux SysAdmin has to go through different log files to effectively manage the system. For instance, the “/var/log/auth.log” file contains the user logins/logouts, SSH access, sudo commands, and other authentication mechanisms.
Here’s the Bash script that filters these logs based on a search term:
    #!/bin/bash

grep "$1" /var/log/auth.log
The $1 positional parameter indicates that this script is run with one argument:

We use “UID=0” as the argument for this script. Thus, only those records are shown that contain UID=0:

    The log file can be changed in the script as per the requirement of the SysAdmin. Here are the log files associated with different types of logs in Linux:
Log File/Address | Purpose/Description
/var/log/ | The main directory where most of the log files are placed.
/var/log/apache2/ | Apache server logs (access and error logs).
/var/log/dmesg | Messages relevant to the device drivers.
/var/log/kern.log | Logs/messages related to the kernel.
/var/log/syslog | General system logs and messages from different system services.
There are a few more. Let’s open the “/var/log” directory and look at the logs a SysAdmin can use for fetching details:

    Script 6: User Management | Adding a New User, Adding a User to a Group
    Adding a new user is one of the key activities in a Linux sysadmin’s daily tasks. There are numerous ways to add a new user with a Bash script. We have created the following Bash Script that demonstrates the user creation:
    #!/bin/bash

    USER=$1

    GROUP=$2

    #Creating a group

    sudo groupadd $GROUP

    #Creating a User

    sudo adduser $USER

    #Adding a user to a group

sudo usermod -aG $GROUP $USER
Two positional parameters are used: $1 for the user and $2 for the group. First, the required group is created. Then, the user is created. Lastly, the newly created user is added to the group.
    Since the script has positional parameters, let’s execute it with the required 2 arguments (one for username and the other for groupname):

    Similarly, the system administrator can create scripts to delete users as well.
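As a minimal sketch of such a deletion script (the file name “deluser.sh” is an assumption, and deluser/groupdel are the Debian/Ubuntu commands; non-Debian distros use userdel/groupdel):

```shell
# Save the following as deluser.sh, then make it executable: sudo chmod u+x deluser.sh
cat > deluser.sh <<'EOF'
#!/bin/bash
# $1 = username, $2 = group (mirrors the creation script above)

# Remove the user from the group first
sudo gpasswd -d "$1" "$2"

# Delete the user along with their home directory
sudo deluser --remove-home "$1"

# Delete the now-unused group
sudo groupdel "$2"
EOF
chmod u+x deluser.sh
```

Run it the same way as the creation script, e.g., sudo ./deluser.sh username groupname.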
    Script 7: Disk Management
Disk management involves multiple commands: to list the block devices, we run the “lsblk” command; to mount or unmount a filesystem, we run the “mount” and “umount” commands.
    Let’s incorporate a few commands in a Bash script to view some data about disks:
    #!/bin/bash

    #Disk space check

    df -h

    #Disk usage of a specific directory

    echo "Disk Usage of:" $1

du -sh $1
The $1 positional parameter is the path of the directory whose disk usage is to be checked:

    Let’s run the script:
sudo ./dfdu.sh /home/adnan/Downloads
Remember to provide the argument value, i.e., here, “$1=/home/adnan/Downloads”:

    Script 8: Service Management
To manage any service, the SysAdmin has to run multiple commands. For example, to start a service, the SysAdmin uses the “systemctl start” command and verifies its status through “systemctl status”. Let’s make this task easy for Linux SysAdmins:
    Start a Service
The following Bash script manages only one service, i.e., every run of the script manages the NGINX service:
    #!/bin/bash

    sudo systemctl start nginx

    sudo systemctl enable nginx

    sudo systemctl status nginx
    For a more diverse use case, we declare a positional parameter to manage different services with each new run:
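The parameterized script is a sketch mirroring the stop script later in this section; “sys.sh” matches the file name used in the execution command:

```shell
# Save as sys.sh, then make it executable: sudo chmod u+x sys.sh
cat > sys.sh <<'EOF'
#!/bin/bash
# $1 = name of the service to start, enable, and check
sudo systemctl start $1
sudo systemctl enable $1
sudo systemctl status $1
EOF
chmod u+x sys.sh
```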

    Now, pass the value of the positional parameter at the time of executing the script:
sudo ./sys.sh apache2
The “apache2” is the argument on which the script runs:

    Stop a Service
    In this case, we use the positional parameter to make it more convenient for the Linux SysAdmins or the regular users:
    #!/bin/bash

    sudo systemctl stop $1

    sudo systemctl disable $1

sudo systemctl status $1
The $1 positional parameter refers to the specific service mentioned when executing the script:

    Let’s execute the command:
    sudo ./sys1.sh apache2
    Script 9: Process Management
A Linux System Administrator keeps a keen eye on the processes and manages each category of process as required. A simple script can kill specific processes. For instance, the script demonstrated here fetches the zombie and defunct processes and identifies their parent process IDs:
#!/bin/bash

#Fetching the process IDs of zombie and defunct processes

ZOM=$(ps aux | awk '$8 ~ /Z/ {print $2}')

DEF=$(ps aux | grep -w 'defunct' | grep -v grep | awk '{print $2}')

echo "Zombie and Defunct Process IDs are:" $ZOM "and" $DEF

#Getting the parent process IDs of the zombie and defunct processes

PPID1=$(ps -o ppid= $ZOM)

PPID2=$(ps -o ppid= $DEF)

echo "Parent process IDs of Zombie and Defunct Processes are:" $PPID1 "and" $PPID2
The zombie and defunct process IDs are fetched and stored in variables. Then, the parent process IDs of those processes are fetched; the parent processes can then be killed.
    Let’s execute it:
    sudo ./process.sh
    Script 10: Allow or Deny Services Over the Firewall
A firewall is a virtual wall between your system and the systems connecting to it. We can set firewall rules to allow or deny whatever we want, so the firewall plays a significant role in managing the system. Let’s automate allowing or denying any service on your system:
    Allow a Service Through the Firewall
    The following script enables SSH through the firewall:
    #!/bin/bash

    sudo ufw allow ssh

    sudo ufw enable

    sudo ufw status
    Let’s execute the script.

    We can also include a positional parameter here to use the same script for multiple services to be allowed on the firewall. For instance, the script below has only one positional parameter. This parameter’s value is to be provided at the time of executing the script.
    #!/bin/bash

    sudo ufw allow $1

    sudo ufw enable

    sudo ufw status
    While executing, just specify the name of the service as an argument:
    sudo ./firewall.sh ssh
    Deny a Service or Deny All:
We can either deny one service or deny all services attempting to reach our system. The script below sets the default incoming policy to deny and disables the firewall as well.
    Note: These kinds of denial scripts are run when the overall system is in trouble, and we just need to make sure there is no service trying to approach our system.
    #!/bin/bash

    sudo ufw default deny incoming

    sudo ufw disable

    sudo ufw status

    sudo ufw default allow outgoing
    Running the script:

Now that you have learned the 10 Bash scripts to automate daily SysAdmin tasks, let’s learn how to schedule the scripts so they run automatically.
    Bonus: Automating the Scripts Using Cron
    A cron job allows the SysAdmin to execute a specific script at a specific time, i.e., scheduling the execution of the script. It is managed through the crontab file.
    First, use the “crontab -e” command to enter the edit mode of the crontab file:
    crontab -e
To put a command on a schedule, you have to add it to the crontab file using a specific syntax. The entry below runs the script at the 1st minute of each hour.
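As a sketch, the crontab entry for “the 1st minute of each hour” looks like this (the script path is illustrative):

```
# m h dom mon dow command
1 * * * * /path/to/script.sh
```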

    There are a total of 5 parameters to be considered for each of the commands:
m: minute of the hour, i.e., choose between 0-59.
h: hour of the day, i.e., choose between 0-23.
dom: day of the month → choose between 1-31.
mon: month → choose between 1-12.
dow: day of the week → choose between 0-7 (0 and 7 both mean Sunday).
    You can check the crontab listings using:
    crontab -l
    Important: Do you want a Linux Commands Cheat Sheet before you start using Bash? Click here to get a detailed commands cheat sheet.
    Conclusion
Bash has eased the way we run commands in Linux. Normally, we run one command at a time in a terminal session. With Bash scripts, we can automate command execution to accomplish tasks with minimal human involvement: write a script once, then reuse it for repeated tasks.
In this post, we have demonstrated 10 Bash scripts to automate daily Linux System Administrator tasks.
    FAQs
    How to run a Bash script as an Admin?
    Use the “sudo /path/to/script” command to run the Bash script as an Admin. It is recommended to restrict the executable permissions of the Bash scripts to only authorized persons.
    What is #!/bin/bash in Bash?
The “#!/bin/bash” is the Bash shebang. It tells the system to use the “bash” interpreter to run the script. If omitted, the script is executed by the default shell, which may not be Bash.
    How do I give permissions to run a Bash script?
The “chmod” command is used to grant permission to run a Bash script. For a Linux SysAdmin script, use the “sudo chmod u+x /path/to/script” command.
    What does $1 mean in Bash?
    In Bash, $1 is a positional parameter. While running a Bash script, the first argument refers to $1, the second argument refers to $2, and so on.
  2. by: Adnan Shabbir
    Mon, 23 Jun 2025 12:34:03 +0000

    Basic Workflow of Ansible | What components are necessary

    sudo apt update
    sudo apt install ansible
    ansible --version
    Ansible Control Node IP: 192.168.140.139 (Where Ansible is configured)
    Ansible Host IPs: {
    Server 1 [172.17.33.7]
    Server2 [192.168.18.140]
    }
    Inventory File:
Default inventory file location: /etc/ansible/hosts. It is usually not created when Ansible is installed from the distro’s default repositories, so we need to create it ourselves, anywhere in the filesystem. If we create it at the default location, there is no need to point Ansible to it.
However, when we create the inventory file somewhere other than the default location, we need to tell Ansible where it is.
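For illustration, a minimal INI-style inventory listing the two host IPs above under a [servers] group (the group name pinged later in this post) could look like this; the host aliases are assumptions:

```
[servers]
server1 ansible_host=172.17.33.7
server2 ansible_host=192.168.18.140
```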


    Inventory listing (Verifying the Inventory Listing):
    ansible-inventory --list -y

    SSH (as it is the primary connection medium of Ansible with its hosts):
    sudo apt install ssh
    Allow port 22 through the firewall on the client side:
    sudo ufw allow 22
    Let’s check the status of the firewall:
    sudo ufw status
    Step 2: Establish a No-Password Login on a Specific Username | At the Host End
    Create a new dedicated user for the Ansible operations:
    sudo adduser username
Add the Ansible user (here, “ansible_root”) to the sudo group:
    sudo usermod -aG sudo ansible_root
Alternatively, grant the user sudo rights by editing the sudoers file:
    sudo nano /etc/sudoers
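Inside the sudoers file, a line like the following grants the dedicated user sudo rights; the username matches “ansible_root” used above, and NOPASSWD is an optional convenience for automation (use with care):

```
ansible_root ALL=(ALL:ALL) NOPASSWD: ALL
```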
    SSH Connection (From Ansible Control Node to one Ansible Host):
    ssh username@host-ip-address
    ansible all -m ping -u ansible_root
    SSH key generation and copying the public key to the remote host:
    ssh-keygen
    Note: Copy the public key to the user that you will be using to control the hosts on various machines.
    ssh-copy-id username@host-ip-address

    Test All the Servers Listed in the Inventory File:
Testing the Ansible connection to the Ansible hosts (remember to use a username that is trusted at the host or has a passwordless login). Here, the user “adnan” is the trusted user in the Ansible user list.
    ansible all -m ping -u username
    Same with the different username configured on the host side:

    We can ping a specific group, i.e., in our case, we have a group named [servers] in the inventory.

     
  3. by: Ani
    Mon, 23 Jun 2025 11:56:48 +0000

When young girls see women succeeding in tech, speaking up, being bold, and owning their space, it makes a difference for them. We need more of that. We, as women, can lift each other up by simply sharing our stories, supporting one another, and being visible. That’s how we create a stronger, more diverse, and more inclusive tech environment – one where everyone feels they belong.
    About me
    I am Mari Martikainen. I work as a Partner Manager at Advania, and honestly, no two days are ever the same. My role revolves around building long-term, value-adding collaboration that goes beyond selling products to create competitive advantage and growth opportunities for all parties involved. At the core of everything I do is connection. Listening is one of the most valuable skills I have developed, and it has made a big difference in building trust and strong relationships.
    The Beginning of My Sales Career
    My sales career kicked off when I was 13 years old, selling strawberries. I have basically been in sales ever since. Studying sales and business felt natural for me. During my BBA studies, I understood the theoretical frameworks, what kind of impact sales can have on a business, and how strategic it can be.
    Looking back, I honestly would not be in the tech industry if it were not for one amazing teacher during my BBA studies. My teacher, Pirjo Pitkäpaasi, had a course where we had the opportunity to go for an internship. There were many companies we could choose from, but she encouraged us, a group of 20-year-old girls, to apply for internships in tech companies even though most of us thought it was more of a “guy thing.” That push changed everything for me.
    I picked a distributor company for my internship, and that’s where my journey in tech began. Ever since I have worked in sales-related roles, like key account manager, sales manager, and now as a partner manager.
I want to give my teacher credit for my being in the tech field. In my opinion, having a role model makes a difference to young girls. When young girls see women succeeding in tech, speaking up, being bold, and owning their space, it makes a difference for them. We need more of that. We, as women, can lift each other up by simply sharing our stories, supporting one another, and being visible. That’s how we create a stronger, more diverse, and more inclusive tech environment – one where everyone feels they belong.
    In Finland, unfortunately, I’ve still experienced stereotypes, sexism, and the challenge of having your voice truly heard, especially in more male-dominated spaces. It still happens. That’s why I believe so strongly in the power of having women mentors and role models.
    Mari Martikainen, Partner Manager, Advania

    Working in Advania – a technology company with people at heart
    From the very first day at Advania, I felt like part of the family. My colleagues welcomed me with open arms, and the team spirit here is just amazing. I love that in Advania, experts and strong professionalism are the core of our operations. In Advania, employees are valued and trusted, and that reflects in everything we do.
One of the perks of working at Advania is the flexibility. One can choose how one wants to work: on-site, hybrid, or fully remote. Being in the office is energizing. It’s where ideas start flowing, conversations spark, and, let’s be honest, the humor is great.
    Always Learning
    At Advania, knowledge-sharing is part of the culture. We have different internal channels where we post updates, insights, and learning materials. But of course, it is also up to you to stay curious and keep learning independently.
    One tool I always recommend is Google’s Digital Garage. It’s full of great (and free) courses on everything from digital marketing to coding. It’s a great way to keep your skills sharp and explore new areas at your own pace.
    Learning has always been part of my life. I started my journey with a Bachelor’s degree in Business Administration, majoring in Sales, which laid the groundwork for my tech career. Later on, I completed a Master’s degree in Business Development and Leadership. That program was a game changer. I also completed a Management Essentials Program, which focused on core leadership skills.
    How I found mindfulness
    While I was working on my master’s thesis, I was also promoted to a team leader role. It was an exciting time, but also one of the most challenging periods of my career, mentally and physically. There was so much to learn at once, and I found myself feeling completely overwhelmed.
That’s when I turned to mindfulness. I started incorporating short breathing exercises into my day, just five minutes at a time. It made a huge difference and helped me manage stress more effectively. One technique I still use today is box breathing; it is a simple method, but it really works when things get hectic.
Outside of work, I enjoy the little things that help me recharge: walks with my dog, going to the gym, and catching up with friends.
    Quote I live by
    One of my favorite quotes comes from Melinda Gates. “The more you can be authentic, the happier you are going to be – and life will work itself around that.” It resonates with me.
    I love TED Talks, but the one I love the most is the one by Reshma Saujani, “Teach girls bravery, not perfection.” It is such a powerful message that I think every woman in tech (and beyond) should hear.
    The post Role Model Blog: Mari Martikainen, Advania first appeared on Women in Tech Finland.
  4. by: Abhishek Prakash
    Sun, 22 Jun 2025 05:04:55 GMT

    The omnipresent top command is often the first tool that comes to mind for system resource monitoring in the Linux command line.
    Btop++ is a similar Linux system monitoring tool that shows usage statistics for processor, memory, disk, network, and processes.
    It is a C++ variant of the popular bashtop from the same developer. In fact, the developer states that Btop++ is a continuation of bashtop and bpytop.
    What makes Btop++ interesting
    Here are a few things that make btop++ a better choice than the top command:
Full mouse support, with clicks and scrolling.
Detailed stats for a selected process.
Fast, easy-to-use user interface.
Ability to filter processes.
Shows IO activity and speeds for disks.
Installation
    Btop++ is available in the official repositories of most Linux distributions.
    In Ubuntu 22.04 and above, you can use the following command to install it:
sudo apt install btop
If you are using Fedora, here is the command for you:
sudo dnf install btop
And, for Arch Linux users, you can use this:
sudo pacman -Syu btop
🪛 Troubleshooting tip: No UTF-8 locale detected
    When I first ran btop++ on an Arch Linux system, I encountered a "No UTF-8 locale detected" error.
ERROR: No UTF-8 locale detected! Use --force-utf argument to force start if you're sure your terminal can handle it.
To solve this, either run:
btop --force-utf
Or, edit your ~/.bashrc file to add the following line and fix it permanently:
export LANG=en_US.UTF-8
Running btop++
    To run btop++, open a terminal and run the command:
btop
In desktops like GNOME, there will be a menu entry for btop++ as well.
    Explore btop++ interface
While running it, you will notice that several letters in the title portions of the interface appear in a different color.
Special colours for characters
You can press these keys on the keyboard to access the related settings. For example, pressing the m key in the above screenshot will bring up a menu screen.
Btop++ menu
Here, hover over Options and press enter. This will bring up the settings dialog for btop++.
    Btop++ settings
    Navigate through the settings using the arrow keys and highlighted characters. The above video shows some settings changes using this btop++ menu.
📋 To keep things simple, Btop++ is also referred to as Btop at times.
Some essential Btop functions
In this section, we will take a look at a few important uses of Btop as a system monitor and process manager.
    Terminate a process
    While you are in Btop, press the down or up arrow key to move through the list of processes. When you are above a process you want to terminate, press the t key on your keyboard.
    Terminate a process
    Get more details for a process
You can press the enter key on a process to open it in a separate section. This gives more insight into that process, like status, CPU usage, elapsed time, etc.
Process details
Send more signals
    If you want to send a different signal to a process, Btop can do that as well. Hover over a process and press the s key on your keyboard.
From the list of signals, enter a number. That's it!
    Send more signals
    Configuring Btop++
    All options in btop++ are configurable via the TUI menu. Still, btop++ provides a text-based configuration file as well.
    You can find this autogenerated config file at ~/.config/btop/btop.conf.
    Edit this file in any of your favorite text editors to modify it.
    Changing the theme
You may come across some themes specifically created for btop++. For example, I am a fan of the Catppuccin theme these days, and I was glad to see a btop theme in this color palette.
    Here's what you should do for changing the theme. Get the .theme files. For Catppuccin, go to their release page and grab the latest themes.tar.gz file.
Extract it and you'll see four variants of the theme. Either copy all of them or just the one of your choice (you can see what each looks like on the GitHub repo) to the ~/.config/btop/themes folder.
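The steps above can be sketched in the shell like this (the archive name comes from the release page; the .theme file layout inside it is an assumption, so adjust the copy to the variant you picked):

```shell
# Extract the downloaded theme archive and copy the theme files into btop's theme folder
mkdir -p ~/.config/btop/themes
if [ -f themes.tar.gz ]; then
    tar -xzf themes.tar.gz
    cp ./*.theme ~/.config/btop/themes/
fi
```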
    Next, edit the file ~/.config/btop/btop.conf and change the color_theme = "Default" line to:
color_theme = "catppuccin_macchiato"
The above will change the theme to Catppuccin Macchiato.
    Getting help
    The best way to get help in btop is by using its TUI menu. While running btop, press the ESC key.
    Now, from the list, select HELP.
Select HELP
This will print the help window with the necessary keys and their functions.
Help screen
Wrapping Up
    For many Linux users, htop is the better top. However, Btop++ is a pretty nice system monitor too. If you do not like to use GUI resource monitors, and want something fast, this is a nice option to have. Alternatively, you may also explore glances.
Glances - A Versatile System Monitoring Tool for Linux Systems (It's FOSS)
  5. By: Janus Atienza
    Sat, 21 Jun 2025 17:07:04 +0000

    In today’s digital landscape, online visibility is paramount for any business’s success—and roofing companies are no exception. In a crowded market, differentiating yourself is challenging. Whether you’re a local roofer serving one town or a nationwide roofing chain, a strong online presence is non‑negotiable. This is where a roofing SEO agency merges powerfully with Linux. By running your SEO infrastructure on Linux, you benefit from scalable performance, stability, and cost‑effectiveness. In this article, we’ll explore:
    What SEO means for roofing businesses.
    Why working with a niche roofing SEO agency matters.
    How Linux underpins and boosts SEO efforts.
    Practical Linux‑based infrastructure strategies for SEO.
    Services offered and Linux tools that support them.
    How to choose the right roofing SEO partner using Linux.
    1. What Is SEO—and Why Roofing Companies Need It
Search Engine Optimization (SEO) is the art and science of improving your website so it appears higher on search engine results pages (SERPs) for keywords like:
    “roof repair near me”
    “roof installation [city]”
    “roofing contractor open now”
    In the highly competitive roofing world, SEO is a game‑changer. It ensures your website is discovered by users actively searching for the services you offer. By optimizing content, site structure, and performance, you attract more qualified leads, higher conversion rates—and ultimately more roofing jobs.
    Linux Relevance
    Linux servers are extremely well‑optimized for web operations. They power more than two‑thirds of the internet’s servers. Thanks to tight control over resources, lean configurations, and robust security, running your SEO tools and websites on Linux maximizes uptime and reliability—both critical for search engines and users.
    2. Why Partner with a Roofing SEO Agency?
    A roofing SEO agency isn’t just any digital marketing firm. They specialize in:
    Roofing‑specific keywords
    Local SEO for service providers
    Construction and home‑improvement niches
    Let’s delve into their core advantages—and see how Linux supports every step.
    a. Industry‑Specific Expertise
    Roofing has its own vocabulary and buying behaviors:
    “Emergency roof tarp”
    “Insurance roof claim contractor”
    “Metal roof snow guard [city]”
    A roofing SEO agency understands this landscape. They know which keywords convert best, what content resonates (e.g., “how to file a roof insurance claim”), and what SEO problems roofing sites often face—like image sizes, map integration, or portfolio galleries.
    On Linux, you can run keyword‑tracking scripts, deploy cron jobs for daily rank checks, and host powerful tools like Screaming Frog SEO Spider via Wine or open‑source alternatives like Sitebulb clones.
    b. Local SEO Mastery
    Since roofing is inherently local, most customers search with geographic intent:
    “Roof inspection Springfield MA”
    “Storm damage roof Northeast Ohio”
    A roofing SEO agency optimizes your Google My Business (GMB), builds local citations, and ensures NAP (Name, Address, Phone) consistency.
On Linux, you can automate GMB posts with cron-scheduled scripts, run routine health checks, and aggregate online reviews via Node.js or Python tools running on your VPS.
    c. Improved Search Rankings
    By leveraging keyword research, on‑page optimization, link building, and technical SEO, a roofing SEO agency drives websites up the search results.
    On Linux, you’ll run scalable services—multiple web apps, database servers (PostgreSQL/MySQL), caching layers (Redis, Varnish), and CI/CD pipelines—all with automated deployment, patching, and monitoring (via Prometheus, Grafana).
    d. Boosted Traffic & Leads
    SEO’s goal: more traffic → more leads. Roofing SEO agencies master:
    Targeting location‑based long‑tail keywords (“best roof leak repair in Brisbane suburb”).
    Crafting optimized service pages with clear CTAs.
    Setting up tracking via Google Analytics, GA4, or Matomo (self‑hosted on Linux).
    On Linux, you can host Matomo or Open Web Analytics, enabling full control of user‑tracking while maintaining GDPR compliance.
    e. Enhanced User Experience
    SEO isn’t just keywords—it’s usability. Google factors in:
    Site speed
    Mobile‑friendliness
    Clean navigation
    Linux excels with fast web stacks: Nginx + PHP-FPM, Node.js, and static‑site generators like Hugo or Gatsby running on headless CMS (e.g., Strapi). Use Let’s Encrypt, fail2ban, and SELinux for both performance and security.
    3. Core Services from Roofing SEO Agencies + Linux Tools
    Here are essential SEO services and the Linux‑powered tech that supports them:
    1. Keyword Research & Strategy
    Tools: Keyword Sheeter, AnswerThePublic API, Ahrefs’ CLI
    Linux hosting: VPS with Docker containers or self‑hosted tools like Serposcope
    2. On‑Page SEO
    Tools: Yoast SEO (WordPress), Rank Math, Markdown‑to‑HTML with Hugo
    Linux servers handle templates, HTML minification, image compression (ImageMagick), and sitemap generation via cron.
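The sitemap-generation step can be sketched as a small shell script suitable for a cron job. Everything below is a minimal, self-contained demo: the site root is a temporary directory created on the fly, and example.com is a placeholder domain.

```shell
# Minimal sketch of a cron-driven sitemap rebuild. The site root here is a
# throwaway demo directory and example.com is a placeholder domain.
SITE_ROOT=$(mktemp -d)
BASE_URL="https://example.com"

# Demo content standing in for a generated Hugo/WordPress site
mkdir -p "$SITE_ROOT/blog"
touch "$SITE_ROOT/index.html" "$SITE_ROOT/blog/roof-repair.html"

# Emit one <url> entry per HTML file, with paths taken relative to the site root
{
  printf '<?xml version="1.0" encoding="UTF-8"?>\n'
  printf '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
  find "$SITE_ROOT" -name '*.html' | sort | while read -r f; do
    printf '  <url><loc>%s%s</loc></url>\n' "$BASE_URL" "${f#"$SITE_ROOT"}"
  done
  printf '</urlset>\n'
} > "$SITE_ROOT/sitemap.xml"

cat "$SITE_ROOT/sitemap.xml"
```

In production, a crontab line such as `0 3 * * * /opt/seo/bin/rebuild-sitemap.sh` (path hypothetical) would keep the sitemap fresh nightly.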
    3. Local SEO + Google My Business
    Tools: Google My Business API, Moz Local, BrightLocal CLI
    Linux enables scheduled sync scripts and GMB post automation.
    4. Link Building & Backlinks
    Tools: Majestic via API, Screaming Frog for crawl audits (Wine)
    Automate outreach (via Lemlist CLI) and monitor backlink health with custom Python bots.
    5. Content Marketing & Blogging
    Frameworks: WordPress, Ghost, Hugo, Jekyll, Strapi
    Host on Linux with Nginx, PostgreSQL, CI/CD with GitHub Actions, image optimization pipelines, SEO metadata injection.
    6. Technical SEO
    Tools: Chrome Headless, Lighthouse, SEOptimer, Brotli/Gzip compression, SSL
    Use Linux to monitor uptime, redirect chains (via Apache/Nginx), implement structured data (JSON-LD).
    7. Performance Tracking & Reporting
    Tools: Google Data Studio, Grafana, Matomo
    Linux handles data collection (via MySQL or ClickHouse), scheduled report generation, and auto‑email via Postfix.
    4. Linux Infrastructure Best Practices for Roofing SEO
    Here’s how a roofing SEO agency might architect their setup:
    Server Foundation
    OS: Ubuntu LTS, Debian, CentOS, or AlmaLinux
    Web server: Nginx with HTTP/2
    Runtime: PHP-FPM or Node.js
    Database: MySQL, PostgreSQL, or MariaDB
    Caching: Redis or Varnish
    Automation & Scalability
    Deploy with Docker/Kubernetes
    Use Ansible for configuration management
    Implement CI/CD (Jenkins, GitLab CI)
    Monitoring & Security
    Tools: Prometheus, Grafana, Node Exporter, fail2ban, UFW
    Automated patching: unattended-upgrades
    TLS: Let’s Encrypt / Certbot
    Performance Enhancements
    Enable Brotli/Gzip compression
    Use WebP image formats
    Lazy load images, defer JS, remove unused CSS
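A quick way to sanity-check what text compression buys you is to compare raw and compressed sizes on the command line. The sketch below uses gzip as a universally available stand-in for Brotli, and fakes a repetitive HTML page; real pages compress similarly well.

```shell
# Compare raw vs gzipped size of a sample HTML payload.
PAGE=$(mktemp)

# Fake a repetitive HTML page (markup repeats, so it compresses heavily)
for i in $(seq 1 200); do
  echo '<div class="card"><h2>Roof repair</h2><p>Call us today.</p></div>'
done > "$PAGE"

RAW=$(wc -c < "$PAGE")
GZ=$(gzip -c "$PAGE" | wc -c)
echo "raw=$RAW bytes gzipped=$GZ bytes"
```

In Nginx the equivalent switch is `gzip on;` (or the Brotli module's `brotli on;`) in the server config.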
    Backup & Recovery
    Use rsync, restic, or managed snapshots
    Offsite backups (AWS S3, Backblaze B2)
    Using Linux, even small SaaS agencies can run top-tier infrastructure to support dozens of roofing clients efficiently and cost‑effectively.
    5. Choosing the Right Roofing SEO Agency + Linux Setup
    When evaluating roofing SEO agencies, consider:
    1. Roofing Industry Experience
    They should understand roofing terminology (e.g., “skylight flashing”), local vs. national competition, and legal/insurance aspects of roofing.
    2. Custom Linux‑Powered Strategies
    Avoid “one-size-fits-all” solutions. Ask about their Linux stack: VPS vs shared? Containerized? Self‑hosted analytics? Pipeline automation?
    3. Transparency & Communication
    Look for real-time dashboards (Grafana, Matomo stats), logs you can access, and regular meetings.
    4. Case Studies & ROI
    They should cite real metrics: “Client X in Denver increased organic leads by 67% in 6 months using local‑SEO + WordPress + Matomo on Linux.”
    5. SEO + DevOps Integration
    Top-tier agencies blend SEO experts with Linux engineers. If asked, they should detail:
    CI/CD deployment
    Server security
    Code‑review processes
    Uptime and performance monitoring
    6. Real‑World Example: A Roofing SEO Campaign on Linux
    Imagine “TopShield Roofing” launches a 12‑month campaign:
    Phase 1: Audit & Setup (Month 1)
    Linux audit server: Ubuntu with Nginx, wired into Grafana.
    Install Matomo; import data from GA4.
    Scan the site for mobile and security health using Lighthouse.
    Phase 2: Keyword + Content (Months 2–4)
    Targeted long‑tails: “hail damage roof repair Minneapolis.”
    Blogs published via Hugo, hosted on Linux CDN.
    XML sitemap auto‑regeneration via cron.
    Phase 3: Local Domination (Months 3–6)
    GMB automation: weekly Linux cron posts.
    Citations across 50 local directories via scripts.
    Reviews fetched and summarized in Grafana.
    Phase 4: Link Building & Performance (Months 4–10)
    Backlinks earned from construction forums; Linux crawler tracked them.
    Site speed cut: Brotli enabled, server cache in Redis.
    Phase 5: Results & Scaling (Months 6–12)
    Organic traffic +82%; form submissions +125%.
    Grafana alerts triggered when load spiked—team optimized SQL queries.
    Sales growth funded expanding to additional cities; new Docker‑ized stacks replicated confidently.
    7. Why Linux + Roofing SEO = Strategic Advantage
    Cost Efficiency: No Windows licensing—just powerful open-source software.
    Performance: Linux serves millions of visitors with minimal CPU/RAM.
    Stability: Months‑long server uptime ensures SEO consistency.
    Security: Linux’s hardened tooling reduces hacking risk.
    Customization: Deep access to optimize everything. Want Brotli, Let’s Encrypt, advanced redirects? It’s yours.
    8. Getting Started: How to Integrate Linux with Your Roofing SEO
    Ask potential SEO partners:
    Do you host SEO tools on Linux?
    How do you manage deployments, monitoring, backups?
    Can I see a demo dashboard?
    If handling in-house:
    Start a lightweight Linux VPS (DigitalOcean, Linode).
    Install Matomo, Nginx, certbot.
    Build basic SEO stack: Hugo for blogs, cron jobs for sitemap.
    Add Prometheus + Grafana to visualize metrics.
    Measure, iterate, repeat:
    Track keyword ranking improvements month‑to‑month.
    Monitor server metrics: CPU, memory, response time.
    Adjust deployment based on results—Linux gives flexibility.
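The server-metrics step can start as simple as reading the kernel's own counters. A tiny one-shot sketch (Linux-only, since it reads /proc):

```shell
# One-shot health snapshot: load averages and memory, straight from /proc.
read load1 load5 load15 _ < /proc/loadavg
echo "load averages: 1m=$load1 5m=$load5 15m=$load15"

# MemTotal/MemAvailable are reported in kB; convert to MB for readability
awk '/^MemTotal:|^MemAvailable:/ {printf "%s %d MB\n", $1, $2/1024}' /proc/meminfo
```

Run it from cron and append to a log while you are small; once the stack grows, Prometheus' node_exporter exposes the same numbers (and far more) for Grafana.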
    Conclusion
    A roofing SEO agency that leverages Linux isn’t just providing marketing—it’s building Linux‑powered infrastructure, capable of rapid scaling, advanced customization, stringent monitoring, and cost-effective operations.
    By combining industry-specific SEO know‑how with open‑source performance and reliability, you gain a competitive edge: higher rankings, more local customers, improved lead flow, and a tech stack that grows with your roofing business.
    When you choose an agency or build your own campaign, ensure they can demonstrate Linux proficiency—from server setup to analytics, automated deployment to security monitoring. That’s when your roofing business truly gets elevated: soaring in search results, capturing local demand, and supported by rock‑solid technical foundations.
    The post Roofing SEO Agency + Linux: Elevate Your Roofing Business with Open‑Source Power appeared first on Unixmen.
  6. Color Everything in CSS

    by: Juan Diego Rodríguez
    Fri, 20 Jun 2025 14:04:12 +0000

    I have had the opportunity to edit a lot of the new color entries coming to the CSS-Tricks Almanac. We’ve already published several with more on the way, including a complete guide on color functions:
    color()
    hsl()
    lab()
    lch()
    oklab()
    oklch()
    rgb()
    And I must admit: I didn’t know a lot about color in CSS (I still used rgb(), which apparently isn’t what cool people do anymore), so it has been a fun learning experience. One of the things I noticed while trying to keep up with all this new information was how long the glossary of color goes, especially the “color” concepts. There are “color spaces,” “color models,” “color gamuts,” and basically a “color” something for everything.
    They are all somewhat related, and it can get confusing as you dig into using color in CSS, especially the new color functions that have been shipped lately, like contrast-color() and color-mix(). Hence, I wanted to make the glossary I wish I had when I first heard about each concept, one that anyone can check whenever they forget what a specific “color” thing is.
    As a disclaimer, I am not trying to explain color, or specifically, color reproduction, in this post; that would probably be impossible for a mortal like me. Instead, I want to give you a big enough picture for some technicalities behind color in CSS, such that you feel confident using functions like lab() or oklch() while also understanding what makes them special.
    What’s a color?
    Let’s slow down first. In order to understand everything in color, we first need to understand the color in everything.
    While it’s useful to think about an object being a certain color (watch out for the red car, or cut the white cable!), color isn’t a physical property of objects, or even a tangible thing. Yes, we can characterize light as the main cause of color1, but it isn’t until visible light enters our eyes and is interpreted by our brains that we perceive a color. As said by Elle Stone:
    Even if color isn’t a physical thing, we still want to replicate it as reliably as possible, especially in the digital era. If we take a photo of a beautiful bouquet of lilies (like the one on my desk) and then display it on a screen, we expect to see the same colors in both the image and reality. However, “reality” here is a misleading term since, once again, the reality of color depends on the viewer. To solve this, we need to understand how light wavelengths (something measurable and replicable) create different color responses in viewers (something not so measurable).
    Luckily, this task was already carried out 95 years ago by the International Commission on Illumination (CIE, by its French name). I wish I could get into the details of the experiment, but we haven’t gotten into our first color thingie yet. What’s important is that from these measurements, the CIE was able to map all the colors visible to the average human (in the experiment) to light wavelengths and describe them with only three values.
    Initially, those three primary values corresponded to the red, green, and blue wavelengths used in the experiment, and they made up the CIERGB Color Space, but researchers noticed that some colors required a negative wavelength2 to represent a visible color. To avoid that, a series of transformations were performed on the original CIERGB and the resulting color space was called CIEXYZ.
    This new color space also has three values, X and Z represent the chromaticity of a color, while Y represents its luminance. Since it has three axes, it makes a 3D shape, but if we slice it such that its luminance is the same, we get all the visible colors for a given luminance in a figure you have probably seen before.
    This is called the xy chromaticity diagram and holds all the colors visible by the average human eye (based on the average viewer in the CIE 1931 experiment). Colors inside the shape are considered real, while those outside are deemed imaginary.
    Color Spaces
    The purpose of the last explanation was to reach the CIEXYZ Color Space concept, but what exactly is a “color space”? And why is the CIEXYZ Color Space so important?
    The CIEXYZ Color Space is a mapping from all the colors visible by the average human eye into a 3D coordinate system, so we only need three values to define a color. Then, a color space can be thought of as a general mapping of color, with no need to include every visible color, and it is usually defined through three values as well.
    RGB Color Spaces
    The most well-known color spaces are the RGB color spaces (note the plural). As you may guess from the name, here we only need the amount of red, green, and blue to describe a color. And to describe an RGB color space, we only need to define its “reddest”, “greenest”, and “bluest” values3. If we use coordinates going from 0 to 1 to define a color in the RGB color space, then:
    (1, 0, 0) means the reddest color.
    (0, 1, 0) means the greenest color.
    (0, 0, 1) means the bluest color.
    However, “reddest”, “bluest”, and “greenest” are only arbitrary descriptions of color. What makes a color the “bluest” is up to each person. For example, which of the following colors do you think is the bluest?
    As you can guess, something like “bluest” is an appalling description. Luckily, we just have to look back at the CIEXYZ color space — it’s pretty useful! Here, we can define what we consider the reddest, greenest, and bluest colors just as coordinates inside the xy chromaticity diagram. That’s all it takes to create an RGB color space, and why there are so many!
    Credit: Elle Stone
    In CSS, the most used color space is the standard RGB (sRGB) color space, which, as you can see in the last image, leaves a lot of colors out. However, in CSS, we can use modern RGB color spaces with a lot more colors through the color() function, such as display-p3, prophoto-rgb, and rec2020.
    Credit: Chrome Developer Team
    Notice how the ProPhoto RGB color space goes out of the visible color. This is okay. Colors outside are clamped; they aren’t new or invisible colors.
    In CSS, besides sRGB, we have two more color spaces: the CIELAB color space and the Oklab color space. Luckily, once we understood what the CIEXYZ color space is, then these two should be simpler to understand. Let’s dig into that next.
    CIELAB and Oklab Color Spaces
    As we saw before, the sRGB color space lacks many of the colors visible by the average human eye. And as modern screens got better at displaying more colors, CSS needed to adopt newer color spaces to fully take advantage of those newer displays. That wasn’t the only problem with sRGB — it also lacks perceptual uniformity, meaning that changes in the color’s chromaticity also change its perceived lightness. Check, for example, this demo by Adam Argyle:
    CodePen Embed Fallback
    Created in 1976 by the CIE, CIELAB, derived from CIEXYZ, also encompasses all the colors visible by the human eye. It works with three coordinates: L* for perceptual lightness, a* for the amount of red-green, and b* for the amount of yellow-blue in the color.
    Credit: Linshang Technology It has a way better perceptual uniformity than sRGB, but it still isn’t completely uniform, especially in gradients involving blue. For example, in the following white-to-blue gradient, CIELAB shifts towards purple.
    Image Credits to Björn Ottosson
    As a final improvement, Björn Ottosson came up with the Oklab color space, which also holds all colors visible by the human eye while keeping a better perceptual uniformity. Oklab also uses the three L*a*b* coordinates. Thanks to all these improvements, it is the color space I try to use the most lately.
    Color Models
    When I was learning about these concepts, my biggest challenge after understanding color spaces was not getting them confused with color models and color gamuts. These two concepts, while complementary and closely related to color spaces, aren’t the same, so they are a common pitfall when learning about color.
    A color model refers to the mathematical description of color through tuples of numbers, usually involving three numbers, but these values don’t give us an exact color until we pair them with a color space. For example, you know that in the RGB color model, we define color through three values: red, green, and blue. However, it isn’t until we match it to an RGB color space (e.g., sRGB or display-p3) that we have a color. In this sense, a color space can have several color models, like sRGB, which uses RGB, HSL, and HWB. At the same time, a color model can be used in several color spaces.
    I found plenty of articles and tutorials where “color spaces” and “color models” were used interchangeably, and some places where they had a different definition of color spaces and models than the one provided here. For example, Chrome’s High definition CSS color guide defines CSS’s RGB and HSL as different color spaces, while MDN’s Color Space entry does define RGB and HSL as part of the sRGB color space.
    Personally, in CSS, I find it easier to understand the idea of RGB, HSL and HWB as different models to access the sRGB color space.
    Color Gamuts
    A color gamut is more straightforward to explain. You may have noticed how we have talked about a color space having more colors than another, but it would be more correct to say it has a “wider” gamut, since a color gamut is the range of colors available in a color space. However, a color gamut isn’t only restricted by color space boundaries, but also by physical limitations. For example, an older screen may decrease the color gamut since it isn’t able to display each color available in a given color space. In this case where a color can’t be represented (due to physical limitation or being outside the color space itself), it’s said to be “out of gamut”.
    Color Functions
    In CSS, the only color space available used to be sRGB. Nowadays, we can work with a lot of modern color spaces through their respective color functions. As a quick reference, each of the color spaces in CSS uses the following functions:
    sRGB: We can work in sRGB using the ol’ hexadecimal notation, named colors, and the rgb(), rgba(), hsl(), hsla() and hwb() functions.
    CIELAB: Here we have lab() for Cartesian coordinates and lch() for polar coordinates.
    Oklab: Similar to CIELAB, we have oklab() for Cartesian coordinates and oklch() for polar coordinates.
    Outside these three color spaces, we can use many more through the color() and color-mix() functions. Specifically, we can use the RGB color spaces rgb-linear, display-p3, a98-rgb, prophoto-rgb, and rec2020, and the XYZ color space: xyz, xyz-d50, or xyz-d65.
    TL;DR
    Color spaces are a mapping between available colors and a coordinate system. In CSS, we have three main color spaces: sRGB, CIELAB, and Oklab, but many more are accessible through the color() function.
    Color models define color with tuples of numbers, but they don’t give us information about the actual color until we pair them with a color space. For example, the RGB model doesn’t mean anything until we assign it an RGB color space.
    Most of the time, we want to talk about how many colors a color space holds, so we use the term color gamut for the task. However, a color gamut is also tied to the physical limitations of a camera/display. A color may be out-of-gamut, meaning it can’t be represented in a given color space.
    In CSS, we can access all these color spaces through color functions, of which there are many.
    The CIEXYZ color space is extremely useful to define other color spaces, describe their gamuts, and convert between them.
    References
    Completely Painless Programmer’s Guide to XYZ, RGB, ICC, xyY, and TRCs (Elle Stone)
    Color Spaces (Bartosz Ciechanowski)
    The CIE XYZ and xyY Color Spaces (Douglas A. Kerr)
    From personal project to industry standard (Björn Ottosson)
    High definition CSS color guide (Adam Argyle)
    Color Spaces: Explained from the Ground Up (Video Tech Explained)
    Color Space (MDN)
    What Makes a Color Space Well Behaved? (Elle Stone)
    Footnotes
    1 Light is the main cause of color, but color can be created by things other than light. For example, rubbing your closed eyes mechanically stimulates your retina, creating color in what’s called phosphene. ⤴️
    2 If negative light also makes you scratch your head, and for more info on how the CIEXYZ color space was created, I highly recommend Douglas A. Kerr’s paper The CIE XYZ and xyY Color Spaces. ⤴️
    3 We also need to define the darkest dark color (“black”) and the lightest light color (“white”). However, for well-behaved color spaces, these two can be abstracted from the reddest, bluest, and greenest colors. ⤴️
    Color Everything in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  7. by: Abhishek Prakash
    Fri, 20 Jun 2025 18:38:14 +0530

    Here’s your curated dose of Linux news, tutorials, and updates to keep you informed and productive in your open-source journey.
    Find with exec
    Named and unnamed pipes in Linux
    Container lifecycle commands
    ClickHouse and Dockman
    And a regular dose of important news, tips, and memes
     
  8. CSS Color Functions

    by: Sunkanmi Fafowora
    Thu, 19 Jun 2025 15:01:18 +0000

    If you asked me a few months ago, “What does it take for a website to stand out?” I may have said fancy animations, creative layouts, cool interactions, and maybe just the general aesthetics, without pointing out something in particular. If you ask me now, after working on color for the better part of the year, I can confidently say it’s all color. Among all the aspects that make a design, a good color system will make it as beautiful as possible.
    However, color in CSS can be a bit hard to fully understand since there are many ways to set the same color, and sometimes they even look the same, but underneath are completely different technologies. That’s why, in this guide, we will walk through all the ways you can set up colors in CSS and all the color-related properties out there!
    Colors are in everything
    They are in your phone, in what your eye sees, and on any screen you look at; they essentially capture everything. Design-wise, I see the amazing use of colors on sites listed over at awwwards.com, and I’m always in awe.
    Not all color is the same. In fact, similar colors can live in different worlds, known as color spaces. Take for example, sRGB, the color space used on the web for the better part of its existence and hence the most known. While it’s the most used, there are many colors that are simply missing in sRGB that new color spaces like CIELAB and Oklab bring, and they cover a wider range of colors sRGB could only dream of, but don’t let me get ahead of myself.
    What’s a color space?
    A color space is the way we arrange and represent colors that exist within a device, like printers and monitors. We have different types of color spaces that exist in media (Rec2020, Adobe RGB, etc), but not all of them are covered in CSS. Luckily, the ones we have are sufficient to produce all the awesome and beautiful colors we need. In this guide, we will be diving into the three main color spaces available in CSS: sRGB, CIELAB, and OkLab.
    The sRGB Color Space
    The sRGB is one of the first color spaces we learn. Inside, there are three color functions, which are essentially notations to define a color: rgb(), hsl(), and hwb().
    sRGB has been a standard color space for the web since 1996. However, it’s closer to how old computers represented color, rather than how humans understand it, so it had some problems like not being able to capture the full gamut of modern screens. Still, many modern applications and websites use sRGB, so even though it is the “old way” of doing things, it is still widely accepted and used today.
    The rgb() function
    rgb() uses three values, r, g, and b, which specify the redness, greenness, and blueness of the color you want.
    All three values are non-negative, and they go from 0 to 255.
    .element { color: rgb(245 123 151); } It also has an optional value (the alpha value) preceded by a forward slash. It determines the level of opacity for the color, which goes from 0 (or 0%) for a completely transparent color, to 1 (or 100%) for a fully opaque one.
    .element { color: rgb(245 123 151 / 20%); } There are two ways you can write inside rgb(). Either using the legacy syntax that separates the three values with commas or the modern syntax that separates each with spaces.
    You want to combine the two syntax formats, yes? That’s a no-no. It won’t even work.
    /* This would not work */ .element { color: rgb(225, 245, 200 / 0.5); } /* Neither will this */ .element { color: rgb(225 245 200, 0.5); } /* Or this */ .element { color: rgb(225, 245 200 / 0.5); } But, following one consistent format will do the trick, so do that instead. Either you’re so used to the old syntax and it’s hard for you to move on, continue to use the legacy syntax, or you’re one who’s willing to try and stick to something new, use the modern syntax.
    /* Valid (Modern syntax) */ .element { color: rgb(245 245 255 / 0.5); } /* Valid (Legacy syntax) */ .element { color: rgb(245, 245, 255, 0.5); }
    CodePen Embed Fallback
    The rgba() function
    rgba() is essentially the same as rgb() with an extra alpha value used for transparency.
    In terms of syntax, the rgba() function can be written in two ways:
    Comma-separated and without percentages
    Space-separated, with the alpha value written after a forward slash (/)
    .element { color: rgba(100, 50, 0, 0.5); } .element { color: rgba(100 50 0 / 0.5); }
    So, what’s the difference between rgba() and rgb()?
    Breaking news! There is no difference. Initially, only rgba() could set the alpha value for opacity, but in recent years, rgb() now supports transparency using the forward slash (/) before the alpha value.
    rgb() also supports legacy syntax (commas) and modern syntax (spaces), so there’s practically no reason to use rgba() anymore; it’s even noted as a CSS mistake by folks at W3C.
    In a nutshell, rgb() and rgba() are the same, so just use rgb().
    /* This works */ .element-1 { color: rgba(250 30 45 / 0.8); } /* And this works too, so why not just use this? */ .element-2 { color: rgb(250 30 45 / 0.8); }
    The hexadecimal notation
    The hexadecimal CSS color code is a 3, 4, 6, or 8 (being the maximum) digit code for colors in sRGB. It’s basically a shorter way of writing rgb(). The hexadecimal color (or hex color) begins with a hash token (#) and then a hexadecimal number, which means it goes from 0 to 9 and then skips to letters a to f (a being 10, b being 11, and so on, up to f for 15).
    In the hexadecimal color system, the 6-digit style is done in pairs. Each pair represents red (RR), green (GG), and blue (BB).
    Each value in the pair can go from 00 to FF, which is equivalent to 255 in rgb().
    3-digit hexadecimal. The 3-digit hexadecimal system is a shorter way of writing the 6-digit hexadecimal system, where each value represents the color’s redness, greenness, and blueness, respectively. .element { color: #abc; } In reality, each value in the 3-digit system is duplicated and then translated to a visible color:
    .element { color: #abc; /* Equals #AABBCC */ } BUT, this severely limits the colors you can set. What if I want to target the color 213 in the red space, or how would I get a blue of value 103? It’s impossible. That’s why you can only get a total number of 4,096 colors here as opposed to the 17 million in the 6-digit notation. Still, if you want a fast way of getting a certain color in hexadecimal without having to worry about the millions of other colors, use the 3-digit notation.
    4-digit hexadecimal. This is similar to the 3-digit hexadecimal notation except it includes the optional alpha value for opacity. It’s a shorter way of writing the 8-digit hexadecimal, which also means that all values here are repeated once during color translation. .element { color: #abcd; } For the alpha value, 0 represents 00 (a fully transparent color) and F represents FF (a fully opaque color).
    .element { color: #abcd; /* Same as #AABBCCDD */ } 6-digit hexadecimal. The 6-digit hexadecimal system just specifies a hexadecimal color’s redness, greenness, and blueness without its alpha value for color opacity. .element { color: #abcdef; } 8-digit hexadecimal. This 8-digit hexadecimal system specifies a hexadecimal color’s redness, greenness, blueness, and its alpha value for color opacity. Basically, it is complete for color control in sRGB. .element { color: #faded101; }
    The hsl() function
    Both hsl() and rgb() live in the sRGB space, but they access colors differently. And while the consensus is that hsl() is far more intuitive than rgb(), it all boils down to your preference.
    hsl() takes three values: h, s, and l, which set its hue, saturation, and lightness, respectively.
    The hue sets the base color and represents a direction in the color wheel, so it’s written in angles from 0deg to 360deg. The saturation sets how much of the base color is present and goes from 0 (or 0%) to 100 (or 100%). The lightness represents how close to white or black the color gets. One cool thing: the hue angle goes from (0deg–360deg), but we might as well use negative angles or angles above 360deg, and they will circle back to the right hue. Especially useful for infinite color animation. Pretty neat, right?
    Plus, you can easily get a complementary color from the opposite angle (i.e., adding 180deg to the current hue) on the color wheel.
    /* Current color */ .element { color: hsl(120deg 40 60 / 0.8); } /* Complementary color */ .element { color: hsl(300deg 40 60 / 0.8); } You want to combine the two syntax formats like in rgb(), yes? That’s also a no-no. It won’t work.
    /* This would not work */ .element { color: hsl(130deg, 50, 20 / 0.5); } /* Neither will this */ .element { color: hsl(130deg 50 20, 0.5); } /* Or this */ .element { color: hsl(130deg 50, 20 / 0.5); } Instead, stick to one of the syntaxes, like in rgb():
    /* Valid (Modern syntax) */ .element { color: hsl(130deg 50 20 / 0.5); } /* Valid (Legacy syntax) */ .element { color: hsl(130deg, 50%, 20%, 0.5); }
    CodePen Embed Fallback
    The hsla() function
    hsla() is essentially the same as hsl(). It uses three values to represent its color’s hue (h), saturation (s), and lightness (l), and yes (again), an alpha value for transparency (a). We can write hsla() in two different ways:
    Comma separated
    Space separated, with the alpha value written after a forward slash (/)
    .element { color: hsla(120deg, 100%, 50%, 0.5); } .element { color: hsla(120deg 100% 50% / 0.5); }
    So, what’s the difference between hsla() and hsl()?
    Breaking news (again)! They’re the same. hsl() and hsla() both:
    Support legacy and modern syntax Have the power to increase or reduce color opacity So, why does hsla() still exist? Well, apart from being one of the mistakes of CSS, many applications on the web still use hsla() since there wasn’t a way to set opacity with hsl() when it was first conceived.
    My advice: just use hsl(). It’s the same as hsla() but less to write.
    /* This works */ .element-1 { color: hsla(120deg 80 90 / 0.8); } /* And this works too, so why not just use this? */ .element-2 { color: hsl(120deg 80 90 / 0.8); }
    The hwb() function
    hwb() also uses hue for its first value, but instead takes two values for whiteness and blackness to determine how your colors will come out (and yes, it also does have an optional transparency value, a, just like rgb() and hsl()).
    .element { color: hwb(80deg 20 50 / 0.5); } The first value h is the same as the hue angle in hsl(), which represents the color position in the color wheel from 0 (or 0deg) to 360 (or 360deg). The second value, w, represents the whiteness in the color. It ranges from 0 (or 0%) (no white) to 100 (or 100%) (full white if b is 0). The third value, b, represents the blackness in the color. It ranges from 0 (or 0%) (no black) to 100 (or 100%) (fully black if w is 0). The final (optional) value is the alpha value, a, for the color’s opacity, preceded by a forward slash (/). Its range is from 0.0 (or 0%) to 1.0 (or 100%). Although this color function is barely used, it’s completely valid, so using it is a matter of personal preference.
    CodePen Embed Fallback Named colors
    CSS named colors are hardcoded keywords representing predefined colors in sRGB. You are probably used to the basics: white, blue, black, red, but there are a lot more, totaling 147 in all, that are defined in the CSS Color Module Level 4 specification.
    Named colors are often discouraged because their names do not always match what color you would expect.
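    For instance, even closely related names can defy expectations (the selectors here are just for illustration):

    ```css
    /* Named colors are keywords for fixed sRGB values */
    .element { color: rebeccapurple; } /* equivalent to #663399 */

    /* Counterintuitively, darkgray (#a9a9a9) is lighter than gray (#808080) */
    .swatch-gray { background-color: gray; }
    .swatch-darkgray { background-color: darkgray; }
    ```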
    The CIELAB Color Space
    The CIELAB color space is a relatively new color space on the web that represents a wider color gamut, closer to what the human eye can see, so it holds a lot more color than the sRGB space.
    The lab() function
    For this color function, we have three axes in a space-separated list to determine how the color is set.
    .element { color: lab(50 20 20 / 0.9); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0 (or 0%) (black) to 100 (or 100%) (white). The second value a represents the degree of greenness to redness of the color. Its range is from -125 (or -100%) (green) to 125 (or 100%) (red). The third value b represents the degree of blueness to yellowness of the color. Its range is also from -125 (or -100%) (blue) to 125 (or 100%) (yellow). The fourth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%). This is useful when you’re trying to obtain new colors and provide support for screens that do support them. Actually, most screens and all major browsers now support lab(), so you should be good.
    CodePen Embed Fallback The lch() function
    The CSS lch() color function is said to be better and more intuitive than lab().
    .element { color: lch(10 30 300deg); } They both use the same color space, but instead of having l, a, and b, lch uses lightness, chroma, and hue.
    The first value l represents the degree of whiteness to blackness of the color. Its range is 0 (or 0%) (black) to 100 (or 100%) (white). The second value c represents the color’s chroma (which is like saturation). Its range is from 0 (or 0%) to 150 (or 100%). The third value h represents the color hue. Its range is from 0 (or 0deg) to 360 (or 360deg). The fourth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%). CodePen Embed Fallback The OkLab Color Space
    Björn Ottosson created this color space as an “OK” and even better version of the lab color space. It was created to solve limitations of the CIELAB color space, such as image-processing artifacts in lab() (for example, when making an image grayscale) and imperfect perceptual uniformity. The two color functions in CSS that correspond to this color space are oklab() and oklch().
    Perceptual uniformity occurs when there’s a smooth change in the direction of a gradient color from one point to another. If you notice stark contrasts like the example below for rgb() when transitioning from one hue to another, that is referred to as a non-uniform perceptual colormap.
    CodePen Embed Fallback Notice how the change from one color to another is the same in oklab() without any stark contrasts as opposed to rgb()? Yeah, OKLab color space solves the stark contrasts present and gives you access to many more colors not present in sRGB.
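    You can see this for yourself with the gradient color interpolation syntax (in <colorspace>); a minimal sketch, with illustrative selector names:

    ```css
    /* Interpolating in sRGB: the midtones can drift through muddy grays */
    .gradient-srgb { background: linear-gradient(in srgb, blue, yellow); }

    /* Interpolating in OKLab: the transition stays perceptually smooth */
    .gradient-oklab { background: linear-gradient(in oklab, blue, yellow); }
    ```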
    OKLab actually provides better saturation of colors while still maintaining the hue and lightness present in colors in CIELAB (and even a smoother transition between colors!).
    The oklab() function
    The oklab() color function, just like lab(), generates colors according to their lightness, red/green axis, blue/yellow axis, and an alpha value for color opacity. Also, the value ranges for oklab() are different from those of lab(), so please watch out for that.
    .element { color: oklab(30% 20% 10% / 0.9); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0.0 (or 0%) (black) to 1.0 (or 100%) (white). The second value a represents the degree of greenness to redness of the color. Its range is from -0.4 (or -100%) (green) to 0.4 (or 100%) (red). The third value b represents the degree of blueness to yellowness of the color. Its range is also from -0.4 (or -100%) (blue) to 0.4 (or 100%) (yellow). The fourth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%). Again, this solves one of the issues in lab(), which is perceptual uniformity, so if you’re looking for a better alternative to lab(), use oklab().
    CodePen Embed Fallback The oklch() function
    The oklch() color function, just like lch(), generates colors according to their lightness, chroma, hue, and an alpha value for color opacity. The main difference is that it solves the perceptual-uniformity issues present in lab() and lch().
    .element { color: oklch(40% 20% 100deg / 0.7); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0.0 (or 0%) (black) to 1.0 (or 100%) (white). The second value c represents the color’s chroma. Its range is from 0 (or 0%) to 0.4 (or 100%) (it theoretically doesn’t exceed 0.5). The third value h represents the color hue. Its range is from 0 (or 0deg) to 360 (or 360deg). The fourth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%). CodePen Embed Fallback The color() function
    The color() function allows access to colors in nine different color spaces, as opposed to the previous color functions mentioned, which only allow access to one.
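    As a quick sketch (the values are illustrative), the same coordinates mean different colors in different spaces:

    ```css
    /* The fully saturated red of the wider display-p3 gamut */
    .element-1 { color: color(display-p3 1 0 0); }

    /* The same coordinates in srgb give a narrower, less vivid red,
       here with 50% opacity */
    .element-2 { color: color(srgb 1 0 0 / 0.5); }
    ```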
    To use this function, you simply need to be aware of these five values:
    The first value specifies the color space you want to access colors from. It can be srgb, srgb-linear, display-p3, a98-rgb, prophoto-rgb, rec2020, xyz, xyz-d50, or xyz-d65. The next three values (c1, c2, and c3) specify the coordinates in the color space for the color, ranging from 0.0 – 1.0. The fifth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%). CodePen Embed Fallback The color-mix() function
    The color-mix() function mixes two colors of any type in a given color space. Basically, you can create an endless number of colors with this method and explore more options than you normally would with any other color function. A pretty powerful CSS function, I would say.
    .element { color: color-mix(in oklab, hsl(40 20 60) 80%, red 20%); } You’re basically mixing two colors of any type in a color space. Do take note: the accepted color spaces here are different from the color spaces accepted in the color() function.
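    The percentages can also be omitted; an omitted value defaults to whatever remains after the other color’s share (or 50% each if both are left out):

    ```css
    /* Both percentages omitted: an even 50/50 mix of red and blue */
    .element-1 { color: color-mix(in oklab, red, blue); }

    /* 25% red; the omitted percentage defaults to the remainder (75% blue) */
    .element-2 { color: color-mix(in oklab, red 25%, blue); }
    ```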
    To use this function, you must be aware of these three values:
    The first value in colorspace specifies the interpolation method used to mix the colors, and it can be any of these 15 color spaces: srgb, srgb-linear, display-p3, a98-rgb, prophoto-rgb, rec2020, lab, oklab, xyz, xyz-d50, xyz-d65, hsl, hwb, lch, and oklch. The second and third values each specify an accepted color value and a percentage from 0% to 100%. CodePen Embed Fallback The Relative Color Syntax
    The relative color syntax, simply put, is a way to access the color values of another color function or value, then translate them into the values of the current color function. It goes “from <color>” to another. All CSS color functions support it. Here’s how it works. We have:
    .element { color: color-function(from origin-color c1 c2 c3 / alpha); } The first value from is a mandatory keyword you must set to extract the color values from origin-color. The second value, origin-color, represents a color function or value (or even another relative color) that you want to get color from. The next three values, c1, c2, and c3, represent the current color function’s color channels, and they correspond with the color function’s valid color values. The sixth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%), and it is either taken from the origin-color or set manually. Let’s take an example, say, converting a color from rgb() to lab():
    .element { color: lab(from rgb(255 210 01 / 0.5) l a b / a); } All the values above will be translated from the corresponding channels of the rgb() color. Now, let’s take a look at another example where we convert a color from rgb() to oklch():
    .element { color: oklch(from rgb(255 210 01 / 0.5) 50% 20% h / a); } Although the l and c values were changed, the h and a values are taken from the original color, which in this case is a light yellowish color in rgb().
    You can even be wacky and use math functions:
    .element { color: oklch(from rgb(255 210 01 / 0.5) calc(50% + var(--a)) calc(20% + var(--b)) h / a); } The relative color syntax is, however, different from the color() function in that you have to include the color space name and then fully write out the channels, like this:
    .element { color: color(from origin-color colorspace c1 c2 c3 / alpha); } Remember, the color-mix() function is not a part of this. You can have relative color functions inside the color functions you want to mix, yes, but the relative color syntax is not available in color-mix() directly.
    Color gradients
    CSS is totally capable of transitioning from one color to another. See the “CSS Gradients Guide” for a full run-down, including of the different types of gradients with examples.
    Properties that support color values
    There are a lot of properties that support the use of color. Just so you know, this list does not contain deprecated properties.
    accent-color This CSS property sets the accent color for UI controls like checkboxes and radio buttons, and any other form element
    progress { accent-color: lightgreen; } background-color Applies solid colors as background on an element.
    .element { background-color: #ff7a18; } border-color Shorthand for setting the color of all four borders.
    /* Sets all border colors */ .element { border-color: lch(50 50 20); } /* Sets top, right, bottom, left border colors */ .element { border-color: black green red blue; } box-shadow Adds shadows to an element for creating the illusion of depth. The property accepts a number of arguments, one of which sets the shadow color.
    .element { box-shadow: 0 3px 10px rgb(0 0 0 / 0.2); } caret-color Specifies the color of the text input cursor (caret).
    .element { caret-color: lch(30 40 40); } color Sets the foreground color of text and text decorations.
    .element { color: lch(80 10 20); } column-rule-color Sets the color of the line between columns in a multi-column layout. This property can’t act alone, so you need to set the columns and column-rule-style properties first before using it.
    .element { columns: 3; column-rule-style: solid; column-rule-color: lch(20 40 40); } fill Sets the color of an SVG shape.
    .element { fill: lch(40 20 10); } flood-color Specifies the flood color to use for <feFlood> and <feDropShadow> elements inside the <filter> element for <svg>. This should not be confused with the flood-color SVG attribute, as this is a CSS property and that’s an SVG attribute (even though they basically do the same thing). If this property is specified, it overrides the flood-color attribute.
    .element { flood-color: lch(20 40 40); } lighting-color Specifies the color of the lighting source to use for <feDiffuseLighting> and <feSpecularLighting> elements inside the <filter> element for <svg>.
    .element { lighting-color: lch(40 10 20); } outline-color Sets the color of an element’s outline.
    .element { outline-color: lch(20 40 40); } stop-color Specifies the color of gradient stops for the <stop> tags for <svg>.
    .element { stop-color: lch(20 40 40); } stroke Defines the color of the outline of an <svg> shape.
    .element { stroke: lch(20 40 40); } text-decoration-color Sets the color of text decoration lines like underlines.
    .element { text-decoration-color: lch(20 40 40); } text-emphasis-color Specifies the color of emphasis marks on text.
    .element { text-emphasis-color: lch(70 20 40); } text-shadow Applies shadow effects to text, including color.
    .element { text-shadow: 1px 1px 1px lch(50 10 30); } Almanac references
    Color functions Almanac on Feb 22, 2025 rgb()
    .element { color: rgb(0 0 0 / 0.5); } color Sunkanmi Fafowora Almanac on Feb 22, 2025 hsl()
    .element { color: hsl(90deg, 50%, 50%); } color Sunkanmi Fafowora Almanac on Jun 12, 2025 hwb()
    .element { color: hwb(136 40% 15%); } color Gabriel Shoyombo Almanac on Mar 4, 2025 lab()
    .element { color: lab(50% 50% 50% / 0.5); } color Sunkanmi Fafowora Almanac on Mar 12, 2025 lch()
    .element { color: lch(10% 0.215 15deg); } color Sunkanmi Fafowora Almanac on Apr 29, 2025 oklab()
    .element { color: oklab(25.77% 25.77% 54.88%); } color Sunkanmi Fafowora Almanac on May 10, 2025 oklch()
    .element { color: oklch(70% 0.15 240); } color Gabriel Shoyombo Almanac on May 2, 2025 color()
    .element { color: color(rec2020 0.5 0.15 0.115 / 0.5); } color Sunkanmi Fafowora Color properties Almanac on Apr 19, 2025 accent-color
    .element { accent-color: #f8a100; } color Geoff Graham Almanac on Jan 13, 2025 background-color
    .element { background-color: #ff7a18; } color Chris Coyier Almanac on Jan 27, 2021 caret-color
    .element { caret-color: red; } color Chris Coyier Almanac on Jul 11, 2022 color
    .element { color: #f8a100; } color Sara Cope Almanac on Jul 11, 2022 column-rule-color
    .element { column-rule-color: #f8a100; } color Geoff Graham Almanac on Jan 27, 2025 fill
    .element { fill: red; } color Geoff Graham Almanac on Jul 11, 2022 outline-color
    .element { outline-color: #f8a100; } color Mojtaba Seyedi Almanac on Dec 15, 2024 stroke
    .module { stroke: black; } color Geoff Graham Almanac on Aug 2, 2021 text-decoration-color
    .element { text-decoration-color: orange; } color Marie Mosley Almanac on Jan 27, 2023 text-emphasis
    .element { text-emphasis: circle red; } color Joel Olawanle Almanac on Jan 27, 2023 text-shadow
    p { text-shadow: 1px 1px 1px #000; } color Sara Cope Related articles & tutorials
    Article on Aug 12, 2024 Working With Colors Guide
    color Sarah Drasner Article on Aug 23, 2022 The Expanding Gamut of Color on the Web
    color Ollie Williams Article on Oct 13, 2015 The Tragicomic History of CSS Color Names
    color Geoff Graham Article on Feb 11, 2022 A Whistle-Stop Tour of 4 New CSS Color Features
    color Chris Coyier Article on Feb 7, 2022 Using Different Color Spaces for Non-Boring Gradients
    color Chris Coyier Article on Oct 29, 2024 Come to the light-dark() Side
    color Sara Joy Article on Sep 24, 2024 Color Mixing With Animation Composition
    color Geoff Graham Article on Sep 13, 2016 8-Digit Hex Codes?
    color Chris Coyier Article on Feb 24, 2021 A DRY Approach to Color Themes in CSS
    color Christopher Kirk-Nielsen Article on Apr 6, 2017 Accessibility Basics: Testing Your Page For Color Blindness
    color Chris Coyier Article on Mar 9, 2020 Adventures in CSS Semi-Transparency Land
    color Ana Tudor Article on Mar 4, 2017 Change Color of All Four Borders Even With `border-collapse: collapse;`
    color Daniel Jauch Article on Jan 2, 2020 Color contrast accessibility tools
    color Robin Rendle Article on Aug 14, 2019 Contextual Utility Classes for Color with Custom Properties
    color Christopher Kirk-Nielsen Article on Jun 26, 2021 Creating Color Themes With Custom Properties, HSL, and a Little calc()
    color Dieter Raber Article on May 4, 2021 Creating Colorful, Smart Shadows
    color Chris Coyier Article on Feb 21, 2018 CSS Basics: Using Fallback Colors
    color Chris Coyier Article on Oct 21, 2019 Designing accessible color systems
    color Robin Rendle Article on Jun 22, 2021 Mixing Colors in Pure CSS
    color Carter Li Article on Jul 26, 2016 Overriding The Default Text Selection Color With CSS
    color Chris Coyier Article on Oct 21, 2015 Reverse Text Color Based on Background Color Automatically in CSS
    color Robin Rendle Article on Dec 27, 2019 So Many Color Links
    color Chris Coyier Article on Aug 18, 2018 Switch font color for different backgrounds with CSS
    color Facundo Corradini Article on Jan 20, 2020 The Best Color Functions in CSS?
    color Chris Coyier Article on Dec 3, 2021 What do you name color variables?
    color Chris Coyier Article on May 8, 2025 Why is Nobody Using the hwb() Color Function?
    color Sunkanmi Fafowora Table of contents
    Colors are in everything What’s a color space? The sRGB Color Space The CIELAB Color Space The OkLab Color Space The color() function The color-mix() function The Relative Color Syntax Color gradients Properties that support color values Almanac references Related articles and tutorials CSS Color Functions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  9. by: Sacha Greif
    Tue, 17 Jun 2025 13:13:15 +0000

    How do you keep up with new CSS features?
    Let’s say for example that, hypothetically speaking, you run a popular web development survey focused on CSS, and need to figure out what to include in this year’s edition. (In a total coincidence the aforementioned State of CSS survey for this year is actually open right now — go take it to see what’s new in CSS!)
    You might think you can just type “new CSS features 2025” in Google and be done with it. But while this does give us a few promising leads, it also unearths a lot of cookie-cutter content that proclaims CSS Grid as the “next big thing”, despite the fact it’s been well-supported for over eight years now. 
    We need a better approach. 
    I’ll focus on CSS in this article, but all the resources linked here cover all web platform features, including JavaScript and HTML.
    Web.dev
    A good general starting point is Google’s web.dev blog, and more specifically Rachel Andrew‘s monthly web platform recaps. Here’s a small sample of those:
    New to the web platform in January New to the web platform in February New to the web platform in March New to the web platform in April CSS-Tricks (and others)
    I’d be remiss to not mention that CSS-Tricks is also a great source for up-to-date CSS knowledge, including an ever-growing almanac of CSS features. But you probably already know that since you’re reading this.
    And let’s not discount other fine publications that cover CSS. Here are just a few:
    Smashing Magazine Frontend Masters Blog Piccalilli CSS-Tip Web Platform Features Explorer
    If you need something a bit more structured to help you figure out what’s new, the Web Platform Features Explorer is a great way to look up features based on their Baseline status.
    Web Platform Status
    A similar tool is the Web Platform Status dashboard. This one features more fine-grained filtering tools, letting you narrow down features by Baseline year or even show features mentioned as Top CSS Interop in the latest State of CSS survey!
    Another very cool feature is the ability to view a feature’s adoption rate, as measured in terms of percentage of Chrome page views where that feature was used, such as here for the popover HTML attribute:
    An important caveat: since sites like Facebook and Google account for a very large percentage of all measured page views, this metric can become skewed once one of these platforms adopts a new feature.
    The Web Platform Status’s stats section also features the “chart of shame” (according to Lea Verou), which highlights how certain browsers might be slightly lagging behind their peers in terms of new feature adoption.
    Chrome Platform Status
    That same adoption data can also be found on the Chrome Platform Status dashboard, which gives you even more details, such as usage among top sites, as well as sample URLs of sites that are using a feature. 
    Polypane Experimental Chromium Features Dashboard
    Polypane is a great developer-focused browser that provides a ton of useful tools like contrast checkers, multi-viewport views, and more. 
    They also provide an experimental Chromium features explorer that breaks new features down by Chromium version, for those of you who want to be at the absolute top of the cutting edge. 
    Kevin Powell’s YouTube Channel
    As YouTube’s de facto CSS expert, Kevin Powell often puts up great video recaps of new features. You should definitely be following him, but statistically speaking you probably already are! It’s also worth mentioning that Kevin runs a site that publishes weekly HTML and CSS tips.
    CSS Working Group
    Of course, you can always also go straight to the source and look at what the CSS Working Group itself has been working on! They have a mailing list you can subscribe to keep tabs on things straight from your inbox, as well as an RSS feed.
    Browser release notes
    Most browsers publish a set of release notes any time a new version ships. For the most part, you can get a good pulse on when new CSS features are released by following the three big names in browsers:
    Chrome release notes Safari release notes Firefox release notes ChatGPT
    Another way to catch up with CSS is to just ask ChatGPT! This sample prompt worked well enough for me:
    Other resources
    If you really want to get in the weeds, Igalia’s BCD Watch displays changes to MDN’s browser-compat-data repo, which itself tracks which features are supported in which browsers. 
    Also, the latest editions of the HTTP Archive Web Almanac do not seem to include a CSS section specifically, but past editions did feature one, which was a great way to catch up with CSS once a year. 
    There’s also caniuse’s news section, which does not seem to be frequently updated at the moment, but could become a great resource for up-to-date new feature info in the future.
    The IntentToShip bot (available on Bluesky, Mastodon, Twitter) posts whenever a browser vendor ships or changes a feature. You can’t get more cutting-edge than that!
    And lastly, there’s a ton of folks on social media who are frequently discussing new CSS features and sharing their own thoughts and experiments with them. If you’re on Bluesky, there’s a starter pack of CSS-Tricks authors that’s a good spot to find a deep community of people.
    Wrapping up
    Of course, another great way to make sure no new features are slipping through the cracks is to take the State of CSS survey once a year. I use all the resources mentioned above to try and make sure each survey includes every new important feature. What’s more, you can bookmark features by adding them to your “reading list” as you take the survey to get a nice recap at the end.
    So go take this year’s State of CSS survey and then let me know on Bluesky how many new features you learned about!
    How to Keep Up With New CSS Features originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. Role model blog: Anna Salo, Nitor

    by: Ani
    Tue, 17 Jun 2025 09:01:50 +0000

    At the end of high school, I was lost about which path to follow with my studies, or which interest of mine to turn into a career. But I had the dream of finding an efficient way to innovate something that could change people’s lives in the future.

    About me
    I’m Anna Salo, an accessibility-oriented software developer consultant at Nitor. For the last five years, I have focused on working with frontends for critical mass-user services and B2C, both web and mobile. Therefore, I have taken a special interest in creating sustainable and accessible user interfaces. That means closely following the Web Content Accessibility Guidelines (WCAG), especially on projects that involve public services and systems like student information platforms, which are legally required to meet WCAG accessibility standards. Many online services neglected accessibility requirements in the past, ending up in a difficult situation where the service doesn’t pass the updated accessibility regulations. With the new legislation coming into effect this summer, even more services, like web stores, will need to comply, meaning accessibility has become an essential part of today’s web and mobile consumer service development.
    My path to IT and accessibility specialist
    At the end of high school, I was lost about which path to follow with my studies, or which interest of mine to turn into a career. But I had the dream of finding an efficient way to innovate something that could change people’s lives in the future. From early on, I remember taking notice of bad UI design and UX, thinking about how I would have done this or that in a somewhat better way. However, I felt insecure about my artistic competence to become a designer myself. This is when the thought of being a developer came along. I had never seriously considered it for myself before, having some preconceived and not-so-flattering misconceptions about the tech field. So, at first, there was no deep enthusiasm behind applying to computer science; it felt more like a suitable path to find my career.
    I ended up studying Computer Science at the University of Helsinki, earning my Bachelor’s degree. I was happy to find that most of my fears and misconceptions about the field were false. In fact, its problem-solving nature, with room for creativity and design thinking, suited me well. While studying, I also started my career in startups, initially working on B2B solutions and later moving into B2C. I worked as a full-stack developer but gradually specialized in frontend development. I also have the typical tale: I took a gap year from my Master’s studies to concentrate on my career, and half a decade later, I’m still on that gap year.
    Prior to Nitor, I developed a React Native-based mobile app for iOS and Android. I had no prior experience with React Native or mobile app development, but I took the initiative to learn it from scratch. This ended up being a valuable stepping stone, giving me strong hands-on experience with modern technologies and project models. In those early startup roles, I had to implement every step of the process, which gave me a broad and practical understanding of real-world development, something that isn’t always taught at university.
    My journey with accessibility began when I wrote about Human-Computer Interaction and Natural User Interfaces in my Bachelor’s thesis. The research I did had a huge impact on my thinking about UI and UX challenges, as I learned about issues I wouldn’t have even known to consider before. I began to find my passion: to change the mindset of others towards accessibility-oriented thinking. Working with accessibility has been both challenging and rewarding, and there is always something new to grow my expertise in. My next career target is to test my skills and earn the IAAP Web Accessibility Specialist (WAS) certificate. Now, looking at the kind of work I do today, I’m proud of myself for taking that unexpected path. Stepping out of my comfort zone led me to something I truly care about, and hopefully it will make an improvement to other people’s lives.
    Anna Salo, Software Developer, Nitor
    Joining Nitor and working as a consultant
    Joining Nitor was a lucky coincidence for me. After a few years in the startup world, I had started to feel it would be the right moment to move forward with my career, but I was unsure about the next step. Then Nitor contacted me through LinkedIn for a coffee, so in a way, Nitor found me. I wasn’t familiar with the consulting business back then, but I decided to give it a go. Soon I found myself amazed by the warm and supportive Nitorean work community and its ambition to be a sustainable and quality-driven digital consulting house. I felt this could be the perfect work community for me.
    I was a bit anxious about whether I would be a good fit in the consulting field, but working in startups turned out to be a significant advantage. The hands-on experience with React Native and a good understanding of user interfaces made me a strong match for what Nitor was looking for. Luckily, I also ended up with a perfect customer case, where I got my chance to work with real and challenging accessibility issues, and later developed my expertise with an accessible Design System as well.
    When I joined in 2020, Nitor had claimed the ‘Great Place to Work’ title for a few years in a row, and I couldn’t agree more with that evaluation. I get to work with the nicest and most talented people, there is always support available, and I feel valued as part of our work community.  
    Nitor actively supports our self-study and professional development. We’re encouraged to get certifications and given five paid working days a year to focus on our academic training. We also have a mentoring program, through which I got the chance to learn and practise my UI design skills. This kind of support and freedom to grow makes working at Nitor so rewarding for me.
    One of the things I appreciate is the flexibility we have beyond customer work. We get to use 10% of our work time for internal or self-development—we call this “core work.” That could be working on our internal projects, like the Nitor.com website and internal apps, or contributing to various coding initiatives for the common good, like Virtanen.ai. There’s a lot of room to explore things we’re passionate about and a chance to create something truly great  from those things. 
    Keys to better accessibility
    Making a web service fully accessible means considering keyboard navigation, screen reader compatibility, and ensuring all content is properly structured for assistive technologies. You must also think about frontend usability for people who do not use assistive technology. That means securing accessible styles by using good colour contrasts and readable font sizes, and ensuring that the frontend has good cognitive accessibility. Avoiding pitfalls requires high code maintainability, writing clean, semantically correct code, and thorough testing across different devices and browsers.
    Accessibility isn’t always black and white. Sometimes, especially with more complex design or interaction patterns, there isn’t one definitive “correct” way to do something. In cases where the official WCAG (Web Content Accessibility Guidelines) doesn’t give a clear-cut answer, I assess the best approach based on the context. It becomes a bit of a judgment call: what’s the most inclusive and accessible option in this specific situation? That kind of thinking makes accessibility work both challenging and creative. With accessibility, you should avoid overdoing things and ensure that screen readers get only the information they need for the most fluid and informative user experience. You’re not just following rules—you’re solving real  problems for real people. 
    When I’m stuck on a problem or feeling overwhelmed, I’ve learned that the best thing I can do  is step away and give my brain a break. I often come back with a fresh perspective and usually  the solution suddenly feels obvious and straightforward. It’s a reminder that sometimes the best  way to handle self-doubt or mental blocks is simply to allow yourself space to reset and return  with a clear mind. 
    About the impact of AI
I’m genuinely concerned about how the rise of AI is impacting junior developers, especially those at the beginning of their careers. If companies continue hiring mostly senior-level talent and overlook junior developers, it will slow down the progress we’ve made in increasing diversity. You can’t become a senior developer without first being a junior one. We all need that early-career experience to grow, learn, and make mistakes. I know I did—I was a total beginner at my first job, and I wouldn’t be where I am today had I not gotten a chance to learn and improve, thanks to the support and feedback from more experienced colleagues.
    Not hiring coders for junior roles is especially affecting women developers, as we have a growing number of women in tech, many of whom are still at the start of their careers. I am afraid that companies often prioritize very high-level experience, creating a cycle where women don’t get the chance to gain that experience. 
    That’s one of the reasons I’ve started studying design and accessibility more deeply. These are  areas where the human perspective is still essential, and I believe they’ll remain relevant even as AI becomes more powerful. Design thinking and accessibility require empathy, context, and a deep understanding of real user needs—things machines still struggle with. 
    I believe AI will transform every profession —none of us is exempt. But we need to be intentional about how we adapt, ensuring we’re not sacrificing long-term skills and diversity for short-term efficiency. 
What worries me even more is the idea of training new developers to rely solely on AI-generated code. If you never have to do the thinking yourself from the start, you miss out on learning how to spot errors, understand best practices, or evaluate the quality of what’s being produced. Without foundational knowledge, how can you critically assess or improve what AI gives you?
    The post Role model blog: Anna Salo, Nitor first appeared on Women in Tech Finland.
  11. Chris’ Corner: Liquid Ass

    by: Chris Coyier
    Mon, 16 Jun 2025 16:23:56 +0000

First a quick heads up about… me. I have a weird itch to do “streaming”, so I’m letting myself just be a hardcore beginner and giving it a shot. The plan is just to hang out with whoever shows up and make stuff and talk about front end web development and design. So:
- Me on Twitch
- CodePen on YouTube

Seems like those two platforms make the most sense for that, so here we go.
    I made this super sick banner for Twitch, which you can’t even see because it’s covered by UI stuff lol.
    Welp.
    I suppose you knew that there’s no way I’m letting “liquid glass” slide by this week. Or should I say:
    Amazing.
    Marie actually beat me to it doing a whole Spark issue on it last week. Obviously CodePen users are all over this design trend, as it’s an absolutely magnetic challenge in CSS. Kevin Powell did a video which happened to drop at the perfect time. Kevin is so good at these things I’m like sick with jealousy about it. Maybe my stream will level up my video teaching skills.
    It’s not like CodePen is only now starting to have these glass-like effects. People have been doing it for ages. It had a particular boon when backdrop-filter: blur(2px); became a thing — that’s more like “frosted” glass — but still, Apple is doing that, too. Maybe -webkit-box-reflect will get new life on the web also? Feels related.
Sebastiaan de With foretold it nearly perfectly well. 👏👏👏. Little touches like the reflective progress bar are so cool.

    I don’t know if Apple is actually doing this particular detail, I don’t have the new OS yet, but Sebastiaan’s idea is awesome. Apple is actually quite serious about this, and released a video of the whole idea. Honestly I think it’s kinda awesome looking.
    But I did kinda 😬 about the accessibility of it.
    No chance the text “Nao” above is passing any contrast test. Nao way amiright?
    Feels like text/background contrast has taken a hit. I haven’t seen a full throated takedown of it yet (there are some mentions though), but I imagine that’s coming. There are already settings in there to tone the effects down, I hear.
    I thought out loud the other month: literally everything ships inaccessibly. And since having that thought I’ve seen a half dozen things ship that way. Certainly we’re not immune to it, but it’s good motivation to get some more accessibility testing done (we’ve done a good bit already!) on our new editor before it goes out.
    Random thing before I sign off. The Oatmeal on Erasers is lovely.
  12. by: Zell Liew
    Mon, 16 Jun 2025 12:47:51 +0000

Resize Observer, Mutation Observer, and Intersection Observer are all good APIs that are more performant than their older counterparts:

- ResizeObserver is better than the resize event
- MutationObserver replaces the now-deprecated Mutation Events
- IntersectionObserver lets you do certain scroll interactions with less performance overhead.

The APIs for these three observers are quite similar (but they have their differences, which we will go into later). To use an observer, you have to follow the steps below:

1. Create a new observer with the new keyword: this observer takes in an observer function to execute.
2. Do something with the observed changes: this is done via the observer function that is passed into the observer.
3. Observe a specific element: by using the observe method.
4. (Optional) Unobserve the element: by using the unobserve or disconnect method (depending on which observer you’re using).

In practice, the steps above look like this with the ResizeObserver:

// Step 1: Create a new observer
const observer = new ResizeObserver(observerFn)

// Step 2: Do something with the observed changes
function observerFn (entries) {
  for (let entry of entries) {
    // Do something with entry
  }
}

// Step 3: Observe an element
const element = document.querySelector('#some-element')
observer.observe(element);

// Step 4 (optional): Disconnect the observer
observer.disconnect()

This looks clear (and understandable) after the steps have been made clear. But it can look like a mess without the comments:

const observer = new ResizeObserver(observerFn)

function observerFn (entries) {
  for (let entry of entries) {
    // Do something with entry
  }
}

const element = document.querySelector('#some-element')
observer.observe(element);

The good news is: I think we can improve the observer APIs and make them easier to use.
    The Resize Observer
    Let’s start with the ResizeObserver since it’s the simplest of them all. We’ll begin by writing a function that encapsulates the resizeObserver that we create.
function resizeObserver () {
  // ... Do something
}

The easiest way to begin refactoring the ResizeObserver code is to put everything we’ve created into our resizeObserver first.

function resizeObserver () {
  const observer = new ResizeObserver(observerFn)

  function observerFn (entries) {
    for (let entry of entries) {
      // Do something with entry
    }
  }

  const node = document.querySelector('#some-element')
  observer.observe(node);
}

Next, we can pass the element into the function to make it simpler. When we do this, we can eliminate the document.querySelector line.

function resizeObserver (element) {
  const observer = new ResizeObserver(observerFn)

  function observerFn (entries) {
    for (let entry of entries) {
      // Do something with entry
    }
  }

  observer.observe(element);
}

This makes the function more versatile since we can now pass any element into it.

// Usage of the resizeObserver function
const node = document.querySelector('#some-element')
const obs = resizeObserver(node)

This is already much easier than writing all of the ResizeObserver code from scratch whenever you wish to use it.
    Next, it’s quite obvious that we have to pass in an observer function to the callback. So, we can potentially do this:
// Not great
function resizeObserver (node, observerFn) {
  const observer = new ResizeObserver(observerFn)
  observer.observe(node);
}

Since observerFn is always the same — it loops through the entries and acts on every entry — we could keep the observerFn and pass in a callback to perform tasks when the element is resized.

// Better
function resizeObserver (node, callback) {
  const observer = new ResizeObserver(observerFn)

  function observerFn (entries) {
    for (let entry of entries) {
      callback(entry)
    }
  }

  observer.observe(node);
}

To use this, we can pass callback into the resizeObserver — this makes resizeObserver operate somewhat like an event listener which we are already familiar with.

// Usage of the resizeObserver function
const node = document.querySelector('#some-element')
const obs = resizeObserver(node, entry => {
  // Do something with each entry
})

We can make the callback slightly better by providing both entry and entries. There’s no performance hit for passing an additional variable, so there’s no harm in providing more flexibility here.

function resizeObserver (element, callback) {
  const observer = new ResizeObserver(observerFn)

  function observerFn (entries) {
    for (let entry of entries) {
      callback({ entry, entries })
    }
  }

  observer.observe(element);
}

Then we can grab entries in the callback if we need to.

// Usage of the resizeObserver function
// ...
const obs = resizeObserver(node, ({ entry, entries }) => {
  // ...
})

Next, it makes sense to pass the callback as an option parameter instead of a variable. This will make resizeObserver more consistent with the mutationObserver and intersectionObserver functions that we will create in the next article.

function resizeObserver (element, options = {}) {
  const { callback } = options
  const observer = new ResizeObserver(observerFn)

  function observerFn (entries) {
    for (let entry of entries) {
      callback({ entry, entries })
    }
  }

  observer.observe(element);
}

Then we can use resizeObserver like this:

const obs = resizeObserver(node, {
  callback ({ entry, entries }) {
    // Do something ...
  }
})

The observer can take in an option too
    ResizeObserver‘s observe method can take in an options object that contains one property, box. This determines whether the observer will observe changes to content-box, border-box or device-pixel-content-box.
    So, we need to extract these options from the options object and pass them to observe.
function resizeObserver (element, options = {}) {
  const { callback, ...opts } = options
  // ...
  observer.observe(element, opts);
}

Optional: Event listener pattern
    I prefer using callback because it’s quite straightforward. But if you want to use a standard event listener pattern, we can do that, too. The trick here is to emit an event. We’ll call it resize-obs since resize is already taken.
function resizeObserver (element, options = {}) {
  // ...
  function observerFn (entries) {
    for (let entry of entries) {
      if (callback) callback({ entry, entries })
      else {
        element.dispatchEvent(
          new CustomEvent('resize-obs', {
            detail: { entry, entries },
          }),
        )
      }
    }
  }
  // ...
}

Then we can listen to the resize-obs event, like this:

const obs = resizeObserver(node)

node.addEventListener('resize-obs', event => {
  const { entry, entries } = event.detail
})

Again, this is optional.
    Unobserving the element
    One final step is to allow the user to stop observing the element(s) when observation is no longer required. To do this, we can return two of the observer methods:
- unobserve: Stops observing one Element
- disconnect: Stops observing all Elements

function resizeObserver (node, options = {}) {
  // ...
  return {
    unobserve(node) {
      observer.unobserve(node)
    },
    disconnect() {
      observer.disconnect()
    }
  }
}

Both methods do the same thing for what we have built so far since we only allowed resizeObserver to observe one element. So, pick whatever method you prefer to stop observing the element.

const obs = resizeObserver(node, {
  callback ({ entry, entries }) {
    // Do something ...
  }
})

// Stops observing all elements
obs.disconnect()

With this, we’ve completed the creation of a better API for the ResizeObserver — the resizeObserver function.
    Code snippet
Here’s the code we wrote for resizeObserver:

export function resizeObserver(node, options = {}) {
  const observer = new ResizeObserver(observerFn)
  const { callback, ...opts } = options

  function observerFn(entries) {
    for (const entry of entries) {
      // Callback pattern
      if (callback) callback({ entry, entries, observer })
      // Event listener pattern
      else {
        node.dispatchEvent(
          new CustomEvent('resize-obs', {
            detail: { entry, entries, observer },
          })
        )
      }
    }
  }

  observer.observe(node, opts)

  return {
    unobserve(node) {
      observer.unobserve(node)
    },
    disconnect() {
      observer.disconnect()
    }
  }
}

Using this in practice via Splendid Labz

Splendid Labz has a utils library that contains an enhanced version of the resizeObserver we made above. You can use it if you want an enhanced observer, or if you don’t want to copy-paste the observer code into your projects.

import { resizeObserver } from '@splendidlabz/utils/dom'

const node = document.querySelector('.some-element')
const obs = resizeObserver(node, {
  callback ({ entry, entries }) {
    /* Do what you want here */
  }
})

Bonus: The Splendid Labz resizeObserver is capable of observing multiple elements at once. It can also unobserve multiple elements at once.

const items = document.querySelectorAll('.elements')
const obs = resizeObserver(items, {
  callback ({ entry, entries }) {
    /* Do what you want here */
  }
})

// Unobserves two items at once
const subset = [items[0], items[1]]
obs.unobserve(subset)

Found this refactoring helpful?

Refactoring is ultra useful (and important) because it’s a process that lets us create code that’s easy to use or maintain.
    If you found this refactoring exercise useful, you might just love how I teach JavaScript to budding developers in my Learn JavaScript course.
    In this course, you’ll learn to build 20 real-world components. For each component, we start off simple. Then we add features and you’ll learn to refactor along the way.
    That’s it!
    Hope you enjoyed this piece and see you in the next one.
    A Better API for the Resize Observer originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  13. by: Abhishek Prakash
    Fri, 13 Jun 2025 19:18:07 +0530

    Another week, another chance to pretend you're fixing something important by typing furiously in the terminal. You do that, right? Or is it just me? 😉
    This week's highlights are:
- lsattr, chattr and grep commands
- brace expansions
- VERT converter
- And your regular dose of news, memes and tips

❇️ Explore DigitalOcean with $100 free credit
DigitalOcean is my favorite alternative to the likes of AWS, Azure, and Google Cloud. I use it to host Linux Handbook and am pretty happy with their performance and ease of deployment. Try their servers and marketplace apps for free with $100 credit, which is applicable to new accounts.
      This post is for subscribers only
  14. by: Abhishek Kumar
    Fri, 13 Jun 2025 18:48:38 +0530

    Note-taking has come a long way from crumpled sticky notes and scattered .txt files. Today, we want our notes to be searchable, linked, visualized, and ideally, available anywhere. That’s where Obsidian shines.
Source: Obsidian.md

Built around plain-text Markdown files, Obsidian offers local-first knowledge management with powerful graph views, backlinks, and a thriving plugin ecosystem.
    For many, it has become the go-to app for personal knowledge bases and second brains.
    While Obsidian does offer Obsidian Sync, a proprietary syncing service that lets you keep your notes consistent across devices, it’s behind a paywall.
    That’s fair for the convenience, but I wanted something different:
    A central Obsidian server, running in my homelab, accessible via browser, no desktop clients, no mobile apps, just one self-hosted solution available from anywhere I go.
    And yes, that’s entirely possible.
    Thanks to LinuxServer.io, who maintain some of the most stable and well-documented Docker images out there, setting this up was a breeze.
    I’ve been using their containers for various services in my homelab, and they’ve always been rock solid.
    Let me walk you through how I deployed Obsidian this way.
    Prerequisites
    We assume you have:
- A Linux system with Docker and Docker Compose installed.
- A basic understanding of terminal commands.
- Familiarity with editing YAML files.

💡 If you're new to Docker or Compose, check out our beginner Docker series and how to set up Docker Compose articles first.

Setting up Obsidian
    If you prefer keeping your self-hosted apps neatly organized (like I do), it's a good idea to create separate folders for each container.
    This not only helps with manageability, but also makes it easier to back up or migrate later.
    1. Create a data directory for Obsidian
    Let’s start by creating a folder for Obsidian data:
mkdir -p ~/docker/obsidian
cd ~/docker/obsidian

You can name it whatever you like, but I’m sticking with obsidian to keep things clear.
    2. Create a docker-compose.yml File
Now, we’ll set up a Docker Compose file. This is the file that tells Docker how to run Obsidian: what image to use, which ports to open, and other important stuff.
    You don’t need to write the whole thing from scratch. I’m using the official example from the LinuxServer.io image page, but with a few changes tailored to my system.
    Just copy the following into a new file named docker-compose.yml:
    version: "3.8" services: obsidian: image: ghcr.io/linuxserver/obsidian:latest container_name: obsidian security_opt: - no-new-privileges:false - seccomp:unconfined healthcheck: test: timeout 10s bash -c ':> /dev/tcp/127.0.0.1/3000' || exit 1 interval: 10s timeout: 5s retries: 3 start_period: 90s ports: - "3000:3000" shm_size: "2gb" volumes: - ./config:/config:rw environment: CUSTOM_USER: yourusername PASSWORD: yourpassword PUID: 1000 PGID: 1000 TZ: Asia/Kolkata restart: unless-stopped Let’s break down a few important parts of this:
    image: We're using the latest Obsidian image provided by LinuxServer.io. volumes: Maps a config folder in your current directory to Obsidian’s internal config directory, this is where all your Obsidian data and settings will live. ports: The app will be available on port 3000 of your machine. You can change this if you prefer a different port. shm_size: Allocates shared memory; useful for apps with a UI like Obsidian. environment: This is where you set up your user, password, timezone, and file ownership. Make sure you replace the following placeholders with your own values:
    yourusername: The username you'll use to log in to Obsidian. yourpassword: Choose a strong password. TZ: Use your local timezone. (Example: Asia/Kolkata) PUID and PGID: These should match your user’s UID and GID on the host system. To find them, run: id yourusername You'll get something like this:
    uid=1000(yourusername) gid=1000(yourusername) groups=1000(yourusername),27(sudo),... Use those values in your Compose file.
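If you’d rather not read the values out of that line by hand, a small sketch like this captures the UID and GID of the currently logged-in user directly (assuming you run it as the user who will own the config folder):

```shell
# Capture the current user's UID and GID into variables,
# ready to paste into the environment: section of the Compose file
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=$PUID PGID=$PGID"
```

This prints something like PUID=1000 PGID=1000, matching what the longer id output shows.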
    3. Deploy the Container
    Once the docker-compose.yml file is ready and the values are customized, go ahead and start the container:
docker-compose up -d

This command tells Docker to:

- Pull the Obsidian image (if it’s not already downloaded)
- Create a container using the settings we defined
- Run it in detached mode (-d), so it continues running in the background

Give it a minute or two; the first time you run this, Docker needs to download the entire image and set everything up. After that, it’ll be much faster on subsequent restarts.
    Accessing Obsidian in your browser
    Once it's done, you should be able to open Obsidian in your browser at:
http://localhost:3000

Or replace localhost with your server's IP if you’re not running it locally.

💡 Optional: If you plan to access this instance from outside your local network, we strongly recommend putting it behind a reverse proxy like Caddy or NGINX with HTTPS and authentication. You can even pair it with a secure tunneling solution (like Cloudflare Tunnel or Tailscale Funnel) if you're behind CGNAT.

Log in using the CUSTOM_USER and PASSWORD you set earlier.
    Once inside, it will look like this:
    Here you can:
- Create a new vault.
- Open an existing vault in the config volume.
- Explore the graph view, plugins, and everything else, right from the browser.

Creating a new vault
    For this tutorial, we’ll keep things simple, I’m just going to create a new vault to get started.
    Click on "Create", give your vault a name (anything you like - "secondbrain", "mynotes", "vault", etc.), and Obsidian will take care of the rest.
    It’ll create a new folder inside the mounted config directory we set up in Docker earlier. This means all your notes and settings will be saved persistently on your machine, even if the container is stopped or restarted.
    After you name and create the vault, Obsidian will drop you straight into the note-taking interface. And that’s it, you’re in!
    You can now start writing notes, creating folders, and playing around with features like:
- Graph view to visualize links between notes
- Command palette to quickly access features
- Themes and plugin settings to customize your environment

Everything is accessible from the left sidebar, just like in the desktop app. No extra setup needed; just start typing and let your ideas flow.
    Final thoughts
Setting up Obsidian inside Docker was surprisingly easy. It didn’t take much time, and before I knew it, I had the full desktop-like experience running in my browser.
    This setup is especially great for people on the go or students like me who love using Obsidian but can’t always afford the Sync feature just yet.
Now, I personally don’t mind paying for good software, and I think Obsidian Sync is a solid service, but those little costs start stacking up fast.
    I’ve also seen quite a few Reddit threads where folks have built their own syncing setups using Syncthing to keep notes in sync across devices, and that seems like a solid workaround as well.
    For me, this self-hosted browser version of Obsidian fits somewhere in the middle. It gives you the full experience without the limitations of a mobile app or the need to sync through someone else’s servers.
    And if you're already in the self-hosting ecosystem, it’s just another powerful tool you can add to your stack.
  15. by: Sladjana Stojanovic
    Thu, 12 Jun 2025 13:58:38 +0000

    For years, I believed that drag-and-drop games — especially those involving rotation, spatial logic, and puzzle solving — were the exclusive domain of JavaScript. Until one day, I asked AI:
    The answer: “No — not really. You’ll need JavaScript.” That was all the motivation I needed to prove otherwise.
CodePen Embed Fallback

But first, let’s ask the obvious question: Why would anyone do this?
    Well…
- To know how far CSS can be pushed in creating interactive UIs.
- To get better at my CSS skills.
- And it’s fun!

Fair enough?
    Now, here’s the unsurprising truth: CSS isn’t exactly made for this. It’s not a logic language, and let’s be honest, it’s not particularly dynamic either. (Sure, we have CSS variables and some handy built-in functions now, hooray!)
    In JavaScript, we naturally think in terms of functions, loops, conditions, objects, comparisons. We write logic, abstract things into methods, and eventually ship a bundle that the browser understands. And once it’s shipped? We rarely look at that final JavaScript bundle — we just focus on keeping it lean.
    Now ask yourself: isn’t that exactly what Sass does for CSS?
    Why should we hand-write endless lines of repetitive CSS when we can use mixins and functions to generate it — cleanly, efficiently, and without caring how many lines it takes, as long as the output is optimized?
    So, we put it to the test and it turns out Sass can replace JavaScript, at least when it comes to low-level logic and puzzle behavior. With nothing but maps, mixins, functions, and a whole lot of math, we managed to bring our Tangram puzzle to life, no JavaScript required.
    Let the (CSS-only) games begin! 🎉
    The game
    The game consists of seven pieces: the classic Tangram set. Naturally, these pieces can be arranged into a perfect square (and many other shapes, too). But we need a bit more than just static pieces.
    So here’s what I am building:
    A puzzle goal, which is the target shape the player has to recreate. A start button that shuffles all the pieces into a staging area. Each piece is clickable and interactive. The puzzle should let the user know when they get a piece wrong and also celebrate when they finish the puzzle. The HTML structure
    I started by setting up the HTML structure, which is no small task, considering the number of elements involved.
- Each shape was given seven radio buttons. I chose radios over checkboxes to take advantage of their built-in exclusivity: only one can be selected within the same group. This made it much easier to track which shape and state were currently active.
- The start button? Also a radio input. A checkbox could’ve worked too, but for the sake of consistency, I stuck with radios across the board.
- The puzzle map itself is just a plain old <div>, simple and effective.
- For rotation, we added eight radio buttons, each representing a 45-degree increment: 45°, 90°, 135°, all the way to 360°. These simulate rotation controls entirely in CSS.
- Every potential shadow position got its own radio button too. (Yes, it’s a lot, I know.)
- And to wrap it all up, I included a classic reset button inside a <form> using <button type="reset">, so players can easily start over at any point.

Given the sheer number of elements required, I used Pug to generate the HTML more efficiently. It was purely a convenience choice. It doesn’t affect the logic or behavior of the puzzle in any way.
    Below is a sample of the compiled HTML. It might look overwhelming at first glance (and this is just a portion of it!), but it illustrates the structural complexity involved. This section is collapsed to not nuke your screen, but it can be expanded if you’d like to explore it.
Open HTML Code

<div class="wrapper">
  <div class="tanagram-box"></div>
  <div class="tanagram-box"></div>
  <form class="container">
    <input class="hide_input start" type="checkbox" id="start" autofocus />
    <button class="start-button" type="reset" id="restart">Restart</button>
    <label class="start-button" for="start">Start </label>
    <div class="shadow">
      <input class="hide_input" type="radio" id="blueTriangle-tan" name="tan-active" />
      <input class="hide_input" type="radio" id="yellowTriangle-tan" name="tan-active" />
      <!-- Inputs for others tans -->
      <input class="hide_input" type="radio" id="rotation-reset" name="tan-active" />
      <input class="hide_input" type="radio" id="rotation-45" name="tan-rotation" />
      <input class="hide_input" type="radio" id="rotation-90" name="tan-rotation" />
      <!-- radios for 90, 225, 315, 360 -->
      <input class="hide_input" type="checkbox" id="yellowTriangle-tan-1-135" name="tan-rotation" />
      <input class="hide_input" type="checkbox" id="yellowTriangle-tan-1-225" name="tan-rotation" />
      <!-- radio for every possible shape shadows -->
      <label class="rotation rot" for="rotation-45" id="rot45">⟲</label>
      <label class="rotation rot" for="rotation-90" id="rot90">⟲</label>
      <!-- radios for 90, 225, 315, 360 -->
      <label class="rotation" for="rotation-reset" id="rotReset">✘</label>
      <label class="blueTriangle tans" for="blueTriangle-tan" id="tanblueTrianglelab"></label>
      <div class="tans tan_blocked" id="tanblueTrianglelabRes"></div>
      <!-- labels for every tan and disabled div -->
      <label class="blueTriangle tans" for="blueTriangle-tan-1-90" id="tanblueTrianglelab-1-90"></label>
      <label class="blueTriangle tans" for="blueTriangle-tan-1-225" id="tanblueTrianglelab-1-225"></label>
      <!-- labels radio for every possible shape shadows -->
      <div class="shape"></div>
    </div>
  </form>
  <div class="tanagram-box"></div>
  <div class="tanagram-box"></div>
  <div class="tanagram-box"></div>
  <div class="tanagram-box"></div>
  <div class="tanagram-box"></div>
</div>

Creating maps for shape data
    Now that HTML skeleton is ready, it’s time to inject it with some real power. That’s where our Sass maps come in, and here’s where the puzzle logic starts to shine.
    Note: Maps in Sass hold pairs of keys and values, and make it easy to look up a value by its corresponding key. Like objects in JavaScript, dictionaries in Python and, well, maps in C++.
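For instance, a tiny map lookup looks like this (a sketch using Sass’s built-in sass:map module; the .blue-tan selector is just illustrative):

```scss
@use "sass:map";

$colors: (
  blue-color: #53a0e0,
  yellow-color: #f7db4f,
);

.blue-tan {
  // map.get returns the value stored under the given key
  background-color: map.get($colors, blue-color);
}
```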
    I’m mapping out all the core data needed to control each tangram piece (tan): its color, shape, position, and even interaction logic. These maps contain:
- the background-color for each tan,
- the clip-path coordinates that define their shapes,
- the initial position for each tan,
- the position of the blocking div (which disables interaction when a tan is selected),
- the shadow positions (coordinates for the tan’s silhouette displayed on the task board),
- the grid information, and
- the winning combinations — the exact target coordinates for each tan, marking the correct solution.

$colors: (
  blue-color: #53a0e0,
  yellow-color: #f7db4f,
  /* Colors for each tan */
);

$nth-child-grid: (
  1: (2, 3, 1, 2),
  2: (3, 4, 1, 2),
  4: (1, 2, 2, 3),
  /* More entries to be added */
);

$bluePosiblePositions: (
  45: none,
  90: ((6.7, 11.2),),
  135: none,
  180: none,
  /* Positions defined up to 360 degrees */
);

/* Other tans */

/* Data defined for each tan */
$tansShapes: (
  blueTriangle: (
    color: map.get($colors, blue-color),
    clip-path: (0 0, 50 50, 0 100),
    rot-btn-position: (-20, -25),
    exit-mode-btn-position: (-20, -33),
    tan-position: (-6, -37),
    diable-lab-position: (-12, -38),
    poss-positions: $bluePosiblePositions,
    correct-position: ((4.7, 13.5), (18.8, 13.3)),
    transform-origin: (4.17, 12.5),
  ),
);

/* Remaining 7 combinations */
$winningCombinations: (
  combo1: (
    (blueTriangle, 1, 360),
    (yellowTriangle, 1, 225),
    (pinkTriangle, 1, 180),
    (redTriangle, 4, 360),
    (purpleTriangle, 2, 225),
    (square, 1, 90),
    (polygon, 4, 90),
  ),
);

You can see this in action on CodePen, where these maps drive the actual look and behavior of each puzzle piece. At this point, there’s no visible change in the preview. We’ve simply prepared and stored the data for later use.
CodePen Embed Fallback

Using mixins to read from maps
    The main idea is to create reusable mixins that will read data from the maps and apply it to the corresponding CSS rules when needed.
But before that, we’ve elevated things to a higher level by making one key decision: we never hard-coded units directly inside the maps. Instead, we built a reusable utility function that dynamically adds the desired unit (e.g., vmin, px, etc.) to any numeric value when it’s being used. This way, we can use our maps however we please.
@function get-coordinates($data, $key, $separator, $unit) {
  $coordinates: null;

  // Check if the first argument is a map
  @if meta.type-of($data) == "map" {
    // If the map contains the specified key
    @if map.has-key($data, $key) {
      // Get the value associated with the key (expected to be a list of coordinates)
      $coordinates: map.get($data, $key);
    }
  // If the first argument is a list
  } @else if meta.type-of($data) == "list" {
    // Ensure the key is a valid index (1-based) within the list
    @if meta.type-of($key) == "number" and $key > 0 and $key <= list.length($data) {
      // Retrieve the item at the specified index
      $coordinates: list.nth($data, $key);
    }
  // If neither map nor list, throw an error
  } @else {
    @error "Invalid input: First argument must be a map or a list.";
  }

  // If no valid coordinates were found, return null
  @if $coordinates == null {
    @return null;
  }

  // Extract x and y values from the list
  $x: list.nth($coordinates, 1);
  $y: list.nth($coordinates, -1); // -1 gets the last item (y)

  // Return the combined x and y values with units and separator
  @return #{$x}#{$unit}#{$separator}#{$y}#{$unit};
}

Sure, nothing’s showing up in the preview yet, but the real magic starts now.
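As a quick sketch of how get-coordinates might be called (the selector and the use of translate are illustrative; the data comes from the correct-position list in the $tansShapes map shown earlier):

```scss
// Assuming $tansShapes and get-coordinates from above are in scope
$blue: map.get($tansShapes, blueTriangle);

.blueTriangle-target {
  // Reads the first (x, y) pair from correct-position, i.e. (4.7, 13.5),
  // and emits it with units as "4.7vmin 13.5vmin"
  translate: get-coordinates(map.get($blue, correct-position), 1, " ", vmin);
}
```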
    CodePen Embed Fallback Now we move on to writing mixins. I’ll explain the approach in detail for the first mixin, and the rest will be described through comments.
    The first mixin dynamically applies grid-column and grid-row placement rules to child elements based on values stored in a map. Each entry in the map corresponds to an element index (1 through 8) and contains a list of four values: [start-col, end-col, start-row, end-row].
    @mixin tanagram-grid-positioning($nth-child-grid) { // Loop through numbers 1 to 8, corresponding to the tangram pieces @for $i from 1 through 8 { // Check if the map contains a key for the current piece (1-8) @if map.has-key($nth-child-grid, $i) { // Get the grid values for this piece: [start-column, end-column, start-row, end-row] $values: map.get($nth-child-grid, $i); // Target the nth child (piece) and set its grid positions &:nth-child(#{$i}) { // Set grid-column: start and end values based on the first two items in the list grid-column: #{list.nth($values, 1)} / #{list.nth($values, 2)}; // Set grid-row: start and end values based on the last two items in the list grid-row: #{list.nth($values, 3)} / #{list.nth($values, 4)}; } } } } We can expect the following CSS to be generated:
    .tanagram-box:nth-child(1) { grid-column: 2 / 3; grid-row: 1 / 2; } .tanagram-box:nth-child(2) { grid-column: 3 / 4; grid-row: 1 / 2; } CodePen Embed Fallback In this mixin, my goal was actually to create all the shapes (tans). I am using clip-path. There were ideas to use fancy SVG images, but this test project is more about testing the logic rather than focusing on beautiful design. For this reason, the simplest solution was to cut the elements according to dimensions while they are still in the square (the initial position of all the tans).
    So, in this case, through a static calculation, the $tansShapes map was updated with the clip-path property:
    clip-path: (0 0, 50 50, 0 100); This contains the clip points for all the tans. In essence, this mixin shapes and colors each tan accordingly.
    @mixin set-tan-clip-path($tanName, $values) { // Initialize an empty list to hold the final clip-path points $clip-path-points: (); // Extract the 'clip-path' data from the map, which contains coordinate pairs $clip-path-key: map.get($values, clip-path); // Get the number of coordinate pairs to loop through $count: list.length($clip-path-key); // Loop through each coordinate point @for $i from 1 through $count { // Convert each pair of numbers into a formatted coordinate string with units $current-point: get-coordinates($clip-path-key, $i, " ", "%"); // Add the formatted coordinate to the list, separating each point with a comma $clip-path-points: list.append($clip-path-points, #{$current-point}, comma); } // Style for the preview element (lab version), using the configured background color #tan#{$tanName}lab { background: map.get($values, color); clip-path: polygon(#{$clip-path-points}); // Apply the full list of clip-path points } // Apply the same clip-path to the actual tan element .#{$tanName} { clip-path: polygon(#{$clip-path-points}); } } and output in CSS should be:
    .blueTriangle { clip-path: polygon(0% 0%, 50% 50%, 0% 100%); } /* other tans */ CodePen Embed Fallback Start logic
    Alright, now I’d like to clarify what should happen first when the game loads.
    First, with a click on the Start button, all the tans “go to their positions.” In reality, we assign them a transform: translate() with specific coordinates and a rotation.
    .start:checked ~ .shadow #tanblueTrianglelab { transform-origin: 4.17vmin 12.5vmin; transform: translate(-6vmin,-37vmin) rotate(360deg); cursor: pointer; } CodePen Embed Fallback So, we still maintain this pattern. We use transform and simply change the positions or angles (in the maps) of both the tans and their shadows on the task board.
    When any tan is clicked, the rotation button appears. By clicking on it, the tan should rotate around its center, and this continues with each subsequent click. There are actually eight radio buttons, and with each click, one disappears and the next one appears. When we reach the last one, clicking it makes it disappear and the first one reappears. This way, we get the impression of clicking the same button (they are, of course, styled the same) and being able to click (rotate the tan) infinitely. This is exactly what the following mixin enables.
    @mixin set-tan-rotation-states($tanName, $values, $angles, $color) { // This mixin dynamically applies rotation UI styles based on a tan's configuration. // It controls the positioning and appearance of rotation buttons and visual feedback when a rotation state is active. @each $angle in $angles{ & ~ #rot#{$angle}{ transform: translate(get-coordinates($values,rot-btn-position,',',vmin )); background: $color;} & ~ #rotation-#{$angle}:checked{ @each $key in map.keys($tansShapes){ & ~ #tan#{$key}labRes{ visibility: visible; background:rgba(0,0,0,0.4); } & ~ #tan#{$key}lab{ opacity:.3; } & ~ #rotReset{ visibility: visible; } } } } } And the generated CSS should be:
    #blueTriangle-tan:checked ~ #rotation-45:checked ~ #tanblueTrianglelab { transform: translate(-6vmin,-37vmin) rotate(45deg); } #blueTriangle-tan:checked ~ #rotation-45:checked ~ #tanblueTrianglelabRes { visibility: hidden; } OK, the following mixins use the set-clip-path and set-rotation mixins. They contain all the information about the tans and their behavior in relation to which tan is clicked and which rotation is selected, as well as their positions (as defined in the second mixin).
    @mixin generate-tan-shapes-and-interactions($tansShapes) { // Applies styling logic and UI interactions for each individual tan shape from the $tansShapes map. @each $tanName, $values in $tansShapes{ $color: color.scale(map.get($values, color), $lightness: 10%); $angles: (45, 90, 135, 180, 225, 270, 315, 360); @include set-tan-clip-path($tanName, $values); ##{$tanName}-tan:checked{ & ~ #tan#{$tanName}Res{ visibility:hidden; } & ~ #tan#{$tanName}lab{opacity: 1 !important;background: #{$color};cursor:auto;} @each $key in map.keys($tansShapes){ & ~ #tan#{$tanName}Res:checked ~ #tan#{$key}labRes{visibility: visible;} } & ~ #rot45{display: flex;visibility: visible;} & ~ #rotReset{ transform: translate(get-coordinates($values, exit-mode-btn-position,',', vmin)); } @include set-tan-rotation-states($tanName, $values, $angles, $color); } } } @mixin set-initial-tan-position($tansShapes) { // This mixin sets the initial position and transformation for both the interactive (`lab`) and shadow (`labRes`) versions // of each tan shape, based on coordinates provided in the $tansShapes map. @each $tanName, $values in $tansShapes{ & ~ .shadow #tan#{$tanName}lab{ transform-origin: get-coordinates($values, transform-origin,' ' ,vmin); transform: translate( get-coordinates($values,tan-position,',', vmin)) rotate(360deg) ; cursor: pointer; } & ~ .shadow #tan#{$tanName}labRes{ visibility:hidden; transform: translate(get-coordinates($values,diable-lab-position,',',vmin)); } } } CodePen Embed Fallback As mentioned earlier, when a tan is clicked, one of the things that becomes visible is its shadow — a silhouette that appears on the task board.
    These shadow positions (coordinates) are currently defined statically. Each shadow has a specific place on the map, and a mixin reads this data and applies it to the shadow using transform: translate().
    When the clicked tan is rotated, the number of visible shadows on the task board can change, as well as their angles, which is expected.
    Of course, special care was taken with naming conventions. Each shadow element gets a unique ID, made from the name (inherited from its parent tan) and a number that represents its sequence position for the given angle.
    Pretty cool, right? That way, we avoid complicated naming patterns entirely!
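    One detail worth flagging: the position mixin that follows calls two helpers, normalize-angle and normalize-polygon-angle, whose bodies aren't shown in the article. A plausible reading, and this is my assumption rather than project code, is that they fold rotation angles onto each shape's rotational symmetry: a square looks the same every 90°, and the parallelogram-like polygon every 180°. Sketched in JavaScript:

```javascript
// Assumed behavior of the unshown Sass helpers (sketch, not project code):
// fold a rotation angle onto the shape's symmetry period so that visually
// identical rotations share one set of possible positions.
function normalizeAngle(angle, period) {
  const folded = angle % period;
  return folded === 0 ? period : folded; // keep angles in (0, period]
}

const normalizeSquareAngle = (a) => normalizeAngle(a, 90);   // square: 4-fold symmetry
const normalizePolygonAngle = (a) => normalizeAngle(a, 180); // parallelogram: 2-fold
```

    Under this reading, a square rotated 225° reuses the positions defined for 45°, since both rotations look identical.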
    @mixin render-possible-tan-positions( $name, $angle, $possiblePositions, $visibility, $color, $id, $transformOrigin ) { // This mixin generates styles for possible positions of a tan shape based on its name, rotation angle, and configuration map. // It handles both squares and polygons, normalizing their rotation angles accordingly and applying transform styles if positions exist. @if $name == 'square' { $angle: normalize-angle($angle); // Normalize the angle if the shape is a square } @else if $name == 'polygon'{ $angle: normalize-polygon-angle($angle); } @if map.has-key($possiblePositions, $angle) { $values: map.get($possiblePositions, $angle); @if $values != none { $count: list.length($values); @for $i from 1 through $count { $position: get-coordinates($values, $i, ',', vmin); & ~ #tan#{$name}lab-#{$i}-#{$angle} { @if $visibility == visible { visibility: visible; background-color: $color; opacity: .2; z-index: 2; transform-origin: #{$transformOrigin}; transform: translate(#{$position}) rotate(#{$angle}deg); } @else if $visibility == hidden { visibility: hidden; } &:hover{ opacity: 0.5; cursor: pointer; } } } } } } } The generated CSS:
    #blueTriangle-tan:checked ~ #tanblueTrianglelab-1-360 { visibility: visible; background-color: #53a0e0; opacity: 0.2; z-index: 2; transform-origin: 4.17vmin 12.5vmin; transform: translate(4.7vmin,13.5vmin) rotate(360deg); } This next mixin is tied to the previous one and manages when and how the tan shadows appear while their parent tan is being rotated using the button. It listens for the current rotation angle and checks whether there are any shadow positions defined for that specific angle. If there are, it displays them; if not — no shadows!
    @mixin render-possible-positions-by-rotation { // This mixin applies rotation to each tan shape. It loops through each tan, calculates its possible positions for each angle, and handles visibility and transformation. // It ensures that rotation is applied correctly, including handling the transitions between various tan positions and visibility states. @each $tanName, $values in $tansShapes{ $possiblePositions: map.get($values, poss-positions); $possibleTansColor: map.get($values, color); $validPosition: get-coordinates($values, correct-position,',' ,vmin); $transformOrigin: get-coordinates($values,transform-origin,' ' ,vmin); $rotResPosition: get-coordinates($values,exit-mode-btn-position ,',' ,vmin ); $angle: 0; @for $i from 1 through 8{ $angle: $i * 45; $nextAngle: if($angle + 45 > 360, 45, $angle + 45); @include render-position-feedback-on-task($tanName,$angle, $possiblePositions,$possibleTansColor, #{$tanName}-tan, $validPosition,$transformOrigin, $rotResPosition); ##{$tanName}-tan{ @include render-possible-tan-positions($tanName,$angle, $possiblePositions,hidden, $possibleTansColor, #{$tanName}-tan,$transformOrigin) } ##{$tanName}-tan:checked{ @include render-possible-tan-positions($tanName,360, $possiblePositions,visible, $possibleTansColor, #{$tanName}-tan,$transformOrigin); & ~ #rotation-#{$angle}:checked { @include render-possible-tan-positions($tanName,360, $possiblePositions,hidden, $possibleTansColor, #{$tanName}-tan,$transformOrigin); & ~ #tan#{$tanName}lab{transform:translate( get-coordinates($values,tan-position,',', vmin)) rotate(#{$angle}deg) ;} & ~ #tan#{$tanName}labRes{ visibility: hidden; } & ~ #rot#{$angle}{ visibility: hidden; } & ~ #rot#{$nextAngle}{ visibility: visible } @include render-possible-tan-positions($tanName,$angle, $possiblePositions,visible, $possibleTansColor, #{$tanName}-tan,$transformOrigin); } } } } } CodePen Embed Fallback When a tan’s shadow is clicked, the corresponding tan should move to that shadow’s position. 
The next mixin then checks whether this new position is the correct one for solving the puzzle. If it is correct, the tan gets a brief blinking effect and becomes unclickable, signaling it’s been placed correctly. If it’s not correct, the tan simply stays at the shadow’s location. There’s no effect and it remains draggable/clickable.
    CodePen Embed Fallback Of course, there’s a list of all the correct positions for each tan. Since some tans share the same size — and some can even combine to form larger, existing shapes — we have multiple valid combinations. For this Camel task, all of them were taken into account. A dedicated map with these combinations was created, along with a mixin that reads and applies them.
    CodePen Embed Fallback At the end of the game, when all tans are placed in their correct positions, we trigger a “merging” effect — and the silhouette of the camel turns yellow. At that point, the only remaining action is to click the Restart button.
    Well, that was long, but that’s what you get when you pick the fun (albeit hard and lengthy) path. All as an ode to CSS-only magic!
    Breaking Boundaries: Building a Tangram Puzzle With (S)CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  16. by: Abhishek Prakash
    Thu, 12 Jun 2025 04:28:41 GMT

    It's FOSS is turning 13 this week!
    It was created on 14th June, 2012 as a personal notebook where I shared my Linux discoveries. I didn't know that it would become a force to be reckoned with, a place to look to for suggestions and advice on using Linux.
    In these 13 years, it's been viewed over 200 million times and has formed a community of hundreds of thousands of Linux lovers from all parts of the world, with the US, Germany, Russia, the UK, and India taking the top 5 spots.
    I would like to take this opportunity to express my gratitude to all of you for your continued support 🙏 We shall continue to grow together and help the growth of the Linux community and the open source software movement 💪
    As a token of appreciation, I would like to unveil the new It's FOSS Plus website. This portal organizes the existing resources from the main website into a course format that can be enjoyed by our paid members. Thank you for supporting us.
    Explore It's FOSS Plus
    To celebrate 13 years of It's FOSS, I have brought back the lifetime membership option with reduced pricing of $76 instead of the usual $99. If you ever wanted to support us with a Plus membership but didn't like the recurring subscription, this is your chance 😃
    Get It's FOSS Lifetime Membership
    💬 Let's see what else you get in this edition
    Ubuntu ditching Xorg. Linux Mint 20.x reaching EOL. Nano editor tips. Tower cases for your Raspberry Pi. And other Linux news, tips, and, of course, memes!
    📰 Linux and Open Source News
    The CrowPi 3 is now available on Kickstarter. The OpenInfra Foundation has a new home now. Canonical has decided to retire Bazaar from Launchpad. A new open source compiler is here to challenge LLVM. Langfuse, the popular LLM analytics platform, goes open. After Fedora, now Ubuntu opts for Wayland-only release. Linux Mint 20.x has reached end of life. Here's what you can do about it.
    Attention! Linux Mint 20 Has Reached Its End. It's time to upgrade! Linux Mint 20.x has reached end of life. (It's FOSS News, Sourav Rudra)
    🧠 What We're Thinking About
    Big Tech doesn't like self-hosting and media server content. Heck, even posting about it on social media results in post removal.
    Self-Hosting and Media Servers are Big Tech's Next Target. YouTube is actively silencing legitimate self-hosting content. They don't want you to own your data? (It's FOSS News, Sourav Rudra)
    🧮 Linux Tips, Tutorials and More
    Benchmark your Linux system to see how it's performing. App is a cross-platform package management tool that rules them all. Why is there no IPv5? Explore the not-so-known features of the magnificent Nano editor.
    10 Tips to Get More Out of Nano Editor. Learn and use these tips and tricks to utilize lesser known Nano editor features. (It's FOSS, Sreenath)
    Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
    If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
    Join It's FOSS Plus
    👷 Homelab and Maker's Corner
    Level up your Raspberry Pi 5 with a gaming tower case.
    Raspberry Pi 5 Tower Cases to Give it Desktop Gaming Rig Look. Pi 5 is a remarkable device and it deserves an awesome case. Transform your Raspberry Pi 5 into a miniature desktop tower PC with these cases. (It's FOSS, Abhishek Prakash)
    If that doesn't interest you, how about an open source accessible keyboard that you can build?
    ✨ Project Highlight
    Packet is a Quick Share client for Linux that facilitates wireless file transfers from Android devices.
    GitHub - nozwock/packet: Quick Share client for Linux. Contribute to nozwock/packet development by creating an account on GitHub. (GitHub, nozwock)
    📽️ Videos I am Creating for You
    Like the terminal customization video, I made another detailed one about transforming the looks of Linux Mint.
    Subscribe to It's FOSS YouTube Channel
    🧩 Quiz Time
    Arch users, can you beat the Pacman Command Quiz?
    Pacman Command Quiz. BTW, do you use Arch Linux? If yes, can you answer all these questions correctly? (It's FOSS, Abhishek Prakash)
    💡 Quick Handy Tip
    In the Dolphin file manager, you can open a folder while dragging a file to it. This is helpful if you want to drag and drop a file into a nested folder arrangement. To enable this, click on the Top-right Hamburger menu ⇾ Configure ⇾ Configure Dolphin.
    Here, go to the View section, select the General tab and toggle the Open folders during drag operations checkbox.
    Now, you can open a folder by dragging files and hovering them over it.
    🤣 Meme of the Week
    An unbreakable bond! 🫂
    🗓️ Tech Trivia
    On June 10, 1977, Apple began shipping the Apple II, a home computer that quickly became a hit, especially in schools, thanks to its user-friendly design and color graphics.
    🧑‍🤝‍🧑 FOSSverse Corner
    Pro FOSSer Neville is wondering whether ChatGPT has access to books.
    How does ChatGPT access books? I have not tried this, but there is a suggestion here that ChatGPT can reproduce material from a copyrighted book. Books were definitely included in their information intake, but I wonder how far they went. I bet they did not access older books that are only available in libraries or collections. If that is so, their information is biased toward modern material. There was a Google project many years ago to photocopy every book and make them freely available online. They were stopped by… (It's FOSS Community, nevj)
    ❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  17. by: Chris Coyier
    Mon, 09 Jun 2025 16:37:09 +0000

    I love weird design ideas. Probably because so much of what we need to do as web designers is, appropriately, somewhat serious. We want things to be simple, clear, professional, so that people understand them and in many cases pay for them. So when the constraints relax, so can we. It’s unlikely that Taylor’s homepage would “perform well” in any sort of UX testing, but who cares? It’s not impossible to use, it’s just unusual. And crucially, it’s fun and memorable, which is likely a leg up on the last “dashboard” you saw.
    It’s cool of Blackle Mori to have documented The Lost CSS Tricks of Cohost.org, a social network ultimately too cool for this world. I sort of suspect a lot of this trickery is available in advanced email clients too, where you definitely don’t have JavaScript, but do have access to more-modern-than-you’d-think HTML and CSS.
    And high on the transfixingly-weird scale is Amit Sheen's CSS Spotlight Effect. Don't write it off as a black page with a transparent circle moving with the mouse. I mean, it kinda is, but the filtering and scaling effects that come along for the ride are extremely cool. I actually just got to hang with Amit a bit at CSS Day in Amsterdam this past week. His talk about building logic gates in CSS was pretty wild, and the whole thing ended with him just showing off random amazing Pens of his on stage.
    Sometimes design can feel impressive because of the extreme constraints of where you're seeing it. I'm at an airport lounge right now where I've seen an exhibit of sculptures carved into the lead tips of pencils. It's that same kind of feeling I get when I see art happen in the terminal, a place usually not regarded for its beauty. Like seeing a daisy grow from the cracks of a busted up sidewalk.
    I like serious design as well. Certainly there is more money in it. I’m allowed to like them both, just like I enjoy both fine dining and fast food. I’ll just hit you with some quicker links though as I bet you’re tired of my going on.
    Chris Nager weighs in on Design Engineering from his experience with that title at Carta. “The most important skill design engineers possess is the ability to communicate with both designers and frontend engineers. They’re able to give feedback to both sides, and can act as translators between the two worlds through prototypes.” Emphasis mine, naturally. Lea Verou looks critically at a design change at GitHub in Minimalist Affordances: Making the right tradeoffs. Not only was it interesting, it showcases the power of blogging and making coherent points: GitHub noticed, talked with her, and improved the design. Grant Slatton on How to write a good design document. “Think of a design document like a proof in mathematics. The goal of a proof is to convince the reader that the theorem is true. The goal of a design document is to convince the reader the design is optimal given the situation.”
  18. by: Preethi
    Mon, 09 Jun 2025 12:58:37 +0000

    The HTML popover attribute transforms elements into top-layer elements that can be opened and closed with a button or JavaScript. Most popovers can be light-dismissed, closing when the user clicks or taps outside the popup. Currently, HTML popover lacks built-in auto-close functionality, but it's easy to add. Auto-closing popups are useful for user interfaces like banner notifications, such as the new-message alerts on phones.
    A picture demo is worth a thousand words, right? Click on the “Add to my bookmarks” button in the following example. It triggers a notification that dismisses itself after a set amount of time.
    CodePen Embed Fallback Let’s start with the popover
    The HTML popover attribute is remarkably trivial to use. Slap it on a div, specify the type of popover you need, and you’re done.
    <div popover="manual" id="pop">Bookmarked!</div> A manual popover simply means it cannot be light-dismissed by clicking outside the element. As a result, we have to hide, show, or toggle the popover’s visibility ourselves explicitly with either buttons or JavaScript. Let’s use a semantic HTML button.
    <button popovertarget="pop" popovertargetaction="show"> Add to my bookmarks </button> <div popover="manual" id="pop">Bookmarked!</div> The popovertarget and popovertargetaction attributes are the final two ingredients, where popovertarget links the button to the popover element and popovertargetaction ensures that the popover is shown when the button is clicked.
    Hiding the popover with a CSS transition
    OK, so the challenge is that we have a popover that is shown when a certain button is clicked, but it cannot be dismissed. The button is only wired up to show the popover, but it does not hide or toggle the popover (since we are not explicitly declaring it). We want the popover to show when the button is clicked, then dismiss itself after a certain amount of time.
    The HTML popover can’t be closed with CSS, but it can be hidden from the page. Adding animation to that creates a visual effect. In our example, we will hide the popover by eliminating its CSS height property. You’ll learn in a moment why we’re using height, and that there are other ways you can go about it.
    We can indeed select the popover attribute using an attribute selector:
    [popover] { height: 0; transition: height cubic-bezier(0.6, -0.28, 0.735, 0.045) .3s .6s; @starting-style { height: 1lh; } } When the popover is triggered by the button, its height value is the one declared in the @starting-style ruleset (1lh). After the transition-delay (which is .6s in the example), the height goes from 1lh to 0 in .3s, effectively hiding the popover.
    Once again, this is only hiding the popover, not closing it properly. That’s the next challenge and we’ll need JavaScript for that level of interaction.
    Closing the popover with JavaScript
    We can start by setting a variable that selects the popover:
    const POPOVER = document.querySelector('[popover]'); Next, we can establish a ResizeObserver that monitors the popover’s size:
    const POPOVER = document.querySelector('[popover]'); const OBSERVER = new ResizeObserver((entries) => { if(entries[0].contentBoxSize[0].blockSize == 0) OBSERVER.unobserve((POPOVER.hidePopover(), POPOVER)); }); And we can fire that off starting when the button to show the popover is clicked:
    const POPOVER = document.querySelector('[popover]'); const OBSERVER = new ResizeObserver((entries) => { if(entries[0].contentBoxSize[0].blockSize == 0) OBSERVER.unobserve((POPOVER.hidePopover(), POPOVER)); }); document.querySelector('button').onclick = () => OBSERVER.observe(POPOVER); The observer will know when the popover’s CSS height reaches zero at the end of the transition, and, at that point, the popover is closed with hidePopover(). From there, the observer is stopped with unobserve().
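    One subtle bit worth unpacking: OBSERVER.unobserve((POPOVER.hidePopover(), POPOVER)) leans on JavaScript's comma operator, which evaluates both expressions and hands the last one to unobserve(). Here is a minimal standalone demonstration of that behavior, with plain stand-ins for the DOM objects:

```javascript
// The comma operator evaluates left to right and yields the last operand.
// The side-effect call runs first; its neighbor is what gets passed along.
let closed = false;
const hide = () => { closed = true; };  // stands in for POPOVER.hidePopover()
const popover = { id: "pop" };          // stands in for the POPOVER element

const passedAlong = (hide(), popover);  // hide() runs, popover is the result
// At this point `closed` is true and `passedAlong` is the popover object.
```

    So the single line both closes the popover and tells the observer which element to stop watching.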
    In our example, height and ResizeObserver are used to auto-close the notification. You can try any other CSS property and JavaScript observer combination that might work with your preference. Learning about ResizeObserver and MutationObserver can help you find some options.
    Setting an HTML fallback
    When JavaScript is disabled in the browser, setting the popover to one of the light-dismissible types acts as a fallback. Keep the popover visible by overriding the style rules that hide it. The user can then dismiss it by clicking or tapping anywhere outside the element.
    If the popover needs to be light-dismissible only when JavaScript is disabled, then include that popover inside a <noscript> element before the manual popover. It’s the same process as before, where you override CSS styles as needed.
    <noscript> <div popover="auto" id="pop">Bookmarked!</div> </noscript> <div popover="manual" id="pop">Bookmarked!</div> <!-- goes where <head> element's descendants go --> <noscript> <style> [popover] { transition: none; height: 1lh; } </style> </noscript> When to use this method?
    Another way to implement all of this would be to use setTimeout() to create a delay before closing the popover in JavaScript when the button is clicked, then adding a class to the popover element to trigger the transition effect. That way, no observer is needed.
    With the method covered in this post, the delay can be set and triggered in CSS itself, thanks to @starting-style and transition-delay — no extra class required! If you prefer to implement the delay through CSS itself, then this method works best. The JavaScript will catch up to the change CSS makes at the time CSS defines, not the other way around.
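    For comparison, the setTimeout() variant described above might look something like this. The function name, the closing class, and the injectable timer are all my own sketch (the timer parameter exists only so the logic is easy to test), not code from the article:

```javascript
// Sketch of the setTimeout() alternative (names are hypothetical, not from the article).
// Adding the class triggers the CSS hide transition; the timer then properly
// closes the popover once the delay has elapsed.
function autoDismiss(popover, delayMs, schedule = setTimeout) {
  popover.classList.add("closing");               // kicks off the hide transition in CSS
  schedule(() => popover.hidePopover(), delayMs); // closes the popover after the delay
}
```

    With this approach the delay lives in JavaScript rather than in transition-delay, which is exactly the trade-off being weighed here.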
    Creating an Auto-Closing Notification With an HTML Popover originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Abhishek Prakash
    Sat, 07 Jun 2025 15:36:14 GMT

    The bare Raspberry Pi board has a different appeal but I prefer keeping my Pis under cover, in protective cases.
    Now, there are tons of interesting cases available. You can also build your own with a 3D printer.
    The official Raspberry Pi 5 case and other small box design cases are okay for protection and they don't cost much.
    Raspberry Pi 5 official case beside Pironman 5
    However, lately, I have been fascinated with tower cases. With the semi-transparent design and RGB lighting, they look dope. Like those customized gaming rigs people spend hundreds of dollars on.
    Thankfully, the Raspberry Pi is a small device so their tower cases are also not that expensive.
    Let me share a few such beautiful mini tower PC-like protective cases you can get for your Raspberry Pi 5 in this buyer's guide.
    Pironman 5: Full mini PC experience
    Pironman 5 is the ultimate case that got me into the tower PC fetish. It's my prized Pi accessory, beautifully sitting there on my secondary work desk.
    The Pironman 5 case transforms your Raspberry Pi 5 into a sleek aluminum mini-tower with advanced cooling, NVMe M.2 SSD support, customizable RGB lighting, dual standard HDMI ports, and a secure power switch.
    Yes, you read that right. It upgrades your Pi 5's mini HDMI ports into full HDMI ports and also allows you to use NVMe M.2 SSD. Do check the list of supported SSDs.
    Key Features:
    Adds a NVMe M.2 slot for SSD Tower cooler Dual RGB fans with dust filters 0.96" OLED display showing real-time system metrics Safe shutdown functionality and IR receiver Dual full-size HDMI ports and external GPIO access Active support and community US duties and EU VAT included in the pricing 💸 Price: $79.99
    Get Pironman 5 from official website
    Get Pironman 5 from Amazon
    Tom's Hardware found it could handle overclocked Pi 5s at 3GHz while maintaining excellent temperatures. I didn't do such extensive testing, but you can still read my full experience of Pironman 5 in the review I did earlier.
    Pironman 5 Review: Best Raspberry Pi 5 Case You Can Get. It's a cooling case with RGB lighting but it turns your Raspberry Pi into a mini PC. (It's FOSS, Abhishek Prakash)
    Pironman 5 Max: NAS/AI option
    Pironman 5 Max is a slight upgrade over the previous entry. What's different here? Well, it primarily adds an additional NVMe M.2 slot so that you can use it in a NAS RAID 0/1 setup or add a Hailo-8L AI accelerator.
    There might be a few small differences, like the OLED screen's tap-to-wake feature, but the main difference is that the Pironman 5 Max has an additional NVMe slot. Oh, and the black design gives it a more badass look.
    Key Features:
    Dual expandable NVMe M.2 slots with RAID 0/1 support AI accelerator compatibility (e.g., Hailo-8L) for advanced edge AI applications Smart OLED display with vibration wake-up and tap-to-wake functionality Advanced cooling with tower cooler and dual RGB fans Sleeker black aluminum chassis with semi-transparent panels Dual full-size HDMI ports and external GPIO access Active support and community Safe shutdown functionality and IR receiver 💸 Price: $94.99 (Early bird: $71.24 for first 500 units)
    Clearly, it is suitable for NAS builds, AI edge computing, and Home Assistant hubs.
    💡 And at the moment, the pre-order discount makes it cheaper than its predecessor. Grab it before the pricing goes back to normal.
    Get Pironman 5 Max from official website
    GeeekPi Tower Kit: Classic Pi plus M.2 NVMe
    The GeeekPi Tower kit comes in two variants: with and without the N07 M.2 NVMe SSD PCIe peripheral.
    The design is not a lot different from Pironman cases, at least from the outside. But here, you DO NOT get full HDMI slots. You access the usual Pi 5 ports. That makes it cheaper than Pironman cases.
    You have one Ice tower cooler with RGB lights to keep the Pi cool.
    Key Features:
    ICE Tower Cooler with LED fan for effective temperature control 0.96" OLED screen for displaying system status information Two acrylic panels offering clear view of internal components N07 M.2 NVMe support in the upgraded model RGB lighting that cycles through colors Regular Pi 5 ports, no full HDMI slots 💸 Price: $49 for the basic model
Get GeeekPi Tower Kit from Amazon
Yahboom CUBE Pi: Boxed Tower
    Ignore the quirky Yahboom brand name ;)
    The CUBE Pi features a boxy aluminum alloy construction with 270° panoramic view that clearly displays internal components.
There is only one fan with a blue light at the top, but it has ducts at the top and bottom for better ventilation. The top is covered by a magnetic mesh.
    You also get programmable RGB lighting to add the oomph factor. The mini-HDMI ports are converted into full HDMI, so that's a good thing.
    There is an OLED display to show you the system stats hidden inside the case instead of being on the exterior.
The case has enough space for adding an active radiator or an M.2 SSD, but you have to make those purchases separately.
    Key Features:
    Metal chassis with three highly transparent acrylic side plates offering 270° panoramic view Blue light cooling fan with dual cooling ducts Full HDMI ports Dust-proof magnetic nets to effectively block dust intrusion RGB lighting OLED display inside the case Scope for NVMe M.2 SSD slot (sold separately) 💸 Price: ~$49
Get CUBE PI from Amazon
ElectroCookie: The Minimalist Champion
    Sometimes less is more. ElectroCookie's aluminum mini tower combines a large heat dissipation structure with an RGB-lit PWM fan that automatically adjusts speed based on CPU temperature.
    There is scope for the NVMe SSD HAT but you have to purchase it separately. There is a separate model that comes with the HAT.
    And that's it. It's just a case and doesn't add extra ports or slots. There is no OLED display, either.
    However, the case comes in five different colors to choose from. Now that's something, right?
    Key Features:
Large active cooler with RGB PWM fan Compatible M.2 HAT NVMe SSD support (sold separately) Easy access to GPIO pins, SD card slot, and all ports Soft-touch power button Available in silver, black, red, blue and pink colors 💸 Price: ~$32 (M.2 HAT sold separately)
Get ElectroCookie from Amazon
Which one to choose?
    Pick Pironman 5 if you want the complete package with professional features and don't mind paying premium pricing.
Pick Pironman 5 Max if you need an extra storage slot for a NAS build or want to add AI acceleration to a mini PC setup, and don't mind the price tag.
Pick GeeekPi if you want a cool-looking mini tower PC focused on tower cooling rather than additional slots.
    Pick Yahboom if you don't necessarily want extra features but agree to pay a premium price for just a beautiful RGB lit tower case.
    Pick ElectroCookie if you want a tower case in your choice of color and don't need fancy features to keep the pricing in check.
All these cases transform your Pi 5 from an exposed board into a desktop-class computer. Well, a miniature desktop computer.
The cooling performance across all options is pretty good; you cannot run a Raspberry Pi as a desktop computer without proper thermal management.
    I am a fan of the Pironman cases. They are on the expensive side when compared to the rest but they also provide more features than the rest of the lot.
  20. by: Abhishek Prakash
    Fri, 06 Jun 2025 20:40:46 +0530

    YAML ↔ JSON Converter
    [An interactive YAML ↔ JSON converter tool was embedded here in the original post, with YAML-to-JSON and JSON-to-YAML controls, a clear button, and copy buttons for each pane.]
  21. by: Temani Afif
    Fri, 06 Jun 2025 13:52:42 +0000

    If you’re following along, this is the third post in a series about the new CSS shape() function. We’ve learned how to draw lines and arcs and, in this third part, I will introduce the curve command — the missing command you need to know to have full control over the shape() function. In reality, there are more commands, but you will rarely need them and you can easily learn about them later by checking the documentation.
    Better CSS Shapes Using shape()
    Lines and Arcs More on Arcs Curves (you are here!) The curve command
    This command adds a Bézier curve between two points by specifying control points. We can either have one control point and create a Quadratic curve or two control points and create a Cubic curve.
For many of you, that definition is simply unclear, or even useless! You can spend a few minutes reading about Bézier curves but is it really worth it? Probably not, unless your job is to create shapes all day and you have a solid background in geometry.
    We already have cubic-bezier() as an easing function for animations but, honestly, who really understands how it works? We either rely on a generator to get the code or we read a “boring” explanation that we forget in two minutes. (I have one right here by the way!)
    Don’t worry, this article will not be boring as I will mostly focus on practical examples and more precisely the use case of rounding the corners of irregular shapes. Here is a figure to illustrate a few examples of Bézier curves.
    The blue dots are the starting and ending points (let’s call them A and B) and the black dots are the control points. And notice how the curve is tangent to the dashed lines illustrated in red.
    In this article, I will consider only one control point. The syntax will follow this pattern:
clip-path: shape( from Xa Ya, curve to Xb Yb with Xc Yc );
arc command vs. curve command
We already saw in Part 1 and Part 2 that the arc command is useful for establishing rounded edges and corners, but it will not cover all the cases. That’s why you will need the curve command. The tricky part is to know when to use each one and the answer is “it depends.” There is no generic rule but my advice is to first see if it’s possible (and easy) using arc. If not, then you have to use curve.
    For some shapes, we can have the same result using both commands and this is a good starting point for us to understand the curve command and compare it with arc.
    Take the following example:
    CodePen Embed Fallback This is the code for the first shape:
    .shape { clip-path: shape(from 0 0, arc to 100% 100% of 100% cw, line to 0 100%) } And for the second one, we have this:
    .shape { clip-path: shape(from 0 0, curve to 100% 100% with 100% 0, line to 0 100%) } The arc command needs a radius (100% in this case), but the curve command needs a control point (which is 100% 0 in this example).
    Now, if you look closely, you will notice that both results aren’t exactly the same. The first shape using the arc command is creating a quarter of a circle, whereas the shape using the curve command is slightly different. If you place both of them above each other, you can clearly see the difference.
    CodePen Embed Fallback This is interesting because it means we can round some corners using either an arc or a curve, but with slightly different results. Which one is better, you ask? I would say it depends on your visual preference and the shape you are creating.
    In Part 1, we created rounded tabs using the arc command, but we can also create them with curve.
    CodePen Embed Fallback Can you spot the difference? It’s barely visible but it’s there.
    Notice how I am using the by directive the same way I am doing with arc, but this time we have the control point, which is also relative. This part can be confusing, so pay close attention to this next bit.
    Consider the following:
    shape(from Xa Ya, curve by Xb Yb with Xc Yc) It means that both (Xb,Yb) and (Xc,Yc) are relative coordinates calculated from the coordinate of the starting point. The equivalent of the above using a to directive is this:
shape(from Xa Ya, curve to (Xa + Xb) (Ya + Yb) with (Xa + Xc) (Ya + Yc)) We can change the reference of the control point by adding a from directive. We can either use start (the default value), end, or origin.
    shape(from Xa Ya, curve by Xb Yb with Xc Yc from end) The above means that the control point will now consider the ending point instead of the starting point. The result is similar to:
    shape(from Xa Ya, curve to (Xa + Xb) (Ya + Yb) with (Xa + Xb + Xc) (Ya + Yb + Yc)) If you use origin, the reference will be the origin, hence the coordinate of the control point becomes absolute instead of relative.
    The from directive may add some complexity to the code and the calculation, so don’t bother yourself with it. Simply know it exists in case you face it, but keep using the default value.
    I think it’s time for your first homework! Similar to the rounded tab exercise, try to create the inverted radius shape we covered in the Part 1 using curve instead of arc. Here are both versions for you to reference, but try to do it without peeking first, if you can.
    CodePen Embed Fallback Let’s draw more shapes!
Now that we have a good overview of the curve command, let’s consider more complex shapes where arc won’t help us round the corners and the only solution is to draw curves instead. Since each shape is unique, I will focus on the technique rather than the code itself.
    Slanted edge
    Let’s start with a rectangular shape with a slanted edge.
    Getting the shape on the left is quite simple, but the shape on the right is a bit tricky. We can round two corners with a simple border-radius, but for the slanted edge, we will use shape() and two curve commands.
    The first step is to write the code of the shape without rounded corners (the left one) which is pretty straightforward since we’re only working with the line command:
    .shape { --s: 90px; /* slant size */ clip-path: shape(from 0 0, line to calc(100% - var(--s)) 0, line to 100% 100%, line to 0 100% ); } Then we take each corner and try to round it by modifying the code. Here is a figure to illustrate the technique I am going to use for each corner.
    We define a distance, R, that controls the radius. From each side of the corner point, I move by that distance to create two new points, which are illustrated above in red. Then, I draw my curve using the new points as starting and ending points. The corner point will be the control point.
    The code becomes:
.shape { --s: 90px; /* slant size */ clip-path: shape(from 0 0, line to Xa Ya, curve to Xb Yb with calc(100% - var(--s)) 0, line to 100% 100%, line to 0 100% ); } Notice how the curve is using the coordinates of the corner point in the with directive, and we have two new points, A and B.
    Until now, the technique is not that complex. For each corner point, you replace the line command with line + curve commands where the curve command reuses the old point in its with directive.
    If we apply the same logic to the other corner, we get the following:
    .shape { --s: 90px; /* slant size */ clip-path: shape(from 0 0, line to Xa Ya, curve to Xb Yb with calc(100% - var(--s)) 0, line to Xc Yc, curve to Xd Yd with 100% 100%, line to 0 100% ); } Now we need to calculate the coordinates of the new points. And here comes the tricky part because it’s not always simple and it may require some complex calculation. Even if I detail this case, the logic won’t be the same for the other shapes we’re making, so I will skip the math part and give you the final code:
    .box { --h: 200px; /* element height */ --s: 90px; /* slant size */ --r: 20px; /* radius */ height: var(--h); border-radius: var(--r) 0 0 var(--r); --_a: atan2(var(--s), var(--h)); clip-path: shape(from 0 0, line to calc(100% - var(--s) - var(--r)) 0, curve by calc(var(--r) * (1 + sin(var(--_a)))) calc(var(--r) * cos(var(--_a))) with var(--r) 0, line to calc(100% - var(--r) * sin(var(--_a))) calc(100% - var(--r) * cos(var(--_a))), curve to calc(100% - var(--r)) 100% with 100% 100%, line to 0 100% ); } I know the code looks a bit scary, but the good news is that the code is also really easy to control using CSS variables. So, even if the math is not easy to grasp, you don’t have to deal with it. It should be noted that I need to know the height to be able to calculate the coordinates which means the solution isn’t perfect because the height is a fixed value.
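If you want to sanity-check the trigonometry without wading through the calc() expressions, the same math can be reproduced in a few lines of Python. The values h=200, s=90, r=20 mirror the CSS variables above, and math.atan2, sin, and cos play the roles of the CSS functions:

```python
import math

# Numeric check of the slanted-edge corner math used in the CSS above,
# with the same example values: height h=200px, slant s=90px, radius r=20px.
h, s, r = 200, 90, 20

a = math.atan2(s, h)  # --_a: atan2(var(--s), var(--h))

# First rounded corner: "curve by r*(1+sin(a)) r*cos(a)"
dx = r * (1 + math.sin(a))
dy = r * math.cos(a)

print(round(math.degrees(a), 2))   # slant angle in degrees
print(round(dx, 2), round(dy, 2))  # relative offset of the curve's end point
```

Running this shows concretely why the height must be known: the angle, and therefore every offset, depends on both --s and --h.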
    CodePen Embed Fallback Arrow-shaped box
    Here’s a similar shape, but this time we have three corners to round using the curve command.
    CodePen Embed Fallback The final code is still complex but I followed the same steps. I started with this:
    .shape { --s: 90px; clip-path: shape(from 0 0, /* corner #1 */ line to calc(100% - var(--s)) 0, /* corner #2 */ line to 100% 50%, /* corner #3 */ line to calc(100% - var(--s)) 100%, line to 0 100% ); } Then, I modified it into this:
.shape { --s: 90px; clip-path: shape(from 0 0, /* corner #1 */ line to Xa Ya, curve to Xb Yb with calc(100% - var(--s)) 0, /* corner #2 */ line to Xc Yc, curve to Xd Yd with 100% 50%, /* corner #3 */ line to Xe Ye, curve to Xf Yf with calc(100% - var(--s)) 100%, line to 0 100% ); } Lastly, I use a pen and paper to do all the calculations.
    You might think this technique is useless if you are not good with math and geometry, right? Not really, because you can still grab the code and use it easily since it’s optimized using CSS variables. Plus, you aren’t obligated to be super accurate and precise. You can rely on the above technique and use trial and error to approximate the coordinates. It will probably take you less time than doing all the math.
    Rounded polygons
    I know you are waiting for this, right? Thanks to the new shape() and the curve command, we can now have rounded polygon shapes!
    Here is my implementation using Sass where you can control the radius, number of sides and the rotation of the shape:
    CodePen Embed Fallback If we omit the complex geometry part, the loop is quite simple as it relies on the same technique with a line + curve per corner.
    $n: 9; /* number of sides*/ $r: .2; /* control the radius [0 1] */ $a: 15deg; /* control the rotation */ .poly { aspect-ratio: 1; $m: (); @for $i from 0 through ($n - 1) { $m: append($m, line to Xai Yai, comma); $m: append($m, curve to Xbi Ybi with Xci Yci, comma); } clip-path: shape(#{$m}); } Here is another implementation where I define the variables in CSS instead of Sass:
    CodePen Embed Fallback Having the variables in CSS is pretty handy especially if you want to have some animations. Here is an example of a cool hover effect applied to hexagon shapes:
    CodePen Embed Fallback I have also updated my online generator to add the radius parameter. If you are not familiar with Sass, you can easily copy the CSS code from there. You will also find the border-only and cut-out versions!
    Conclusion
    Are we done with the curve command? Probably not, but we have a good overview of its potential and all the complex shapes we can build with it. As for the code, I know that we have reached a level that is not easy for everyone. I could have extended the explanation by explicitly breaking down the math, but then this article would be overly complex and make it seem like using shape() is harder than it is.
    This said, most of the shapes I code are available within my online collection that I constantly update and optimize so you can easily grab the code of any shape!
    If you want a good follow-up to this article, I wrote an article for Frontend Masters where you can create blob shapes using the curve command.
    Better CSS Shapes Using shape()
    Lines and Arcs More on Arcs Curves (you are here!) Better CSS Shapes Using shape() — Part 3: Curves originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. by: Abhishek Prakash
    Fri, 06 Jun 2025 17:33:26 +0530

    Lesser known... that's the theme of this week's newsletter. Hope you like it 😄
    Here are the highlights of this edition :
    Lesser known mouse mode in Vim Lesser known dir command in Linux Lesser known special file permissions And your regular dose of better known memes, tips and news ;) 🚀 Level up your coding skills and build your own bots
    Harness the power of machine learning to create digital agents and more with hot courses like Learning LangChain, The Developer's Playbook for Large Language Model Security, Designing Large Language Model Applications, and more.
    Part of the purchase goes to Code for America! Check out the ebook bundle here.
Humble Tech Book Bundle: Machine Learning, AI, and Bots by O’Reilly 2025: Master machine learning with this comprehensive library of coding and programming courses from the pros at O’Reilly. (Humble Bundle)
     
      This post is for subscribers only
  23. By: Linux.com Editorial Staff
    Fri, 06 Jun 2025 10:53:28 +0000

    This article was contributed by Vedrana Vidulin, Head of Responsible AI Unit at Intellias (LinkedIn).
    As AI becomes central to smart devices, embedded systems, and edge computing, the ability to run language models locally — without relying on the cloud — is essential. Whether it’s for reducing latency, improving data privacy, or enabling offline functionality, local AI inference opens up new opportunities across industries. LiteLLM offers a practical solution for bringing large language models to resource-constrained devices, bridging the gap between powerful AI tools and the limitations of embedded hardware.
    Deploying LiteLLM, an open source LLM gateway, on embedded Linux unlocks the ability to run lightweight AI models in resource-constrained environments. Acting as a flexible proxy server, LiteLLM provides a unified API interface that accepts OpenAI-style requests — allowing you to interact with local or remote models using a consistent developer-friendly format. This guide walks you through everything from installation to performance tuning, helping you build a reliable, lightweight AI system on embedded Linux distribution.
    Setup checklist
    Before you start, here’s what’s required:
    A device running a Linux-based operating system (Debian) with sufficient computational resources to handle LLM operations.​ Python 3.7 or higher installed on the device.​ Access to the internet for downloading necessary packages and models. Step-by-Step Installation
    Step 1: Install LiteLLM
    First, we make sure the device is up to date and ready for installation. Then we install LiteLLM in a clean and safe environment.

    Update the package lists to ensure access to the latest software versions:
    sudo apt-get update Check if pip (Python Package Installer) is installed:
pip --version If not, install it using:
    sudo apt-get install python3-pip It is recommended to use a virtual environment. Check if venv is installed:
dpkg -s python3-venv | grep "Status: install ok installed" If venv is installed, the output will be "Status: install ok installed". If not installed:
    sudo apt install python3-venv -y Create and activate virtual environment:
python3 -m venv litellm_env
source litellm_env/bin/activate
Use pip to install LiteLLM along with its proxy server component:
pip install 'litellm[proxy]' Use LiteLLM within this environment. To deactivate the virtual environment, type deactivate.
    Step 2: Configure LiteLLM
    With LiteLLM installed, the next step is to define how it should operate. This is done through a configuration file, which specifies the language models to be used and the endpoints through which they’ll be served.

    Navigate to a suitable directory and create a configuration file named config.yaml:
mkdir ~/litellm_config
cd ~/litellm_config
nano config.yaml
In config.yaml, specify the models you intend to use. For example, to configure LiteLLM to interface with a model served by Ollama:
model_list:
  - model_name: codegemma
    litellm_params:
      model: ollama/codegemma:2b
      api_base: http://localhost:11434
This configuration maps the model name codegemma to the codegemma:2b model served by Ollama at http://localhost:11434.
    Step 3: Serve models with Ollama
    To run your AI model locally, you’ll use a tool called Ollama. It’s designed specifically for hosting large language models (LLMs) directly on your device — without relying on cloud services.

    To get started, install Ollama using the following command:
    curl -fsSL https://ollama.com/install.sh | sh This command downloads and runs the official installation script, which automatically starts the Ollama server.

    Once installed, you’re ready to load the AI model you want to use. In this example, we’ll pull a compact model called codegemma:2b.
    ollama pull codegemma:2b After the model is downloaded, the Ollama server will begin listening for requests — ready to generate responses from your local setup.
    Step 4: Launch the LiteLLM proxy server
    With both the model and configuration ready, it’s time to start the LiteLLM proxy server — the component that makes your local AI model accessible to applications.
    To launch the server, use the command below:
litellm --config ~/litellm_config/config.yaml The proxy server will initialize and expose endpoints defined in your configuration, allowing applications to interact with the specified models through a consistent API.
    Step 5: Test the deployment
    Let’s confirm if everything works as expected. Write a simple Python script that sends a test request to the LiteLLM server and save it as test_script.py:
import openai

client = openai.OpenAI(api_key="anything", base_url="http://localhost:4000")
response = client.chat.completions.create(
    model="codegemma",
    messages=[{"role": "user", "content": "Write me a Python function to calculate the nth Fibonacci number."}]
)
print(response)
Finally, run the script using this command:
    python3 ./test_script.py If the setup is correct, you’ll receive a response from the local model — confirming that LiteLLM is up and running.
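If you'd rather not install the openai package just for a smoke test, the same request can be sketched with Python's standard library. This assumes the proxy's OpenAI-compatible /chat/completions endpoint on port 4000, as configured above; the helper function is our own illustration, not part of LiteLLM:

```python
import json
import urllib.request

# A stdlib-only alternative to the openai client, assuming the same local
# endpoint (http://localhost:4000). The payload mirrors the OpenAI chat format.

def build_chat_request(model, prompt, base_url="http://localhost:4000"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer anything"},
    )

req = build_chat_request("codegemma", "Write a haiku about Linux.")
print(req.full_url)                   # http://localhost:4000/chat/completions
print(json.loads(req.data)["model"])  # codegemma
# To actually send it (requires the proxy to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```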

    Optimize LiteLLM performance on embedded devices

    To ensure fast, reliable performance on embedded systems, it’s important to choose the right language model and adjust LiteLLM’s settings to match your device’s limitations.
    Choosing the Right Language Model
    Not every AI model is built for devices with limited resources — some are just too heavy. That’s why it’s crucial to go with compact, optimized models designed specifically for such environments:​
    DistilBERT – a distilled version of BERT, retaining over 95% of BERT’s performance with 66 million parameters. It’s suitable for tasks like text classification, sentiment analysis, and named entity recognition. TinyBERT – with approximately 14.5 million parameters, TinyBERT is designed for mobile and edge devices, excelling in tasks such as question answering and sentiment classification. MobileBERT – optimized for on-device computations, MobileBERT has 25 million parameters and achieves nearly 99% of BERT’s accuracy. It’s ideal for mobile applications requiring real-time processing. TinyLlama – a compact model with approximately 1.1 billion parameters, TinyLlama balances capability and efficiency, making it suitable for real-time natural language processing in resource-constrained environments. MiniLM – a compact transformer model with approximately 33 million parameters, MiniLM is effective for tasks like semantic similarity and question answering, particularly in scenarios requiring rapid processing on limited hardware. Selecting a model that fits your setup isn’t just about saving space — it’s about ensuring smooth performance, fast responses, and efficient use of your device’s limited resources.
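As a rough sanity check before picking a model, you can estimate the weight footprint from the parameter counts listed above. The sketch below uses the common rule of thumb of about 2 bytes per parameter at fp16 precision; it ignores activations, KV cache, and runtime overhead, so treat the numbers as lower bounds:

```python
# Rough memory footprint estimates for the models listed above, assuming
# ~2 bytes per parameter (fp16). Parameter counts are from the article.

MODELS = {
    "DistilBERT": 66_000_000,
    "TinyBERT": 14_500_000,
    "MobileBERT": 25_000_000,
    "TinyLlama": 1_100_000_000,
    "MiniLM": 33_000_000,
}

def fp16_size_mb(params):
    """Approximate fp16 weight size in MiB: 2 bytes per parameter."""
    return params * 2 / (1024 ** 2)

for name, params in sorted(MODELS.items(), key=lambda kv: kv[1]):
    print(f"{name:>12}: ~{fp16_size_mb(params):.0f} MB")
```

The gap is striking: TinyBERT's weights fit in a few tens of megabytes, while TinyLlama needs roughly two gigabytes, which already rules it out on many embedded boards.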
    Configure settings for better performance
    A few small adjustments can go a long way when you’re working with limited hardware. By fine-tuning key LiteLLM settings, you can boost performance and keep things running smoothly.
    Restrict the number of tokens

    Shorter responses mean faster results. Limiting the maximum number of tokens in response can reduce memory and computational load. In LiteLLM, this can be achieved by setting the max_tokens parameter when making API calls. For example:​
import openai

client = openai.OpenAI(api_key="anything", base_url="http://localhost:4000")
response = client.chat.completions.create(
    model="codegemma",
    messages=[{"role": "user", "content": "Write me a Python function to calculate the nth Fibonacci number."}],
    max_tokens=500  # Limits the response to 500 tokens
)
print(response)
Adjusting max_tokens helps keep replies concise and reduces the load on your device.
    Managing simultaneous requests
    If too many requests hit the server at once, even the best-optimized model can get bogged down. That’s why LiteLLM includes an option to limit how many queries it processes at the same time. For instance, you can restrict LiteLLM to handle up to 5 concurrent requests by setting max_parallel_requests as follows:
litellm --config ~/litellm_config/config.yaml --num_requests 5 This setting helps distribute the load evenly and ensures your device stays stable — even during periods of high demand.
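The same throttling idea can be illustrated client-side. The sketch below is not LiteLLM's internal code; it simply shows how a semaphore caps the number of in-flight requests, with a short sleep standing in for the actual HTTP call:

```python
import asyncio

# Illustration of capping concurrent requests with a semaphore.
# asyncio.sleep stands in for the real HTTP call to the proxy.

MAX_PARALLEL = 5

async def call_model(sem, i, in_flight, peak):
    async with sem:                  # wait here if MAX_PARALLEL calls are running
        in_flight[0] += 1
        peak[0] = max(peak[0], in_flight[0])
        await asyncio.sleep(0.01)    # stand-in for the actual request
        in_flight[0] -= 1
        return i

async def main():
    sem = asyncio.Semaphore(MAX_PARALLEL)
    in_flight, peak = [0], [0]
    results = await asyncio.gather(
        *(call_model(sem, i, in_flight, peak) for i in range(20)))
    return len(results), peak[0]

handled, peak_seen = asyncio.run(main())
print(f"handled {handled} requests, peak concurrency: {peak_seen}")
```

Twenty requests are queued, but no more than five ever run at once; the rest simply wait their turn instead of overwhelming the device.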
    A Few More Smart Moves

    Before going live with your setup, here are two additional best practices worth considering:
    Secure your setup – implement appropriate security measures, such as firewalls and authentication mechanisms, to protect the server from unauthorized access. Monitor performance – use LiteLLM’s logging capabilities to track usage, performance, and potential issues. LiteLLM makes it possible to run language models locally, even on low-resource devices. By acting as a lightweight proxy with a unified API, it simplifies integration while reducing overhead. With the right setup and lightweight models, you can deploy responsive, efficient AI solutions on embedded systems — whether for a prototype or a production-ready solution.
    Summary 
    Running LLMs on embedded devices doesn’t necessarily require heavy infrastructure or proprietary services. LiteLLM offers a streamlined, open-source solution for deploying language models with ease, flexibility, and performance — even on devices with limited resources. With the right model and configuration, you can power real-time AI features at the edge, supporting everything from smart assistants to secure local processing.
    Join Our Community
    We’re continuously exploring the future of tech, innovation, and digital transformation at Intellias — and we invite you to be part of the journey.
    Visit our Intellias Blog and dive deeper into industry insights, trends, and expert perspectives. This article was written by Vedrana Vidulin, Head of Responsible AI Unit at Intellias. Connect with Vedrana through her LinkedIn page.  The post How to Deploy Lightweight Language Models on Embedded Linux with LiteLLM appeared first on Linux.com.
  24. by: Abhishek Prakash
    Fri, 06 Jun 2025 16:15:07 +0530

    Think of Vim tabs like browser tabs for your code editor - each tab holds one or more windows, letting you organize multiple files into logical workspaces.
    Unlike window splits that divide your screen, tabs stack contexts you can flip between instantly.
Three files opened in separate tabs in Vim
Let's see how you can use tabs in Vim.
    Essential Vim tab commands at a glance
    Here are the most common actions you can use while dealing with tabs in Vim.
Command | Action | Memory Hook
vim -p file1 file2 | Opens files in tabs | Vim in pages
:tabnew filename | Open file in new tab | Tab new
:tabedit filename | Open file for editing in new tab | Tab edit
gt | Next tab | Go to next
gT | Previous tab | Go to previous
{n}gt | Jump to tab number n | Go to specific
:tabfirst | Jump to first tab | Self-explanatory
:tablast | Jump to last tab | Self-explanatory
:tabclose | Close current tab | Self-explanatory
:tabonly | Close all other tabs | Keep only this
:tabs | List all tabs | Show tabs
Interesting, right? Let's see it in detail.
    Opening files in tabs in Vim
    Let's start by opening files in tabs first.
    Start Vim with multiple files opened in tabs
    Launch Vim with multiple tabs instantly:
vim -p file1.py file2.py file3.py
Open two existing files in tabs while starting Vim
How can you open just one file in a tab? Well... if it's just one file, what's the point of a tab, right?
📋 Vim tabs aren't file containers - they're viewport organizers. Each tab can hold multiple split windows, making tabs perfect for grouping related files by project, feature, or context. It's like having separate desks for different projects.
Open a file in a new tab in the current Vim session
When you are already inside Vim and want to open a file in a new tab, switch to normal mode by pressing the Esc key and use the command:
:tabnew filename
This will load the file in a new tab. If the file doesn't exist, it will create a new one.
    Filename is optional. If you don't provide it, it will open a new file without any name:
:tabnew
Opening existing or new files in tabs from an existing Vim session
💡 If you use tabedit instead of tabnew, it opens the file in Edit mode (insert mode) in the new tab.
Search for files and open them in tabs
Search the current directory for a filename matching the given pattern and open it in a new tab:
:tabf filename*
This only works if the search results in a single file. If more than one file matches, it will throw an error:
E77: Too many file names
💡 While you can open as many tabs as you want, only 10 tabs are shown by default. You can change this by setting tabpagemax in your vimrc to something like set tabpagemax=12
Navigating between tabs
    You can move between opened tabs using:
:tabn: for next tab
:tabp: for previous tab
Typing the commands could be tedious, so you can use the following key combinations in the normal mode:
gt: To go to the next tab
gT (i.e. press the g, Shift and t keys together): To go to the previous tab
If there are too many tabs opened, you can use:
:tabfirst: Jump to first tab
:tablast: Jump to last tab
💡 You can enable mouse mode in Vim, and that makes navigating between tabs easier with mouse clicks.
In many distributions these days, Vim is preconfigured to show the tab labels at the top. If that's not the case, add this to your vimrc:
set showtabline=2
You can list all the opened tabs with:
:tabs
💡 If you are particular about putting the opened tabs in a specific order, you can move the current tab to the Nth position with :tabm N. This tabm is short for tabmove. Note that Vim starts numbering at 0.
Closing tabs
How do you close a tab? If the tab has a single file opened, the regular save/exit Vim commands work.
    But it will be an issue if you have multiple split windows opened in a tab.
:tabclose: Close current tab
:tabonly: Only keep the current tab opened, close all others
Tab closing operation in Vim
💡 Accidentally closed a tab? :tabnew | u creates a new tab and undoes in one motion - your file returns.
Bulk tab operations
With the :tabdo command, you can run the same operation in all the tabs.
    For example, :tabdo w will save file changes in all tabs, :tabdo normal! gg=G auto-indents every file.
Similarly, :tabdo %s/oldvar/newvar/g executes a search-and-replace across every tab simultaneously. Parallel processing for repetitive changes.
    You get the idea. tabdo is the key here.
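As a sketch, a typical bulk-edit session could look like this (the file glob and variable names here are hypothetical, just for illustration):

```vim
" Load the files of interest into the argument list, one tab each
:args src/*.py
:tab all

" Rename a variable everywhere; the 'e' flag suppresses
" 'pattern not found' errors in files without a match
:tabdo %s/oldvar/newvar/ge

" Save the changes in every tab
:tabdo w
```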
💡 You can save your meticulously crafted tab layout: :mksession project.vim, then vim -S project.vim restores the exact tab layout.
Conclusion
    While it is good to enjoy tabs in Vim, don't create dozens of them - they become harder to navigate than helpful. Use buffers for file switching and tabs for context switching.
As you can see, with the tab feature, you get an inch closer to having an IDE-like experience in Vim.
  25. by: Daniel Schwarz
    Thu, 05 Jun 2025 13:45:56 +0000

    In many countries, web accessibility is a human right and the law, and there can be heavy fines for non-compliance. Naturally, this means that text and icons and such must have optimal color contrast in accordance with the benchmarks set by the Web Content Accessibility Guidelines (WCAG). Now, there are quite a few color contrast checkers out there (Figma even has one built-in now), but the upcoming contrast-color() function doesn’t check color contrast, it outright resolves to either black or white (whichever one contrasts the most with your chosen color).
    Right off the bat, you should know that we’ve sorta looked at this feature before. Back then, however, it was called color-contrast() instead of contrast-color() and had a much more convoluted way of going about things. It was only released in Safari Technology Preview 122 back in 2021, and that’s still the case at the time I’m writing this (now at version 220).
    You’d use it like this:
button {
  --background-color: darkblue;
  background-color: var(--background-color);
  color: contrast-color(var(--background-color));
}
Here, contrast-color() has determined that white contrasts with darkblue better than black does, which is why contrast-color() resolves to white. Pretty simple, really, but there are a few shortcomings, which include a lack of browser support (again, it's only in Safari Technology Preview at the moment).
    We can use contrast-color() conditionally, though:
@supports (color: contrast-color(red)) {
  /* contrast-color() supported */
}
@supports not (color: contrast-color(red)) {
  /* contrast-color() not supported */
}
The shortcomings of contrast-color()
    First, let me just say that improvements are already being considered, so here I’ll explain the shortcomings as well as any improvements that I’ve heard about.
    Undoubtedly, the number one shortcoming is that contrast-color() only resolves to either black or white. If you don’t want black or white, well… that sucks. However, the draft spec itself alludes to more control over the resolved color in the future.
    But there’s one other thing that’s surprisingly easy to overlook. What happens when neither black nor white is actually accessible against the chosen color? That’s right, it’s possible for contrast-color() to just… not provide a contrasting color. Ideally, I think we’d want contrast-color() to resolve to the closest accessible variant of a preferred color. Until then, contrast-color() isn’t really usable.
    Another shortcoming of contrast-color() is that it only accepts arguments of the <color> data type, so it’s just not going to work with images or anything like that. I did, however, manage to make it “work” with a gradient (basically, two instances of contrast-color() for two color stops/one linear gradient):
<button>
  <span>A button</span>
</button>
button {
  background: linear-gradient(to right, red, blue);
  span {
    background: linear-gradient(to right, contrast-color(red), contrast-color(blue));
    color: transparent;
    background-clip: text;
  }
}
But what about the font size? As you might know already, the criteria for color contrast depend on the font size, so how does that work? Well, at the moment it doesn't, but I think it's safe to assume that it'll eventually take the font-size into account when determining the resolved color. Which brings us to APCA.
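For reference, WCAG 2's size-dependent thresholds can be sketched as a small lookup (the function name and level argument are my own; the 24px and ~18.66px cutoffs come from WCAG's 18pt / 14pt-bold definition of "large text"):

```python
# WCAG 2's required contrast ratio depends on text size and weight.
# "Large text" is >= 18pt (24px), or >= 14pt (~18.66px) if bold.

def required_ratio(font_size_px, bold=False, level="AA"):
    """Minimum WCAG 2 contrast ratio for text of a given size/weight."""
    large = font_size_px >= 24 or (bold and font_size_px >= 18.66)
    if level == "AA":
        return 3.0 if large else 4.5
    return 4.5 if large else 7.0  # AAA

print(required_ratio(16))               # 4.5 (body text, AA)
print(required_ratio(24))               # 3.0 (large text, AA)
print(required_ratio(19, bold=True))    # 3.0 (bold large text, AA)
print(required_ratio(16, level="AAA"))  # 7.0
```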
    APCA (Accessible Perceptual Contrast Algorithm) is a new algorithm for measuring color contrast reliably. Andrew Somers, creator of APCA, conducted studies (alongside many other independent studies) and learned that 23% of WCAG 2 “Fails” are actually accessible. In addition, an insane 47% of “Passes” are inaccessible.
    Not only should APCA do a better job, but the APCA Readability Criterion (ARC) is far more nuanced, taking into account a much wider spectrum of font sizes and weights (hooray for me, as I’m very partial to 600 as a standard font weight). While the criterion is expectedly complex and unnecessarily confusing, the APCA Contrast Calculator does a decent-enough job of explaining how it all works visually, for now.
    contrast-color() doesn’t use APCA, but the draft spec does allude to offering more algorithms in the future. This wording is odd as it suggests that we’ll be able to choose between the APCA and WCAG algorithms. Then again, we have to remember that the laws of some countries will require WCAG 2 compliance while others require WCAG 3 compliance (when it becomes a standard).
    That’s right, we’re a long way off of APCA becoming a part of WCAG 3, let alone contrast-color(). In fact, it might not even be a part of it initially (or at all), and there are many more hurdles after that, but hopefully this sheds some light on the whole thing. For now, contrast-color() is using WCAG 2 only.
    Using contrast-color()
    Here’s a simple example (the same one from earlier) of a darkblue-colored button with accessibly-colored text chosen by contrast-color(). I’ve put this darkblue color into a CSS variable so that we can define it once but reference it as many times as is necessary (which is just twice for now).
button {
  --background-color: darkblue;
  background-color: var(--background-color);
  /* Resolves to white */
  color: contrast-color(var(--background-color));
}
And the same thing but with lightblue:
button {
  --background-color: lightblue;
  background-color: var(--background-color);
  /* Resolves to black */
  color: contrast-color(var(--background-color));
}
First of all, we can absolutely switch this up and use contrast-color() on the background-color property instead (or in-place of any <color>, in fact, like on a border):
button {
  --color: darkblue;
  color: var(--color);
  /* Resolves to white */
  background-color: contrast-color(var(--color));
}
Any valid <color> will work (named, HEX, RGB, HSL, HWB, etc.):
button {
  /* HSL this time */
  --background-color: hsl(0 0% 0%);
  background-color: var(--background-color);
  /* Resolves to white */
  color: contrast-color(var(--background-color));
}
Need to change the base color on the fly (e.g., on hover)? Easy:
button {
  --background-color: hsl(0 0% 0%);
  background-color: var(--background-color);
  /* Starts off white, becomes black on hover */
  color: contrast-color(var(--background-color));
  &:hover {
    /* 50% lighter */
    --background-color: hsl(0 0% 50%);
  }
}
Similarly, we could use contrast-color() with the light-dark() function to ensure accessible color contrast across light and dark modes:
:root {
  /* Dark mode if checked */
  &:has(input[type="checkbox"]:checked) {
    color-scheme: dark;
  }
  /* Light mode if not checked */
  &:not(:has(input[type="checkbox"]:checked)) {
    color-scheme: light;
  }
  body {
    /* Different background for each mode */
    background: light-dark(hsl(0 0% 50%), hsl(0 0% 0%));
    /* Different contrasted color for each mode */
    color: light-dark(contrast-color(hsl(0 0% 50%)), contrast-color(hsl(0 0% 0%)));
  }
}
The interesting thing about APCA is that it accounts for the discrepancies between light mode and dark mode contrast, whereas the current WCAG algorithm often evaluates dark mode contrast inaccurately. This one nuance of many is why we need not only a new color contrast algorithm but also the contrast-color() CSS function to handle all of these nuances (font size, font weight, etc.) for us.
    This doesn’t mean that contrast-color() has to ensure accessibility at the expense of our “designed” colors, though. Instead, we can use contrast-color() within the prefers-contrast: more media query only:
button {
  --background-color: hsl(270 100% 50%);
  background-color: var(--background-color);
  /* Almost white (WCAG AA: Fail) */
  color: hsl(270 100% 90%);
  @media (prefers-contrast: more) {
    /* Resolves to white (WCAG AA: Pass) */
    color: contrast-color(var(--background-color));
  }
}
Personally, I'm not keen on prefers-contrast: more as a progressive enhancement. Great color contrast benefits everyone, and besides, we can't be sure that those who need more contrast are actually set up for it. Perhaps they're using a brand new computer, or they just don't know how to customize accessibility settings.
    Closing thoughts
    So, contrast-color() obviously isn’t useful in its current form as it only resolves to black or white, which might not be accessible. However, if it were improved to resolve to a wider spectrum of colors, that’d be awesome. Even better, if it were to upgrade colors to a certain standard (e.g., WCAG AA) if they don’t already meet it, but let them be if they do. Sort of like a failsafe approach? This means that web browsers would have to take the font size, font weight, element, and so on into account.
    To throw another option out there, there’s also the approach that Windows takes for its High Contrast Mode. This mode triggers web browsers to overwrite colors using the forced-colors: active media query, which we can also use to make further customizations. However, this effect is quite extreme (even though we can opt out of it using the forced-colors-adjust CSS property and use our own colors instead) and macOS’s version of the feature doesn’t extend to the web.
I think that forced colors is an incredible idea as long as users can set their contrast preferences when they set up their computer or browser (the browser would be more enforceable), and there is a wider range of contrast options. And then if you, as a designer or developer, don't like the enforced colors, you have the option to meet accessibility standards so that they don't get enforced. In my opinion, this approach is the most user-friendly and the most developer-friendly (assuming that you care about accessibility). For complete flexibility, there could be a CSS property for opting out, or something. Just color contrast by default, but you can keep the colors you've chosen as long as they're accessible.
    What do you think? Is contrast-color() the right approach, or should the user agent bear some or all of the responsibility? Or perhaps you’re happy for color contrast to be considered manually?

    Exploring the CSS contrast-color() Function… a Second Time originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
