All Activity

  1. Yesterday
  2. by: Sourav Rudra Mon, 07 Jul 2025 13:13:17 GMT

Most file sharing today takes place through cloud services, but that's not always necessary. Local file transfers are still relevant, letting people send files directly between devices on the same network without involving a nosy middleman (a server, in this case). Instead of uploading confidential documents on WhatsApp and calling it a day, people could share them directly over their local network. This approach is faster, more private, and more reliable than relying on a third-party server. Remember, if you value your data, so does Meta. 🕵️‍♂️

That's where Packet comes in, offering an easy, secure way to transfer files directly between Linux and Android devices.

Wireless File Transfers via Quick Share

Packet is a lightweight, open source app for Linux that makes transferring files effortless. It leverages a partial implementation of Google's proprietary Quick Share protocol to enable easy wireless transfers over your local Wi-Fi network (via mDNS) without needing any cables or cloud servers. In addition to that, Packet supports device discovery via Bluetooth, making it easy to find nearby devices without manual setup. It can also be integrated with GNOME's Nautilus file manager (Files), allowing you to send files directly from your desktop with a simple right-click (requires additional configuration).

⭐ Key Features

- Quick Share Support
- Local, Private Transfers
- File Transfer Notifications
- Nautilus Integration for GNOME

How to Send Files Using Packet?

First things first, download and install the latest release of Packet from Flathub by running this command in your terminal:

flatpak install flathub io.github.nozwock.Packet

Once launched, sending files from your Linux computer to your Android smartphone is straightforward. Enable Bluetooth on your laptop/computer, then click on the big blue "Add Files" button and select the files you want to send. Adding new files for transfer to Packet is easy.
You can also drag and drop files directly into Packet for a quicker sharing experience. If you are looking to transfer a whole folder, it's best to first compress it into an archive like a TAR or ZIP, then send the archive through Packet.

Once you are done choosing files, choose your Android phone from the recipients list and verify the code shown on screen. File transfers from Linux to Android are lightning fast!

Though, before you do all that, ensure that Quick Share is set up on your smartphone to allow Nearby sharing with everyone. Additionally, take note of your device's name; this is how it will appear on your Linux machine when sending/receiving files. When you start the transfer, your smartphone will prompt you to "Accept" or "Decline" the Quick Share request. Only proceed if the PIN or code shown on both devices matches, to ensure a secure transfer.

Transferring files the other way around, from Android to Linux, is just as simple. On your Android device, select the files you want to share, tap the "Share" button, and choose "Quick Share". Your Linux computer should appear in the list if Packet is running and your device is discoverable.

You can change your Linux device's name from the "Preferences" menu in Packet (accessible via the hamburger menu). This is the name that will show up on your Android device when sharing files. Packet also shows handy system notifications for file transfers, so you don't miss a thing.

If you use the GNOME Files app (Nautilus), then there's an optional plugin that adds a "Send with Packet" option to the right-click menu, making it even easier to share files without opening the app manually.

Overall, Packet feels like a practical tool for local file sharing between devices. It works well across Android and Linux devices, and can do the same for two Linux devices on the same network.
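For the folder-archiving tip above, a quick sketch of how you might prepare a folder before sending it ("MyDocs" is a placeholder name, not from the article):

```shell
# "MyDocs" is a placeholder folder name; create a sample folder so the
# sketch runs end to end, then archive it before sending with Packet.
mkdir -p MyDocs
tar -czf MyDocs.tar.gz MyDocs
```

You would then select MyDocs.tar.gz in Packet as a single file.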
And, I must say, it gives tough competition to LocalSend, another file transfer tool that's an AirDrop alternative for Linux users!

Suggested Read 📖 LocalSend: An Open-Source AirDrop Alternative For Everyone! It's time to ditch platform-specific solutions like AirDrop! (It's FOSS News, Sourav Rudra)
  3. by: Temani Afif Mon, 07 Jul 2025 12:48:29 +0000

This is the fourth post in a series about the new CSS shape() function. So far, we've covered the most common commands you will use to draw various shapes, including lines, arcs, and curves. This time, I want to introduce you to two more commands: close and move. They're fairly simple in practice, and I think you will rarely use them, but they are incredibly useful when you need them.

Better CSS Shapes Using shape()
1. Lines and Arcs
2. More on Arcs
3. Curves
4. Close and Move (you are here!)

The close command

In the first part, we said that shape() always starts with a from command to define the first starting point. But what about the end? Shouldn't it end with a close command? That's true, but I never used one, because I either "close" the shape myself or rely on the browser to "close" it for me. Said like that, it's a bit confusing, so let's take a simple example to better understand:

clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%)

If you try this code, you will get a triangle shape, but if you look closely, you will notice that we have only two line commands whereas, to draw a triangle, we need a total of three lines. The last line between 100% 100% and 0 0 is implicit, and that's the part where the browser is closing the shape for me without having to explicitly use a close command. I could have written the following:

clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, close)

Or instead, defined the last line myself:

clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, line to 0 0)

But since the browser is able to close the shape alone, there is no need to add that last line command, nor do we need to explicitly add the close command. This might lead you to think that the close command is useless, right? It's true in most cases (after all, I have written three articles about shape() without using it), but it's important to know about it and what it does.
In some particular cases, it can be useful, especially when used in the middle of a shape.

CodePen Embed Fallback

In this example, my starting point is the center, and the logic of the shape is to draw four triangles. In the process, I need to get back to the center each time. So, instead of writing line to center, I simply write close and the browser automatically gets back to the initial point! Intuitively, we should write the following:

clip-path: shape(
  from center,
  line to 20% 0, hline by 60%, line to center, /* triangle 1 */
  line to 100% 20%, vline by 60%, line to center, /* triangle 2 */
  line to 20% 100%, hline by 60%, line to center, /* triangle 3 */
  line to 0 20%, vline by 60% /* triangle 4 */
)

But we can optimize it a little and simply do this instead:

clip-path: shape(
  from center,
  line to 20% 0, hline by 60%, close,
  line to 100% 20%, vline by 60%, close,
  line to 20% 100%, hline by 60%, close,
  line to 0 20%, vline by 60%
)

We write less code, sure, but another important thing is that if I update the center value with another position, the close command will follow that position.

CodePen Embed Fallback

Don't forget this trick. It can help you optimize a lot of shapes by letting you write less code.

The move command

Let's turn our attention to another shape() command you may rarely use but that can be incredibly useful in certain situations: the move command. Most times when we need to draw a shape, it's actually one continuous shape. But it may happen that our shape is composed of different parts not linked together. In these situations, the move command is what you need. Let's take an example, similar to the previous one, but this time the triangles don't touch each other:

CodePen Embed Fallback

Intuitively, we may think we need four separate elements, each with its own shape() definition. But that example is a single shape! The trick is to draw the first triangle, then "move" somewhere else to draw the next one, and so on.
The move command is similar to the from command, but we use it in the middle of shape().

clip-path: shape(
  from 50% 40%,
  line to 20% 0, hline by 60%, close, /* triangle 1 */
  move to 60% 50%, line to 100% 20%, vline by 60%, close, /* triangle 2 */
  move to 50% 60%, line to 20% 100%, hline by 60%, close, /* triangle 3 */
  move to 40% 50%, line to 0 20%, vline by 60% /* triangle 4 */
)

After drawing the first triangle, we "close" it and "move" to a new point to draw the next triangle. We can have multiple shapes in a single shape() definition. More generic code looks like this:

clip-path: shape(
  from X1 Y1, ..., close, /* shape 1 */
  move to X2 Y2, ..., close, /* shape 2 */
  ...
  move to Xn Yn, ... /* shape N */
)

The close commands before the move commands aren't mandatory, so the code can be simplified:

clip-path: shape(
  from X1 Y1, ..., /* shape 1 */
  move to X2 Y2, ..., /* shape 2 */
  ...
  move to Xn Yn, ... /* shape N */
)

CodePen Embed Fallback

Let's look at a few interesting use cases where this technique can be helpful.

Cut-out shapes

Previously, I shared a trick on how to create cut-out shapes using clip-path: polygon(). Starting from any kind of polygon, we can easily invert it to get its cut-out version:

CodePen Embed Fallback

We can do the same using shape(). The idea is to have an intersection between the main shape and a rectangle shape that fits the element boundaries. We need two shapes, hence the need for the move command. The code is as follows:

.shape {
  clip-path: shape(from ...., move to 0 0, hline to 100%, vline to 100%, hline to 0);
}

You start by creating your main shape, then you "move" to 0 0 and create the rectangle shape (remember, it's the first shape we created in the first part of this series). We can even go further and introduce a CSS variable to easily switch between the normal shape and the inverted one.

.shape {
  clip-path: shape(from ....
var(--i,));
}
.invert {
  --i: ,move to 0 0, hline to 100%, vline to 100%, hline to 0;
}

By default, --i is not defined, so var(--i,) will be empty and we get the main shape. If we define the variable with the rectangle shape, we get the inverted version. Here is an example using a rounded hexagon shape:

CodePen Embed Fallback

In reality, the code should be as follows:

.shape {
  clip-path: shape(evenodd from .... var(--i,));
}
.invert {
  --i: ,move to 0 0, hline to 100%, vline to 100%, hline to 0;
}

Notice the evenodd I am adding at the beginning of shape(). I won't bother you with a detailed explanation of what it does, but in some cases the inverted shape is not visible, and the fix is to add evenodd at the beginning. You can check the MDN page for more details.

Another improvement we can make is to add a variable that controls the space around the shape. Let's suppose you want to make the hexagon shape from the previous example smaller. It's tedious to update the code of the hexagon, but it's easier to update the code of the rectangle shape.

.shape {
  clip-path: shape(evenodd from ... var(--i,)) content-box;
}
.invert {
  --d: 20px;
  padding: var(--d);
  --i: ,move to calc(-1*var(--d)) calc(-1*var(--d)),
       hline to calc(100% + var(--d)),
       vline to calc(100% + var(--d)),
       hline to calc(-1*var(--d));
}

We first update the reference box of the shape to be content-box. Then we add some padding, which logically reduces the area of the shape since it no longer includes the padding (nor the border). The padding is excluded (invisible) by default, and here comes the trick: we update the rectangle shape to re-include the padding. That is why the --i variable is so verbose. It uses the value of the padding to extend the rectangle area and cover the whole element as if we didn't have content-box.

CodePen Embed Fallback

Not only can you easily invert any kind of shape, but you can also control the space around it!
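As a concrete sketch of the cut-out pattern described above (the triangle coordinates here are my own assumptions, not from the article's demos):

```css
/* Cut-out version of a triangle: draw the triangle, then "move" and
   draw the full-element rectangle; evenodd makes the overlap invert. */
.cutout-triangle {
  clip-path: shape(
    evenodd from 50% 0, line to 100% 100%, line to 0 100%, close,
    move to 0 0, hline to 100%, vline to 100%, hline to 0
  );
}
```

Apply the class to any element with a background to see everything except the triangle remain visible.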
Here is another demo using the CSS-Tricks logo to illustrate how easy the method is:

CodePen Embed Fallback

This exact same example is available in my SVG-to-CSS converter, providing you with the shape() code without having to do all of the math.

Repetitive shapes

Another interesting use case of the move command is when we need to repeat the same shape multiple times. Do you remember the difference between the by and the to directives? The by directive allows us to define relative coordinates considering the previous point. So, if we create our shape using only by, we can easily reuse the same code as many times as we want. Let's start with a simple example of a circle shape:

clip-path: shape(from X Y, arc by 0 -50px of 1%, arc by 0 50px of 1%)

Starting from X Y, I draw a first arc moving upward by 50px, then I get back to X Y with another arc using the same offset, but downward. If you are a bit lost with the syntax, try reviewing Part 1 to refresh your memory about the arc command. How I drew the shape is not important. What is important is that whatever the value of X Y is, I will always get the same circle, just in a different position. Do you see where I am going with this idea? If I want to add another circle, I simply repeat the same code with a different X Y.

clip-path: shape(
  from X1 Y1, arc by 0 -50px of 1%, arc by 0 50px of 1%,
  move to X2 Y2, arc by 0 -50px of 1%, arc by 0 50px of 1%
)

And since the code is the same, I can store the circle shape in a CSS variable and draw as many circles as I want:

.shape {
  --sh: , arc by 0 -50px of 1%, arc by 0 50px of 1%;
  clip-path: shape(
    from X1 Y1 var(--sh),
    move to X2 Y2 var(--sh),
    ...
    move to Xn Yn var(--sh)
  );
}

You don't want a circle? Easy: update the --sh variable with any shape you want. Here is an example with three different shapes:

CodePen Embed Fallback

And guess what?
You can invert the whole thing using the cut-out technique by adding the rectangle shape at the end: CodePen Embed Fallback This code is a perfect example of the shape() function’s power. We don’t have any code duplication and we can simply adjust the shape with CSS variables. This is something we are unable to achieve with the path() function because it doesn’t support variables. Conclusion That’s all for this fourth installment of our series on the CSS shape() function! We didn’t make any super complex shapes, but we learned how two simple commands can open a lot of possibilities of what can be done using shape(). Just for fun, here is one more demo recreating a classic three-dot loader using the last technique we covered. Notice how much further we could go, adding things like animation to the mix: CodePen Embed Fallback Better CSS Shapes Using shape() Lines and Arcs More on Arcs Curves Close and Move (you are here!) Better CSS Shapes Using shape() — Part 4: Close and Move originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  4. Last week
  5. by: Abhishek Prakash Sun, 06 Jul 2025 04:43:46 GMT

I was trying to update my CachyOS system in the usual Arch way when I encountered this 'failed to synchronize all databases' error.

sudo pacman -Syu
[sudo] password for abhishek:
:: Synchronizing package databases...
error: failed to synchronize all databases (unable to lock database)

The fix was rather simple. It worked effortlessly for me and I hope it does for you, too.

Handling the 'failed to synchronize all databases' error

Check that no other program is using the pacman command:

ps -aux | grep -i pacman

If you see a single line of output with grep --color=auto -i pacman at the end, it means that no program other than the grep command you just ran is using pacman. If you see some other programs, use their process IDs to kill them first, and then use this command to remove the lock from the database:

sudo rm /var/lib/pacman/db.lck

Once done, you can run the pacman update again to see if things are working smoothly or not. Here's a screenshot of the entire scenario on my CachyOS Linux:

That didn't work? Try this

In some rare cases, just removing the database lock might not fix the issue. What you could try is deleting the entire local database cache. The next pacman update will take longer as it will download plenty, but it may fix your issue.

sudo rm /var/lib/pacman/sync/*.*

Why you see this 'unable to lock database' error

For the curious few who would like to know why they encountered this failed to synchronize all databases (unable to lock database) error, let me explain. Pacman commands are just one way to install or update packages on an Arch-based system. There could be Pamac or some other tool like KDE Discover with their respective PackageKit plugins, or other instances of pacman running in another terminal. Two processes trying to modify the system package database at the same time could be problematic.
This is why the built-in safety mechanism in Arch locks the database by creating the file /var/lib/pacman/db.lck. This lets pacman know that some program is using the package database. Once that program finishes successfully, the lock file is deleted automatically.

In some cases, this lock file might not be deleted. For instance, when you turn off your system while the pacman command is still running in a terminal. This is what happened in my case. I ran the pacman -Syu command and it was waiting for my Y to start installing the updates. I got distracted and forced the system off. On the next boot, I encountered this error when I tried updating the system.

This is also the reason why you should check whether some other program might be using pacman underneath. Force-removing the lock file while an active program is using the database is not a good idea.

In some rare cases, the lock file removal alone won't fix the issue. You may have to delete the local database cache. This happens when the local package database is corrupted. This is what I mentioned in the earlier section.

Did it fix the issue for you?

Now that you know the root cause of the issue and the ways of fixing it, let me know if the fix I shared here worked for you or not. If it did, drop a quick "Thank You". That is a motivation booster. And if it didn't, I might try helping you further. The comment section is all yours.
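The check-then-remove advice above can be combined into a small guard; this is a minimal sketch (the messages are my own, not from the article):

```shell
#!/bin/sh
# Sketch: only suggest removing the pacman lock when no pacman process is running.
if pgrep -x pacman >/dev/null 2>&1; then
    echo "pacman is still running; wait for it to finish instead of removing the lock"
else
    echo "no pacman process found; it should be safe to run: sudo rm /var/lib/pacman/db.lck"
fi
```

Using pgrep -x avoids matching the grep process itself, which is the false positive the article warns about.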
  6. by: Abhishek Prakash Fri, 04 Jul 2025 17:30:52 +0530 Is it too 'AWKward' to use AWK in the age of AI? I don't think so. AWK is so underrated despite being so powerful for creating useful automation scripts. We have had a very good intro to AWK and now I am working on a series that covers the basics of AWK, just like our Bash series. Hopefully, you'll see it in the next newsletter. Stay tuned 😊
  7. by: Adnan Shabbir Fri, 04 Jul 2025 05:43:38 +0000

In this technologically rich era, businesses deploy servers in no time and manage hundreds of devices on the cloud. All this is possible with the assistance of automation engines like Ansible. Ansible is an automation engine that manages multiple remote hosts and can deploy applications, install packages, troubleshoot systems remotely, and perform network automation and configuration management, all at once or one by one. In today's guide, we'll elaborate on the steps to install, configure, and automate Linux in minutes. This guide is broadly divided into 2 categories:

Install and Configure Ansible → Practical demonstration of installing and configuring Ansible on the Control Node.
Ansible Playbooks | Automate Linux in Minutes → Creating Ansible playbooks and implementing them on the managed nodes.

Let's have a look at the brief outline:

Install and Configure Ansible | Control Node and Host Nodes
- Step 1: Install and Configure Ansible on the Control Node
- Step 2: Create an Inventory/Hosts File on the Control Node
- Step 3: Install and Configure SSH on the Host Nodes
- Step 4: Create an Ansible User for Remote Connections
- Step 5: Set up SSH Key | Generate and Copy
- Step 6: Test the Connection | Control Node to Host Nodes

Ansible Playbooks | Automate Linux in Minutes
- YAML Basics
- Step 1: Create an Ansible Playbook
- Step 2: Automate the Tasks

Conclusion

Install and Configure Ansible | Control Node and Host Nodes

As already discussed, Ansible is an automation engine with a control node and some managed nodes. In this section, we'll demonstrate how you can install and configure Ansible to work properly.
Prerequisites: Understanding the Basics | Control Node, Managed Nodes, Inventory File, Playbook

Before proceeding to real-time automation, let's go over the components we need to understand first:

- Control Node: The system where Ansible is installed. In this guide, the Ansible server is set up on openSUSE Linux.
- Managed Nodes: The servers that are managed by the Ansible control node.
- Inventory/Hosts File: The inventory file contains the list of host IPs that the control node will manage.
- Playbook: A playbook is an automated, YAML-based script that Ansible uses to perform automated tasks on the managed nodes.

Let's now start the initial configuration:

Step 1: Install and Configure Ansible on the Control Node

Let's set up Ansible on the control node, i.e., install Ansible on it:

sudo zypper install ansible

The command will automatically select the required essentials (Python and its associated dependencies, in particular). Here are the commands to install Ansible on other Linux distros:

sudo dnf install ansible
sudo apt install ansible
sudo pacman -S ansible

Let's check the installed version:

ansible --version

Step 2: Create an Inventory/Hosts File on the Control Node

The inventory file is located at "/etc/ansible/hosts" by default. However, if it is not available, we can create it manually:

sudo nano /etc/ansible/hosts

Here, [main] is a group representing a specific set of servers. Similarly, we can create multiple groups in the same pattern to access the servers and perform the required operations on a group as a whole.

Step 3: Install and Configure SSH on the Host Nodes

Ansible communicates with the host nodes via SSH. Now, we'll set up SSH on the host nodes (managed nodes). The process in this step is performed on all the managed nodes.
Let's first install SSH on the system:

sudo apt install openssh-server

If you have managed nodes other than Ubuntu/Debian, you can use one of the following commands, as per your Linux distribution, to install SSH:

sudo dnf install openssh-server
sudo zypper install openssh
sudo pacman -S openssh

Since we have only one "Control Node", for better security we first allow SSH through the firewall and then restrict it to the Control Node:

sudo ufw allow ssh

Note: If you have changed the default SSH port, then you have to specify that port number to open it.

Allow a Specific IP on the SSH Port: When configuring the firewall on the managed nodes, you can allow only a specific IP to interact with the managed node over SSH. For instance, the command below will only allow the IP "192.168.140.142" to interact over the SSH port:

sudo ufw allow from 192.168.140.142 to any port 22

Let's reload the firewall:

sudo ufw reload

Confirming the firewall status:

sudo ufw status

Step 4: Create an Ansible User for Remote Connections

Let's use the "adduser" command to create a new user for Ansible. The Control Node only communicates through the Ansible user:

sudo adduser <username>

Adding it to the sudo group:

sudo usermod -aG sudo <username>

To allow passwordless sudo for this user only, open the "/etc/sudoers" file and add a NOPASSWD rule for the user at the end of the file.

Step 5: Set up SSH Key | Generate and Copy

Let's generate the SSH keys on the control node:

ssh-keygen

Now, copy these keys to the remote hosts:

ssh-copy-id username@IP-address/hostname

Note: There are multiple ways of generating and copying SSH keys. Read our dedicated guide on "How to Set up SSH Keys" for a detailed overview of how SSH keys work.

Step 6: Test the Connection | Control Node to Host Nodes

Once every step has completed without errors, let's test the connection from the control node to the managed hosts. There are two ways to test the connection, i.e., one-to-one and one-to-many.
The Ansible command below uses its "ping" module to test the connection from the Control Node to one of the hosts, i.e., linuxhint:

ansible linuxhint -m ping -u <user-name>

Similarly, the following Ansible command pings all the hosts that the Control Node manages:

ansible all -m ping -u ansible_admin

A success status paves the way to proceed further.

Ansible Playbooks | Automate Linux in Minutes

An Ansible playbook is an automated script that runs on the managed nodes (either all of them or selected ones). Ansible playbooks follow YAML syntax, which must be followed strictly to avoid syntax errors. Let's first have a quick overview of the YAML syntax:

Prerequisites: Understanding the YAML Basics

YAML is the primary requirement for writing an Ansible playbook. Since it is a markup language, its syntax must be followed properly to have an error-free playbook and execution. The main components of YAML needed to get started with Ansible playbooks are:

- Indentation → Defines the hierarchy and the overall structure. Use only 2 spaces; don't use Tab.
- Key:Value Pairs → Define the settings/parameters/states that the tasks in the playbook use.
- Lists → In YAML, a list contains a series of items or actions. A list may stand alone or belong to a task.
- Variables → Just like in other scripting/programming languages, variables in YAML define dynamic values in a playbook for reusability.
- Dictionaries → Group related "key:value" pairs under a single key, often for module parameters.
- Strings → Represent text values such as task names and messages, with optional quotes, serving the same primary purpose as in other scripting/programming languages.

That's what you need to write Ansible playbooks.

Variable File | To Be Used in the Ansible Playbook

Here, we will be using a variable file, which the playbook uses for variable assignment.
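The vars.yml contents aren't reproduced in the text; here is a sketch consistent with the description that follows (the specific package names are my assumptions, not from the article):

```yaml
# vars.yml (sketch): one single package plus two package groups,
# matching the three variables described in the article.
package: vim

server_packages:
  - apache2
  - mysql-server

other_utils:
  - curl
  - git
  - htop
```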
There are three variables in this file, i.e., package contains only one package, and the other two variables, "server_packages" and "other_utils", contain groups of packages.

Step 1: Create an Ansible Playbook

Let's create a playbook file:

sudo nano /etc/ansible/testplay.yml

Here, the variables file named "vars.yml" is linked to this playbook. On our first run, we will use the first variable, named "package":

---
- hosts: all
  become: yes
  vars_files:
    - vars.yml
  tasks:
    - name: Install package
      apt:
        name: "{{ package }}"
        state: present

Here:

- "hosts: all" states that this playbook will be applied to all the hosts listed in the hosts/inventory file.
- "become: yes" elevates permissions, which is useful when running commands that require root privileges.
- "vars_files" pulls in the variable files.
- "tasks" contains the tasks to be carried out by this playbook. There is only one task here: it is named "Install package", uses the "apt" module, and takes its "name" value from the variable file.

Step 2: Automate the Tasks

Before implementing this playbook, we can do a dry run on all the servers to check that it would execute successfully. Here's the command to do so:

ansible-playbook /etc/ansible/testplay.yml -u ansible_admin --check

Let's run the newly created playbook with the created user:

ansible-playbook /etc/ansible/testplay.yml -u ansible_admin

Note: We can also provide the hosts/inventory file location here (if it is not at the default location, i.e., /etc/ansible/) using the "-i" option with the path of the inventory file. Similarly, we can use the other variable groups mentioned in the variable file as well.
For instance, the following playbook calls the "server_packages" variable and installs those packages as available:

---
- hosts: all
  become: yes
  vars_files:
    - vars.yml
  tasks:
    - name: Install package
      apt:
        name: "{{ server_packages }}"
        state: present

Here, "become: yes" is again used for root permissions, since the task requires root privileges. The task in this playbook simply uses a different variable from the variable file. Let's dry-run the playbook on the managed nodes using the command below:

ansible-playbook /etc/ansible/testplay.yml -u ansible_admin --check

All green states that the playbook will be implemented successfully. Remove the "--check" flag from the above command to implement the playbook. That's all for the main course of this article. Since Ansible is backed by a long list of commands, we have compiled a list of commands necessary for beginners to understand while using Ansible.

Bonus: Ansible 101 Commands

Ansible is an essential automation tool with a long list of its own commands to manage overall server operations. Here's a list of Ansible commands that would be useful for anyone using Ansible or aiming to use it in the future:

Command(s) | Purpose
ansible -i <inventory/host-file> all -m ping | Tests Ansible's connectivity with all the hosts in the inventory/hosts file.
ansible-playbook -i <inventory/host-file> <playbook> | Executes the <playbook> on the hosts/managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> --check | Simulates the playbook without making changes to the target systems/managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> --syntax-check | Checks the playbook's YAML syntax.
ansible -i <inventory/hosts-file> <group> -m command -a "<shell-command>" | Executes a specific shell command on the managed nodes.
ansible-playbook -i <inventory/hosts-file> <playbook> -v | Executes the playbook with verbose output. Use -vv for more detail.
ansible-inventory -i <inventory_file> --list | Displays all hosts/groups in the inventory file to verify the configuration.

Note: If the inventory/hosts file is at the default location (/etc/ansible/), we can skip the "-i" flag used in the above commands. For a complete demonstration of the Ansible CLI cheat sheet, please see the Ansible documentation's Ansible CLI Cheat Sheet.

Conclusion

To get started with Ansible, first install Ansible on one system (the Control Node), then install and configure SSH on the remote hosts (the Managed Nodes). Next, generate SSH keys on the Control Node and copy the key to the Managed Nodes. Once connectivity is confirmed, configure the inventory file and write the playbook. That's it: Ansible will be configured and ready to run. All these steps are practically demonstrated in this guide. Just go through it and let us know if you have any questions or anything that is difficult to understand. We would be happy to assist with Ansible's installation and configuration.
  8. by: Abhishek Prakash Thu, 03 Jul 2025 05:13:51 GMT

And we achieved the goal of 75 new lifetime members. Thank you for that 🙏🙏 I think I have activated it for everyone, even for members who didn't explicitly notify me after the payment. But if anyone is still left out, just send me an email. By the way, all the logged-in Plus members can download the 'Linux for DevOps' eBook from this page. I'll be adding a couple more ebooks (created and extended from existing content) for the Plus members.

💬 Let's see what else you get in this edition:

- Bcachefs running into trouble.
- A new Rust-based GPU driver.
- Google giving the Linux Foundation a gift.
- And other Linux news, tips, and, of course, memes!

📰 Linux and Open Source News

- digiKam 8.7 is here with many upgrades.
- Tyr is a new Rust-based driver for Arm Mali GPUs.
- Claudia is an open source GUI solution for Claude AI coding.
- Broadcom has been bullying enterprises with VMware audits.
- Google has donated the A2A protocol to the Linux Foundation.
- Murena Fairphone (Gen. 6) has been introduced with some decent specs.
- Warp 2.0 is here with AI agents, better terminal tools, and more.
- Cloudflare has released Orange Me2eets, an E2EE video calling solution.
- Bazzite was looking at a grim future. Luckily, the proposal to retire 32-bit support on Fedora has been dropped, for now.

🧠 What We're Thinking About

A new Linux kernel drama has unfolded; this time, it's Bcachefs.

New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash. Things have taken a bad turn for Bcachefs as Linux supremo Linus Torvalds is not happy with their objections. (It's FOSS News, Sourav Rudra)

When you are done with that, you can go through LibreOffice's technical dive into the ODF file format.
🧮 Linux Tips, Tutorials and MoreThere are some superb privacy-focused Notion alternatives out there.Learn a thing or two about monitoring CPU and GPU temperatures in your Linux system.Although commands like inxi are there, this GUI tool gives you an easy way to list the hardware configuration of your computer in Linux.Similarly, there are plenty of CLI tools for system monitoring, but you also have GUI-based task managers.Relive the nostalgia with these tools to get a retro vibe on Linux. Relive the Golden Era: 5 Tools to Get Retro Feel on LinuxGet retro vibe on Linux with these tools.It's FOSSAbhishek Prakash Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 👷 Homelab and Hardware CornerI have received the Pironman Max case for review and have assembled it too. I am looking forward to having a RAID setup for fun on it. I'll keep you posted on whether I make it or not 😄 Pironman 5-Max: The Best Raspberry Pi 5 Case Just Got UpgradedAnd the first 500 get a 25% pre-order discount offer. So hurry up with the purchase.It's FOSS NewsSourav Rudra✨ Project HighlightAnduinOS is in the spotlight lately, have you checked it out? A New Linux Distro Has Set Out To Look Like Windows 11: I Try AnduinOS!We take a brief look at AnduinOS, trying to mimic the Windows 11 look. Is it worth it?It's FOSS NewsSourav Rudra📽️ Videos I am Creating for YouSee a better top in action in the latest video. Subscribe to It's FOSS YouTube Channel🧩 Quiz TimeThis quiz will test your knowledge of Apt.
Apt Command QuizDebian or Ubuntu user? This is the apt quiz for you. Pun intended, of course :)It's FOSSAbhishek Prakash💡 Quick Handy TipThe Dolphin file manager offers you a selection mode. To activate it, press the Space bar. In this view, you can single click on a file/folder to select them. Here, you will notice that a quick access bar appears at the bottom when you select items, offering actions like Copy, Cut, Rename, Move to Trash, etc. 🤣 Meme of the Week🗓️ Tech TriviaThe IBM 650, introduced on July 2, 1953, was one of the first widely used computers, featuring a magnetic drum for storage and using punch cards for programming. With a memory capacity of 20,000 decimal digits, it became a workhorse for businesses and universities throughout the 1950s. 🧑‍🤝‍🧑 FOSSverse CornerCanonical is making some serious bank, and our FOSSers have noticed. Ubuntu Maker Canonical Generated Nearly $300M In Revenue Last YearHow do they do this sum, its not from the desktop free version, can only guess its server technologyIt's FOSS Communitycallpaul.eu (Paul)❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  9. by: Patrick Brosset Tue, 01 Jul 2025 12:42:38 +0000 Four years ago, I wrote an article titled Minding the “gap”, where I talked about the CSS gap property, where it applied, and how it worked with various CSS layouts. At the time, I described how easy it was to evenly space items out in a flex, grid, or multi-column layout, by using the gap property. But, I also said that styling the gap areas was much harder, and I shared a workaround. However, workarounds like using extra HTML elements, pseudo-elements, or borders to draw separator lines tend to come with drawbacks, especially those that impact your layout size, interfere with assistive technologies, or pollute your markup with style-only elements. Today, I’m writing again about layout gaps, but this time, to tell you all about a new and exciting CSS feature that’s going to change it all. What you previously had to use workarounds for, you’ll soon be able to do with just a few simple CSS properties that make it easy, yet also flexible, to display styled separators between your layout items. There’s already a specification draft for the feature you can peruse. At the time I’m writing this, it is available in Chrome and Edge 139 behind a flag. But I believe it won’t be long before we turn that flag on. I believe other browsers are also very receptive and engaged. Displaying decorative lines between items of a layout can make a big difference. When used well, these lines can bring more structure to your layout, and give your users more of a sense of how the different regions of a page are organized. Introducing CSS gap decorations If you’ve ever used a multi-column layout, such as by using the column-width property, then you might already be familiar with gap decorations. 
You can draw vertical lines between the columns of a multi-column layout by using the column-rule property: article { column-width: 20rem; column-rule: 1px solid black; } The CSS gap decorations feature builds on this to provide a more comprehensive system that makes it easy for you to draw separator lines in other layout types. For example, the draft specification says that the column-rule property also works in flexbox and grid layouts: .my-grid-container { display: grid; gap: 2px; column-rule: 2px solid pink; } No need for extra elements or borders! The key benefit here is that the decoration happens in CSS only, where it belongs, with no impact on your semantic markup. The CSS gap decorations feature also introduces a new row-rule property for drawing lines between rows: .my-flex-container { display: flex; gap: 10px; row-rule: 10px dotted limegreen; column-rule: 5px dashed coral; } But that's not all, because the above syntax also allows you to define multiple, comma-separated line style values, and use the same repeat() function that CSS grid already uses for row and column templates. This makes it possible to define different styles of line decorations in a single layout, and adapt to an unknown number of gaps: .my-container { display: grid; gap: 2px; row-rule: repeat(2, 1px dashed red), 2px solid black, repeat(auto, 1px dotted green); } Finally, the CSS gap decorations feature comes with additional CSS properties such as row-rule-break, column-rule-break, row-rule-outset, column-rule-outset, and gap-rule-paint-order, which make it possible to precisely customize the way the separators are drawn, whether they overlap, or where they start and end. And of course, all of this works across grid, flexbox, multi-column, and soon, masonry! Browser support Currently, the CSS gap decorations feature is only available in Chromium-based browsers.
The feature is still early in the making, and there’s time for you all to try it and to provide feedback that could help make the feature better and more adapted to your needs. If you want to try the feature today, make sure to use Edge or Chrome, starting with version 139 (or another Chromium-based browser that matches those versions), and enable the flag by following these steps: In Chrome or Edge, go to about://flags. In the search field, search for Enable Experimental Web Platform Features. Enable the flag. Restart the browser. To put this all into practice, let’s walk through an example together that uses the new CSS gap decorations feature. I also have a final example you can demo. Using CSS gap decorations Let’s build a simple web page to learn how to use the feature. Here is what we’ll be building: The above layout contains a header section with a title, a navigation menu with a few links, a main section with a series of short paragraphs of text and photos, and a footer. We’ll use the following markup: <body> <header> <h1>My personal site</h1> </header> <nav> <ul> <li><a href="#">Home</a></li> <li><a href="#">Blog</a></li> <li><a href="#">About</a></li> <li><a href="#">Links</a></li> </ul> </nav> <main> <article> <p>...</p> </article> <article> <img src="cat.jpg" alt="A sleeping cat."> </article> <article> <p>...</p> </article> <article> <img src="tree.jpg" alt="An old olive tree trunk."> </article> <article> <p>...</p> </article> <article> <p>...</p> </article> <article> <p>...</p> </article> <article> <img src="strings.jpg" alt="Snow flakes falling in a motion blur effect."> </article> </main> <footer> <p>© 2025 Patrick Brosset</p> </footer> </body> We’ll start by making the <body> element be a grid container. 
This way, we can space out the <header>, <nav>, <main>, and <footer> elements apart in one go by using the gap property: body { display: grid; gap: 4rem; margin: 2rem; } Let’s now use the CSS gap decorations feature to display horizontal separator lines within the gaps we just defined: body { display: grid; gap: 4rem; margin: 2rem; row-rule: 1rem solid #efefef; } This gives us the following result: We can do a bit better by making the first horizontal line look different than the other two lines, and simplify the row-rule value by using the repeat() syntax: body { display: grid; gap: 4rem; margin: 2rem; row-rule: 1rem solid #efefef, repeat(2, 2px solid #efefef); } With this new row-rule property value, we’re telling the browser to draw the first horizontal separator as a 1rem thick line, and the next two separators as 2px thick lines, which gives the following result: Now, let’s turn our attention to the navigation element and its list of links. We’ll use flexbox to display the links in a single row, where each link is separated from the other links by a gap and a vertical line: nav ul { display: flex; flex-wrap: wrap; gap: 2rem; column-rule: 2px dashed #666; } Very similarly to how we used the row-rule property before, we’re now using the column-rule property to display a dashed 2px thick separator between the links. Our example web page now looks like this: The last thing we need to change is the <main> element and its paragraphs and pictures. We’ll use flexbox again and display the various children in a wrapping row of varying width items: main { display: flex; flex-wrap: wrap; gap: 4rem; } main > * { flex: 1 1 200px; } main article:has(p) { flex-basis: 400px; } In the above code snippet, we’re setting the <main> element to be a wrapping flex container with a 4rem gap between items and flex lines. We’re also making the items have a flex basis size of 200px for pictures and 400px for text, and allowing them to grow and shrink as needed. 
This gives us the following result: Let's use CSS gap decorations to bring a little more structure to our layout by drawing 2px thick separator lines between the rows and columns of the layout: main { display: flex; flex-wrap: wrap; gap: 4rem; row-rule: 2px solid #999; column-rule: 2px solid #999; } This gives us the following result, which is very close to our expected design: The last detail we want to change is related to the vertical lines. We don't want them to span across the entire height of the flex lines but instead start and stop where the content starts and stops. With CSS gap decorations, we can easily achieve this by using the column-rule-outset property to fine-tune exactly where the decorations start and end, relative to the gap area: main { display: flex; flex-wrap: wrap; gap: 4rem; row-rule: 2px solid #999; column-rule: 2px solid #999; column-rule-outset: 0; } The column-rule-outset property above makes the vertical column separators span the height of each row, excluding the gap area, which is what we want: And with that, we're done with our example. Check out the live example and source code. Learn more There's more to the feature. I mentioned a couple more CSS properties earlier: gap-rule-paint-order, which lets you control which of the decorations, rows or columns, appears above the other; and row-rule-break / column-rule-break, which set the behavior of the decoration lines at intersections. In particular, whether they are made of multiple segments, which start and end at intersections, or single, continuous lines. Because the feature is new, there isn't MDN documentation about it yet. So to learn more, check out: CSS Gap Decorations Module Level 1 (First Public Working Draft) Microsoft Edge Explainer The Edge team has also created an interactive playground where you can use visual controls to configure gap decorations. And, of course, the reason this is all implemented behind a flag is to elicit feedback from developers like you!
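As a rough sketch of how those extra properties could combine with the ones used in the walkthrough, something like the following is possible; note that the keyword values shown here come from the draft specification and may well change before the feature ships:

```css
/* Sketch only: property values follow the CSS Gap Decorations draft spec
   and currently require the experimental flag in Chrome/Edge 139+. */
.my-grid-container {
  display: grid;
  gap: 1rem;
  row-rule: 2px solid #999;
  column-rule: 2px solid #999;

  /* paint row decorations on top where rows and columns cross */
  gap-rule-paint-order: row-over-column;

  /* break the decoration lines into segments at gap intersections */
  row-rule-break: intersection;
  column-rule-break: intersection;

  /* don't let column rules extend into the row gaps */
  column-rule-outset: 0;
}
```

Since the spec is a First Public Working Draft, the safest way to verify the exact names and values is the interactive playground and explainer linked above.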
If you have any feedback, questions, or bugs about this feature, I definitely encourage you to open a new ticket on the Chromium issue tracker. The Gap Strikes Back: Now Stylable originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  10. Earlier
  11. by: Chris Coyier Mon, 30 Jun 2025 17:04:57 +0000 Mr. Brad Frost, and his brother Ian, have a new course they are selling called Subatomic: The Complete Guide to Design Tokens. To be honest, I was a smidge skeptical. I know what a design token is. It's a variable of a color or font-family or something. I pretty much only work on websites, so that exposes itself as a --custom-property and I already know that using those to abstract common usage of colors and fonts is smart and helpful. Done. I get that people managing a whole fleet of sites (and apps running in who-knows-what technologies) need a fancier token system, but that ain't me. But then we had those fellas on ShopTalk Show and I've updated my thinking: you really do want to lean on the expertise of people who have done this time and time again at scale. (p.s. they also gave us a 20% discount code when they were on the show: SHOPTALKSHOWISAWESOME) Spoiler: they advocate for a three-tier system of custom properties. The first is just the raw ingredients. Colors, but you're just naming the color; sizes, but you're just naming the sizes. Then there is this middle tier where you are essentially crafting a theme from those raw ingredients. And this layer is the most important, as it gives you this perfect layer of abstraction where you're both not reaching into the raw ingredients and not being too overly specific, like naming individual parts of components. The third layer should be avoided as best as it can be, but if you absolutely need to get hyper specific, this is where you do it, while still keeping to the land of custom properties. This feels particularly smart to me, and I wish I had the benefit of the Frost Brothers' expertise on this before building some custom property systems I have built in the past.
I tend to have that first layer with just the raw ingredients, and then jump right to what they'd call the third tier, which leads to a real blowing up of how many custom properties are in use, to the point that it feels overly cumbersome and like the whole system isn't even helping that much. I'll definitely be thinking about the theming tier next time I have a good refactoring opportunity. Brad has also been steady on his global design system idea. I've posted my thoughts on this before, but I keep coming back to this one: I'm fascinated at seeing how decisions get made that keep this thing as "global" as possible. That absolutely must be done, otherwise it's just another design system, which I think falls short of the goal. I appreciated Brian's deep thoughts on it all as well, and I'm basically writing all this as an excuse to link to that. Would a global design system have any design to it at all? Maybe; maybe not. It makes me wonder if the era of "flat design" that it seems like we've been in for a decade or so was partially the result of design systems, where the simpler things look, the more practical it is to build all the "lego blocks" of a cohesive aesthetic. But it's likely design trends move on. Maybe flat is over. Are design systems ready for very fancy/complex looks? Definitely worth a read is Amelia's thoughts on "balancing the hard structure and soft flexibility" of UIs. Speaking of design tokens, designtokens.fyi is a nice site for defining all the terms that design systems/tokens people like to throw around. A site with word definitions can be awfully boring, so I appreciate the fun design here. I like the idea of calling a value system a "t-shirt" where you're actually defining, say, a set of padding options, but the options follow the mental model of t-shirt sizes. Sometimes you just need to look and see what other people are doing. In design, there always have been and always will be design galleries full of inspirational stuff.
But instead of linking to one of those, I'm going to link to the "Home of the internet's finest website headlines." I love a good headline, myself. I've seen far too many sites that do a terrible job of just saying what their point is.
  12. by: Zell Liew Mon, 30 Jun 2025 13:16:43 +0000 Adam Wathan has (very cleverly) built Tailwind with CSS Cascade Layers, making it extremely powerful for organizing styles by priority. @layer theme, base, components, utilities; @import 'tailwindcss/theme.css' layer(theme); @import 'tailwindcss/utilities.css' layer(utilities); The core of Tailwind is its utilities. This means you have two choices: The default choice The unorthodox choice The default choice The default choice is to follow Tailwind's recommended layer order: place components first, and Tailwind utilities last. So, if you're building components, you need to manually wrap your components with a @layer directive. Then, overwrite your component styles with Tailwind, putting Tailwind as the "most important layer". /* Write your components */ @layer components { .component { /* Your CSS here */ } } <!-- Override with Tailwind utilities --> <div class="component p-4"> ... </div> That's a decent way of doing things. But, being the bad boy I am, I don't take the default approach as the "best" one. Over a year of (major) experimentation with Tailwind and vanilla CSS, I've come across what I believe is a better solution. The Unorthodox Choice Before we go on, I have to tell you that I'm writing a course called Unorthodox Tailwind — this shows you everything I know about using Tailwind and CSS in synergistic ways, leveraging the strengths of each. Shameless plug aside, let's dive into the Unorthodox Choice now. In this case, the Unorthodox Choice is to write your styles in an unnamed layer — or any layer after utilities, really — so that your CSS naturally overwrites Tailwind utilities. Of these two, I prefer the unnamed layer option: /* Unnamed layer option */ @layer theme, base, components, utilities; /* Write your CSS normally here */ .component { /* ... */ } /* Named layer option */ /* Use whatever layer name you come up with.
I simply used css here because it made most sense for explaining things */ @layer theme, base, components, utilities, css; @layer css { .component { /* ... */ } } I have many reasons why I do this: I don't like to add unnecessary CSS layers because it makes code harder to write — more keystrokes, having to remember the specific layer I used it in, etc. I'm pretty skilled with ITCSS, selector specificity, and all the good-old-stuff you'd expect from a seasoned front-end developer, so writing CSS in a single layer doesn't scare me at all. I can do complex stuff that is hard or impossible to do in Tailwind (like theming and animations) in CSS. Your mileage may vary, of course. Now, if you have followed my reasoning so far, you would have noticed that I use Tailwind very differently: Tailwind utilities are not the "most important" layer. My unnamed CSS layer is the most important one. I do this so I can: Build prototypes with Tailwind (quickly, easily, especially with the tools I've created). Shift these properties to CSS when they get more complex — so I don't have to read messy utility-littered HTML that makes my heart sink. Not because utility HTML is bad, but because it takes lots of brain processing power to figure out what's happening. Finally, here's the nice thing about Tailwind being in a utility layer: I can always !important a utility to give it strength. <!-- !important the padding utility --> <div class="component !p-4"> ... </div> Whoa, hold on, wait a minute! Isn't this wrong, you might ask? Nope. The !important keyword has traditionally been used to override classes. In this case, we're leveraging the !important feature in CSS Layers to say the Tailwind utility is more important than any CSS in the unnamed layer. This is perfectly valid and is a built-in feature of CSS Layers. Besides, !important is so explicit (and used so little) that it makes sense for one-off quick-and-dirty adjustments (without creating a brand new selector for it).
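The cascade behavior behind this can be sketched in a few lines; the padding values and the hand-written .\!p-4 rule below are my own stand-ins for what Tailwind generates, not code from the article:

```css
/* Declare the layer order; Tailwind's utilities live in the last named layer. */
@layer theme, base, components, utilities;

/* Stand-in for Tailwind's p-4 utility. */
@layer utilities {
  .p-4 { padding: 1rem; }
}

/* Unlayered styles beat layered styles at normal importance,
   so this wins over .p-4 even though it comes "earlier" conceptually. */
.component { padding: 2rem; }

/* Stand-in for Tailwind's !p-4: for !important declarations the layer
   order inverts, and important beats normal, so the utility wins again. */
@layer utilities {
  .\!p-4 { padding: 1rem !important; }
}
```

This is exactly the "built-in feature" being leveraged: unlayered normal declarations outrank all layered normal declarations, while an important declaration in a layer outranks unlayered normal ones.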
Tailwind utilities are more powerful than they seem Tailwind utilities are not a 1:1 map between a class and a CSS property. Most built-in Tailwind utilities do look like this, which can give people the wrong impression. Tailwind utilities are more like convenient Sass mixins, which means we can build effective tools for layouts, theming, typography, and more, through them. You can find out about these thoughts inside Unorthodox Tailwind. Thanks for reading and I hope you're enjoying a new way of looking at (or using) Tailwind! Using CSS Cascade Layers With Tailwind Utilities originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  13. by: Abhishek Prakash Mon, 30 Jun 2025 07:16:37 GMT Retro techs are no longer stranger things. Just like vinyl records and vintage fashion, retro computing has captured our collective imagination, irrespective of the age group. I mean, there's something deeply satisfying about amber-on-black terminals and chunky pixel fonts that modern UIs can't replicate. The good thing here is that we Linux users are perfectly positioned to embrace this nostalgia wave. No, I am not talking about those ultra-lightweight distros that involuntarily give retro vibes of the late 90s and early 2000s. I am going to share a few interesting software picks that will help you get the retro feel on your modern Linux system. 1. Cool Retro TermI'll start with my favorite, which is also a functional tool. cool-retro-term is a terminal emulator which mimics the look and feel of the old cathode tube screens. That's just about it. You do not get any special abilities, just the good-old look. But here's the thing: you can use it like your regular terminal; it has vintage looks, but the modern features still work the same. There is more than one preset of colors and styles available. Cool Retro Term Installing Cool Retro Term You can install it on Ubuntu, Fedora, and Arch Linux using these commands, respectively: sudo apt install cool-retro-term #For Debian/Ubuntu sudo dnf install cool-retro-term #For Fedora sudo pacman -Syu cool-retro-term #For Arch based distrosCool Retro Term2. RSC8RSC8 is a CHIP-8 virtual machine/emulator written in Rust with a no_std core. It is yet another makeover for your terminal. So, if you'd like a retro terminal built with Rust, give this a try. RSC8 Chip-8 Virtual machine/emulator Install it using cargo: cargo install --locked --git https://github.com/jerryshell/rsc8To use rsc8, you'll have to download ROMs of your choice from this GitHub repo and then use the following command: rsc8_tui <your_rom.ch8>RSC83.
Retro PieRetroPie transforms your Raspberry Pi, ODroid C1/C2, or PC into a nostalgic gaming powerhouse. It leverages platforms like Raspbian, EmulationStation, RetroArch, and other innovative projects, allowing you to enjoy classic Arcade, home-console, and vintage PC games with minimal hassle. RetroPie Walkthrough Since there were multiple kinds of platforms/consoles in the past, there are different emulators for them. But that's only half of the story. You also need to download ROMs that contain the games of that platform. For example, if you want to play games that were available on Nintendo's NES console, you download the ROM with NES games and then use the NES emulator in RetroPie to load this ROM. It's like inserting a virtual disk. The problem here is that these ROMs are often deemed illegal to distribute, and hence the websites that host them are often removed. Playing Super Mario World in RetroPie Installing RetroPie Please ensure that you have git installed on your system, as you'll have to clone the Git repo here. cd git clone --depth=1 https://github.com/RetroPie/RetroPie-Setup.gitRun the setup script: cd RetroPie-Setup sudo ./retropie_setup.shFollow the onscreen instructions for a basic installation. RetroPie4. Hot Dog LinuxHot Dog Linux is an X11 window manager with Windows 3.1 Hot Dog Stand, Amiga Workbench, Atari ST GEM, Mac Classic, and Aqua UIs pre-installed. HOTDOG is an acronym that stands for Horrible Obsolete Typeface and Dreadful Onscreen Graphics.HOTDOG Linux It is built using Objective-C, uses bitmapped graphics, and targets low-DPI displays. There is no Unicode support here. Installing Hot Dog Linux: Download the ISO and install it in VirtualBox. Make sure 3D acceleration is enabled. HOTDOG Linux🚧It only worked in GNOME Boxes for me.5. DOSBox or DOSBox StagingDOSBox is free and open-source software that emulates the MS-DOS operating system from the previous century, letting you play classic DOS-era games.
Playing Doom2 in DOSBox DOSBox also emulates a 286/386 CPU in real mode and protected mode, Directory FileSystem/XMS/EMS, Tandy/Hercules/CGA/EGA/VGA/VESA graphics, and a SoundBlaster/Gravis Ultra Sound card for excellent sound compatibility with older games. DOSBoxInstalling DOSBox On Ubuntu and Arch, you can use the following commands, respectively: sudo apt install dosbox #For Ubuntu/Debian sudo pacman -Syu dosbox #For ArchDOSBox StagingFedora ships with DOSBox Staging, a modern continuation of DOSBox. DOSBox Staging is also available on Flathub. For Arch, it is in the AUR. And for Ubuntu and Mint, add the following PPA to get it installed: sudo add-apt-repository ppa:feignint/dosbox-staging sudo apt-get update sudo apt install dosbox-stagingDOSBox StagingWrapping UpLinux gives users a godly amount of customization options. Whether you want your desktop to look clean and contemporary, or you want to give it a retro look, there are certainly a few tools for that. Come to think of it, I should do a tutorial on how to give a retro makeover to your Linux distro, somewhat like the modern makeover video of Linux Mint. Subscribe to It's FOSS YouTube ChannelLinux makes it easy to bring the retro vibe back to life. Whether it's an old-school terminal, a full-blown vintage desktop, or classic games from the 90s, there's a tool for every kind of nostalgia. What is your favorite tool that we missed listing here? Let me know in the comments below.
  14. by: Neeraj Mishra Fri, 27 Jun 2025 17:48:27 +0000 MetaTrader 5 (MT5) is an advanced trading platform supporting a multitude of different assets like Forex, cryptos, commodities, and so on. It is incredibly popular among Japanese traders and regulated brokers. Many programmers in Japan are employing its MQL5 programming language to develop advanced trading algorithms, and below we explain how they are using MT5 for advanced algorithm development and trading. Nearly identical syntax to C/C++MT5 is free and offered by many reputable brokers that are regulated in Japan, making it a simple process to use the platform's advanced features. The main advantage of MQL5 is its similarity to the popular programming language C++, which makes it very easy to adopt and learn. The syntax of MQL5 is nearly identical, and data types are also familiar, like int, double, char, bool, and string. Functions are declared and used the same way, and MQL5 also supports classes, inheritance, and other OOP (Object-Oriented Programming) features, like C++. You can also pass parameters by reference using &. Integrated IDEMetaEditor, a native integrated development environment, is built into the MT5 trading platform. This is super flexible, as users can switch back and forth between MT5 and MetaEditor with a single mouse click or the F4 key. After programming in the MQL5 editor, users can switch back to the MT5 platform quickly and test their indicators or Expert Advisors (EAs) using the strategy tester. No need for APIsThe pricing data is also provided directly by the broker to your MT5 platform, and when testing an algorithm, there is a strategy tester plugin on MT5 to test EAs. There is no need for API calls and other plumbing, which makes the whole process not only comfortable but also very fast.
Built-in functionsInstead of writing your own low-level code, MQL5 comes with built-in functions like: OrderSend() to open trades, iMA() to call indicators like moving averages, and SymbolInfoDouble() to query symbol properties. All built-in indicators come with corresponding functions, which makes it very convenient to call them in your EA. Unlike on other platforms or in other programming languages, developers do not need to construct candle data themselves. Instead, just apply your EA to your preferred instrument, timeframe, and chart type, and it's ready to go. Push notifications and alertsMQL5 comes with several alert functions which enable notifications. Users can define where their EAs will send notifications when predefined events occur. SMS, email, and platform alerts are all supported to develop powerful trading algorithms. Faster trade execution and social featuresMT5 natively supports even faster trade execution, which is perfect for HFT and other algorithms that rely on fast execution for profits. Users can deploy their EA and be sure that it can open and close trades in milliseconds, which enables them to deploy a wide range of trading strategies, including arbitrage and scalping techniques. Trading signals and community integrationThe platform integrates copy trading and community features. Traders can easily use copy trading services, while developers can develop and sell their EAs to generate passive income. MT5 provides direct access to the MQL5.com community from the platform, which makes it very easy to use EAs from the official store. Developers can deploy their EAs in the store to generate revenue, which makes it very lucrative to learn to code robots. Large communityNew to MT5 and MQL5? Then there is good news for you. There is a plethora of educational content provided freely on MQL5 forums, where even beginners can learn MQL5 and MT5 programming. The built-in chat system enables communication with other users as well.
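As a rough illustrative sketch of how these built-ins fit together (this is my own minimal example, not code from the article; the 20-period setting and the alert condition are arbitrary choices):

```mql5
// Minimal Expert Advisor sketch: create a moving-average indicator handle
// once in OnInit(), read its latest value on each tick, and raise an alert.
int maHandle;

int OnInit()
{
   // 20-period simple MA on the current chart's symbol and timeframe
   maHandle = iMA(_Symbol, _Period, 20, 0, MODE_SMA, PRICE_CLOSE);
   return (maHandle != INVALID_HANDLE) ? INIT_SUCCEEDED : INIT_FAILED;
}

void OnTick()
{
   double ma[1];
   if(CopyBuffer(maHandle, 0, 0, 1, ma) != 1)
      return; // indicator data not ready yet

   // Query the current ask price directly from the broker's feed
   double ask = SymbolInfoDouble(_Symbol, SYMBOL_ASK);
   if(ask > ma[0])
      Alert(_Symbol, ": price is above the 20-period MA");
}
```

Note that in MQL5 (unlike MQL4), iMA() returns an indicator handle and values are read with CopyBuffer(); a real EA would add trade logic via OrderSend() or the CTrade class, plus throttling so the alert doesn't fire on every tick.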
Free EAs and custom indicatorsAnother big advantage for Japanese programmers is the availability of free EAs and custom indicators. There is an online store to upload and sell or rent your algorithms, which is very flexible. The platform also supports scripts and utilities, and by using the free code base, developers can quickly find complex functions and use them to enhance their algorithms and reduce the time needed for development. The bottom lineJapanese traders choose MT5 for algorithmic trading because it combines a familiar C/C++-like language (MQL5) with a fully integrated IDE (MetaEditor) and built-in data feeds. MQL5 provides an extensive library of functions and supports OOP. As a result, Japanese developers can develop complex Expert Advisors with ease. Real-time alerts and ultra-low-latency trade execution make MT5 perfect for scalping algorithms. Overall, MT5's MQL5 provides an all-in-one solution to develop, test, and sell or rent EAs quickly. The post How Japanese Programmers Are Leveraging MT5 for Advanced Algorithmic Trading appeared first on The Crazy Programmer.
  15. by: Juan Diego Rodríguez Fri, 27 Jun 2025 13:48:41 +0000 Blob, Blob, Blob. You hate them. You love them. Personally, as a design illiterate, I like to overuse them… a lot. And when you repeat the same process over and over again, it's only a question of how much you can optimize it, or in this case, what's the easiest way to create blobs in CSS? Turns out, as always, there are many approaches. To know if our following blobs are worth using, we'll need them to pass three tests: They can be made with just a single element (and preferably without pseudos). They can be easily designed (ideally through an online tool). We can use gradient backgrounds, borders, shadows, and other CSS effects on them. Without further ado, let's Blob, Blob, Blob right in. Just generate them online I know it's disenchanting to click on an article about making blobs in CSS just for me to say you can generate them outside CSS. Still, it's probably the most common way to create blobs on the web, so to be thorough, these are some online tools I've used before to create SVG blobs. Haikei. Probably the one I have used the most since, besides blobs, it can also generate lots of SVG backgrounds. Blobmaker. A dedicated tool for making blobs. It's apparently part of Haikei now, so you can use both. Lastly, almost all graphic programs let you hand-draw blobs and export them as SVGs. For example, this is one I generated just now. Keep it around, as it will come in handy later. <svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg"> <path fill="#FA4D56" d="M65.4,-37.9C79.2,-13.9,81,17,68.1,38C55.2,59.1,27.6,70.5,1.5,69.6C-24.6,68.8,-49.3,55.7,-56,38.2C-62.6,20.7,-51.3,-1.2,-39,-24.4C-26.7,-47.6,-13.3,-72,6.2,-75.6C25.8,-79.2,51.6,-62,65.4,-37.9Z" transform="translate(100 100)" /> </svg> Using border-radius While counterintuitive, we can use the border-radius property to create blobs.
This technique isn’t new by any means; it was first described by Nils Binder in 2018, but it is still fairly unknown. Even for those who use it, the inner workings are not entirely clear. To start, you may know that border-radius is a shorthand for each individual corner’s radius, going from the top-left corner clockwise. For example, we can set each corner’s border-radius to get a bubbly square shape: <div class="blob"></div> .blob { border-radius: 25% 50% 75% 100%; } CodePen Embed Fallback However, what border-radius does — and also why it’s called “radius” — is to shape each corner following a circle of the given radius. For example, if we set the top-left corner to 25%, it will follow a circle with a radius 25% the size of the shape. .blob { border-top-left-radius: 25%; } CodePen Embed Fallback What’s less known is that each corner property is itself a shorthand for its horizontal and vertical radii. Normally, you set both radii to the same value, getting a circle, but you can set them individually to create an ellipse. For example, the following sets the horizontal radius to 25% of the element’s width and the vertical to 50% of its height: .blob { border-top-left-radius: 25% 50%; } CodePen Embed Fallback We can now shape each corner like an ellipse, and it is the combination of all four ellipses that creates the illusion of a blob! Just take into consideration that to use the horizontal and vertical radii syntax through the border-radius property, we’ll need to separate the horizontal from the vertical radii using a forward slash (/). .blob { border-radius: /* horizontal */ 100% 30% 60% 70% / /* vertical */ 50% 40% 70% 70%; } CodePen Embed Fallback The syntax isn’t too intuitive, so designing a blob from scratch will likely be a headache. Luckily, Nils Binder made a tool exactly for that! Blobbing blobs together This hack is awesome. We aren’t supposed to use border-radius like that, but we still do. Admittedly, we are limited to boring blobs. 
Due to the nature of border-radius, no matter how hard we try, we will only get convex shapes. Just going off border-radius, we can try to minimize it a little by sticking more than one blob together: CodePen Embed Fallback However, I don’t want to spend too much time on this technique since it is too impractical to be worth it. To name a few drawbacks: We are using more than one element or, at the very least, an extra pseudo-element. Ideally, we want to keep it to one element. We don’t have a tool to prototype our blobby amalgamations, so making one is a process of trial and error. We can’t use borders, gradients, or box shadows since they would reveal the element’s outlines. Multiple backgrounds and SVG filters This one is an improvement on the Gooey Effect, described here by Lucas Bebber, although I don’t know who first came up with it. In the original effect, several elements can be morphed together like drops of liquid sticking to and flowing out of each other: CodePen Embed Fallback It works by first blurring nearby shapes, creating some connected shadows. Then we crank up the contrast, forcing the blur out and smoothly connecting them in the process. Take, for example, this demo by Chris Coyier (it’s from 2014, so more than 10 years ago!): CodePen Embed Fallback If you look at the code, you’ll notice Chris uses the filter property along with the blur() and contrast() functions, which I’ve also seen in other blob demos. To be specific, it applies blur() on each individual circle and then contrast() on the parent element. 
So, if we have the following HTML: <div class="blob"> <div class="subblob"></div> <div class="subblob"></div> <div class="subblob"></div> </div> …we would need to apply filters and background colors as such: .blob { filter: contrast(50); background: white; /* Solid colors are necessary */ } .subblob { filter: blur(15px); background: black; /* Solid colors are necessary */ } However, there is a good reason why those demos stick to white shapes and black backgrounds (or vice versa), since things get unpredictable once colors aren’t contrast-y enough. See it for yourself in the following demo by changing the color. Just be wary: shades get ugly. CodePen Embed Fallback To solve this, we will use an SVG filter instead. I don’t want to get too technical on SVG (if you want to, read Lucas’ post!). In a nutshell, we can apply blurring and contrast filters using SVGs, but now we can also pick which color channel we apply the contrast to, unlike the normal contrast(), which modifies all colors. Since we want to leave the color channels (R, G, and B) untouched, we will only crank the contrast up for the alpha channel. That translates to the next SVG filter, which can be embedded in the HTML: <svg xmlns="http://www.w3.org/2000/svg" version="1.1" style="position: absolute;"> <defs> <filter id="blob"> <feGaussianBlur in="SourceGraphic" stdDeviation="12" result="blur" /> <feColorMatrix in="blur" mode="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -6" result="goo" /> <feBlend in="SourceGraphic" in2="goo" /> </filter> </defs> </svg> To apply it, we will again use filter, but this time we’ll set it to url("#blob") so that it pulls the SVG filter from the HTML. .blob { filter: url("#blob"); } And now we can even use it with gradient backgrounds! CodePen Embed Fallback That being said, this approach comes with two small, but important, changes to common CSS filters: The filter is applied to the parent element, not the individual shapes. 
The parent element must be transparent (which is a huge advantage). To change the background color, we can instead change the body or other ancestors’ background, and it will work with no issues. What’s left is to place the .subblob elements together such that they make a blobby enough shape, then apply the SVG filters to morph them: CodePen Embed Fallback Making it one element This works well, but it has a similar issue to the blob we made by morphing several border-radius instances: too many elements for a simple blob. Luckily, we can take advantage of the background property to create multiple shapes and morph them together using SVG filters, all in a single element. Since we are keeping it to one element, we will go back to just one empty .blob div: <div class="blob"></div> To recap, the background shorthand can set all background properties and also set multiple backgrounds at once. Of all the properties, we only care about the background-image, background-position and background-size. First, we will use background-image along with radial-gradient() to create a circle inside the element: .blob { background: radial-gradient(farthest-side, var(--blob-color) 100%, #0000); background-repeat: no-repeat; /* Important! */ } CodePen Embed Fallback Here is what each parameter does: farthest-side: Sizes the gradient’s ending shape so it reaches the side of the element’s box farthest from its center. This way, it is kept as a circle. var(--blob-color) 100%: Fills the background shape from 0 to 100% with the same color, so it ends up as a solid color. #0000: After the shape is done, it makes a hard stop to transparency, so the color ends. The next part is moving and resizing the circle using the background-position and background-size properties. Luckily, both can be set on background after the gradient, separated from each other by a forward slash (/). .blob { background: radial-gradient(...) 20% 30% / 30% 40%; background-repeat: no-repeat; /* Important! 
*/ } CodePen Embed Fallback The first pair of percentages sets the shape’s horizontal and vertical position (taking as a reference the top-left corner), while the second pair sets the shape’s width and height (taking as a reference the element’s size). As I mentioned, we can stack up different backgrounds together, which means we can create as many circles/ellipses as we want! For example, we can create three ellipses on the same element: .blob { background: radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 20% 30% / 30% 40%, radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 80% 50% / 40% 60%, radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 50% 70% / 50% 50%; background-repeat: no-repeat; } What’s even better is that SVG filters don’t care whether shapes are made of elements or backgrounds, so we can also morph them together using the same url(#blob) filter! CodePen Embed Fallback While this method may be a little too much for blobs, it unlocks squishing, stretching, dividing, and merging blobs in seamless animations. Again, all these tricks are awesome, but not enough for what we want! We accomplished reducing the blob to a single element, but we still can’t use gradients, borders, or shadows on them, and they are also tedious to design and model. That brings us to the ultimate blob approach… Using the shape() function Fortunately, there is a new way to make blobs that just landed in CSS: the shape() function! I’ll explain shape()‘s syntax briefly, but for an in-depth explanation, you’ll want to check out this explainer from the CSS-Tricks Almanac, Temani Afif‘s three-part series on the shape() function, and his recent article about blobs. First off, the CSS shape() function is used alongside the clip-path property to cut elements into any shape we want. More specifically, it uses a verbal version of SVG’s path syntax. 
The syntax has lots of commands for lots of types of lines, but when blobbing with shape(), we’ll define curves using the curve command: .blob { clip-path: shape( from X0 Y0, curve to X1 Y1 with Xc1 Yc1, curve to X2 Y2 with Xc21 Yc21 / Xc22 Yc22 /* ... */ ); } Let’s break down each parameter: X0 Y0 defines the starting point of the shape. curve starts the curve where X1 Y1 is the next point of the shape, while Xc1 Yc1 defines a control point used in Bézier curves. The next parameter is similar, but we used Xc21 Yc21 / Xc22 Yc22 instead to define two control points on the Bézier curve. I honestly don’t understand Bézier curves and control points completely, but luckily, we don’t need them to use shape() and blobs! Again, shape() uses a verbal version of SVG’s path syntax, so it can draw any shape an SVG can, which means that we can translate the SVG blobs we generated earlier… and CSS-ify them. To do so, we’ll grab the d attribute (which defines the path) from our SVG and paste it into Temani’s SVG to shape() generator. This is the exact code the tool generated for me: .blob { aspect-ratio: 0.925; /* Generated too! */ clip-path: shape( from 91.52% 26.2%, curve to 93.52% 78.28% with 101.76% 42.67%/103.09% 63.87%, curve to 44.11% 99.97% with 83.95% 92.76%/63.47% 100.58%, curve to 1.45% 78.42% with 24.74% 99.42%/6.42% 90.43%, curve to 14.06% 35.46% with -3.45% 66.41%/4.93% 51.38%, curve to 47.59% 0.33% with 23.18% 19.54%/33.13% 2.8%, curve to 91.52% 26.2% with 62.14% -2.14%/81.28% 9.66% ); } As you might have guessed, it returns our beautiful blob: CodePen Embed Fallback Let’s check if it passes our requirements: Yes, they can be made of a single element. Yes, they can also be created in a generator and then translated into CSS. Yes, we can use gradient backgrounds, but due to the nature of clip-path(), borders and shadows get cut out. Two out of three? Maybe two and a half of three? That’s a big improvement over the other approaches, even if it’s not perfect. 
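One hedged workaround for the clipped shadows is worth noting: since clip-path removes anything painted outside the path (including box-shadow), a drop shadow can instead be applied with filter: drop-shadow() on a wrapper element, because the wrapper’s filter is computed from the already-clipped child. This is a minimal sketch under my own assumptions: the .blob-wrapper class and the gradient colors are hypothetical, and the clip-path reuses the generated blob from above.

```css
/* Hypothetical markup: <div class="blob-wrapper"><div class="blob"></div></div>
   box-shadow on .blob would be clipped away, but drop-shadow() on the
   wrapper traces the clipped outline of its child. */
.blob-wrapper {
  filter: drop-shadow(0 0.5rem 0.75rem rgb(0 0 0 / 0.4));
}

.blob {
  aspect-ratio: 0.925;
  background: linear-gradient(45deg, #fa4d56, #b81e8a); /* gradients still work */
  clip-path: shape(
    from 91.52% 26.2%,
    curve to 93.52% 78.28% with 101.76% 42.67% / 103.09% 63.87%,
    curve to 44.11% 99.97% with 83.95% 92.76% / 63.47% 100.58%,
    curve to 1.45% 78.42% with 24.74% 99.42% / 6.42% 90.43%,
    curve to 14.06% 35.46% with -3.45% 66.41% / 4.93% 51.38%,
    curve to 47.59% 0.33% with 23.18% 19.54% / 33.13% 2.8%,
    curve to 91.52% 26.2% with 62.14% -2.14% / 81.28% 9.66%
  );
}
```

The trade-off is the extra wrapper element, so this doesn’t rescue the single-element requirement; it only restores shadows when you can afford the wrapper.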
Conclusion So, alas, we failed to find what I believe would be the perfect CSS approach to blobs. I am, however, amazed that something as trivial as designing blobs can teach us so many tricks and new CSS features, many of which I didn’t know myself. CSS Blob Recipes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  16. by: Abhishek Prakash Fri, 27 Jun 2025 18:29:17 +0530 We have converted our text-based Docker course into an eBook: Learn Docker Quickly. It is available for free to LHB Pro members along with all the other eBooks in the resources section. If you are not a Pro member, you can either opt for the Pro membership or purchase just this ebook from our Gumroad page. I am working on the next series, and hopefully you'll see it in July. Stay tuned 😄       This post is for subscribers only Subscribe now Already have an account? Sign in
  17. Blogger posted a blog entry in Programmer's Corner
    by: Geoff Graham Thu, 26 Jun 2025 16:42:32 +0000 KelpUI is a new library that Chris Ferdinandi is developing, designed to leverage newer CSS features and Web Components. I’ve enjoyed following Chris as he’s published an ongoing series of articles detailing his thought process behind the library, getting deep into his approach. You really get a clear picture of his strategy, and I love it. He outlined his principles up front in a post back in April: And that’s what I’ve seen so far. The Cascade is openly embraced and logically structured with Cascade Layers. Plenty of utility classes are included, with extra care put into how they are named. Selectors are kept simple, and specificity is nice and low where needed. Layouts are flexible with good constraints. Color palettes are accessible and sport semantic naming. Chris has even put a ton of thought into how KelpUI is licensed. KelpUI is still evolving, and that’s part of the beauty of looking at it now and following Chris’s blog as he openly chronicles his approach. There are always going to be some opinionated directions in a library like this, but I love that the guiding philosophy is so clear and is being used as a yardstick to drive decisions. As I write this, Chris is openly questioning the way he optimizes the library, demonstrating the tensions between things like performance and a good developer experience. Looks like it’ll be a good system, but even more than that, it’s a wonderful learning journey that’s worth following. KelpUI originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  18. by: Abhishek Prakash Thu, 26 Jun 2025 04:57:35 GMT In an interesting turn of events, Linus Torvalds and Bill Gates met for the first time at a dinner. What would they have talked about? Any guesses? This photo also made me realize how quickly Torvalds has aged in the past few years 😔 We have 71 new lifetime members, just 4 short of our original target of 75. Would you help us achieve this? To recall, you get the lifetime Plus membership option at a reduced price of $76 instead of the usual $99, along with a free Linux command line eBook. If you ever wanted to support us with Plus membership but didn't like the recurring subscription, this is the best time for that 😃 Get It's FOSS Lifetime Membership Before Offer Ends 💬 Let's see what else you get in this edition Kubuntu also dropping Xorg support.Hyprland working on a paid plan and not everyone being happy about it.KDE's new setup tool.Void Editor with open source AI to tackle Cursor's supremacy.And other Linux news, tips, and, of course, memes!📰 Linux and Open Source NewsKubuntu is also set to drop Xorg in favor of Wayland. Fedora, Ubuntu, and now Kubuntu. I can see more distros following this trend in the near future.KDE plans a new setup tool to welcome users after a fresh installation.Hyprland is planning to launch a paid premium tier, and that decision has led to heated discussion in the communities.Murena Find launches as a Qwant-based search engine.Zed Editor's new debugger has arrived with multi-language support. Kingfisher is MongoDB's new open source real-time secrets scanner.Fedora plans to ditch 32-bit support completely. This will impact Steam and Wine. Fedora Looks to Completely Ditch 32-bit SupportFedora plans to drop 32-bit packages completely.It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutAccessibility on Linux is being taken for granted. 
It’s True, “We” Don’t Care About Accessibility on LinuxWhat do concern trolls and privileged people without visible or invisible disabilities who share or make content about accessibility on Linux being trash without contributing anything to projects have in common? They don’t actually really care about the group they’re defending; they just exploit these victims’ unfortunate situation to fuel hate against groups and projects actually trying to make the world a better place. I never thought I’d be this upset to a point I’d be writing an article about something this sensitive with a clickbait-y title. It’s simultaneously demotivating, unproductive, and infuriating. I’m here writing this post fully knowing that I could have been working on accessibility in GNOME, but really, I’m so tired of having my mood ruined because of privileged people spending at most 5 minutes to write erroneous posts and then pretending to be oblivious when confronted while it takes us 5 months of unpaid work to get a quarter of recognition, let alone acknowledgment, without accounting for the time “wasted” addressing these accusations. This is far from the first time, and it will certainly not be the last.TheEvilSkeletonTheEvilSkeleton🧮 Linux Tips, Tutorials and MoreWhat’s a shim file, and why does your Linux distro need it when dealing with UEFI secure boot?You can easily beautify Xfce desktop with these themes I suggest.Did you know you could tweak the file manager in GNOME desktop and extend its features?Quick tip on using dark mode with VLC.Fast, pretty, and actually helpful. Btop++ nails system monitoring. Btop++: Linux System Monitoring Tool That is Definitely Better than TopA sleek terminal-based system monitor that gives you detailed insights to your resources and processes.It's FOSSAbhishek Prakash Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. 
And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 👷 Homelab and Hardware CornerAbhishek boosted his Raspberry Pi's performance with this simple tweak. However, this is not a trick you should use often. This Simple Change Improved the Performance of My Homelab Running on Raspberry PiMy struggling Raspberry Pi got a performance boost by a small change in the memory configuration.It's FOSSAbhishek KumarSpotted this 'glow bot' smart AI assistant on Kickstarter. A cool desk companion with a futuristic vibe, only if you have money to spare. It is not open source. I hope someone starts a similar open source project soon, as this is an interesting concept to have customized pixel animation that reacts according to interaction. ✨ Project HighlightDon't like Cursor's proprietary nature? You can try Void instead. Void Editor Is Shaping Up Well: Is it Ready to Take on Cursor and Copilot?Looking for a privacy-first AI code editor? Void’s got you covered.It's FOSS NewsSourav Rudra📽️ Videos I am Creating for YouA rare Linux game review from us in video format. There is a text version, too. If you like it, we will cover more indie games that can be played natively on Linux. Subscribe to It's FOSS YouTube Channel🧩 Quiz TimeCan you guess all the Shell Built-in commands? Guess the Shell Built-ins: CrosswordTime to exercise those grey cells and correctly guess these popular shell built-ins in this fun crossword for Linux users.It's FOSSAbhishek Prakash💡 Quick Handy TipIn the Konsole, you can view file thumbnails. 
To accomplish this, first enable "Underline files" in a profile you use in Konsole via Menu → Settings → Configure Konsole → Profiles → Your Profile → Edit → Mouse → Miscellaneous → Underline files. Now, perform Menu → Settings → Configure Konsole → Thumbnails → Enable thumbnails generation. Also, set an activation key to hold while hovering your cursor; I used the Shift key to demonstrate below. That's it. Now, when you press Shift and hover your mouse over a file, a thumbnail will appear! 🤣 Meme of the WeekI feel like a mentor 👨‍🏫 🗓️ Tech TriviaMicrosoft was incorporated on June 25, 1981, in the state of Washington, following its founding by Bill Gates and Paul Allen in 1975. One more fun fact: Linus Torvalds and Bill Gates recently met at a dinner hosted by Microsoft Azure's CTO, Mark Russinovich. 🧑‍🤝‍🧑 FOSSverse CornerHotmail is a name I haven't heard in quite some time now. One of our FOSSers is not happy with it. Hotmail, the final straw. An icy rant from the polar regionsI’m not sure if this rant even belongs to these here pages, but I leave it to a Moderator to correct me and I promise I will stay in line later. Then again - i am seriously p…d off! I’ve had a @hotmail account since they first went online — long before many of you here, brothers and sisters on these pages, were even born. Back then, I was a Windows user, and Hotmail was far better than what my internet provider could offer. We had dial-up modems using landlines, and you still had to physica…It's FOSS Communityaudun_s❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  19. by: Daniel Schwarz Wed, 25 Jun 2025 14:33:45 +0000 Chrome 137 shipped the if() CSS function, so it’s totally possible we’ll see other browsers implement it, though it’s tough to know exactly when. Whatever the case, if() enables us to use values conditionally, which we can already do with queries and other functions (e.g., media queries and the light-dark() function), so I’m sure you’re wondering: What exactly does if() do? Sunkanmi gave us a nice overview of the function yesterday, poking at the syntax at a high level. I’d like to poke at it a little harder in this article, getting into some possible real-world usage. To recap, if() conditionally assigns a value to a property based on the value of a CSS variable. For example, we could assign different values to the color and background properties based on the value of --theme: --theme: "Shamrock" color: hsl(146 50% 3%) background: hsl(146 50% 40%) --theme: Anything else color: hsl(43 74% 3%) background: hsl(43 74% 64%) :root { /* Change to fall back to the ‘else’ values */ --theme: "Shamrock"; body { color: if(style(--theme: "Shamrock"): hsl(146 50% 3%); else: hsl(43 74% 3%)); background: if(style(--theme: "Shamrock"): hsl(146 50% 40%); else: hsl(43 74% 64%)); } } CodePen Embed Fallback I don’t love the syntax (too many colons, brackets, and so on), but we can format it like this (which I think is a bit clearer): color: if( style(--theme: "Shamrock"): hsl(146 50% 3%); else: hsl(43 74% 3%) ); We should be able to do a crazy number of things with if(), and I hope that becomes the case eventually, but I did some testing and learned that the syntax above is the only one that works. We can’t base the condition on the value of an ordinary CSS property (instead of a custom property), HTML attribute (using attr()), or any other value. For now, at least, the condition must be based on the value of a custom property (CSS variable). 
Exploring what we can do with if() Judging from that first example, it’s clear that we can use if() for theming (and design systems overall). While we could utilize the light-dark() function for this, what if the themes aren’t strictly light and dark, or what if we want to have more than two themes or light and dark modes for each theme? Well, that’s what if() can be used for. First, let’s create more themes/more conditions: :root { /* Shamrock | Saffron | Amethyst */ --theme: "Saffron"; /* ...I choose you! */ body { color: if( style(--theme: "Shamrock"): hsl(146 50% 3%); style(--theme: "Saffron"): hsl(43 74% 3%); style(--theme: "Amethyst"): hsl(282 47% 3%) ); background: if( style(--theme: "Shamrock"): hsl(146 50% 40%); style(--theme: "Saffron"): hsl(43 74% 64%); style(--theme: "Amethyst"): hsl(282 47% 56%) ); transition: 300ms; } } Pretty simple really, but there are a few easy-to-miss things. Firstly, there’s no “else condition” this time, which means that if the theme isn’t Shamrock, Saffron, or Amethyst, the default browser styles are used. Otherwise, the if() function resolves to the value of the first true statement, which is the Saffron theme in this case. Secondly, transitions work right out of the box; in the demo below, I’ve added a user interface for toggling the --theme, and for the transition, literally just transition: 300ms alongside the if() functions: CodePen Embed Fallback Note: if theme-swapping is user-controlled, such as selecting an option, you don’t actually need if() at all. You can just use the logic that I’ve used at the beginning of the demo (:root:has(#shamrock:checked) { /* Styles */ }). Amit Sheen has an excellent demonstration over at Smashing Magazine. 
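For completeness, the checkbox-driven alternative mentioned in that note can be sketched without if() at all. This is a minimal, hypothetical sketch: the #shamrock/#saffron/#amethyst IDs assume radio inputs in the markup (only #shamrock appears in the original demo), and the colors reuse the article’s theme values.

```css
/* Assumes markup like: <input type="radio" name="theme" id="shamrock"> etc. */
:root:has(#shamrock:checked) body {
  background: hsl(146 50% 40%);
  color: hsl(146 50% 3%);
}

:root:has(#saffron:checked) body {
  background: hsl(43 74% 64%);
  color: hsl(43 74% 3%);
}

:root:has(#amethyst:checked) body {
  background: hsl(282 47% 56%);
  color: hsl(282 47% 3%);
}
```

The trade-off is that the chosen theme lives in the DOM (a checked input) rather than in a CSS variable, so it can’t be reused inside other if() conditions the way --theme can.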
To make the code more maintainable though, we can slide the colors into CSS variables as well, then use them in the if() functions, then slide the if() functions themselves into CSS variables: /* Setup */ :root { /* Shamrock | Saffron | Amethyst */ --theme: "Shamrock"; /* ...I choose you! */ /* Base colors */ --shamrock: hsl(146 50% 40%); --saffron: hsl(43 74% 64%); --amethyst: hsl(282 47% 56%); /* Base colors, but at 3% lightness */ --shamrock-complementary: hsl(from var(--shamrock) h s 3%); --saffron-complementary: hsl(from var(--saffron) h s 3%); --amethyst-complementary: hsl(from var(--amethyst) h s 3%); --background: if( style(--theme: "Shamrock"): var(--shamrock); style(--theme: "Saffron"): var(--saffron); style(--theme: "Amethyst"): var(--amethyst) ); --color: if( style(--theme: "Shamrock"): var(--shamrock-complementary); style(--theme: "Saffron"): var(--saffron-complementary); style(--theme: "Amethyst"): var(--amethyst-complementary) ); /* Usage */ body { /* One variable, all ifs! */ background: var(--background); color: var(--color); accent-color: var(--color); /* Can’t forget this! */ transition: 300ms; } } CodePen Embed Fallback As well as using CSS variables within the if() function, we can also nest other functions. In the example below, I’ve thrown light-dark() in there, which basically inverts the colors for dark mode: --background: if( style(--theme: "Shamrock"): light-dark(var(--shamrock), var(--shamrock-complementary)); style(--theme: "Saffron"): light-dark(var(--saffron), var(--saffron-complementary)); style(--theme: "Amethyst"): light-dark(var(--amethyst), var(--amethyst-complementary)) ); CodePen Embed Fallback if() vs. Container style queries If you haven’t used container style queries before, they basically check if a container has a certain CSS variable (much like the if() function). 
Here’s the exact same example/demo but with container style queries instead of the if() function: :root { /* Shamrock | Saffron | Amethyst */ --theme: "Shamrock"; /* ...I choose you! */ --shamrock: hsl(146 50% 40%); --saffron: hsl(43 74% 64%); --amethyst: hsl(282 47% 56%); --shamrock-complementary: hsl(from var(--shamrock) h s 3%); --saffron-complementary: hsl(from var(--saffron) h s 3%); --amethyst-complementary: hsl(from var(--amethyst) h s 3%); body { /* Container has chosen Shamrock! */ @container style(--theme: "Shamrock") { --background: light-dark(var(--shamrock), var(--shamrock-complementary)); --color: light-dark(var(--shamrock-complementary), var(--shamrock)); } @container style(--theme: "Saffron") { --background: light-dark(var(--saffron), var(--saffron-complementary)); --color: light-dark(var(--saffron-complementary), var(--saffron)); } @container style(--theme: "Amethyst") { --background: light-dark(var(--amethyst), var(--amethyst-complementary)); --color: light-dark(var(--amethyst-complementary), var(--amethyst)); } background: var(--background); color: var(--color); accent-color: var(--color); transition: 300ms; } } CodePen Embed Fallback As you can see, where if() facilitates conditional values, container style queries facilitate conditional properties and values. Other than that, it really is just a different syntax. 
Additional things you can do with if() (but might not realize) Check if a CSS variable exists: /* Hide icons if variable isn’t set */ .icon { display: if( style(--icon-family): inline-block; else: none ); } Create more-complex conditional statements: h1 { font-size: if( style(--largerHeadings: true): xxx-large; style(--theme: "themeWithLargerHeadings"): xxx-large ); } Check if two CSS variables match: /* If #s2 has the same background as #s1, add a border */ #s2 { border-top: if( style(--s2-background: var(--s1-background)): thin solid red ); } if() and calc(): When the math isn’t mathing This won’t work (maybe someone can help me pinpoint why): div { /* 3/3 = 1 */ --calc: calc(3/3); /* Blue, because if() won’t calculate --calc */ background: if(style(--calc: 1): red; else: blue); } To make if() calculate --calc, we’ll need to register the CSS variable using @property first, like this: @property --calc { syntax: "<number>"; initial-value: 0; inherits: false; } Closing thoughts Although I’m not keen on the syntax and how unreadable it can sometimes look (especially if it’s formatted on one line), I’m mega excited to see how if() evolves. I’d love to be able to use it with ordinary properties (e.g., color: if(style(background: white): black; style(background: black): white);) to avoid having to set CSS variables where possible. It’d also be awesome if calc() calculations could be calculated on the fly without having to register the variable. That being said, I’m still super happy with what if() does currently, and can’t wait to build even simpler design systems. Poking at the CSS if() Function a Little More: Conditional Color Theming originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Additional things you can do with if() (but might not realize) Check if a CSS variable exists: /* Hide icons if variable isn’t set */ .icon { display: if( style(--icon-family): inline-block; else: none ); } Create more-complex conditional statements: h1 { font-size: if( style(--largerHeadings: true): xxx-large; style(--theme: "themeWithLargerHeadings"): xxx-large ); } Check if two CSS variables match: /* If #s2 has the same background as #s1, add a border */ #s2 { border-top: if( style(--s2-background: var(--s1-background)): thin solid red ); } if() and calc(): When the math isn’t mathing This won’t work (maybe someone can help me pinpoint why): div { /* 3/3 = 1 */ --calc: calc(3/3); /* Blue, because if() won’t calculate --calc */ background: if(style(--calc: 1): red; else: blue); } To make if() calculate --calc, we’ll need to register the CSS variable using @property first, like this: @property --calc { syntax: "<number>"; initial-value: 0; inherits: false; } Closing thoughts Although I’m not keen on the syntax and how unreadable it can sometimes look (especially if it’s formatted on one line), I’m mega excited to see how if() evolves. I’d love to be able to use it with ordinary properties (e.g., color: if(style(background: white): black; style(background: black): white);) to avoid having to set CSS variables where possible. It’d also be awesome if calc() calculations could be calculated on the fly without having to register the variable. That being said, I’m still super happy with what if() does currently, and can’t wait to build even simpler design systems. Poking at the CSS if() Function a Little More: Conditional Color Theming originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  21. by: Abhishek Kumar Wed, 25 Jun 2025 07:55:29 GMT If you’re a Linux user, you might have found yourself tangled in boot issues while installing your favorite distro, especially if Secure Boot is in the picture. Secure Boot is meant to add an extra layer of protection to our systems, preventing unverified software from running at boot. Sounds like a win, right? Well, not always. For Linux users, Secure Boot can often feel like more of a hassle than a help, leading to issues, failed installations, and troubleshooting headaches. Take, for instance, the Ubuntu 21.04 release fiasco, where the latest shim files (used to enable Secure Boot on Linux) had compatibility issues with early EFI firmware, causing some users’ systems to become unbootable after an upgrade. Ubuntu eventually released a fix, but not before many users found themselves troubleshooting or even downgrading to older shims just to get their systems to boot. But what exactly is Secure Boot, how do shim files play a role, and when should you consider disabling it? In this guide, I’ll break down Secure Boot in simple terms and explain how it affects Linux installations, including what you can do if it gets in the way. What is Secure Boot?Imagine your computer as a castle with a strong gatekeeper who checks the ID of anyone trying to enter. Secure Boot is like that gatekeeper, making sure only trusted, safe programs get to run during the initial phase of starting up your computer, also known as the boot process. Secure Boot is a security standard developed to keep your computer safe from malware that could sneak in and start doing harmful things even before the operating system (OS) fully loads. It is part of what's called the Unified Extensible Firmware Interface (UEFI), which replaced the older BIOS system. UEFI is a modern way for your computer to boot up and check everything is working as expected. 
When Secure Boot is turned on, your computer will only load software/operating system with a special signature or “stamp” of approval. If something without this signature tries to load, Secure Boot stops it, protecting your computer from potential harm. How does Secure Boot work?Secure Boot uses a chain of trust with different types of cryptographic keys (think of them as digital ID cards) to verify each step of the boot process. Here’s a simple breakdown: Platform Key (PK): This is like the master key, usually held by the device maker (like Dell, HP, etc.). It’s the root of the verification process. Key Exchange Key (KEK): This key confirms whether other keys can be trusted, acting as a bridge between the platform key and bootloaders. Allowed Database (DB): Contains a list of approved signatures for software that’s allowed to load. Forbidden Database (DBX): Stores signatures of known, unsafe programs. If something tries to load from this list, Secure Boot blocks it. During startup, Secure Boot checks each program that tries to load against these keys and databases. Only programs that have valid, signed keys will run, making sure your system stays secure. Image Credit: RedHatWhat are Shim files?Now, let’s say you’re trying to run Linux on a Secure Boot-enabled computer. Linux doesn’t always have the same pre-approved signatures as Windows, so that’s where Shim files come in. A Shim is a small program that acts like a translator between Secure Boot and the Linux OS. The Shim file is signed with a key that Secure Boot recognizes (often by Microsoft), so it’s allowed to load. The Shim then verifies the signature of the Linux bootloader (like GRUB) and passes control to it if everything checks out. This process creates a “chain of trust” from Secure Boot to Linux, so the OS can load securely even on a Secure Boot-enabled system. 
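To make the shim part concrete, you can usually see the shim binary sitting on your EFI System Partition. A minimal sketch, assuming the ESP is mounted at /boot/efi (the mount point and file names vary by distro):

```shell
# Look for signed shim binaries on the EFI System Partition.
# Assumption: the ESP is mounted at /boot/efi (common, but distro-dependent).
esp=/boot/efi
find "$esp/EFI" -iname 'shim*.efi' 2>/dev/null
# On many Ubuntu installs this lists something like:
# /boot/efi/EFI/ubuntu/shimx64.efi
```

If nothing turns up, your system may be booting in legacy BIOS mode, or the ESP is mounted elsewhere.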
Why is Secure Boot important?Secure Boot is crucial because it provides a defense against one of the most dangerous kinds of malware: bootkits and rootkits. These are malicious programs that try to hide themselves in the boot process, allowing them to run before the OS is fully up and running. They can be hard to detect and even harder to remove. With Secure Boot: Bootkits and rootkits are blocked from loading by the signature check.Tampered or unauthorized programs are prevented from affecting the boot process.Users are alerted if something is wrong, so they can address potential issues before they become serious problems.When you might need to disable Secure BootSecure Boot is great for security, but there are times when it can cause issues: Installing unsigned operating systems: Some operating systems, especially certain Linux distributions, may not have the required signatures to pass Secure Boot verification. If your OS isn’t recognized, Secure Boot will prevent it from loading.Using custom drivers or bootloaders: Certain drivers or bootloaders might not be signed, which can cause compatibility issues.Advanced configurations: For power users who want to customize their systems, Secure Boot’s restrictions can feel limiting. Disabling it allows for greater flexibility, especially in homelab or development environments.However, turning off Secure Boot also removes that extra layer of security, so it’s essential to proceed carefully. Which distros support Secure Boot?While Secure Boot has posed compatibility challenges for Linux, many popular distributions have adapted to work smoothly with it. These distros include signed bootloaders and shim binaries that allow them to run without issues on systems with Secure Boot enabled. Most major Linux distributions now support Secure Boot. I can think of these at least: Ubuntu, Fedora, openSUSE/SUSE, Zorin, Linux Mint, Debian, and Red Hat. 🚧This is not an extensive list of all distros with secure boot support. 
There are many more distros out there that support secure boot. Please check their official websites for information.Not all distributions offer Secure Boot support, so it’s worth verifying before installation if you plan to keep Secure Boot enabled. For distros that don’t support Secure Boot directly, you can still disable it in the BIOS settings or manually add a trusted bootloader, though it requires some technical knowledge. How to disable Secure Boot (and why you should be careful)If you decide that you need to disable Secure Boot, here’s a simple guide: 🚧Disabling Secure Boot makes your system more vulnerable to boot-level attacks. Ensure that you have other security measures in place, like keeping your OS up-to-date and using antivirus software.Restart your computer and enter the UEFI/BIOS settings (this usually involves pressing a key like F2, F10, or DEL during startup).Find the Secure Boot option: In the settings, look for “Secure Boot” under Security or Boot options.Disable Secure Boot: Set it to “Disabled.” Be sure to save changes and exit.For Windows-specific instructions, see the It's FOSS tutorial “How to Disable UEFI Secure Boot in Windows” by Abhishek Prakash, which covers the same steps with screenshots.Final ThoughtsThe discourse around Secure Boot is polarizing, and for good reason. While it’s designed to enhance system security, it often imposes limitations on Linux users, especially those who rely on proprietary drivers or use less mainstream distributions. The need for Microsoft-signed shims raises valid concerns about vendor lock-in and compatibility. In my experience, especially with a dedicated graphics card on my gaming laptop, keeping Secure Boot off is almost a necessity. With Secure Boot enabled, proprietary drivers tend to fail during installation, as I’ve seen firsthand on Pop!_OS. It’s a compromise I choose for compatibility, though it shouldn’t have to be this way. 
This article is for those interested in understanding Secure Boot’s quirks and why your favorite distro might not boot up smoothly. The debate is nuanced: is it a crucial security layer or an unnecessary barrier for Linux users? I’d love to hear where you stand on this debate, so let me know in the comments!
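As a practical footnote: before (or after) toggling the firmware setting, you can verify the current Secure Boot state from within Linux. This is a sketch that reads the UEFI variable directly; it assumes a UEFI system with efivarfs mounted at the standard path (if the mokutil package is installed, `mokutil --sb-state` reports the same thing):

```shell
# Read the SecureBoot UEFI variable directly (no extra tools needed).
var=$(ls /sys/firmware/efi/efivars/SecureBoot-* 2>/dev/null | head -n 1)
if [ -z "$var" ]; then
  echo "No SecureBoot variable found (legacy BIOS boot, or efivarfs not mounted)"
else
  # The first 4 bytes are attribute flags; the 5th byte is the state (1 = on, 0 = off)
  state=$(od -An -t u1 -j 4 -N 1 "$var" | tr -d ' ')
  if [ "$state" = "1" ]; then echo "Secure Boot: enabled"; else echo "Secure Boot: disabled"; fi
fi
```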
  22. by: Sunkanmi Fafowora Tue, 24 Jun 2025 15:17:10 +0000 We’ve known it for a few weeks now, but the CSS if() function officially shipped in Chrome 137. It’s really fast development for a feature that the CSSWG resolved to add less than a year ago. We can typically expect this sort of thing — especially one that is unlike anything we currently have in CSS — to develop over a number of years before we can get our dirty hands on it. But here we are! I’m not here to debate whether if() in CSS should exist, nor do I want to answer whether CSS is a programming language; Chris already did that and definitely explained how exhausting that fun little argument can be. What I am here to do is poke at if() in these early days of support and explore what we know about it today at a pretty high level to get a feel for its syntax. We’ll poke a little harder at it in another upcoming post where we’ll look at a more heady real-world example. Yes, it’s already here! Conditional statements exist everywhere in CSS. From at-rules to the parsing and matching of every statement to the DOM, CSS has always had conditionals. And, as Lea Verou put it, every selector is essentially a conditional! What we haven’t had, however, is a way to style an element against multiple conditions in one line, and then have it return a result conditionally. The if() function is a more advanced level of conditionals, where you can manipulate and have all your conditional statements assigned to a single property. .element { color: if(style(--theme: dark): oklch(52% 0.18 140); else: oklch(65% 0.05 220)); } CodePen Embed Fallback How does if() work? Well before Chrome implemented the feature, back in 2021 when it was first proposed, the early syntax was like this: <if()> = if( <container-query>, [<declaration-value>]{1, 2} ) Now we’re looking at this instead: <if()> = if( [<if-statement>: <result>]*; <if-statement>: <result> ;? 
) Where… The first <if-statement> represents conditions inside either style(), media(), or supports() wrapper functions. This allows us to write multiple if statements, as many as we may desire. Yes, you read that right. As many as we want! The final <if-statement> condition (else) is the default value when all other if statements fail. That’s the “easy” way to read the syntax. This is what’s in the spec: <if()> = if( [ <if-branch> ; ]* <if-branch> ;? ) <if-branch> = <if-condition> : <declaration-value>? <if-condition> = <boolean-expr[ <if-test> ]> | else <if-test> = supports( [ <ident> : <declaration-value> ] | <supports-condition> ) | media( <media-feature> | <media-condition> ) | style( <style-query> ) A little wordy, right? So, let’s look at an example to wrap our heads around it. Say we want to change an element’s padding depending on a given active color scheme. We would set an if() statement with a style() function inside, and that would compare a given value with something like a custom variable to output a result. All this talk sounds so complicated, so let’s jump into code: .element { padding: if(style(--theme: dark): 2rem; else: 3rem); } The example above sets the padding to 2rem… if the --theme variable is set to dark. If not, it defaults to 3rem. I know, not exactly the sort of thing you might actually use the function for, but it’s merely to illustrate the basic idea. Make the syntax clean! One thing I noticed, though, is that things can get convoluted very fast. Imagine you have three if() statements like this: :root { --height: 12.5rem; --width: 4rem; --weight: 2rem; } .element { height: if( style(--height: 3rem): 14.5rem; style(--width: 7rem): 10rem; style(--weight: 100rem): 2rem; else: var(--height) ); } We’re only working with three statements and, I’ll be honest, it makes my eyes hurt with complexity. So, I’m anticipating formatting conventions for if() to develop soon, or for code formatters to adopt a style for it. 
For example, if I were to break things out to be more readable, I would likely do something like this: :root { --height: 12.5rem; --width: 4rem; --weight: 2rem; } /* This is much cleaner, don't you think? */ .element { height: if( style(--height: 3rem): 14.5rem; style(--width: 7rem): 10rem; style(--weight: 100rem): 2rem; else: var(--height) ); } Much better, right? Now, you can definitely understand what is going on at a glance. That’s just me, though. Maybe you have different ideas… and if you do, I’d love to see them in the comments. Here’s a quick demo showing multiple conditionals in CSS for this animated ball to work. The width of the ball changes based on some custom variable values set. Gentle reminder that this is only supported in Chrome 137+ at the time I’m writing this: CodePen Embed Fallback The supports() and media() statements Think of supports() the same way you would use the @supports at-rule. In fact, they work about the same, at least conceptually: /* formal syntax for @supports */ @supports <supports-condition> { <rule-list> } /* formal syntax for supports() */ supports( [ <ident> : <declaration-value> ] | <supports-condition> ) The only difference here is that supports() returns a value instead of matching a block of code. But, how does this work in real code? Let’s say you want to check for support for the backdrop-filter property, particularly the blur() function. Typically, you can do this with @supports: /* Fallback in case the browser doesn't support backdrop-filter */ .card { backdrop-filter: unset; background-color: oklch(20% 50% 40% / 0.8); } @supports (backdrop-filter: blur(10px)) { .card { backdrop-filter: blur(10px); background-color: oklch(20% 50% 40% / 0.8); } } But, with CSS if(), we can also do this: .card { backdrop-filter: if( supports(backdrop-filter: blur(10px)): blur(10px); else: unset ); } Note: Think of unset here as a possible fallback for graceful degradation. That looks awesome, right? 
Multiple conditions can be checked as well for supports() and any of the supported functions. For example: .card { backdrop-filter: if( supports(backdrop-filter: blur(10px)): blur(10px); supports(backdrop-filter: invert(50%)): invert(50%); supports(backdrop-filter: hue-rotate(230deg)): hue-rotate(230deg); else: unset ); } Now, take a look at the @media at-rule. You can compare and check for a bunch of stuff, but I’d like to keep it simple and check for whether or not a screen size is a certain width and apply styles based on that: h1 { font-size: 2rem; } @media (min-width: 768px) { h1 { font-size: 2.5rem; } } @media (min-width: 1200px) { h1 { font-size: 3rem; } } The media() wrapper works almost the same way as its at-rule counterpart. Note its syntax from the spec: /* formal syntax for @media */ @media <media-query-list> { <rule-list> } /* formal syntax for media() */ media( <media-feature> | <media-condition> ) Notice how, at the end of the day, the conditions you write inside media() are essentially the same ones you’d write in @media. And instead of returning a block of code in @media, you’d have something like this in the CSS inline if(): h1 { font-size: if( media(width >= 1200px): 3rem; media(width >= 768px): 2.5rem; else: 2rem ); } Again, these are early days As of the time of this writing, only the latest update of Chrome supports if(). I’m guessing other browsers will follow suit once usage and interest come in. I have no idea when that will happen. Until then, I think it’s fun to experiment with this stuff, just as others have been doing: The What If Machine: Bringing the “Iffy” Future of CSS into the Present (Lee Meyer) How To Correctly Use if() In CSS (Temani Afif) Future-Proofing Indirect Cyclical Conditions (Roma Komarov) The new if() function in CSS has landed in the latest Chrome (Amit Merchant) Experimenting with early features is how we help CSS evolve. If you’re trying things out, consider adding your feedback to the CSSWG and Chromium. 
  The more use cases, the better, and that will certainly help make future implementations better as well. Now that we have a high-level feel for the if() syntax, we’ll poke a little harder at the function in another article where we put it up against a real-world use case. We’ll link that up when it publishes tomorrow. Lightly Poking at the CSS if() Function in Chrome 137 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  23. by: Abhishek Prakash Tue, 24 Jun 2025 13:49:11 +0530 Ever wondered how to make your bash scripts more robust and professional? The declare command in bash is your secret weapon for proper variable management! Alright! So, variables in bash don't have any types and you can simply use them as name=value. That's not surprising. What you might not know is that you can control variable types, scope, and behavior by using declare with your variables. Interesting, right? What is Declare in Bash?The declare built-in command in bash allows you to explicitly declare variables with specific attributes. Think of it as giving your variables special properties that control how they behave. The syntax for declare is simple: declare [options] [variable[=value]] If you use it without any options, it will be the same as regular variable assignment. # Simple variable assignment name="John" # Using declare (equivalent but more explicit) declare name="John" The magic happens with the options that define variable attributes, as they open up a world of possibilities! Stay with me to see the incredible power of this lesser-known bash command. Making variables read-only (-r)Want to create constants that can't be accidentally modified? Use the -r flag: declare -r var=value Here's an example: declare -r API_URL="https://api.example.com" declare -r MAX_RETRIES=3 # This will fail with an error because API_URL is a read-only variable API_URL="https://malicious.com" 💡Use read-only variables for configuration values that should never change during script execution!Define integer variables (-i)Force variables to behave as integers for mathematical operations in bash: declare -i int_var=123 💡Integer variables automatically handle arithmetic without needing $(( )) or $[ ]. 
Now that's something, right? Here's a proper example: declare -i counter=10 declare -i result counter=counter+5 # Works without $ or (( )) echo $counter # Output: 15 result=counter*2 # Mathematical operations work directly echo $result # Output: 30 This is a good way to validate user input of your bash script. declare -i user_input read -p "Enter a number: " user_input if [[ $user_input -eq 0 ]] && [[ "$user_input" != "0" ]]; then echo "Invalid input! Please enter a number." else echo "You entered: $user_input" fi ⚠️ Don't mix strings and integers. You won't see an error, but you won't get the intended result either. declare -i number="abc" echo $number # Output: 0 (not an error!) 💡You can use the -g option to create global variables when used inside a shell function.Use array variables (-a)You can create indexed arrays explicitly with the -a option: declare -a array_var=("get" "LHB" "Pro" "Membership") A better example that shows things in action: declare -a fruits=("apple" "banana" "orange") declare -a numbers # Add elements fruits[3]="grape" numbers[0]=100 numbers[1]=200 echo ${fruits[@]} # Output: apple banana orange grape echo ${#fruits[@]} # Output: 4 (array length) ⚠️ Beware of index gap issues. In the example below, the element was added at the 11th position, but the length of the array is still counted as 2. Basically, bash is not a high-level programming language. So, be careful of the pitfalls. declare -a sparse_array sparse_array[0]="first" sparse_array[10]="eleventh" echo ${#sparse_array[@]} # Output: 2 (not 11!) Declare associative arrays (-A)I hope you are familiar with the concept of associative arrays in bash. Basically, associative arrays let you create structured data with key-value pairs, offering a more flexible way to handle data compared to indexed arrays. 
declare -A user_info user_info["name"]="Alice" user_info["age"]=30 user_info["role"]="developer" echo ${user_info["name"]} # Output: Alice echo ${!user_info[@]} # Output: name age role (keys) echo ${user_info[@]} # Output: Alice 30 developer (values) 💡You can combine options. declare -r -x -i MAX_WORKERS=4 will create a read-only, exported integer variable.Create exported variables (-x)By default, the variables you create are not available to child processes. Make variables available to child processes with the -x option. In the screenshot below, you can see that the exported variable with declare was available in the subshell while the normal variable was not. This is useful when you have scripts with variables that need to be available beyond the current shell. An example with pseudocode: declare -x DATABASE_URL="postgresql://localhost/mydb" declare -x -r CONFIG_FILE="/etc/myapp.conf" # Read-only AND exported # Now DATABASE_URL is available to any command you run python my_script.py # Can access DATABASE_URL 🚧Child processes won't see the array structure! Arrays can't be exported in bash.Unsetting attributesYou can also remove specific attributes from variables by using + in the option. In the example below, I have the variable set as an integer first and then I change it to a string. declare -i number=42 declare +i number # Remove integer attribute number="hello" # Now this works (was previously integer-only) Just letting you know that this option is also there if the situation demands it. 💡declare -p variable_name will show a specific variable's attributes, while declare -p on its own will show all variables in the system with their attributes. 
Pretty huge output.When and where should you use declare?Use declare when you want to: Create constants with -rWork with arrays (-a or -A)Ensure variables are integers (-i)Make variables available to child processes (-x)Create more readable, self-documenting codeStick with simple assignment when: Creating basic string variablesWorking with temporary valuesWriting quick one-linersI can think of some practical, real-world use cases. Let's say you are creating a script for system monitoring (pseudocode for example): #!/bin/bash # System thresholds (read-only) declare -r -i CPU_THRESHOLD=80 declare -r -i MEMORY_THRESHOLD=85 declare -r -i DISK_THRESHOLD=90 # Current values (will be updated) declare -i current_cpu declare -i current_memory declare -i current_disk # Arrays for storing historical data declare -a cpu_history declare -a memory_history # Function to check system resources check_system() { current_cpu=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1) current_memory=$(free | grep Mem | awk '{printf("%.0f", $3/$2 * 100.0)}') # Add to history cpu_history+=($current_cpu) memory_history+=($current_memory) # Alert if thresholds exceeded if (( current_cpu > CPU_THRESHOLD )); then echo "⚠️ CPU usage high: ${current_cpu}%" fi if (( current_memory > MEMORY_THRESHOLD )); then echo "⚠️ Memory usage high: ${current_memory}%" fi } Or, configuration management of your web service: #!/bin/bash # Application configuration declare -r APP_NAME="MyWebApp" declare -r APP_VERSION="2.1.0" declare -r CONFIG_DIR="/etc/myapp" # Runtime settings (can be modified) declare -i PORT=8080 declare -i MAX_CONNECTIONS=100 # Export for child processes declare -x DATABASE_URL="postgresql://localhost/myapp" declare -x LOG_LEVEL="INFO" echo "Starting $APP_NAME v$APP_VERSION on port $PORT" Wrapping UpThe declare command transforms bash from a simple scripting language into a more robust programming environment. 
It's not just about creating variables - it's about creating reliable, maintainable scripts that handle data properly. That being said, there are still a few pitfalls you should avoid. I mentioned a few of them in the article, but it is always good to double-test your scripts. Let me know if you have questions on the usage of the declare shell built-in.
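To close with one runnable illustration of the -x behavior described earlier (the variable names here are made up for the demo): only the variable declared with -x is visible inside a child shell.

```shell
#!/usr/bin/env bash
declare -x EXPORTED_VAR="visible"   # exported: child processes can see it
NORMAL_VAR="hidden"                 # plain assignment: current shell only

# Each bash -c spawns a child process
bash -c 'echo "exported: ${EXPORTED_VAR:-unset}"'   # prints: exported: visible
bash -c 'echo "normal: ${NORMAL_VAR:-unset}"'       # prints: normal: unset
```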
  24. by: Chris Coyier Mon, 23 Jun 2025 15:47:48 +0000 I like the term “content aware components” like Eric Bailey uses in the Piccalilli article Making content-aware components using CSS :has(), grid, and quantity queries. Does a card have a photo? Yes, do one thing, no, do another. That sort of thing. Eric has some good examples where a UI component has a bunch more “tags” than another, so the layout adjusts to accommodate them better. Thanks to :has(), the idea of “quantity queries” (e.g. style an element if there are, say, 4 or more of an element) have gotten a lot easier. The way I figure it, we could do like: .card:has(.tag:nth-child(4)) { /* apply styles to the card because there are at least 4 tags */ } Admittedly, the logic gets a little bit more complicated if you want like “3 or less tags”, but that’s exactly what Eric covers in the article, linking up the original ideas and resources, so you’re in luck. At CSS Day the other week, I listened to Ahmad Shadeed have a wonderful idea on stage. Imagine a layout with one item, you show it. Two items? Side by side? Three items? Maybe two on top and one on the bottom. Four? Two by Two. Five? Well! That’s getting to be a lot of items. We could break them into a carousel. All entirely in CSS. That’s wild. I’d love to build that one day. Maybe I’ll try to stream it one of these days. (Of course, as I write this Kevin Powell put out a video that verges on the idea, and it’s very clever.) Speaking of Ahmad, he’s got a great article introducing the big improvements that the attr() function has gotten. I absolutely love how we can pluck attribute values out of HTML and actually have them be useful now. I think we’ll realize the power of that more and more. But it occurs to me here that it could factor into this quantity query situation. Say you trust your server to know and output stuff like this. So like: <div class="card-grid" data-cards="13"> ... 
</div> You can get your hands on 13, as an actual number not a string, in CSS now like: attr(data-cards type(<number>), 2) The number 2 above is a fallback. With a number like that, it makes me think that maybe-just-maybe, we could combine it with the newfangled if() commands in CSS (See Una’s great video) to write the same kind of “quantity query” logic. Ya know how Sass has @mixin to repeat blocks of CSS? Native CSS doesn’t have that yet, but style queries are pretty close. I snagged this screenshot out of Kevin’s video (in CodePen, naturally): See how he just flips on a --custom-property then the style query matches when that custom property is set and outputs a block of styles? That feels an awful lot like a mixin to me. Miriam has a nice homebase page for native mixins, which links to some very active discussion on it. At the pace CSS is moving I imagine we’ll have it before we know it. Just @applying a @mixin seems like a more straightforward approach than the style query approach, and more flexible as it’s likely they’ll take parameters, and possibly even slots. CSS carousels amount to a pretty hefty amount of CSS. Wouldn’t it be cool to make that into a hefty @mixin that takes parameters on what features you want? Ship it. In other news, gap decorations are starting to be a thing and that’s wonderful. Hoping they’ll move right onto styling a grid area without needing an HTML element there, that would be just as wonderful. I’m still hesitant on making entire columns of article content into grids, but that’s a me-problem, I see others are coming around on the idea.
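For reference, both directions of the quantity query idea above can be sketched in plain CSS. This assumes the .tag elements are the only children of their container; the class names and the declarations inside are placeholders, not anyone's production code:

```css
/* "4 or more" tags: a 4th child exists */
.card:has(.tag:nth-child(4)) {
  font-size: 0.875rem;
}

/* "3 or fewer" tags: the first tag is also among the last three children */
.card:has(.tag:first-child:nth-last-child(-n + 3)) {
  font-size: 1rem;
}
```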
  25. by: Zell Liew Mon, 23 Jun 2025 13:41:34 +0000 In a previous article, I showed you how to refactor the Resize Observer API into something way simpler to use: // From this const observer = new ResizeObserver(observerFn) function observerFn (entries) { for (let entry of entries) { // Do something with each entry } } const element = document.querySelector('#some-element') observer.observe(element); // To this const node = document.querySelector('#some-element') const obs = resizeObserver(node, { callback({ entry }) { // Do something with each entry } }) Today, we’re going to do the same for MutationObserver and IntersectionObserver. Refactoring Mutation Observer MutationObserver has almost the same API as that of ResizeObserver. So we can practically copy-paste the entire chunk of code we wrote for resizeObserver to mutationObserver. export function mutationObserver(node, options = {}) { const observer = new MutationObserver(observerFn) const { callback, ...opts } = options observer.observe(node, opts) function observerFn(entries) { for (const entry of entries) { // Callback pattern if (options.callback) options.callback({ entry, entries, observer }) // Event listener pattern else { node.dispatchEvent( new CustomEvent('mutate', { detail: { entry, entries, observer }, }) ) } } } } You can now use mutationObserver with the callback pattern or event listener pattern. const node = document.querySelector('.some-element') // Callback pattern const obs = mutationObserver(node, { callback ({ entry, entries }) { // Do what you want with each entry } }) // Event listener pattern node.addEventListener('mutate', event => { const { entry } = event.detail // Do what you want with each entry }) Much easier! Disconnecting the observer Unlike ResizeObserver, which has two methods to stop observing elements, MutationObserver only has one: the disconnect method. export function mutationObserver(node, options = {}) { // ...
return { disconnect() { observer.disconnect() } } } But, MutationObserver has a takeRecords method that lets you get unprocessed records before you disconnect. Since we should takeRecords before we disconnect, let’s use it inside disconnect. To create a complete API, we can return this method as well. export function mutationObserver(node, options = {}) { // ... return { // ... disconnect() { const records = observer.takeRecords() observer.disconnect() if (records.length > 0) observerFn(records) } } } Now we can disconnect our mutation observer easily with disconnect. const node = document.querySelector('.some-element') const obs = mutationObserver(/* ... */) obs.disconnect() MutationObserver’s observe options In case you were wondering, MutationObserver’s observe method can take in 7 options. Each one of them determines what to observe, and the boolean ones all default to false. subtree: Monitors the entire subtree of nodes childList: Monitors for the addition or removal of child elements. If subtree is true, this monitors all descendant elements. attributes: Monitors for a change of attributes attributeFilter: Array of specific attributes to monitor attributeOldValue: Whether to record the previous attribute value if it was changed characterData: Monitors for change in character data characterDataOldValue: Whether to record the previous character data value Refactoring Intersection Observer The API for IntersectionObserver is similar to other observers. Again, you have to: Create a new observer: with the new keyword. This observer takes in an observer function to execute. Do something with the observed changes: This is done via the observer function that is passed into the observer. Observe a specific element: By using the observe method. (Optionally) unobserve the element: By using the unobserve or disconnect method (depending on which Observer you’re using). But IntersectionObserver requires you to pass the options in Step 1 (instead of Step 3).
So here’s the code to use the IntersectionObserver API. // Step 1: Create a new observer and pass in relevant options const options = {/*...*/} const observer = new IntersectionObserver(observerFn, options) // Step 2: Do something with the observed changes function observerFn (entries) { for (const entry of entries) { // Do something with entry } } // Step 3: Observe the element const element = document.querySelector('#some-element') observer.observe(element) // Step 4 (optional): Disconnect the observer when we're done using it observer.disconnect() Since the code is similar, we can also copy-paste the code we wrote for mutationObserver into intersectionObserver. When doing so, we have to remember to pass the options into IntersectionObserver and not the observe method. export function intersectionObserver(node, options = {}) { const { callback, ...opts } = options const observer = new IntersectionObserver(observerFn, opts) observer.observe(node) function observerFn(entries) { for (const entry of entries) { // Callback pattern if (options.callback) options.callback({ entry, entries, observer }) // Event listener pattern else { node.dispatchEvent( new CustomEvent('intersect', { detail: { entry, entries, observer }, }) ) } } } } Now we can use intersectionObserver with the same easy-to-use API: const node = document.querySelector('.some-element') // Callback pattern const obs = intersectionObserver(node, { callback ({ entry, entries }) { // Do what you want with each entry } }) // Event listener pattern node.addEventListener('intersect', event => { const { entry } = event.detail // Do what you want with each entry }) Disconnecting the Intersection Observer IntersectionObserver’s methods are a union of both resizeObserver and mutationObserver.
It has four methods: observe: observe an element unobserve: stops observing one element disconnect: stops observing all elements takeRecords: gets unprocessed records So, we can combine the methods we’ve written in resizeObserver and mutationObserver for this one: export function intersectionObserver(node, options = {}) { // ... return { unobserve(node) { observer.unobserve(node) }, disconnect() { // Take records before disconnecting. const records = observer.takeRecords() observer.disconnect() if (records.length > 0) observerFn(records) }, takeRecords() { return observer.takeRecords() }, } } Now we can stop observing with the unobserve or disconnect method. const node = document.querySelector('.some-element') const obs = intersectionObserver(node, /*...*/) // Disconnect the observer obs.disconnect() IntersectionObserver options In case you were wondering, IntersectionObserver takes in three options: root: The element used to check if observed elements are visible rootMargin: Lets you specify an offset amount from the edges of the root threshold: Determines when to log an observer entry Here’s an article to help you understand IntersectionObserver options. Using this in practice via Splendid Labz Splendid Labz has a utils library that contains resizeObserver, mutationObserver and intersectionObserver. You can use them if you don’t want to copy-paste the above snippets into every project. import { resizeObserver, intersectionObserver, mutationObserver } from 'splendidlabz/utils/dom' const node = document.querySelector('.some-element') const resizeObs = resizeObserver(node, /* ... */) const intersectObs = intersectionObserver(node, /* ...
*/) const mutateObs = mutationObserver(node, /* ... */) Aside from the code we’ve written together above (and in the previous article), each observer method in Splendid Labz is capable of letting you observe and stop observing multiple elements at once (except mutationObserver because it doesn’t have an unobserve method) const items = document.querySelectorAll('.elements') const obs = resizeObserver(items, { callback ({ entry, entries }) { /* Do what you want here */ } }) // Unobserves two items at once const subset = [items[0], items[1]] obs.unobserve(subset) So it might be just a tad easier to use the functions I’ve already created for you. 😉 Shameless Plug: Splendid Labz contains a ton of useful utilities — for CSS, JavaScript, Astro, and Svelte — that I have created over the last few years. I’ve parked them all into Splendid Labz, so I no longer need to scour the internet for useful functions for most of my web projects. If you take a look, you might just enjoy what I’ve compiled! (I’m still making the docs at the time of writing so it can seem relatively empty. Check back every now and then!) Learning to refactor stuff If you love the way I explained how to refactor the observer APIs, you may find how I teach JavaScript interesting. In my JavaScript course, you’ll learn to build 20 real life components. We’ll start off simple, add features, and refactor along the way. Refactoring is such an important skill to learn — and in here, I make sure you get to cement it into your brain. That’s it! Hope you had fun reading this piece! A Better API for the Intersection and Mutation Observers originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  26. by: Adnan Shabbir Mon, 23 Jun 2025 13:40:56 +0000 Bash (Bourne Again Shell) is a free and open-source shell and scripting language. Its journey started in the late 80s, and since then, Bash has been adopted by routine Linux users and Linux SysAdmins. Bash has automated the daily tasks of a Linux System Administrator, who would otherwise spend hours running scripts and commands. Not only for SysAdmins: the simplicity and easy-to-learn nature of Bash has automated the tasks of regular Linux users as well. Inspired by this, today we demonstrate the 10 most useful Bash scripts for Linux SysAdmins. These are chosen based on the typical duties of a Linux System Administrator (from small-scale to large-scale environments). 10 Bash Scripts to Automate Daily Linux SysAdmin Tasks Prerequisite 1: Running a Bash Script | To be Done Before Running Each Script in This Post Prerequisite 2: Package Management Commands for Distros Other Than Debian/Ubuntu Script 1: Update and Upgrade the System Repositories/Packages Index Script 2: Install a Package on Linux Script 3: Remove a Package Script 4: Monitoring Systems Performance Script 5: Log Monitoring Script 6: User Management | Adding a New User, Adding a User to a Group Script 7: Disk Management Script 8: Service Management Script 9: Process Management Script 10: Allow or Deny Services Over the Firewall Bonus: Automating the Scripts using Cron Conclusion 10 Bash Scripts to Automate Daily Linux SysAdmin Tasks A System Administrator can create as many scripts as required. We have automated some of the most common and most used tasks through Bash scripts. Let’s go through the prerequisites first and then the scripts: Prerequisite 1: Running a Bash Script | To be Done Before Running Each Script in This Post Before we get into the scripts, let’s quickly go through the process to run a bash script. Step 1: Make the Script Executable A bash script is useless until it is made executable.
Here, the scripts refer to the Linux sys admin, so we use “u+x” with “sudo” to make the scripts executable for the admin only: sudo chmod u+x /path/to/script Step 2: Execute the Script Once the script is executable, it can now be run from the terminal using the command: sudo /path/to/script Click here to get more details on running a Bash script. Prerequisite 2: Package Management Commands for Distros Other Than Debian/Ubuntu To assist with Script 1, Script 2, and Script 3, we prepared a command cheat sheet for managing the packages on Linux distros other than Debian/Ubuntu and their derivatives. Here’s the table that lists the commands for each package manager:
Package Manager | Update/Upgrade | Install | Remove
pacman (Arch-based) | sudo pacman -Syu | sudo pacman -S <package> | sudo pacman -R <package>
zypper (SUSE-based) | sudo zypper update / sudo zypper upgrade | sudo zypper install <package> | sudo zypper remove <package>
dnf (Fedora/RHEL-based) | sudo dnf update / sudo dnf upgrade | sudo dnf install <package> | sudo dnf remove <package>
apt (Debian/Ubuntu-based) | sudo apt update / sudo apt upgrade | sudo apt install <package> | sudo apt remove <package>
Script 1: Update and Upgrade the System Repositories/Packages Index “Update and upgrade” commands are the most used commands by any Linux SysAdmin or a routine user. Here, the below script updates, upgrades, and autoremoves packages: #!/bin/bash #updating the system repositories sudo apt update -y #installing the updated packages from repositories sudo apt upgrade -y #auto removing the unused dependencies sudo apt autoremove -y Note: Please refer to the table (Prerequisite 2) for Linux package management commands.
Let’s make it executable: Permission Denied: Since the script belongs to the SysAdmin, we strictly keep the permissions restricted to the sudo user only: Here’s the update, upgrade, and autoremoval of packages: Script 2: Install a Package on Linux A Linux SysAdmin has to install and remove packages from the systems and keep an eye on this process. Each package installation requires a few commands to effectively install that package. Note: Please refer to the table (Prerequisite 2) for Linux package management commands. #!/bin/bash #update and upgrade system packages repositories sudo apt update && sudo apt upgrade #install any package sudo apt install $1 Update and upgrade package repositories, followed by installing a specific package ($1, specify the package name while running the script): Here, we choose $1=ssh and run the script: Script 3: Remove a Package A complete removal of a package involves multiple commands. Let’s manage it through a single script: Note: Go through the table (Prerequisite 2) for the commands of other Linux package managers: #!/bin/bash #remove the package with only a few dependencies sudo apt remove $1 #remove package and its data sudo apt purge $1 #remove unused dependencies sudo apt autoremove $1 Let’s execute it, i.e., “$1=ssh”: sudo ./removepack.sh ssh Script 4: Monitoring Systems Performance A Linux sysadmin has to keep an eye on the measurable components (CPU, RAM) of the system. These preferences vary from organization to organization. Here’s the Bash script that checks the RAM status, uptime, and CPU/memory stats, which are the primary components to monitor: #!/bin/bash echo "RAM Status" # free: RAM status free -h echo "Uptime" # uptime: how long the system has been running uptime echo "CPU/memory stats" # vmstat: Live CPU/memory stats vmstat 2 free -h: RAM status in human-readable form. uptime: how long the system has been running. vmstat 2: Live CPU/memory stats, i.e., records every 2 seconds.
Once we run the command, the output shows the “RAM Status”, the “Uptime”, and the “CPU/Memory” status: Script 5: Log Monitoring A Linux SysAdmin has to go through different log files to effectively manage the system. For instance, the “/var/log/auth.log” file contains the user logins/logouts, SSH access, sudo commands, and other authentication mechanisms. Here’s the Bash script that allows filtering these logs based on a search term. #!/bin/bash cat /var/log/auth.log | grep $1 The $1 positional parameter shows that this script would be run with one argument: We use “UID=0” as the variable’s value for this script. Thus, only those records are shown that contain UID=0: The log file can be changed in the script as per the requirement of the SysAdmin. Here are the log files associated with different types of logs in Linux:
Log File/Address | Purpose/Description
/var/log/ | The main directory where most of the log files are placed.
/var/log/apache2/ | Apache server logs (access and error logs).
/var/log/dmesg | Messages relevant to the device drivers.
/var/log/kern.log | Logs/messages related to the Kernel.
/var/log/syslog | General system logs and messages from different system services.
There are a few more. Let’s open the “/var/log” directory and look at the logs that a SysAdmin can use for fetching details inside each log file: Script 6: User Management | Adding a New User, Adding a User to a Group Adding a new user is one of the key activities in a Linux sysadmin’s daily tasks. There are numerous ways to add a new user with a Bash script. We have created the following Bash script that demonstrates the user creation: #!/bin/bash USER=$1 GROUP=$2 #Creating a group sudo groupadd $GROUP #Creating a User sudo adduser $USER #Adding a user to a group sudo usermod -aG $GROUP $USER 2 positional parameters are initialized, i.e., $1 for the user and $2 for the group. First, the required group is created. Then, the user is created.
Lastly, the newly created user is added to the group. Since the script has positional parameters, let’s execute it with the required 2 arguments (one for the username and the other for the group name): Similarly, the system administrator can create scripts to delete users as well. Script 7: Disk Management Disk management involves multiple commands, such as listing block devices with the “lsblk” command. To “mount” or “unmount” any filesystem, we run the “mount” and “umount” commands. Let’s incorporate a few commands in a Bash script to view some data about disks: #!/bin/bash #Disk space check df -h #Disk usage of a specific directory echo "Disk Usage of:" $1 du -sh $1 The $1 positional parameter refers to the path of the directory whose disk usage is to be checked: Let’s run the script: sudo ./dfdu.sh /home/adnan/Downloads Also, remember to provide the argument value, i.e., here, “$1=/home/adnan/Downloads”: Script 8: Service Management To manage any service, the SysAdmin has to run multiple commands for each service. For example, to start a service, the SysAdmin uses the “systemctl start” command and verifies its status through “systemctl status”.
Let’s make this task easy for Linux SysAdmins: Start a Service The following Bash script refers to only one service, i.e., every time the script only manages the NGINX service: #!/bin/bash sudo systemctl start nginx sudo systemctl enable nginx sudo systemctl status nginx For a more diverse use case, we declare a positional parameter to manage different services with each new run: Now, pass the value of the positional parameter at the time of executing the script: sudo ./sys.sh apache2 The “apache2” is the argument on which the script would run: Stop a Service In this case, we use the positional parameter to make it more convenient for the Linux SysAdmins or the regular users: #!/bin/bash sudo systemctl stop $1 sudo systemctl disable $1 sudo systemctl status $1 The $1 positional parameter refers to the specific service that is mentioned when executing the command: Let’s execute the command: sudo ./sys1.sh apache2 Script 9: Process Management A Linux System Administrator has a keen eye on the processes and manages each category of process as per the requirement. A simple script can kill specific processes. For instance, the script demonstrated here fetches the Zombie and Defunct processes and identifies the parent IDs of these processes: #!/bin/bash #Fetching the process ids of Zombie processes and defunct processes ZOM=`ps aux | grep 'Z' | awk '{print $2}'| grep [0-9]` DEF=`ps aux | grep 'defunct' | awk '{print $2}'| grep [0-9]` echo "Zombie and Defunct Process IDs are:" $ZOM "and" $DEF #Getting parent process ids of Zombies and defunct PPID1=`ps -o ppid= $ZOM` PPID2=`ps -o ppid= $DEF` echo "Parent process IDs of Zombie and Defunct Processes are:" $PPID1 "and" $PPID2 Zombie and Defunct process IDs are fetched and stored in a variable. The parent process IDs of the Zombie and defunct processes are fetched.
Then, the parent processes can be killed. Let’s execute it: sudo ./process.sh Script 10: Allow or Deny Services Over the Firewall A firewall is a virtual wall between your system and the systems connecting to it. We can set the firewall rules to allow or deny what we want. The firewall has a significant role in managing the system. Let’s automate allowing or denying any service on your system: Allow a Service Through the Firewall The following script enables SSH through the firewall: #!/bin/bash sudo ufw allow ssh sudo ufw enable sudo ufw status Let’s execute the script. We can also include a positional parameter here to use the same script for multiple services to be allowed on the firewall. For instance, the script below has only one positional parameter. This parameter’s value is to be provided at the time of executing the script. #!/bin/bash sudo ufw allow $1 sudo ufw enable sudo ufw status While executing, just specify the name of the service as an argument: sudo ./firewall.sh ssh Deny a Service or Deny All: We can either deny one service or deny all the services attempting to reach our system. The below script updates the default incoming policy to deny and disables the firewall as well. Note: These kinds of denial scripts are run when the overall system is in trouble, and we just need to make sure there is no service trying to approach our system. #!/bin/bash sudo ufw default deny incoming sudo ufw disable sudo ufw status sudo ufw default allow outgoing Running the script: Now that you have learned the 10 Bash scripts to automate daily SysAdmin tasks, let’s learn how to schedule the scripts so they run on a schedule we define. Bonus: Automating the Scripts Using Cron A cron job allows the SysAdmin to execute a specific script at a specific time, i.e., scheduling the execution of the script. It is managed through the crontab file.
First, use the “crontab -e” command to enter the edit mode of the crontab file: crontab -e To put a command on a schedule, you have to use a specific syntax in the crontab file. For example, the entry “1 * * * * /path/to/script” runs the script on the 1st minute of each hour. There are a total of 5 time parameters to be considered for each of the commands: m: minute of the hour, i.e., choose between 0-59. h: hour of the day, i.e., choose between 0-23. dom: day of the month → Choose between 1-31. mon: month of the year → Choose between 1-12. dow: day of the week → Choose between 0-7 (both 0 and 7 refer to Sunday). You can check the crontab listings using: crontab -l Important: Do you want a Linux Commands Cheat Sheet before you start using Bash? Click here to get a detailed commands cheat sheet. Conclusion Bash has eased the way of commanding in Linux. Usually, we run commands one at a time in a terminal session. With Bash scripts, we can automate command execution to accomplish tasks with the least human involvement. We write a script once and then reuse it for multiple tasks. With this post, we have demonstrated 10 Bash scripts to automate daily Linux System Administration tasks. FAQs How to run a Bash script as an Admin? Use the “sudo /path/to/script” command to run the Bash script as an Admin. It is recommended to restrict the executable permissions of the Bash scripts to only authorized persons. What is #!/bin/bash in Bash? The “#!/bin/bash” is the Bash shebang. It tells the system to use the “bash” interpreter to run this script. If we don’t use it, the script is run by the default shell instead. How do I give permissions to run a Bash script? The “chmod” command is used to give permissions to run the Bash script. For a Linux sysadmin script, use the “sudo chmod u+x /path/of/script” command to give permissions. What does $1 mean in Bash? In Bash, $1 is a positional parameter.
While running a Bash script, $1 refers to the first argument, $2 refers to the second argument, and so on.
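The positional parameters described in this FAQ can be seen end to end with a tiny throwaway script (the /tmp path and echo text here are made up for illustration):

```shell
#!/bin/bash
# Create a small demo script that prints its first two arguments
cat > /tmp/demo_args.sh <<'EOF'
#!/bin/bash
echo "First: $1, Second: $2"
EOF

# Make it executable for the owner, as in Prerequisite 1
chmod u+x /tmp/demo_args.sh

# Run it with two arguments; $1 becomes "ssh", $2 becomes "apache2"
/tmp/demo_args.sh ssh apache2   # prints: First: ssh, Second: apache2
```

The same pattern is what makes scripts like ./removepack.sh ssh and ./firewall.sh ssh above reusable for any package or service name.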
  27. by: Adnan Shabbir Mon, 23 Jun 2025 12:34:03 +0000 Basic Workflow of Ansible | What components are necessary sudo apt update sudo apt install ansible ansible --version Ansible Control Node IP: 192.168.140.139 (Where Ansible is configured) Ansible Host IPs: { Server 1 [172.17.33.7] Server 2 [192.168.18.140] } Inventory File: Default inventory file location: /etc/ansible/hosts. Usually, it is not available when we install Ansible from the default repositories of the distro, so we need to create it anywhere in the filesystem. If we create it in the default location, we don’t need to point Ansible to the file. However, when we create the inventory file somewhere other than the default, we need to tell Ansible its location. Inventory listing (Verifying the Inventory Listing): ansible-inventory --list -y SSH (as it is the primary connection medium of Ansible with its hosts): sudo apt install ssh Allow port 22 through the firewall on the client side: sudo ufw allow 22 Let’s check the status of the firewall: sudo ufw status Step 2: Establish a No-Password Login on a Specific Username | At the Host End Create a new dedicated user for the Ansible operations: sudo adduser username Adding the Ansible user to the sudo group: sudo usermod -aG sudo ansible_root Add the user to the sudo group (open the sudoers file): sudo nano /etc/sudoers SSH Connection (From Ansible Control Node to one Ansible Host): ssh username@host-ip-address ansible all -m ping -u ansible_root SSH key generation and copying the public key to the remote host: ssh-keygen Note: Copy the public key to the user that you will be using to control the hosts on various machines. ssh-copy-id username@host-ip-address Test All the Servers Listed in the Inventory File: Testing the Ansible Connection to the Ansible host (remember to use a username that is trusted at the host or has a passwordless login). I have the user “adnan” as the trusted user in the Ansible user list.
ansible all -m ping -u username Same with the different username configured on the host side: We can ping a specific group, i.e., in our case, we have a group named [servers] in the inventory.  
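The [servers] group mentioned above lives in the inventory file; a minimal INI-style inventory matching the two host IPs listed earlier might look like this (the file path, alias names, and user are assumptions, not from the original article):

```ini
# assumed path: ~/ansible/inventory (pass it with -i if not using /etc/ansible/hosts)
[servers]
server1 ansible_host=172.17.33.7
server2 ansible_host=192.168.18.140

[servers:vars]
ansible_user=ansible_root
```

With that file in place, the whole group can be tested with: ansible servers -m ping -i ~/ansible/inventory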
  28. by: Ani Mon, 23 Jun 2025 11:56:48 +0000 When young girls see women succeeding in tech, speaking up, being bold, and owning their space, it makes a difference for them. We need more of that. We, as women, can lift each other by simply sharing our stories, supporting one another, and being visible. That’s how we create a stronger, more diverse, and more inclusive tech environment – one where everyone feels they belong.
About me
I am Mari Martikainen. I work as a Partner Manager at Advania, and honestly, no two days are ever the same. My role revolves around building long-term, value-adding collaboration that goes beyond selling products to create competitive advantage and growth opportunities for all parties involved. At the core of everything I do is connection. Listening is one of the most valuable skills I have developed, and it has made a big difference in building trust and strong relationships.
The Beginning of My Sales Career
My sales career kicked off when I was 13 years old, selling strawberries. I have basically been in sales ever since. Studying sales and business felt natural for me. During my BBA studies, I understood the theoretical frameworks, what kind of impact sales can have on a business, and how strategic it can be. Looking back, I honestly would not be in the tech industry if it were not for one amazing teacher during my BBA studies. My teacher, Pirjo Pitkäpaasi, had a course where we had the opportunity to go for an internship. There were many companies we could choose from, but she encouraged us, a group of 20-year-old girls, to apply for internships in tech companies even though most of us thought it was more of a “guy thing.” That push changed everything for me. I picked a distributor company for my internship, and that’s where my journey in tech began. Ever since, I have worked in sales-related roles, like key account manager, sales manager, and now partner manager. I want to give my female teacher credit for me being in the tech field.
In my opinion, having a role model makes a difference to young girls. When young girls see women succeeding in tech, speaking up, being bold, and owning their space, it makes a difference for them. We need more of that. We, as women, can lift each other by simply sharing our stories, supporting one another, and being visible. That’s how we create a stronger, more diverse, and more inclusive tech environment – one where everyone feels they belong. In Finland, unfortunately, I’ve still experienced stereotypes, sexism, and the challenge of having your voice truly heard, especially in more male-dominated spaces. It still happens. That’s why I believe so strongly in the power of having women mentors and role models.
Mari Martikainen, Partner Manager, Advania
Working in Advania – a technology company with people at heart
From the very first day at Advania, I felt like part of the family. My colleagues welcomed me with open arms, and the team spirit here is just amazing. I love that at Advania, experts and strong professionalism are the core of our operations. At Advania, employees are valued and trusted, and that reflects in everything we do. One of the perks of working at Advania is the flexibility. One can choose how one wants to work: on-site, hybrid, or fully remotely. Being in the office is energizing. It’s where ideas start flowing, conversations spark, and, let’s be honest, the humor is great.
Always Learning
At Advania, knowledge-sharing is part of the culture. We have different internal channels where we post updates, insights, and learning materials. But of course, it is also up to you to stay curious and keep learning independently. One tool I always recommend is Google’s Digital Garage. It’s full of great (and free) courses on everything from digital marketing to coding. It’s a great way to keep your skills sharp and explore new areas at your own pace. Learning has always been part of my life.
I started my journey with a Bachelor’s degree in Business Administration, majoring in Sales, which laid the groundwork for my tech career. Later on, I completed a Master’s degree in Business Development and Leadership. That program was a game changer. I also completed a Management Essentials Program, which focused on core leadership skills.
How I found mindfulness
While I was working on my master’s thesis, I was also promoted to a team leader role. It was an exciting time, but also one of the most challenging periods of my career, mentally and physically. There was so much to learn at once, and I found myself feeling completely overwhelmed. That’s when I turned to mindfulness. I started incorporating short breathing exercises into my day, just five minutes at a time. It made a huge difference. It helped me manage stress more effectively. One technique I still use today is box breathing; it is a simple method, but it really works when things get hectic. Outside of work, I enjoy the little things that help me recharge: walks with my dog, going to the gym, and catching up with friends.
Quote I live by
One of my favorite quotes comes from Melinda Gates. “The more you can be authentic, the happier you are going to be – and life will work itself around that.” It resonates with me. I love TED Talks, but the one I love the most is the one by Reshma Saujani, “Teach girls bravery, not perfection.” It is such a powerful message that I think every woman in tech (and beyond) should hear. The post Role Model Blog: Mari Martikainen, Advania first appeared on Women in Tech Finland.
