Blog Entries posted by Blogger

  1. by: Abhishek Prakash
    Wed, 19 Mar 2025 12:17:19 GMT

    Imagine you found a cool text editor like Pulsar and downloaded it in the AppImage format. You enjoy using it and now want to make it the default application for markdown files.
You right-click on the file and choose the 'Open With' option, but Pulsar is not listed there.
    That's a problem, right? But it can be easily fixed by creating a desktop entry for that AppImage application.
    Let me show you how to do that.
    Step 1: Create a desktop entry for AppImage
    The very first step is to create a desktop file for the AppImage application. Here, we will use the Gear Lever app to create the desktop entry.
    Gear Lever is available as a Flatpak package from FlatHub. I know. Another package format, but that's how it is.
    Anyway, if you have Flatpak support enabled, install Gear Lever with this command:
flatpak install flathub it.mijorus.gearlever
Now, right-click on the AppImage file you downloaded and select Open With Gear Lever.
Click on the Unlock button in Gear Lever.
Now click on the "Move to app menu" button.
Verify everything is OK by searching for the app in the system menu.
Great! The application is now integrated into the desktop. Let's move to the second step.
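Under the hood, "Move to app menu" boils down to writing a standard .desktop launcher that points at the AppImage. If you'd rather do it by hand, here is a minimal sketch; the AppImage path and entry name are hypothetical, so adjust them to your setup:

```shell
# Hypothetical location of the AppImage; change this to where yours lives.
APPIMAGE="$HOME/Applications/Pulsar.AppImage"
ENTRY_DIR="$HOME/.local/share/applications"
mkdir -p "$ENTRY_DIR"

# Write a minimal desktop entry; %F lets the app receive file arguments.
cat > "$ENTRY_DIR/pulsar.desktop" <<EOF
[Desktop Entry]
Name=Pulsar
Exec=$APPIMAGE %F
Type=Application
Categories=Utility;TextEditor;
EOF

cat "$ENTRY_DIR/pulsar.desktop"
```

Gear Lever does this (and more, like moving the AppImage to a managed folder) for you, which is why I recommend it over the manual route.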
    Step 2: Setting default app through file manager
    Let's say you want to open all your .txt text files in the Pulsar editor.
The easiest way to achieve this is through the file manager.
    Open the file manager and right-click on the file of your choice. Now select the Open With option.
In the next window, you can start typing the name of the application to begin a search. It will also show the AppImage program you integrated with the desktop previously.
Once you spot the app, click on it to select it, then enable the "Always use for this file type" toggle. Then click Open as shown in the screenshot below.
That's it. From now on, your file will be opened in the AppImage of your choice. To verify this, right-click on the file: the first entry in the context menu will be the name of your AppImage application. In this case, Pulsar.
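Behind the scenes, that "Always use for this file type" toggle records an association in your per-user mimeapps.list file. Just to illustrate the format, here is a sketch that builds the same entry in a temporary file (the real file lives at ~/.config/mimeapps.list, and pulsar.desktop is a hypothetical entry name):

```shell
# Work on a temporary file instead of the real ~/.config/mimeapps.list
conf="$(mktemp)"

# The [Default Applications] section maps MIME types to .desktop entries
printf '[Default Applications]\n' > "$conf"
printf 'text/plain=pulsar.desktop\n' >> "$conf"

cat "$conf"
```

The file manager edits this file for you, so you rarely need to touch it directly; it's just useful to know where the setting ends up.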
Alternative method: Change apps from settings
Let's say you use AppImages for applications like a web browser or music player. Their defaults can be changed from the system settings.
    Given you have created the AppImage desktop entry following the first step, open the system settings in Ubuntu.
    Go to Apps → Default Apps.
    Here, set the apps for categories you want.
If you click on the drop-down menu corresponding to a category in settings, you can select an app. The AppImage app will also be listed here. In the screenshot above, you can see Vivaldi AppImage is set as the default browser.
    For Linux Mint users, you can set it using the Preferred Application settings.
Conclusion
A lot of AppImage 'issues', or should I say shortcomings, can be solved by desktop integration. It surprises me that AppImage doesn't provide an official way of doing these things.
Thankfully, wonderful open source developers help us out by creating handy utilities like Gear Lever.
    I hope this quick little tip helps you enjoy your AppImages 😄
  2. by: Guest Contributor
    Wed, 19 Mar 2025 06:52:59 GMT

    Introduction
    Backing up the database in MS SQL Server is vital to safeguard and recover the data in case of scenarios, like hardware failure, server failure, database corruption, etc. MS SQL Server provides different types of backups, such as differential, transactional, and full backup. A full backup allows you to restore the database in exactly the same form as it was at the time of creating the backup. The differential backup stores only the edits since the last full backup was created, whereas the transaction log backup is an incremental backup that stores all the transaction logs.
    When you restore SQL database backup, SQL Server offers two options to control the state of the database after restore. These are:
    RESTORE WITH RECOVERY
When you use the RESTORE WITH RECOVERY option, it indicates no more restores are required, and the state of the database changes to online after the restore operation.
    RESTORE WITH NORECOVERY
    You can select the WITH NORECOVERY option when you want to continue restoring additional backup files, like transactional or differential. It changes the database to restoring state until it’s recovered.
    Now, let’s learn how to use the WITH RECOVERY and NORECOVERY options when restoring the database.
    How to Restore MS SQL Server Database with the RECOVERY Option?
You can use the WITH RECOVERY option to restore a database from a full backup. It is the default option in the Restore Database window and is used when restoring the last backup, or the only backup, in a restore sequence. You can restore a database with the RECOVERY option by using SQL Server Management Studio (SSMS) or T-SQL commands.
    1. Restore Database with RECOVERY Option using SSMS
If you want to restore a database without writing code and scripts, you can use the graphical user interface in SSMS. Here are the steps to restore a database WITH RECOVERY using SSMS:
    Open SSMS and connect to your SQL Server instance. Go to Object Explorer, expand databases, and right-click on the database. Click Tasks > Restore.
    On the Restore database page, under General, select the database you want to restore and the available backup. Next, on the same page, click Options. In the Options window, select the recovery state as RESTORE WITH RECOVERY. Click OK.
    2. Restore Database with RECOVERY Option using T-SQL Command
    If you have a large number of operations that need to be managed or you want to automate the tasks, then you can use T-SQL commands. You can use the below T-SQL command to restore the database with the RECOVERY option.
RESTORE DATABASE [DBName] FROM DISK = 'C:\Backup\DB.bak' WITH RECOVERY;
How to Restore MS SQL Server Database with the NORECOVERY Option?
You can use the NORECOVERY option to restore multiple backup files. For example, if your system fails and you need to restore the SQL Server database to the point just before the failure occurred, then you need a multi-step restore. In this, each backup should be restored in sequence: Full Backup > Differential > Transaction log. Here, you need to restore each backup with NORECOVERY, except for the last one. This option changes the state of the database to RESTORING and makes the database inaccessible to users until the remaining backups are restored. You can restore the database with the NORECOVERY option by using SSMS or T-SQL commands.
    1. Using T-SQL Commands
    Here are the steps to restore MS SQL database with the NORECOVERY option by using T-SQL commands:
    Step 1: First, you need to restore the Full Backup by using the below command:
RESTORE DATABASE [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\FullBackup.bak' WITH NORECOVERY, STATS = 10;
Step 2: Then, you need to restore the Differential Backup. Use the below command:
RESTORE DATABASE [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\DifferentialBackup.bak' WITH NORECOVERY, STATS = 10;
Step 3: Now, you have to restore the Transaction log backup (the last backup, WITH RECOVERY). Here’s the command:
RESTORE LOG [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\LastTransactionLogBackup.bak' WITH RECOVERY, STATS = 10;
2. Using SQL Server Management Studio (SSMS)
You can follow the below steps to restore the database with the NORECOVERY option using SSMS:
In SSMS, go to the Object Explorer, expand Databases, and right-click the database node. Click Tasks, select Restore, and click Database. On the Restore Database page, select the source (i.e., the full backup) and the destination, and verify the details of the selected backup file in the section labelled "Backup sets to restore". Next, on the same Restore Database page, click Options. On the Options page, select RESTORE WITH NORECOVERY in the Recovery state field. Click OK.
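If you restore this way often, it can help to keep the whole sequence in a script file and run it from the command line. A sketch of that approach, reusing the placeholder database name and paths from the T-SQL steps above:

```shell
# Save the three-step restore sequence to a .sql file.
# Only the final step uses WITH RECOVERY, bringing the database online.
cat > restore_sequence.sql <<'EOF'
RESTORE DATABASE [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\FullBackup.bak' WITH NORECOVERY, STATS = 10;
RESTORE DATABASE [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\DifferentialBackup.bak' WITH NORECOVERY, STATS = 10;
RESTORE LOG [YourDatabaseName] FROM DISK = N'C:\Path\To\Your\LastTransactionLogBackup.bak' WITH RECOVERY, STATS = 10;
EOF

# You could then run it against your instance, for example:
#   sqlcmd -S YourServerName -i restore_sequence.sql
cat restore_sequence.sql
```

The server name above is a placeholder; the point is simply that keeping the sequence in one file makes the NORECOVERY/RECOVERY ordering hard to get wrong.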
    What if the SQL Database Backup File is Corrupted?
    Sometimes, the restore process can fail due to corruption in the database backup file. If your backup file is corrupted or you've not created a backup file, then you can take the help of a professional MS SQL repair tool, like Stellar Repair for MS SQL Technician. It is an advanced SQL repair tool to repair corrupt databases and backup files with complete integrity. The tool can repair backup files of any type - transactional log, full backup, and differential - without any file-size limitations. It can even restore deleted items from the backup database file. The tool is compatible with MS SQL Server version 2022, 2019, and earlier.
    Conclusion
    Above, we have discussed the stepwise process to restore the SQL database with RECOVERY and NORECOVERY options in MS SQL Server. If you face any error or issue while restoring the backup, then you can use a professional SQL repair tool, like Stellar Repair for MS SQL Technician. It can easily restore all the data from corrupt backup (.bak) files and save it in a new database file with complete precision. The tool can help resolve all the errors related to corruption in SQL database and backup (.bak) files.
  4. by: Abhishek Kumar
    Tue, 18 Mar 2025 10:58:47 +0530

    You know that moment when you dive into a project, thinking, "This should be easy," and then hours later, you're buried under obscure errors, outdated forum posts, and conflicting advice?
    Yeah, that was me while trying to package my Python app into a .deb file.
    It all started with my attempt to revive an old project, which some of our long-time readers might remember - Compress PDF.
Since I’ve been learning Python these days, I thought, why not customize the UI, tweak the logic, and give it a fresh start?
The Python app was working great when running inside a virtual environment, but I was more interested in shipping it as a .deb binary, making installation as simple as dpkg -i app.deb.
Every tutorial I read online covered bits and pieces, but none walked me through the entire process. So here I am, documenting my experience while packaging my script into a .deb file.
    Choosing the right packaging tool
    For turning a Python script into an executable, I am using PyInstaller. Initially, I tried using py2deb, a tool specifically meant for creating .deb packages.
    Bad idea. Turns out, py2deb hasn’t been maintained in years and doesn’t support newer Python versions.
    PyInstaller takes a Python script and bundles it along with its dependencies into a single standalone executable. This means users don’t need to install Python separately, it just works out of the box.
    Step 1: Install PyInstaller
    First, make sure you have PyInstaller installed. If not, install it using pip:
pip install pyinstaller
Check if it's installed correctly:
pyinstaller --version
Step 2: Create the .deb package structure
To keep things clean and structured, .deb packages follow a specific folder structure.
compressor/
├── pdf-compressor/
│   ├── DEBIAN/
│   │   ├── control
│   │   ├── postinst
│   ├── usr/
│   │   ├── bin/
│   │   ├── share/
│   │   │   ├── applications/
│   │   │   ├── icons/
│   │   │   ├── pdf-compressor/
Let’s create it:
mkdir -p pdf-compressor/DEBIAN
mkdir -p pdf-compressor/usr/bin
mkdir -p pdf-compressor/usr/share/applications
mkdir -p pdf-compressor/usr/share/icons/
mkdir -p pdf-compressor/usr/share/pdf-compressor
What each directory is for:
usr/bin/: Stores the executable file.
usr/share/applications/: Contains the .desktop file (so the app appears in the system menu).
usr/share/icons/: Stores the app icon.
DEBIAN/: Contains metadata like package info and dependencies.
Optional: Packaging dependencies
    Before packaging the app, I wanted to ensure it loads assets and dependencies correctly whether it's run as a script or a standalone binary.
    Initially, I ran into two major problems:
The in-app logo wasn’t displaying properly because asset paths were incorrect when running as a packaged executable.
Dependency errors occurred when running the app as an executable.
To keep everything self-contained and avoid conflicts with system packages, I created a virtual environment inside pdf-compressor/usr/share/pdf-compressor:
python3 -m venv venv
source venv/bin/activate
Then, I installed all the dependencies inside it:
pip install -r requirements.txt
deactivate
This ensures that dependencies are bundled properly and won’t interfere with system packages.
    Now to ensure that the app correctly loads assets and dependencies, I modified the script as follows:
import sys
import os

# Ensure the virtual environment is used
venv_path = "/usr/share/pdf-compressor/venv"
if os.path.exists(venv_path):
    sys.path.insert(0, os.path.join(venv_path, "lib", "python3.10", "site-packages"))

# Detect if running as a standalone binary
if getattr(sys, 'frozen', False):
    app_dir = sys._MEIPASS  # PyInstaller's temp folder
else:
    app_dir = os.path.dirname(os.path.abspath(__file__))

# Set correct paths for assets
icon_path = os.path.join(app_dir, "assets", "icon.png")
logo_path = os.path.join(app_dir, "assets", "itsfoss-logo.webp")
pdf_icon_path = os.path.join(app_dir, "assets", "pdf.png")

print("PDF Compressor is running...")
What’s happening here?
sys._MEIPASS → When the app is packaged with PyInstaller, assets are extracted to a temporary folder. This ensures they are accessible.
Virtual environment path (/usr/share/pdf-compressor/venv) → If it exists, it is added to sys.path, so installed dependencies can be found.
Asset paths → Dynamically assigned so they work in both script and standalone modes.
After making these changes, my issue was mostly resolved.
📋 I know there are other ways to handle this, but since I'm still learning, this approach worked well for me. If I find a better solution in the future, I’ll definitely improve it!
Step 3: Compiling the Python script into an executable binary
    Now comes the exciting part, turning the Python script into a standalone executable. Navigate to the root directory where the main Python script is located. Then run:
pyinstaller --name=pdf-compressor --onefile --windowed --add-data "assets:assets" pdf-compressor.py
--onefile: Packages everything into a single executable file.
--windowed: Hides the terminal (useful for GUI apps).
--name=pdf-compressor: Sets the output filename.
--add-data "assets:assets": Ensures images/icons are included.
After this, PyInstaller will create a dist/ directory; inside, you'll find pdf-compressor. This is the standalone app!
    Try running it:
./dist/pdf-compressor
If everything works as expected, you’re ready to package it into a .deb file.
    Step 4: Move the executable to the correct location
    Now, move the standalone executable into the bin directory:
mv dist/pdf-compressor pdf-compressor/usr/bin/pdf-compressor
Step 5: Add an application icon
I don't know about you, but to me, an app without an icon, or with just a generic gear icon, feels incomplete. An icon gives your app its vibe.
    Let’s place the assets directory which contains the icon and logo files inside the right directory:
cp -r assets/ pdf-compressor/usr/share/pdf-compressor/
Step 6: Create a desktop file
    To make the app appear in the system menu, we need a .desktop file. Open a new file:
nano pdf-compressor/usr/share/applications/pdf-compressor.desktop
Paste this content:
[Desktop Entry]
Name=PDF Compressor
Comment=Compress PDF files easily
Exec=/usr/bin/pdf-compressor
Icon=/usr/share/icons/pdf-compressor.png
Terminal=false
Type=Application
Categories=Utility;
Exec → Path to the executable.
Icon → App icon location.
Terminal=false → Ensures it runs as a GUI application.
Save and exit (CTRL+X, then Y, then Enter).
    Step 7: Create the control file
    At the heart of every .deb package is a metadata file called control.
    This file is what tells the Debian package manager (dpkg) what the package is, who maintains it, what dependencies it has, and a brief description of what it does.
    That’s why defining them here ensures a smooth experience for users.
    Inside the DEBIAN/ directory, create a control file:
nano pdf-compressor/DEBIAN/control
Then I added the following content to it:
Package: pdf-compressor
Version: 1.0
Section: utility
Priority: optional
Architecture: amd64
Depends: python3, ghostscript
Recommends: python3-pip, python3-venv
Maintainer: Your Name <your@email.com>
Description: A simple PDF compression tool.
 Compress PDF files easily using Ghostscript.
Step 8: Create the postinst script
    The post-installation (postinst) script as the name suggests is executed after the package is installed. It ensures all dependencies are correctly set up.
nano pdf-compressor/DEBIAN/postinst
Add this content:
#!/bin/bash
set -e  # Exit if any command fails

echo "Setting up PDF Compressor..."
chmod +x /usr/bin/pdf-compressor

# Install dependencies inside a virtual environment
python3 -m venv /usr/share/pdf-compressor/venv
source /usr/share/pdf-compressor/venv/bin/activate
pip install --no-cache-dir pyqt6 humanize

echo "Installation complete!"
update-desktop-database
What’s happening here?
set -e → Ensures the script stops on failure.
python3 -m venv → Creates a virtual environment, so dependencies are installed in an isolated way.
chmod +x /usr/bin/pdf-compressor → Ensures the binary is executable.
update-desktop-database → Updates the system’s application database.
Setting the correct permission for postinst is important:
chmod 755 pdf-compressor/DEBIAN/postinst
Step 9: Build & Install the deb package
    After all the hard work, it's finally time to bring everything together. To build the package, we’ll use dpkg-deb --build, a built-in Debian packaging tool.
    This command takes our structured pdf-compressor directory and turns it into a .deb package that can be installed on any Debian-based system.
dpkg-deb --build pdf-compressor
If everything goes well, you should see output like:
dpkg-deb: building package 'pdf-compressor' in 'pdf-compressor.deb'.
Now, let’s install it and see our application in action!
sudo dpkg -i pdf-compressor.deb
💡 If installation fails due to missing dependencies, fix them using: sudo apt install -f
This installs pdf-compressor onto your system just like any other Debian package. To verify, you can either launch it from the Applications menu or directly via terminal:
pdf-compressor
Final thoughts
    Packaging a Python application isn’t as straightforward as I initially thought. During my research, I couldn't find any solid guide that walks you through the entire process from start to finish.
    So, I had to experiment, fail, and learn, and that’s exactly what I’ve shared in this guide. Looking back, I realize that a lot of what I struggled with could have been simplified had I known better. But that’s what learning is all about, right?
    I believe that this write-up will serve as a good starting point for new Python developers like me who are still struggling to package their projects.
That said, I know this isn’t the only way to package Python applications; there are probably better and more efficient approaches out there. So, I’d love to hear from you!
    Also, if you found this guide helpful, be sure to check out our PDF Compressor project on GitHub. Your feedback, contributions, and suggestions are always welcome!
    Happy coding! 😊
  5. Styling Counters in CSS

    by: Juan Diego Rodríguez
    Mon, 17 Mar 2025 16:25:04 +0000

    Yes, you are reading that correctly: This is indeed a guide to styling counters with CSS. Some of you are cheering, “Finally!”, but I understand that the vast majority of you are thinking, “Um, it’s just styling lists.” If you are part of the second group, I get it. Before learning and writing more and more about counters, I thought the same thing. Now I am part of the first group, and by the end of this guide, I hope you join me there.
    There are many ways to create and style counters, which is why I wanted to write this guide and also how I plan to organize it: going from the most basic styling to the top-notch level of customization, sprinkling in between some sections about spacing and accessibility. It isn’t necessary to read the guide in order — each section should stand by itself, so feel free to jump to any part and start reading.
    Table of Contents
HTML Based Customization
Styling Simple Counters in CSS
Custom Counters
Custom Counters Styles
Images in Counters
Spacing Things Out
Accessibility
Almanac references
Further reading
Customizing Counters in HTML
List elements were among the first 18 tags that made up HTML. Their representation wasn’t defined yet, but a bulleted list was deemed fitting for unordered lists, and a sequence of numbered paragraphs for an ordered list.
    Cool but not enough; soon people needed more from HTML alone and new list attributes were added throughout the years to fill in the gaps.
    start
    The start attribute takes an integer and sets from where the list should start:
<ol start="2">
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
Although, it isn’t limited to positive values; zero and negative integers are allowed as well:
<ol start="0">
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
<ol start="-2">
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
type
    We can use the type attribute to change the counter’s representation. It’s similar to CSS’s list-style-type, but it has its own limited uses and shouldn’t be used interchangeably*. Its possible values are:
1 for decimal numbers (default)
a for lowercase alphabetic
A for uppercase alphabetic
i for lowercase Roman numbers
I for uppercase Roman numbers
<ol type="a">
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
<ol type="i">
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
It’s weird enough to use type on ol elements, but it still has some use cases*. However, usage with the ul element is downright deprecated.
    value
    The value attribute sets the value for a specific li element. This also affects the values of the li elements after it.
<ol>
  <li>Bread</li>
  <li value="4">Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
reversed
    The reversed attribute will start counting elements in reverse order, so from highest to lowest.
<ol reversed>
  <li>Bread</li>
  <li>Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
All can be combined
    If you ever feel the need, all list attributes can be combined in one (ordered) list.
<ol reversed start="2" type="i">
  <li>Bread</li>
  <li value="4">Milk</li>
  <li>Butter</li>
  <li>Apples</li>
</ol>
* Do we need them if we now have CSS?
    Funny enough, the first CSS specification already included list-style-type and other properties to style lists, and it was released before HTML 3.2 — the first HTML spec that included some of the previous list attributes. This means that at least on paper, we had CSS list styling before HTML list attributes, so the answer isn’t as simple as “they were there before CSS.”
    Without CSS, a static page (such as this guide) won’t be pretty, but at the very least, it should be readable. For example, the type attribute ensures that styled ordered lists won’t lose their meaning if CSS is missing, which is especially useful in legal or technical documents. Some attributes wouldn’t have a CSS equivalent until years later, including reversed, start and value.
    Styling Simple Counters in CSS
    For most use cases, styling lists in CSS doesn’t take more than a couple of rules, but even in that brevity, we can find different ways to style the same list.
    ::marker or ::before?
    The ::marker pseudo-element represents the counter part of a list item. As a pseudo-element, we can set its content property to any string to change its counter representation:
li::marker {
  content: "💜 ";
}
The content in pseudo-elements also accepts images, which allows us to create custom markers:
li::marker {
  content: url("./logo.svg") " ";
}
By default, only li elements have a ::marker, but we can give it to any element by setting its display property to list-item:
h4 {
  display: list-item;
}
h4::marker {
  content: "◦ ";
}
This will give each h4 a ::marker, which we can change to any string.
However, ::marker is an odd case: it was described in the CSS spec more than 20 years ago, but only gained somewhat reliable support in 2020 and still isn’t fully supported in Safari. What’s worse, only font-related properties (such as font-size or color) are allowed, so we can’t change its margin or background-color.
    This has led many to use ::before instead of ::marker, so you’ll see a lot of CSS in which the author got rid of the ::marker using list-style-type: none and used ::before instead:
li {
  /* removes ::marker */
  list-style-type: none;
}
li::before {
  /* mimics ::marker */
  content: "▸ ";
}
list-style-type
    The list-style-type property can be used to replace the ::marker‘s string. Unlike ::marker, list-style-type has been around forever and is most people’s go-to option for styling lists. It can take a lot of different counter styles that are built-in in browsers, but you will probably use one of the following:
    For unordered lists:
disc circle square
ul { list-style-type: square; } ul { list-style-type: circle; }
For ordered lists:
decimal decimal-leading-zero lower-roman upper-roman lower-alpha upper-alpha
ol { list-style-type: upper-roman; } ol { list-style-type: lower-alpha; }
You can find a full list of valid counter styles here.
It can also take none to remove the marker altogether, and, more recently, it can also take a <string> for ul elements.
ul { list-style-type: none; } ul { list-style-type: "➡️ "; }
Creating Custom Counters
For a long time, there wasn’t a CSS equivalent to the HTML reversed, start or value attributes. So if we wanted to reverse or change the start of multiple lists, instead of a CSS class to rule them all, we had to change their HTML one by one. You can imagine how repetitive that would get.
    Besides, list attributes simply had their limitations: we can’t change how they increment with each item and there isn’t an easy way to attach a prefix or suffix to the counter. And maybe the biggest reason of all is that there wasn’t a way to number things that weren’t lists!
    Custom counters let us number any collection of elements with a whole new level of customization. The workflow is to:
Initiate the counter with the counter-reset property.
Increment the counter with the counter-increment property.
Individually set the counters with the counter-set property.
Output the counters with either the counter() or counters() function.
As I mentioned, we can make a list out of any collection of elements, and while this has its accessibility concerns, just for demonstration’s sake, let’s try to turn a collection of headings like this…
    <div class="index"> <h2>The Old Buccaneer</h2> <h2>The Sea Cook</h2> <h2>My Shore Adventure</h2> <h2>The Log Cabin</h2> <h2>My Sea Adventure</h2> <h2>Captain Silver</h2> </div> …into something that looks list-like. But just because we can make an element look like a list doesn’t always mean we should do it. Be sure to consider how the list will be announced by assistive technologies, like screen readers, and see the Accessibility section for more information.
    Initiate counters: counter-reset
    The counter-reset property takes two things: the name of the counter as a custom ident and the initial count as an integer. If the initial count isn’t given, then it will start at 0 by default:
    .index { counter-reset: index; /* The same as */ counter-reset: index 0; } You can initiate several counters at once with a space-separated list and set a specific value for each one:
    .index { counter-reset: index another-counter 2; } This will start our index counter at 0 (the default) and another-counter at 2.
    Set counters: counter-set
The counter-set property works similarly to counter-reset: it takes the counter’s name followed by an integer, but this time it will set the count from that element onwards. If the integer is omitted, it will set the counter to 0 by default.
    h2:nth-child(2) { counter-set: index; /* same as */ counter-set: index 0; } And we can set several counters at once, as well:
    h2:nth-child(3) { counter-set: index 5 another-counter 10; } This will set the third h2 element’s index count to 5 and another-counter to 10.
    If there isn’t an active counter with that name, counter-set will initiate it at 0.
    Increment counters: counter-increment
    Right now, we have our counter, but it will stagnate at 0 since we haven’t set which elements should increment it. We can use the counter-increment property for that, which takes the name of the counter and how much it should be incremented by. If we only write the counter’s name, it will increment it by 1.
    In this case, we want each h2 title to increment the counter by one, and that should be as easy as setting counter-increment to the counter’s name:
    h2 { counter-increment: index; /* same as */ counter-increment: index 1; } Just like with counter-reset, we can increment several counters at once in a space-separated list:
    h2 { counter-increment: index another-counter 2; } This will increment index by one and another-counter by two on each h2 element.
    If there isn’t an active counter with that name, counter-increment will initiate it at 0.
    Output simple lists: counter()
    So far, we won’t see any change in the counter representation. The counters are counting but not showing, so to output the counter’s result we use the counter() and counters() functions. Yes, those are two functions with similar names but important differences.
    The counter() function takes the name of a counter and outputs its content as a string. If many active counters have the same name, it will select the one that is defined closest to the element, so we can only output one counter at a time.
    As mentioned earlier, we can set an element’s display to list-item to work with its ::marker pseudo-element:
    h2 { display: list-item; } Then, we can use counter() in its content property to output the current count. This allows us to prefix and suffix the counter by writing a string before or after the counter() function:
    h2::marker { content: "Part " counter(index) ": "; } Alternatively, we can use the everyday ::before pseudo-element to the same effect:
h2::before { content: "Part " counter(index) ": "; }
Output nested lists: counters()
    counter() works great for most situations, but what if we wanted to do a nested list like this:
1. Paradise Beaches
1.1. Hawaiian Islands
1.2. Caribbean Getaway
1.2.1. Aruba
1.2.2. Barbados
2. Outdoor Escapades
2.1. National Park Hike
2.2. Mountain Skiing Trip
We would need to initiate individual counters and write different counter() functions for each level of nesting, and that’s only possible if we know how deep the nesting goes, which we simply don’t at times.
    In this case, we use the counters() function, which also takes the name of a counter as an argument but instead of just outputting its content, it will join all active counters with that name into a single string and output it. To do so, it takes a string as a second argument, usually something like a dot (".") or dash ("-") that will be used between counters to join them.
    We can use counter-reset and counter-increment to initiate a counter for each ol element, while each li will increment its closest counter by 1:
ol { counter-reset: item; } li { counter-increment: item; } But this time, instead of using counter() (which would only display one counter per item), we will use counters() to join all active counters by a string (e.g. ".") and output them at once:
li::marker { content: counters(item, ".") ". "; }
Styling Counters
    Both the counter() and counters() functions accept one additional, yet optional, last argument representing the counter style, the same ones we use in the list-style-type property. So in our last two examples, we could change the counter styles to Roman numbers and alphabetic letters, respectively:
h2::marker { content: "Part " counter(index, upper-roman) ": "; } li::marker { content: counters(item, ".", lower-alpha) ". "; }
Reverse Counters
    It’s possible to count backward using custom counters, but we need to know beforehand the number of elements we’ll count. So for example, if we want to make a Top Five list in reversed order:
    <h1>Best rated animation movies</h1> <ol> <li>Toy Story 2</li> <li>Toy Story 1</li> <li>Finding Nemo</li> <li>How to Train your Dragon</li> <li>Inside Out</li> </ol> We have to initiate our counter at the total number of elements plus one (so it doesn’t end at 0):
    ol { counter-reset: movies 6; } And then set the increment to a negative integer:
    li { counter-increment: movies -1; } To output the count we use counter() as we did before:
    li::marker { content: counter(movies) ". "; } There is also a way to write reversed counters supported in Firefox, but it hasn’t shipped to any other browser. Using the reversed() functional notation, we can wrap the counter name while initiating it to say it should be reversed.
ol { counter-reset: reversed(movies); } li { counter-increment: movies; } li::marker { content: counter(movies) ". "; }
Styling Custom Counters
The last section was all about custom counters: we changed where they started and how they increased, but at the end of the day, their output was styled in one of the browser’s built-in counter styles, usually decimal. Now, using @counter-style, we’ll build our own counter styles to style any list.
    The @counter-style at-rule, as its name implies, lets you create custom counter styles. After writing the at-rule it takes a custom ident as a name:
    @counter-style my-counter-style { /* etc. */ } That name can be used inside the properties and functions that take a counter style, such as list-style-type or the last argument in counter() and counters():
    ul { list-style-type: my-counter-style; } li::marker { content: counter(my-counter, my-counter-style) ". "; } What do we write inside @counter-style? Descriptors! How many descriptors? Honestly, a lot. Just look at this quick review of all of them:
system: specifies which algorithm will be used to construct the counter’s string representation. (Obligatory)
negative: specifies the counter representation if the counter value is negative. (Optional)
prefix: specifies a character that will be attached before the marker representation and any negative sign. (Optional)
suffix: specifies a character that will be attached after the marker representation and any negative sign. (Optional)
range: specifies the range in which the custom counter is used. Counter values outside the range will drop to their fallback counter style. (Optional)
pad: specifies a minimum width all representations have to reach. Representations shorter than the minimum are padded with a character. (Optional)
fallback: specifies a fallback counter used whenever a counter style can’t represent a counter value. (Optional)
symbols: specifies the symbols used by the construction system algorithm. It’s obligatory unless the system is set to additive or extends.
additive-symbols: specifies the symbols used by the construction algorithm when the system descriptor is set to additive.
speak-as: specifies how screen readers should read the counter style. (Optional)
However, I’ll focus on the required descriptors first: system, symbols and additive-symbols.
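To get a feel for how a few of these descriptors combine before digging into each one, here’s a sketch (the counter-style name is made up):

```css
/* A hypothetical style: zero-padded decimals wrapped in brackets,
   producing markers like [01], [02], … [10], [11]. */
@counter-style bracketed {
  system: extends decimal;  /* reuse the built-in decimal algorithm */
  prefix: "[";
  suffix: "] ";
  pad: 2 "0";               /* pad to a minimum width of 2 with "0" */
}

ol { list-style-type: bracketed; }
```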
    The system descriptor
    The symbols or additive-symbols descriptors define the characters used for the counter style, while system says how to use them.
    The valid system values are:
cyclic numeric alphabetic symbolic additive fixed extends
cyclic will go through the characters set on symbols and repeat them. We can use just one character in the symbols to mimic a bullet list:
@counter-style cyclic-example { system: cyclic; symbols: "⏵"; suffix: " "; } Or alternate between two or more characters:
@counter-style cyclic-example { system: cyclic; symbols: "🔸" "🔹"; suffix: " "; } fixed will write the characters in the symbols descriptor just one time. In the last example, only the first two items will have a custom counter if set to fixed, while the others will drop to their fallback, which is decimal by default.
    @counter-style multiple-example { system: fixed; symbols: "🔸" "🔹"; suffix: " "; } We can set when the custom counters start by appending an <integer> to the fixed value. For example, the following custom counter will start at the fourth item:
@counter-style fixed-example { system: fixed 4; symbols: "💠"; suffix: " "; } numeric will number list items using a custom positional system (base-2, base-8, base-16, etc.). Positional systems start at 0, so the first character at symbols will be used as 0, the next as 1, and so on. Knowing this, we can make an ordered list using non-decimal numerical systems like hexadecimal:
@counter-style numeric-example { system: numeric; symbols: "0" "1" "2" "3" "4" "5" "6" "7" "8" "9" "A" "B" "C" "D" "E" "F"; suffix: ". "; } alphabetic will enumerate the list items using a custom alphabetical system. It’s similar to the numeric system but with the key difference that it doesn’t have a character for 0, so the next digits are just repeated. For example, if our symbols are "A" "B" "C" they will wrap to "AA", "AB", "AC", then "BA", "BB", "BC" and so on.
    Since there is no equivalent for 0 and negative values, they will drop down to their fallback.
@counter-style alphabetic-example { system: alphabetic; symbols: "A" "B" "C"; suffix: ". "; } symbolic will go through the characters in symbols repeating them one more time each iteration. So for example, if our symbols are "A", "B", "C", it will go “A”, “B”, and “C”, double them in the next iteration as “AA”, “BB”, and “CC”, then triple them as “AAA”, “BBB”, “CCC” and so on.
    Since there is no equivalent for 0 and negative values, they will drop down to their fallback.
@counter-style symbolic-example { system: symbolic; symbols: "A" "B" "C"; suffix: ". "; } additive will give characters a numerical value and add them together to get the counter representation. You can think of it as the way we usually count bills: if we have only $5, $2, and $1 bills, we will add them together to get the desired quantity, trying to keep the number of bills used at a minimum. So to represent 10, we will use two $5 bills instead of ten $1 bills.
    Since there is no equivalent for negative values, they will drop down to their fallback.
@counter-style additive-example { system: additive; additive-symbols: 5 "5️⃣", 2 "2️⃣", 1 "1️⃣"; suffix: " "; } Notice how we use additive-symbols when the system is additive, while we use just symbols for the previous systems.
extends will create a custom style from another one but with modifications. To do so, it takes a <counter-style-name> after the extends value. For example, we could change the decimal counter style default’s suffix to a closing parenthesis (")"):
@counter-style extends-example { system: extends decimal; suffix: ") "; } Per spec, “If a @counter-style uses the extends system, it must not contain a symbols or additive-symbols descriptor, or else the @counter-style rule is invalid.”
    The other descriptors
    The negative descriptor allows us to create a custom representation for a list’s negative values. It can take one or two characters: The first one is prepended to the counter, and by default it’s the hyphen-minus ("-"). The second one is appended to the symbol. For example, we could enclose negative representations into parenthesis (2), (1), 0, 1, 2:
@counter-style negative-example { system: extends decimal; negative: "(" ")"; } The prefix and suffix descriptors allow us to prepend and append, respectively, a character to the counter representation. We can use it to add a character at the beginning of each counter using prefix:
@counter-style prefix-suffix-example { system: extends decimal; prefix: "("; suffix: ") "; } The range descriptor defines an inclusive range in which the counter style is used. We can define a bounded range by writing one <integer> next to another. For example, a range of 2 4 will affect elements 2, 3, and 4:
@counter-style range-example { system: cyclic; symbols: "‣"; suffix: " "; range: 2 4; } On the other hand, using the infinite value we can unbound the range to one side. For example, we could write infinite 3 so all items up to 3 have a counter style:
@counter-style range-example { system: alphabetic; symbols: "A" "B" "C"; suffix: ". "; range: infinite 3; } The pad descriptor takes an <integer> that represents the minimum width for the counter and a character to pad it. For example, a zero-padded counter style would look like the following:
@counter-style pad-example { system: extends decimal; pad: 3 "0"; } The fallback descriptor allows you to define which counter style should be used as a fallback whenever we can’t represent a specific count. For example, the following counter style is fixed and will fall back to lower-roman after the fourth item:
@counter-style fallback-example { system: fixed; symbols: "⚀" "⚁" "⚂" "⚃"; fallback: lower-roman; } Lastly, the speak-as descriptor hints to speech readers on how the counter style should be read. It can be:
auto: uses the system default.
bullets: reads an unordered list. By default, cyclic systems are read as bullets.
numbers: reads the counter’s numeric value in the content language. By default, additive, fixed, numeric, and symbolic systems are read as numbers.
words: reads the counter representation as words.
spell-out: reads the counter representation letter by letter. By default, alphabetic is read as spell-out.
<counter-style-name>: uses that counter style’s speak-as value.
@counter-style speak-as-example { system: extends decimal; prefix: "Item "; suffix: " is "; speak-as: words; }
symbols()
The symbols() function defines a one-off counter style without the need to write a whole @counter-style, but at the cost of missing some features. It can be used inside the list-style-type property and the counter() and counters() functions.
    ol { list-style-type: symbols(cyclic "🥬"); } However, its browser support is appalling since it’s only supported in Firefox.
    Images in Counters
    In theory, there are four ways to add images to lists:
list-style-image property
content property
symbols descriptor in @counter-style
symbols() function
In practice, the only supported ways are using list-style-image and content, since support for images in @counter-style and support in general for symbols() isn’t the best (it’s pretty bad).
    list-style-image
The list-style-image property can take an image or a gradient. In this case, we want to focus on images, but gradients can also be used to create custom square bullets:
li { list-style-image: conic-gradient(red, yellow, lime, aqua, blue, magenta, red); } Sadly, changing the shape would require further styling of the ::marker, and that isn’t currently possible.
To use an image, we pass its url(); make sure it is small enough to work as a counter:
li { list-style-image: url("./logo.svg"); }
content
The content property works similarly to list-style-image: we pass the image’s url() and append a space string to provide a little spacing before the text:
li::marker { content: url("./logo.svg") " "; }
Spacing Things Out
You may notice in the last part how the image — depending on its size — isn’t completely centered on the text, and also that we provide a space string in content properties for spacing instead of giving things either a padding or margin. There’s an explanation for all of this, since spacing is one of the biggest pain points when it comes to styling lists.
    Margins and paddings are wacky
    Spacing the ::marker from the list item should be as easy as increasing the marker’s or list margin, but in reality, it takes a lot more work.
First, the padding and margin properties aren’t allowed in ::marker. Second, lists have two types of elements: the list wrapper (usually ol or ul) and the list item (li), each with a default padding and margin. Which should we use?
    You can test each property in this demo by Šime Vidas in his article dedicated to the gap after the list marker:
You’ll notice how the only property that affects the spacing between the ::marker and the text is the li element’s padding property, while the rest of the spacing properties will move the entire list item. Another thing to note is that even when the padding is set to 0px, there is a space after the ::marker. This is set by browsers and will vary depending on which browser you’re using.
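Given that conclusion, a sketch of widening the gap while keeping the marker where it is could lean on the list item’s padding (the exact value here is arbitrary):

```css
/* Assuming the default list-style-position: outside,
   the li's left padding is the spacing property that
   actually grows the gap between marker and text. */
li {
  padding-left: 1.5em;
}
```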
    list-style-position
    One last thing you may notice in the demo is a checkbox for the list-style-position property, and how once you set it to inside, the ::marker will move to the inside of the box, at the cost of removing any spacing given by the list item’s padding.
    By default, markers are rendered outside the ul element’s box. A lot of times, this isn’t the best behavior: markers sneak out of elements, text-align won’t align the marker, and paradoxically, centered lists with flex or grid won’t look completely centered since the markers are outside the box.
To change this, we can use the list-style-position property. It takes either outside (the default) or inside, defining whether the list marker is positioned outside or inside the ul box.
ul { border: solid 2px red; } .inside { list-style-position: inside; } .outside { list-style-position: outside; }
content with empty strings
    In the same article, Šime says:
And I completely agree that it’s true, but with just ::marker there isn’t a correct way to add spacing between the ::marker and the list text, especially since most people prefer to set list-style-position to inside. So, as much as it pains me to say it, the simplest way to increase the gap after the marker is to suffix the content property with a space string:
li::marker { content: "• "; } BUT! This is only if we want to be purists and stick with the ::marker pseudo-element because, in reality, there is a much better way to position that marker: not using it at all.
    Just use ::before
There is a reason people love using ::before more than ::marker. With ::marker, we can’t use something like CSS Grid or Flexbox, since changing the display of li to something other than list-item will remove the ::marker, nor can we set the ::marker’s height or width properties to better align it.
    Let’s be real, ::marker works fine when we just want simple styling. But we are not here for simple styling! Once we want something more involved, ::marker will fall short and we’ll have to use the ::before pseudo-element.
    Using ::before means we can use Flexbox, which allows for two things we couldn’t do before:
Vertically center the marker with the text
Easily increase the gap after the marker
Both can be achieved with Flexbox:
    li { display: flex; align-items: center; /* Vertically center the marker */ gap: 20px; /* Increases the gap */ list-style-type: none; } The original ::marker is removed by changing the display.
Accessibility
In a previous section we turned things that weren’t lists into elements that merely look like lists, so the question arises: should we actually do that? Doesn’t it hurt accessibility to make something look like a list when it isn’t one? As always, it depends. For a visual user, all the examples in this entry look all right, but for assistive technology users, some examples lack the necessary markup for accessible navigation.
    Take for example our initial demo. Here, listing titles serves as decoration since the markup structure is given by the titles themselves. It’s the same deal for the counting siblings demo from earlier, as assistive technology users can read the document through the title structure.
However, this is the exception rather than the norm. That means a couple of the examples we looked at would fail if we need the list to be announced as a list by assistive technology, like screen readers. For example, this list we looked at earlier:
    <div class="index"> <h2>The Old Buccaneer</h2> <h2>The Sea Cook</h2> <h2>My Shore Adventure</h2> <h2>The Log Cabin</h2> <h2>My Sea Adventure</h2> <h2>Captain Silver</h2> </div> …should be written as a list instead:
    <ul class="index"> <li>The Old Buccaneer</li> <li>The Sea Cook</li> <li>My Shore Adventure</li> <li>The Log Cabin</li> <li>My Sea Adventure</li> <li>Captain Silver</li> </ul> Listing elements is rarely used just as decoration, so as a rule of thumb, use lists in the markup even if you are planning to change them with CSS.
Almanac References

List Properties
list-style (Almanac, Apr 23, 2021, by Sara Cope): ul { list-style: square outside none; }

Counters
counter-reset (Almanac, Feb 4, 2025, by Sara Cope): article { counter-reset: section; }
counter-increment (Almanac, Jan 14, 2025, by Sara Cope): .step { counter-increment: my-awesome-counter; }
counter-set (Almanac, Apr 23, 2021, by Geoff Graham): h2:first-of-type::before { counter-set: chapter; }
counter() (Almanac, Feb 4, 2025, by Juan Diego Rodríguez): h2::before { content: counter(my-counter, upper-roman) ". "; }
counters() (Almanac, Feb 4, 2025, by Juan Diego Rodríguez): li::marker { content: counters(item, ".") ") "; }

Custom Counter Styles
@counter-style (Almanac, Jan 28, 2025, by Juan Diego Rodríguez): @counter-style apple-counter { ... }
symbols() (Almanac, Jan 30, 2025, by Juan Diego Rodríguez): ul { list-style: symbols(cyclic "🥬"); }

Pseudo-Elements
::marker (Almanac, Jan 19, 2025, by Geoff Graham): li::marker { color: red; }
::before / ::after (Almanac, Sep 13, 2024, by Sara Cope): .element::before { content: "Yo!"; }

More Tutorials & Tricks!
List Style Recipes (Article, May 5, 2020, by Chris Coyier)
List Markers and String Styles (Article, Apr 29, 2021, by Eric Meyer)
Style List Markers in CSS (Article, May 19, 2018, by Chris Coyier)
How to Reverse CSS Custom Counters (Article, Jun 11, 2020, by Etinosa Obaseki)
Some Things You Might Not Know About Custom Counter Styles (Article, Jan 23, 2025, by Geoff Graham)
Using CSS Counters for Custom List Number Styling (Article, Jan 26, 2022, by Chris Coyier)
Everything You Need to Know About the Gap After the List Marker (Article, May 17, 2024, by Šime Vidas)

Styling Counters in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family.
  6. by: Chris Coyier
    Mon, 17 Mar 2025 15:57:00 +0000

    How CSS relates to web performance is a funny dance. Some aspects are entirely negligible the vast majority of time. Some aspects are incredibly impactful and crucial to consider.
    For example, whenever I see research into the performance of some form of CSS syntax, the results always seem to be meh, it’s fine. It can matter, but typically only with fairly extreme DOM weight situations, and spending time optimizing selectors is almost certainly wasted time. I do like that the browser powers that be think and care about this though, like Bramus here measuring the performance of @property for CSS Custom Property performance. In the end, it doesn’t matter much, which is an answer I hope they knew before it shipped everywhere (they almost certainly did). Issues with CSS syntax tend to be about confusion or error-prone situations, not speed.
But even though the syntax of CSS isn’t particularly worrisome for performance, the weight of it generally does matter. It’s important to remember that CSS loaded as a regular <link> in the <head> is render blocking, so until it’s downloaded and parsed, the website will not be displayed. Ship, say, 1.5MB of CSS, and the site’s performance will absolutely suffer for absolutely everyone. JavaScript is generally a worse offender on the web when it comes to size and resources, but at least its loading is generally deferred.
The idea of “Critical CSS” became hot for a minute, meaning ship as little render blocking CSS as you can, and defer the rest, but that idea has its own big tradeoffs. Related to that, it absolutely should be easier to make CSS async, so let’s all vote for that. And while I’m linking to Harry, his The Three Cs: 🤝 Concatenate, 🗜️ Compress, 🗳️ Cache is a good one for your brain.
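One common workaround for loading non-critical CSS asynchronously today is the print-media trick (the filename here is hypothetical, and this is a sketch of the pattern rather than anything the post prescribes):

```html
<!-- Loads without render blocking: "print" media is fetched at low
     priority, then onload switches it to apply to all media. -->
<link rel="stylesheet" href="non-critical.css" media="print" onload="this.media='all'">
```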
The times when CSS performance tends to rear its head are in extreme DOM weight situations. Like a web page that renders all of Moby Dick, or every single Unicode character, or 10,000 product images, or a million screenshots, or whatever. In cases like that, a box-shadow just has a crazy amount of work to do. But even then, while CSS can be the cause of pain, it can be the solution as well. The content-visibility property in CSS can inform the browser to chill out on rendering more than it needs to up front. It’s not the most intuitive feature to use, but it’s nice we have these tools when we need them.
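As a sketch of how that might look (the class name is made up), content-visibility is often paired with contain-intrinsic-size so the browser can reserve space for content it skips:

```css
/* Each card can be skipped by the renderer until it nears
   the viewport. */
.card {
  content-visibility: auto;
  /* Reserve an estimated size for skipped content so
     scrollbars and layout stay stable. */
  contain-intrinsic-size: auto 300px;
}
```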
  7. by: Abhishek Kumar
    Mon, 17 Mar 2025 15:44:13 GMT

    Ollama is one of the easiest ways for running large language models (LLMs) locally on your own machine.
It's a bit like Docker: you download publicly available models using its command line interface. Connect Ollama with a graphical interface and you have a local AI tool that works as a ChatGPT alternative.
In this guide, I'll walk you through some essential Ollama commands, explaining what they do, and share some tricks at the end to enhance your experience.
💡If you're new to Ollama or just getting started, we've already covered a detailed Ollama installation guide for Linux to help you set it up effortlessly.
Checking available commands
    Before we dive into specific commands, let's start with the basics. To see all available Ollama commands, run:
ollama --help
This will list all the possible commands along with a brief description of what they do. If you want details about a specific command, you can use:
ollama <command> --help
For example, ollama run --help will show all available options for running models.
    Here's a glimpse of essential Ollama commands, which we’ve covered in more detail further in the article.
Command: Description
ollama create: Creates a custom model from a Modelfile, allowing you to fine-tune or modify existing models.
ollama run <model>: Runs a specified model to process input text, generate responses, or perform various AI tasks.
ollama pull <model>: Downloads a model from Ollama’s library to use it locally.
ollama list: Displays all installed models on your system.
ollama rm <model>: Removes a specific model from your system to free up space.
ollama serve: Runs an Ollama model as a local API endpoint, useful for integrating with other applications.
ollama ps: Shows currently running Ollama processes, useful for debugging and monitoring active sessions.
ollama stop <model>: Stops a running Ollama process using its process ID or name.
ollama show <model>: Displays metadata and details about a specific model, including its parameters.
ollama run <model> "with input": Executes a model with specific text input, such as generating content or extracting information.
ollama run <model> < "with file input": Processes a file (text, code, or image) using an AI model to extract insights or perform analysis.
1. Downloading an LLM
    If you want to manually download a model from the Ollama library without running it immediately, use:
ollama pull <model_name>
For instance, to download Phi-2 (2.7B parameters):
ollama pull phi:2.7b
This will store the model locally, making it available for offline use.
📋There is no built-in way to list the models available for download. You have to visit the Ollama website and get the available model names to use with the pull command.
2. Running an LLM
    To begin chatting with a model, use:
ollama run <model_name>
For example, to run a small model like Phi-2:
ollama run phi:2.7b
If you don’t have the model downloaded, Ollama will fetch it automatically. Once it's running, you can start chatting with it directly in the terminal.
    Some useful tricks while interacting with a running model:
Type /set parameter num_ctx 8192 to adjust the context window.
Use /show info to display model details.
Exit by typing /bye.
3. Listing installed LLMs
    If you’ve downloaded multiple models, you might want to see which ones are available locally. You can do this with:
ollama list
This will output a table of your installed models, showing each model's name, ID, size, and when it was last modified.
    This command is great for checking which models are installed before running them.
    4. Checking running LLMs
    If you're running multiple models and want to see which ones are active, use:
ollama ps
You'll see a table of the currently loaded models, including their size, the processor they run on (CPU/GPU), and how long until they are unloaded.
    To stop a running model, you can simply exit its session or restart the Ollama server.
    5. Starting the ollama server
    The ollama serve command starts a local server to manage and run LLMs.
    This is necessary if you want to interact with models through an API instead of just using the command line.
ollama serve
By default, the server runs on http://localhost:11434/, and if you visit this address in your browser, you'll see "Ollama is running."
    You can configure the server with environment variables, such as:
OLLAMA_DEBUG=1 → Enables debug mode for troubleshooting.
OLLAMA_HOST=0.0.0.0:11434 → Binds the server to a different address/port.
6. Updating existing LLMs
There is no ollama command for updating existing LLMs. You can run the pull command periodically to update an installed model:
ollama pull <model_name>
If you want to update all the models, you can combine the commands in this way:
ollama list | tail -n +2 | awk '{print $1}' | xargs -I {} ollama pull {}
That's the magic of the awk scripting tool and the power of the xargs command.
    Here's how the command works (if you don't want to ask your local AI).
ollama list prints all the models, and tail -n +2 takes the output starting at line 2, since line 1 is a header row with no model names. The awk command then extracts the first column, which holds the model name. Finally, xargs substitutes each name into the {} placeholder, so ollama pull {} runs as ollama pull model_name for each installed model.
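You can watch each stage of that pipeline work by feeding it a fake ollama list output instead of the real command (the model names below are just examples):

```shell
# Simulated `ollama list` output: a header row plus two models (hypothetical names).
# `echo` stands in for `ollama pull` so the pipeline can be tested safely.
printf 'NAME          ID      SIZE\nllama3.2:3b   abc123  2.0 GB\nphi:2.7b      def456  1.6 GB\n' \
  | tail -n +2 \
  | awk '{print $1}' \
  | xargs -I {} echo "would run: ollama pull {}"
```

This prints one "would run: ollama pull …" line per model, confirming the header was skipped and only the name column survived.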
    7. Custom model configuration
    One of the coolest features of Ollama is the ability to create custom model configurations.
For example, let’s say you want to turn llama3.2 into a specialized web development assistant with a stricter temperature.
    First, create a file named Modelfile in your working directory with the following content:
FROM llama3.2:3b
PARAMETER temperature 0.5
PARAMETER top_p 0.9
SYSTEM You are a senior web developer specializing in JavaScript, front-end frameworks (React, Vue), and back-end technologies (Node.js, Express). Provide well-structured, optimized code with clear explanations and best practices.
Now, use Ollama to create a new model from the Modelfile:
ollama create js-web-dev -f Modelfile
Once the model is created, you can run it interactively:
ollama run js-web-dev "Write a well-optimized JavaScript function to fetch data from an API and handle errors properly."
If you want to tweak the model further:
Adjust temperature for more randomness (0.7) or strict accuracy (0.3).
Modify top_p to control diversity (0.8 for stricter responses).
Add more specific system instructions, like "Focus on React performance optimization."
Some other tricks to enhance your experience
    Ollama isn't just a tool for running language models locally, it can be a powerful AI assistant inside a terminal for a variety of tasks.
    Like, I personally use Ollama to extract info from a document, analyze images and even help with coding without leaving the terminal.
💡Running Ollama for image processing, document analysis, or code generation without a GPU can be excruciatingly slow.
Summarizing documents
    Ollama can quickly extract key points from long documents, research papers, and reports, saving you from hours of manual reading.
    That said, I personally don’t use it much for PDFs. The results can be janky, especially if the document has complex formatting or scanned text.
    If you’re dealing with structured text files, though, it works fairly well.
ollama run phi "Summarize this document in 100 words." < french_revolution.txt
Image analysis
    Though Ollama primarily works with text, some vision models (like llava or even deepseek-r1) are beginning to support multimodal processing, meaning they can analyze and describe images.
    This is particularly useful in fields like computer vision, accessibility, and content moderation.
ollama run llava:7b "Describe the content of this image." < cat.jpg
Code generation and assistance
    Debugging a complex codebase? Need to understand a piece of unfamiliar code?
    Instead of spending hours deciphering it, let Ollama have a look at it. 😉
ollama run phi "Explain this algorithm step-by-step." < algorithm.py
Additional resources
    If you want to dive deeper into Ollama or are looking to integrate it into your own projects, I highly recommend checking out freeCodeCamp’s YouTube video on the topic.
    It provides a clear, hands-on introduction to working with Ollama and its API.
    Conclusion
    Ollama makes it possible to harness AI on your own hardware. While it may seem overwhelming at first, once you get the hang of the basic commands and parameters, it becomes an incredibly useful addition to any developer's toolkit.
That said, I might not have covered every single command or trick in this guide; I’m still learning myself!
    If you have any tips, lesser-known commands, or cool use cases up your sleeve, feel free to share them in the comments.
I feel that this should be enough to get you started with Ollama; it’s not rocket science. My advice? Just fiddle around with it.
    Try different commands, tweak the parameters, and experiment with its capabilities. That’s how I learned, and honestly, that’s the best way to get comfortable with any new tool.
    Happy experimenting! 🤖
  8. Web Components Demystified

    by: Geoff Graham
    Fri, 14 Mar 2025 12:51:59 +0000

    Scott Jehl released a course called Web Components Demystified. I love that name because it says what the course is about right on the tin: you’re going to learn about web components and clear up any confusion you may already have about them.
    And there’s plenty of confusion to go around! “Components” is already a loaded term that’s come to mean everything from a piece of UI, like a search component, to an element you can drop in and reuse anywhere, such as a React component. The web is chock-full of components, tell you what.
    But what we’re talking about here is a set of standards where HTML, CSS, and JavaScript rally together so that we can create custom elements that behave exactly how we want them to. It’s how we can make an element called <tasty-pizza> and the browser knows what to do with it.
    This is my full set of notes from Scott’s course. I wouldn’t say they’re complete or even a direct one-to-one replacement for watching the course. You’ll still want to do that on your own, and I encourage you to because Scott is an excellent teacher who makes all of this stuff extremely accessible, even to noobs like me.
    Chapter 1: What Web Components Are… and Aren’t
Web components are not built-in elements, even though that’s what they might look like at first glance. Rather, they are a set of technologies that allow us to instruct what the element is and how it behaves. Think of it the same way that “responsive web design” is not a thing but rather a set of strategies for adapting design to different web contexts. So, just as responsive web design is a set of ingredients — including fluid grids, flexible images, and media queries — web components are a concoction involving:
    Custom elements These are HTML elements that are not built into the browser. We make them up. They include a letter and a dash.
    <my-fancy-heading> Hey, I'm Fancy </my-fancy-heading> We’ll go over these in greater detail in the next module.
    HTML templates Templates are bits of reusable markup that generate more markup. We can hide something until we make use of it.
    <template> <li class="user"> <h2 class="name"></h2> <p class="bio"></p> </li> </template> Much more on this in the third module.
    Shadow DOM The DOM is queryable.
    document.querySelector("h1"); // <h1>Hello, World</h1> The Shadow DOM is a fragment of the DOM where markup, scripts, and styles are encapsulated from other DOM elements. We’ll cover this in the fourth module, including how to <slot> content.
    There used to be a fourth “ingredient” called HTML Imports, but those have been nixed.
In short, web components might be called “components” but they aren’t really components so much as technologies. In React, components sort of work like partials: one defines a snippet of HTML that you drop into your code, and it outputs in the DOM. Web Components are built off of HTML elements. They are not replaced when rendered the way they are in JavaScript component frameworks. Web components are quite literally HTML elements and have to obey HTML rules. For example:
    <!-- Nope --> <ul> <my-list-item></my-list-item> <!-- etc. --> </ul> <!-- Yep --> <ul> <li> <my-list-item></my-list-item> </li> </ul> We’re generating meaningful HTML up-front rather than rendering it in the browser through the client after the fact. Provide the markup and enhance it! Web components have been around a while now, even if it seems we’re only starting to talk about them now.
    Chapter 2: Custom Elements
First off, custom elements are not built-in HTML elements. We instruct what they are and how they behave. They are named with a dash and must contain at least one letter. All of the following are valid names for custom elements:
<super-component> <a-> <a-4-> <card-10.0.1> <card-♠️> Just remember that there are some reserved names for MathML and SVG elements, like <font-face>. Also, they cannot be void elements, e.g. <my-element />, meaning they have to have a corresponding closing tag.
    Since custom elements are not built-in elements, they are undefined by default — and being undefined can be a useful thing! That means we can use them as containers with default properties. For example, they are display: inline by default and inherit the current font-family, which can be useful to pass down to the contents. We can also use them as styling hooks since they can be selected in CSS. Or maybe they can be used for accessibility hints. The bottom line is that they do not require JavaScript in order to make them immediately useful.
    Working with JavaScript. If there is one <my-button> on the page, we can query it and set a click handler on it with an event listener. But if we were to insert more instances on the page later, we would need to query it when it’s appended and re-run the function since it is not part of the original document rendering.
    Defining a custom element
    This defines and registers the custom element. It teaches the browser that this is an instance of the Custom Elements API and extends the same class that makes other HTML elements valid HTML elements:
    <my-element>My Element</my-element> <script> customElements.define("my-element", class extends HTMLElement {}); </script> Check out the methods we get immediate access to:
    Breaking down the syntax
customElements.define(
  "my-element",
  class extends HTMLElement {}
);

// Functionally the same as:
class MyElement extends HTMLElement {}
customElements.define("my-element", MyElement);
export default MyElement;

// ...which makes it importable by other elements:
import MyElement from './MyElement.js';
const myElement = new MyElement();
document.body.appendChild(myElement);
// <body>
//   <my-element></my-element>
// </body>

// Or simply pull it into a page
// Don't need to `export default` but it doesn't hurt to leave it
// <my-element>My Element</my-element>
// <script type="module" src="my-element.js"></script>
It’s possible to define a custom element by extending a specific HTML element. The specification documents this, but Scott is focusing on the primary way.
class WordCount extends HTMLParagraphElement {}
customElements.define("word-count", WordCount, { extends: "p" });
// <p is="word-count">This is a custom paragraph!</p>
Scott says do not use this because WebKit is not going to implement it. We would have to polyfill it forever, or as long as WebKit holds out. Consider it a dead end.
    The lifecycle
    A component has various moments in its “life” span:
Constructed (constructor)
Connected (connectedCallback)
Adopted (adoptedCallback)
Attribute Changed (attributeChangedCallback)
Disconnected (disconnectedCallback)
We can hook into these to define the element’s behavior.
class MyElement extends HTMLElement {
  constructor() {}
  connectedCallback() {}
  adoptedCallback() {}
  attributeChangedCallback() {}
  disconnectedCallback() {}
}
customElements.define("my-element", MyElement);
constructor()
class MyElement extends HTMLElement {
  constructor() {
    // provides us with the `this` keyword
    super();
    // add a property
    this.someProperty = "Some value goes here";
    // add event listener
    this.addEventListener("click", () => {});
  }
}
customElements.define("my-element", MyElement);
“When the constructor is called, do this…” We don’t have to have a constructor when working with custom elements, but if we do, then we need to call super() because we’re extending another class and we’ll get all of those properties.
The constructor is useful, but not for a lot of things. It’s useful for setting up initial state, registering default properties, adding event listeners, and even creating Shadow DOM (which Scott will get into in a later module). It is not useful for anything that depends on the element’s surroundings: for example, we are unable to sniff out whether or not the custom element is inside another element because we don’t know anything about its parent container yet (that’s where other lifecycle methods come into play); we’ve merely defined it.
    connectedCallback()
class MyElement extends HTMLElement {
  // the constructor is unnecessary in this example but doesn't hurt.
  constructor() {
    super();
  }
  // let me know when my element has been found on the page.
  connectedCallback() {
    console.log(`${this.nodeName} was added to the page.`);
  }
}
customElements.define("my-element", MyElement);
Note that there is some strangeness when it comes to timing things. Sometimes isConnected returns true during the constructor. connectedCallback() is our best way to know when the component is found on the page. This is the moment it is connected to the DOM. Use it to attach event listeners.
    If the <script> tag comes before the DOM is parsed, then it might not recognize childNodes. This is not an uncommon situation. But if we add type="module" to the <script>, then the script is deferred and we get the child nodes. Using setTimeout can also work, but it looks a little gross.
disconnectedCallback()
class MyElement extends HTMLElement {
  // let me know when my element has been removed from the page.
  disconnectedCallback() {
    console.log(`${this.nodeName} was removed from the page.`);
  }
}
customElements.define("my-element", MyElement);
This is useful when the component needs to be cleaned up, perhaps like stopping an animation or preventing memory leaks.
    adoptedCallback()
This is when the component is adopted by another document or page. Say you have some iframes on a page and move a custom element from the page into an iframe, then it would be adopted in that scenario. It would be created, then added, then removed, then adopted, then added again. That’s a full lifecycle! This callback fires automatically simply by picking the element up and moving it between documents in the DOM.
    Custom elements and attributes
    Unlike React, HTML attributes are strings (not props!). Global attributes work as you’d expect, though some global attributes are reflected as properties. You can make any attribute do that if you want, just be sure to use care and caution when naming because, well, we don’t want any conflicts.
    Avoid standard attributes on a custom element as well, as that can be confusing particularly when handing a component to another developer. Example: using type as an attribute which is also used by <input> elements. We could say data-type instead. (Remember that Chris has a comprehensive guide on using data attributes.)
    Examples
    Here’s a quick example showing how to get a greeting attribute and set it on the custom element:
class MyElement extends HTMLElement {
  get greeting() {
    return this.getAttribute('greeting');
    // return this.hasAttribute('greeting');
  }
  set greeting(val) {
    if (val) {
      this.setAttribute('greeting', val);
      // this.setAttribute('greeting', '');
    } else {
      this.removeAttribute('greeting');
    }
  }
}
customElements.define("my-element", MyElement);
Another example, this time showing a callback for when the attribute has changed, which prints it in the element’s contents:
<my-element greeting="hello">hello</my-element>
<!-- Change text greeting when attribute greeting changes -->
<script>
class MyElement extends HTMLElement {
  static observedAttributes = ["greeting"];
  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'greeting' && oldValue && oldValue !== newValue) {
      console.log(name + " changed");
      this.textContent = newValue;
    }
  }
}
customElements.define("my-element", MyElement);
</script>
A few more custom element methods:
    customElements.get('my-element'); // returns MyElement Class customElements.getName(MyElement); // returns 'my-element' customElements.whenDefined("my-element"); // waits for custom element to be defined const el = document.createElement("spider-man"); class SpiderMan extends HTMLElement { constructor() { super(); console.log("constructor!!"); } } customElements.define("spider-man", SpiderMan); customElements.upgrade(el); // returns "constructor!!" Custom methods and events:
    <my-element><button>My Element</button></my-element> <script> customElements.define("my-element", class extends HTMLElement { connectedCallback() { const btn = this.firstElementChild; btn.addEventListener("click", this.handleClick) } handleClick() { console.log(this); } }); </script> Bring your own base class, in the same way web components frameworks like Lit do:
    class BaseElement extends HTMLElement { $ = this.querySelector; } // extend the base, use its helper class myElement extends BaseElement { firstLi = this.$("li"); } Practice prompt
    Create a custom HTML element called <say-hi> that displays the text “Hi, World!” when added to the page:
    CodePen Embed Fallback Enhance the element to accept a name attribute, displaying "Hi, [Name]!" instead:
    CodePen Embed Fallback Chapter 3: HTML Templates
    The <template> element is not for users but developers. It is not exposed visibly by browsers.
    <template>The browser ignores everything in here.</template> Templates are designed to hold HTML fragments:
    <template> <div class="user-profile"> <h2 class="name">Scott</h2> <p class="bio">Author</p> </div> </template> A template is selectable in CSS; it just doesn’t render. It’s a document fragment. The inner document is a #document-fragment. Not sure why you’d do this, but it illustrates the point that templates are selectable:
template { display: block; } /* Nope */
template + div { height: 100px; width: 100px; } /* Works */
The content property
    No, not in CSS, but JavaScript. We can query the inner contents of a template and print them somewhere else.
<template> <p>Hi</p> </template> <script> const myTmpl = document.querySelector("template").content; console.log(myTmpl); </script> Using a Document Fragment without a <template>
    const myFrag = document.createDocumentFragment(); myFrag.innerHTML = "<p>Test</p>"; // Nope const myP = document.createElement("p"); // Yep myP.textContent = "Hi!"; myFrag.append(myP); // use the fragment document.body.append(myFrag); Clone a node
<template> <p>Hi</p> </template> <script> const myTmpl = document.querySelector("template").content; console.log(myTmpl); // Oops, only works one time! We need to clone it. </script> Oops, the component only works one time! We need to clone it if we want multiple instances:
    <template> <p>Hi</p> </template> <script> const myTmpl = document.querySelector("template").content; document.body.append(myTmpl.cloneNode(true)); // true is necessary document.body.append(myTmpl.cloneNode(true)); document.body.append(myTmpl.cloneNode(true)); document.body.append(myTmpl.cloneNode(true)); </script> A more practical example
    Let’s stub out a template for a list item and then insert them into an unordered list:
    <template id="tmpl-user"><li><strong></strong>: <span></span></li></template> <ul id="users"></ul> <script> const usersElement = document.querySelector("#users"); const userTmpl = document.querySelector("#tmpl-user").content; const users = [{name: "Bob", title: "Artist"}, {name: "Jane", title: "Doctor"}]; users.forEach(user => { let thisLi = userTmpl.cloneNode(true); thisLi.querySelector("strong").textContent = user.name; thisLi.querySelector("span").textContent = user.title; usersElement.append(thisLi); }); </script> The other way to use templates that we’ll get to in the next module: Shadow DOM
<template shadowrootmode="open"> <p>Hi, I'm in the Shadow DOM</p> </template> Chapter 4: Shadow DOM
Here we go, this is a heady chapter! The Shadow DOM reminds me of playing bass in a band: it’s easy to understand but incredibly difficult to master. It’s easy to understand that there are these nodes in the DOM that are encapsulated from everything else. They’re there, we just can’t really touch them with regular CSS and JavaScript without some finagling. It’s the finagling that’s difficult to master. There are times when the Shadow DOM is going to be your best friend because it prevents outside styles and scripts from leaking in and mucking things up. Then again, you’re most certainly going to want to style or apply scripts to those nodes and you have to figure that part out.
    That’s where web components really shine. We get the benefits of an element that’s encapsulated from outside noise but we’re left with the responsibility of defining everything for it ourselves.
    Select elements are a great example of the Shadow DOM. Shadow roots! Slots! They’re all part of the puzzle. Using the Shadow DOM
    We covered the <template> element in the last chapter and determined that it renders in the Shadow DOM without getting displayed on the page.
    <template shadowrootmode="closed"> <p>This will render in the Shadow DOM.</p> </template> In this case, the <template> is rendered as a #shadow-root without the <template> element’s tags. It’s a fragment of code. So, while the paragraph inside the template is rendered, the <template> itself is not. It effectively marks the Shadow DOM’s boundaries. If we were to omit the shadowrootmode attribute, then we simply get an unrendered template. Either way, though, the paragraph is there in the DOM and it is encapsulated from other styles and scripts on the page.
    These are all of the elements that can have a shadow. Breaching the shadow
    There are times you’re going to want to “pierce” the Shadow DOM to allow for some styling and scripts. The content is relatively protected but we can open the shadowrootmode and allow some access.
    <div> <template shadowrootmode="open"> <p>This will render in the Shadow DOM.</p> </template> </div> Now we can query the div that contains the <template> and select the #shadow-root:
    document.querySelector("div").shadowRoot // #shadow-root (open) // <p>This will render in the Shadow DOM.</p> We need that <div> in there so we have something to query in the DOM to get to the paragraph. Remember, the <template> is not actually rendered at all.
    Additional shadow attributes
    <!-- should this root stay with a parent clone? --> <template shadowrootcloneable> <!-- allow shadow to be serialized into a string object — can forget about this --> <template shadowrootserializable> <!-- click in element focuses first focusable element --> <template shadowrootdelegatesfocus> Shadow DOM siblings
    When you add a shadow root, it becomes the only rendered root in that shadow host. Any elements after a shadow root node in the DOM simply don’t render. If a DOM element contains more than one shadow root node, the ones after the first just become template tags. It’s sort of like the Shadow DOM is a monster that eats the siblings.
    Slots bring those siblings back!
<div>
  <template shadowrootmode="closed">
    <slot></slot>
    <p>I'm a sibling of a shadow root, and I am visible.</p>
  </template>
</div>
All of the siblings go through the slots and are distributed that way. It’s sort of like slots allow us to open the monster’s mouth and see what’s inside.
    Declaring the Shadow DOM
    Using templates is the declarative way to define the Shadow DOM. We can also define the Shadow DOM imperatively using JavaScript. So, this is doing the exact same thing as the last code snippet, only it’s done programmatically in JavaScript:
<my-element>
  <template shadowrootmode="open">
    <p>This will render in the Shadow DOM.</p>
  </template>
</my-element>
<script>
customElements.define('my-element', class extends HTMLElement {
  constructor() {
    super();
    // attaches a shadow root node
    this.attachShadow({mode: "open"});
    // inserts a slot into the template
    this.shadowRoot.innerHTML = '<slot></slot>';
  }
});
</script>
Another example:
    <my-status>available</my-status> <script> customElements.define('my-status', class extends HTMLElement { constructor() { super(); this.attachShadow({mode: "open"}); this.shadowRoot.innerHTML = '<p>This item is currently: <slot></slot></p>'; } }); </script> So, is it better to be declarative or imperative? Like the weather where I live, it just depends.
Both approaches have their benefits. We can set the shadow mode via JavaScript as well:
// open
this.attachShadow({mode: "open"});
// closed
this.attachShadow({mode: "closed"});
// cloneable
this.attachShadow({cloneable: true});
// delegatesFocus
this.attachShadow({delegatesFocus: true});
// serializable
this.attachShadow({serializable: true});
// Manually assign an element to a slot
this.attachShadow({slotAssignment: "manual"});
About that last one, it says we have to manually insert the <slot> elements in JavaScript:
    <my-element> <p>This WILL render in shadow DOM but not automatically.</p> </my-element> <script> customElements.define('my-element', class extends HTMLElement { constructor() { super(); this.attachShadow({ mode: "open", slotAssignment: "manual" }); this.shadowRoot.innerHTML = '<slot></slot>'; } connectedCallback(){ const slotElem = this.querySelector('p'); this.shadowRoot.querySelector('slot').assign(slotElem); } }); </script> Examples
    Scott spent a great deal of time sharing examples that demonstrate different sorts of things you might want to do with the Shadow DOM when working with web components. I’ll rapid-fire those in here.
    Get an array of element nodes in a slot
    this.shadowRoot.querySelector('slot') .assignedElements(); // get an array of all nodes in a slot, text too this.shadowRoot.querySelector('slot') .assignedNodes(); When did a slot’s nodes change?
    let slot = document.querySelector('div') .shadowRoot.querySelector("slot"); slot.addEventListener("slotchange", (e) => { console.log(`Slot "${slot.name}" changed`); // > Slot "saying" changed }) Combining imperative Shadow DOM with templates
    Back to this example:
    <my-status>available</my-status> <script> customElements.define('my-status', class extends HTMLElement { constructor() { super(); this.attachShadow({mode: "open"}); this.shadowRoot.innerHTML = '<p>This item is currently: <slot></slot></p>'; } }); </script> Let’s get that string out of our JavaScript with reusable imperative shadow HTML:
    <my-status>available</my-status> <template id="my-status"> <p>This item is currently: <slot></slot> </p> </template> <script> customElements.define('my-status', class extends HTMLElement { constructor(){ super(); this.attachShadow({mode: 'open'}); const template = document.getElementById('my-status'); this.shadowRoot.append(template.content.cloneNode(true)); } }); </script> Slightly better as it grabs the component’s name programmatically to prevent name collisions:
    <my-status>available</my-status> <template id="my-status"> <p>This item is currently: <slot></slot> </p> </template> <script> customElements.define('my-status', class extends HTMLElement { constructor(){ super(); this.attachShadow({mode: 'open'}); const template = document.getElementById( this.nodeName.toLowerCase() ); this.shadowRoot.append(template.content.cloneNode(true)); } }); </script> Forms with Shadow DOM
    Long story, cut short: maybe don’t create custom form controls as web components. We get a lot of free features and functionalities — including accessibility — with native form controls that we have to recreate from scratch if we decide to roll our own.
    In the case of forms, one of the oddities of encapsulation is that form submissions are not automatically connected. Let’s look at a broken form that contains a web component for a custom input:
    <form> <my-input> <template shadowrootmode="open"> <label> <slot></slot> <input type="text" name="your-name"> </label> </template> Type your name! </my-input> <label><input type="checkbox" name="remember">Remember Me</label> <button>Submit</button> </form> <script> document.forms[0].addEventListener('input', function(){ let data = new FormData(this); console.log(new URLSearchParams(data).toString()); }); </script> This input’s value won’t be in the submission! Also, form validation and states are not communicated in the Shadow DOM. Similar connectivity issues with accessibility, where the shadow boundary can interfere with ARIA. For example, IDs are local to the Shadow DOM. Consider how much you really need the Shadow DOM when working with forms.
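The logging in that snippet turns the form's entries into a query string with URLSearchParams. You can reproduce that serialization without a DOM to see exactly what goes missing; the name/value pairs below are hypothetical stand-ins for what FormData would collect:

```javascript
// What FormData actually collects from the broken form above:
// only the light-DOM checkbox participates.
const collected = [["remember", "on"]];
console.log(new URLSearchParams(collected).toString()); // "remember=on"

// What we'd want if <my-input> were properly form-associated:
const desired = [["your-name", "Scott"], ["remember", "on"]];
console.log(new URLSearchParams(desired).toString()); // "your-name=Scott&remember=on"
```

Comparing the two strings makes the failure obvious: the shadow-DOM input's field simply never appears in the submission.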
    Element internals
    The moral of the last section is to tread carefully when creating your own web components for form controls. Scott suggests avoiding that altogether, but he continued to demonstrate how we could theoretically fix functional and accessibility issues using element internals.
    Let’s start with an input value that will be included in the form submission.
    <form> <my-input name="name"></my-input> <button>Submit</button> </form> Now let’s slot this imperatively:
    <script> customElements.define('my-input', class extends HTMLElement { constructor() { super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = '<label><slot></slot><input type="text"></label>' } }); </script> The value is not communicated yet. We’ll add a static formAssociated variable with internals attached:
<script> customElements.define('my-input', class extends HTMLElement { static formAssociated = true; constructor() { super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = '<label><slot></slot><input type="text"></label>'; this.internals = this.attachInternals(); } }); </script> Then we’ll set the form value as part of the internals when the input’s value changes:
<script> customElements.define('my-input', class extends HTMLElement { static formAssociated = true; constructor() { super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = '<label><slot></slot><input type="text"></label>'; this.internals = this.attachInternals(); this.addEventListener('input', () => { this.internals.setFormValue(this.shadowRoot.querySelector('input').value); }); } }); </script> Here’s how we set states with element internals:
// add a checked state this.internals.states.add("checked"); // remove a checked state this.internals.states.delete("checked"); Let’s toggle a boolean state, calling "add" or "delete" depending on a flag:
    <form> <my-check name="remember">Remember Me?</my-check> </form> <script> customElements.define('my-check', class extends HTMLElement { static formAssociated = true; constructor(){ super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = '<slot></slot>'; this.internals = this.attachInternals(); let addDelete = false; this.addEventListener("click", ()=> { addDelete = !addDelete; this.internals.states[addDelete ? "add" : "delete"]("checked"); } ); } }); </script> Let’s refactor this for ARIA improvements:
    <form> <style> my-check { display: inline-block; inline-size: 1em; block-size: 1em; background: #eee; } my-check:state(checked)::before { content: "[x]"; } </style> <my-check name="remember" id="remember"></my-check><label for="remember">Remember Me?</label> </form> <script> customElements.define('my-check', class extends HTMLElement { static formAssociated = true; constructor(){ super(); this.attachShadow({mode: 'open'}); this.internals = this.attachInternals(); this.internals.role = 'checkbox'; this.setAttribute('tabindex', '0'); let addDelete = false; this.addEventListener("click", ()=> { addDelete = !addDelete; this.internals.states[addDelete ? "add" : "delete"]("checked"); this[addDelete ? "setAttribute" : "removeAttribute"]("aria-checked", true); }); } }); </script> Phew, that’s a lot of work! And sure, this gets us a lot closer to a more functional and accessible custom form input, but there’s still a long way to go to achieve what we already get for free from using native form controls. Always question whether you can rely on a light DOM form instead.
    Chapter 5: Styling Web Components
    Styling web components comes in levels of complexity. For example, we don’t need any JavaScript at all to slap a few styles on a custom element.
    <my-element theme="suave" class="priority"> <h1>I'm in the Light DOM!</h1> </my-element> <style> /* Element, class, attribute, and complex selectors all work. */ my-element { display: block; /* custom elements are inline by default */ } my-element[theme="suave"] { color: #fff; } my-element.priority { background: purple; } my-element h1 { font-size: 3rem; } </style> This is not encapsulated! This is scoped off of a single element just like any other CSS in the Light DOM. Changing the Shadow DOM mode from closed to open doesn’t change CSS. It allows JavaScript to pierce the Shadow DOM, but CSS isn’t affected. Let’s poke at it:
    <style> p { color: red; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <p>Hi</p> </template> </div> <p>Hi</p> This is three stacked paragraphs, the second of which is in the shadow root. The first and third paragraphs are red; the second is not styled because it is inside the shadow root, even when the shadow root’s mode is set to open. Let’s poke at it from the other direction:
    <style> p { color: red; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> p { color: blue;} </style> <p>Hi</p> </template> </div> <p>Hi</p> The first and third paragraphs are still receiving the red color from the Light DOM’s CSS. The <style> declarations in the <template> are encapsulated and do not leak out to the other paragraphs, even though it is declared later in the cascade. Same idea, but setting the color on the <body>:
    <style> body { color: red; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <p>Hi</p> </template> </div> <p>Hi</p> Everything is red! This isn’t a bug. Inheritable styles do pass through the Shadow DOM barrier. Inherited styles are those that are set by the computed values of their parent styles. Many properties are inheritable, including color. The <body> is the parent and everything in it is a child that inherits these styles, including custom elements. Let’s fight with inheritance
    We can target the paragraph in the <template> style block to override the styles set on the <body>. Those won’t leak back to the other paragraphs.
    <style> body { color: red; font-family: fantasy; font-size: 2em; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> /* reset the light dom styles */ p { color: initial; font-family: initial; font-size: initial; } </style> <p>Hi</p> </template> </div> <p>Hi</p> This is protected, but the problem here is that it’s still possible for a new inheritable property to be introduced that passes along styles we haven’t thought to reset. Perhaps we could use all: initial as a defensive strategy against future inheritable styles. But what if we add more elements to the custom element? It’s a constant fight. Host styles!
    We can scope things to the shadow root’s :host selector to keep things protected.
    <style> body { color: red; font-family: fantasy; font-size: 2em; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> /* reset the light dom styles */ :host { all: initial; } </style> <p>Hi</p> <a href="#">Click me</a> </template> </div> <p>Hi</p> New problem! What if the Light DOM styles are scoped to the universal selector instead?
    <style> * { color: red; font-family: fantasy; font-size: 2em; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> /* reset the light dom styles */ :host { all: initial; } </style> <p>Hi</p> <a href="#">Click me</a> </template> </div> <p>Hi</p> This breaks the custom element’s styles. But that’s because Shadow DOM styles are applied before Light DOM styles. The styles scoped to the universal selector are simply applied after the :host styles, which overrides what we have in the shadow root. So, we’re still locked in a brutal fight over inheritance and need stronger specificity.
    According to Scott, !important is one of the only ways we have to apply brute force to protect our custom elements from outside styles leaking in. The keyword gets a bad rap — and rightfully so in the vast majority of cases — but this is a case where it works well and using it is an encouraged practice. It’s not like it has an impact on the styles outside the custom element, anyway.
    <style> * { color: red; font-family: fantasy; font-size: 2em; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> /* reset the light dom styles */ :host { all: initial !important; } </style> <p>Hi</p> <a href="#">Click me</a> </template> </div> <p>Hi</p> Special selectors
    There are some useful selectors we have to look at components from the outside, looking in.
    :host()
    We just looked at this! But note how it is a function in addition to being a pseudo-selector. It’s sort of a parent selector in the sense that we can pass in the <div> that contains the <template> and that becomes the scoping context for the entire selector, meaning the !important keyword is no longer needed.
    <style> * { color: red; font-family: fantasy; font-size: 2em; } </style> <p>Hi</p> <div> <template shadowrootmode="open"> <style> /* reset the light dom styles */ :host(div) { all: initial; } </style> <p>Hi</p> <a href="#">Click me</a> </template> </div> <p>Hi</p> :host-context()
    <header> <my-element> <template shadowrootmode="open"> <style> :host-context(header) { ... } /* matches the host! */ </style> </template> </my-element> </header> This targets the shadow host but only if the provided selector is a parent node anywhere up the tree. This is super helpful for styling custom elements where the layout context might change, say, from being contained in an <article> versus being contained in a <header>.
    :defined
    Defining an element occurs when it is created, and this pseudo-selector is how we can select the element in that initially-defined state. I imagine this is mostly useful for when a custom element is defined imperatively in JavaScript so that we can target the very moment that the element is constructed, and then set styles right then and there.
    <style> simple-custom:defined { display: block; background: green; color: #fff; } </style> <simple-custom></simple-custom> <script> customElements.define('simple-custom', class extends HTMLElement { constructor(){ super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = "<p>Defined!</p>"; } }); </script> Minor note about protecting against a flash of unstyled content (FOUC)… or an unstyled element in this case. Some elements are effectively useless until JavaScript has run and generated their content; for example, an empty custom element that only becomes meaningful once JavaScript fills it in. Here’s how we can prevent the inevitable flash that happens after the content is generated:
    <style> js-dependent-element:not(:defined) { visibility: hidden; } </style> <js-dependent-element></js-dependent-element> Warning zone! It’s best for elements that are empty and not yet defined. If you’re working with a meaningful element up-front, then it’s best to style as much as you can up-front.
    Styling slots
    This does not style the paragraph green as you might expect:
    <div> <template shadowrootmode="open"> <style> p { color: green; } </style> <slot></slot> </template> <p>Slotted Element</p> </div> The Shadow DOM cannot style slotted content directly. The styles would apply to a paragraph inside the shadow root’s own markup, but the slotted paragraph remains in the Light DOM, so the shadow styles never reach it.
    Slots are part of the Light DOM. So, this works:
    <style> p { color: green; } </style> <div> <template shadowrootmode="open"> <slot></slot> </template> <p>Slotted Element</p> </div> This means that slots are easier to target when it comes to piercing the shadow root with styles, making them a great method of progressive style enhancement.
    We have another special selector, the ::slotted() pseudo-element, which is also a function. We pass it an element or class, and that allows us to select slotted elements from within the shadow root.
    <div> <template shadowrootmode="open"> <style> ::slotted(p) { color: red; } </style> <slot></slot> </template> <p>Slotted Element</p> </div> Unfortunately, ::slotted() is a weak selector when compared to global selectors. So, if we were to make this a little more complicated by introducing an outside inheritable style, then we’d be hosed again.
    <style> /* global paragraph style... */ p { color: green; } </style> <div> <template shadowrootmode="open"> <style> /* ...overrides the slotted style */ ::slotted(p) { color: red; } </style> <slot></slot> </template> <p>Slotted Element</p> </div> This is another place where !important could make sense. It even wins if the global style is also set to !important. We could get more defensive and pass the universal selector to ::slotted and set everything back to its initial value so that all slotted content is encapsulated from outside styles leaking in.
    <style> /* global paragraph style... */ p { color: green; } </style> <div> <template shadowrootmode="open"> <style> /* ...can't override this important statement */ ::slotted(*) { all: initial !important; } </style> <slot></slot> </template> <p>Slotted Element</p> </div> Styling :parts
    A part is a way of offering up Shadow DOM elements to the parent document for styling. Let’s add a part to a custom element:
    <div> <template shadowrootmode="open"> <p part="hi">Hi there, I'm a part!</p> </template> </div> Without the part attribute, there is no way to write styles that reach the paragraph. But with it, the part is exposed as something that can be styled.
    <style> ::part(hi) { color: green; } ::part(hi) b { color: green; } /* nope! */ </style> <div> <template shadowrootmode="open"> <p part="hi">Hi there, I'm a <b>part</b>!</p> </template> </div> We can use this to expose specific “parts” of the custom element that are open to outside styling, which is almost like establishing a styling API with specifications for what can and can’t be styled. Just note that ::part cannot be used as part of a complex selector, like a descendant selector.
    A bit in the weeds here, but we can export parts in the sense that we can nest components within components within components, and so on. With the exportparts attribute, a nested component’s parts are exposed through its parent for outside styling.
    <my-component> <!-- exposes three parts to the nested component --> <nested-component exportparts="part1, part2, part5"></nested-component> </my-component> Styling states and validity
    We discussed this when going over element internals in the chapter about the Shadow DOM. But it’s worth revisiting that now that we’re specifically talking about styling. We have a :state pseudo-function that accepts our defined states.
    <script> this.internals.states.add("checked"); </script> <style> my-checkbox:state(checked) { /* ... */ } </style> We also have access to the :invalid pseudo-class.
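For instance, assuming the component flags itself with `this.internals.setValidity({ valueMissing: true }, 'Required')` when its value is empty (a hypothetical sketch building on the internals examples above, not code from the course), the outside page can hook into that state just like it would with a native input:

```html
<style>
  /* Matches only while the element's internals report an invalid state */
  my-input:invalid {
    outline: 2px solid crimson;
  }
</style>
<my-input name="name"></my-input>
```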
    Cross-barrier custom properties
    <style> :root { --text-primary: navy; --bg-primary: #abe1e1; --padding: 1.5em 1em; } p { color: var(--text-primary); background: var(--bg-primary); padding: var(--padding); } </style> Custom properties cross the Shadow DOM barrier!
    <my-elem></my-elem> <script> customElements.define('my-elem', class extends HTMLElement { constructor(){ super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = ` <style> p { color: var(--text-primary); background: var(--bg-primary); padding: var(--padding); } </style> <p>Hi there!</p>`; } }) </script> Adding stylesheets to custom elements
    There’s the classic ol’ external <link> way of going about it:
    <simple-custom> <template shadowrootmode="open"> <link rel="stylesheet" href="../../assets/external.css"> <p>This one's in the Shadow DOM.</p> <slot></slot> </template> <p>Slotted <b>Element</b></p> </simple-custom> It might seem like an anti-DRY approach to call the same external stylesheet at the top of all web components. To be clear, yes, it is repetitive — but only as far as writing it. Once the sheet has been downloaded once, it is available across the board without any additional requests, so we’re still technically DRY in the sense of performance.
    CSS imports also work:
    <style> @import url("../../assets/external.css"); </style> <simple-custom> <template shadowrootmode="open"> <style> @import url("../../assets/external.css"); </style> <p>This one's in the Shadow DOM.</p> <slot></slot> </template> <p>Slotted <b>Element</b></p> </simple-custom> One more way: a JavaScript-based approach. It’s probably better to make CSS work without a JavaScript dependency, but it’s still a valid option.
    <my-elem></my-elem> <script type="module"> import sheet from '../../assets/external.css' with { type: 'css' }; customElements.define('my-elem', class extends HTMLElement { constructor(){ super(); this.attachShadow({mode: 'open'}); this.shadowRoot.innerHTML = '<p>Hi there</p>'; this.shadowRoot.adoptedStyleSheets = [sheet]; } }) </script> We have a JavaScript module that imports the CSS as a constructable stylesheet, which is then adopted by the shadow root using shadowRoot.adoptedStyleSheets. And since adopted stylesheets are dynamic, we can construct one, share it across multiple instances, and update styles via the CSSOM, with the changes rippling out to every component that adopts the sheet.
    Container queries!
    Container queries are nice to pair with components, as custom elements and web components are containers and we can query them and adjust things as the container changes.
    <div> <template shadowrootmode="open"> <style> :host { container-type: inline-size; background-color: tan; display: block; padding: 2em; } ul { display: block; list-style: none; margin: 0; } li { padding: .5em; margin: .5em 0; background-color: #fff; } @container (min-width: 50em) { ul { display: flex; justify-content: space-between; gap: 1em; } li { flex: 1 1 auto; } } </style> <ul> <li>First Item</li> <li>Second Item</li> </ul> </template> </div> In this example, we’re setting styles on the :host() to define a new container, as well as some general styles that are protected and scoped to the shadow root. From there, we introduce a container query that updates the unordered list’s layout when the custom element is at least 50em wide.
    Next up…
    How web component features are used together!
    Chapter 6: HTML-First Patterns
    In this chapter, Scott focuses on how other people are using web components in the wild and highlights a few of the more interesting and smart patterns he’s seen.
    Let’s start with a typical counter
    It’s often the very first example used in React tutorials.
    <counter-element></counter-element> <script type="module"> customElements.define('counter-element', class extends HTMLElement { #count = 0; connectedCallback() { this.innerHTML = `<button id="dec">-</button><p id="count">${this.#count}</p><button id="inc">+</button>`; this.addEventListener('click', e => this.update(e) ); } update(e) { if( e.target.nodeName !== 'BUTTON' ) { return } this.#count = e.target.id === 'inc' ? this.#count + 1 : this.#count - 1; this.querySelector('#count').textContent = this.#count; } }); </script> Reef
    Reef is a tiny library by Chris Ferdinandi that weighs just 2.6KB minified and gzipped yet still provides DOM diffing for reactive state-based UIs like React, which weighs significantly more. An example of how it works in a standalone way:
    <div id="greeting"></div> <script type="module"> import {signal, component} from '.../reef.es.min.js'; // Create a signal let data = signal({ greeting: 'Hello', name: 'World' }); component('#greeting', () => `<p>${data.greeting}, ${data.name}!</p>`); </script> This sets up a “signal” that is basically a live-update object, then calls the component() method to select where we want to make the update, and it injects a template literal in there that passes in the variables with the markup we want.
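That live-update behavior can be approximated with a JavaScript Proxy. Here’s a minimal, self-contained sketch of the idea (an illustration only, not Reef’s actual implementation): writing to any property re-runs a render function that rebuilds the markup string.

```javascript
// Minimal "signal" sketch: a Proxy that re-renders on every property write.
// This illustrates the concept; it is not Reef's real source code.
function signal(data, onChange) {
  return new Proxy(data, {
    set(target, key, value) {
      target[key] = value; // update the underlying object
      onChange();          // re-render after every change
      return true;
    }
  });
}

// Reef would diff this markup against the DOM; here we just keep the string.
let html = '';
const render = () => { html = `<p>${data.greeting}, ${data.name}!</p>`; };
const data = signal({ greeting: 'Hello', name: 'World' }, render);

render();            // initial render
data.name = 'Scott'; // re-render happens automatically
```

After the last line, `html` holds `<p>Hello, Scott!</p>` without anyone calling `render()` by hand, which is the essence of what `signal()` plus `component()` automate.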
    So, for example, we can update those values on setTimeout:
    <div id="greeting"></div> <script type="module"> import {signal, component} from '.../reef.es.min.js'; // Create a signal let data = signal({ greeting: 'Hello', name: 'World' }); component('#greeting', () => `<p>${data.greeting}, ${data.name}!</p>`); setTimeout(() => { data.greeting = '¡Hola'; data.name = 'Scott'; }, 3000) </script> We can combine this sort of library with a web component. Here, Scott imports Reef and constructs the data outside the component so that it’s like the application state:
    <my-greeting></my-greeting> <script type="module"> import {signal, component} from 'https://cdn.jsdelivr.net/npm/reefjs@13/dist/reef.es.min.js'; window.data = signal({ greeting: 'Hi', name: 'Scott' }); customElements.define('my-greeting', class extends HTMLElement { connectedCallback(){ component(this, () => `<p>${data.greeting}, ${data.name}!</p>` ); } }); </script> It’s the virtual DOM in a web component! Another approach is more reactive: it watches for changes in attributes and then updates the application state in response which, in turn, updates the greeting.
    <my-greeting greeting="Hi" name="Scott"></my-greeting> <script type="module"> import {signal, component} from 'https://cdn.jsdelivr.net/npm/reefjs@13/dist/reef.es.min.js'; customElements.define('my-greeting', class extends HTMLElement { static observedAttributes = ["name", "greeting"]; constructor(){ super(); this.data = signal({ greeting: '', name: '' }); } attributeChangedCallback(name, oldValue, newValue) { this.data[name] = newValue; } connectedCallback(){ component(this, () => `<p>${this.data.greeting}, ${this.data.name}!</p>` ); } }); </script> If the attribute changes, it only changes that instance. The data is registered at the time the component is constructed and we’re only changing string attributes rather than objects with properties.
    HTML Web Components
    This term describes web components that ship with meaningful markup by default, as opposed to empty ones like this:
    <my-greeting></my-greeting> This is a “React” mindset where all the functionality, content, and behavior comes from JavaScript. But Scott reminds us that web components are pretty useful right out of the box without JavaScript. So, “HTML web components” refers to web components that are packed with meaningful content right out of the gate and Scott points to Jeremy Keith’s 2023 article coining the term.
    Jeremy cites something Robin Rendle mused about the distinction:
    The “React” way:
    <UserAvatar src="https://example.com/path/to/img.jpg" alt="..." /> The props look like HTML but they’re not. Instead, the props provide information used to completely swap out the <UserAvatar /> tag with the JavaScript-based markup.
    Web components can do that, too:
    <user-avatar src="https://example.com/path/to/img.jpg" alt="..." ></user-avatar> Same deal, real HTML. Progressive enhancement is at the heart of an HTML web component mindset. Here’s how that web component might work:
    class UserAvatar extends HTMLElement { connectedCallback() { const src = this.getAttribute("src"); const name = this.getAttribute("name"); this.innerHTML = ` <div> <img src="${src}" alt="Profile photo of ${name}" width="32" height="32" /> <!-- Markup for the tooltip --> </div> `; } } customElements.define('user-avatar', UserAvatar); But a better starting point would be to include the <img> directly in the component so that the markup is immediately available:
    <user-avatar> <img src="https://example.com/path/to/img.jpg" alt="..." /> </user-avatar> This way, the image is downloaded and ready before JavaScript even loads on the page. Strive for augmentation over replacement!
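The augment-don’t-replace idea can be boiled down to a tiny helper (a hypothetical sketch, not code from the course): keep whatever markup is already there and only wrap it with the extra scaffolding that JavaScript provides.

```javascript
// Hypothetical augmentation helper: wrap existing image markup with tooltip
// scaffolding instead of replacing it wholesale.
function augmentAvatar(existingImgHTML, name) {
  return [
    '<div class="avatar-wrapper">',
    existingImgHTML, // the already-rendered image survives untouched
    `<span class="tooltip" role="tooltip">${name}</span>`,
    '</div>',
  ].join('');
}

const html = augmentAvatar('<img src="me.jpg" alt="Profile photo" />', 'Scott');
```

In a real component, this kind of wrapping would happen inside connectedCallback, reading the <img> that was already delivered with the HTML.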
    resizeasaurus
    This helps developers test responsive component layouts, particularly ones that use container queries.
    <resize-asaurus> Drop any HTML in here to test. </resize-asaurus> <!-- for example: --> <resize-asaurus> <div class="my-responsive-grid"> <div>Cell 1</div> <div>Cell 2</div> <div>Cell 3</div> <!-- ... --> </div> </resize-asaurus> lite-youtube-embed
    This is like embedding a YouTube video, but without bringing along all the baggage that YouTube packs into a typical embed snippet.
    <lite-youtube videoid="ogYfd705cRs" style="background-image: url(...);"> <a href="https://youtube.com/watch?v=ogYfd705cRs" class="lyt-playbtn" title="Play Video"> <span class="lyt-visually-hidden">Play Video: Keynote (Google I/O '18)</span> </a> </lite-youtube> <link rel="stylesheet" href="./src.lite-yt-embed.css" /> <script src="./src.lite-yt-embed.js" defer></script> It starts with a link, which is a nice fallback if the video fails to load for whatever reason. When the script runs, the HTML is augmented to include the video <iframe>.
    Chapter 7: Web Components Frameworks Tour
    Lit
    Lit extends the base class and then extends what that class provides, but you’re still working directly on top of web components. There are syntax shortcuts for common patterns and a more structured approach.
    The package includes all this in about 5-7KB:
    Fast templating Reactive properties Reactive update lifecycle Scoped styles <simple-greeting name="Geoff"></simple-greeting> <script> import {html, css, LitElement} from 'lit'; export class SimpleGreeting extends LitElement { static styles = css`p { color: blue }`; static properties = { name: {type: String}, }; constructor() { super(); this.name = 'Somebody'; } render() { return html`<p>Hello, ${this.name}!</p>`; } } customElements.define('simple-greeting', SimpleGreeting); </script> Pros: Ecosystem; Community; Familiar ergonomics; Lightweight; Industry-proven. Cons: No official SSR story (but that is changing).
    webc
    This is part of the 11ty project. It allows you to define custom elements as files, writing everything as a single file component.
    <!-- starting element / index.html --> <my-element></my-element> <!-- ../components/my-element.webc --> <p>This is inside the element</p> <style> /* etc. */ </style> <script> // etc. </script> Pros: Community; SSG progressive enhancement; Single file component syntax; Zach Leatherman! Cons: Geared toward SSG; Still in early stages.
    Enhance
    This is Scott’s favorite! It renders web components on the server. Web components can render based on application state per request. It’s a way to use custom elements on the server side. 
    Pros: Ergonomics; Progressive enhancement; Single file component syntax; Full-stack stateful, dynamic SSR components. Cons: Still in early stages.
    Chapter 8: Web Components Libraries Tour
    This is a super short module simply highlighting a few of the more notable libraries for web components that are offered by third parties. Scott is quick to note that all of them are closer in spirit to a React-based approach where custom elements are more like replaced elements with very little meaningful markup to display up-front. That’s not to throw shade at the libraries, but rather to call out that there’s a cost when we require JavaScript to render meaningful content.
    Spectrum
    <sp-button variant="accent" href="components/button"> Use Spectrum Web Component buttons </sp-button> This is Adobe’s design system. It is one of the more ambitious projects, as it supports other frameworks like React. It is open source and built on Lit. Most components are not exactly HTML-first; the pattern is closer to replaced elements. There’s plenty of complexity, but that makes sense for a system that drives an application like Photoshop and is meant to drop into any project. But still, there is a cost when it comes to delivering meaningful content to users up-front. An all-or-nothing approach like this might be too stark for a small website project.
    FAST
    <fast-checkbox>Checkbox</fast-checkbox> This is Microsoft’s system. It’s philosophically like Spectrum in that there’s very little meaningful HTML up-front. Fluent is a library that extends the system for UI components. Microsoft Edge rebuilt the browser’s chrome using these components.
    Shoelace
    <sl-button>Click Me</sl-button> Purely meant for third-party developers to use in their projects. The name is a play on Bootstrap. 🙂 The markup is mostly a custom element with some text in it rather than a pure HTML-first approach. Shoelace was acquired by Font Awesome, and they are creating Web Awesome Components as a new, subscription-based era of Shoelace.
    Chapter 9: What’s Next With Web Components
    Scott covers what the future holds for web components as far as he is aware.
    Declarative custom elements
    Define an element in HTML alone that can be used time and again with a simpler syntax. There’s a GitHub issue that explains the idea, and Zach Leatherman has a great write-up as well.
    GitHub Issue
    Cross-root ARIA
    Make it easier to pair custom elements with other elements in the Light DOM as well as other custom elements through ARIA.
    GitHub Explainer GitHub Proposal
    Container Queries
    How can we use container queries without needing an extra wrapper around the custom element?
    HTML Modules
    This was one of web components’ original core features but was removed at some point. HTML modules would let us define HTML in an external file that could be imported and used over and over.
    GitHub Explainer
    External styling
    This is also known as “open styling.”
    GitHub Explainer
    DOM Parts
    This would be a templating feature that allows for JSX-string-literal-like syntax where variables inject data.
    <section> <h1 id="name">{name}</h1> Email: <a id="link" href="mailto:{email}">{email}</a> </section> And the application has produced a template with the following content:
    <template> <section> <h1 id="name">{{}}</h1> Email: <a id="link" href="{{}}">{{}}</a> </section> </template> GitHub Proposal
    Scoped element registries
    Using variations of the same web component without name collisions.
    GitHub Issue
    Web Components Demystified originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  9. by: Abhishek Prakash
    Thu, 13 Mar 2025 04:27:14 GMT

    Keeping your laptop always plugged in speeds up the deterioration of its battery life. But if you are using a docking station, you don't have the option to unplug the power cord.
    Thankfully, you can employ a few tricks to limit battery charging levels.
    How to Limit Charging Level in Linux (and Prolong Battery Life)Prolong your laptop’s battery life in the long run by limiting the charging to 80%.It's FOSSAbhishek Prakash💬 Let's see what else you get in this edition
    A new COSMIC-equipped Linux distro. Android's native Linux terminal rolling out. File searching. And other Linux news, tips, and, of course, memes! This edition of FOSS Weekly is supported by Zep's Graphiti. ✨ Zep’s Graphiti – Open-Source Temporal Knowledge Graph for AI Agents
    Traditional systems retrieve static documents, not evolving knowledge. Zep’s Graphiti is an open-source temporal knowledge graph that helps AI agents track conversations and structured data over time—enabling better memory, deeper context, and more accurate responses.
    Built to evolve, Graphiti goes beyond static embeddings, powering AI that learns. Open-source, scalable, and ready to deploy.
    Explore Zep’s Graphiti on GitHub and contribute!
    GitHub - getzep/graphiti: Build and query dynamic, temporally-aware Knowledge GraphsBuild and query dynamic, temporally-aware Knowledge Graphs - getzep/graphitiGitHubgetzep📰 Linux and Open Source News
    The first Framework Mono release since the Wine takeover has arrived. CrossOver 25 release is a packed one with new Windows games support. Google has quietly rolled out its native Linux terminal for some Android devices. Mesa 25.1 will not use the old Nouveau OpenGL driver, instead opting for a modern solution. Garuda COSMIC is a new offering that has been introduced to gauge community interest. The Nova NVIDIA GPU driver is shaping up nicely, with a Linux kernel debut imminent.
    Nvidia Driver Written in Rust Could Arrive With Linux Kernel 6.15The Nova GPU driver is still evolving, but a kernel debut is near.It's FOSS NewsSourav Rudra🧠 What We’re Thinking About
    Those naysayers who say open source software doesn't produce results need to read this.
    Open Source Fueled The Oscar-Winning ‘Flow’A great achievement pulled off using open source software!It's FOSS NewsCommunity🧮 Linux Tips, Tutorials and More
    Searching for files in Linux is synonymous with commands like find, xargs and grep. But not all of us Linux users are command line champs, right? Thankfully, even file explorers like Nautilus have good search features.
    If you want something more than that, there are a few GUI tools like AngrySearch for this purpose.
    Experts can beef up Linux system's security with Pledge. Here's a guide for advanced users on how to handle PKGBUILD on Arch Linux. And some sudo tips ;)
    7 Ways to Tweak Sudo Command in LinuxUnleash the power of sudo with these tips 💪It's FOSSAbhishek Prakash👷 Homelab and Maker's Corner
    Take the first step towards a homelab with Raspberry Pi and CasaOS.
    Enjoying Self-Hosting Software Locally With CasaOS and Raspberry PiI used CasaOS for self-hosting popular open source services on a Raspberry Pi. Here’s my experience.It's FOSSAbhishek Kumar✨ Apps Highlight
    Tired of Notion? Why not give this open source alternative a chance?
    AFFiNE: A Truly Wonderful Open Source Notion Alternative With a Focus on PrivacyA solid open source rival to Notion and Miro. Let us take a look!It's FOSS NewsSourav Rudra📽️ Videos I am Creating for You
    In the latest video, I show how easy it is to create a multiboot Linux USB.
    Subscribe to It's FOSS YouTube Channel🧩 Quiz Time
    How much do you know of the Linux boot process? We have a crossword to jog your memory.
    Crossword Quiz on Linux Boot ProcessTest your knowledge of the Linux boot process in this fun and interactive crossword.It's FOSSAbhishek Prakash💡 Quick Handy Tip
    On Brave, you can search the history/bookmarks/tabs etc. from the address bar. Simply type @ in the address bar and start searching.
    🤣 Meme of the Week
    Are you even a real Linux user if you aren't excited when you see a Penguin? 🐧🤔
    🗓️ Tech Trivia
    TRADIC, developed by Bell Labs in 1954, was one of the first transistorized computers. It used nearly 800 transistors, significantly reducing power consumption.
    TRADIC operated on less than 100 watts, a fraction of what vacuum tube computers needed at that time. Initially, a prototype, it evolved into an airborne version for the U.S. Air Force. This innovation paved the way for future low-power computing systems.
    🧑‍🤝‍🧑 FOSSverse Corner
    Pro FOSSer Ernie dove into customizing his terminal with Starship.
    My most recent adventure: Customizing my terminal prompt using Starship!I read an item in today’s (March 6, 2025) ZDNet newsletter titled “Why the Starship prompt is better than your default on Linux and MacOS”. I was intrigued, so I followed the author’s instructions, and installed starship on my Garuda GNU/Linux system. Interestingly, my prompt did not change following installation and activation of starship, so I asked if Garuda uses starship to customize the terminal prompt in Firefox (I think Firefox uses the Google search engine), and the AI responded yes, ex…It's FOSS Communityernie❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  10. by: Bryan Robinson
    Tue, 11 Mar 2025 15:26:10 +0000

    Static sites are wonderful. I’m a big fan.
They also have their issues. Namely, static sites are either purely static, or the frameworks that generate them lose out on true static generation as soon as you dip your toes into server routes.
    Astro has been watching the front-end ecosystem and is trying to keep one foot firmly embedded in pure static generation, and the other in a powerful set of server-side functionality.
With Astro Actions, Astro brings a lot of the power of the server to a site that is almost entirely static. A good example of this sort of functionality is search. If you have a content-based site that can be purely generated, adding search is either handled entirely on the front end, outsourced to a software-as-a-service solution, or, in other frameworks, a reason to convert your entire site to a server-side application.
    With Astro, we can generate most of our site during our build, but have a small bit of server-side code that can handle our search functionality using something like Fuse.js.
    In this demo, we’ll use Fuse to search through a set of personal “bookmarks” that are generated at build time, but return proper results from a server call.
GitHub
Live Demo
Starting the project
    To get started, we’ll just set up a very basic Astro project. In your terminal, run the following command:
npm create astro@latest
Astro’s adorable mascot Houston is going to ask you a few questions in your terminal. Here are the basic responses you’ll need:
Where should we create your new project? Wherever you’d like, but I’ll be calling my directory ./astro-search
How would you like to start your new project? Choose the basic minimalist starter.
Install dependencies? Yes, please!
Initialize a new git repository? I’d recommend it, personally!
This will create a directory in the location specified and install everything you need to start an Astro project. Open the directory in your code editor of choice and run npm run dev in your terminal in the directory.
    When you run your project, you’ll see the default Astro project homepage.
    We’re ready to get our project rolling!
    Basic setup
    To get started, let’s remove the default content from the homepage. Open the  /src/pages/index.astro file.
    This is a fairly barebones homepage, but we want it to be even more basic. Remove the <Welcome /> component, and we’ll have a nice blank page.
    For styling, let’s add Tailwind and some very basic markup to the homepage to contain our site.
npx astro add tailwind
The astro add command will install Tailwind and attempt to set up all the boilerplate code for you (handy!). The CLI will ask you if you want it to add the various components; I recommend letting it, but if anything fails, you can copy the code needed from each of the steps in the process. As the last step for getting to work with Tailwind, the CLI will tell you to import the styles into a shared layout. Follow those instructions, and we can get to work.
    Let’s add some very basic markup to our new homepage.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
---
<Layout>
  <div class="max-w-3xl mx-auto my-10">
    <h1 class="text-3xl text-center">My latest bookmarks</h1>
    <p class="text-xl text-center mb-5">This is only 10 of A LARGE NUMBER THAT WE'LL CHANGE LATER</p>
  </div>
</Layout>
Your site should now look like this.
    Not exactly winning any awards yet! That’s alright. Let’s get our bookmarks loaded in.
    Adding bookmark data with Astro Content Layer
    Since not everyone runs their own application for bookmarking interesting items, you can borrow my data. Here’s a small subset of my bookmarks, or you can go get 110 items from this link on GitHub. Add this data as a file in your project. I like to group data in a data directory, so my file lives in /src/data/bookmarks.json.
    Open code [ { "pageTitle": "Our Favorite Sandwich Bread | King Arthur Baking", "url": "<https://www.kingarthurbaking.com/recipes/our-favorite-sandwich-bread-recipe>", "description": "Classic American sandwich loaf, perfect for French toast and sandwiches.", "id": "007y8pmEOvhwldfT3wx1MW" }, { "pageTitle": "Chris Coyier's discussion of Automatic Social Share Images | CSS-Tricks ", "url": "<https://css-tricks.com/automatic-social-share-images/>", "description": "It's a pretty low-effort thing to get a big fancy link preview on social media. Toss a handful of specific <meta> tags on a URL and you get a big image-title-description thing ", "id": "04CXDvGQo19m0oXERL6bhF" }, { "pageTitle": "Automatic Social Share Images | ryanfiller.com", "url": "<https://www.ryanfiller.com/blog/automatic-social-share-images/>", "description": "Setting up automatic social share images with Puppeteer and Netlify Functions. ", "id": "04CXDvGQo19m0oXERLoC10" }, { "pageTitle": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 - YouTube", "url": "<https://m.youtube.com/watch?v=pXb2jA43A6k>", "description": "Emma Wedekind: Foundations of Design Systems / React Boston 2019 Presented by: Emma Wedekind – LogMeIn Design systems are in the world around us, from street...", "id": "0d56d03e-aba4-4ebd-9db8-644bcc185e33" }, { "pageTitle": "Editorial Design Patterns With CSS Grid And Named Columns — Smashing Magazine", "url": "<https://www.smashingmagazine.com/2019/10/editorial-design-patterns-css-grid-subgrid-naming/>", "description": "By naming lines when setting up our CSS Grid layouts, we can tap into some interesting and useful features of Grid — features that become even more powerful when we introduce subgrids.", "id": "13ac1043-1b7d-4a5b-a3d8-b6f5ec34cf1c" }, { "pageTitle": "Netlify pro tip: Using Split Testing to power private beta releases - DEV Community 👩‍💻👨‍💻", "url": 
"<https://dev.to/philhawksworth/netlify-pro-tip-using-split-testing-to-power-private-beta-releases-a7l>", "description": "Giving users ways to opt in and out of your private betas. Video and tutorial.", "id": "1fbabbf9-2952-47f2-9005-25af90b0229e" }, { "pageTitle": "Netlify Public Folder, Part I: What? Recreating the Dropbox Public Folder With Netlify | Jim Nielsen’s Weblog", "url": "<https://blog.jim-nielsen.com/2019/netlify-public-folder-part-i-what/>", "id": "2607e651-7b64-4695-8af9-3b9b88d402d5" }, { "pageTitle": "Why Is CSS So Weird? - YouTube", "url": "<https://m.youtube.com/watch?v=aHUtMbJw8iA&feature=youtu.be>", "description": "Love it or hate it, CSS is weird! It doesn't work like most programming languages, and it doesn't work like a design tool either. But CSS is also solving a v...", "id": "2e29aa3b-45b8-4ce4-85b7-fd8bc50daccd" }, { "pageTitle": "Internet world despairs as non-profit .org sold for $$$$ to private equity firm, price caps axed • The Register", "url": "<https://www.theregister.co.uk/2019/11/20/org_registry_sale_shambles/>", "id": "33406b33-c453-44d3-8b18-2d2ae83ee73f" }, { "pageTitle": "Netlify Identity for paid subscriptions - Access Control / Identity - Netlify Community", "url": "<https://community.netlify.com/t/netlify-identity-for-paid-subscriptions/1947/2>", "description": "I want to limit certain functionality on my website to paying users. Now I’m using a payment provider (Mollie) similar to Stripe. My idea was to use the webhook fired by this service to call a Netlify function and give…", "id": "34d6341c-18eb-4744-88e1-cfbf6c1cfa6c" }, { "pageTitle": "SmashingConf Freiburg 2019: Videos And Photos — Smashing Magazine", "url": "<https://www.smashingmagazine.com/2019/10/smashingconf-freiburg-2019/>", "description": "We had a lovely time at SmashingConf Freiburg. 
This post wraps up the event and also shares the video of all of the Freiburg presentations.", "id": "354cbb34-b24a-47f1-8973-8553ed1d809d" }, { "pageTitle": "Adding Google Calendar to your JAMStack", "url": "<https://www.raymondcamden.com/2019/11/18/adding-google-calendar-to-your-jamstack>", "description": "A look at using Google APIs to add events to your static site.", "id": "361b20c4-75ce-46b3-b6d9-38139e03f2ca" }, { "pageTitle": "How to Contribute to an Open Source Project | CSS-Tricks", "url": "<https://css-tricks.com/how-to-contribute-to-an-open-source-project/>", "description": "The following is going to get slightly opinionated and aims to guide someone on their journey into open source. As a prerequisite, you should have basic", "id": "37300606-af08-4d9a-b5e3-12f64ebbb505" }, { "pageTitle": "Functions | Netlify", "url": "<https://www.netlify.com/docs/functions/>", "description": "Netlify builds, deploys, and hosts your front end. Learn how to get started, see examples, and view documentation for the modern web platform.", "id": "3bf9e31b-5288-4b3b-89f2-97034603dbf6" }, { "pageTitle": "Serverless Can Help You To Focus - By Simona Cotin", "url": "<https://hackernoon.com/serverless-can-do-that-7nw32mk>", "id": "43b1ee63-c2f8-4e14-8700-1e21c2e0a8b1" }, { "pageTitle": "Nuxt, Next, Nest?! My Head Hurts. - DEV Community 👩‍💻👨‍💻", "url": "<https://dev.to/laurieontech/nuxt-next-nest-my-head-hurts-5h98>", "description": "I clearly know what all of these things are. Their names are not at all similar. 
But let's review, just to make sure we know...", "id": "456b7d6d-7efa-408a-9eca-0325d996b69c" }, { "pageTitle": "Consuming a headless CMS GraphQL API with Eleventy - Webstoemp", "url": "<https://www.webstoemp.com/blog/headless-cms-graphql-api-eleventy/>", "description": "With Eleventy, consuming data coming from a GraphQL API to generate static pages is as easy as using Markdown files.", "id": "4606b168-21a6-49df-8536-a2a00750d659" }, ]
Now that the data is in the project, we need Astro to incorporate the data into its build process. To do this, we can use Astro’s new(ish) Content Layer API. The Content Layer API adds a content configuration file to your src directory that allows you to run and collect any number of content pieces from data in your project or external APIs. Create the file /src/content.config.ts (the name of this file matters, as this is what Astro is looking for in your project).
import { defineCollection, z } from "astro:content";
import { file } from 'astro/loaders';

const bookmarks = defineCollection({
  schema: z.object({
    pageTitle: z.string(),
    url: z.string(),
    description: z.string().optional()
  }),
  loader: file("src/data/bookmarks.json"),
});

export const collections = { bookmarks };
In this file, we import a few helpers from Astro. We can use defineCollection to create the collection, z as Zod to help define our types, and file is a specific content loader meant to read data files.
    The defineCollection method takes an object as its argument with a required loader and optional schema. The schema will help make our content type-safe and make sure our data is always what we expect it to be. In this case, we’ll define the three data properties each of our bookmarks has. It’s important to define all your data in your schema, otherwise it won’t be available to your templates.
    We provide the loader property with a content loader. In this case, we’ll use the file loader that Astro provides and give it the path to our JSON.
    Finally, we need to export the collections variable as an object containing all the collections that we’ve defined (just bookmarks in our project). You’ll want to restart the local server by re-running npm run dev in your terminal to pick up the new data.
    Using the new bookmarks content collection
    Now that we have data, we can use it in our homepage to show the most recent bookmarks that have been added. To get the data, we need to access the content collection with the getCollection method from astro:content. Add the following code to the frontmatter for ./src/pages/index.astro .
---
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';

const bookmarks = await getCollection('bookmarks');
---
This code imports the getCollection method and uses it to create a new variable that contains the data in our bookmarks collection. The bookmarks variable is an array of data, as defined by the collection, which we can use to loop through in our template.
---
import Layout from '../layouts/Layout.astro';
import { getCollection } from 'astro:content';

const bookmarks = await getCollection('bookmarks');
---
<Layout>
  <div class="max-w-3xl mx-auto my-10">
    <h1 class="text-3xl text-center">My latest bookmarks</h1>
    <p class="text-xl text-center mb-5">
      This is only 10 of {bookmarks.length}
    </p>
    <h2 class="text-2xl mb-3">Latest bookmarks</h2>
    <ul class="grid gap-4">
      {
        bookmarks.slice(0, 10).map((item) => (
          <li>
            <a href={item.data?.url} class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
              <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
                {item.data?.pageTitle}
              </h3>
              <p class="font-normal text-gray-700 dark:text-gray-400">
                {item.data?.description}
              </p>
            </a>
          </li>
        ))
      }
    </ul>
  </div>
</Layout>
This should pull the most recent 10 items from the array and display them on the homepage with some Tailwind styles. The main thing to note here is that the data structure has changed a little. The actual data for each item in our array now resides in the data property of the item. This allows Astro to put additional data on the object without colliding with any details we provide in our database. Your project should now look something like this.
    Now that we have data and display, let’s get to work on our search functionality.
    Building search with actions and vanilla JavaScript
    To start, we’ll want to scaffold out a new Astro component. In our example, we’re going to use vanilla JavaScript, but if you’re familiar with React or other frameworks that Astro supports, you can opt for client Islands to build out your search. The Astro actions will work the same.
    Setting up the component
    We need to make a new component to house a bit of JavaScript and the HTML for the search field and results. Create the component in a ./src/components/Search.astro file.
<form id="searchForm" class="flex mb-6 items-center max-w-sm mx-auto">
  <label for="simple-search" class="sr-only">Search</label>
  <div class="relative w-full">
    <input type="text" id="search" class="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500" placeholder="Search Bookmarks" required />
  </div>
  <button type="submit" class="p-2.5 ms-2 text-sm font-medium text-white bg-blue-700 rounded-lg border border-blue-700 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 dark:bg-blue-600 dark:hover:bg-blue-700 dark:focus:ring-blue-800">
    <svg class="w-4 h-4" aria-hidden="true" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 20 20">
      <path stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="m19 19-4-4m0-7A7 7 0 1 1 1 8a7 7 0 0 1 14 0Z"></path>
    </svg>
    <span class="sr-only">Search</span>
  </button>
</form>
<div class="grid gap-4 mb-10 hidden" id="results">
  <h2 class="text-xl font-bold mb-2">Search Results</h2>
</div>
<script>
  const form = document.getElementById("searchForm");
  const search = document.getElementById("search");
  const results = document.getElementById("results");

  form?.addEventListener("submit", async (e) => {
    e.preventDefault();
    console.log("SEARCH WILL HAPPEN");
  });
</script>
The basic HTML sets up a search form, input, and results area with IDs that we’ll use in JavaScript. The basic JavaScript finds those elements, and for the form, adds an event listener that fires when the form is submitted. The event listener is where a lot of our magic is going to happen, but for now, a console log will do to make sure everything is set up properly.
    Setting up an Astro Action for search
    In order for Actions to work, we need our project to allow for Astro to work in server or hybrid mode. These modes allow for all or some pages to be rendered in serverless functions instead of pre-generated as HTML during the build. In this project, this will be used for the Action and nothing else, so we’ll opt for hybrid mode.
    To be able to run Astro in this way, we need to add a server integration. Astro has integrations for most of the major cloud providers, as well as a basic Node implementation. I typically host on Netlify, so we’ll install their integration. Much like with Tailwind, we’ll use the CLI to add the package and it will build out the boilerplate we need.
npx astro add netlify
Once this is added, Astro is running in hybrid mode. Most of our site is pre-generated as HTML, but when the Action gets used, it will run as a serverless function.
    Setting up a very basic search Action
    Next, we need an Astro Action to handle our search functionality. To create the action, we need to create a new file at ./src/actions/index.js. All our Actions live in this file. You can write the code for each one in separate files and import them into this file, but in this example, we only have one Action, and that feels like premature optimization.
    In this file, we’ll set up our search Action. Much like setting up our content collections, we’ll use a method called defineAction and give it a schema and in this case a handler. The schema will validate the data it’s getting from our JavaScript is typed correctly, and the handler will define what happens when the Action runs.
import { defineAction } from "astro:actions";
import { z } from "astro:schema";
import { getCollection } from "astro:content";

export const server = {
  search: defineAction({
    schema: z.object({
      query: z.string(),
    }),
    handler: async ({ query }) => {
      const bookmarks = await getCollection("bookmarks");
      const results = bookmarks.filter((bookmark) => {
        return bookmark.data.pageTitle.includes(query);
      });
      return results;
    },
  }),
};
For our Action, we’ll name it search and expect a schema of an object with a single property named query, which is a string. The handler function destructures the query from the validated input, gets all of our bookmarks from the content collection, and uses the native JavaScript .filter() method to check if the query is included in any bookmark titles. This basic functionality is ready to test with our front end.
    Using the Astro Action in the search form event
    When the user submits the form, we need to send the query to our new Action. Instead of figuring out where to send our fetch request, Astro gives us access to all of our server Actions with the actions object in astro:actions. This means that any Action we create is accessible from our client-side JavaScript.
    In our Search component, we can now import our Action directly into the JavaScript and then use the search action when the user submits the form.
<script>
  import { actions } from "astro:actions";

  const form = document.getElementById("searchForm");
  const search = document.getElementById("search");
  const results = document.getElementById("results");

  form?.addEventListener("submit", async (e) => {
    e.preventDefault();
    results.innerHTML = "";
    const query = search.value;
    // the schema expects an object, so pass { query } rather than the bare string
    const { data, error } = await actions.search({ query });

    if (error) {
      results.innerHTML = `<p>${error.message}</p>`;
      return;
    }

    // create a div for each search result
    data.forEach((item) => {
      const div = document.createElement("div");
      div.innerHTML = `
        <a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
          <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
            ${item.data?.pageTitle}
          </h3>
          <p class="font-normal text-gray-700 dark:text-gray-400">
            ${item.data?.description}
          </p>
        </a>`;
      // append the div to the results container
      results.appendChild(div);
    });

    // show the results container
    results.classList.remove("hidden");
  });
</script>
When results are returned, we can now get search results!
    Though, they’re highly problematic. This is just a simple JavaScript filter, after all. You can search for “Favorite” and get my favorite bread recipe, but if you search for “favorite” (no caps), you’ll get an error… Not ideal.
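This limitation is easy to reproduce in plain JavaScript. Here is a minimal sketch of the same filter logic, using a couple of hypothetical titles:

```javascript
// String.prototype.includes() is case-sensitive, so the casing of the
// query must match the casing in the title exactly.
const titles = [
  "Our Favorite Sandwich Bread | King Arthur Baking",
  "Why Is CSS So Weird? - YouTube",
];

const match = (query) => titles.filter((t) => t.includes(query));

console.log(match("Favorite").length); // 1 — exact casing matches
console.log(match("favorite").length); // 0 — lowercase finds nothing
```

A real search box needs to be far more forgiving than this, which is exactly what a fuzzy-search library handles for us.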
    That’s why we should use a package like Fuse.js.
    Adding Fuse.js for fuzzy search
    Fuse.js is a JavaScript package that has utilities to make “fuzzy” search much easier for developers. Fuse will accept a string and based on a number of criteria (and a number of sets of data) provide responses that closely match even when the match isn’t perfect. Depending on the settings, Fuse can match “Favorite”, “favorite”, and even misspellings like “favrite” all to the right results.
    Is Fuse as powerful as something like Algolia or ElasticSearch? No. Is it free and pretty darned good? Absolutely! To get Fuse moving, we need to install it into our project.
npm install fuse.js
From there, we can use it in our Action by importing it in the file and creating a new instance of Fuse based on our bookmarks collection.
import { defineAction } from "astro:actions";
import { z } from "astro:schema";
import { getCollection } from "astro:content";
import Fuse from "fuse.js";

export const server = {
  search: defineAction({
    schema: z.object({
      query: z.string(),
    }),
    handler: async ({ query }) => {
      const bookmarks = await getCollection("bookmarks");

      const fuse = new Fuse(bookmarks, {
        threshold: 0.3,
        keys: [
          { name: "data.pageTitle", weight: 1.0 },
          { name: "data.description", weight: 0.7 },
          { name: "data.url", weight: 0.3 },
        ],
      });

      const results = fuse.search(query);
      return results;
    },
  }),
};
In this case, we create the Fuse instance with a few options. We give it a threshold value between 0 and 1 to decide how “fuzzy” to make the search. Fuzziness is definitely something that depends on the use case and the dataset. In our dataset, I’ve found 0.3 to be a great threshold.
    The keys array allows you to specify which data should be searched. In this case, I want all the data to be searched, but I want to allow for different weighting for each item. The title should be most important, followed by the description, and the URL should be last. This way, I can search for keywords in all these areas.
    Once there’s a new Fuse instance, we run fuse.search(query) to have Fuse check the data, and return an array of results.
    When we run this with our front-end, we find we have one more issue to tackle.
    The structure of the data returned is not quite what it was with our simple JavaScript. Each result now has a refIndex and an item. All our data lives on the item, so we need to destructure the item off of each returned result.
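With a hypothetical mock result, the new shape looks like this:

```javascript
// Mock of a Fuse.js-style result set: each hit wraps the original
// entry in an `item` property, alongside a `refIndex`.
const results = [
  { refIndex: 0, item: { data: { pageTitle: "Our Favorite Sandwich Bread" } } },
];

// Destructuring `item` gets us back to the bookmark data we had before.
const titles = results.map(({ item }) => item.data.pageTitle);
console.log(titles); // ["Our Favorite Sandwich Bread"]
```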
    To do that, adjust the front-end forEach.
// create a div for each search result
data.forEach(({ item }) => {
  const div = document.createElement("div");
  div.innerHTML = `
    <a href="${item.data?.url}" class="block p-6 bg-white border border-gray-200 rounded-lg shadow-sm hover:bg-gray-100 dark:bg-gray-800 dark:border-gray-700 dark:hover:bg-gray-700">
      <h3 class="mb-2 text-2xl font-bold tracking-tight text-gray-900 dark:text-white">
        ${item.data?.pageTitle}
      </h3>
      <p class="font-normal text-gray-700 dark:text-gray-400">
        ${item.data?.description}
      </p>
    </a>`;
  // append the div to the results container
  results.appendChild(div);
});
Now, we have a fully working search for our bookmarks.
    Next steps
    This just scratches the surface of what you can do with Astro Actions. For instance, we should probably add additional error handling based on the error we get back. You can also experiment with handling this at the page-level and letting there be a Search page where the Action is used as a form action and handles it all as a server request instead of with front-end JavaScript code. You could also refactor the JavaScript from the admittedly low-tech vanilla JS to something a bit more robust with React, Svelte, or Vue.
    One thing is for sure, Astro keeps looking at the front-end landscape and learning from the mistakes and best practices of all the other frameworks. Actions, Content Layer, and more are just the beginning for a truly compelling front-end framework.
    Powering Search With Astro Actions and Fuse.js originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Abhishek Prakash
    Tue, 11 Mar 2025 12:50:25 GMT

In case you didn't know it already, regularly charging the battery to 100% or fully discharging it stresses the battery and may lead to poor battery life in the long run.
I am not making these claims on my own. This is what the experts and even the computer manufacturers tell you.
    As you can see in the official Lenovo video above, continuous full charging and discharging accelerate the deterioration of battery health. They also tell you that the optimum battery charging range is 20-80%.
Although Lenovo also tells you that batteries these days are made to last longer than your computer, and I am not sure what their idea of an average computer lifespan is, I would prefer to keep the battery healthy for a longer period and thus extract good performance from my laptop for as long as it lives.
    I mean, it's all about following the best practices, right?
    Now, you could manually plug and unplug the power cord but it won't work if you are connected to a docking station or use a modern monitor to power your laptop.
    What can you do in that case? Well, to control the battery charging on Linux, you have a few options:
KDE Plasma has this as an in-built feature. That's why KDE is ❤️
GNOME has extensions for this. Typical GNOME thing.
There are command line tools to limit battery charging levels. Typical Linux thing 😍
Let's see them one by one.
📋Please verify which desktop environment you are using and then follow the appropriate method.
Limit laptop battery charging in KDE
    If you are using KDE Plasma desktop environment, all you have to do is to open the Settings app and go to Power Management. In the Advanced Power Settings, you'll see the battery levels settings.
    I like that KDE informs the users about reduced battery life due to overcharging. It even sets the charging levels at 50-90% by default.
Of course, you can change the limit to something like 20-80%, although I am not a fan of the lower 20% limit and prefer 40-80% instead.
    That's KDE for you. Always caring for its kusers.
💡It is possible that the battery charging control feature may need to be enabled from the BIOS. Look for it under power management settings in BIOS.
Set battery charging limit in GNOME
    Like most other things, GNOME users can achieve this by using a GNOME extension.
    There is an extension called ThinkPad Battery Threshold for this purpose. Although it mentions ThinkPad everywhere, you don't need to own a Lenovo ThinkPad to use it.
    From what I see, the command it runs should work for most, if not all, laptops from different manufacturers.
    I have a detailed tutorial on using GNOME Extensions, so I won't repeat the steps.
    Use the Extension Manager tool to install ThinkPad Battery Threshold extension.
Once the extension is enabled, you can find it in the system tray. On the first run, it shows a red exclamation mark because the thresholds are not enabled yet.
    If you click on the Threshold settings, you will be presented with configuration options.
Once you have set the desired values, click on Apply. Next, you'll have to click Enable thresholds. When you hit that, it will ask for your password.
At this screen, you can get a partial hint of the command it is going to run.
📋From what I experienced, while it does set an upper limit, it didn't set the lower limit for my Asus Zenbook. I'll check it on my Tuxedo laptop later. Meanwhile, if you try it on some other device, do share if it works for the lower charging limit as well.
Using command line to set battery charging thresholds
🚧You must have basic knowledge of the Linux command line. That's because there are many moving parts and variables for this part.
Here's the thing. For most laptops, there should be file(s) to control battery charging in the /sys/class/power_supply/BAT0/ directory, but the file names are not standard. It could be charge_control_end_threshold or charge_stop_threshold or something similar.
    Also, you may have more than one battery. For most laptops, it will be BAT0 that is the main battery but you need to make sure of that.
    Install the upower CLI tool on your distribution and then use this command:
upower --enumerate
It will show all the power devices present on the system:
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/line_power_AC0
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o001
/org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o002
/org/freedesktop/UPower/devices/headphones_dev_BC_87_FA_23_77_B2
/org/freedesktop/UPower/devices/DisplayDevice
You can find the battery name here.
    The next step is to look for the related file in /sys/class/power_supply/BAT0/ directory.
    If you find a file starting with charge, note down its name and then add the threshold value to it.
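If you want to script that lookup, here's a minimal sketch. The battery directory and the candidate file names are assumptions; vendors differ, so adjust them to what you actually find on your system:

```shell
# Probe a battery directory for a known charge-threshold file and
# print the first one that exists.
find_threshold_file() {
  for name in charge_control_end_threshold charge_stop_threshold; do
    if [ -f "$1/$name" ]; then
      printf '%s\n' "$1/$name"
      return 0
    fi
  done
  return 1
}

# On a real laptop you would call:
# find_threshold_file /sys/class/power_supply/BAT0
```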
    In my case, it is /sys/class/power_supply/BAT0/charge_control_end_threshold, so I set an upper threshold of 80 in this way:
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
You could also use the nano editor to edit the file, but using the tee command is quicker here.
💡You can also use tlp for this purpose by editing the /etc/tlp.conf file.
Conclusion
See, if you were getting 10 hours of average battery life on a new laptop, it is normal to expect around 7-8 hours after two years. But if you leave it at full charge all the time, it may come down to 6 hours instead of 7-8 hours. These numbers are just for illustration.
    This 20-80% range is what the industry recommends these days. On my Samsung Galaxy smartphone, there is a "Battery protection" setting to stop charging the device after 80% of the charge.
    I wish a healthy battery life for your laptop 💻
by: Linux.com Editorial Staff
    Mon, 10 Mar 2025 15:30:39 +0000

    Join us for a Complimentary Live Webinar Sponsored by Linux Foundation Education and Arm Education
    March 19, 2025 | 08:00 AM PDT (UTC-7)
    You won’t believe how fast this is! Join us for an insightful webinar on leveraging CPUs for machine learning inference using the recently released, open source KleidiAI library. Discover how KleidiAI’s optimized micro-kernels are already being adopted by popular ML frameworks like PyTorch, enabling developers to achieve amazing inference performance without GPU acceleration. We’ll discuss the key optimizations available in KleidiAI, review real-world use cases, and demonstrate how to get started with ease in a fireside chat format, ensuring you stay ahead in the ML space and harness the full potential of CPUs already in consumer hands. This Linux Foundation Education webinar is supported under the Semiconductor Education Alliance and sponsored by Arm.
    Register Now

    The post Learn how easy it is to leverage CPUs for machine learning with our free webinar appeared first on Linux.com.
  13. Smashing Meets Accessibility

    by: Geoff Graham
    Mon, 10 Mar 2025 15:08:47 +0000

    The videos from Smashing Magazine’s recent event on accessibility were just posted the other day. I was invited to host the panel discussion with the speakers, including a couple of personal heroes of mine, Stéphanie Walter and Sarah Fossheim. But I was just as stoked to meet Kardo Ayoub who shared his deeply personal story as a designer with a major visual impairment.
    I’ll drop the video here:
    I’ll be the first to admit that I had to hold back my emotions as Kardo detailed what led to his impairment, the shock that came of it, and how he has beaten the odds to not only be an effective designer, but a real leader in the industry. It’s well worth watching his full presentation, which is also available on YouTube alongside the full presentations from Stéphanie and Sarah.
    Smashing Meets Accessibility originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  14. by: Abhishek Kumar
    Mon, 10 Mar 2025 11:05:22 GMT

    If you are someone interested in self-hosting, home automation, or just want to tinker with your Raspberry Pi, you have various options to get started.
    But if you are new and want something easy to get you up to speed, CasaOS is what you can try.
    CasaOS isn't your ordinary operating system. It is more like a conductor, bringing all your favorite self-hosted applications together under one roof.
    Built around the Docker ecosystem, it simplifies the process of managing various services, apps, and smart devices from a browser-based dashboard.
    CasaOS interface running on ZimaBoard
    Originally developed by the makers of ZimaBoard, CasaOS makes the deployment of tools like Jellyfin, Plex, Immich, and PhotoPrism a matter of a few clicks.
    ZimaBoard Turned My Dream of Owning a Homelab into RealityGet control of your data by hosting open source software easily with this plug and play homelab device.It's FOSSAbhishek Prakash
    Let us find out more and explore how CasaOS can help transform our simple Raspberry Pi into a powerful personal cloud.
    What is CasaOS?
    Think of CasaOS (Casa being "home" in Spanish) as a home for your Raspberry Pi or similar device.
    It sits on top of your existing operating system, like Ubuntu or Raspberry Pi OS, and transforms it into a self-hosting machine.
    CasaOS simplifies the process of installing and managing applications you'd typically run through Docker containers, blending in the user-friendliness of a Docker management platform like Portainer.
    It acts as the interface between you and your applications, providing a sleek, user-friendly dashboard that allows you to control everything from one place.
    You can deploy various applications, including media servers like Jellyfin or file-sharing platforms like Nextcloud, all through its web-based interface.
    Installing CasaOS on Raspberry Pi
    Installing CasaOS on a Raspberry Pi is as easy as running a single bash script. But first, let’s make sure your Raspberry Pi is ready:
    💡Feeling a bit hesitant about running scripts? CasaOS offers a live demo on their website (username: casaos, password: casaos) to familiarize yourself with the interface before taking the plunge.
    Ensure your Pi’s operating system is up-to-date by running the following commands:
    sudo apt update && sudo apt upgrade -y
    If you do not have curl installed already, install it by running:
    sudo apt install curl -y
    Now, grab the installation script from the official website and run it:
    curl -fsSL https://get.casaos.io | sudo bash
    Access the CasaOS web interface
    After the installation completes, you will receive the IP address in the terminal to access CasaOS from your web browser.

    Simply type this address into your browser (if you are unsure, run hostname -I on the Raspberry Pi to get your IP), and you will be greeted by the CasaOS welcome screen.

    The initial setup process will guide you through creating an account and getting started with your personal cloud.
    Getting Started
    Once inside, CasaOS welcomes you with a clean, modern interface. You’ll see system stats like CPU usage, memory, and disk space upfront in widget-style panels.
    There’s also a search bar for easy navigation, and at the heart of the dashboard lies the app drawer—your gateway to all installed and available applications.
    CasaOS comes pre-installed with two main apps: Files and the App Store. While the Files app gives you easy access to local storage on your Raspberry Pi, the App Store is where the magic really happens.
    From here, you can install various applications with just a few clicks.
    Exploring the magical app store
    The App Store is one of the main attractions of CasaOS. It offers a curated selection of applications that can be deployed directly on your Pi with minimal effort.
    Here’s how you can install an app:
    Go to the app store: From the dashboard, click on the App Store icon.
    Browse or search for an app: Scroll through the list of available apps or use the search bar to find what you’re looking for.
    Click install: Once you find the app you want, simply click on the installation button, and CasaOS will handle the rest. The app will appear in your app drawer once the installation is complete.
    It is that simple.
    💡Container-level settings for the apps can be accessed by right-clicking the app icon in the dashboard. It lets you map (Docker volume) directories on the disk with the app. For example, if you are using Jellyfin, you should map your media folder in the Jellyfin (container) settings. You will see this in a later section of this tutorial.
    Access
    Once you have installed applications in CasaOS, accessing them is straightforward, thanks to its intuitive design.
    For example, to open Jellyfin, all you have to do is click on its icon, and it will automatically open up in a new browser window.
    Each application you install behaves in a similar way; CasaOS takes care of the back-end configuration to make sure the apps are easily accessible through your browser.
    No need to manually input IP addresses or ports, as CasaOS handles that for you.
    For applications like Jellyfin or any self-hosted service, you will likely need to log in with default credentials (which you can and should change after the first use).
    In the case of Jellyfin, the default login credentials were:
    Username: admin
    Password: admin
    Of course, CasaOS allows you to customize these credentials when setting up the app initially, and it's always a good idea to use something more secure.
    My experience with CasaOS
    For this article, I installed a few applications on CasaOS tailored to my homelab needs:
    A Jellyfin server for media streaming
    Transmission as a torrent client
    File Browser to easily interact with files through the browser
    Cloudflared for tunneling with Cloudflare
    Nextcloud to set up my cloud
    A custom Docker stack for hosting a WordPress site
    I spent a full week testing these services in my daily routine and jotted down some key takeaways, both good and bad.
    While CasaOS offers a smooth experience overall, there are some quirks that require you to have Docker knowledge to work with them.
    💡I faced a few issues that were caused by mounting external drives and binding them to the CasaOS apps. I solved them by automounting an external disk.
    Jellyfin media server: Extra drive mount issue
    When I first set up Jellyfin on day one, it worked well right out of the box. However, things got tricky once I added an extra drive for my media library.
    I spent a good chunk of time managing permissions and binding volumes, which was definitely not beginner-friendly.
    For someone new to Docker or CasaOS, the concept of binding volumes can be perplexing. You don’t just plug in the drive and expect it to work, it requires configuring how your media files will link to the Jellyfin container.
    You need to edit the fstab file if you want it to mount at the exact same location every time.
    Even after jumping through those hoops, it wasn’t smooth sailing. One evening, I accidentally turned off the Raspberry Pi.
    When it booted back up, the additional drive wasn’t mounted automatically, and I had to go through the whole setup process again ☹️
    So while Jellyfin works, managing external drives in CasaOS feels like it could be a headache for new users.
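    For reference, persistent mounting is configured in /etc/fstab. Below is a minimal sketch of such an entry; the mount point /mnt/media and the filesystem type are hypothetical placeholders, and you get the real UUID from running sudo blkid on your own drive:

    ```
    # /etc/fstab entry to mount an external data drive at the same
    # location on every boot (replace the UUID with your drive's UUID
    # as reported by: sudo blkid /dev/sda1)
    UUID=xxxx-xxxx  /mnt/media  ext4  defaults,nofail  0  2
    ```

    The nofail option is worth adding on a Raspberry Pi: it lets the system keep booting even if the drive is unplugged. After editing fstab, sudo mount -a applies the change without a reboot.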
    Cloudflared connection drops
    I used Cloudflare Tunnel to access the services from outside the home network.
    It was a bit of a mixed bag. For the most part, it worked fine, but there were brief periods where the connection was not working even though it said it was connected.
    The connection would just drop unexpectedly, and I’d have to fiddle around with it to get things running again.
    After doing some digging, I found out that the CLI tool for Cloudflare Tunnels had recently been updated, so that might’ve been the root of the issue.
    Hopefully, it was a temporary glitch, but it is something to keep in mind if you rely on stable connections.
    Transmission torrent client: Jellyfin’s story repeats
    💡The default username & password is casaos. The tooltip for some applications contains such information. You can also edit them and add notes for the application.
    Transmission was solid for saving files locally, but as soon as I tried adding the extra drive to save files on my media library, I hit the same wall as with Jellyfin.
    The permissions errors cropped up, and again, the auto-mount issue reared its head.
    So, I would say it is fine for local use if you’re sticking to one drive, but if you plan to expand your storage, be ready for some trial and error.
    Nextcloud: Good enough but not perfect
    Setting up a basic Nextcloud instance in CasaOS was surprisingly easy. It was a matter of clicking the install button, and within a few moments, I had my personal cloud up and running.
    However, if you’re like me and care about how your data is organized and stored, there are a few things you’ll want to keep in mind.
    When you first access your Nextcloud instance, it defaults to using SQLite as the database, which is fine for simple, small-scale setups.
    But if you’re serious about storing larger files or managing multiple users, you’ll quickly realize that SQLite isn’t the best option. Nextcloud itself warns you that it’s not ideal for handling larger loads, and I would highly recommend setting up a proper MySQL or MariaDB database instead.
    Doing so will give you more stability and performance in the long run, especially as your data grows.
    Beyond the database choice, I found that even after using the default setup, Nextcloud’s health checks flagged several issues.
    For example, it complained about the lack of an HTTPS connection, which is crucial for secure file transfers.
    If you want your Nextcloud instance to be properly configured and secure, you'll need to invest some time to set up things like:
    Setting up a secure SSL certificate
    Optimizing your database
    Handling other backend details that aren’t obvious to a new user
    So while Nextcloud is easy to get running initially, fine-tuning it for real-world use takes a bit of extra work, especially if you are focused on data integrity and security.
    Custom WordPress stack: Good stuff!
    Now, coming to the WordPress stack I manually added, this is where CasaOS pleasantly surprised me.
    While I still prefer using Portainer to manage my custom Docker stacks, I have to admit that CasaOS has put in great effort to make the process intuitive.
    It is clear they’ve thought about users who want to deploy their own stacks using Docker Compose files or Docker commands.
    Adding the stack was simple, and the CasaOS interface made it relatively easy to navigate.
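    To illustrate the kind of stack you can paste into CasaOS, here is a minimal Docker Compose sketch for a WordPress site backed by MariaDB. The image tags, port, and passwords are hypothetical placeholders, not the exact stack used in this article:

    ```yaml
    services:
      db:
        image: mariadb:10.11
        restart: unless-stopped
        environment:
          MYSQL_ROOT_PASSWORD: change-me       # placeholder, use a strong secret
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: change-me-too        # placeholder
        volumes:
          - db_data:/var/lib/mysql             # persist the database on disk

      wordpress:
        image: wordpress:latest
        restart: unless-stopped
        depends_on:
          - db
        ports:
          - "8080:80"                          # pick any free port on the Pi
        environment:
          WORDPRESS_DB_HOST: db                # reach MariaDB by its service name
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: change-me-too
          WORDPRESS_DB_NAME: wordpress
        volumes:
          - wp_data:/var/www/html              # persist themes, plugins, uploads

    volumes:
      db_data:
      wp_data:
    ```

    The named volumes keep your site and database data on the host, so the containers can be recreated without losing anything.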
    Final thoughts
    After using CasaOS for several days, I can confidently say it’s a tool with immense potential. The ease of deploying apps like Jellyfin and Nextcloud makes it a breeze for users who want a no-hassle, self-hosted solution.
    However, CasaOS is not perfect yet. The app store, while growing, feels limited, and those looking for a more customizable experience may find the lack of advanced Docker controls frustrating at first.
    Learn Docker: Complete Beginner’s CourseLearn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.Linux HandbookAbdullah Tarek
    That said, CasaOS succeeds in making Docker and self-hosting more accessible to the masses.
    For homelab enthusiasts like me, it is a great middle ground between the complexity of Docker CLI and the bloated nature of full-blown home automation systems.
    Whether you are a newcomer or a seasoned tinkerer, CasaOS is worth checking out if you are not afraid to deal with a few bumps along the way.
  15. by: Tatiana P
    Mon, 10 Mar 2025 07:27:07 +0000

    When things get overwhelming, I take a step back – whether by going for a walk or doing some different activity. It gives my mind some breathing space and helps me tackle challenges more effectively. 
    About me
    I am Mala Devi Selvarathinam. I am currently working as an Azure Cloud Consultant at Eficode. My role is fascinating because there are new challenges and new things to do every day. This keeps my work exciting.
    From India to Finland
    I completed my bachelor’s in India in 2014 and worked as a GRC analyst at an MNC for five years. I was leading a team, but I felt I was not very interested in the field. I had always been fascinated by cloud technologies, and I knew I had to make a switch. That’s when I started applying for an Erasmus Mundus scholarship to study in Europe for my Master’s degree.
    When I got selected, I was super happy and packed my bags to Finland in 2019. It was a double program degree. I studied at Aalto University in Finland for my first year and my second year at the Technical University of Denmark. Little did I know, the journey ahead would test me in ways I hadn’t imagined.
    Mala Devi Selvarathinam, Azure Cloud Consultant, Eficode
    Overcoming challenges
    I started my Master’s in a new country in 2019, and just a few months later, the pandemic hit the world. This was one of the most stressful periods because everything around me was shutting down. It was also difficult because summer internships were challenging to find at the time and no one knew what was happening. But Finland and Aalto University were very supportive. I took the initiative to join one of Aalto’s research teams as a research assistant, which gave me insight into the research industry and kept me going through uncertain times.
    In 2021, I completed my Master’s degree, but the world was still feeling the aftershocks of the pandemic. Starting over in a new country was daunting—especially when I found myself back at square one, working as a trainee at KPMG. The thought of beginning as a trainee again after years of experience in India was intimidating.
    However, KPMG turned out to be a great learning experience. I smoothly transitioned into consulting in Finland, climbing from trainee to junior consultant and then to senior consultant. I learned how consultation works in Finland and was also embracing the work culture which was very different to how it is back home. Leaving KPMG was bittersweet, but I wanted to dive deeper into Cloud and DevOps-related aspects, and Eficode matched what I wanted to do.
    How did I find Eficode?
    I’ve been aware of Eficode since my Master’s days at Aalto University. I have been following their work, subscribed to their newsletters, and admired their expertise in DevOps and Cloud—areas that aligned perfectly with my career aspirations.
    So, when I spotted an opening, I didn’t hesitate to apply. Six months ago, I officially became part of Eficode, and it has been an exciting journey ever since!
    Working at Eficode
    One key thing that I find exceptional at Eficode is that I’m not afraid to ask anyone any questions. There are no silly questions. That is nice, and it makes your work easier because you don’t get stuck with anything. You know you can always reach out for some help. We have a welcoming and helpful culture. The company’s hybrid work model offers flexibility, allowing me to balance work and life seamlessly.
    As a consultant, I work with multiple clients, and the requirements are different for each client. My day involves prioritizing which client has urgent demands and working on those things. No day is the same here, which keeps the job exciting. The ever-changing nature of the job keeps things fresh and engaging—I’m constantly learning, adapting, and growing.
    My motivation to join the IT field
    In the early 2000s, computers were part of my high school lab. This was a prestigious place to get access to. Seeing a machine complete tasks that we once did manually was mind-blowing. This intrigued me quite a lot. But what truly inspired me was my uncle who worked in IT—he always had answers to my questions, all thanks to the internet!
    So, after high school, when I had to choose my specialization, it was a natural choice to do Computer Science. From playing video games to exploring programming, my curiosity only grew stronger, leading me to where I am today. I was fascinated as a kid, and I still am.
    Tips to overcome challenges
    When things get overwhelming, I take a step back – whether by going for a walk or doing some different activity. It gives my mind some breathing space and helps me tackle challenges more effectively. I tackle things one at a time and see where that leads. Second, I remind myself of my end goal, and why I am here doing these things. Keeping the bigger picture in mind puts everything into perspective.
    About the impact of AI
    AI is an incredible tool—it has the potential to handle mundane, repetitive tasks, freeing up our time for more meaningful work. But like any powerful tool, it needs to be used wisely. Striking the right balance is key.
    I have been experimenting myself with AI tools and it’s amazing to see what they are capable of. It is going to be exciting to see what the future has in store and how this will change the ways of working in IT.
    Skills in IT
    There are two primary skills you must have in the field of IT: the first one, and probably the most important, is being adaptive. Technology evolves rapidly. Adapting to changes is essential because what’s relevant today might be obsolete tomorrow. Staying informed is essential.
    While it’s good to have a broad understanding, specializing in one area gives you a strong foundation. Once you’ve mastered one domain, it’s easier to branch into another. And this further improves your expertise.
    My life outside work
    Outside of work, I’m passionate about art—I enjoy calligraphy, painting, and knitting. Reading is another big part of my life; I make it a point to read at least a couple of books a month, preferably fiction.
    Cooking became a necessity when I moved to Finland, but over time, I fell in love with it. Now, I’m always experimenting with new recipes alongside my husband.
    And of course, I’m a huge fan of animated movies! My life mottos—”Just keep swimming” (from Finding Nemo) and “Keep moving forward” (from Walt Disney)—keep me motivated no matter what challenges come my way.
    The post Role model blog: Mala Devi Selvarathinam, Eficode first appeared on Women in Tech Finland.
  16. Email Spam Checker AI

    by: aiparabellum.com
    Mon, 10 Mar 2025 05:58:30 +0000


    Email Spam Checker is an intuitive online tool that helps users determine whether their emails might trigger spam filters. By analyzing various elements of an email, including content, formatting, and header information, this tool provides a comprehensive assessment of an email’s likelihood of being marked as spam. Whether you’re a business owner sending marketing emails or an individual concerned about the deliverability of your messages, this tool offers valuable insights to optimize your email communication.
    Features of Email Spam Checker AI
    Real-time Email Analysis – The tool scans your email content instantly, providing immediate feedback on potential spam triggers.
    Comprehensive Spam Score – Receive a detailed spam score that indicates how likely your email is to be flagged by common spam filters.
    Content Evaluation – The checker analyzes the text content of your email for spam-triggering words, phrases, and patterns.
    Header Analysis – The tool examines email headers for potential issues that might affect deliverability.
    HTML Structure Check – For HTML emails, the tool checks the code structure for potential red flags.
    Improvement Suggestions – Receive actionable recommendations to improve your email’s deliverability.
    User-friendly Interface – The intuitive design makes it easy for users of all technical levels to check their emails.
    Privacy-focused – Your email content is analyzed securely without storing sensitive information.
    How It Works
    Copy and Paste Your Email Content – Simply copy the content of your email, including the subject line and body, and paste it into the provided text field.
    Include Headers (Optional) – For a more thorough analysis, you can also include the email headers.
    Click ‘Check for Spam’ – Once you’ve entered your email content, click the button to initiate the analysis.
    Review the Analysis Results – The tool will process your email and provide a detailed report on potential spam triggers.
    Implement Suggested Changes – Use the recommendations provided to modify your email content and improve its deliverability.
    Re-check If Necessary – After making changes, you can run the check again to see if your spam score has improved.
    Benefits of Email Spam Checker AI
    Improved Email Deliverability – By identifying and addressing potential spam triggers, you can increase the chances of your emails reaching the intended recipients.
    Time and Resource Savings – Avoid the frustration and wasted resources associated with emails being filtered out before reaching recipients.
    Enhanced Sender Reputation – Consistently sending non-spammy emails helps maintain a positive sender reputation with email service providers.
    Marketing Campaign Optimization – For businesses, the tool helps optimize marketing emails to ensure they reach customers’ inboxes.
    Real-time Feedback – Get immediate insights into potential issues with your email content before sending.
    Educational Value – Learn about common spam triggers and best practices for email composition.
    Reduced Risk of Blacklisting – By avoiding spam-like behavior, reduce the risk of your domain being blacklisted by email providers.
    Professional Communication – Ensure your professional communications maintain a high standard of deliverability.
    Pricing
    Free Basic Check – A limited version allowing users to check a small number of emails per day.
    Premium Plan – $9.99/month for unlimited email checks and additional analysis features.
    Business Plan – $24.99/month including API access and bulk email checking capabilities.
    Enterprise Solutions – Custom pricing for organizations with specific needs and high-volume requirements.
    Annual Discount – Save 20% when subscribing to annual plans instead of monthly billing.
    14-Day Free Trial – Available for Premium and Business plans to test all features before committing.
    Review
    After thorough testing, AI Para Bellum’s Email Spam Checker proves to be a reliable and efficient tool for anyone concerned about email deliverability. The interface is clean and straightforward, making it accessible even for users with limited technical knowledge. The analysis is comprehensive, covering various aspects that might trigger spam filters, from specific keywords to HTML structure.
    The detailed reports provide clear insights into potential issues, and the suggested improvements are practical and easy to implement. Business users will particularly appreciate the bulk checking capabilities available in higher-tier plans, allowing for the analysis of entire email campaigns efficiently.
    One notable strength is the tool’s ability to keep up with evolving spam detection algorithms used by major email providers. This ensures that the recommendations remain relevant in the constantly changing landscape of email filtering.
    While the free version offers limited functionality, the paid plans provide excellent value for businesses and individuals who rely heavily on email communication. The pricing structure is reasonable considering the potential cost savings from improved email deliverability.
    Conclusion
    In an era where effective email communication is crucial, AI Para Bellum’s Email Spam Checker stands out as an essential tool for ensuring your messages reach their intended recipients. By providing detailed analysis and actionable recommendations, this tool helps users optimize their emails and avoid common spam triggers. Whether you’re a marketing professional managing email campaigns or an individual concerned about important messages being filtered out, this spam checker offers valuable insights to improve deliverability. With its user-friendly interface, comprehensive analysis, and reasonable pricing, AI Para Bellum’s Email Spam Checker is a worthwhile investment for anyone serious about effective email communication.
    Visit Website The post Email Spam Checker AI appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
  18. by: Community
    Sat, 08 Mar 2025 08:54:21 GMT

    Imagine a scenario: you downloaded a new binary called ls from the internet. The application could be intentionally malicious. Binary files are difficult to trust, and running one on your system could lead to a hijacking attack, send your sensitive files and clipboard contents to a malicious server, or interfere with existing processes on your machine.
    Wouldn’t it be great if you had a tool to run and test the application within defined security parameters? We all know the ls command lists the files in the current working directory. So why would it ever need a network connection to operate?
    That’s where the tool, Pledge, comes in. Pledge restricts the system calls a program can make. Pledge is natively supported on OpenBSD systems. Although it isn’t officially supported on Linux systems, I’ll show you a cool hack to utilize pledge on your Linux systems.
🚧As you can see, this is rather an advanced tool for sysadmins, network engineers and people in the network security field. Most desktop Linux users would not need something like this, but that does not mean you cannot explore it out of curiosity.
What makes this port possible?
Thanks to the remarkable work done by Justine Tunney, the core developer behind the Cosmopolitan Libc project.
Cosmopolitan acts as a bridge for compiling C programs for seven different platforms (Linux, macOS, Windows, FreeBSD, OpenBSD 7.3, NetBSD and BIOS) in one go.
Using Cosmopolitan Libc, she was able to port OpenBSD's pledge to Linux. She has written a nice blog post about it.
📋A quick disclaimer: Just because you can compile a C program for 7 different platforms doesn’t mean it will run successfully on all of them. You need to handle program dependencies as well. For instance, iptables uses Linux sockets, so you can’t expect it to work magically on Windows unless you find a way to provide Linux socket networking on Windows.
Restrict system calls with Pledge
You might be surprised to know that a single binary can run on 7 different platforms: Windows, Linux, macOS, FreeBSD, OpenBSD, NetBSD and BIOS.
These binary files are called Actually Portable Executables (APE). You can check out this blog for more information. These files have a .com suffix, and the suffix is necessary for them to work.
    This guide will show how to use pledge.com binary on your Linux system to restrict system calls while launching any binaries or applications.
    Step 1: Download pledge.com
You can download pledge-1.8.com from the URL http://justine.lol/pledge/pledge-1.8.com.
You can then rename the file pledge-1.8.com to pledge.com.
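If you prefer the terminal, the download and rename can be done like this (wget is an assumption here; curl -LO or any other downloader works just as well):

```shell
# Download the pledge binary and give it a shorter name.
wget http://justine.lol/pledge/pledge-1.8.com
mv pledge-1.8.com pledge.com
```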
    Step 2: Make it executable
    Run this command to make it executable.
chmod +x ./pledge.com
Step 3: Add pledge.com to the path
A quick way to accomplish this is to move the binary to the standard /usr/local/bin/ location.
sudo mv ./pledge.com /usr/local/bin
Step 4: Run and test
pledge.com curl http://itsfoss.com
I didn’t assign any permissions (called promises) to it, so it fails as expected. But it gives us a hint about which system calls the curl binary needs when it runs.
With this information, you can see if a program is requesting a system call that it should not need. For example, should a file explorer be asking for dns access?
    Curl is a tool that deals with URLs and indeed requires those system calls.
    Let's assign promises using the -p flag. I'll explain what each of these promises does in the next section.
pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \ curl -s http://itsfoss.com
📋The debug message error:pledge inet for socket is misleading. A similar open issue exists at the project's GitHub repo. It is evident that after providing the promise set "stdio rpath inet dns tty sendfd recvfd" to our curl binary, it works as expected.
It successfully redirects to the https version of our website. Let’s see if, with the same set of promises, it can talk to https-enabled websites too.
pledge.com -p 'stdio rpath inet dns tty sendfd recvfd' \ curl -s https://itsfoss.com
Yeah! It worked.
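You can also verify the sandbox in the other direction by withholding a promise. This is a hedged sketch (it assumes pledge.com is on your PATH): without the dns promise, curl cannot resolve the hostname, so the request should fail instead of reaching the site.

```shell
# Withhold 'dns': curl may open sockets but cannot resolve names,
# so the request fails and the fallback message is printed.
pledge.com -p 'stdio rpath inet tty sendfd recvfd' \
    curl -s https://itsfoss.com || echo "blocked, as expected"
```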
    A quick glance at promises
In the above section, we used 7 promises to make our curl request successful. Here’s a quick glimpse of what each promise is intended for:
stdio: Allows reading and writing to standard input/output (like printing to the console).
rpath: Allows reading files from the filesystem.
inet: Allows network-related operations (for example, connecting to a server).
dns: Allows resolving DNS queries.
tty: Allows access to the terminal.
sendfd: Allows sending file descriptors.
recvfd: Allows receiving file descriptors.
To know what other promises are supported by the pledge binary, head over to this blog.
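If you find yourself typing the same promise strings repeatedly, you can wrap them in small shell functions. The helpers below (sandbox_fs and sandbox_net) are hypothetical names of my own, not part of pledge, and they assume pledge.com is on your PATH:

```shell
# Hypothetical wrappers around pledge.com with preset promise sets.
PROMISES_FS='stdio rpath'                              # read-only filesystem tools
PROMISES_NET='stdio rpath inet dns tty sendfd recvfd'  # network clients

sandbox_fs()  { pledge.com -p "$PROMISES_FS"  "$@"; }
sandbox_net() { pledge.com -p "$PROMISES_NET" "$@"; }

# Usage:
#   sandbox_fs  ls /etc
#   sandbox_net curl -s https://itsfoss.com
```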
Porting OpenBSD pledge() to Linux: Sandboxing for Linux has never been easier.
Conclusion
OpenBSD’s pledge follows the least-privilege model. It prevents programs from misusing system resources. Under this security model, the damage a malicious application can do is quite limited. Although Linux has seccomp and AppArmor in its security arsenal, I find pledge more intuitive and easier to use.
With Actually Portable Executables (APE), Linux users can now enjoy the simplicity of pledge to make their systems more secure. The granular control it gives over what processes can do adds an extra layer of defense.
    Author Info
Bhuwan Mishra is a full-stack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He is also passionate about working with Kubernetes.
  19. by: Temani Afif
    Fri, 07 Mar 2025 13:14:12 +0000

    In the last article, we created a CSS-only star rating component using the CSS mask and border-image properties, as well as the newly enhanced attr() function. We ended with CSS code that we can easily adjust to create component variations, including a heart rating and volume control.
    This second article will study a different approach that gives us more flexibility. Instead of the border-image trick we used in the first article, we will rely on scroll-driven animations!
    Here is the same star rating component with the new implementation. And since we’re treading in experimental territory, you’ll want to view this in Chrome 115+ while we wait for Safari and Firefox support:
    CodePen Embed Fallback Do you spot the difference between this and the final demo in the first article? This time, I am updating the color of the stars based on how many of them are selected — something we cannot do using the border-image trick!
    I highly recommend you read the first article before jumping into this second part if you missed it, as I will be referring to concepts and techniques that we explored over there.
    One more time: At the time of writing, only Chrome 115+ and Edge 115+ fully support the features we will be using in this article, so please use either one of those as you follow along.
    Why scroll-driven animations?
    You might be wondering why we’re talking about scroll-driven animation when there’s nothing to scroll to in the star rating component. Scrolling? Animation? But we have nothing to scroll or animate! It’s even more confusing when you read the MDN explainer for scroll-driven animations:
But if you keep reading, you will see that we have two types of scroll-based timelines: scroll progress timelines and view progress timelines. In our case, we are going to use the second one, a view progress timeline, and here is how MDN describes it:
    You can check out the CSS-Tricks almanac definition for view-timeline-name while you’re at it for another explanation.
    Things start to make more sense if we consider the thumb element as the subject and the input element as the scroller. After all, the thumb moves within the input area, so its visibility changes. We can track that movement as a percentage of progress and convert it to a value we can use to style the input element. We are essentially going to implement the equivalent of document.querySelector("input").value in JavaScript but with vanilla CSS!
    The implementation
    Now that we have an idea of how this works, let’s see how everything translates into code.
@property --val {
  syntax: "<number>";
  inherits: true;
  initial-value: 0;
}
input[type="range"] {
  --min: attr(min type(<number>));
  --max: attr(max type(<number>));
  timeline-scope: --val;
  animation: --val linear both;
  animation-timeline: --val;
  animation-range: entry 100% exit 0%;
  overflow: hidden;
}
@keyframes --val {
  0% { --val: var(--max) }
  100% { --val: var(--min) }
}
input[type="range"]::thumb {
  view-timeline: --val inline;
}
I know, this is a lot of strange syntax! But we will dissect each line and you will see that it’s not all that complex at the end of the day.
    The subject and the scroller
    We start by defining the subject, i.e. the thumb element, and for this we use the view-timeline shorthand property. From the MDN page, we can read:
    I think it’s self-explanatory. The view timeline name is --val and the axis is inline since we’re working along the horizontal x-axis.
    Next, we define the scroller, i.e. the input element, and for this, we use overflow: hidden (or overflow: auto). This part is the easiest but also the one you will forget the most so let me insist on this: don’t forget to define overflow on the scroller!
    I insist on this because your code will work fine without defining overflow, but the values won’t be good. The reason is that the scroller exists but will be defined by the browser (depending on your page structure and your CSS) and most of the time it’s not the one you want. So let me repeat it another time: remember the overflow property!
    The animation
Next up, we create an animation that animates the --val variable between the input’s min and max values. Like we did in the first article, we are using the newly-enhanced attr() function to get those values. This is the “animation” part of the scroll-driven animation, and we link it to the view timeline we defined on the subject using animation-timeline. And to be able to animate a variable, we register it using @property.
Note the use of timeline-scope, which is another tricky feature that’s easy to overlook. By default, named view timelines are scoped to the element where they are defined and its descendants. In our case, the input is a parent element of the thumb, so it cannot access the named view timeline. To overcome this, we increase the scope using timeline-scope. Again, from MDN:
    Never forget about this! Sometimes everything is correctly defined but nothing is working because you forget about the scope.
    There’s something else you might be wondering:
    To understand this, let’s first take the following example where you can scroll the container horizontally to reveal a red circle inside of it.
    CodePen Embed Fallback Initially, the red circle is hidden on the right side. Once we start scrolling, it appears from the right side, then disappears to the left as you continue scrolling towards the right. We scroll from left to right but our actual movement is from right to left.
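Stripped to its essentials, that demo boils down to a few lines. This is a simplified, hedged sketch (the class names are my own, and it needs Chrome 115+); the circle's animation progress is driven purely by its position inside the scroller:

```css
/* Scroller: horizontal overflow creates the scrollport */
.container {
  overflow-x: auto;
}

/* Subject: its visibility inside .container drives the animation */
.circle {
  view-timeline: --reveal inline; /* track movement along the x-axis */
  animation: appear linear both;
  animation-timeline: --reveal;
}

/* Progress runs as the circle enters from the right and exits to the left */
@keyframes appear {
  from { opacity: 0; }
  to   { opacity: 1; }
}
```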
    In our case, we don’t have any scrolling since our subject (the thumb) will not overflow the scroller (the input) but the main logic is the same. The starting point is the right side and the ending point is the left side. In other words, the animation starts when the thumb is on the right side (the input’s max value) and will end when it’s on the left side (the input’s min value).
    The animation range
    The last piece of the puzzle is the following important line of code:
animation-range: entry 100% exit 0%;
By default, the animation starts when the subject starts to enter the scroller from the right and ends when the subject has completely exited the scroller from the left. This is not good because, as we said, the thumb will not overflow the scroller, so it will never reach the start and the end of the animation.
    To rectify this we use the animation-range property to make the start of the animation when the subject has completely entered the scroller from the right (entry 100%) and the end of the animation when the subject starts to exit the scroller from the left (exit 0%).
To summarize, the thumb element moves within the input’s area, and that movement is used to control the progress of an animation that animates a variable between the input’s min and max attribute values. We have our replacement for document.querySelector("input").value in JavaScript!
    I am deliberately using the same --val everywhere to confuse you a little and push you to try to understand what is going on. We usually use the dashed ident (--) notation to define custom properties (also called CSS variables) that we later call with var(). This is still true but that same notation can be used to name other things as well.
In our examples we have three different things named --val:
The variable that is animated and registered using @property. It contains the selected value and is used to style the input.
The named view timeline defined by view-timeline and used by animation-timeline.
The keyframes named --val and called by animation.
Here is the same code written with different names for more clarity:
@property --val {
  syntax: "<number>";
  inherits: true;
  initial-value: 0;
}
input[type="range"] {
  --min: attr(min type(<number>));
  --max: attr(max type(<number>));
  timeline-scope: --timeline;
  animation: value_update linear both;
  animation-timeline: --timeline;
  animation-range: entry 100% exit 0%;
  overflow: hidden;
}
@keyframes value_update {
  0% { --val: var(--max) }
  100% { --val: var(--min) }
}
input[type="range"]::thumb {
  view-timeline: --timeline inline;
}
The star rating component
    All that we have done up to now is get the selected value of the input range — which is honestly about 90% of the work we need to do. What remains is some basic styles and code taken from what we made in the first article.
    If we omit the code from the previous section and the code from the previous article here is what we are left with:
input[type="range"] {
  background: linear-gradient(90deg,
    hsl(calc(30 + 4 * var(--val)) 100% 56%) calc(var(--val) * 100% / var(--max)),
    #7b7b7b 0
  );
}
input[type="range"]::thumb {
  opacity: 0;
}
We make the thumb invisible and we define a gradient on the main element to color in the stars. No surprise here, but the gradient uses the same --val variable that contains the selected value to inform how much is colored in.
When, for example, you select three stars, the --val variable will equal 3, and the color stop of the first color will equal 3 * 100% / 5, or 60%, meaning three stars are colored in. That same color is also dynamic, as I am using the hsl() function where the first argument (the hue) is a function of --val as well.
    Here is the full demo, which you will want to open in Chrome 115+ at the time I’m writing this:
    CodePen Embed Fallback And guess what? This implementation works with half stars as well without the need to change the CSS. All you have to do is update the input’s attributes to work in half increments. Remember, we’re yanking these values out of HTML into CSS using attr(), which reads the attributes and returns them to us.
<input type="range" min=".5" step=".5" max="5">
CodePen Embed Fallback That’s it! We have our star rating component that you can easily control by adjusting the attributes.
    So, should I use border-image or a scroll-driven animation?
    If we look past the browser support factor, I consider this version better than the border-image approach we used in the first article. The border-image version is simpler and does the job pretty well, but it’s limited in what it can do. While our goal is to create a star rating component, it’s good to be able to do more and be able to style an input range as you want.
With scroll-driven animations, we have more flexibility since the idea is to first get the value of the input and then use it to style the element. I know it’s not easy to grasp, but don’t worry about that. You will encounter scroll-driven animations more often in the future, and they will become more familiar with time. This example will look easy to you in due time.
It’s worth noting that the code used to get the value is generic and easy to reuse, even if you are not going to style the input itself. Getting the value of the input is independent of styling it.
    Here is a demo where I am adding a tooltip to a range slider to show its value:
    CodePen Embed Fallback Many techniques are involved to create that demo and one of them is using scroll-driven animations to get the input value and show it inside the tooltip!
    Here is another demo using the same technique where different range sliders are controlling different variables on the page.
    CodePen Embed Fallback And why not a wavy range slider?
    CodePen Embed Fallback This one is a bit crazy but it illustrates how far we go with styling an input range! So, even if your goal is not to create a star rating component, there are a lot of use cases where such a technique can be really useful.
    Conclusion
    I hope you enjoyed this brief two-part series. In addition to a star rating component made with minimal code, we have explored a lot of cool and modern features, including the attr() function, CSS mask, and scroll-driven animations. It’s still early to adopt all of these features in production because of browser support, but it’s a good time to explore them and see what can be done soon using only CSS.
    Article series
    A CSS-Only Star Rating Component and More! (Part 1) A CSS-Only Star Rating Component and More! (Part 2)
    A CSS-Only Star Rating Component and More! (Part 2) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  20. by: Geoff Graham
    Thu, 06 Mar 2025 16:33:55 +0000

    Manuel Matuzović:
    This easily qualifies as a “gotcha” in CSS and is a good reminder that the cascade doesn’t know everything all at the same time. If a custom property is invalid, the cascade won’t ignore it, and it gets evaluated, which invalidates the declaration. And if we set an invalid custom property on a shorthand property that combines several constituent properties — like how background and animation are both shorthand for a bunch of other properties — then the entire declaration becomes invalid, including all of the implied constituents. No bueno indeed.
    What to do, then?
    Great advice, Manuel!
    Maybe don’t use custom properties in shorthand properties originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  21. by: Abhishek Prakash
    Thu, 06 Mar 2025 05:27:13 GMT

    Skype is being discontinued by Microsoft on 5th May.
    Once a hallmark of the old internet, Skype was already dying a slow death. It just could not keep up with the competition from WhatsApp, Zoom etc despite Microsoft's backing.
    While there are open source alternatives to Skype, I doubt if friends and family would use them.
    I am not going to miss it, as I haven't used Skype in years. Let's keep it in the museum of Internet history.
    Speaking of the old internet, Digg is making a comeback. 20 years back, it was the 'front page of the internet'.
    💬 Let's see what else you get in this edition
VLC aiming for the Moon.
EA open sourcing its games.
GNOME 48 features to expect.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by ONLYOFFICE.
✨ ONLYOFFICE PDF Editor: Create, Edit and Collaborate on PDFs on Linux
    The ONLYOFFICE suite now offers an updated PDF editor that comes equipped with collaborative PDF editing and other useful features.
    You can deploy ONLYOFFICE Docs on your Linux server and integrate it with your favourite platform, such as Nextcloud, Moodle and more. Alternatively, you can download the free desktop app for your Linux distro.
Online PDF editor, reader and converter | ONLYOFFICE: View and create PDF files from any text document, spreadsheet or presentation, convert PDF to DOCX online, create fillable PDF forms.
📰 Linux and Open Source News
Electronic Arts has open sourced four Command & Conquer games.
VLC is literally reaching for the Moon to mark its 20-year anniversary.
Internxt Drive has become the first cloud storage with post-quantum encryption.
Electronic Frontier Foundation has launched a new open source tool to detect eavesdropping on cellular networks.
GNOME 48 is just around the corner, check out what features are coming:
Discover What’s New in GNOME 48 With Our Feature Rundown! (It's FOSS News, Sourav Rudra)
🧠 What We’re Thinking About
    A German startup has published open source plans for its Nuclear Fusion power plant!
    As per the latest desktop market share report, macOS usage has seen a notable dip on Steam.
    🧮 Linux Tips, Tutorials and More
Get the 'Create new document' option back in the right-click context menu in GNOME.
Facing errors while trying to watch a DVD on Fedora? It can be fixed.
Learn to record a selected area or an application window with OBS Studio.
Knowing how to edit files with the Nano text editor might come in handy while dealing with config files.
New users often get confused with so many Ubuntu versions. This article helps clear the doubt.
Explained: Which Ubuntu Version Should I Use? (It's FOSS, Abhishek Prakash)
👷 Homelab and Maker's Corner
    As a Kodi user, you cannot miss out on installing add-ons and builds. We also have a list of the best add-ons to spice up your media server.
And you can use a virtual keyboard with Raspberry Pi easily.
Using On-screen Keyboard in Raspberry Pi OS (It's FOSS, Abhishek Prakash)
✨ Apps Highlight
    Facing slow downloads on your Android smartphone? Aria2App can help.
Aria2App is a Super Fast Versatile Open-Source Download Manager for Android (It's FOSS News, Sourav Rudra)
lichess lets you compete with other players in online games of Chess.
    📽️ Video I am Creating for You
How much does an active cooler cool down a Raspberry Pi 5? Let's find out in this quick video.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    For a change, you can take the text processing command crossword challenge.
Commands to Work With Text Files: Crossword (It's FOSS, Ankush Das)
💡 Quick Handy Tip
    You can play Lofi music in VLC Media Player. First, switch to the Playlist view in VLC by going into View → Playlist.
    Now, in the sidebar, scroll down and select Icecast Radio Directory. Here, search for Lofi in the search bar.
    Now, double-click on any Lo-fi channel to start playing. On the other hand, if you want to listen to music via the web browser, you can use freeCodeCamp.org Code Radio.
    🤣 Meme of the Week
    You didn't have to join the dark side, Firefox. 🫤
    🗓️ Tech Trivia
    In 1953, MIT's Whirlwind computer showcased an early form of system management software called "Director," developed by Douglas Ross. Demonstrated at a digital fire control symposium, Director automated resource allocation (like memory, storage, and printing), making it one of the earliest examples of an operating system-like program.
    🧑‍🤝‍🧑 FOSSverse Corner
    An important question has been raised by one of our longtime FOSSers.
Do we all see the same thing on the internet? I think we all assume we are seeing the same content on a website. But do we? Read this quote from an article on the Australian ABC news: “Many people are unaware that the internet they see is unique to them. Even if we surf the same news websites, we’ll see different news stories based on our previous likes. And on a website like Amazon, almost every item and price we see is unique to us. It is chosen by algorithms based on what we were previously wanting to buy and willing to pay. There is…” (It's FOSS Community, nevj)
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  22. by: Sreenath
    Thu, 06 Mar 2025 03:09:13 GMT

When it comes to screen recording on Linux, or any other operating system, OBS Studio is the go-to choice.
    It offers all the features baked in for users, ranging from casual screen recorders to advanced streamers.
    One such useful feature is to record a part of the screen in OBS Studio. I'll share the detailed steps for Linux users in this tutorial.
🚧The method mentioned is based on a Wayland session. Also, this is a personal workflow, and if readers have better options, feel free to comment, so that I can improve the article for everyone.
Record an application window in OBS Studio
    Before starting, first click on File → Settings from OBS Studio main menu. Here, in the Settings window, go to the Video section and note the Canvas resolution and Output scale resolution for your system.
Note Canvas and Output Scale values
This will be helpful when you are reverting the settings in a later step.
    Step 1: Create a new source
First, let's create a new source for our recording. Click on the “+” icon on the OBS Studio home screen as shown in the screenshot below. Select the “Screen Capture (Pipewire)” option.
📋For X11 systems, this may be Display Capture (XSHM).
Click on "+" to add a new source
On the resulting window, give a name to the source and then click OK.
Give a name to the source
Once you press OK, you will be shown a dialog box to select the record area.
    Step 2: Select the window to record
    Here, select the Window option from the top bar.
Select the window to be recorded.
Once you click on the Window option, you will be able to see all the open windows listed. Select the window that you want to record from the list, as shown in the screenshot above.
    This will give you a dialog box, with a preview of the window being recorded.
    Enable the cursor recording (if needed) and click OK.
Selected window in preview
Step 3: Crop the video to window size
Now, in the main OBS window, you can see that the application you have selected is not filling the full canvas, in my case 1920×1080.
Empty space in canvas
If you keep recording with this setting, the output will contain this window plus the rest of the canvas in black.
    You need to crop the area so that only the necessary part is present on the output file.
For this, right-click on our source and select the Resize Output (Source Size) option, as shown below:
Resize output source size
Click on Yes, when prompted.
Accept Confirmation
As soon as you click Yes, you can see that the canvas is now reduced to the size of the window.
Canvas Resized
Step 4: Record the video
    You can now start recording the video using the Record button.
Start video recording
Once finished, stop recording, and the saved video file won't contain anything other than the window.
    Step 5: Delete the video source
    Now that you have recorded the video, let's remove this particular source.
    Right-click on the source and select Remove.
Remove the source
Step 6: Revert the canvas and output scale
While we were resizing the canvas to the window, the change was also applied to your OBS Studio video settings. If left unchanged, your future videos will also be recorded at the reduced size.
    So, click on File in the OBS Studio main menu and select Settings.
Click on File → Settings
In the Settings window, go to the Video section and revert the Base Canvas Resolution and Output Scaled Resolution to your preferred normal values. Then click Apply.
Revert Canvas Size to normal
Record an area on the screen in OBS Studio
    This is the same process as the one described above, except for the area selection.
    Step 1: Create a new source
    Click on the plus button on the Sources section in OBS Studio and select Screen Capture.
Select Screen Capture
Name the source and click OK.
    Step 2: Select a region
On the area selection dialog box, click on Region. From that section, select the Select Region option.
Select Region
Notice the cursor has now changed to a plus sign. Drag over the area you want to record.
Select Area to Record
You can see that the preview now shows the selected area. Don't forget to enable the cursor, if needed.
    It is normal that the canvas is way too big and your video occupies only a part of it.
Canvas Size Mismatch
Step 3: Resize the source
Like in the previous section, right-click on the source and select the Resize output option.
Resize Output to Area Capture
Step 4: Record and revert the settings
    Start recording the video. Once it is completed, save the recording and remove the source. Revert the canvas and output scale settings, as shown in step 6 of the previous section.
    💬 Hope this guide has helped you record with OBS Studio. Please let me know if this tutorial helped you or if you need further help.
  23. Whereis Command Examples

    by: Abhishek Prakash
    Wed, 05 Mar 2025 20:45:06 +0530

The whereis command helps users locate the binary, source, and manual page files for a given command. In this tutorial, I will walk you through practical examples to help you understand how to use the whereis command.
    Unlike other search commands like find that scan the entire file system, whereis searches predefined directories, making it faster and more efficient.
    It is particularly useful for system administrators and developers to locate files related to commands without requiring root privileges.
    whereis Command Syntax
To use any command to its maximum potential, it is important to know its syntax, and that is why I'm starting this tutorial by introducing the syntax of the whereis command:
whereis [OPTIONS] FILE_NAME...
Here,
OPTIONS: Flags that modify the search behavior.
FILE_NAME: The name of the file or command to locate.
Now, let's take a look at the available options of the whereis command:
-b: Search only for binary files.
-s: Search only for source files.
-m: Search only for manual pages.
-u: Search for unusual files (files missing one or more of binary, source, or manual).
-B: Specify directories to search for binary files (must be followed by -f).
-S: Specify directories to search for source files (must be followed by -f).
-M: Specify directories to search for manual pages (must be followed by -f).
-f: Terminate directory lists provided with -B, -S, or -M, signaling the start of file names.
-l: Display directories that are searched by default.
1. Locate all files related to a command
    To find all files (binary, source, and manual) related to a command, all you have to do is append the command name to the whereis command as shown here:
whereis command
For example, if I want to locate all files related to bash, then I would use the following:
whereis bash
Here,
/usr/bin/bash: Path to the binary file.
/usr/share/man/man1/bash.1.gz: Path to the manual page.
2. Search for binary files only
    To locate only the binary (executable) file of a command, use the -b flag along with the target command as shown here:
whereis -b command
If I want to search for the binary file of the ls command, then I would use the following:
whereis -b ls
3. Search for the manual page only
    To locate only the manual page for a specific command, use the -m flag along with the targeted command as shown here:
whereis -m command
For example, if I want to find the manual page location for the grep command, then I would use the following:
whereis -m grep
As you can see, it gave me two locations:
/usr/share/man/man1/grep.1.gz: The manual page, which can be accessed through the man grep command.
/usr/share/info/grep.info.gz: An info page that can be accessed through the info grep command.
4. Search for source files only
    To find only source code files associated with a command, use the -s flag along with the targeted command as shown here:
    whereis -s commandFor example, if I want to search source files for the gcc, then I would use the following:
whereis -s gcc

My system is fresh and I haven't installed any packages from source, so I got a blank output.
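If you want to script this check, you can test whether whereis actually returned any paths after the command name. This is a minimal sketch of my own (not from the original tutorial), assuming a typical Linux system where whereis is available:

```shell
# whereis prints "name: path1 path2 ..."; cut off the "name:" prefix
# and see whether any path fields remain.
paths=$(whereis -s gcc | cut -d: -f2-)
if [ -z "$(echo $paths | tr -d '[:space:]')" ]; then
  echo "no source files found for gcc"
else
  echo "source files: $paths"
fi
```

On a system without gcc sources installed, this reports that no source files were found.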
    5. Specify custom directories for searching
    To limit your search to specific directories, use options like -B, -S, or -M. For example, if I want to limit my search to the /bin directory for the cp command, then I would use the following command:
whereis -b -B /bin -f cp

Here,
-b: This flag tells whereis to search only for binary files (executables), ignoring source and manual files.
-B /bin: The -B flag specifies a custom directory (/bin in this case) where whereis should look for binary files, limiting the search to /bin instead of all default directories.
-f cp: Marks the end of the directory list; without -f, the whereis command would interpret cp as another directory.

6. Identify commands missing certain files (unusual files)
    The whereis command can help you find commands that are missing one or more associated files (binary, source, or manual). This is particularly useful for troubleshooting or verifying file completeness.
For example, to search for commands in the /bin directory that are missing manual pages, first change your directory to /bin and then use the -u flag to search for unusual files:
cd /bin
whereis -u -m *

Wrapping Up...
This was a quick tutorial on how you can use the whereis command in Linux, covering its syntax and practical examples. I hope you find this guide helpful.
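To tie the flags together, here is a small helper of my own (the function name and output format are hypothetical, not part of whereis itself) that reports what whereis can find for a given command:

```shell
#!/bin/sh
# check_cmd: for one command, try the three search flags covered above
# (-b binaries, -m manual pages, -s sources) and report each result.
check_cmd() {
  for flag in b m s; do
    paths=$(whereis -"$flag" "$1" | cut -d: -f2-)
    if [ -n "$(echo $paths | tr -d '[:space:]')" ]; then
      echo "$1 -$flag:$paths"
    else
      echo "$1 -$flag: nothing found"
    fi
  done
}

check_cmd ls
```

On most systems this finds a binary and a manual page for ls, while the source search comes back empty unless you have installed sources.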
    If you have any queries or suggestions, leave us a comment.
  24. by: Preethi
    Wed, 05 Mar 2025 13:16:32 +0000

    Grouping selected items is a design choice often employed to help users quickly grasp which items are selected and unselected. For instance, checked-off items move up the list in to-do lists, allowing users to focus on the remaining tasks when they revisit the list.
    We’ll design a UI that follows a similar grouping pattern. Instead of simply rearranging the list of selected items, we’ll also lay them out horizontally using CSS Grid. This further distinguishes between the selected and unselected items.
    We’ll explore two approaches for this. One involves using auto-fill, which is suitable when the selected items don’t exceed the grid container’s boundaries, ensuring a stable layout. In contrast, CSS Grid’s span keyword provides another approach that offers greater flexibility.
    The HTML is the same for both methods:
<ul>
  <li>
    <label>
      <input type="checkbox" />
      <div class="icon">🍱</div>
      <div class="text">Bento</div>
    </label>
  </li>
  <li>
    <label>
      <input type="checkbox" />
      <div class="icon">🍡</div>
      <div class="text">Dangos</div>
    </label>
  </li>
  <!-- more list items -->
</ul>

The markup consists of an unordered list (<ul>). However, we don’t necessarily have to use <ul> and <li> elements since the layout of the items will be determined by the CSS grid properties. Note that I am using an implicit <label> around the <input> elements mostly as a way to avoid needing an extra wrapper around things, but that explicit labels are generally better supported by assistive technologies.
    Method 1: Using auto-fill
ul {
  width: 250px;
  display: grid;
  gap: 14px 10px;
  grid-template-columns: repeat(auto-fill, 40px);
  justify-content: center;
  /* etc. */
}

The <ul> element, which contains the items, has a display: grid declaration, turning it into a grid container. It also has gaps of 14px and 10px between its grid rows and columns, and its grid content is justified (inline alignment) to the center.
    The grid-template-columns property specifies how column tracks will be sized in the grid. Initially, all items will be in a single column. However, when items are selected, they will be moved to the first row, and each selected item will be in its own column. The key part of this declaration is the auto-fill value.
The auto-fill value is added where the repeat count goes in the repeat() function. It makes the grid create as many column tracks of the given size (40px in our example) as will fit inside the grid container’s boundaries.
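To make that track math concrete, here is a short JavaScript sketch. The function is my own illustration of the arithmetic, not a browser API:

```javascript
// n tracks of size t with gaps g occupy n*t + (n-1)*g pixels, so the
// largest n that fits in width w is floor((w + g) / (t + g)).
function autoFillTrackCount(containerWidth, trackSize, columnGap) {
  return Math.floor((containerWidth + columnGap) / (trackSize + columnGap));
}

// Our example: a 250px-wide <ul>, 40px tracks, 10px column gap.
console.log(autoFillTrackCount(250, 40, 10)); // 5 tracks
```

So at most five selected items can sit side by side in the first row before the layout would overflow.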
    For now, let’s make sure that the list items are positioned in a single column:
li {
  width: inherit;
  grid-column: 1;
  /* Equivalent to:
     grid-column-start: 1;
     grid-column-end: auto; */
  /* etc. */
}

When an item is checked, that is, when an <li> element :has() a :checked checkbox, we select it. The <li> is then given a grid-area that puts it in the first row, and its column is auto-placed within that row as per the grid-template-columns value of the grid container (<ul>). This causes the selected items to group at the top of the list and be arranged horizontally:
li {
  width: inherit;
  grid-column: 1;
  /* etc. */

  &:has(:checked) {
    grid-area: 1;
    /* Equivalent to:
       grid-row-start: 1;
       grid-column-start: auto;
       grid-row-end: auto;
       grid-column-end: auto; */
    width: 40px;
    /* etc. */
  }

  /* etc. */
}

And that gives us our final result! Let’s compare that with the second method I want to show you.
    Method 2: Using the span keyword
We won’t be needing the grid-template-columns property now. Here’s the new <ul> style ruleset:
ul {
  width: 250px;
  display: grid;
  gap: 14px 10px;
  justify-content: center;
  justify-items: center;
  /* etc. */
}

The inclusion of justify-items will help with the alignment of grid items, as we’ll see in a moment. Here are the updated styles for the <li> element:
li {
  width: inherit;
  grid-column: 1 / span 6;
  /* Equivalent to:
     grid-column-start: 1;
     grid-column-end: span 6; */
  /* etc. */
}

As before, each item is placed in the first column, but now it also spans six column tracks (since there are six items). This ensures that when multiple columns appear in the grid as items are selected, the following unselected items remain in a single column under the selected items, because they now span across all of the column tracks. The justify-items: center declaration keeps the items aligned to the center.
li {
  width: inherit;
  grid-column: 1 / span 6;
  /* etc. */

  &:has(:checked) {
    grid-area: 1;
    width: 120px;
    /* etc. */
  }

  /* etc. */
}

The width of the selected items has been increased from the previous example, so the layout of the selection UI can be seen when the selected items overflow the container.
    Selection order
    The order of selected and unselected items will remain the same as the source order. If the on-screen order needs to match the user’s selection, dynamically assign an incremental order value to the items as they are selected.
onload = () => {
  let i = 1;
  document.querySelectorAll('input').forEach((input) => {
    input.addEventListener("click", () => {
      // The input's parent is the <label>; the label's parent is the <li>.
      input.parentElement.parentElement.style.order = input.checked ? i++ : (i--, 0);
    });
  });
};

Wrapping up
    CSS Grid helps make both approaches very flexible without a ton of configuration. By using auto-fill to place items on either axis (rows or columns), the selected items can be easily grouped within the grid container without disturbing the layout of the unselected items in the same container, for as long as the selected items don’t overflow the container.
    If they do overflow the container, using the span approach helps maintain the layout irrespective of how long the group of selected items gets in a given axis. Some design alternatives for the UI are grouping the selected items at the end of the list, or swapping the horizontal and vertical structure.
    Grouping Selection List Items Together With CSS Grid originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  25. QuestionAI

    by: aiparabellum.com
    Wed, 05 Mar 2025 08:08:08 +0000

    QuestionAI represents the next evolution in question-answering technology, bridging the gap between vast information repositories and human curiosity. Unlike traditional search engines that return lists of potentially relevant links, QuestionAI directly delivers concise, accurate answers drawn from analyzing multiple sources. The platform understands natural language queries, recognizes context, and processes information much like a human researcher would—but at machine speed. Designed for professionals, researchers, students, and curious minds alike, QuestionAI serves as an intelligent assistant that not only answers questions but also helps users explore topics more deeply through related insights and explanations.
    Features
Natural Language Processing: Ask questions in everyday language just as you would to a human expert, without needing to format queries in any special way.
Contextual Understanding: The AI maintains conversation context, allowing for follow-up questions without repeating background information.
Multi-Source Analysis: Draws information from diverse and verified sources to provide comprehensive answers that consider multiple perspectives.
Citation Tracking: Clearly identifies the sources of information, allowing users to verify facts and explore topics more deeply.
Customizable Knowledge Base: Ability to connect to specific document sets or databases for specialized questioning in professional environments.
Visual Response Options: Receive responses not just as text but also as charts, graphs, and other visualizations when appropriate.
Multilingual Support: Ask questions and receive answers in multiple languages, breaking down language barriers to information.
Conversation History: Maintains a searchable log of previous questions and answers for easy reference and continued learning.
Integration Capabilities: Connects with popular productivity tools, research platforms, and learning management systems.
Privacy Controls: Robust security features ensure sensitive questions and information remain confidential.

How It Works
Question Input: Users enter their question in natural language through the clean, intuitive interface.
AI Processing: The system analyzes the question to understand intent, context, and required information.
Knowledge Base Search: QuestionAI scans its vast knowledge repositories, identifying relevant information from verified sources.
Answer Synthesis: Rather than simply retrieving information, the AI synthesizes a coherent, comprehensive answer tailored to the specific question.
Source Attribution: The system attributes information to its original sources, allowing users to verify claims and explore further.
Refinement Loop: Users can ask follow-up questions or request clarification, with the system maintaining context from the previous exchanges.
Continuous Learning: The platform improves over time, learning from interactions to provide increasingly relevant and accurate responses.

Benefits
Time Efficiency: Receive immediate answers to complex questions without sifting through multiple search results or documents.
Information Accuracy: Benefit from answers drawn from verified sources with clear citations, reducing the risk of misinformation.
Knowledge Depth: Explore topics more thoroughly with an AI that can handle increasingly specific and technical questions.
Research Enhancement: Accelerate research processes by quickly establishing foundational knowledge before diving deeper.
Learning Support: Supplement educational experiences with explanations that adapt to different knowledge levels.
Decision Support: Make more informed decisions by accessing relevant information quickly and in context.
Productivity Boost: Spend less time searching for information and more time applying insights to tasks and projects.
Linguistic Accessibility: Access information regardless of language barriers through multilingual support.
Reduced Cognitive Load: Offload the mental effort of searching and synthesizing information to an AI assistant.
Continuous Knowledge Access: Stay informed with up-to-date information through a system that regularly updates its knowledge base.

Pricing
Free Tier: Limited daily questions with basic functionality, perfect for casual users exploring the platform.
Personal Plan ($19/month): Unlimited questions, full feature access, and priority processing for individual users.
Professional Plan ($49/month): Enhanced capabilities including document uploads, team sharing features, and advanced analytics.
Enterprise Solution (Custom Pricing): Tailored implementation with private knowledge bases, enhanced security, dedicated support, and custom integrations.
Educational Discount: Special pricing for students and educational institutions with additional learning-focused features.
Annual Subscription: Save 20% when purchasing an annual subscription for any paid tier.

Review
    QuestionAI delivers on its promise to revolutionize information retrieval through artificial intelligence. During our extensive testing, the platform consistently provided relevant, accurate answers to questions ranging from straightforward facts to complex conceptual inquiries.
    The natural language processing capabilities are particularly impressive, accurately interpreting questions even when phrased conversationally or with ambiguous terms. The system rarely misunderstands intent and, when it does, the clarification process is smooth and intuitive.
    Conclusion
    QuestionAI stands at the forefront of a new paradigm in information access, where artificial intelligence serves not just as a tool for finding information but as an active partner in knowledge exploration and synthesis. By combining advanced natural language processing, comprehensive knowledge repositories, and intuitive design, it delivers on the promise of AI as an extension of human cognitive capabilities.
    Visit Website The post QuestionAI appeared first on AI Parabellum • Your Go-To AI Tools Directory for Success.
