Blog Entries posted by Blogger

1. In a lab environment, lots of new users will be using JupyterHub. The default Authenticator of JupyterHub allows only Linux system users to log in. So, if you want to create a new JupyterHub user, you have to create a new Linux user, and creating Linux users manually can be a lot of hassle. Instead, you can configure JupyterHub to use FirstUseAuthenticator. FirstUseAuthenticator, as the name says, automatically creates a new user the first time someone logs in to JupyterHub. After the user is created, the same username and password can be used to log in to JupyterHub.
    In this article, I am going to show you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I am also going to show you how to configure JupyterHub to use the FirstUseAuthenticator.
    NOTE: If you don’t have JupyterHub installed on your computer, you can read one of the articles depending on the Linux distribution you’re using:
How to Install the Latest Version of JupyterHub on Ubuntu 22.04 LTS/Debian 12/Linux Mint 21
How to Install the Latest Version of JupyterHub on Fedora 38+/RHEL 9/Rocky Linux 9
    Table of Contents:
Creating a Group for JupyterHub Users
Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment
Configuring JupyterHub FirstUseAuthenticator
Restarting the JupyterHub Service
Verifying if JupyterHub FirstUseAuthenticator is Working
Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator
Conclusion
References
    Creating a Group for JupyterHub Users:
    I want to keep all the new JupyterHub users in a Linux group jupyterhub-users for easier management.
    You can create a new Linux group jupyterhub-users with the following command:
    $ sudo groupadd jupyterhub-users  
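If you want to confirm that the group was created, you can query the group database (an optional check; the GID in the output will vary on your system):
$ getent group jupyterhub-users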
    Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment:
    If you’ve followed my JupyterHub Installation Guide to install JupyterHub on your favorite Linux distributions (Debian-based and RPM-based), you can install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment with the following command:
    $ sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub-firstuseauthenticator  
    The JupyterHub FirstUseAuthenticator should be installed on the JupyterHub virtual environment.

     
    Configuring JupyterHub FirstUseAuthenticator:
    To configure the JupyterHub FirstUseAuthenticator, open the JupyterHub configuration file jupyterhub_config.py with the nano text editor as follows:
    $ sudo nano /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py  
    Type in the following lines in the jupyterhub_config.py configuration file.
# Configure FirstUseAuthenticator for JupyterHub
from jupyterhub.auth import LocalAuthenticator
from firstuseauthenticator import FirstUseAuthenticator

LocalAuthenticator.create_system_users = True
LocalAuthenticator.add_user_cmd = ['useradd', '--create-home', '--gid', 'jupyterhub-users', '--shell', '/bin/bash']

FirstUseAuthenticator.dbm_path = '/opt/jupyterhub/etc/jupyterhub/passwords.dbm'
FirstUseAuthenticator.create_users = True

class LocalNativeAuthenticator(FirstUseAuthenticator, LocalAuthenticator):
    pass

c.JupyterHub.authenticator_class = LocalNativeAuthenticator
    Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the jupyterhub_config.py file.

     
    Restarting the JupyterHub Service:
    For the changes to take effect, restart the JupyterHub systemd service with the following command:
    $ sudo systemctl restart jupyterhub.service  
    If the JupyterHub configuration file has no errors, the JupyterHub systemd service should run just fine.
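You can check that the service came back up with a standard systemd status query:
$ sudo systemctl status jupyterhub.service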

     
    Verifying if JupyterHub FirstUseAuthenticator is Working:
    To verify whether the JupyterHub FirstUseAuthenticator is working, visit JupyterHub from your favorite web browser and try to log in as a random user with a short and easy password like 123, abc, etc.
You should see an error message saying that the password is too short and must be at least 7 characters long. This means the JupyterHub FirstUseAuthenticator is working just fine.

     
    Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator:
    To create a new JupyterHub user using the FirstUseAuthenticator, visit the JupyterHub login page from a web browser, type in your desired login username and the password that you want to set for the new user, and click on Sign in.

     
    A new JupyterHub user should be created and your desired password should be set for the new user.
Once the new user is created, they should be logged in to their JupyterHub account.

     
The next time you try to log in as the same user with a different password, you will see the error Invalid username or password. So, once a user is created using the FirstUseAuthenticator, only the original username and password combination will log that user in; no one else can take over the account.

     
    Conclusion:
    In this article, I have shown you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I have also shown you how to configure JupyterHub to use the FirstUseAuthenticator.
     
    References:
    firstuseauthenticator/firstuseauthenticator/firstuseauthenticator.py at main · jupyterhub/firstuseauthenticator · GitHub  
2. MySQL is a reliable and widely used DBMS that utilizes SQL and a relational model to manage data. MySQL is installed as part of LAMP in Linux, but you can install it separately. Even in Ubuntu 24.04, installing MySQL is straightforward. This guide outlines the steps to follow. Read on!
    Step-By-Step Guide to Install MySQL on Ubuntu 24.04
    If you have a user account on your Ubuntu 24.04 and have sudo privileges, installing MySQL requires you to follow the procedure below.
    Step 1: Update the System’s Repository
    When installing packages on Ubuntu, you should update the system’s repository to refresh the sources list. Doing so ensures the MySQL package you install is the latest stable version.
$ sudo apt update
Step 2: Install MySQL Server
    Once the package index updates, the next step is to install the MySQL server package using the below command.
$ sudo apt install mysql-server
After the installation, start the MySQL service on your Ubuntu 24.04.
$ sudo systemctl start mysql.service
Step 3: Configure MySQL
    Before we can start working with MySQL, we need to make a couple of configurations. First, access the MySQL shell using the command below.
$ sudo mysql
Once the shell opens up, set a password for the 'root' user using the below syntax.
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';
We've also specified to use the mysql_native_password authentication method.
    Exit the MySQL shell.
exit;
Step 4: Run the MySQL Script
    One interesting feature of MySQL is that it offers a script that you should run to quickly set it up. The script prompts you to specify different settings based on your preference. For example, you will be prompted to set a password for the root user. Go through each prompt and respond accordingly.
$ sudo mysql_secure_installation
Step 5: Modify the Authentication Method
    After successfully running the MySQL installation script, you should change the authentication method and set it to use the auth_socket plugin.
    Start by accessing your MySQL shell using the root account.
$ mysql -u root -p
Once logged in, run the below command to modify the authentication plugin.
ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;
Step 6: Create a MySQL User
So far, we can only access MySQL using the root account. We should create a new user and specify what privileges they should have. When creating a new user, you must add their username and the login password using the syntax below.
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
Now that the user is created, we need to specify what privileges the user has when using MySQL. For instance, you can give them privileges, such as CREATE, ALTER, etc., on a specific database or on all databases.
    Here’s an example where we’ve specified a few privileges to the added user on all available databases. Feel free to specify whichever privileges are ideal for your user.
GRANT CREATE, ALTER, INSERT, UPDATE, SELECT ON *.* TO 'username'@'localhost' WITH GRANT OPTION;
For the new user and the privileges to apply, flush the privileges and exit MySQL.
FLUSH PRIVILEGES;
Step 7: Confirm the Created User
    As the last step, we should verify that our user can access the database and has the specified privileges. Start by checking the MySQL service to ensure it is running.
$ sudo systemctl status mysql
Next, access MySQL using the credentials of the user you added in the previous step.
$ mysql -u username -p
A successful login confirms that you've successfully installed MySQL, configured it, and added a new user.
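If you also want to double-check the privileges from the MySQL shell, you can list the grants for the new account (a quick check; replace username with the name you actually created):
SHOW GRANTS FOR 'username'@'localhost';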
    Conclusion
    MySQL is a relational DBMS widely used for various purposes. It supports SQL in managing data, and this post discusses all the steps you should follow to install it on Ubuntu 24.04. Hopefully, you’ve installed MySQL on your Ubuntu 24.04 with the help of the covered steps.
  3. Task Manager is an app on the Windows 10/11 operating system that is used to monitor the running apps and services of your Windows 10/11 operating system. The Task Manager app is also used for monitoring the CPU, memory, disk, network, GPU, and other hardware usage information.

    In this article, I am going to show you 6 different ways of opening the Task Manager app on Windows 10/11.
     
    Table of Contents:
Opening the Task Manager App from the Start Menu
Opening the Task Manager App from the Windows Taskbar
Opening the Task Manager App from the Run Window
Opening the Task Manager App from the Command Prompt/Terminal
Opening the Task Manager App from the Windows Logon Menu
Opening the Task Manager App Using the Keyboard Shortcut
    1. Opening the Task Manager App from the Start Menu
Search for the term app:task in the Start Menu and click on the Task Manager app in the search results.
    The Task Manager app should be opened.

     
    2. Opening the Task Manager App from the Windows Taskbar
    Right-click (RMB) on an empty location of the Windows taskbar and click on Task Manager.
    The Task Manager app should be opened.

     
    3. Opening the Task Manager App from Run Window
To open the Run window, press <Windows> + R.
In the Run window, type in taskmgr in the Open field and click on OK.
    The Task Manager app should be opened.

     
    4. Opening the Task Manager App from the Command Prompt/Terminal
    To open the Terminal app, right-click (RMB) on the Start Menu and click on Terminal.

     
    The Terminal app should be opened.
    Type in the command taskmgr and press <Enter>. The Task Manager app should be opened.

     
    5. Opening the Task Manager App from the Windows Logon Menu
    To open the Windows logon menu, press <Ctrl> + <Alt> + <Delete>.
    From the Windows logon menu, click on Task Manager. The Task Manager app should be opened.

     
    6. Opening the Task Manager app Using the Keyboard Shortcut
The Windows 10/11 Task Manager app can be opened with the keyboard shortcut <Ctrl> + <Shift> + <Escape>.
     
    Conclusion:
    In this article, I have shown you how to open the Task Manager app on Windows 10/11 in 6 different ways. Feel free to use the method you like the best.
4. The Python and R programming languages rely on Anaconda as their package and environment manager. With Anaconda, you get tons of the packages necessary for data science, machine learning, or other computational tasks. To utilize Anaconda on Ubuntu 24.04, install the conda utility for your Python flavor. This post shares the steps for installing conda for Python 3; we will install version 2024.2-1. Read on!
How to Install conda on Ubuntu 24.04
Anaconda is an open-source platform; by installing conda, you gain access to it and can use it for scientific computational tasks such as machine learning. The beauty of Anaconda lies in its numerous scientific packages, ensuring you can freely use it for your project needs.
    Installing conda on Ubuntu 24.04 follows a series of steps, which we’ve discussed in detail.
    Step 1: Downloading the Anaconda Installer
    When installing Anaconda, you should check and use the latest version of the installer script. You can access all the latest Anaconda3 installer scripts from the Anaconda Downloads Page.
    As of writing this post, we have version 2024.2-1 as the latest version, and we can go ahead and download it using curl.
$ curl https://repo.anaconda.com/archive/Anaconda3-2024.2-1-Linux-x86_64.sh --output anaconda.sh
Ensure you change the version when using the above command. Also, navigate to where you want the installer script to be saved. In the above command, we've specified to save the installer as anaconda.sh, but you can use any preferred name.
    The installer script is large and will take some time, depending on your network’s performance. Once the download is completed, verify the file is available using the ls command. Another crucial thing is to check the integrity of the installer script.
    To do so, we’ve used the SHA-256 checksum by running the below command.
$ sha256sum anaconda.sh
Once you get the output, confirm that it matches the corresponding Anaconda3 hash on the website. Once everything checks out, you can proceed with the installation.
    Step 2: Run the conda Installer Script
    Anaconda has an installer script that will take you through installing it. To run the bash script, execute the below command.
$ bash anaconda.sh
The script will trigger different prompts that will walk you through the installation. For instance, you must press the Enter key to confirm that you are okay with the installation.
    Next, a document containing the lengthy Anaconda license agreement will open.
    Please go through it, and once you reach the bottom, type yes to confirm that you agree with the license terms.
You must also specify where you want Anaconda to be installed. By default, the script selects a location in your home directory, which is okay in most instances. However, if you prefer a different location, specify it and press the Enter key again to proceed with the process.
    Conda will start installing, and the process will take a few minutes. In the end, you will get prompted to initialize Anaconda3. If you wish to initialize it later, choose ‘no.’ Otherwise, type ‘yes,’ as in our case.
    That’s it! You will get an output thanking you for installing Anaconda3. This message confirms that the conda utility was installed successfully on Ubuntu 24.04, and you now have the green light to start using it.
    Step 3: Activate the Installation and Test Anaconda3
    Start by sourcing the ~/.bashrc with the below command.
$ source ~/.bashrc
Next, restart your shell to open up in the Anaconda3 base environment.
    You can now check the installed conda version.
$ conda --version
Better yet, you can view all the available packages by listing them using the command below.
$ conda list
With that, you've installed conda on Ubuntu 24.04. You can start working on your projects and maximize the power of Anaconda3 courtesy of its multiple packages.
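As a quick sanity test, you can create, activate, and then remove a throwaway environment (a sketch, assuming you let the installer initialize conda; test-env is an arbitrary name and the Python version is just an example):
$ conda create -n test-env python=3.12
$ conda activate test-env
$ conda deactivate
$ conda remove -n test-env --all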
    Conclusion
    Anaconda is installed by installing the conda command-line utility. To install conda, you must download its installer script, execute it, go through the installation prompts, and agree to the license terms. Once you complete the process, you can use Anaconda3 for your projects and leverage all the packages it offers.
  5. Install Java on Ubuntu 24.04

Now that you have Ubuntu 24.04 installed, the remaining task is ensuring that you install all the software you need, including Java. Installing Java on Ubuntu 24.04 makes it possible to develop and run Java applications, and as a Java programmer, you will inevitably install Java on Ubuntu. Java isn't pre-installed on Ubuntu. As such, you must know what steps are required to quickly install Java before you start using it for your projects. Reading this post will arm you with a simple procedure to install Java on Ubuntu 24.04.
Java JDK vs JRE
When installing Java on Ubuntu 24.04, a common concern is understanding the difference between the JDK and the JRE and knowing which to install. Here's the thing: the Java Development Kit (JDK) comprises all the required tools to develop Java applications, including the Java compiler and debugger. Anyone looking to create Java apps must have the JDK installed.
As for the Java Runtime Environment (JRE), it is required for anyone looking to run Java applications on their system. So, if you only want to run Java applications without building them, you only need to install the JRE, not the JDK.
    As a programmer, you will likely develop and run Java applications. Therefore, you must install JDK and JRE for everything to work correctly.
    How to Install Java on Ubuntu 24.04
Installing Java only requires an internet connection. When you install the JDK, it usually installs the default JRE as well, though that's not always the case. Besides, if you want a specific version, you can specify it when running the install command.
    Here, we’ve provided the steps to follow to install Java quickly. Take a look!
    Step 1: Update Ubuntu’s Repository
    Updating the system repository ensures that the package you install is the latest stable version. The update command refreshes the sources list, and when you install Java, you will have the updated source index for the latest version.
$ sudo apt update
Step 2: Install Default JRE
    Before we can start installing Java, first verify that it isn’t already installed on your Ubuntu 24.04 by checking its version with the following command.
$ java --version
If Java is installed, you will get its version displayed in the output. Otherwise, you will get an output showing 'Java' not found.
    Otherwise, install the default JRE using the below command.
$ sudo apt install default-jre
The installation time will depend on your network's speed.
    Step 3: Install OpenJDK
    After successfully installing JRE, you are ready to install OpenJDK. Here, you can choose to install the default JDK, which will install the available version. Alternatively, you can opt to install a specific JDK version depending on your project requirements.
For instance, if we want to install OpenJDK 21, we would execute our command as follows.
$ sudo apt install openjdk-21-jdk
During the installation process, you will get prompted to confirm a few things. Press 'y' and hit the Enter key to proceed with the installation. Once the installation is complete, you will have Java installed on your Ubuntu 24.04 and ready for use.
    The last task is to verify that Java is installed. By checking the version, you will get an output showing which version is installed. If you want a different version, ensure you specify it in the previous commands, as your project requirements could be different.
$ java --version
For our case, the output shows that we've installed Java v21.0.3.
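To confirm the JDK works end to end, you can compile and run a minimal program (a sketch; Hello.java is a throwaway file created just for this test):
$ echo 'public class Hello { public static void main(String[] args) { System.out.println("Hello from Java 21"); } }' > Hello.java
$ javac Hello.java
$ java Hello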
    Conclusion
    Installing Java on Ubuntu 24.04 isn’t a complicated process. However, you must know what your project requirements are to guide which version you install. To recap, installing Java requires you to first update the repository. Next, install JRE and then specify what OpenJDK version to install. You will have managed to install Java on Ubuntu 24.04, and this post shares more details on each step.
  6. Install NPM on Ubuntu 24.04

The Node Package Manager (NPM) is a tool that allows developers to install and work with different JavaScript packages efficiently. Installing NPM involves installing Node.js, and this post shares all the insights you need to install NPM. Node.js is a suitable option for anyone looking to have a scalable backend that utilizes JavaScript. Node.js is built on Chrome's V8 JS engine, and you can easily install it on your Ubuntu 24.04 to start powering the backend functionality in your projects. We will focus on two options for installing NPM on Ubuntu 24.04.
Method 1: Install NPM on Ubuntu 24.04 via APT
You can find NPM and Node.js in the Ubuntu repository. If you don't need any specific Node.js version for your project, you can utilize this option to install NPM and Node.js with the below commands.
    First, run the update command.
$ sudo apt update
Next, source Node.js from the default repository and install it using the command below.
$ sudo apt install nodejs
At this point, you have Node.js installed, and you can verify the installed version using the command below.
$ node -v
To install NPM, run the following command.
$ sudo apt install npm
Verify that NPM is installed by checking its version.
$ npm --version
We have npm v9 for our case. You can now comfortably start working on your Node.js project, and with NPM installed, you have room to install any dependencies or packages.
    That’s the first option of installing NPM and Node.js on Ubuntu 24.04.
    Method 2: Install NPM Using NodeSource PPA
When you install the NodeSource package, it will install NPM without you needing to install it separately. This method allows you to install a specific Node.js version, provided you've correctly added the PPA by downloading it using wget or curl.
Start by visiting the Node.js project to see which version you want to install.
    Once you decide on the version, retrieve it using curl as in the following command. For our example, we’ve retrieved version 20.x.
$ curl -sL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh
The script will get saved in your current directory, and you can verify it using the ls command.
    The next step is to run the script, but before that, you can open it with a text editor to confirm that it is safe to execute.
    You can then run the script using bash with the following command.
$ sudo bash nodesource_setup.sh
The command will add the NodeSource PPA to your local package index, from which you can source and install Node.js. When the script completes executing, you will get an output confirming the PPA has been added, and it will display the command you should use to install Node.js.
    Note that before installing the Node.js package, if you have already installed it using the previous method, it’s best to uninstall it to avoid running into an error. To do so, use the below command.
$ sudo apt autoremove nodejs npm
To install the Node.js package, which will also install NPM, run the following command.
    $ sudo apt install nodejs Your system will source the package from the local package where we added the PPA. It will then proceed to install the NodeSource package version that you downloaded.
    Once the installation is complete, check its version using the following command.
$ node -v
The output will display the Node.js version you downloaded, which is v20.12.2 for our case. Still, if we check the installed NPM version, you will notice it's different from what we had earlier.
$ npm --version
The PPA installed NPM v10.5.0, which is higher than what we installed in method one earlier.
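To confirm the toolchain works, you can bootstrap a throwaway project and pull in a package (a sketch; express is just an example dependency, not a requirement):
$ mkdir npm-demo && cd npm-demo
$ npm init -y
$ npm install express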
    Conclusion
For anyone looking to use NPM and Node.js to power their backend application, this post shares two different methods for seamlessly installing NPM. This allows you to run your Node.js and install packages effectively. You can install NPM from the default Ubuntu 24.04 repository or add its PPA from the NodeSource project, which will automatically install NPM alongside Node.js.
  7. As the name suggests, grep or global regular expression print lets you search for specific text patterns within a file’s contents. Its functionalities include pattern recognition, defining case sensitivity, searching multiple files, recursive search, and many more. 
So whether you're a beginner or a system administrator, knowing how to use the grep command to locate files efficiently is good. This tutorial will explain how to use grep in Linux and discuss its different applications.
    How To Use Grep Command in Linux
    The basic function of the grep command is to search for a particular text inside a file. You can do that by entering the following command:
    grep "text_to_search" file.txt Please replace ‘text_to_search’ with the text you want to search for and ‘file.txt’ with the target file. For example, to find the string “Hello” in the file named file.txt, we will use:
    grep "Hello" file.txt
On entering the above command, grep will scan file.txt for "Hello." As a result, it shows the output of the whole line or lines containing the target text.
    If the target file is on a path different from your current directory, please mention that path along with the file name. For instance:
    grep "Hello" ~/Documents/file.txt
Here, the tilde '~' represents your home directory. The above example shows how you can search for a piece of text in a single file. However, if you want to do the same search on multiple files at once, mention them one after another in one grep command:
    grep "Hello" file.txt Linux_info.txt Password.txt
In case you're not sure about your string's case (uppercase or lowercase), perform a case-insensitive search by using the -i option:
    grep -i "hello" Intro.txt
Although the string we input was not an exact match, we received accurate results through the case-insensitive search. In case you want to invert the match and display the lines that don't contain the specific pattern, please use the -v option:
    grep -v "Hello" file.txt Linux_info.txt Password.txt
    Moreover, if you want to display the lines that start with a certain word, use the ‘^’ symbol. It serves as an anchor that specifies the beginning of the line.
    grep "^Hello" file.txt
The above commands are only useful when you know which file to search. If you don't, you can recursively search for the string inside a whole directory using the -r option. For example, let's search "Hello" inside the Documents directory:
    grep -r "Hello" ~/Documents
Furthermore, you can also count the number of lines in which the input string appears in a file through the -c option:
    grep -c "Hello" Intro.txt
Similarly, you can display the line numbers along with the matched lines with the -n option:
    grep -n "Hello" Intro.txt
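These options can also be combined. For example, the following illustrative command searches recursively, ignores case, and prints line numbers in one go:
grep -rni "hello" ~/Documents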
    A Quick Wrap-up
    Users often remember that a file used to contain a piece of text but forget the file name, which can land them in deep trouble. Hence, this tutorial was about using the grep command to search for text in a file’s contents. Furthermore, we have used different examples to demonstrate how you can tweak the grep command’s functioning with a few options. You can experiment by combining multiple options to find out what suits best according to your use case.
8. Linux works well as a multiuser operating system. Many users can access a single OS simultaneously without interfering with each other. However, if others can access your directories or files, the risk of data misuse may increase.
    Hence, from a security perspective, securing the data from others is essential. Linux has features to control access from permissions and ownership. The ownership of files, folders, or directories is categorized into three parts, which are:
User (u): This is the default owner, also called the file's creator.
Group (g): It is a collection of multiple users with the same permissions to access folders or files.
Other (o): Users not in the above two categories belong to it.
That's why Linux offers simple ways to change file permissions without hassles. So in this quick blog, we have included all the possible methods to change file permissions in Linux.
    How to Change File Permissions in Linux
In Linux, file permissions are divided into three parts:
Read (r): In this category, users can only open and read the file and can't make any changes to it.
Write (w): Users can edit, delete, and modify the file content with write permission.
Execute (x): When users have this permission, they can execute the executable script and access the file details.
Owner representation | Operator to modify permission | Permission symbol (symbolic mode) | Permission value (absolute mode)
User → u | To add, use '+' | Read → r | Read = 4
Group → g | To subtract, use '-' | Write → w | Write = 2
Other → o | To set, use '=' | Execute → x | Execute = 1
As you can see from the above table, there are two representations of permissions. You can use both of these modes (symbolic and absolute) to change file permissions using the chmod command. The chmod name refers to "change mode", which allows users to modify the access permissions of files or folders.
    Using chmod Symbolic Mode
    In this method, we use the symbol (for owner- u, g, o; for permission- r, w, x) to add, subtract, or set the permissions using the following syntax:
chmod <owner_symbol><operator><permission_symbol> <filename>
Before changing the file permission, first, we need to find the current one. For this, we use the 'ls' command.
    ls -l
    Here the permission symbols belong to the following owner:
Here the permission symbols mean the following:
'-': shows the file type.
'rw-': shows the permission of the user (read and write).
'rw-': shows the permission of the group (read and write).
'r--': shows the permission of others (read).
In the example above, the user has read and write permission, the group has read and write permission, and others have only read permission. So here, we are going to add execute permission for others. For this, use the following command:
    chmod o+x os.txt
As you can see, the execute permission has been added to the other category. You can also change multiple permissions for different owners at once. Following the above example, again, we change the permissions: here, we add execute permission to the user, remove write permission from the group, and add write permission for others. For this, we can run the below command:
chmod -v u+x,g-w,o+w os.txt
    Note: Use commas while separating owners, but do not leave space between them.
    Using chmod Absolute Mode
Similarly, you can change the permission through absolute mode. In this method, mathematical operators (+, -, =) and numbers represent the permissions, as shown in the above table. For example, suppose the current permissions of the file are as follows:

Mathematical representation of the permissions:
User: Read + Write = 4 + 2 = 6
Group: Read + Write = 4 + 2 = 6
Other: Read + Execute = 4 + 1 = 5
So the permission is represented as 665.
Now, we are going to remove read permission from the user and others, and the final calculation is:
User: 6 - 4 = 2
Group: 6 (unchanged)
Other: 5 - 4 = 1
The updated permission is represented as 261. To update the permission, use the following chmod command:
    chmod -v 261 os.txt
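You can confirm the result with ls; given mode 261, the permission string should now read -w- for the user, rw- for the group, and --x for others:
ls -l os.txt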
    Change User Ownership of the File
Apart from changing file permissions, you may also face a situation where you have to change the file's ownership. For this, the chown command ("change owner") is used.

    The file details represent the following details:
<filetype> <file_permission> <user_name> <group_name> <file_name>
So, in the above example, the owner's or user's name is 'prateek', and you can only change it to a user name that exists on your system. Before changing the username, first list all the users using the following command:
cat /etc/passwd
Or:
    awk -F ':' '{print $1}' /etc/passwd
    Now, you can change the username of your current or new file between these names. The general syntax to change the file owner is as follows:
sudo chown <new_username> <filename>
Note: Sudo permission is required in some cases.
    Based on the above result, we want to change the username from ‘prateek’ to ‘proxy.’ To do this, we run the below command in the terminal:
    sudo chown proxy os.txt
    Change Group Ownership of the File
    First, list all the groups that are present in your system using the following command:
cat /etc/group | cut -d: -f1
The 'chgrp' (change group) command changes the file's group. Here, we change the group name from 'prateek' to 'disk' using the following command:
    sudo chgrp disk os.txt
    Conclusion
    Managing file permissions is essential for access control and data security. In this guide, we focused on changing the file permissions in Linux. It has a feature through which you can control ownership (user, group, others) and permissions (read, write, execute). Users can add, subtract, or set the permissions according to their needs. Users can easily modify the file permissions through the chmod command using the symbolic and absolute methods. 
  9. Operating systems use packets for transferring the data on a network. These are small chunks of information that carry data and travel among devices. Moreover, when any network problem arises, packets aid in identifying the root cause of the underlying problem. How? By tracing the route of those packets.
    The traceroute command in Linux helps you map the path packets take while traveling to a specific destination. This further helps you troubleshoot network latency, packet loss, network hops, DNS resolution issues, slow website access, and more. So, in this blog, we will explain simple ways to use the traceroute command in Linux.
    How To Use Traceroute Command in Linux
Firstly, traceroute does not come preinstalled in many Linux distributions. However, you can install it by executing one of the below commands according to your system:
Operating System | Command
Debian/Ubuntu | sudo apt install traceroute
Fedora | sudo dnf install traceroute
Arch Linux | sudo pacman -Sy traceroute
openSUSE | sudo zypper install traceroute
After installation, you can implement the traceroute command by entering:
    traceroute <destination_IP>
    Replace <destination_IP> with the device’s IP address at the destination. Once you run the command, your system will display the list of hops with the IP address and response time. Hops are the devices that your packets go through while traveling to a specific destination. For example, let’s use the traceroute command for Google’s IP address:
    traceroute 8.8.8.8
The result shows only one hop while marking the others with an asterisk (*). This happens because the subsequent hops did not respond within the timeout period of 3 seconds. Moreover, the traceroute command, by default, uses DNS resolution to get the hostnames of hops, which slows down the process. You can omit that part and have it display only the IP addresses by using the -n option:
    traceroute -n <destination_IP>
    If you want to limit the number of hops, use the -m option along with the traceroute command:
    traceroute -m N <destination_IP>
Here, put the desired number of hops in place of N. On execution, it will return only N hops in the results. The traceroute command only displays every hop's round-trip time (RTT). However, you can get more accurate timing information with the -I option:
    traceroute -I <destination_IP>
This command sends an ICMP echo request to retrieve more accurate RTT data. For instance, you can retake the example of Google by running the command against 8.8.8.8.
    Tip: If your specified destination restricts ICMP packets, you can instead trace the UDP packets by employing the -U option:
    traceroute -U <destination_IP>
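The options shown above can be combined as well. For instance, this illustrative run skips DNS resolution and stops after 10 hops:
traceroute -n -m 10 8.8.8.8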
    In case you want to explore more options for traceroute, then please run the below command: 
    traceroute --help
    A Quick Wrap-up
Traceroute is an amazing CLI utility that you can use to diagnose network-related issues in Linux. It traces the path of packets to identify critical issues in the network. Hence, we have explained every detail about the traceroute command with the help of some examples.
10. htop is a CLI utility that shows an interactive list of running processes in real time. It is a more feature-rich and user-friendly alternative to the top command. The htop command allows you to manage system processes, monitor resources, and perform other administration tasks.
    One of the most prominent features of htop is that it shows color-coded processes, which helps you differentiate them based on resource usage. Furthermore, it lets you customize the results with its sort and filter options. So, this short tutorial is about how to use the htop command in Linux without hassles. Unlike top, the htop command is not preinstalled in most Linux systems. That’s why you must install it using the following commands:
Operating System | Command
Debian/Ubuntu | sudo apt-get install htop
Fedora | sudo dnf install htop
RHEL/CentOS | sudo yum install htop
Now, you can use the htop command, so let's start with the basics:

    htop  

    When you execute the above command, it launches the htop utility. Here, you can use the arrow keys to navigate up and down the processes. Moreover, press ‘F1’ or ‘?’ to get the help screen for additional navigation shortcuts.
    Sort Processes in htop
    In htop, you can sort the processes by CPU, memory, and other usage. Open the sorting menu by pressing F6:

    For example, select the PERCENT_CPU option and press ‘Enter.’

    As you can see in the above image, all the processes are now sorted by CPU consumption. 
    Search and Filter Processes in htop
    To search any process in htop, please go through the following steps:
    Press ‘F3’ to open the search bar.

    Similarly, press ‘F4’ to filter out the processes.
    Additional Options with htop
-d, --delay=[argument]: Sets the delay between updates, measured in tenths of a second. For instance, '--delay=100' refreshes the list every 10 seconds.

-C, --no-color: This option disables the color output, which is helpful on systems with limited terminal support for colors.

-u, --user=[username]: To display the processes for a specific user. Just replace '[username]' with the target user's name.

-p, --pid=[PID1,PID2]: Displays information for the specified process IDs. For example, let's check the details of PID 1:

    htop -p 1  

-v, --version: Prints htop version information.

-h, --help: This displays a help message with usage information.
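These flags can be combined. For example, the following illustrative invocation refreshes every 10 seconds and shows only root's processes:
htop -d 100 -u root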

    Kill a Process in htop
    If you want to kill any process, select it and press the ‘F9’ key or ‘k’ to transmit a kill signal for the selected process.
    Wrapping Up
    Htop is a powerful utility for interactively checking system processes in real time. This tutorial briefly discusses how to use the htop command. As htop is not a preinstalled utility in Linux distributions, your first step is to install it using the mentioned commands. Later, we explained how to sort, search, filter, and kill processes from the htop utility.
  11. All UNIX-based operating systems, including Linux, follow the structure that “everything is a file.” These systems treat all the regular files, directories, processes, symbolic links, and devices like external hardware as files. You can create, modify, and delete files using the commands or from the File Manager.
    Deleting files is essential when you accidentally create multiple files that become unnecessary for the system. So, in this quick blog, we will explain quick ways to delete a file in Linux with no trouble. There are a few methods of deleting the files, so let’s look at them individually with the correct examples. 
    The rm Command
You can use the rm command to delete a file from the terminal. For example, suppose you want to delete filename.txt located in the Downloads directory. First, run the below command to open the directory in the terminal:
    cd ~/Downloads
     
    Then, use the following command:
    rm filename.txt
    The rm command doesn’t display any output, but you can use the -v option to get the output:
    rm -v filename.txt
If you want to delete multiple files from the current directory, you can mention all those files in a single rm command. For example, to delete three files (file1.txt, file2.txt, and file3.txt), please run the below command:
    rm file1.txt file2.txt file3.txt
    In case you want to delete all the files with the same extension, then you can run the following command: 
    rm *.txt
    As the above image shows, we have deleted all the .txt files from the Downloads directory. Moreover, you can use multiple extensions in a single command to delete different types of files simultaneously. For example, let’s delete all the files having the .txt and the .sh extensions: 
    rm -v *.sh *.txt
    Similarly, you can empty a directory by only adding the * in the rm command: 
    rm *
    Remember, the above command deletes all files except the directories. Hence, if there is a subdirectory, then the terminal will show the following output: 

    However, you can use the -r option with the rm command to delete the subdirectories. The -r option recursively deletes the directory along with its contents:
    rm -r *
    In case you want to get the confirmation before deleting the file, please use the -i option. 
    rm -i *
    Once you run the command, the system will show a confirmation prompt, so all you have to do is press Y to delete or N to decline it. 
    From the File Manager
    We recommend deleting the file from the File Manager if you are a Linux beginner. So first open the File Manager and locate the directory: 

    Now select the file and right-click it to get the context menu.

Finally, click on the Move to Trash option or press the Delete button.
    A Quick Wrap-up
    Linux has various commands and methods to delete a file quickly. However, users must know how to delete files to maintain an organized system and minimal storage consumption. This quick tutorial explained two ways of doing so. Initially, we discussed how the rm command works, then explained briefly the step-by-step process of deleting files using the GUI.
12. The Logrotate utility simplifies the process of administering log files. It rotates and replaces log files to manage their size and organize them while maintaining the information present inside them. For example, it will maintain seven log files to keep daily records for seven days.
    While rotating the log files, Logrotate deletes irrelevant old logs, preventing them from consuming excessive disk space. It runs periodically in the background to keep your systems organized and clean. So, if you want to learn about Logrotate, this blog is for you. Here, we have included in-depth information about how to set Logrotate on Linux.
    How To Set Logrotate on Linux
Many Linux distributions have Logrotate pre-installed. However, if your system does not have Logrotate, please use the following command to install it:

    sudo apt install logrotate
Now, let's move to the configuration part. There are two kinds of logrotate configurations: global and system-specific. Open the '/etc/logrotate.conf' file using a text editor. It is Logrotate's primary configuration file, and any changes made to it will affect the whole system.

    sudo nano /etc/logrotate.conf
    This file has three key sections:
1. The rotation frequency, i.e., how often it should rotate the logs. It is set to weekly by default, but you can change it to daily or monthly.
2. The number of rotated files it should keep. Adjust the value based on how much historical data you want to retain. For instance, 'rotate 4' tells it to keep the latest four rotated log files and delete the earlier ones to free up disk space.
3. The permissions and ownership of the new log files it will create.
You can tweak these settings according to what suits your system best. For instance, to maintain weekly records for one month (28 days), you must enter:

    weekly
    rotate 4
create 0644 root root
This way, it will rotate one file weekly and keep four such files. Further, it creates a new log file for currently occurring events, giving the root user read-and-write permission and read-only access to the group and others.
If you have to monitor a specific application's logs for underlying issues, you can tailor the log rotation settings for that application by creating a separate logrotate configuration file. Let's take the example of conda. First, create its file using:

sudo nano /etc/logrotate.d/conda
In this file, add configurations specific to the conda logs:

    /var/log/conda/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
    }
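Before relying on the new file, you can ask logrotate for a dry run; the -d (debug) flag prints what would happen without actually rotating anything:
sudo logrotate -d /etc/logrotate.d/conda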
In this configuration, the compress directive compresses rotated files so they take up less space. With delaycompress, the latest rotated file is kept uncompressed, making it convenient for users to refer to.
The missingok option tells logrotate to ignore the absence of a log file and continue its operations without any error. Finally, with notifempty, logrotate won't rotate an empty log file. Logrotate should run automatically as per the default settings. However, you can confirm it using:

nano /etc/cron.daily/logrotate
A Quick Wrap-up
    Knowing the configuration process of the logrotate utility is crucial for system administrators and is also essential for disk management in Linux devices. Hence, this blog explains the approaches used to set logrotate on Linux. You can modify configurations globally and simultaneously change them for specific applications. Moreover, system-specific configurations should be used responsibly because they always override global settings.
13. Processes are the running instances of programs that consume system resources. Listing these processes helps you monitor system activity and troubleshoot issues. That's why there are multiple tools and utilities in Linux that you can use to list the currently running processes. However, many beginners don't know the exact way to list processes without errors. So, in this short article, we will explain different methods to list processes in Linux, divided into multiple parts covering the best commands for the job.
    The ps Command
    The ps, or “process status,” is the most common utility to list processes in the terminal:
    ps -e
    The -e option guides ps to show every process regardless of whether the user owns those processes. Furthermore, you can customize the ps command to produce additional details using the “aux” options:
    ps aux
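The aux output can also be sorted. For example, this illustrative pipeline lists the five most memory-hungry processes (the header line plus five entries):
ps aux --sort=-%mem | head -n 6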
    The top Command
If you want to view a real-time list of system processes, please use the top command. It continuously updates the process list as processes start and finish, providing more accurate results:
    top
The above command, on execution, shows the list of processes sorted by their CPU consumption. Moreover, you cannot interact with the terminal until you press "q" to quit the top utility.
    The pstree Command
The pstree command is very different from the above two commands because it displays the hierarchical relationship of processes in a tree-like structure. It helps you visually understand how a process starts and its connection with other active processes.
    pstree
    The Glances Tool
The Glances tool provides a brief overview of the currently running processes. However, you have to install the tool by running one of the below commands:
Operating System | Command
Debian/Ubuntu | sudo apt install glances
Fedora | sudo dnf install glances
Arch Linux | sudo pacman -Sy glances
openSUSE | sudo zypper install glances
After the successful installation, you can open Glances by running the following command:
    glances
    A Quick Summary
Knowing how to list processes can help you free up system resources and stop unneeded processes. This article covered four ways: the ps, top, and pstree commands and the Glances tool. You can choose any of them according to what suits you best. We recommend you use any of these commands carefully, or you may get errors.
  14. by: Guest Contributor
    Tue, 31 Oct 2023 00:55:00 GMT

    In the realm of database offerings, where data is the lifeblood of modern businesses, constructing resilient systems isn't just a best practice; it's a strategic imperative. Disaster recovery planning has become a cornerstone in ensuring the continuity of operations, safeguarding valuable data, and minimizing the impact of unexpected events. This article delves into the critical factors of disaster recovery planning in database services, highlighting the essential requirements and strategies to build resilient systems that can withstand the challenges of unexpected disruptions.
    Understanding the Need for Disaster Recovery Planning
    Unpredictable Nature of Disasters
    Disasters, whether natural or human-triggered, are inherently unpredictable. From earthquakes and floods to cyber attacks and hardware failures, a myriad of events can threaten the availability, integrity, and security of database systems.
    Business Continuity and Data Integrity
    Database services play a pivotal role in the daily operations of organizations. Ensuring business continuity and maintaining data integrity are paramount, as disruptions can cause financial losses, reputational damage, and operational setbacks.
    Key Principles of Disaster Recovery Planning
    Risk Assessment and Impact Analysis
    Conduct a thorough risk assessment to identify potential threats and vulnerabilities. Additionally, perform an impact analysis to understand the effects of different disaster scenarios on database services. This foundational step guides the development of a focused and effective recovery plan.
    Define Recovery Objectives
    Clearly define recovery objectives, such as Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). RTO outlines the acceptable downtime, while RPO determines the maximum acceptable data loss in the event of a disaster. These objectives serve as benchmarks for the effectiveness of the recovery plan.
    Data Backup and Redundancy
    Implement robust data backup and redundancy strategies. Regularly back up critical data and store copies in geographically diverse locations. This ensures that, in the event of a disaster, businesses can quickly restore operations using the most recent available data.
    While both terms are often used in the same conversations, this isn’t an either/or decision. Both backups and redundancy offer two distinct and equally valuable solutions to ensuring business continuity in the face of unplanned accidents, unexpected attacks, or system failures.
    Redundancy is designed to increase your operational time, boost workforce productivity, and reduce the amount of time a system is unavailable due to a failure. Backup, however, is designed to kick in when something goes wrong, allowing you to completely rebuild regardless of what caused the failure. Moreover, if you use ELT tools for regular updating of critical data across backup and redundancy systems, maintaining seamless data access and continuity will become much easier. This becomes especially important when you stream your data to databases or data warehouses through such ELT solutions as BigQuery connectors.
    In short, redundancy prevents failure while backups prevent loss. In a modern business environment that is inherently dependent on access to large volumes of data, it’s clear that operational redundancy and backups are both critical elements of an effective continuity strategy.
    Comprehensive Documentation
    Document all aspects of the disaster recovery plan comprehensively. This includes procedures for data backup, system restoration, communication protocols, and the roles and responsibilities of the recovery team. Well-documented plans facilitate a smooth and coordinated response during crises.
    Strategies for Building Resilient Systems
    Geographical Distribution and Cloud Services
    Leverage the geographical distribution capabilities of cloud services. Distributing data across multiple regions and utilizing cloud-based databases enhances redundancy and ensures data availability even if one region is impacted by a disaster.
    Redundant Infrastructure
    Implement redundant infrastructure at both the hardware and software levels. Redundant servers, storage systems, and network components can mitigate the impact of hardware failures. Additionally, consider using load balancing and failover mechanisms to distribute workloads and ensure continuous service availability.
    Regular Testing and Simulation
    Conduct regular testing and simulation exercises to validate the effectiveness of the disaster recovery plan. Simulating different disaster scenarios, such as data corruption, network failures, or system outages, helps organizations identify weaknesses and fine-tune their recovery strategies.
    Automated Monitoring and Alerts
    Implement automated monitoring tools that continuously track the health and performance of database services. Set up alerts for critical thresholds and potential issues, enabling proactive identification of anomalies and rapid response to emerging problems.
    Incident Response and Communication
    Incident Response Team
    Form an incident response team responsible for executing the disaster recovery plan. Clearly define the roles and responsibilities of team members, ensuring that each member is well-trained and familiar with their specific duties during a disaster.
    Communication Protocols
    Establish clear communication protocols for disseminating information during a disaster. Define channels, responsibilities, and escalation procedures to ensure that stakeholders, including employees, customers, and relevant authorities, are informed promptly and accurately.
    Continuous Improvement and Adaptability
    Post-Incident Review and Analysis
    Conduct post-incident reviews and analysis after each simulation or actual disaster. This retrospective examination allows organizations to identify areas for improvement, refine recovery strategies, and enhance the overall resilience of database services.
    Adaptability to Evolving Threats
    Recognize that the threat landscape is dynamic, with new risks emerging over time. Disaster recovery plans need to be adaptable and evolve alongside technological advancements and changing security threats. Regularly update and refine the plan to address new challenges effectively.
    Scaling Disaster Recovery with Business Growth
    As businesses expand, data volume grows, and infrastructure becomes more complex. Old disaster recovery strategies and plans may now fall short. It becomes essential for businesses to evaluate and improve their disaster recovery plans to adapt to growing needs. This includes scaling resources and updating recovery objectives.
    Conclusion
Building resilient systems through comprehensive disaster recovery planning is a crucial investment in the long-term success and viability of database services. By adhering to key principles, implementing strategic recovery strategies, and fostering a culture of continuous improvement, organizations can make their databases more robust against unexpected events. As the digital landscape evolves, the ability to recover quickly and efficiently from disasters will become a hallmark of organizations that prioritize data integrity, business continuity, and trust among their stakeholders.
  15. by: Guest Contributor
    Tue, 31 Oct 2023 00:55:00 GMT

    In the realm of database offerings, where data is the lifeblood of modern businesses, constructing resilient systems isn't just a best practice; it's a strategic imperative. Disaster recovery planning has become a cornerstone in ensuring the continuity of operations, safeguarding valuable data, and minimizing the impact of unexpected events. This article delves into the critical factors of disaster recovery planning in database services, highlighting the essential requirements and strategies to build resilient systems that can withstand the challenges of unexpected disruptions.
    Understanding the Need for Disaster Recovery Planning
    Unpredictable Nature of Disasters
    Disasters, whether natural or human-triggered, are inherently unpredictable. From earthquakes and floods to cyber attacks and hardware failures, a myriad of events can threaten the availability, integrity, and security of database systems.
    Business Continuity and Data Integrity
    Database services play a pivotal role in the daily operations of organizations. Ensuring business continuity and maintaining data integrity are paramount, as disruptions can cause financial losses, reputational damage, and operational setbacks.
    Key Principles of Disaster Recovery Planning
    Risk Assessment and Impact Analysis
    Conduct a thorough risk assessment to identify potential threats and vulnerabilities. Additionally, perform an impact analysis to understand the effects of different disaster scenarios on database services. This foundational step guides the development of a focused and effective recovery plan.
    Define Recovery Objectives
Clearly define recovery objectives, such as Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). RTO defines the maximum acceptable downtime, while RPO defines the maximum acceptable data loss in the event of a disaster. These objectives serve as benchmarks for the effectiveness of the recovery plan.
    Data Backup and Redundancy
    Implement robust data backup and redundancy strategies. Regularly back up critical data and store copies in geographically diverse locations. This ensures that, in the event of a disaster, businesses can quickly restore operations using the most recent available data.
While the two terms often come up in the same conversations, this isn't an either/or decision. Backups and redundancy are two distinct and equally valuable approaches to ensuring business continuity in the face of unplanned accidents, unexpected attacks, or system failures.
Redundancy is designed to increase your operational uptime, boost workforce productivity, and reduce the amount of time a system is unavailable due to a failure. Backup, however, is designed to kick in when something goes wrong, allowing you to completely rebuild regardless of what caused the failure. Moreover, if you use ELT tools to keep critical data regularly updated across backup and redundancy systems, maintaining seamless data access and continuity becomes much easier.
    In short, redundancy prevents failure while backups prevent loss. In a modern business environment that is inherently dependent on access to large volumes of data, it’s clear that operational redundancy and backups are both critical elements of an effective continuity strategy.
    Comprehensive Documentation
    Document all aspects of the disaster recovery plan comprehensively. This includes procedures for data backup, system restoration, communication protocols, and the roles and responsibilities of the recovery team. Well-documented plans facilitate a smooth and coordinated response during crises.
    Strategies for Building Resilient Systems
    Geographical Distribution and Cloud Services
    Leverage the geographical distribution capabilities of cloud services. Distributing data across multiple regions and utilizing cloud-based databases enhances redundancy and ensures data availability even if one region is impacted by a disaster.
    Redundant Infrastructure
    Implement redundant infrastructure at both the hardware and software levels. Redundant servers, storage systems, and network components can mitigate the impact of hardware failures. Additionally, consider using load balancing and failover mechanisms to distribute workloads and ensure continuous service availability.
    Regular Testing and Simulation
    Conduct regular testing and simulation exercises to validate the effectiveness of the disaster recovery plan. Simulating different disaster scenarios, such as data corruption, network failures, or system outages, helps organizations identify weaknesses and fine-tune their recovery strategies.
    Automated Monitoring and Alerts
    Implement automated monitoring tools that continuously track the health and performance of database services. Set up alerts for critical thresholds and potential issues, enabling proactive identification of anomalies and rapid response to emerging problems.
    Incident Response and Communication
    Incident Response Team
    Form an incident response team responsible for executing the disaster recovery plan. Clearly define the roles and responsibilities of team members, ensuring that each member is well-trained and familiar with their specific duties during a disaster.
    Communication Protocols
    Establish clear communication protocols for disseminating information during a disaster. Define channels, responsibilities, and escalation procedures to ensure that stakeholders, including employees, customers, and relevant authorities, are informed promptly and accurately.
    Continuous Improvement and Adaptability
    Post-Incident Review and Analysis
Conduct post-incident reviews and analysis after each simulation or actual disaster. This retrospective examination allows organizations to identify areas for improvement, refine recovery strategies, and enhance the overall resilience of database services.
    Adaptability to Evolving Threats
    Recognize that the threat landscape is dynamic, with new risks emerging over time. Disaster recovery plans need to be adaptable and evolve alongside technological advancements and changing security threats. Regularly update and refine the plan to address new challenges effectively.
Scaling Disaster Recovery with Business Growth
As businesses expand, data volumes grow and infrastructure becomes more complex, and disaster recovery strategies that once sufficed may fall short. It becomes essential for businesses to reevaluate and improve their disaster recovery plans to adapt to growing needs, including scaling resources and updating recovery objectives.
Conclusion
Building resilient systems through comprehensive disaster recovery planning is a crucial investment in the long-term success and viability of database services. By adhering to key principles, implementing strategic recovery strategies, and fostering a culture of continuous improvement, organizations can make their databases more robust against unexpected events. As the digital landscape evolves, the ability to recover quickly and efficiently from disasters will become a hallmark of organizations that prioritize data integrity, business continuity, and trust among their stakeholders.
  16. by: Scott Robinson
    Mon, 23 Oct 2023 14:12:00 GMT

    Deleting a file in Python is fairly easy to do. Let's discuss two methods to accomplish this task using different Python modules.
    Using the 'os' Module
    The os module in Python provides a method called os.remove() that can be used to delete a file. Here's a simple example:
import os

# specify the file name
file_name = "test_file.txt"

# delete the file
os.remove(file_name)

In the above example, we first import the os module. Then, we specify the name of the file to be deleted. Finally, we call os.remove() with the file name as the parameter to delete the file.
Note: The os.remove() function can only delete files, not directories. If you try to delete a directory using this function, you'll get an IsADirectoryError.
    Using the 'shutil' Module
The shutil module, short for "shell utilities", also provides a deletion method - shutil.rmtree(). But why use shutil when os can do the job? Well, shutil.rmtree() can delete a whole directory tree (i.e., a directory and all its subdirectories). Note that it works on directories, not individual files - calling it on a file raises a NotADirectoryError, so for single files stick with os.remove(). Let's see how to delete a directory tree with shutil.
import shutil

# specify the directory name
dir_name = "test_directory"

# delete the directory and everything inside it
shutil.rmtree(dir_name)

The code looks pretty similar to the os example, right? That's one of the great parts of Python's design - consistency across modules. However, remember that shutil.rmtree() is more powerful and can remove non-empty directories as well, which we'll look at more closely in a later section.
    Deleting a Folder in Python
    Moving on to the topic of directory deletion, we can again use the os and shutil modules to accomplish this task. Here we'll explore both methods.
    Using the 'os' Module
    The os module in Python provides a method called os.rmdir() that allows us to delete an empty directory. Here's how you can use it:
import os

# specify the directory you want to delete
folder_path = "/path/to/your/directory"

# delete the directory
os.rmdir(folder_path)

The os.rmdir() method only deletes empty directories. If the directory is not empty, you'll encounter an OSError: [Errno 39] Directory not empty error.
    Using the 'shutil' Module
    In case you want to delete a directory that's not empty, you can use the shutil.rmtree() method from the shutil module.
import shutil

# specify the directory you want to delete
folder_path = "/path/to/your/directory"

# delete the directory and all its contents
shutil.rmtree(folder_path)

The shutil.rmtree() method deletes a directory and all its contents, so use it cautiously!
    Wait! Always double-check the directory path before running the deletion code. You don't want to accidentally delete important files or directories!
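One way to act on that advice is to wrap the call in a small safety check that refuses to delete anything outside a directory you expect. Here is a minimal sketch; BASE_DIR and safe_rmtree are hypothetical names used for illustration:

import shutil
from pathlib import Path

# Hypothetical base directory that all deletions must stay inside
BASE_DIR = Path("/path/to/your").resolve()

def safe_rmtree(target):
    # Resolve the path and refuse to delete anything outside BASE_DIR
    target = Path(target).resolve()
    if BASE_DIR not in target.parents:
        raise ValueError(f"Refusing to delete {target}: outside {BASE_DIR}")
    shutil.rmtree(target)

safe_rmtree("/path/to/your/directory")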
    Common Errors
When dealing with file and directory operations in Python, it's common to encounter a few specific errors. Understanding these errors is important for handling them gracefully and ensuring your code continues to run smoothly.
    PermissionError: [Errno 13] Permission denied
    One common error you might encounter when trying to delete a file or folder is the PermissionError: [Errno 13] Permission denied. This error occurs when you attempt to delete a file or folder that your Python script doesn't have the necessary permissions for.
    Here's an example of what this might look like:
import os

try:
    os.remove("/root/test.txt")
except PermissionError:
    print("Permission denied")

In this example, we're trying to delete a file in the root directory, which generally requires administrative privileges. When run, this code will output Permission denied.
    To avoid this error, ensure your script has the necessary permissions to perform the operation. This might involve running your script as an administrator, or modifying the permissions of the file or folder you're trying to delete.
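For example, you could test for the needed permission up front with os.access(). This is only a sketch: on POSIX systems, deleting a file actually requires write permission on its parent directory rather than on the file itself, and a check-then-delete sequence can still race with other processes:

import os

file_name = "test.txt"

# Deleting a directory entry needs write access to the parent directory
parent_dir = os.path.dirname(os.path.abspath(file_name))

if os.access(parent_dir, os.W_OK):
    os.remove(file_name)
else:
    print("Insufficient permissions to delete", file_name)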
    FileNotFoundError: [Errno 2] No such file or directory
    Another common error is the FileNotFoundError: [Errno 2] No such file or directory. This error is thrown when you attempt to delete a file or folder that doesn't exist.
    Here's how this might look:
import os

try:
    os.remove("nonexistent_file.txt")
except FileNotFoundError:
    print("File not found")

In this example, we're trying to delete a file that doesn't exist, so Python throws a FileNotFoundError.
    To avoid this, you can check if the file or folder exists before trying to delete it, like so:
import os

if os.path.exists("test.txt"):
    os.remove("test.txt")
else:
    print("File not found")

OSError: [Errno 39] Directory not empty
    The OSError: [Errno 39] Directory not empty error occurs when you try to delete a directory that's not empty using os.rmdir().
    For instance:
import os

try:
    os.rmdir("my_directory")
except OSError:
    print("Directory not empty")

This error can be avoided by ensuring the directory is empty before trying to delete it, or by using shutil.rmtree(), which can delete a directory and all its contents:
import shutil

shutil.rmtree("my_directory")

Similar Solutions and Use-Cases
    Python's file and directory deletion capabilities can be applied in a variety of use-cases beyond simply deleting individual files or folders.
    Deleting Files with Specific Extensions
    Imagine you have a directory full of files, and you need to delete only those with a specific file extension, say .txt. Python, with its versatile libraries, can help you do this with ease. The os and glob modules are your friends here.
import os
import glob

# Specify the file extension
extension = "*.txt"

# Specify the directory
directory = "/path/to/directory/"

# Combine the directory with the extension
files = os.path.join(directory, extension)

# Loop over the matching files and delete them
for file in glob.glob(files):
    os.remove(file)

This script will delete all .txt files in the specified directory. The glob module is used to retrieve files/pathnames matching a specified pattern. Here, the pattern is all files ending with .txt.
    Deleting Empty Directories
    Have you ever found yourself with a bunch of empty directories that you want to get rid of? Python's os module can help you here as well.
import os

# Specify the directory
directory = "/path/to/directory/"

# Use listdir() to check if the directory is empty
if not os.listdir(directory):
    os.rmdir(directory)

The os.listdir(directory) function returns a list containing the names of the entries in the given directory. If the list is empty, the directory is empty, and we can safely delete it using os.rmdir(directory).
    Note: os.rmdir(directory) can only delete empty directories. If the directory is not empty, you'll get an OSError: [Errno 39] Directory not empty error.
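If you want to clean out every empty directory in a whole tree, not just one, you can combine this check with os.walk(). A minimal sketch; note that walking bottom-up (topdown=False) lets freshly emptied parents be removed too, and the top-level directory itself will be removed if it ends up empty:

import os

directory = "/path/to/directory/"

# Walk bottom-up so children are visited (and possibly removed) before parents
for current_dir, _subdirs, _files in os.walk(directory, topdown=False):
    if not os.listdir(current_dir):  # directory is empty right now
        os.rmdir(current_dir)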
  17. 398: DevOops

    by: Chris Coyier
    Thu, 26 Jan 2023 01:30:59 +0000

Stephen and I hop on the podcast to chat about some of our recent tooling, local development, and DevOps work. A little while back, we cleaned up our entire monorepo’s circular dependency problems using Madge and elbow grease. That kind of thing usually isn’t a big deal for a super mature bundler like webpack, but other bundlers might choke on it. Later, we learned that we had more dependency issues, like inter-package circular dependencies (nothing like production deployments to keep you honest), and used more tooling (shout out to npx depcheck) to clean more of it up. Workspaces in a monorepo can also paper over missing dependencies — blech.
    Another change was moving off using a .dev domain for local development, which oddly actually caused some strange and hard-to-diagnose DNS issues sometimes. We’re on .test now, which should never be a public TLD.
    Time Jumps
00:26 Dev ops spring cleaning
01:25 Local dev with .dev, wait, no, .test
06:58 Sponsor: Notion
07:54 Circular dependency
11:41 Monorepo update
13:35 Interpackage and unused packages
16:25 TypeScript
17:54 Upgrading packages
20:35 Hierarchy of packages

Sponsor: Notion
    Notion is an amazing collaborative tool that not only helps organize your company’s information but helps with project management as well. We know that all too well here at CodePen, as we use Notion for countless business tasks. Learn more and get started for free at notion.com. Take your first step toward an organized, happier team, today.
  18. by: Temani Afif
    Mon, 07 Jul 2025 12:48:29 +0000

    This is the fourth post in a series about the new CSS shape() function. So far, we’ve covered the most common commands you will use to draw various shapes, including lines, arcs, and curves. This time, I want to introduce you to two more commands: close and move. They’re fairly simple in practice, and I think you will rarely use them, but they are incredibly useful when you need them.
    Better CSS Shapes Using shape()
Lines and Arcs
More on Arcs
Curves
Close and Move (you are here!)

The close command
In the first part, we said that shape() always starts with a from command to define the first starting point, but what about the end? Shouldn't it end with a close command?
That’s true, but I never used it because I either “close” the shape myself or rely on the browser to “close” it for me. Said like that, it’s a bit confusing, so let’s take a simple example to better understand:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%)

If you try this code, you will get a triangle shape, but if you look closely, you will notice that we have only two line commands whereas, to draw a triangle, we need a total of three lines. The last line between 100% 100% and 0 0 is implicit, and that’s the part where the browser is closing the shape for me without having to explicitly use a close command.
    I could have written the following:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, close)

Or instead, define the last line by myself:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, line to 0 0)

But since the browser is able to close the shape alone, there is no need to add that last line command, nor do we need to explicitly add the close command.
    This might lead you to think that the close command is useless, right? It’s true in most cases (after all, I have written three articles about shape() without using it), but it’s important to know about it and what it does. In some particular cases, it can be useful, especially if used in the middle of a shape.
In this example, my starting point is the center and the logic of the shape is to draw four triangles. In the process, I need to get back to the center each time. So, instead of writing line to center, I simply write close and the browser will automatically get back to the initial point!
    Intuitively, we should write the following:
clip-path: shape(
  from center,
  line to 20% 0, hline by 60%, line to center,   /* triangle 1 */
  line to 100% 20%, vline by 60%, line to center, /* triangle 2 */
  line to 20% 100%, hline by 60%, line to center, /* triangle 3 */
  line to 0 20%, vline by 60%                     /* triangle 4 */
)

But we can optimize it a little and simply do this instead:
clip-path: shape(
  from center,
  line to 20% 0, hline by 60%, close,   /* triangle 1 */
  line to 100% 20%, vline by 60%, close, /* triangle 2 */
  line to 20% 100%, hline by 60%, close, /* triangle 3 */
  line to 0 20%, vline by 60%            /* triangle 4 */
)

We write less code, sure, but another important thing is that if I update the center value with another position, the close command will follow that position.
Don’t forget about this trick. It can help you optimize a lot of shapes by writing less code.
    The move command
    Let’s turn our attention to another shape() command you may rarely use, but can be incredibly useful in certain situations: the move command.
Most of the time, when we draw a shape, it's one continuous outline. But it may happen that our shape is composed of different parts that aren't linked together. In these situations, the move command is what you will need.
    Let’s take an example, similar to the previous one, but this time the triangles don’t touch each other:
Intuitively, we may think we need four separate elements, each with its own shape() definition. But that example is a single shape!
    The trick is to draw the first triangle, then “move” somewhere else to draw the next one, and so on. The move command is similar to the from command but we use it in the middle of shape().
clip-path: shape(
  from 50% 40%,
  line to 20% 0, hline by 60%, close,   /* triangle 1 */
  move to 60% 50%,
  line to 100% 20%, vline by 60%, close, /* triangle 2 */
  move to 50% 60%,
  line to 20% 100%, hline by 60%, close, /* triangle 3 */
  move to 40% 50%,
  line to 0 20%, vline by 60%            /* triangle 4 */
)

After drawing the first triangle, we “close” it and “move” to a new point to draw the next triangle. We can have multiple shapes using a single shape() definition. A more generic version looks like this:
clip-path: shape(
  from X1 Y1, ..., close,   /* shape 1 */
  move to X2 Y2, ..., close, /* shape 2 */
  ...
  move to Xn Yn, ...         /* shape N */
)

The close commands before the move commands aren’t mandatory, so the code can be simplified to this:
clip-path: shape(
  from X1 Y1, ...,   /* shape 1 */
  move to X2 Y2, ..., /* shape 2 */
  ...
  move to Xn Yn, ...  /* shape N */
)

Let’s look at a few interesting use cases where this technique can be helpful.
    Cut-out shapes
    Previously, I shared a trick on how to create cut-out shapes using clip-path: polygon(). Starting from any kind of polygon, we can easily invert it to get its cut-out version:
We can do the same using shape(). The idea is to have an intersection between the main shape and the rectangle shape that fits the element boundaries. We need two shapes, hence the need for the move command.
    The code is as follows:
.shape {
  clip-path: shape(from ...., move to 0 0, hline to 100%, vline to 100%, hline to 0);
}

You start by creating your main shape, then you “move” to 0 0 and create the rectangle shape (remember, it’s the first shape we created in the first part of this series). We can even go further and introduce a CSS variable to easily switch between the normal shape and the inverted one.
.shape {
  clip-path: shape(from .... var(--i,));
}
.invert {
  --i: ,move to 0 0, hline to 100%, vline to 100%, hline to 0;
}

By default, --i is not defined, so var(--i,) will be empty and we get the main shape. If we define the variable with the rectangle shape, we get the inverted version.
    Here is an example using a rounded hexagon shape:
In reality, the code should be as follows:
.shape {
  clip-path: shape(evenodd from .... var(--i,));
}
.invert {
  --i: ,move to 0 0, hline to 100%, vline to 100%, hline to 0;
}

Notice the evenodd I am adding at the beginning of shape(). I won’t bother you with a detailed explanation of what it does, but in some cases the inverted shape is not visible, and the fix is to add evenodd at the beginning. You can check the MDN page for more details.
Another improvement we can make is to add a variable that controls the space around the shape. Let’s suppose you want to make the hexagon shape from the previous example smaller. It’s tedious to update the code of the hexagon, but it’s easier to update the code of the rectangle shape.
.shape {
  clip-path: shape(evenodd from ... var(--i,)) content-box;
}
.invert {
  --d: 20px;
  padding: var(--d);
  --i: ,move to calc(-1*var(--d)) calc(-1*var(--d)),
    hline to calc(100% + var(--d)),
    vline to calc(100% + var(--d)),
    hline to calc(-1*var(--d));
}

We first update the reference box of the shape to be content-box. Then we add some padding, which will logically reduce the area of the shape since it will no longer include the padding (nor the border). The padding is excluded (invisible) by default, and here comes the trick: we update the rectangle shape to re-include the padding.
    That is why the --i variable is so verbose. It uses the value of the padding to extend the rectangle area and cover the whole element as if we didn’t have content-box.
Not only can you easily invert any kind of shape, but you can also control the space around it! Here is another demo using the CSS-Tricks logo to illustrate how easy the method is:
    CodePen Embed Fallback This exact same example is available in my SVG-to-CSS converter, providing you with the shape() code without having to do all of the math.
    Repetitive shapes
    Another interesting use case of the move command is when we need to repeat the same shape multiple times. Do you remember the difference between the by and the to directives? The by directive allows us to define relative coordinates considering the previous point. So, if we create our shape using only by, we can easily reuse the same code as many times as we want.
    Let’s start with a simple example of a circle shape:
clip-path: shape(from X Y, arc by 0 -50px of 1%, arc by 0 50px of 1%)

Starting from X Y, I draw a first arc moving upward by 50px, then I get back to X Y with another arc using the same offset, but downward. If you are a bit lost with the syntax, try reviewing Part 1 to refresh your memory about the arc command.
    How I drew the shape is not important. What is important is that whatever the value of X Y is, I will always get the same circle but in a different position. Do you see where I am going with this idea? If I want to add another circle, I simply repeat the same code with a different X Y.
clip-path: shape(
  from X1 Y1, arc by 0 -50px of 1%, arc by 0 50px of 1%,
  move to X2 Y2, arc by 0 -50px of 1%, arc by 0 50px of 1%
)

And since the code is the same, I can store the circle shape into a CSS variable and draw as many circles as I want:
.shape {
  --sh: ,arc by 0 -50px of 1%, arc by 0 50px of 1%;
  clip-path: shape(
    from X1 Y1 var(--sh),
    move to X2 Y2 var(--sh),
    ...
    move to Xn Yn var(--sh)
  );
}

You don’t want a circle? Easy, you can update the --sh variable with any shape you want. Here is an example with three different shapes:
And guess what? You can invert the whole thing using the cut-out technique by adding the rectangle shape at the end:
This code is a perfect example of the shape() function’s power. We don’t have any code duplication and we can simply adjust the shape with CSS variables. This is something we are unable to achieve with the path() function because it doesn’t support variables.
    Conclusion
    That’s all for this fourth installment of our series on the CSS shape() function! We didn’t make any super complex shapes, but we learned how two simple commands can open a lot of possibilities of what can be done using shape().
    Just for fun, here is one more demo recreating a classic three-dot loader using the last technique we covered. Notice how much further we could go, adding things like animation to the mix:
Better CSS Shapes Using shape()
Lines and Arcs
More on Arcs
Curves
Close and Move (you are here!)

Better CSS Shapes Using shape() — Part 4: Close and Move originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Abhishek Prakash
    Thu, 19 Jun 2025 05:21:30 GMT

    You probably have noticed a lack of new articles this week. And there is a 'good' reason for that. I have been busy with the arrival of my second child 🚼
    That is also the reason why there was a slight delay in lifetime membership activation. But it's done for all the 43 new members so far (of the set goal of 75 new lifetime members).
Things are getting back on track as the mother and baby duo have been discharged from the hospital. You should start seeing more tutorials, I promise 😸
    The 13th anniversary offer is still going on. You get the lifetime membership option with reduced pricing of $76 instead of the usual $99 along with a Linux command line eBook. If you ever wanted to support us with Plus membership but didn't like the recurring subscription, this is the best time for that 😃
Get It's FOSS Lifetime Membership

💬 Let's see what else you get in this edition:
A new Kali Linux release.
ONLYOFFICE 9 with more modern features.
Nitrux Linux offers Hyprland by default.
Linux Foundation launching a package manager.
And other Linux news, tips, and, of course, memes!

📰 Linux and Open Source News
Kali Linux 2025.2 release is packed with many visual buffs.
A cheaper variant of the Linux-powered Liberux NEXX is here.
The Linux Foundation has launched FAIR Package Manager for WordPress.
Apple has introduced an open source tool for running Linux containers on macOS.
ONLYOFFICE 9.0 release brings modern new features to the open source office suite.
With Version 9.0 Release, ONLYOFFICE Becomes an Even Better Choice for Linux Users
From AI powered OCR to form editor to more file compatibility, ONLYOFFICE is getting better with each release. (It's FOSS News, Sourav Rudra)

Nitrux has moved to Hyprland, ditching NX Desktop and KDE Plasma in the process.
Nitrux Gets Rid of Plasma & NX Desktop for Hyprland
Few Linux distributions can pull this off. (It's FOSS News, Sourav Rudra)

🧠 What We’re Thinking About
    Denmark has set out to replace Microsoft Office with LibreOffice in its Ministry of Digital Affairs.
Excellent! Denmark Set to Replace Microsoft Office with Open Source Alternative
Denmark’s Digital Ministry is replacing Microsoft services with LibreOffice and Linux. (It's FOSS News, Sourav Rudra)

🧮 Linux Tips, Tutorials and More
You can master Joplin with these handy tips.
Here are 9 major annoyances with Linux that aren't an issue anymore.
Here's a fix for the continuous buzzing noise from your desk speakers on Linux.
Using Tiling Assistant on GNOME is an easy way to speed up your workflow.
How to Use Tiling Assistant on GNOME Desktop
Wondering how to use tiling windows on GNOME? Try the tiling assistant. Here’s how it works. (It's FOSS, Sagar Sharma)

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content.
    If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus

👷 Homelab and Maker's Corner
    The SAKURA-II looks like a nice addition for the Raspberry Pi AI enthusiasts in the house.
SAKURA-II Brings Energy-Efficient Edge AI to Raspberry Pi 5
The SAKURA-II is an interesting bit of kit for the Raspberry Pi 5. (It's FOSS News, Sourav Rudra)

✨ Project Highlight
    If you ever wanted to relive classic games, then RetroArch is the way to go.
RetroArch is The Best Way to Play Classic Games on Linux
A powerful frontend for emulators that offers a clean interface and wide platform support. (It's FOSS News, Sourav Rudra)

📽️ Videos I am Creating for You
    Use terminal like a pro with these terminal shortcuts.
Subscribe to It's FOSS YouTube Channel

🧩 Quiz Time
    Do you know other shells beyond Bash? Prove it.
Guess the Shell Crossword
There is a shell, there is a way. (It's FOSS, Abhishek Prakash)

💡 Quick Handy Tip
    If you are using Vivaldi, you can rename tabs by simply double-clicking on the tab title and entering a name. Before doing that, ensure that double-click tab rename is enabled in the settings.
    Open Settings and go to the Tabs section. Here, check whether the double-click action is set to "Rename tab".
This is useful when tab names take up too much space; giving a tab a nickname makes it easy to identify.
    🤣 Meme of the Week
    It's still going strong thanks to Linux! 💪
    🗓️ Tech Trivia
    On June 14, 1822, Charles Babbage presented a paper to the Royal Astronomical Society proposing a design for a machine he called the Difference Engine, the first significant example of a mechanical computing device.
    🧑‍🤝‍🧑 FOSSverse Corner
    There is a long-running discussion surrounding the bias against Ubuntu. Do you have insights to add?
Why do people have such an unreasonable bias against Ubuntu?
I saw this post on Reddit this morning and thought I’d share. I’ve posted something similar myself. Why do people hate Ubuntu so much? : r/linux When I switched to Linux 4 years ago, I used Pop OS as my first distro. Then switched to Fedora and used it for a long time until recently I switched again. This time I finally experienced Ubuntu. I know it’s usually the first distro of most of the users, but I avoided it because I heard people badmouth it a lot for some reason and I blindly believe… (It's FOSS Community, pdecker)

❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  20. by: Sreenath
    Wed, 04 Jun 2025 12:59:54 GMT

    In an earlier article, I wrote about using plugins in Obsidian. In this one, let me share a few of my favorite plugins. I recommend them but only use the ones that fit your needs.
    Just to recall, Obsidian has two kinds of plugins:
Core plugins: Officially developed and maintained by the Obsidian team.
Community plugins: Created by users in the Obsidian community.

🚧 Note that some plugins may make your Markdown notes fully readable only in Obsidian, as they add extra features that are not available in usual Markdown. This can be a vendor lock-in. Use plugins only according to your needs.

Essential Core Plugins
    At the time of writing this article, I see 28 core plugins in my Obsidian installation.
    I have picked only a handful of them. It doesn't mean others are not good. All core plugins have some use case for a particular set of users!
Several of the plugins I discuss here are enabled by default. But these plugins have settings of their own, and I share the settings that have enhanced my note management experience in Obsidian.
✋ Non-FOSS Warning! Obsidian is not open source software, but it is loved and used by many open source developers and Linux users.

Backlinks
Backlinks are among Obsidian's greatest features. They are crucial for managing interconnected notes and data.
I know that the backlink plugin is enabled by default, but there is a useful feature you'll have to enable manually: the “Show backlinks at the bottom of notes” option.
Enable it by going to the Backlinks plugin settings.
Enable backlinks

Now, under each note, backlinks will be shown.
Backlinks in Obsidian
    It is particularly useful if you are creating new notes from a single place like Daily Notes, which is our next plugin!
    Daily Notes
Daily Notes works like diary pages: it creates a Markdown page for each day where you can write your thoughts.
    By default, you can access the daily notes from the Obsidian ribbon menu. But a more efficient way is to open daily notes whenever you open Obsidian.
    Go to the Daily Notes settings. Here, enable the "Open daily note on startup" toggle button.
Daily Notes Settings

In the screenshot above, you can see some other settings have been changed.
Date Format: How the title of the daily note appears. You can get the date format options here.
New file location: I have created a separate folder called Journals in my Obsidian vault to store all the daily notes.

Page Preview
    This is enabled by default for you. With this plugin, you can hover over a note while pressing the CTRL key to get a preview.
    You can also quickly edit the note in the preview or go to another sub-preview, etc. Very useful tool if you are deep into note interlinking.
Preview page in Obsidian
    Slash Commands
    This plugin is disabled by default. Go to the Core Plugins in Obsidian settings and enable this plugin.
Once enabled, you can press the / key while typing a note to access commands, for example, insert attachment, insert code block, etc. A simple preview is shown in the video below.
Slash command in Obsidian
Notion, Ghost, and many modern editors use this feature.
    Web Viewer
This is a cool plugin that allows you to visit web links from within Obsidian. More than that, you can save a website to your vault using this core plugin.
    It is not enabled by default, so do that first. Once enabled, click on the settings gear adjacent to the plugin to go to the plugin settings.
Web viewer settings button

Here, you can set further options like where to save the page by default, search engine, etc.
Web Viewer Settings

You can see some examples in the video below.
    Web viewer in Obsidian
    Interesting community plugins I like
    Now, let's take a look at some cool community plugins that can enhance your knowledge base, as they do for me.
    Calendar
    If you are a daily notes writer, this is a must-have plugin. Even if you are not into diary writing, it is still pretty cool to have a calendar placed on Obsidian.
Calendar View

You can visit notes of any date simply by clicking on that date. If there is no note, it will prompt you to create one!
    There are many more features that you can explore, like a meter to track how much you have written on a particular day.
💡 If you press the CTRL key and hover over a date, that day's notes will be shown in a preview.

Calendar Plugin

QuickAdd
QuickAdd is a much-needed automation tool in Obsidian. It offers features like templates, captures, macros, multis, etc., which essentially allow users to create notes quickly.
    For example, the template feature can create a note based on a given template in a specified directory. All you have to do is invoke the command.
    The screenshot below shows three templates created by me for my use cases.
Created Templates

Use the settings gear to change additional settings like where to create a note, open the note automatically, etc.
    The video below shows how it quickly creates a note on a specified location.
QuickAdd Working
    With macros, you can even assign key bindings to make your workflow even faster!
QuickAdd

Iconize
    Emojis and icons are all the rage these days. From GitHub to changelogs, you'll see them everywhere. How about adding them to Obsidian?
    Obsidian organizes notes into folders and subfolders. With the Iconize plugin, you can set icons to folders.
    Icons applied to foldersYou can add new icon packs by going to the Settings → Community plugins → Installed plugins -→Iconize -→Settings gear → Icon packs.
Icon packs added in Iconize

Right-click on a folder or file and use the Change icon option to add a new icon to that folder/file.
Iconize

Highlightr
    Remember highlighting important stuff in a book? You can do the same in your notes on Obsidian.
Highlight text in Obsidian
    It also provides different styles of highlighting, all selectable from the plugin settings.
Highlightr

Callout Manager and Callout Suggestions
    These are two different plugins which, when used together, are a great way to add callouts.
📋 If you are not aware, callout blocks can improve your notes by creating visually separated blocks for tips, warnings, etc., like this 'note' callout block I used to tell you about callouts.

By default, Obsidian has some callouts like Note, Tips, Warnings, etc.
    Callout Manager allows you to create more callout blocks. Say you want to create a new callout block called "Read Later" and assign a particular color and icon. You can do that with this plugin.
Callout blocks from Callout Manager

The Callout Suggestions plugin will help you access these defined callout blocks easily in your notes.
    You can press >! and a dropdown menu will appear asking what block to use.
Inserting Callouts in Obsidian
    PDF++
    Annotating a PDF document is a must-have feature in any PDF viewer. How about doing it in Obsidian? PDF++ is a great tool for this purpose.
    You can add your PDF notes to your vault and start annotating!
    Once the plugin is installed and enabled, make sure you have enabled the PDF editing feature.
PDF++ plugin settings

Now, you can select text and then right-click to get the annotation menu. Unlike other plugins, this has a slight learning curve and plenty of options to tweak. Use it carefully.
Annotate PDF in Obsidian

PDF++

LanguageTool Integration
    This is for those who want to create notes without grammatical errors or spelling mistakes.
    LanguageTool is a proofreading software that checks the grammar, style, and spelling in over 20 languages. With this plugin, you can get error notifications for your text in Obsidian.
    If you have a premium subscription for LanguageTool, you can use it here as well.
Spell check in Obsidian

🚧 You should disable Obsidian's built-in spell check (Settings → Editor → Behavior → Spell Check) if you want to use this plugin.

LanguageTool Integration

Tasks
    You can use Obsidian as a task/to-do manager. That's no secret.
However, Tasks is a plugin that can do a lot more than just simple to-dos. It supports scheduling tasks, recurring tasks, etc.
You can also list all tasks, today's tasks, etc. using simple Tasks-specific queries.
To create a task, press CTRL+P (this opens the command palette in Obsidian) and search for Tasks.
Using the Tasks plugin to create tasks

You can retrieve tasks as shown in the small video below:
Retrieve tasks in Obsidian
Tasks

Excalidraw
Excalidraw is a plugin to edit and view Excalidraw drawings in Obsidian. This sketching solution lets you make wonderful diagrams within Obsidian, embed drawings into your documents, and much more.
An Excalidraw drawing in Obsidian

You can find a huge list of settings for this plugin in the Obsidian settings. If you are into creative note-taking, look no further.
Excalidraw

Honorable mentions
Style Settings: Allows you to tweak several themes in Obsidian. One such theme that I am using, and which is heavily customizable, is Border.
Git: Allows you to version control your notes. You can pull changes from and push changes to GitHub, GitLab, etc.
Dataview: A live index and query engine over your personal knowledge base. You can query data from your Obsidian vault.
QuickAdd: Like a super-smart shortcut button in Obsidian that lets you quickly create new notes or add stuff to existing ones using pre-made templates and automated steps you set up.
Kanban: Creates a Markdown-based Kanban board.

There are many other plugins, enabled or disabled in a default Obsidian installation. The ones mentioned above are a few special ones. Don't forget to read the descriptions and try others too.
Now it's your turn to share your favorite Obsidian plugin in the comments.
  21. by: Sreenath
    Mon, 26 May 2025 00:50:21 GMT

    Obsidian has emerged as a powerful and flexible knowledge management tool, despite NOT being an open source product.
    Using plugins is just one of the many tips that you can follow to get the most out of Obsidian.
    However, there is a small catch when it comes to compatibility. If you have used several Obsidian-specific plugins, then your notes may not be fully compatible in other plain markdown editors.
In this article, we will take a look at plugins in Obsidian, how you can install them, and some essential plugins that can make your learning more effective.
    But first, a quick heads-up: Obsidian offers two types of plugins:
Core Plugins: These are officially developed and maintained by the Obsidian team. While limited in number, they are stable and deeply integrated.
Community Plugins: Created by users in the Obsidian community, these plugins offer a wide variety of features, although they aren’t officially supported by the core team.

🚧 Note that some plugins may make your Markdown notes fully readable only in Obsidian. This can be a vendor lock-in. Use plugins only according to your needs.

Using the core plugins
Core plugins are officially built by Obsidian and come pre-installed; you only need to enable the ones you want. Naturally, they are the safest place to start with plugins.
    Core plugins are displayed in Obsidian settings page. Click on the settings gear icon at the bottom of the Obsidian app window to go to the settings.
Click on the Settings gear

In the settings, select Core Plugins to view the core plugins.
Select Core Plugins

Most of the core plugins are enabled when you install the Obsidian app. But some plugins will be disabled by default.
There is a brief description under each plugin, so you know what it does and can enable or disable it as needed.
    Suggested Read 📖
13 Useful Tips on Organizing Notes Better With Obsidian
Utilize Obsidian knowledge tool more effectively with these helpful tips and tweaks. (It's FOSS, Sreenath)

Using the community plugins
    I’ve found that community plugins are one of the best ways to boost Obsidian’s capabilities. There’s a massive collection to choose from, and at the time of writing this, there are 2,430 community plugins available for installation.
    These plugins are built by third-party developers and go through an initial review process before being listed.
    However, since they have the same level of access as Obsidian itself, it’s important to be cautious. If privacy and security are essential for your work, I suggest doing a bit of homework before installing any plugin, just to be safe.
    Disable the restricted mode
To protect you from unofficial plugins, Obsidian starts in a restricted mode, where community plugins are disabled. To install community plugins, you need to disable the restricted mode first, just like the blocker on some Android phones that prevents app installations from unauthorized sources.
    Go to the Obsidian settings and select the Community Plugins option. Here, click on the "Turn on community plugins" button.
Turn on community plugins

This will disable the restricted mode. And, you are all set! 😄
    Install community plugins
    Once the restricted mode is disabled, you can browse for community plugins and get them installed.
Click on the Browse button

Use the Browse button to go to the plugins page, as shown in the screenshot above. You will reach the plugins store, which lists 2,000+ plugins.
    Do not worry about the numbers, just search for what you need, or browse through some suggested options, just like I did.
    Plugins StoreWhen you have spotted a plugin that matches your need, click on it. Now, to install that plugin, use the Install button.
Click on the Install button

Once installed, you can see two additional buttons called Enable and Uninstall. As the names suggest, they are for enabling or uninstalling the plugin.
Enable/Uninstall a plugin

This can be done more efficiently from the Obsidian settings. For this, go to Settings → Community plugins → Installed plugins. Here, use the toggle button to enable a plugin.
Enable Plugins in Settings

This section lists all the installed community plugins. You can enable/disable, uninstall, access plugin settings, assign a keybinding, or donate to that particular plugin.
    Manually install plugins
🚧 I do not recommend this method, since most plugins are available in the Obsidian store and have gone through an initial review.

Even though it is not recommended, if you want to install a plugin manually, for version compatibility or other personal reasons, make sure to source it from the official repositories or websites.
    If it is on GitHub, go to the release page of the plugin GitHub repository and download main.js, manifest.json, and style.css files.
Download Plugin files

Now, create a directory with the name of the project in the <Your-obsidian-vault>/.obsidian/plugins directory. Press CTRL+H to view hidden files.
Paste plugin contents

In my case, I tried Templater. Next, I transferred the downloaded files to this project directory. Now, open Obsidian, go to Settings → Community plugins, and enable the new plugin.
Enable manually installed plugin

Install beta version of plugins
    This is not for regular users, but for those who want to be testers and reviewers of beta plugins. I usually do this to test interesting things or help with the development of plugins I believe in.
    We are using the BRAT (Beta Reviewers Auto-Update Tool) to install and update beta versions of Obsidian plugins.
    First, install the BRAT plugin from the Obsidian plugins store and enable it.
Install BRAT Plugin

Now, go to the GitHub repository of the plugin you want to install the beta version of. Copy the URL of the repository.
    Select the BRAT plugin from Settings → Community plugins and click on the “Add beta plugin” button.
    Click on the "Add beta plugin" buttonHere, add the GitHub URL, select a version from the list, and click on the Add Plugin button.
Add URL and select version

You can see that the plugin has been added with BRAT. Since we selected a specific version, it is shown as frozen and will not be updated. Select Latest as the version to get updates.
Beta plugin added using BRAT

Update plugins
    To update community plugins, go to Obsidian settings and select Community plugins.
    Here, click on the Check for updates button.
    If there is an update available, it will notify you.
There is an update available for one plugin.

Click on Update All to update all the plugins that have an update available. Or, scroll down and update individual plugins by clicking on the Update button.
    Move community plugins
You can copy selected plugins, or all of them, from one vault to another to avoid installing everything from scratch.
    Go to the <your-obsidian-vault>/.obsidian/plugins directory. Now, copy directories of those plugins you want to use in another vault.
Copy those directories to the plugins directory of the other vault (or the newer vault): <your-new-vault>/.obsidian/plugins.
    If there is no plugins directory in the new vault, create one. Once you open the new vault, you will be asked to trust the plugins.
If it was you who copied all the folders and no one else was involved, click on the "Trust author and enable plugins" button.
    Or you can use the "Browse Vault in restricted mode" and then enable the plugins by going to Settings → Community plugins → Turn on Community plugins → Enable plugins.
Plugin security notification

In both cases, you don't have to install the plugin from scratch.
    Don't forget to enable the plugins through Settings → Community plugins to start using them.
    Remove a plugin
    Removing a plugin is easy. Go to the community plugins in settings and click on the delete button (bin icon) adjacent to the plugin you want to remove.
Remove a plugin

Or, if you just want to disable all community plugins, you can turn on the restricted mode. Click on the Turn on and reload button in community plugins settings.
Turn on restricted mode

Later, if you turn off the restricted mode, all the installed plugins will be enabled again. Pretty easy, right?
    Another way to remove plugins is to delete specific folders in the plugins directory, but it is unnecessary unless you are testing something specific.
🚧 Don't use this method for everything, since it is safer to do so from within Obsidian.

Go to the <your-obsidian-vault>/.obsidian/plugins directory and remove the directory that has the name of the plugin you want to remove.
    Now open Obsidian and you won't see that plugin. Voila!
    Enjoy using Obsidian
I have shared many more Obsidian tips to improve your experience with this wonderful tool.
13 Useful Tips on Organizing Notes Better With Obsidian
Utilize Obsidian knowledge tool more effectively with these helpful tips and tweaks. (It's FOSS, Sreenath)

Plugins are just one part of going beyond the obvious, default Obsidian offering. I hope you found this tutorial helpful. Enjoy.
  22. by: LHB Community
    Mon, 12 May 2025 10:43:21 +0530

    Automating tasks is great, but what's even better is knowing when they're done or if they've gotten derailed.
    Slack is a popular messaging tool used by many techies. And it supports bots that you can configure to get automatic alerts about things you care about.
    Web server is down? Get an alert. Shell script completes running? Get an alert.
Yes, that can be done too. By adding Slack notifications to your shell scripts, you can share script outcomes with your team effortlessly, respond quickly to issues, and stay in the loop without manually checking logs.
🚧 I am assuming you already use Slack and have a fair idea of what a Slack bot is. Of course, you should also have at least basic knowledge of Bash scripting.

The Secret Sauce: curl and Webhooks
    The magic behind delivering Slack notifications from shell scripts is Slack's Incoming Webhooks and the curl command line tool.
    Basically, everything is already there for you to use, it just needs some setup for connections. I found it pretty easy, and I'm sure you will too.
Here is what the webhook and the command-line tool are for:
Incoming Webhooks: Slack allows you to create unique webhook URLs for your workspace that serve as endpoints for sending HTTP POST requests containing messages.
curl: This powerful command-line tool is great for making HTTP requests. We'll use it to send message-containing JSON payloads to Slack webhook URLs.

Enabling webhooks on Slack side
Create a Slack account (if you don't have one already) and (optionally) create a Slack workspace for testing.
Go to api.slack.com/apps and create a new app.
Open the application and, under the “Features” section, click on “Incoming Webhooks” and “Activate Incoming Webhooks”.
Under the same section, scroll to the bottom. You’ll find a button “Add New Webhook to Workspace”. Click on it and add the channel.
Test the sample curl request.

Important: The sample curl command you see there also contains the webhook URL. Notice the https://hooks.slack.com/services/xxxxxxxxxxxxx part? Note it down.
    Sending Slack notifications from shell scripts
Set the SLACK_WEBHOOK_URL environment variable in your .bashrc file. Something like this, using the webhook URL you got from Slack in the previous step:

export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/xxxxxxxxxxxxx"

Create a new file, notify_slack.sh, under your preferred directory location.
# Usage: notify_slack "text message"
# Requires: SLACK_WEBHOOK_URL environment variable to be set
notify_slack() {
    local text="$1"
    curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"$text\"}" \
        "$SLACK_WEBHOOK_URL"
}

Now, you can simply source this bash script wherever you need to notify Slack. I created a simple script to check disk usage and CPU load.
source ~/Documents/notify_slack.sh

# Get disk usage of the root partition
disk_usage=$(df -h / | awk 'NR==2 {print $5}')

# Get CPU load average
cpu_load=$(uptime | awk -F'load average:' '{ print $2 }' | cut -d',' -f1 | xargs)

hostname=$(hostname)

message="*System Status Report - $hostname*\n* Disk Usage (/): $disk_usage\n* CPU Load (1 min): $cpu_load"

# Send the notification
notify_slack "$message"

Running this script will post a new message on the Slack channel associated with the webhook.
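As an aside, if one of your automations lives in Python rather than a shell script, the same webhook accepts an identical JSON POST from the standard library. This is just a sketch of that idea, with a made-up message text, assuming SLACK_WEBHOOK_URL is exported as above:

import json
import os
import urllib.request

webhook_url = os.environ["SLACK_WEBHOOK_URL"]

# Same JSON payload the curl command sends
payload = json.dumps({"text": "Backup job finished"}).encode("utf-8")

request = urllib.request.Request(
    webhook_url,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # Slack replies with HTTP 200 and the body "ok" on success
    print(response.status)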
    Best Practices 
It is crucial to think about security and limitations when you integrate systems, no matter how insignificant the integration seems. To avoid common pitfalls, I recommend these tips:
Avoid hard-coding the webhook URL in publicly shared scripts. Consider using environment variables or configuration files instead.
Be aware of Slack's rate limits for incoming webhooks, especially if your scripts may trigger notifications frequently. You may want to send notifications only in certain circumstances (for example, only on failure or only for critical scripts).

Conclusion
What I shared here was just a simple example. You can bring cron into the mix and periodically send server stats to Slack, or add some logic to get notified only when disk usage crosses a certain threshold.
    There can be many more use cases and it is really up to you how you go about using it. With the power of Incoming Webhooks and curl, you can easily deliver valuable information directly to your team's communication center. Happy scripting!
    Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
  23. by: Abhishek Prakash
    Thu, 17 Apr 2025 06:27:20 GMT

It's release week. Fedora 42 is already out, and Ubuntu 25.04 will be released later today along with its flavors like Kubuntu, Xubuntu, and Lubuntu.
    In the midst of these two heavyweights, MX Linux and Manjaro also quickly released newer versions. For Manjaro, it is more of an ISO refresh, as it is a rolling release distribution.
    Overall, a happening week for Linux lovers 🕺
    💬 Let's see what else you get in this edition
Arco Linux bids farewell.
Systemd working on its own Linux distro.
Looking at the origin of UNIX.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by Aiven.

❇️ Aiven for ClickHouse® - The Fastest Open Source Analytics Database, Fully Managed
    ClickHouse processes analytical queries 100-1000x faster than traditional row-oriented systems. Aiven for ClickHouse® gives you the lightning-fast performance of ClickHouse–without the infrastructure overhead.
    Just a few clicks is all it takes to get your fully managed ClickHouse clusters up and running in minutes. With seamless vertical and horizontal scaling, automated backups, easy integrations, and zero-downtime updates, you can prioritize insights–and let Aiven handle the infrastructure.
Managed ClickHouse database | Aiven
Aiven for ClickHouse® – fully managed, maintenance-free data warehouse ✓ All-in-one open source cloud data platform ✓ Try it for free

📰 Linux and Open Source News
The Arch-based ArcoLinux has been discontinued.
Fedora 42 has been released with some rather interesting changes.
Manjaro 25.0 'Zetar' is here, offering a fresh image for new installations.
ParticleOS is Systemd's attempt at a Linux distribution.
ParticleOS: Systemd's Very Own Linux Distro in Making - A Linux distro from systemd? Sounds interesting, right? (It's FOSS News, Sourav Rudra)

🧠 What We're Thinking About
    Linus Torvalds was told that Git is more popular than Linux.
Git is More Popular than Linux: Torvalds - Linus Torvalds reflects on 20 years of Git. (It's FOSS News, Sourav Rudra)

🧮 Linux Tips, Tutorials and More
11 vibe coding tools to 10x your dev workflow.
Adding comments in bash scripts.
Understand the difference between PipeWire and PulseAudio.
Make your Logseq notes more readable by formatting them. That's from a new series focusing on Logseq.
From UNIX to today's tech: learn how it shaped the digital world.
Desktop Linux is mostly neglected by the industry but loved by the community.

For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus

👷 Homelab and Maker's Corner
    These 28 cool Raspberry Pi Zero W projects will keep you busy.
28 Super Cool Raspberry Pi Zero W Project Ideas - Wondering what to do with your Raspberry Pi Zero W? Here are a bunch of project ideas you can spend some time on and satisfy your DIY craving. (It's FOSS, Chinmay)

✨ Apps Highlight
    You can download YouTube videos using Seal on Android.
Seal: A Nifty Open Source Android App to Download YouTube Video and Audio - Download YouTube video/music (for educational purpose or with consent) with this little, handy Android app. (It's FOSS News, Sourav Rudra)

📽️ Videos I am Creating for You
    See the new features of Ubuntu 25.04 in action in this video.
Subscribe to It's FOSS YouTube Channel

🧩 Quiz Time
    Our Guess the Desktop Environment Crossword will test your knowledge.
Guess the Desktop Environment: Crossword - Test your desktop Linux knowledge with this simple crossword puzzle. Can you solve it all correctly? (It's FOSS, Abhishek Prakash)

Alternatively, can you guess all of these open source privacy tools correctly?
Know The Best Open-Source Privacy Tools - Do you utilize open-source tools for privacy? (It's FOSS, Ankush Das)

💡 Quick Handy Tip
You can make Thunar open a new tab instead of a new window. This is handy when opening a folder from another app, like a web browser, and it reduces screen clutter.
First, click on Edit ⇾ Preferences and go to the Behavior tab. Then, under "Tabs and Windows", enable the first checkbox, or all three if you need the functionality of the other two.
    🤣 Meme of the Week
    We are generally a peaceful bunch, for the most part. 🫣
    🗓️ Tech Trivia
    On April 16, 1959, John McCarthy publicly introduced LISP, a programming language for AI that emphasized symbolic computation. This language remains influential in AI research today.
    🧑‍🤝‍🧑 FOSSverse Corner
FOSSers are discussing VoIP. Do you have any insights to add?
A discussion over Voice Over Internet Protocol (VoIP): "I live in a holiday village where we have several different committees and meetings. For those not able to attend the meetings in person, we do video conferences using VoIP. A few years back the preferred system was Skype; we changed to WhatsApp last year as we tend to use its messaging facilities and it's free. We have a company who manages our accounts; they prefer using Teams, the paid-for version, as they can invoice us for its use … typical accountant. My question: does it make any difference in band w…" (It's FOSS Community, callpaul.eu (Paul))

❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  24. by: Juan Diego Rodríguez
    Wed, 12 Feb 2025 14:15:28 +0000

We’ve been able to get the length of the viewport in CSS since… checks notes… 2013! Surprisingly, that was more than a decade ago. Getting the viewport width is as easy these days as writing 100vw, but what does that translate to, say, in pixels? What about the other properties, like those that take a percentage, an angle, or an integer?
    Think about changing an element’s opacity, rotating it, or setting an animation progress based on the screen size. We would first need the viewport as an integer — which isn’t currently possible in CSS, right?
What I am about to say isn't a groundbreaking discovery; it was first described brilliantly by Jane Ori in 2023. In short, we can use a weird hack (or feature) involving the tan() and atan2() trigonometric functions to typecast a length (such as the viewport) to an integer. This opens many new layout possibilities, but my first experience was while writing an Almanac entry in which I just wanted to make an image's opacity responsive.
    Resize the CodePen and the image will get more transparent as the screen size gets smaller, of course with some boundaries, so it doesn’t become invisible:
CodePen Embed Fallback

This is the simplest we can do, but there is a lot more. Take, for example, this demo I did trying to combine many viewport-related effects. Resize the demo and the page feels alive: objects move, the background changes and the text smoothly wraps in place.
CodePen Embed Fallback

I think it's really cool, but I am no designer, so that's the best my brain could come up with. Still, it may be too much for an introduction to this typecasting hack, so as a middle ground, I'll focus only on the title transition to showcase how all of it works:
CodePen Embed Fallback

Setting things up
The idea behind this is to convert 100vw to radians (a way to write angles) using atan2(), and then back to its original value using tan(), with the perk of coming out as an integer. Since atan2(100vw, 1px) returns the angle whose tangent is the ratio 100vw/1px, taking tan() of that angle hands back that ratio as a plain number: the viewport width in pixels. In theory, it can be done like this:
:root {
    --int-width: tan(atan2(100vw, 1px));
}

But! Browsers aren't too keen on this method, so a lot more wrapping is needed to make it work across all browsers. The following may seem like magic (or nonsense), so I recommend reading Jane's post to better understand it, but this way it will work in all browsers:
@property --100vw {
    syntax: "<length>";
    initial-value: 0px;
    inherits: false;
}

:root {
    --100vw: 100vw;
    --int-width: calc(10000 * tan(atan2(var(--100vw), 10000px)));
}

Don't worry too much about it. What's important is our precious --int-width variable, which holds the viewport size as an integer!
CodePen Embed Fallback

Wideness: One number to rule them all
    Right now we have the viewport as an integer, but that’s just the first step. That integer isn’t super useful by itself. We oughta convert it to something else next since:
different properties have different units, and
we want each property to go from a start value to an end value.

Think about an image's opacity going from 0 to 1, an object rotating from 0deg to 360deg, or an element's offset-distance going from 0% to 100%. We want to interpolate between these values as --int-width gets bigger, but right now it's just an integer that usually ranges between 0 and 1600, which is inflexible and can't be easily converted to any of the end values.
    The best solution is to turn --int-width into a number that goes from 0 to 1. So, as the screen gets bigger, we can multiply it by the desired end value. Lacking a better name, I call this “0-to-1” value --wideness. If we have --wideness, all the last examples become possible:
/* If --wideness is 0.5 */
.element {
    opacity: var(--wideness); /* is 0.5 */
    rotate: calc(var(--wideness) * 360deg); /* is 180deg */
    offset-distance: calc(var(--wideness) * 100%); /* is 50% */
}

So --wideness is a value between 0 and 1 that represents how wide the screen is: 0 represents when the screen is narrow, and 1 represents when it's wide. But we still have to set what those values mean in the viewport. For example, we may want 0 to be 400px and 1 to be 1200px; our viewport transitions will run between these values. Anything below and above is clamped to 0 and 1, respectively.
    In CSS, we can write that as follows:
:root {
    /* Both bounds are unitless */
    --lower-bound: 400;
    --upper-bound: 1200;

    --wideness: calc(
        (clamp(var(--lower-bound), var(--int-width), var(--upper-bound)) - var(--lower-bound)) /
        (var(--upper-bound) - var(--lower-bound))
    );
}

Besides easy conversions, the --wideness variable lets us define the lower and upper limits in which the transition should run. And what's even better, we can set the transition zone at a middle spot so that the user can see it in its full glory. Otherwise, the screen would need to be 0px wide for --wideness to reach 0, and who knows how wide to reach 1.
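To make the mapping concrete, here is the arithmetic for a hypothetical 800px-wide viewport with the bounds above (a worked example of my own, not from the original article):

/* With --lower-bound: 400 and --upper-bound: 1200, at a viewport width of 800px:
   clamp(400, 800, 1200) = 800
   --wideness = (800 - 400) / (1200 - 400) = 400 / 800 = 0.5 */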
CodePen Embed Fallback

We got the --wideness. What's next?
For starters, the title's markup is divided into spans since there is no CSS way to select specific words in a sentence:
<h1><span>Resize</span> and <span>enjoy!</span></h1>

And since we will be doing the line wrapping ourselves, it's important to unset some defaults:
h1 {
    position: absolute; /* Keeps the text at the center */
    white-space: nowrap; /* Disables line wrapping */
}

The transition should work without the base styling, but it's just too plain-looking. The styles are in the demo below if you want to copy them onto your stylesheet:
CodePen Embed Fallback

And just as a recap, our current hack looks like this:
@property --100vw {
    syntax: "<length>";
    initial-value: 0px;
    inherits: false;
}

:root {
    --100vw: 100vw;
    --int-width: calc(10000 * tan(atan2(var(--100vw), 10000px)));

    --lower-bound: 400;
    --upper-bound: 1200;
    --wideness: calc(
        (clamp(var(--lower-bound), var(--int-width), var(--upper-bound)) - var(--lower-bound)) /
        (var(--upper-bound) - var(--lower-bound))
    );
}

OK, enough with the set-up. It's time to use our new values and make the viewport transition. We first gotta identify how the title should be rearranged for smaller screens: as you saw in the initial demo, the first span goes up and right, while the second span does the opposite and goes down and left. So, the end position for both spans translates to the following values:
h1 {
    span:nth-child(1) {
        display: inline-block; /* So transformations work */
        position: relative;
        bottom: 1.2lh;
        left: 50%;
        transform: translate(-50%);
    }

    span:nth-child(2) {
        display: inline-block; /* So transformations work */
        position: relative;
        bottom: -1.2lh;
        left: -50%;
        transform: translate(50%);
    }
}

Before going forward, notice that both formulas are basically the same, but with different signs. We can rewrite them at once by bringing in one new variable: --direction. It will be either 1 or -1 and define which direction to run the transition:
h1 {
    span {
        display: inline-block;
        position: relative;
        bottom: calc(1.2lh * var(--direction));
        left: calc(50% * var(--direction));
        transform: translate(calc(-50% * var(--direction)));
    }

    span:nth-child(1) {
        --direction: 1;
    }

    span:nth-child(2) {
        --direction: -1;
    }
}

CodePen Embed Fallback

The next step would be bringing --wideness into the formula so that the values change as the screen resizes. However, we can't just multiply everything by --wideness. Why? Let's see what happens if we do:
span {
    display: inline-block;
    position: relative;
    bottom: calc(var(--wideness) * 1.2lh * var(--direction));
    left: calc(var(--wideness) * 50% * var(--direction));
    transform: translate(calc(var(--wideness) * -50% * var(--direction)));
}

As you'll see, everything is backwards! The words wrap when the screen is too wide, and unwrap when the screen is too narrow:
CodePen Embed Fallback

Unlike our first examples, in which the transition ends as --wideness increases from 0 to 1, we want to complete the transition as --wideness decreases from 1 to 0, i.e., while the screen gets smaller, the properties need to reach their end value. This isn't a big deal, as we can rewrite our formula as a subtraction, in which the subtracted amount gets bigger as --wideness increases:
span {
    display: inline-block;
    position: relative;
    bottom: calc((1.2lh - var(--wideness) * 1.2lh) * var(--direction));
    left: calc((50% - var(--wideness) * 50%) * var(--direction));
    transform: translate(calc((-50% - var(--wideness) * -50%) * var(--direction)));
}

And now everything moves in the right direction while resizing the screen!
CodePen Embed Fallback

However, you will notice how words move in a straight line and some words overlap while resizing. We can't allow this, since a user with a specific screen size may get stuck at that point in the transition. Viewport transitions are cool, but not at the expense of ruining the experience for certain screen sizes.
    Instead of moving in a straight line, words should move in a curve such that they pass around the central word. Don’t worry, making a curve here is easier than it looks: just move the spans twice as fast in the x-axis as they do in the y-axis. This can be achieved by multiplying --wideness by 2, although we have to cap it at 1 so it doesn’t overshoot past the final value.
span {
    display: inline-block;
    position: relative;
    bottom: calc((1.2lh - var(--wideness) * 1.2lh) * var(--direction));
    left: calc((50% - min(var(--wideness) * 2, 1) * 50%) * var(--direction));
    transform: translate(calc((-50% - min(var(--wideness) * 2, 1) * -50%) * var(--direction)));
}

Look at that beautiful curve, just avoiding the central text:
CodePen Embed Fallback

This is just the beginning!
It's surprising how powerful having the viewport as an integer can be, and what's even crazier, the last example is one of the most basic transitions you could make with this typecasting hack. Once you do the initial setup, I can imagine a lot more possible transitions, and --wideness is so useful that it's like having a new CSS feature right now.
I expect to see more about "Viewport Transitions" in the future because they make websites feel genuinely "alive" rather than merely adaptive.
    Typecasting and Viewport Transitions in CSS With tan(atan2()) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
