Everything posted by Blogger

  1. By: Josh Njiruh Sat, 26 Apr 2025 15:55:04 +0000 Emojis have become an essential part of modern digital communication, adding emotion and context to our messages. While typing emojis is straightforward on mobile devices, doing so on Ubuntu and other Linux distributions can be less obvious. This guide covers multiple methods on how to type emojis in Ubuntu, from keyboard shortcuts to dedicated applications. Why Use Emojis on Ubuntu? Emojis aren’t just for casual conversations. They can enhance: Professional communications (when used appropriately) Documentation Social media posts Blog articles Desktop applications Terminal customizations Method 1: Character Map (Pre-installed) Ubuntu comes with a Character Map utility that includes emojis: Press the Super (Windows) key and search for “Character Map” Open the application In the search box, type “emoji” or browse categories Double-click an emoji to select it Click “Copy” to copy it to your clipboard Paste it where needed using Ctrl+V Pros: No installation required Cons: Slower to use for frequent emoji needs Method 2: How to Type Emojis Using Keyboard Shortcuts Ubuntu provides a built-in keyboard shortcut for emoji insertion: Press Ctrl+Shift+E or Ctrl+. (period) in most applications An emoji picker window will appear Browse or search for your desired emoji Click to insert it directly into your text Note: This shortcut works in most GTK applications (like Firefox, GNOME applications) but may not work in all software. Method 3: Emoji Selector Extension For GNOME desktop users: Open the “Software” application Search for “Extensions” Install GNOME Extensions app if not already installed Visit extensions.gnome.org in Firefox Search for “Emoji Selector” Install the extension Access emojis from the top panel Pros: Always accessible from the panel Cons: Only works in GNOME desktop environment Method 4: EmojiOne Picker A dedicated emoji application: sudo apt install emoji-picker After installation, launch it from your applications menu or by running: emoji-picker Pros: Full-featured dedicated application Cons: Requires installation Method 5: Using the Compose Key Set up a compose key to create emoji sequences: Go to Settings > Keyboard > Keyboard Shortcuts > Typing Set a Compose Key (Right Alt is common) Use combinations like: Compose + : + ) for Compose + : + ( for Pros: Works system-wide Cons: Limited emoji selection, requires memorizing combinations Method 6: Copy-Paste from the Web A simple fallback option: Visit a website like Emojipedia Browse or search for emojis Copy and paste as needed Pros: Access to all emojis with descriptions Cons: Requires internet access, less convenient Method 7: Using Terminal and Commands For terminal lovers, you can install emote : sudo snap install emote Then launch it from the terminal: emote Or set up a keyboard shortcut to launch it quickly. Method 8: IBus Emoji For those using IBus input method: Install IBus if not already installed: sudo apt install ibus Configure IBus to start at login: im-config -n ibus Log out and back in Press Ctrl+Shift+e to access the emoji picker in text fields Troubleshooting Emoji Display Issues If emojis appear as boxes or don’t display correctly: Install font support: sudo apt install fonts-noto-color-emoji Update font cache: fc-cache -f -v Log out and back in Using Emojis in Specific Applications In the Terminal Most modern terminal emulators support emoji display. Try: echo "Hello 👋 Ubuntu!" 
In LibreOffice Use the Insert > Special Character menu or the keyboard shortcuts mentioned above. In Code Editors like VS Code Most code editors support emoji input through the standard keyboard shortcuts or by copy-pasting. Summary Ubuntu offers multiple ways to type and use emojis, from built-in utilities to specialized applications. Choose the method that best fits your workflow, whether you prefer keyboard shortcuts, graphical selectors, or terminal-based solutions. By incorporating these methods into your Ubuntu usage, you can enhance your communications with the visual expressiveness that emojis provide, bringing your Linux experience closer to what you might be used to on mobile devices. More From Unixmen Similar Articles https://askubuntu.com/questions/1045915/how-to-insert-an-emoji-into-a-text-in-ubuntu-18-04-and-later/ http://www.omgubuntu.co.uk/2018/06/use-emoji-linux-ubuntu-apps The post How to Type Emojis in Ubuntu Linux appeared first on Unixmen.
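If you want the troubleshooting steps above as a single copy-paste block, here is a minimal sketch using only the commands already mentioned in this article:

sudo apt install fonts-noto-color-emoji   # install color emoji font support
fc-cache -f -v                            # rebuild the font cache
echo "Hello 👋 Ubuntu!"                    # quick test in the terminal
# Log out and back in if emojis still appear as boxes.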
  2. by: Abhishek Prakash Fri, 25 Apr 2025 21:30:04 +0530

Choosing the right tools is important for an efficient workflow. A seasoned Fullstack dev shares his favorites.

7 Utilities to Boost Development Workflow Productivity - Here are a few tools that I have discovered and use to improve my development process. (Linux Handbook, LHB Community)

Here are the highlights of this edition:
The magical CDPATH
Using host networking with Docker Compose
Docker interview questions
And more tools, tips and memes for you

This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle: PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics. Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting: Run the finest open source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
  3. by: Abhishek Prakash Fri, 25 Apr 2025 20:55:16 +0530

If you manage servers on a regular basis, you'll often find yourself entering some directories more often than others. For example, I self-host Ghost CMS to run this website. The Ghost install is located at /var/www/ghost/. I have to cd to this directory and then use its subdirectories to manage the Ghost install. If I have to enter its log directory directly, I have to type /var/www/ghost/content/log. Typing out ridiculously long paths takes several seconds, even with tab completion. Relatable?

But what if I told you there's a magical shortcut that can make those lengthy directory paths vanish like free merchandise at a tech conference? Enter CDPATH, the unsung hero of Linux navigation that, to my genuine surprise, many new Linux users are not even aware of!

What is CDPATH?

CDPATH is an environment variable that works a lot like the more familiar PATH variable (which helps your shell find executable programs). But instead of finding programs, CDPATH helps the cd command find directories. Normally, when you use cd some-dir, the shell looks for some-dir only in the current working directory. With CDPATH, you tell the shell to also look in other directories you define. If it finds the target directory there, it cds into it — no need to type full paths.

How does CDPATH work?

Imagine this directory structure:

/home/abhishek/
├── Work/
│   └── Projects/
│       └── WebApp/
├── Notes/
└── Scripts/

Let's say I often visit the WebApp directory, and for that I'll have to type the absolute path if I am at a strange location:

cd /home/abhishek/Work/Projects/WebApp

Or, since I am a bit smart, I'll use the ~ shortcut for the home directory:

cd ~/Work/Projects/WebApp

But if I add this location to the CDPATH variable:

export CDPATH=$HOME/Work/Projects

I could enter the WebApp directory from anywhere in the filesystem just by typing this:

cd WebApp

Awesome! Isn't it?

🚧 You should always add . (current directory) to CDPATH, and your CDPATH should start with it. This way, it will look for the directory in the current directory first and then in the directories you have specified in the CDPATH variable.

How to set the CDPATH variable?

Setting up CDPATH is delightfully straightforward. If you ever added anything to the PATH variable, it's pretty much the same. First, think about the frequently used directories you would want cd to search when no specific path has been provided. Let's say I want to add /home/abhishek/work and /home/abhishek/projects to CDPATH. I would use:

export CDPATH=.:/home/abhishek/work:/home/abhishek/projects

This creates a search path that includes:
The current directory (.)
My work directory
My projects directory

Which means if I type cd some_dir, it will first look if some_dir exists in the current directory. If not found, it searches the directories listed in CDPATH, in order.

🚧 The order of the directories in CDPATH matters. Let's say that both the work and projects directories have a directory named docs which is not in the current directory. If I use cd docs, it will take me to /home/abhishek/work/docs. Why? Because the work directory comes first in CDPATH.

💡 If things look fine in your testing, you should make it permanent by adding the "export CDPATH" command you used earlier to your shell profile. Whatever you exported in CDPATH will only be valid for the current session. To make the changes permanent, you should add it to your shell profile. I am assuming that you are using the bash shell. In that case, it should be ~/.profile or ~/.bash_profile.
Open this file with a text editor like Nano and add the CDPATH export command to the end.

📋 When you use the cd command with an absolute or relative path, it won't refer to CDPATH. CDPATH is more like: hey, instead of just looking into my current sub-directories, search the specified directories, too. When you already give cd the full path (absolute or relative), there is no need to search. cd knows where you want to go.

How to find the CDPATH value?

CDPATH is an environment variable. How do you print the value of an environment variable? The simplest way is to use the echo command:

echo $CDPATH

📋 If you already have tab completion set up for the cd command, it will also work for the directories listed in CDPATH.

When not to use CDPATH?

Like all powerful tools, CDPATH comes with some caveats:
Duplicate names: If you have identically named directories across your filesystem, you might not always land where you expect.
Scripts: Be cautious about using CDPATH in scripts, as it might cause unexpected behavior. Scripts generally should use absolute paths for clarity.
Demo and teaching: When working with others who aren't familiar with your CDPATH setup, your lightning-fast navigation might look like actual wizardry (which is kind of cool to be honest), but it could confuse your students.

💡 Including .. (parent directory) in your CDPATH creates a super-neat effect: you can navigate to 'sibling directories' without typing ../. If you're in /usr/bin and want to go to /usr/lib, just type cd lib.

Why aren't more sysadmins using CDPATH in 2025?

CDPATH used to be a popular tool in the 90s, I think. Ask any sysadmin older than 50 years, and CDPATH would have been in their arsenal of CLI tools. But these days, many Linux users have not even heard of the CDPATH concept. Surprising, I know.

Ever since I discovered CDPATH, I have been using it extensively, especially on the Ghost and Discourse servers I run. Saves me a few keystrokes and I am proud of those savings. By the way, if you don't mind including 'non-standard' tools in your workflow, you may also explore autojump instead of CDPATH.

GitHub - wting/autojump: A cd command that learns - easily navigate directories from the command line.

🗨️ Your turn. Were you already familiar with CDPATH? If yes, how do you use it? If not, is this something you are going to use in your workflow?
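To tie the above together, here is a minimal sketch of what the permanent setup could look like, reusing the example paths from this article (adjust them to your own directories):

# In ~/.bashrc or ~/.bash_profile
# Keep "." first so the current directory is always searched before anything else.
export CDPATH=.:$HOME/Work/Projects:/var/www/ghost/content

# After reloading the shell (source ~/.bashrc), from anywhere in the filesystem:
#   cd WebApp  ->  /home/abhishek/Work/Projects/WebApp
#   cd log     ->  /var/www/ghost/content/log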
  4. by: Ankush Das Fri, 25 Apr 2025 10:58:48 +0530

As an engineer who has been tossing around Kubernetes in a production environment for a long time, I've witnessed the evolution from manual kubectl deployment to CI/CD script automation, to today's GitOps. In retrospect, GitOps is really a leap forward in the history of K8s Ops. Nowadays, the two hottest players in GitOps tools are Argo CD and Flux CD, both of which I've used in real projects. So I'm going to talk to you from the perspective of a Kubernetes engineer who has stepped into a few pits: which one is better for you?

Why GitOps?

The essence of GitOps is simple: "Manage your Kubernetes cluster with Git, and make Git the sole source of truth."

This means:
All deployment configurations are written in Git repositories
Tools automatically detect changes and deploy updates
Git revert if something goes wrong, and everything is back to normal
More reliable auditing and security

I used to maintain a game service, and in the early days, I used scripts + CI/CD tools to do deployment. Late one night, something went wrong, and a manual error pushed an incorrect configuration into the cluster, and the whole service hung. Since I started using GitOps, I haven't had any more of these "man-made disasters". Now, let me start comparing Argo CD vs Flux CD.

Installation & Setup

Argo CD can be installed with a single YAML, and the UI and API are deployed together out of the box. Here are the commands that make it happen:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo

Flux CD follows a modular architecture: you need to install the Source Controller, Kustomize Controller, etc., separately. You can also simplify the process with flux install.

curl -s https://fluxcd.io/install.sh | sudo bash
flux --version
flux install --components="source-controller,kustomize-controller"
kubectl get pods -n flux-system

For me, the winner here is: Argo CD (because more things work out of the box in a single install setup).

Visual Interface (UI)

Argo CD has a powerful built-in Web UI to visually display the application structure, compare differences, synchronize operations, etc. Unfortunately, Flux CD has no UI by default. It can be paired with Weave GitOps or Grafana to check the status, because it relies primarily on the command line. Again, the winner for me: Argo CD, because of a web UI.

Synchronization and Deployment Strategies

Argo CD supports manual synchronization, automatic synchronization, and forced synchronization, suitable for fine-grained control. Flux CD uses a fully automated synchronization strategy that polls Git periodically and automatically aligns the cluster state. Flux CD gets the edge here and is the winner for me.

Toolchain and Integration Capabilities

Argo CD supports Helm, Kustomize, Jsonnet, etc., and can be extended with plugins. Flux CD supports Helm, Kustomize, OCI images, SOPS encryption configuration, GitHub Actions, etc.; the ecosystem is very rich. Flux CD is the winner here for its wide range of integration support.

Multi-tenancy and Privilege Management

Argo CD has built-in RBAC and supports SSO such as OIDC and LDAP, with fine-grained privilege assignment. Flux CD uses Kubernetes' own RBAC system, which is more native but slightly more complex to configure.
If you want ease of use, the winner is Argo CD.

Multi-Cluster Management Capabilities

Argo CD supports multi-clustering natively, allowing you to switch and manage applications across multiple clusters directly in the UI. Flux CD also supports it, but you need to manually configure bootstrap and GitRepo for multiple clusters via GitOps. Winner: Argo CD

Security and Keys

Argo CD is usually combined with Sealed Secrets, Vault, or plugins to handle SOPS. Flux CD has native integration for SOPS: configure it once, and it decrypts automatically with very little effort. Personally, I prefer to use Flux + SOPS in security-oriented scenarios, where the whole key management process is more elegant.

Performance and Scalability

Flux CD's controller architecture naturally supports horizontal scaling, with stable performance for large-scale environments. Argo CD features a centralized architecture: feature-rich, but with slightly higher resource consumption. Winner: Flux CD

Observability and Problem Troubleshooting

Real-time status, change history, diff comparison, synchronized logs, etc. are available within the Argo CD UI. Flux CD relies more on logs and Kubernetes Events and requires additional tools to assist with visualization. Winner: Argo CD

Learning Curve

The Argo CD UI is intuitive and easy to install, suitable for GitOps newcomers to get started. Flux CD focuses more on CLI operations and GitOps concepts, and has a slightly higher learning curve. Argo CD is easier to get started with.

GitOps Principles

Flux CD follows GitOps principles 100%: everything is declarative configuration, and the cluster auto-aligns with Git. Argo CD supports manual operations and UI synchronization, leaning towards "Controlled GitOps". While Argo CD has a lot of goodies, if you are a stickler for principles, then Flux CD will be more appealing to you.

Final Thoughts

Argo CD can be summed up as: quick to get started, comes with a web interface. Seriously, the first time I used Argo CD, I had a feeling of "relief". After deployment, you can open the web UI and see the status of each application, deploy with one click, roll back, compare Git and cluster differences - for people like me who are used to kubectl get, it's a welcome break from information overload. Its "App of Apps" model is also great for organizing large configurations. For example, I use Argo to manage different configuration repos in multiple environments (dev/stage/prod), which is very intuitive.

On the downside, it's a bit "heavy". It has its own API server, UI, and controller, which take up a fair bit of resources. You have to learn its Application CRD if you want to adjust the configuration. Argo CD even provides a CLI for application management and cluster automation. Here are the commands that can come in handy for the purpose stated above:

argocd app sync rental-app
argocd app rollback rental-app 2

Flux CD can be summed up as a modular tool. Flux is the engineer's tool: the ultimate in flexibility, configurable in plain text, and capable of being combined into anything you want. It emphasizes declarative configuration and automated synchronization. Flux CD offers these features:
Triggers on Git change
Auto-apply
Auto-push notifications to Slack
Image updates automatically trigger deployment

Although this can be done in Argo, Flux's modular controllers (e.g. SourceController, KustomizeController) allow us to have fine-grained control over every aspect and build the entire platform like Lego.
Of course, the shortcomings are obvious:
No UI
The configuration is all based on YAML
Documentation is a little thinner than Argo's; you need to read more of the official examples.

Practical advice: how to choose in different scenarios?

Scenario 1: Small team, first time with GitOps? Choose Argo CD.
The visualization interface is friendly.
Supports manual deployment/rollback.
Low learning cost, easy for the team to accept.

Scenario 2: Strong security and compliance needs? Choose Flux CD.
Fully declarative.
Scales seamlessly across hundreds of clusters.
It can be integrated with GitHub Actions, SOPS, Flagger, etc. to create a powerful CI/CD system.

Scenario 3: You're already using Argo Workflows or Rollouts? Then continue to use Argo CD for a better unified ecosystem experience.

The last bit of personal advice: Don't get hung up on which one to pick; choose one and start using it, that's the most important thing! I also had a "tool-phobia" at the beginning, but after using it, I realized that GitOps itself is the revolutionary concept, and the tools are just the vehicle. You can start with Argo CD and then move on to Flux. If you're about to design a GitOps process, start with the tool stack you're most familiar with and the capabilities of your team, and then evolve gradually.
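To make the Argo CD workflow described above a bit more concrete, here is a minimal, hypothetical CLI session. The server address, repository URL, and paths are placeholders, and the flags shown are the standard argocd app create options; treat it as a sketch rather than a prescribed setup.

argocd login argocd.example.com --username admin
# Register an application that tracks a Git repo (placeholder URL and path)
argocd app create rental-app \
  --repo https://github.com/example/rental-app-config.git \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace rental \
  --sync-policy automated
argocd app sync rental-app      # trigger a sync manually
argocd app list                 # check sync and health status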
  5. By: Edwin Fri, 25 Apr 2025 05:28:30 +0000 The “grep” command is short for “Global Regular Expression Print”. This is a powerful tool in Unix-based systems used to search and filter text based on specific patterns. If you work with too many text-based files like logs, you will find it difficult to search for multiple strings in parallel. “grep” has the ability to search for multiple strings simultaneously, streamlining the process of extracting relevant information from files or command outputs. In this article, let us explain the variants of grep, instructions on how to use grep multiple string search, practical examples, and some best practices. Let’s get started! “grep” and Its Variants At Unixmen, we always start with the basics. So, before diving into searching for multiple strings, it’s necessary to understand the basic usage of “grep” and its variants: grep: Searches files for lines that match a given pattern using basic regular expressions. egrep: Equivalent to “grep -E”, it interprets patterns as extended regular expressions, allowing for more complex searches. Note that “egrep” is deprecated but still widely used. fgrep: Equivalent to “grep -F”, it searches for fixed strings rather than interpreting patterns as regular expressions. You are probably wondering why we have two functions for doing the same job. egrep and grep -E do the same task and similarly, fgrep and grep -F have the same functionality. This is a part of a consistency exercise to make sure all commands have a similar pattern. At Unixmen, we recommend using grep -E and grep -F instead of egrep and fgrep respectively so that your code is future-proof. Now, let’s get back to the topic. For example, to search for the word “error” in a file named “logfile.txt”, your code will look like: grep "error" logfile.txt How to Search for Multiple Strings with grep There are multiple approaches to use grep to search for multiple strings. Let us learn each approach with some examples. Using Multiple “-e” Options The “-e`” option lets you specify multiple patterns. Each pattern is provided as an argument to “-e”: grep -e "string1" -e "string2" filename This command searches for lines containing either “string1” or “string2” in the specified file. Using Extended Regular Expressions with “-E” By enabling extended regular expressions with the “-E” option, you can use the pipe symbol “|” to separate multiple patterns within a single quoted string: grep -E "string1|string2" filename Alternatively, you can use the “egrep” command, which is equivalent to grep -E, but we do not recommend it considering egrep is deprecated. egrep "pattern1|pattern2" filename Both commands will match lines containing either “pattern1” or “pattern2”. Using Basic Regular Expressions (RegEx) with Escaped Pipe In basic regular expressions, the pipe symbol “|” is not recognized as a special character unless escaped. Therefore, you can use: grep "pattern1\|pattern2" filename This approach searches for lines containing either “pattern1” or “pattern2” in the specified file. Practical Examples Now that we know the basics and the multiple methods to use grep to search multiple strings, let us look at some real-world applications. 
How to Search for Multiple Words in a File

If you have a file named "unixmen.txt" containing the following lines:

alpha bravo charlie
delta fox golf
kilo lima mike

To search for lines containing either "alpha" or "kilo", you can use:

grep -E "alpha|kilo" unixmen.txt

The output will be:

alpha bravo charlie
kilo lima mike

Searching for Multiple Patterns in Command Output

You can also use grep to filter the output of other commands. For example, to search for processes containing either "bash" or "ssh" in their names, you can use:

ps aux | grep -E "bash|ssh"

This command will display all running processes that include "bash" or "ssh" in their command line.

Case-Insensitive Searches

To perform case-insensitive searches, add the "-i" option:

grep -i -e "string1" -e "string2" filename

This command matches lines containing "string1" or "string2" regardless of case.

How to Count the Number of Matches

To count the number of lines that match any of the specified patterns, use the "-c" option:

grep -c -e "string1" -e "string2" filename

This command outputs the number of matching lines.

Displaying Only Matching Parts of Lines

To display only the matching parts of lines, use the "-o" option:

grep -o -e "string1" -e "string2" filename

This command prints only the matched strings, one per line.

Searching Recursively in Directories

To search for patterns in all files within a directory and its subdirectories, use the "-r" (short for recursive) option:

grep -r -e "pattern1" -e "pattern2" /path/to/directory

This command searches for the specified patterns in all files under the given directory.

How to Use awk for Multiple String Searches

While "grep" is powerful, there are scenarios where "awk" might be more suitable, especially when searching for multiple patterns with complex conditions. For example, to search for lines containing both "string1" and "string2", you can use:

awk '/string1/ && /string2/' filename

This command displays lines that contain both "string1" and "string2".

Wrapping Up with Some Best Practices

Now that we have covered everything there is to learn about using grep to search multiple strings, it may feel a little overwhelming. Here's why it is worth the effort. "grep" can be easily integrated into scripts to automate repetitive tasks, like finding specific keywords across multiple files or generating reports. It's widely available on Unix-like systems and can often be found on Windows through tools like Git Bash or WSL. Knowing how to use "grep" makes your skills portable across systems. Mastering grep enhances your problem-solving capabilities, whether you're debugging code, parsing logs, or extracting specific information from files. By leveraging regular expressions, grep enables complex pattern matching, which expands its functionality beyond simple string searches. In short, learning grep is like gaining a superpower for text processing. Once you learn it, you'll wonder how you ever managed without it!

Related Articles
How to Refine your Search Results Using Grep Exclude
VI Save and Exit: Essential Commands in Unix's Text Editor
Why It Is Better to Program on Linux

The post grep: Multiple String Search Feature appeared first on Unixmen.
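One more option that fits this topic: when the list of strings grows, you can keep one pattern per line in a file and hand it to grep with the -f option. This is a small supplementary sketch rather than something from the article above; the file names are made up.

# patterns.txt contains one pattern per line, for example:
#   error
#   warning
#   timeout
grep -E -f patterns.txt logfile.txt      # lines matching any of the patterns
grep -c -E -f patterns.txt logfile.txt   # just count the matching lines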
  6. By: Edwin Fri, 25 Apr 2025 05:26:57 +0000

Today at Unixmen, we are about to explain everything there is about the ".bashrc" file. This file serves as a script that initializes settings for interactive Bash shell sessions. The bashrc file is typically located in your home directory as a hidden file ("~/.bashrc"). This file lets you customize your shell environment, enhancing both efficiency and personalization. Let's get started!

Why is the bashrc File Required?

Whenever a new interactive non-login Bash shell is launched, like when you open a new terminal window, the ".bashrc" file is executed. This execution sets up the environment according to user-defined configurations, which includes:

Aliases: Shortcuts for longer commands to streamline command-line operations.
Functions: Custom scripts that can be called within the shell to perform specific tasks.
Environment variables: Settings that define system behaviour, such as the "PATH" variable, which determines where the system looks for executable files.
Prompt customization: Modifying the appearance of the command prompt to display information like the current directory or git branch.

By configuring these elements in the ".bashrc" file, you can automate repetitive tasks, set up your preferred working environment, and ensure consistency across sessions.

How to Edit the ".bashrc" File

The ".bashrc" file resides in your home directory and is hidden by default. Follow these instructions to view and edit this file:

Launch your terminal application. In other words, open the terminal window.
Navigate to the home directory by executing the "cd ~" command.
Use your preferred text editor to open the file. For example, to use "nano" to open the file, execute the command: "nano .bashrc".

We encourage you to always create a backup of the .bashrc file before you make any changes to it. Execute this command to create a backup of the file:

cp ~/.bashrc ~/.bashrc_backup

When you encounter any errors, this precaution allows you to restore the original settings if needed.

Common Customizations (Modifications) to the .bashrc File

Here are some typical modifications the tech community makes to their ".bashrc" file:

How to Add Aliases

Aliases create shortcuts for longer commands, saving time and reducing typing errors. For instance:

alias ll='ls -alF'
alias gs='git status'

When you add these lines to ".bashrc", typing "ll" in the terminal will execute "ls -alF", and "gs" will execute "git status". In simpler terms, you are creating shortcuts in the terminal.

Defining Functions

If you are familiar with Python, you would already know the advantages of defining functions (Tip: If you want to learn Python, two great resources are Stanford's Code in Place program and PythonCentral). Functions allow for more complex command sequences. For example, here is a function to navigate up multiple directory levels:

up() {
  local d=""
  limit=$1
  for ((i=1 ; i <= limit ; i++))
  do
    d="../$d"
  done
  d=$(echo $d | sed 's/\/$//')
  cd $d
}

Adding this function lets you type "up 3" to move up three directory levels.

How to Export Environment Variables

Setting environment variables can configure system behaviour. For example, adding a directory to the "PATH":

export PATH=$PATH:/path/to/directory

This addition lets the executables in "/path/to/directory" be run from any location in the terminal.

Customizing the Prompt

The appearance of the command prompt can be customized to display useful information.
For example, execute this command to display the username ("\u"), hostname ("\h"), and current working directory ("\W"):

export PS1="\u@\h \W \$ "

How to Apply Changes

After editing and saving the .bashrc file, apply the changes to the current terminal session by sourcing the file. To apply the changes, execute the command:

source ~/.bashrc

Alternatively, closing and reopening the terminal will also load the new configurations.

Wrapping Up with Some Best Practices

That is all there is to learn about the bashrc file. Here are some best practices to make sure you do not encounter any errors. Always add comments to your .bashrc file to document the purpose of each customization. This practice aids in understanding and maintaining the file. For extensive configurations, consider sourcing external scripts from within .bashrc to keep the file organized. Be very careful when you add commands that could alter system behaviour or performance. Test new configurations in a separate terminal session before applying them globally. By effectively utilizing the ".bashrc" file, you can create a tailored and efficient command-line environment that aligns with your workflow and preferences.

Related Articles
Fun in Terminal
How To Use The Linux Terminal Like A Real Pro, First Part
How To Use Git Commands From Linux Terminal

The post .bashrc: The Configuration File of Linux Terminal appeared first on Unixmen.
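Putting the pieces from this article together, a commented ~/.bashrc fragment might look like the sketch below. The extra PATH entry and the aliases are only illustrative; adapt them to your own setup.

# --- Aliases ---
alias ll='ls -alF'
alias gs='git status'

# --- Functions ---
up() {
  local d="" limit=$1
  for ((i = 1; i <= limit; i++)); do d="../$d"; done
  cd "${d%/}"    # strip the trailing slash, same effect as the sed trick above
}

# --- Environment variables ---
export PATH=$PATH:$HOME/bin    # illustrative extra PATH entry

# --- Prompt ---
export PS1="\u@\h \W \$ "

# Reload with: source ~/.bashrc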
  7. By: Edwin Fri, 25 Apr 2025 05:26:43 +0000 The Windows Subsystem for Linux (WSL) is a powerful tool that allows you to run a Linux environment directly on Windows. WSL gives you seamless integration between the two most common operating systems. One of the key features of WSL is the ability to access and manage files across both Windows and Linux platforms. Today at Unixmen, we will walk you through the methods to access Windows files from Linux within WSL and vice versa. Let’s get started! How to Access Windows Files from WSL In WSL, Windows drives are mounted under the “/mnt” directory, allowing Linux to interact with the Windows file system. Here’s how you can navigate to your Windows files: Step 1: Locate the Windows Drive Windows drives are mounted as “/mnt/<drive_letter>”. For example, the C: drive is accessible at “/mnt/c”. Step 2: Navigate to Your User Directory To access your Windows user profile, use the following commands: cd /mnt/c/Users/<Your_Windows_Username> Replace “<Your_Windows_Username>” with your actual Windows username. Step 3: List the Contents Once you are in your user directory, you can list the contents using: ls This will display all files and folders in your Windows user directory. By navigating through “/mnt/c/”, you can access any file or folder on your Windows C: drive. This integration lets you manipulate Windows files using Linux commands within WSL. Steps to Access WSL Files from Windows In Windows accessing files stored within the WSL environment is very straightforward. Here is how you can do it: Using File Explorer: Open File Explorer. In the address bar, type “\\wsl$” and press the Enter key. You’ll see a list of installed Linux distributions. Navigate to your desired distribution to access its file system. Direct Access to Home Directory: For quick access to your WSL home directory, navigate to: \\wsl$\<Your_Distribution>\home\<Your_Linux_Username> Replace “<Your_Distribution>” with the name of your Linux distribution (e.g., Ubuntu) and “<Your_Linux_Username>” with your Linux username. This method allows you to seamlessly transfer files between Windows and WSL environments using the familiar Windows interface. Best Practices At Unixmen, we recommend these best practices for better file management between Windows and WSL. File location: For optimal performance, store project files within the WSL file system when you work primarily with Linux tools. If you need to use Windows tools on the same files, consider storing them in the Windows file system and accessing them from WSL. Permissions: Be mindful of file permissions. Files created in the Windows file system may have different permissions when accessed from WSL. Path conversions: Use the “wslpath” utility to convert Windows paths to WSL paths and vice versa: wslpath 'C:\Users\Your_Windows_Username\file.txt' This command will output the equivalent WSL path. Wrapping Up By understanding these methods and best practices, you can effectively manage and navigate files between Windows and Linux environments within WSL, enhancing your workflow and productivity. Related Links WinUSB: Create A Bootable Windows USB In Linux Run Windows Apps on Linux Easily Linux vs. Mac vs. Windows OS Guide The post Windows Linux Subsystem (WSL): Run Linux on Windows appeared first on Unixmen.
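As a quick illustration of the wslpath conversions mentioned above, here is a hypothetical session run from inside WSL. The username, distribution name, and file names are placeholders, and the exact output of wslpath -w depends on your WSL version and distribution.

wslpath 'C:\Users\Alice\Documents\report.txt'
# -> /mnt/c/Users/Alice/Documents/report.txt

wslpath -w /home/alice/projects/notes.md
# -> \\wsl$\Ubuntu\home\alice\projects\notes.md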
  8. By: Edwin Fri, 25 Apr 2025 05:26:38 +0000 If you work with Python a lot, you might be familiar with the process of constantly installing packages. But what happens when you decide that a package is no longer required? That is when you use “pip” to uninstall packages. The “pip” tool, which is Python’s package installer, offers a straightforward method to uninstall packages. Today at Unixmen, we will walk you through the process, ensuring even beginners can confidently manage their Python packages. Let’s get started! What is pip and Its Role in Python Package Management “pip” team named their product interestingly because it stands for “Pip Installs Packages”. It is the standard package manager for Python. It lets you install, update, and remove Python packages from the Python Package Index (PyPI) and other indexes. You will need package management to be as efficient as possible because that ensures your projects remain organized and free from unnecessary or conflicting dependencies. How to Uninstall a Single Package with “pip” Let us start with simple steps. Here is how you can remove a package using pip. First, open your system’s command line interface (CLI or terminal): On Windows, search for “cmd” or “Command Prompt” in the Start menu. On macOS or Linux, open the Terminal application. Type the following command, replacing “package_name” with the name of the package you wish to uninstall: pip uninstall package_name For example, to uninstall the `requests` package: pip uninstall requests As a precaution, always confirm the uninstallation process. “pip” will display a list of files to be removed and prompt for confirmation like this: Proceed (y/n)? When you see this prompt, type “y” and press the Enter key to proceed. This process makes sure that the specified package is removed from your Python environment. Uninstall Multiple Packages Simultaneously Let’s take it to the next level. Now that we are familiar with uninstalling a single package, let us learn how to uninstall multiple packages at once. When you need to uninstall multiple packages at once, “pip” allows you to do so by listing the package names separated by spaces. Here is how you can do it: pip uninstall package1 package2 package3 For example, to uninstall both “numpy” and “pandas”: pip uninstall numpy pandas As expected, when this command is executed, a prompt will appear for confirmation before removing each package. How to Uninstall Packages Without Confirmation When you are confident that you are uninstalling the correct package, the confirmation prompts will be a little irritating. To solve this and bypass the confirmation prompts, use the “-y”flag: pip uninstall -y package_name What is being done here is you are instructing the command prompt that it has confirmation with the “-y” flag. This is particularly useful in scripting or automated workflows where manual intervention is impractical. Uninstalling All Installed Packages To remove all installed packages and achieve a clean slate, you can use the following command: pip freeze | xargs pip uninstall -y Here’s a breakdown of the command: “pip freeze” lists all installed packages. “xargs” takes this list and passes each package name to “pip uninstall -y”, which uninstalls them without requiring confirmation. Be very careful when you are executing this command. This will remove all packages in your environment. Ensure this is your intended action before proceeding. 
Best Practices for Managing Python Packages We have covered almost everything when it comes to using pip to uninstall packages. Before we wrap up, let us learn the best practices as well. Always use virtual environments to manage project-specific dependencies without interfering with system-wide packages. Tools like “venv” (included with Python 3.3 and later) or “virtualenv” can help you create isolated environments. Periodically check for and remove unused packages to keep your environment clean and efficient. Documentation can be boring for most of the beginners but always maintain a “requirements.txt” file for each project, listing all necessary packages and their versions. This practice aids in reproducibility and collaboration. Prefer installing packages within virtual environments rather than globally to avoid potential conflicts and permission issues. Wrapping Up Managing Python packages is crucial for maintaining a streamlined and conflict-free development environment. The “pip uninstall” command provides a simple yet powerful means to remove unnecessary or problematic packages. By understanding and utilizing the various options and best practices outlined in this guide, even beginners can confidently navigate Python package management. Related Articles Pip: Install Specific Version of a Python Package Instructions How to Update and Upgrade the Pip Command Install Pip Ubuntu: A Detailed Guide to Cover Every Step The post Pip: Uninstall Packages Instructions with Best Practices appeared first on Unixmen.
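To tie the best practices above together, here is a small, hypothetical terminal session that uses a virtual environment; the package names are only examples.

python3 -m venv .venv                  # create an isolated environment
source .venv/bin/activate
pip install requests pandas            # install some example packages
pip freeze > requirements.txt          # record what is installed
pip uninstall -y requests              # remove one package without the prompt
pip uninstall -y -r requirements.txt   # remove everything listed in the file
deactivate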
  9. By: Edwin Fri, 25 Apr 2025 05:26:26 +0000 Today at Unixmen, we are about to explain a key configuration file that defines how disk partitions, devices, and remote filesystems are mounted and integrated into the system’s directory structure. The file we are talking about is the “/etc/fstab”. By automating the mounting process at boot time, fstab ensures consistent and reliable access to various storage resources. In this article, we will explain the structure, common mount options, best practices, and the common pitfalls learners are prone to face. Let’s get started! Structure of the “/etc/fstab” File Each line in the “fstab” file represents a filesystem and contains six fields, each separated by spaces or tabs. Here are the components: Filesystem: Specifies the device or remote filesystem to be mounted, identified by device name (for example: “/dev/sda1”) or UUID. Mounting point: The directory where the filesystem will be mounted, such as “/”, “/home”, or “/mnt/data”. Filesystem type: Indicates the type of filesystem, like “ext4”, “vfat”, or “nfs”. Options: Comma-separated list of mount options that control the behaviour of the filesystem like “defaults”, “noatime”, “ro”. Dump: A binary value (0 or 1) used by the “dump” utility to decide if the filesystem needs to be backed up. Pass: An integer (0, 1, or 2) that determines the order in which “fsck” checks the filesystem during boot. Some of the Common Mount Options Let us look at some of the common mount options: defaults: This option applies the default settings: “rw”, “suid”, “dev”, “exec”, “auto”, “nouser”, and “async”. noauto: Prevents the filesystem from being mounted automatically at boot. user: Allows any user to mount the filesystem. nouser: Restricts mounting to the superuser. ro: Mounts the filesystem as read-only. rw: Mounts the filesystem as read-write. sync: Ensures that input and output operations are done synchronously. noexec: Prevents execution of binaries on the mounted filesystem. As usual, let us understand the concept of “fstab” with an example. Here is a sample entry: UUID=123e4567-e89b-12d3-a456-426614174000 /mnt/data ext4 defaults 0 2 Let us break down this example a little. UUID=123e4567-e89b-12d3-a456-426614174000: Specifies the unique identifier of the filesystem. /mnt/data: Designates the mount point. ext4: Indicates the filesystem type. defaults: Applies default mount options. 0: Excludes the filesystem from “dump” backups. 2: Sets the “fsck” order. Non-root filesystems are typically assigned “2”. Best Practices While the fstab file is a pretty straightforward component, here are some best practices to help you work more efficiently. Always use UUIDs or labels: Employing UUIDs or filesystem labels instead of device names (like “/dev/unixmen”) enhances reliability, especially when device names change due to hardware modifications. Create backups before editing: Always create a backup of the “fstab” file before making changes to prevent system boot issues. Verify entries: After editing “fstab”, test the configuration with “mount -a” to ensure all filesystems mount correctly without errors. Common Pitfalls You May Face Misconfigurations in this file can lead to various issues, affecting system stability and accessibility. Common problems you could face include: Incorrect device identification: Using device names like “/dev/sda1” can be problematic, especially when hardware changes cause device reordering. This can result in the system attempting to mount the wrong partition. 
Using an incorrect Universally Unique Identifier (UUID) can prevent the system from locating and mounting the intended filesystem, leading to boot failures.
Misconfigured mount options: Specifying unsupported or invalid mount options can cause mounting failures. For example, using "errors=remount-rw" instead of the correct "errors=remount-ro" will cause system boot issues.
File system type mismatch: Specifying an incorrect file system type can prevent proper mounting. For example, specifying an "ext4" partition as "xfs" in "fstab" will result in mounting errors.

Wrapping Up

You may have noticed that the basics of fstab do not feel that complex, but we included a thorough section on best practices and challenges. This is because identifying the exact cause of an fstab error is a little difficult for the untrained eye. The error messages can be vague and non-specific. Determining the proper log file for troubleshooting is another pain. We recommend including the "nofail" option, so that the system boots even if the device is unavailable. Now you are ready to work with the fstab file!

Related Articles
How to Rename Files in UNIX / Linux
Untar tar.gz file: The Only How-to Guide You Will Need
Fsck: How to Check and Repair a Filesystem

The post fstab: Storage Resource Configuration File appeared first on Unixmen.
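Following the recommendations above (UUIDs, a backup, the nofail option, and a test with mount -a), a typical workflow for adding a data partition could look like this sketch; the device name and mount point are examples.

sudo blkid /dev/sdb1                    # find the partition's UUID
sudo cp /etc/fstab /etc/fstab.backup    # back up before editing
sudo mkdir -p /mnt/data

# Example /etc/fstab entry (a single line):
# UUID=123e4567-e89b-12d3-a456-426614174000  /mnt/data  ext4  defaults,nofail  0  2

sudo mount -a        # try to mount everything; errors will show up here
findmnt /mnt/data    # confirm the filesystem is mounted where you expect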
  10. by: Blackle Mori Thu, 24 Apr 2025 12:49:42 +0000 You would be forgiven if you’ve never heard of Cohost.org. The bespoke, Tumblr-like social media website came and went in a flash. Going public in June 2022 with invite-only registrations, Cohost’s peach and maroon landing page promised that it would be “posting, but better.” Just over two years later, in September 2024, the site announced its shutdown, its creators citing burnout and funding problems. Today, its servers are gone for good. Any link to cohost.org redirects to the Wayback Machine’s slow but comprehensive archive. The landing page for Cohost.org, featuring our beloved eggbug. Despite its short lifetime, I am confident in saying that Cohost delivered on its promise. This is in no small part due to its user base, consisting mostly of niche internet creatives and their friends — many of whom already considered “posting” to be an art form. These users were attracted to Cohost’s opinionated, anti-capitalist design that set it apart from the mainstream alternatives. The site was free of advertisements and follower counts, all feeds were purely chronological, and the posting interface even supported a subset of HTML. It was this latter feature that conjured a community of its own. For security reasons, any post using HTML was passed through a sanitizer to remove any malicious or malformed elements. But unlike most websites, Cohost’s sanitizer was remarkably permissive. The vast majority of tags and attributes were allowed — most notably inline CSS styles on arbitrary elements. Users didn’t take long to grasp the creative opportunities lurking within Cohost’s unassuming “new post” modal. Within 48 hours of going public, the fledgling community had figured out how to post poetry using the <details> tag, port the Apple homepage from 1999, and reimplement a quick-time WarioWare game. We called posts like these “CSS Crimes,” and the people who made them “CSS Criminals.” Without even intending to, the developers of Cohost had created an environment for a CSS community to thrive. In this post, I’ll show you a few of the hacks we found while trying to push the limits of Cohost’s HTML support. Use these if you dare, lest you too get labelled a CSS criminal. Width-hacking Many of the CSS crimes of Cohost were powered by a technique that user @corncycle dubbed “width-hacking.” Using a combination of the <details> element and the CSS calc() function, we can get some pretty wild functionality: combination locks, tile matching games, Zelda-style top-down movement, the list goes on. If you’ve been around the CSS world for a while, there’s a good chance you’ve been exposed to the old checkbox hack. By combining a checkbox, a label, and creative use of CSS selectors, you can use the toggle functionality of the checkbox to implement all sorts of things. Tabbed areas, push toggles, dropdown menus, etc. However, because this hack requires CSS selectors, that meant we couldn’t use it on Cohost — remember, we only had inline styles. Instead, we used the relatively new elements <details> and <summary>. These elements provide the same visibility-toggling logic, but now directly in HTML. No weird CSS needed. CodePen Embed Fallback These elements work like so: All children of the <details> element are hidden by default, except for the <summary> element. When the summary is clicked, it “opens” the parent details element, causing its children to become visible. We can add all sorts of styles to these elements to make this example more interesting. 
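Since the CodePen demos are embedded and may not render outside the original page, here is a bare-bones sketch of the <details>/<summary> pattern just described, using only inline styles in the spirit of Cohost's sanitizer; the colors and text are arbitrary.

<details>
  <summary style="cursor: pointer;">Click me</summary>
  <div style="padding: 8px; background: #ffd9d9;">
    This content is hidden until the summary above is opened.
  </div>
</details>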
Below, I have styled the constituent elements to create the effect of a button that lights up when you click on it. CodePen Embed Fallback This is achieved by giving the <summary> element a fixed position and size, a grey background color, and an outset border to make it look like a button. When it’s clicked, a sibling <div> is revealed that covers the <summary> with its own red background and border. Normally, this <div> would block further click events, but I’ve given it the declaration pointer-events: none. Now all clicks pass right on through to the <summary> element underneath, allowing you to turn the button back off. This is all pretty nifty, but it’s ultimately the same logic as before: something is toggled either on or off. These are only two states. If we want to make games and other gizmos, we might want to represent hundreds to thousands of states. Width-hacking gives us exactly that. Consider the following example: CodePen Embed Fallback In this example, three <details> elements live together in an inline-flex container. Because all the <summary> elements are absolutely-positioned, the width of their respective <details> elements are all zero when they’re closed. Now, each of these three <details> has a small <div> inside. The first has a child with a width of 1px, the second a child with a width of 2px, and the third a width of 4px. When a <details> element is opened, it reveals its hidden <div>, causing its own width to increase. This increases the width of the inline-flex container. Because the width of the container is the sum of its children, this means its width directly corresponds to the specific <details> elements that are open. For example, if just the first and third <details> are open, the inline-flex container will have the width 1px + 4px = 5px. Conversely, if the inline-flex container is 2px wide, we can infer that the only open <details> element is the second one. With this trick, we’ve managed to encode all eight states of the three <details> into the width of the container element. This is pretty cool. Maybe we could use this as an element of some kind of puzzle game? We could show a secret message if the right combination of buttons is checked. But how do we do that? How do we only show the secret message for a specific width of that container div? CodePen Embed Fallback In the preceding CodePen, I’ve added a secret message as two nested divs. Currently, this message is always visible — complete with a TODO reminding us to implement the logic to hide it unless the correct combination is set. You may wonder why we’re using two nested divs for such a simple message. This is because we’ll be hiding the message using a peculiar method: We will make the width of the parent div.secret be zero. Because the overflow: hidden property is used, the child div.message will be clipped, and thus invisible. Now we’re ready to implement our secret message logic. Thanks to the fact that percentage sizes are relative to the parent, we can use 100% as a stand-in for the parent’s width. We can then construct a complicated CSS calc() formula that is 350px if the container div is our target size, and 0px otherwise. With that, our secret message will be visible only when the center button is active and the others are inactive. Give it a try! CodePen Embed Fallback This complicated calc() function that’s controlling the secret div’s width has the following graph: You can see that it’s a piecewise linear curve, constructed from multiple pieces using min/max. 
These pieces are placed in just the right spots so that the function maxes out when the container div is 2px— which we’ve established is precisely when only the second button is active. A surprising variety of games can be implemented using variations on this technique. Here is a tower of Hanoi game I had made that uses both width and height to track the game’s state. SVG animation So far, we’ve seen some basic functionality for implementing a game. But what if we want our games to look good? What if we want to add ✨animations?✨ Believe it or not, this is actually possible entirely within inline CSS using the power of SVG. SVG (Scalable Vector Graphics) is an XML-based image format for storing vector images. It enjoys broad support on the web — you can use it in <img> elements or as the URL of a background-image property, among other things. Like HTML, an SVG file is a collection of elements. For SVG, these elements are things like <rect>, <circle>, and <text>, to name a few. These elements can have all sorts of properties defined, such as fill color, stroke width, and font family. A lesser-known feature of SVG is that it can contain <style> blocks for configuring the properties of these elements. In the example below, an SVG is used as the background for a div. Inside that SVG is a <style> block that sets the fillcolor of its <circle> to red. CodePen Embed Fallback An even lesser-known feature of SVG is that its styles can use media queries. The size used by those queries is the size of the div it is a background of. In the following example, we have a resizable <div> with an SVG background. Inside this SVG is a media query which will change the fill color of its <circle> to blue when the width exceeds 100px. Grab the resize handle in its bottom right corner and drag until the circle turns blue. CodePen Embed Fallback Because resize handles don’t quite work on mobile, unfortunately, this and the next couple of CodePens are best experienced on desktop. This is an extremely powerful technique. By mixing it with width-hacking, we could encode the state of a game or gizmo in the width of an SVG background image. This SVG can then show or hide specific elements depending on the corresponding game state via media queries. But I promised you animations. So, how is that done? Turns out you can use CSS animations within SVGs. By using the CSS transition property, we can make the color of our circle smoothly transition from red to blue. CodePen Embed Fallback Amazing! But before you try this yourself, be sure to look at the source code carefully. You’ll notice that I’ve had to add a 1×1px, off-screen element with the ID #hack. This element has a very simple (and nearly unnoticeable) continuous animation applied. A “dummy animation” like this is necessary to get around some web browsers’ buggy detection of SVG animation. Without that hack, our transition property wouldn’t work consistently. For the fun of it, let’s combine this tech with our previous secret message example. Instead of toggling the secret message’s width between the values of 0px and 350px, I’ve adjusted the calc formula so that the secret message div is normally 350px, and becomes 351px if the right combination is set. Instead of HTML/CSS, the secret message is now just an SVG background with a <text> element that says “secret message.” Using media queries, we change the transform scale of this <text> to be zero unless the div is 351px. With the transition property applied, we get a smooth transition between these two states. 
Click the center button to activate the secret message: CodePen Embed Fallback The first cohost user to discover the use of media queries within SVG backgrounds was @ticky for this post. I don't recall who figured out they could animate, but I used the tech quite extensively for this quiz that tells you what kind of soil you'd like if you were a worm. Wrapping up And that's all for now. There are a number of techniques I haven't touched on — namely the fun antics one can get up to with the resize property. If you'd like to explore the world of CSS crimes further, I'd recommend this great linkdump by YellowAfterlife, or this video retrospective by rebane2001. It will always hurt to describe Cohost in the past tense. It truly was a magical place, and I don't think I'll be able to properly convey what it was like to be there at its peak. The best I can do is share the hacks we came up with: the lost CSS tricks we invented while "posting, but better." The Lost CSS Tricks of Cohost.org originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  11. by: Abhishek Kumar Thu, 24 Apr 2025 11:57:47 +0530

When deploying containerized services such as Pi-hole with Docker, selecting the appropriate networking mode is essential for correct functionality, especially when the service is intended to operate at the network level. The host networking mode allows a container to share the host machine's network stack directly, enabling seamless access to low-level protocols and ports. This is particularly critical for applications that require broadcast traffic handling, such as DNS and DHCP services. This article explores the practical use of host networking mode in Docker, explains why bridge mode is inadequate for certain network-wide configurations, and provides a Docker Compose example to illustrate correct usage.

What does "Host Network" actually mean?

By default, Docker containers run in an isolated virtual network known as the bridge network. Each container receives an internal IP address (typically in the 172.17.0.0/16 range) and communicates through Network Address Translation (NAT). This setup is well-suited for application isolation, but it limits the container's visibility to the outside LAN. For instance, services running inside such containers are not directly reachable from other devices on the local network unless specific ports are explicitly mapped.

In contrast, using host network mode grants the container direct access to the host machine's network stack. Rather than using a virtual subnet, the container behaves as if it were running natively on the host's IP address (e.g., 192.168.x.x or 10.1.x.x), as assigned by your router. It can open ports without needing Docker's ports directive, and it responds to network traffic as though it were a system-level process.

Learn Docker: Complete Beginner's Course - Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series. (Linux Handbook, Abdullah Tarek)

Setting up host network mode using Docker Compose

While this setup can also be achieved using the docker run command with the --network host flag, I prefer using Docker Compose. It keeps things declarative and repeatable, especially when you need to manage environment variables, mount volumes, or configure multiple containers together. Let's walk through an example config that runs an nginx container using host network mode:

version: "3"
services:
  web:
    container_name: nginx-host
    image: nginx:latest
    network_mode: host

This configuration tells Docker to run the nginx-host container using the host's network stack. No need to specify ports: if Nginx is listening on port 80, it's directly accessible at your host's IP address on port 80, without any NAT or port mapping. Start it up with:

docker compose up -d

Then access it via:

http://192.168.x.x

You'll get Nginx's default welcome page directly from your host IP.

How is this different from Bridge networking?

By default, Docker containers use the bridge network, where each container is assigned an internal IP (commonly in the 172.17.0.0/16 range). Here's how you would configure that:

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

This exposes the container's port 80 to your host's port 8080. The traffic is routed through Docker's internal bridge interface, with NAT handling the translation. It's great for isolation and works well for most applications.
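For completeness, the equivalent of the two Compose snippets above with plain docker run (mentioned earlier as the alternative) would look roughly like this; the container names are arbitrary.

# Host networking: the container shares the host's network stack directly
docker run -d --name nginx-host --network host nginx:latest

# Default bridge networking: map host port 8080 to container port 80
docker run -d --name nginx-bridge -p 8080:80 nginx:latest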
Optional: Defining a custom bridge network with an external reference In Docker Compose, a user-defined bridge network offers better flexibility and control than the host network, especially when dealing with multiple services. This allows you to define custom aliasing, service discovery, and isolation between services, while still enabling them to communicate over a single network. I personally use this with Nginx Proxy Manager, which needs to communicate with multiple services. These are the services that are all connected to my external npm network: Let's walk through how you can create and use a custom bridge network in your homelab setup. First, you'll need to create the network using the following command: docker network create my_custom_network Then, you can proceed with the Docker Compose configuration: version: "3" services: web: image: nginx:latest networks: - hostnet networks: hostnet: external: true name: my_custom_network Explanation: hostnet: This is the name you give to the network inside the Compose file. name: my_custom_network: This maps that internal name to the actual network we just created. external: true: This tells Docker Compose to use an existing network. Docker will not try to create it, assuming it's already available. By using an external bridge network like this, you can ensure that your services can communicate within a shared network context, but they still benefit from Docker’s built-in networking features, such as automatic service name resolution and DNS, without the potential limitations of the host network. But... What’s the catch? Everything has a trade-off, and host networking is no exception. Here’s where things get real: ❌ Security takes a hit You lose the isolation that containers are famous for. A process inside your container could potentially see or interfere with host-level services. ❌ Port conflicts are a thing Because your container is now sharing the same network stack as your host, you can’t run multiple containers using the same ports without stepping on each other. With the bridge network, Docker handles this neatly using port mappings. With host networking, it’s all manual. ❌ Not cross-platform friendly Host networking works only on Linux hosts. If you're on macOS or Windows, it simply doesn’t behave the same way, thanks to how Docker Desktop creates virtual machines under the hood. This could cause consistency issues if your team is split across platforms. ❌ You can’t use some Docker features Things like service discovery (via Docker's DNS) or custom internal networks just won’t work with host mode. You’re bypassing Docker's clever internal network stack altogether. When to choose which Docker network mode Here’s a quick idea of when to use what: Bridge Network: Great default. Perfect for apps that just need to run and expose ports with isolation. Works well with Docker Compose and lets you connect services easily using their names. Host Network: Use it when performance or native networking is critical. Ideal for edge services, proxies, or tightly coupled host-level apps. None: There's a network_mode: none too—this disables networking entirely. Use it for highly isolated jobs like offline batch processing or building artifacts. Wrapping Up The host network mode in Docker is best suited for services that require direct interaction with the local network. Unlike Docker's default bridge network, which isolates containers with internal IP addresses, host mode allows a container to share the host's network stack, including its IP address and ports, without any abstraction.
In my own setup, I use host mode exclusively for Pi-hole, which acts as both a DNS resolver and DHCP server for the entire network. For most other containers, such as web applications, reverse proxies, or databases, the bridge network is more appropriate. It ensures better isolation, security, and flexibility when exposing services selectively through port mappings. In summary, host mode is a powerful but specialized tool. Use it only when your containerized service needs to behave like a native process on the host system. Otherwise, Docker’s default networking modes will serve you better in terms of control and compartmentalization.
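To make the Pi-hole example above a little more concrete, here is a minimal Compose sketch of what such a host-mode service could look like. Treat it as illustrative only: the image tag, volume paths, and environment variables (TZ, WEBPASSWORD) are assumptions and may differ between Pi-hole versions.

version: "3"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    network_mode: host        # DNS (53) and DHCP (67) bind directly to the LAN interface
    cap_add:
      - NET_ADMIN             # typically needed when Pi-hole acts as the DHCP server
    environment:
      TZ: "Europe/Berlin"       # illustrative value
      WEBPASSWORD: "changeme"   # illustrative value
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped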
  12. by: Abhishek Prakash Thu, 24 Apr 2025 05:35:31 GMT I guess you already know that It's FOSS has an active community forum. I recently upgraded its server and changed its look slightly. Hope you like it. If you have questions about using Linux or if you want to share something interesting you discovered with your Linux setup, you are more than welcome to utilize the Community. It’s FOSS CommunityA place for desktop Linux users and It’s FOSS readersIt's FOSS Community💬 Let's see what else you get in this edition New Ubuntu flavor release.Exploring pages, links, and tags in Logseq.Ubisoft doing something really unexpected.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by Valkey.❇️ Valkey – The Drop-in Alternative to Redis OSSWith the change of Redis licensing in March of 2024 came the end of Redis as an open source project. Enter Valkey – the community driven fork that preserves and improves the familiar high-performance, key-value datastore for improving application performance. Stewarded by the Linux foundation, Valkey serves as an open source drop-in alternative to Redis OSS – no code changes needed, with the same developer-friendly experience. For your open source database, check out Valkey. What is Valkey? – Valkey Datastore Explained - Amazon Web ServicesValkey is an open source, high performance, in-memory, key-value datastore. It is designed as a drop-in replacement for Redis OSS.Amazon Web Services, Inc.📰 Linux and Open Source NewsUbisoft has surprised us with its open source move.Ubuntu 25.04 has arrived, delivering many upgrades.Similarly, Xubuntu 25.04 and Kubuntu 25.04 are here, offering up useful refinements.ZimaBoard 2 crowdfunding campaign is live on Kickstarter. I had the early prototype sent a few days ago and I share my experience with ZimaBoard 2 in this article. Initial Impressions of the ZimaBoard 2 Homelab DeviceApart from the silver exterior, I have nothing to complain about in ZimaBoard 2 even if it is a prototype at this stage.It's FOSS NewsAbhishek🧠 What We’re Thinking AboutAndroid's Linux Terminal just got a noteworthy power up. Android 16 lets the Linux Terminal use your phone’s entire storageAndroid 16 Beta 4 uncaps the disk resizing slider, allowing you to allocate your phone’s entire storage to the Linux Terminal.Android AuthorityMishaal Rahman🧮 Linux Tips, Tutorials and MoreIt's easy to check which desktop environment your Linux distro has.Check out these 21 basic, yet essential Linux networking commands.Some tools you can use when you have to share files in GB over the internet.Continuing the Logseq series, learn how to tag, link, and reference in Logseq the right way, and when you are done with that, you can try customizing it.If you just installed or upgraded to the Ubuntu 25.04 release, here are 13 things you should do right away: 13 Things to do After Installing Ubuntu 25.04Just installed Ubuntu 25.04? Here are some neat tips for you.It's FOSS NewsSourav Rudra Why should you opt for It's FOSS Plus membership: ✅ Ad-free reading experience ✅ Badges in the comment section and forum ✅ Supporting creation of educational Linux materials Join It's FOSS Plus 👷 Homelab and Maker's CornerBuild a real-time knowledge retrieval system with our step-by-step RAG guide. Tuning Local LLMs With RAG Using Ollama and LangchainInterested in taking your local AI set up to the next step? 
Here’s a sample PDF-based RAG project.It's FOSSAbhishek Kumar✨ Apps HighlightDocs is a self-hostable document editor solution with many neat features. Online Docs... but Sovereign: This is Europe’s Open Source Answer to Big TechDocs is an open source, self-hosted document editor that allows real-time collaboration and gives users control over their data.It's FOSS NewsSourav Rudra📽️ Videos I am Creating for YouSee Fedora 42 in action in the latest video. By the way, that weird default wallpaper has a significance. Subscribe to It's FOSS YouTube Channel🧩 Quiz TimeTry your hand at our Essential Linux Commands crossword. Guess the Popular Linux Command: CrosswordTest your Linux command line knowledge with this simple crossword puzzle. All you have to do is to correctly guess the essential Linux command.It's FOSSAbhishek PrakashAlternatively, can you match the Linux distros with their logos? 💡 Quick Handy TipIn GNOME File Manager (Nautilus), you can invert the selection of items using the keyboard shortcut CTRL + SHIFT + I. 🤣 Meme of the WeekHah, this couldn't be more true. 😆 🗓️ Tech TriviaOn April 20, 1998, during a demonstration of a beta version of Windows 98 by Microsoft's Bill Gates, at COMDEX, the system crashed in the live event. Gates jokingly said, "That must be why we're not shipping Windows 98 yet". If you ever used Windows 98, you know that it should have never been shipped 😉 🧑‍🤝‍🧑 FOSSverse CornerCan you help a newbie FOSSer with their search for a Linux distribution chart? Looking for a Linux distribution chartI’m wondering if there’s ever been a table/spreadsheet created to provide a basic breakdown of most, if not all, distros at a glance. I’m not talking about subjective metrics like some charts display (e.g. user friendliness, cutting edge, community, etc.), but about objective criteria like the following: Available architecture Desktop environments (possibly with individual categories for filing system, window manager, terminal, etc.) Shell Package manager Installation (e.g. CLI, Calamares, etc…It's FOSS CommunityThunder_Chicken❤️ With loveShare it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  13. by: Sreenath Wed, 23 Apr 2025 03:05:46 GMT Logseq provides all the necessary elements you need for creating your knowledge base. But one size doesn't fit all. You may need something extra that is either too complicated to achieve in Logseq or not possible at all. What do you do, then? You use external plugins and extensions. Thankfully, Logseq has a thriving marketplace where you can explore various plugins and extensions created by individuals who craved more from Logseq, Let me show you how you can install themes and plugins. 🚧Privacy alert! Do note that plugins can access your graph and local files. You'll see this warning in Logseq as well. More granular permission control system is not yet available at the moment.Installing a plugin in LogseqClick on the top-bar menu button and select Plugins as shown in the screenshot below. Menu → PluginsIn the Plugins window, click on Marketplace. Click on Marketplace tabThis will open the Logseq Plugins Marketplace. You can click on the title of a plugin to get the details about that plugin, including a sample screenshot. Click on Plugin TitleIf you find the plugin useful, use the Install button adjacent to the Plugin in the Marketplace section. Install a PluginManaging PluginsTo manage a plugin, like enable/disable, fine-tune, etc., go to Menu → Plugins. This will take you to the Manage Plugin interface. 📋If you are on the Marketplace, just use the Installed tab to get all the installed plugins.Installed plugins sectionHere, you can enable/disable plugins in Logseq using the corresponding toggle button. Similarly, hover over the settings gear icon for a plugin and select Open Settings option to access plugin configuration. Click on Plugin settings gear iconInstalling themes in LogseqLogseq looks good by default to me but you can surely experiment with its looks by installing new themes. Similar to what you saw in plugin installation section, click on the Plugins option from Logseq menu button. Click on Menu → PluginsWhy did I not click the Themes option above? Well, because that is for switching themes, not installing. In the Plugins window, click on Marketplace section and select Themes. Select Marketplace → ThemesClick on the title of a theme to get the details, including screenshots. Logseq theme details pageTo install a theme, use the Install button adjacent to the theme in Marketplace. Click Install to install the themeEnable/disable themes in Logseq🚧Changing themes is not done in this window. Theme switching will be discussed below.All the installed themes will be listed in Menu → Plugins → Installed → Themes section. Installed themes listedFrom here, you can disable/enable themes using the toggle button. Changing themesMake sure all the desired installed themes are enabled because disabled themes won't be shown in the theme switcher. Click on the main menu button and select the Themes option. Click on Menu → ThemesThis will bring a drop-down menu interface from where you can select a theme. This is shown in the short video below. Updating plugins and themesOccasionally, plugins and themes will provide updates. To check for available plugin/theme updates, click on Menu → Plugins. Here, select the Installed section to access installed Themes and Plugins. There should be a Check for Update button for each item. Click on Check UpdateClick on it to check if any updates are available for the selected plugin/theme. Uninstall plugins and themesBy now you know that in Logseq, both Plugins and themes are considered as plugins. 
So, you can uninstall both in the same way. First, click on Menu button and select the Plugins option. Click on the Menu and select PluginsHere, go to the Installed section. Now, if you want to remove an installed Plugin, go to the Plugins tab. Else, if you would like to remove an installed theme, go to the Themes tab. Select Plugins or Themes SectionHover over the settings gear of the item that needs to be removed and select the Uninstall button. Uninstall a Plugin or ThemeWhen prompted for confirmation, click on Yes, and the plugin/theme will be removed. Manage plugins from Logseq settingsLogseq settings provides a neat place for tweaking the installed Plugins and themes if they provide some extra settings. Click on the menu button on the top-bar and select the Settings button. Click on Menu → SettingsIn the settings window, click on Plugins section. Click on Plugins Section in SettingsHere, you can get a list of plugins and themes that offer some tweaks. Plugin settings in Logseq Settings windowAnd that's all you need to know about exploring plugins and themes in Logseq. In the next tutorial in this series, I'll discuss special pages like Journal. Stay tuned.
  14. by: Chris Coyier Mon, 21 Apr 2025 17:10:35 +0000 I enjoyed Trys Mudford’s explanation of making rounded triangular boxes. It was a very real-world client need, and I do tend to prefer reading about technical solutions to real problems over theoretical ones. This one was tricky because this particular shape doesn’t have a terribly obvious way to draw it on the web. CSS’ clip-path is useful, but the final rounding was done with an unintuitive feGaussianBlur SVG filter. You could draw it all in SVG, but I think the % values you get to use with clip-path are a more natural fit to web content than pure SVG is. SVG just wasn’t born in a responsive web design world. The thing is: SVG has a viewBox which is a fixed coordinate system on which you draw things. The final SVG can be scaled and squished and stuff, but it’s all happening on this fixed grid. I remember when trying to learn the <path d=""> syntax in SVG how it’s almost an entire language unto itself, with lots of different letters issuing commands to a virtual pen. For example: That syntax for the d attribute (also expressed with the path() function) can be applied in CSS, but I always thought that was very weird. The numbers are “unitless” in SVG, and that makes sense because the numbers apply to that invisible fixed grid put in place by the viewBox. But there is no viewBox in regular web layout, so those unitless numbers are translated to px, and px also isn’t particularly responsive web design friendly. This was my mind’s context when I saw the Safari 18.4 new features. One of them being a new shape() function: Yes! I’m glad they get it. I felt like I was going crazy when I would talk about this issue and get met with blank stares. Trys got so close with clip-path: polygon() alone on those rounded arrow shapes. The % values work nicely for random amounts of content inside (e.g. the “nose” should be at 50% of the height) and if the shape of the arrow needed to be maintained px values could be mix-and-matched in there. But the rounding was missing. There is no rounding with polygon(). Or so I thought? I was on the draft spec anyway looking at shape(), which we’ll circle back to, but it does define the same round keyword and provide geometric diagrams with expectations on how it’s implemented. There are no code examples, but I think it would look something like this: /* might work one day? */ clip-path: polygon(0% 0% round 0%, 75% 0% round 10px, 100% 50% round 10px, 75% 100% round 10px, 0% 100% round 0%); I’d say “draft specs are just… draft specs”, but stable Safari is shipping with stuff in this draft spec so I don’t know how all that works. I did test this syntax across the browsers and nothing supports it. If it did, Trys’ work would have been quite a bit easier. Although the examples in that post where a border follows the curved paths… that’s still hard. Maybe we need clip-path-border? There is precedent for rounding in “basic shape” functions already. The inset() function has a round keyword which produces a rounded rectangle (think a simple border-radius). See this example, which actually does work. But anyway: that new shape() function. It looks like it is trying to replicate (the entire?) power of <path d=""> but do it with a more CSS friendly/native syntax.
I’ll post the current syntax from the spec to help paint the picture it’s a whole new language (🫥): <shape-command> = <move-command> | <line-command> | close | <horizontal-line-command> | <vertical-line-command> | <curve-command> | <smooth-command> | <arc-command> <move-command> = move <command-end-point> <line-command> = line <command-end-point> <horizontal-line-command> = hline [ to [ <length-percentage> | left | center | right | x-start | x-end ] | by <length-percentage> ] <vertical-line-command> = vline [ to [ <length-percentage> | top | center | bottom | y-start | y-end ] | by <length-percentage> ] <curve-command> = curve [ [ to <position> with <control-point> [ / <control-point> ]? ] | [ by <coordinate-pair> with <relative-control-point> [ / <relative-control-point> ]? ] ] <smooth-command> = smooth [ [ to <position> [ with <control-point> ]? ] | [ by <coordinate-pair> [ with <relative-control-point> ]? ] ] <arc-command> = arc <command-end-point> [ [ of <length-percentage>{1,2} ] && <arc-sweep>? && <arc-size>? && [rotate <angle>]? ] <command-end-point> = [ to <position> | by <coordinate-pair> ] <control-point> = [ <position> | <relative-control-point> ] <relative-control-point> = <coordinate-pair> [ from [ start | end | origin ] ]? <coordinate-pair> = <length-percentage>{2} <arc-sweep> = cw | ccw <arc-size> = large | small So instead of somewhat obtuse single-letter commands in the path syntax, these have more understandable names. Here’s an example again from the spec that draws a speech bubble shape: .bubble { clip-path: shape( from 5px 0, hline to calc(100% - 5px), curve to right 5px with right top, vline to calc(100% - 8px), curve to calc(100% - 5px) calc(100% - 3px) with right calc(100% - 3px), hline to 70%, line by -2px 3px, line by -2px -3px, hline to 5px, curve to left calc(100% - 8px) with left calc(100% - 3px), vline to 5px, curve to 5px top with left top ); } You can see the rounded corners being drawn there with literal curve commands. I think it’s neat. So again Trys’ shapes could be drawn with this once it has more proper browser support. I love how with this syntax we can mix and match units, we could abstract them out with custom properties, we could animate them, they accept readable position keywords like “right”, we can use calc(), and all this really nice native CSS stuff that path() wasn’t able to give us. This is born in a responsive web design world. Very nice win, web platform.
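For reference, the working inset() behavior mentioned above can be sketched like this (the selector and values here are made up; the round keyword itself is the part that is supported today):

/* a rounded-rectangle clip that works in current browsers */
.card {
  clip-path: inset(10px 20px round 16px);
}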
  15. By: Janus Atienza Mon, 21 Apr 2025 16:36:45 +0000 Microsoft SQL Server supports Linux operating systems, including Red Hat Enterprise Linux, Ubuntu, and container images on platforms like Kubernetes, Docker Engine, and OpenShift. Regardless of the platform on which you are using SQL Server, the databases are prone to corruption and inconsistencies. If your MDF/NDF files on a Linux system get corrupted for any reason, you can repair them. In this post, we’ll discuss the procedure to repair and restore a corrupt SQL database on a Linux system. Causes of corruption in MDF/NDF files in Linux: The SQL database files stored on a Linux system can get corrupted due to one of the following reasons: Sudden system shutdown. Bugs in the SQL Server software. The system’s hard drive, where the database files are saved, has bad sectors. The operating system suddenly crashes while you are working on the database. Hardware failure or malware infection. The system runs out of space. Ways to repair and restore corrupt SQL databases in Linux To repair a corrupt SQL database file stored on a Linux system, you can use SQL Server Management Studio (SSMS) on Ubuntu or Red Hat Enterprise Linux itself, or use a professional SQL repair tool. Steps to repair a corrupt SQL database on a Linux system: First, launch SQL Server on your Linux system with the steps below: Open the terminal with Ctrl+Alt+T or Alt+F2 Next, run the command below and then press the Enter key. sudo systemctl start mssql-server In SSMS, follow the below steps to restore and repair the database file on a Linux system: Step 1- If you have an updated backup file, you can use it to restore the corrupt database. Here’s the command: BACKUP DATABASE [AdventureWorks2019] TO DISK = N'C:\backups\DBTesting.bak' WITH DIFFERENTIAL, NOFORMAT, NOINIT, NAME = N'AdventureWorks2019-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10 GO Step 2- If you have no backup, then, with admin rights, run the DBCC CHECKDB command in SQL Server Management Studio (SSMS). Here the corrupted database name is “DBTesting”. Before using the command, first set the database to SINGLE_USER mode. Here is the command: ALTER DATABASE DBTesting SET SINGLE_USER DBCC CHECKDB ('DBTesting', REPAIR_REBUILD) GO If the REPAIR_REBUILD option fails to repair the problematic MDF file, you can try DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option: DBCC CHECKDB (N'DBTesting', REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS, NO_INFOMSGS; GO Next, change the mode of the database from SINGLE_USER back to MULTI_USER by executing the below command: ALTER DATABASE DBTesting SET MULTI_USER The above command can help you repair a corrupt MDF file, but it may remove many of the data pages containing inconsistent data while repairing. Because of this, you can lose data. Step 3- Use a professional SQL repair tool: If you don’t want to risk the data in your database, install a professional MS SQL recovery tool such as Stellar Repair for MS SQL. The tool is equipped with enhanced algorithms that can help you repair corrupt or inconsistent MDF/NDF files even on a Linux system. Here are the steps to install and launch Stellar Repair for MS SQL: First open the terminal on Linux/Ubuntu. Next, run the below command, replacing app_name with the absolute path of the Stellar Repair for MS SQL package: $ sudo apt install app_name
Next, launch the application on your Ubuntu system using the steps below: Open the Activities overview, locate the Stellar Repair for MS SQL application, and press the Enter key. Enter the system password to authenticate. Next, select the database in Stellar Repair for MS SQL’s user interface by clicking on Select Database. After selecting an MDF file, click Repair. For detailed steps, you can read the KB article. To Conclude If you are working with SQL Server installed on a Linux system in a virtual machine, your system may suddenly crash and the MDF file may get corrupted. In this case, or in any other scenario where the SQL database file becomes inaccessible on a Linux system, you can repair it using the two methods described above. To repair corrupt MDF files quickly, without data loss and file size restrictions, you can use a professional MS SQL repair tool. The tool supports repairing MDF files on both Windows and Linux systems. The post Linux SQL Server Database Recovery: Restoring Corrupt Databases appeared first on Unixmen.
  16. by: Abhishek Kumar Sun, 20 Apr 2025 14:46:21 GMT Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on. This means they lack real-time, domain-specific updates unless retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in. RAG allows an LLM to retrieve relevant external knowledge before generating a response, effectively giving it access to fresh, contextual, and specific information. Imagine having an AI assistant that not only remembers general facts but can also refer to your PDFs, notes, or private data for more precise responses. This article takes a deep dive into how RAG works, how LLMs are trained, and how we can use Ollama and Langchain to implement a local RAG system that fine-tunes an LLM’s responses by embedding and retrieving external knowledge dynamically. By the end of this tutorial, we’ll build a PDF-based RAG project that allows users to upload documents and ask questions, with the model responding based on stored data. ✋I’m not an AI expert. This article is a hands-on look at Retrieval Augmented Generation (RAG) with Ollama and Langchain, meant for learning and experimentation. There might be mistakes, and if you spot something off or have better insights, feel free to share. It’s nowhere near the scale of how enterprises handle RAG, where they use massive datasets, specialized databases, and high-performance GPUs.What is Retrieval-Augmented Generation (RAG)?RAG is an AI framework that improves LLM responses by integrating real-time information retrieval. Instead of relying only on its training data, the LLM retrieves relevant documents from an external source (such as a vector database) before generating an answer. How RAG worksQuery Input – The user submits a question.Document Retrieval – A search algorithm fetches relevant text chunks from a vector store.Contextual Response Generation – The retrieved text is fed into the LLM, guiding it to produce a more accurate and relevant answer.Final Output – The response, now grounded in the retrieved knowledge, is returned to the user.Why use RAG instead of fine-tuning?No retraining required – Traditional fine-tuning demands a lot of GPU power and labeled datasets. RAG eliminates this need by retrieving data dynamically.Up-to-date knowledge – The model can refer to newly uploaded documents instead of relying on outdated training data.More accurate and domain-specific answers – Ideal for legal, medical, or research-related tasks where accuracy is crucial.How LLMs are trained (and why RAG improves them)Before diving into RAG, let’s understand how LLMs are trained: Pre-training – The model learns language patterns, facts, and reasoning from vast amounts of text (e.g., books, Wikipedia).Fine-tuning – It is further trained on specialized datasets for specific use cases (e.g., medical research, coding assistance).Inference – The trained model is deployed to answer user queries.While fine-tuning is helpful, it has limitations: It is computationally expensive.It does not allow dynamic updates to knowledge.It may introduce biases if trained on limited datasets.With RAG, we bypass these issues by allowing real-time retrieval from external sources, making LLMs far more adaptable. Building a local RAG application with Ollama and LangchainIn this tutorial, we'll build a simple RAG-powered document retrieval app using LangChain, ChromaDB, and Ollama. 
The app lets users upload PDFs, embed them in a vector database, and query for relevant information. 💡All the code is available in our GitHub repository. You can clone it and start testing right away.Installing dependenciesTo avoid messing up our system packages, we’ll first create a Python virtual environment. This keeps our dependencies isolated and prevents conflicts with system-wide Python packages. Navigate to your project directory and create a virtual environment: cd ~/RAG-Tutorial python3 -m venv venvNow, activate the virtual environment: source venv/bin/activateOnce activated, your terminal prompt should change to indicate that you are now inside the virtual environment. With the virtual environment activated, install the necessary Python packages using requirements.txt: pip install -r requirements.txtThis will install all the required dependencies for our RAG pipeline, including Flask, LangChain, Ollama, and Pydantic. Once installed, you’re all set to proceed with the next steps! Project structureOur project is structured as follows: RAG-Tutorial/ │── app.py # Main Flask server │── embed.py # Handles document embedding │── query.py # Handles querying the vector database │── get_vector_db.py # Manages ChromaDB instance │── .env # Stores environment variables │── requirements.txt # List of dependencies └── _temp/ # Temporary storage for uploaded filesStep 1: Creating app.py (Flask API Server)This script sets up a Flask server with two endpoints: /embed – Uploads a PDF and stores its embeddings in ChromaDB./query – Accepts a user query and retrieves relevant text chunks from ChromaDB.route_embed(): Saves an uploaded file and embeds its contents in ChromaDB.route_query(): Accepts a query and retrieves relevant document chunks.import os from dotenv import load_dotenv from flask import Flask, request, jsonify from embed import embed from query import query from get_vector_db import get_vector_db load_dotenv() TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') os.makedirs(TEMP_FOLDER, exist_ok=True) app = Flask(__name__) @app.route('/embed', methods=['POST']) def route_embed(): if 'file' not in request.files: return jsonify({"error": "No file part"}), 400 file = request.files['file'] if file.filename == '': return jsonify({"error": "No selected file"}), 400 embedded = embed(file) return jsonify({"message": "File embedded successfully"}) if embedded else jsonify({"error": "Embedding failed"}), 400 @app.route('/query', methods=['POST']) def route_query(): data = request.get_json() response = query(data.get('query')) return jsonify({"message": response}) if response else jsonify({"error": "Query failed"}), 400 if __name__ == '__main__': app.run(host="0.0.0.0", port=8080, debug=True)Step 2: Creating embed.py (embedding documents)This file handles document processing, extracts text, and stores vector embeddings in ChromaDB. 
allowed_file(): Ensures only PDFs are processed.save_file(): Saves the uploaded file temporarily.load_and_split_data(): Uses UnstructuredPDFLoader and RecursiveCharacterTextSplitter to extract text and split it into manageable chunks.embed(): Converts text chunks into vector embeddings and stores them in ChromaDB.import os from datetime import datetime from werkzeug.utils import secure_filename from langchain_community.document_loaders import UnstructuredPDFLoader from langchain_text_splitters import RecursiveCharacterTextSplitter from get_vector_db import get_vector_db TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') def allowed_file(filename): return filename.lower().endswith('.pdf') def save_file(file): filename = f"{datetime.now().timestamp()}_{secure_filename(file.filename)}" file_path = os.path.join(TEMP_FOLDER, filename) file.save(file_path) return file_path def load_and_split_data(file_path): loader = UnstructuredPDFLoader(file_path=file_path) data = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100) return text_splitter.split_documents(data) def embed(file): if file and allowed_file(file.filename): file_path = save_file(file) chunks = load_and_split_data(file_path) db = get_vector_db() db.add_documents(chunks) db.persist() os.remove(file_path) return True return FalseStep 3: Creating query.py (Query processing)It retrieves relevant information from ChromaDB and uses an LLM to generate responses. get_prompt(): Creates a structured prompt for multi-query retrieval.query(): Uses Ollama's LLM to rephrase the user query, retrieve relevant document chunks, and generate a response.import os from langchain_community.chat_models import ChatOllama from langchain.prompts import ChatPromptTemplate, PromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain.retrievers.multi_query import MultiQueryRetriever from get_vector_db import get_vector_db LLM_MODEL = os.getenv('LLM_MODEL') OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434') def get_prompt(): QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI assistant. Generate five reworded versions of the user question to improve document retrieval. Original question: {question}""", ) template = "Answer the question based ONLY on this context:\n{context}\nQuestion: {question}" prompt = ChatPromptTemplate.from_template(template) return QUERY_PROMPT, prompt def query(input): if input: llm = ChatOllama(model=LLM_MODEL) db = get_vector_db() QUERY_PROMPT, prompt = get_prompt() retriever = MultiQueryRetriever.from_llm(db.as_retriever(), llm, prompt=QUERY_PROMPT) chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser()) return chain.invoke(input) return NoneStep 4: Creating get_vector_db.py (Vector database management)It initializes and manages ChromaDB, which stores text embeddings for fast retrieval. 
get_vector_db(): Initializes ChromaDB with the Nomic embedding model and loads stored document vectors.import os from langchain_community.embeddings import OllamaEmbeddings from langchain_community.vectorstores.chroma import Chroma CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma') COLLECTION_NAME = os.getenv('COLLECTION_NAME') TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL') OLLAMA_HOST = os.getenv('OLLAMA_HOST', 'http://localhost:11434') def get_vector_db(): embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL, show_progress=True) return Chroma(collection_name=COLLECTION_NAME, persist_directory=CHROMA_PATH, embedding_function=embedding)Step 5: Environment variablesCreate .env, to store environment variables: TEMP_FOLDER = './_temp' CHROMA_PATH = 'chroma' COLLECTION_NAME = 'rag-tutorial' LLM_MODEL = 'smollm:360m' TEXT_EMBEDDING_MODEL = 'nomic-embed-text' TEMP_FOLDER: Stores uploaded PDFs temporarily.CHROMA_PATH: Defines the storage location for ChromaDB.COLLECTION_NAME: Sets the ChromaDB collection name.LLM_MODEL: Specifies the LLM model used for querying.TEXT_EMBEDDING_MODEL: Defines the embedding model for vector storage.I'm using these light weight LLMs for this tutorial, as I don't have dedicated GPU to inference large models. | You can edit your LLMs in the .env fileTesting the makeshift RAG + LLM PipelineNow that our RAG app is set up, we need to validate its effectiveness. The goal is to ensure that the system correctly: Embeds documents – Converts text into vector embeddings and stores them in ChromaDB.Retrieves relevant chunks – Fetches the most relevant text snippets from ChromaDB based on a query.Generates meaningful responses – Uses Ollama to construct an intelligent response based on retrieved data.This testing phase ensures that our makeshift RAG pipeline is functioning as expected and can be fine-tuned if necessary. Running the flask serverWe first need to make sure our Flask app is running. Open a terminal, navigate to your project directory, and activate your virtual environment: cd ~/RAG-Tutorial source venv/bin/activate # On Linux/macOS # or venv\Scripts\activate # On Windows (if using venv) Now, run the Flask app: python3 app.pyIf everything is set up correctly, the server should start and listen on http://localhost:8080. You should see output like: Once the server is running, we'll use curl commands to interact with our pipeline and analyze the responses to confirm everything works as expected. 1. Testing Document EmbeddingThe first step is to upload a document and ensure its contents are successfully embedded into ChromaDB. curl --request POST \ --url http://localhost:8080/embed \ --header 'Content-Type: multipart/form-data' \ --form file=@/path/to/file.pdfBreakdown: curl --request POST → Sends a POST request to our API.--url http://localhost:8080/embed → Targets our embed endpoint running on port 8080.--header 'Content-Type: multipart/form-data' → Specifies that we are uploading a file.--form file=@/path/to/file.pdf → Attaches a file (in this case, a PDF) to be processed.Expected Response: What’s Happening Internally?The server reads the uploaded PDF file.The text is extracted, split into chunks, and converted into vector embeddings.These embeddings are stored in ChromaDB for future retrieval.If Something Goes Wrong: IssuePossible CauseFix"status": "error"File not found or unreadableCheck the file path and permissionscollection.count() == 0ChromaDB storage failureRestart ChromaDB and check logs 2. 
Querying the DocumentNow that our document is embedded, we can test whether relevant information is retrieved when we ask a question. curl --request POST \ --url http://localhost:8080/query \ --header 'Content-Type: application/json' \ --data '{ "query": "Question about the PDF?" }'Breakdown: curl --request POST → Sends a POST request.--url http://localhost:8080/query → Targets our query endpoint.--header 'Content-Type: application/json' → Specifies that we are sending JSON data.--data '{ "query": "Question about the PDF?" }' → Sends our search query to retrieve relevant information.Expected Response: What’s Happening Internally?The query "Whats in this file?" is passed to ChromaDB to retrieve the most relevant chunks.The retrieved chunks are passed to Ollama as context for generating a response.Ollama formulates a meaningful reply based on the retrieved information.If the Response is Not Good Enough: IssuePossible CauseFixRetrieved chunks are irrelevantPoor chunking strategyAdjust chunk sizes and retry embedding"llm_response": "I don't know"Context wasn't passed properlyCheck if ChromaDB is returning resultsResponse lacks document detailsLLM needs better instructionsModify the system prompt 3. Fine-tuning the LLM for better responsesIf Ollama’s responses aren’t detailed enough, we need to refine how we provide context. Tuning strategies:Improve Chunking – Ensure text chunks are large enough to retain meaning but small enough for effective retrieval.Enhance Retrieval – Increase n_results to fetch more relevant document chunks.Modify the LLM Prompt – Add structured instructions for better responses.Example system prompt for Ollama:prompt = f""" You are an AI assistant helping users retrieve information from documents. Use the following document snippets to provide a helpful answer. If the answer isn't in the retrieved text, say 'I don't know.' Retrieved context: {retrieved_chunks} User's question: {query_text} """ This ensures that Ollama: Uses retrieved text properly.Avoids hallucinations by sticking to available context.Provides meaningful, structured answers.Final thoughtsBuilding this makeshift RAG LLM tuning pipeline has been an insightful experience, but I want to be clear, I’m not an AI expert. Everything here is something I’m still learning myself. There are bound to be mistakes, inefficiencies, and things that could be improved. If you’re someone who knows better or if I’ve missed any crucial points, please feel free to share your insights. That said, this project gave me a small glimpse into how RAG works. At its core, RAG is about fetching the right context before asking an LLM to generate a response. It’s what makes AI chatbots capable of retrieving information from vast datasets instead of just responding based on their training data. Large companies use this technique at scale, processing massive amounts of data, fine-tuning their models, and optimizing their retrieval mechanisms to build AI assistants that feel intuitive and knowledgeable. What we built here is nowhere near that level, but it was still fascinating to see how we can direct an LLM’s responses by controlling what information it retrieves. Even with this basic setup, we saw how much impact retrieval quality, chunking strategies, and prompt design have on the final response. This makes me wonder, have you ever thought about training your own LLM? Would you be interested in something like this but fine-tuned specifically for Linux tutorials? 
Imagine a custom-tuned LLM that could answer your Linux questions with accurate, RAG-powered responses, would you use it? Let us know in the comments!
  17. by: Ojekudo Oghenemaro Emmanuel Sun, 20 Apr 2025 08:04:07 GMT Introduction In today’s digital world, security is paramount, especially when dealing with sensitive data like user authentication and financial transactions. One of the most effective ways to enhance security is by implementing One-Time Password (OTP) authentication. This article explores how to implement OTP authentication in a Laravel backend with a Vue.js frontend, ensuring secure transactions. Why Use OTP Authentication? OTP authentication provides an extra layer of security beyond traditional username and password authentication. Some key benefits include: Prevention of Unauthorized Access: Even if login credentials are compromised, an attacker cannot log in without the OTP. Enhanced Security for Transactions: OTPs can be used to confirm high-value transactions, preventing fraud. Temporary Validity: Since OTPs expire after a short period, they reduce the risk of reuse by attackers. Prerequisites Before getting started, ensure you have the following: Laravel 8 or later installed Vue.js configured in your project A mail or SMS service provider for sending OTPs (e.g., Twilio, Mailtrap) Basic understanding of Laravel and Vue.js In this guide, we’ll implement OTP authentication in a Laravel (backend) and Vue.js (frontend) application. We’ll cover: Setting up Laravel and Vue (frontend) from scratch Setting up OTP generation and validation in Laravel Creating a Vue.js component for OTP input Integrating OTP authentication into login workflows Enhancing security with best practices By the end, you’ll have a fully functional OTP authentication system ready to enhance the security of your fintech or web application. Setting Up Laravel for OTP Authentication Step 1: Install Laravel and Required Packages If you haven't already set up a Laravel project, create a new one: composer create-project "laravel/laravel:^10.0" example-app Next, install the Laravel Breeze package for frontend scaffolding: composer require laravel/breeze --dev After composer has finished installing, run the following command to select the framework you want to use—the Vue configuration: php artisan breeze:install You’ll see a prompt with the available stacks: Which Breeze stack would you like to install? - Vue with Inertia Would you like any optional features? - None Which testing framework do you prefer? - PHPUnit Breeze will automatically install the necessary packages for your Laravel Vue project. You should see: INFO Breeze scaffolding installed successfully. Now run the npm command to build your frontend assets: npm run dev Then, open another terminal and launch your Laravel app: php artisan serve Step 2: Setting up OTP generation and validation in Laravel We'll use a mail testing platform called Mailtrap to send and receive mail locally. If you don’t have a mail testing service set up, sign up at Mailtrap to get your SMTP credentials and add them to your .env file: MAIL_MAILER=smtp MAIL_HOST=sandbox.smtp.mailtrap.io MAIL_PORT=2525 MAIL_USERNAME=1780944422200a MAIL_PASSWORD=a8250ee453323b MAIL_ENCRYPTION=tls MAIL_FROM_ADDRESS=hello@example.com MAIL_FROM_NAME="${APP_NAME}" To send OTPs to users, we’ll use Laravel’s built-in mail services. 
Create a mail class and controller: php artisan make:mail OtpMail php artisan make:controller OtpController Then modify the OtpMail class: <?php namespace App\Mail; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Mail\Mailable; use Illuminate\Mail\Mailables\Content; use Illuminate\Mail\Mailables\Envelope; use Illuminate\Queue\SerializesModels; class OtpMail extends Mailable { use Queueable, SerializesModels; public $otp; /** * Create a new message instance. */ public function __construct($otp) { $this->otp = $otp; } /** * Build the email message. */ public function build() { return $this->subject('Your OTP Code') ->view('emails.otp') ->with(['otp' => $this->otp]); } /** * Get the message envelope. */ public function envelope(): Envelope { return new Envelope( subject: 'OTP Mail', ); } } Create a Blade view in resources/views/emails/otp.blade.php: <!DOCTYPE html> <html> <head> <title>Your OTP Code</title> </head> <body> <p>Hello,</p> <p>Your One-Time Password (OTP) is: <strong>{{ $otp }}</strong></p> <p>This code is valid for 10 minutes. Do not share it with anyone.</p> <p>Thank you!</p> </body> </html> Step 3: Creating a Vue.js component for OTP input Normally, after login or registration, users are redirected to the dashboard. In this tutorial, we add an extra security step that validates users with an OTP before granting dashboard access. Create two Vue files: Request.vue: requests the OTP Verify.vue: inputs the OTP for verification Now we create the routes for the purpose of return the View and the functionality of creating OTP codes, storing OTP codes, sending OTP codes through the mail class, we head to our web.php file: Route::middleware('auth')->group(function () { Route::get('/request', [OtpController::class, 'create'])->name('request'); Route::post('/store-request', [OtpController::class, 'store'])->name('send.otp.request'); Route::get('/verify', [OtpController::class, 'verify'])->name('verify'); Route::post('/verify-request', [OtpController::class, 'verify_request'])->name('verify.otp.request'); }); Putting all of this code in the OTP controller returns the View for our request.vue and verify.vue file and the functionality of creating OTP codes, storing OTP codes, sending OTP codes through the mail class and verifying OTP codes, we head to our web.php file to set up the routes. public function create(Request $request) { return Inertia::render('Request', [ 'email' => $request->query('email', ''), ]); } public function store(Request $request) { $request->validate([ 'email' => 'required|email|exists:users,email', ]); $otp = rand(100000, 999999); Cache::put('otp_' . $request->email, $otp, now()->addMinutes(10)); Log::info("OTP generated for " . $request->email . ": " . $otp); Mail::to($request->email)->send(new OtpMail($otp)); return redirect()->route('verify', ['email' => $request->email]); } public function verify(Request $request) { return Inertia::render('Verify', [ 'email' => $request->query('email'), ]); } public function verify_request(Request $request) { $request->validate([ 'email' => 'required|email|exists:users,email', 'otp' => 'required|digits:6', ]); $cachedOtp = Cache::get('otp_' . $request->email); Log::info("OTP entered: " . $request->otp); Log::info("OTP stored in cache: " . ($cachedOtp ?? 'No OTP found')); if (!$cachedOtp) { return back()->withErrors(['otp' => 'OTP has expired. Please request a new one.']); } if ((string) $cachedOtp !== (string) $request->otp) { return back()->withErrors(['otp' => 'Invalid OTP. 
Please try again.']); } Cache::forget('otp_' . $request->email); $user = User::where('email', $request->email)->first(); if ($user) { $user->email_verified_at = now(); $user->save(); } return redirect()->route('dashboard')->with('success', 'OTP Verified Successfully!'); } Having set all this code, we return to the request.vue file to set it up. <script setup> import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue'; import InputError from '@/Components/InputError.vue'; import InputLabel from '@/Components/InputLabel.vue'; import PrimaryButton from '@/Components/PrimaryButton.vue'; import TextInput from '@/Components/TextInput.vue'; import { Head, useForm } from '@inertiajs/vue3'; const props = defineProps({ email: { type: String, required: true, }, }); const form = useForm({ email: props.email, }); const submit = () => { form.post(route('send.otp.request'), { onSuccess: () => { alert("OTP has been sent to your email!"); form.get(route('verify'), { email: form.email }); // Redirecting to OTP verification }, }); }; </script> <template> <Head title="Request OTP" /> <AuthenticatedLayout> <form @submit.prevent="submit"> <div> <InputLabel for="email" value="Email" /> <TextInput id="email" type="email" class="mt-1 block w-full" v-model="form.email" required autofocus /> <InputError class="mt-2" :message="form.errors.email" /> </div> <div class="mt-4 flex items-center justify-end"> <PrimaryButton :class="{ 'opacity-25': form.processing }" :disabled="form.processing"> Request OTP </PrimaryButton> </div> </form> </AuthenticatedLayout> </template> Having set all this code, we return to the verify.vue to set it up: <script setup> import AuthenticatedLayout from '@/Layouts/AuthenticatedLayout.vue'; import InputError from '@/Components/InputError.vue'; import InputLabel from '@/Components/InputLabel.vue'; import PrimaryButton from '@/Components/PrimaryButton.vue'; import TextInput from '@/Components/TextInput.vue'; import { Head, useForm, usePage } from '@inertiajs/vue3'; const page = usePage(); // Get the email from the URL query params const email = page.props.email || ''; // Initialize form with email and OTP field const form = useForm({ email: email, otp: '', }); // Submit function const submit = () => { form.post(route('verify.otp.request'), { onSuccess: () => { alert("OTP verified successfully! Redirecting..."); window.location.href = '/dashboard'; // Change to your desired redirect page }, onError: () => { alert("Invalid OTP. Please try again."); }, }); }; </script> <template> <Head title="Verify OTP" /> <AuthenticatedLayout> <form @submit.prevent="submit"> <div> <InputLabel for="otp" value="Enter OTP" /> <TextInput id="otp" type="text" class="mt-1 block w-full" v-model="form.otp" required /> <InputError class="mt-2" :message="form.errors.otp" /> </div> <div class="mt-4 flex items-center justify-end"> <PrimaryButton :disabled="form.processing"> Verify OTP </PrimaryButton> </div> </form> </AuthenticatedLayout> </template> Step 4: Integrating OTP authentication into login and register workflows Update the login controller: public function store(LoginRequest $request): RedirectResponse { $request->authenticate(); $request->session()->regenerate(); return redirect()->intended(route('request', absolute: false)); } Update the registration controller: public function store(Request $request): RedirectResponse { $request->validate([ 'name' => 'required|string|max:255', 'email' => 'required|string|lowercase|email|max:255|unique:' . 
User::class, 'password' => ['required', 'confirmed', Rules\Password::defaults()], ]); $user = User::create([ 'name' => $request->name, 'email' => $request->email, 'password' => Hash::make($request->password), ]); event(new Registered($user)); Auth::login($user); return redirect(route('request', absolute: false)); } Conclusion Implementing OTP authentication in Laravel and Vue.js enhances security for user logins and transactions. By generating, sending, and verifying OTPs, we can add an extra layer of protection against unauthorized access. This method is particularly useful for financial applications and sensitive user data.
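One best practice worth layering on top of this setup is rate limiting the OTP endpoints, so the six-digit code cannot be brute-forced. A minimal sketch using Laravel's built-in throttle middleware on the routes defined earlier could look like this (the limits are illustrative, not a recommendation):

Route::middleware(['auth', 'throttle:5,1'])->group(function () {
    // At most 5 OTP requests or verification attempts per minute
    Route::post('/store-request', [OtpController::class, 'store'])->name('send.otp.request');
    Route::post('/verify-request', [OtpController::class, 'verify_request'])->name('verify.otp.request');
});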
  18. by: LHB Community Sun, 20 Apr 2025 12:23:45 +0530 As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation. What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost? 'Hot Reload' is that valuable feature of the modern development process that automatically reloads or refreshes the code after you make changes to a file. This helps developers see the effect of their changes instantly and avoids manually restarting or refreshing the browser. Over the years, I’ve used tools like entr to keep Docker containers in sync every time I modify a docker-compose.yml file, or to keep testing different CSS designs on the fly with browser-sync.  1. entr entr (Event Notify Test Runner) is a lightweight command line tool for monitoring file changes and triggering specified commands. It’s one of my favorite tools to restart any CLI process, whether it be triggering a Docker build, restarting a Python script, or rebuilding a C project. For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time. Key Features Lightweight, no additional dependencies. Highly customizable. Ideal for use in conjunction with scripts or build tools. Linux only. Installation All you have to do is type in the following command in the terminal: sudo apt install -y entr Usage Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that: ls docker-compose.yml | entr -r docker build Here, the -r flag reloads the child process, which in this case is the ‘docker build’ command. Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code. ls *.ts | entr bun test 2. nodemon nodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers to restart the server manually. Key Features Monitors file changes and restarts the Node.js server automatically. Supports JavaScript and TypeScript projects. Lets you customize which files and directories to monitor. Supports common web frameworks such as Express and Hapi. Installation You can type in a single command in the terminal to install the tool: npm install -g nodemon If you are installing Node.js and npm for the first time on an Ubuntu-based distribution, you can follow our Node.js installation tutorial. Usage When you type in the following command, it starts server.js and will automatically restart the server if the file changes. nodemon server.js 3. LiveReload.net LiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately and eliminating the need to manually refresh the browser. Unlike others, it is a web-based tool, and you need to head to its official website to get started. Every file remains in your local network. No files are uploaded to a third-party server. Key Features Seamless integration with editors. Supports custom trigger conditions to refresh the page. Good compatibility with front-end frameworks and static websites. Usage It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes.  4.
fswatchfswatch is a cross-platform file change monitoring tool for Linux, macOS, and developers using it on Windows via WSL (Windows Subsystem Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly. Key Features Supports cross-platform operation and can be used on Linux and macOS.It can be used with custom scripts to trigger multiple operations.Flexible configuration options to filter specific types of file changes.Installation To install it on a Linux distribution, type in the following in the terminal: sudo apt install -y fswatchIf you have a macOS computer, you can use the command: brew install fswatchUsage You can try typing in the command here: fswatch -o . | xargs -n1 -I{} makeAnd, then you can chain this command with an entr command for a rich interactive development experience. ls hellomake | entr -r ./hellomakeThe “fswatch” command will invoke make to compile the c application, and then if our binary “hellomake” is modified, we’ll run it again. Isn’t this a time saver?  5. WatchexecWatchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes.    Key Features Support cross-platform use (macOS, Linux, Windows).Fast, written in Rust.Lightweight, no complex configuration.Installation On Linux, just type in: sudo apt install watchexecAnd, if you want to try it on macOS (via homebrew): brew install watchexecYou can also download corresponding binaries for your system from the project’s Github releases section. Usage All you need to do is just run the command: watchexec -e py "pytest"This will run pytests every time a Python file in the current directory is modified. 6. BrowserSyncBrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing. Key features Cross-browser synchronization.Automatically refreshes multiple devices and browsers.Built-in local development server.Installation Considering you have Node.js installed first, type in the following command: npm i -g browser-syncOr, you can use: npx browser-syncUsage Here is how the commands for it would look like: browser-sync start --server --files "/*.css, *.js, *.html" npx browser-sync start --server --files "/*.css, *.js, *.html"You can use either of the two commands for your experiments. This command starts a local server and monitors the CSS, JS, and HTML files for changes, and the browser is automatically refreshed as soon as a change occurs. If you’re a developer and aren't using any modern frontend framework, this comes handy. 7. watchdog & watchmedoWatchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action. 
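If you'd rather use the library directly instead of the watchmedo helper shown later, a minimal Python sketch could look like the following (the handler class and the factorial.py script are just placeholders):

from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler
import subprocess
import time

class RerunOnChange(PatternMatchingEventHandler):
    """Re-run a script whenever a matching file is modified."""
    def __init__(self):
        super().__init__(patterns=["*.py"], ignore_directories=True)

    def on_modified(self, event):
        subprocess.run(["python", "factorial.py"])

observer = Observer()
observer.schedule(RerunOnChange(), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()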
Key Features Cross-platform support. Provides full flexibility with its Python-based API. Includes the watchmedo script to hook into any CLI application easily. Installation Install Python first, and then install with pip using the command below: pip install watchdog Usage Type in the following and watch it in action: watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" . This command watches the current directory for file changes and runs the given command whenever a matching file is modified, created, or deleted. In the command, --patterns="*.py" watches .py files, --recursive includes subdirectories, and --command="python factorial.py" is the command that gets run on each change. Conclusion Hot-reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, Watchexec, BrowserSync, and others, you can easily automate reloading and live feedback without having to manually restart the server or refresh the browser. Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code. Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity. Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
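A related trick, building on the watchmedo example above: watchdog also ships an auto-restart mode that keeps a long-running process (a development server, for instance) alive and restarts it on every matching change. The sketch below is hedged — the flag names are as the watchdog documentation describes them, so confirm with watchmedo auto-restart --help on your version, and server.py is a hypothetical script:
watchmedo auto-restart --directory=. --patterns="*.py" --recursive -- python server.py
Everything after the -- separator is the command that watchmedo starts, and it gets killed and restarted whenever a matching .py file changes.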
  19. by: Sreenath Sat, 19 Apr 2025 13:00:24 GMT Simply creating well-formatted notes isn’t enough to manage the information you collect in daily life—accessibility is key. If you can't easily retrieve that information and its context, the whole point of "knowledge management" falls apart. From my experience using it daily for several months, I’d say Logseq does a better job of interlinking notes than any other app I’ve tried. So, without further ado, let’s dive in. The concept of page, links, and tagsIf you’ve used Logseq before, you’ve likely noticed one key thing: everything is a block. Your data is structured as intentional, individual blocks. When you type a sentence and hit Enter, instead of just creating a new line, Logseq starts a new bullet point. This design brings both clarity and complexity. In Logseq, pages are made up of bullet-formatted text. Each page acts like a link—and when you search for a page that doesn’t exist, Logseq simply creates it for you. Here’s the core idea: pages and tags function in a very similar way. You can think of a tag as a special kind of page that collects links to all content marked with that tag. For a deeper dive into this concept, I recommend checking out this forum post. Logseq also supports block references, which let you link directly to any specific block—meaning you can reference a single sentence from one note in another. 📋Ultimately, it is the end-user's creativity that creates a perfect content organization. There is no one way of using Logseq for knowledge management. It's up to you how you use it.Creating a new page in LogseqClick on the top-left search icon. This will bring a search overlay. Here, enter the name of the page you want to create. If no such page is present, you will get an option to create a new page. Search for a noteFor example, I created a page called "My Logseq Notes" and you can see this newly created page in 'All pages' tab on Logseq sidebar. New page listed in "All Pages" tabLogseq stores all the created page in the pages directory inside the Logseq folder you have chosen on your system. The Logseq pages directory in File ManagerThere won't be any nested directories to store sub-pages. All those things will be done using links and tags. In fact, there is no point to look into the Logseq directory manually. Use the app interface, where the data will appear organized. ⌨️ Use keyboard shortcut for creating pagesPowerful tools like Logseq are better used with keyboard. You can create pages/links/references using only keyboard, without touching the mouse. The common syntax to create a page or link in Logseq is: #One-word-page-nameYou can press the # symbol and enter a one word name. If there are no pages with the name exists, a new page is created. Else, link to the mentioned page is added. If you need to create a page with multiple words, use: #[[Page with multiple words separated with space]]Place the name of the note within two [[]] symbol. 0:00 /0:32 1× Create pages with single word name or multi-word names. Using TagsIn the example above, I have created two pages, one without spaces in the name, while the other has spaces. Both of them can be considered as tags. Confused? The further interlinking of these pages actually defines if it's a page or a tag. If you are using it as a 'special page' to accumulate similar contents, then it can be considered as a tag. If you are filling paragraphs of text inside it, then it will be a regular page. 
Basically, a tag-page is also a page but it has the links to all the pages marked with the said tag. To add a tag to a particular note, you can type #<tag-name> anywhere in the note. For convenience and better organization, you can add at the end of the note. Adding Simple TagsLinking to a pageCreating a new page and adding a link to an existing page is the same process in Logseq. You have seen it above. If you press the [[]] and type a name, if that name already exists, a link to that page is created. Else, a new page is created. In the short video below, you can see the process of linking a note in another note. 0:00 /0:22 1× Adding link to a page in Logseq in another note. Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus Referencing a blockThe main flexibility of Logseq lies in the linking of individual blocks. In each note, you have a parent node, then child nodes and grand-child nodes. These are distinguished by the indentation it has. So, in the case of block referencing, you should take utmost care in properly adding indent to the note blocks. Now, type ((. A search box will appear above the cursor. Start typing something, and it will highlight the matching block anywhere in Logseq. 0:00 /0:29 1× Referencing a block inside a note. The block we are adding is part of another note. Similarly, you can right-click on a node and select "Copy block ref" to copy the reference code for that block. Copy Block ReferenceNow, if you paste this on other note, the main node content is pasted and the rest of that block (intended contents) will be visible on hover. Hover over reference for preview💡Instead of the "Copy block ref", you can also choose "Copy block embed" and then paste the embed code. This will paste the whole block in the area where you pasted the embed code.Using the block referencing and Markdown linksOnce you have the block reference code, you can use it as a URL to link to a particular word, instead of pasting raw in a line. To do that, use the Markdown link syntax: [This is a link to the block](reference code of the block)For example: [This is a link to the block](((679b6c26-2ce9-48f2-be6a-491935b314a6)))So, when you hover over the text, the referenced content is previewed. Reference as Markdown HyperlinkNow that you have the basic building blocks, you can start organizing your notes into a proper knowledge base. In the next tutorial of this series, I'll discuss how you can use plugins and themes to customize Logseq.
  21. by: LHB Community Sat, 19 Apr 2025 15:59:35 +0530 As a Kubernetes engineer, I deal with kubectl almost every day. Pod status, service list, CrashLoopBackOff location, YAML configuration comparison, log view... these are almost daily operations! But to be honest, in the process of switching namespaces, manually copying pod names, and scrolling the log again and again, I gradually felt burned out. That is, until I came across KubeTUI — a little tool that made me feel like I was getting back on my feet. What is KubeTUI? KubeTUI, short for Kubernetes Terminal User Interface, is a visual Kubernetes dashboard that runs in the terminal. It's not like the traditional kubectl, which makes you memorize and type out commands, or the Kubernetes Dashboard, which requires a browser, Ingress, and a token to log in, plus a bunch of configuration. In a nutshell, it's a tool that lets you happily browse the state of your Kubernetes cluster from your terminal. Installing KubeTUI KubeTUI is written in Rust, and you can download its binary releases from GitHub. Once you do that, you need to set up a Kubernetes environment to build and monitor your application. Let me show you how that is done, with an example of building a WordPress application. Setting up the Kubernetes environment We’ll use K3s to spin up a Kubernetes environment. The steps are mentioned below. Step 1: Install and run k3s curl -sfL https://get.k3s.io | sh - With this single command, k3s will start itself after installation. Later on, you can use the command below to start the k3s server: sudo k3s server --write-kubeconfig-mode='644' Here’s a quick explanation of what the command includes: k3s server: It starts the K3s server component, which is the core of the Kubernetes control plane. --write-kubeconfig-mode='644': It ensures that the generated kubeconfig file has permissions that allow the owner to read and write it, and the group and others to only read it. If you start the server without this flag, you need to use sudo for all k3s commands. Step 2: Check available nodes via kubectl We need to verify that the Kubernetes control plane is actually working before we can make any deployments. You can use the command below to check that: k3s kubectl get node Step 3: Deploy WordPress using a Helm chart (sample application) K3s provides Helm integration, which helps manage Kubernetes applications. Simply apply a YAML manifest to spin up WordPress in the Kubernetes environment from the Bitnami Helm chart. Create a file named wordpress.yaml with the contents: [Content Missing] You can then apply the configuration file using the command: k3s kubectl apply -f wordpress.yaml It will take around 2–3 minutes for the whole setup to complete. Step 4: Launch KubeTUI To launch KubeTUI, type in the following command in the terminal: kubetui Here's what you will see. There are no pods in the default namespace. Let’s switch to the wpdev namespace we created earlier by hitting “n”. How to use KubeTUI To navigate to different tabs, like switching screens from Pod to Config and Network, you can click with your mouse or press the corresponding number as shown: You can also switch tabs with the keyboard: If you need help with KubeTUI at any time, press ? to see all the available options. It integrates a vim-like search mode. To activate search mode, enter /. Tip for log filtering I discovered an interesting feature to filter logs from multiple Kubernetes resources. For example, say we want to target logs from all pods with names containing WordPress. 
We can use the query: pod:wordpress It will combine the logs from all matching pods. You can target different resource types like svc, jobs, deploy, statefulsets, and replicasets with the same log filtering in place. Instead of combining logs, if you want to exclude some pod or container logs, you can do that with !pod:pod-to-exclude and !container:container-to-exclude filters. Conclusion Working with Kubernetes involves switching between different namespaces, pods, networks, configs, and services. KubeTUI can be a valuable asset in managing and troubleshooting a Kubernetes environment. I find myself more productive using tools like KubeTUI. Share your thoughts on what tools you’re using these days to make your Kubernetes journey smoother. Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
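A note on Step 3 above: the wordpress.yaml file is a K3s HelmChart manifest for the Bitnami WordPress chart. Since the exact contents aren't shown, here is a hedged sketch of what such a manifest typically looks like — the chart repository URL, the wpdev target namespace, and the NodePort service type are assumptions to adjust for your own cluster:
cat <<'EOF' > wordpress.yaml
# Namespace that the KubeTUI walkthrough later switches to (assumed)
apiVersion: v1
kind: Namespace
metadata:
  name: wpdev
---
# HelmChart is the custom resource handled by K3s's built-in Helm controller
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: wordpress
  targetNamespace: wpdev
  valuesContent: |-
    service:
      type: NodePort
EOF
Applying it with k3s kubectl apply -f wordpress.yaml, as in the article, lets the Helm controller pull the chart and create the WordPress and MariaDB pods in the wpdev namespace.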
  21. by: Geoff Graham Fri, 18 Apr 2025 12:12:35 +0000 Hey, did you see the post Jen Simmons published about WebKit’s text-wrap: pretty implementation? It was added to Safari Technology Preview and can be tested now, as in, like, today. Slap this in a stylesheet and your paragraphs get a nice little makeover that improves the ragging, reduces hyphenation, eliminates typographic orphans at the end of the last line, and generally avoids large typographic rivers as a result. The first visual in the post tells the full story, showing how each of these is handled. Credit: WebKit Blog That’s a lot of heavy lifting for a single value! And according to Jen, this is vastly different from Chromium’s implementation of the exact same feature. Jen suggests that performance concerns are the reason for the difference. It does sound like the pretty value does a lot of work, and you might imagine that would have a cumulative effect when we’re talking about long-form content where we’re handling hundreds, if not thousands, of lines of text. If it sounds like Safari cares less about performance, that’s not the case, as their approach is capable of handling the load. Great, carry on! But now you know that two major browsers have competing implementations of the same feature. I’ve been unclear on the terminology of pretty since it was specced, and now it truly seems that what is considered “pretty” really is in the eye of the beholder. And if you’re hoping to choose a side, don’t, because the specification is intentionally unopinionated in this situation, as it says (emphasis added): So, there you have it. One new feature. Two different approaches. Enjoy your new typographic powers. 💪 “Pretty” is in the eye of the beholder originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  22. by: Zell Liew Thu, 17 Apr 2025 12:38:05 +0000 There was once upon a time when native CSS lacked many essential features, leaving developers to come up with all sorts of ways to make CSS easier to write over the years. These ways can mostly be categorized into two groups: Pre-processors Post-processors Pre-processors include tools like Sass, Less, and Stylus. Like what the category’s name suggests, these tools let you write CSS in their syntax before compiling your code into valid CSS. Post-processors work the other way — you write non-valid CSS syntax into a CSS file, then post-processors will change those values into valid CSS. There are two major post-processors today: PostCSS LightningCSS PostCSS is the largest kid on the block while Lightning CSS is a new and noteworthy one. We’ll talk about them both in a bit. I think post-processors have won the compiling game Post-processors have always been on the verge of winning since PostCSS has always been a necessary tool in the toolchain. The most obvious (and most useful) PostCSS plugin for a long time is Autoprefixer — it creates vendor prefixes for you so you don’t have to deal with them. /* Input */ .selector { transform: /* ... */; } .selector { -webkit-transform: /* ... */; transform: /* ... */; } Arguably, we don’t need Autoprefixer much today because browsers are more interopable, but nobody wants to go without Autoprefixer because it eliminates our worries about vendor prefixing. What has really tilted the balance towards post-processors includes: Native CSS gaining essential features Tailwind removing support for pre-processors Lightning CSS Let me expand on each of these. Native CSS gaining essential features CSS pre-processors existed in the first place because native CSS lacked features that were critical for most developers, including: CSS variables Nesting capabilities Allowing users to break CSS into multiple files without additional fetch requests Conditionals like if and for Mixins and functions Native CSS has progressed a lot over the years. It has gained great browser support for the first two features: CSS Variables Nesting With just these two features, I suspect a majority of CSS users won’t even need to fire up pre-processors or post-processors. What’s more, The if() function is coming to CSS in the future too. But, for the rest of us who needs to make maintenance and loading performance a priority, we still need the third feature — the ability to break CSS into multiple files. This can be done with Sass’s use feature or PostCSS’s import feature (provided by the postcss-import plugin). PostCSS also contains plugins that can help you create conditionals, mixins, and functions should you need them. Although, from my experience, mixins can be better replaced with Tailwind’s @apply feature. This brings us to Tailwind. Tailwind removing support for pre-processors Tailwind 4 has officially removed support for pre-processors. From Tailwind’s documentation: If you included Tailwind 4 via its most direct installation method, you won’t be able to use pre-processors with Tailwind. @import `tailwindcss` That’s because this one import statement makes Tailwind incompatible with Sass, Less, and Stylus. But, (fortunately), Sass lets you import CSS files if the imported file contains the .css extension. So, if you wish to use Tailwind with Sass, you can. But it’s just going to be a little bit wordier. 
@layer theme, base, components, utilities; @import "tailwindcss/theme.css" layer(theme); @import "tailwindcss/preflight.css" layer(base); @import "tailwindcss/utilities.css" layer(utilities); Personally, I dislike Tailwind’s preflight styles so I exclude them from my files. @layer theme, base, components, utilities; @import 'tailwindcss/theme.css' layer(theme); @import 'tailwindcss/utilities.css' layer(utilities); Either way, many people won’t know you can continue to use pre-processors with Tailwind. Because of this, I suspect pre-processors will get less popular as Tailwind gains more momentum. Now, beneath Tailwind is a CSS post-processor called Lightning CSS, so this brings us to talking about that. Lightning CSS Lightning CSS is a post-processor can do many things that a modern developer needs — so it replaces most of the PostCSS tool chain including: postcss-import postcss-preset-env autoprefixer Besides having a decent set of built-in features, it wins over PostCSS because it’s incredibly fast. Speed helps Lightning CSS win since many developers are speed junkies who don’t mind switching tools to achieve reduced compile times. But, Lightning CSS also wins because it has great distribution. It can be used directly as a Vite plugin (that many frameworks support). Ryan Trimble has a step-by-step article on setting it up with Vite if you need help. // vite.config.mjs export default { css: { transformer: 'lightningcss' }, build: { cssMinify: 'lightningcss' } }; If you need other PostCSS plugins, you can also include that as part of the PostCSS tool chain. // postcss.config.js // Import other plugins... import lightning from 'postcss-lightningcss' export default { plugins: [lightning, /* Other plugins */], } Many well-known developers have switched to Lightning CSS and didn’t look back. Chris Coyier says he’ll use a “super basic CSS processing setup” so you can be assured that you are probably not stepping in any toes if you wish to switch to Lightning, too. If you wanna ditch pre-processors today You’ll need to check the features you need. Native CSS is enough for you if you need: CSS Variables Nesting capabilities Lightning CSS is enough for you if you need: CSS Variables Nesting capabilities import statements to break CSS into multiple files Tailwind (with @apply) is enough for you if you need: all of the above Mixins If you still need conditionals like if, for and other functions, it’s still best to stick with Sass for now. (I’ve tried and encountered interoperability issues between postcss-for and Lightning CSS that I shall not go into details here). That’s all I want to share with you today. I hope it helps you if you have been thinking about your CSS toolchain. So, You Want to Give Up CSS Pre- and Post-Processors… originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
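One practical note if you want to try the Lightning CSS and Vite setup mentioned above: Vite does not bundle Lightning CSS itself, so the one extra step (a minimal sketch, assuming an existing Vite project) is adding it as a dev dependency:
npm install --save-dev lightningcss
After that, the css.transformer and build.cssMinify options from the vite.config.mjs example above take effect.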
  23. by: Abhishek Prakash Thu, 17 Apr 2025 06:27:20 GMT It's the release week. Fedora 42 is already out. Ubuntu 25.04 will be releasing later today along with its flavors like Kubuntu, Xubuntu, Lubuntu etc. In the midst of these two heavyweights, MX Linux and Manjaro also quickly released newer versions. For Manjaro, it is more of an ISO refresh, as it is a rolling release distribution. Overall, a happening week for Linux lovers 🕺 💬 Let's see what else you get in this edition Arco Linux bids farewell.Systemd working on its own Linux distro.Looking at the origin of UNIX.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by Aiven.❇️ Aiven for ClickHouse® - The Fastest Open Source Analytics Database, Fully ManagedClickHouse processes analytical queries 100-1000x faster than traditional row-oriented systems. Aiven for ClickHouse® gives you the lightning-fast performance of ClickHouse–without the infrastructure overhead. Just a few clicks is all it takes to get your fully managed ClickHouse clusters up and running in minutes. With seamless vertical and horizontal scaling, automated backups, easy integrations, and zero-downtime updates, you can prioritize insights–and let Aiven handle the infrastructure. Managed ClickHouse database | AivenAiven for ClickHouse® – fully managed, maintenance-free data warehouse ✓ All-in-one open source cloud data platform ✓ Try it for freeAiven📰 Linux and Open Source NewsThe Arch-based ArcoLinux has been discontinued.Fedora 42 has been released with some rather interesting changes.Manjaro 25.0 'Zetar' is here, offering a fresh image for new installations. ParticleOS is Systemd's attempt at a Linux distribution. ParticleOS: Systemd’s Very Own Linux Distro in MakingA Linux distro from systemd? Sounds interesting, right?It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutLinus Torvalds was told that Git is more popular than Linux. Git is More Popular than Linux: TorvaldsLinus Torvalds reflects on 20 years of Git.It's FOSS NewsSourav Rudra🧮 Linux Tips, Tutorials and More11 vibe coding tools to 10x your dev workflow.Adding comments in bash scripts.Understand the difference between Pipewire and Pulseaudio.Make your Logseq notes more readable by formatting them. That's a new series focusing on Logseq.From UNIX to today’s tech. Learn how it shaped the digital world. Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 👷 Homelab and Maker's CornerThese 28 cool Raspberry Pi Zero W projects will keep you busy. 28 Super Cool Raspberry Pi Zero W Project IdeasWondering what to do with your Raspberry Pi Zero W? Here are a bunch of project ideas you can spend some time on and satisfy your DIY craving.It's FOSSChinmay✨ Apps HighlightYou can download YouTube videos using Seal on Android. 
Seal: A Nifty Open Source Android App to Download YouTube Video and AudioDownload YouTube video/music (for educational purpose or with consent) with this little, handy Android app.It's FOSS NewsSourav Rudra📽️ Videos I am Creating for YouSee the new features of Ubuntu 25.04 in action in this video. Subscribe to It's FOSS YouTube Channel🧩 Quiz TimeOur Guess the Desktop Environment Crossword will test your knowledge. Guess the Desktop Environment: CrosswordTest your desktop Linux knowledge with this simple crossword puzzle. Can you solve it all correctly?It's FOSSAbhishek PrakashAlternatively, guess all of these open source privacy tools correctly? Know The Best Open-Source Privacy ToolsDo you utilize open-source tools for privacy?It's FOSSAnkush Das💡 Quick Handy TipYou can make Thunar open a new tab instead of a new window. This is good in situations when opening a folder from other apps, like a web browser. This reduces screen clutter. First, click on Edit ⇾ Preferences. Here, go to the Behavior tab. Now, under "Tabs and Windows", enable the first checkbox as shown above or all three if you need the functionality of the other two. 🤣 Meme of the WeekWe are generally a peaceful bunch, for the most part. 🫣 🗓️ Tech TriviaOn April 16, 1959, John McCarthy publicly introduced LISP, a programming language for AI that emphasized symbolic computation. This language remains influential in AI research today. 🧑‍🤝‍🧑 FOSSverse CornerFOSSers are discussing VoIP, do you have any insights to add here? A discussion over Voice Over Internet Protocol (VoIP)I live in a holiday village where we have several different committees and meetings, for those not present to attend the meetings we do video conférences using voip. A few years back the prefered system was skype, we changed to whatsapp last year as we tend to use its messaging facilities and its free. We have a company who manages our accounts, they prefer using teams, paid for version as they can invoice us for its use … typical accountant. My question, does it make any difference in band w…It's FOSS Communitycallpaul.eu (Paul)❤️ With loveShare it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  24. by: Preethi Wed, 16 Apr 2025 12:34:50 +0000 This article covers tips and tricks on effectively utilizing the CSS backdrop-filter property to style contemporary user interfaces. You’ll learn how to layer backdrop filters among multiple elements, and integrate them with other CSS graphical effects to create elaborate designs. Below is a hodgepodge sample of what you can build based on everything we’ll cover in this article. More examples are coming up. CodePen Embed Fallback The blurry, frosted glass effect is popular with developers and designers these days — maybe because Josh Comeau wrote a deep-dive about it somewhat recently — so that is what I will base my examples on. However, you can apply everything you learn here to any relevant filter. I’ll also be touching upon a few of them in my examples. What’s essential in a backdrop filter? If you’re familiar with CSS filter functions like blur() and brightness(), then you’re also familiar with backdrop filter functions. They’re the same. You can find a complete list of supported filter functions here at CSS-Tricks as well as over at MDN. The difference between the CSS filter and backdrop-filter properties is the affected part of an element. Backdrop filter affects the backdrop of an element, and it requires a transparent or translucent background in the element for its effect to be visible. It’s important to remember these fundamentals when using a backdrop filter, for these reasons: to decide on the aesthetics, to be able to layer the filters among multiple elements, and to combine filters with other CSS effects. The backdrop Design is subjective, but a little guidance can be helpful. If you’ve applied a blur filter to a plain background and felt the result was unsatisfactory, it could be that it needed a few embellishments, like shadows, or more often than not, it’s because the backdrop is too plain. Plain backdrops can be enhanced with filters like brightness(), contrast(), and invert(). Such filters play with the luminosity and hue of an element’s backdrop, creating interesting designs. Textured backdrops complement distorting filters like blur() and opacity(). <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { backdrop-filter: blur(10px); color: white; /* etc. */ } } CodePen Embed Fallback Layering elements with backdrop filters As we just discussed, backdrop filters require an element with a transparent or translucent background so that everything behind it, with the filters applied, is visible. If you’re applying backdrop filters on multiple elements that are layered above one another, set a translucent (not transparent) background to all elements except the bottommost one, which can be transparent or translucent, provided it has a backdrop. Otherwise, you won’t see the desired filter buildup. <main> <div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> <p>view details</p> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ div { background: rgb(255 255 255 / .1); backdrop-filter: blur(10px); /* etc. */ p { backdrop-filter: brightness(0) contrast(10); /* etc. 
*/ } } } CodePen Embed Fallback Combining backdrop filters with other CSS effects When an element meets a certain criterion, it gets a backdrop root (not yet a standardized name). One criterion is when an element has a filter effect (from filter and background-filter). I believe backdrop filters can work well with other CSS effects that also use a backdrop root because they all affect the same backdrop. Of those effects, I find two interesting: mask and mix-blend-mode. Combining backdrop-filter with mask resulted in the most reliable outcome across the major browsers in my testing. When it’s done with mix-blend-mode, the blur backdrop filter gets lost, so I won’t use it in my examples. However, I do recommend exploring mix-blend-mode with backdrop-filter. Backdrop filter with mask Unlike backdrop-filter, CSS mask affects the background and foreground (made of descendants) of an element. We can use that to our advantage and work around it when it’s not desired. <main> <div> <div class="bg"></div> <section> <h1>Weather today</h1> Cloudy with a chance of meatballs. Ramenstorms at 3PM that will last for ten minutes. </section> </div> </main> main { background: center/cover url("image.jpg"); box-shadow: 0 0 10px rgba(154 201 255 / 0.6); /* etc. */ > div { .bg { backdrop-filter: blur(10px); mask-image: repeating-linear-gradient(90deg, transparent, transparent 2px, white 2px, white 10px); /* etc. */ } /* etc. */ } } CodePen Embed Fallback Backdrop filter for the foreground We have the filter property to apply graphical effects to an element, including its foreground, so we don’t need backdrop filters for such instances. However, if you want to apply a filter to a foreground element and introduce elements within it that shouldn’t be affected by the filter, use a backdrop filter instead. <main> <div class="photo"> <div class="filter"></div> </div> <!-- etc. --> </main> .photo { background: center/cover url("photo.jpg"); .filter { backdrop-filter: blur(10px) brightness(110%); mask-image: radial-gradient(white 5px, transparent 6px); mask-size: 10px 10px; transition: backdrop-filter .3s linear; /* etc.*/ } &:hover .filter { backdrop-filter: none; mask-image: none; } } In the example below, hover over the blurred photo. CodePen Embed Fallback There are plenty of ways to play with the effects of the CSS backdrop-filter. If you want to layer the filters across stacked elements then ensure the elements on top are translucent. You can also combine the filters with other CSS standards that affect an element’s backdrop. Once again, here’s the set of UI designs I showed at the beginning of the article, that might give you some ideas on crafting your own. CodePen Embed Fallback References backdrop-filter (CSS-Tricks) backdrop-filter (MDN) Backdrop root (CSS Filter Effects Module Level 2) Filter functions (MDN) Using CSS backdrop-filter for UI Effects originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  25. by: Abhishek Kumar Tue, 15 Apr 2025 05:41:55 GMT Once upon a time, coding meant sitting down, writing structured logic, and debugging for hours. Fast-forward to today, and we have Vibe Coding, a trend where people let AI generate entire chunks of code based on simple prompts. No syntax, no debugging, no real understanding of what’s happening under the hood. Just vibes. Coined by OpenAI co-founder Andrej Karpathy, Vibe Coding is the act of developing software by giving natural language instructions to AI and accepting whatever it spits out. Source : XSome people even take it a step further by using voice-to-text tools so they don’t have to type at all. Just describe your dream app, and boom, the AI makes it for you. Or does it? People are building full-fledged SaaS products in days, launching MVPs overnight, and somehow making more money than seasoned engineers who swear by Agile methodologies. And here I am, writing about them instead of cashing in myself. Life isn’t fair, huh? But don’t get me wrong, I’m not here to hate. I’m here to expand on this interesting movement and hand you the ultimate arsenal to embrace vibe coding with these tools. ✋Non-FOSS Warning! Some of the applications mentioned here may not be open source. They have been included in the context of Linux usage. Also, some tools provide interface for popular, commercial LLMs like ChatGPT and Claude.1. Aider - AI pair programming in your terminalAider is the perfect choice if you're looking for a pair programmer to help you ship code faster. It allows you to pair programs with LLMs to edit code in your local GitHub repository. You can start a new project or work with an existing GitHub repo—all from your terminal. Key Features ✅ Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o, but can connect to almost any LLM, including local models. ✅ Aider makes a map of your entire codebase, which helps it work well in larger projects. ✅ Supports most popular programming languages: Python, JavaScript, Rust, Ruby, Go, C++, PHP, HTML, CSS, and more. ✅ Automatically commits changes with sensible commit messages. Use familiar Git tools to easily diff, manage, and undo AI changes. ✅ Use Aider from within your favorite IDE or editor. Ask for changes by adding comments to your code, and Aider will get to work. ✅ Add images and web pages to the chat to provide visual context, screenshots, and reference docs. ✅ Automatically lint and test your code every time Aider makes changes. It can fix problems detected by linters and test suites. ✅ Works best with LLM APIs but also supports web chat interfaces, making copy-pasting code seamless. Aider2. VannaAI - Chat with SQL DatabaseWriting SQL queries can be tedious, but VannaAI changes that by letting you interact with SQL databases using natural language. Instead of manually crafting queries, you describe what you need, and VannaAI generates the SQL for you. It Works in two steps, Train a RAG "model" on your data and then ask questions that return SQL queries. Key Features ✅ Out-of-the-box support for Snowflake, BigQuery, Postgres, and more. ✅ The Vanna Python package and frontend integrations are all open-source, allowing deployment on your infrastructure. ✅ Database contents are never sent to the LLM unless explicitly enabled. ✅ Improves continuously by augmenting training data. ✅ Use Vanna in Jupyter Notebooks, Slackbots, web apps, Streamlit apps, or even integrate it into your own web app. 
VannaAI makes querying databases as easy as having a conversation, making it a game-changer for both technical and non-technical users. Vanna AI3. All Hands - Open source agents for developersAll Hands is an open-source platform for AI developer agents, capable of building projects, adding features, debugging, and more. Competing with Devin, All Hands recently topped the SWE-bench leaderboard with 53% accuracy. Key Features ✅ Use All Hands via an interactive GUI, command-line interface (CLI), or non-interactive modes like headless execution and GitHub Actions. ✅ Open-source freedom, built under the MIT license to ensure AI technology remains accessible to all. ✅ Handles complex tasks, from code generation to debugging and issue fixing. ✅ Developed in collaboration with AI safety experts like Invariant Labs to balance innovation and security. To get started, install Docker 26.0.0+ and run OpenHands using the provided Docker commands. Once running, configure your LLM provider and start coding with AI-powered assistance. All Hands4. Continue - Leading AI-powered code assistantYou must have heard about Cursor IDE, the popular AI-powered IDE; Continue is similar to it but open source under Apache license. It is highly customizable and lets you add any language model for auto-completion or chat. This can immensely improve your productivity. You can add Continue to VScode and JetBrains. Key Features ✅ Continue autocompletes single lines or entire sections of code in any programming language as you type. ✅ Attach code or other context to ask questions about functions, files, the entire codebase, and more. ✅ Select code sections and press a keyboard shortcut to rewrite code from natural language. ✅ Works with Ollama, OpenAI, Together, Anthropic, Mistral, Azure OpenAI Service, and LM Studio. ✅ Codebase, GitLab Issues, Documentation, Methods, Confluence pages, Files. ✅ Data blocks, Docs blocks, Rules blocks, MCP blocks, Prompts blocks. Continue5. Wave - Terminal with local LLMsWave terminal introduces BYOLLM (Bring Your Own Large Language Model), allowing users to integrate their own local or cloud-based LLMs into their workflow. It currently supports local LLM providers such as Ollama, LM Studio, llama.cpp, and LocalAI while also enabling the use of any OpenAI API-compatible model. Key Features ✅ Use local or cloud-based LLMs, including OpenAI-compatible APIs. ✅ Seamlessly integrate LLM-powered responses into your terminal workflow. ✅ Set the AI Base URL and AI Model in the settings or via CLI. ✅ Plans to include support for commercial models like Gemini and Claude. Waveterm6. Warp terminal - Agent mode (not open source)After WaveTerm, we have another amazing contender in the AI-powered terminal space, Warp Terminal. I personally use this so I may sound biased. 😛 It’s essentially an AI-powered assistant that can understand natural language, execute commands, and troubleshoot issues interactively. Instead of manually looking up commands or switching between documentation, you can simply describe the task in English and let Agent Mode guide you through it. Key Features ✅ No need to remember complex CLI commands, just type what you want, like "Set up an Nginx reverse proxy with SSL", and Agent Mode will handle the details. ✅ Ran into a “port 3000 already in use” error? Just type "fix it", and Warp will suggest running kill $(lsof -t -i:3000). If that doesn’t work, it’ll refine the approach automatically. ✅ Works seamlessly with Git, AWS, Kubernetes, Docker, and any other tool with a CLI. 
If it doesn’t know a command, you can tell it to read the help docs, and it will instantly learn how to use the tool. ✅ Warp doesn’t send anything to the cloud without your permission. You approve each command before it runs, and it only reads outputs when explicitly allowed. It seems like Warp is moving from a traditional AI-assisted terminal to an interactive AI-powered shell, making the command line much more intuitive. Would you consider switching to it, or do you think this level of automation might be risky for some tasks? Warp Terminal7. Pieces : AI extension to IDE (not open source)Pieces isn’t a code editor itself, it’s an AI-powered extension that supercharges editors like VS Code, Sublime Text, Neovim and many more IDE's with real-time intelligence and memory. Its highlighted feature is Long-Term Memory Agent that captures up to 9 months of coding context, helping you seamlessly resume work, even after a long break. Everything runs locally for full privacy. It understands your code, recalls snippets, and blends effortlessly into your dev tools to eliminate context switching. Bonus: it’s free for now, with a free tier promised forever, but they will start charging soon, so early access might come with perks. Key Features ✅ Stores 9 months of local coding context ✅ Integrates with Neovim, VS Code, and Sublime Text ✅ Fully on-device AI with zero data sharing ✅ Context-aware suggestions via Pieces Copilot ✅ Organize and share snippets using Pieces Drive ✅ Always-free tier promised, with early adopter perks Pieces8. Aidermacs: AI aided coding in EmacsAidermacs by MatthewZMD is for the Emacs power users who want that sweet Cursor-style AI experience; but without leaving their beloved terminal. It’s a front-end for the open-source Aider, bringing powerful pair programming into Emacs with full respect for its workflows and philosophy. Whether you're using GPT-4, Claude, or even DeepSeek, Aidermacs auto-detects your available models and lets you chat with them directly inside Emacs. And yes, it's deeply customizable, as all good Emacs things should be. Key Features ✅ Integrates Aider into Emacs for collaborative coding ✅ Intelligent model selection from OpenAI, Anthropic, Gemini, and more ✅ Built-in Ediff for side-by-side AI-generated changes ✅ Fine-grained file control: edit, read-only, scratchpad, and external ✅ Fully theme-aware with Emacs-native UI integration ✅ Works well in terminal via vterm with theme-based colors Aidermacs9. Jeddict AI AssistantThis one is for my for the Java folks, It’s a plugin for Apache NetBeans. I remember using NetBeans back in school, and if this AI stuff was around then, I swear I would've aced my CS practicals. This isn’t your average autocomplete tool. Jeddict AI Assistant brings full-on AI integration into your IDE: smarter code suggestions, context-aware documentation, SQL query help, even commit messages. It's especially helpful if you're dealing with big Java projects and want AI that actually understands what’s going on in your code. 
Key Features ✅ Smart, inline code completions using OpenAI, DeepSeek, Mistral, and more ✅ AI chat with full awareness of project/class/package context ✅ Javadoc creation & improvement with a single shortcut ✅ Variable renaming, method refactoring, and grammar fixes via AI hints ✅ SQL query assistance & inline completions in the database panel ✅ Auto-generated Git commit messages based on your diffs ✅ Custom rules, file context preview, and experimental in-editor updates ✅ Fully customizable AI provider settings (supports LM Studio, Ollama, GPT4All too!) Jeddict AI Assistant10. Amazon CodeWhispererIf your coding journey revolves around AWS services, then Amazon CodeWhisperer might be your ideal AI-powered assistant. While it works like other AI coding tools, its real strength lies in its deep integration with AWS SDKs, Lambda, S3, and DynamoDB. CodeWhisperer is fine-tuned for cloud-native development, making it a go-to choice for developers building serverless applications, microservices, and infrastructure-as-code projects. Since it supports Visual Studio Code and JetBrains IDEs, AWS developers can seamlessly integrate it into their workflow and get AWS-specific coding recommendations that follow best practices for scalability and security. Plus, individual developers get free access, making it an attractive option for solo builders and startup developers. Key Features ✅ Optimized code suggestions for AWS SDKs and cloud services. ✅ Built-in security scanning to detect vulnerabilities. ✅ Supports Python, Java, JavaScript, and more. ✅ Free for individual developers. Amazon CodeWhisperer11. Qodo AI (previously Codium)If you’ve ever been frustrated by the limitations of free AI coding tools, qodo might be the answer. Supporting over 50 programming languages, including Python, Java, C++, and TypeScript, qodo integrates smoothly with Visual Studio Code, IntelliJ, and JetBrains IDEs. It provides intelligent autocomplete, function suggestions, and even code documentation generation, making it a versatile tool for projects of all sizes. While it may not have some of the advanced features of paid alternatives, its zero-cost access makes it a game-changer for budget-conscious developers. Key Features ✅ Unlimited free code completions with no restrictions. ✅ Supports 50+ programming languages, including Python, Java, and TypeScript. ✅ Works with popular IDEs like Visual Studio Code and JetBrains. ✅ Lightweight and responsive, ensuring a smooth coding experience. QodoFinal thoughts📋I deliberately skipped IDEs from this list. I have a separate list of editors for vibe coding on Linux.With time, we’re undoubtedly going to see more AI-assisted coding take center stage. As Anthropic CEO Dario Amodei puts it, AI will write 90% of code within six months and could automate software development entirely within a year. Whether that’s an exciting leap forward or a terrifying thought depends on how much you trust your AI pair programmer. If you’re diving into these tools, I highly recommend brushing up on the basics of coding and version control. AI can write commands for you, but if you don’t know what it’s doing, you might go from “I just built the next billion-dollar SaaS!” to “Why did my AI agent just delete my entire codebase?” in a matter of seconds. XThat said, this curated list of amazing open-source tools should get you started. Whether you're a seasoned developer or just someone who loves typing cool things into a terminal, these tools will level up your game. 
Just remember: the AI can vibe with you, but at the end of the day, you're still the DJ of your own coding playlist (sorry for the cringy line 👉👈).
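If you want to give the first tool on this list a spin right away, here is a minimal, hedged getting-started sketch for Aider — the package name and flags below match its documentation at the time of writing, but double-check them, and the API key and repository path are placeholders:
pip install aider-chat
export OPENAI_API_KEY=your-key-here    # Aider also supports Anthropic, DeepSeek, and local models
cd /path/to/your/git/repo              # Aider maps and edits the repository it is started in
aider --model gpt-4o                   # starts a chat session; edits are committed with sensible messages
From there you describe the change you want in plain English, review the diff with your usual Git tools, and undo anything you don't like.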
