Blog Entries posted by Blogger

  1. Blogger
    by: Sreenath
    Sat, 03 May 2025 08:56:47 GMT

    In an earlier article, I discussed installing plugins and themes in Logseq.
And you already know that there are plenty of third-party plugins available in the Logseq plugin Marketplace.
Let me share some of the plugins I use to organize my content.
🚧 Before installing plugins, it is always good to take backups of your notes frequently. In case of any unexpected data loss, you can roll back easily.
I presume you know it already, but in case you need help, here's a detailed tutorial on installing plugins in Logseq.
Customize Logseq With Themes and Plugins: Extend the capability and enhance the looks of Logseq with themes and plugins. (It's FOSS, Sreenath)
Markdown Table Editor
    Creating tables in Markdown syntax is a tedious process. Tools like Obsidian have a table creator helper that allows you to create and edit tables easily.
When it comes to Logseq, we have a very handy plugin, Markdown Table Editor, that does the job neatly.
    You can install this extension from the Logseq plugin Marketplace.
To create a table, press the / key. This will bring up a small popup search. Enter table here and select Markdown Table Editor.
This will open a popup window with a straightforward interface for editing table entries. The interface is self-explanatory; you can add or delete columns, rows, and so on.
Video: Creating a Markdown table in Logseq using the Markdown Table Editor plugin.
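For reference, the plugin simply writes standard Markdown table syntax into your block; the result looks something like this (the contents here are just an example):

| Plugin | What it does |
| --- | --- |
| Markdown Table Editor | Create and edit tables |
| Bullet Threading | Highlight the active bullet path |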
Markdown Table Editor on GitHub
Bullet Threading
Logseq follows a bullet blocks approach, with each data block being a properly indented bullet point.
    Now, the point to note here is "Properly indented".
    You should be careful about the organization of parent, child, and grandchild nodes (bullets) in Logseq. Otherwise, when you reference a particular block of a note in the future, not all related data will be retrieved. Some points may appear as part of another nested block, which destroys the whole purpose of linking.
The Bullet Threading extension helps you keep track of the position you are currently editing within the greater nested data tree by visually indicating the bullet path. This makes your current indent location visually clear.
Video: The Bullet Threading extension in action.
Never again lose track of your data organization because you weren't aware of the indentation tree.
Bullet Threading on GitHub
Tags
Tags is the best plugin for organizing data in Logseq, where there is only a very narrow difference between pages and tags; it is the context of usage that differentiates them.
So, assigning single-word or short-phrase tags to your notes will help you access and connect your knowledge in the future.
    The Tags extension will query the notes and list all the tags in your data collection; be it a #note, #[[note sample]], or tags:: Newtag tag.
    You can arrange them alphabetically or according to the number of notes tagged with that specific tag.
🚧 As of February 1, 2025, the GitHub repository of this project was archived by the creator. Keep an eye on further development for hassle-free usage.
Image: Tags plugin listing available tags.
You can install the plugin from the Logseq plugins Marketplace.
Tags plugin
Tabs
    Working with multiple documents at a time is a necessity. Opening and closing documents one by one is surely not the best experience these days.
    Logseq has the Tabs plugin that implements a tab bar on top of the window so that you can have easy access to multiple opened documents.
This plugin offers several much-needed features like pinning tabs, reordering tabs, persisting tabs, and more.
Video: Working with Tabs in Logseq.
Usually, a newly opened document replaces the current tab. But you can use Ctrl+click to open links in a background tab, which is a very handy feature.
Tabs on GitHub
Journals Calendar
    Journal is a very important page in Logseq.
You can neatly organize your document tree, scribble things down, and tag them properly. Each day in the Journal is an independent Markdown file in the Journals directory, which you can see in your file manager.
Image: Journal Markdown files.
But it may feel a bit crowded over time, and getting a note from a particular date often involves searching and scrolling through the results.
    The Journals Calendar plugin is a great help in this scenario. This plugin adds a small calendar button to the top bar of Logseq. You can click on it and select a date from the calendar. If there is no Journal at that date, it will create one for you.
Video: The Journals Calendar plugin in Logseq.
Dates with Journal entries will be marked with a dot, allowing you to distinguish them easily.
Journals Calendar on GitHub
Todo Master Plugin
Todo Master is a simple plugin that puts a neat progress bar next to a task. You can use it for visual progress tracking.
    You can press the slash command (/) and select TODO Master from there to add the progress bar to the task of your choice. Watch the video to understand it better.
TODO Master plugin
Logseq TOC Plugin
    Since Logseq follows a different approach for data management compared to popular tools like Obsidian, there is no built-in table of contents for a page.
There is a "Contents" page in Logseq, but it has an entirely different purpose. In this case, a real table of contents renderer plugin is a great relief.
    It renders the TOC using the Markdown headers.
Image: Logseq TOC rendering.
Logseq TOC Plugin
Wrapping Up
The Logseq plugin Marketplace has numerous plugins and themes to choose from.
But you should be careful, since third-party plugins can sometimes result in data loss. Weird, I know.
It is always good to take proper backups of your data, especially if you are following a local-first note management policy. You wouldn't want to lose your notes, would you?
💬 Which Logseq plugin do you use the most? Feel free to share your recommendations in the comment section, so that other users may find them useful!
  2. Blogger

    CSS shape() Commands

    by: Geoff Graham
    Fri, 02 May 2025 12:36:10 +0000

The CSS shape() function recently gained support in both Chromium and WebKit browsers. It’s a way of drawing complex shapes when clipping elements with the clip-path property. We’ve had the ability to draw basic shapes for years — think circle(), ellipse(), and polygon() — but no “easy” way to draw more complex shapes.
    Well, that’s not entirely true. It’s true there was no “easy” way to draw shapes, but we’ve had the path() function for some time, which we can use to draw shapes using SVG commands directly in the function’s arguments. This is an example of an SVG path pulled straight from WebKit’s blog post linked above:
<svg viewBox="0 0 150 100" xmlns="http://www.w3.org/2000/svg">
  <path fill="black" d="M0 0 L 100 0 L 150 50 L 100 100 L 0 100 Q 50 50 0 0 z " />
</svg>

Which means we can yank those <path> coordinates and drop them into the path() function in CSS when clipping a shape out of an element:
.clipped {
  clip-path: path("M0 0 L 100 0 L 150 50 L 100 100 L 0 100 Q 50 50 0 0 z");
}

I totally understand what all of those letters and numbers are doing. Just kidding, I’d have to read up on that somewhere, like Myriam Frisano’s more recent “Useful Recipes For Writing Vectors By Hand” article. There’s a steep learning curve to all that, and not everyone — including me — is going down that nerdy, albeit interesting, road. Writing SVG by hand is a niche specialty, not something you’d expect the average front-ender to know. I doubt I’m alone in saying I’d rather draw those vectors in something like Figma first, export the SVG code, and copy-paste the resulting paths where I need them.
    The shape() function is designed to be more, let’s say, CSS-y. We get new commands that tell the browser where to draw lines, arcs, and curves, just like path(), but we get to use plain English and native CSS units rather than unreadable letters and coordinates. That opens us up to even using CSS calc()-ulations in our drawings!
    Here’s a fairly simple drawing I made from a couple of elements. You’ll want to view the demo in either Chrome 135+ or Safari 18.4+ to see what’s up.
A CodePen demo is embedded here.
So, instead of all those wonky coordinates we saw in path(), we get new terminology. This post is really me trying to wrap my head around what those new terms are and how they’re used.
    In short, you start by telling shape() where the starting point should be when drawing. For example, we can say “from top left” using directional keywords to set the origin at the top-left corner of the element. We can also use CSS units to set that position, so “from 0 0” works as well. Once we establish that starting point, we get a set of commands we can use for drawing lines, arcs, and curves.
    I figured a table would help.
line: A line that is drawn using a coordinate pair. The by keyword sets a coordinate pair used to determine the length of the line. Example: line by -2px 3px

vline: A vertical line. The to keyword indicates where the line should end, based on the current starting point. The by keyword sets a coordinate pair used to determine the length of the line. Example: vline to 50px

hline: A horizontal line. The to keyword indicates where the line should end, based on the current starting point. The by keyword sets a coordinate pair used to determine the length of the line. Example: hline to 95%

arc: An arc (oh, really?!). An elliptical one, that is, sort of like the rounded edges of a heart shape. The to keyword indicates where the arc should end. The with keyword sets a pair of coordinates that tells the arc how far right and down the arc should slope. The of keyword specifies the size of the ellipse that the arc is taken from; the first value provides the horizontal radius of the ellipse, and the second provides the vertical radius. I’m a little unclear on this one, even after playing with it. Example: arc to 10% 50% of 1%

curve: A curved line. The to keyword indicates where the curved line should end. The with keyword sets “control points” that affect the shape of the curve, making it deep or shallow. Example: curve to 0% 100% with 50% 0%

smooth: Adds a smooth Bézier curve command to the list of path data commands. The to keyword indicates where the curve should end. The by keyword sets a coordinate pair used to determine the length of the curve. The with keyword specifies control points for the curve. I have yet to see any examples of this in the wild, but let me know if you do, and I can add it here.

The spec is dense, as you might expect with a lot of moving pieces like this. Again, these are just my notes, but let me know if there’s additional nuance you think would be handy to include in the table.
    Oh, another fun thing: you can adjust the shape() on hover/focus. The only thing is that I was unable to transition or animate it, at least in the current implementation.
A CodePen demo is embedded here.
CSS shape() Commands originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. Blogger
    by: Sacha Greif
    Thu, 01 May 2025 12:34:58 +0000

    I don’t know if I should say this on a website devoted to programming, but I sometimes feel like *lowers voice* coding is actually the least interesting part of our lives.
    After all, last time I got excited meeting someone at a conference it was because we were both into bouldering, not because we both use React. And The Social Network won an Oscar for the way it displayed interpersonal drama, not for its depiction of Mark Zuckerberg’s PHP code. 
    Yet for the past couple years, I’ve been running developer surveys (such as the State of JS and State of CSS) that only ask about code. It was time to fix that. 
    A new kind of survey
    The State of Devs survey is now open to participation, and unlike previous surveys it covers everything except code: career, workplace, but also health, hobbies, and more. 
    I’m hoping to answer questions such as:
- What are developers’ favorite recent movies and video games?
- What kind of physical activity do developers practice?
- How much sleep are we all getting?

But also address more serious topics, including:

- What do developers like about their workplace?
- What factors lead to workplace discrimination?
- What global issues are developers most concerned with?

Reaching out to new audiences
    Another benefit from branching out into new topics is the chance to reach out to new audiences.
    It’s no secret that people who don’t fit the mold of the average developer (whether because of their gender, race, age, disabilities, or a myriad of other factors) often have a harder time getting involved in the community, and this also shows up in our data. 
    In the past, we’ve tried various outreach strategies to help address these imbalances in survey participation, but the results haven’t always been as effective as we’d hoped. 
    So this time, I thought I’d try something different and have the survey itself include more questions relevant to under-represented groups, asking about workplace discrimination:
    As well as actions taken in response to said discrimination:
    Yet while obtaining a more representative data sample as a result of this new focus would be ideal, it isn’t the only benefit. 
    The most vulnerable among us are often the proverbial canaries in the coal mine, suffering first from issues or policies that will eventually affect the rest of the community as well, if left unchecked. 
    So, facing these issues head-on is especially valuable now, at a time when “DEI” is becoming a new taboo, and a lot of the important work that has been done to make things slightly better over the past decade is at risk of being reversed.
    The big questions
    Finally, the survey also tries to go beyond work and daily life to address the broader questions that keep us up at night:
There’s been talk in recent years about keeping the workplace free of politics. And while I can certainly see the appeal in that, in 2025, it feels harder than ever to achieve that ideal. At a time when people are losing rights and governments are sliding towards authoritarianism, should we still pretend that everything is fine? Especially when you factor in the fact that the tech community is now a major political player in its own right…
    So while I didn’t push too far in that direction for this first edition of the survey, one of my goals for the future is to get a better grasp of where exactly developers stand in terms of ideology and worldview. Is this a good idea, or should I keep my distance from any hot-button issues? Don’t hesitate to let me know what you think, or suggest any other topic I should be asking about next time. 
    In the meantime, go take the survey, and help us get a better picture of who exactly we all are!
    State of Devs: A Survey for Every Developer originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  4. Blogger
    by: Abhishek Prakash
    Thu, 01 May 2025 05:49:00 GMT

    Before the age of blogs, forums, and YouTube tutorials, Linux users relied on printed magazines to stay informed and inspired. Titles like Linux Journal, Linux Format, and Maximum Linux were lifelines for enthusiasts, packed with tutorials, distro reviews, and CD/DVDs.
    These glossy monthly issues weren’t just publications—they were portals into a growing open-source world.
Let's recollect the memories of your favorite Linux magazines. Did you ever read them or have a subscription?
Linux Magazines That Rule(d) The Linuxverse: Once upon a time when it was fashionable to read magazines in print format, these were the choices for the Linux users. (It's FOSS, Abhishek Prakash)
💬 Let's see what else you get in this edition
- RISC-V based SBC, Muse Pi.
- Lenovo offering Linux laptops.
- Trying tab grouping in Firefox.
- And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by PikaPods.

❇️ PikaPods: Enjoy Self-hosting Hassle-free
PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. PikaPods also shares revenue with the original developers of the software.
    You get a $5 free credit to try it out and see if you can rely on PikaPods. I know, you can 😄
PikaPods - Instant Open Source App Hosting: Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
📰 Linux and Open Source News
- QEMU 10 just released with many new upgrades.
- Proton Pass now allows attaching files to passwords.
- An Indian court orders a ban on Proton Mail.
- Kali Linux is urging users to add their new signing key.
- Running Arch Linux inside WSL is now officially possible.
- The Muse Pi Pro is a new RISC-V SBC with AI acceleration.
- Lenovo offers Linux laptops with a cheaper price tag.
Lenovo Cuts the Windows Tax and offers Cheaper Laptops with Linux Pre-installed: Lenovo is doing something that many aren’t. (It's FOSS News, Sourav Rudra)
🧠 What We’re Thinking About
    Perplexity is ready to track everything users do with its upcoming AI-powered web browser.
Perplexity Wants to Track Your Every Move With its AI-powered Browser: Perplexity’s new Comet web browser is bad news if you care about privacy. (It's FOSS News, Sourav Rudra)
🧮 Linux Tips, Tutorials and More
- Organize better with Logseq journals and contents pages.
- Learn how to create a password-protected Zip file in Linux.
- Our apt command guide is a one-stop resource for all your apt command needs.
- Dual-booting CachyOS and Windows is a nice way to get the best of both worlds.
- Firefox has finally introduced Tab Groups, join us as we explore it.
Exploring Firefox Tab Groups: Has Mozilla Redeemed Itself? Firefox’s Tab Groups help you organize tabs efficiently. But how efficiently? Let me share my experience. (It's FOSS, Sourav Rudra)
Desktop Linux is mostly neglected by the industry but loved by the community. For the past 12 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content.
If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a burger meal each month) and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.
Join It's FOSS Plus
👷 Homelab and Maker's Corner
    Someone managed to run a website on a Nintendo Wii.
This Website Is Running on a Wii: Alex Haydock found a dusty old Wii console at a hardware swap and modded it to run his website. (404 Media, Samantha Cole)
✨ Apps Highlight
    We tested out GNOME's new document viewer, Papers.
Hands-on with Papers, GNOME’s new Document Reader: Tried GNOME’s new document reader, it didn’t disappoint. (It's FOSS News, Sourav Rudra)
📽️ Videos I am Creating for You
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    Test your Ubuntu knowledge with our All About Ubuntu crossword.
All About Ubuntu: Crossword Puzzle: A true Ubuntu fan should be able to guess this crossword correctly. (It's FOSS, Abhishek Prakash)
🛍️ Deal you might like
    This e-book bundle is tailored for DevOps professionals and rookies alike—learn from a diverse library of hot courses like Terraform Cookbook, Continuous Deployment, Policy as Code and more.
    And your purchase supports Code for America!
Humble Tech Book Bundle: DevOps 2025 by O’Reilly: A digital apprenticeship with the pros at O’Reilly—add new skills to your DevOps toolkit with our latest guides bundle. (Humble Bundle)
💡 Quick Handy Tip
    In Brave Browser, you can open two tabs in a split view. First, select two tabs by Ctrl + Left-Click. Now, Right-Click on any tab and select "Open in split view". The two tabs will then be opened in a split view.
    You can click on the three-dot button in the middle of the split to swap the position of tabs, unsplit tabs, and resize them.
    🤣 Meme of the Week
    We really need to value them more 🥹
    🗓️ Tech Trivia
    On April 27, 1995, the U.S. Justice Department sued to block Microsoft’s $2.1 billion acquisition of Intuit, arguing it would hurt competition in personal finance software. Microsoft withdrew from the deal shortly after.
    🧑‍🤝‍🧑 FOSSverse Corner
    Know of a way to rename many files on Linux in one go? Pro FOSSer Neville is looking for ways:
What is the best way to rename a heap of files? There are two rename apps: a Perl program and a utility from util-linux. You can also use mv in a loop. Has anyone used the Perl version of rename, or do people do it with the file manager? (It's FOSS Community, nevj)
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  5. Blogger
    by: Sourav Rudra
    Thu, 01 May 2025 05:10:17 GMT

    Mozilla's Firefox needs no introduction. It is one of the few web browsers around that is not based on Chromium, setting out to provide a privacy-focused browsing experience for its users.
    Sadly, some recent maneuvers have landed it in hot water, the most recent of which was a policy change that resulted in an intense backlash from the open source community, who felt wronged.
The consensus was that Mozilla broke its promise of not selling user data, leading to widespread concern over the organization's commitment to user privacy.
    Since then, they have tweaked Firefox's Terms of Use to better reflect how they handle user data, clarifying that they do not claim ownership over user content and that any data collected is used for maintaining and improving Firefox, in line with their Privacy Policy.
    Behind the scenes, Mozilla has also been focusing on developing more AI-powered features for Firefox—an approach that has drawn mixed reactions, with many asking for improvements to the core, everyday browser functionality.
    Luckily, they have finally delivered something on that front by implementing the long-requested Tab Groups feature.
    Firefox Tab Groups: Why Should You Use It?
    As the name implies, Tab Groups allows users to organize multiple open tabs into customizable, color-coded, and collapsible sections—making it significantly easier for users to reduce visual clutter, stay focused on priority tasks, and streamline workflows.
    This can greatly boost productivity, especially when paired with the right tools and tips for optimizing your workflow on a Linux desktop. Being someone who has to go through a lot of material when researching topics, I fully understand the importance of efficient tab management on a web browser.
    Using a tab grouping feature like this helps minimize distractions, keeps your browser organized, and ensures quick access to important information without you getting overwhelmed by an endless stack of tabs.
    You can learn more about how this came to be on the announcement blog.
    How to Group Tabs in Firefox?
    If you are looking to integrate this neat feature into your workflow, then you have to first ensure that you are on Firefox 138 or later. After that, things are quite straightforward.
    Open up a bunch of new tabs and drag/drop one onto the other. This should open up the "Create tab group" dialog. Here, enter the name for the tab group, give it a color, and then click on "Done".
    You can right-click on existing tabs to quickly add them to tab groups, or remove them for easy reorganization into new groups.
    Tab groups can be expanded or collapsed with a simple left-click, and you can drag them to rearrange as needed. If you accidentally close Firefox, or even do so intentionally, you can still access your previous tab groups by clicking the downward arrow button above the address bar.
    Similarly, managing an existing tab group is easy—just right-click on the group to open the "Manage tab group" dialog. From there, you can rename the group, change its color, move it around, or delete it entirely.
    Besides that, Mozilla has mentioned that they are already experimenting with AI-powered tools for organizing tabs by topic, which runs on their on-device AI implementation. It is live on the Firefox Nightly build and can be accessed from the "Suggest more of my tabs" button.
    Suggested Read 📖
I Tried This Upcoming AI Feature in Firefox: Firefox will be bringing experimental AI-generated link previews, offering quick on-device summaries. Here’s my quick experience with it. (It's FOSS News, Sourav Rudra)
  6. Blogger
    By: Edwin
    Wed, 30 Apr 2025 13:08:34 +0000


A lot of people want Linux but do not want to either remove Windows or take up the overwhelming task of dual booting. For those people, WSL (Windows Subsystem for Linux) came as a blessing. WSL lets you run Linux on your Windows device without the overhead of a virtual machine (VM). But in some cases, where you want to fix a problem or simply do not want WSL anymore, you may have to uninstall WSL from your Windows system.
Here is a step-by-step guide to remove WSL from your Windows system, remove any Linux distribution, delete all related files, and clear up some disk space. Ready? Get. Set. Learn!
    What is WSL
You probably know by now that we always start with the basics, i.e., what WSL does. Think of WSL as a compatibility layer for running Linux binaries on Microsoft Windows systems. It comes in two versions:
- WSL 1: Uses a translation layer between Linux and Windows.
- WSL 2: Uses a real Linux kernel in a lightweight VM.

All around the world, WSL is a favourite among developers, system administrators, and students for running Linux tools like bash, ssh, grep, awk, and even Docker. But if you have moved to a proper Linux system or just want to do a clean reinstall, here are the instructions to remove WSL completely without any errors.
    Step 1: How to Uninstall Linux Distributions
    The first step to uninstall WSL completely is to remove all installed Linux distributions.
    Check Installed Distros
    To check for the installed Linux distributions, open PowerShell or Command Prompt and run the command:
wsl --list --all

After executing this command, you will see a list of installed distros, such as:
- Ubuntu
- Debian
- Kali
- Alpine
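If you also want to see which WSL version (1 or 2) each distribution uses before deciding what to remove, the verbose listing includes a version column:

wsl --list --verbose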
How to Uninstall a Linux Distro
To uninstall a distro like Ubuntu, follow these instructions:
1. Press Windows key + I to open the Settings window.
2. Go to Apps, then click Installed Apps (or Apps & Features).
3. Search for your distro and click Uninstall.
4. Repeat for all distros you no longer need.

If you plan to uninstall WSL completely, we recommend removing all distros.
If you prefer PowerShell, run this command:

wsl --unregister <DistroName>

For example, if you want to remove Ubuntu, execute the command:

wsl --unregister Ubuntu

This removes the Linux distro and all its associated files.
    Step 2: Uninstall WSL Components
    Once we have removed the unwanted distros, let us uninstall the WSL platform itself.
1. Open Control Panel and navigate to Programs, then click Turn Windows features on or off.
2. Uncheck these boxes:
   - Windows Subsystem for Linux
   - Virtual Machine Platform (used by WSL 2)
   - Windows Hypervisor Platform (optional)
3. Click OK and restart your system.

Step 3: Remove WSL Files and Cache
    Even after uninstalling WSL and Linux distributions, some data might remain. Here are the instructions to delete WSL’s cached files and reclaim disk space.
    To delete the WSL Folder, open File Explorer and go to:
%USERPROFILE%\AppData\Local\Packages

Look for folders like:
- CanonicalGroupLimited…Ubuntu
- Debian…
- KaliLinux…

Delete any folders related to WSL distros you removed.
    Step 4: Remove WSL CLI Tool (Optional)
    If you installed WSL using the Microsoft Store (i.e., “wsl.exe” package), you can also uninstall it directly from the Installed Apps section:
1. Go to Settings, then Apps, and open Installed Apps.
2. Search for Windows Subsystem for Linux.
3. Click Uninstall.

Step 5: Clean Up with Disk Cleanup Tool
    Finally, use the built-in Disk Cleanup utility to clear any temporary files.
1. Press Windows key + S and search for Disk Cleanup.
2. Choose your system drive (usually drive C:).
3. Select options like:
   - Temporary files
   - System created Windows error reporting
   - Delivery optimization files
4. Click OK to clean up.

Bonus Section: How to Reinstall WSL (Optional)
    If you are removing WSL due to issues or conflicts, you can always do a fresh reinstall.
Here is how you can install the latest version of WSL via PowerShell:

wsl --install

This installs WSL 2 by default, along with Ubuntu.
    Wrapping Up
    Uninstalling WSL may sound tricky, but by following these steps, you can completely remove Linux distributions, WSL components, and unwanted files from your system. Whether you are making space for something new or just doing some digital spring cleaning, this guide ensures that WSL is uninstalled safely and cleanly.
    If you ever want to come back to the Linux world, WSL can be reinstalled with a single command, which we have covered as a precaution. Let us know if you face any errors. Happy learning!
    The post Uninstall WSL: Step-by-Step Simple Guide appeared first on Unixmen.
  7. Blogger
    By: Edwin
    Wed, 30 Apr 2025 13:08:28 +0000


There are multiple very useful Bash features beyond everyday commands like cd, ls, and echo. For shell scripting and terminal command execution, there is one lesser known but very powerful built-in command: shopt. It comes in handy when you are customizing your shell behaviour or writing advanced scripts. If you understand shopt, you can improve your workflow and also your scripts’ reliability.
In this guide, let us explain everything there is to know about the shopt command, how to use it, and some practical applications as well (as usual at Unixmen). Ready? Get. Set. Learn!
    The Basics: What is shopt
shopt stands for Shell Options. It is a built-in command in Bash that allows you to view and modify the behaviour of the shell by enabling or disabling certain options. These options affect things like filename expansion, command history behaviour, script execution, and more.
Unlike environment variables, options in shopt are either on or off, i.e., boolean.
    Basic Syntax of shopt
    Here is the basic syntax of shopt command:
shopt [options] [optname...]

Executing shopt:
- Without arguments: Lists all shell options and their current status (on or off).
- With "-s" (set): Turns on the specified option.
- With "-u" (unset): Turns off the specified option.
- With "-q" (quiet): Suppresses output, useful in scripts for conditional checks (see the sketch below).
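Here is a minimal sketch of how the -q flag can drive a conditional in a script (the option name extglob is just an example):

# shopt -q exits with status 0 when the option is enabled, non-zero otherwise
if ! shopt -q extglob; then
    shopt -s extglob
    echo "extglob was off, so it has now been enabled"
fi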
How to View All Available Shell Options
To view the list of all shopt options and to see which are enabled, execute this command:

shopt

The output of this command will list the options and their status, like:

autocd          on
cdable_vars     off
dotglob         off
extglob         on

Enabling and Disabling Options with shopt
    We just learnt how to see if an option is enabled or not. Now let us learn how to enable an option:
shopt -s optname

Similarly, execute this command to disable an option:

shopt -u optname

Here are a couple of examples:

shopt -s dotglob  # This command is to include dotfiles in pathname expansion
shopt -u dotglob  # This command is to exclude dotfiles (which is the default behaviour)

Some of the Commonly Used shopt Options
    Here are some shopt options that will be useful for you:
    dotglob
    When this option is enabled, shell includes dotfiles in globbing patterns i.e., the * operator will match “.bashrc”. This option will be helpful for you when you want to apply operations to hidden files.
shopt -s dotglob

autocd
    The autocd option lets you cd into a directory without typing the cd command explicitly. For example, typing “Documents” will change into the “Documents” directory. Here is how you can enable it:
shopt -s autocd

nocaseglob
    This option makes filename matching case insensitive. Using this option will help you when you write scripts that deal with unpredictable casing in filenames.
shopt -s nocaseglob

How to Write Scripts with shopt
    You can use shopt within Bash scripts to ensure consistent behaviour, especially for scripts that involve operations like pattern matching and history control. Here is an example script snippet to get you started:
# First let us enable dotglob to include dotfiles
shopt -s dotglob

for file in *; do
    echo "Processing $file"
done

In this script, the dotglob option ensures hidden files are also processed by the for loop.
    Resetting All shopt Options
If you’ve made changes and want to restore the default behaviours, you can unset the options you enabled by executing these commands for the appropriate options:

shopt -u dotglob
shopt -u autocd
shopt -u extglob

Advantages of shopt
    It gives you fine-grained control over your shell environment. Once you are familiar with it, it improves script portability and reliability. With shopt, you can enable advanced pattern matching and globbing. It can be toggled temporarily and reset as needed and also helps you avoid unexpected behaviours when writing automation scripts.
    Wrapping Up
The shopt command is not as famous as other shell built-ins, but it is a very powerful hidden gem. Whether you are starting to explore shell scripting or you are a power user automating workflows, learning to use shopt can save time and prevent headaches. Once you’re comfortable, you’ll find that Bash scripting becomes more predictable and powerful.
    Related Articles
- Bash Functions in Shell Scripts
- How to Run a Python Script: A Beginners Guide
- Bash Script Example: Guide for Beginners
- bash – shopt works in command line, not found when run in a script – Ask Ubuntu

The post shopt in Bash: How to Improve Script Reliability appeared first on Unixmen.
  8. Blogger
    By: Edwin
    Wed, 30 Apr 2025 13:08:26 +0000


AI is almost everywhere. Every day, we see new AI models surprising the world with their capabilities. The tech community (which includes you as well) wanted something else: to run AI models like ChatGPT or LLaMA on their own devices without spending much on the cloud. The answer came in the form of Ollama. In this article, let us learn what Ollama is, why it is gaining popularity, and the features that set it apart.
    In addition to those, we will also explain what Ollama does, how it works, and how you can use Ollama to run AI locally. Ready? Get. Set. Learn!
    What is Ollama?
    Ollama is an open-source tool designed to make it easy to run large language models (LLMs) locally on your computer. It acts as a wrapper and manager for AI models like LLaMA, Mistral, Codellama, and others, enabling you to interact with them in a terminal or through an API. The best part about this is that you can do all these without needing a powerful cloud server. In simple words, Ollama brings LLMs to your local machine with minimal setup.
    Why Should You Use Ollama?
    Here are a few reasons why developers and researchers are using Ollama:
- Run LLMs locally: No expensive subscriptions or hardware required.
- Enhanced privacy: Your data stays on your device.
- Faster response times: Especially useful for prototyping or development.
- Experiment with multiple models: Ollama supports various open models.
- Simple CLI and REST API: Easy to integrate with existing tools or workflows.

How Does Ollama Work?
    Ollama provides a command-line interface (CLI) and backend engine to download, run, and interact with language models.
    It handles:
- Downloading pre-optimized models
- Managing RAM/GPU requirements
- Providing a REST API or shell-like experience
- Handling model switching or multiple instances

For example, to start using the llama2 model, execute this command:

ollama run llama2

Executing this command will fetch the model if not already downloaded and start an interactive session.
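The REST API mentioned above can be tried with curl once Ollama is running; here is a minimal sketch, assuming a default local install listening on port 11434:

# send a prompt to the locally running llama2 model; the response is streamed back as JSON
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Explain what a shell is in one sentence."
}'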
    Supported Models in Ollama
Here are some of the popular models you can run with it and their distinguishing factors:
- LLaMA 2: by Meta, used in Meta AI
- Mistral 7B
- Codellama: Optimized for code generation
- Gemma: Google’s open model
- Neural Chat
- Phi: Lightweight models for fast inference

You can even create your own model file using a “Modelfile”, similar to how Dockerfiles work.
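As a rough sketch, a minimal Modelfile could look something like this (the base model, parameter value, and system prompt are only illustrative):

# Modelfile: start from an existing model and tweak its behaviour
FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant that answers Linux questions."

You would then build and run it with something like ollama create my-assistant -f Modelfile, followed by ollama run my-assistant.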
    How to Install Ollama on Linux, macOS, or Windows
    On Linux devices, execute this command:
curl -fsSL https://ollama.com/install.sh | sh

You can install from source via GitHub as well.
    If you have a macOS device, open Terminal window and execute this command:
brew install ollama

Ollama now supports Windows natively via WSL (Windows Subsystem for Linux). You can also install it using the “.msi” installer from the official Ollama site.
    Key Features of Ollama
- Easy setup: No need for complex Python environments or dependency hell
- Built-in GPU acceleration: Supports NVIDIA GPUs (with CUDA)
- API access: Plug into any app using HTTP
- Low resource footprint: Runs on machines with as little as 8 GB RAM
- Model customization: Create, fine-tune, or combine models

Practical Applications of Ollama
Here are some real-world applications to understand it better. Try these projects now that you have the answer to the question: what is Ollama?
- Chatbot development: Build an AI assistant locally.
- Code generation: Use Codellama to assist in coding.
- Offline AI experimentation: Perfect for research in low-connectivity environments.
- Privacy-sensitive applications: Ensure data never leaves your machine.
- Learning and prototyping: This is a great tool for beginners to understand how LLMs work.

Limitations of Ollama
    At Unixmen, we included this section for educational purposes only. Ollama is a great tool considering it is open for all. While it is powerful, it has a few limitations:
- You may still need a decent CPU or GPU for smoother performance.
- Not all LLMs are supported (especially closed-source ones).
- Some models can be large and require significant storage and bandwidth for downloading.

Still, it provides a great balance between usability and performance.
    Wrapping Up
    If you’ve been wondering what is Ollama, now you know. It is a powerful tool that lets you run open-source AI models locally, without the need for cloud infrastructure. It’s simple, efficient, and perfect for both hobbyists and professionals looking to explore local LLMs.
    With growing interest in privacy, open AI, and local compute, tools like this are making AI more accessible than ever. Keep an eye on Unixmen because as AI models get better, we will keep adding more and more information about them.
    Related Articles
- The Impact of Artificial Intelligence on Linux Security
- The Dawn of Artificial Intelligence: The Many Benefits of AI for Small Businesses
- How AI is Revolutionizing Linux System Administration: Tools and Techniques for Automation

The post What is Ollama? How to Run LLMs Locally appeared first on Unixmen.
  9. Blogger
    By: Edwin
    Wed, 30 Apr 2025 13:08:24 +0000


Firefox is the browser of choice for many tech enthusiasts. If you are reading this, it probably means that your go-to browser is Firefox. But very often, we find ourselves buried under dozens of open tabs in Firefox. You are not alone. Tab overload is a real productivity killer, and the Firefox dev team knows it. Here is the solution: Firefox Tab Groups.
Firefox stunned the world by removing built-in tab grouping, but there are powerful extensions and workarounds that help bring that functionality back. Some of these tricks even improve tab grouping in Firefox. In this detailed guide, we will explore what tab groups in Firefox are, how to implement them using modern tools, and why they’re a must-have for efficient browsing. Ready? Get. Set. Learn!
    What Are Firefox Tab Groups?
    Tab groups help you organize your open browser tabs into categories or collections. Think of them as folders for tabs. You can switch between different contexts like “Work”, “Research”, “Shopping”, or “Social Media” without cluttering your current window.
    While Firefox once had native support for tab groups (known as Panorama), it was removed in version 45. Fortunately, the Firefox community has filled the gap with powerful extensions.
    Why Should You Use Tab Groups?
Here’s why tab grouping in Firefox is helpful, and why the Firefox community went to great lengths to bring it back:
- Helps you declutter your tab bar: endless scrolling to find one tab is tough.
- Focus on one task or project at a time.
- Save tab groups for future sessions.
- Restore your groups after closing the browser.
- Easily categorize tabs by topic or purpose (like a Christmas shopping reminder).

Whether you’re a developer, student, or just a multitasker, organizing tabs can drastically improve your workflow.
    Best Firefox Extension for Tab Groups
    Let us look at a tried and tested Firefox extension to create tab groups.
    Simple Tab Groups
Simple Tab Groups (STG) is the most popular and powerful Firefox extension for creating and managing tab groups. Let us list some features that set this extension apart:
- Create multiple tab groups
- Assign custom names and icons
- Automatically save sessions
- Move tabs between groups
- Keyboard shortcuts for switching groups
- Dark mode and compact view

How to Install Simple Tab Groups
1. Go to the Firefox Add-ons page.
2. Search for “Simple Tab Groups”.
3. Click “Add to Firefox” and follow the prompts.

Once the installation is successful, you will see an icon in your toolbar. Click it to start creating groups.
    Panorama View (Optional)
    Panorama View brings back the old visual tab management feature from classic Firefox, letting you see tab groups in a grid layout. While it’s not essential, it is a great visual complement to STG for those who prefer drag-and-drop tab organization.
    Using Simple Tab Groups
    Here is a quick walkthrough for beginners:
    How to create a Group
1. Click the Simple Tab Groups icon in the toolbar.
2. Select “Create new group”.
3. Name the group, e.g., “Work” or “Unixmen”.
4. Firefox will switch to a new, empty tab set.

Switching Between Groups
    You can switch using:
- The STG toolbar icon
- Right-click menu on any tab
- Custom hotkeys (configurable in STG settings)

How to Move Tabs Between Groups
    Drag and drop tabs in the STG group manager interface or use the context menu.
    Backing Up Your Groups
    STG allows you to export and import your tab groups, which is perfect for syncing between machines or saving work environments.
    Some Best Practices and Tips
- Use keyboard shortcuts for faster group switching.
- Enable auto-save groups in the STG settings to avoid losing tabs on crash or shutdown.
- Use Firefox Sync along with STG’s export/import feature to keep your tab setup across devices.
- Combine with Tree Style Tab to organize tabs vertically within a group.

Wrapping Up
    While Firefox doesn’t have native tab groups anymore, extensions like Simple Tab Groups not only replace that functionality but expand it with advanced session management, export options, and more. If you are serious about browsing efficiency and keeping your digital workspace organized, Firefox tab groups are an essential upgrade. Here are some more tips to get you started:
- Start with a few basic groups (e.g., Work, Studies, Shopping).
- Use names and colours to easily identify each group.
- Experiment with automation features like auto-grouping.

Related Articles
- The Best Private Browsers for Linux
- Install Wetty (Web + tty) on Ubuntu 15.04 and CentOS 7 – Terminal in Web Browser Over Http/Https
- Install Log.io on Ubuntu – Real-time log monitoring in your browser

The post Firefox Tab Groups: Managing Tabs Like a Pro appeared first on Unixmen.
  10. Blogger
    By: Edwin
    Wed, 30 Apr 2025 13:08:23 +0000


Many hardcore Linux users were introduced to the tech world after playing with the tiny Raspberry Pi devices. One such tiny device is the Raspberry Pi Zero. Its appearance might fool a lot of people, but it packs a surprising punch for its size and price. Whether you’re a beginner, a maker, or a developer looking to prototype on a budget, there are countless Raspberry Pi Zero projects you can build to automate tasks, learn Linux, or just have fun.
In this detailed guide, we will list and explain ten of the most practical and creative projects you can do with a Raspberry Pi Zero or Zero W (the version with built-in Wi-Fi). These ideas are beginner-friendly and open-source focused. We at Unixmen carefully curated these because they are perfect for DIY tech enthusiasts. Ready? Get. Set. Create!
    What is the Raspberry Pi Zero?
The Raspberry Pi Zero is a tiny (credit-card-sized) single-board computer designed for low-power, low-cost computing. The typical specs are:
- 1GHz single-core CPU
- 512MB RAM
- Mini HDMI, micro USB ports
- 40 GPIO pins
- Available with or without built-in Wi-Fi (Zero W/WH)

Though its size may look misleading, it is enough and ideal for most lightweight Linux-based projects.
    Ad Blocker
    This will be very useful to you and your friends and family. Create a network-wide ad blocker with Pi-Hole and Raspberry Pi Zero. It filters DNS queries to block ads across all devices connected to your Wi-Fi.
Why this will be popular:
- Blocks ads on websites, apps, and smart TVs
- Reduces bandwidth and improves speed
- Enhances privacy

How to Install Pi-hole
Execute this command to install Pi-hole:

curl -sSL https://install.pi-hole.net | bash

Retro Gaming Console
    If you are a fan of retro games, you will love this while you create it. Transform your Pi Zero into a portable gaming device using RetroPie or Lakka. Play classic games from NES, SNES, Sega, and more.
    Prerequisites
- Micro SD card
- USB controller or GPIO-based gamepad
- Mini HDMI cable for output

Ethical Testing: Wi-Fi Hacking Lab
    Use tools like Kali Linux ARM or PwnPi to create a portable penetration testing toolkit. The Pi Zero W is ideal for ethical hacking practice, especially for cybersecurity students.
    How Will This be Useful
- Wi-Fi scanning
- Packet sniffing
- Network auditing

We must warn you to use this project responsibly. Deploy this on networks you own or have permission to test.
    Lightweight Web Server
    Run a lightweight Apache or Nginx web server to host static pages or mini applications. This project is great for learning web development or hosting a personal wiki.
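On Raspberry Pi OS (which is Debian-based), getting Nginx running is typically just a couple of commands, with the default page then served from /var/www/html:

# install the Nginx package from the standard repositories
sudo apt update
sudo apt install nginx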
    How Can You Use this Project
- Personal homepage
- Markdown notes
- Self-hosted tools like Gitea, DuckDNS, or Uptime Kuma

Smart Mirror Controller
    Build a smart mirror using a Raspberry Pi Zero and a 2-way acrylic mirror to display:
- Time and weather
- News headlines
- Calendar events

Use MagicMirror² for easy configuration.
    IoT Sensor Node
    Add a DHT11/22, PIR motion sensor, or GPS module to your Pi Zero and turn it into an IoT data collector. Send the data to:
- Home Assistant
- MQTT broker
- Google Sheets or InfluxDB

This is a great lightweight solution for remote sensing.
    Portable File Server (USB OTG)
    You can set up your Pi Zero as a USB gadget that acts like a storage device or even an Ethernet adapter when plugged into a host PC. To do this, use “g_mass_storage” or “g_ether” kernel modules to emulate devices:
modprobe g_mass_storage file=/path/to/file.img

Time-Lapse Camera
    You can connect a Pi Camera module and capture time-lapse videos of sunsets, plant growth, or construction projects.
    Tools You Require
- raspistill
- “ffmpeg” for converting images to video
- Cron jobs for automation (a small example follows below)
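Here is a minimal sketch of how these pieces fit together; the capture interval, paths, and framerate are only examples:

# crontab entry: capture one frame per minute into a timelapse folder (% must be escaped in cron)
* * * * * raspistill -o /home/pi/timelapse/$(date +\%Y\%m\%d-\%H\%M).jpg

# later, stitch the captured frames into a video with ffmpeg
ffmpeg -framerate 24 -pattern_type glob -i '/home/pi/timelapse/*.jpg' -c:v libx264 timelapse.mp4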
Headless Linux Learning Box
You can install Raspberry Pi OS Lite and practice:
- SSH
- Command line tools (grep, sed, awk)
- Bash scripting
- Networking with “netcat”, “ss”, “iptables”

E-Ink Display Projects
    Libraries like Python EPD make it easy to control e-ink displays. Use the Pi Zero with a small e-ink screen to display functional events like:
- Calendar events
- Quotes of the day
- Weather updates
- RSS feeds

Fun Tip: Combine Projects!
You can combine several of these Raspberry Pi Zero projects into one system. For example, you can pair the e-ink display with the ad blocker, or build a retro game console that also acts as a media server.
    Wrapping Up
    Whether you’re into IoT, cybersecurity, retro gaming, or automation, the Raspberry Pi Zero helps you create fun and useful projects. With its low cost, tiny size, and solid performance, it’s the perfect device for building compact, lightweight Linux-based systems.
As of 2025, there is a growing number of open-source tools and community tutorials to support even the most ambitious Raspberry Pi Zero projects. All you need is an idea and a little curiosity. Learn more about Linux-based applications at Unixmen!
    Related Articles
- How to Use Fopen: C projects guide
- Raspberry Pi Firewall: Step-by-step guide for an easy setup
- Gooseberry; An alternative to Raspberry Pi

The post Raspberry Pi Zero Projects: Top 10 in 2025 appeared first on Unixmen.
  11. Blogger

    Revisiting Image Maps

    by: Andy Clarke
    Wed, 30 Apr 2025 12:12:45 +0000

    I mentioned last time that I’ve been working on a new website for Emmy-award-winning game composer Mike Worth. He hired me to create a highly graphical design that showcases his work.
    Mike loves ’90s animation, particularly Disney’s Duck Tales and other animated series. He challenged me to find a way to incorporate their retro ’90s style into his design without making it a pastiche. But that wasn’t my only challenge. I also needed to achieve that ’90s feel by using up-to-the-minute code to maintain accessibility, performance, responsiveness, and semantics.
    Designing for Mike was like a trip back to when mainstream website design seemed more spontaneous and less governed by conventions and best practices. Some people describe these designs as “whimsical”:
    But I’m not so sure that’s entirely accurate. “Playful?” Definitely. “Fanciful?” Possibly. But “fantastic?” That depends. “Whimsy” sounds superfluous, so I call it “expressive” instead.
    Studying design from way back, I remembered how websites often included graphics that combined branding, content, and navigation. Pretty much every reference to web design in the ’90s — when I designed my first website — talks about Warner Brothers’ Space Jam from 1996.
Image: Warner Brothers’ Space Jam (1996).
So, I’m not going to do that.
    Brands like Nintendo used their home pages to direct people to their content while making branded visual statements. Cheestrings combined graphics with navigation, making me wonder why we don’t see designs like this today. Goosebumps typified this approach, combining cartoon illustrations with brightly colored shapes into a functional and visually rich banner, proving that being useful doesn’t mean being boring.
Image, left to right: Nintendo, Cheestrings, Goosebumps.
In the ’90s, when I developed graphics for websites like these, I either sliced them up and put their parts in tables or used mostly forgotten image maps.
    A brief overview of properties and values
    Let’s run through a quick refresher. Image maps date all the way back to HTML 3.2, where, first, server-side maps and then client-side maps defined clickable regions over an image using map and area elements. They were popular for graphics, maps, and navigation, but their use declined with the rise of CSS, SVG, and JavaScript.
    <map> adds clickable areas to a bitmap or vector image.
<map name="projects">
  ...
</map>

That <map> is linked to an image using the usemap attribute:

<img usemap="#projects" ...>

Those elements can have separate href and alt attributes and can be enhanced with ARIA to improve accessibility:

<map name="projects">
  <area href="" alt="" … />
  ...
</map>

The shape attribute specifies an area’s shape. It can be a primitive circle or rect or a polygon defined by a set of absolute x and y coordinates:

<area shape="circle" coords="..." ... />
<area shape="rect" coords="..." ... />
<area shape="poly" coords="..." ... />

Despite their age, image maps still offer plenty of benefits. They’re lightweight and need (almost) no JavaScript. More on that in just a minute. They’re accessible and semantic when used with alt, ARIA, and title attributes. Despite being from a different era, even modern mobile browsers support image maps.
Image: Design by Andy Clarke, Stuff & Nonsense.
Mike Worth’s website will launch in April 2025, but you can see examples from this article on CodePen. My design for Mike Worth includes several graphic navigation elements, which made me wonder if image maps might still be an appropriate solution.
    Image maps in action
    Mike wants his website to showcase his past work and the projects he’d like to do. To make this aspect of his design discoverable and fun, I created a map for people to explore by pressing on areas of the map to open modals. This map contains numbered circles, and pressing one pops up its modal.
    My first thought was to embed anchors into the external map SVG:
<img src="projects.svg" alt="Projects">

<svg ...>
  ...
  <a href="...">
    <circle cx="35" cy="35" r="35" fill="#941B2F"/>
    <path fill="#FFF" d="..."/>
  </a>
</svg>

This approach is problematic. Those anchors are only active when SVG is inline and don’t work with an <img> element. But image maps work perfectly, even though specifying their coordinates can be laborious. Fortunately, plenty of tools are available, which make defining coordinates less tedious. Upload an image, choose shape types, draw the shapes, and copy the markup:

<img src="projects.svg" usemap="#projects-map.svg">

<map name="projects-map.svg">
  <area href="" alt="" coords="..." shape="circle">
  <area href="" alt="" coords="..." shape="circle">
  ...
</map>

Image maps work well when images are fixed sizes, but flexible images present a problem because map coordinates are absolute, not relative to an image’s dimensions. Making image maps responsive needs a little JavaScript to recalculate those coordinates when the image changes size:

function resizeMap() {
  const image = document.getElementById("projects");
  const map = document.querySelector("map[name='projects-map']");
  if (!image || !map || !image.naturalWidth) return;
  const scale = image.clientWidth / image.naturalWidth;
  map.querySelectorAll("area").forEach(area => {
    if (!area.dataset.originalCoords) {
      area.dataset.originalCoords = area.getAttribute("coords");
    }
    const scaledCoords = area.dataset.originalCoords
      .split(",")
      .map(coord => Math.round(coord * scale))
      .join(",");
    area.setAttribute("coords", scaledCoords);
  });
}

["load", "resize"].forEach(event => window.addEventListener(event, resizeMap));

I still wasn’t happy with this implementation as I wanted someone to be able to press on much larger map areas, not just the numbered circles.
    Every <path> has coordinates which define how it’s drawn, and they’re relative to the SVG viewBox:
<svg width="1024" height="1024">
  <path fill="#BFBFBF" d="…"/>
</svg>

On the other hand, a map’s <area> coordinates are absolute to the top-left of an image, so <path> values need to be converted. Fortunately, Raphael Monnerat has written PathToPoints, a tool which does precisely that. Upload an SVG, choose the point frequency, copy the coordinates for each path, and add them to a map area’s coords:

<map>
  <area href="" shape="poly" coords="...">
  <area href="" shape="poly" coords="...">
  <area href="" shape="poly" coords="...">
  ...
</map>

More issues with image maps
    Image maps are hard-coded and time-consuming to create without tools. Even with tools for generating image maps, converting paths to points, and then recalculating them using JavaScript, they could be challenging to maintain at scale.
    <area> elements aren’t visible, and except for a change in the cursor, they provide no visual feedback when someone hovers over or presses a link. Plus, there’s no easy way to add animations or interaction effects.
    But the deal-breaker for me was that an image map’s pixel-based values are unresponsive by default. So, what might be an alternative solution for implementing my map using CSS, HTML, and SVG?
    Anchors positioned absolutely over my map wouldn’t solve the pixel-based positioning problem or give me the irregular-shaped clickable areas I wanted. Anchors within an external SVG wouldn’t work either.
    But the solution was staring me in the face. I realized I needed to:
    Create a new SVG path for each clickable area. Make those paths invisible. Wrap each path inside an anchor. Place the anchors below other elements at the end of my SVG source. Replace that external file with inline SVG. I created a set of six much larger paths which define the clickable areas, each with its own fill to match its numbered circle. I placed each anchor at the end of my SVG source:
<svg … viewBox="0 0 1024 1024"> <!-- Visible content --> <g>...</g> <!-- Clickable areas --> <g id="links"> <a href="..."><path fill="#B48F4C" d="..."/></a> <a href="..."><path fill="#6FA676" d="..."/></a> <a href="..."><path fill="#30201D" d="..."/></a> ... </g> </svg> Then, I reduced those anchors’ opacity to 0 and added a short transition to their full-opacity hover state:
    #links a { opacity: 0; transition: all .25s ease-in-out; } #links a:hover { opacity: 1; } While using an image map’s <area> sadly provides no visual feedback, embedded anchors and their content can respond to someone’s action, hint at what’s to come, and add detail and depth to a design.
    I might add gloss to those numbered circles to be consistent with the branding I’ve designed for Mike. Or, I could include images, titles, or other content to preview the pop-up modals:
    <g id="links"> <a href="…"> <path fill="#B48F4C" d="..."/> <image href="..." ... /> </a> </g> Try it for yourself:
    CodePen Embed Fallback Expressive design, modern techniques
    Designing Mike Worth’s website gave me a chance to blend expressive design with modern development techniques, and revisiting image maps reminded me just how important a tool image maps were during the period Mike loves so much.
    Ultimately, image maps weren’t the right tool for Mike’s website. But exploring them helped me understand what I really needed: a way to recapture the expressiveness and personality of ’90s website design using modern techniques that are accessible, lightweight, responsive, and semantic. That’s what design’s about: choosing the right tool for a job, even if that sometimes means looking back to move forward.
    Biography: Andy Clarke
    Often referred to as one of the pioneers of web design, Andy Clarke has been instrumental in pushing the boundaries of web design and is known for his creative and visually stunning designs. His work has inspired countless designers to explore the full potential of product and website design.
    Andy’s written several industry-leading books, including Transcending CSS, Hardboiled Web Design, and Art Direction for the Web. He’s also worked with businesses of all sizes and industries to achieve their goals through design.
    Visit Andy’s studio, Stuff & Nonsense, and check out his Contract Killer, the popular web design contract template trusted by thousands of web designers and developers.
    Revisiting Image Maps originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  12. Blogger
    by: Sreenath
    Wed, 30 Apr 2025 05:46:58 GMT

Logseq differs from conventional note-taking applications in many aspects.
    Firstly, it follows a note block approach, rather than a page-first approach for content organization. This allows Logseq to achieve data interlinking at the sentence level. That is, you can refer to any sentence of a note in any other note inside your database.
Another equally important feature is the “Special Pages”. These are the “Journals” and “Contents” pages. Both of these special pages have use cases that go far beyond what their names indicate.
    The Journals page
The “Journals” page is the first one you will see when you open Logseq. Here, dates appear as headings. The Logseq documentation suggests that a new user, before understanding Logseq better, should use this Journals page heavily for taking notes.
Journals PageAs the name suggests, this is the daily journals page. Whatever you write under a date will be saved as a separate Markdown file with the date as the title. You can see these pages in your file manager, too: head to the location of your Logseq graph and open the journals folder.
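For instance, you can peek into that folder from a terminal. This is only an illustration: the graph location below is an assumption (use wherever you store your graph), and the date-based file names follow Logseq's default naming scheme.
ls ~/Logseq/journals/
# example output, one Markdown file per day:
# 2025_04_28.md  2025_04_29.md  2025_04_30.md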
Journals Markdown Files in File ManagerLet's see how to make the most of this Journals page.
    Journal page as a daily diary
    Let's start with the basics. The “Journals” page can be used as your daily diary page.
    If you are a frequent diary writer, Logseq is the best tool to digitize your life experiences and daily thoughts.
    Each day, a new page will be created for you.
If you need a page for a day in the past, just click on the Create button at the bottom of the Logseq window and select “New page”.
Click on Create → New PageIn the dialog, enter the date for the required journal in the format Mar 20th, 2023, and press Enter. This will create the Journal page for the specified date!
    Create Journal page for an old dateJournal as a note organizer
If you have read the Logseq Pages and Links article in this series, you should recall that Logseq treats the concepts of Pages, Tags, etc. in an almost identical manner. If you want to create a new note, the best way is the keyboard method:
    #[[Note Title Goes Here]]The above creates a page for you. Now, the best place to create a new page is the Journals page.
    Logseq has a powerful backlink feature. With this, if you use the Journals page to create a new page, you don't need to add any date references inside the page separately, since at the very end of the page, you will have a backlink to that day's journal.
Note with date referenceThis is beneficial because you can easily recall when a note was first created.
    Journal as a to-do organizer
    Logseq can be used as a powerful task manager application as well, and the Journals page plays a crucial role in it.
    If you come across any task while you are in the middle of something, just open the Journals page in Logseq and press the / key.
    Search and enter TODO. Then type the task you are about to do.
    Once done, press / again and search for Date Picker. Select a date from the calendar.
    0:00 /0:29 1× Creating a TODO task in Logseq
    That's it. You have created a to-do item with a due date. Now, when the date arrives, you will get a link on that day's Journal page. Thus, when you open Logseq on that day, you will see this item.
    It will also contain the link to the journal page from where you added the task.
Other than that, you can search for the TODO page and open it to see all your tasks marked with TODO.
    0:00 /0:23 1× Search for the TODO page to list all the to-do tasks
    Journal to manage tasks
Task management is not just about adding due dates to your tasks. You should be able to track a project and know at what stage a particular task is. For this, Logseq has some built-in tags/pages, for example LATER, DOING, DONE, etc.
    These tags can be accessed by pressing the / key and searching for the name.
For example, if you have some ideas that should be done at a later date, but you are not sure exactly when, add them with the LATER tag, just like the TODO tag explained above.
Now, you can search for the LATER tag to see which tasks have been added to that list.
    0:00 /0:22 1× Using the LATER tag in Logseq
Using the Journals page is beneficial here because you can recollect the date on which a particular task was added, giving you more insight into that task. This helps even more if you have also entered your thoughts for that day in the Journal.
    The Contents Page
    Logseq has a special Contents page type, but don't confuse it with the usual table of contents. That is not its purpose. Here, I will mention the way I use the contents page. You can create your own workflows once you know its potential.
    You can think of the Contents page as a manually created Dashboard to your notes and database. Or, a simple home page from where you can access contents needed frequently.
What sets the Contents page apart from other pages is that it is always visible in the right sidebar. So, if you keep the sidebar enabled permanently, you can see the quick links in Contents all the time.
    Edit the Contents page
As mentioned above, the Contents page is available in the right sidebar. Click on the sidebar button in the top panel and select Contents. You can edit this page from the sidebar view, which is the most convenient way to do it.
Click on the Sidebar button and select ContentsAll the text formatting, linking, etc. that works on other Logseq pages works on this page as well.
    1. Add all important pages/tags
    The first thing you can do is to add frequently accessed pages or tags.
    For example, let's say you will be accessing the Kernel, Ubuntu, and APT tags frequently. So, what you can do is to add a Markdown heading:
    ## List of TagsNow, link the tags right in there, one per line:
    #Kernel #Ubuntu #APTFor better arrangement, you can use the Markdown horizontal rule after each section.
    ---2. Link the task management pages
    As discussed in the Journals section, you can have a variety of task related tags like TODO, LATER, WAITING, etc. So you can link each of these in the contents page:
    ## List of Tasks #TODO #LATER #WAITING ---🚧Please note the difference between the Markdown heading and the Logseq tags. So, don't forget to add a space after the # if you are creating a Markdown header.3. Quick access links
If you visit some websites daily, you can bookmark them on the Contents page for quick access.
    ## Quick access links [It's FOSS](https://itsfoss.com/) [It's FOSS Community](https://itsfoss.community/) [Arch Linux News](https://archlinux.org/) [GitHub](https://github.com/) [Reddit](https://www.reddit.com/)After all this, your contents page will look like this:
    Contents page in LogseqWrapping Up
As you can see, you can use these pages in unconventional ways to get more out of Logseq. That's the beauty of this open-source tool: the more you explore, the more you discover, and the more you enjoy it.
    In the next part of this series, I'll share my favorite Logseq extensions.
  13. Blogger
    by: Geoff Graham
    Tue, 29 Apr 2025 14:27:25 +0000

    Brad Frost is running this new little podcast called Open Up. Folks write in with questions about the “other” side of web design and front-end development — not so much about tools and best practices as it is about the things that surround the work we do, like what happens if you get laid off, or AI takes your job, or something along those lines. You know, the human side of what we do in web design and development.
    Well, it just so happens that I’m co-hosting the show. In other words, I get to sprinkle in a little advice on top of the wonderful insights that Brad expertly doles out to audience questions.
    Our second episode just published, and I thought I’d share it. We’re finding our sea legs with this whole thing and figuring things out as we go. We’ve opened things up (get it?!) to a live audience and even pulled in one of Brad’s friends at the end to talk about the changing nature of working on a team and what it looks like to collaborate in a remote-first world.
    https://www.youtube.com/watch?v=bquVF5Cibaw
    Open Up With Brad Frost, Episode 2 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  14. Blogger

    Resize Panes in Tmux

    by: Pranav Krishna
    Tue, 29 Apr 2025 09:53:03 +0530

In this series on managing the tmux utility, we look at the first-level division: panes.
    Panes divide the terminal window horizontally or vertically. Various combinations of these splits can result in different layouts, according to your liking.
    Pane split of a tmux windowThis is how panes work in tmux.
    Creating Panes
    Take into focus any given pane. It could be a fresh window as well.
    The current window can be split horizontally (up and down) with the key
    [Ctrl+B] + " Horizontal SplitAnd to split the pane vertically, use the combination
    [Ctrl+B] + %Vertical SplitResizing your panes
Tmux uses 'cells' to quantify the amount of resizing done at once. To illustrate, this is what resizing by 'one cell' looks like: one more character can be accommodated on that side.
Resizing by 'one cell'The key combinations for resizing are a bit tricky. Stick with me.
    Resize by one cell
    Use the prefix Ctrl+B followed by Ctrl+arrow keys to resize in the required direction.
    [Ctrl+B] Ctrl+arrowThis combination takes a fair number of keypresses, but can be precise.
    0:00 /0:08 1× Resize by five cells (quicker)
    Instead of holding the Ctrl key, you could use the Alt key to resize faster. This moves the pane by five cells.
    [Ctrl+B] Alt+arrow 0:00 /0:12 1× Resize by a specific number of cells (advanced)
As before, command-line options can resize a pane by any number of cells.
    Enter the command line mode with
    [Ctrl+B] + :Then type
resize-pane -{U/D/L/R} xx, where U/D/L/R represents the direction of resizing and xx is the number of cells to resize by. To resize a pane left by 20 cells, this is the command:
    resize-pane -L 20 0:00 /0:06 1× Resizing left by 20 cells
Similarly, to resize a pane upwards, the -U flag is used instead.
    0:00 /0:05 1× Resizing upwards by 15 cells
The resize-pane command is particularly useful for scripting a tmux layout whenever a new session is spawned, as sketched below.
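As a minimal sketch (the session name and pane sizes here are arbitrary), such a startup script could look like this:
tmux new-session -d -s work      # start a detached session named "work"
tmux split-window -h -t work     # add a second pane beside the first
tmux resize-pane -t work -L 20   # resize the active pane 20 cells to the left
tmux attach-session -t work      # attach to the prepared layout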
    Conclusion
Since pane sizes are always bound to change, knowing all the methods to vary them can come in handy, which is why every available method is covered here.
Pro tip 🚀 - If you use a mouse with tmux, you can drag the pane borders to resize the panes.
    0:00 /0:15 1× Turning on mouse mode and resizing the panes
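If you want to try it, mouse support can be turned on for the running server with a single command; add the same set -g mouse on line to your ~/.tmux.conf to make it permanent:
tmux set -g mouse on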
    Go ahead and tell me which method you use in the comments.
  15. Blogger

    Chris’ Corner: Reacting

    by: Chris Coyier
    Mon, 28 Apr 2025 17:20:59 +0000

    I was listening to Wes and Scott on a recent episode of Syntax talking about RSCs (React Server Components). I wouldn’t say it was particularly glowing.
    We use them here at CodePen, and will likely be more and more as we ship more with Next.js, which is part of our most modern stack that we are always moving toward. Me, I like Next.js. React makes good sense to me for use in a very interactive, long-session style application with oodles of state management. By virtue of being on the latest Next.js release, whatever we put in the app directory (“The App Router” as they call it) automatically uses RSCs when it can. I mostly like that. We do have to fight it sometimes, but those fights are generally about server-side rendering and making sure we are set up for that and doing things right to take advantage of it, which honestly we should be doing as much as possible anyway. I’ll also add some anecdotal data that we haven’t exactly seen huge drops in JavaScript bundle size when we move things that direction, which I was hoping would be a big benefit of that work.
    But React is more than Next.js, right? Right?! Yes and no. I use React without Next.js sometimes, and we do at CodePen in plenty of places. Without Next.js, usage of RSCs is hard or not happening. Precious few other frameworks are using them, and some have thrown up their hands and refused. To be fair: Parcel has support in Beta and Waku also supports them.
    A little hard to call them a big success in this state. But it’s also hard to call the concept of them a bad idea. It’s generally just a good idea to make the servers do more work than browsers, as well as send as little data across the network as possible. If the JavaScript in a React component can be run on the server, and we can make the server part kinda smart, let’s let it?
If you’ve got the time and inclination, Dan Abramov’s React for Two Computers is a massive post that is entirely a conceptual walkthrough abstracting the ideas of RSCs into an “Early World” and “Late World” to understand the why and where it all came from. He just recently followed it up with Impossible Components which gets more directly into using RSCs.
    Welp — while we’re talking React lemme drop some related links I found interesting lately.
React itself, aside from RSCs, isn’t sitting idle. They’ve shipped an experimental <ViewTransition> component which is nice to see as someone who has struggled forcing React to do this before. They’ve also shipped an RC (Release Candidate) for the React Compiler (also RC? awkward?). The compiler is interesting in that it doesn’t necessarily make your bundles smaller; it makes them run faster. Fancy Components is a collection of “mainly React, TypeScript, Tailwind, Motion” components that are… fancy. I’ve seen a bit of pushback on the accessibility of some of them, but I’ve also poked through them and found what look like solid attempts at making them accessible, so YMMV. Sahaj Jain says The URL is a great place to store state in React. Joshua Wootonn details the construction of a Drag to Select interaction in React which is… pretty complicated. The blog Expression Statement (no byline) says HTML Form Validation is heavily underused. I just added a bit of special validation to a form in React this week and I tend to agree. Short story: GMail doesn’t render <img>s where the src has a space in it. 😭. I used pattern directly on the input, and we have our own error message system, otherwise I would have also used setCustomValidity. Thoughtbot: Superglue 1.0: React ❤️ Rails
  16. Blogger
    by: Geoff Graham
    Mon, 28 Apr 2025 12:43:01 +0000

    Ten divs walk into a bar:
<div>1</div> <div>2</div> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> <div>10</div> There aren’t enough chairs for them to all sit at the bar, so you need the tenth div to sit on the lap of one of the other divs, say the second one. We can visually cover the second div with the tenth div but have to make sure they are sitting next to each other in the HTML as well. The order matters.
    <div>1</div> <div>2</div> <div>10</div><!-- Sitting next to Div #2--> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> The tenth div needs to sit on the second div’s lap rather than next to it. So, perhaps we redefine the relationship between them and make this a parent-child sorta thing.
    <div>1</div> <div class="parent"> 2 <div class="child">10</div><!-- Sitting in Div #2's lap--> </div> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> Now we can do a little tricky positioning dance to contain the tenth div inside the second div in the CSS:
    .parent { position: relative; /* Contains Div #10 */ } .child { position: absolute; } We can inset the child’s position so it is pinned to the parent’s top-left edge:
    .child { position: absolute; inset-block-start: 0; inset-inline-start: 0; } And we can set the child’s width to 100% of the parent’s size so that it is fully covering the parent’s lap and completely obscuring it.
    .child { position: absolute; inset-block-start: 0; inset-inline-start: 0; width: 100%; } Cool, it works!
    CodePen Embed Fallback Anchor positioning simplifies this process a heckuva lot because it just doesn’t care where the tenth div is in the HTML. Instead, we can work with our initial markup containing 10 individuals exactly as they entered the bar. You’re going to want to follow along in the latest version of Chrome since anchor positioning is only supported there by default at the time I’m writing this.
    <div>1</div> <div class="parent">2</div> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> <div class="child">10</div> Instead, we define the second div as an anchor element using the anchor-name property. I’m going to continue using the .parent and .child classes to keep things clear.
    .parent { anchor-name: --anchor; /* this can be any name formatted as a dashed ident */ } Then we connect the child to the parent by way of the position-anchor property:
    .child { position-anchor: --anchor; /* has to match the `anchor-name` */ } The last thing we have to do is position the child so that it covers the parent’s lap. We have the position-area property that allows us to center the element over the parent:
    .child { position-anchor: --anchor; position-area: center; } If we want to completely cover the parent’s lap, we can set the child’s size to match that of the parent using the anchor-size() function:
    .child { position-anchor: --anchor; position-area: center; width: anchor-size(width); } CodePen Embed Fallback No punchline — just one of the things that makes anchor positioning something I’m so excited about. The fact that it eschews HTML source order is so CSS-y because it’s another separation of concerns between content and presentation.
    Anchor Positioning Just Don’t Care About Source Order originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  17. Blogger
    by: Abhishek Prakash
    Mon, 28 Apr 2025 06:04:44 GMT

There is something about CachyOS. It feels fast. The performance is exceptionally smooth, especially if you have newer hardware.
    I don't have data to prove it but my new Asus Zenbook that I bought in November last year is rocking CachyOS superbly.
    The new laptop came with Windows, which is not surprising. I didn't replace Windows with Linux. Instead, I installed CachyOS in dual boot mode alongside Windows.
    The thing is that it was straightforward to do so. Anything simple in the Arch domain is amusing in itself.
    So, I share my amusing experience in this video.
Subscribe to It's FOSS YouTube ChannelI understand that video may not be everyone's favorite format, so I created this tutorial in text format too.
    There are a few things to note here:
An active internet connection is mandatory. Offline installation is not possible. An 8 GB USB is needed to create the installation medium. At least 40 GB of free disk space (it could be 20 GB as well, but that would be far too little). Time and patience are of the essence. 🚧You should back up your important data on an external disk or cloud. It is rare that anything will go wrong, but if you are not familiar with dealing with disk partitions, a backup will save your day. SPONSORED Use Swiss-based pCloud storage
    Back up important folders from your computer to pCloud, securely. Keep and recover old versions in up to 1 year.
    Learn more about pCloud backup Creating live USB of CachyOS and booting from it
    First, download the desktop edition of CachyOS from its website:
    Download CachyOSYou can create the live USB on any computer with the help of Ventoy. I used my TUXEDO notebook for this purpose.
    Download Ventoy from the official Website. When you extract it, there will be a few executables in it to run it either in a browser or in a GUI. Use whatever you want.
Make sure the USB is plugged in, then install Ventoy on it.
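For reference, the command-line route boils down to something like the following sketch. The version number is a placeholder, and /dev/sdX must be replaced with your actual USB device; double-check it, because the drive will be wiped.
tar xzf ventoy-1.x.x-linux.tar.gz      # extract the downloaded release
cd ventoy-1.x.x
sudo sh Ventoy2Disk.sh -i /dev/sdX     # install Ventoy on the USB drive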
    Once done, all you need to do is to drag the CachyOS ISO to the Ventoy disk. The example below shows it for Mint but it's the same for any Linux ISO.
    If you need detailed steps for using Ventoy, please follow this tutorial.
Install and Use Ventoy on Ubuntu [Complete Guide]Tired of flashing USB drives for every ISO? Get started with Ventoy and get the ability to easily boot from ISOs.It's FOSSSagar SharmaOnce I had the CachyOS live USB, I put it in the Asus Zenbook and restarted it. When the computer was starting up, pressing the F2/F10 key took me to the BIOS settings.
    I did that to ensure that the system boots from the USB instead of the hard disk by changing the boot order.
    Change boot priorityWhen the system booted next, Ventoy screen was visible and I could see the option to load the CachyOS live session.
    Select CachyOSI selected to boot in normal mode.
    Normal ModeThere was an option to boot into CachyOS with NVIDIA. I went with the default option.
    Open-source or closed-source driversWhile booting into CachyOS, I ran into an issue. There was a "Start Job is running..." message for more than a minute or two. I force restarted the system and the live USB worked fine the next time.
    Start job duration notificationIf this error persists for you, try to change the USB port or create live USB again.
Another issue I discovered by trial and error was related to the password. CachyOS showed a login screen that seemed to be asking for a username and password. As per the official docs, no password is required in the live session.
    What I did was to change the display server to Wayland and then click the next button, and I was logged into the system without any password.
    Select WaylandInstalling CachyOS
Again, an active internet connection is mandatory to download the desktop environment and other packages.
    Select the "Launch installer" option.
    Click on "Launch Installer"My system was not plugged into a power source but it had almost 98% battery and I knew that it could handle the quick installation easily.
System not connected to power source warningThe initial settings are quite straightforward, like selecting the time zone
    Set Locationand keyboard layout.
Set keyboard layoutThe most important step is disk partitioning, and I was pleasantly surprised to see that the Calamares installer detected the Windows installation and gave the option to install CachyOS alongside it.
I have a single disk with a Windows partition as well as an EFI system partition.
    All I had to do was to drag the slider and shrink the storage appropriately.
    Storage settingsI gave more space to Linux because it was going to be my main operating system.
The next screen gave the option to install a desktop environment or a window manager. I opted for GNOME. You can see why it is important to have an active internet connection: the desktop environment is not on the ISO file and needs to be downloaded first.
    Select Desktop EnvironmentAnd a few additional packages are added to the list automatically.
    Installing additional packagesAnd as the last interactive step of install, I created the user account.
Enter user credentialsAt this point, you get a quick overview of what is going to be done. Things looked fine, so I hit the Install button.
    Click on InstallAnd then just wait for a few minutes for the installation to complete.
Installation progressWhen the installation completes, restart the system and take out the live USB. In my case, I forgot to take the USB out, but the system still booted from the hard disk.
    Fixing the missing Windows from grub
    When the system booted next, I could see the usual Grub bootloader screen but there was no Windows option in it.
    Windows Boot Manager is absentFixing it was simple. I opened the grub config file for editing in Nano.
sudo nano /etc/default/grubThe OS prober option was disabled, so I uncommented that line, saved the file, and exited.
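For reference, the relevant line looked roughly like this on my system (many distros ship it commented out with a leading #, which is what you remove):
GRUB_DISABLE_OS_PROBER=false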
    Uncomment OS ProberThe next step was to update grub to make it aware of the config changes.
sudo grub-mkconfig -o /boot/grub/grub.cfgAnd on the next reboot, the Windows Boot Manager option was there to let me use Windows.
    Windows Boot Manager in the boot screenThis is what I did to install CachyOS Linux alongside Windows. For an Arch-based distro, the procedure was pretty standard, and that's a good thing. Installing Linux should not be super complicated.
    💬 If you tried dual booting CachyOS, do let me know how it went in the comment section.
  18. Blogger
    By: Linux.com Editorial Staff
    Sun, 27 Apr 2025 23:40:06 +0000

    Talos Linux is a specialized operating system designed for running Kubernetes. First and foremost it handles full lifecycle management for Kubernetes control-plane components. On the other hand, Talos Linux focuses on security, minimizing the user’s ability to influence the system. A distinctive feature of this OS is the near-complete absence of executables, including the absence of a shell and the inability to log in via SSH. All configuration of Talos Linux is done through a Kubernetes-like API.
    Talos Linux is provided as a set of pre-built images for various environments.
    The standard installation method assumes you will take a prepared image for your specific cloud provider or hypervisor and create a virtual machine from it. Or go the bare metal route and load  the Talos Linux image using ISO or PXE methods.
    Unfortunately, this does not work when dealing with providers that offer a pre-configured server or virtual machine without letting you upload a custom image or even use an ISO for installation through KVM. In that case, your choices are limited to the distributions the cloud provider makes available.
    Usually during the Talos Linux installation process, two questions need to be answered: (1) How to load and boot the Talos Linux image, and (2) How to prepare and apply the machine-config (the main configuration file for Talos Linux) to that booted image. Let’s talk about each of these steps.
    Booting into Talos Linux
    One of the most universal methods is to use a Linux kernel mechanism called kexec.
    kexec is both a utility and a system call of the same name. It allows you to boot into a new kernel from the existing system without performing a physical reboot of the machine. This means you can download the required vmlinuz and initramfs for Talos Linux, and then, specify the needed kernel command line and immediately switch over to the new system. It is as if the kernel were loaded by the standard bootloader at startup, only in this case your existing Linux operating system acts as the bootloader.
Essentially, all you need is any Linux distribution. It could be a physical server running in rescue mode, or even a virtual machine with a pre-installed operating system. Let’s take a look at a case using Ubuntu, but it can be literally any other Linux distribution.
Log in via SSH and install the kexec-tools package; it contains the kexec utility, which you’ll need later:
apt install kexec-tools -y Next, you need to download Talos Linux itself, that is, the kernel and initramfs. They can be downloaded from the official repository:
    wget -O /tmp/vmlinuz https://github.com/siderolabs/talos/releases/latest/download/vmlinuz-amd64
    wget -O /tmp/initramfs.xz https://github.com/siderolabs/talos/releases/latest/download/initramfs-amd64.xz If you have a physical server rather than a virtual one, you’ll need to build your own image with all the necessary firmware using Talos Factory service. Alternatively, you can use the pre-built images from the Cozystack project (a solution for building clouds we created at Ænix and transferred to CNCF Sandbox) – these images already include all required modules and firmware:
    wget -O /tmp/vmlinuz https://github.com/cozystack/cozystack/releases/latest/download/kernel-amd64
    wget -O /tmp/initramfs.xz https://github.com/cozystack/cozystack/releases/latest/download/initramfs-metal-amd64.xz Now you need the network information that will be passed to Talos Linux at boot time. Below is a small script that gathers everything you need and sets environment variables:
IP=$(ip -o -4 route get 8.8.8.8 | awk -F"src " '{sub(" .*", "", $2); print $2}')
GATEWAY=$(ip -o -4 route get 8.8.8.8 | awk -F"via " '{sub(" .*", "", $2); print $2}')
ETH=$(ip -o -4 route get 8.8.8.8 | awk -F"dev " '{sub(" .*", "", $2); print $2}')
CIDR=$(ip -o -4 addr show "$ETH" | awk -F"inet $IP/" '{sub(" .*", "", $2); print $2; exit}')
NETMASK=$(echo "$CIDR" | awk '{p=$1;for(i=1;i<=4;i++){if(p>=8){o=255;p-=8}else{o=256-2^(8-p);p=0}printf(i<4?o".":o"\n")}}')
DEV=$(udevadm info -q property "/sys/class/net/$ETH" | awk -F= '$1~/ID_NET_NAME_ONBOARD/{print $2; exit} $1~/ID_NET_NAME_PATH/{v=$2} END{if(v) print v}') You can pass these parameters via the kernel cmdline. Use the ip= parameter to configure the network using the kernel-level IP configuration mechanism. This method lets the kernel automatically set up interfaces and assign IP addresses during boot, based on information passed through the kernel cmdline. It’s a built-in kernel feature enabled by the CONFIG_IP_PNP option. In Talos Linux, this feature is enabled by default. All you need to do is provide properly formatted network settings in the kernel cmdline.
You can find the proper syntax for this option in the Talos Linux documentation. The official Linux kernel documentation also provides more detailed examples. Set the CMDLINE variable with the ip option containing the current system’s settings, and then print it out:
CMDLINE="init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=${IP}::${GATEWAY}:${NETMASK}::${DEV}:::::"
    echo $CMDLINE The output should look something like:
    init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=10.0.0.131::10.0.0.1:255.255.255.0::eno2np0::::: Verify that everything looks correct, then load our new kernel:
kexec -l /tmp/vmlinuz --initrd=/tmp/initramfs.xz --command-line="$CMDLINE"
    kexec -e The first command loads the Talos kernel into RAM, the second command switches the current system to this new kernel.
As a result, you’ll get a running instance of Talos Linux with networking configured. However, it’s currently running entirely in RAM, so if the server reboots, the system will return to its original state (by loading the OS from the hard drive, e.g., Ubuntu).
    Applying machine-config and installing Talos Linux on disk
To install Talos Linux persistently on the disk and replace the current OS, you need to apply a machine-config specifying the disk to install to. To configure the machine, you can use either the official talosctl utility or Talm, a utility maintained by the Cozystack project (Talm works with vanilla Talos Linux as well).
    First, let’s consider configuration using talosctl. Before applying the config, ensure it includes network settings for your node; otherwise, after reboot, the node won’t configure networking. During installation, the bootloader is written to disk and does not contain the ip option for kernel autoconfiguration.
    Here’s an example of a config patch containing the necessary values:
# node1.yaml
machine:
  install:
    disk: /dev/sda
  network:
    hostname: node1
    nameservers:
    - 1.1.1.1
    - 8.8.8.8
    interfaces:
    - interface: eno2np0
      addresses:
      - 10.0.0.131/24
      routes:
      - network: 0.0.0.0/0
        gateway: 10.0.0.1 You can use it to generate a full machine-config:
    talosctl gen secrets
talosctl gen config --with-secrets=secrets.yaml --config-patch-control-plane=@node1.yaml <cluster-name> <cluster-endpoint> Review the resulting config and apply it to the node:
    talosctl apply -f controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 -i  Once you apply controlplane.yaml, the node will install Talos on the /dev/sda disk, overwriting the existing OS, and then reboot.
    All you need now is to run the bootstrap command to initialize the etcd cluster:
talosctl --talosconfig=talosconfig bootstrap -e 10.0.0.131 -n 10.0.0.131 You can view the node’s status at any time using the dashboard command:
talosctl --talosconfig=talosconfig dashboard -e 10.0.0.131 -n 10.0.0.131 As soon as all services reach the Ready state, retrieve the kubeconfig and you’ll be able to use your newly installed Kubernetes:
talosctl --talosconfig=talosconfig kubeconfig kubeconfig
    export KUBECONFIG=${PWD}/kubeconfig Use Talm for configuration management
When you have a lot of configs, you’ll want a convenient way to manage them. This is especially useful with bare-metal nodes, where each node may have different disks, interfaces and specific network settings. As a result, you might need to maintain a separate patch for each node.
    To solve this, we developed Talm — a configuration manager for Talos Linux that works similarly to Helm.
    The concept is straightforward: you have a common config template with lookup functions, and when you generate a configuration for a specific node, Talm dynamically queries the Talos API and substitutes values into the final config.
    Talm includes almost all of the features of talosctl, adding a few extras. It can generate configurations from Helm-like templates, and remember the node and endpoint parameters for each node in the resulting file, so you don’t have to specify these parameters every time you work with a node.
    Let me show how to perform the same steps to install Talos Linux using Talm:
    First, initialize a configuration for a new cluster:
    mkdir talos
    cd talos
    talm init Adjust values for your cluster in values.yaml:
endpoint: "https://10.0.0.131:6443"
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
advertisedSubnets:
- 10.0.0.0/24 Generate a config for your node:
    talm template -t templates/controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 > nodes/node1.yaml The resulting output will look something like:
# talm: nodes=["10.0.0.131"], endpoints=["10.0.0.131"], templates=["templates/controlplane.yaml"]
    # THIS FILE IS AUTOGENERATED. PREFER TEMPLATE EDITS OVER MANUAL ONES.
machine:
  type: controlplane
  kubelet:
    nodeIP:
      validSubnets:
        - 10.0.0.0/24
  network:
    hostname: node1
    # -- Discovered interfaces:
    # eno2np0:
    #   hardwareAddr: a0:36:bc:cb:eb:98
    #   busPath: 0000:05:00.0
    #   driver: igc
    #   vendor: Intel Corporation
    #   product: Ethernet Controller I225-LM
    interfaces:
      - interface: eno2np0
        addresses:
          - 10.0.0.131/24
        routes:
          - network: 0.0.0.0/0
            gateway: 10.0.0.1
    nameservers:
      - 1.1.1.1
      - 8.8.8.8
  install:
    # -- Discovered disks:
    # /dev/sda:
    #    model: SAMSUNG MZQL21T9HCJR-00A07
    #    serial: S64GNG0X444695
    #    wwid: eui.36344730584446950025384700000001
    #    size: 1.9 TB
    disk: /dev/sda
cluster:
  controlPlane:
    endpoint: https://10.0.0.131:6443
  clusterName: talos
  network:
    serviceSubnets:
      - 10.96.0.0/16
  etcd:
    advertisedSubnets:
      - 10.0.0.0/24 All that remains is to apply it to your node:
    talm apply -f nodes/node1.yaml -i 
    Talm automatically detects the node address and endpoint from the “modeline” (a conditional comment at the top of the file) and applies the config.
    You can also run other commands in the same way without specifying node address and endpoint options. Here are a few examples:
    View the node status using the built-in dashboard command:
    talm dashboard -f nodes/node1.yaml Bootstrap etcd cluster on node1:
    talm bootstrap -f nodes/node1.yaml Save the kubeconfig to your current directory:
    talm kubeconfig kubeconfig -f nodes/node1.yaml Unlike the official talosctl utility, the generated configs do not contain secrets, allowing them to be stored in git without additional encryption. The secrets are stored at the root of your project and only in these files: secrets.yaml, talosconfig, and kubeconfig.
    Summary
    That’s our complete scheme for installing Talos Linux in nearly any situation. Here’s a quick recap:
    Use kexec to run Talos Linux on any existing system. Make sure the new kernel has the correct network settings, by collecting them from the current system and passing via the ip parameter in the cmdline. This lets you connect to the newly booted system via the API. When the kernel is booted via kexec, Talos Linux runs entirely in RAM. To install Talos on disk, apply your configuration using either talosctl or Talm. When applying the config, don’t forget to specify network settings for your node, because on-disk bootloader configuration doesn’t automatically have them. Enjoy your newly installed and fully operational Talos Linux. Additional materials:
    How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack DIY: Create Your Own Cloud with Kubernetes Cozystack Becomes a CNCF Sandbox Project Journey to Stable Infrastructures with Talos Linux & Cozystack | Andrei Kvapil | SREday London 2024 Talos Linux: You don’t need an operating system, you only need Kubernetes / Andrei Kvapil Comparing GitOps: Argo CD vs Flux CD, with Andrei Kvapil | KubeFM Cozystack on Talos Linux
    The post A Simple Way to Install Talos Linux on Any Machine, with Any Provider appeared first on Linux.com.
  19. Blogger
    By: Josh Njiruh
    Sat, 26 Apr 2025 16:27:06 +0000


    When you encounter the error ModuleNotFoundError: No module named ‘numpy’ on a Linux system, it means Python cannot find the NumPy package, which is one of the most fundamental libraries for scientific computing in Python. Here’s a comprehensive guide to resolve this issue.
    Understanding the Error
    The ModuleNotFoundError: No module named ‘numpy’ error occurs when:
    NumPy is not installed on your system NumPy is installed but in a different Python environment than the one you’re using Your Python path variables are not configured correctly Solution Methods
    Method 1: Install NumPy Using pip
    The simplest and most common solution is to install NumPy using pip, Python’s package installer:
    # For system-wide installation (may require sudo)
    sudo pip install numpy

    # For user-specific installation (recommended)
    pip install --user numpy

    # If you have multiple Python versions, be specific
    pip3 install numpy Method 2: Install NumPy Using Your Distribution’s Package Manager
    Many Linux distributions provide NumPy as a package:
    Debian/Ubuntu:
    sudo apt update
    sudo apt install python3-numpy Fedora:
    sudo dnf install python3-numpy Arch Linux:
    sudo pacman -S python-numpy Method 3: Verify the Python Environment
    If you’re using virtual environments or conda, make sure you’re activating the correct environment:
    # For virtualenv
    source myenv/bin/activate
    pip install numpy

    # For conda
    conda activate myenv
    conda install numpy Method 4: Check Your Python Path
    Sometimes the issue is related to the Python path:
    # Check which Python you're using
    which python
    which python3

    # Check installed packages
    pip list | grep numpy
    pip3 list | grep numpy Method 5: Install Using Requirements File
    If you’re working on a project with multiple dependencies:
    # Create requirements.txt with numpy listed
echo "numpy" > requirements.txt
    pip install -r requirements.txt Troubleshooting Common Issues
    Insufficient Permissions
    If you get a permission error during installation:
    pip install --user numpy Pip Not Found
    If pip command is not found:
    sudo apt install python3-pip  # For Debian/Ubuntu Build Dependencies Missing
    NumPy requires certain build dependencies:
    # For Debian/Ubuntu
    sudo apt install build-essential python3-dev Version Conflicts
    If you need a specific version:
    pip install numpy==1.20.3  # Install specific version Verifying the Installation
    After installation, verify that NumPy is properly installed:
    python -c "import numpy; print(numpy.__version__)"
    # or
    python3 -c "import numpy; print(numpy.__version__)" Best Practices
    Use Virtual Environments: Isolate your projects with virtual environments to avoid package conflicts Keep pip Updated: Run pip install --upgrade pip regularly
    Document Dependencies: Maintain a requirements.txt file for your projects Use Version Pinning: Specify exact versions of packages for production environments Additional Resources
    NumPy Official Documentation Python Package Index (PyPI)  
    The post Resolving ModuleNotFoundError: No Module Named ‘numpy’ appeared first on Unixmen.
  20. Blogger
    By: Josh Njiruh
    Sat, 26 Apr 2025 16:23:36 +0000


    In today’s interconnected world, DNS plays a crucial role in how we access websites and online services. If you’ve ever wondered “what’s my DNS?” or why it matters, this comprehensive guide will explain everything you need to know about DNS settings, how to check them, and why they’re important for your online experience.
    What is DNS?
    DNS (Domain Name System) acts as the internet’s phonebook, translating human-friendly website names like “example.com” into machine-readable IP addresses that computers use to identify each other. Without DNS, you’d need to remember complex numerical addresses instead of simple domain names.
    Why Should You Know Your DNS Settings?
    Understanding your DNS configuration offers several benefits:
    Improved browsing speed: Some DNS providers offer faster resolution times than others Enhanced security: Certain DNS services include protection against malicious websites Access to blocked content: Alternative DNS servers can sometimes bypass regional restrictions Troubleshooting: Knowing your DNS settings is essential when diagnosing connection issues How to Check “What’s My DNS” on Different Devices
    Linux
    Open Terminal Type cat /etc/resolv.conf and press Enter
    Look for “nameserver” entries Windows
    Open Command Prompt (search for “cmd” in the Start menu) Type ipconfig /all and press Enter
    Look for “DNS Servers” in the results Mac
    Open System Preferences Click on Network Select your active connection and click Advanced Go to the DNS tab to view your DNS servers Mobile Devices
    Android
    Go to Settings > Network & Internet > Advanced > Private DNS iOS
    Go to Settings > Wi-Fi Tap the (i) icon next to your connected network Scroll down to find DNS information Popular DNS Providers
    Several organizations offer public DNS services with various features:
    Google DNS: 8.8.8.8 and 8.8.4.4 Cloudflare: 1.1.1.1 and 1.0.0.1 OpenDNS: 208.67.222.222 and 208.67.220.220 Quad9: 9.9.9.9 and 149.112.112.112 When to Consider Changing Your DNS
    You might want to change your default DNS settings if:
    You experience slow website loading times You want additional security features Your current DNS service is unreliable You’re looking to bypass certain network restrictions The Impact of DNS on Security and Privacy
    Your DNS provider can see which websites you visit, making your choice of DNS service an important privacy consideration. Some providers offer enhanced privacy features like DNS-over-HTTPS (DoH) or DNS-over-TLS (DoT) to encrypt your DNS queries.
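On Linux, you can also confirm from the terminal which resolver is actually answering your queries, assuming the dig utility (from your distribution's dnsutils or bind-utils package) is installed:
dig example.com | grep SERVER
# on systemd-resolved setups this often reports 127.0.0.53; run resolvectl status to see the upstream servers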
    Summary
    Knowing “what’s my DNS” is more than just technical curiosity—it’s an important aspect of managing your internet connection effectively. Whether you’re troubleshooting connection issues, looking to improve performance, or concerned about privacy, understanding and potentially customizing your DNS settings can significantly enhance your online experience.
    Similar Articles 
    https://nordvpn.com/blog/what-is-my-dns/
    https://us.norton.com/blog/how-to/what-is-my-dns/ 
    The post Understanding DNS: What’s My DNS and Why Does It Matter? appeared first on Unixmen.
  21. Blogger
    By: Josh Njiruh
    Sat, 26 Apr 2025 16:02:32 +0000


    When working with Markdown, understanding how to create new lines is essential for proper formatting and readability. This guide will explain everything you need to know about creating line breaks in Markdown documents.
    What is a Markdown New Line?
    In Markdown, creating new lines isn’t as straightforward as simply pressing the Enter key. Markdown has specific syntax requirements for line breaks that differ from traditional word processors.
    How to Create a New Line in Markdown
    There are several methods to create a new line in Markdown:
    1. The Double Space Method
    The most common way to create a line break in Markdown is by adding two spaces at the end of a line before pressing Enter:
This is the first line.··
This is the second line. (Note: The “··” represents two spaces that aren’t visible in the rendered output)
    2. The Backslash Method
    You can also use a backslash at the end of a line to force a line break:
This is the first line.\
This is the second line. 3. HTML Break Tag
For guaranteed compatibility across all Markdown renderers, you can use the HTML <br> tag:
This is the first line.<br>
This is the second line. Common Issues
    Many newcomers to Markdown struggle with line breaks because:
    The double space method isn’t visible in the editor Different Markdown flavors handle line breaks differently Some Markdown editors automatically trim trailing spaces Creating New Lines in Different Markdown Environments
    Different platforms have varying implementations of Markdown:
    GitHub Flavored Markdown (GFM) supports the double space method CommonMark requires two spaces for line breaks Some blogging platforms like WordPress may handle line breaks automatically Best Practices for Line Breaks
    For the most consistent results across platforms:
1. HTML <br> for Portability:
The <br> tag forces a line break, ensuring consistency across browsers and platforms. Use it when precise line control is vital, like in addresses or poems. Avoid overuse to maintain clean HTML.
    2. Double Spaces in Documentation:
    In plain text and markdown, double spaces at line ends often create breaks. This is readable, but not universally supported. Best for simple documentation, not HTML.
    3. Test Before Publishing:
    Platforms interpret line breaks differently. Always test your content in the target environment to guarantee correct formatting and prevent unexpected layout issues.
    Creating Paragraph Breaks
    To create a paragraph break (with extra spacing), simply leave a blank line between paragraphs:
This is paragraph one.

This is paragraph two. Understanding the nuances of line breaks in Markdown will make your documents more readable and ensure they render correctly across different platforms and applications.
    Similar Articles
    https://www.markdownguide.org/basic-syntax/
    https://dev.to/cassidoo/making-a-single-line-break-in-markdown-3db1 
    The post Markdown: How to Add A New Line appeared first on Unixmen.
  22. Blogger

    How to Update Ubuntu

    By: Josh Njiruh
    Sat, 26 Apr 2025 15:58:04 +0000


    Updating your Ubuntu system is crucial for maintaining security, fixing bugs, and accessing new features. This article will guide you through the various methods to update Ubuntu, from basic command-line options to graphical interfaces.
    Why Regular Updates Matter
    Keeping your Ubuntu system updated provides several benefits:
    Security patches that protect against vulnerabilities Bug fixes for smoother operation Access to new features and improvements Better hardware compatibility Longer-term system stability Command-Line Update Methods
    The Basic Update Process
    The simplest way to update Ubuntu via the terminal is:
    sudo apt update
    sudo apt upgrade The first command refreshes your package lists, while the second installs available updates.
    Comprehensive System Updates
    For a more thorough update, including kernel updates and package removals:
    sudo apt update
    sudo apt full-upgrade Security Updates Only
If you only want to preview security-related updates (note that the -s and --dry-run flags below only simulate the run; to actually install the pending security updates, run sudo unattended-upgrade without --dry-run):
    sudo apt update
    sudo apt upgrade -s
    sudo unattended-upgrade --dry-run Graphical Interface Updates
    Software Updater
    Ubuntu’s built-in Software Updater provides a user-friendly way to update:
    Click on the “Activities” button in the top-left corner Search for “Software Updater” Launch the application and follow the prompts Software & Updates Settings
    For more control over update settings:
    Open “Settings” > “Software & Updates” Navigate to the “Updates” tab Configure how often Ubuntu checks for updates and what types to install Upgrading Ubuntu to a New Version
    Using the Update Manager
    To upgrade to a newer Ubuntu version:
    sudo do-release-upgrade For a graphical interface, use:
    Open Software Updater Click “Settings” Set “Notify me of a new Ubuntu version” to your preference When a new version is available, you’ll be notified Scheduled Updates
    For automatic updates:
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure unattended-upgrades This configures your system to install security updates automatically.
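Behind the scenes, this typically writes a small configuration file. On my systems it looks roughly like the following; the exact path and values can differ between releases:
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";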
    Troubleshooting Common Update Issues
    Package Locks
    If you encounter “unable to acquire the dpkg frontend lock”:
    sudo killall apt apt-get
    sudo rm /var/lib/apt/lists/lock
    sudo rm /var/cache/apt/archives/lock
    sudo rm /var/lib/dpkg/lock Repository Issues
    If repositories aren’t responding:
    Navigate to “Software & Updates” Under “Ubuntu Software,” change the download server Insufficient Space
    For disk space issues:
    sudo apt clean
    sudo apt autoremove Best Practices for Ubuntu Updates
    Regular Schedule: Update at least weekly for security Backups: Always back up important data before major updates Changelogs: Review update notes for critical changes Timing: Schedule updates during low-usage periods Testing: For servers, test updates in a development environment first Summary
In summary, regularly updating your Ubuntu system is essential for security and performance. Whether you prefer the command line or graphical interfaces, Ubuntu provides flexible options to keep your system current and protected.
    Similar Articles
    https://ubuntu.com/server/docs/how-to-upgrade-your-release/
    https://www.cyberciti.biz/faq/upgrade-update-ubuntu-using-terminal/
    The post How to Update Ubuntu appeared first on Unixmen.
  23. Blogger
    By: Josh Njiruh
    Sat, 26 Apr 2025 15:55:04 +0000


    Emojis have become an essential part of modern digital communication, adding emotion and context to our messages. While typing emojis is straightforward on mobile devices, doing so on Ubuntu and other Linux distributions can be less obvious. This guide covers multiple methods on how to type emojis in Ubuntu, from keyboard shortcuts to dedicated applications.
    Why Use Emojis on Ubuntu?
    Emojis aren’t just for casual conversations. They can enhance:
    Professional communications (when used appropriately) Documentation Social media posts Blog articles Desktop applications Terminal customizations Method 1: Character Map (Pre-installed)
    Ubuntu comes with a Character Map utility that includes emojis:
    Press the Super (Windows) key and search for “Character Map” Open the application In the search box, type “emoji” or browse categories Double-click an emoji to select it Click “Copy” to copy it to your clipboard Paste it where needed using Ctrl+V Pros: No installation required Cons: Slower to use for frequent emoji needs
    Method 2: How to Type Emojis Using Keyboard Shortcuts
    Ubuntu provides a built-in keyboard shortcut for emoji insertion:
    Press Ctrl+Shift+E or Ctrl+. (period) in most applications An emoji picker window will appear Browse or search for your desired emoji Click to insert it directly into your text Note: This shortcut works in most GTK applications (like Firefox, GNOME applications) but may not work in all software.
    Method 3: Emoji Selector Extension
    For GNOME desktop users:
    Open the “Software” application Search for “Extensions” Install GNOME Extensions app if not already installed Visit extensions.gnome.org in Firefox Search for “Emoji Selector” Install the extension Access emojis from the top panel Pros: Always accessible from the panel Cons: Only works in GNOME desktop environment
    Method 4: EmojiOne Picker
    A dedicated emoji application:
    sudo apt install emoji-picker
    After installation, launch it from your applications menu or by running:
    emoji-picker
    Pros: Full-featured dedicated application
    Cons: Requires installation
    Method 5: Using the Compose Key
    Set up a compose key to create emoji sequences:
    1. Go to Settings > Keyboard > Keyboard Shortcuts > Typing.
    2. Set a Compose Key (Right Alt is common).
    3. Use combinations like Compose + : + ) for ☺ and Compose + : + ( for ☹.
    Pros: Works system-wide
    Cons: Limited emoji selection, requires memorizing combinations
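    If the built-in sequences feel limiting, the compose table can be extended with a per-user file. The lines below are a minimal sketch of a ~/.XCompose file; the sequence and the emoji it produces are made up for illustration, so pick your own:
    # ~/.XCompose -- custom compose sequences (example only)
    include "%L"                    # keep the system's default compose table
    <Multi_key> <t> <u> : "👍"      # Compose, then t, then u inserts a thumbs-up
    Depending on the toolkit and input method in use, you may need to log out and back in before applications pick up a new ~/.XCompose file.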
    Method 6: Copy-Paste from the Web
    A simple fallback option:
    1. Visit a website like Emojipedia.
    2. Browse or search for emojis.
    3. Copy and paste as needed.
    Pros: Access to all emojis with descriptions
    Cons: Requires internet access, less convenient
    Method 7: Using Terminal and Commands
    For terminal lovers, you can install emote:
    sudo snap install emote
    Then launch it from the terminal:
    emote
    Or set up a keyboard shortcut to launch it quickly.
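    As an example of that last step, on GNOME a custom shortcut can also be created from the terminal with gsettings. The snippet below is a sketch that binds Super+period to emote; the key choice is arbitrary, and note that the first command overwrites any existing custom keybindings, so append to the current list instead if you already have some:
    KEY=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
    gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$KEY']"
    gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY name 'Emote'
    gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY command 'emote'
    gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY binding '<Super>period'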
    Method 8: IBus Emoji
    For those using IBus input method:
    1. Install IBus if not already installed: sudo apt install ibus
    2. Configure IBus to start at login: im-config -n ibus
    3. Log out and back in.
    4. Press Ctrl+Shift+e to access the emoji picker in text fields.
    Troubleshooting Emoji Display Issues
    If emojis appear as boxes or don’t display correctly:
    1. Install font support: sudo apt install fonts-noto-color-emoji
    2. Update the font cache: fc-cache -f -v
    3. Log out and back in.
    Using Emojis in Specific Applications
    In the Terminal
    Most modern terminal emulators support emoji display. Try:
    echo "Hello 👋 Ubuntu!" In LibreOffice
    Use the Insert > Special Character menu or the keyboard shortcuts mentioned above.
    In Code Editors like VS Code
    Most code editors support emoji input through the standard keyboard shortcuts or by copy-pasting.
    Summary
    Ubuntu offers multiple ways to type and use emojis, from built-in utilities to specialized applications. Choose the method that best fits your workflow, whether you prefer keyboard shortcuts, graphical selectors, or terminal-based solutions.
    By incorporating these methods into your Ubuntu usage, you can enhance your communications with the visual expressiveness that emojis provide, bringing your Linux experience closer to what you might be used to on mobile devices.


    Similar Articles
    https://askubuntu.com/questions/1045915/how-to-insert-an-emoji-into-a-text-in-ubuntu-18-04-and-later/
    http://www.omgubuntu.co.uk/2018/06/use-emoji-linux-ubuntu-apps
    The post How to Type Emojis in Ubuntu Linux appeared first on Unixmen.
  24. Blogger
    by: Abhishek Prakash
    Fri, 25 Apr 2025 21:30:04 +0530

    Choosing the right tools is important for an efficient workflow. A seasoned Fullstack dev shares his favorites.
    7 Utilities to Boost Development Workflow ProductivityHere are a few tools that I have discovered and use to improve my development process.Linux HandbookLHB Community
    Here are the highlights of this edition:
    The magical CDPATH
    Using host networking with docker compose
    Docker interview questions
    And more tools, tips and memes for you
    This edition of LHB Linux Digest newsletter is supported by PikaPods.
    ❇️ Self-hosting without hassle
    PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.
    Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.
    PikaPods - Instant Open Source App HostingRun the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.Instant Open Source App Hosting  
     
      This post is for subscribers only
  25. Blogger
    by: Abhishek Prakash
    Fri, 25 Apr 2025 20:55:16 +0530

    If you manage servers on a regular basis, you'll often find yourself entering some directories more often than others.
    For example, I self-host Ghost CMS to run this website. The Ghost install is located at /var/www/ghost/. I have to cd to this directory and then use its subdirectories to manage the Ghost install. If I have to enter its log directory directly, I have to type /var/www/ghost/content/log.
    Typing out ridiculously long paths takes several seconds, even with tab completion.
    Relatable? But what if I told you there's a magical shortcut that can make those lengthy directory paths vanish like free merchandise at a tech conference?
    Enter CDPATH, the unsung hero of Linux navigation. I'm genuinely surprised that many new Linux users are not even aware of it!
    What is CDPATH?
    CDPATH is an environment variable that works a lot like the more familiar PATH variable (which helps your shell find executable programs). But instead of finding programs, CDPATH helps the cd command find directories.
    Normally, when you use cd some-dir, the shell looks for some-dir only in the current working directory.
    With CDPATH, you tell the shell to also look in other directories you define. If it finds the target directory there, it cds into it — no need to type full paths.
    How does CDPATH work?
    Imagine this directory structure:
    /home/abhishek/
    ├── Work/
    │   └── Projects/
    │       └── WebApp/
    ├── Notes/
    └── Scripts/
    Let's say I often visit the WebApp directory, and for that I'll have to type the absolute path if I am at some unrelated location:
    cd /home/abhishek/Work/Projects/WebApp
    Or, since I am a bit smart, I'll use the ~ shortcut for the home directory.
    cd ~/Work/Projects/WebApp
    But if I add its parent directory to the CDPATH variable:
    export CDPATH=$HOME/Work/Projects
    I could enter the WebApp directory from anywhere in the filesystem just by typing this:
    cd WebApp
    Awesome! Isn't it?
    🚧You should always add . (current directory) to CDPATH, and your CDPATH should start with it. This way, the shell will look for the directory in the current directory first and then in the directories you have specified in the CDPATH variable.
    How to set the CDPATH variable?
    Setting up CDPATH is delightfully straightforward. If you ever added anything to the PATH variable, it's pretty much the same.
    First, think about the frequently used parent directories that you want cd to search when no explicit path is given.
    Let's say, I want to add /home/abhishek/work and /home/abhishek/projects in CDPATH. I would use:
    export CDPATH=.:/home/abhishek/work:/home/abhishek/projects
    This creates a search path that includes:
    The current directory (.)
    My work directory
    My projects directory
    Which means if I type cd some_dir, it will first check whether some_dir exists in the current directory. If it is not found there, it searches the work and projects directories, in that order.
    🚧The order of the directories in CDPATH matters.
    Let's say that both the work and projects directories have a directory named docs, which is not in the current directory.
    If I use cd docs, it will take me to /home/abhishek/work/docs. Why? Because the work directory comes first in the CDPATH.
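    Here is roughly what that looks like in a shell session, using the example paths from above. Note a nice side effect: when cd resolves a directory through a CDPATH entry other than the current directory, it prints the directory it switched to:
    $ export CDPATH=.:/home/abhishek/work:/home/abhishek/projects
    $ cd docs
    /home/abhishek/work/docs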
    💡If things look fine in your testing, you should make it permanent by adding the "export CDPATH" command you used earlier to your shell profile.
    Whatever you export in CDPATH is only valid for the current session. To make the change permanent, you should add it to your shell profile.
    I am assuming that you are using the bash shell. In that case, it should be ~/.profile or ~/.bash_profile.
    Open this file with a text editor like Nano and add the CDPATH export command to the end.
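    For instance, the lines appended to the end of the file could look like this, reusing the example directories from earlier; substitute your own frequently visited paths:
    # Search the current directory first, then my frequently used parent directories
    export CDPATH=.:/home/abhishek/work:/home/abhishek/projects
    After saving, either source the file in your current shell or log out and back in so that new sessions pick it up.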
    📋When you use the cd command with an absolute path, or a relative path starting with ./ or ../, it won't refer to CDPATH. CDPATH is more like: hey, instead of just looking into my current sub-directories, search the specified directories, too. When you already give cd such a path, there is no need to search; cd knows where you want to go.
    How to find the CDPATH value?
    CDPATH is an environment variable. How do you print the value of an environment variable? The simplest way is to use the echo command:
    echo $CDPATH
    📋If you have tab completion set up with the cd command already, it will also work for the directories listed in CDPATH.
    When not to use CDPATH?
    Like all powerful tools, CDPATH comes with some caveats:
    Duplicate names: If you have identically named directories across your filesystem, you might not always land where you expect.
    Scripts: Be cautious about using CDPATH in scripts, as it might cause unexpected behavior. Scripts should generally use absolute paths for clarity.
    Demos and teaching: When working with others who aren't familiar with your CDPATH setup, your lightning-fast navigation might look like actual wizardry (which is kind of cool, to be honest), but it could confuse your students.
    💡Including .. (parent directory) in your CDPATH creates a super-neat effect: you can navigate to sibling directories without typing ../. If you're in /usr/bin and want to go to /usr/lib, just type cd lib.
    Why aren’t more sysadmins using CDPATH in 2025?
    The CDPATH used to be a popular tool in the 90s, I think. Ask any sysadmin older than 50 years, and CDPATH would have been in their arsenal of CLI tools.
    But these days, many Linux users have not even heard of the CDPATH concept. Surprising, I know.
    Ever since I discovered CDPATH, I have been using it extensively, especially on the Ghost and Discourse servers I run. Saves me a few keystrokes, and I am proud of those savings.
    By the way, if you don't mind including 'non-standard' tools in your workflow, you may also explore autojump instead of CDPATH.
    GitHub - wting/autojump: A cd command that learns - easily navigate directories from the command lineGitHubwting
    🗨️ Your turn. Were you already familiar with CDPATH? If yes, how do you use it? If not, is this something you are going to use in your workflow?
