Blog Entries posted by Blogger

  1. By: Edwin
    Wed, 30 Apr 2025 13:08:23 +0000


Many hardcore Linux users were introduced to the tech world by tinkering with the tiny Raspberry Pi devices. One such tiny device is the Raspberry Pi Zero. Its appearance might fool a lot of people, but it packs a surprising punch for its size and price. Whether you’re a beginner, a maker, or a developer looking to prototype on a budget, there are countless Raspberry Pi Zero projects you can build to automate tasks, learn Linux, or just have fun.
In this detailed guide, we will list and explain ten of the most practical and creative projects you can build with a Raspberry Pi Zero or Zero W (the version with built-in Wi-Fi). These ideas are beginner-friendly and open-source focused. We at Unixmen carefully curated them because they are perfect for DIY tech enthusiasts. Ready? Get. Set. Create!
    What is the Raspberry Pi Zero?
The Raspberry Pi Zero is a tiny, credit-card-sized single-board computer designed for low-power, low-cost computing. The typical specs are:
• 1GHz single-core CPU
• 512MB RAM
• Mini HDMI and micro USB ports
• 40 GPIO pins
• Available with or without built-in Wi-Fi (Zero W/WH)
Though its size may look limiting, it is ideal for most lightweight Linux-based projects.
    Ad Blocker
This will be very useful to you and your friends and family. Create a network-wide ad blocker with Pi-hole and a Raspberry Pi Zero. It filters DNS queries to block ads across all devices connected to your Wi-Fi.
Why this will be popular:
• Blocks ads on websites, apps, and smart TVs
• Reduces bandwidth and improves speed
• Enhances privacy
How to Install Pi-hole
    Execute this command to install Pi-hole
curl -sSL https://install.pi-hole.net | bash
Retro Gaming Console
    If you are a fan of retro games, you will love this while you create it. Transform your Pi Zero into a portable gaming device using RetroPie or Lakka. Play classic games from NES, SNES, Sega, and more.
    Prerequisites
• Micro SD card
• USB controller or GPIO-based gamepad
• Mini HDMI cable for output
Ethical Testing Wi-Fi Hacking Lab
    Use tools like Kali Linux ARM or PwnPi to create a portable penetration testing toolkit. The Pi Zero W is ideal for ethical hacking practice, especially for cybersecurity students.
How Will This Be Useful
• Wi-Fi scanning
• Packet sniffing
• Network auditing
We must warn you to use this project responsibly. Deploy this only on networks you own or have permission to test.
    Lightweight Web Server
    Run a lightweight Apache or Nginx web server to host static pages or mini applications. This project is great for learning web development or hosting a personal wiki.
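If you want to try it, here is a minimal sketch using Nginx on Raspberry Pi OS (Debian package names assumed; your web root may differ):
# Install and start Nginx
sudo apt update
sudo apt install nginx
sudo systemctl enable --now nginx

# Serve a simple static page from the default web root
echo '<h1>Hello from my Pi Zero</h1>' | sudo tee /var/www/html/index.html

# Then browse to http://<pi-zero-ip>/ from any device on your network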
How Can You Use This Project
• Personal homepage
• Markdown notes
• Self-hosted tools like Gitea, DuckDNS, or Uptime Kuma
Smart Mirror Controller
    Build a smart mirror using a Raspberry Pi Zero and a 2-way acrylic mirror to display:
• Time and weather
• News headlines
• Calendar events
Use MagicMirror² for easy configuration.
    IoT Sensor Node
    Add a DHT11/22, PIR motion sensor, or GPS module to your Pi Zero and turn it into an IoT data collector. Send the data to:
• Home Assistant
• MQTT broker
• Google Sheets or InfluxDB
This is a great lightweight solution for remote sensing.
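As a minimal sketch of the MQTT route (the broker address and topic below are placeholders; adjust them for your setup):
# Install the Mosquitto command-line clients (Debian/Raspberry Pi OS)
sudo apt install mosquitto-clients

# Publish a single sensor reading to a broker on the LAN
mosquitto_pub -h 192.168.1.10 -t home/pizero/temperature -m "22.5"
A cron job or a small loop in a shell script can then publish fresh readings at a fixed interval.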
    Portable File Server (USB OTG)
    You can set up your Pi Zero as a USB gadget that acts like a storage device or even an Ethernet adapter when plugged into a host PC. To do this, use “g_mass_storage” or “g_ether” kernel modules to emulate devices:
modprobe g_mass_storage file=/path/to/file.img
Time-Lapse Camera
    You can connect a Pi Camera module and capture time-lapse videos of sunsets, plant growth, or construction projects.
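Using the tools listed below, a rough sketch could look like this (assuming the legacy raspistill stack and hypothetical paths):
# Capture one frame; schedule this via cron, e.g. once per minute:
# * * * * * raspistill -o /home/pi/timelapse/$(date +\%s).jpg
raspistill -o /home/pi/timelapse/frame.jpg

# Later, stitch the captured frames into a 24 fps video
ffmpeg -framerate 24 -pattern_type glob -i '/home/pi/timelapse/*.jpg' timelapse.mp4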
    Tools You Require
• raspistill
• “ffmpeg” for converting images to video
• Cron jobs for automation
Headless Linux Learning Box
    You can install Raspberry Pi OS Lite and practice:
• SSH
• Command line tools (grep, sed, awk)
• Bash scripting
• Networking with “netcat”, “ss”, “iptables”
E-Ink Display Projects
Libraries like Python EPD make it easy to control e-ink displays. Use the Pi Zero with a small e-ink screen to display useful information like:
• Calendar events
• Quotes of the day
• Weather updates
• RSS feeds
Fun Tip: Combine Projects!
You can combine several of these Raspberry Pi Zero projects into one system. For example, you can run an e-ink display and an ad blocker on the same board, or build a retro game console that also acts as a media server.
    Wrapping Up
    Whether you’re into IoT, cybersecurity, retro gaming, or automation, the Raspberry Pi Zero helps you create fun and useful projects. With its low cost, tiny size, and solid performance, it’s the perfect device for building compact, lightweight Linux-based systems.
As of 2025, there is a growing number of open-source tools and community tutorials to support even the most ambitious Raspberry Pi Zero projects. All you need is an idea and a little curiosity. Learn more about Linux-based applications at Unixmen!
    Related Articles
How to Use Fopen: C projects guide
Raspberry Pi Firewall: Step-by-step guide for an easy setup
Gooseberry: An alternative to Raspberry Pi
The post Raspberry Pi Zero Projects: Top 10 in 2025 appeared first on Unixmen.
  2. Revisiting Image Maps

    by: Andy Clarke
    Wed, 30 Apr 2025 12:12:45 +0000

    I mentioned last time that I’ve been working on a new website for Emmy-award-winning game composer Mike Worth. He hired me to create a highly graphical design that showcases his work.
Mike loves ’90s animation, particularly Disney’s DuckTales and other animated series. He challenged me to find a way to incorporate their retro ’90s style into his design without making it a pastiche. But that wasn’t my only challenge. I also needed to achieve that ’90s feel by using up-to-the-minute code to maintain accessibility, performance, responsiveness, and semantics.
    Designing for Mike was like a trip back to when mainstream website design seemed more spontaneous and less governed by conventions and best practices. Some people describe these designs as “whimsical”:
    But I’m not so sure that’s entirely accurate. “Playful?” Definitely. “Fanciful?” Possibly. But “fantastic?” That depends. “Whimsy” sounds superfluous, so I call it “expressive” instead.
    Studying design from way back, I remembered how websites often included graphics that combined branding, content, and navigation. Pretty much every reference to web design in the ’90s — when I designed my first website — talks about Warner Brothers’ Space Jam from 1996.
Warner Brothers’ Space Jam (1996)
So, I’m not going to do that.
    Brands like Nintendo used their home pages to direct people to their content while making branded visual statements. Cheestrings combined graphics with navigation, making me wonder why we don’t see designs like this today. Goosebumps typified this approach, combining cartoon illustrations with brightly colored shapes into a functional and visually rich banner, proving that being useful doesn’t mean being boring.
Left to right: Nintendo, Cheestrings, Goosebumps.
In the ’90s, when I developed graphics for websites like these, I either sliced them up and put their parts in tables or used mostly forgotten image maps.
    A brief overview of properties and values
    Let’s run through a quick refresher. Image maps date all the way back to HTML 3.2, where, first, server-side maps and then client-side maps defined clickable regions over an image using map and area elements. They were popular for graphics, maps, and navigation, but their use declined with the rise of CSS, SVG, and JavaScript.
    <map> adds clickable areas to a bitmap or vector image.
    <map name="projects"> ... </map> That <map> is linked to an image using the usemap attribute:
    <img usemap="#projects" ...> Those elements can have separate href and alt attributes and can be enhanced with ARIA to improve accessibility:
    <map name="projects"> <area href="" alt="" … /> ... </map> The shape attribute specifies an area’s shape. It can be a primitive circle or rect or a polygon defined by a set of absolute x and y coordinates:
    <area shape="circle" coords="..." ... /> <area shape="rect" coords="..." ... /> <area shape="poly" coords="..." ... /> Despite their age, image maps still offer plenty of benefits. They’re lightweight and need (almost) no JavaScript. More on that in just a minute. They’re accessible and semantic when used with alt, ARIA, and title attributes. Despite being from a different era, even modern mobile browsers support image maps.
Design by Andy Clarke, Stuff & Nonsense.
Mike Worth’s website will launch in April 2025, but you can see examples from this article on CodePen. My design for Mike Worth includes several graphic navigation elements, which made me wonder if image maps might still be an appropriate solution.
    Image maps in action
    Mike wants his website to showcase his past work and the projects he’d like to do. To make this aspect of his design discoverable and fun, I created a map for people to explore by pressing on areas of the map to open modals. This map contains numbered circles, and pressing one pops up its modal.
    My first thought was to embed anchors into the external map SVG:
    <img src="projects.svg" alt="Projects"> <svg ...> ... <a href="..."> <circle cx="35" cy="35" r="35" fill="#941B2F"/> <path fill="#FFF" d="..."/> </a> </svg> This approach is problematic. Those anchors are only active when SVG is inline and don’t work with an <img> element. But image maps work perfectly, even though specifying their coordinates can be laborious. Fortunately, plenty of tools are available, which make defining coordinates less tedious. Upload an image, choose shape types, draw the shapes, and copy the markup:
    <img src="projects.svg" usemap="#projects-map.svg"> <map name="projects-map.svg"> <area href="" alt="" coords="..." shape="circle"> <area href="" alt="" coords="..." shape="circle"> ... </map> Image maps work well when images are fixed sizes, but flexible images present a problem because map coordinates are absolute, not relative to an image’s dimensions. Making image maps responsive needs a little JavaScript to recalculate those coordinates when the image changes size:
function resizeMap() {
  const image = document.getElementById("projects");
  const map = document.querySelector("map[name='projects-map']");
  if (!image || !map || !image.naturalWidth) return;
  const scale = image.clientWidth / image.naturalWidth;
  map.querySelectorAll("area").forEach(area => {
    if (!area.dataset.originalCoords) {
      area.dataset.originalCoords = area.getAttribute("coords");
    }
    const scaledCoords = area.dataset.originalCoords
      .split(",")
      .map(coord => Math.round(coord * scale))
      .join(",");
    area.setAttribute("coords", scaledCoords);
  });
}

["load", "resize"].forEach(event =>
  window.addEventListener(event, resizeMap)
);
I still wasn’t happy with this implementation as I wanted someone to be able to press on much larger map areas, not just the numbered circles.
    Every <path> has coordinates which define how it’s drawn, and they’re relative to the SVG viewBox:
    <svg width="1024" height="1024"> <path fill="#BFBFBF" d="…"/> </svg> On the other hand, a map’s <area> coordinates are absolute to the top-left of an image, so <path> values need to be converted. Fortunately, Raphael Monnerat has written PathToPoints, a tool which does precisely that. Upload an SVG, choose the point frequency, copy the coordinates for each path, and add them to a map area’s coords:
    <map> <area href="" shape="poly" coords="..."> <area href="" shape="poly" coords="..."> <area href="" shape="poly" coords="..."> ... </map> More issues with image maps
    Image maps are hard-coded and time-consuming to create without tools. Even with tools for generating image maps, converting paths to points, and then recalculating them using JavaScript, they could be challenging to maintain at scale.
    <area> elements aren’t visible, and except for a change in the cursor, they provide no visual feedback when someone hovers over or presses a link. Plus, there’s no easy way to add animations or interaction effects.
    But the deal-breaker for me was that an image map’s pixel-based values are unresponsive by default. So, what might be an alternative solution for implementing my map using CSS, HTML, and SVG?
    Anchors positioned absolutely over my map wouldn’t solve the pixel-based positioning problem or give me the irregular-shaped clickable areas I wanted. Anchors within an external SVG wouldn’t work either.
    But the solution was staring me in the face. I realized I needed to:
• Create a new SVG path for each clickable area.
• Make those paths invisible.
• Wrap each path inside an anchor.
• Place the anchors below other elements at the end of my SVG source.
• Replace that external file with inline SVG.
I created a set of six much larger paths which define the clickable areas, each with its own fill to match its numbered circle. I placed each anchor at the end of my SVG source:
    <svg … viewBox="0 0 1024 1024"> <!-- Visible content --> <g>...</g> <!-- Clickable areas -->` <g id="links">` <a href="..."><path fill="#B48F4C" d="..."/></a>` <a href="..."><path fill="#6FA676" d="..."/></a>` <a href="..."><path fill="#30201D" d="..."/></a>` ... </g> </svg> Then, I reduced those anchors’ opacity to 0 and added a short transition to their full-opacity hover state:
#links a {
  opacity: 0;
  transition: all .25s ease-in-out;
}

#links a:hover {
  opacity: 1;
}
While using an image map’s <area> sadly provides no visual feedback, embedded anchors and their content can respond to someone’s action, hint at what’s to come, and add detail and depth to a design.
    I might add gloss to those numbered circles to be consistent with the branding I’ve designed for Mike. Or, I could include images, titles, or other content to preview the pop-up modals:
    <g id="links"> <a href="…"> <path fill="#B48F4C" d="..."/> <image href="..." ... /> </a> </g> Try it for yourself:
Expressive design, modern techniques
    Designing Mike Worth’s website gave me a chance to blend expressive design with modern development techniques, and revisiting image maps reminded me just how important a tool image maps were during the period Mike loves so much.
    Ultimately, image maps weren’t the right tool for Mike’s website. But exploring them helped me understand what I really needed: a way to recapture the expressiveness and personality of ’90s website design using modern techniques that are accessible, lightweight, responsive, and semantic. That’s what design’s about: choosing the right tool for a job, even if that sometimes means looking back to move forward.
    Biography: Andy Clarke
    Often referred to as one of the pioneers of web design, Andy Clarke has been instrumental in pushing the boundaries of web design and is known for his creative and visually stunning designs. His work has inspired countless designers to explore the full potential of product and website design.
    Andy’s written several industry-leading books, including Transcending CSS, Hardboiled Web Design, and Art Direction for the Web. He’s also worked with businesses of all sizes and industries to achieve their goals through design.
    Visit Andy’s studio, Stuff & Nonsense, and check out his Contract Killer, the popular web design contract template trusted by thousands of web designers and developers.
    Revisiting Image Maps originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  3. by: Sreenath
    Wed, 30 Apr 2025 05:46:58 GMT

Logseq differs from conventional note-taking applications in many respects.
    Firstly, it follows a note block approach, rather than a page-first approach for content organization. This allows Logseq to achieve data interlinking at the sentence level. That is, you can refer to any sentence of a note in any other note inside your database.
Another equally important feature is the “Special Pages”. These are the “Journals” and “Contents” pages. Both of these special pages have use cases that go far beyond what their names indicate.
    The Journals page
The “Journals” page is the first page you will see when you open Logseq. Here, you can see dates as headings. The Logseq documentation suggests that new users rely heavily on this Journals page for taking notes until they understand Logseq better.
Journals Page
As the name suggests, this is the daily journals page. Whatever you write under a date will be saved as a separate Markdown file with the date as the title. You can see these pages in your file manager, too. Head to the location you use for Logseq, then visit the journals folder.
Journals Markdown Files in File Manager
Let's see how to make this Journals page most useful.
    Journal page as a daily diary
    Let's start with the basics. The “Journals” page can be used as your daily diary page.
    If you are a frequent diary writer, Logseq is the best tool to digitize your life experiences and daily thoughts.
    Each day, a new page will be created for you.
If you need a page for a day in the past, just click on the Create button at the bottom of the Logseq window and select “New page”.
Click on Create → New Page
In the dialog, enter the date for the required journal in the format Mar 20th, 2023. Press Enter. This will create the Journal page for the specified date!
Create Journal page for an old date
Journal as a note organizer
If you have read the Logseq Pages and Links article in this series, you may recall that Logseq treats the concepts of Pages, Tags, etc. in almost the same manner. If you want to create a new note, the best way is to use the keyboard method:
#[[Note Title Goes Here]]
The above creates a page for you. Now, the best place to create a new page is the Journals page.
    Logseq has a powerful backlink feature. With this, if you use the Journals page to create a new page, you don't need to add any date references inside the page separately, since at the very end of the page, you will have a backlink to that day's journal.
Note with date reference
This is beneficial because you can easily recall when a note was first created.
    Journal as a to-do organizer
    Logseq can be used as a powerful task manager application as well, and the Journals page plays a crucial role in it.
    If you come across any task while you are in the middle of something, just open the Journals page in Logseq and press the / key.
    Search and enter TODO. Then type the task you are about to do.
    Once done, press / again and search for Date Picker. Select a date from the calendar.
Creating a TODO task in Logseq
    That's it. You have created a to-do item with a due date. Now, when the date arrives, you will get a link on that day's Journal page. Thus, when you open Logseq on that day, you will see this item.
    It will also contain the link to the journal page from where you added the task.
Other than that, you can search for the TODO page and open it to see all your tasks marked with TODO.
Search for the TODO page to list all the to-do tasks
    Journal to manage tasks
Task management is not just about adding due dates to your tasks. You should be able to track a project and know at what stage a particular task is. For this, Logseq has some built-in tags/pages. For example, LATER, DOING, DONE, etc.
    These tags can be accessed by pressing the / key and searching for the name.
For example, if you have ideas that should be acted on later, but you are not sure when exactly, add them with the LATER tag, just like the TODO tag explained above.
Now, you can search for the LATER tag to see which tasks are on that list.
Using the LATER tag in Logseq
Using the Journal page is beneficial here because you can recall the date on which a particular task was added, giving you more insight into it. This helps even more if you have entered your thoughts for that day in the Journal.
    The Contents Page
    Logseq has a special Contents page type, but don't confuse it with the usual table of contents. That is not its purpose. Here, I will mention the way I use the contents page. You can create your own workflows once you know its potential.
You can think of the Contents page as a manually created dashboard for your notes and database, or a simple home page from which you can access frequently needed content.
What sets the Contents page apart from others is that it is always visible in the right sidebar. Therefore, if you enable the sidebar permanently, you can see the quick links in the Contents page all the time.
    Edit the Contents page
As mentioned above, the Contents page is available on the right sidebar. So click on the sidebar button in the top panel and select Contents. You can edit this page from this sidebar view, which is the most convenient way.
Click on the Sidebar button and select Contents
All the text formatting, linking, etc., that works on Logseq pages works on this page as well.
    1. Add all important pages/tags
    The first thing you can do is to add frequently accessed pages or tags.
For example, let's say you will be accessing the Kernel, Ubuntu, and APT tags frequently. What you can do is add a Markdown heading:
## List of Tags
Now, link the tags right in there, one per line:
#Kernel
#Ubuntu
#APT
For better arrangement, you can use the Markdown horizontal rule after each section.
---
2. Link the task management pages
As discussed in the Journals section, you can have a variety of task-related tags like TODO, LATER, WAITING, etc. So you can link each of these in the Contents page:
## List of Tasks
#TODO
#LATER
#WAITING
---
🚧 Please note the difference between the Markdown heading and the Logseq tags: don't forget to add a space after the # if you are creating a Markdown heading.
3. Quick access links
If you visit some websites daily, you can bookmark them on the Contents page for quick access.
## Quick access links
[It's FOSS](https://itsfoss.com/)
[It's FOSS Community](https://itsfoss.community/)
[Arch Linux News](https://archlinux.org/)
[GitHub](https://github.com/)
[Reddit](https://www.reddit.com/)
After all this, your contents page will look like this:
Contents page in Logseq
Wrapping Up
    As you can see, you can utilize these pages in non-conventional ways to get a more extensive experience from Logseq. That's the beauty of this open-source tool. The more you explore, the more you discover, the more you enjoy.
    In the next part of this series, I'll share my favorite Logseq extensions.
  4. by: Geoff Graham
    Tue, 29 Apr 2025 14:27:25 +0000

    Brad Frost is running this new little podcast called Open Up. Folks write in with questions about the “other” side of web design and front-end development — not so much about tools and best practices as it is about the things that surround the work we do, like what happens if you get laid off, or AI takes your job, or something along those lines. You know, the human side of what we do in web design and development.
    Well, it just so happens that I’m co-hosting the show. In other words, I get to sprinkle in a little advice on top of the wonderful insights that Brad expertly doles out to audience questions.
    Our second episode just published, and I thought I’d share it. We’re finding our sea legs with this whole thing and figuring things out as we go. We’ve opened things up (get it?!) to a live audience and even pulled in one of Brad’s friends at the end to talk about the changing nature of working on a team and what it looks like to collaborate in a remote-first world.
    https://www.youtube.com/watch?v=bquVF5Cibaw
    Open Up With Brad Frost, Episode 2 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  5. Resize Panes in Tmux

    by: Pranav Krishna
    Tue, 29 Apr 2025 09:53:03 +0530

In this series on managing the tmux utility, we now turn to its first-level division: panes.
    Panes divide the terminal window horizontally or vertically. Various combinations of these splits can result in different layouts, according to your liking.
Pane split of a tmux window
This is how panes work in tmux.
    Creating Panes
Focus on any given pane; a fresh window works as well.
    The current window can be split horizontally (up and down) with the key
    [Ctrl+B] + " Horizontal SplitAnd to split the pane vertically, use the combination
[Ctrl+B] + %
Vertical Split
Resizing your panes
Tmux uses 'cells' to quantify the amount of resizing done at once. This is what resizing by 'one cell' looks like: one more character can be accommodated on the side.
Resizing by 'one cell'
The combination part is a bit tricky for resizing. Stick with me.
    Resize by one cell
    Use the prefix Ctrl+B followed by Ctrl+arrow keys to resize in the required direction.
[Ctrl+B] Ctrl+arrow
This combination takes a fair number of keypresses, but can be precise.
Resize by five cells (quicker)
    Instead of holding the Ctrl key, you could use the Alt key to resize faster. This moves the pane by five cells.
[Ctrl+B] Alt+arrow
Resize by a specific number of cells (advanced)
    Just like before, the command line options can resize the pane to any number of cells.
    Enter the command line mode with
[Ctrl+B] + :
Then type
resize-pane -{U/D/L/R} xx
U/D/L/R represents the direction of resizing, and xx is the number of cells to resize by. To resize a pane left by 20 cells, this is the command:
resize-pane -L 20
Resizing left by 20 cells
Similarly, to resize a pane upwards, the -U flag is used instead.
Resizing upwards by 15 cells
This resize-pane command can be incorporated into scripts that rebuild a tmux layout whenever a new session is spawned.
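As a minimal sketch (the session name and cell counts are arbitrary), such a script could look like this:
#!/bin/sh
tmux new-session -d -s work     # start a detached session named "work"
tmux split-window -h -t work    # split into left/right panes
tmux split-window -v -t work    # split the active pane into top/bottom
tmux resize-pane -t work -L 10  # nudge the active pane border 10 cells left
tmux resize-pane -t work -U 5   # and 5 cells up
tmux attach -t work             # finally, attach to the finished layout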
    Conclusion
Since pane sizes are always bound to change, knowing all the methods to vary them can come in handy. Hence, we covered all the possible methods.
Pro tip 🚀 - If you use a mouse with tmux, your cursor can resize the panes.
Turning on mouse mode and resizing the panes
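Mouse support is off by default; assuming a stock configuration, you can switch it on like this:
# Enable mouse mode for the running tmux server
tmux set -g mouse on

# Or make it permanent by adding this line to ~/.tmux.conf
set -g mouse on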
    Go ahead and tell me which method you use in the comments.
  6. Chris’ Corner: Reacting

    by: Chris Coyier
    Mon, 28 Apr 2025 17:20:59 +0000

    I was listening to Wes and Scott on a recent episode of Syntax talking about RSCs (React Server Components). I wouldn’t say it was particularly glowing.
    We use them here at CodePen, and will likely be more and more as we ship more with Next.js, which is part of our most modern stack that we are always moving toward. Me, I like Next.js. React makes good sense to me for use in a very interactive, long-session style application with oodles of state management. By virtue of being on the latest Next.js release, whatever we put in the app directory (“The App Router” as they call it) automatically uses RSCs when it can. I mostly like that. We do have to fight it sometimes, but those fights are generally about server-side rendering and making sure we are set up for that and doing things right to take advantage of it, which honestly we should be doing as much as possible anyway. I’ll also add some anecdotal data that we haven’t exactly seen huge drops in JavaScript bundle size when we move things that direction, which I was hoping would be a big benefit of that work.
    But React is more than Next.js, right? Right?! Yes and no. I use React without Next.js sometimes, and we do at CodePen in plenty of places. Without Next.js, usage of RSCs is hard or not happening. Precious few other frameworks are using them, and some have thrown up their hands and refused. To be fair: Parcel has support in Beta and Waku also supports them.
    A little hard to call them a big success in this state. But it’s also hard to call the concept of them a bad idea. It’s generally just a good idea to make the servers do more work than browsers, as well as send as little data across the network as possible. If the JavaScript in a React component can be run on the server, and we can make the server part kinda smart, let’s let it?
If you’ve got the time and inclination, Dan Abramov’s React for Two Computers is a massive post that is entirely a conceptual walkthrough abstracting the ideas of RSCs into an “Early World” and “Late World” to understand the why and where it all came from. He just recently followed it up with Impossible Components, which gets more directly into using RSCs.
    Welp — while we’re talking React lemme drop some related links I found interesting lately.
React itself, aside from RSCs, isn’t sitting idle.
• They’ve shipped an experimental <ViewTransition> component, which is nice to see as someone who has struggled forcing React to do this before. They’ve also shipped an RC (Release Candidate) for the React Compiler (also RC? awkward?). The compiler is interesting in that it doesn’t necessarily make your bundles smaller; it makes them run faster.
• Fancy Components is a collection of “mainly React, TypeScript, Tailwind, Motion” components that are… fancy. I’ve seen a bit of pushback on the accessibility of some of them, but I’ve also poked through them and found what look like solid attempts at making them accessible, so YMMV.
• Sahaj Jain says The URL is a great place to store state in React.
• Joshua Wootonn details the construction of a Drag to Select interaction in React which is… pretty complicated.
• The blog Expression Statement (no byline) says HTML Form Validation is heavily underused. I just added a bit of special validation to a form in React this week and I tend to agree. Short story: GMail doesn’t render <img>s where the src has a space in it. 😭. I used pattern directly on the input, and we have our own error message system, otherwise I would have also used setCustomValidity.
• Thoughtbot: Superglue 1.0: React ❤️ Rails
  7. by: Geoff Graham
    Mon, 28 Apr 2025 12:43:01 +0000

    Ten divs walk into a bar:
<div>1</div>
<div>2</div>
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>
<div>10</div>
There aren’t enough chairs for them to all sit at the bar, so you need the tenth div to sit on the lap of one of the other divs, say the second one. We can visually cover the second div with the tenth div but have to make sure they are sitting next to each other in the HTML as well. The order matters.
<div>1</div>
<div>2</div>
<div>10</div><!-- Sitting next to Div #2 -->
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>
The tenth div needs to sit on the second div’s lap rather than next to it. So, perhaps we redefine the relationship between them and make this a parent-child sorta thing.
    <div>1</div> <div class="parent"> 2 <div class="child">10</div><!-- Sitting in Div #2's lap--> </div> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> Now we can do a little tricky positioning dance to contain the tenth div inside the second div in the CSS:
.parent {
  position: relative; /* Contains Div #10 */
}

.child {
  position: absolute;
}
We can inset the child’s position so it is pinned to the parent’s top-left edge:
.child {
  position: absolute;
  inset-block-start: 0;
  inset-inline-start: 0;
}
And we can set the child’s width to 100% of the parent’s size so that it is fully covering the parent’s lap and completely obscuring it.
.child {
  position: absolute;
  inset-block-start: 0;
  inset-inline-start: 0;
  width: 100%;
}
Cool, it works!
Anchor positioning simplifies this process a heckuva lot because it just doesn’t care where the tenth div is in the HTML. Instead, we can work with our initial markup containing 10 individuals exactly as they entered the bar. You’re going to want to follow along in the latest version of Chrome since anchor positioning is only supported there by default at the time I’m writing this.
    <div>1</div> <div class="parent">2</div> <div>3</div> <div>4</div> <div>5</div> <div>6</div> <div>7</div> <div>8</div> <div>9</div> <div class="child">10</div> Instead, we define the second div as an anchor element using the anchor-name property. I’m going to continue using the .parent and .child classes to keep things clear.
.parent {
  anchor-name: --anchor; /* this can be any name formatted as a dashed ident */
}
Then we connect the child to the parent by way of the position-anchor property:
.child {
  position-anchor: --anchor; /* has to match the `anchor-name` */
}
The last thing we have to do is position the child so that it covers the parent’s lap. We have the position-area property that allows us to center the element over the parent:
.child {
  position-anchor: --anchor;
  position-area: center;
}
If we want to completely cover the parent’s lap, we can set the child’s size to match that of the parent using the anchor-size() function:
.child {
  position-anchor: --anchor;
  position-area: center;
  width: anchor-size(width);
}
No punchline — just one of the things that makes anchor positioning something I’m so excited about. The fact that it eschews HTML source order is so CSS-y because it’s another separation of concerns between content and presentation.
    Anchor Positioning Just Don’t Care About Source Order originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  8. by: Abhishek Prakash
    Mon, 28 Apr 2025 06:04:44 GMT

There is something about CachyOS. It feels fast. The performance is exceptionally smooth, especially if you have newer hardware.
I don't have data to prove it, but my new Asus Zenbook that I bought in November last year is rocking CachyOS superbly.
    The new laptop came with Windows, which is not surprising. I didn't replace Windows with Linux. Instead, I installed CachyOS in dual boot mode alongside Windows.
    The thing is that it was straightforward to do so. Anything simple in the Arch domain is amusing in itself.
    So, I share my amusing experience in this video.
Subscribe to It's FOSS YouTube Channel
I understand that video may not be everyone's favorite format, so I created this tutorial in text format too.
    There are a few things to note here:
• An active internet connection is mandatory. Offline installation is not possible.
• An 8 GB USB key is needed to create the installation medium.
• At least 40 GB of free disk space (20 GB could work as well, but that would be too little).
• Time and patience are of the essence.
🚧 You should back up your important data to an external disk or the cloud. It is rare for anything to go wrong, but if you are not familiar with dealing with disk partitions, a backup will save your day.
Creating live USB of CachyOS and booting from it
    First, download the desktop edition of CachyOS from its website:
Download CachyOS
You can create the live USB on any computer with the help of Ventoy. I used my TUXEDO notebook for this purpose.
Download Ventoy from the official website. When you extract it, you'll find a few executables that let you run it either in a browser or as a GUI. Use whichever you want.
Making sure that the USB is plugged in, install Ventoy on it.
    Once done, all you need to do is to drag the CachyOS ISO to the Ventoy disk. The example below shows it for Mint but it's the same for any Linux ISO.
    If you need detailed steps for using Ventoy, please follow this tutorial.
Install and Use Ventoy on Ubuntu [Complete Guide]
Once I had the CachyOS live USB, I put it in the Asus Zenbook and restarted it. When the computer was starting up, pressing the F2/F10 button took me to the BIOS settings.
    I did that to ensure that the system boots from the USB instead of the hard disk by changing the boot order.
Change boot priority
When the system booted next, the Ventoy screen was visible and I could see the option to load the CachyOS live session.
Select CachyOS
I selected to boot in normal mode.
Normal Mode
There was an option to boot into CachyOS with NVIDIA. I went with the default option.
Open-source or closed-source drivers
While booting into CachyOS, I ran into an issue. There was a "Start Job is running..." message for more than a minute or two. I force restarted the system and the live USB worked fine the next time.
Start job duration notification
If this error persists for you, try changing the USB port or creating the live USB again.
Another issue I discovered by trial and error was related to the password. CachyOS showed a login screen that seemed to be asking for a username and password. As per the official docs, no password is required in the live session.
What I did was change the display server to Wayland and then click the next button, and I was logged into the system without any password.
Select Wayland
Installing CachyOS
Again, an active internet connection is mandatory to download the desktop environment and other packages.
    Select the "Launch installer" option.
    Click on "Launch Installer"My system was not plugged into a power source but it had almost 98% battery and I knew that it could handle the quick installation easily.
System not connected to power source warning
The settings in the beginning are quite straightforward, like selecting the time zone
Set Location
and keyboard layout.
Set keyboard layout
The most important step is the disk partition, and I was pleasantly surprised to see that the Calamares installer detected the Windows presence and gave the option to install CachyOS alongside it.
I have a single disk with a Windows partition as well as an EFI system partition.
    All I had to do was to drag the slider and shrink the storage appropriately.
Storage settings
I gave more space to Linux because it was going to be my main operating system.
The next screen gave the options to install a desktop environment or window manager. I opted for GNOME. You can see why it is important to have an active internet connection: the desktop environment is not on the ISO file. It needs to be downloaded first.
Select Desktop Environment
And a few additional packages are added to the list automatically.
Installing additional packages
And as the last interactive step of the install, I created the user account.
Enter user credentials
A quick overview of what is going to be done is shown at this point. Things looked fine, so I hit the Install button.
Click on Install
And then just wait for a few minutes for the installation to complete.
Installation progress
When the installation completes, restart the system and take out the live USB. In my case, I forgot to take the USB out, but it still booted from the hard disk.
    Fixing the missing Windows from grub
    When the system booted next, I could see the usual Grub bootloader screen but there was no Windows option in it.
Windows Boot Manager is absent
Fixing it was simple. I opened the grub config file for editing in Nano.
sudo nano /etc/default/grub
OS_PROBER was disabled, so I uncommented that line, saved the file, and exited.
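For reference, on a typical Grub install the relevant line in /etc/default/grub looks like this once uncommented (the exact wording can vary between distributions):
# Let grub-mkconfig look for other installed operating systems
GRUB_DISABLE_OS_PROBER=false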
Uncomment OS Prober
The next step was to update grub to make it aware of the config changes.
sudo grub-mkconfig -o /boot/grub/grub.cfg
And on the next reboot, the Windows Boot Manager option was there to let me use Windows.
Windows Boot Manager in the boot screen
This is what I did to install CachyOS Linux alongside Windows. For an Arch-based distro, the procedure was pretty standard, and that's a good thing. Installing Linux should not be super complicated.
    💬 If you tried dual booting CachyOS, do let me know how it went in the comment section.
  9. By: Linux.com Editorial Staff
    Sun, 27 Apr 2025 23:40:06 +0000

Talos Linux is a specialized operating system designed for running Kubernetes. First and foremost, it handles full lifecycle management for Kubernetes control-plane components. At the same time, Talos Linux focuses on security, minimizing the user’s ability to influence the system. A distinctive feature of this OS is the near-complete absence of executables, including the absence of a shell and the inability to log in via SSH. All configuration of Talos Linux is done through a Kubernetes-like API.
    Talos Linux is provided as a set of pre-built images for various environments.
The standard installation method assumes you will take a prepared image for your specific cloud provider or hypervisor and create a virtual machine from it, or go the bare-metal route and load the Talos Linux image using ISO or PXE methods.
    Unfortunately, this does not work when dealing with providers that offer a pre-configured server or virtual machine without letting you upload a custom image or even use an ISO for installation through KVM. In that case, your choices are limited to the distributions the cloud provider makes available.
    Usually during the Talos Linux installation process, two questions need to be answered: (1) How to load and boot the Talos Linux image, and (2) How to prepare and apply the machine-config (the main configuration file for Talos Linux) to that booted image. Let’s talk about each of these steps.
    Booting into Talos Linux
    One of the most universal methods is to use a Linux kernel mechanism called kexec.
kexec is both a utility and a system call of the same name. It allows you to boot into a new kernel from the existing system without performing a physical reboot of the machine. This means you can download the required vmlinuz and initramfs for Talos Linux, then specify the needed kernel command line and immediately switch over to the new system. It is as if the kernel were loaded by the standard bootloader at startup, only in this case your existing Linux operating system acts as the bootloader.
Essentially, all you need is any Linux distribution. It could be a physical server running in rescue mode, or even a virtual machine with a pre-installed operating system. Let’s take a look at a case using Ubuntu, but it could be literally any other Linux distribution.
Log in via SSH and install the kexec-tools package; it contains the kexec utility, which you’ll need later:
apt install kexec-tools -y
Next, you need to download Talos Linux itself, that is, the kernel and initramfs. They can be downloaded from the official repository:
    wget -O /tmp/vmlinuz https://github.com/siderolabs/talos/releases/latest/download/vmlinuz-amd64
wget -O /tmp/initramfs.xz https://github.com/siderolabs/talos/releases/latest/download/initramfs-amd64.xz
If you have a physical server rather than a virtual one, you’ll need to build your own image with all the necessary firmware using the Talos Factory service. Alternatively, you can use the pre-built images from the Cozystack project (a solution for building clouds we created at Ænix and transferred to CNCF Sandbox); these images already include all required modules and firmware:
    wget -O /tmp/vmlinuz https://github.com/cozystack/cozystack/releases/latest/download/kernel-amd64
wget -O /tmp/initramfs.xz https://github.com/cozystack/cozystack/releases/latest/download/initramfs-metal-amd64.xz
Now you need the network information that will be passed to Talos Linux at boot time. Below is a small script that gathers everything you need and sets environment variables:
IP=$(ip -o -4 route get 8.8.8.8 | awk -F"src " '{sub(" .*", "", $2); print $2}')
GATEWAY=$(ip -o -4 route get 8.8.8.8 | awk -F"via " '{sub(" .*", "", $2); print $2}')
ETH=$(ip -o -4 route get 8.8.8.8 | awk -F"dev " '{sub(" .*", "", $2); print $2}')
CIDR=$(ip -o -4 addr show "$ETH" | awk -F"inet $IP/" '{sub(" .*", "", $2); print $2; exit}')
NETMASK=$(echo "$CIDR" | awk '{p=$1;for(i=1;i<=4;i++){if(p>=8){o=255;p-=8}else{o=256-2^(8-p);p=0}printf(i<4?o".":o"\n")}}')
DEV=$(udevadm info -q property "/sys/class/net/$ETH" | awk -F= '$1~/ID_NET_NAME_ONBOARD/{print $2; exit} $1~/ID_NET_NAME_PATH/{v=$2} END{if(v) print v}')
You can pass these parameters via the kernel cmdline, using the ip= parameter to configure the network through the kernel-level IP configuration mechanism. This method lets the kernel automatically set up interfaces and assign IP addresses during boot, based on information passed through the kernel cmdline. It’s a built-in kernel feature enabled by the CONFIG_IP_PNP option. In Talos Linux, this feature is enabled by default. All you need to do is provide properly formatted network settings in the kernel cmdline.
You can find the proper syntax for this option in the Talos Linux documentation. The official Linux kernel documentation also provides more detailed examples. Set the CMDLINE variable with the ip option that contains the current system’s settings, and then print it out:
CMDLINE="init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=${IP}::${GATEWAY}:${NETMASK}::${DEV}:::::"
echo $CMDLINE
The output should look something like:
    init_on_alloc=1 slab_nomerge pti=on console=tty0 console=ttyS0 printk.devkmsg=on talos.platform=metal ip=10.0.0.131::10.0.0.1:255.255.255.0::eno2np0::::: Verify that everything looks correct, then load our new kernel:
kexec -l /tmp/vmlinuz --initrd=/tmp/initramfs.xz --command-line="$CMDLINE"
kexec -e
The first command loads the Talos kernel into RAM; the second command switches the current system to this new kernel.
As a result, you’ll get a running instance of Talos Linux with networking configured. However, it’s currently running entirely in RAM, so if the server reboots, the system will return to its original state (by loading the OS from the hard drive, e.g., Ubuntu).
    Applying machine-config and installing Talos Linux on disk
To install Talos Linux persistently on the disk and replace the current OS, you need to apply a machine-config specifying the disk to install to. To configure the machine, you can use either the official talosctl utility or Talm, a utility maintained by the Cozystack project (Talm works with vanilla Talos Linux as well).
    First, let’s consider configuration using talosctl. Before applying the config, ensure it includes network settings for your node; otherwise, after reboot, the node won’t configure networking. During installation, the bootloader is written to disk and does not contain the ip option for kernel autoconfiguration.
    Here’s an example of a config patch containing the necessary values:
    # node1.yaml
    machine:
      install:
        disk: /dev/sda
      network:
        hostname: node1
        nameservers:
    - 1.1.1.1
    - 8.8.8.8
    interfaces:
    - interface: eno2np0
      addresses:
      - 10.0.0.131/24
      routes:
      - network: 0.0.0.0/0
        gateway: 10.0.0.1
You can use it to generate a full machine-config:
talosctl gen secrets
talosctl gen config --with-secrets=secrets.yaml --config-patch-control-plane=@node1.yaml <cluster-name> <cluster-endpoint>
Review the resulting config and apply it to the node:
talosctl apply -f controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 -i
Once you apply controlplane.yaml, the node will install Talos on the /dev/sda disk, overwriting the existing OS, and then reboot.
    All you need now is to run the bootstrap command to initialize the etcd cluster:
talosctl --talosconfig=talosconfig bootstrap -e 10.0.0.131 -n 10.0.0.131
You can view the node’s status at any time using the dashboard command:
talosctl --talosconfig=talosconfig dashboard -e 10.0.0.131 -n 10.0.0.131
As soon as all services reach the Ready state, retrieve the kubeconfig and you’ll be able to use your newly installed Kubernetes:
talosctl --talosconfig=talosconfig kubeconfig kubeconfig
export KUBECONFIG=${PWD}/kubeconfig
Use Talm for configuration management
When you have a lot of configs, you’ll want a convenient way to manage them. This is especially true with bare-metal nodes, where each node may have different disks, interfaces, and specific network settings. As a result, you might need to keep a patch for each node.
    To solve this, we developed Talm — a configuration manager for Talos Linux that works similarly to Helm.
    The concept is straightforward: you have a common config template with lookup functions, and when you generate a configuration for a specific node, Talm dynamically queries the Talos API and substitutes values into the final config.
Talm includes almost all of the features of talosctl, adding a few extras. It can generate configurations from Helm-like templates, and it remembers the node and endpoint parameters for each node in the resulting file, so you don’t have to specify these parameters every time you work with a node.
    Let me show how to perform the same steps to install Talos Linux using Talm:
    First, initialize a configuration for a new cluster:
    mkdir talos
    cd talos
talm init
Adjust the values for your cluster in values.yaml:
endpoint: "https://10.0.0.131:6443"
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
advertisedSubnets:
- 10.0.0.0/24
Generate a config for your node:
talm template -t templates/controlplane.yaml -e 10.0.0.131 -n 10.0.0.131 > nodes/node1.yaml
The resulting output will look something like:
# talm: nodes=["10.0.0.131"], endpoints=["10.0.0.131"], templates=["templates/controlplane.yaml"]
# THIS FILE IS AUTOGENERATED. PREFER TEMPLATE EDITS OVER MANUAL ONES.
machine:
  type: controlplane
  kubelet:
    nodeIP:
      validSubnets:
        - 10.0.0.0/24
  network:
    hostname: node1
    # -- Discovered interfaces:
    # eno2np0:
    #   hardwareAddr: a0:36:bc:cb:eb:98
    #   busPath: 0000:05:00.0
    #   driver: igc
    #   vendor: Intel Corporation
    #   product: Ethernet Controller I225-LM
    interfaces:
      - interface: eno2np0
        addresses:
          - 10.0.0.131/24
        routes:
          - network: 0.0.0.0/0
            gateway: 10.0.0.1
    nameservers:
      - 1.1.1.1
      - 8.8.8.8
  install:
    # -- Discovered disks:
    # /dev/sda:
    #    model: SAMSUNG MZQL21T9HCJR-00A07
    #    serial: S64GNG0X444695
    #    wwid: eui.36344730584446950025384700000001
    #    size: 1.9 TB
    disk: /dev/sda
cluster:
  controlPlane:
    endpoint: https://10.0.0.131:6443
  clusterName: talos
  network:
    serviceSubnets:
      - 10.96.0.0/16
  etcd:
    advertisedSubnets:
      - 10.0.0.0/24
All that remains is to apply it to your node:
    talm apply -f nodes/node1.yaml -i 
    Talm automatically detects the node address and endpoint from the “modeline” (a conditional comment at the top of the file) and applies the config.
    You can also run other commands in the same way without specifying node address and endpoint options. Here are a few examples:
    View the node status using the built-in dashboard command:
    talm dashboard -f nodes/node1.yaml Bootstrap etcd cluster on node1:
    talm bootstrap -f nodes/node1.yaml Save the kubeconfig to your current directory:
talm kubeconfig kubeconfig -f nodes/node1.yaml
Unlike the official talosctl utility, the generated configs do not contain secrets, allowing them to be stored in git without additional encryption. The secrets are stored at the root of your project and only in these files: secrets.yaml, talosconfig, and kubeconfig.
    Summary
    That’s our complete scheme for installing Talos Linux in nearly any situation. Here’s a quick recap:
• Use kexec to run Talos Linux on any existing system. Make sure the new kernel has the correct network settings by collecting them from the current system and passing them via the ip parameter in the cmdline. This lets you connect to the newly booted system via the API.
• When the kernel is booted via kexec, Talos Linux runs entirely in RAM. To install Talos on disk, apply your configuration using either talosctl or Talm.
• When applying the config, don’t forget to specify network settings for your node, because the on-disk bootloader configuration doesn’t automatically have them.
• Enjoy your newly installed and fully operational Talos Linux.
Additional materials:
How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack
DIY: Create Your Own Cloud with Kubernetes
Cozystack Becomes a CNCF Sandbox Project
Journey to Stable Infrastructures with Talos Linux & Cozystack | Andrei Kvapil | SREday London 2024
Talos Linux: You don’t need an operating system, you only need Kubernetes / Andrei Kvapil
Comparing GitOps: Argo CD vs Flux CD, with Andrei Kvapil | KubeFM
Cozystack on Talos Linux
    The post A Simple Way to Install Talos Linux on Any Machine, with Any Provider appeared first on Linux.com.
  10. By: Josh Njiruh
    Sat, 26 Apr 2025 16:27:06 +0000


    When you encounter the error ModuleNotFoundError: No module named ‘numpy’ on a Linux system, it means Python cannot find the NumPy package, which is one of the most fundamental libraries for scientific computing in Python. Here’s a comprehensive guide to resolve this issue.
    Understanding the Error
    The ModuleNotFoundError: No module named ‘numpy’ error occurs when:
• NumPy is not installed on your system
• NumPy is installed but in a different Python environment than the one you’re using
• Your Python path variables are not configured correctly
Solution Methods
    Method 1: Install NumPy Using pip
    The simplest and most common solution is to install NumPy using pip, Python’s package installer:
    # For system-wide installation (may require sudo)
    sudo pip install numpy

    # For user-specific installation (recommended)
    pip install --user numpy

    # If you have multiple Python versions, be specific
pip3 install numpy
Method 2: Install NumPy Using Your Distribution’s Package Manager
    Many Linux distributions provide NumPy as a package:
    Debian/Ubuntu:
    sudo apt update
sudo apt install python3-numpy
Fedora:
sudo dnf install python3-numpy
Arch Linux:
sudo pacman -S python-numpy
Method 3: Verify the Python Environment
    If you’re using virtual environments or conda, make sure you’re activating the correct environment:
    # For virtualenv
    source myenv/bin/activate
    pip install numpy

    # For conda
    conda activate myenv
conda install numpy
Method 4: Check Your Python Path
    Sometimes the issue is related to the Python path:
    # Check which Python you're using
    which python
    which python3

    # Check installed packages
    pip list | grep numpy
pip3 list | grep numpy
Method 5: Install Using Requirements File
    If you’re working on a project with multiple dependencies:
    # Create requirements.txt with numpy listed
    echo "numpy" &gt; requirements.txt
pip install -r requirements.txt
Troubleshooting Common Issues
    Insufficient Permissions
    If you get a permission error during installation:
pip install --user numpy
Pip Not Found
    If pip command is not found:
sudo apt install python3-pip  # For Debian/Ubuntu
Build Dependencies Missing
    NumPy requires certain build dependencies:
    # For Debian/Ubuntu
sudo apt install build-essential python3-dev
Version Conflicts
    If you need a specific version:
pip install numpy==1.20.3  # Install specific version
Verifying the Installation
    After installation, verify that NumPy is properly installed:
    python -c "import numpy; print(numpy.__version__)"
    # or
    python3 -c "import numpy; print(numpy.__version__)" Best Practices
• Use Virtual Environments: Isolate your projects with virtual environments to avoid package conflicts
• Keep pip Updated: Run pip install --upgrade pip regularly
• Document Dependencies: Maintain a requirements.txt file for your projects
• Use Version Pinning: Specify exact versions of packages for production environments
Additional Resources
NumPy Official Documentation
Python Package Index (PyPI)
    The post Resolving ModuleNotFoundError: No Module Named ‘numpy’ appeared first on Unixmen.
  11. By: Josh Njiruh
    Sat, 26 Apr 2025 16:23:36 +0000


    In today’s interconnected world, DNS plays a crucial role in how we access websites and online services. If you’ve ever wondered “what’s my DNS?” or why it matters, this comprehensive guide will explain everything you need to know about DNS settings, how to check them, and why they’re important for your online experience.
    What is DNS?
    DNS (Domain Name System) acts as the internet’s phonebook, translating human-friendly website names like “example.com” into machine-readable IP addresses that computers use to identify each other. Without DNS, you’d need to remember complex numerical addresses instead of simple domain names.
    Why Should You Know Your DNS Settings?
    Understanding your DNS configuration offers several benefits:
Improved browsing speed: Some DNS providers offer faster resolution times than others
Enhanced security: Certain DNS services include protection against malicious websites
Access to blocked content: Alternative DNS servers can sometimes bypass regional restrictions
Troubleshooting: Knowing your DNS settings is essential when diagnosing connection issues
How to Check “What’s My DNS” on Different Devices
    Linux
Open Terminal
Type cat /etc/resolv.conf and press Enter
Look for “nameserver” entries
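On distributions that run systemd-resolved, /etc/resolv.conf may only point at a local stub resolver. A quick sanity-check sketch, assuming systemd-resolved and the dnsutils tools are present:
resolvectl status       # shows the DNS servers configured per network interface
nslookup example.com    # shows which server actually answered the query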
Windows
Open Command Prompt (search for “cmd” in the Start menu)
Type ipconfig /all and press Enter
Look for “DNS Servers” in the results
Mac
Open System Preferences
Click on Network
Select your active connection and click Advanced
Go to the DNS tab to view your DNS servers
Mobile Devices
    Android
Go to Settings > Network & Internet > Advanced > Private DNS
iOS
Go to Settings > Wi-Fi
Tap the (i) icon next to your connected network
Scroll down to find DNS information
Popular DNS Providers
    Several organizations offer public DNS services with various features:
Google DNS: 8.8.8.8 and 8.8.4.4
Cloudflare: 1.1.1.1 and 1.0.0.1
OpenDNS: 208.67.222.222 and 208.67.220.220
Quad9: 9.9.9.9 and 149.112.112.112
When to Consider Changing Your DNS
    You might want to change your default DNS settings if:
You experience slow website loading times
You want additional security features
Your current DNS service is unreliable
You’re looking to bypass certain network restrictions
The Impact of DNS on Security and Privacy
    Your DNS provider can see which websites you visit, making your choice of DNS service an important privacy consideration. Some providers offer enhanced privacy features like DNS-over-HTTPS (DoH) or DNS-over-TLS (DoT) to encrypt your DNS queries.
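As a sketch of what enabling DoT can look like on a Linux system that uses systemd-resolved (the server addresses are Cloudflare and Quad9 from the list above; adjust to your preferred provider):
# /etc/systemd/resolved.conf
[Resolve]
DNS=1.1.1.1 9.9.9.9
DNSOverTLS=yes
# then restart the service to apply the change
sudo systemctl restart systemd-resolved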
    Summary
    Knowing “what’s my DNS” is more than just technical curiosity—it’s an important aspect of managing your internet connection effectively. Whether you’re troubleshooting connection issues, looking to improve performance, or concerned about privacy, understanding and potentially customizing your DNS settings can significantly enhance your online experience.
    Similar Articles 
    https://nordvpn.com/blog/what-is-my-dns/
    https://us.norton.com/blog/how-to/what-is-my-dns/ 
    The post Understanding DNS: What’s My DNS and Why Does It Matter? appeared first on Unixmen.
  12. By: Josh Njiruh
    Sat, 26 Apr 2025 16:02:32 +0000


    When working with Markdown, understanding how to create new lines is essential for proper formatting and readability. This guide will explain everything you need to know about creating line breaks in Markdown documents.
    What is a Markdown New Line?
    In Markdown, creating new lines isn’t as straightforward as simply pressing the Enter key. Markdown has specific syntax requirements for line breaks that differ from traditional word processors.
    How to Create a New Line in Markdown
    There are several methods to create a new line in Markdown:
    1. The Double Space Method
    The most common way to create a line break in Markdown is by adding two spaces at the end of a line before pressing Enter:
    <span class="">This is the first line.··
    </span><span class="">This is the second line.</span> (Note: The “··” represents two spaces that aren’t visible in the rendered output)
    2. The Backslash Method
    You can also use a backslash at the end of a line to force a line break:
    <span class="">This is the first line.\
    </span><span class="">This is the second line.</span> 3. HTML Break Tag
For guaranteed compatibility across all Markdown renderers, you can use the HTML <br> tag:
This is the first line.<br>
This is the second line.
Common Issues
    Many newcomers to Markdown struggle with line breaks because:
The double space method isn’t visible in the editor
Different Markdown flavors handle line breaks differently
Some Markdown editors automatically trim trailing spaces
Creating New Lines in Different Markdown Environments
    Different platforms have varying implementations of Markdown:
GitHub Flavored Markdown (GFM) supports the double space method
CommonMark requires two spaces for line breaks
Some blogging platforms like WordPress may handle line breaks automatically
Best Practices for Line Breaks
    For the most consistent results across platforms:
1. HTML <br> for Portability:
The <br> tag forces a line break, ensuring consistency across browsers and platforms. Use it when precise line control is vital, like in addresses or poems. Avoid overuse to maintain clean HTML.
    2. Double Spaces in Documentation:
    In plain text and markdown, double spaces at line ends often create breaks. This is readable, but not universally supported. Best for simple documentation, not HTML.
    3. Test Before Publishing:
    Platforms interpret line breaks differently. Always test your content in the target environment to guarantee correct formatting and prevent unexpected layout issues.
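One quick way to test, assuming you have the cmark reference tool installed, is to pipe a sample through it and inspect the generated HTML:
# Two trailing spaces before the newline should render as <br /> in CommonMark
printf 'This is the first line.  \nThis is the second line.\n' | cmark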
    Creating Paragraph Breaks
    To create a paragraph break (with extra spacing), simply leave a blank line between paragraphs:
    <span class="">This is paragraph one.
    </span>
    <span class="">This is paragraph two.</span> Understanding the nuances of line breaks in Markdown will make your documents more readable and ensure they render correctly across different platforms and applications.
    Similar Articles
    https://www.markdownguide.org/basic-syntax/
    https://dev.to/cassidoo/making-a-single-line-break-in-markdown-3db1 
    The post Markdown: How to Add A New Line appeared first on Unixmen.
  13. How to Update Ubuntu

    By: Josh Njiruh
    Sat, 26 Apr 2025 15:58:04 +0000


    Updating your Ubuntu system is crucial for maintaining security, fixing bugs, and accessing new features. This article will guide you through the various methods to update Ubuntu, from basic command-line options to graphical interfaces.
    Why Regular Updates Matter
    Keeping your Ubuntu system updated provides several benefits:
Security patches that protect against vulnerabilities
Bug fixes for smoother operation
Access to new features and improvements
Better hardware compatibility
Longer-term system stability
Command-Line Update Methods
    The Basic Update Process
    The simplest way to update Ubuntu via the terminal is:
    sudo apt update
sudo apt upgrade
The first command refreshes your package lists, while the second installs available updates.
    Comprehensive System Updates
    For a more thorough update, including kernel updates and package removals:
    sudo apt update
sudo apt full-upgrade
Security Updates Only
    If you only want security-related updates:
    sudo apt update
sudo apt upgrade -s  # -s only simulates the upgrade so you can review it
sudo unattended-upgrade --dry-run  # previews the security updates unattended-upgrades would apply
To actually install the pending security updates, run sudo unattended-upgrade without the --dry-run flag.
Graphical Interface Updates
    Software Updater
    Ubuntu’s built-in Software Updater provides a user-friendly way to update:
Click on the “Activities” button in the top-left corner
Search for “Software Updater”
Launch the application and follow the prompts
Software & Updates Settings
    For more control over update settings:
Open “Settings” > “Software & Updates”
Navigate to the “Updates” tab
Configure how often Ubuntu checks for updates and what types to install
Upgrading Ubuntu to a New Version
    Using the Update Manager
    To upgrade to a newer Ubuntu version:
sudo do-release-upgrade
For a graphical interface, use:
Open Software Updater
Click “Settings”
Set “Notify me of a new Ubuntu version” to your preference
When a new version is available, you’ll be notified
Scheduled Updates
    For automatic updates:
    sudo apt install unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades
This configures your system to install security updates automatically.
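Behind the scenes, this typically writes a small configuration file; the content below is a sketch of what you can expect to find after enabling automatic updates:
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";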
    Troubleshooting Common Update Issues
    Package Locks
    If you encounter “unable to acquire the dpkg frontend lock”:
# First make sure no apt/dpkg process is genuinely still running (check with: ps aux | grep -i apt)
sudo killall apt apt-get
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
Repository Issues
    If repositories aren’t responding:
Navigate to “Software & Updates”
Under “Ubuntu Software,” change the download server
Insufficient Space
    For disk space issues:
    sudo apt clean
sudo apt autoremove
Best Practices for Ubuntu Updates
Regular Schedule: Update at least weekly for security
Backups: Always back up important data before major updates
Changelogs: Review update notes for critical changes
Timing: Schedule updates during low-usage periods
Testing: For servers, test updates in a development environment first
Summary
In summary, regularly updating your Ubuntu system is essential for security and performance. Whether you prefer the command line or graphical interfaces, Ubuntu provides flexible options to keep your system current and protected.
    Similar Articles
    https://ubuntu.com/server/docs/how-to-upgrade-your-release/
    https://www.cyberciti.biz/faq/upgrade-update-ubuntu-using-terminal/
    The post How to Update Ubuntu appeared first on Unixmen.
  14. By: Josh Njiruh
    Sat, 26 Apr 2025 15:55:04 +0000


    Emojis have become an essential part of modern digital communication, adding emotion and context to our messages. While typing emojis is straightforward on mobile devices, doing so on Ubuntu and other Linux distributions can be less obvious. This guide covers multiple methods on how to type emojis in Ubuntu, from keyboard shortcuts to dedicated applications.
    Why Use Emojis on Ubuntu?
    Emojis aren’t just for casual conversations. They can enhance:
Professional communications (when used appropriately)
Documentation
Social media posts
Blog articles
Desktop applications
Terminal customizations
Method 1: Character Map (Pre-installed)
    Ubuntu comes with a Character Map utility that includes emojis:
Press the Super (Windows) key and search for “Character Map”
Open the application
In the search box, type “emoji” or browse categories
Double-click an emoji to select it
Click “Copy” to copy it to your clipboard
Paste it where needed using Ctrl+V
Pros: No installation required
Cons: Slower to use for frequent emoji needs
    Method 2: How to Type Emojis Using Keyboard Shortcuts
    Ubuntu provides a built-in keyboard shortcut for emoji insertion:
Press Ctrl+Shift+E or Ctrl+. (period) in most applications
An emoji picker window will appear
Browse or search for your desired emoji
Click to insert it directly into your text
Note: This shortcut works in most GTK applications (like Firefox, GNOME applications) but may not work in all software.
    Method 3: Emoji Selector Extension
    For GNOME desktop users:
Open the “Software” application
Search for “Extensions”
Install GNOME Extensions app if not already installed
Visit extensions.gnome.org in Firefox
Search for “Emoji Selector”
Install the extension
Access emojis from the top panel
Pros: Always accessible from the panel
Cons: Only works in GNOME desktop environment
    Method 4: EmojiOne Picker
    A dedicated emoji application:
sudo apt install emoji-picker
After installation, launch it from your applications menu or by running:
emoji-picker
Pros: Full-featured dedicated application
Cons: Requires installation
    Method 5: Using the Compose Key
    Set up a compose key to create emoji sequences:
Go to Settings > Keyboard > Keyboard Shortcuts > Typing
Set a Compose Key (Right Alt is common)
Use combinations like Compose + : + ) for a smiling face and Compose + : + ( for a frowning face
Pros: Works system-wide
Cons: Limited emoji selection, requires memorizing combinations
    Method 6: Copy-Paste from the Web
    A simple fallback option:
Visit a website like Emojipedia
Browse or search for emojis
Copy and paste as needed
Pros: Access to all emojis with descriptions
Cons: Requires internet access, less convenient
    Method 7: Using Terminal and Commands
For terminal lovers, you can install emote:
sudo snap install emote
Then launch it from the terminal:
emote
Or set up a keyboard shortcut to launch it quickly.
    Method 8: IBus Emoji
    For those using IBus input method:
Install IBus if not already installed: sudo apt install ibus
Configure IBus to start at login: im-config -n ibus
Log out and back in
Press Ctrl+Shift+e to access the emoji picker in text fields
Troubleshooting Emoji Display Issues
    If emojis appear as boxes or don’t display correctly:
Install font support: sudo apt install fonts-noto-color-emoji
Update font cache: fc-cache -f -v
Log out and back in
Using Emojis in Specific Applications
    In the Terminal
    Most modern terminal emulators support emoji display. Try:
    echo "Hello 👋 Ubuntu!" In LibreOffice
    Use the Insert > Special Character menu or the keyboard shortcuts mentioned above.
    In Code Editors like VS Code
    Most code editors support emoji input through the standard keyboard shortcuts or by copy-pasting.
    Summary
    Ubuntu offers multiple ways to type and use emojis, from built-in utilities to specialized applications. Choose the method that best fits your workflow, whether you prefer keyboard shortcuts, graphical selectors, or terminal-based solutions.
    By incorporating these methods into your Ubuntu usage, you can enhance your communications with the visual expressiveness that emojis provide, bringing your Linux experience closer to what you might be used to on mobile devices.
    Similar Articles
    https://askubuntu.com/questions/1045915/how-to-insert-an-emoji-into-a-text-in-ubuntu-18-04-and-later/
    http://www.omgubuntu.co.uk/2018/06/use-emoji-linux-ubuntu-apps
    The post How to Type Emojis in Ubuntu Linux appeared first on Unixmen.
  15. by: Abhishek Prakash
    Fri, 25 Apr 2025 21:30:04 +0530

    Choosing the right tools is important for an efficient workflow. A seasoned Fullstack dev shares his favorites.
7 Utilities to Boost Development Workflow Productivity: Here are a few tools that I have discovered and use to improve my development process. (Linux Handbook, LHB Community)
Here are the highlights of this edition:
The magical CDPATH
Using host networking with docker compose
Docker interview questions
And more tools, tips and memes for you
This edition of LHB Linux Digest newsletter is supported by PikaPods. ❇️ Self-hosting without hassle
    PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.
    Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.
PikaPods - Instant Open Source App Hosting: Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
     
  16. by: Abhishek Prakash
    Fri, 25 Apr 2025 20:55:16 +0530

    If you manage servers on a regular basis, you'll often find yourself entering some directories more often than others.
For example, I self-host Ghost CMS to run this website. The Ghost install is located at /var/www/ghost/. I have to cd to this directory and then use its subdirectories to manage the Ghost install. If I have to enter its log directory directly, I have to type /var/www/ghost/content/log.
That means typing out ridiculously long paths, which takes several seconds even with tab completion.
    Relatable? But what if I told you there's a magical shortcut that can make those lengthy directory paths vanish like free merchandise at a tech conference?
    Enter CDPATH, the unsung hero of Linux navigation that I'm genuinely surprised that many new Linux users are not even aware of!
    What is CDPATH?
CDPATH is an environment variable that works a lot like the more familiar PATH variable (which helps your shell find executable programs). But instead of finding programs, CDPATH helps the cd command find directories.
    Normally, when you use cd some-dir, the shell looks for some-dir only in the current working directory.
    With CDPATH, you tell the shell to also look in other directories you define. If it finds the target directory there, it cds into it — no need to type full paths.
How does CDPATH work?
    Imagine this directory structure:
    /home/abhishek/ ├── Work/ │ └── Projects/ │ └── WebApp/ ├── Notes/ └── Scripts/ Let's say, I often visit the WebApp directory and for that I'll have to type the absolute path if I am at a strange location:
cd /home/abhishek/Work/Projects/WebApp
Or, since I am a bit smart, I'll use the ~ shortcut for the home directory.
    cd ~/Work/Projects/WebApp But if I add this location to the CDPATH variable:
export CDPATH=$HOME/Work/Projects
I could enter the WebApp directory from anywhere in the filesystem just by typing this:
cd WebApp
Awesome! Isn't it?
🚧 You should always add . (current directory) in the CDPATH and your CDPATH should start with it. This way, it will look for the directory in the current directory first and then in the directories you have specified in the CDPATH variable.
How to set CDPATH variable?
    Setting up CDPATH is delightfully straightforward. If you ever added anything to the PATH variable, it's pretty much the same.
First, think about the frequently used directories that you want cd to search when no specific path has been provided.
    Let's say, I want to add /home/abhishek/work and /home/abhishek/projects in CDPATH. I would use:
export CDPATH=.:/home/abhishek/work:/home/abhishek/projects
This creates a search path that includes:
The current directory (.)
My work directory
My projects directory
Which means if I type cd some_dir, it will first look if some_dir exists in the current directory. If not found, it searches the other directories listed in CDPATH, in order.
🚧 The order of the directories in CDPATH matters.
Let's say that both work and projects directories have a directory named docs which is not in the current directory.
If I use cd docs, it will take me to /home/abhishek/work/docs. Why? Because the work directory comes first in the CDPATH.
💡 If things look fine in your testing, you should make it permanent by adding the "export CDPATH" command you used earlier to your shell profile.
Whatever you exported in CDPATH will only be valid for the current session. To make the changes permanent, you should add it to your shell profile.
I am assuming that you are using the bash shell. In that case, it should be ~/.profile or ~/.bash_profile.
    Open this file with a text editor like Nano and add the CDPATH export command to the end.
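As a sketch (the paths are examples; adjust them to your own layout), the line you append might look like this:
# Search the current directory first, then my frequently used parent directories
export CDPATH=.:$HOME/work:$HOME/projects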
📋 When you use the cd command with an absolute or relative path, it won't refer to the CDPATH. CDPATH is more like, hey, instead of just looking into my current sub-directories, search it in the specified directories, too. When you specify the full path (absolute or relative) already with cd, there is no need to search. cd knows where you want to go.
How to find the CDPATH value?
    CDPATH is an environment variable. How do you print the value of an environment variable? Simplest way is to use the echo command:
echo $CDPATH
📋 If you have tab completion set with the cd command already, it will also work for the directories listed in CDPATH.
When not to use CDPATH?
    Like all powerful tools, CDPATH comes with some caveats:
Duplicate names: If you have identically named directories across your filesystem, you might not always land where you expect.
Scripts: Be cautious about using CDPATH in scripts, as it might cause unexpected behavior. Scripts generally should use absolute paths for clarity.
Demo and teaching: When working with others who aren't familiar with your CDPATH setup, your lightning-fast navigation might look like actual wizardry (which is kind of cool to be honest) but it could confuse your students.
💡 Including .. (parent directory) in your CDPATH creates a super-neat effect: you can navigate to 'sibling directories' without typing ../. If you're in /usr/bin and want to go to /usr/lib, just type cd lib.
Why aren’t more sysadmins using CDPATH in 2025?
    The CDPATH used to be a popular tool in the 90s, I think. Ask any sysadmin older than 50 years, and CDPATH would have been in their arsenal of CLI tools.
    But these days, many Linux users have not even heard of the CDPATH concept. Surprising, I know.
Ever since I discovered CDPATH, I have been using it extensively, especially on the Ghost and Discourse servers I run. Saves me a few keystrokes and I am proud of those savings.
    By the way, if you don't mind including 'non-standard' tools in your workflow, you may also explore autojump instead of CDPATH.
GitHub - wting/autojump: A cd command that learns - easily navigate directories from the command line
🗨️ Your turn. Were you already familiar with CDPATH? If yes, how do you use it? If not, is this something you are going to use in your workflow?
  17. by: Ankush Das
    Fri, 25 Apr 2025 10:58:48 +0530

    As an engineer who has been tossing around Kubernetes in a production environment for a long time, I've witnessed the evolution from manual kubectl deployment to CI/CD script automation, to today's GitOps. In retrospect, GitOps is really a leap forward in the history of K8s Ops.
    Nowadays, the two hottest players in GitOps tools are Argo CD and Flux CD, both of which I've used in real projects. So I'm going to talk to you from the perspective of a Kubernetes engineer who has stepped in the pits: which one is better for you?
    Why GitOps?
    The essence of GitOps is simple: 
    “Manage your Kubernetes cluster with Git, and make Git the sole source of truth.”
    This means: 
All deployment configurations are written in Git repositories
Tools automatically detect changes and deploy updates
Git revert if something goes wrong, and everything is back to normal
More reliable for auditing and security
I used to maintain a game service, and in the early days, I used scripts + CI/CD tools to do deployment. Late one night, something went wrong, and a manual error pushed an incorrect configuration into the cluster, and the whole service hung. Since I started using GitOps, I haven't had any more of these “man-made disasters”.
Now, let me start comparing Argo CD vs Flux CD.
    Installation & Setup
    Argo CD can be installed with a single YAML, and the UI and API are deployed together out of the box.
    Here are the commands that make it happen:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
Flux CD follows a modular architecture: you need to install Source Controller, Kustomize Controller, etc., separately. You can also simplify the process with flux install.
curl -s https://fluxcd.io/install.sh | sudo bash
flux --version
flux install --components="source-controller,kustomize-controller"
kubectl get pods -n flux-system
For me, the winner here is Argo CD (because more things work out of the box in a single install setup).
    Visual Interface (UI) 
Argo CD has a powerful built-in Web UI to visually display the application structure, compare differences, synchronize operations, etc.
Unfortunately, Flux CD has no UI by default. It relies primarily on the command line, though it can be paired with Weave GitOps or Grafana to check status.
    Again, winner for me: Argo CD, because of a web UI.
    Synchronization and Deployment Strategies 
    Argo CD supports manual synchronization, automatic synchronization, and forced synchronization, suitable for fine-grained control.
    Flux CD uses a fully automated synchronization strategy that polls Git periodically and automatically aligns the cluster state.
    Flux CD gets the edge here and is the winner for me.
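For a flavor of that workflow, here is a minimal sketch using the flux CLI; the repository URL and names are hypothetical:
# Register a Git repository for Flux to poll every minute
flux create source git my-app \
  --url=https://github.com/example/my-app \
  --branch=main \
  --interval=1m

# Apply the manifests under ./deploy and prune removed resources
flux create kustomization my-app \
  --source=GitRepository/my-app \
  --path="./deploy" \
  --prune=true \
  --interval=10m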
    Toolchain and Integration Capabilities 
    Argo CD supports Helm, Kustomize, Jsonnet, etc. and can be extended with plugins.
Flux CD supports Helm, Kustomize, OCI artifacts, SOPS-encrypted configuration, GitHub Actions, etc., so the ecosystem is very rich.
    Flux CD is the winner here for its wide range of integration support.
    Multi-tenancy and Privilege Management 
    Argo CD has built-in RBAC, supports SSOs such as OIDC, LDAP, and fine-grained privilege assignment.
    Flux CD uses Kubernetes' own RBAC system, which is more native but slightly more complex to configure.
    If you want ease of use, the winner is Argo CD.
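To illustrate, here is a minimal sketch of an Argo CD RBAC policy (it lives in the argocd-rbac-cm ConfigMap; the role, project, and group names are hypothetical):
# Allow members of dev-team to sync applications in my-project
p, role:dev, applications, sync, my-project/*, allow
g, dev-team, role:dev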
    Multi-Cluster Management Capabilities 
    Argo CD supports multi-clustering natively, allowing you to switch and manage applications across multiple clusters directly in the UI.
    Flux CD also supports it, but you need to manually configure bootstrap and GitRepo for multiple clusters via GitOps. 
    Winner: Argo CD 
    Security and Keys 
    Argo CD is usually combined with Sealed Secrets, Vault, or through plugins to realize SOPS. 
    Flux CD supports native integration for SOPS, just configure it once, and it's very easy to decrypt automatically.
    Personally, I prefer to use Flux + SOPS in security-oriented scenarios, and the whole key management process is more elegant.
    Performance and Scalability 
    Flux CD controller architecture naturally supports horizontal scaling with stable performance for large-scale environments.
Argo CD features a centralized architecture; it is feature-rich but has slightly higher resource consumption.
    Winner: Flux CD 
    Observability and Problem Troubleshooting 
    Real-time status, change history, diff comparison, synchronized logs, etc. are available within the Argo CD UI.
    Flux CD relies more on logs and Kubernetes Events and requires additional tools to assist with visualization.
    Winner: Argo CD 
    Learning Curve 
    Argo CD UI is intuitive and easy to install, suitable for GitOps newcomers to get started.
    Flux CD focuses more on CLI operations and GitOps concepts, and has a slightly higher learning curve.
    Argo CD is easy to get started.
    GitOps Principles 
Flux CD follows GitOps principles 100%: all configuration is declarative, and the cluster automatically aligns itself with Git.
    Argo CD supports manual operations and UI synchronization, leaning towards "Controlled GitOps".
While Argo CD has a lot of goodies, if you are a stickler for principles, then Flux CD will be more appealing to you.
    Final Thoughts
Argo CD can be summed up as: quick to get started, and it comes with a web interface.
    Seriously, the first time I used Argo CD, I had a feeling of “relief”.
After deployment, you can open the web UI and see the status of each application, deploy with one click, roll back, compare Git and cluster differences - for people like me who are used to kubectl get, it's a relief from information overload.
Its “App of Apps” model is also great for organizing large configurations. For example, I use Argo to manage different configuration repos in multiple environments (dev/stage/prod), which is very intuitive.
    On the downside, it's a bit “heavy”. It has its API server, UI, Controller, which takes up a bit of resources.
    You have to learn its Application CRD if you want to adjust the configuration. Argo CD even provides CLI for application management and cluster automation.
    Here are the commands that can come in handy for the purpose stated above:
argocd app sync rental-app
argocd app rollback rental-app 2
Flux CD can be summed up as a modular tool.
    Flux is the engineer's tool: the ultimate in flexibility, configurable in plain text, and capable of being combined into anything you want. It emphasizes declarative configuration and automated synchronization.
    Flux CD offers these features:
Auto-apply triggered on Git changes
Push notifications to Slack
Image updates that automatically trigger deployment
Although this can be done in Argo, Flux's modular controllers (e.g. SourceController, KustomizeController) allow us to have fine-grained control over every aspect and build the entire platform like Lego.
    Of course, the shortcomings are obvious: 
No UI
The configuration is all based on YAML
Documentation is a little sparser than Argo's; you need to read more official examples
Practical advice: how to choose in different scenarios?
    Scenario 1: Small team, first time with GitOps? Choose Argo CD. 
The visualization interface is friendly.
Supports manual deployment/rollback.
Low learning cost, easy for the team to accept.
Scenario 2: Strong security compliance needs? Choose Flux CD.
Fully declarative.
Scales seamlessly across hundreds of clusters.
It can be integrated with GitHub Actions, SOPS, Flagger, etc. to create a powerful CI/CD system.
Scenario 3: You're already using Argo Workflows or Rollouts
    Then, continue to use Argo CD for a better unified ecosystem experience.
    The last bit of personal advice 
    Don't get hung up on which one to pick; choose one and start using it, that's the most important thing!
I also had a “tool-phobia” at the beginning, but after using it, I realized that GitOps itself is the revolutionary concept, and the tools are just the vehicle. You can start with Argo CD, and later move on to Flux.
    If you're about to design a GitOps process, start with the tool stack you're most familiar with and the capabilities of your team, and then evolve gradually.
  18. By: Edwin
    Fri, 25 Apr 2025 05:28:30 +0000


    The “grep” command is short for “Global Regular Expression Print”. This is a powerful tool in Unix-based systems used to search and filter text based on specific patterns. If you work with too many text-based files like logs, you will find it difficult to search for multiple strings in parallel. “grep” has the ability to search for multiple strings simultaneously, streamlining the process of extracting relevant information from files or command outputs. In this article, let us explain the variants of grep, instructions on how to use grep multiple string search, practical examples, and some best practices. Let’s get started!
    “grep” and Its Variants
    At Unixmen, we always start with the basics. So, before diving into searching for multiple strings, it’s necessary to understand the basic usage of “grep” and its variants:
grep: Searches files for lines that match a given pattern using basic regular expressions.
egrep: Equivalent to “grep -E”, it interprets patterns as extended regular expressions, allowing for more complex searches. Note that “egrep” is deprecated but still widely used.
fgrep: Equivalent to “grep -F”, it searches for fixed strings rather than interpreting patterns as regular expressions.
You are probably wondering why we have two functions for doing the same job. egrep and grep -E do the same task and similarly, fgrep and grep -F have the same functionality. This is a part of a consistency exercise to make sure all commands have a similar pattern. At Unixmen, we recommend using grep -E and grep -F instead of egrep and fgrep respectively so that your code is future-proof.
    Now, let’s get back to the topic. For example, to search for the word “error” in a file named “logfile.txt”, your code will look like:
    grep "error" logfile.txt How to Search for Multiple Strings with grep
    There are multiple approaches to use grep to search for multiple strings. Let us learn each approach with some examples.
    Using Multiple “-e” Options
The “-e” option lets you specify multiple patterns. Each pattern is provided as an argument to “-e”:
    grep -e "string1" -e "string2" filename This command searches for lines containing either “string1” or “string2” in the specified file.
    Using Extended Regular Expressions with “-E”
    By enabling extended regular expressions with the “-E” option, you can use the pipe symbol “|” to separate multiple patterns within a single quoted string:
    grep -E "string1|string2" filename Alternatively, you can use the “egrep” command, which is equivalent to grep -E, but we do not recommend it considering egrep is deprecated.
    egrep "pattern1|pattern2" filename Both commands will match lines containing either “pattern1” or “pattern2”.
    Using Basic Regular Expressions (RegEx) with Escaped Pipe
    In basic regular expressions, the pipe symbol “|” is not recognized as a special character unless escaped. Therefore, you can use:
    grep "pattern1\|pattern2" filename This approach searches for lines containing either “pattern1” or “pattern2” in the specified file.
    Practical Examples
    Now that we know the basics and the multiple methods to use grep to search multiple strings, let us look at some real-world applications.
    How to Search for Multiple Words in a File
    If you have a file named “unixmen.txt” containing the following lines:
alpha bravo charlie
delta fox golf
kilo lima mike
To search for lines containing either “alpha” or “kilo”, you can use:
    grep -E "apple|kiwi" sample.txt The output will be:
alpha bravo charlie
kilo lima mike
Searching for Multiple Patterns in Command Output
    You can also use grep to filter the output of other commands. For example, to search for processes containing either “bash” or “ssh” in their names, you can use:
    ps aux | grep -E "bash|ssh" This command will display all running processes that include “bash” or “ssh” in their command line.
    Case-Insensitive Searches
    To perform case-insensitive searches, add the “-i” option:
    grep -i -e "string1" -e "string2" filename This command matches lines containing “string1” or “string2” regardless of case.
    How to Count Number of Matches
    To count the number of lines that match any of the specified patterns, use the “-c” option:
    grep -c -e "string1" -e "string2" filename This command outputs the number of matching lines.
    Displaying Only Matching Parts of Lines
    To display only the matching parts of lines, use the “-o” option:
    grep -o -e "string1" -e "string2" filename This command prints only the matched strings, one per line.
    Searching Recursively in Directories
    To search for patterns in all files within a directory and its subdirectories, use the “-r” (short for recursive) option:
    grep -r -e "pattern1" -e "pattern2" /path/to/directory This command searches for the specified patterns in all files under the given directory.
    How to Use awk for Multiple String Searches
    While “grep” is powerful, there are scenarios where “awk” might be more suitable, especially when searching for multiple patterns with complex conditions. For example, to search for lines containing both “string1” and “string2”, you can use:
    awk '/string1/ && /string2/' filename This command displays lines that contain both “string1” and “string2”.
    Wrapping Up with Some Best Practices
    Now that we have covered everything there is to learn about using grep to search multiple strings, it may feel a little overwhelming. Here’s why it is worth the effort.
“grep” can be easily integrated into scripts to automate repetitive tasks, like finding specific keywords across multiple files or generating reports.
It’s widely available on Unix-like systems and can often be found on Windows through tools like Git Bash or WSL. Knowing how to use “grep” makes your skills portable across systems.
Mastering grep enhances your problem-solving capabilities, whether you’re debugging code, parsing logs, or extracting specific information from files.
By leveraging regular expressions, grep enables complex pattern matching, which expands its functionality beyond simple string searches.
    In short, learning grep is like gaining a superpower for text processing. Once you learn it, you’ll wonder how you ever managed without it!
    Related Articles
How to Refine your Search Results Using Grep Exclude
VI Save and Exit: Essential Commands in Unix’s Text Editor
Why It Is Better to Program on Linux
The post grep: Multiple String Search Feature appeared first on Unixmen.
  19. By: Edwin
    Fri, 25 Apr 2025 05:26:57 +0000


    Today at Unixmen, we are about to explain everything there is about the “.bashrc” file. This file serves as a script that initializes settings for interactive Bash shell sessions. The bashrc file is typically located in your home directory as a hidden file (“~/.bashrc”). This file lets you customize your shell environment, enhancing both efficiency and personalization. Let’s get started!
    Why is the bashrc File Required?
    Whenever a new interactive non-login Bash shell is launched like when you open a new terminal window, the “.bashrc” file is executed. This execution sets up the environment according to user-defined configurations, which includes:
Aliases: Shortcuts for longer commands to streamline command-line operations.
Functions: Custom scripts that can be called within the shell to perform specific tasks.
Environment variables: Settings that define system behaviour, such as the “PATH” variable, which determines where the system looks for executable files.
Prompt customization: Modifying the appearance of the command prompt to display information like the current directory or git branch.
By configuring these elements in the “.bashrc” file, you can automate repetitive tasks, set up your preferred working environment, and ensure consistency across sessions.
    How to Edit the “.bashrc” File
The “.bashrc” file resides in your home directory and is hidden by default. Follow these instructions to view and edit this file:
Launch your terminal application. In other words, open the terminal window.
Navigate to the home directory by executing the “cd ~” command.
Use your preferred text editor to open the file. For example, to use “nano” to open the file, execute the command: “nano .bashrc”.
We encourage you to always create a backup of the .bashrc file before you make any changes to it. Execute this command to create a backup of the file:
cp ~/.bashrc ~/.bashrc_backup
When you encounter any errors, this precaution allows you to restore the original settings if needed.
    Common Customizations (Modifications) to .bashrc File
    Here are some typical modifications the tech community makes to their “.bashrc” file:
    How to Add Aliases
    Aliases create shortcuts for longer commands, saving time and reducing typing errors. For instance:
alias ll='ls -alF'
alias gs='git status'
When you add these lines to “.bashrc”, typing “ll” in the terminal will execute “ls -alF”, and “gs” will execute “git status”. In simpler terms, you are creating shortcuts in the terminal.
    Defining Functions
    If you are familiar with Python, you would already know the advantages of defining functions (Tip: If you want to learn Python, two great resources are Stanford’s Code in Place program and PythonCentral). Functions allow for more complex command sequences. For example, here is a function to navigate up multiple directory levels:
    up() { local d="" limit=$1 for ((i=1 ; i <= limit ; i++)) do d="../$d" done d=$(echo $d | sed 's/\/$//') cd $d } Adding this function lets you type “up 3” to move up three directory levels.
    How to Export Environment Variables
    Setting environment variables can configure system behaviour. For example, adding a directory to the “PATH”:
    export PATH=$PATH:/path/to/directory
    This addition lets the executables in “/path/to/directory” be run from any location in the terminal.
    Customizing the Prompt
The appearance of the command prompt can be customized to display useful information. For example, execute this command to display the username (“\u”), hostname (“\h”), and current working directory (“\W”):
    export PS1="\u@\h \W \$ " How to Apply Changes
How to Apply Changes
After editing and saving the .bashrc file, apply the changes to the current terminal session by sourcing the file. To apply the changes, execute the command:
source ~/.bashrc
Alternatively, closing and reopening the terminal will also load the new configurations.
    Wrapping Up with Some Best Practices
    That is all there is to learn about the bashrc file. Here are some best practices to make sure you do not encounter any errors.
Always add comments to your .bashrc file to document the purpose of each customization. This practice aids in understanding and maintaining the file.
For extensive configurations, consider sourcing external scripts from within .bashrc to keep the file organized.
Be very careful when you add commands that could alter system behaviour or performance.
Test new configurations in a separate terminal session before applying them globally.
By effectively utilizing the “.bashrc” file, you can create a tailored and efficient command-line environment that aligns with your workflow and preferences.
    Related Articles
Fun in Terminal
How To Use The Linux Terminal Like A Real Pro, First Part
How To Use Git Commands From Linux Terminal
The post .bashrc: The Configuration File of Linux Terminal appeared first on Unixmen.
  20. By: Edwin
    Fri, 25 Apr 2025 05:26:43 +0000


    The Windows Subsystem for Linux (WSL) is a powerful tool that allows you to run a Linux environment directly on Windows. WSL gives you seamless integration between the two most common operating systems. One of the key features of WSL is the ability to access and manage files across both Windows and Linux platforms. Today at Unixmen, we will walk you through the methods to access Windows files from Linux within WSL and vice versa. Let’s get started!
    How to Access Windows Files from WSL
    In WSL, Windows drives are mounted under the “/mnt” directory, allowing Linux to interact with the Windows file system. Here’s how you can navigate to your Windows files:
    Step 1: Locate the Windows Drive
    Windows drives are mounted as “/mnt/<drive_letter>”. For example, the C: drive is accessible at “/mnt/c”.
    Step 2: Navigate to Your User Directory
    To access your Windows user profile, use the following commands:
cd /mnt/c/Users/<Your_Windows_Username>
Replace “<Your_Windows_Username>” with your actual Windows username.
    Step 3: List the Contents
    Once you are in your user directory, you can list the contents using:
ls
This will display all files and folders in your Windows user directory.
    By navigating through “/mnt/c/”, you can access any file or folder on your Windows C: drive. This integration lets you manipulate Windows files using Linux commands within WSL.
    Steps to Access WSL Files from Windows
In Windows, accessing files stored within the WSL environment is very straightforward. Here is how you can do it:
    Using File Explorer:
Open File Explorer.
In the address bar, type “\\wsl$” and press the Enter key.
You’ll see a list of installed Linux distributions. Navigate to your desired distribution to access its file system.
Direct Access to Home Directory:
    For quick access to your WSL home directory, navigate to:
    \\wsl$\<Your_Distribution>\home\<Your_Linux_Username> Replace “<Your_Distribution>” with the name of your Linux distribution (e.g., Ubuntu) and “<Your_Linux_Username>” with your Linux username.
    This method allows you to seamlessly transfer files between Windows and WSL environments using the familiar Windows interface.
    Best Practices
    At Unixmen, we recommend these best practices for better file management between Windows and WSL.
File location: For optimal performance, store project files within the WSL file system when you work primarily with Linux tools. If you need to use Windows tools on the same files, consider storing them in the Windows file system and accessing them from WSL.
Permissions: Be mindful of file permissions. Files created in the Windows file system may have different permissions when accessed from WSL.
Path conversions: Use the “wslpath” utility to convert Windows paths to WSL paths and vice versa:
wslpath 'C:\Users\Your_Windows_Username\file.txt'
This command will output the equivalent WSL path.
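The conversion also works in the other direction with the “-w” flag (the path below is an example; substitute your own username):
wslpath -w /home/your_linux_username/file.txt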
    Wrapping Up
    By understanding these methods and best practices, you can effectively manage and navigate files between Windows and Linux environments within WSL, enhancing your workflow and productivity.
    Related Links
WinUSB: Create A Bootable Windows USB In Linux
Run Windows Apps on Linux Easily
Linux vs. Mac vs. Windows OS Guide
The post Windows Linux Subsystem (WSL): Run Linux on Windows appeared first on Unixmen.
  21. By: Edwin
    Fri, 25 Apr 2025 05:26:38 +0000


    If you work with Python a lot, you might be familiar with the process of constantly installing packages. But what happens when you decide that a package is no longer required? That is when you use “pip” to uninstall packages. The “pip” tool, which is Python’s package installer, offers a straightforward method to uninstall packages.
    Today at Unixmen, we will walk you through the process, ensuring even beginners can confidently manage their Python packages. Let’s get started!
    What is pip and Its Role in Python Package Management
The name “pip” is an interesting one: it is a recursive acronym for “Pip Installs Packages”. It is the standard package manager for Python. It lets you install, update, and remove Python packages from the Python Package Index (PyPI) and other indexes. You will need package management to be as efficient as possible because that ensures your projects remain organized and free from unnecessary or conflicting dependencies.
    How to Uninstall a Single Package with “pip”
    Let us start with simple steps. Here is how you can remove a package using pip. First, open your system’s command line interface (CLI or terminal):
    On Windows, search for “cmd” or “Command Prompt” in the Start menu.
On macOS or Linux, open the Terminal application.
Type the following command, replacing “package_name” with the name of the package you wish to uninstall:
pip uninstall package_name
For example, to uninstall the “requests” package:
pip uninstall requests
As a precaution, always confirm the uninstallation process. “pip” will display a list of files to be removed and prompt for confirmation like this:
Proceed (y/n)?
When you see this prompt, type “y” and press the Enter key to proceed. This process makes sure that the specified package is removed from your Python environment.
    Uninstall Multiple Packages Simultaneously
    Let’s take it to the next level. Now that we are familiar with uninstalling a single package, let us learn how to uninstall multiple packages at once. When you need to uninstall multiple packages at once, “pip” allows you to do so by listing the package names separated by spaces. Here is how you can do it:
pip uninstall package1 package2 package3
For example, to uninstall both “numpy” and “pandas”:
pip uninstall numpy pandas
As expected, when this command is executed, a prompt will appear for confirmation before removing each package.
    How to Uninstall Packages Without Confirmation
When you are confident that you are uninstalling the correct package, the confirmation prompts can be a little irritating. To bypass the confirmation prompts, use the “-y” flag:
pip uninstall -y package_name
Here, the “-y” flag tells pip to assume “yes” to the confirmation prompt. This is particularly useful in scripting or automated workflows where manual intervention is impractical.
    Uninstalling All Installed Packages
    To remove all installed packages and achieve a clean slate, you can use the following command:
pip freeze | xargs pip uninstall -y
Here’s a breakdown of the command:
“pip freeze” lists all installed packages.
“xargs” takes this list and passes each package name to “pip uninstall -y”, which uninstalls them without requiring confirmation.
Be very careful when you are executing this command. This will remove all packages in your environment. Ensure this is your intended action before proceeding.
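If your environment contains packages installed in editable mode, “pip freeze” lists them in a format that can trip up “xargs”. A slightly more defensive sketch:
# Skip editable installs and strip version specifiers before uninstalling
pip freeze --exclude-editable | cut -d= -f1 | xargs -r pip uninstall -y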
    Best Practices for Managing Python Packages
    We have covered almost everything when it comes to using pip to uninstall packages. Before we wrap up, let us learn the best practices as well.
Always use virtual environments to manage project-specific dependencies without interfering with system-wide packages. Tools like “venv” (included with Python 3.3 and later) or “virtualenv” can help you create isolated environments.
Periodically check for and remove unused packages to keep your environment clean and efficient.
Documentation can be boring for most beginners, but always maintain a “requirements.txt” file for each project, listing all necessary packages and their versions. This practice aids in reproducibility and collaboration.
Prefer installing packages within virtual environments rather than globally to avoid potential conflicts and permission issues.
Wrapping Up
    Managing Python packages is crucial for maintaining a streamlined and conflict-free development environment. The “pip uninstall” command provides a simple yet powerful means to remove unnecessary or problematic packages. By understanding and utilizing the various options and best practices outlined in this guide, even beginners can confidently navigate Python package management.
    Related Articles
Pip: Install Specific Version of a Python Package Instructions
How to Update and Upgrade the Pip Command
Install Pip Ubuntu: A Detailed Guide to Cover Every Step
The post Pip: Uninstall Packages Instructions with Best Practices appeared first on Unixmen.
  22. By: Edwin
    Fri, 25 Apr 2025 05:26:26 +0000


    Today at Unixmen, we are about to explain a key configuration file that defines how disk partitions, devices, and remote filesystems are mounted and integrated into the system’s directory structure. The file we are talking about is the “/etc/fstab”. By automating the mounting process at boot time, fstab ensures consistent and reliable access to various storage resources.
    In this article, we will explain the structure, common mount options, best practices, and the common pitfalls learners are prone to face. Let’s get started!
    Structure of the “/etc/fstab” File
    Each line in the “fstab” file represents a filesystem and contains six fields, each separated by spaces or tabs. Here are the components:
Filesystem: Specifies the device or remote filesystem to be mounted, identified by device name (for example: “/dev/sda1”) or UUID.
Mounting point: The directory where the filesystem will be mounted, such as “/”, “/home”, or “/mnt/data”.
Filesystem type: Indicates the type of filesystem, like “ext4”, “vfat”, or “nfs”.
Options: Comma-separated list of mount options that control the behaviour of the filesystem, like “defaults”, “noatime”, “ro”.
Dump: A binary value (0 or 1) used by the “dump” utility to decide if the filesystem needs to be backed up.
Pass: An integer (0, 1, or 2) that determines the order in which “fsck” checks the filesystem during boot.
Some of the Common Mount Options
    Let us look at some of the common mount options:
defaults: This option applies the default settings: “rw”, “suid”, “dev”, “exec”, “auto”, “nouser”, and “async”.
noauto: Prevents the filesystem from being mounted automatically at boot.
user: Allows any user to mount the filesystem.
nouser: Restricts mounting to the superuser.
ro: Mounts the filesystem as read-only.
rw: Mounts the filesystem as read-write.
sync: Ensures that input and output operations are done synchronously.
noexec: Prevents execution of binaries on the mounted filesystem.
As usual, let us understand the concept of “fstab” with an example. Here is a sample entry:
UUID=123e4567-e89b-12d3-a456-426614174000 /mnt/data ext4 defaults 0 2
Let us break down this example a little.
UUID=123e4567-e89b-12d3-a456-426614174000: Specifies the unique identifier of the filesystem.
/mnt/data: Designates the mount point.
ext4: Indicates the filesystem type.
defaults: Applies default mount options.
0: Excludes the filesystem from “dump” backups.
2: Sets the “fsck” order. Non-root filesystems are typically assigned “2”.
Best Practices
    While the fstab file is a pretty straightforward component, here are some best practices to help you work more efficiently.
Always use UUIDs or labels: Employing UUIDs or filesystem labels instead of device names (like “/dev/unixmen”) enhances reliability, especially when device names change due to hardware modifications.
Create backups before editing: Always create a backup of the “fstab” file before making changes to prevent system boot issues.
Verify entries: After editing “fstab”, test the configuration with “mount -a” to ensure all filesystems mount correctly without errors, as shown below.
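A minimal sketch of that verification workflow (findmnt --verify requires util-linux 2.30 or newer):
sudo cp /etc/fstab /etc/fstab.bak   # backup first
sudo mount -a                       # try to mount everything listed in fstab
sudo findmnt --verify               # check fstab syntax without mounting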
Common Pitfalls You May Face
Misconfigurations in this file can lead to various issues, affecting system stability and accessibility. Common problems you could face include:
    Incorrect device identification: Using device names like “/dev/sda1” can be problematic, especially when hardware changes cause device reordering. This can result in the system attempting to mount the wrong partition. Using an incorrect Universally Unique Identifier (UUID) can prevent the system from locating and mounting the intended filesystem, leading to boot failures.
    Misconfigured mount options: Specifying unsupported or invalid mount options can cause mounting failures. For example, using “errors=remount-rw” instead of the correct “errors=remount-ro” will cause system boot issues.
    File system type mismatch: Specifying an incorrect file system type can prevent proper mounting. For example, specifying an “ext4” partition as “xfs” in “fstab” will result in mounting errors.
    Wrapping Up
You might have noticed that the basics of fstab do not feel that complex, but we included a thorough section on best practices and challenges. This is because identifying the exact cause of an fstab error is a little difficult for the untrained eye. The error messages can be vague and non-specific. Determining the proper log file for troubleshooting is another pain. We recommend including the “nofail” option, so that the system boots even if the device is unavailable; a sample entry is shown below. Now you are ready to work with the fstab file!
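For reference, a “nofail” entry might look like this (the UUID and mount point are taken from the example above and are illustrative only):
UUID=123e4567-e89b-12d3-a456-426614174000 /mnt/data ext4 defaults,nofail 0 2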
    Related Articles
How to Rename Files in UNIX / Linux
Untar tar.gz file: The Only How-to Guide You Will Need
Fsck: How to Check and Repair a Filesystem
    The post fstab: Storage Resource Configuration File appeared first on Unixmen.
  23. by: Blackle Mori
    Thu, 24 Apr 2025 12:49:42 +0000

    You would be forgiven if you’ve never heard of Cohost.org. The bespoke, Tumblr-like social media website came and went in a flash. Going public in June 2022 with invite-only registrations, Cohost’s peach and maroon landing page promised that it would be “posting, but better.” Just over two years later, in September 2024, the site announced its shutdown, its creators citing burnout and funding problems. Today, its servers are gone for good. Any link to cohost.org redirects to the Wayback Machine’s slow but comprehensive archive.
    The landing page for Cohost.org, featuring our beloved eggbug.
    Despite its short lifetime, I am confident in saying that Cohost delivered on its promise. This is in no small part due to its user base, consisting mostly of niche internet creatives and their friends — many of whom already considered “posting” to be an art form. These users were attracted to Cohost’s opinionated, anti-capitalist design that set it apart from the mainstream alternatives. The site was free of advertisements and follower counts, all feeds were purely chronological, and the posting interface even supported a subset of HTML.
    It was this latter feature that conjured a community of its own. For security reasons, any post using HTML was passed through a sanitizer to remove any malicious or malformed elements. But unlike most websites, Cohost’s sanitizer was remarkably permissive. The vast majority of tags and attributes were allowed — most notably inline CSS styles on arbitrary elements.
    Users didn’t take long to grasp the creative opportunities lurking within Cohost’s unassuming “new post” modal. Within 48 hours of going public, the fledgling community had figured out how to post poetry using the <details> tag, port the Apple homepage from 1999, and reimplement a quick-time WarioWare game. We called posts like these “CSS Crimes,” and the people who made them “CSS Criminals.” Without even intending to, the developers of Cohost had created an environment for a CSS community to thrive.
    In this post, I’ll show you a few of the hacks we found while trying to push the limits of Cohost’s HTML support. Use these if you dare, lest you too get labelled a CSS criminal.
    Width-hacking
    Many of the CSS crimes of Cohost were powered by a technique that user @corncycle dubbed “width-hacking.” Using a combination of the <details> element and the CSS calc() function, we can get some pretty wild functionality: combination locks, tile matching games, Zelda-style top-down movement, the list goes on.
    If you’ve been around the CSS world for a while, there’s a good chance you’ve been exposed to the old checkbox hack. By combining a checkbox, a label, and creative use of CSS selectors, you can use the toggle functionality of the checkbox to implement all sorts of things. Tabbed areas, push toggles, dropdown menus, etc.
    However, because this hack requires CSS selectors, that meant we couldn’t use it on Cohost — remember, we only had inline styles. Instead, we used the relatively new elements <details> and <summary>. These elements provide the same visibility-toggling logic, but now directly in HTML. No weird CSS needed.
[CodePen embed]
These elements work like so: All children of the <details> element are hidden by default, except for the <summary> element. When the summary is clicked, it “opens” the parent details element, causing its children to become visible.
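In its simplest form, the markup looks something like this (a minimal sketch of my own, using only inline styles as Cohost required):

<details>
  <summary style="cursor: pointer;">Click me</summary>
  <div>Now you see me.</div>
</details>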
    We can add all sorts of styles to these elements to make this example more interesting. Below, I have styled the constituent elements to create the effect of a button that lights up when you click on it.
[CodePen embed]
This is achieved by giving the <summary> element a fixed position and size, a grey background color, and an outset border to make it look like a button. When it’s clicked, a sibling <div> is revealed that covers the <summary> with its own red background and border. Normally, this <div> would block further click events, but I’ve given it the declaration pointer-events: none. Now all clicks pass right on through to the <summary> element underneath, allowing you to turn the button back off.
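A rough reconstruction of that structure (my own sketch, not the original pen) could look like:

<details style="position: relative; width: 60px; height: 24px;">
  <summary style="position: absolute; inset: 0; background: #aaa; border: 2px outset #ddd; list-style: none;">off</summary>
  <!-- revealed when open; pointer-events: none lets clicks fall through to the summary -->
  <div style="position: absolute; inset: 0; background: red; border: 2px outset #faa; pointer-events: none;">on</div>
</details>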
    This is all pretty nifty, but it’s ultimately the same logic as before: something is toggled either on or off. These are only two states. If we want to make games and other gizmos, we might want to represent hundreds to thousands of states.
    Width-hacking gives us exactly that. Consider the following example:
[CodePen embed]
In this example, three <details> elements live together in an inline-flex container. Because all the <summary> elements are absolutely-positioned, the widths of their respective <details> elements are all zero when they’re closed.
    Now, each of these three <details> has a small <div> inside. The first has a child with a width of 1px, the second a child with a width of 2px, and the third a width of 4px. When a <details> element is opened, it reveals its hidden <div>, causing its own width to increase. This increases the width of the inline-flex container. Because the width of the container is the sum of its children, this means its width directly corresponds to the specific <details> elements that are open.
    For example, if just the first and third <details> are open, the inline-flex container will have the width 1px + 4px = 5px. Conversely, if the inline-flex container is 2px wide, we can infer that the only open <details> element is the second one. With this trick, we’ve managed to encode all eight states of the three <details> into the width of the container element.
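A skeleton of the setup, assuming the same 1px/2px/4px hidden widths described above (the positions and sizes here are mine, chosen only to keep the buttons clickable):

<div style="position: relative; display: inline-flex; height: 24px;">
  <details>
    <summary style="position: absolute; left: 0; width: 20px; height: 20px; background: #ccc;">1</summary>
    <div style="width: 1px;"></div>
  </details>
  <details>
    <summary style="position: absolute; left: 24px; width: 20px; height: 20px; background: #ccc;">2</summary>
    <div style="width: 2px;"></div>
  </details>
  <details>
    <summary style="position: absolute; left: 48px; width: 20px; height: 20px; background: #ccc;">4</summary>
    <div style="width: 4px;"></div>
  </details>
</div>

Because each <summary> is out of the normal flow, a closed <details> contributes nothing to the container’s width; an open one contributes exactly its hidden <div>’s width.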
    This is pretty cool. Maybe we could use this as an element of some kind of puzzle game? We could show a secret message if the right combination of buttons is checked. But how do we do that? How do we only show the secret message for a specific width of that container div?
[CodePen embed]
In the preceding CodePen, I’ve added a secret message as two nested divs. Currently, this message is always visible — complete with a TODO reminding us to implement the logic to hide it unless the correct combination is set.
    You may wonder why we’re using two nested divs for such a simple message. This is because we’ll be hiding the message using a peculiar method: We will make the width of the parent div.secret be zero. Because the overflow: hidden property is used, the child div.message will be clipped, and thus invisible.
    Now we’re ready to implement our secret message logic. Thanks to the fact that percentage sizes are relative to the parent, we can use 100% as a stand-in for the parent’s width. We can then construct a complicated CSS calc() formula that is 350px if the container div is our target size, and 0px otherwise. With that, our secret message will be visible only when the center button is active and the others are inactive. Give it a try!
[CodePen embed]
This complicated calc() function that’s controlling the secret div’s width has the following graph:
[graph image]
You can see that it’s a piecewise linear curve, constructed from multiple pieces using min/max. These pieces are placed in just the right spots so that the function maxes out when the container div is 2px wide — which we’ve established is precisely when only the second button is active.
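One way to build such a spike (my own variant, not necessarily the exact formula from the original post) is to take the minimum of two opposing ramps and let CSS clamp the resulting negative widths to zero. With 100% standing in for the parent’s width, this peaks at 350px exactly when the parent is 2px wide and collapses to 0px at every other whole-pixel width:

<div class="secret" style="overflow: hidden; width: calc(min(100% - 1px, 3px - 100%) * 350);">
  <div class="message" style="width: 350px;">secret message</div>
</div>

At 100% = 2px both ramps evaluate to 1px, so the width is 350px; at 1px or 3px one ramp hits zero, and beyond that the result goes negative, which the width property clamps to 0.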
A surprising variety of games can be implemented using variations on this technique. Here is a tower of Hanoi game I made that uses both width and height to track the game’s state.
    SVG animation
    So far, we’ve seen some basic functionality for implementing a game. But what if we want our games to look good? What if we want to add ✨animations?✨ Believe it or not, this is actually possible entirely within inline CSS using the power of SVG.
    SVG (Scalable Vector Graphics) is an XML-based image format for storing vector images. It enjoys broad support on the web — you can use it in <img> elements or as the URL of a background-image property, among other things.
    Like HTML, an SVG file is a collection of elements. For SVG, these elements are things like <rect>, <circle>, and <text>, to name a few. These elements can have all sorts of properties defined, such as fill color, stroke width, and font family.
A lesser-known feature of SVG is that it can contain <style> blocks for configuring the properties of these elements. In the example below, an SVG is used as the background for a div. Inside that SVG is a <style> block that sets the fill color of its <circle> to red.
[CodePen embed]
An even lesser-known feature of SVG is that its styles can use media queries. The size used by those queries is the size of the div it is a background of.
    In the following example, we have a resizable <div> with an SVG background. Inside this SVG is a media query which will change the fill color of its <circle> to blue when the width exceeds 100px. Grab the resize handle in its bottom right corner and drag until the circle turns blue.
[CodePen embed]
Because resize handles don’t quite work on mobile, unfortunately, this and the next couple of CodePens are best experienced on desktop.
    This is an extremely powerful technique. By mixing it with width-hacking, we could encode the state of a game or gizmo in the width of an SVG background image. This SVG can then show or hide specific elements depending on the corresponding game state via media queries.
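As a sketch of the idea (the breakpoint and shapes here are mine), an SVG used as a background image can carry its own responsive logic:

<svg xmlns="http://www.w3.org/2000/svg">
  <style>
    circle { fill: red; }
    /* responds to the rendered size of the element this SVG is a background of */
    @media (min-width: 100px) {
      circle { fill: blue; }
    }
  </style>
  <circle cx="50" cy="50" r="40"/>
</svg>

Pair each media query with one encoded game state, and the background becomes a crude state-driven renderer.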
    But I promised you animations. So, how is that done? Turns out you can use CSS animations within SVGs. By using the CSS transition property, we can make the color of our circle smoothly transition from red to blue.
[CodePen embed]
Amazing! But before you try this yourself, be sure to look at the source code carefully. You’ll notice that I’ve had to add a 1×1px, off-screen element with the ID #hack. This element has a very simple (and nearly unnoticeable) continuous animation applied. A “dummy animation” like this is necessary to get around some web browsers’ buggy detection of SVG animation. Without that hack, our transition property wouldn’t work consistently.
    For the fun of it, let’s combine this tech with our previous secret message example. Instead of toggling the secret message’s width between the values of 0px and 350px, I’ve adjusted the calc formula so that the secret message div is normally 350px, and becomes 351px if the right combination is set.
    Instead of HTML/CSS, the secret message is now just an SVG background with a <text> element that says “secret message.” Using media queries, we change the transform scale of this <text> to be zero unless the div is 351px. With the transition property applied, we get a smooth transition between these two states.
    Click the center button to activate the secret message:
[CodePen embed]
The first Cohost user to discover the use of media queries within SVG backgrounds was @ticky for this post. I don’t recall who figured out they could animate, but I used the tech quite extensively for this quiz that tells you what kind of soil you’d like if you were a worm.
    Wrapping up
And that’s all for now. There are a number of techniques I haven’t touched on — namely the fun antics one can get up to with the resize property. If you’d like to explore the world of CSS crimes further, I’d recommend this great linkdump by YellowAfterlife, or this video retrospective by rebane2001.
    It will always hurt to describe Cohost in the past tense. It truly was a magical place, and I don’t think I’ll be able to properly convey what it was like to be there at its peak. The best I can do is share the hacks we came up with: the lost CSS tricks we invented while “posting, but better.”
    The Lost CSS Tricks of Cohost.org originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  24. by: Abhishek Kumar
    Thu, 24 Apr 2025 11:57:47 +0530

    When deploying containerized services such as Pi-hole with Docker, selecting the appropriate networking mode is essential for correct functionality, especially when the service is intended to operate at the network level.
    The host networking mode allows a container to share the host machine’s network stack directly, enabling seamless access to low-level protocols and ports.
    This is particularly critical for applications that require broadcast traffic handling, such as DNS and DHCP services.
This article explores the practical use of host networking mode in Docker, explains why bridge mode is inadequate for certain network-wide configurations, and provides a Docker Compose example to illustrate correct usage.
    What does “Host Network” actually mean?
    By default, Docker containers run in an isolated virtual network known as the bridge network. Each container receives an internal IP address (typically in the 172.17.0.0/16 range) and communicates through Network Address Translation (NAT).
    This setup is well-suited for application isolation, but it limits the container’s visibility to the outside LAN.
    For instance, services running inside such containers are not directly reachable from other devices on the local network unless specific ports are explicitly mapped.
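You can confirm which subnet Docker’s default bridge uses on your own machine; on a stock installation, the inspect command typically prints something like the comment below:

docker network inspect bridge --format '{{json .IPAM.Config}}'
# [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]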
    In contrast, using host network mode grants the container direct access to the host machine’s network stack.
    Rather than using a virtual subnet, the container behaves as if it were running natively on the host's IP address (e.g., 192.168.x.x or 10.1.x.x), as assigned by your router.
    It can open ports without needing Docker's ports directive, and it responds to network traffic as though it were a system-level process.
Setting up host network mode using docker compose
    While this setup can also be achieved using the docker run command with the --network host flag, I prefer using Docker Compose.
    It keeps things declarative and repeatable, especially when you need to manage environment variables, mount volumes, or configure multiple containers together.
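For reference, the equivalent one-off command (using only standard docker run flags) would be:

docker run -d --rm --network host --name nginx-host nginx:latest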
Let’s walk through an example config that runs an nginx container using host network mode:
    version: "3" services: web: container_name: nginx-host image: nginx:latest network_mode: hostThis configuration tells Docker to run the nginx-host container using the host's network stack.
There is no need to specify ports: if Nginx is listening on port 80, it’s directly accessible at your host's IP address on port 80, without any NAT or port mapping.
    Start it up with:
docker compose up -d

Then access it via:

http://192.168.x.x

You’ll get Nginx’s default welcome page directly from your host IP.
    How is this different from Bridge networking?
    By default, Docker containers use the bridge network, where each container is assigned an internal IP (commonly in the 172.17.0.0/16 range).
    Here’s how you would configure that:
    version: "3" services: web: image: nginx:latest ports: - "8080:80" This exposes the container’s port 80 to your host’s port 8080.
    The traffic is routed through Docker’s internal bridge interface, with NAT handling the translation. It’s great for isolation and works well for most applications.
    Optional: Defining custom bridge network with external reference
    In Docker Compose, a user-defined bridge network offers better flexibility and control than the host network, especially when dealing with multiple services.
    This allows you to define custom aliasing, service discovery, and isolation between services, while still enabling them to communicate over a single network.
I personally use this with Nginx Proxy Manager, which needs to communicate with multiple services that are all connected to my external npm network.
    Let's walk through how you can create and use a custom bridge network in your homelab setup. First, you'll need to create the network using the following command:
docker network create my_custom_network

Then, you can proceed with the Docker Compose configuration:
    version: "3" services: web: image: nginx:latest networks: - hostnet networks: hostnet: external: true name: hostnet Explanation:
hostnet: This is the name you give to your network inside the Compose file. The name key points it at the actual network on the host, my_custom_network.
external: true: This tells Docker Compose to use an existing network, in this case, the network we just created. Docker will not try to create it, assuming it's already available.
By using an external bridge network like this, you can ensure that your services can communicate within a shared network context, while still benefiting from Docker’s built-in networking features, such as automatic service name resolution and DNS, without the potential limitations of the host network.
    But... What’s the catch?
    Everything has a trade-off, and host networking is no exception. Here’s where things get real:
    ❌ Security takes a hit
    You lose the isolation that containers are famous for. A process inside your container could potentially see or interfere with host-level services.
    ❌ Port conflicts are a thing
    Because your container is now sharing the same network stack as your host, you can’t run multiple containers using the same ports without stepping on each other. With the bridge network, Docker handles this neatly using port mappings. With host networking, it’s all manual.
    ❌ Not cross-platform friendly
    Host networking works only on Linux hosts. If you're on macOS or Windows, it simply doesn’t behave the same way, thanks to how Docker Desktop creates virtual machines under the hood. This could cause consistency issues if your team is split across platforms.
❌ You can’t use some Docker features
    Things like service discovery (via Docker's DNS) or custom internal networks just won’t work with host mode. You’re bypassing Docker's clever internal network stack altogether.
    When to choose which Docker network mode
    Here’s a quick idea of when to use what:
Bridge Network: Great default. Perfect for apps that just need to run and expose ports with isolation. Works well with Docker Compose and lets you connect services easily using their names.
Host Network: Use it when performance or native networking is critical. Ideal for edge services, proxies, or tightly coupled host-level apps.
None: There's a network_mode: none too, which disables networking entirely. Use it for highly isolated jobs like offline batch processing or building artifacts.
Wrapping Up
    The host network mode in Docker is best suited for services that require direct interaction with the local network.
    Unlike Docker's default bridge network, which isolates containers with internal IP addresses, host mode allows a container to share the host's network stack, including its IP address and ports, without any abstraction.
    In my own setup, I use host mode exclusively for Pi-hole, which acts as both a DNS resolver and DHCP server for the entire network.
    For most other containers, such as web applications, reverse proxies, or databases, the bridge network is more appropriate. It ensures better isolation, security, and flexibility when exposing services selectively through port mappings.
    In summary, host mode is a powerful but specialized tool. Use it only when your containerized service needs to behave like a native process on the host system.
    Otherwise, Docker’s default networking modes will serve you better in terms of control and compartmentalization.
  25. by: Abhishek Prakash
    Thu, 24 Apr 2025 05:35:31 GMT

    I guess you already know that It's FOSS has an active community forum. I recently upgraded its server and changed its look slightly. Hope you like it.
    If you have questions about using Linux or if you want to share something interesting you discovered with your Linux setup, you are more than welcome to utilize the Community.
It’s FOSS Community: A place for desktop Linux users and It’s FOSS readers.
💬 Let's see what else you get in this edition
New Ubuntu flavor release.
Exploring pages, links, and tags in Logseq.
Ubisoft doing something really unexpected.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by Valkey.
❇️ Valkey – The Drop-in Alternative to Redis OSS
    With the change of Redis licensing in March of 2024 came the end of Redis as an open source project. Enter Valkey – the community driven fork that preserves and improves the familiar high-performance, key-value datastore for improving application performance.
Stewarded by the Linux Foundation, Valkey serves as an open source drop-in alternative to Redis OSS – no code changes needed, with the same developer-friendly experience. For your open source database, check out Valkey.
What is Valkey? – Valkey Datastore Explained (Amazon Web Services): Valkey is an open source, high performance, in-memory, key-value datastore. It is designed as a drop-in replacement for Redis OSS.
📰 Linux and Open Source News
Ubisoft has surprised us with its open source move.
Ubuntu 25.04 has arrived, delivering many upgrades. Similarly, Xubuntu 25.04 and Kubuntu 25.04 are here, offering up useful refinements.
The ZimaBoard 2 crowdfunding campaign is live on Kickstarter. I was sent an early prototype a few days ago, and I share my experience with ZimaBoard 2 in this article.
Initial Impressions of the ZimaBoard 2 Homelab Device (It's FOSS News): Apart from the silver exterior, I have nothing to complain about in ZimaBoard 2 even if it is a prototype at this stage.
🧠 What We’re Thinking About
    Android's Linux Terminal just got a noteworthy power up.
Android 16 lets the Linux Terminal use your phone’s entire storage (Android Authority): Android 16 Beta 4 uncaps the disk resizing slider, allowing you to allocate your phone’s entire storage to the Linux Terminal.
🧮 Linux Tips, Tutorials and More
It's easy to check which desktop environment your Linux distro has.
Check out these 21 basic, yet essential Linux networking commands.
Some tools you can use when you have to share files in GB over the internet.
Continuing the Logseq series, learn how to tag, link, and reference in Logseq the right way, and when you are done with that, you can try customizing it.
If you just installed or upgraded to the Ubuntu 25.04 release, here are 13 things you should do right away:
13 Things to do After Installing Ubuntu 25.04 (It's FOSS News): Just installed Ubuntu 25.04? Here are some neat tips for you.
Why should you opt for It's FOSS Plus membership:
    ✅ Ad-free reading experience
    ✅ Badges in the comment section and forum
    ✅ Supporting creation of educational Linux materials
Join It's FOSS Plus
👷 Homelab and Maker's Corner
    Build a real-time knowledge retrieval system with our step-by-step RAG guide.
Tuning Local LLMs With RAG Using Ollama and Langchain (It's FOSS): Interested in taking your local AI setup to the next step? Here’s a sample PDF-based RAG project.
✨ Apps Highlight
    Docs is a self-hostable document editor solution with many neat features.
Online Docs... but Sovereign: This is Europe’s Open Source Answer to Big Tech (It's FOSS News): Docs is an open source, self-hosted document editor that allows real-time collaboration and gives users control over their data.
📽️ Videos I am Creating for You
    See Fedora 42 in action in the latest video. By the way, that weird default wallpaper has a significance.
Subscribe to It's FOSS YouTube Channel
🧩 Quiz Time
    Try your hand at our Essential Linux Commands crossword.
Guess the Popular Linux Command: Crossword (It's FOSS): Test your Linux command line knowledge with this simple crossword puzzle. All you have to do is correctly guess the essential Linux command.
Alternatively, can you match the Linux distros with their logos?
    💡 Quick Handy Tip
    In GNOME File Manager (Nautilus), you can invert the selection of items using the keyboard shortcut CTRL + SHIFT + I.
    🤣 Meme of the Week
    Hah, this couldn't be more true. 😆
    🗓️ Tech Trivia
On April 20, 1998, during a demonstration of a beta version of Windows 98 by Microsoft's Bill Gates at COMDEX, the system crashed live on stage. Gates joked, "That must be why we're not shipping Windows 98 yet." If you ever used Windows 98, you know it should never have been shipped 😉
    🧑‍🤝‍🧑 FOSSverse Corner
    Can you help a newbie FOSSer with their search for a Linux distribution chart?
Looking for a Linux distribution chart (It's FOSS Community): I’m wondering if there’s ever been a table/spreadsheet created to provide a basic breakdown of most, if not all, distros at a glance. I’m not talking about subjective metrics like some charts display (e.g. user friendliness, cutting edge, community, etc.), but about objective criteria like the following: available architecture, desktop environments (possibly with individual categories for filing system, window manager, terminal, etc.), shell, package manager, installation (e.g. CLI, Calamares, etc.)…
❤️ With love
    Share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
