Everything posted by Blogger
-
FOSS Weekly #25.39: Kill Switch Phones, LMDE 7, Zorin OS 18 Beta, Polybar, Apt History and More Linux Stuff
by: Abhishek Prakash Thu, 25 Sep 2025 04:40:13 GMT

There were two smartphone launches recently, both with hardware kill switches. One is the Murena-powered HIROH Phone, and the other is the Furi Labs FLX1s. The FLX1s uses a Debian-based operating system. Now, these are not necessarily for everyone, and they sure are not cheap. They might not be as expensive as iPhones or the Samsung Galaxy S series, but they are squarely in the mid-range. They are best suited for journalists and activists who have to protect sensitive data, hence the kill switch. That doesn't mean a privacy-aware regular Joe (or Jane) cannot opt for them; it's just that the lack of some mainstream features could cause frustration. What do you think? 💬

Let's see what you get in this edition:

- Apt receiving a much-needed upgrade.
- Lots happening in the open source space.
- An early look at LMDE 7 and Zorin OS 18.
- And other Linux news, tips, and, of course, memes!

📰 Linux and Open Source News

- OBS Studio 32.0 introduces a new plugin manager.
- Apt is finally getting support for a history command.
- The eBPF Foundation has awarded $100K in research grants.
- Git 3.0 might make Rust mandatory, though this is not yet final.
- LMDE 7 beta is here with a Debian 13 base and lots of new bits.
- Zorin OS 18 beta is here with a fresh design and many new features.
- A new proposal has been floated to make Linux multi-kernel friendly. If approved, Linux could one day run multiple kernels simultaneously.

🧠 What We're Thinking About

A coalition of open source organizations has called out predatory practices, warning that open source infrastructure is breaking down due to corporate freeloading. An unprecedented threat looms over open source.

If you are around South Korea, then you should definitely attend this year's Open Source Summit Korea!
🧮 Linux Tips, Tutorials, and Learnings

- Learn how to make the best out of Polybar in Xfce.
- If you have ever wondered what an immutable distro is, then we have got you covered.
- These distros and tools offer Hyprland preconfigured, lowering the entry barrier for newcomers.

👷 AI, Homelab and Hardware Corner

Cool down your Raspberry Pi in style with these mini PC cases. The Pi 5 is a remarkable device and it deserves an awesome case; these tower cases transform your Raspberry Pi 5 into a miniature desktop PC. Also explore some must-know Ollama commands to manage local AI models.

✨ Project Highlight

Net Commander is a new project from Elelab that brings network troubleshooting, Wi-Fi surveys, SSH jumping, CIDR calculations, and more into VS Code. It aims to supercharge Visual Studio Code for network engineers, DevOps engineers, and solution architects, streamlining everyday workflows and accelerating data-driven root-cause analysis. The author had reached out to us, but we haven't tested the plugin extensively yet.

📽️ Videos I Am Creating for You

Explore DuckDuckGo's lesser-known features in our latest video. Subscribe to It's FOSS YouTube Channel.

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member.
It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus

💡 Quick Handy Tip

In GNOME's Nautilus file manager, you can drag and drop a tab from one window to another Nautilus window, just like in browsers. Or, drag it out to open it as a new window. See below to learn how. 👇

🎋 Fun in the FOSSverse

🧩 Quiz Time: Open source is full of forks; can you match the projects with their community-based forks/alternatives?

🤣 Meme of the Week: The contempt is real, people. ☠️

🗓️ Tech Trivia: On September 22, 1986, a U.S. federal judge ruled that computer code could be copyrighted, giving software the same legal protections as books and other written works.

🧑🤝🧑 From the Community: One of our regular FOSSers is looking for a terminal app to view .log files with pagination and colors; they found lnav.org but haven't tested it yet. Can you help?

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your news feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
CSS Typed Arithmetic
by: Amit Sheen Wed, 24 Sep 2025 12:49:22 +0000

CSS typed arithmetic is genuinely exciting! It opens the door to new kinds of layout composition and animation logic we could only hack before. The first time I published something that leaned on typed arithmetic was in this animation:

CodePen Embed Fallback

But before we dive into what is happening in there, let's pause and get clear on what typed arithmetic actually is and why it matters for CSS.

Browser Support: The CSS feature discussed in this article, typed arithmetic, is on the cutting edge. As of the time of writing, browser support is very limited and experimental. To ensure all readers can understand the concepts, the examples throughout this article are accompanied by videos and images demonstrating the results for those whose browsers do not yet support this functionality. Please check resources like MDN or Can I Use for the latest support status.

The Types

If you really want to get what a "type" is in CSS, think about TypeScript. Now forget about TypeScript. This is a CSS article, where semantics actually matter. In CSS, a type describes the unit space a value lives in, and is called a data type. Every CSS value belongs to a specific type, and each CSS property and function only accepts the data type (or types) it expects.

- Properties like opacity or scale use a plain <number> with no units.
- width, height, other box metrics, and many additional properties use <length> units like px, rem, cm, etc.
- Functions like rotate() or conic-gradient() use an <angle> with deg, rad, or turn.
- animation and transition use <time> for their duration in seconds (s) or milliseconds (ms).

Note: You can identify CSS data types in the specs, on MDN, and other official references by their angle brackets: <data-type>. There are many more data types like <percentage>, <frequency>, and <resolution>, but the types mentioned above cover most of our daily use cases and are all we will need for our discussion today.
The mathematical concept remains the same for (almost) all types. I say "almost" all types for one reason: not every data type is calculable. For instance, types like <color>, <string>, or <image> cannot be used in mathematical operations. An expression like "foo" * red would be meaningless. So, when we discuss mathematics in general, and typed arithmetic in particular, it is crucial to use types that are inherently calculable, like <length>, <angle>, or <number>.

The Rules of Typed Arithmetic

Even when we use calculable data types, there are still limitations and important rules to keep in mind when performing mathematical operations on them.

Addition and Subtraction

Sadly, a mix-and-match approach doesn't really work here. Expressions like calc(3em + 45deg) or calc(6s - 3px) will not produce a logical result. When adding or subtracting, you must stick to the same data type. Of course, you can add and subtract different units within the same type, like calc(4em + 20px) or calc(300deg - 1rad).

Multiplication

With multiplication, you can only multiply by a plain <number> type. For example: calc(3px * 7), calc(10deg * 6), or calc(40ms * 4). The result will always adopt the type and unit of the first value, with the new value being the product of the multiplication. But why can you only multiply by a number? If we tried something like calc(10px * 10px) and assumed it followed "regular" math, we would expect a result of 100px². However, there are no squared pixels in CSS, and certainly no square degrees (though that could be interesting…). Because such a result is invalid, CSS only permits multiplying typed values by unitless numbers.

Division

Here, too, mixing and matching incompatible types is not allowed, and you can divide by a number just as you can multiply by one. But what happens when you divide a type by the same type? Hint: this is where things get interesting.
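Purely as a mental model (CSS engines do not work this way, and the class below is a hypothetical illustration, not part of the article's code), the addition and multiplication rules can be sketched in Python:

```python
# Toy model of the typed-arithmetic rules above (illustration only, not a CSS engine).
class CssValue:
    """A value tagged with a CSS-like data type, e.g. ('length', 'px')."""
    def __init__(self, value, data_type, unit):
        self.value, self.data_type, self.unit = value, data_type, unit

    def __add__(self, other):
        # Addition/subtraction only works within the same data type:
        # calc(3em + 45deg) is invalid, calc(4px + 20px) is fine.
        if not isinstance(other, CssValue) or other.data_type != self.data_type:
            raise TypeError("addition requires matching data types")
        return CssValue(self.value + other.value, self.data_type, self.unit)

    def __mul__(self, factor):
        # Multiplication is only allowed by a plain <number>: 10px * 10px
        # would yield square pixels, which do not exist in CSS.
        if isinstance(factor, CssValue):
            raise TypeError("can only multiply a typed value by a <number>")
        return CssValue(self.value * factor, self.data_type, self.unit)

def px(v):  return CssValue(v, "length", "px")
def deg(v): return CssValue(v, "angle", "deg")

print((px(3) * 7).value)   # calc(3px * 7): value 21, still a <length>
try:
    px(10) * px(10)        # calc(10px * 10px) is invalid
except TypeError as err:
    print(err)
```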
Again, if we were thinking in terms of regular math, we would expect the units to cancel each other out, leaving only the calculated value. For example, 90x / 6x = 15. In CSS, however, this isn't the case. Sorry, it wasn't the case. Previously, an expression like calc(70px / 10px) would have been invalid. But starting with Chrome 140 (and hopefully soon in all other browsers), this expression now returns a valid number, which winds up being 7 in this case.

This is the major change that typed arithmetic enables. Is that all?! That little division? Is that the big thing I called "genuinely exciting"? Yes! Because this one little feature opens the door to a world of creative possibilities. Case in point: we can convert values from one data type to another and mathematically condition values of one type based on another, just like in the swirl example I demoed at the top. So, to understand what is happening there, let's look at a more simplified swirl:

CodePen Embed Fallback

I have a container <div> with 36 <i> elements in the markup that are arranged in a spiral with CSS. Each element has an angle relative to the center point, rotate(var(--angle)), and a distance from that center point, translateX(var(--distance)).

The angle calculation is quite direct. I take the index of each <i> element using sibling-index() and multiply it by 10deg. So, the first element with an index of 1 will be rotated by 10 degrees (1 * 10deg), the second by 20 degrees (2 * 10deg), the third by 30 degrees (3 * 10deg), and so on.

i { --angle: calc(sibling-index() * 10deg); }

As for the distance, I want it to be directly proportional to the angle. I first use typed arithmetic to divide the angle by 360 degrees: var(--angle) / 360deg. This returns the angle's value, but as a unitless number, which I can then use anywhere. In this case, I can multiply it by a <length> value (e.g. 180px) that determines the element's distance from the center point.
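The spiral itself is pure CSS; the following plain-Python check just verifies the numbers, using the 10deg step and 180px radius from the example:

```python
# Reproduce the spiral math: an angle from the sibling index, then a
# unitless ratio (angle / 360deg) scaled by a 180px distance.
def spiral_position(index):
    angle = index * 10        # calc(sibling-index() * 10deg), in degrees
    ratio = angle / 360       # var(--angle) / 360deg -> unitless number
    distance = ratio * 180    # ratio * 180px -> distance in pixels
    return angle, distance

for i in (1, 18, 36):
    print(i, spiral_position(i))  # element 36 sits at 360 degrees, 180.0px out
```

Because distance is always angle / 360 * 180, the angle-to-distance ratio is fixed, which is exactly why every element lands on the same spiral.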
i {
  --angle: calc(sibling-index() * 10deg);
  --distance: calc(var(--angle) / 360deg * 180px);
}

This way, the ratio between the angle and the distance remains constant. Even if we set the angle of each element differently, or to a new value, the elements will still align on the same spiral.

The Importance of the Divisor's Unit

It's important to clarify that when using typed arithmetic this way, you get a unitless number, but its value is relative to the unit of the divisor. In our simplified spiral, we divided the angle by 360deg. The resulting unitless number, therefore, represents the value in degrees. If we had divided by 1turn instead, the result would be completely different: even though 1turn is equivalent to 360deg, the resulting unitless number would represent the value in turns.

A clearer example can be seen with <length> values. Let's say we are working with a screen width of 1080px. If we divide the screen width (100vw) by 1px, we get the number of pixels that fit into the screen width, which is, of course, 1080.

calc(100vw / 1px) /* 1080 */

However, if we divide that same width by 1em (and assume a font size of 16px), we get the number of em units that fit across the screen.

calc(100vw / 1em) /* 67.5 */

The resulting number is unitless in both cases, but its meaning is entirely dependent on the unit of the value we divided by.

From Length to Angle

Of course, this conversion doesn't have to be from a type <angle> to a type <length>. Here is an example that calculates an element's angle based on the screen width (100vw), creating a new and unusual kind of responsiveness.

CodePen Embed Fallback

And get this: there are no media queries in here! It's all happening in a single line of CSS doing the calculations. To determine the angle, I first define the width range I want to work within. clamp(300px, 100vw, 700px) gives me a closed range of 400px, from 300px to 700px. I then subtract 700px from this range, which gives me a new range, from -400px to 0px.
Using typed arithmetic, I then divide this range by 400px, which gives me a normalized, unitless number between -1 and 0. And finally, I convert this number into an <angle> by multiplying it by -90deg. Here's what that looks like in CSS when we put it all together:

p {
  rotate: calc(((clamp(300px, 100vw, 700px) - 700px) / 400px) * -90deg);
}

From Length to Opacity

Of course, the resulting unitless number can be used as-is in any property that accepts a <number> data type, such as opacity. What if I want to determine the font's opacity based on its size, making smaller fonts more opaque and therefore clearer? Is it possible? Absolutely.

CodePen Embed Fallback

In this example, I am setting a different font-size value for each <p> element using a --font-size custom property. And since the range of this variable is from 0.8rem to 2rem, I first subtract 0.8rem from it to create a new range of 0 to 1.2rem. I could divide this range by 1.2rem to get a normalized, unitless value between 0 and 1. However, because I don't want the text to become fully transparent, I divide it by twice that amount (2.4rem). This gives me a result between 0 and 0.5, which I then subtract from the maximum opacity of 1.

p {
  font-size: var(--font-size, 1rem);
  opacity: calc(1 - (var(--font-size, 1rem) - 0.8rem) / 2.4rem);
}

Notice that I am displaying the font size in pixel units even though the size is defined in rem units. I simply use typed arithmetic to divide the font size by 1px, which gives me the size in pixels as a unitless value. I then inject this value into the content of the paragraph's ::after pseudo-element.

p::after {
  counter-reset: px calc(var(--font-size, 1rem) / 1px);
  content: counter(px) 'px';
}

Dynamic Width Colors

Of course, the real beauty of using native CSS math functions, compared to other approaches, is that everything happens dynamically at runtime. Here, for example, is a small demo where I color the element's background relative to its rendered width.
p {
  --hue: calc(100cqi / 1px);
  background-color: hsl(var(--hue, 0) 75% 25%);
}

You can drag the bottom-right corner of the element to see how the color changes in real time.

CodePen Embed Fallback

Here's something neat about this demo: because the element's default width is 50% of the screen width and the color is directly proportional to that width, it's possible that the element will initially appear in completely different colors on different devices with different screens. Again, this is all happening without any media queries or JavaScript.

An Extreme Example: Chaining Conversions

OK, so we've established that typed arithmetic is cool and opens up new and exciting possibilities. Before we put a bow on this, I wanted to pit this concept against a more extreme example. I tried to imagine what would happen if we took a <length> type, converted it to a <number> type, then to an <angle> type, back to a <number> type, and, from there, back to a <length> type. Phew!

I couldn't find a real-world use case for such a chain, but I did wonder what would happen if we were to animate an element's width and use that width to determine the height of something else. All the calculations might not be necessary (maybe?), but I think I found something that looks pretty cool.

CodePen Embed Fallback

In this demo, the animation is on the solid line along the bottom. The vertical position of the ball, i.e. its height relative to the line, is proportional to the line's width. So, as the line expands and contracts, so does the path of the bouncing ball. To create the parabolic arc that the ball moves along, I take the element's width (100cqi) and, using typed arithmetic, divide it by 300px to get a unitless number between 0 and 1. I multiply that by 180deg to get an angle that I use in a sin() function (Juan Diego has a great article on this), which returns another unitless number between 0 and 1, but with a parabolic distribution of values.
Finally, I multiply this number by -200px, which outputs the ball's vertical position relative to the line.

.ball {
  --translateY: calc(sin(calc(100cqi / 300px) * 180deg) * -200px);
  translate: -50% var(--translateY, 0);
}

And again, because the ball's position is relative to the line's width, it will remain on the same arc, no matter how we define that width.

Wrapping Up: The Dawn of Computational CSS

The ability to divide one typed value by another to produce a unitless number might seem like no big deal; more like a minor footnote in the grand history of CSS. But as we've seen, this single feature is a quiet revolution. It dismantles the long-standing walls between different CSS data types, transforming them from isolated silos into a connected, interoperable system. We've moved beyond simple calculations and entered the era of true Computational CSS.

This isn't just about finding new ways to style a button or animate a loading spinner. It represents a fundamental shift in our mental model. We are no longer merely declaring static styles, but rather defining dynamic, mathematical relationships between properties. The width of an element can now intrinsically know about its color, an angle can dictate a distance, and a font's size can determine its own visibility. This is CSS becoming self-aware, capable of creating complex behaviors and responsive designs that adapt with a precision and elegance that previously required JavaScript.

So, the next time you find yourself reaching for JavaScript to bridge a gap between two CSS properties, pause for a moment. Ask yourself if there's a mathematical relationship you can define instead. You might be surprised at how far you can go with just a few lines of CSS.

The Future is Calculable

The examples in this article are just the first steps into a much larger world. What happens when we start mixing these techniques with scroll-driven animations, view transitions, and other modern CSS features?
The potential for creating intricate data visualizations, generative art, and truly fluid user interfaces, all natively in CSS, is immense. We are being handed a new set of creative tools, and the instruction manual is still being written.

CSS Typed Arithmetic originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Module 1: Timers and Automated Task Scheduling in Systemd
by: Umair Khurshid Wed, 24 Sep 2025 17:47:30 +0530 This lesson is for paying subscribers only.
-
Advanced Automation with systemd
by: Umair Khurshid Wed, 24 Sep 2025 16:57:20 +0530

Unlock the full potential of your Linux system by replacing classic cron jobs with modern, powerful systemd automation. Learn how to schedule, monitor, sandbox, and optimize automated workflows like a pro, all while leveraging the same tools used by your Linux system itself.

Why systemd instead of cron?

Cron has been around for decades, but it's limited. It can't monitor dependencies, doesn't integrate with system logging, and has no native way to handle failures gracefully. systemd is the future of automation on Linux. It's not just a service manager, it's a complete automation framework that lets you:

- Schedule tasks with precision using timers
- Automate complex, dependent workflows
- Sandbox risky jobs for security
- Monitor, debug, and optimize jobs like a sysadmin ninja

If you're ready to ditch cron and take advantage of systemd's power, this course is for you.

What will you learn?

Module 1. Timers and Automated Task Scheduling: Forget crontab -e. Learn how to build systemd timers for recurring and one-off jobs, with logging and error handling built in.

Module 2. Automating Complex Workflows with Targets: Master targets to run multi-service workflows in the right order, ensuring dependencies are respected.

Module 3. systemd-nspawn and machinectl for Repeatable Environments: Create reproducible containers with systemd-nspawn for development, testing, and automation pipelines.

Module 4. Automated Resource Management: Leverage systemd's cgroup integration to limit CPU, memory, and IO for your automated tasks.

Module 5. Sandboxing Directives for Safer Automation: Use built-in sandboxing features to isolate automated jobs and protect your system from accidental damage.

Module 6. Debugging and Monitoring Automated Services: Learn how to use journalctl, systemctl status, and other tools to troubleshoot like a pro.

How to use this course?

You will gain practical skills through hands-on exercises and real-world scenarios.
The best approach is to follow the instructions and commands on a Linux system installed in a virtual machine or on a dedicated test machine. By the end, you'll have the knowledge and confidence to manage your Linux system more effectively using systemd.
-
410: Trying to help humans in an industry that is becoming increasingly non-human
by: Chris Coyier Tue, 23 Sep 2025 17:33:00 +0000

Chris & Marie jump on the podcast to talk about just how drastically customer support has changed over the last few years. We still exclusively do customer support over email. Incoming emails from real customers who need a hand with something, typed out in plain language themselves, are few and far between. Instead we get an onslaught of noise from users that don't exist about Pens and situations that don't exist. The influence of agentic AI is massive here, some of it with nefarious intent and some not. All of it needs work to mitigate.

Time Jumps

00:07 How much support has changed in the last 2 years
01:12 How do we do support at CodePen in 2025
07:41 How much noise AI has added to support
14:02 Verifying accounts before they're allowed to use support or CodePen
23:05 Some of the changes we've made to help deal with AI
29:50 The benefits of learning to code with AI
-
Chris’ Corner: Little Bits of CSS
by: Chris Coyier Mon, 22 Sep 2025 15:33:16 +0000

Adam Argyle is clear with some 2025 CSS advice: Nobody asked me, but if I had to pick a favorite of Adam's six, it's all the stuff about animating <dialog>, popover, and <details>. There is a lot of interesting new-ish CSS stuff in there that will help you all around, between allow-discrete, overlay, ::backdrop, :popover-open, @starting-style, and more.

/* enable transitions, allow-discrete, define timing */
[popover], dialog, ::backdrop {
  transition: display 1s allow-discrete, overlay 1s allow-discrete, opacity 1s;
  opacity: 0;
}

/* ON STAGE */
:popover-open, :popover-open::backdrop, [open], [open]::backdrop {
  opacity: 1;
}

/* OFF STAGE */
/* starting-style for pre-positioning (enter stage from here) */
@starting-style {
  :popover-open, :popover-open::backdrop, [open], [open]::backdrop {
    opacity: 0;
  }
}

Jeremy Keith also did a little post with CSS snippets in it, including a bit he overlaps with Adam on, where you by default opt in to View Transitions, even if that's all you do.

@media (prefers-reduced-motion: no-preference) {
  @view-transition {
    navigation: auto;
  }
}

The idea is you get the cross-fade right away and are then set up to sprinkle in more cross-page animation when you're ready.

Una Kravets has a post about the very new @function stuff in CSS with a bunch of examples. I enjoyed this little snippet:

/* Take up 1fr of space for the sidebar on screens smaller than 640px,
   and take up the --sidebar-width for larger screens. 20ch is the fallback. */
@function --layout-sidebar(--sidebar-width: 20ch) {
  result: 1fr;
  @media (width > 640px) {
    result: var(--sidebar-width) auto;
  }
}

.layout {
  display: grid;
  grid-template-columns: --layout-sidebar();
}

I'm intrigued by the idea of being able to abstract away the logic in CSS when we want to. Perhaps making it more reusable and making the more declarative parts of CSS easier to read. Here's another.
I had absolutely no idea design control over the caret was coming to CSS (the thing in editable areas where you're typing, that is usually a blinking vertical line). I guess I knew we had caret-color, which is prettttttty niche if you ask me. But now apparently we're given control over the shape and literal animation of the caret.

textarea {
  color: white;
  background: black;
  caret-shape: block;
  caret-animation: manual;
  animation: caret-block 2s step-end infinite;
}

@keyframes caret-block {
  0% { caret-color: #00d2ff; }
  50% { caret-color: #ffa6b9; }
}

Jump over to the Igalia blog post to see the video on that one. OK that's all for this week. Er wait actually you gotta watch Julia Miocene's Chicken video. Now I'm done.
-
How I Configure Polybar to Customize My Linux Desktop
by: Sreenath Mon, 22 Sep 2025 11:33:58 GMT

Most major Linux desktop environments like GNOME, KDE Plasma, and Xfce come with their own built-in panels for launching apps, switching workspaces, and keeping track of what's happening on your system.

Example of top panel in Xfce

One of the best things about Linux is the freedom to customize, and there are plenty of alternatives out there if you want something more flexible or visually appealing for your panel. Polybar is a standout choice among these alternatives. It's a fast, highly customizable status bar that not only looks great but is also easy to configure. If you're running an X11-based setup, such as the i3 window manager or even Xfce, Polybar can really elevate the look of your desktop, help you keep essential info at your fingertips, and make better use of your screen space.

Example of Polybar in Xfce

We used Polybar in our Xfce customization video, which is where we got the idea to do a detailed tutorial on it. Subscribe to It's FOSS YouTube Channel

In this guide, we'll build a sleek Polybar panel just like the one featured in our Xfce customization video above. Along the way, you'll get a solid introduction to the basics of Polybar customization to help you tailor the panel to your own style and workflow.

🚧 This article is not trying to take the place of the Polybar Wiki. You can and should read the wiki while customizing Polybar. This article acts as a helper companion for beginners to get started.

Installing Polybar

💡 Most tweaks here are done through the config file at the user level. If you get easily overwhelmed and don't like to troubleshoot and fix things, you should probably create a new user account. Or, you could try these things on a fresh system in a VM or on a spare machine. This way, you won't impact your main system.
Just a suggestion.

Polybar is a popular project and is available in the official repositories of most major Linux distributions, including Ubuntu, Debian, Arch Linux, Fedora, etc. If you are a Debian/Ubuntu user, use:

sudo apt install polybar

For Arch Linux users:

sudo pacman -S polybar

In Fedora Linux, use the command:

sudo dnf install polybar

Once you install Polybar, you can actually use it with the default config by running the command:

polybar

Add it to the list of autostart applications to make the bar automatically start at system login.

Initial configuration setup

Let's say you don't want the default config and you want to start from scratch. First, make a directory called polybar in your ~/.config directory.

mkdir -p ~/.config/polybar

And then create a config file called config.ini for Polybar in this location.

touch ~/.config/polybar/config.ini

Now, you have an empty config file. It's time to 'code'.

Config file structure

The Polybar config file has a structure that keeps things easy and clean. The whole config can be divided broadly into four parts:

- Colors: Define the colors to use across Polybar.
- Bar: Define the properties of the whole bar.
- Modules: Individual bar modules are defined here.
- Scripts: These are not inside the config, but external shell and other scripts that enhance Polybar's functionality.

Define the colors

It is not convenient to write all the colors in hex code separately. While this is fine during rough coding, it will create headaches later on when you want to change colors in bulk. You can define a set of general colors in the beginning to make things easier.
See an example here:

[colors]
background = #282A2E
window-background = #DE282A2E
background-alt = #373B41
border-color = #0027A1B9
foreground = #C5C8C6
primary = #88c0d0
secondary = #8ABEB7
alert = #A54242
disabled = #707880
aurora-blue = #27A1B9
aurora-orange = #FF9535
aurora-yellow = #FFFDBB
aurora-green = #53E8D4
aurora-violet = #8921C2
nord-background = #4c566a

Now, to refer to any color in the list, you can use:

key = ${colors.colorvariable}

For example, if you want to set the foreground color in a module, you will use:

foreground = ${colors.foreground}

💡 If you intend to change the entire color palette of the bar, all you have to do is create a new color palette and paste it into the config. No need to change the individual colors of all modules and sub-items.

Setting the bar

In simple words, this is the panel itself: the one that contains all other modules. Polybar allows you to have multiple bars; perhaps that's the reason why it is called 'polybar'. These bars can be named separately in the config file, each with its own set of modules. A bar is defined with the syntax:

[bar/<barname>]
option = value
option = value

[bar/<barname2>]
option = value
option = value

Let's say I am creating a top bar and a bottom bar; my simple syntax will be:

[bar/mytopbar]
option = value

[bar/mybottombar]
option = value

There will be plenty of options and values to use, as you will see later in this tutorial. Now, if you want to open only the top bar, use:

polybar mytopbar

Configure the bar

You have seen the general syntax of the bar that mentions options and values. Now, let's see some options. I am giving you a code block below and will explain with its help.
monitor = HDMI-1
width = 100%
height = 20pt
radius = 5
fixed-center = true
background = ${colors.window-background}
foreground = ${colors.foreground}
line-size = 3pt
border-size = 2.5pt
border-color = ${colors.border-color}
padding-left = 0
padding-right = 0
module-margin = 1
separator = "|"
separator-foreground = ${colors.disabled}
font-0 = "JetBrains Mono:size=10;3"
font-1 = monospace;2
font-2 = "FiraCode Nerd Font:size=11;2"
font-3 = "Symbols Nerd Font:size=20;4"
modules-left = mymenu ewmh
modules-center = date temperature pacupdate
modules-right = pulseaudio memory cpu eth magic-click sessionLogout
enable-ipc = true

The main options to take a closer look at are:

- monitor: As the name suggests, this decides on which monitor you want the Polybar. Use the xrandr command to get the name of the display. If you are using a multi-monitor setup, you can define a second bar, placing it on the second monitor, and so on.
- separator: This is the separator used between the modules appearing in Polybar. You can use any character here, including Nerd Font glyphs (given the Nerd Font is installed on the system).
- font-n: These are the fonts to be used in the bar. The numbers define the fallback order: if the font mentioned first is not available, the next one is used. Take special note of the Nerd Fonts we have set at font-2 and font-3; this will be explained in a later section.
- modules-left, modules-center, modules-right: Keys used to arrange the modules in the bar. Place a module name in one of these sections and it appears in that part of the bar.
- enable-ipc: Enables inter-process communication. This allows scripts or external apps to send commands (like module updates or bar reloads) to Polybar in real time.

The above-mentioned options are enough for a working bar. The rest are mostly self-explanatory. You can read more about other options in the official Polybar wiki.
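Polybar resolves the ${colors.…} references itself when it reads config.ini. Purely to illustrate how the reference syntax ties the [colors] section to bar and module options, here is a small Python sketch that parses an INI fragment and expands the references (the resolve() helper and the [module/date] sample are hypothetical, just for the demo):

```python
import configparser
import re

# A trimmed, hypothetical config.ini fragment in Polybar's style.
config_text = """
[colors]
background = #282A2E
foreground = #C5C8C6
primary = #88c0d0

[module/date]
format-foreground = ${colors.primary}
format-background = ${colors.background}
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

def resolve(value, cfg):
    # Replace each ${section.key} reference with the referenced value.
    return re.sub(
        r"\$\{([^.}]+)\.([^}]+)\}",
        lambda m: cfg[m.group(1)][m.group(2)],
        value,
    )

print(resolve(parser["module/date"]["format-foreground"], parser))  # #88c0d0
```

This is also why swapping the whole palette is a one-step change: every module option points at a [colors] key rather than a hard-coded hex value.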
Modules Now that you have placed the bar, it's time to start adding the items. If you looked at the configuration above, you would have noticed that there are some entries in the modules-left, modules-center, and modules-right keys. They are mymenu ewmh, date temperature pacupdate, and pulseaudio memory cpu eth magic-click sessionLogout respectively. These entries call modules into the bar and place them in the required position. Before a module can be called into the bar, it needs to be defined: what should be displayed at that position. So, our next part is defining the modules. The general syntax for a module is: [module/MY_MODULE_NAME] type = MODULE_TYPE option1 = value1 option2 = value2 ... Here, MY_MODULE_NAME is a name you choose, while the available MODULE_TYPE values are documented on the Polybar wiki, which explains each module. For example, refer to the CPU module page of the Polybar wiki. Getting Module Name The type here will be: type = internal/cpu 🚧I will be using several modules here that together create a fine panel for a beginner. You should read the wiki for more modules and customizations as required for your needs. Add Workspaces Workspaces are a great way to increase productivity by avoiding cluttered windows in front of you. In Polybar, we will be using the ewmh module to get workspaces in the panel. Let's see a sample config: [module/ewmh] type = internal/xworkspaces icon-0 = 1; icon-1 = 2; icon-2 = 3; icon-3 = 4; icon-4 = 5; icon-5 = 6; icon-6 = 7; icon-7 = 8; icon-8 = 9; icon-9 = 10; format = <label-state> format-font = 2 #group-by-monitor = false #pin-workspaces = false label-active = %icon% label-active-background = ${colors.background-alt} label-active-foreground = #00000000 label-active-padding = 2 label-occupied = %icon% label-occupied-padding = 1 label-urgent = %icon% label-urgent-background = ${colors.primary} label-urgent-padding = 1 label-empty = %icon% label-empty-foreground = ${colors.disabled} label-empty-padding = 1 We have already seen what type is in the previous section.
In workspaces, you should be able to see icons/numbers for each workspace. These icons are defined with the icon-n keys, where n corresponds to the workspace number. For desktops like Xfce, the number of workspaces available is managed by the desktop. So, if you are adding icons for 5 workspaces, make sure you have created 5 workspaces in the system settings. In Xfce, for example, you can search for Virtual Desktops in the menu and set the number of workspaces available on the system. The format option tells the bar what to show for each workspace. We have set it to <label-state>. This means we define labels for a few workspace states (active, empty, occupied, urgent), and the display follows those states. The format-font key tells Polybar which font to use for this module. Be careful with the indexing here: fonts are defined with a zero-based index (font-0, font-1, ...) but referenced with a one-based index, so format-font = 3 actually refers to font-2, and to use Symbols Nerd Font:size=20;4 (defined as font-3 in the bar section) you would set format-font = 4. Since the workspace icons are glyphs pasted from Nerd Fonts, a Nerd Font is needed to display them properly. Look at the code below: label-active = %icon% label-active-background = ${colors.background-alt} label-active-foreground = #00000000 label-active-padding = 2 This sets the label to %icon% when a workspace is active. When Polybar sees %icon%, it swaps it with the corresponding icon defined above, that is, icon-N. The remaining options are visual tweaks for each state, like background color, foreground color, etc. If you are using Nerd Font glyphs here, they take on the configured foreground color. The same is done as needed for the other states like empty, urgent, etc. It is up to your creativity to assign values to these states and make the bar visually pleasing. (Video: Switch Workspaces in Polybar) What is the time now? A panel without a date is useless! Let's add a date block to Polybar. The type we use for a date module is: type = internal/date We need to format it so that it looks better.
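Before wiring a format into the config, it helps to preview it in a terminal. Polybar's date module uses the same strftime specifiers as the date command (GNU coreutils, standard on most distros), so you can experiment there first:

```shell
# Preview the same strftime specifiers used by Polybar's date module.
# Assumes GNU date from coreutils.
date +"%I:%M %p"               # 12-hour time, e.g. 12:30 PM
date +"%d-%m-%Y"               # day-month-year, e.g. 25-07-2025
date +"%Y-%m-%d %I:%M:%S %p"   # a more detailed format
```

Once the output looks right, copy the specifier string into the time or date key of the module.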
So, take a look at the sample code below: [module/date] type = internal/date interval = 1.0 time = %I:%M %p date = %d-%m-%Y date-alt = "%{F#FF9535}%Y-%m-%d %I:%M:%S %p%{F-}" label = %date% %time% label-font = 5 label-foreground = ${colors.aurora-yellow} format = <label> format-prefix-font = 2 First is the refresh rate. We set the clock to refresh every second with interval = 1.0; the value is in seconds. Next, define what to show with the time key. It uses the strftime format, and you can read the full format specification in the strftime man page. For now, we are using the format %I:%M %p, which will show the time as 12:30 PM. Let's go a bit further to show that there is more to the date module. Use the date key to set the date format. I am using the format %d-%m-%Y, which will output 25-07-2025. The date-alt key can be used to show another date format when you click on the date module in the bar. 💡You can remember it like this: if a key has alt in its name, it defines an action that becomes available upon clicking that module. The syntax %{F#RRGGBB} in Polybar is used to set the foreground color dynamically within the module's label or format string. This is similar to a <span> tag in HTML. It tells Polybar "from here on, use this foreground (text) color," and once %{F-} is spotted, the color resets to what it was before. So, according to the code, when we click on the date module, it will show the detailed date format %Y-%m-%d %I:%M:%S %p, which in practice looks like 2025-07-25 12:30:25 PM. (Video: Showing date in Polybar with an alternate format) The label = %date% %time% line makes sure the bar shows the date and time properly. The format = <label> key displays that label; most of the time, it is in the format key that you add icons/glyphs to appear on the bar. How do I change the volume? The most common way to change the volume on most systems is to scroll on the volume icon in the panel.
This is possible with Polybar as well. Let's see the code for the module: [module/pulseaudio] type = internal/pulseaudio format-volume-prefix-foreground = ${colors.primary} format-volume = <label-volume> <ramp-volume> label-volume = %percentage%% use-ui-max = false click-right = pavucontrol label-muted = " Mute" label-muted-foreground = ${colors.disabled} format-muted = <label-muted> format-muted-prefix = format-muted-prefix-font = 2 format-muted-padding = 1 ; Ramp settings using <ramp-volume> used for Pulseaudio ramp-volume-0 = ramp-volume-1 = ▁ ramp-volume-2 = ▂ ramp-volume-3 = ▃ ramp-volume-4 = ▄ ramp-volume-5 = ▅ ramp-volume-6 = ▆ ramp-volume-7 = ▇ ramp-volume-8 = █ ramp-volume-font = 2 As you expected, type = internal/pulseaudio is the module type. The next entry to look at is format-volume. Here, we see a new item called <ramp-volume>, and if you look further down the code, you can see I have defined 9 levels (0 to 8) of ramp. This ramp-<item> mechanism is available in some other modules as well, so understanding it here lets you use it wherever required. For example, the cpu module gives ramp-coreload, the memory module gives ramp-used and ramp-free, etc. It shows a visual indicator (like volume bars or icons) depending on the number of ramp levels. In the example above, the 0-100% volume range is divided into 9 equal ranges, and as the volume increases, the appropriate bar is shown. (Video: Change the volume with ramps) Other useful options are the mouse-click items. Generally, you have three of them available: click-left, click-middle, and click-right. They are not limited to pulseaudio; you can use them in some other modules as well. For that, refer to the wiki page. Tray Many apps need an active tray to work properly. Discord, Spotify, Ksnip, and Flameshot all provide a close-to-tray option. In Polybar, you will be using the tray module for this purpose.
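Coming back to the ramps for a moment, the bucketing described above can be sketched as a tiny shell function. This is only an illustration of the idea (equal buckets across 0-100%), not Polybar's actual source code:

```shell
# Illustration of ramp bucketing: with N ramp levels, the 0-100% range
# is split into N equal buckets and ramp-volume-<index> is displayed.
# This mapping is an assumption for teaching purposes, not Polybar code.
ramp_index() {
    volume=$1   # current volume percentage (0-100)
    levels=$2   # number of ramp-volume-N entries, 9 in the config above
    if [ "$volume" -ge 100 ]; then
        echo $((levels - 1))
    else
        echo $((volume * levels / 100))
    fi
}

ramp_index 0 9     # bucket 0 (ramp-volume-0)
ramp_index 50 9    # bucket 4 (ramp-volume-4)
ramp_index 100 9   # bucket 8 (ramp-volume-8)
```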
[module/tray] type = internal/tray format-margin = 8px tray-spacing = 8px It has several options you can try, listed in the official wiki. There is no point in repeating them here, since a bare module serves most purposes. 🚧On Linux systems, only one panel can own the system tray, so you only need to add the tray module to one bar. Similarly, on desktops like Xfce that ship a panel with a tray by default, Polybar's tray module will not work properly. Scripts and Custom Modules Explaining Bash or Python scripting is beyond the scope of this article, but we will look at custom modules in Polybar, which you can use to take its functionality to the next level. With Polybar, you can create shell scripts and then use them in modules. For example, take a look at the code below, which defines a custom module that shows whether any package updates are available on Arch Linux: [module/pacupdate] type = custom/script exec = /home/$USER/.config/polybar/pacupdates.sh interval = 1000 label = %output% format-font = 3 click-left = notify-send "Updates:" "$(checkupdates)" As you can see, I got the type custom/script from the wiki page for scripts. Check the exec field; it points to what the module executes. This can either be a simple command or the path to a script. Here, it points to a script called pacupdates.sh located in my ~/.config/polybar/ directory. The contents of the script are available in our GitHub repo. What it does is check and report whether any package updates are available. (Video: A custom script that prints the available updates when clicked) This is not an in-built module in Polybar. We have created it.
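For reference, here is a minimal hypothetical sketch of what such a script might look like. The real script is in our GitHub repo; the function name and exact output text below are my own, and checkupdates comes from the pacman-contrib package on Arch Linux:

```shell
#!/bin/sh
# Hypothetical sketch of a pacupdates.sh-style script (the article's
# actual script lives in its GitHub repo). checkupdates is provided by
# the pacman-contrib package on Arch Linux.

format_updates() {
    # Turn a pending-update count into the text shown on the bar.
    count=$1
    if [ "$count" -gt 0 ]; then
        printf '%s update(s)\n' "$count"
    else
        printf 'Up to date\n'
    fi
}

# Only query pacman mirrors if checkupdates is actually installed.
if command -v checkupdates >/dev/null 2>&1; then
    format_updates "$(checkupdates 2>/dev/null | wc -l)"
else
    format_updates 0
fi
```

Whatever the script prints to stdout is what %output% displays in the bar.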
With that, let's see the general syntax for custom modules: [module/MODULE_NAME] type = custom/script exec = COMMAND_OR_SCRIPT_PATH interval = SECONDS label = %output% format = <label> format-prefix = "ICON_OR_TEXT " format-prefix-font = FONT_INDEX click-left = COMMAND_ON_LEFT_CLICK click-right = COMMAND_ON_RIGHT_CLICK click-middle = COMMAND_ON_MIDDLE_CLICK The %output% value in label (if you remember, you have seen %icon% earlier) refers to the output of the exec field. The other values have been covered in various sections above. Before we finish, take a look at one more custom module example, which opens rofi when clicked: [module/mymenu] type = custom/text format = <label> format-padding = 2 label = "%{F#1A1B26} Menu%{F-}" click-left = /home/sreenathv/.config/polybar/rofi.sh format-background = ${colors.aurora-blue} Menu Button Do not forget to add these modules to the bar (in modules-left, modules-center, or modules-right) after defining them; otherwise, they won't appear. Wrapping Up Apart from the modules we discussed, there are many other modules that you can use. We have provided a ready-to-use Polybar config with several scripts on our GitHub page. Take a look at the code in those files to get a better grasp of the Polybar config. I hope you liked this detailed guide to Polybar customization. If you have any questions or suggestions, please leave a comment and I'll be happy to answer them.
-
Strengthening Linux Defenses with an Online Cybersecurity Degree
By: Janus Atienza Sat, 20 Sep 2025 19:41:56 +0000 Embracing Next-Level Linux Security Challenges Linux runs everything from bleeding-edge research clusters to billion-dollar e-commerce backbones, which makes it a fat target for anyone with skill and bad intentions. The platform’s openness is its strength, but that same transparency gives attackers a clear view of the terrain. In recent years, cryptojacking campaigns have burrowed into unpatched kernels, and supply chain compromises have slipped into package repositories. Rootkits now arrive disguised as flawless kernel modules. If you manage Linux environments, complacency is your most dangerous vulnerability. Expect practical insights here, not hollow pep talks. You will leave with real, applicable strategies. Why an Online Cybersecurity Degree in Florida Aligns with Linux Security Goals If you’re serious about hardening Linux systems, you need more than scattered study sessions and scattered blog posts. Online programs give you the breathing room to keep working while absorbing structured, high-caliber material. They draw students from wildly diverse backgrounds, which means your discussion forums and group projects mirror the heterogeneity of the global threatscape. Accredited programs with faculty who have dissected live intrusions on Linux servers bring the fight closer to reality. Choosing an online cyber security degree florida means tapping into a curriculum that delivers theory, then forces you to wrestle with it in OS-specific labs. Florida’s programs tend to keep a balanced diet of deep technical dives and strategic risk analysis, letting you master packet-level configurations without drifting into abstract coursework with no connection to practice. Core Linux Security Topics Covered by Florida’s Cybersecurity Programs Florida’s stronger programs refuse to skim. You will tackle Linux kernel hardening, locking down the attack surface through parameters and module control. 
SELinux and AppArmor policies aren’t just read about; you’ll tune them to protect production processes without breaking critical ops. Secure shell configuration goes beyond PermitRootLogin no to controlling ciphers, key lengths, and brute-force detection. These topics aren’t busywork. When you learn container isolation, you’ll think about cgroups and namespaces, not just Docker defaults. Package management security means signing, verifying, and understanding when a repository has been poisoned, not just apt-get update. Hands-On Virtual Labs: Turning Theory into Linux Expertise You’ll set up hardened VMs as playgrounds and battlegrounds. Attack simulation injects actual strain on configurations, followed by countermeasures like policy adjustments or packet filtering. In a single lab cycle, you might spin up a fresh Debian image, run OpenVAS scans, capture suspect traffic via Wireshark, then tighten firewall rules with Netfilter. Labs are not optional frills. They engrain muscle memory and command-line reflexes, which is exactly what you want when your real systems are bleeding packets at 3 a.m. From Classroom to Command Line: Applying Skills in Real Linux Environments Graduates don’t leave theory in the LMS archive. They bring it to bear during system audits, patch rollouts, and log forensics. A graduate uses cron-based audits to catch misconfigurations before a breach window opens. Log parsers flag anomalies tied to newly imported packages. Patches are timed strategically to minimize exposure without derailing uptime SLAs. Keep a short checklist: continuously monitor kernel versions, verify integrity of critical binaries, audit sudoers, and rotate keys before they become stale liabilities. Career Paths Fueled by a Florida Cybersecurity Degree and Linux Mastery Your Linux expertise is currency. Cash it in as a Linux security engineer, cloud security specialist, or DevSecOps practitioner trusted to secure CI/CD pipelines. 
Entry-level expectations often include comfort with scripting, understanding of network stack behavior, and practical exposure to security frameworks. The degree closes gaps in both strategy and execution. Add weight to your resume by earning certifications like LPIC-3 or RHCE Security once your course load eases. Charting Your Next Steps in Linux-Focused Cybersecurity Start by targeting programs whose syllabi include the Linux security modules outlined above. Scrutinize course descriptions and faculty bios. Build a personal study plan that integrates formal assignments with your own system-hardening experiments. Careers here reward those who never let their skillset fossilize. Combine the credibility of a Florida-based online degree with relentless pursuit of Linux mastery, and you’ll stay ahead of whatever brute-force, zero-day, or poisoned package comes at you next. The post Strengthening Linux Defenses with an Online Cybersecurity Degree appeared first on Unixmen.
-
On inclusive personas and inclusive user research
by: Geoff Graham Fri, 19 Sep 2025 13:58:37 +0000 I'm inclined to take a few notes on Eric Bailey's grand post about the use of inclusive personas in user research. As someone who has been in roles that both used and created user personas, there's so much in here. What's the big deal, right? We're often taught and encouraged to think about users early in the design process. It's user-centric design, so let's personify 3-4 of the people we think represent our target audiences so our work is aligned with their objectives and needs. My master's program was big on that and went deep into different approaches, strategies, and templates for documenting that research. And, yes, it is research. The idea, in theory, is that by understanding the motivations and needs of specific users (gosh, isn't "users" an awkward term?), we can "design backwards" so that the end goal is aligned to actions that get them there. Eric sees holes in that process, particularly when it comes to research centered around inclusiveness. Why is that? Very good reasons that I'm compiling here so I can reference them later. There's a lot to take in, so you'd do yourself a solid by reading Eric's post in full. Your takeaways may be different from mine. Traditional vs. Inclusive user research First off, I love how Eric distinguishes what we typically refer to as general user personas, like the ones I made to generalize an audience, from inclusive user personas that are based on individual experiences. So, right off the bat, we have to reframe what we're talking about. There are blanket personas that are placeholders for abstracting what we think we know about specific groups of people, versus individual people who represent specific experiences that impact usability and access to content. Assistive technology is not exclusive to disabilities It's so easy to assume that using assistive tools automatically means accommodating a disability or impairment, but that's not always the case.
Choice points from Eric: First is that assistive technology is a means, and not an end. Some disabled people use more than one form of assistive technology, both concurrently and switching them in and out as needed. Some disabled people don't use assistive technology at all. Not everyone who uses assistive technology has also mastered it. Disproportionate attention is placed on one kind of assistive technology at the expense of others. It's entirely possible to have a solution that is technically compliant, yet unintuitive or near-impossible to use in actual practice. I like to keep in mind that assistive technologies are for everyone. I often think about examples in the physical world where everyone benefits from an accessibility enhancement, such as curb cuts in sidewalks (great for skateboarders!), elevators (you don't have to climb stairs in some cases), and TV subtitles (I often have to keep the volume low for sleeping kids). That's the inclusive part of this. Everyone benefits rather than a specific subset of people. Different personas, different priorities What happens when inclusive research is documented separately from general user research? In practice, that means: Thinking of a slick new feature that will impress your users? Great! Let's make sure it doesn't step on the toes of other experiences in the process, because that's antithetical to inclusiveness. I recognize this temptation in my own work, particularly if I land on a novel UI pattern that excites me. The excitement and tickle I get from a "clever" idea gives me a blind spot when evaluating its overall effectiveness. Radical participatory design Gosh dang, why didn't my schoolwork ever cover this! I had to spend a little time reading the Cambridge University Press article explaining radical participatory design (RPD) that Eric linked up. Ah, a method for methodology!
We're talking about not only including community members in the internal design process, but making them equal stakeholders as well. They get the power to make decisions, something the article's author describes as a form of decolonization. Or, as Eric nicely describes it: Bonus points for surfacing the model minority theory: It introduces exclusiveness in the quest to pursue inclusiveness, a stereotype within a stereotype. Thinking bigger Eric caps things off with a great compilation of actionable takeaways for avoiding the pitfalls of inclusive user personas: Letting go of control leads to better outcomes. Member checking: letting participants review, comment on, and correct the content you've created based on their input. Take time to scrutinize the functions of our roles and how our organizations compel us to undertake them in order to be successful within them. Organizations can turn inwards and consider the artifacts their existing design and research processes produce. They can then identify opportunities for participants to provide additional clarity and corrections along the way. On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
LHB Linux Digest #25.27: zswap vs zram, subfinder, kubectl logs, systemd-inhibit and More Linux Stuff
by: Abhishek Prakash Fri, 19 Sep 2025 17:05:42 +0530 Before you see all the new tips and tutorials, allow me to share a few future updates. So, we are working on two new microcourses: Git for DevOps and Advanced Automation With Systemd. I know that we already have a systemd course in place, but this one specifically focuses on automation and can be considered an advanced topic. Other than that, we are working on Docker video tutorials. Stay tuned for the awesome Linux learning with Linux Handbook. This post is for subscribers only Subscribe now Already have an account? Sign in
-
Finding Subdomains with Subfinder in Linux
by: Hangga Aji Sayekti Fri, 19 Sep 2025 15:56:59 +0530 When you start exploring a target website, the first question to ask is simple: what names exist out there? Before you think about vulnerabilities or exploits, you want a map of subdomains. That map can reveal forgotten login pages, staging servers, or even entire apps that weren't meant to be public. My preferred tool for this first step is subfinder. It's simple, quiet, and quick. In this guide, we'll walk through installing subfinder on Kali Linux, running the most useful commands, saving results, and experimenting with extra flags. We'll practice together on vulnweb.com, a safe site for learning. What subfinder actually does Subfinder is a passive subdomain discovery tool. Instead of hammering DNS servers or brute-forcing names, it asks public sources: certificate transparency logs, DNS databases, GitHub, and more. That's why it's fast and low-noise. The catch: subfinder gives us names only. Some names may be stale and some may point to nothing. And that's fine. At this stage, all we need is a clean list of possible subdomains. Later, we can resolve and probe them. Installing subfinder on Kali Linux You have two easy choices: Install via apt (fastest): sudo apt update sudo apt install subfinder Install the latest version via Go: go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest echo 'export PATH=$PATH:$HOME/go/bin' >> ~/.bashrc && source ~/.bashrc If you are just starting out, the apt version is fine. If something feels buggy, you can switch to the Go install for the newest release. Understanding the basic flags: -d chooses the domain; -silent prints one subdomain per line, no banners; -o saves to a text file; -oJ saves JSONL (structured data, one object per line); -ls shows which sources subfinder can query; -s selects only certain sources; -all uses every available source. Using subfinder Subfinder has more options that become useful once you're comfortable with the basics.
Let's walk through them with examples: Enumerate subdomains (basic): subfinder -d vulnweb.com Here's what will happen next: Awesome, all the subdomains are now visible. Enumerate multiple domains from a file: subfinder -dL domains.txt This runs against every domain listed in domains.txt. Use all sources: subfinder -d vulnweb.com -all This queries every source subfinder knows about. Slower, but more complete. Exclude specific sources: subfinder -d vulnweb.com -es alienvault,zoomeyeapi This skips certain sources if you don't want to use them. Set concurrency (threads): subfinder -d vulnweb.com -o results.txt -t 50 This runs with up to 50 concurrent tasks. Limit request rate: subfinder -d vulnweb.com -rl 50 This keeps requests under 50 per second. Output to a plain file: subfinder -d vulnweb.com -o results.txt JSON output: subfinder -d vulnweb.com -oJ vulnweb.jsonl CSV output (via a quick conversion): subfinder -d vulnweb.com -oJ subfinder.jsonl and then jq -r '.name' subfinder.jsonl | awk 'BEGIN{print "subdomain"}{print $0}' > subfinder.csv The output is saved to subfinder.csv; open it to view the data. Unique results only: subfinder -d vulnweb.com -silent -o results.txt and then sort -u results.txt > results_unique.txt Recursive enumeration (find deeper subdomains): subfinder -d vulnweb.com -recursive Provider configuration (optional but powerful) You can add API keys for premium services like SecurityTrails, Shodan, etc., which will give you richer results. Add the keys to ~/.config/subfinder/provider-config.yaml: securitytrails: key: "YOUR_SECURITYTRAILS_API_KEY" virustotal: key: "YOUR_VIRUSTOTAL_API_KEY" shodan: key: "YOUR_SHODAN_API_KEY" You can use subfinder without keys, but adding them usually gives us more coverage.
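If jq is not installed, the same field can be pulled out of the JSONL with grep and sed alone. This is a rough sketch, assuming the simple one-JSON-object-per-line output and the "name" field used in the jq example above:

```shell
# jq-free sketch of the JSONL-to-CSV conversion shown earlier.
# Assumes one JSON object per line with a "name" field, as in the
# jq example above.
jsonl_names_to_csv() {
    # Reads JSONL on stdin, writes a deduplicated one-column CSV.
    printf 'subdomain\n'
    grep -o '"name": *"[^"]*"' | sed 's/^"name": *"//; s/"$//' | sort -u
}

# Usage: jsonl_names_to_csv < subfinder.jsonl > subfinder.csv
```

It is cruder than jq (it does not parse JSON properly), but it works for quick checks on simple output.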
Simple practice exercise with vulnweb.com Let's get into practice mode and run a few simple commands to explore subfinder: Collect subdomains: subfinder -d vulnweb.com -silent -o vulnweb_raw.txt Clean up duplicates: sort -u vulnweb_raw.txt > vulnweb_clean.txt This command creates vulnweb_clean.txt with unique entries. Done! The cleaned list is in vulnweb_clean.txt, so go check it out. Next, save JSON output too (for reports): subfinder -d vulnweb.com -oJ vulnweb.jsonl Take a look at the first 10 results: head -n 10 vulnweb_clean.txt Now you have a tidy list of candidate subdomains, ready for the next step of vulnerability assessment. Final notes Subfinder is the quiet scout in the toolkit. It doesn't overwhelm us with noise; it just hands us the names that exist out in the wild. With a few simple commands, we can build a reliable subdomain list for any domain we're testing. For now, practice on vulnweb.com until you're comfortable. Later, move on to checking which of those names are live and what services they're running. But that's another story for another day.
-
Hyprland Made Easy: Preconfigured Beautiful Distros
by: Sourav Rudra Fri, 19 Sep 2025 06:43:45 GMT Hyprland is a dynamic tiling Wayland compositor that has been gaining traction in the Linux community due to its modern aesthetics, smooth animations, and extensive configurability. Unlike traditional X11 window managers, Hyprland leverages Wayland's capabilities to provide a more fluid and visually appealing desktop experience. Its growing popularity is evident in discussions across forums and communities, where people have been praising its performance and customization options. But if you look at our Hyprland tutorial series, you'll realize that setting up Hyprland can be a huge challenge. And that's why I am listing a few options that lower the entry barrier by providing a preconfigured Hyprland setup. Let's see them. 1. Garuda Linux Garuda Linux offers a dedicated Hyprland edition, preconfigured with themes, wallpapers, and essential applications. It is designed for users who want a visually appealing and ready-to-use desktop without manually configuring Hyprland. The distribution includes performance-oriented enhancements such as the Zen kernel, Btrfs snapshots, and optimized compositor settings. Users can enjoy a responsive system with minimal tweaking needed post-install. Garuda's tools, like "Rani," simplify maintenance and system management. This ensures even users new to Linux can manage updates, drivers, and desktop settings efficiently. ⭐ Key Features Preinstalled tools for system management. Ready-to-use desktop layout with Hyprland. Rolling release updates via Arch Linux repos. Garuda Linux 2. ArchRiot ArchRiot is a community-driven, Arch-based distribution that comes with Hyprland preinstalled. It includes essential applications and cool themes for a ready-to-use desktop experience. The distribution provides a Go-based installer that automates setup and includes rollback support, reducing setup errors.
Plus, the distro follows a rolling release model, allowing users to stay up to date with the latest packages and Hyprland features. Initially started as a fork of Omarchy (discussed later), it has evolved into a distinct project with custom developed tools. ⭐ Key Features Go-based installer with rollback support.Dependable community support for new users.Preconfigured Hyprland with curated apps and themes.ArchRiot3. CachyOSCachyOS is an Arch-based distribution focused on speed and ease of use. It offers a Hyprland option during installation, letting users start with a functional, preconfigured desktop. It includes a simple installer, and the post-install tools are helpful to manage packages, settings, and desktop customization without extra complexity. This is a suitable option for both beginners and experienced users who want a fast Arch-based system with Hyprland ready to go. ⭐ Key Features GUI and CLI installation options.Tools for hardware detection and system customization.Optimized kernel with BORE scheduler for better performance.CachyOSCachyOS: Arch-based Distro for Speed and Ease of UseA performance-focused Arch-based distro for newbies and experts.It's FOSS NewsAnkush Das4. Omarchy (A Script for Arch Linux)Omarchy is a script for Arch Linux that automates the installation and configuration of Hyprland. It sets up themes, layouts, keybinds, and essential applications. The script reduces manual setup effort, allowing users to get a functional desktop with a single command. It supports optional packages for productivity and multimedia, letting users tailor the environment to their needs. Omarchy is ideal for users who want the flexibility of Arch Linux without configuring every component manually. 
⭐ Key Features Many preinstalled themes.Automated Hyprland setup in one command.Optional productivity and multimedia integrations.OmarchyThis One Command Turned My Arch Install Into a Beautiful Hyprland SetupThis script turned my boring Arch install into something special.It's FOSS NewsSourav Rudra5. KooL's Arch - Hyprland (Another Script for Arch Linux)KooL's Arch - Hyprland is an automated installation script that sets up a complete Hyprland desktop environment on minimal Arch Linux systems. The script installs Hyprland along with a curated collection of themes, applications, and preconfigured dotfiles from a centralized repository, creating a polished and functional desktop experience out of the box. While the setup is relatively opinionated and comes with various configurations, users still need to be comfortable with terminal usage and basic configuration file editing for system maintenance and minor adjustments. ⭐ Key Features One-script setup with complete Hyprland environment installation.Curated preconfigured dotfiles from an actively maintained repository.Flexible display manager options, including GDM and SDDM support.KooL's Arch - HyprlandConclusionIf you want a distribution that boots directly into Hyprland with minimal setup, Garuda Linux (Hyprland edition), CachyOS, and ArchRiot are the best candidates. They provide preconfigured desktops, themes, and essential tools without requiring you to fiddle with anything. For Arch enthusiasts who want to stay close to vanilla, Omarchy with Arch Linux or Arch Linux combined with JaKooLit’s script (number 5) are strong alternatives. These do not qualify as full "Hyprland distros," but they automate the setup process and deliver a comparable experience. And don't forget that there are several enthusiasts who have specific Hyprland setups that can be achieved with their dot files. GitHub - msmafra/dotfiles: My Hyprland environment (dotfiles)My Hyprland environment (dotfiles). 
Contribute to msmafra/dotfiles development by creating an account on GitHub. (GitHub, msmafra)

Suggested Read 📖

Getting Started With Hyprland
Let’s get on the “hyp” wagon with Hyprland. (It's FOSS, Abhishek Prakash)
-
FOSS Weekly #25.38: GNOME 49 Release, KDE Drama, sudo vs sudo-rs, Local AI on Android and More Linux Stuff
by: Abhishek Prakash Thu, 18 Sep 2025 04:31:27 GMT

We hit a major milestone on our Mastodon account. We crossed the 40,000 mark. It's a pleasant surprise. We have a lot more people on Twitter, Facebook, Instagram and even YouTube. But seeing this number on a non-mainstream platform like Mastodon gives a positive uplift 🕺

💬 Let's see what you get in this edition:

Ubuntu making a major change.
A long-time KDE contributor leaving.
The Apache Software Foundation's rebranding.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by TigerData. TigerData, the creators of TimescaleDB, are on a mission to make Postgres the fastest database for modern workloads. See how Postgres can scale to 2 PB and 1.5 trillion metrics per day—all without proprietary black boxes or hidden tools. With Tiger Postgres, you get massive scale without sacrificing the SQL you already know and love.

TigerData Postgres Scaling

📰 Linux and Open Source News

Jonathan Riddell has left KDE after 25 years.
The Apache Software Foundation has turned a new leaf.
SUSE's Agama installer recently received a major update.
CUDA will be directly offered via Ubuntu's repositories soon.
Ubuntu has made a move to Dracut, replacing initramfs-tools.
The OpenSearch Foundation has a new Executive Director leading it.
GNOME 49 is released. Ubuntu 25.10 and Fedora 43 will ship it, and rolling distros like Arch should have it in a week or so, hopefully.

GNOME 49 Launches With New Apps, Nautilus Redesign, and GNOME Shell Upgrades
Many fresh applications and a refined user interface mark this release. (It's FOSS News, Sourav Rudra)

🧠 What We’re Thinking About

The Rustification of Ubuntu has some performance hurdles to tackle.
Rust Coreutils Are Performing Worse Than GNU Coreutils in Ubuntu
Ubuntu’s Rust move shows promise, but questions remain on performance. (It's FOSS News, Sourav Rudra)

🧮 Linux Tips, Tutorials, and Learnings

Compose Key in GNOME makes typing € ♥ © and more super easy.
Here's how the classic sudo compares to the Rust-based sudo-rs.
Avoid these 10 mistakes if you are a new Linux user.

Top 10 Mistakes New Linux Users Make
Every Linux user makes these rookie mistakes. Get to know them before you do, or have you already got into trouble? (It's FOSS, Ankush Das)

👷 AI, Homelab and Hardware Corner

Turn your Pi into a powerhouse with the Pironman 5 Max. Running local LLMs on your phone isn't science fiction! You can try running a local AI on your Android smartphone. Don't expect a superb experience, but it can help in some cases. And I tried my hands on a Raspberry Pi Pico 2 kit. It's a well-thought-out device primarily aiming to help children get into STEM.

Review: Elecrow’s All-in-one Starter Kit for Pico 2
For anyone looking to introduce themselves or their children to the exciting world of electronics and programming, this starter kit offers a good entry point into these essential modern skills. (It's FOSS, Abhishek Prakash)

✨ Project Highlight

Readest is a solid eBook reader choice that runs great on Linux (but is not limited to it).

This Could Be My New Favorite eBook Reader App on Linux
Readest offers a modern cross-platform eBook reading experience on Linux. (It's FOSS News, Sourav Rudra)

📽️ Videos I Am Creating for You

Learn about using and managing AppImages in Linux in our latest YouTube video. Subscribe to It's FOSS YouTube Channel.

🧩 Quiz Time

What's in a Container? A lot, if you can solve it.

Crossword: What’s in the Container?
Containers are fun… until they’re in a crossword. 🧩 Test your Docker IQ and see if you can solve this without running docker --help in panic mode. (It's FOSS, Abhishek Prakash)

Desktop Linux is mostly neglected by the industry but loved by the community.
For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.

Join It's FOSS Plus

💡 Quick Handy Tip

You can easily save sessions in KDE Plasma. First, go into KDE Settings -> Session -> Desktop Session. Here, under the "Session Restore" section, toggle on the "When session was manually saved" button. This will add a new "Save Session" button to your Power Menu, as shown in the screenshot above (on the right). Click on it to make Plasma remember the apps that are open and restore them on the next login. To customize the behavior further, open the apps you need at login and click the button again to change the apps.

🤣 Meme of the Week

You never know when you might need them!

🗓️ Tech Trivia

The Association for Computing Machinery was founded on September 15, 1947. Today it has over 100,000 members worldwide and organizes conferences and workshops to advance computing knowledge and technology.

🧑🤝🧑 FOSSverse Corner

One of our readers has sent over a reimagination of what Tux, the mascot of Linux, can be.

Tux Redesign... Unofficial
Hey FOSSers, A reader, Michael Kolesidis, sent me an email and shared a redesigned, modern, and simplified version of our beloved Tux mascot that he designed and released under the Creative Commons Attribution-ShareAlike 4.0 International license. I am sharing them with you here: There is also an I <3 Tux styled version: You can find the new designs on Wikimedia.
🔗 Redesigned Tux: https://commons.wikimedia.org/wiki/File:Tux_Redesign.svg
❤ “I Love Linux” derivative: https://… (It's FOSS Community, abhishek)

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
Share the articles in Linux subreddits and community forums.
Follow us on Google News and stay updated in your News feed.
Opt for It's FOSS Plus membership and support us 🙏

Enjoy FOSS 😄
-
Is it Time to Un-Sass?
by: Jeff Bridgforth Wed, 17 Sep 2025 14:02:25 +0000

Several weeks ago, I participated in Front End Study Hall. Front End Study Hall is an HTML- and CSS-focused meeting held on Zoom every two weeks. It is an opportunity to learn from one another as we share our common interest in these two building blocks of the Web. Some weeks there is more focused discussion, while other weeks are more open-ended and members will ask questions or bring up topics of interest. Joe, the moderator of the group, usually starts the discussion with something he has been thinking about. In this particular meeting, he asked us about Sass. He asked us if we used it, if we liked it, and then to share our experience with it. I had planned to answer the question, but the conversation drifted into another topic before I had the chance. I saw it as an opportunity to write and share some of the things that I have been thinking about recently.

Beginnings

I started using Sass in March 2012. I had been hearing about it through different things I read. I believe I heard Chris Coyier talk about it on his then-new podcast, ShopTalk Show. I had been interested in redesigning my personal website and I thought it would be a great chance to learn Sass. I bought an e-book version of Pragmatic Guide to Sass and then put what I was learning into practice as I built a new version of my website. The book suggested using Compass to process my Sass into CSS. I chose to use the SCSS syntax instead of the indented syntax because SCSS was similar to plain CSS. I thought it was important to stay close to the CSS syntax because I might not always have the chance to use Sass, and I wanted to continue to build my CSS skills. It was very easy to get up and running with Sass. I used a GUI tool called Scout to run Compass. After some frustration trying to update Ruby on my computer, Scout gave me an environment to get up and going quickly. I didn’t even have to use the command line.
I just pressed “Play” to tell my computer to watch my files. Later I learned how to use Compass through the command line. I liked the simplicity of that tool and wish that at least one of today’s build tools incorporated that same simplicity. I enjoyed using Sass out of the gate. I liked that I was able to create reusable variables in my code. I could set up colors and typography and have consistency across my code. I had not planned on using nesting much, but after I tried it, I was hooked. I really liked that I could write less code and manage all the relationships with nesting. It was great to be able to nest a media query inside a selector and not have to hunt for it in another place in my code.

Fast-forward a bit…

After my successful first experience using Sass in a personal project, I decided to start using it in my professional work. And I encouraged my teammates to embrace it. One of the things I liked most about Sass was that you could use as little or as much as you liked. I was still writing CSS but now had the superpower that the different helper functions in Sass enabled. I did not get as deep into Sass as I could have. I used the Sass @extend rule more in the beginning. There are a lot of features that I did not take advantage of, like placeholder selectors and for loops. I have never been one to rely much on shortcuts. I use very few of the shortcuts on my Mac. I have dabbled in things like Emmet but tend to quickly abandon them because I am just used to writing things out and have not developed the muscle memory of using shortcuts.

Is it time to un-Sass?

By my count, I have been using Sass for over 13 years. I chose Sass over Less.js because I thought it was a better direction to go at the time. And my bet paid off. That is one of the difficult things about working in the technical space. There are a lot of good tools, but some end up rising to the top and others fall away.
I have been pretty fortunate that most of the decisions I have made have gone the way that they have. All the agencies I have worked for have used Sass. At the beginning of this year, I finally jumped into building a prototype for a personal project that I have been thinking about for years: my own memory keeper. One of the few things that I liked about Facebook was the Memories feature. I enjoyed visiting that page each day to remember what I had been doing on that specific day in years past. But I felt at times that Facebook was not giving me all of my memories. And my life doesn’t just happen on Facebook. I also wanted a way to view memories from other days besides just the current date. As I started building my prototype, I wanted to keep it simple. I didn’t want to have to set up any build tools. I decided to write CSS without Sass. Okay, so that was my intention. But I soon realized that I was using nesting. I had been working on it a couple of days before I realized it. But my code was working. That is when I realized that native nesting in CSS works much the same way as nesting in Sass. I had followed the discussion about implementing nesting in native CSS. At one point, the syntax was going to be very different. To be honest, I lost track of where things had landed because I was continuing to use Sass. Native CSS nesting was not a big concern to me right then. I was amazed when I realized that nesting works just the same way. And it was in that moment that I began to wonder: Is this finally the time to un-Sass? I want to give credit where credit is due. I’m borrowing the term “un-Sass” from Stu Robson, who is actually in the middle of writing a series called “Un-Sass’ing my CSS” as I started thinking about writing this post. I love the term “un-Sass” because it is easy to remember and spot on in describing what I have been thinking about.
Here is what I am taking into consideration:

Custom Properties

I knew that a lot of what I liked about Sass had started to make its way into native CSS. Custom properties were one of the first things. Custom properties are more powerful than Sass variables because you can assign a new value to a custom property in a media query or in a theming system, like light and dark modes. That’s something Sass is unable to do since variables become static once they are compiled into vanilla CSS. You can also assign and update custom properties with JavaScript. Custom properties also work with inheritance and have a broader scope than Sass variables. So, yeah. I found that not only was I already fairly familiar with the concept of variables, thanks to Sass, but the native CSS version was much more powerful. I first used CSS Custom Properties when building two different themes (light and dark) for a client project. I also used them several times with JavaScript and liked how it gave me new possibilities for using CSS and JavaScript together. In my new job, we use custom properties extensively and I have completely switched over to using them in any new code that I write. I made use of custom properties extensively when I redesigned my personal site last year. I took advantage of them to create a light and dark theme, and I utilized them with Utopia for typography and spacing utilities.

Nesting

When Sass introduced nesting, it simplified the writing of CSS code because you write style rules within another style rule (usually a parent). This means that you no longer had to write out the full descendant selector as a separate rule. You could also nest media queries, feature queries, and container queries. This ability to group code together made it easier to see the relationships between parent and child selectors.
It was also useful to have the media queries, container queries, or feature queries grouped inside those selectors rather than grouping all the media query rules together further down in the stylesheet. I already mentioned that I stumbled across native CSS nesting when writing code for my memory keeper prototype. I was very excited that the specification extended what I already knew about nesting from Sass. Two years ago, the nesting specification was going to require you to start the nested query with the & symbol, which was different from how it worked in Sass.

/* 2023 */
.footer { a { color: blue } }

.footer {
  & a { color: blue } /* This was valid then */
}

But that changed sometime in the last two years, and you no longer need the ampersand (&) symbol to write a nested query. You can write it just as you had been writing it in Sass. I am very happy about this change because it means native CSS nesting is just like I have been writing it in Sass.

/* 2025 */
.footer {
  a { color: blue } /* Today's valid syntax */
}

There are some differences in the native implementation of nesting versus Sass. One difference is that you cannot create concatenated selectors with CSS. If you love BEM, then you probably made use of this feature in Sass. But it does not work in native CSS.

.card {
  &__title {}
  &__body {}
  &__footer {}
}

It does not work because the & symbol is a live object in native CSS and it is always treated as a separate selector. Don’t worry, if you don’t understand that, neither do I. The important thing is to understand the implication: you cannot concatenate selectors in native CSS nesting. If you are interested in reading a bit more about this, I would suggest Kevin Powell’s “Native CSS Nesting vs. Sass Nesting” from 2023. Just know that the information about having to use the & symbol before an element selector in native CSS nesting is out of date. I never took advantage of concatenated selectors in my Sass code so this will not have an impact on my work.
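For completeness, here is what that BEM pattern has to look like in native CSS, written out with full class names instead of concatenation (the .card classes are just illustrative):

```css
/* Native CSS nesting can't build these names with &,
   so each BEM class is written out as its own selector */
.card {}
.card__title {}
.card__body {}
.card__footer {}
```

The flat version is arguably easier to search for in a codebase anyway, which is one reason some people avoided concatenation even in Sass.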
For me, nesting in native CSS is equivalent to how I was using it in Sass and is one of the reasons to consider un-Sassing. My advice is to be careful with nesting. I would suggest trying to keep your nested code to three levels at the most. Otherwise, you end up with very long selectors that may be more difficult to override in other places in your codebase. Keep it simple.

The color-mix() function

I liked using the Sass color module to lighten or darken a color. I would use this most often with buttons where I wanted the hover color to be different. It was really easy to do with Sass. (I am using $color to stand in for the color value.)

background-color: darken($color, 20%);

The color-mix() function in native CSS allows me to do the same thing, and I have used it extensively in the past few months since learning about it from Chris Ferdinandi.

background-color: color-mix(in oklab, var(--color), #000000 20%);

Mixins and functions

I know that a lot of developers who use Sass make extensive use of mixins. In the past, I used a fair number of mixins. But a lot of the time, I was just pasting mixins from previous projects. And many times, I didn’t make as much use of them as I could because I would just plain forget that I had them. They were always nice helper functions and allowed me to not have to remember things like clearfix or font smoothing. But those were also techniques that I found myself using less and less. I also utilized functions in Sass and created several of my own, mostly to do some math on the fly. I mainly used them to convert pixels into ems because I liked being able to define my typography and spacing as relative and creating relationships in my code. I also had written a function to convert pixels to ems for custom media queries that did not fit within the breakpoints I normally used. I had learned that it was a much better practice to use ems in media queries so that layouts would not break when a user used page zoom.
Currently, we do not have a way to do mixins and functions in native CSS. But there is work being done to add that functionality. Geoff wrote about the CSS Functions and Mixins Module. I did a little experiment for the use case I was using Sass functions for. I wanted to calculate em units from pixels in a custom media query. My standard practice is to set the body text size to 100%, which equals 16 pixels by default. So, I wrote a calc() function to see if I could replicate what my Sass function provided me.

@media (min-width: calc((600 / 16) * 1em));

This custom media query is for a minimum width of 600px (600 divided by 16 resolves to 37.5em). This would work based on my setting the base font size to 100%. It could be modified.

Tired of tooling

Another reason to consider un-Sassing is that I am simply tired of tooling. Tooling has gotten more and more complex over the years, and not necessarily with a better developer experience. From what I have observed, today’s tooling is predominantly geared towards JavaScript-first developers, or anyone using a framework like React. All I need is a tool that is easy to set up and maintain. I don’t want to have to learn a complex system in order to do very simple tasks. Another issue is dependencies. At my current job, I needed to add some new content and styles to an older WordPress site that had not been updated in several years. The site used Sass, and after a bit of digging, I discovered that the previous developer had used CodeKit to process the code. I renewed my CodeKit license so that I could add CSS to style the content I was adding. It took me a bit to get the settings correct because the settings in the repo were not saving the processed files to the correct location. Once I finally got that set, I continued to encounter errors. Dart Sass, the engine that powers Sass, introduced changes to the syntax that broke the existing code.
I started refactoring a large amount of code to update the site to the correct syntax, allowing me to write new code that would be processed. I spent about 10 minutes attempting to refactor the older code, but was still getting errors. I just needed to add a few lines of CSS to style the new content I was adding to the site. So, I decided to go rogue and write the new CSS I needed directly in the WordPress template. I have had similar experiences with other legacy codebases, and that’s the sort of thing that can happen when you’re super reliant on third-party dependencies. You spend more time trying to refactor the Sass code so you can get to the point where you can add new code and have it compiled. All of this has left me tired of tooling. I am fortunate enough at my new position that the tooling is all set up through the Django CMS. But even with that system, I have run into issues. For example, I tried using a mixture of percentage and pixel values in a minmax() function, and Sass tried to evaluate it as a math function even though the units were incompatible.

grid-template-columns: repeat(auto-fill, minmax(min(200px, 100%), 1fr));

I needed to be able to escape it so that Sass would not try to evaluate the code as a math function:

grid-template-columns: repeat(auto-fill, minmax(unquote("min(200px, 100%)"), 1fr));

This is not a huge pain point, but it was something I had to take time to investigate, time that I could have spent writing HTML or CSS. Thankfully, that is something Ana Tudor has written about. All of these different pain points have left me tired of having to mess with tooling. It is another reason why I have considered un-Sassing.

Verdict

So what is my verdict — is it time to un-Sass? Please don’t hate me, but my conclusion is: it depends. Maybe not the definitive answer you were looking for. But you probably are not surprised.
If you have been working in web development even a short amount of time, you know that there are very few definitive ways of doing things. There are a lot of different approaches, and just because someone else solves it differently does not mean you are right and they are wrong (or vice versa). Most things come down to the project you are working on, your audience, and a host of other factors. For my personal site, yes, I would like to un-Sass. I want to kick the build process to the curb and eliminate those dependencies. I would also like for other developers to be able to view source on my CSS. You can’t view source on Sass. And part of the reason I write on my site is to share solutions that might benefit others, and making code more accessible is a nice maintenance enhancement. My personal site does not have a very large codebase. I could probably easily un-Sass it in a couple of days or over a weekend. But for larger sites, like the codebase I work with at my job, I wouldn’t suggest un-Sassing. There is way too much code that would have to be refactored, and I am unable to justify the cost for that kind of effort. And honestly, it is not something I feel motivated to tackle. It works just fine the way that it is. And Sass is still a very good tool to use. It’s not “breaking” anything. Your project may be different and there might be more gains from un-Sassing than the project I work on. Again, it depends.

The way forward

It is an exciting time to be a CSS developer. The language is continuing to evolve and mature. And every day, it is incorporating new features that first came to us through other third-party tools such as Sass. It is always a good idea to stop and re-evaluate your technology decisions to determine if they still hold up or if more modern approaches would be a better way forward. That does not mean we have to go back and “fix” all of our old projects. And it might not mean doing a complete overhaul.
A lot of newer techniques can live side by side with the older ones. We have a mix of both Sass variables and CSS custom properties in our codebase. They don’t work against each other. The great thing about web technologies is that they build on each other and there is usually backward compatibility. Don’t be afraid to try new things. And don’t judge your past work based on what you know today. You did the best you could given your skill level, the constraints of the project, and the technologies you had available. You can start to incorporate newer ways right alongside the old ones. Just build websites!

Is it Time to Un-Sass? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Review: Elecrow's All-in-one Starter Kit for Pico 2
by: Abhishek Prakash Wed, 17 Sep 2025 13:42:32 GMT

The Raspberry Pi Pico 2 starter kit from Elecrow is an educational device that integrates multiple sensors and components onto a single board for learning electronics and programming. Built around the dual-core Raspberry Pi Pico 2 RP2350 chip, the kit includes 17 sensors, 20 RGB LEDs, and a 2.4-inch TFT color touchscreen in a portable case format. The kit is designed to eliminate the need for breadboarding, wiring, and soldering, allowing users to focus on programming concepts and sensor functionality. It comes with 21 structured tutorials that progress from basic to advanced levels, using Arduino IDE as the programming environment. In this article, I'll share my experience with this starter kit.

📋 Elecrow sent me this kit for review. The opinion is purely mine, based on my experience.

Technical specification

The kit comes in the form of a briefcase-styled plastic case. It weighs less than 350 grams and has a size of 19.5 x 17 x 4.6 cm. At the core of this kit lies the Raspberry Pi Pico 2 RP2350. There is a 2.4-inch TFT touchscreen surrounded by seventeen sensors. These sensors are already connected to the Pico 2, so you don't need to do any manual connections to access them. It is powered by a type-C port, and the same port is used for transferring the project files to the board.

Light Sensor
Hall Sensor
Gas Sensor (MQ2)
Sound Sensor
Temperature & Humidity Sensor
MPU-6050 Accelerometer & Gyro
2.0 Ultrasonic Ranging Sensor
Touch Sensor
Buzzer
Servo Motor
Vibration Motor
Relay
Individual LEDs
RGB LED
Buttons
Linear Potentiometer
Infrared

My experience with the Elecrow Pico 2 Starter Kit

The kit comes preloaded with a few games and a program that lets you enable the LED lights and change their patterns. The games are Dinosaur Jump (the one you see in Chrome) and Snake. The games are not as interesting as I would want them to be. The dinosaur moves way too slowly in the first stage.
Even my four-year-old didn't have enough patience to play this 'slow game'. While the Snake game is better, there is a slight delay between a button press and the response on screen. But this is not what the kit is for. It is for exploring programming with all those sensors on the board.

Easier if you are familiar with the Arduino ecosystem

Here's the thing. If you are familiar with Arduino boards and their ecosystem, things will be a lot easier for you. I have been using Raspberry Pi for years but never used an Arduino or another microcontroller like the Pico board here. I learned a few things for sure. You have to 'burn' the project code on the board, and you have to do it each time you have a new project. This means that if you ran a program that sounds the buzzer and next you want to try a program that interacts with the ultrasound sensor, you have to put this new code on the Pico 2. Elecrow does provide more than one piece of documentation, but they are inconsistent with each other. The getting started guide should be improved, especially for beginners. It took me some time to figure things out based on the two documents and some web searches. The web-based documentation does not mention that version 4.2.0 of the Raspberry Pi Pico/RP2040/RP2350 board package has to be explicitly added to the board manager in Arduino IDE. It is mentioned in the user manual PDF, though. Elecrow provides source code for around 15 projects. The wiki on the web mentions a different source code link, and the PDF user manual mentions the source code on GitHub. It doesn't end here. Most of the sample project codes on GitHub have different names for their folders and their .ino files. In the Arduino ecosystem, both the .ino code file and the folder that contains it must have the same name; otherwise, the sketch won't be visible in Arduino IDE. In my opinion, things would have been smoother if I were familiar with Arduino and the documentation was a bit more straightforward.
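To make that naming requirement concrete, here is a minimal shell sketch of the layout Arduino IDE expects. The sketch name buzzer_test is a made-up example, not one of Elecrow's project names:

```shell
# Arduino IDE only lists and opens a sketch when the folder and the
# .ino file inside it share the same name.
mkdir -p buzzer_test
touch buzzer_test/buzzer_test.ino
ls buzzer_test    # shows: buzzer_test.ino
```

Renaming either the folder or the file so the two match is usually all it takes to make a downloaded sample project show up in the IDE.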
Sample projects are simple and fun

I did manage to overcome the initial hurdle and was able to run several of the provided projects. Now, the provided user manual does an excellent job at explaining the sample projects. It explains the objective of the experiment, actions that should be performed, working principles, and key elements of the program.

The document is excellent for understanding the sample projects

The projects are mostly simple and explore various sensors present on the kit: simple projects like controlling an LED with a button, oscillating the servo motor, showing room temperature and humidity, measuring obstacle distance with an ultrasound sensor, etc.

Room temperature and humidity

The projects that involved an infrared receiver didn't compile. I'll debug the issue later, and if I am unable to fix it, I'll perhaps open a bug report on Elecrow's GitHub repo. To experiment, I even changed the code slightly. I can see that there is potential to modify the existing code into something else. For example, if the room temperature reaches a certain level, the servo motor starts rotating. There is potential here to explore and have fun. Above all, exploring this device made me familiar with Arduino. New skill unlocked 💪

Conclusion

This is a suitable option for schools, as they can have a bunch of these kits in their STEM lab. Children can start working on modifying the code for their lab projects instead of struggling with wiring and soldering. The briefcase-style case also makes it easier to store without worrying about disturbing the wire connections. Perhaps there could be a discount on bulk orders; I am just guessing. Parents who have a little bit of Arduino experience or the willingness to learn can also get this as a present for their children. With a little guidance, they can build new things upon the existing sample projects, and that will help them explore the exciting world of electronics and programming.
To the makers: if they could improve their getting-started guide and provide code consistent with Arduino IDE requirements, it would surely flatten the learning curve. This kit is available for $37.99, which is a fair price for what it offers. Do refer to the official manual before starting, if you purchase the kit.

Explore Elecrow All-in-one Starter Kit for Pico 2
-
409: Our Own Script Injection
by: Chris Coyier Tue, 16 Sep 2025 15:41:29 +0000

Chris and Stephen talk about how we use a Cloudflare Worker & HTMLRewriter to inject a very special <script> tag into the previews of the Pens you work on. This script has a lot of important jobs, so its presence is crucial, and getting it in there reliably can be a bit of a challenge.

Time Jumps

00:06 Injecting a script into your code
01:10 What we talked about previously that led up to this
02:45 What are the jobs of this script?
07:54 How do we account for HTML pages?
10:22 Preview page address
20:02 How do we get the script in?
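As a rough illustration of the approach discussed in the episode (this is my own minimal sketch, not CodePen's actual Worker; the script URL and handler shape are assumptions), a Cloudflare Worker can use HTMLRewriter to append a script tag to the head of each preview response:

```javascript
// Hedged sketch of script injection with Cloudflare's HTMLRewriter.
// "/preview-runtime.js" is a made-up URL, not CodePen's real script.
export default {
  async fetch(request) {
    // Fetch the preview document from the origin
    const response = await fetch(request);

    // Rewrite the streamed HTML, appending our script inside <head>
    return new HTMLRewriter()
      .on("head", {
        element(head) {
          head.append('<script src="/preview-runtime.js"></script>', { html: true });
        },
      })
      .transform(response);
  },
};
```

Because HTMLRewriter operates on the response stream, the injection happens without buffering the whole document, though (as the episode notes) handling previews that aren't well-formed HTML pages is where it gets tricky.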
-
Chris’ Corner: Terminological Fading
by: Chris Coyier Mon, 15 Sep 2025 17:18:42 +0000

I found myself saying “The Edge” in a recent podcast with Stephen. I was talking about some server-side JavaScript that executes during a web request, and that it was advantageous that it happens at CDN nodes around the world rather than at one location only, so that it’s fast. That was kinda the whole point of “The Edge”: speed. I don’t hear the term bandied about much anymore, but it’s still a useful architectural concept that many use. Salma Alam-Naylor has a good explainer post. It’s just interesting how terms kinda just chill out in usage over time. They feel like such big important things at the time, that everyone has a thought about, then they just fade away, even if we’re all still doing and using the thing we were talking about. Even terms like “SPA” (Single Page App) seemed like all anyone wanted to argue about for quite a while there, and now I see it chilling out. All the nuance and distinctions between that and a website with regular ol’ links and reloads have come to bear. Concepts like paint holding and view transitions make regular sites feel much more like SPAs, and concepts like server-side rendering make SPAs work as regular sites anyway. It’s not a fight anymore; it’s just technology. The more you understand, the more rote (and, dare I say, boring) all this becomes. Dave Rupert says: Design, too, can be and benefits from being a bit boring. Fortunately we have grug to guide us.
-
The “Most Hated” CSS Feature: cos() and sin()
by: Juan Diego Rodríguez Mon, 15 Sep 2025 14:31:06 +0000 No feature is truly “the worst” in CSS, right? After all, it’s all based on opinion and personal experience, but if we had to reach a consensus, checking the State of CSS 2025 results would be a good starting point. I did exactly that, jumped into the awards section, and there I found it: the “Most Hated Feature,” a title no CSS feature should have to bear… This shocks me, if I’m being honest. Are trigonometric functions really that hated? I know “hated” is not the same as saying something is the “worst”, but it still has an awful ring to it. And I know I’m being a little dramatic here, since only “9.1% of respondents truly hate trigonometry.” But that’s still too much shade being thrown for my taste. I want to eliminate that 9.1%. So, in this series, I want to look at practical uses for CSS trigonometric functions. We’ll tackle them in pieces because there’s a lot to take in, and I find it easiest to learn and retain information when it’s chunked into focused, digestible pieces. And we’ll start with what may be the most popular functions of the “worst” feature: sin() and cos(). CSS Trigonometric Functions: The “Most Hated” CSS Feature sin() and cos() (You are here!) Tackling the CSS tan() Function (coming soon) Inverse functions: asin(), acos(), atan() and atan2() (coming soon) What the heck are cos() and sin() anyway? This section is for those for whom cos() and sin() don’t quite click yet, or who simply want a refresher. If you aced trigonometry quizzes in high school, feel free to skip ahead to the next section! What I find funny about cos() and sin() — and also why I think there is confusion around them — is the many ways we can describe them. We don’t have to look too hard. A quick glance at this Wikipedia page has an eye-watering number of super nuanced definitions. This is a learning problem in the web development field.
I feel like some of those definitions are far too general and lack detail about the essence of what trigonometric functions like sin() and cos() can do. Conversely, other definitions are overly complex and academic, making them tough to grok without an advanced degree. Let’s stick to the sweet middle spot: the unit circle. Meet the unit circle. It is a circle with a radius of one unit: Right now it’s alone… in space. Let’s place it on the Cartesian coordinate system (the classic chart with X and Y axes). We describe each point in space in Cartesian coordinates: The X coordinate: The horizontal axis, plotting the point towards the left or right. The Y coordinate: The vertical axis, plotting the point towards the top or bottom. We can move through the unit circle by an angle, which is measured from the positive X-axis going counter-clockwise. CodePen Embed Fallback We can go in a clockwise direction by using negative angles. As my physics teacher used to say, “Time is negative!” Notice how each angle lands on a unique point in the unit circle. How else can we describe that point using Cartesian coordinates? When the angle is 0° the X and Y coordinates are 1 and 0 (1, 0), respectively. We can deduce the Cartesian coordinates for other angles just as easily, like 90°, 180° and 270°. But for any other angle, we don’t know where the point is initially located on the unit circle. If only there were a pair of functions that take an angle and give us our desired coordinates… You guessed it, the CSS cos() and sin() functions do exactly that. And they’re very closely related, where cos() is designed to handle the X coordinate and sin() returns the Y coordinate. Play with the toggle slider in the following demo to see the relationship between the two functions, and notice how they form a right triangle with the initial point on the unit circle: CodePen Embed Fallback I think that’s all you really need to know about cos() and sin() for the moment. 
They’re mapped to Cartesian coordinates, which allows us to track a point along the unit circle with an angle, no matter what size that circle happens to be. Let’s dive into how we can actually use cos() and sin() in our everyday CSS work. It’s always good to put a little real-world context to theoretical concepts like math. Circular layouts If we go by the unit circle definition of cos() and sin(), then it’s easy to see how they might be used to create circular layouts in CSS. The initial setup is a single row of circular elements: CodePen Embed Fallback Say we want to place each circular item around the outline of a larger circle instead. First, we would let CSS know the total number of elements and also each element’s index (the order it’s in), something we can do with an inline CSS variable that holds each item’s position in the order: <ul style="--total: 9"> <li style="--i: 0">0</li> <li style="--i: 1">1</li> <li style="--i: 2">2</li> <li style="--i: 3">3</li> <li style="--i: 4">4</li> <li style="--i: 5">5</li> <li style="--i: 6">6</li> <li style="--i: 7">7</li> <li style="--i: 8">8</li> </ul> Note: This step will become much easier and more concise when the sibling-index() and sibling-count() functions gain support (and they’re really neat). I’m hardcoding the indexes with inline CSS variables in the meantime. To place the items around the outline of a larger circle, we have to space them evenly by a certain angle. And to get that angle, we can divide 360deg (a full turn around the circle) by the total number of items, which is 9 in this specific example. Then, to get each element’s specific angle, we can multiply the angle spacing by the element’s index (i.e., position): li { --rotation: calc(360deg / var(--total) * var(--i)); } We also need to push the items away from the center, so we’ll assign a --radius value for the circle using another variable. ul { --radius: 10rem; } We have the element’s angle and radius.
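That rotation formula is easy to sanity-check outside of CSS. Here is the same calc() math mirrored in a few lines of Python (a quick sketch of mine, using the --total: 9 from the markup above):

```python
total = 9                # mirrors --total in the markup
spacing = 360 / total    # 40 degrees between neighboring items

# --rotation: calc(360deg / var(--total) * var(--i)) for each index
rotations = [spacing * i for i in range(total)]
print(rotations)  # → [0.0, 40.0, 80.0, 120.0, 160.0, 200.0, 240.0, 280.0, 320.0]
```

Equal angular spacing like this is what keeps the layout free of hardcoded magic numbers.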
What’s left is to calculate the X and Y coordinates for each item. That’s where cos() and sin() come into the picture. We use them to get the X and Y coordinates that place each item around the unit circle, then multiply each coordinate by the --radius value to get an item’s final position on the bigger circle: li { /* ... */ position: absolute; transform: translateX(calc(cos(var(--rotation)) * var(--radius))) translateY(calc(sin(var(--rotation)) * var(--radius))); } That’s it! We have a series of nine circular items placed evenly around the outline of a larger circle: CodePen Embed Fallback And we didn’t need to use a bunch of magic numbers to do it! All we provide CSS with is the unit circle’s radius, and then CSS does all the trigonometric gobbledygook that makes so many of us call this the “worst” CSS feature. Hopefully, I’ve convinced you to soften your opinions on them if that’s what was holding you back! We aren’t limited to full circles, though! We can also have a semicircular arrangement by choosing 180deg instead of 360deg. CodePen Embed Fallback This opens up lots of layout possibilities. Like, what if we want a circular menu that expands from a center point by transitioning the radius of the circle? We can totally do that: CodePen Embed Fallback Click or hover the heading and the menu items form around the circle! Wavy layouts There’s still more we can do with layouts! If, say, we plot the cos() and sin() coordinates on a two-axis graph, notice how they give us a pair of waves that periodically go up and down. And notice they are offset from each other along the horizontal (X) axis: Where do these waves come from? If we think back to the unit circle we talked about earlier, the value of cos() and sin() oscillate between -1 and 1. In other words, the lengths match when the angle around the unit circle varies. If we graph that oscillation, then we’ll get our wave and see that they’re sorta like reflections of each other.
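That “reflections of each other” claim is easy to verify numerically. A short Python sketch (mine, not from the article) confirms that cos() is just sin() shifted by a quarter turn, and that both waves stay inside the -1 to 1 band:

```python
import math

# cos(θ) equals sin(θ + 90°) at every angle: same wave, shifted a quarter turn
for deg in range(0, 360, 15):
    theta = math.radians(deg)
    assert math.isclose(math.cos(theta), math.sin(theta + math.pi / 2), abs_tol=1e-12)

# Both functions oscillate between -1 and 1, which bounds the wave height
samples = [math.sin(math.radians(d)) for d in range(360)]
print(min(samples) >= -1 and max(samples) <= 1)  # → True
```

That 90° shift is exactly the horizontal offset between the two waves in the graph.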
Can we place an element following one of these waves? Absolutely. Let’s start with the same single row layout of circular items we made earlier. This time, though, the length of that row spans beyond the viewport, causing overflow. CodePen Embed Fallback We’ll assign an index position for each item like we did before, but this time we don’t need to know the total number of items. We had nine items last time, so let’s bump that up to 11 and pretend like we don’t know that: <ul> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <li style="--i: 4"></li> <li style="--i: 5"></li> <li style="--i: 6"></li> <li style="--i: 7"></li> <li style="--i: 8"></li> <li style="--i: 9"></li> <li style="--i: 10"></li> </ul> We want to vary the element’s vertical position along either a sin() or cos() wave, meaning translating each item’s position based on its order in the index. We’ll multiply an item’s index by a certain angle that is passed into the sin() function, and that will return a ratio that describes how high or low the element should be on the wave. The final thing is to multiply that result by a length value, which I calculated as half an item’s total size. Here’s the math in CSS-y terms: li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } I’m using a 60deg value because the waves it produces are smoother than some other values, but we can vary it as much as we want to get cooler waves. Play around with the toggle in the next demo and watch how the wave’s intensity changes with the angle: CodePen Embed Fallback This is a great example to see what we’re working with, but how would you use it in your work? Imagine we have two of these wavy chains of circles, and we want to intertwine them together, kinda like a DNA strand. Let’s say we’re starting with the HTML structure for two unordered lists nested inside another unordered list.
The two nested unordered lists represent the two waves that form the chain pattern: <ul class="waves"> <!-- First wave --> <li> <ul class="principal"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> <!-- Second wave --> <li> <ul class="secondary"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> </ul> Pretty similar to the examples we’ve seen so far, right? We’re still working with an unordered list where the items are indexed with a CSS variable, but now we’re working with two of those lists… and they’re contained inside a third unordered list. We don’t have to structure this as lists, but I decided to leave them so I can use them as hooks for additional styling later. To avoid any problems, we’ll ignore the two direct <li> elements in the outer unordered list that contain the other lists using display: contents. .waves > li { display: contents; } Notice how one of the chains is the “principal” while the other is the “secondary.” The difference is that the “secondary” chain is positioned behind the “principal” chain. I’m using slightly different background colors for the items in each chain, so it’s easier to distinguish one from the other as you scroll through the block-level overflow. CodePen Embed Fallback We can reorder the chains using a stacking context: .principal { position: relative; z-index: 2; } .secondary { position: absolute; } This positions one chain on top of the other. Next, we will adjust each item’s vertical position with the “hated” sin() and cos() functions. Remember, they’re sorta like reflections of one another, so the variance between the two is what offsets the waves to form two intersecting chains of items: .principal { /* ... */ li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } } .secondary { /* ... 
*/ li { transform: translateY(calc(cos(60deg * var(--i)) * var(--shape-size) / 2)); } } We can accentuate the offset even more by shifting the .secondary wave another 60deg: .secondary { /* ... */ li { transform: translateY(calc(cos(60deg * var(--i) + 60deg) * var(--shape-size) / 2)); } } The next demo shows how the waves intersect at an offset angle of 60deg. Adjust the slider toggle to see how the waves intersect at different angles: CodePen Embed Fallback Oh, I told you this could be used in a practical, real-world way. How about adding a little whimsy and flair to a hero banner: CodePen Embed Fallback Damped oscillatory animations The last example got me thinking: is there a way to use sin() and cos()‘s back and forth movement for animations? The first example that came to mind was an animation that also went back and forth, something like a pendulum or a bouncing ball. This is, of course, trivial since we can do it in a single animation declaration: .element { animation: someAnimation 1s infinite alternate; } This “back and forth” animation is called oscillatory movement. And while cos() or sin() could be used to model oscillations in CSS, it would be like reinventing the wheel (albeit a clunkier one). I’ve learned that perfect oscillatory movement — like a pendulum that swings back and forth in perpetuity, or a ball that never stops bouncing — doesn’t really exist. Movement tends to decay over time, like a bouncing spring: There’s a specific term that describes this: damped oscillatory movement. And guess what? We can model it in CSS with the cos() function! If we graph it over time, then we will see it goes back and forth while getting closer to the resting position¹. Wikipedia has another animated example that nicely demonstrates what damped oscillation looks like.
In general, we can describe damped oscillation over time as a mathematical function: x(t) = a · e^(−γt) · cos(ωt − α) It’s composed of three parts: e^(−γt): Due to the negative exponent, it becomes exponentially smaller as time passes, bringing the movement to a gradual stop. It is multiplied by a damping constant (γ) that specifies how quickly the movement should decay. a: This is the initial amplitude of the oscillation, i.e., the element’s initial position. cos(ωt − α): This gives the movement its oscillation as time passes. Time is multiplied by frequency (ω), which determines an element’s oscillation speed². We can also subtract α from the time term, which we can use to offset the initial oscillation of the system. Okay, enough with all the theory! How do we do it in CSS? We’ll set the stage with a single circle sitting all by itself. CodePen Embed Fallback We have a few CSS variables we can define that will come in handy since we already know the formula we’re working with: :root { --circle-size: 60px; --amplitude: 200px; /* The amplitude is a distance, so let's write it in pixels */ --damping: 0.3; --frequency: 0.8; --offset: calc(pi/2); /* This is the same as 90deg! (But in radians) */ } Given these variables, we can peek at what the animation would look like on a graph using a tool like GeoGebra: From the graph, we can see that the animation starts at 0px (thanks to our offset), then peaks around 140px and dies out around 25s in. I, for one, won’t be waiting 25 seconds for the animation to end, so let’s create a --progress property that will animate between 0 and 25, and will act as our “time” in the function. Remember that to animate or transition a custom property, we’ve gotta register it with the @property at-rule.
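Before moving to the @property setup, those graph numbers can be sanity-checked with a quick Python sketch of the same formula (constants copied from the custom properties above; the function name is mine):

```python
import math

def oscillation(t, damping=0.3, amplitude=200, frequency=0.8, offset=math.pi / 2):
    # x(t) = a * e^(-γt) * cos(ωt - α), same constants as the CSS variables
    return amplitude * math.exp(-damping * t) * math.cos(frequency * t - offset)

print(round(oscillation(0), 6))  # → 0.0, the 90° offset makes the motion start at rest
print(abs(oscillation(25)) < 1)  # → True, the movement has effectively died out by t = 25
```

The curve makes one big early swing and then decays exponentially toward zero, which is the overall shape the graph shows.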
@property --progress { syntax: "<number>"; initial-value: 0; inherits: true; } @keyframes movement { from { --progress: 0; } to { --progress: 25; } } What’s left is to implement the prior formula for the element’s movement, which, written in CSS terms, looks like this: .circle { --oscillation: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation)); animation: movement 1s linear infinite; } CodePen Embed Fallback This gives a pretty satisfying animation by itself, but the damped motion is only on the x-axis. What would it look like if, instead, we applied the damped motion on both axes? To do this, we can copy the same oscillation formula for x, but replace the cos() with sin(). .circle { --oscillation-x: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); --oscillation-y: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * sin(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation-x)) translateY(var(--oscillation-y)); animation: movement 1s linear infinite; } CodePen Embed Fallback This is even more satisfying! A circular and damped motion, all thanks to cos() and sin(). Besides looking great, how could this be used in a real layout? We don’t have to look too hard. Take, for example, this sidebar I recently made where the menu items pop in the viewport with a damped motion: CodePen Embed Fallback Pretty neat, right?! More trigonometry to come! Well, finding uses for the “most hated CSS feature” wasn’t that hard; maybe we should start showing some love to trigonometric functions. But wait. There are still several trigonometric functions in CSS we haven’t talked about. In the following posts, we’ll keep exploring what trig functions (like tan() and inverse functions) can do in CSS. 
CSS Trigonometric Functions: The “Most Hated” CSS Feature sin() and cos() (You are here!) Tackling the CSS tan() Function (coming soon) Inverse functions: asin(), acos(), atan() and atan2() (coming soon) Also, before I forget, here is another demo I made using cos() and sin() that didn’t make the cut in this article, but it is still worth checking out because it dials up the swirly-ness from the last example to show how wacky we can get. CodePen Embed Fallback Footnotes This kind of damped oscillatory movement, where the back and forth is more visible, is called underdamped oscillation. There are also overdamped and critically damped oscillations, but we won’t focus on them here. ↪️ In reality, the damping constant and the frequency are closely related. You can read more about damped oscillation in this paper. ↪️ The “Most Hated” CSS Feature: cos() and sin() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
I Ran Local LLMs on My Android Phone
by: Community Mon, 15 Sep 2025 11:58:13 GMT Like it or not, AI is here to stay. For those who are concerned about data privacy, there are several local AI options available. Tools like Ollama and LM Studio make things easier. Now, those options are for the desktop user and require significant computing power. What if you want to use local AI on your smartphone? Sure, one way would be to deploy Ollama with a web GUI on your server and access it from your phone. But there is another way, and that is to use an application that lets you install and use LLMs (or should I say SLMs, Small Language Models) on your phone directly instead of relying on your local AI server on another computer. Allow me to share my experience with experimenting with LLMs on a phone. 📋Smartphones these days have powerful processors, and some even have dedicated AI processors on board. Snapdragon 8 Gen 3, Apple’s A17 Pro, and Google Tensor G4 are some of them. Yet, the models that can be run on a phone are often vastly different from the ones you use on a proper desktop or server.
Here's what you'll need:
An app that allows you to download the language models and interact with them.
Suitable LLMs that have been specifically created for running on mobile devices.
Apps for running LLMs locally on a smartphone
After researching, I decided to explore the following applications for this purpose. Let me share their features and details.
1. MLC Chat
MLC Chat supports top models like Llama 3.2, Gemma 2, Phi 3.5 and Qwen 2.5, offering offline chat, translation, and multimodal tasks through a sleek interface. Its plug-and-play setup with pre-configured models, NPU optimization (e.g., Snapdragon 8 Gen 2+), and beginner-friendly features make it a good choice for on-device AI. You can download the MLC Chat APK from their GitHub release page. Android is looking to forbid sideloading of APK files. I don't know what would happen then, but you can use APK files for now.
Put the APK file on your Android device, go into Files, and tap the APK file to begin installation. Enable “Install from Unknown Sources” in your device settings if prompted. Follow the on-screen instructions to complete the installation.
Enable APK installation
Once installed, open the MLC Chat app and select a model from the list, like Phi-2, Gemma 2B, Llama-3 8B, or Mistral 7B. Tap the download icon to install the model. I recommend opting for smaller models like Phi-2. Models are downloaded on first use and cached locally for offline use.
Click on the download button to download a model
Tap the Chat icon next to the downloaded model. Start typing prompts to interact with the LLM offline. Use the reset icon to start a new conversation if needed.
2. SmolChat (Android)
SmolChat is an open-source Android app that runs any GGUF-format model (like Llama 3.2, Gemma 3n, or TinyLlama) directly on your device, offering a clean, ChatGPT-like interface for fully offline chatting, summarization, rewriting, and more. Install SmolChat from Google's Play Store. Open the app, then choose a GGUF model from the app’s model list or manually download one from Hugging Face. If manually downloading, place the model file in the app’s designated storage directory (check app settings for the path).
3. Google AI Edge Gallery
Google AI Edge Gallery is an experimental open-source Android app (iOS soon) that brings Google's on-device AI power to your phone, letting you run powerful models like Gemma 3n and other Hugging Face models fully offline after download. This application makes use of Google’s LiteRT framework. You can download it from the Google Play Store. Open the app and browse the list of provided models or manually download a compatible model from Hugging Face. Select the downloaded model and start a chat session. Enter text prompts or upload images (if supported by the model) to interact locally. Explore features like prompt discovery or vision-based queries if available.
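Before downloading any model, it is worth estimating whether it will even fit on your phone. A back-of-the-envelope sketch (my own rule of thumb, not something these apps provide): a quantized model's file size is roughly its parameter count times the bits per weight, ignoring metadata overhead.

```python
def model_size_gb(params_billion, bits_per_weight):
    """Rough download size of a quantized model in GB (8 bits per byte)."""
    return params_billion * bits_per_weight / 8

print(model_size_gb(2, 4))    # → 1.0, a 2B model at 4-bit quantization
print(model_size_gb(3.8, 8))  # → 3.8, a 3.8B model kept at 8-bit
```

This estimate hints at why small models at 4-bit quantization are the practical choice for phones with limited RAM.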
Top Mobile LLMs to try out
Here are the best ones I’ve used:
Google’s Gemma 3n (2B): Blazing-fast for multimodal tasks including image captions, translations, even solving math problems from photos. Best for quick, visual-based AI assistance.
Meta’s Llama 3.2 (1B/3B): Strikes the perfect balance between size and smarts. It’s great for coding help and private chats. The 1B version runs smoothly even on mid-range phones. Best for developers & privacy-conscious users.
Microsoft’s Phi-3 Mini (3.8B): Shockingly good at summarizing long documents despite its small size. Best for students, researchers, or anyone drowning in PDFs.
Alibaba’s Qwen-2.5 (1.8B): Surprisingly strong at visual question answering—ask it about an image, and it actually understands! Best for multimodal experiments.
TinyLlama-1.1B: The lightweight champ runs on almost any device without breaking a sweat. Best for older phones or users who just need a simple chatbot.
All these models use aggressive quantization (GGUF/safetensors formats), so they’re tiny but still powerful. You can grab them from Hugging Face—just download, load into an app, and you’re set.
Challenges I faced while running LLMs locally on an Android smartphone
Getting large language models (LLMs) to run smoothly on my phone has been equally exhilarating and frustrating. On my Snapdragon 8 Gen 2 phone, models like Llama 3-4B run at a decent 8-10 tokens per second, which is usable for quick queries. But when I tried the same on my backup Galaxy A54 (6 GB RAM), it choked. Loading even a 2B model pushed the device to its limits. I quickly learned that Phi-3-mini (3.8B) or Gemma 2B are far more practical for mid-range hardware. The first time I ran a local AI session, I was shocked to see 50% battery gone in under 90 minutes. MLC Chat offers a power-saving mode for this purpose. Turning off background apps to free up RAM also helps. I also experimented with 4-bit quantized models (like Qwen-1.5-2B-Q4) to save storage but noticed they struggle with complex reasoning.
For medical or legal queries, I had to switch back to 8-bit versions. It was slower but far more reliable.
Conclusion
I love the idea of having an AI assistant that works exclusively for me: no monthly fees, no data leaks. Need a translator in a remote village? A virtual assistant on a long flight? A private brainstorming partner for sensitive ideas? Your phone becomes all of these while staying offline and untraceable. I won’t lie, it’s not perfect. Your phone isn’t a data center, so you’ll face challenges like battery drain and occasional overheating. But in return it provides total privacy, zero costs, and offline access. The future of AI isn’t just in the cloud, it’s also on your device.
Author Info
Bhuwan Mishra is a full-stack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.
-
sudo vs sudo-rs: What You Need to Know About the Rust Takeover of Classic Sudo Command
by: Abhishek Prakash Sun, 14 Sep 2025 05:37:41 GMT The upcoming Ubuntu 25.10 release features a controversial move to replace the classic sudo command with its Rust-based implementation, sudo-rs. This move may raise numerous questions for you. Like, why opt for this change? What's wrong with the original? How would you use this new sudo? What happens to the old one? I will answer all these questions in this article. 📝TLDR; If you are a regular end user who uses sudo to run commands with root privileges, nothing changes for you at the surface, except for some error and warning messages. You'll continue using sudo as you did before, and it will automatically use the Rust-based sudo underneath. However, if you are a sysadmin with a custom sudo configuration, you should start paying attention, as some features have been changed.
What is sudo-rs?
sudo-rs is an implementation of the classic sudo and su written in the Rust programming language, which is known for its memory safety. The new sudo-rs is not 100% compatible with sudo, as it drops some features and implements a few of its own. This new tool is under heavy development and may implement some of the missing sudo features.
Why sudo-rs?
Don't fix what's not broken, right? Perhaps not. The Ubuntu developer discussion cited these primary reasons for going with the Rust-based sudo:
Memory safety: Rust's borrow checker provides better memory management and prevents common security vulnerabilities.
Modern codebase: Easier to maintain and evolve compared to 30-year-old C code.
Better defaults: Removes outdated features that might now be considered security risks.
Younger contributor base: Young developers are opting for modern languages like Rust instead of C. Rust's safety features also make it easier for new developers to contribute more confidently.
Basically, the 30-year-old codebase of sudo is complicated and makes it difficult to patch or implement new features.
Writing from scratch is easier, and the use of a modern, memory-safe language will also help attract contributions from a broader pool of developers. Please note that the sudo-rs dev team is in touch with the maintainer of the original sudo, and they have found issues that were fixed not only in the new Rust-based sudo but also in the original sudo. So from what it seems, sudo-rs is the natural evolution of the classic sudo.
What changes between sudo and sudo-rs?
Not much from a regular end user's perspective. You'll still be typing sudo as usual while it runs sudo-rs in the background. Some warning or error messages may have different text, but that's about it. For sysadmins and advanced users, there are a few things missing for now, and some might not be implemented at all. For example, sudo-rs will not include the sendmail support of the original sudo, which was used for sending notifications about sudo usage. sudo-rs always uses PAM for authentication, and thus your system must be set up for PAM. sudo-rs will use the sudo and sudo-i service configuration, meaning that resource limits, umasks, etc. have to be configured via PAM and not through the sudoers file. Wildcards are not supported in argument positions for a command, to prevent common configuration mistakes in the sudoers file.
Using sudo or sudo-rs in Ubuntu
In Ubuntu 25.10, the sudo command is softlinked to sudo-rs. So, you'll be using sudo as always, but underneath, it will be running the new sudo-rs. The original sudo is still there in the system as sudo-ws. The name resembles the official website sudo.ws of the classic sudo project. If you want to use the OG sudo, you can just replace sudo with sudo-ws. As stated above, there are hardly any differences visible to regular users except for the slightly changed error and warning messages. At least till Ubuntu 26.10, you can make the classic sudo the default sudo by updating the alternatives. Although I would advise against it.
Unless you have a solid reason, there is no harm in using the Rust-based sudo. Clearly, this is what the future will be anyway.
sudo update-alternatives --config sudo
💡sudo-rs is available in the universe repository starting with Ubuntu 24.04. If you want to test it, you can type sudo-rs instead of sudo in your commands. Other distributions may also have this package available.
sudo-rs is not the only alternative to sudo
Surprised? There are several alternatives to sudo that have been in existence for some years now. There is the doas command-line tool, which can be considered a simplified, minimal version of sudo. Another Rust-based implementation of sudo-like functionality is RootAsRole. Some may even count uid0 from systemd as an alternative to sudo, although it's not in the same league in my opinion, but it serves a similar purpose. The official sudo website lists a few more alternatives, but I think not all of them are seeing active development.
FAQ
Let's summarize and answer some of your frequently asked questions on the sudo-rs inclusion.
What is sudo-rs?
sudo-rs is a re-implementation of the classic C-based sudo, written in the memory-safe Rust programming language.
Do I have to use the sudo-rs command instead of sudo?
No. Starting with Ubuntu 25.10, sudo is softlinked to sudo-rs. This means that while you continue using sudo as you did in previous versions, it will automatically be running sudo-rs underneath.
Can I remove sudo-rs and go back to the original sudo?
Yes. The original sudo is available as the sudo-ws command, and you can use update-alternatives to set it as the default sudo. But this is only possible until Ubuntu 26.04, as Canonical plans to test sudo-rs as the only sudo mechanism in 26.10.
What changes between sudo and sudo-rs?
Nothing for common end users. However, advanced, sysadmin-oriented features like sendmail support and wildcards in the sudoers file have been changed. Sysadmins should read the man page of sudo-rs for more details.
Conclusion
To me, you don't have much to worry about if you are a regular user who never touched the sudo config file. Managing servers with custom sudo config? You should pay attention. Now, was it a wise decision to replace a (perfectly?) working piece of software with a Rust rewrite? Is it another example of the 'let's do it in Rust' phenomenon sweeping the dev world? Share your opinion in the comments.
-
Exploring Ansible Modules
by: Abhishek Prakash Sat, 13 Sep 2025 10:55:42 +0530 Ansible is a powerful automation tool that simplifies the management and configuration of systems. At the heart of Ansible's functionality are modules, which are reusable scripts designed to perform specific tasks on remote hosts. These modules allow users to automate a wide range of tasks, from installing packages to managing services, all with the aim of maintaining their systems' desired state. This article will explain what Ansible modules are, how to use them, and provide real-world examples to demonstrate their effectiveness.
What is an Ansible Module?
An Ansible module is a reusable, standalone script that performs a specific task or operation on a remote host. Modules can manage system resources like packages, services, files, and users, among other things. They are the building blocks for creating Ansible playbooks, which define the automation workflows for configuring and managing systems. Ansible modules are designed to be idempotent, meaning they ensure that the system reaches a desired state without applying changes that are unnecessary if the system is already in the correct state. This makes Ansible operations predictable and repeatable. Modules can be written in any programming language, but most are in Python. Ansible ships with a large number of built-in modules, and there are also many community-contributed modules available. Additionally, you can write custom modules to meet specific needs. Here's a simple syntax to get you started:

---
- name: My task name
  hosts: group_name      # Group of hosts to run the task on
  become: true           # Gain root privileges (if needed)
  module_name:
    arguments:           # Module-specific arguments

This is a basic template for defining tasks in your Ansible playbooks.
Ansible Modules - Real-world examples
Let's examine some real-world examples to understand how modules work in action.
Example 1: Installing a package

Let's use the yum module to install the Apache web server on Rocky Linux.

---
- name: Install Apache web server
  hosts: webservers
  tasks:
    - name: Install httpd package
      yum:
        name: httpd
        state: present

In this playbook:

The hosts directive specifies that this playbook will run on hosts in the webservers group.
The yum module is used to ensure that the httpd package is installed.

Let's run the above playbook:

ansible-playbook playbook.yml

After the successful playbook execution, you will see the following output:

Example 2: Managing services

Now, let's use the service module to ensure that the Apache web server is started and enabled to start on boot.

---
- name: Ensure Apache is running and enabled
  hosts: webservers
  tasks:
    - name: Start and enable httpd service
      service:
        name: httpd
        state: started
        enabled: yes

In this playbook:

The service module is used to start the httpd service and enable it to start at boot.

Now, run the above playbook:

ansible-playbook playbook.yml

Output:

10 common Ansible modules and their usage

In this section, I'll show you some of the most commonly used Ansible modules and their usage.

1. ping

The ping module is used to test the connection to the target hosts. It is often used to ensure that the target hosts are reachable and responsive. This module is particularly useful for troubleshooting connectivity issues.

---
- name: Test connectivity
  hosts: all
  tasks:
    - name: Ping all hosts
      ping:

This Ansible playbook, named Test connectivity, checks the network connectivity of all hosts in the inventory. It does so by running a single task: sending a ping request to each host. The task, named Ping all hosts, uses the built-in ping module to ensure that every host is reachable and responding to network requests.

Ansible Ping Module: Check if Host is Reachable - Quickly test if a node is available with Ansible ping command. (Linux Handbook, LHB Community)

2. copy

The copy module copies files from the local machine to the remote host.
It is used to transfer configuration files, scripts, or any other files that need to be present on the remote system. This module simplifies file distribution across multiple hosts.

---
- name: Copy a file to remote host
  hosts: webservers
  tasks:
    - name: Copy index.html
      copy:
        src: /tmp/index.html
        dest: /var/www/html/index.html

The above playbook targets hosts in the webservers group and includes a single task. This task uses the copy module to transfer a file named index.html from the local source path /tmp/index.html to the destination path /var/www/html/index.html on each remote host.

Ansible Copy Module [Explained With Examples] - The Copy module in Ansible comes in handy in your setup. Learn some practical examples. (Linux Handbook, LHB Community)

3. user

The user module manages user accounts. It can create, delete, and manage the properties of user accounts on the remote system. This module is essential for ensuring that the correct users are present on the system with the appropriate permissions.

---
- name: Ensure a user exists
  hosts: all
  tasks:
    - name: Create a user
      user:
        name: johndoe
        state: present
        groups: sudo

This playbook contains a single task, which uses the user module to ensure that a user named johndoe exists on each host. Additionally, it assigns this user to the sudo group, granting administrative privileges.

4. package

The package module is a generic way to manage packages across different package managers. It abstracts the differences between package managers like yum, apt, and dnf, providing a consistent interface for package management tasks. This module helps streamline the installation and management of software packages.

---
- name: Install packages
  hosts: all
  tasks:
    - name: Ensure curl is installed
      package:
        name: curl
        state: present

The above playbook uses the package module to ensure that the curl package is installed on each host.
The desired state of the curl package is set to present, meaning it will be installed if it is not already available on the host.

5. shell

The shell module is used to execute commands on the remote hosts. It allows for running shell commands with the full capabilities of the shell, which is useful for executing ad-hoc commands and scripts on remote systems.

---
- name: Run shell commands
  hosts: all
  tasks:
    - name: Run a shell command
      shell: echo "Hello, World!"

This playbook uses the shell module to execute the command echo "Hello, World!" on each host. This command will output the text Hello, World! in the shell of each remote host.

Using the Shell Module in Ansible - Shell module in Ansible is a powerful tool for executing shell commands on remote hosts, but it comes with maintenance risks. (Linux Handbook, LHB Community)

6. git

The git module manages Git repositories. It can clone repositories, check out specific branches or commits, and update the repositories. This module is essential for deploying code and configuration managed in Git repositories.

---
- name: Clone a Git repository
  hosts: all
  tasks:
    - name: Clone a repository
      git:
        repo: 'https://github.com/example/repo.git'
        dest: /tmp

This playbook uses the git module to clone the repository from https://github.com/example/repo.git into the /tmp directory on each host.

7. template

The template module is used to copy and render Jinja2 templates. It allows for the dynamic creation of configuration files and scripts based on template files and variables. This module is crucial for creating customized and dynamic configuration files.

---
- name: Deploy a configuration file from template
  hosts: all
  tasks:
    - name: Copy template file
      template:
        src: /tmp/template.j2
        dest: /etc/nginx/nginx.conf

This playbook uses the template module to deploy a configuration file by copying the template file located at /tmp/template.j2 on the control machine to /etc/nginx/nginx.conf on each host.
The template file can contain Jinja2 variables that are rendered with the appropriate values during the copy process.

8. file

The file module manages file and directory properties. It can create, delete, and manage the properties of files and directories. This module ensures that the correct file system structure and permissions are in place.

---
- name: Ensure a directory exists
  hosts: all
  tasks:
    - name: Create a directory
      file:
        path: /tmp/mydir
        state: directory

The above playbook uses the file module to ensure that a directory exists at the path /tmp/mydir on each host. If the directory does not already exist, it will be created.

9. service

The service module manages system services. It can start, stop, restart, and enable services on the remote system. This module is essential for ensuring that the necessary services are running and configured to start at boot.

---
- name: Ensure a service is running
  hosts: all
  tasks:
    - name: Start a service
      service:
        name: nginx
        state: started

This playbook uses the Ansible service module to ensure that the nginx service is running on each host. If the service is not already started, it will be initiated.

Manage Services With Ansible Service Module - The service module in Ansible comes in handy for managing services across a variety of platforms. (Linux Handbook, LHB Community)

10. apt

The apt module manages packages using the apt package manager (for Debian-based systems). It handles package installation, removal, and updating. This module is vital for managing software on systems using the Debian package management system.

---
- name: Install a package using apt
  hosts: all
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

This playbook uses the apt module, which is specific to Debian-based systems like Ubuntu, to manage packages. It specifies that the package nginx should be installed, ensuring Nginx is present and available on all targeted hosts after the playbook is executed.
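The generic package module and the manager-specific apt and yum modules ultimately do the same job; the difference is dispatch. Here is a tiny, hypothetical Python sketch of that idea. The mapping and function names are invented for illustration and do not reflect Ansible's actual detection logic:

```python
# Hypothetical sketch: how a manager-agnostic "package" layer might
# choose a backend per distribution. Illustrative mapping only.
BACKEND_BY_DISTRO = {
    "ubuntu": "apt",
    "debian": "apt",
    "rocky": "dnf",
    "fedora": "dnf",
}

def pick_backend(distro: str) -> str:
    """Return the package-manager backend for a distro, or 'unknown'."""
    return BACKEND_BY_DISTRO.get(distro.lower(), "unknown")
```

Because the generic layer dispatches per host, one "ensure curl is present" task can run unchanged across a mixed Debian and Rocky Linux inventory.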
Install and Manage Ubuntu Packages with Ansible APT Module - Ansible's built-in APT module lets you manage packages on Ubuntu and Debian based nodes. (Linux Handbook, Umair Khurshid)

Conclusion

You explored the fundamental concept of Ansible modules, which are essential for automating tasks on remote hosts. I showed the basic syntax for using Ansible modules and provided real-world examples of installing packages and managing services. Additionally, I listed and described common and popular Ansible modules, demonstrating their usage and importance in automating various system tasks.

This is just a glimpse, as we have detailed tutorials on several Ansible modules with real-world examples. To further enhance your skills, explore Ansible's extensive documentation and community resources to discover additional modules and advanced configurations.
-
What Can We Actually Do With corner-shape?
by: Daniel Schwarz Fri, 12 Sep 2025 14:20:45 +0000 When I first started messing around with code, rounded corners required five background images or an image sprite likely created in Photoshop, so when border-radius came onto the scene, I remember everybody thinking that it was the best thing ever. Web designs were very square at the time, so to have border-radius was super cool, and it saved us a lot of time, too. Chris’ border-radius article from 2009, which at the time of writing is 16 years old (wait, how old am I?!), includes vendor prefixes for older web browsers, including “old Konqueror browsers” (-khtml-border-radius). What a time to be alive!

We’re much less excited about rounded corners nowadays. In fact, sharp corners have made a comeback and are just as popular now, as are squircles (square-ish circles or circle-y squares, take your pick), which is exactly what the corner-shape CSS property enables us to create (in addition to many other cool UI effects that I’ll be walking you through today).

At the time of writing, only Chrome 139 and above supports corner-shape, which must be used with the border-radius property and/or any of the related individual properties (i.e., border-top-left-radius, border-top-right-radius, border-bottom-right-radius, and border-bottom-left-radius):

CodePen Embed Fallback

Snipped corners using corner-shape: bevel

These snipped corners are becoming more and more popular as UI designers embrace brutalist aesthetics. In the example above, it’s as easy as using corner-shape: bevel for the snipped corners effect and then border-bottom-right-radius: 16px for the size.

corner-shape: bevel;
border-bottom-right-radius: 16px;

We can do the same thing and it really works with a cyberpunk aesthetic:

CodePen Embed Fallback

Slanted sections using corner-shape: bevel

Slanted sections are a visual effect that’s even more popular, probably not going anywhere, and again, they help elements look a lot less like the boxes that they are.
Before we dive in though, it’s important to keep in mind that each border radius has two semi-major axes, a horizontal axis and a vertical axis, with a ‘point’ (to use vector terminology) on each axis. In the example above, both are set to 16px, so both points move along their respective axis by that amount, away from their corner of course, and then the beveled line is drawn between them. In the slanted section example below, however, we need to supply a different point value for each axis, like this:

corner-shape: bevel;
border-bottom-right-radius: 100% 50px;

CodePen Embed Fallback

The first point moves along 100% of the horizontal axis whereas the second point travels 50px of the vertical axis, and then the beveled line is drawn between them, creating the slant that you see above. By the way, having different values for each axis and border radius is exactly how those cool border radius blobs are made.

Sale tags using corner-shape: round bevel bevel round

You’ve seen those sale tags on almost every e-commerce website, either as images or with rounded corners and without the pointy part (other techniques just aren’t worth the trouble). But now we can carve out the proper shape using two different types of corner-shape at once, as well as a whole set of border radius values:

CodePen Embed Fallback

You’ll need corner-shape: round bevel bevel round to start off. The order flows clockwise, starting from the top-left, as follows:

top-left
top-right
bottom-right
bottom-left

Just like with border-radius.
You can omit some values, causing them to be inferred from other values, but both the inference logic and resulting value syntax lack clarity, so I’d just avoid this, especially since we’re about to explore a more complex border-radius:

corner-shape: round bevel bevel round;
border-radius: 16px 48px 48px 16px / 16px 50% 50% 16px;

Left of the forward slash (/) we have the horizontal-axis values of each corner in the order mentioned above, and on the right of the /, the vertical-axis values. So, to be clear, the first and fifth values correspond to the same corner, as do the second and sixth, and so on. You can unpack the shorthand if it’s easier to read:

border-top-left-radius: 16px;
border-top-right-radius: 48px 50%;
border-bottom-right-radius: 48px 50%;
border-bottom-left-radius: 16px;

Up until now, we’ve not really needed to fully understand the border radius syntax. But now that we have corner-shape, it’s definitely worth doing so. As for the actual values, 16px corresponds to the round corners (this one’s easy to understand) while the 48px 50% values are for the bevel ones, meaning that the corners are ‘drawn’ from 48px horizontally to 50% vertically, which is why and how they head into a point.

Regarding borders — yes, the pointy parts would look nicer if they were slightly rounded, but using borders and outlines on these elements yields unpredictable (but I suspect intended) results due to how browsers draw the corners, which sucks.

Arrow crumbs using the same method

Yep, same thing.

CodePen Embed Fallback

We essentially have a grid row with negative margins, but because we can’t create ‘inset’ arrows or use borders/outlines, we have to create an effect where the fake borders of certain arrows bleed into the next. This is done by nesting the exact same shape in the arrows and then applying something to the effect of padding-right: 3px, where 3px is the value of the would-be border.
The code comments below should explain it in more detail (the complete code in the Pen is quite interesting, though):

<nav>
  <ol>
    <li>
      <a>Step 1</a>
    </li>
    <li>
      <a>Step 2</a>
    </li>
    <li>
      <a>Step 3</a>
    </li>
  </ol>
</nav>

ol {
  /* Clip n’ round */
  overflow: clip;
  border-radius: 16px;

  li {
    /* Arrow color */
    background: hsl(270 100% 30%);
    /* Reverses the z-indexes, making the arrows stack */
    /* Result: 2, 1, 0, ... (sibling-x requires Chrome 138+) */
    z-index: calc((sibling-index() * -1) + sibling-count());

    &:not(:last-child) {
      /* Arrow width */
      padding-right: 3px;
      /* Arrow shape */
      corner-shape: bevel;
      border-radius: 0 32px 32px 0 / 0 50% 50% 0;
      /* Pull the next one into this one */
      margin-right: -32px;
    }

    a {
      /* Same shape */
      corner-shape: inherit;
      border-radius: inherit;
      /* Overlay background */
      background: hsl(270 100% 50%);
    }
  }
}

Tooltips using corner-shape: scoop

CodePen Embed Fallback

To create this tooltip style, I’ve used a popover, anchor positioning (to position the caret relative to the tooltip), and corner-shape: scoop. The caret shape is the same as the arrow shape used in the examples above, so feel free to switch scoop to bevel if you prefer the classic triangle tooltips.
A quick walkthrough:

<!-- Connect button to tooltip -->
<button popovertarget="tooltip" id="button">Click for tip</button>

<!-- Anchor tooltip to button -->
<div anchor="button" id="tooltip" popover>Don’t eat yellow snow</div>

#tooltip {
  /* Define anchor */
  anchor-name: --tooltip;
  /* Necessary reset */
  margin: 0;
  /* Center vertically */
  align-self: anchor-center;
  /* Pin to right side + 15 */
  left: calc(anchor(right) + 15px);

  &::after {
    /* Create caret */
    content: "";
    width: 5px;
    height: 10px;
    corner-shape: scoop;
    border-top-left-radius: 100% 50%;
    border-bottom-left-radius: 100% 50%;
    /* Anchor to tooltip */
    position-anchor: --tooltip;
    /* Center vertically */
    align-self: anchor-center;
    /* Pin to left side */
    right: anchor(left);
    /* Popovers have this already (required otherwise) */
    position: fixed;
  }
}

If you’d rather these were hover-triggered, the upcoming Interest Invoker API is what you’re looking for.

Realistic highlighting using corner-shape: squircle bevel

The <mark> element, used for semantic highlighting, comes with a yellow background by default, but that alone doesn’t exactly create a highlighter effect. By adding the following CSS, which admittedly I discovered by experimenting with completely random values, we can make it look more like a hand-waved highlight:

mark {
  /* A...squevel? */
  corner-shape: squircle bevel;
  border-radius: 50% / 1.1rem 0.5rem 0.9rem 0.7rem;
  /* Prevents background-break when wrapping */
  box-decoration-break: clone;
}

CodePen Embed Fallback

We can also use squircle by itself to create those fancy-rounded app icons, or use them on buttons/cards/form controls/etc. if you think the ‘old’ border radius is starting to look a bit stale:

CodePen Embed Fallback CodePen Embed Fallback

Hand-drawn boxes using the same method

Same thing, only larger. Kind of looks like a hand-drawn box?
CodePen Embed Fallback

Admittedly, this effect doesn’t look as awesome on a larger scale, so if you’re really looking to wow and create something more akin to the Red Dead Redemption aesthetic, this border-image approach would be better.

Clip a background with corner-shape: notch

Notched border radii are ugly and I won’t hear otherwise. I don’t think you’ll want to use them to create a visual effect, but I’ve learned that they’re useful for background clipping if you set the irrelevant axis to 50% and the axis of the side that you want to clip by the amount that you want to clip it by. So if you wanted to clip 30px off the background from the left, for example, you’d choose 30px for the horizontal axes and 50% for the vertical axes (for the -left-radius properties only, of course).

corner-shape: notch;
border-top-left-radius: 30px 50%;
border-bottom-left-radius: 30px 50%;

CodePen Embed Fallback

Conclusion

So, corner-shape is actually a helluva lot of fun. It certainly has more uses than I expected, and no doubt with some experimentation you’ll come up with some more. With that in mind, I’ll leave it to you CSS-Tricksters to mess around with (remember though, you’ll need to be using Chrome 139 or higher). As a parting gift, I leave you with this very cool but completely useless CSS Tie Fighter, made with corner-shape and anchor positioning:

CodePen Embed Fallback

What Can We Actually Do With corner-shape? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
LHB Linux Digest #25.26: tcpdump for cookie capture, GUI for Podman, diff command mastery and more
by: Abhishek Prakash Fri, 12 Sep 2025 17:02:47 +0530 Another week, another batch of Linux goodies! 🎉 Let me quickly summarize them for you. Spaces in filenames are still tripping people up, diff still scares beginners, and tcpdump still lets you spy on HTTP traffic like a hacker in a hoodie 🕵️♂️ (don’t worry, it’s for learning!). If containers are your thing, we’ve got a guide on checking Docker disk usage (before your server starts screaming for space) and some practical Ansible copy module examples to make automation less painful. Plus, our tool discovery section is stacked with ntfy, your new push-notification buddy, and Pods, the slick Podman GUI you didn’t know you needed. And of course, we wrap up with Linux news, from AlmaLinux updates to Linus reminding everyone to write better commit messages. Because good commits = good karma. ✨ This post is for subscribers only.
-
Use tcpdump to Monitor HTTP Traffic and Extract Sensitive Data like Password and Cookies
by: LHB Community Fri, 12 Sep 2025 10:48:27 +0530 You already know the basics of tcpdump from our guide. It helps you watch live traffic, spot misconfigurations, and check that sensitive data is handled safely. Let’s put tcpdump to some practical work. The skills you practice here also align with objectives in CompTIA Security+ or network security roles.

In this hands-on tutorial, we’ll run examples against the test site http://testphp.vulnweb.com to filter GET, POST, and sensitive data. By focusing on high-value traffic, security engineers can efficiently audit network flows and identify potential risks without being overwhelmed by irrelevant packets.

1. Observing Network Behaviour

sudo tcpdump -i eth0 host testphp.vulnweb.com

This captures traffic to and from testphp.vulnweb.com. Key observations you should focus on as a security engineer:

Identify backend infrastructure and exposed IPs
Check if sensitive data is transmitted in plaintext
Monitor response size and timing to detect anomalies
Ensure connection health is stable (ACKs, retransmits)

From the output above, let's zoom in on this part:

23:55:01.936700 IP 192.168.64.3.52526 > ec2-44-228-249-3.us-west-2.compute.amazonaws.com.http: Flags [P.], length 339: HTTP: GET / HTTP/1.1
23:55:02.133596 IP ec2-44-228-249-3.us-west-2.compute.amazonaws.com.http > 192.168.64.3.52526: Flags [P.], length 2559: HTTP: HTTP/1.1 200 OK

Breaking it down:

- 192.168.64.3.52526 > ec2-...: your local machine (source port 52526) talking to an AWS EC2 host on port 80 (HTTP).
- Flags [P.], length 339: PSH + ACK, meaning this packet contains data: the HTTP GET request.
- ec2-... > 192.168.64.3.52526: the server's response back to you on the same TCP session.
- length 2559: HTTP/1.1 200 OK: a 2.5 KB payload from the server, confirming a 200 OK response.
- Flags [.], ack ..., length 0: plain ACK packets with no payload; normal TCP housekeeping.
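The payload filters used in the next section lean on raw TCP header byte offsets, which are easy to get wrong. Before running them, it can help to sanity-check the arithmetic in plain Python; this is a standalone sketch whose values mirror the BPF expressions used later in this article:

```python
# Sanity-check the arithmetic behind tcpdump's payload filters.
# Byte 12 of the TCP header holds the Data Offset in its high nibble,
# counted in 32-bit words, so (byte & 0xf0) >> 2 gives the header
# length in bytes (nibble >> 4 words, times 4 bytes per word).
def header_length(byte12: int) -> int:
    return (byte12 & 0xf0) >> 2

# 0x50 means Data Offset = 5 words: the minimal 20-byte header.
assert header_length(0x50) == 20
# 0xa0 means 10 words: a 40-byte header (20 bytes of options).
assert header_length(0xa0) == 40

# The method-matching constants are just ASCII spelled out in hex.
assert int.from_bytes(b"GET ", "big") == 0x47455420  # note the trailing space
assert int.from_bytes(b"POST", "big") == 0x504F5354
```

The same shift trick is what `tcp[((tcp[12:1] & 0xf0) >> 2):4]` computes inside the kernel: jump past the header, then compare the first four payload bytes.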
💡 Regularly monitor endpoints to detect unusual traffic spikes or misconfigured services early. Do not use this for unauthorized scanning.

2. Filter at the TCP Payload Level

Before you filter at the TCP payload level, you should first understand the TCP header. Each TCP segment has a header that contains the information needed for reliable transmission.

- Source Port (bytes 0-1, 16 bits): port number of the sending process on the source host
- Destination Port (bytes 2-3, 16 bits): port number of the receiving process on the destination host
- Sequence Number (bytes 4-7, 32 bits): indicates the order of bytes sent; required for reliable delivery
- Acknowledgment Number (bytes 8-11, 32 bits): confirms which bytes have been received
- Data Offset (byte 12, bits 0-3, 4 bits): shows where the header ends and the payload begins
- Reserved (byte 12, bits 4-6, 3 bits): reserved for future use; normally zero
- TCP Flags (NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN) (bytes 12-13, bits 7-15, 9 bits): TCP control bits managing the TCP state machine
- Window Size (bytes 14-15, 16 bits): flow control; how much data the receiver can accept
- Checksum (bytes 16-17, 16 bits): integrity check over header and payload
- Urgent Pointer (bytes 18-19, 16 bits): marks urgent data; rarely used today
- Options, if present (bytes 20-59, 0-40 bytes): optional parameters; extend the header beyond the minimum 20 bytes

💡 Knowing the Data Offset lets you inspect payload start locations. This helps monitor HTTP methods and headers for auditing, without modifying traffic.

Let's take a look at this filter:

tcp[((tcp[12:1] & 0xf0) >> 2):4]

This extracts the first four bytes of the payload based on the Data Offset, which is key for monitoring GET/POST requests safely.

Capturing HTTP GET Requests

The command below selects packets whose payload starts with 0x47455420, the hexadecimal code for 'GET ' (including the trailing space).
sudo tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'

Capturing HTTP POST Requests

The command below matches packets whose payload begins with 0x504f5354, the hex for 'POST'.

sudo tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354'

💡 Monitor GET/POST patterns to confirm normal traffic and detect misconfigurations. Avoid capturing other users’ sensitive data without authorization.

3. Using grep and egrep to extract passwords and cookies

You can use egrep to search for text using patterns. Unlike grep, egrep supports extended regular expressions, so you can match multiple patterns at once using symbols like | (OR) or () for grouping.

💡 Use egrep to quickly filter output for lines that match any of your patterns, e.g., certain HTTP methods, headers, or parameter names.

Monitoring Sensitive POST Data

sudo tcpdump -s 0 -A -n -l | egrep -i "POST /|pwd=|passwd=|pass=|password=|Host:"

Use this command only in controlled lab environments or on traffic you are authorized to monitor. Regularly verify that credentials are never transmitted over HTTP.

Observing HTTP Cookies

sudo tcpdump -nn -A -s0 -l | egrep -i 'Set-Cookie|Host:|Cookie:'

This is useful for:

Inspecting session IDs and cookies for secure transmission.
Ensuring the Secure and HttpOnly flags are used.

Use this to audit cookie security and session handling policies. Never capture cookies from unauthorized users.

Extracting HTTP User-Agents

In this one, we only match one pattern, so plain grep will do:

sudo tcpdump -nn -A -s1500 -l | grep "User-Agent:"

Helpful for:

Identifying which clients or automated tools interact with your service.
Spotting misconfigured scanners or unauthorized bots.

Use this for traffic profiling and anomaly detection. It helps enforce internal security policies.

Conclusion

tcpdump is a lightweight yet powerful monitoring tool for security engineers. It lets you monitor data securely, spot anomalies, and see network activity without disrupting operations.
Integrate tcpdump monitoring into SOC workflows or automated scripts to catch potential issues in real time. Always operate within authorized boundaries. ✍️Contributed by Hangga Aji Sayekti, a senior software engineer experimenting with pen-testing these days.
-
Compiling Multiple CSS Files into One
by: Geoff Graham Thu, 11 Sep 2025 15:16:34 +0000 Stu Robson is on a mission to “un-Sass” his CSS. I see articles like this pop up every year, and for good reason, as CSS has grown so many new legs in recent years. So much so that many of the core features that may have prompted you to reach for Sass in the past are now baked directly into CSS. In fact, we have Jeff Bridgforth on tap with a related article next week.

What I like about Stu’s stab at this is that it’s an ongoing journey rather than a wholesale switch. In fact, he’s out with a new post that pokes specifically at compiling multiple CSS files into a single file. Splitting and organizing styles into separate files is definitely the reason I continue to Sass-ify my work. I love being able to find exactly what I need in a specific file and update it without having to dig through a monolith of style rules.

But is that a real reason to keep using Sass? I’ve honestly never questioned it, perhaps due to a lizard brain that doesn’t care as long as something continues to work. Oh, I want partialized style files? Always done that with a Sass-y toolchain that hasn’t let me down yet. I know, not the most proactive path.

Stu outlines two ways to compile multiple CSS files when you aren’t relying on Sass for it:

Using PostCSS

Ah, that’s right, we can use PostCSS both with and without Sass. It’s easy to forget that PostCSS and Sass are compatible, but not dependent on one another.

postcss main.css -o output.css

Stu explains why this could be a nice way to toe-dip into un-Sass’ing your work:

Custom Script for Compilation

The ultimate thing would be eliminating the need for any dependencies.
Stu has a custom Node.js script for that:

const fs = require('fs');
const path = require('path');

// Function to read and compile CSS
function compileCSS(inputFile, outputFile) {
  const cssContent = fs.readFileSync(inputFile, 'utf-8');
  const imports = cssContent.match(/@import\s+['"]([^'"]+)['"]/g) || [];
  let compiledCSS = '';

  // Read and append each imported CSS file
  imports.forEach(importStatement => {
    const filePath = importStatement.match(/['"]([^'"]+)['"]/)[1];
    const fullPath = path.resolve(path.dirname(inputFile), filePath);
    compiledCSS += fs.readFileSync(fullPath, 'utf-8') + '\n';
  });

  // Write the compiled CSS to the output file
  fs.writeFileSync(outputFile, compiledCSS.trim());
  console.log(`Compiled CSS written to ${outputFile}`);
}

// Usage
const inputCSSFile = 'index.css'; // Your main CSS file
const outputCSSFile = 'output.css'; // Output file
compileCSS(inputCSSFile, outputCSSFile);

Not 100% free of dependencies, but geez, what a nice way to reduce the overhead and still combine files:

node compile-css.js

This approach is designed for a flat file directory. If you’re like me and prefer nested subfolders:

Very cool, thanks Stu! And check out the full post because there’s a lot of helpful context behind this, particularly with the custom script.

Compiling Multiple CSS Files into One originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
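A footnote for readers outside the Node ecosystem: the same @import-inlining idea ports to a few lines of Python. This is a rough sketch under the same flat-directory assumption as Stu's script (and, like his, it concatenates only the imported files, ignoring any other rules in the entry file); it is not a drop-in replacement for a real bundler:

```python
# Rough Python sketch of the same @import-inlining idea as the Node
# script above. Assumes a flat directory of CSS files; illustrative only.
import re
import tempfile
from pathlib import Path

def compile_css(input_file: str) -> str:
    """Inline each @import 'file.css' from the entry file into one string."""
    entry = Path(input_file)
    css = entry.read_text(encoding="utf-8")
    imports = re.findall(r"""@import\s+['"]([^'"]+)['"]""", css)
    # Like the Node version, only the imported files are concatenated;
    # any other rules in the entry file itself are ignored.
    return "\n".join(
        (entry.parent / rel).read_text(encoding="utf-8").strip()
        for rel in imports
    )

# Tiny self-contained demo in a temporary directory
demo = Path(tempfile.mkdtemp())
(demo / "reset.css").write_text("* { margin: 0; }", encoding="utf-8")
(demo / "index.css").write_text('@import "reset.css";', encoding="utf-8")
bundled = compile_css(str(demo / "index.css"))
```

Write `bundled` to wherever your build expects the combined stylesheet, e.g. `Path("output.css").write_text(bundled)`.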