Everything posted by Blogger
-
LHB Linux Digest #25.32: New Linux Networking Course, iotop, Chargebee Alternative and More
by: Abhishek Prakash Fri, 24 Oct 2025 18:11:02 +0530 I am happy to announce the release of our 14th course, Linux Networking at Scale. Okay, this is still a work in progress but I could not wait to reveal it to you 😀 It's a 4-module micro-course that takes you into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs in each module. From sysadmins to DevOps to homelab enthusiasts, there is something for everyone in this course. Two modules are available now and the other two will be published in the coming two weeks. Enjoy upgrading your Linux skills 💪 Linux Networking at Scale (In Progress): Master advanced networking on Linux — from policy routing to encrypted overlays. (Linux Handbook, Umair Khurshid)
-
Module 2: nftables for Complex Rulesets and Performance Optimization
by: Umair Khurshid Fri, 24 Oct 2025 17:02:43 +0530 This lesson is for paying subscribers only.
-
Module 1: Advanced iproute2: Policy Routing, Multiple Routing, and VRFs
by: Umair Khurshid Fri, 24 Oct 2025 16:58:03 +0530 This lesson is for paying subscribers only.
-
Linux Networking at Scale
by: Umair Khurshid Fri, 24 Oct 2025 16:57:28 +0530
🚀 Why this course?
Modern infrastructure demands more than basic networking commands. When systems span containers, data centers, and cloud edges, you need to scale, isolate, and secure your network intelligently, all using the native power of Linux. This micro-course takes you beyond the basics and into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs at every step.
🧑‍🎓 Who is this course for?
This course is designed to help sysadmins and DevOps engineers move from basic interface configuration to production-grade, resilient networking on Linux. Even aspiring network engineers may find some value in how Linux handles routing, policy decisions, and multi-network connectivity. Later modules will build upon this foundation to explore nftables for complex and optimized firewalls, VXLAN and WireGuard for secure overlays, and tc for traffic shaping and QoS. This is for:
Linux admins and DevOps engineers managing distributed systems
Network engineers expanding into Linux-based routing and firewalls
Homelab enthusiasts and advanced learners who want real mastery
📋 Prerequisite: Familiarity with Linux command-line tools (ip, ping, systemctl) and basic TCP/IP concepts.
🧩 What you'll learn in this micro-course
By the end of this course, you'll be able to:
Design multi-path and multi-tenant routing using iproute2 and VRFs
Build high-performance firewall setups with nftables
Create secure overlay networks using VXLAN and WireGuard
Implement traffic shaping and QoS policies to control real-world bandwidth usage
🥼 Every concept is paired with hands-on labs using network namespaces and containers, no expensive lab gear needed. You'll build, break, and fix your network, exactly like in production. Well, maybe not exactly like production, but pretty close to it. What are you waiting for? Time to take your Linux networking knowledge to the next level.
-
Monitoring I/O Usage and Network Traffic in Linux With iotop & ntopng
by: LHB Community Fri, 24 Oct 2025 11:21:17 +0530 You've already seen how to monitor CPU and memory usage with top and htop. Now, let's take a look at two other tools you can use for monitoring your system: iotop and ntopng. These tools monitor disk I/O (Input/Output) and network traffic, respectively. This tutorial will show you how to install, configure, and use both tools.

What are iotop and ntopng?
iotop: Similar in appearance to top and htop, iotop is a real-time disk I/O monitoring utility that displays the current activity (reads, writes, and waiting) of each process or thread on a Linux system. It can also show total accumulated usage per process/thread. It's useful for identifying processes that are generating heavy I/O traffic (reads/writes) or causing bottlenecks and high latency.
ntopng: As the name suggests, ntopng is the next-generation version of ntop, a tool for real-time network-traffic monitoring. It provides analytics, host statistics, protocol breakdowns, flow views, and geolocation, helping you spot abnormal usage. Unlike iotop (and the older ntop command), ntopng primarily serves its output through a web interface, so you interact with it in a browser. While this tutorial also covers basic console usage, do note that it's more limited on the CLI.
📋 ntopng integrates with systemd on most distros by default, and this tutorial does not cover systems using other init systems.

Installing iotop and ntopng
Both tools are available for installation on Ubuntu and most other distros in their standard repositories.

For Debian/Ubuntu and their derivatives:
sudo apt update && sudo apt install -y iotop ntopng

To install ntopng, RHEL, CentOS, Rocky, and AlmaLinux users will need to enable the EPEL repository first:
sudo dnf install -y epel-release
sudo dnf install -y iotop ntopng

For Arch-based distros, use:
sudo pacman -Syu --noconfirm iotop ntopng

For openSUSE, run:
sudo zypper refresh && sudo zypper install -y iotop ntopng

📋 On all systems, ntopng is installed as a systemd service, but it only runs by default on Debian/Ubuntu-based systems and on openSUSE/SUSE.

Enable ntopng if you'd like it to run constantly in the background:
sudo systemctl enable --now ntopng

If you'd like to disable this behavior and only use ntopng on demand, you can run:
sudo systemctl stop ntopng && sudo systemctl disable ntopng
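If you're not sure whether your distribution started the service automatically, a quick check with standard systemd commands (a minimal sketch, nothing beyond stock systemctl) will tell you:

# Prints "active" if the ntopng service is currently running, "inactive" otherwise
systemctl is-active ntopng
# Prints "enabled" if it is set to start automatically at boot
systemctl is-enabled ntopng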
Using iotop for monitoring disk I/O
Much like top and htop, iotop runs solely as a CLI tool. It requires root permissions, but not to worry: it is only used for monitoring and cannot access or control anything else on your system.

sudo iotop

You'll see something like this: At the top, the following real-time readouts are displayed (all in kilobytes):
Total DISK READ: cumulative amount of data read from disk since iotop started.
Total DISK WRITE: cumulative amount of data written to disk since start.
Current DISK READ: how much data is being read (per second).
Current DISK WRITE: how much data is being written (per second).

Below these outputs, there are several columns shown by default:
TID: Thread ID (unique identifier of the thread/process).
PRIO: I/O priority level (lower number = higher priority).
USER: The user owning the process/thread.
DISK READ: Data read from disk by this thread/process.
DISK WRITE: Data written to disk by this thread/process.
SWAPIN: Percentage of time spent swapping memory in/out.
IO> (I/O): Percentage of time the process waits on I/O operations.
COMMAND: The name or command of the running process/thread.

Useful options & key bindings
You can control what iotop shows by passing various flags when launching the command. Here are some of the commonly used options:
-o (or --only): Only show processes with current I/O (filters out idle processes).
-b (or --batch): Non-interactive mode (useful for logging).
-n <count>: Output the given number of iterations, then exit (runs in batch mode).
-d <delay>: Delay between iterations (in seconds). For instance, use -d 5 for a 5-second delay, or -d 0.5 for a half-second delay. The default is one second.

When run without -b/--batch, iotop starts in interactive mode, where you can use the following keys to change various options:
o: toggles the view between showing only processes currently doing I/O and all processes running on the system.
p: toggles between displaying only processes or all threads. Changes "TID" (Thread ID) to "PID" (Process ID).
a: toggles accumulated I/O vs. current I/O.
r: reverses the sort order (toggles ascending/descending).
left/right arrows: change the sort column (move between columns like DISK READ, COMMAND, etc.).
HOME: jump to sorting by TID (Thread ID).
END: jump to sorting by COMMAND (process name).
q: quits iotop.

💡 Excessive disk I/O from unexpected processes is usually a sign of possible misconfiguration, runaway logs, a mis-scheduled backup, or high database activity. If you're not sure about a process, it's best to investigate what purpose that process serves before taking action.

Practical example scenario where iotop helps you as a sysadmin
Let's say you're working on your system and you notice that it's suddenly slowing down, but can't find the cause via the normal means (high CPU or memory usage). You might suspect disk I/O is the bottleneck, but this will not show up in most system monitoring tools, so you run "sudo iotop" and sort by DISK WRITE. There, you notice a process is constantly writing hundreds of MB/s, blocking other processes. Using the "o" keybinding, you filter to only active writers. You may then throttle or stop that process in another tool (like htop), reschedule it to run at off-hours, or have it use another storage device.
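If you'd rather capture this kind of evidence over time than watch the screen, the batch flags described above can be combined into a simple logger. This is only a sketch; adjust the iteration count, delay, and log file name to your needs:

# Log only processes doing I/O, once per second, for 10 minutes, with timestamps:
# -b batch mode, -o active processes only, -t timestamp each line,
# -qqq suppress repeated headers, -n 600 iterations, -d 1 one-second delay
sudo iotop -b -o -t -qqq -n 600 -d 1 >> iotop-sample.log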
iotop has its limitations
While it is a useful monitoring tool, iotop cannot control processes on its own; it can only read activity, not act on it. Some other key things to note with this tool are:
On systems with many threads/processes doing I/O, sorting/filtering is key. It's recommended that you use "-o" when launching the command, or press "o" after you've started it.
iotop shows process-level I/O, but does not always give full hardware device stats (for that, tools like iostat or blktrace may be needed).
Avoid running iotop on production systems for long intervals without caution, since iotop itself adds overhead when many processes are updating at the same time.

Exploring ntopng to get a graphical view of network traffic
Unlike iotop and its older variant, ntop (which is no longer packaged on some distros), ntopng is primarily accessed via a web-based GUI on the default port 3000. For example: http://your-server-ip-address:3000, or, if you're running it locally, http://localhost:3000. From the GUI, you can view hosts, traffic flows, protocols, top talkers, geolocation, alerts, etc. To keep things simple, we'll cover basic usage and features.

Changing the default port
Changing the port is a good idea if you already use port 3000 for other local web services. To change ntopng's default web port, edit its configuration file and restart the service.

sudo nano /etc/ntopng/ntopng.conf

Then, change the line defining the web port. If it doesn't exist, add it:

-w=3001

You can use any unused port above 1024. Next, you'll need to restart ntopng:

sudo systemctl restart ntopng

You should now see ntopng listening on port 3001.
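If you want to confirm which port the web interface is actually bound to after the restart, a quick look at listening sockets will show it. This is a minimal check using the ss tool from the iproute2 package:

# List listening TCP sockets with owning processes and filter for ntopng
sudo ss -ltnp | grep ntopng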
Dashboard overview
💡 When you first load ntopng in your browser, you'll need to log in. The default username and password are both "admin". However, you'll be prompted to change the password on the first login.
Once you're logged in, you'll land on the main dashboard. This dashboard provides a real-time visual overview of network activity and is usually the first thing you see. By default, the dashboard includes:
Traffic summary (top left): shows live inbound and outbound traffic rates, number of active hosts, flows, and alerts. Clicking on any of these will take you to the relevant section.
Search bar (top center): lets you quickly find hosts, IPs, or ports.
Top Flow Talkers (main panel): a large visual block showing which hosts are generating or receiving the most traffic (e.g., your machine vs. external IPs).
Sidebar (left): navigation menu with access to:
- Dashboard: current view.
- Alerts: security or threshold-based notifications.
- Flows/Hosts/Ports/Applications: detailed breakdowns of network activity.
- Interfaces: network interfaces being monitored.
- Settings / System / Developer: configuration and data export options.
Refresh indicator (bottom): shows the live update frequency (default: 5 seconds).
Footer: version information, uptime, and system clock.
You can check each panel in the sidebar and dashboard individually to see what each displays. For this tutorial, we won't go into every detail, as there are too many to cover here.

Using ntopng from the console
Although ntopng is designed to be primarily web-based, you can still run it directly in the console for quick checks or lightweight monitoring. This can be useful on headless systems over SSH, or when you just want a quick snapshot of network activity without loading the web UI. First, stop the ntopng systemd service:

sudo systemctl stop ntopng

This is necessary to avoid any conflicts between the running service and your access via the CLI. Now you can launch ntopng directly:

sudo ntopng --disable-ui --verbose

This command will listen on all network interfaces that ntopng can find. If you'd like to restrict it to a certain interface, you can use the -i flag. For example, to listen only on your Wi-Fi interface, first find its name (it usually begins with "wl") using either of the following commands:

ip link | grep wl

or

nmcli device status | grep wl

Then run ntopng, pointed at your Wi-Fi interface:

sudo ntopng --disable-ui --verbose -i wlp49s0

Replace "wlp49s0" with your device name, of course.

Basic logging with the ntopng CLI
If you'd like to capture a basic log with ntopng from the console, you can run:

sudo ntopng --disable-ui -i wlp49s0 --dump-flows flows.log

Again, just remember to replace wlp49s0 with your device name. Note that the log will be saved to whichever folder is your current working directory. You can change the location of the log file by providing a path, for example:

sudo ntopng --disable-ui -i wlp49s0 --dump-flows path/to/save/to/flows.log

Practical example scenario where ntopng helps
Say you suspect unusual network activity on your system. You log in to the ntopng dashboard and notice that one host on your network is sending a large amount of data to an external IP address over port 443 (HTTPS). Clicking on that host reveals its flows, showing that a specific application is continuously communicating with a remote server. Using this insight, you can then open another monitoring tool, such as top or htop, to identify and stop the offending process before investigating further. Even for less experienced users, ntopng is a great way to understand a system's network usage at a glance. You can run it on a production server if resources allow, or dedicate a small monitoring host to watch other devices on your network (out of scope here). By combining real-time views with short-term history (e.g., spotting periodic traffic spikes), you can build a picture of network health. Used alongside a firewall and tools like fail2ban, ntopng helps surface anomalies quickly so you can investigate and respond.

ntopng has its limitations too
While ntopng is powerful, capturing all network traffic at very high throughput can require serious resources (NICs, CPU, memory). If you're using it on a high-traffic network, it's probably best to use a separate server for monitoring. Here are some other important things to note:
If you are monitoring remote networks or VLANs, you may need an appropriate network setup (mirror ports, network taps). However, these are outside the scope of this tutorial.
Out of the box, you only get a limited data-retention history. For long-term trends, you'll need to configure external storage or a database.
Most traffic (e.g., HTTPS) is encrypted, so ntopng can only show metadata (hosts, ports, volumes, SNI (Server Name Indication) where available). In such cases, it cannot show the actual payloads.

Conclusion
iotop and ntopng are two powerful free and open-source tools that can help you monitor, analyze, and troubleshoot critical subsystems on your Linux machine. By incorporating these into your arsenal, you'll get a better understanding of your system's baseline for normal operations and be better equipped to spot anomalies or bottlenecks quickly.
-
414: Apollo (and the Almighty Cache)
by: Chris Coyier Thu, 23 Oct 2025 16:15:59 +0000 Rachel and Chris jump on the show to talk about a bit of client-side technology we use: Apollo. We use it because we have a GraphQL API and Apollo helps us write queries and mutations that go through that API. It slots in quite nicely with our React front-end, providing hooks we use to do the data work we need to do when we need to do it. Plus we get typed data all the way through. Chris gets to learn that the Apollo Cache isn't some bonus feature that just helps make things faster, but an inevitable, deeply integrated part of how this whole thing works.

Time Jumps
00:06 How do you get data into the front end of your application?
02:57 Do we use Apollo Server?
10:17 Why is GraphQL not as cool anymore?
18:23 How does the Apollo Client cache work?
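For listeners who haven't used Apollo Client, the hook-based pattern discussed in the episode looks roughly like this. It's a hedged, minimal sketch: the GetPosts query, the posts field, and the component are made-up placeholders, not CodePen's actual schema or code:

import React from "react";
import { gql, useQuery } from "@apollo/client";

// Hypothetical query; the schema and field names are placeholders.
const GET_POSTS = gql`
  query GetPosts {
    posts {
      id
      title
    }
  }
`;

function PostList() {
  // Apollo sends the query, stores the normalized result in its cache,
  // and re-renders this component as loading/error/data change.
  const { loading, error, data } = useQuery(GET_POSTS);

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data.posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}

Because the cache is normalized by object id, a later query or mutation that returns one of those same posts updates every component reading it, which is the "not a bonus feature" point the episode gets at.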
-
FOSS Weekly #25.43: NebiOS Linux, GNOME Enhancements, LMDE 7, COSMIC Beta Review and More Linux Stuff
by: Abhishek Prakash Thu, 23 Oct 2025 04:29:08 GMT Linux Mint Debian Edition (LMDE) version 7 is available now. For people who like Debian more than Ubuntu and Linux Mint's Cinnamon more than anything, this is the perfect choice. LMDE 7 “Gigi” Released: Linux Mint’s Debian-Based Alternative Gets Major Upgrade. A stable Debian base meets a polished Linux Mint desktop experience. (It's FOSS News, Sourav Rudra) Sometimes I wonder if LMDE should be the default choice for Linux Mint. Am I the only one who thinks this?

💬 Let's see what you get in this edition:
Me pitching Proton Mail against Gmail.
A new LMDE release based on Debian 13.
DIY Kindle alternatives.
And other Linux news, tips, and, of course, memes!

This edition of FOSS Weekly is supported by PrepperDisk. PrepperDisk gives you a fully offline, private copy of the world's most useful open-source knowledge—so your access doesn't depend on big platforms, networks, or gatekeepers. Built on Raspberry Pi, it bundles projects like Wikipedia, maps, and survival manuals with tools we've built and open-sourced ourselves. It's a way to safeguard information freedom: your own secure, personal archive of open knowledge, ready anywhere—even without the internet. Explore PrepperDisk

📰 Linux and Open Source News
NordVPN has made the GUI code for its Linux app open source.
Valkey 9.0 release adds multi-database clusters and now supports 1 billion requests per second.
The beloved open source game SuperTuxKart gets many refinements in the latest release.
ONLYOFFICE Docs 9.1 is here with PDF redaction and editor upgrades.
LMDE 7 "Gigi" has arrived with a Debian 13 base and many improvements.

🧠 What We're Thinking About
Proton Mail is a better choice than Gmail. That's what I think. And I discovered a ProtonMail feature that works better than Gmail. That One (of the several) Feature ProtonMail Does Better Than Gmail. The newsletters can be a mess to manage. ProtonMail gives you better features than Gmail to manage your newsletter subscriptions. (It's FOSS News, Abhishek)

GNOME all the way
I thought of sharing some neat tips and tweaks that relate to various components of the GNOME desktop environment. Basically, they let you discover some lesser-known features and customizations. Perhaps you'll discover your next favorite trick here.
Enhance the functionality of the Nautilus file manager with these tips.
Learn to get more out of the search feature in the file manager.
Why restrict yourself to the file manager? Explore the full potential of the activity search in GNOME.
Let's take it further and customize the top panel in GNOME to get applet indicators and more such features.
For a long time, I relied on GNOME Tweaks until I discovered Just Perfection. GNOME customization was never the same.
Here are a few tips to save time by combining the terminal and the file manager on your GNOME system.

🧮 Linux Tips, Tutorials, and Learnings
Learn the difference between PipeWire and PulseAudio.
Explore comic book readers on Linux for .cbr files.
Unravel the mystery of loop devices in Linux.

👷 AI, Homelab and Hardware Corner
For AI enthusiasts, here is a way to go from zero keys to full AI integration in one step. The Puter.js library allows integrating mainstream AI in your web projects without needing their API keys.
I Used This Open Source Library to Integrate OpenAI, Claude, Gemini to Websites Without API Keys. This underrated open source JavaScript library lets you integrate popular commercial LLMs without needing their paid API. You can test it out within minutes on your Linux system with this tutorial. (It's FOSS, Bhuwan Mishra)

Also, if you are fed up with Amazon's Kindle, then you can build your own eBook reader. Looking for Open Source Kindle Alternatives? Build it Yourself. There are no easy options. You have to take the matter into your own hands, quite literally. (It's FOSS, Pulkit Chandak)

The FSF is going all in with the Librephone project.

🛍️ Deal Alert: Raspberry Pi eBook Bundle
Learn the ins and outs of coding your favorite retro games and build one of your own with Code the Classics Volume II. Give your tech-savvy kids a head start in computer coding with Unplugged Tots. The 16-book library also includes just-released editions of The Official Raspberry Pi Handbook 2026, Book of Making 2026, and much more! Whether you're just getting into coding or want to deepen your knowledge about something more specific, this pay-what-you-want bundle has everything you need. And you support Raspberry Pi Foundation North America with your purchase! Humble Tech Book Bundle: All Things Raspberry Pi by Raspberry Pi Press. Learn the ins and outs of computer coding with this library from Raspberry Pi! Pay what you want and support the charity of your choice! (Humble Bundle) Explore the Humble offer here.

✨ Project Highlights
NebiOS is a beautiful approach to how an Ubuntu-based distro with a custom desktop environment can be built. NebiOS is an Ubuntu-based Distro With a Brand New DE Written for Wayland from Ground Up. Exploring a new Ubuntu-based distro. By the way, it's been some time since we had a new distro based on Ubuntu. (It's FOSS News, Sourav Rudra)
COSMIC is shaping up well, so we tested it to see how it performs. I Tested Pop!_OS 24.04 LTS Beta: A Few Hits and Misses But Mostly on the Right Track. COSMIC has come a long way, but is it enough? (It's FOSS News, Sourav Rudra)

📽️ Videos I Am Creating for You
The terminal makeover video is nearly at 100K views. With so many people enhancing the looks of their terminal, I thought you might want to give it a try, too. Subscribe to It's FOSS YouTube Channel

Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides to help people use Linux on their personal computers. We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader-supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials. If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription. Join It's FOSS Plus

💡 Quick Handy Tip
Too much GNOME in this newsletter? Let's switch to KDE. If you are using desktop widgets in KDE Plasma and don't know how to add the system monitor sensor to them, then do this. Open the System Monitor app and right-click on any telemetry you want to add. Then select "Add chart as Desktop Widget". That's it. The selected chart will be added to your desktop. You can change its appearance by going to Edit mode later.

🎋 Fun in the FOSSverse
This crossword-style challenge mixes up popular Linux text editors, from timeless command-line classics to sleek modern tools.
Sharpen your brain, embrace your inner geek, and see how many you can decode! The Scrambled Linux Editors Crossword. Think you know your Linux text editors? From Vim to Nano, these jumbled names will challenge even seasoned coders. Try to unscramble them and see how many you can get right! (It's FOSS, Abhishek Prakash)

🤣 Meme of the Week: Probably not true anymore but still funny.

🗓️ Tech Trivia: On October 20, 2004, Ubuntu 4.10 "Warty Warthog" was released! Backed by Mark Shuttleworth's Canonical, Ubuntu aimed to make Linux simple and human-friendly; its name loosely translates to "humanity." Two decades later, it's dominating the Linux desktop space.

🧑‍🤝‍🧑 From the Community: Long-time FOSSer Cliff is looking for help with a Realtek Wi-Fi issue on his MX Linux system. Can you help? MX Linux Realtek Wi-Fi Issues: I have MX Linux KDE, most recent update. It runs on kernel 6.1.0-40. I am using a mini PC with a Realtek 8852BE network card. I had always had wired internet for that machine, but now I have to be happy with Wi-Fi. The problem, unlike any of my other OSs, is that it sees each Wi-Fi channel as having a 0 signal strength and fails to activate wlan0. I went around for hours with Claude AI to solve it and it was unable to resolve the issue. It finally suggested just going to MX Tools, Package Install… (It's FOSS Community, cliffsloane)

❤️ With love
Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
An Introduction to JavaScript Expressions
by: Mat Marquis Wed, 22 Oct 2025 19:08:23 +0000 Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript expressions. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course.

Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at JavaScript for Everyone to teach you JavaScript. What we have here is a lesson from the JavaScript for Everyone module on lexical grammar and analysis — the process of parsing the characters that make up a script file and converting it into a sequence of discrete “input elements” (lexical tokens, line ending characters, comments, and whitespace), and how the JavaScript engine interprets those input elements.

An expression is code that, when evaluated, resolves to a value. 2 + 2 is a timeless example.

2 + 2
// result: 4

As mental models go, you could do worse than “anywhere in a script that a value is expected you can use an expression, no matter how simple or complex that expression may be:”

function numberChecker( checkedNumber ) {
  if( typeof checkedNumber === "number" ) {
    console.log( "Yep, that's a number." );
  }
}

numberChecker( 3 );
// result: Yep, that's a number.

numberChecker( 10 + 20 );
// result: Yep, that's a number.

numberChecker( Math.floor( Math.random() * 20 ) / Math.floor( Math.random() * 10 ) );
// result: Yep, that's a number.

Granted, JavaScript doesn’t tend to leave much room for absolute statements. The exceptions are rare, but it isn’t the case absolutely, positively, one hundred percent of the time:

console.log( -2**1 );
// result: Uncaught SyntaxError: Unary operator used immediately before exponentiation expression. Parenthesis must be used to disambiguate operator precedence

Still, I’m willing to throw myself upon the sword of “um, actually” on this one. That way of looking at the relationship between expressions and their resulting values is heart-and-soul-of-the-language stuff, and it’ll get you far.

Primary Expressions

There’s sort of a plot twist here: while the above example reads to our human eyes as an example of a number, then an expression, then a complex expression, it turns out to be expressions all the way down. 3 is itself an expression — a primary expression. In the same way the first rule of Tautology Club is Tautology Club’s first rule, the number literal 3 is itself an expression that resolves to a very predictable value (psst, it’s three).

console.log( 3 );
// result: 3

Alright, so maybe that one didn’t necessarily need the illustrative snippet of code, but the point is: the additive expression 2 + 2 is, in fact, the primary expression 2 plus the primary expression 2. Granted, the “it is what it is” nature of a primary expression is such that you won’t have much (any?) occasion to point at your display and declare “that is a primary expression,” but it does afford a little insight into how JavaScript “thinks” about values: a variable is also a primary expression, and you can mentally substitute an expression for the value it results in — in this case, the value that variable references.
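To make that substitution concrete, here is a small sketch (the variable name is mine, not from the course): once theSum has been initialized, using the identifier anywhere a value is expected behaves just as if you had written the value 4 in its place.

const theSum = 2 + 2; // the additive expression 2 + 2 resolves to the value 4

console.log( theSum * 3 );
// result: 12 (the identifier theSum stands in for the value it references)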
That’s not the only purpose of an expression (which we’ll get into in a bit) but it’s a useful shorthand for understanding expressions at their most basic level.

There’s a specific kind of primary expression that you’ll end up using a lot: the grouping operator. You may remember it from the math classes I just barely passed in high school:

console.log( 2 + 2 * 3 );
// result: 8

console.log( ( 2 + 2 ) * 3 );
// result: 12

The grouping operator (singular, I know, it kills me too) is a matched pair of parentheses used to evaluate a portion of an expression as a single unit. You can use it to override the mathematical order of operations, as seen above, but that’s not likely to be your most common use case—more often than not you’ll use grouping operators to more finely control conditional logic and improve readability:

const minValue = 0;
const maxValue = 100;
const theValue = 50;

if( ( theValue > minValue ) && ( theValue < maxValue ) ) {
  // If ( the value of `theValue` is greater than that of `minValue` ) AND ( less than `maxValue` ):
  console.log( "Within range." );
}
// result: Within range.

Personally, I make a point of almost never excusing my dear Aunt Sally. Even when I’m working with math specifically, I frequently use parentheses just for the sake of being able to scan things quickly:

console.log( 2 + ( 2 * 3 ) );
// result: 8

This use is relatively rare, but the grouping operator can also be used to remove ambiguity in situations where you might need to specify that a given syntax is intended to be interpreted as an expression. One of them is, well, right there in your developer console. The syntax used to initialize an object — a matched pair of curly braces — is the same as the syntax used to group statements into a block statement. Within the global scope, a pair of curly braces will be interpreted as a block statement containing a syntax that makes no sense given that context, not an object literal. That’s why punching an object literal into your developer console will result in an error:

{ "theValue" : true }
// result: Uncaught SyntaxError: unexpected token: ':'

It’s very unlikely you’ll ever run into this specific issue in your day-to-day JavaScript work, seeing as there’s usually a clear division between contexts where an expression or a statement are expected:

{
  const theObject = { "theValue" : true };
}

You won’t often be creating an object literal without intending to do something with it, which means it will always be in the context where an expression is expected. It is the reason you’ll see standalone object literals wrapped in a grouping operator throughout this course — a syntax that explicitly says “expect an expression here”:

({ "value" : true });

However, that’s not to say you’ll never need a grouping operator for disambiguation purposes. Again, not to get ahead of ourselves, but an Immediately Invoked Function Expression (IIFE), an anonymous function expression used to manage scope, relies on a grouping operator to ensure the function keyword is treated as a function expression rather than a declaration:

(function(){
  // ...
})();

Expressions With Side Effects

Expressions always give us back a value, in no uncertain terms. There are also expressions with side effects — expressions that result in a value and do something. For example, assigning a value to an identifier is an assignment expression.
If you paste this snippet into your developer console, you’ll notice it prints 3:

theIdentifier = 3;
// result: 3

The resulting value of the expression theIdentifier = 3 is the primary expression 3; classic expression stuff. That’s not what’s useful about this expression, though — the useful part is that this expression makes JavaScript aware of theIdentifier and its value (in a way we probably shouldn’t, but that’s a topic for another lesson). That variable binding is an expression and it results in a value, but that’s not really why we’re using it.

Likewise, a function call is an expression; it gets evaluated and results in a value:

function theFunction() {
  return 3;
};

console.log( theFunction() + theFunction() );
// result: 6

We’ll get into it more once we’re in the weeds on functions themselves, but the result of calling a function that returns an expression is — you guessed it — functionally identical to working with the value that results from that expression. So far as JavaScript is concerned, a call to theFunction effectively is the simple expression 3, with the side effect of executing any code contained within the function body:

function theFunction() {
  console.log( "Called." );
  return 3;
};

console.log( theFunction() + theFunction() );
/* Result:
Called.
Called.
6
*/

Here theFunction is evaluated twice, each time calling console.log then resulting in the simple expression 3. Those resulting values are added together, and the result of that arithmetic expression is logged as 6. Granted, a function call may not always result in an explicit value. I haven’t been including them in our interactive snippets here, but that’s the reason you’ll see two things in the output when you call console.log in your developer console: the logged string and undefined. JavaScript’s built-in console.log method doesn’t return a value. When the function is called it performs its work — the logging itself. Then, because it doesn’t have a meaningful value to return, it results in undefined. There’s nothing to do with that value, but your developer console informs you of the result of that evaluation before discarding it.

Comma Operator

Speaking of throwing results away, this brings us to a uniquely weird syntax: the comma operator. A comma operator evaluates its left operand, discards the resulting value, then evaluates and results in the value of the right operand. Based only on what you’ve learned so far in this lesson, if your first reaction is “I don’t know why I’d want an expression to do that,” odds are you’re reading it right. Let’s look at it in the context of an arithmetic expression:

console.log( ( 1, 5 + 20 ) );
// result: 25

The primary expression 1 is evaluated and the resulting value is discarded, then the additive expression 5 + 20 is evaluated, and that’s the resulting value. Five plus twenty, with a few extra characters thrown in for style points and a 1 cast into the void, perhaps intended to serve as a threat to the other numbers. And hey, notice the extra pair of parentheses there? Another example of a grouping operator used for disambiguation purposes. Without it, that comma would be interpreted as separating arguments to the console.log method — 1 and 5 + 20 — both of which would be logged to the console:

console.log( 1, 5 + 20 );
// result: 1 25

Now, including a value in an expression in a way where it could never be used for anything would be a pretty wild choice, granted.
That’s why I bring up the comma operator in the context of expressions with side effects: both sides of the , operator are evaluated, even if the immediately resulting value is discarded. Take a look at this validateResult function, which does something fairly common, mechanically speaking; depending on the value passed to it as an argument, it executes one of two functions, and ultimately returns one of two values. For the sake of simplicity, we’re just checking to see if the value being evaluated is strictly true — if so, call the whenValid function and return the string value "Nice!". If not, call the whenInvalid function and return the string "Sorry, no good.":

function validateResult( theValue ) {
  function whenValid() {
    console.log( "Valid result." );
  };
  function whenInvalid() {
    console.warn( "Invalid result." );
  };

  if( theValue === true ) {
    whenValid();
    return "Nice!";
  } else {
    whenInvalid();
    return "Sorry, no good.";
  }
};

const resultMessage = validateResult( true );
// result: Valid result.

console.log( resultMessage );
// result: "Nice!"

Nothing wrong with this. The whenValid / whenInvalid functions are called when the validateResult function is called, and the resultMessage constant is initialized with the returned string value. We’re touching on a lot of future lessons here already, so don’t sweat the details too much. Some room for optimizations, of course — there almost always is. I’m not a fan of having multiple instances of return, which in a sufficiently large and potentially-tangled codebase can lead to increased “wait, where is that coming from” frustrations. Let’s sort that out first:

function validateResult( theValue ) {
  function whenValid() {
    console.log( "Valid result." );
  };
  function whenInvalid() {
    console.warn( "Invalid result." );
  };

  if( theValue === true ) {
    whenValid();
  } else {
    whenInvalid();
  }

  return theValue === true ? "Nice!" : "Sorry, no good.";
};

const resultMessage = validateResult( true );
// result: Valid result.

resultMessage;
// result: "Nice!"

That’s a little better, but we’re still repeating ourselves with two separate checks for theValue. If our conditional logic were to be changed someday, it wouldn’t be ideal that we have to do it in two places. The first — the if/else — exists only to call one function or the other. We now know function calls to be expressions, and what we want from those expressions are their side effects, not their resulting values (which, absent an explicit return value, would just be undefined anyway). Because we need them evaluated and don’t care if their resulting values are discarded, we can use comma operators (and grouping operators) to sit them alongside the two simple expressions — the strings that make up the result messaging — that we do want values from:

function validateResult( theValue ) {
  function whenValid() {
    console.log( "Valid result." );
  };
  function whenInvalid() {
    console.warn( "Invalid result." );
  };

  return theValue === true ? ( whenValid(), "Nice!" ) : ( whenInvalid(), "Sorry, no good." );
};

const resultMessage = validateResult( true );
// result: Valid result.

resultMessage;
// result: "Nice!"

Lean and mean, thanks to clever use of comma operators. Granted, there’s a case to be made that this is a little too clever, in that it could make this code a little more difficult to understand at a glance for anyone who might have to maintain this code after you (or, if you have a memory like mine, for your near-future self).
The siren song of “I could do it with less characters” has driven more than one JavaScript developer toward the rocks of, uh, slightly more difficult maintainability. I’m in no position to talk, though. I chewed through my ropes years ago.

Between this lesson on expressions and the lesson on statements that follows it, well, that would be the whole ballgame — the entirety of JavaScript summed up, in a manner of speaking — were it not for a not-so-secret third thing. Did you know that most declarations are neither statement nor expression, despite seeming very much like statements? Variable declarations performed with let or const, function declarations, class declarations — none of these are statements:

if( true ) let theVariable;
// Result: Uncaught SyntaxError: lexical declarations can't appear in single-statement context

if is a statement that expects a statement, but what it encounters here is one of the non-statement declarations, resulting in a syntax error. Granted, you might never run into this specific example at all if you — like me — are the sort to always follow an if with a block statement, even if you’re only expecting a single statement. I did say “one of the non-statement declarations,” though. There is, in fact, a single exception to this rule — a variable declaration using var is a statement:

if( true ) var theVariable;

That’s just a hint at the kind of weirdness you’ll find buried deep in the JavaScript machinery. 5 is an expression, sure. 0.1 * 0.1 is 0.010000000000000002, yes, absolutely. Numeric values used to access elements in an array are implicitly coerced to strings? Well, sure — they’re objects, and their indexes are their keys, and keys are strings (or Symbols). What happens if you use call() to give this a string literal value? There’s only one way to find out — two ways to find out, if you factor in strict mode.

That’s where JavaScript for Everyone is designed to take you: inside JavaScript’s head. My goal is to teach you the deep magic — the how and the why of JavaScript. If you’re new to the language, you’ll walk away from this course with a foundational understanding of the language worth hundreds of hours of trial-and-error. If you’re a junior JavaScript developer, you’ll finish this course with a depth of knowledge to rival any senior. I hope to see you there. JavaScript for Everyone is now available and the launch price runs until midnight, October 28. Save £60 off the full price of £249 (~$289) and get it for £189 (~$220)! Get the Course

An Introduction to JavaScript Expressions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Arduino Alternative Microcontroller Boards for Your DIY Projects in the Post-Qualcomm Era
by: Pulkit Chandak Wed, 22 Oct 2025 07:13:10 GMT Arduino has been the cornerstone of embedded electronics projects for a while now. Be it DIY remote-controlled vehicles, binary clocks, power laces, or, as is relevant to the month of publishing, flamethrowing Jack-O'-Lanterns! The versatility and affordability of the boards have been unparalleled. But now that Qualcomm has acquired Arduino, projecting more AI-forward features with more powerful hardware, there might be some changes around the corner. Perhaps I am reading too much between the lines, but not all of us have favorable views about Big Tech and corporate greed. We thought it might be a good time to look at some alternatives. Since Arduino has a lot of different models with different features, we will not draw a comparison between Arduino and other boards, but just highlight the unique features these alternative boards have.

1. Raspberry Pi Pico
Raspberry Pi needs no introduction, it being the one company besides Arduino that has always been the favorite of tinkerers. While Raspberry Pi is known for its full-fledged single-board computers, the Pico is a development board for programming dedicated tasks like the Arduino boards. There are two releases of the Pico at the time of writing this article, 1 and 2, with the major upgrade being the processor. Certain suffixes denote model features: "W" denotes wireless capabilities, and "H" denotes pre-soldered headers. Here, I describe the cutting-edge model, the Pico 2 W with headers.
Processors: Dual Cortex-M33 (ARM) up to 150 MHz and optional Hazard3 processors (RISC-V)
Memory: 520 KB on-chip SRAM
Input-Output: 26 GPIO pins
Connectivity: 2.4 GHz Wi-Fi and Bluetooth 5.2 on the W model
Power: Micro-USB
Programming Software or Language: MicroPython or C/C++
Price: $8
Extra Features: Temperature sensor
The greatest advantage of Raspberry Pi is the huge user base, second probably only to Arduino. Besides that, the GPIO pins make projects easier to construct, and the optional RISC-V processors give it an open-source experimental edge that many long for.

2. ESP32
ESP32 is a SoC that has soared in popularity in the past decade, and for all the right reasons. It comes in very cheap, screaming "hobbyist", and is committed to good documentation and an open SDK (software development kit). It came as a successor to the already very successful and still relevant ESP8266 SoC. The categorization is a little tricky to get a hang of because of the sheer number of boards available. The original ESP32 SoC boards come with dual-core Xtensa LX6 processors that go up to 240 MHz, and they come with Wi-Fi + Bluetooth Classic/LE built in. The ESP32-S series is a little enhanced, with more GPIO pins for connectivity. The ESP32-C series transitioned to RISC-V chips, and finally the ESP32-H series is designed for ultra-low-power IoT applications. If the board name has WROOM, it belongs to the original basic family, while WROVER indicates modules with PSRAM and more memory in general. You can find all the "DevKits" here.
Getting past the whole naming culture, I will directly describe one board here that might fulfill your Arduino-alternative needs, the ESP32-DevKitC-VE:
Processors: Dual-core 32-bit LX6 up to 240 MHz
Memory: 8 MB
Input-Output: 34 programmable GPIOs
Connectivity: 802.11 Wi-Fi, Bluetooth 4.2 with BLE
Power: Micro-USB
Programming Software or Language: Arduino IDE, PlatformIO IDE (VS Code), Lua, MicroPython, Espressif IDF (IoT Development Framework), JavaScript
Price: $11
Extra Features: Breadboard friendly, rich set of peripheral interfaces
I encourage you to do your own research based on your needs and choose a board accordingly, as the support and hardware are rock solid, but the sheer number of options can be a little tricky to figure out.

3. Adafruit Feather
Adafruit Feather isn't a single board, but a category of hardware boards that come with all sorts of different features and processors. The idea is getting a "Feather", which is the board, and then getting "Wings", which are hats/shields, basically extending the features and abilities of the board, and there is a huge number of them. This extensible versatility is the most attractive feature of the boards, but also the reason why I cannot describe one board that best suits the needs of any user. I can, however, tell you what options they provide.
All Feathers:
Can be programmed with the Arduino IDE
Come with Micro-USB or USB-C
Are 0.9" wide and breadboard-compatible
Can be run with either USB power or a LiPo battery
Processors:
The boards are available with several different processors, such as:
Atmel ATmega32u4 and ATmega328P - 8-bit AVR
Atmel ATSAMD21 - 32-bit ARM Cortex M0+
Atmel ATSAMD51 - 32-bit ARM Cortex M4
Broadcom/Cypress WICED - STM32 with WiFi
Espressif ESP8266 and ESP32 - Tensilica with WiFi/BT
Freescale MK20 - ARM Cortex M4, as the Teensy 3.2 Feather Adapter
Nordic nRF52832 and nRF52840 - ARM Cortex & Bluetooth LE
Packet radio modules featuring SemTech SX1231
LoRa radio modules featuring SemTech SX127x
A good model to look into for an Arduino alternative is the Adafruit ESP32 Feather V2.
Connectivity and Wings:
The Feathers fall into different categories based on their connectivity. The categories include:
Basic Feathers
Wi-Fi Feathers
Bluetooth Feathers
Cellular Feathers
LoRa and Radio Feathers
This doesn't mean that these connectivity features are mutually exclusive; there are several boards which have more than one of these connectivity options. The Wings add all the functionality to the boards, and the number of options is immense. I cannot possibly list them here.

4. Seeeduino
As Arduino alternatives go, this board seems to be one of the most worthy of holding that title. It looks like an Arduino, works with the software that Arduino is compatible with, and even supports the shields made for the UNO R3. Here is the description of the most recent model at the time of writing, the Seeeduino V4.3:
Processor: ATmega328
Memory: 2 KB RAM, 1 KB EEPROM and 32 KB Flash Memory
Input-Output: 14 digital IO pins, 6 analog inputs
Power: Micro-USB, DC input jack
Programming Software or Language: Arduino IDE
Price: $7.60
If you need a no-brainer Arduino alternative that delivers what it does with stability and efficiency, this should be your go-to choice.

5. STM32 Nucleo Boards
The STM32 line offers a very, very wide range of development boards, among which the Nucleo boards seem like the best alternatives for Arduino. They come in three series: Nucleo-32, Nucleo-64 and Nucleo-144, where the number denotes the pin count of the microcontroller package on the board.
Every series has a number of models within it, again. Here, I will describe the one most appropriate as an Arduino alternative, the STM32 Nucleo-F103RB:
Microcontroller: STM32
Input-Output: 64 IO pins; Arduino shield-compatible
Connectivity: Arduino Uno V3 expansion connector
Power: Micro-USB
Programming Software or Language: IAR Embedded Workbench, MDK-ARM, STM32CubeIDE, etc.
Price: $10.81
Extra Features: 1 programmable LED, 1 programmable button, 1 reset button
Optional Features: Second user LED, cryptography, USB-C, etc.
STM32 provides great hardware abstraction, ease of development, GUI-based initialization, good resources, and more. If that is the kind of thing you need, then this should probably be your choice.

6. micro:bit
micro:bit boards are designed mostly for younger students and kids to learn programming, but offer some really interesting features that can help anyone make a project without buying many extra parts. In fact, this is one of the ideal tools for introducing STEM education to young children. Here are the details of the most recent version at the time of writing, the micro:bit v2:
Processor: Nordic Semiconductor nRF52833
Memory: 128 KB RAM, 512 KB Flash Memory
Input-Output: 25 pins (4 dedicated GPIO, PWM, I2C, SPI)
Connectivity: Bluetooth 5.0, radio
Power: Micro-USB
Programming Software or Language: MakeCode (block-based) and MicroPython, among others
Price: $17.95 (other more expensive bundles with extra hardware are also available)
The extra built-in features of the board include:
2 built-in buttons that can be programmed in different ways
Touch sensor on the logo, temperature sensor
Built-in speaker and microphone
25 programmable LEDs
Accelerometer and compass
Reset and power button
If you want a plethora of built-in hardware features capable of supporting almost anything you might build, or if you want a development board with extensive documentation aimed at younger audiences, this should be your go-to choice. The company doesn't only make great boards, but also supports inclusive technological education for children of all abilities, and sustainability, which is admirable.

7. Particle Photon 2
The Particle Photon 2 is a board designed with ease of prototyping in mind. It enables IoT projects, giving broad customization options for both hardware and software. The Photon is also Feather-compatible (from Adafruit), giving you the ability to attach Wings to extend its features.
Processor: ARM Cortex M33, up to 200 MHz
Memory: 3 MB RAM, 2 MB Flash Memory
Input-Output: 16 GPIO pins
Connectivity: Dual-band Wi-Fi and BLE 5.3
Power: Micro-USB
Programming Software or Language: VS Code plug-in
Price: $17.95
The Photon also has a built-in programmable LED. Particle also provides a Wi-Fi antenna add-on component if your project requires it. If building new product ideas is your need, this might just be what you're looking for.

8. Teensy Development Boards
The Teensy board series, as the name suggests, aims for a small board with a minimal footprint and a lot of power packed in at an affordable price. There have been several releases of the board, with the most recent one at the time of writing being the Teensy 4.1:
Processor: ARM Cortex-M7 at 600 MHz
Memory: 1024K RAM, 8 MB Flash Memory
Input-Output: 55 digital IO pins, 18 analog input pins
Power: Micro-USB
Programming Software or Language: Arduino IDE + Teensyduino, Visual Micro, PlatformIO, CircuitPython, command line
Price: $31.50
Extra Features: Onboard micro SD card slot
If you need a stable base for your project that just works, this might be your choice.
It is worth noting that the Teensy boards have excellent audio libraries and offer a lot of processing power.

9. PineCone
PineCone is a development board from one of the foremost open source companies, Pine64. It provides amazing features and connectivity, making it ideal for a lot of tinkering purposes.
Processor: 32-bit RV32IMAFC RISC-V "SiFive E24 Core"
Memory: 2 MB Flash Memory
Input-Output: 18 GPIO pins
Connectivity: Wi-Fi, BLE 5.0, radio
Power: USB-C
Programming Software or Language: Rust
Price: $3.99
Extra Features: 3 on-board LEDs
The RISC-V processor gives it the open-source hardware edge that many other boards lack. That makes it quite good for prototyping IoT devices and technologies that might be very new and untapped.

10. Sparkfun Development Boards
Sparkfun has a whole range of boards on their website, out of which the two most notable series are the "RedBoard" series and the "Thing" series. A big part of some of these boards is the Qwiic ecosystem, in which I2C sensors, actuators, shields, etc. can be connected to the board with the same 4-pin connector. Not only that, but you can daisy-chain the boards in one string, making it more convenient and less prone to errors. Here's a great article to learn about the Qwiic ecosystem.
Sparkfun RedBoard Qwiic
This is another board that is a perfect alternative to Arduino with extra features, because it was designed to be so. It is an Arduino-compatible board, supporting the same software, shields, etc.
Microcontroller: ATmega328 with the UNO's Optiboot bootloader
Input-Output: 20 digital I/O pins (6 of them PWM-capable), 1 Qwiic connector
Power: Micro-USB, pin input
Programming Software or Language: Arduino IDE
Price: $21.95
Sparkfun Thing Plus Series
The Sparkfun Thing Plus series comes with all sorts of different processors and connection abilities: RP2040, RP2350, nRF9160, ARM Cortex-M4, ESP32-based, STM32-based, etc. We've chosen to describe one of the most popular models here, the SparkFun Thing Plus - ESP32 WROOM (USB-C).
Microcontroller: ESP32-WROOM module
Input-Output: 21 multifunctional GPIO
Connectivity: 2.4 GHz Wi-Fi, dual integrated Bluetooth (Classic and BLE)
Power: USB-C, Qwiic connector
Programming Software or Language: Arduino IDE
Price: $33.73
Extra Features: RGB status LED, built-in SD card slot, Adafruit Feather compatible (you can attach the "Wings")
Sparkfun offers a lot of options, especially based on form factor. They not only provide new unique features of their own, but also make good use of the open technologies provided by other companies, as you can see.

Conclusion
The Arduino boards clearly have a lot of alternatives, varying in size, features, and practicality. If Arduino being acquired puts a bad taste in your mouth, or even if you just want to explore what the alternatives offer, I hope this article has been helpful for you. Please let us know in the comments if we missed your favorite one. Cheers!
-
PenTesting 101: Using TheHarvester for OSINT and Reconnaissance
by: Hangga Aji Sayekti Wed, 22 Oct 2025 11:49:45 +0530 Ever wonder how security pros find those hidden entry points before the real testing even begins? It all starts with what we call reconnaissance—the art of gathering intelligence. Think of it like casing a building before a security audit; you need to know the doors, windows, and air vents first. In this digital age, one of the go-to tools for this initial legwork is TheHarvester. At its heart, TheHarvester is a Python script that doesn't try to do anything fancy. Its job is straightforward: to scour publicly available information and collect things like email addresses, subdomains, IPs, and URLs. It looks in all the usual places, from standard search engines to specialized databases like Shodan, which is essentially a search engine for internet-connected devices. We did something like this by fingerprinting with WhatWeb in an earlier tutorial. But TheHarvester is a different tool with more diverse information.

📋 To put this into practice, we're going to get our hands dirty with a live example. We'll use vulnweb.com as our test subject. This is a safe, legal website specifically set up by security folks to practice these very techniques, so it's the perfect place to learn without causing any harm. Let's dive in and see what we can uncover.

Step 1: Installing TheHarvester
If you're not using Kali Linux, you can easily install TheHarvester from its GitHub repository.
Option A: Using apt (Kali Linux / Debian/Ubuntu)
sudo apt update && sudo apt install theharvester
Option B: Installing from source (latest version)
git clone https://github.com/laramies/theHarvester.git
cd theHarvester
python3 -m pip install -r requirements.txt
You can verify the installation by checking the help menu:
theHarvester -h

Step 2: Understanding the basic syntax
The basic command structure of TheHarvester is straightforward:
theHarvester -d <domain> -l <limit> -b <data_source>
Let's break down the key options:
-d or --domain: The target domain name (e.g., vulnweb.com).
-l or --limit: The number of results to fetch from each data source (e.g., 100, 500). More results take longer.
-b or --source: The data source to use. You can specify a single source like google or use all to run all available sources.
-f or --filename: Save the results to a file for later analysis (recent versions write XML and JSON output).

Step 3: Case Study: Reconnaissance on vulnweb.com
Let's use TheHarvester to discover information about our target, vulnweb.com. We'll start with a broad search using the google and duckduckgo sources.
Run a basic scan
theHarvester -d vulnweb.com -l 100 -b google,duckduckgo
If you're seeing the error The following engines are not supported: {'google'}, don't worry—you're not alone. This is a frequent problem that stems from how TheHarvester interacts with public search engines, particularly Google. Let's break down why this happens and walk through the most effective solutions.
Why does this happen?
The short answer: Google has made its search engine increasingly difficult to scrape programmatically. Here are the core reasons:
Advanced bot detection: Google uses sophisticated algorithms to detect and block automated requests that don't come from a real web browser. TheHarvester's requests are easily identified as bots.
CAPTCHAs: When Google suspects automated activity, it presents a CAPTCHA challenge.
TheHarvester cannot solve these, so the request fails, and the module is disabled for the rest of your session.Lack of an API Key (for some sources): Some data sources, like Shodan, require a free API key to be used effectively. Without one, the module will not work.In the case of our example domain, vulnweb.com, this means we might miss some results that could be indexed on Google, but it's not the end of the world. Solution: Use the "All" flag with realistic expectationsYou can use -b all to run all modules. The unsupported ones will be gracefully skipped, and the supported ones will run. theHarvester -d vulnweb.com -l 100 -b all Now the output will look something like that. Read proxies.yaml from /etc/theHarvester/proxies.yaml ******************************************************************* * _ _ _ * * | |_| |__ ___ /\ /\__ _ _ ____ _____ ___| |_ ___ _ __ * * | __| _ \ / _ \ / /_/ / _` | '__\ \ / / _ \/ __| __/ _ \ '__| * * | |_| | | | __/ / __ / (_| | | \ V / __/\__ \ || __/ | * * \__|_| |_|\___| \/ /_/ \__,_|_| \_/ \___||___/\__\___|_| * * * * theHarvester 4.8.0 * * Coded by Christian Martorella * * Edge-Security Research * * cmartorella@edge-security.com * * * ******************************************************************* [*] Target: vulnweb.com Read api-keys.yaml from /etc/theHarvester/api-keys.yaml An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected [*] Searching Baidu. Searching 0 results. ..... ..... ..... The output is actually huge, spanning over 600 lines. You can view the complete output in this GitHub gist. Analyzing the outputWhen TheHarvester finishes its work, the real detective work begins. The Initial Chatter: Warnings and Status MessagesRight off the bat, you'll see a series of status checks and warnings: Read api-keys.yaml from /etc/theHarvester/api-keys.yaml An exception has occurred: Server disconnected [*] Searching Baidu. Searching 0 results. [*] Searching Bing. Searching results. Don't let these alarm you. The "Server disconnected" and similar exceptions are TheHarvester's way of telling you that certain data sources were unavailable or timed out—this is completely normal during reconnaissance. The tool gracefully skips these and moves on to working sources. The reconnaissance gold: Key findingsHere's where we strike valuable intelligence: Network infrastructure (ASN)[*] ASNS found: 1 -------------------- AS16509 This reveals the Autonomous System Number, essentially telling us which major network provider hosts this infrastructure (in this case, AS16509 is Amazon.com, Inc.). The attack surface - interesting URLs[*] Interesting Urls found: 15 -------------------- http://testphp.vulnweb.com/ https://testasp.vulnweb.com/ http://testphp.vulnweb.com/login.php This is your target list! Each URL represents a potential entry point. Notice we've found: Multiple applications (testphp, testasp, testhtml5)Specific functional pages (login.php, search.php)Both HTTP and HTTPS services.IP address mapping[*] IPs found: 2 ------------------- 44.228.249.3 44.238.29.244 Only two IP addresses serving all this content? 
This suggests virtual hosting where multiple domains share the same server—valuable for understanding the infrastructure setup. The subdomain treasure trove[*] Hosts found: 610 --------------------- testphp.vulnweb.com:44.228.249.3 testasp.vulnweb.com:44.238.29.244 This massive list of 610 hosts reveals the true scale of the environment. You can see patterns emerging: Application subdomains (testphp, testasp)Infrastructure components (compute.vulnweb.com, elb.vulnweb.com)Geographic distribution across AWS regionsWhat's not there matters too[*] No emails found. [*] No people found. For a test site like vulnweb.com, this makes sense. But in a real engagement, missing email addresses might mean you need different reconnaissance approaches. From reconnaissance to actionSo what's next with this intelligence? Your penetration testing roadmap becomes clear: Prioritize targets - Start with the login pages and search functionsScan the applications - Use tools like nikto or nuclei on the discovered URLsProbe the infrastructure - Run nmap scans on the identified IP addressesDocument everything - Each subdomain is a potential attack vector. In just minutes, TheHarvester has transformed an unknown domain into a mapped-out territory ready for deeper security testing. Step 4: Expanding the search with more data sourcesThe real power of TheHarvester comes from using multiple data sources. Let's run a more comprehensive scan using bing, linkedin, and threatcrowd. theHarvester -d vulnweb.com -l 100 -b bing,linkedin,threatcrowd Bing: Often returns different and sometimes more results than Google.LinkedIn: Can be useful for finding employee names and profiles associated with a company, which can help in social engineering attacks. For vulnweb.com, this won't yield results, but for a real corporate target, it's invaluable.Threat Crowd: An open-source threat intelligence engine that can often provide a rich list of subdomains.Step 5: Using all sources and saving resultsFor the most thorough reconnaissance, you can use nearly all sources with the -b all flag. 🚧This can be slow and may trigger captchas on some search engines.It's also crucial to save your results for later analysis. Use the -f flag for this. theHarvester -d vulnweb.com -l 100 -b all -f recon-results This command will: Query all available data sources.Limit results to 100 per source.Save the final output to recon-results.json and recon-results.xml.Read the JSON file with cat and jq: cat recon-results.json | jq '.' Important notes and best practicesRate Limiting: Be respectful of the data sources. Using high limits or running scans too frequently can get your IP address temporarily blocked.Legality: Only use TheHarvester on domains you own or have explicit permission to test. Unauthorized reconnaissance can be illegal.Context is Key: TheHarvester is a starting point. The data it collects must be verified and analyzed in the context of a broader security assessment.TheHarvester is a cornerstone tool for any penetration tester or security researcher. By following this guide, you can effectively use it to map out the digital footprint of your target and lay the groundwork for a successful security assessment.
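One last practical tip: once you have the saved JSON, you can pull out just the pieces you care about with jq. The key names below ("hosts" and "ips") are an assumption based on what recent theHarvester releases write, so check your own recon-results.json first if the structure differs:

# List unique hosts and IPs from the saved results (key names assumed from recent releases)
jq -r '.hosts[]?' recon-results.json | sort -u
jq -r '.ips[]?' recon-results.json | sort -u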
-
Google Chrome & Iframe `allow` Permissions Problems
by: Chris Coyier Mon, 20 Oct 2025 18:06:24 +0000 If you’re a CodePen user, this shouldn’t affect you aside from potentially seeing some console noise while we work this out. Carry on! At CodePen we have Embedded Pens which are shown in an <iframe>. These contain user-authored code served from a URL that is not same-origin with the page they are placed on. We like to be both safe and as permissive as possible with what we allow users to build and test. The sandbox attribute helps us with safety and while there are some issues with it that we’ll get to later, this is mostly about the allow attribute. Here’s an example. A user wants to use the navigator.clipboard.writeText() API. So they write JavaScript like: button.onclick = async () => { try { await navigator.clipboard.writeText(`some text`); console.log('Content copied to clipboard'); } catch (err) { console.error('Failed to copy: ', err); } } The Embedded Pen is placed on arbitrary origins, for example: chriscoyier.net. The src of the <iframe> is at codepen.io, so there is an origin mismatch there. The JavaScript in the iframe is not same-origin JavaScript, thus is subject to permissions policies. If CodePen did not use the allow attribute on our <iframe>, it would throw an error when the user tries to execute that JavaScript. Failed to copy: NotAllowedError: Failed to execute 'writeText' on 'Clipboard': The Clipboard API has been blocked because of a permissions policy applied to the current document. See https://crbug.com/414348233 for more details. This is an easy fix. We make sure the allow attribute is on the <iframe>, like this, targeting the exact feature we want to allow at any origin: <iframe src="https://codepen.io/..." allow="clipboard-write *;"> </iframe> But here’s where the problem comes in… The (new) Nested Iframe Issue CodePen’s Embedded Pens are actually nested <iframe>s. In code, the structure looks like this: <iframe src="https://codepen.io/..."> CodePen UI <iframe src="..."> User-Authored Code </iframe> </iframe> We need to put the allow attribute on the user-authored code iframe so it works, like this: <iframe src="https://codepen.io/..."> CodePen UI <iframe src="..." allow="clipboard-write *;" > User-Authored Code </iframe> </iframe> This is the problem! As soon as the nested iframe has the allow attribute, as of recently (it seems like Chrome 136), this will throw an error: [Violation] Potential permissions policy violation: clipboard-write is not allowed in this document. With our complete list (which I’ll include below), this error list is very intense: Can’t we just put the allow attributes on both <iframe>s? Yes and no. Now we run into a second problem that we’ve been working around for many years. That problem is that every browser has a different set of allow attribute values that it supports. If you use a value that isn’t supported, it throws console errors or warnings about those attributes. This is noisy or scary to users who might think it’s their own code causing the issue, and it’s entirely outside of their (or our) control. The list of allow values for Google Chrome We know we need all these to allow users to test browser APIs. This list is constantly being adjusted with new APIs, often ones that our users ask for directly. 
<iframe allow="accelerometer *; bluetooth *; camera *; clipboard-read *; clipboard-write *; display-capture *; encrypted-media *; geolocation *; gyroscope *; language-detector *; language-model *; microphone *; midi *; rewriter *; serial *; summarizer *; translator *; web-share *; writer *; xr-spatial-tracking *" ></iframe> There are even some quite-new AI-related attributes in there reflecting brand new browser APIs. Example of allow value errors If were to ship those allow attribute values on all <iframe>s that we generate for Embedded Pens, here’s what it would look like in Firefox: At the moment, Firefox actually displays three sets of these warning. That’s a lot of console noise. Safari, at the moment, isn’t displaying errors or warnings about unsupported allow attribute values, but I believe they have in the past. Chrome itself throws warnings. If I include an unknown policy like fartsandwich, it will throw a warning like: Unrecognized feature: 'fartsandwich'. Those AI-related attributes require a trial which also throw warnings, so most users get that noise as well. We (sorry!) Need To Do User-Agent Sniffing To avoid all this noise and stop scaring users, we detect the user-agent (client-side) and generate the iframe attributes based on what browser we’re pretty sure it is. Here’s our current data and choices for the allow attribute export default { allowAttributes: { chrome: [ 'accelerometer', 'bluetooth', 'camera', 'clipboard-read', 'clipboard-write', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'language-detector', 'language-model', 'microphone', 'midi', 'rewriter', 'serial', 'summarizer', 'translator', 'web-share', 'writer', 'xr-spatial-tracking' ], firefox: [ 'camera', 'display-capture', 'geolocation', 'microphone', 'web-share' ], default: [ 'accelerometer', 'ambient-light-sensor', 'camera', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'microphone', 'midi', 'payment', 'serial', 'vr', 'web-share', 'xr-spatial-tracking' ] } }; We’ve been around long enough to know that user-agent sniffing is rife with problems. And also around long enough that you gotta do what you gotta do to solve problems. We’ve been doing this for many years and while we don’t love it, it’s mostly worked. The User-Agent Sniffing Happens on the Client <script> /* We need to user-agent sniff at *this* level so we can generate the allow attributes when the iframe is created. */ </script> <iframe src="..." allow="..."></iframe> CodePen has a couple of features where the <iframe> is provided directly, not generated. Direct <iframe> embeds. Users choose this in situations where they can’t run JavaScript directly on the page it’s going (e.g. RSS, restrictive CMSs, etc) oEmbed API. This returns an <iframe> to be embedded via a server-side call. The nested structure of our embeds has helped us here where we have that first level of iframe to attempt to run the user-agent sniff an apply the correct allow attributes to the internal iframe. The problem now is that if we’re expected to provide the allow attributes directly, we can’t know which set of attributes to provide, because any browser in the world could potentially be loading that iframe. Solutions? Are the allow attributes on “parent” iframes really necessary? Was this a regression? Or is this a feature? It sorta seems like the issue is that it’s possible for nested iframes to loosen permissions on a parent, which could be a security issue? It would be good to know where we fall here. 
Could browsers just stop erroring or warning about unsupported allow attributes? Looks like that’s what Safari is doing and that seems OK? If this is the case, we could just ship the complete set of allow attributes to all browsers. A little verbose, but it prevents needing to user-agent sniff. This could also help with the problem of needing to “keep up” with these attributes quite as much. For example, if Firefox starts to support the “rewriter” value, then it’ll just start working. This is better than some confused or disappointed user writing to support about it. Even being rather engaged with web platform news, we find it hard to catch when these very niche features evolve and need iframe attribute changes. Could browsers give us API access to what allow attributes are supported? Can the browser just tell us which ones it supports and then we could verify our list against that? Navigator.allow? Also… It’s not just the allow attribute. We also maintain browser-specific sets for the sandbox attribute. Right now, this isn’t affected by the nesting issues, but we could see it going down that road. This isn’t entirely about nested iframes. We use one level of iframe anywhere on codepen.io where we show a preview of a Pen, and we need allow attributes there also. This is less of an immediate problem because we have access to the user-agent sniffing JS to get the attributes right, but ideally we wouldn’t have to do that at all.
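For illustration only, here is a minimal sketch of the kind of client-side generation described above. This is not CodePen's actual code: the browser check is deliberately naive and the trimmed-down feature lists are placeholders.

// Per-browser feature lists (placeholders, not CodePen's real data)
const allowAttributes = {
  chrome: ['clipboard-read', 'clipboard-write', 'web-share'],
  firefox: ['camera', 'geolocation', 'web-share'],
  default: ['camera', 'geolocation', 'web-share']
};

// Very naive user-agent check; a real implementation would be far more careful
function detectBrowser(ua = navigator.userAgent) {
  if (/firefox/i.test(ua)) return 'firefox';
  if (/chrome|chromium|edg/i.test(ua)) return 'chrome';
  return 'default';
}

// Build a string like "clipboard-write *; web-share *" allowing each feature on any origin
function buildAllowAttribute() {
  const values = allowAttributes[detectBrowser()] || allowAttributes.default;
  return values.map(value => `${value} *`).join('; ');
}

const iframe = document.createElement('iframe');
iframe.src = 'https://codepen.io/...';
iframe.setAttribute('allow', buildAllowAttribute());
document.body.appendChild(iframe);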
-
Building a Honeypot Field That Works
by: Zell Liew Mon, 20 Oct 2025 16:11:40 +0000 Honeypots are fields that developers use to prevent spam submissions. They still work in 2025. So you don’t need reCAPTCHA or other annoying mechanisms. But you’ve got to set a couple of tricks in place so spambots can’t detect your honeypot field. Use This I’ve created a Honeypot component that does everything I mention below. So you can simply import and use it like this: <script> import { Honeypot } from '@splendidlabz/svelte' </script> <Honeypot name="honeypot-name" /> Or, if you use Astro, you can do this: --- import { Honeypot } from '@splendidlabz/svelte' --- <Honeypot name="honeypot-name" /> But since you’re reading this, I’m sure you kinda want to know what the necessary steps are. Preventing Bots From Detecting Honeypots Here are two things that you must not do: Do not use <input type=hidden>. Do not hide the honeypot with inline CSS. Bots today are already smart enough to know that these are traps — and they will skip them. Here’s what you need to do instead: Use a text field. Hide the field with CSS that is not inline. A simple example that would work is this: <input class="honeypot" type="text" name="honeypot" /> <style> .honeypot { display: none; } </style> For now, placing the <style> tag near the honeypot seems to work. But you might not want to do that in the future (more below). Unnecessary Enhancements You may have seen these other enhancements being used in various honeypot articles out there: aria-hidden to prevent screen readers from using the field autocomplete=off and tabindex="-1" to prevent the field from being selected <input ... aria-hidden autocomplete="off" tabindex="-1" /> These aren’t necessary because display: none itself already does the things these properties are supposed to do. Future-Proof Enhancements Bots get smarter every day, so I won’t discount the possibility that they can catch what we’ve created above. So, here are a few things we can do today to future-proof a honeypot: Use legit-sounding name attribute values like website or mobile instead of obvious honeypot names like spam or honeypot. Use legit-sounding CSS class names like .form-helper instead of obvious ones like .honeypot. Put the CSS in another file so it’s further away and harder to link the CSS to the honeypot field. The basic idea is to trick spam bots into entering this “legit” field. <input class="form-helper" ... name="occupation" /> <!-- Put this into your CSS file, not directly in the HTML --> <style> .form-helper { display: none; } </style> There’s a very high chance that bots won’t be able to differentiate the honeypot field from other legit fields. Even More Enhancements The following enhancements need to happen on the <form> instead of the honeypot field. The basic idea is to detect if the entry is potentially made by a human. There are many ways of doing that — and all of them require JavaScript: Detect a mousemove event somewhere. Detect a keyboard event somewhere. Ensure the form doesn’t get filled up super duper quickly (‘cuz people don’t work that fast). 
Now, the simplest way of using these (I always advocate for the simplest way I know), is to use the Form component I’ve created in Splendid Labz: <script> import { Form, Honeypot } from '@splendidlabz/svelte' </script> <Form> <Honeypot name="honeypot" /> </Form> If you use Astro, you need to enable JavaScript with a client directive: --- import { Form, Honeypot } from '@splendidlabz/svelte' --- <Form client:idle> <Honeypot name="honeypot" /> </Form> If you use vanilla JavaScript or other frameworks, you can use the preventSpam utility that does the triple checks for you: import { preventSpam } from '@splendidlabz/utils/dom' let form = document.querySelector('form') form = preventSpam(form, { honeypotField: 'honeypot' }) form.addEventListener('submit', event => { event.preventDefault() if (form.containsSpam) return else form.submit() }) And, if you don’t wanna use any of the above, the idea is to use JavaScript to detect if the user performed any sort of interaction on the page: export function preventSpam( form, { honeypotField = 'honeypot', honeypotDuration = 2000 } = {} ) { const startTime = Date.now() let hasInteraction = false // Check for user interaction function checkForInteraction() { hasInteraction = true } // Listen for a couple of events to check interaction const events = ['keydown', 'mousemove', 'touchstart', 'click'] events.forEach(event => { form.addEventListener(event, checkForInteraction, { once: true }) }) // Check for spam via all the available methods form.containsSpam = function () { const fillTime = Date.now() - startTime const isTooFast = fillTime < honeypotDuration const honeypotInput = form.querySelector(`[name="${honeypotField}"]`) const hasHoneypotValue = honeypotInput?.value?.trim() const noInteraction = !hasInteraction // Clean up event listeners after use events.forEach(event => form.removeEventListener(event, checkForInteraction) ) return isTooFast || !!hasHoneypotValue || noInteraction } } Better Forms I’m putting together a solution that will make HTML form elements much easier to use. It includes the standard elements you know, but with easy-to-use syntax and are highly accessible. Stuff like: Form Honeypot Text input Textarea Radios Checkboxes Switches Button groups etc. Here’s a landing page if you’re interested in this. I’d be happy to share something with you as soon as I can. Wrapping Up There are a couple of tricks that make honeypots work today. The best way, likely, is to trick spam bots into thinking your honeypot is a real field. If you don’t want to trick bots, you can use other bot-detection mechanisms that we’ve defined above. Hope you have learned a lot and manage to get something useful from this! Building a Honeypot Field That Works originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Chris’ Corner: Stage 2
by: Chris Coyier Mon, 20 Oct 2025 15:47:26 +0000 We get all excited when we get new CSS features. Well, I do anyway. It’s amazing, because sometimes it unlocks something we’ve literally never been able to do before. It’s wonderful when an artist finishes a new painting, and something to be celebrated. But this is more akin to a new color dropping, making possible a sight never before seen. Just as exciting, to me, is the evolution of new features. Both from the perspective of the feature literally gaining new abilities, and of us users figuring out how to use it more effectively. We point to CSS grid as an incredibly important feature addition to CSS in the last decade. And it was! … but then later we got subgrid. … but then later gap was improved to work across layouts. … but then later we got safe alignment. And this journey isn’t over! Masonry is actively being hashed out, and has gone back and forth on whether it will be part of grid itself. (It looks like it will be a new display type but share properties with other layout types.) Plus another one I’m excited about: styling the gap. Just as gap itself handles the spacing between grid items, now row-rule and column-rule can draw lines in those gaps. Actual elements don’t need to be there, so we don’t need to put “fake” elements there just to draw borders and whatnot. Interestingly, column-rule isn’t even new, as it was already used to draw lines between columns in multi-column layouts; now it just does double duty, which is kinda awesome. Chrome Developer Blog: A new way to style gaps in CSS Microsoft Edge Blog: Minding the gaps: A new way to draw separators in CSS If we’re entering an era where CSS innovation slows down a little and we catch our breath with Stage 2 sorta features and figuring out what to do with these new features, I’m cool with that. Sorta like… We’ve got corner-shape, so what can we actually do with it? We’ve got @layer now, how do we actually get it into a project? We’ve got View Transitions now, maybe we actually need to scope them for a variety of real-world situations.
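As a rough illustration of the gap decoration idea from the Chrome and Edge posts linked above, a ruleset like the one below draws separators in the gaps without any extra elements. The syntax is still evolving and support is limited, so treat this as a sketch rather than final CSS:

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 2rem;
  /* Draw lines in the gaps themselves; no "fake" divider elements needed */
  row-rule: 1px solid #ccc;
  column-rule: 1px solid #ccc;
}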
-
I Used This Open Source Library to Integrate OpenAI, Claude, Gemini to Websites Without API Keys
by: Bhuwan Mishra Mon, 20 Oct 2025 03:31:08 GMT When I started experimenting with AI integrations, I wanted to create a chat assistant on my website, something that could talk like GPT-4, reason like Claude, and even joke like Grok. But OpenAI, Anthropic, Google, and xAI all require API keys. That means I needed to set up an account for each of the platforms and upgrade to one of their paid plans before I could start coding. Why? Because most of these LLM providers require a paid plan for API access. Not to mention, I would need to cover API usage billing for each LLM platform. What if I could tell you there's an easier approach to start integrating AI within your websites and mobile applications, even without requiring API keys at all? Sounds exciting? Let me share how I did exactly that. Integrate AI with Puter.js It's all thanks to Puter.js, an open source JavaScript library that lets you use cloud features like AI models, storage, databases, and user auth, all from the client side. No servers, no API keys, no backend setup needed here. What else can you ask for as a developer? Puter.js is built around Puter’s decentralized cloud platform, which handles all the stuff like key management, routing, usage limits, and billing. Everything’s abstracted away so cleanly that, from your side, it feels like authentication, AI, and LLMs just live in your browser. Enough talking, let’s see how you can add GPT-5 integration within your web application in less than 10 lines. <html> <body> <script src="https://js.puter.com/v2/"></script> <script> puter.ai.chat(`What is puter js?`, { model: 'gpt-5-nano', }).then(puter.print); </script> </body> </html>Yes, that’s it. Unbelievable, right? Let's save the HTML code into an index.html file and place it in a new, empty directory. Open a terminal, switch to the directory where the index.html file is located, and serve it on localhost with the Python command: python -m http.serverThen open http://localhost:8000 in your web browser. Click on the Puter.js “Continue” button when prompted. Integrate ChatGPT with Puter JS🚧 It may take some time before you see a response from ChatGPT. Till then, you'll see a blank page. ChatGPT Nano doesn't know Puter.js yet but it will, soonYou can explore a lot of examples and get an idea of what Puter.js does for you on its playground. Let’s modify the code to make it more interesting this time. It will take a user query and return streaming responses from three different LLMs so that users can decide which among the three provides the best result. 
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>AI Model Comparison</title> <script src="https://cdn.twind.style"></script> <script src="https://js.puter.com/v2/"></script> </head> <body class="bg-gray-900 min-h-screen p-6"> <div class="max-w-7xl mx-auto"> <h1 class="text-3xl font-bold text-white mb-6 text-center">AI Model Comparison</h1> <div class="mb-6"> <label for="queryInput" class="block text-white mb-2 font-medium">Enter your query:</label> <div class="flex gap-2"> <input type="text" id="queryInput" class="flex-1 px-4 py-3 rounded-lg bg-gray-800 text-white border border-gray-700 focus:outline-none focus:border-blue-500" placeholder="Write a detailed essay on the impact of artificial intelligence on society" value="Write a detailed essay on the impact of artificial intelligence on society" /> <button id="submitBtn" class="px-6 py-3 bg-blue-600 hover:bg-blue-700 text-white rounded-lg font-medium transition-colors" > Generate </button> </div> </div> <div class="grid grid-cols-1 md:grid-cols-3 gap-4"> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-blue-400 mb-3">Claude Opus 4</h2> <div id="output1" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-green-400 mb-3">Claude Sonnet 4</h2> <div id="output2" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-purple-400 mb-3">Gemini 2.0 Pro</h2> <div id="output3" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> </div> </div> <script> const queryInput = document.getElementById('queryInput'); const submitBtn = document.getElementById('submitBtn'); const output1 = document.getElementById('output1'); const output2 = document.getElementById('output2'); const output3 = document.getElementById('output3'); async function generateResponse(query, model, outputElement) { outputElement.textContent = 'Loading...'; try { const response = await puter.ai.chat(query, { model: model, stream: true }); outputElement.textContent = ''; for await (const part of response) { if (part?.text) { outputElement.textContent += part.text; outputElement.scrollTop = outputElement.scrollHeight; } } } catch (error) { outputElement.textContent = `Error: ${error.message}`; } } async function handleSubmit() { const query = queryInput.value.trim(); if (!query) { alert('Please enter a query'); return; } submitBtn.disabled = true; submitBtn.textContent = 'Generating...'; submitBtn.classList.add('opacity-50', 'cursor-not-allowed'); await Promise.all([ generateResponse(query, 'claude-opus-4', output1), generateResponse(query, 'claude-sonnet-4', output2), generateResponse(query, 'google/gemini-2.0-flash-lite-001', output3) ]); submitBtn.disabled = false; submitBtn.textContent = 'Generate'; submitBtn.classList.remove('opacity-50', 'cursor-not-allowed'); } submitBtn.addEventListener('click', handleSubmit); queryInput.addEventListener('keypress', (e) => { if (e.key === 'Enter') { handleSubmit(); } }); </script> </body> </html> Save the above file in the index.html file as we did in the previos example and then run the server with Python. This is what it looks like now on localhost. 
Comparing output from different LLM providers with Puter.jsAnd here is a sample response from all three models on the query "What is It's FOSS". Looks like It's FOSS is well trusted by humans as well as AI 😉 My Final Take on Puter.js and LLMs IntegrationThat’s not bad! Without requiring any API keys, you can do all this crazy stuff. Puter.js utilizes the “user pays” model, which means it’s completely free for developers; your application users spend credits from their own Puter accounts for the cloud features, like storage and LLMs, that they use. I reached out to them to understand their pricing structure, but at this moment, the team behind it is still working on a pricing plan. This new Puter.js library is superbly underrated. I’m still amazed by how easy it has made LLM integration. Besides AI, you can use the Puter.js SDK for authentication and storage, much like Firebase. Do check out this wonderful open source JavaScript library and explore what else you can build with it. Puter.js - Free, Serverless, Cloud and AI in One Simple LibraryPuter.js provides auth, cloud storage, database, GPT-4o, o1, o3-mini, Claude 3.7 Sonnet, DALL-E 3, and more, all through a single JavaScript library. No backend. No servers. No configuration.Puter
-
LHB Linux Digest #25.31: syslog guide, snippet manager, screen command more
by: Abhishek Prakash Fri, 17 Oct 2025 18:31:53 +0530 Welcome back to another round of Linux magic and command-line sorcery. Weirdly scary opening line, right? That's because I am already in Halloween spirit 🎃 And I'll take this opportunity to crack a dad joke: Q: Why do Linux sysadmins confuse Halloween with Christmas? A: Because 31 Oct equals 25 Dec. Hint: Think octal. Think Decimal. Jokes aside, we are working towards a few new series and courses. The CNCF series should be published next week, followed by either networking or Kubernetes microcourses. Stay awesome 😄 This post is for subscribers only Subscribe now Already have an account? Sign in
-
How to Fingerprint Websites With WhatWeb - A Practical, Hands-On Guide
by: Hangga Aji Sayekti Fri, 17 Oct 2025 17:59:33 +0530 This short guide will help you get started with WhatWeb, a simple tool for fingerprinting websites. It’s written for beginners who want clear steps, short explanations, and practical tips. By the end, you’ll know how to run WhatWeb with confidence. What is WhatWeb?Imagine you’re curious about what powers a website: the CMS, web server, frameworks, analytics tools, or plugins behind it. WhatWeb can tell you all that right from the Linux command line. It’s like getting a quick peek under the hood of any site. In this guide, we’ll skip the long theory and go straight to the fun part. You’ll run the commands, see the results, and learn how to understand them in real situations. Legal and ethical noteBefore you start, here’s a quick reminder. Only scan websites that you own or have clear permission to test. Running scans on random sites can break the law and go against ethical hacking practices. If you just want to practice, use safe test targets that are made for learning. For the examples in this guide, we will use http://www.vulnweb.com/ and some of its subdomains as safe test targets. These sites are intentionally provided for learning and experimentation, so they are good places to try WhatWeb without worrying about legal or ethical issues. Install WhatWebKali Linux often includes WhatWeb. Check version with: whatweb --version If not present, install with: sudo apt update sudo apt install whatweb Quick basic scanRun a fast scan with this command. Replace the URL with your target. whatweb http://testphp.vulnweb.com This prints a one-line summary for the target. You will see status code, server, CMS, and other hints: Beyond basic scan: Getting more out of whatwebThe above was just the very basic usse of whatweb. Let's see what else we can do with it. 1. Verbose outputwhatweb -v http://testphp.vulnweb.com This shows more details and the patterns WhatWeb matched. WhatWeb report for http://testphp.vulnweb.com Status : 200 OK Title : Home of Acunetix Art IP : 44.228.249.3 Country : UNITED STATES, US Summary : ActiveX[D27CDB6E-AE6D-11cf-96B8-444553540000], Adobe-Flash, Email[wvs@acunetix.com], HTTPServer[nginx/1.19.0], nginx[1.19.0], Object[http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0][clsid:D27CDB6E-AE6D-11cf-96B8-444553540000], PHP[5.6.40-38+ubuntu20.04.1+deb.sury.org+1], Script[text/JavaScript], X-Powered-By[PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1] Detected Plugins: [ ActiveX ] ActiveX is a framework based on Microsoft's Component Object Model (COM) and Object Linking and Embedding (OLE) technologies. ActiveX components officially operate only with Microsoft's Internet Explorer web browser and the Microsoft Windows operating system. - More info: http://en.wikipedia.org/wiki/ActiveX Module : D27CDB6E-AE6D-11cf-96B8-444553540000 [ Adobe-Flash ] This plugin identifies instances of embedded adobe flash files. Google Dorks: (1) Website : https://get.adobe.com/flashplayer/ [ Email ] Extract email addresses. Find valid email address and syntactically invalid email addresses from mailto: link tags. We match syntactically invalid links containing mailto: to catch anti-spam email addresses, eg. bob at gmail.com. This uses the simplified email regular expression from http://www.regular-expressions.info/email.html for valid email address matching. String : wvs@acunetix.com String : wvs@acunetix.com [ HTTPServer ] HTTP server header string. 
This plugin also attempts to identify the operating system from the server header. String : nginx/1.19.0 (from server string) [ Object ] HTML object tag. This can be audio, video, Flash, ActiveX, Python, etc. More info: http://www.w3schools.com/tags/tag_object.asp Module : clsid:D27CDB6E-AE6D-11cf-96B8-444553540000 (from classid) String : http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0 [ PHP ] PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. This plugin identifies PHP errors, modules and versions and extracts the local file path and username if present. Version : 5.6.40-38+ubuntu20.04.1+deb.sury.org+1 Google Dorks: (2) Website : http://www.php.net/ [ Script ] This plugin detects instances of script HTML elements and returns the script language/type. String : text/JavaScript [ X-Powered-By ] X-Powered-By HTTP header String : PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1 (from x-powered-by string) [ nginx ] Nginx (Engine-X) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. Version : 1.19.0 Website : http://nginx.net/ HTTP Headers: HTTP/1.1 200 OK Server: nginx/1.19.0 Date: Mon, 13 Oct 2025 07:29:42 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: close X-Powered-By: PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1 Content-Encoding: gzip 2. Aggressive scan (more probes)whatweb -a 3 http://testphp.vulnweb.com Use aggressive mode when you want more fingerprints. Aggressive mode is slower and noisier. Use it only with permission. 3. Scan a list of targetsCreate a file named targets.txt with one URL per line. nano targets.txt When nano opens, paste the following lines exactly (copy and right-click to paste in many terminals): http://testphp.vulnweb.com/ http://testasp.vulnweb.com/ http://testaspnet.vulnweb.com/ http://rest.vulnweb.com/ http://testhtml5.vulnweb.com/ Save and exit nano by pressing ctrl+X. Confirm the file was created for the sake of it: cat targets.txt You should see the five URLs listed. Then run: whatweb -i targets.txt --log-json results.json This saves results in JSON format in results.json. What to expect on screen: WhatWeb prints a per-host summary while it runs. When finished, open the JSON file to inspect it: less results.json If you want a pretty view and you have jq installed, run: jq '.' results.json | less -R 4. Save a human readable logwhatweb -v --log-verbose whatweb.log http://testphp.vulnweb.com Let's see the log: cat whatweb.log 5. Use a proxy (for example Burp Suite)whatweb --proxy 127.0.0.1:8080 http://testphp.vulnweb.com 6. Custom user agentIf a site blocks you, slow down the scan or change the user agent. whatweb --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" http://testphp.vulnweb.com 7. Limit scan to specific portsWhatWeb accepts a URL with port, for example: whatweb http://example.com:8080 Interpreting the outputA typical WhatWeb line looks like this: http://testphp.vulnweb.com [200 OK] Apache[2.4.7], PHP[5.5.9], HTML5 200 OK - HTTP status code. It means the request succeeded.Apache[2.4.7] - the web server software and version.PHP[5.5.9] - server side language and version.HTML5 - content hints.If you see a CMS such as WordPress, you may also see plugins or themes. WhatWeb reports probable matches. It is not a guarantee. Combine WhatWeb with other toolsWhatWeb is best for reconnaissance. 
Use it with these tools for a fuller picture: nmap - for network and port scansdirsearch or gobuster - for directory and file discoverywpscan - for deeper WordPress checksA simple workflow: Run WhatWeb to identify technologies.Use nmap to find open ports and services.Use dirsearch to find hidden pages or admin panels.If the site is WordPress, run wpscan for plugin vulnerabilities.ConclusionWhatWeb is a lightweight and fast tool for fingerprinting websites. It helps you quickly understand what runs a site and gives leads for deeper testing. Use the copy-paste commands here to get started, and combine WhatWeb with other tools for a full reconnaissance workflow. Happy pen-testing 😀
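One small bonus before wrapping up: if you logged JSON earlier with --log-json, you can summarize which plugins were matched per target using jq. The structure assumed below (an array of entries with "target" and "plugins" keys) reflects typical WhatWeb JSON logs, but verify it against your own results.json first:

# Print each target followed by the plugin names WhatWeb matched for it
jq -r '.[] | select(.target != null) | "\(.target): \((.plugins // {}) | keys | join(", "))"' results.json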
-
Looking for Open Source Kindle Alternatives? Build it Yourself
by: Pulkit Chandak Fri, 17 Oct 2025 05:20:49 GMT E-ink display technology arrived on the scene as the answer to a long list of issues and desires people had with digital book reading. The strain on the eyes, the distractions, the low battery life—all of it fixed in one swoop. While the most popular option that remains in the category is an Amazon Kindle, not every one of us wants a DRM-restricted Big Tech ecosystem. As a Linux user and open source enthusiast, I wanted something more 'open' and thus I scoured the World Wide Web and came up with a few interesting options. I have put them into two categories: DIY: You use a board like Raspberry Pi Pico and you build it yourself thanks to the blueprint provided by the project developer. This is for hardware tinkerers.A couple of non-DIY options that may be considered here.Needless to say, you should not expect a polished, out of the box eBook experience like Amazon Kindle but that's not what we are aiming for here, are we? Also, I have not tested these projects on my own. As much as I would like to, I don't have enough money to get all of them and experiment with them. 1. The Open BookThe Open Book project is the definitive DIY ebook reader project. It is based on the Raspberry Pi Pico, and makes a point of requiring only a minimum number of components. The pins on the Pico make it easy to control all necessary actions including button controls, power controls, etc. The firmware is called libros, which needs to be flashed onto the Pico. It also uses a library called Babel that gives it the ability to display the text of all languages in the world, which is a major advantage. Display: 4.2" GDEW042T2 display, designed for fast refreshingFormats supported: Plain UTF-8 text, TXT files (a converter is given by the creator)Battery: 2 AAA batteriesCost: Can differ depending on the cost of the hardware you decide to go with, but a decent build can be made at about $130.The PCB for the main board as well as the e-paper driver are easily printable because the schematics are given by the creator. The instructions for setting up the device and getting books ready to be read on the device are given very clearly and concisely on the website. 2. ZEReaderZEReader is a device inspired by The Open Book, offering another iteration of a Raspberry Pi Pico based e-ink device. This project is relatively more convenient as it provides a USB-C port for charging. The convenience is not limited to the usage alone, but extends to the assembly as well. The software is based on Zephyr Real-Time OS, which makes it easier for the software to be adapted to other hardware boards as well. Display: 7.5" Waveshare ePaper displayFormats supported: EPUB, very basic HTML parsingBattery: LiPo batteryCost: UnknownFor navigation, there are 4 buttons designed on the casing. The board is printable with schematics available online, and the parts can be gathered as the user pleases according to the requirements. A micro SD card is necessary for file storage. The instructions can all be found on the GitHub page, along with the information of the parts and software commands. Get more information on our news article about the device. 3. Dual-Screen E-ReaderThe big idea behind this project is getting back to the feeling of reading a two-paged book instead of a single-page pamphlet-like structure like a Kindle provides. A button press to change the page moves both the pages ahead, making it feel more natural, similar to an actual book. 
Instead of a full single-board computer like a Raspberry Pi, this uses an SoC, the ESP32-S3. This gives it a significant edge in power consumption: it draws very little power in reading mode, and the deep sleep mode, which kicks in after 10 minutes of inactivity, cuts consumption even more dramatically, so the device basically never needs to be turned off. Display: 2 x 4.2" panelsFormats supported: EPUB, basic HTMLBattery: 2 x 1300 mAh batteriesCost: Original creator's estimate is a little over $80.The parts are all laid out in a very concise list on the originating Reddit post with all the relevant information linked there. The project is posted on Yanko Design as well in a well written post. 4. piEreaderThe piEreader aims for a fully open approach that includes the hardware, software, and even a server to host a library. The heart of the device is a Raspberry Pi Compute Module, giving it more capabilities than an average microcontroller. The display on the build has a touch-screen as well as a backlight. The software revolves around MuPDF, a well-known, lightweight document viewer on the Linux platform. Display: 4.2" e-paper displayFormats supported: EPUB, MOBI, CBZ, PDF, etc.Battery: Lithium batteryCost: UnknownThe Hackaday page contains all the necessary information, and the GitLab page hosts all the necessary code. It is worth noting that the creator has been able to successfully try out the software on other boards like PINE64-LTS, SOQUARTZ, etc. as well. Read more about this device in our news article. 5. TurtleBookTaking an extremely practical approach, the creator of TurtleBook made some really innovative choices. First, and as they mention, most e-book readers have a lot of unnecessary features when mostly all that is needed is turning a page. As such, the reader doesn't have any physical buttons. It works on gestures, which can be used to switch pages, open menus and adjust brightness, among other things. Also, since e-ink technology doesn't require a lot of power, the power setup is solar with hybrid capacitors, making it truly autonomous and one-of-a-kind. The device is based on an Arduino MEGA2560 board. Display: Waveshare 5.3" e-ink display, and a small OLED panel for easily accessing the menu optionsFormats supported: CB files (custom formatting website is given by the creator)Battery: Hybrid capacitorsCost: $80-$120All the necessary parts and the links to them are provided by the creator in a list on the GitHub page, as well as the schematics for the PCBs and 3D-printable casing. There are two options, one with SRAM, a charger and WiFi capabilities and the other one with no charger or WiFi. The Instructables page for the device has very detailed instructions for the entire process, making it one of the most friendly options on this list. 6. EPub-InkPlate Inkplate 6 from Soldered Electronics is basically an ESP32-based e-paper display. Inkplate uses recycled screens from old, discarded e-Book readers. Excellent initiative. The project is open source, both software- and hardware-wise. While you can build a lot of cool devices on top of it, the EPub-InkPlate project allows you to convert it into an eBook reader. Although the GitHub repo hasn't seen any new updates since 2022, it could be worth a shot if you already have an InkPlate display. 7. PineNote (not DIY)While not DIY like the other projects on the list, PineNote is from the company Pine64, which has been one of the most actively pro-open source companies in recent times. 
Since it is pre-built by a proper manufacturer, it can provide a lot of stable features that the DIY projects might lack. To start with, it is immensely powerful and has a Linux-based OS. It has 128 GB of eMMC storage, 4 GB of RAM, and an ARM processor. Display: 10.3" multi-touch e-ink panel with frontlighting and an optional Wacom EMR penFormats supported: PDF, MOBI, CBZ, TXT, etc., virtually any formatBattery: 4000 mAh lithium batteryCost: $400 (I know, but it's not just an e-Book reader)It is also charged by USB-C and can be expanded into different sorts of projects, not just an e-book reader, since it is based on an unrestricted Linux OS. Special Mention: paper 7Don't confuse this paper 7 with the Paper 7 e-ink tablet from Harbor Innovations. That is also an excellent device but not open source. Yes, paper 7 is an open source device, or at least it is in the process of becoming one. It is developed by a company called paperless paper, based in Leipzig, Germany. It has been designed mainly as a photo frame, but I think it can be repurposed into an e-book reader. Presently, the official integration shows that you can save and read webpages on it. Adding the ability to read PDF and ePUB files would be wonderful. paper 7ConclusionThere are a lot of options to choose from, each with something more distinct than the last. The extent of the open-source philosophy, the amount of effort it might require, and the extra features the devices have are some of the factors that might influence your decision when choosing the right device for yourself. Whatever your choice may be, you might find yourself with a new device as well as a new interest, perhaps, after dabbling in the DIY side of open technology. We wish you the very best for it. Let us know what you think about it in the comments. Cheers!
-
FOSS Weekly #25.42: Hyprland Controversy, German State with Open Source, New Flatpak App Center and a Lot More Linux Stuff
by: Abhishek Prakash Thu, 16 Oct 2025 04:50:27 GMT In the previous newsletter, I asked what kind of advice someone looking to switch from Windows to Linux would have. I got so many responses that I am still replying to all the suggestions. I am also working on the 'Windows to Linux migration' page. Hopefully, we will have that up by next week. Hope to see more people coming to Linux as Windows 10 support has ended now. 💬 Let's see what you get in this edition: Mastering alias command.A bug that broke Flatpaks on Ubuntu 25.10.Controversy over Framework supporting Hyprland project.New Flatpak software center.Open source game development arriving on iPhone.And other Linux news, tips, and, of course, memes!📰 Linux and Open Source NewsXogot is now available on Apple iPhone for open source game development.The German state of Schleswig-Holstein has completed a massive transition to open source email systems.Ubuntu 25.10 has been released as the second and final interim release of Ubuntu for 2025, with a bug briefly breaking flatpak installations on it.Zorin OS 18 is also available now, looking prettier than ever.Framework has found itself in a controversy over its recent endorsements of Hyprland project. Framework is Accused of Supporting the Far-right, Apparently for Sponsoring the Hyprland ProjectThe announcement has generated quite some buzz but for all the wrong reasons.It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutTelegram banned our community group without reasons. It's a deja vu moment, as Facebook was also banning links to Linux websites some months ago. Telegram, Please Learn Who’s a Threat and Who’s NotOur Telegram community got deleted without an explanation.It's FOSS NewsSourav RudraProprietary ecosystems are great at keeping creative people locked in, but you can break free with the power of FOSS. 5 Signs Your Proprietary Workflow Is Stifling Your Creativity (And What You Can Do About It)If these signs feel familiar, your creativity may be stifled by proprietary constraints.It's FOSS NewsTheena Kumaragurunathan🧮 Linux Tips, Tutorials, and LearningsYou can greatly improve your efficiency in the Linux terminal by using aliases.Ubuntu/GNOME customization tips.Our beginner's guide to the Nano text editor will teach you the basics without overwhelming you.Understanding software update management in Linux Mint.Getting Started With ManjaroThis is a collection of tutorials that are useful for new Manjaro users.It's FOSSAbhishek Prakash👷 AI, Homelab and Hardware CornerWe have a Pironman alternative for you that saves your wallet and desk space. The Affordable Pironman Alternative Mini PC Case for Raspberry Pi 5We have a new option in tower cases for Raspberry Pi 5. This one has a lower price tag but does that make it worth a purchase?It's FOSSAbhishek PrakashUbo Pod is an open source AI assistant that works for you, not for your data. It is based on Raspberry Pi. Bhuwan tried them all but llama.cpp finally nailed the local LLM experience. I have been using Keychron mechanical keyboard for two years now. I recently came across their upcoming product that has ceramic mechanical keyboards. Interesting materials choice, right? Keychron's Ceramic Keyboards🎫 Event Alert: First Ever UbuCon in IndiaThe Ubuntu India LoCo is hosting the first ever UbuCon event in India, and we are the official media partners for it! 
India’s First UbuCon Set to Unite Ubuntu Community in Bengaluru This NovemberIndia gets its first UbuCon!It's FOSS NewsSourav Rudra ✨ Project HighlightsBazaar is getting all the hype right now; it is a neat app store for GNOME that focuses on providing applications and add-ons from Flatpak remotes, particularly Flathub. GitHub - kolunmi/bazaar: New App Store for GNOMENew App Store for GNOME. Contribute to kolunmi/bazaar development by creating an account on GitHub.GitHubkolunmiA new, open source personal finance application. John Schneiderman’s - DRNAn application to manage your personal finances using a budget.DRNJohn Schneiderman📽️ Videos I Am Creating for YouYour Linux Mint setup deserves a stunning makeover! Subscribe to It's FOSS YouTube Channel Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 💡 Quick Handy TipIn KDE Plasma, open settings and go into Colors & Themes → Window Decorations → Configure Titlebar. Here, add the "On all desktops" and "Keep above other windows" options to the title bar by dragging and dropping. Click on "Apply" to confirm the changes. Now, you can use: The On all desktops button to pin an app to all your desktops.The Keep above other windows button to keep a selected window always on top.🎋 Fun in the FOSSverseCan memory match terminal shortcuts with their actions? Memory Match Terminal Shortcuts With Their ActionsAn enjoyable way to test your memory by matching the Linux terminal shortcuts with their respective actions.It's FOSSAbhishek Prakash🤣 Meme of the Week: Windows 10 will be missed by many, but there are much better Linux choices to replace it with. 🗓️ Tech Trivia: On October 16, 1959, Control Data Corporation introduced the CDC 1604, one of the first fully transistorized computers. It was designed by Seymour Cray, who later became known as the father of supercomputing. The CDC 1604 was among the fastest machines of its time and was used for scientific research, weapons control, and commercial data processing. 🧑🤝🧑 From the Community: Windows 10 has reached end of life, and our FOSSers are discussing the event. Windows 10 reaches EOL tomorrow!Hi everybody, it’s that time again, that happens approx. every 10 or so years: A Windows version is reaching its end of life. I was doing some research and asked Brave Search about it. And the facts said that Windows 10 has 47% of overall Windows market share, which is roughly 35% of the overall share. Let’s just hope that they will do the right thing and switch to Linux. I wanted to know: what are others opinions on this? Do you know somebody who migrated from Windows?It's FOSS CommunityGeorge1❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
Sequential linear() Animation With N Elements
by: Temani Afif Wed, 15 Oct 2025 13:39:39 +0000 Let’s suppose you have N elements with the same animation that should animate sequentially. The first one, then the second one, and so on until we reach the last one, then we loop back to the beginning. I am sure you know what I am talking about, and you also know that it’s tricky to get such an effect. You need to define complex keyframes, calculate delays, make it work for a specific number of items, etc. Tell you what: with modern CSS, we can easily achieve this using a few lines of code, and it works for any number of items! The following demo is currently limited to Chrome and Edge, but will work in other browsers as the sibling-index() and sibling-count() functions gain broader support. You can track Firefox support in Ticket #1953973 and WebKit’s position in Issue #471. CodePen Embed Fallback In the above demo, the elements are animated sequentially and the keyframes are as simple as a single to frame changing an element’s background color and scale: @keyframes x { to { background: #F8CA00; scale: .8; } } You can add or remove as many items as you want and everything will keep running smoothly. Cool, right? That effect is made possible with this strange and complex-looking code: .container > * { --_s: calc(100%*(sibling-index() - 1)/sibling-count()); --_e: calc(100%*(sibling-index())/sibling-count()); animation: x calc(var(--d)*sibling-count()) infinite linear(0, 0 var(--_s), 1, 0 var(--_e), 0); } It’s a bit scary and unreadable, but I will dissect it with you to understand the logic behind it. The CSS linear() function When working with animations, we can define timing functions (also called easing functions). We can use predefined keyword values — such as linear, ease, ease-in, etc. — or steps() to define discrete animations. There’s also cubic-bezier(). But we have a newer, more powerful function we can add to that list: linear(). From the specification: animation-timing-function: linear creates a linear interpolation between two points — the start and end of the animation — while the linear() function allows us to define as many points as we want and have a “linear” interpolation between two consecutive points. It’s a bit confusing at first glance, but once we start working with it, things becomes clearer. Let’s start with the first value, which is nothing but an equivalent of the linear value. linear(0 0%, 1 100%) We have two points, and each point is defined with two values (the “output” progress and “input” progress). The “output” progress is the animation (i.e., what is defined within the keyframes) and the “input” progress is the time. Let’s consider the following code: .box { animation: move 2s linear(0 0%, 1 100%); } @keyframes move { 0% {translate: 0px } 100% {translate: 80px} } In this case, we want 0 of the animation (translate: 0px) at t=0% (in other words, 0% of 2s, so 0s) and 1 of the animation (translate: 80px) at t=100% (which is 100% of 2s, so 2s). Between these points, we do a linear interpolation. CodePen Embed Fallback Instead of percentages, we can use numbers, which means that the following is also valid: linear(0 0, 1 1) But I recommend you stick to the percentage notation to avoid getting confused with the first value which is a number as well. 
The 0% and 100% are implicit, so we can remove them and simply use the following: linear(0, 1) Let’s add a third point: linear(0, 1, 0) As you can see, I am not defining any “input” progress (the percentage values that represent the time) because they are not mandatory; however, introducing them is the first thing to do to understand what the function is doing. The first value is always at 0% and the last value is always at 100%. linear(0 0%, 1, 0 100%) The value will be 50% for the middle point. When a control point is missing its “input” progress, we take the mid-value between two adjacent points. If you are familiar with gradients, you will notice the same logic applies to color stops. linear(0 0%, 1 50%, 0 100%) Easier to read, right? Can you explain what it does? Take a few minutes to think about it before continuing. Got it? I am sure you did! It breaks down like this: We start with translate: 0px at t=0s (0% of 2s). Then we move to translate: 80px at t=1s (50% of 2s). Then we get back to translate: 0px at t=2s (100% of 2s). CodePen Embed Fallback Most of the timing functions allow us to only move forward, but with linear() we can move in both directions as many times as we want. That’s what makes this function so powerful. With a “simple” keyframes you can have a “complex” animation. I could have used the following keyframes to do the same thing: @keyframes move { 0%, 100% { translate: 0px } 50% { translate: 80px } } However, I won’t be able to update the percentage values on the fly if I want a different animation. There is no way to control keyframes using CSS so I need to define new keyframes each time I need a new animation. But with linear(), I only need one keyframes. In the demo below, all the elements are using the same keyframes and yet have completely different animations! CodePen Embed Fallback Add a delay with linear() Now that we know more about linear(), let’s move to the main trick of our effect. Don’t forget that the idea is to create a sequential animation with a certain number (N) of elements. Each element needs to animate, then “wait” until all the others are done with their animation to start again. That waiting time can be seen as a delay. The intuitive way to do this is the following: @keyframes move { 0%, 50% { translate: 0px } 100% { translate: 80px } } We specify the same value at 0% and 50%; hence nothing will happen between 0% and 50%. We have our delay, but as I said previously, we won’t be able to control those percentages using CSS. Instead, we can express the same thing using linear(): linear(0 0%, 0 50%, 1 100%) The first two control points have the same “output” progress. The first one is at 0% of the time, and the second one at 50% of the time, so nothing will “visually” happen in the first half of the animation. We created a delay without having to update the keyframes! @keyframes move { 0% { translate: 0px } 100% { translate: 80px } } CodePen Embed Fallback Let’s add another point to get back to the initial state: linear(0 0%, 0 50%, 1 75%, 0 100%) Or simply: linear(0, 0 50%, 1, 0) CodePen Embed Fallback Cool, right? We’re able to create a complex animation with a simple set of keyframes. Not only that, but we can easily adjust the configuration by tweaking the linear() function. This is what we will do for each element to get our sequential animation! The full animation Let’s get back to our first animation and use the previous linear() value we did before. We will start with two elements. CodePen Embed Fallback Nothing surprising yet. 
Both elements have the exact same animation, so they animate the same way at the same time. Now, let’s update the linear() function for the first element to have the opposite effect: an animation in the first half, then a delay in the second half. linear(0, 1, 0 50%, 0) This literally inverts the previous value: CodePen Embed Fallback Tada! We have established a sequential animation with two elements! Are you starting to see the idea? The goal is to do the same with any number (N) of elements. Of course, we are not going to assign a different linear() value for each element — we will do it programmatically. First, let’s draw a figure to understand what we did for two elements. When one element is waiting, the other one is animating. We can identify two ranges. Let’s imagine the same with three elements. This time, we need three ranges. Each element animates in one range and waits in two ranges. Do you see the pattern? For N elements, we need N ranges, and the linear() function will have the following syntax: linear(0, 0 S, 1, 0 E, 0) The start and the end are equal to 0, which is the initial state of the animation, then we have an animation between S and E. An element will wait from 0% to S, animate from S to E, then wait again from E to 100%. The animation time equals 100%/N, which means E - S = 100%/N. The first element starts its animation at the first range (0 * 100%/N), the second element at the second range (1 * 100%/N), the third element at the third range (2 * 100%/N), and so on. S is equal to: S = (i - 1) * 100%/N …where i is the index of the element. Now, you may ask, how do we get the value of N and i? The answer is as simple as using the sibling-count() and sibling-index() functions! Again, these are currently supported in Chromium browsers, but we can expect them to roll out in other browsers down the road. S = calc(100%*(sibling-index() - 1)/sibling-count()) And: E = S + 100%/N E = calc(100%*sibling-index()/sibling-count()) We write all this with some good CSS and we are done! .box { --d: .5s; /* animation duration */ --_s: calc(100%*(sibling-index() - 1)/sibling-count()); --_e: calc(100%*(sibling-index())/sibling-count()); animation: x calc(var(--d)*sibling-count()) infinite linear(0, 0 var(--_s), 1, 0 var(--_e), 0); } @keyframes x { to { background: #F8CA00; scale: .8; } } I used a variable (--d) to control the duration, but it’s not mandatory. I wanted to be able to control the amount of time each element takes to animate. That’s why I multiply it later by N. CodePen Embed Fallback Now all that’s left is to define your animation. Add as many elements as you want, and watch the result. No more complex keyframes and magic values! Note: For unknown reasons (probably a bug) you need to register the variables with @property (a minimal sketch of that registration appears just before the conclusion below). More variations We can extend the basic idea to create more variations. For example, instead of having to wait for an element to completely end its animation, the next one can already start its own. CodePen Embed Fallback This time, I am defining N + 1 ranges, and each element animates in two ranges. The first element will animate in the first and second range, while the second element will animate in the second and third range; hence an overlap of both animations in the second range, etc. I will not spend too much time explaining this case because it’s one example among many we could create, so I’ll let you dissect the code as a small exercise. And here is another one for you to study as well.
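About that @property note: the article doesn’t show the registration itself, so here is a minimal sketch of what it could look like. This is my own addition rather than the author’s exact code, and the initial values are placeholder assumptions, since the calc() expressions above override them anyway:

@property --_s {
  syntax: "<percentage>";
  inherits: false;
  initial-value: 0%; /* placeholder; the calc() above supplies the real value */
}

@property --_e {
  syntax: "<percentage>";
  inherits: false;
  initial-value: 100%; /* placeholder; the calc() above supplies the real value */
}

Registering the two custom properties as typed <percentage> values appears to be what lets the linear() stops pick them up reliably.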
CodePen Embed Fallback Conclusion The linear() function was mainly introduced to create complex easing such as bounce and elastic, but combined with other modern features, it unlocks a lot of possibilities. Through this article, we got a small overview of its potential. I said “small” because we can go further and create even more complex animations, so stay tuned for more articles to come! Sequential linear() Animation With N Elements originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
413: Still indie after all these years
by: Chris Coyier Tue, 14 Oct 2025 13:52:25 +0000 We’re over 13 years old as a company now. We decide that we’re not a startup anymore (we’re a “small business” with big dreams) but we are still indie. We’ve seen trends come and go. We just do what we do, knowing the tradeoffs, and plan to keep getting better as long as we can. Links Timeline – Chris Coyier 115: Adam Argyle on Cracking the 2025 Web Dev Interview | Front-End Fire Time Jumps 00:05 Are we still an indie startup? 04:32 Remote working at CodePen 19:20 Progressing and advancement in a small business 22:51 Career opportunities in tech 25:39 Startups starting at free 29:17 2.0 for the future
-
Chris’ Corner: Design (and you’re going to like it)
by: Chris Coyier Mon, 13 Oct 2025 17:01:15 +0000 Damning opening words from Edwin Heathcote in Why designers abandoned their dreams of changing the world. The situation is, if you wanna make money doing design work, you’re probably going to be making it from some company hurting the world, making both you and them complicit. Kinda dark. But maybe it is a course correction for designers thinking they are the world’s salvation, a swing too far in the other direction. This pairs very nicely with Pavel Samsonov’s UX so bad that it’s illegal, again opening with a banger: Big companies’ products are so dominant that users are simply going to use them no matter what. Young designers will be hired to make the products more profitable no matter what, and they will like it, damn it. Using design to make money is, well, often kind of the point. And I personally take no issue with that. I do take issue with using design for intentional harm. I take issue with using the power of design to influence users to make decisions against their own better judgement. It makes me think of the toy catalog that showed up at my house from Amazon recently. It’s early October. Christmas is 3 months away, but the message is clear: get your wallets ready. This design artifact, for children, chockablock with every toy under the sun, to set their desire ablaze, to ensure temper tantrums until the temporary soothing that only a parent clicking a Buy Now button gives. It isn’t asking kids to thoughtfully pick out a toy they might want, it says: give me them all, I want every last thing. The pages are nicely designed with great photography. A designer might make the argument: let’s set all the pages on white with product cutouts and plenty of white space, so kids can easily and visibly circle all the things they want. Let their fingers bleed with capitalism. Making a list isn’t just implied though, the first page is a thicker-weight paper that is a literal 15-item wish list page designed to be filled out and torn out. More. Morrrreeeee. And just as a little cherry on top, it’s a sticker book too. It begs to travel with you, becoming an accessory to the season. It’s cocaine for children, with the same mandate the Instagram algorithm has for older kids and adults.
-
Masonry: Watching a CSS Feature Evolve
by: Saleh Mubashar Mon, 13 Oct 2025 14:31:35 +0000 You’ve probably heard the buzz about CSS Masonry. You might even be current on the ongoing debate about how it should be built, with two big proposals on the table, one from the Chrome team and one from the WebKit team. The two competing proposals are interesting in their own right. Chrome posted about its implementation a while back, and WebKit followed it up with a detailed post stating their position (which evolved out of a third proposal from the Technical Architecture Group). We’ll rehash some of that in this post, but even more interesting to me is that this entire process is an excellent illustration of how the CSS Working Group (CSSWG), browsers, and developers coalesce around standards for CSS features. There are tons of considerations that go into a feature, like technical implementations and backwards compatibility. But it can be a bit political, too. That’s really what I want to do here: look at the CSS Masonry discussions and what they can teach us about the development of new CSS features. What is the CSSWG’s role? What influence do browsers have? What can we learn from the way past features evolved? Masonry Recap A masonry layout is different from, say, Flexbox and Grid, stacking unevenly-sized items along a single track that automatically wraps into multiple rows or columns, depending on the direction. It’s called the “Pinterest layout” for the obvious reason that it’s the hallmark of Pinterest’s feed. Pinterest’s masonry layout We could go deeper here, but talking specifically about CSS Masonry isn’t the point. When Masonry entered CSS Working Group discussions, the first prototype actually came from Firefox back in 2019, based on an early draft that integrated masonry behavior directly into Grid. The Chrome team followed later with a new display: masonry value, treating it as a distinct layout model. They argued that masonry is a different enough layout from Flexbox and Grid to deserve its own display value. Grid’s defaults don’t line up with how masonry works, so why force developers to learn a bunch of extra Grid syntax? Chrome pushed ahead with this idea and prototyped it in Chrome 140: .container { display: masonry; grid-template-columns: repeat(auto-fit, minmax(160px, 1fr)); gap: 10px; } Meanwhile, the WebKit team has proposed that masonry should be a subset of Grid, rather than its own display type. They endorsed a newer direction based on a recommendation by the W3C Technical Architecture Group (TAG) built around a concept called Item Flow that unifies flex-flow and grid-auto-flow into a single set of properties. Instead of writing display: masonry, you’d stick with display: grid and use a new item-flow shorthand to collapse rows or columns into a masonry-style layout: .container { display: grid; grid-template-columns: repeat(auto-fill, minmax(14rem, 1fr)); item-flow: row collapse; gap: 1rem; } The debate here really comes down to mental models and how you think about masonry. WebKit sees it as a natural extension of Grid, not a brand-new system. Their thinking is that developers shouldn’t need to learn an entirely new model when most of it already exists in Grid. With item-flow, you’re not telling the browser “this is a whole new layout system,” you’re more or less adjusting the way elements flow in a particular context. How CSS Features Evolve This sort of horse-trading isn’t new. Both Flexbox and Grid went through years of competing drafts before becoming the specs we use today.
Flexbox, in particular, had a rocky rollout in the early 2010s. Those who were in the trenches at the time likely remember multiple conflicting syntaxes floating around at once. The initial release had gaps, and browsers implemented the features differently, leading to all kinds of things, like proprietary properties, experimental releases, and different naming conventions that made the learning curve rather steep, and even Frankenstein-like usage in some cases to get the most browser support. In other words, neither Flexbox (nor Grid, for that matter) enjoyed a seamless release, but we’ve gotten to a place where the browsers’ implementations are interoperable with one another. That’s a big deal for developers like us who often juggle inconsistent support for various features. Heck, Rob O’Leary recently published the rabbit hole he traveled trying to use text-wrap: pretty in his work, and that’s considered “Baseline” support that is “widely available.” But I digress. It’s worth noting that Flexbox faced unique challenges early on, and masonry has benefitted from those lessons learned. I reached out to CSSWG member Tab Atkins-Bittner for a little context since they were heavily involved in editing the Flexbox specification. In other words, Flexbox was sort of a canary in the coal mine as the CSSWG considered what a modern CSS layout syntax should accomplish. This greatly benefited the work put into defining CSS Grid because a lot of the foundation for things like tracks, intrinsic sizing, and proportions was already tackled. Atkins-Bittner goes on to explain that the Grid specification process also forced the CSSWG to rethink several of Flexbox’s design choices in the process. This also explains why Flexbox underwent several revisions following its initial release. It also highlights another key point: CSS features are always evolving. Early debate and iteration are essential because they reduce the need for big breaking changes. Still, some of the Flexbox mistakes (which do happen) became widely adopted. Browsers had widely implemented their approaches and the specification caught up to them while trying to establish a consistent language that helps both user agents and developers implement and use the features, respectively. All this to say: Masonry is in a much better spot than Flexbox was at its inception. It benefits from the 15+ years that the CSSWG, browsers, and developers contributed to Flexbox and Grid over that time. The discussions are now less about fixing under-specified details and more about high-level design choices. Hence, novel ideas born from Masonry that combine the features of Flexbox and Grid into the new Item Flow proposal. It’s messy. And weird. But it’s how things get done. The CSSWG’s Role Getting to this point requires process. And in CSS, that process runs through the Working Group. The CSS Working Group (CSSWG) runs on consensus: members debate in the open, weigh pros and cons, and push browsers towards common ground. Miriam Suzanne, an invited expert with the CSSWG (and CSS-Tricks alumni), describes the process like this: But consensus only applies to the specifications. Browsers still decide when and how those features are shipped, as Suzanne continues: So, while the CSSWG facilitates discussions around features, it can’t actually stop browsers from shipping those features, let alone dictate how they’re implemented. It’s a consensus-driven system, but consensus is only about publishing a specification.
In practice, momentum can shift if one vendor is the first to ship or prototype a feature. In most cases, though, the specification adoption process results in a stronger proposal overall. By the time features ship, the idea is that they’ve already been thoroughly debated, which, in theory, reduces the need for significant revisions later that could lead to breaking changes. Backwards compatibility is always at the forefront of CSSWG discussions. Developer feedback also plays an important role, though there isn’t a single standardized way that it is solicited, collected, or used. For the CSSWG, the csswg-drafts GitHub repo is the primary source of feedback and discussion, while browsers also run their own surveys and gather input through various other channels such as Chrome’s technical discussion groups and WebKit’s mailing lists. The Bigger Picture Browsers are in the business of shaping new features. It’s also in their best interest for a number of reasons. Proposing new ideas gives them a seat at the table. Prototyping new features gets developers excited and helps further refine edge cases. Implementing new features (particularly first) gives them a competitive edge in the consumer market. All that said, prototyping features ahead of consensus is a bit of a tightrope walk. And that’s where Masonry comes back into the bigger picture. Chrome shipped a prototype of the feature that leans heavily into the first proposal for a new display: masonry value. Other browsers have yet to ship competing prototypes, but have openly discussed their positions, as WebKit did in subsequent blog posts. At first glance, that might suggest that Chrome is taking a heavy-handed approach to tip the scales in its favor. But there’s a lot to like about prototyping features because it offers real-world proof, giving developers early access to experiment. Atkins-Bittner explains it nicely: This kind of “soft” commit moves conversations forward while leaving room to change course, if needed, based on real-world use. But there’s obviously a tension here as well. Browsers may be custodians of web standards and features, but they’re still built by massive companies that are selling a product at the end of the day. It’s easy to get cynical. And political. In theory, though, allowing browsers to voluntarily adopt features gives everyone choice: browsers compete in the market based on what they implement, developers gain new features that push the web further, and everyone gets to choose the browser that best fits their browsing needs. If one company controls access to a huge share of users, however, those choices feel less accessible. Standards often get shaped just as much by market power as by technical merit. Where We’re At At the end of the day, standards get shaped by a mix of politics, technical trade-offs, and developer feedback. Consensus is messy, and it’s rarely about one side “winning.” With masonry, it might look like Google got its way, but in reality the outcome reflects input from both proposals, plus ideas from the wider community. As of this writing: Masonry will be a new display type, but must include the word “grid” in the name. The exact keyword is still being debated. The CSSWG has resolved to proceed with the proposed item-flow approach. Grid will be used for layout templates and explicitly placing items in them. Some details, like a possible shorthand syntax and track listing defaults, are still being discussed.
Further reading This is a big topic, one that goes much deeper and further than we’ve gone here. While working on this article, a few others popped up that are very much worth your time to see the spectrum of ideas and opinions about the CSS standards process: Alex Russell’s post about the standards adoption process in browsers. Rob O’Leary’s article about struggling with text-wrap: pretty, explaining that “Baseline” doesn’t always mean consistent support in practice. David Bushell’s piece about the WHATWG. It isn’t about the CSSWG specifically, but covers similar discussions on browser politics and standards consensus. Masonry: Watching a CSS Feature Evolve originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
The Affordable Pironman Alternative Mini PC Case for Raspberry Pi 5
by: Abhishek Prakash Mon, 13 Oct 2025 07:48:52 GMT SunFounder's Pironman cases for Raspberry Pi are a huge hit. This bestselling device converts the naked Raspberry Pi board into a miniature tower PC. The RGB lighting, OLED display and glass casing make it look cool. Full HDMI ports, NVMe ports and active-passive cooling options enhance the functionality of the Pi 5. This great gadget is too expensive for some people at $76 for the Pironman and $95 for the dual-NVMe Pironman Max. SunFounder knows it, and that's why they have introduced the Pironman 5 Mini at $45, but it drops the OLED display and full HDMI ports and reduces the number of fans. Dealbreaker? Maybe. Maybe not. But I have come across a new case that has most of the features at a much lower price. Elecrow's Pitower Like SunFounder, Elecrow has been offering gadgets and accessories for Raspberry Pi and other embedded devices for years. Their CrowView Note and all-in-one starter kits have been popular among SBC enthusiasts. They have just revealed a new product, a mini PC case for your Raspberry Pi 5 and Jetson Orin Nano. Yes, that doubles the excitement.
Compatible Devices: Raspberry Pi 5 / Jetson Orin Nano
Display: 1.3″ OLED Screen
Material: Aluminum Alloy + Acrylic
Cooling System: 3 × Cooling Fans
Power Control: Integrated Power Button
PCIe Interface (Raspberry Pi Version): PCIe M.2
Supported SSD Sizes: 2230 / 2242 / 2260 / 2280
RTC (Real-Time Clock) Support: Supported (Raspberry Pi Version)
Dimensions: 120 × 120 × 72 mm
Weight: 500 g
Ports: 2 × Full HDMI, 4 × USB, 1 × Ethernet, 1 × Type-C for power
Included Accessories: 1 × Case (Unassembled), 1 × PCBA Board, 3 × Cooling Fans, 1 × Heatsink (for Raspberry Pi), 1 × User Manual
And all this comes at a lower price tag of nearly $40 (more on this later). That sounds tempting, right? Let's see how good this case is. 📋Elecrow sent me this case for review. The views expressed are my own. Features meet affordability Let's take a look at the appearance of Elecrow's mini PC case. It is slightly bigger than the Pironman cases and has a somewhat more boxy look. The OLED display and power button are at the top. The micro SD card slot is at the bottom, and to accommodate it, the case has taller feet. There is nothing in the front of the device except a transparent acrylic sheet. The main look of the case comes from the side, which gives you a broader look at the circuits. It looks magnificent with the RGB lights. The GPIO pins are accessible from here and they are duly marked. Front view There are three RGB fans here. Two in the back throw air out and one at the top sucks air in. This is done to keep the airflow in circulation inside the case. The official Raspberry Pi Active Cooler is also added for extra cooling. All the other ports are accessible from the back. In addition to all the usual Raspberry Pi ports, there are two full-size HDMI ports replacing the micro HDMI ports. Back view The NVMe board is inside the case and it is better to insert the SSD while assembling the case. Yes, this is also an assembly kit. 📋I used the case for Raspberry Pi 5 and hence this section focuses on the Pi 5-specific features. Assembling the case Mini PC case box Since Elecrow's tower case is clearly inspired by SunFounder's Pironman case, they have also kept the DIY angle here. This simply means that you have to assemble the kit yourself. It is while assembling that you can decide whether you want to use it for Raspberry Pi 5 or Jetson Orin Nano.
Assembly instructions differ slightly between the two devices. There is an official assembly video and you should surely watch it to get a feel for how much effort is required to build this case. In my case, I was not aware of the assembly video as I was sent this device at the time the product was announced. I used the included paper manual and it took me nearly two hours to complete the assembly. If I had had the help of the video and if I had not encountered a couple of issues, this could have been done within an hour. Assembling the case Did I say issues? Yes, a few. First, the paper manual didn't specifically mention connecting one of the FPC cables. The video mentions it, thankfully. One major issue was putting in the power button. It seems to me that while they sized the hole according to the power button, they applied the black coating later on. And this reduced the size of the hole through which the power button passes. I don't see the official assembly video mentioning this issue and it could create confusion. The workaround is to simply use an object to remove the coating. I used scissors to scrape it. Another issue was putting in the tiny screws in even tinier spaces at times. The situation worsened for me as the paper manual suggested joining the main board and all the adapter boards in the initial phases. This made putting the screws in even harder. As the video shows, this could be done in steps. My magnetic screwdriver helped a great deal in placing the tiny screws in narrow places, and I think Elecrow should have provided a magnetic screwdriver instead of a regular one. User experience To make full use of all the cool features, i.e., OLED display, RGB fans, etc., you need to install a few Python scripts first. Scripts to add support for power button actions and OLED screen And here's the thing that I have noticed with most Elecrow products: they are uncertain about the appropriate location for their documentation. The paper manual that comes with the package has a QR code that takes you to this Google Drive that contains various scripts and a readme file. But there is also an online Wiki page and I think this page should be considered and distributed as the official documentation. After running 12 or so commands, including a few that allow 777 permissions, the OLED screen started showing system stats such as CPU temperature and usage, RAM usage, disk stats, date and time. It would have been nice if it displayed the IP address too. Milliseconds of light sync issue which is present in SunFounder cases too Like the Pironman, Elecrow's case also has the RGB lighting of the fans out of sync by a few milliseconds. Not an issue unless you have acute OCD. The main issue is that it has three fans and the fans start running as soon as the device is turned on. For such a tiny device, three continuously running fans generate considerable noise. The problem is that there is no user-facing way of controlling the fans without modifying the scripts themselves. Another issue is that if you turn off the Pi from the operating system, i.e., use the shutdown command or the graphical option of Raspberry Pi OS, the RGB lights and fans stay on. Even the OLED screen keeps on displaying whatever message it had when the system was shut down. Top of the case has the OLED display and power button If you shut down the device by long-pressing the power button, everything is turned off normally. This should not be the intended behavior.
I have notified Elecrow about it and hopefully their developers will work on fixing their script. Barring these hiccups, there are plenty of positives. There is an RTC battery to give you correct time between long shutdowns, although it works only with Raspberry Pi OS at the moment. The device stays super cool thanks to three fans maintaining a good airflow and the active cooler adding to the overall cooling. The clear display with RGB lights surely gives it an oomph factor. My photography skills don't do it justice Conclusion There is room for improvement here, and I hope Elecrow updates their scripts to address these issues in the future:
Proper handling of lights/fans shutdown instead of relying on the power button.
Options to configure the RGB lights and control the fans.
Include the IP address in the OLED display (optional).
Other than that, I have no complaints. The case is visually appealing, the device remains cool, and the price is reasonable in comparison to the popular Pironman cases. Coming to the pricing: the device costs $32 for the Jetson Nano version and $40 for the Raspberry Pi version. I am guessing this is because the Pi version includes the additional active cooler. Do note that the pricing displayed on the website DOES NOT include shipping charges and customs duty. Those things will be additional. Alternatively, at least for our readers in the United States of America, the device is available on Amazon (partner link) but at a price tag of $59 at the time of writing this review. You don't have to worry about extra shipping or customs duty fees if you order from Amazon. Get it from Amazon US (for $59) Get it from the official website (shipping/customs extra)
-
I Switched From Ollama And LM Studio To llama.cpp And Absolutely Loving It
by: Bhuwan Mishra Sat, 11 Oct 2025 02:26:37 GMT My interest in running AI models locally started as a side project, part curiosity and part irritation with cloud limits. There’s something satisfying about running everything on your own box. No API quotas, no censorship, no signups. That’s what pulled me toward local inference. My struggle with running local AI models My setup, being an AMD GPU on Windows, turned out to be the worst combination for most local AI stacks. The majority of AI stacks assume NVIDIA + CUDA, and if you don’t have that, you’re basically on your own. ROCm, AMD’s so-called CUDA alternative, doesn’t even work on Windows, and even on Linux, it’s not straightforward. You end up stuck with CPU-only inference or inconsistent OpenCL backends that feel a decade behind. Why not Ollama and LM Studio? I started with the usual tools, i.e., Ollama and LM Studio. Both deserve credit for making local AI look plug-and-play. I tried LM Studio first. But soon after, I discovered how LM Studio hijacks my taskbar. I frequently jump from one application window to another using the mouse, and it was getting annoying for me. Another thing that annoyed me was its installer size of 528 MB. I’m a big advocate for keeping things minimal yet functional. I’m a big admirer of a functional text editor that fits under 1 MB (Dred), a reactive JavaScript library and React alternative that fits under 1 KB (Van JS), and a game engine that fits under 100 MB (Godot). Then I tried Ollama. Being a CLI user (even on Windows), I was impressed with Ollama. I don’t need to spin up an Electron JS application (LM Studio) to run an AI model locally. With just two commands, you can run any AI model locally with Ollama: ollama pull tinyllama ollama run tinyllama But once I started testing different AI models, I needed to reclaim disk space afterward. My initial approach was to delete the model manually from File Explorer. I was a bit paranoid! But soon, I discovered these Ollama commands: ollama rm tinyllama #remove the model ollama ls #lists all models Upon checking how lightweight Ollama is, it comes close to 4.6 GB on my Windows system, although you can delete unnecessary files to make it slimmer (it comes bundled with all libraries like rocm, cuda_v13, and cuda_v12). After trying Ollama, I was curious: does LM Studio even provide a CLI? Upon researching, I came to know that, yeah, it does offer a command-line interface. I investigated further and found out that LM Studio uses Llama.cpp under the hood. With these two commands, I can run LM Studio via CLI and chat with an AI model while staying in the terminal: lms load <model name> #Load the model lms chat #starts the interactive chat I was generally satisfied with the LM Studio CLI at this point. Also, I noticed it came with Vulkan support out of the box. I had been looking to add Vulkan support to Ollama; I discovered an approach to compile Ollama from source code and enable Vulkan support manually. That’s a real hassle! I just had three additional complaints at this point. Every time I needed to use the LM Studio CLI (lms), it would take some time to wake up its Windows service. The LMS CLI is not feature-rich. It does not even provide a CLI way to delete a model. And the last one was how it takes two steps: load the model first, then chat. After the chat is over, you need to manually unload the model. This mental model doesn’t make sense to me. That’s where I started looking for something more open, something that actually respected the hardware I had.
That’s when I stumbled onto Llama.cpp, with its Vulkan backend and refreshingly simple approach. Setting up Llama.cpp 🚧The tutorial was performed on Windows because that's the system I am using currently. I understand that most folks here on It's FOSS are Linux users and I am committing blasphemy of sorts, but I just wanted to share the knowledge and experience I gained with my local AI setup. You could actually try a similar setup on Linux, too. Just use the Linux-equivalent paths and commands. Step 1: Download from GitHub Head over to its GitHub releases page and download the latest release for your platform. 📋If you’ll be using Vulkan support, remember to download assets suffixed with vulkan-x64.zip like llama-b6710-bin-ubuntu-vulkan-x64.zip, llama-b6710-bin-win-vulkan-x64.zip. Step 2: Extract the zip file Extract the downloaded zip file and, optionally, move the directory to wherever you usually keep your binaries, like /usr/local/bin on macOS and Linux. On Windows 10, I usually keep it under %USERPROFILE%\.local\bin. Step 3: Add the Llama.cpp directory to the PATH environment variable Now, you need to add its directory location to the PATH environment variable. On Linux and macOS (replace path-to-llama-cpp-directory with your exact directory location): export PATH="$PATH:<path-to-llama-cpp-directory>" On Windows 10 and Windows 11: setx PATH "%PATH%;<path-to-llama-cpp-directory>" Now, Llama.cpp is ready to use. llama.cpp: The best local AI stack for me Just grab a .gguf file, point to it, and run. It reminded me why I love tinkering on Linux in the first place: fewer black boxes, more freedom to make things work your way. With just one command, you can start a chat session with Llama.cpp: llama-cli.exe -m e:\models\Qwen3-8B-Q4_K_M.gguf --interactive If you carefully read its verbose output, it clearly shows signs of the GPU being utilized: With llama-server, you can even download AI models from Hugging Face, like: llama-server -hf itlwas/Phi-4-mini-instruct-Q4_K_M-GGUF:Q4_K_M The -hf flag tells llama-server to download the model from the Hugging Face repository. You even get a web UI with Llama.cpp. For example, run the model with this command: llama-server -m e:\models\Qwen3-8B-Q4_K_M.gguf --port 8080 --host 127.0.0.1 This starts a web UI on http://127.0.0.1:8080, along with the ability to send an API request from another application to llama-server. Let’s send an API request via curl: curl http://127.0.0.1:8080/completion -H "Content-Type: application/json" -d "{\"prompt\":\"Explain the difference between OpenCL and SYCL in short.\",\"temperature\":0.7,\"max_tokens\":128}" Here, temperature controls the creativity of the model’s output, and max_tokens controls whether the output will be short and concise or a paragraph-length explanation (a short Python sketch of the same request is included at the end of this article). llama.cpp for the win What am I losing by using llama.cpp? Nothing. As with Ollama, I get a feature-rich CLI, plus Vulkan support. It all comes in under 90 MB on my Windows 10 system. Now, I don’t see the point of using Ollama and LM Studio: I can download any model with llama-server, run it directly with llama-cli, and even interact with it through the web UI and API requests. I’m hoping to do some benchmarking on how performant AI inference on Vulkan is compared to pure CPU and SYCL implementations in a future post. Until then, keep exploring AI tools and the ecosystem to make your life easier. Use AI to your advantage rather than getting into endless debates over questions like: will AI take our jobs?
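As an addendum to the curl example above, here is a minimal Python sketch of the same /completion request. This is my own addition rather than something from the original article; it assumes llama-server is still running on 127.0.0.1:8080 and that the requests package is installed (pip install requests).

# Minimal sketch: send the same /completion request shown in the curl example above.
# Assumes llama-server is running on 127.0.0.1:8080 and `requests` is installed.
import requests

payload = {
    "prompt": "Explain the difference between OpenCL and SYCL in short.",
    "temperature": 0.7,   # creativity of the output, mirroring the curl example
    "max_tokens": 128,    # rough cap on response length, as in the curl example
}

response = requests.post("http://127.0.0.1:8080/completion", json=payload, timeout=120)
response.raise_for_status()

data = response.json()
# The /completion endpoint typically returns the generated text in a "content" field;
# fall back to printing the whole JSON if the response shape differs.
print(data.get("content", data))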