Everything posted by Blogger
-
Disaggregated Routing with SONiC and VPP: Lab Demo and Performance Insights – Part Two
By: Linux.com Editorial Staff Wed, 29 Oct 2025 13:45:35 +0000 In Part One of this series, we examined how the SONiC control plane and the VPP data plane form a cohesive, software-defined routing stack through the Switch Abstraction Interface. We outlined how SONiC’s Redis-based orchestration and VPP’s user-space packet engine come together to create a high-performance, open router architecture. In this second part, we’ll turn theory into practice. You’ll see how the architecture translates into a working environment, through a containerized lab setup that connects two SONiC-VPP routers and Linux hosts. Reconstructing the L3 Routing Demo Understanding the architecture is foundational, but the true power of this integration becomes apparent through a practical, container-based lab scenario. The demo constructs a complete L3 routing environment using two SONiC-VPP virtual routers and two Linux hosts, showcasing how to configure interfaces, establish dynamic routing, and verify end-to-end connectivity. Lab Environment and Topology The demonstration is built using a containerized lab environment, orchestrated by a tool like Containerlab. This approach allows for the rapid deployment and configuration of a multi-node network topology from a simple declarative file. The topology consists of four nodes: router1: A SONiC-VPP virtual machine acting as the gateway for the first LAN segment. router2: A second SONiC-VPP virtual machine, serving as the gateway for the second LAN segment. PC1: A standard Linux container representing a host in the first LAN segment. PC2: Another Linux container representing a host in the second LAN segment. These nodes are interconnected as follows: An inter-router link connects router1:eth1 to router2:eth1. PC1 is connected to router1 via PC1:eth2 and router1:eth2. PC2 is connected to router2 via PC2:eth2 and router2:eth2. Initial Network Configuration Once the lab is deployed, a startup script applies the initial L3 configuration to all nodes. Host Configuration: The Linux hosts, PC1 and PC2, are configured with static IP addresses and routes. PC1 is assigned the IP address 10.20.1.1/24 and is given a static route for the 10.20.2.0/24 network via its gateway, router1 (10.20.1.254). PC2 is assigned the IP address 10.20.2.1/24 and is given a static route for the 10.20.1.0/24 network via its gateway, router2 (10.20.2.254). Router Interface Configuration: The SONiC-VPP routers are configured using the standard SONiC CLI. router1: The inter-router interface Ethernet0 is configured with the IP 10.0.1.1/30. The LAN-facing interface Ethernet4 is configured with the IP 10.20.1.254/24. router2: The inter-router interface Ethernet0 is configured with the IP 10.0.1.2/30. The LAN-facing interface Ethernet4 is configured with the IP 10.20.2.254/24. After IP assignment, each interface is brought up using the sudo config interface startup command. Dynamic Routing with BGP With the interfaces configured, dynamic routing is established between the two routers using the FRRouting suite integrated within SONiC. The configuration is applied via the vtysh shell. iBGP Peering: An internal BGP (iBGP) session is established between router1 and router2 as they both belong to the same Autonomous System (AS) 65100. router1 (router-id 10.0.1.1) is configured to peer with router2 at 10.0.1.2. router2 (router-id 10.0.1.2) is configured to peer with router1 at 10.0.1.1. Route Advertisement: Each router advertises its connected LAN segment into the BGP session. 
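To make that concrete, the router1 side of this configuration might look roughly like the following (a minimal sketch assuming the standard SONiC CLI and FRR vtysh syntax; interface names and addresses come from the lab description above, and the demo’s own scripts may differ):

# router1: assign addresses and bring up the interfaces with the SONiC CLI
sudo config interface ip add Ethernet0 10.0.1.1/30
sudo config interface ip add Ethernet4 10.20.1.254/24
sudo config interface startup Ethernet0
sudo config interface startup Ethernet4

# router1: iBGP peering and route advertisement, entered in vtysh's configure mode
# (router2 mirrors this with router-id 10.0.1.2, neighbor 10.0.1.1, and network 10.20.2.0/24)
router bgp 65100
 bgp router-id 10.0.1.1
 neighbor 10.0.1.2 remote-as 65100
 address-family ipv4 unicast
  network 10.20.1.0/24
 exit-address-family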
router1 advertises the 10.20.1.0/24 network. router2 advertises the 10.20.2.0/24 network. This BGP configuration ensures that router1 learns how to reach PC2’s network via router2, and router2 learns how to reach PC1’s network via router1. Verification and Data Path Analysis The final phase is to verify that the configuration is working correctly at every layer of the stack. Control Plane Verification: The BGP session status and learned routes can be checked from within vtysh on either router. On router1, the command show ip bgp summary would confirm an established peering session with router2. The command show ip route would display the route to 10.20.2.0/24 learned via BGP from 10.0.1.2. Data Plane Verification: To confirm the route has been programmed into the VPP data plane, an operator would access the VPP command-line interface (vppctl) inside the syncd container. The command show ip fib would display the forwarding table, which should include the BGP-learned route to 10.20.2.0/24, confirming that the state has been successfully synchronized from the control plane. End-to-End Test: The ultimate test is to generate traffic between the hosts. A simple ping 10.20.2.1 from PC1 should succeed. This confirms that the entire data path is functional: PC1 sends the packet to its gateway (router1), router1 performs a lookup in its VPP FIB and forwards the packet to router2, which then forwards it to PC2. The return traffic follows the reverse path, validating the complete, integrated solution. This practical demonstration, using standard container tooling and declarative configurations, powerfully illustrates the operational simplicity and robustness of the SONiC-VPP architecture for building high-performance, software-defined L3 networks. Performance Implications and Future Trajectories The elegance of the SONiC-VPP integration is matched by its impressive performance and its applicability to a wide range of modern networking challenges. By offloading the data plane from the kernel to a highly optimized user-space framework, this solution unlocks capabilities that are simply unattainable with traditional software-based routing. The performance gains are impressive. VPP is consistently benchmarked as being much faster than kernel-based forwarding, with some sources claiming a 10x to 100x improvement in packet processing throughput.6 This enables use cases like “Terabit IPSec” on multi-core COTS servers, a feat that would have been unthinkable just a few years ago.7 Real-world deployments have validated this potential. A demonstration at the ONE Summit 2024 showcased a SONiC-VPP virtual gateway providing multi-cloud connectivity between AWS and Azure. 
The performance testing revealed a round-trip time of less than 1 millisecond between application workloads and the cloud provider on-ramps (AWS Direct Connect and Azure ExpressRoute), highlighting its suitability for high-performance, low-latency applications.9 This level of performance opens the door to a variety of demanding use cases: High-Performance Edge Routing: As a virtual router or gateway, SONiC-VPP can handle massive traffic volumes at the network edge, serving as a powerful and cost-effective alternative to proprietary hardware routers.8 Multi-Cloud and Hybrid Cloud Connectivity: The solution is ideal for creating secure, high-throughput virtual gateways that interconnect on-premises data centers with multiple public clouds, as demonstrated in the ONE Summit presentation.9 Integrated Security Services: The performance of VPP makes it an excellent platform for computationally intensive security functions. Commercial offerings based on this architecture, like AsterNOS-VPP, package the solution as an integrated platform for routing, security (firewall, IPsec VPN, IDS/IPS), and operations.8 While the raw throughput figures are compelling, a more nuanced benefit lies in the nature of the performance itself. The Linux kernel, for all its power, is a general-purpose operating system. Its network stack is subject to non-deterministic delays, caused by system interrupts, process scheduling, and context switches. This introduces unpredictable latency, which can be detrimental to sensitive applications.12 VPP, by running in user space on dedicated cores and using poll-mode drivers, sidesteps these sources of unpredictability. This provides not just high throughput, but consistent, low-latency performance. For emerging workloads at the edge, such as real-time IoT data processing, AI/ML inference, and 5G network functions, this predictable performance is often more critical than raw aggregate bandwidth.16 The key value proposition, therefore, is not just being “fast,” but being “predictably fast.” The SONiC-VPP project is not static; it is an active area of development within the open-source community. A key focus for the future is to deepen the integration by extending the SAI API to expose more of VPP’s rich feature set to the SONiC control plane. Currently, SAI primarily covers core L2/L3 forwarding basics. However, VPP has a vast library of advanced features. Active development efforts are underway to create SAI extensions for features like Network Address Translation (NAT) and advanced VxLAN multi-tenancy capabilities, which would allow these functions to be configured and managed directly through the standard SONiC interfaces.9 A review of pull requests on thesonic-platform-vpp GitHub repository shows ongoing work to add support for complex features like VxLAN BGP EVPN and to improve ACL testing, indicating a healthy and forward-looking development trajectory.17 The Future is Software-Defined and Open The integration of the SONiC control plane with the VPP data plane is far more than a clever engineering exercise. It is a powerful testament to the maturity and viability of the disaggregated networking model. This architecture successfully combines the strengths of two of the most significant open-source networking projects, creating a platform that is flexible, performant, and free from the constraints of proprietary hardware. 
It proves that the separation of the control and data planes is no longer a theoretical concept but a practical, deployable reality that offers unparalleled architectural freedom. The synergy between SONiC and FD.io VPP, both flagship projects of the Linux Foundation, highlights the immense innovative power of collaborative, community-driven development.1 This combined effort has produced a solution that fundamentally redefines the router, transforming it from a monolithic hardware appliance into a dynamic, high-performance software application that can be deployed on commodity servers. Perhaps most importantly, this architecture provides the tools to manage network infrastructure with the same principles that govern modern software development. As demonstrated by the L3 routing demo’s lifecycle-building from code, configuring with declarative files, and deploying as a versioned artifact, the SONiC-VPP stack paves the way for true NetDevOps. It enables network engineers and operators to embrace automation, version control, and CI/CD pipelines, finally treating network infrastructure as code. In doing so, it delivers on the ultimate promise of software-defined networking – a network that is as agile, scalable, and innovative – as the applications it supports. Sources SONiC Foundation – Linux Foundation Project https://sonicfoundation.dev/ SONiC Architecture – Software for Open Networking in the Cloud (SONiC) – Cisco DevNet https://developer.cisco.com/docs/sonic/sonic-architecture/ The Technology Behind FD.io – FD.io https://fd.io/technology/ SONiC Architecture and Deployment Deep Dive – Cisco Live https://www.ciscolive.com/c/dam/r/ciscolive/global-event/docs/2025/pdf/BRKMSI-2004.pdf Openstack edge cloud with SONiC VPP for high-speed and low-latency multi-cloud connectivity – YouTube https://www.youtube.com/watch?v=R6elTX_Zmtk Pull requests · sonic-net/sonic-platform-vpp – GitHub https://github.com/sonic-net/sonic-platform-vpp/pulls The post Disaggregated Routing with SONiC and VPP: Lab Demo and Performance Insights – Part Two appeared first on Linux.com.
-
Pomodoro With Super Powers: This Linux App Will Boost Your Productivity
by: Roland Taylor Wed, 29 Oct 2025 10:29:16 GMT There is no shortage of to-do apps in the Linux ecosystem, but few are designed to keep you focused while you work. Koncentro takes a direct approach by bundling a versatile task list, a Pomodoro-style timer, and a configurable website blocker into one tidy solution. What is Koncentro exactly?Koncentro is a free, open-source productivity tool, inspired by the likes of Super Productivity and Chomper. The project is actively developed by Bishwa Saha (kun-codes), with source code, issue tracking, and discussions hosted on GitHub. Built with a sleek Qt 6 interface echoing Microsoft’s Fluent Design language, this app pairs modern aesthetics with solid functionality. The latest release, version 1.1.0, arrived earlier this month with new features and quality-of-life improvements, including sub-tasks and a system-tray option. That said, it's not without quirks, and first-time users may hit a few bumps along the way. However, once you get past the initial hurdles and multistep setup, it becomes a handy companion for getting things done while blocking out common distractions. In this review, we examine what sets Koncentro apart from the to-do crowd and help you determine whether it is the right fit for your workflow. Bringing Koncentro’s methods into focusIt is rare to find an app that gives you everything you need in one go without becoming overstuffed or cumbersome to use. Koncentro strikes a solid balance, offering more than to-do apps that stop at lists and due dates without veering into overwhelm. The pomodoro timer in Koncentro during a focus periodIt combines the Pomodoro technique with timeblocking, emphasizing an economical approach where time is the primary unit of work. As such, it caters to an audience that aims to structure the day rather than the week. In fact, there is no option to add tasks with specific dates — only times. This omission is not a limitation so much as a design choice. It fits the Pomodoro philosophy of tackling work in short, focused intervals, encouraging you to act now rather than plan for later. It makes Koncentro perfect for day-to-day activities, but you may need to find another solution if you're looking for long-term task tracking. Backing up this standard functionality is a snazzy website blocker to help you stave off distractions while you get down to work. The hands-on experienceIn my experience, Koncentro proved to be quite pleasant to use, as someone who relies on similar apps in my daily life. In this section, I'll focus on the overall experience of using the app from a fresh install onward. Using Koncentro📋While Koncentro features a distinct Pomodoro timer, I will not discuss this feature in depth in this section.First runOn the first run, Koncentro will guide you through setting up its website blocking feature; the app's core function outside simple task management. In order for this to work, the system must temporarily disconnect from the internet, since the app must set up a proxy to facilitate website blocking. All filtering happens locally; no browsing data is sent anywhere outside your machine. I'll explain how this works when we get to the website blocker in detail. The first of two setup dialogs in Koncentro🚧Note: The proxy Koncentro relies on runs on port :8080, so it may conflict with other services using this port. 
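A quick way to see whether anything is already listening there (a minimal sketch assuming the iproute2 ss utility, which ships with most modern distributions):

# list listening TCP sockets and filter for port 8080
ss -ltnp | grep ':8080'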
Be sure to check for any conflicts before running the setup.The second setup dialog in KoncentroOnce you've managed to set it up (or managed to bypass this step), Koncentro will walk you through an introductory tutorial, showing how its primary features work. Once the tutorial is completed, you can rename or remove the default workspace and tasks. 🚧Be aware that there is a known bug on X11, the tutorial traps focus and may not be able to exit until the app is restarted.Straightforward task managementKoncentro follows a rather uncomplicated approach to task management. There are no tags, no due dates, and no folders. Also, tasks cannot overlap, since the timer for one task is automatically stopped if you start another. Furthermore, while tasks can have sub-tasks, parent tasks cannot be started on their own. Adding a task in KoncentroThis approach may not be for everyone, but since the app is focused on streamlined productivity, it makes sense to arrange things in this way, as you're unlikely to lose track of any given tasks with strict rules around time management. Tasks must be timeboxed upon creation, meaning you have to select a maximum time for each task to be accomplished within. This is set as the "estimated time" value. When you start the timer on any task, "elapsed time" is recorded and contrasted against the estimated time. This comes in pretty handy if you want to measure your performance against a benchmark or goal. Editing the time for a task in KoncentroActive and uncompleted tasks are grouped into "To Do Tasks", and finished tasks into "Completed Tasks", though this doesn't happen automatically. Since there are no folders or tags, task organization is accomplished by simply dragging tasks between these two sections. Workspaces: a subtle power toolOne of the standout features of Koncentro is the way it uses workspaces to manage not just tasks, but overall settings. While this implementation is still clearly in its infancy, I see the potential for even more powerful functionality in the future. Managing Workspaces in KoncentroCurrently, workspaces serve to group your tasks and are protected by an optional website blocker to keep your attention on the present goal. 📋In order to access workspaces, you must first make sure to stop any timers on your tasks, and ensure that "Current Task:" says "None" in the bottom left of the window. If the workspace button is greyed out, clicking the stop button will fix this.The website blocker in depthPerhaps the most distinguishing feature of Koncentro is its website blocker. It's not something you find in most to-do list apps for Linux, yet its simplicity and versatility make it a truly standout addition. Plus, the fact that each workspace can have its own block list makes Koncentro especially useful for scoping your focus periods and break times. The website blocker in KoncentroIn terms of usage, it's mostly seamless once you've passed the initial setup process, which isn't too tedious, but certainly could be made smoother overall. Koncentro doesn't block any particular sites by default, so you'll need to manually add any sites you'd like to block to each workspace. Note: Website blocking is only active when there is an active task. If all tasks are stopped, website blocking will not be activated. Editing the blocklist in KoncentroKoncentro relies on a Man In The Middle (MITM) proxy called mitmproxy to power this feature. 
Don't let the name throw you off: mitmproxy is a trusted open-source Python tool commonly used for network testing, repurposed here to handle local HTTPS interception for blocking rules. It's only activated when you're performing a task, and can be disabled altogether in Koncentro's settings. The mitmproxy home pagePart of the setup process involves installing its certificates if you wish to use the website blocker. You'll need to do this both for your system and for Firefox (if you're using Firefox as your browser), since Firefox does not use the system's certificates. Example usage scenarioLet's say, for instance, you want to block all social media while you're working. You'd just need to add these sites to your "At-work space" (or whatever you'd like to call it) and get down to business. Website blocking with Koncentro is simple and straightforwardEven if a friend sends you a YouTube video, you won't be distracted by thumbnails because that URL would be locked out for that time period. Once that stretch of work ends, you could switch to your "taking a break" workspace, where social media is allowed, and (if you like) all work-related URLs are blocked. But does it really work?That's the real question here, of course: whether this is actually effective in practice. Of course, if you're highly distractible, it might be just the thing to help you keep on track. However, if you're already quite disciplined in your work, it might not be particularly meaningful. It really depends on how you work as an individual, after all. That said, I can definitely see a benefit for power users who know how to leverage the site blocker to prevent notifications in popular chat apps, which must still communicate with a central server to notify you. Sure, you can use "Do not disturb" in desktop environments that support it, but this doesn't consistently disable sound or notifications (if the chat app in question uses non-native notifications, for instance). A focus on aesthetics - Why it feels nice to useThe choice to use Microsoft's Fluent design language may seem strange to many Linux users, but in fairness, Koncentro is a cross-platform application, and Windows still maintains the dominant position in the market. The Fluent Design language home page in Microsoft Edge, which also uses this design language for its UI.That being said, in many ways, it's similar enough in practical usage to the UI libraries and UX principles popular within the Linux ecosystem. It's close enough in functionality to apps built with Kirigami and Libadwaita that it doesn't seem too out of place among them. CustomizationKoncentro features a limited degree of customization options, following the "just enough" principle that seems to be the trend in modern design. It threads the delicate line between the user's freedom for customization and the developer's intentions for how their app should look and behave across platforms. Koncentro using the "Light" themeYou get the standard light and dark modes, and the option to follow your system's preference. Using it on the Gnome desktop, it picked up my dark mode preference out of the box. System IntegrationKoncentro integrates well with the system tray support, using a standard app indicator with a simple menu. The Koncentro indicator menu in the Gnome Desktop on Ubuntu with Dash-To-Panel enabledHowever, while you get the option to choose a theme colour, it doesn't give the option to follow your system's accent colour, unlike most modern Linux/open-source applications. 
It also does not feature rounded corners, which some users may find disappointing. Koncentro with a custom accent colour selectedThe quirks that still hold it backAs mentioned earlier, Koncentro has a number of quirks that detract from the overall experience, though most of these are limited to its first-time run. Mandatory website blocker setupPerhaps the most unconventional choice, there's no way to start using Koncentro until its website blocker is set up. It will not allow you to use the app (even to disable the website blocker) in any way without first completing this step. While you can "fake it" by clicking "setup completed" in the second pop-up dialog, it creates a false sense of urgency, which could be especially confusing for less experienced users. This is perhaps where Koncentro would be better served by offering a smoother initial setup experience. No way to copy workspaces/settingsWhile you can have multiple workspaces with their own settings, you can't duplicate workspaces or even copy your blocklists between them. This isn't a big deal if you're just using a couple of workspaces with simple block/allow lists, but if you're someone who wants to have a complex setup with shared lists on multiple workspaces, you'll need to add them to each workspace manually. No penalty for time overrunsAt this time, nothing happens when you go over time — no warnings, no sounds, no notifications. If you're trying to stay on task and run overtime, it would help to have some kind of "intervention" or warning. No warning for a time overrunI've gone ahead and made feature requests for possible solutions to these UX issues: export/import for lists, warnings or notifications for overruns, and copying workspace settings. These are all just small limitations in what is otherwise a remarkably cohesive early-stage project. Installing Koncentro on LinuxBeing that it's available on Flathub, Koncentro can be installed on all Linux distributions that support Flatpaks. You can grab it from there through your preferred software manager, or run this command in the terminal:

flatpak install flathub com.bishwasaha.Koncentro

Alternatively, you can also get official .deb or .rpm packages for your distro of choice (or source code for compiling it yourself) from the project's releases page. ConclusionAll told, Koncentro is a promising productivity tool that offers a blend of simplicity, aesthetic appeal, and smooth functionality. It's a great tool for anyone who likes to blend time management with structure. For Linux users who value open-source productivity tools that respect privacy and focus, it's a refreshing middle ground between the more minimal to-do lists and full-blown productivity suites. It's still young, but it already shows how open-source can combine focus and flexibility without unnecessary noise.
-
415: Babel Choices
by: Chris Coyier Tue, 28 Oct 2025 18:07:00 +0000 Robert and Chris hop on the show to talk about choices we’ve had to make around Babel. Probably the best way to use Babel is to just use the @babel/preset-env plugin so you get modern JavaScript features processed down to a level of browser support you find comfortable. But Babel supports all sorts of plugins, and in our Classic Editor, all you do is select “Babel” from a dropdown menu and that’s it. You don’t see the config nor can you change it, and that config we use does not use preset-env. So we’re in an interesting position with the 2.0 editor. We want to give new Pens, which do support editable configs, a good modern config, and we want to give all converted Classic Pens a config that doesn’t break anything. There is some ultra-old cruft in that old config, and supporting all of it felt kinda silly. We could support a “legacy” Babel block that does support all of it, but so far, we’ve decided to just provide a config that handles the vast majority of old stuff, while using the same Babel block that everyone will get on day one. We’re still in the midst of working on our conversion code and verifying the output of loads of Classic Pens, so we’ll see how it goes! Time Jumps 00:15 New editor and blocks at CodePen 04:10 Dealing with versioning in blocks 14:44 What the ‘tweener plugin does 19:31 What we did with Sass? 22:10 Trying to understand the TC39 process 27:41 JavaScript and APIs
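For reference, the preset-env setup mentioned above usually amounts to a tiny config along these lines (a generic sketch using Babel’s documented options; this is not CodePen’s actual config, and the browser targets are just an example):

{
  "presets": [
    ["@babel/preset-env", {
      "targets": "last 2 versions, not dead"
    }]
  ]
}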
-
Automating XSS Hunting with Dalfox [Pen Testing Hands-on]
by: Hangga Aji Sayekti Tue, 28 Oct 2025 18:46:16 +0530 Want a fast XSS check? Dalfox does the heavy lifting. It auto-injects, verifies (headless/DOM checks included), and spits out machine-friendly results you can act on. Below: installing on Kali, core commands, handy switches, and a demo scan against a safe target. Copy, paste, profit. (lab-only.)

Behind the Scenes: How Dalfox Works
Dalfox is more than a simple payload injector. Its efficiency comes from a smart engine that:
- Performs Parameter Analysis: Identifies all parameters and checks if input is reflected in the response
- Uses a DOM Parser: Analyzes the Document Object Model to verify if a payload would truly execute in the browser
- Applies Optimization: Eliminates unnecessary payloads based on context and uses abstraction to generate specific payloads
- Leverages Parallel Processing: Sends requests concurrently, making the scanning process exceptionally fast

🚧 testphp.vulnweb.com is a purposely vulnerable playground — safe to practice on. Always obtain explicit permission before scanning other domains.

1. Install dependencies
Update packages and make sure Go (Golang) is installed:

sudo apt update && sudo apt upgrade -y
go version || sudo apt install golang-go -y

If go version shows a Go runtime, you’re good.

2. Install Dalfox
Install the latest Dalfox binary using Go:

go install github.com/hahwul/dalfox/v2@latest
export PATH=$PATH:$(go env GOPATH)/bin # add GOPATH/bin to PATH if needed
dalfox version

That installs Dalfox into your Go bin folder so you can run dalfox directly.
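From here, a first scan against the practice target might look something like this (a minimal sketch assuming Dalfox’s standard url and pipe modes; exact flags and output options vary between versions):

# Scan a single parameterized URL on the intentionally vulnerable lab site
dalfox url "http://testphp.vulnweb.com/listproducts.php?cat=1"

# Or pipe in a list of URLs gathered elsewhere and let Dalfox test each one
cat urls.txt | dalfox pipe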
-
Pure CSS Tabs With Details, Grid, and Subgrid
by: Silvestar Bistrović Mon, 27 Oct 2025 14:33:17 +0000 Making a tab interface with CSS is a never-ending topic in the world of modern web development. Are they possible? If yes, could they be accessible? I wrote how to build them the first time nine long years ago, and how to integrate accessible practices into them. Although my solution then could possibly still be applied today, I’ve landed on a more modern approach to CSS tabs using the <details> element in combination with CSS Grid and Subgrid. First, the HTML Let’s start by setting up the HTML structure. We will need a set of <details> elements inside a parent wrapper that we’ll call .grid. Each <details> will be an .item as you might imagine each one being a tab in the interface. <div class="grid"> <!-- First tab: set to open --> <details class="item" name="alpha" open> <summary class="subitem">First item</summary> <div><!-- etc. --></div> </details> <details class="item" name="alpha"> <summary class="subitem">Second item</summary> <div><!-- etc. --></div> </details> <details class="item" name="alpha"> <summary class="subitem">Third item</summary> <div><!-- etc. --></div> </details> </div> These don’t look like true tabs yet! But it’s the right structure we want before we get into CSS, where we’ll put CSS Grid and Subgrid to work. Next, the CSS Let’s set up the grid for our wrapper element using — you guessed it — CSS Grid. Basically what we’re making is a three-column grid, one column for each tab (or .item), with a bit of spacing between them. We’ll also set up two rows in the .grid, one that’s sized to the content and one that maintains its proportion with the available space. The first row will hold our tabs and the second row is reserved for the displaying the active tab panel. .grid { display: grid; grid-template-columns: repeat(3, minmax(200px, 1fr)); grid-template-rows: auto 1fr; column-gap: 1rem; } Now we’re looking a little more tab-like: Next, we need to set up the subgrid for our tab elements. We want subgrid because it allows us to use the existing .grid lines without nesting an entirely new grid with new lines. Everything aligns nicely this way. So, we’ll set each tab — the <details> elements — up as a grid and set their columns and rows to inherit the main .grid‘s lines with subgrid. details { display: grid; grid-template-columns: subgrid; grid-template-rows: subgrid; } Additionally, we want each tab element to fill the entire .grid, so we set it up so that the <details> element takes up the entire available space horizontally and vertically using the grid-column and grid-row properties: details { display: grid; grid-template-columns: subgrid; grid-template-rows: subgrid; grid-column: 1 / -1; grid-row: 1 / span 3; } It looks a little wonky at first because the three tabs are stacked right on top of each other, but they cover the entire .grid which is exactly what we want. Next, we will place the tab panel content in the second row of the subgrid and stretch it across all three columns. We’re using ::details-content (good support, but not yet in WebKit at the time of writing) to target the panel content, which is nice because that means we don’t need to set up another wrapper in the markup simply for that purpose. details::details-content { grid-row: 2; /* position in the second row */ grid-column: 1 / -1; /* cover all three columns */ padding: 1rem; border-bottom: 2px solid dodgerblue; } The thing about a tabbed interface is that we only want to show one open tab panel at a time. 
Thankfully, we can select the [open] state of the <details> elements and hide the ::details-content of any tab that is :not([open]) by combining the two selectors: details:not([open])::details-content { display: none; } We still have overlapping tabs, but the only tab panel we’re displaying is currently open, which cleans things up quite a bit: Turning <details> into tabs Now on to the fun stuff! Right now, all of our tabs are visually stacked. We want to spread those out and distribute them evenly along the .grid’s top row. Each <details> element contains a <summary> providing both the tab label and the button that toggles each one open and closed. Let’s place the <summary> element in the first subgrid row and apply light styling when a <details> tab is in an [open] state: summary { grid-row: 1; /* First subgrid row */ display: grid; padding: 1rem; /* Some breathing room */ border-bottom: 2px solid dodgerblue; cursor: pointer; /* Update the cursor when hovered */ } /* Style the <summary> element when <details> is [open] */ details[open] summary { font-weight: bold; } Our tabs are still stacked, but now we have some light styles applied when a tab is open: We’re almost there! The last thing is to position the <summary> elements in the subgrid’s columns so they are no longer blocking each other. We’ll use the :nth-of-type() pseudo-class to select each one individually by their order in the HTML: /* First item in first column */ details:nth-of-type(1) summary { grid-column: 1 / span 1; } /* Second item in second column */ details:nth-of-type(2) summary { grid-column: 2 / span 1; } /* Third item in third column */ details:nth-of-type(3) summary { grid-column: 3 / span 1; } Check that out! The tabs are evenly distributed along the subgrid’s top row: Unfortunately, we can’t use loops in CSS (yet!), but we can use variables to keep our styles DRY: summary { grid-column: var(--n) / span 1; } Now we need to set the --n variable for each <details> element. I like to inline the variables directly in HTML and use them as hooks for styling: <div class="grid"> <details class="item" name="alpha" open style="--n: 1"> <summary class="subitem">First item</summary> <div><!-- etc. --></div> </details> <details class="item" name="alpha" style="--n: 2"> <summary class="subitem">Second item</summary> <div><!-- etc. --></div> </details> <details class="item" name="alpha" style="--n: 3"> <summary class="subitem">Third item</summary> <div><!-- etc. --></div> </details> </div> Again, because loops aren’t a thing in CSS at the moment, I tend to reach for a templating language, specifically Liquid, to get some looping action. This way, there’s no need to explicitly write the HTML for each tab: {% for item in itemList %} <div class="grid"> <details class="item" name="alpha" style="--n: {{ forloop.index }}" {% if forloop.first %}open{% endif %}> <!-- etc. --> </details> </div> {% endfor %} You can roll with a different templating language, of course. There are plenty out there if you like keeping things concise! Final touches OK, I lied. There’s one more thing we ought to do. Right now, you can click only on the last <summary> element because all of the <details> pieces are stacked on top of each other in a way where the last one is on top of the stack. You might have already guessed it: we need to put our <summary> elements on top by setting z-index. 
summary { z-index: 1; } Here’s the full working demo: CodePen Embed Fallback Accessibility The <details> element includes built-in accessibility features, such as keyboard navigation and screen reader support, for both expanded and collapsed states. I’m sure we could make it even better, but it might be a topic for another article. I’d love some feedback in the comments to help cover as many bases as possible. It’s 2025, and we can create tabs with HTML and CSS only without any hacks. I don’t know about you, but this developer is happy today, even if we still need a little patience for browsers to fully support these features. Pure CSS Tabs With Details, Grid, and Subgrid originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Community Strikes Back: 12 Open Source Projects Born from Resistance
by: Pulkit Chandak Mon, 27 Oct 2025 07:34:29 GMT When open source is spoken about, it is done so just as a licensing model for software. But when you think about it, it is often deeper than just that. With the open source philosophy, developers make good software exist just for the sake of its existence. Sometimes this good software is so good that it disrupts the already existing players in the area, tipping the balance entirely. We'll be looking at the most significant cases of such an event in this article. So sit back and enjoy this casual read. 1. Git decimates BitKeeperImagine being the creator of Linux and yet people know you more for creating Git. That's the story of Linus Torvalds. Before Git, BitKeeper was the primary software used for distributed revision control of the Linux kernel source code. And it was revolutionary because before that, according to Torvalds, the only good option was to manually check the patches and put them in. While Stallman and some others criticized the use of a proprietary tool for the development of the open source Linux kernel project, BitKeeper remained the choice of VCS tool. It was in 2005 that BitKeeper revoked the free license for the Linux kernel project. They blamed Andrew Tridgell, who tried creating an open source version of BitKeeper, the same way he had created the Samba protocol, by reverse engineering the existing project. This move violated BitKeeper's terms, as Tridgell was employed by OSDL, the predecessor of the Linux Foundation, the non-profit organization pushing the Linux kernel development. After a public feud with Tridgell, Torvalds started working on his own source control software and released the first version before the month ended. And that's how Git was born, out of necessity, just like the Linux project. Fun fact: this incident also led to the birth of Mercurial, another open source VCS. The popularity of Git overshadowed Mercurial. BitKeeper then turned open source before eventually being discontinued. Git, however, remains the most popular source control tool, with GitHub and GitLab, etc. hosting the most massive code bases used by everyone. 2. X.Org takes on XFree86's advertising clauseX Window System, aka X11, is one of the graphic windowing systems that are used in many Linux distributions as of now, and was used almost exclusively by all major distributions before Wayland came along. The most popular implementation of X11 used to be XFree86. It began to go sour when the development of the software started to stagnate, as the core team began to resist progress. Things changed in 2004 when XFree86 wanted to include an advertising clause in their license, making it incompatible with the GPL license. This caused some tension within the community, with the developers of major distributions threatening to pull out. As a response, the X.Org Foundation made the X.Org Server based on the last open source compatible version of XFree86. It became really popular really fast, replacing XFree86 in most of the major distributions within months. With a modular structure and transparency in development, X.Org became integral in graphical Linux operating systems, only now starting to be slowly replaced by a different windowing system entirely, Wayland. 3. Icinga takes on NagiosIn an IT workplace, all the technological elements of the system need to be monitored well. This is what is done by a network and infrastructure monitoring system, like Nagios. It is an application that can watch over servers, devices, applications, computers, etc. 
over a network, and report errors, if there are any. Nagios dominated this area, being open-source and extensible. This modularity, however, became the reason for its downfall as the developers made the decision to move certain plugins and features behind paid tiers. Due to this increased commercialization and closed development, they started losing their users. As a response, Icinga was made from a Nagios fork in 2009. They kept backward compatibility to keep systems from breaking, but took a step towards the future. Icinga offered a new web interface, configuration format and improved scalability, essentially replacing Nagios as the preferred platform. 4. Illumos carries the legacy of OpenSolarisSun Microsystems had been a major player in the tech world, both hardware and software wise, during the dot-com boom. Solaris was a proprietary, UNIX-based operating system designed by them that became really important in the industry. They then released OpenSolaris, which was their daring attempt at open-sourcing their powerful OS. Eventually, however, Oracle acquired Sun in 2010, abruptly abandoning the OpenSolaris project, leaving a lot of people hanging in the process. The solution? Some of the former Sun engineers and the open-source community came together to build Illumos from the last open-source version of OpenSolaris. It aimed to carry forward the userbase and legacy of OpenSolaris, and to continually develop new features, keeping the OS relevant. It has retained the excellent and distinguishing features of OpenSolaris such as the ZFS filesystem and DTrace. It has since then been the basis for other operating systems as well, like OpenIndiana, OmniOS and SmartOS. 5. OpenSearch when ElasticSearch went SSPLElasticSearch, soon after its release, became the preferred search engine of enterprises all across the world. Providing rich analytics and usage statistics, it seemed to fulfill all the needs. Initially open source under Apache 2.0, ElasticSearch was later on moved to the SSPL (Server Side Public License), which is not a license recognized by the OSI. Amazon saw the opportunity and picked up the slack by forking the last open source release of ElasticSearch and adding their own spin to it, bringing about OpenSearch, which is open source. OpenSearch retains most of the important features ElasticSearch had along with the look and feel, and adds more on top such as easy AWS integration and cloud alignment, which proves to be a great advantage for most web service purposes. ElasticSearch came back as an open source project again in 2024. But the damage was done, as big players like Amazon had already put OpenSearch at the forefront of cloud servers. 6. VeraCrypt continues TrueCryptDisk encryption is one of the most, if not the most, important security features on an operating system. For a very long time, this job was reliably done by TrueCrypt, with automatic and on-the-fly encryption. However, suddenly in 2014, TrueCrypt announced that they would not develop the program any further, and that the program was "not secure". It is unclear what the proper reasoning was (as flaws that major were not found), but in their message, they asked the users to switch to Microsoft's BitLocker. That didn't seem to take with the open-source community, which then proceeded to build VeraCrypt, forked from the last version of TrueCrypt. VeraCrypt carried on the existing features well, also improving various factors including stronger encryption algorithms, better key derivation functions and rapid bug fixing. 
It is known for being transparent and community-driven and hence very trusted. 7. Rocky Linux born in the aftermath of CentOS fiascoCentOS was an operating system by Red Hat that was based on RHEL (Red Hat Enterprise Linux) source code, getting all of its features a few months after RHEL itself, only free of cost. CentOS was eventually transitioned into CentOS Stream, which is a rolling release. The features now came in faster, but the stability was significantly hindered. This made it unsuitable for development environments, commercial uses or even personal usage. To resolve the situation, one of the original creators of CentOS created Rocky Linux in 2021, filling in the gap that CentOS left behind. It was, and ever since has been, enterprise-ready and rock-solid stable. Being based on RHEL, it can be used in high-performance computing, cloud and hyperscale computing, as well as for smaller commercial systems. 8. OpenELA tackles RHEL's partially closed source movesFollowing up the previous point, this one carries it further. Red Hat had announced that the only source code related to RHEL that would be publicly available would be CentOS Stream, and for Red Hat customers, it would be available through the customer portal. Understandably, the developers of the distributions based on RHEL were not pleased with the decision. CIQ (the company backing Rocky Linux), SUSE and Oracle responded by forming OpenELA (Open Enterprise Linux Association) with the goal of creating distributions compatible with RHEL, while keeping the source code open to all. It was supposed to be an answer to the hole that the dependency of enterprise operating systems on CentOS had left behind. The group has automated the task of paying to get access to the source code and then publishing it on a public repository, out for everyone to be able to access it and make an operating system out of it. Several build systems like Rocky Linux's Peridot, the SUSE Open Build Service, Fedora Koji, and the AlmaLinux Build System were born out of the same. 9. OpenTofu fills the void after Terraform opted for Business Source LicenseThe story starts with Terraform being a terrific open source tool for IaC (infrastructure-as-code) purposes. The idea is that it will let you visualize, manage and plan your computing infrastructure (such as VMs, databases, VPNs, etc.) not manually, but as code, which then automatically executes the needed action. Terraform started as open source, cross-cloud and very extensible, which made it the go-to choice for everyone to the point where other services were being built around Terraform. In 2023, however, they decided to move from the open source MPL license to a BSL (Business Source License), which imposed several restrictions that put certain users at risk. Concerned about the problems that might occur in the future, open source developers forked the last open source version of Terraform and released OpenTofu, which was then backed by the Linux Foundation itself. Now, after some time has passed, OpenTofu has not only proven successful in its mission, but has features that Terraform lacks. Listening to the community and its needs, OpenTofu has found great success. 10. Valkey forked from Redis as it changed licenseRedis (REmote DIctionary Server) was built to be an in-memory data store with blazing speed and utility. This meant that it could contain and retrieve data from RAM (with optional persistence to disk) with microsecond latency. 
This has several essential uses such as caching, session storage (like shopping carts), real-time analytics (like, share counts, etc.) and so on. Initially open source under the BSD license, it became wildly popular and an integral part of the internet's infrastructure. In 2024, however, Redis announced a change in license which would restrict its use in commercial clouds, heavily affecting the users. In response, Valkey was created, which was born out of the last open source version of Redis. 100% Redis compatible and not governed by a single company, Valkey thrived as a drop-in replacement for Redis. 11. LineageOS carries on after CyanogenMod's demiseFor a very long time, CyanogenMod had been the go-to option for Android users to install an alternative open-source operating system which could give them more control, customization and most importantly, freedom from any of the manufacturer's proprietary trackers, etc. Eventually, Cyanogen Inc. shifted its focus to more proprietary projects and discontinued the project. The developers' response was to fork the last known version into LineageOS, successfully taking the place of CyanogenMod. It is still going strong as the best open-source option for Android, different ROMs for different devices, with enhanced security and customization. Not only that, but it offers extended software support to older devices that are not supported by their parent companies any longer. 12. MariaDB, the OG Business Source LicenseeMySQL is an open-source database management system that has been the biggest program of its kind, and for good reason. It has had amazing support and documentation, can be used for extremely large databases with read-heavy purposes, and is very simple to use (so easy that it is taught to schoolchildren). It was acquired by (yet another time) Oracle, and the open-source community feared that the development might become slow, features might become proprietary, and it might lose the openness. In response, the original creator of MySQL Michael "Monty" Widenius created MariaDB, keeping it under the GPL license. It acted as a drop-in alternative to MySQL while also introducing new and exciting features that set it apart. It has since become the preferred management system in open-source projects. 📋It is kind of ironical to include MariaDB in this list. While it was created as a modern version of MySQL, MariaDB was the one that introduced the Business Source License. This was done because cloud vendors like AWS and Azure were reaping the benefit of open source projects by offering their hosted versions. This impacted those open source projects as they were not getting enough enterprise customers to sustain the development. As you can see, whenever an open source project opted for BSL, big players like AWS, Azure etc would just fork them and create an open source project they themselves govern. Decide who is the hero and who is the villain in this story. ConclusionTime and time again, the open source philosophy often trumps some rash business decisions, favoring openness and innovation. The twists and turns of these changes come from all sorts of different directions, but more often than not, good open source software has existed and thrived solely because people wanted them to. Let us know if you enjoyed this article in the comments. Cheers!
-
Ghostty Terminal: Never Understood the Hype Until I tried it
by: Abhishek Prakash Sun, 26 Oct 2025 14:37:47 GMT When I first started using Linux, I did not care much about the terminal applications. Not in the sense that I was not using the terminal but more like I never cared about trying other terminal application (or terminal emulators, if you want to use the correct technical term.) I mean, why would I? The magic is in the commands you run, after all. How does it matter if it's the default terminal that comes with the system or something else? Most terminals are pretty much the same, or so it feels. But still, there are numerous terminal emulators available for Linux. Perhaps they are more in number than the Arch-based distros. Last year, HashiCorp founder Mitchell Hashimoto developed another new terminal called Ghostty. And it took the developer world by storm. It seemed like everyone was talking about it. But that didn't bother me much. I attributed all the buzz around Ghostty to the Hashimoto's stature, never cared about trying it until last month. And when I tried it, I discovered a few features that I think makes it a favorite for pro terminal dwellers. If videos are your thing, this video shows Ghostty features in action. Subscribe to It's FOSS YouTube ChannelWhat makes Ghostty special?Ghostty is a relatively new terminal emulator for Linux and macOS, that provides a platform native UI and GPU acceleration. Easy to use configuration Ghostty does not require a configuration file to work. This is one of the cool features for a terminal emulator that comes with no GUI-based settings manager. It's not that you cannot edit the config file. It's just that the defaults are so good, you can just get on with your commands. For example, Ghostty supports nerd-fonts by default. So, your glyph characters and funny CLI tools like Starship prompt will just work out-of-the-box in Ghostty. Editing the configuration file of Ghostty is very simple; even for less tech-savvy people. The configuration file, usually stored at ~/.config/ghostty/config, is just a plain text file with a bunch of key-value pairs. Let's say you want to hide the mouse while typing. You just add this line to the config file: mouse-hide-while-typing = trueAnd reload the config with Ctrl+Shift+, or choosing the option from hamburger menu. How will you know what key-value you can use in Ghostty? Well, Ghostty keeps a fantastic, easy to understand documentation. You can start reading this doc, understand what a key is all about, and then add it to the config. It's that simple! 💡The documentation is also available locally on your system. Use the command ghostty +show-config --default --docs | lessWindows, tabs, splits and overviewIf you have used Kitty, you probably are aware of the various windows and split options. Ghostty provides a very similar experience. I won't deny, Ghostty borrows a lot of features from Kitty. So, here, you have one main window, and can have multiple tabs. Almost every terminal has multiple tab options these days. But Ghostty also allows you to have multiple window splits. Window splits in GhosttyIt's not as effective as using Tmux or screen command but this is good if you want to use multiple terminals in the same screen. A feature that made Terminator a popular choice a decade ago. This window split is mostly inclined to power users, who want to control multiple things at the same time. You can use keyboard shortcuts or the menu. Another interesting feature in this section is the tab overview. You can click on the overview button on the top bar. 
Click on the overview buttonThis is convenient, as this intuitive look introduces some kind of organization to your terminal usage. Somewhat like GNOME overview. Tabs in Ghostty (Click to enlarge the image)More importantly, you can search tabs as well! As you can see in the above screenshot, there is a proper name for each tab that was automatically assigned based on the last command you ran. So, if you ever reach a point where like browser tabs, you have numerous terminal tabs opened, you can search for it relatively easier ;) This overview feature is also available through keyboard shortcuts and that is my next favorite Ghostty feature in this list. Trigger Sequence ShortcutsThere are a whole lot of actions properly documented on the Ghostty documentation for you. These can be assigned to various keybindings of your preference. Ghostty keybindings will allow you to assign trigger sequences, which Vim users are familiar with. That is, you can use a trigger shortcut and then enter another key to complete the action. For example, in my Ghostty config, I have set: keybind = ctrl+a>o=toggle_tab_overviewWhat this does is, I can press ctrl+a and then press o to open the tab overview! How cool is that, to have a familiar workflow everywhere! Custom keybindings are also placed in Ghostty config file. Action Reference - KeybindingsReference of all Ghostty keybinding actions.GhosttyPerformable KeybindingsThis is a new feature introduced in version 1.2.0. With performable keybinding, you can assign a keyboard shortcut to multiple action. But the keybinding is activated only if the action is able to be performed. The Ghostty team itself provides a convenient example of how this works: keybind = performable:ctrl+c=copy_to_clipboardWhat it does is, use Ctrl+C to copy text only when there is something selected and available to copy. Otherwise, it works as the interrupt signal! No more accidental interrupts when you try to copy something. Kind of difficult for me to show it in the screenshot and thus I'll skip adding any image to this section. Image supportNot all terminals come with image protocol support. Only a few do. One of them is Kitty, which developed its own image rendering protocol, the Kitty Image Protocol. Ghostty implements the same Kitty Image Protocol in the terminal so that you can view images right from the terminal. Now, a simple user may not find the use of images support in the terminal. But there are a few use cases of image support. Simply speaking, this image rendering helps Ghostty to display images in fun tool like Fastfetch to reading manga right-within the terminal. Watch our video on fun stuff you can do in Linux terminal. Subscribe to It's FOSS YouTube ChannelLigature and fancy fontsGhostty also has ligature support. Now what is the purpose of ligatures, and what is its use within the terminal? If you are into coding, there are symbols that are a combination of two symbols. Let's say, "Not equal to", usually denoted as != but mathematically displayed as ≠ . Now, with a ligature supported terminal, you will get the proper symbol for this operation. See the difference for yourself. Terminals with NO ligature support and WITH ligature support. (Click to enlarge the image) This makes code more human readable and understandable. Built-in themes with light and dark variantWith Ghostty, you have no reason to search the web for color schemes. There is a huge list of color schemes, baked right in to the application. All you have to do is, note its name and use it in the config. 
To list all the available color schemes/themes, use the command: ghostty +list-themes

This interface lists every theme available, along with a live preview. Note the name of a theme from the left sidebar. Use q to exit the preview. Let's say I want to use the Adventure dark theme. All I have to do is add a line to the config: theme = Adventure

There are light and dark variants of themes available to choose from. You can define themes for both light and dark mode. So if your system uses dark mode, the terminal theme will be the one you chose for dark mode, and vice versa. theme = dark:Monokai Pro Machine,light:Catppuccin Latte

How does it matter? Well, operating systems these days also come with a feature that automatically switches between dark and light modes based on the time of the day. And if you opt for that feature, you'll have a better dark/light experience with Ghostty.

Native UI

Many apps use the same framework on all operating systems, and that might not blend well. This is especially true for applications built on the Electron framework, which often look out of place on Linux. Ghostty for Linux is developed using the GTK4 toolkit, which makes it look native on various Linux distributions. Popular distributions like Ubuntu and Fedora use GNOME as their default desktop offering. Thus, you will get a familiar look and feel for the window, along with overall system consistency. On macOS, the Ghostty app is built using Swift, AppKit, and SwiftUI, with real native macOS components like native tabs, native splits, native windows, menu bars, and a proper settings GUI.

Installing Ghostty on Linux

If you are an Arch Linux user, Ghostty is available in the official repository. You can install it using the pacman command: sudo pacman -Syu ghostty

For Ubuntu users, there is an unofficial, user-maintained repository offering deb files. You can download them from the releases page. You can check other official installation methods in the installation manual (Ghostty).

Wrapping Up

If you are new to Ghostty and want to get an overview of the config file format, you can refer to our sample Ghostty configuration, and a tiny starter sketch also follows at the end of this post. Don't forget to read the README! Get custom Ghostty config

Ghostty is indeed a worthy choice if you are looking for an all-rounder terminal emulator. But only if you are looking for one, because most of the time, the default terminal works just fine. With a little configuration tweaking, you could get many of the discussed Ghostty features, too. Take KDE's Konsole terminal customization as an example. What's your take on Ghostty? Is it worth a try, or would you rather stick with your current terminal choice? Share your views in the comments please.
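As promised, here is a tiny starter sketch that gathers the options discussed in this article into one config. It assumes the default config location (~/.config/ghostty/config) mentioned earlier; the theme names are only examples, so verify the exact spellings with ghostty +list-themes, and adjust or remove any key you do not want.

# Create the config directory if it does not exist yet
mkdir -p ~/.config/ghostty

# Append the options covered above (running this twice will add duplicate keys)
cat >> ~/.config/ghostty/config <<'EOF'
# Hide the mouse pointer while typing
mouse-hide-while-typing = true

# Trigger sequence: press Ctrl+A, then O, to open the tab overview
keybind = ctrl+a>o=toggle_tab_overview

# Separate themes for system dark and light modes
theme = dark:Monokai Pro Machine,light:Catppuccin Latte
EOF

# Reload the config in a running Ghostty window with Ctrl+Shift+,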
-
CSS Animations That Leverage the Parent-Child Relationship
by: Preethi Fri, 24 Oct 2025 14:18:03 +0000 Modern CSS has great ways to position and move a group of elements relative to each other, such as anchor positioning. That said, there are instances where it may be better to take up the old ways for a little animation, saving time and effort. We’ve always been able to affect an element’s structure, like resizing and rotating it. And when we change an element’s intrinsic sizing, its children are affected, too. This is something we can use to our advantage. Let’s say a few circles need to move towards and across one another. Something like this: Our markup might be as simple as a <main> element that contains four child .circle elements: <main> <div class="circle"></div> <div class="circle"></div> <div class="circle"></div> <div class="circle"></div> </main> As far as rotating things, there are two options. We can (1) animate the <main> parent container, or (2) animate each .circle individually. Tackling that first option is probably best because animating each .circle requires defining and setting several animations rather than a single animation. Before we do that, we ought to make sure that each .circle is contained in the <main> element and then absolutely position each one inside of it: main { contain: layout; } .circle { position: absolute; &:nth-of-type(1){ background-color: rgb(0, 76, 255); } &:nth-of-type(2){ background-color: rgb(255, 60, 0); right: 0; } &:nth-of-type(3){ background-color: rgb(0, 128, 111); bottom: 0; } &:nth-of-type(4){ background-color: rgb(255, 238, 0); right: 0; bottom: 0; } } If we rotate the <main> element that contains the circles, then we might create a specific .animate class just for the rotation: /* Applied on <main> (the parent element) */ .animate { width: 0; transform: rotate(90deg); transition: width 1s, transform 1.3s; } …and then set it on the <main> element with JavaScript when the button is clicked: const MAIN = document.querySelector("main"); function play() { MAIN.className = ""; MAIN.offsetWidth; MAIN.className = "animate"; } It looks like we’re animating four circles, but what we’re really doing is rotating the parent container and changing its width, which rotates and squishes all the circles in it as well: CodePen Embed Fallback Each .circle is fixed to a respective corner of the <main> parent with absolute positioning. When the animation is triggered in the parent element — i.e. <main> gets the .animate class when the button is clicked — the <main> width shrinks and it rotates 90deg. That shrinking pulls each .circle closer to the <main> element’s center, and the rotation causes the circles to switch places while passing through one another. This approach makes for an easier animation to craft and manage for simple effects. You can even layer on the animations for each individual element for more variations, such as two squares that cross each other during the animation. /* Applied on <main> (the parent element) */ .animate { transform: skewY(30deg) rotateY(180deg); transition: 1s transform .2s; .square { transform: skewY(30deg); transition: inherit; } } CodePen Embed Fallback See that? The parent <main> element makes a 30deg skew and flip along the Y-axis, while the two child .square elements counter that distortion with the same skew. The result is that you see the child squares flip positions while moving away from each other. 
If we want the squares to form a separation without the flip, here’s a way to do that: /* Applied on <main> (the parent element) */ .animate { transform: skewY(30deg); transition: 1s transform .2s; .square { transform: skewY(-30deg); transition: inherit; } } CodePen Embed Fallback This time, the <main> element is skewed 30deg, while the .square children cancel that with a -30deg skew. Setting skew() on a parent element helps rearrange the children beyond what typical rectangular geometry allows. Any change in the parent can be complemented, countered, or cancelled by the children depending on what effect you’re looking for. Here’s an example where scaling is involved. Notice how the <main> element’s skewY() is negated by its children and scale()s at a different value to offset it a bit. /* Applied on <main> (the parent element) */ .animate { transform: rotate(-180deg) scale(.5) skewY(45deg) ; transition: .6s .2s; transition-property: transform, border-radius; .squares { transform: skewY(-45deg) scaleX(1.5); border-radius: 10px; transition: inherit; } } CodePen Embed Fallback The parent element (<main>) rotates counter-clockwise (rotate(-180deg)), scales down (scale(.5)), and skews vertically (skewY(45deg)). The two children (.square) cancel the parent’s distortion by using the negative value of the parent’s skew angle (skewY(-45deg)), and scale up horizontally (scaleX(1.5)) to change from a square to a horizontal bar shape. There are a lot of these combinations you can come up with. I’ve made a few more below where, instead of triggering the animation with a JavaScript interaction, I’ve used a <details> element that triggers the animation when it is in an [open] state once the <summary> element is clicked. And each <summary> contains an .icon child demonstrating a different animation when the <details> toggles between open and closed. Click on a <details> to toggle it open and closed to see the animations in action. CodePen Embed Fallback That’s all I wanted to share — it’s easy to forget that we get some affordances for writing efficient animations if we consider how transforming a parent element intrinsically affects the size, position, and orientation. That way, for example, there’s no need to write complex animations for each individual child element, but rather leverage what the parent can do, then adjust the behavior at the child level, as needed. CSS Animations That Leverage the Parent-Child Relationship originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
LHB Linux Digest #25.32: New Linux Networking Course, iotop, Chargebee Alternative and More
by: Abhishek Prakash Fri, 24 Oct 2025 18:11:02 +0530 I am happy to announce the release of our 14th course, Linux Networking at Scale. Okay, this is still a work in progress but I could not wait to reveal it to you 😀 It's a 4-module micro-course that takes you into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs in each module. From sysadmins to DevOps to homelab enthusiasts, there is something for everyone in this course. Two modules are available now and the other two will be published in the coming two weeks. Enjoy upgrading your Linux skills 💪 Linux Networking at Scale (In Progress): Master advanced networking on Linux — from policy routing to encrypted overlays. (Linux Handbook, Umair Khurshid) This post is for subscribers only.
-
Module 2: nftables for Complex Rulesets and Performance Optimization
by: Umair Khurshid Fri, 24 Oct 2025 17:02:43 +0530 This lesson is for paying subscribers only.
-
Module 1: Advanced iproute2: Policy Routing, Multiple Routing, and VRFs
by: Umair Khurshid Fri, 24 Oct 2025 16:58:03 +0530 This lesson is for paying subscribers only.
-
Linux Networking at Scale
by: Umair Khurshid Fri, 24 Oct 2025 16:57:28 +0530

🚀 Why this course?

Modern infrastructure demands more than basic networking commands. When systems span across containers, data centers, and cloud edges, you need to scale, isolate, and secure your network intelligently, all using the native power of Linux. This micro-course takes you beyond the basics and into the world of policy routing, VRFs, nftables, VXLAN, WireGuard, and real-world traffic control, with practical labs at every step.

🧑🎓 Who is this course for?

This course is designed to help SysAdmins and DevOps engineers move from basic interface configuration to production-grade, resilient networking on Linux. Even aspiring network engineers may find some value in how Linux handles routing, policy decisions, and multi-network connectivity. Later modules will build upon this foundation to explore nftables for complex and optimized firewalls, VXLAN and WireGuard for secure overlays, and tc for traffic shaping and QoS. This is for:
Linux admins and DevOps engineers managing distributed systems
Network engineers expanding into Linux-based routing and firewalls
Homelab enthusiasts and advanced learners who want real mastery

📋 Prerequisite: Familiarity with Linux command-line tools (ip, ping, systemctl) and basic TCP/IP concepts.

🧩 What you'll learn in this micro-course

By the end of this course, you'll be able to:
Design multi-path and multi-tenant routing using iproute2 and VRFs
Build high-performance firewall setups with nftables
Create secure overlay networks using VXLAN and WireGuard
Implement traffic shaping and QoS policies to control real-world bandwidth usage

🥼 Every concept is paired with hands-on labs using network namespaces and containers, no expensive lab gear needed. You'll build, break, and fix your network, exactly like in production. Well, maybe not exactly like production, but pretty close to that. What are you waiting for? Time to take your Linux networking knowledge to the next level.
-
Monitoring I/O Usage and Network Traffic in Linux With iotop & ntopng
by: LHB Community Fri, 24 Oct 2025 11:21:17 +0530 You've already seen how to monitor CPU and memory usage with top and htop. Now, let's take a look at two other tools you can use for monitoring your system: iotop and ntopng. These tools monitor disk I/O (Input/Output) and network traffic, respectively. This tutorial will show you how to install, configure, and use both tools.

What are iotop and ntopng?

iotop: Similar in appearance to top and htop, iotop is a real-time disk I/O monitoring utility that displays the current activity (reads, writes, and waiting) of each process or thread on a Linux system. It can also show total accumulated usage per process/thread. It's useful for identifying processes that are generating heavy I/O traffic (reads/writes) or causing bottlenecks and high latency.

ntopng: As the name suggests, ntopng is the next-generation version of ntop, a tool for real-time network-traffic monitoring. It provides analytics, host statistics, protocol breakdowns, flow views, and geolocation, helping you spot abnormal usage. Unlike iotop (and the older ntop command), ntopng primarily serves its output through a web interface, so you interact with it in a browser. While this tutorial also covers basic console usage, do note that it's more limited on the CLI.

📋 ntopng integrates with systemd on most distros by default, and this tutorial does not cover systems using other init systems.

Installing iotop and ntopng

Both tools are available for installation on Ubuntu and most other distros in their standard repositories. For Debian/Ubuntu and their derivatives: sudo apt update && sudo apt install -y iotop ntopng To install ntopng, RHEL, CentOS, Rocky, and AlmaLinux users will need to enable the EPEL repository first: sudo dnf install -y epel-release sudo dnf install -y iotop ntopng For Arch-based distros, use: sudo pacman -Syu --noconfirm iotop ntopng For openSUSE, run: sudo zypper refresh && sudo zypper install -y iotop ntopng

📋 On all systems, ntopng is installed as a systemd service, but it only runs by default on Debian/Ubuntu-based systems and on openSUSE/SUSE.

Enable ntopng if you'd like it to run constantly in the background: sudo systemctl enable --now ntopng If you'd like to disable this behavior and only use ntopng on demand, you can run: sudo systemctl stop ntopng && sudo systemctl disable ntopng

Using iotop for monitoring disk I/O

Much like top and htop, iotop runs solely as a CLI tool. It requires root permissions, but not to worry: it only reads I/O activity for monitoring purposes and cannot access or control anything else on your system.
sudo iotop

You'll see something like this:

At the top, the following real-time readouts are displayed (with per-second units such as K/s):
Total DISK READ: total read bandwidth between processes and the kernel's block device subsystem.
Total DISK WRITE: total write bandwidth between processes and the kernel's block device subsystem.
Current DISK READ: how much data is actually being read from the underlying disk (per second).
Current DISK WRITE: how much data is actually being written to the underlying disk (per second).

Below these outputs, there are several columns shown by default:
TID: Thread ID (unique identifier of the thread/process).
PRIO: I/O priority level (lower number = higher priority).
USER: The user owning the process/thread.
DISK READ: Data read from disk by this thread/process.
DISK WRITE: Data written to disk by this thread/process.
SWAPIN: Percentage of time the thread/process spent swapping in.
IO> (I/O): Percentage of time the process waits on I/O operations.
COMMAND: The name or command of the running process/thread.

Useful options & key bindings

You can control what iotop shows by default by passing various flags when launching the command. Here are some of the commonly used options:
-o (or --only): Only show processes with current I/O (filter idle processes).
-b (or --batch): Non-interactive mode (useful for logging).
-n <count>: Output that many iterations, then exit (used with batch mode).
-d <delay>: Delay between iterations (in seconds). For instance, use -d 5 for a 5-second delay, or -d 0.5 for a half-second delay. The default is one second.

When run without -b/--batch, iotop starts in interactive mode, where you can use the following keys to change various options:
o: toggles the view between showing only processes currently doing I/O and all processes running on the system.
p: toggles between displaying only processes or all threads. Changes "TID" (Thread ID) to "PID" (Process ID).
a: toggles accumulated I/O vs current I/O.
r: reverses the sort order (toggles ascending/descending).
left/right arrows: change the sort column (move between columns like DISK READ, COMMAND, etc.).
HOME: jump to sorting by TID (Thread ID).
END: jump to sorting by COMMAND (process name).
q: quits iotop.

💡 Excessive disk I/O from unexpected processes is usually a sign of possible misconfiguration, runaway logs, a backup mis-schedule, or high database activity. If you're not sure about a process, it's best to investigate what purpose that process serves before taking action.

Practical example scenario where iotop helps you as a sysadmin

Let's say you're working on your system and you notice that it's suddenly slowing down, but can't find the cause via the normal means (high CPU or memory usage). You might suspect disk I/O is the bottleneck, but this will not show up in most system monitoring tools, so you run "sudo iotop" and sort by DISK WRITE. There, you notice a process is constantly writing hundreds of MB/s, blocking other processes. Using the "o" keybinding, you filter to only active writers. You may then throttle or stop that process with another tool (like htop), reschedule it to run at off-hours, or have it use another storage device.

iotop has its limitations

While it is a useful monitoring tool, iotop cannot control processes on its own. It can only read activity, not control it. Some other key things to note with this tool are:
On systems with many threads/processes doing I/O, sorting/filtering is key.
It's recommended that you use "-o" when launching the command, or press "o" after you've started it.iotop shows process-level I/O, but does not always give full hardware device stats (for that, tools like iostat or blktrace may be needed).You should avoid running iotop on production systems for long intervals without caution, since iotop itself causes overhead when many processes are updating at the same time.Exploring ntopng to get graphical view of network trafficUnlike iotop and its older variant, ntop (which is no longer packaged on some distros), ntopng is primarily accessed via a web-based GUI at default port 3000. For example: http://your-server-ip-address:3000 or if you're running it on your locallyr, from https://localhost:3000. From the GUI, you can view hosts, traffic flows, protocols, top talkers, geolocation, alerts, etc. To keep things simple, we'll cover basic usage and features. Changing the default portChanging the port is a good idea if you already use port 3000 for other local web services. To change ntopng’s default web port, edit its configuration file and restart the service. sudo nano /etc/ntopng/ntopng.conf Then, change the line defining the web port. If it doesn't exist, add it: -w=3001 You can use any unused port above 1024. Next, you'll need to restart ntopng: sudo systemctl restart ntopng You should now see ntopng listening on port 3001. Dashboard overview💡When you first load ntopng in your browser, you'll need to log in. The default username and password are both "admin". However, you'll be prompted to change the password on the first login.Once you're logged in, you'll land on the main dashboard, which looks like this: This dashboard provides a real-time visual overview of network activity and is usually the first thing you see. By default, the dashboard includes: Traffic summary (top left): shows live inbound and outbound traffic rates, number of active hosts, flows, and alerts. Clicking on any of these will take you to the relevant section.Search bar (top center): lets you quickly find hosts, IPs, or ports.Top Flow Talkers (main panel): a large visual block showing which hosts are generating or receiving the most traffic (e.g., your machine vs. external IPs).Sidebar (left): navigation menu with access to:Dashboard: current view.Alerts: security or threshold-based notifications.Flows/Hosts/Ports/Applications: detailed breakdowns of network activity.Interfaces: network interfaces being monitored.Settings / System / Developer: configuration and data export options.Refresh indicator (bottom): shows the live update frequency (default: 5 seconds).Footer: version information, uptime, and system clock.You can check each panel in the sidebar and dashboard individually to see what each displays. For this tutorial, we won't go into every detail, as there are too many to cover here. Using ntopng from the consoleAlthough ntopng is designed to be primarily web-based, you can still run it directly in the console for quick checks or lightweight monitoring. This can be useful on headless systems over SSH, or when you just want a quick snapshot of network activity without loading the web UI. First, stop the ntopng systemd service: sudo systemctl stop ntopng This is necessary to avoid any conflicts between the running service and your access via the CLI. Now you can launch ntopng directly: sudo ntopng --disable-ui --verbose This command will listen on all network interfaces that ntopng can find. If you'd like to restrict to a certain interface, you can use the -i flag. 
For example, to listen only on your Wi-Fi interface, first find its name (it usually begins with "wl") using either of the following commands: ip link | grep wl or nmcli device status | grep wl Then run ntopng, pointed at your Wi-Fi interface: sudo ntopng --disable-ui --verbose -i wlp49s0 Replace "wlp49s0" with your device name, of course.

Basic logging with the ntopng CLI

If you'd like to capture a basic log with ntopng from the console, you can run: sudo ntopng --disable-ui -i wlp49s0 --dump-flows flows.log Again, just remember to replace wlp49s0 with your device name. Note that the log will save to whichever folder is your current working directory. You can change the location of the log file by providing a path, for example: sudo ntopng --disable-ui -i wlp49s0 --dump-flows path/to/save/to/flows.log

Practical example scenario where ntopng helps

Say you suspect unusual network activity on your system. You log in to the ntopng dashboard and notice that one host on your network is sending a large amount of data to an external IP address over port 443 (HTTPS). Clicking on that host reveals its flows, showing that a specific application is continuously communicating with a remote server. Using this insight, you can then open another monitoring tool, such as top or htop, to identify and stop the offending process before investigating further. Even for less experienced users, ntopng is a great way to understand a system's network usage at a glance. You can run it on a production server if resources allow, or dedicate a small monitoring host to watch other devices on your network (out of scope here). By combining real-time views with short-term history (e.g., spotting periodic traffic spikes), you can build a picture of network health. Used alongside a firewall and tools like fail2ban, ntopng helps surface anomalies quickly so you can investigate and respond.

ntopng has its limitations too

While ntopng is powerful, capturing all network traffic at very high throughput can require serious resources (NICs, CPU, memory). If you're using it on a high-traffic network, it's probably best to use a separate server for monitoring. Here are some other important things to note:
If you are monitoring remote networks or VLANs, you may need an appropriate network setup (mirror ports, network taps). However, these are outside the scope of this tutorial.
For data retention, out of the box you only get a limited history. For long-term trends, you'll need to configure external storage or a database.
Most traffic (e.g., HTTPS) is encrypted, so ntopng can only show metadata (hosts, ports, volumes, and SNI (Server Name Indication) where available). In such cases, it cannot show the actual payloads.

Conclusion

iotop and ntopng are two powerful free and open-source tools that can help you monitor, analyze, and troubleshoot critical subsystems on your Linux machine. By incorporating these into your arsenal, you'll get a better understanding of your system's baseline for normal operations and be better equipped to spot anomalies or bottlenecks quickly.
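To tie both tools together, here is a minimal sketch of a baseline capture you could run from a script. It only uses flags covered in this tutorial; the interface name wlp49s0 is a placeholder, the sample counts and file names are arbitrary, and the ntopng flow-dump invocation is the same one shown above, so verify the exact flags against the version you have installed.

#!/bin/bash
# Quick I/O and network baseline using only the options covered in this tutorial.

# 12 iotop samples, 5 seconds apart, batch mode, only processes doing I/O
sudo iotop -b -o -n 12 -d 5 > iotop-baseline.log

# Stop the ntopng service first to avoid conflicts with the CLI session
sudo systemctl stop ntopng

# Dump flows from the chosen interface; press Ctrl+C when you have enough data
sudo ntopng --disable-ui -i wlp49s0 --dump-flows ntopng-flows.log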
-
414: Apollo (and the Almighty Cache)
by: Chris Coyier Thu, 23 Oct 2025 16:15:59 +0000 Rachel and Chris jump on the show to talk about a bit of client-side technology we use: Apollo. We use it because we have a GraphQL API, and Apollo helps us write queries and mutations that go through that API. It slots in quite nicely with our React front-end, providing hooks we use to do the data work we need to do when we need to do it. Plus, we get typed data all the way through. Chris gets to learn that the Apollo Cache isn't some bonus feature that just helps make things faster, but an inevitable and deeply integrated part of how this whole thing works. Time Jumps 00:06 How do you get data into the front end of your application? 02:57 Do we use Apollo Server? 10:17 Why is GraphQL not as cool anymore? 18:23 How does the Apollo Client cache work?
-
FOSS Weekly #25.43: NebiOS Linux, GNOME Enhancements, LMDE 7, COSMIC Beta Review and More Linux Stuff
by: Abhishek Prakash Thu, 23 Oct 2025 04:29:08 GMT Linux Mint Debian Edition (LMDE) version 7 is available now. For people who like Debian more than Ubuntu and Linux Mint's Cinnamon more than anything, this is the perfect choice. LMDE 7 “Gigi” Released: Linux Mint’s Debian-Based Alternative Gets Major UpgradeA stable Debian base meets a polished Linux Mint desktop experience.It's FOSS NewsSourav RudraSometimes I wonder if LMDE should be the default choice for Linux Mint. Am I the only one who thinks this? 💬 Let's see what you get in this edition: Me pitching Proton Mail against Gmail.A new LMDE release based on Debian 13.DIY kindle alternatives.And other Linux news, tips, and, of course, memes!This edition of FOSS Weekly is supported by PrepperDisk. PrepperDisk gives you a fully offline, private copy of the world’s most useful open-source knowledge—so your access doesn’t depend on big platforms, networks, or gatekeepers. Built on Raspberry Pi, it bundles projects like Wikipedia, maps, and survival manuals with tools we’ve built and open-sourced ourselves. It’s a way to safeguard information freedom: your own secure, personal archive of open knowledge, ready anywhere—even without the internet. Explore PrepperDisk 📰 Linux and Open Source NewsNordVPN has made the GUI code for its Linux app open source.Valkey 9.0 release adds multi-database clusters and now supports 1 billion requests per second The beloved open source game SuperTuxKart gets many refinements in the latest release.ONLYOFFICE Docs 9.1 is here with PDF redaction and editor upgrades.LMDE 7 "Gigi" has arrived with a Debian 13 base and many improvements.LMDE 7 “Gigi” Released: Linux Mint’s Debian-Based Alternative Gets Major UpgradeA stable Debian base meets a polished Linux Mint desktop experience.It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutProton Mail is a better choice than Gmail. That's what I think. And I discovered a ProtonMail feature that works better than Gmail. That One (of the several) Feature ProtonMail Does Better Than GmailThe newsletters can be a mess to manage. ProtonMail gives you better features than Gmail to manage your newsletter subscriptions.It's FOSS NewsAbhishekGNOME all the wayI thought of sharing some neat tips and tweaks that relate to various components of the GNOME desktop environment. Basically, they let you discover some lesser known features and customization. Perhaps you'll discover your next favorite trick here. Enhance the functionality of the Nautilus file manager with these tips.Learn to get more out of the search feature in the file manager.Why restrict yourself with file manager? Explore the full potential of the activity search in GNOME.Let's take it further and customize the top panel in GNOME to get applet indicator and more such features.For a long time, I relied on GNOME Tweaks until I discovered Just Perfection. GNOME customization was never the same.Here are a few tips to save time by combining the terminal and the file manager on your GNOME system.🧮 Linux Tips, Tutorials, and LearningsLearn the difference between PipeWire and PulseAudio.Explore comic book readers on Linux for .cbr files.Unravel the mystery of loop devices in Linux.👷 AI, Homelab and Hardware CornerFor AI enthusiasts, here is a way to go from zero keys to full AI integration in one step. The Puter.js library allows integrating mainstream AI in your web projects without needing their API keys. 
I Used This Open Source Library to Integrate OpenAI, Claude, Gemini to Websites Without API Keys: This underrated open source JavaScript library lets you integrate popular commercial LLMs without needing their paid API. You can test it out within minutes on your Linux system with this tutorial. (It's FOSS, Bhuwan Mishra)

Also, if you are fed up with Amazon's Kindle, then you can build your own eBook reader.

Looking for Open Source Kindle Alternatives? Build it Yourself: There are no easy options. You have to take the matter into your own hands, quite literally. (It's FOSS, Pulkit Chandak)

The FSF is going all in with the Librephone project.

🛍️ Deal Alert: Raspberry Pi eBook Bundle

Learn the ins and outs of coding your favorite retro games and build one of your own with Code the Classics Volume II. Give your tech-savvy kids a head start in computer coding with Unplugged Tots. The 16-book library also includes just-released editions of The Official Raspberry Pi Handbook 2026, Book of Making 2026, and much more! Whether you're just getting into coding or want to deepen your knowledge about something more specific, this pay-what-you-want bundle has everything you need. And you support Raspberry Pi Foundation North America with your purchase!

Humble Tech Book Bundle: All Things Raspberry Pi by Raspberry Pi Press: Learn the ins and outs of computer coding with this library from Raspberry Pi! Pay what you want and support the charity of your choice! (Humble Bundle) Explore the Humble offer here

✨ Project Highlights

NebiOS is a beautiful approach to how an Ubuntu-based distro with a custom desktop environment can be built.

NebiOS is an Ubuntu-based Distro With a Brand New DE Written for Wayland from Ground Up: Exploring a new Ubuntu-based distro. By the way, it's been some time since we had a new distro based on Ubuntu. (It's FOSS News, Sourav Rudra)

COSMIC is shaping up well; we tested it to see how it performs.

I Tested Pop!_OS 24.04 LTS Beta: A Few Hits and Misses But Mostly on the Right Track: COSMIC has come a long way, but is it enough? (It's FOSS News, Sourav Rudra)

📽️ Videos I Am Creating for You

The terminal makeover video is nearly at 100K views. With so many people enhancing the looks of their terminal, I thought you might want to give it a try, too. Subscribe to It's FOSS YouTube Channel

Linux is the most used operating system in the world, but on servers. Linux on the desktop is often ignored. That's why It's FOSS made it a mission to write helpful tutorials and guides to help people use Linux on their personal computers. We do it all for free. No venture capitalist funds us. But you know who does? Readers like you. Yes, we are an independent, reader-supported publication helping Linux users worldwide with timely news coverage, in-depth guides and tutorials. If you believe in our work, please support us by getting a Plus membership. It costs just $3 a month or $99 for a lifetime subscription. Join It's FOSS Plus

💡 Quick Handy Tip

Too much GNOME in this newsletter? Let's switch to KDE. If you are using desktop widgets in KDE Plasma and don't know how to add the system monitor sensor to it, then do this. Open the System Monitor app and right-click on any telemetry you want to add. Then select "Add chart as Desktop Widget". That's it. The selected chart will be added to your desktop. You can change its appearance by going to Edit mode later.

🎋 Fun in the FOSSverse

This crossword-style challenge mixes up popular Linux text editors. From timeless command-line classics to sleek modern tools.
Sharpen your brain, embrace your inner geek, and see how many you can decode! The Scrambled Linux Editors CrosswordThink you know your Linux text editors? From Vim to Nano, these jumbled names will challenge even seasoned coders. Try to unscramble them and see how many you can get right!It's FOSSAbhishek Prakash🤣 Meme of the Week: Probably not true anymore but still funny. 🗓️ Tech Trivia: On October 20, 2004, Ubuntu 4.10 "Warty Warthog" was released! Backed by Mark Shuttleworth’s Canonical, Ubuntu aimed to make Linux simple and human-friendly, its name loosely translates to "humanity." Two decades later, it’s dominating the Linux desktop space. 🧑🤝🧑 From the Community: Long-time FOSSer Cliff is looking for help with a Realtek Wi-Fi issue on his MX Linux system. Can you help? MX Linux Realtek Wi-fi IssuesI have MX Linux KDE, most recent update. It runs on kernel 6.1.0-40. I am using a mini pc with a Realtek 8852BE network card. I had always had wired internet for that machine, but now I have to be happy with wifi. The problem, unlike any of my other OSs, is that it sees each wifi channel as having a 0 signal strength and fails to activate wlan0. I went around for hours with Claude AI to solve it and it was unable to resolve the issue. It finally suggested just going to MX Tools, Package Install…It's FOSS Communitycliffsloane❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
An Introduction to JavaScript Expressions
by: Mat Marquis Wed, 22 Oct 2025 19:08:23 +0000 Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript expressions. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course. Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at JavaScript for Everyone to teach you JavaScript. What we have here is a lesson from the JavaScript for Everyone module on lexical grammar and analysis — the process of parsing the characters that make up a script file and converting it into a sequence of discrete “input elements” (lexical tokens, line ending characters, comments, and whitespace), and how the JavaScript engine interprets those input elements. An expression is code that, when evaluated, resolves to a value. 2 + 2 is a timeless example. 2 + 2 // result: 4 As mental models go, you could do worse than “anywhere in a script that a value is expected you can use an expression, no matter how simple or complex that expression may be:” function numberChecker( checkedNumber ) { if( typeof checkedNumber === "number" ) { console.log( "Yep, that's a number." ); } } numberChecker( 3 ); // result: Yep, that's a number. numberChecker( 10 + 20 ); // result: Yep, that's a number. numberChecker( Math.floor( Math.random() * 20 ) / Math.floor( Math.random() * 10 ) ); // result: Yep, that's a number. Granted, JavaScript doesn’t tend to leave much room for absolute statements. The exceptions are rare, but it isn’t the case absolutely, positively, one hundred percent of the time: console.log( -2**1 ); // result: Uncaught SyntaxError: Unary operator used immediately before exponentiation expression. Parenthesis must be used to disambiguate operator precedence Still, I’m willing to throw myself upon the sword of “um, actually” on this one. That way of looking at the relationship between expressions and their resulting values is heart-and-soul of the language stuff, and it’ll get you far. Primary Expressions There’s sort of a plot twist, here: while the above example reads to our human eyes as an example of a number, then an expression, then a complex expression, it turns out to be expressions all the way down. 3 is itself an expression — a primary expression. In the same way the first rule of Tautology Club is Tautology Club’s first rule, the number literal 3 is itself an expression that resolves in a very predictable value (psst, it’s three). console.log( 3 ); // result: 3 Alright, so maybe that one didn’t necessarily need the illustrative snippet of code, but the point is: the additive expression 2 + 2 is, in fact, the primary expression 2 plus the primary expression 2. Granted, the “it is what it is” nature of a primary expression is such that you won’t have much (any?) occasion to point at your display and declare “that is a primary expression,” but it does afford a little insight into how JavaScript “thinks” about values: a variable is also a primary expression, and you can mentally substitute an expression for the value it results in — in this case, the value that variable references. 
That's not the only purpose of an expression (which we'll get into in a bit) but it's a useful shorthand for understanding expressions at their most basic level. There's a specific kind of primary expression that you'll end up using a lot: the grouping operator. You may remember it from the math classes I just barely passed in high school: console.log( 2 + 2 * 3 ); // result: 8 console.log( ( 2 + 2 ) * 3 ); // result: 12 The grouping operator (singular, I know, it kills me too) is a matched pair of parentheses used to evaluate a portion of an expression as a single unit. You can use it to override the mathematical order of operations, as seen above, but that's not likely to be your most common use case—more often than not you'll use grouping operators to more finely control conditional logic and improve readability: const minValue = 0; const maxValue = 100; const theValue = 50; if( ( theValue > minValue ) && ( theValue < maxValue ) ) { // If ( the value of `theValue` is greater than that of `minValue` ) AND less than `maxValue`): console.log( "Within range." ); } // result: Within range. Personally, I make a point of almost never excusing my dear Aunt Sally. Even when I'm working with math specifically, I frequently use parentheses just for the sake of being able to scan things quickly: console.log( 2 + ( 2 * 3 ) ); // result: 8 This use is relatively rare, but the grouping operator can also be used to remove ambiguity in situations where you might need to specify that a given syntax is intended to be interpreted as an expression. One of them is, well, right there in your developer console. The syntax used to initialize an object — a matched pair of curly braces — is the same as the syntax used to group statements into a block statement. Within the global scope, a pair of curly braces will be interpreted as a block statement containing a syntax that makes no sense given that context, not an object literal. That's why punching an object literal into your developer console will result in an error: { "theValue" : true } // result: `Uncaught SyntaxError: unexpected token: ':' It's very unlikely you'll ever run into this specific issue in your day-to-day JavaScript work, seeing as there's usually a clear division between contexts where an expression or a statement is expected: { const theObject = { "theValue" : true }; } You won't often be creating an object literal without intending to do something with it, which means it will always be in the context where an expression is expected. It is the reason you'll see standalone object literals wrapped in a grouping operator throughout this course — a syntax that explicitly says "expect an expression here": ({ "value" : true }); However, that's not to say you'll never need a grouping operator for disambiguation purposes. Again, not to get ahead of ourselves, but an Immediately Invoked Function Expression (IIFE), an anonymous function expression used to manage scope, relies on a grouping operator to ensure the function keyword is treated as a function expression rather than a declaration: (function(){ // ... })();

Expressions With Side Effects

Expressions always give us back a value, in no uncertain terms. There are also expressions with side effects — expressions that result in a value and do something. For example, assigning a value to an identifier is an assignment expression.
If you paste this snippet into your developer console, you’ll notice it prints 3: theIdentifier = 3; // result: 3 The resulting value of the expression theIdentifier = 3 is the primary expression 3; classic expression stuff. That’s not what’s useful about this expression, though — the useful part is that this expression makes JavaScript aware of theIdentifier and its value (in a way we probably shouldn’t, but that’s a topic for another lesson). That variable binding is an expression and it results in a value, but that’s not really why we’re using it. Likewise, a function call is an expression; it gets evaluated and results in a value: function theFunction() { return 3; }; console.log( theFunction() + theFunction() ); // result: 6 We’ll get into it more once we’re in the weeds on functions themselves, but the result of calling a function that returns an expression is — you guessed it — functionally identical to working with the value that results from that expression. So far as JavaScript is concerned, a call to theFunction effectively is the simple expression 3, with the side effect of executing any code contained within the function body: function theFunction() { console.log( "Called." ); return 3; }; console.log( theFunction() + theFunction() ); /* Result: Called. Called. 6 */ Here theFunction is evaluated twice, each time calling console.log then resulting in the simple expression 3 . Those resulting values are added together, and the result of that arithmetic expression is logged as 6. Granted, a function call may not always result in an explicit value. I haven’t been including them in our interactive snippets here, but that’s the reason you’ll see two things in the output when you call console.log in your developer console: the logged string and undefined. JavaScript’s built-in console.log method doesn’t return a value. When the function is called it performs its work — the logging itself. Then, because it doesn’t have a meaningful value to return, it results in undefined. There’s nothing to do with that value, but your developer console informs you of the result of that evaluation before discarding it. Comma Operator Speaking of throwing results away, this brings us to a uniquely weird syntax: the comma operator. A comma operator evaluates its left operand, discards the resulting value, then evaluates and results in the value of the right operand. Based only on what you’ve learned so far in this lesson, if your first reaction is “I don’t know why I’d want an expression to do that,” odds are you’re reading it right. Let’s look at it in the context of an arithmetic expression: console.log( ( 1, 5 + 20 ) ); // result: 25 The primary expression 1 is evaluated and the resulting value is discarded, then the additive expression 5 + 20 is evaluated, and that’s resulting value. Five plus twenty, with a few extra characters thrown in for style points and a 1 cast into the void, perhaps intended to serve as a threat to the other numbers. And hey, notice the extra pair of parentheses there? Another example of a grouping operator used for disambiguation purposes. Without it, that comma would be interpreted as separating arguments to the console.log method — 1 and 5 + 20 — both of which would be logged to the console: console.log( 1, 5 + 20 ); // result: 1 25 Now, including a value in an expression in a way where it could never be used for anything would be a pretty wild choice, granted. 
That’s why I bring up the comma operator in the context of expressions with side effects: both sides of the , operator are evaluated, even if the immediately resulting value is discarded. Take a look at this validateResult function, which does something fairly common, mechanically speaking; depending on the value passed to it as an argument, it executes one of two functions, and ultimately returns one of two values. For the sake of simplicity, we’re just checking to see if the value being evaluated is strictly true — if so, call the whenValid function and return the string value "Nice!". If not, call the whenInvalid function and return the string "Sorry, no good.": function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; if( theValue === true ) { whenValid(); return "Nice!"; } else { whenInvalid(); return "Sorry, no good."; } }; const resultMessage = validateResult( true ); // result: Valid result. console.log( resultMessage ); // result: "Nice!" Nothing wrong with this. The whenValid / whenInvalid functions are called when the validateResult function is called, and the resultMessage constant is initialized with the returned string value. We’re touching on a lot of future lessons here already, so don’t sweat the details too much. Some room for optimizations, of course — there almost always is. I’m not a fan of having multiple instances of return, which in a sufficiently large and potentially-tangled codebase can lead to increased “wait, where is that coming from” frustrations. Let’s sort that out first: function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; if( theValue === true ) { whenValid(); } else { whenInvalid(); } return theValue === true ? "Nice!" : "Sorry, no good."; }; const resultMessage = validateResult( true ); // result: Valid result. resultMessage; // result: "Nice!" That’s a little better, but we’re still repeating ourselves with two separate checks for theValue. If our conditional logic were to be changed someday, it wouldn’t be ideal that we have to do it in two places. The first — the if/else — exists only to call one function or the other. We now know function calls to be expressions, and what we want from those expressions are their side effects, not their resulting values (which, absent a explicit return value, would just be undefined anyway). Because we need them evaluated and don’t care if their resulting values are discarded, we can use comma operators (and grouping operators) to sit them alongside the two simple expressions — the strings that make up the result messaging — that we do want values from: function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; return theValue === true ? ( whenValid(), "Nice!" ) : ( whenInvalid(), "Sorry, no good." ); }; const resultMessage = validateResult( true ); // result: Valid result. resultMessage; // result: "Nice!" Lean and mean thanks to clever use of comma operators. Granted, there’s a case to be made that this is a little too clever, in that it could make this code a little more difficult to understand at a glance for anyone that might have to maintain this code after you (or, if you have a memory like mine, for your near-future self). 
The siren song of “I could do it with less characters” has driven more than one JavaScript developer toward the rocks of, uh, slightly more difficult maintainability. I’m in no position to talk, though. I chewed through my ropes years ago. Between this lesson on expressions and the lesson on statements that follows it, well, that would be the whole ballgame — the entirety of JavaScript summed up, in a manner of speaking — were it not for a not-so-secret third thing. Did you know that most declarations are neither statement nor expression, despite seeming very much like statements? Variable declarations performed with let or const, function declarations, class declarations — none of these are statements: if( true ) let theVariable; // Result: Uncaught SyntaxError: lexical declarations can't appear in single-statement context if is a statement that expects a statement, but what it encounters here is one of the non-statement declarations, resulting in a syntax error. Granted, you might never run into this specific example at all if you — like me — are the sort to always follow an if with a block statement, even if you’re only expecting a single statement. I did say “one of the non-statement declarations,” though. There is, in fact, a single exception to this rule — a variable declaration using var is a statement: if( true ) var theVariable; That’s just a hint at the kind of weirdness you’ll find buried deep in the JavaScript machinery. 5 is an expression, sure. 0.1 * 0.1 is 0.010000000000000002, yes, absolutely. Numeric values used to access elements in an array are implicitly coerced to strings? Well, sure — they’re objects, and their indexes are their keys, and keys are strings (or Symbols). What happens if you use call() to give this a string literal value? There’s only one way to find out — two ways to find out, if you factor in strict mode. That’s where JavaScript for Everyone is designed take you: inside JavaScript’s head. My goal is to teach you the deep magic — the how and the why of JavaScript. If you’re new to the language, you’ll walk away from this course with a foundational understanding of the language worth hundreds of hours of trial-and-error. If you’re a junior JavaScript developer, you’ll finish this course with a depth of knowledge to rival any senior. I hope to see you there. JavaScript for Everyone is now available and the launch price runs until midnight, October 28. Save £60 off the full price of £249 (~$289) and get it for £189 (~$220)! Get the Course An Introduction to JavaScript Expressions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Disaggregated Routing with SONiC and VPP: Architecture and Integration – Part One
By: Linux.com Editorial Staff Wed, 22 Oct 2025 13:44:22 +0000 The networking industry is undergoing a fundamental architectural transformation, driven by the relentless demands of cloud-scale data centers and the rise of software-defined infrastructure. At the heart of this evolution is the principle of disaggregation: the systematic unbundling of components that were once tightly integrated within proprietary, monolithic systems. This movement began with the separation of network hardware from the network operating system (NOS), a paradigm shift championed by hyperscalers to break free from vendor lock-in and accelerate innovation. In this blog post, we will explore how disaggregated networking takes shape, when the SONiC control plane meets the VPP data plane. You’ll see how their integration creates a fully software-defined router – one that delivers ASIC-class performance on standard x86 hardware, while preserving the openness and flexibility of Linux-based systems. Disaggregation today extends to the software stack, separating the control plane from the data plane. This decoupling enables modular design, independent component selection, and more efficient performance and cost management. The integration of Software for Open Networking in the Cloud (SONiC) and the Vector Packet Processing (VPP) framework represents the peak of this disaggregated model. SONiC, originally developed by Microsoft and now a thriving open-source project under the Linux Foundation, has established itself as the de facto standard for a disaggregated NOS, offering a rich suite of L3 routing functionalities hardened in the world’s largest data centers.1 Its core design philosophy is to abstract the underlying switch hardware, allowing a single, consistent software stack to run on a multitude of ASICs from different vendors. This liberates operators from the constraints of proprietary systems and fosters a competitive, innovative hardware ecosystem. Complementing SONiC’s control plane prowess is VPP, a high-performance, user-space data plane developed by Cisco and now part of the Linux Foundation’s Fast Data Project (FD.io). VPP’s singular focus is to deliver extraordinary packet processing throughput on commodity commercial-off-the-shelf (COTS) processors. By employing techniques like vector processing and bypassing the traditional kernel network stack, VPP achieves performance levels previously thought to be the exclusive domain of specialized, expensive hardware like ASICs and FPGAs. The fusion of these two powerful open-source projects creates a new class of network device: a fully software-defined router that combines the mature, feature-rich control plane of SONiC with the blistering-fast forwarding performance of VPP. This architecture directly addresses a critical industry need for a network platform that is simultaneously programmable, open, and capable of line-rate performance without relying on specialized hardware. The economic implications are profound. By replacing vertically integrated, vendor-locked routers with a software stack running on standard x86 servers, organizations can fundamentally alter their procurement and operational models. This shift transforms network infrastructure from a capital-expenditure-heavy (CAPEX) model, characterized by large upfront investments in proprietary hardware, to a more agile and scalable operational expenditure (OPEX) model. 
The ability to leverage COTS hardware drastically reduces total cost of ownership (TCO) and breaks the cycle of vendor lock-in, democratizing access to high-performance networking and enabling a more dynamic, cost-effective infrastructure strategy.

Deconstructing the Components: A Tale of Two Titans

To fully appreciate the synergy of the SONiC-VPP integration, it is essential to first understand the distinct architectural philosophies and capabilities of each component. While they work together to form a cohesive system, their internal designs are optimized for entirely different, yet complementary, purposes. SONiC is engineered for control, abstraction, and scalability at the management level, while VPP is purpose-built for raw, unadulterated packet processing speed.

SONiC: The Cloud-Scale Control Plane

SONiC is a complete, open-source NOS built upon the foundation of Debian Linux. Its architecture is a masterclass in modern software design, abandoning the monolithic structure of traditional network operating systems in favor of a modular, containerized, microservices-based approach. This design provides exceptional development agility and serviceability. Key networking functions, such as:
the Border Gateway Protocol (BGP) routing stack
the Link Layer Discovery Protocol (LLDP)
platform monitoring (PMON)
each run within their own isolated Docker container. This modularity allows individual components to be updated, restarted, or replaced without affecting the entire system, a critical feature for maintaining high availability in large-scale environments. The central nervous system of this distributed architecture is an in-memory Redis database engine, which serves as the single source of truth for the switch's state. Rather than communicating through direct inter-process communication (IPC) or rigid APIs, SONiC's containers interact asynchronously by publishing and subscribing to various tables within the Redis database. This loosely coupled communication model is fundamental to SONiC's flexibility. Key databases include:
CONFIG_DB: Stores the persistent, intended configuration of the switch.
APPL_DB: A high-level, application-centric representation of the network state, such as routes and neighbors.
STATE_DB: Holds the operational state of various components.
ASIC_DB: A hardware-agnostic representation of the forwarding plane's desired state.
The cornerstone of SONiC's hardware independence, and the very feature that makes the VPP integration possible, is the Switch Abstraction Interface (SAI). SAI is a standardized C API that defines a vendor-agnostic way for SONiC's software to control the underlying forwarding elements. A dedicated container, syncd, is responsible for monitoring the ASIC_DB. Upon detecting changes, it makes the corresponding SAI API calls to program the hardware. Each hardware vendor provides a libsai.so library that implements this API, translating the standardized calls into the specific commands required by their ASIC's SDK. This elegant abstraction allows the entire SONiC control plane to remain blissfully unaware of the specific silicon it is running on.

VPP: The User-Space Data Plane Accelerator

While SONiC manages the high-level state of the network, VPP is singularly focused on the task of moving packets as quickly as possible. As a core component of the FD.io project, VPP is an extensible framework that provides the functionality of a router or switch entirely in software. Its remarkable performance is derived from several key architectural principles.
Vector Processing

The first and most important is vector processing. Unlike traditional scalar processing, where the CPU processes one packet at a time through the entire forwarding pipeline, VPP processes packets in batches, or “vectors”. A vector typically contains up to 256 packets. The entire vector is processed through the first stage of the pipeline, then the second, and so on. This approach has a profound impact on CPU efficiency. The first packet in the vector effectively “warms up” the CPU’s instruction cache (i-cache), loading the necessary instructions for a given task. The subsequent packets in the vector can then be processed using these cached instructions, dramatically reducing the number of expensive fetches from main memory and maximizing the benefits of modern superscalar CPU architectures.

User-Space Orientation & Kernel Bypass

The second principle is user-space operation and kernel bypass. The Linux kernel network stack, while powerful and flexible, introduces performance overheads from system calls, context switching between kernel and user space, and interrupt handling. VPP avoids this entirely by running as a user-space process. It typically leverages the Data Plane Development Kit (DPDK) to gain direct, exclusive access to network interface card (NIC) hardware. Using poll-mode drivers (PMDs), VPP continuously polls the NIC’s receive queues for new packets, eliminating the latency and overhead associated with kernel interrupts. This direct hardware access is a critical component of its high-throughput, low-latency performance profile.

Packet Processing Graph

Finally, VPP’s functionality is organized as a packet processing graph. Each feature or operation, such as an L2 MAC lookup, an IPv4 route lookup, or an Access Control List (ACL) check, is implemented as a “node” in a directed graph. Packets flow from node to node as they are processed. This modular architecture makes VPP highly extensible. New networking features can be added as plugins that introduce new graph nodes or rewire the existing graph, without requiring changes to the core VPP engine.

The design of SAI was a stroke of genius, originally intended to abstract the differences between various hardware ASICs. However, its true power is revealed in its application here. The abstraction is so well defined that it can be used to represent not just a physical piece of silicon, but a software process. The SONiC control plane does not know or care whether the entity on the other side of the SAI API is a Broadcom Tomahawk chip or a VPP instance running on an x86 CPU. It simply speaks the standardized language of SAI. This demonstrates that SAI successfully abstracted away not just the implementation details of a data plane, but the very notion of it being physical, allowing a purely software-based forwarder to be substituted with remarkable elegance.
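To make that abstraction concrete, here is a deliberately tiny sketch in Python. It is not real SAI or SONiC code (the class and function names are invented for illustration); it only shows the “same interface, different backend” idea that lets syncd stay indifferent to what sits behind the API:

from typing import Protocol

class SaiLikeBackend(Protocol):
    """Anything that implements this interface can sit behind syncd."""
    def create_route_entry(self, prefix: str, next_hop: str) -> None: ...

class AsicBackend:
    def create_route_entry(self, prefix: str, next_hop: str) -> None:
        # On real hardware this would call into the vendor's ASIC SDK.
        print(f"[ASIC SDK] program {prefix} -> {next_hop}")

class VppBackend:
    def create_route_entry(self, prefix: str, next_hop: str) -> None:
        # A libsaivpp.so-style backend would instead emit a VPP API message.
        print(f"[VPP API] add route {prefix} via {next_hop}")

def syncd(backend: SaiLikeBackend) -> None:
    # syncd neither knows nor cares which backend it was loaded with.
    backend.create_route_entry("10.20.2.0/24", "10.0.1.2")

syncd(AsicBackend())
syncd(VppBackend())

The point is not the code itself but the shape of it: the caller is written once against a stable interface, and swapping the forwarding plane is a matter of loading a different implementation.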
Feature | SONiC | VPP
Primary Function | Control Plane & Management Plane | Data Plane
Architectural Model | Containerized Microservices | Packet Processing Graph
Key Abstraction | Switch Abstraction Interface (SAI) | Graph Nodes & Plugins
Operating Environment | Kernel/User-space Hybrid (Linux-based) | Pure User-space (Kernel Bypass)
Core Performance Mechanism | Distributed State Management via Redis | Vector Processing & CPU Cache Optimization
Primary Configuration Method | Declarative (config_db.json, Redis) | Imperative (startup.conf, Binary API)

Creating a High-Performance Software Router

The integration of SONiC and VPP is a sophisticated process that transforms two independent systems into a single, cohesive software router. The architecture hinges on SONiC’s decoupled state management and a clever translation layer that bridges the abstract world of the control plane with the concrete forwarding logic of the data plane. Tracing the lifecycle of a single route update reveals the elegance of this design.

The End-to-End Control Plane Flow

The process begins when a new route is learned by the control plane. In a typical L3 scenario, this happens via BGP.

Route Reception: An eBGP peer sends a route update to the SONiC router. This update is received by the bgpd process, which runs within the BGP container. SONiC leverages the well-established FRRouting (FRR) suite for its routing protocols, so bgpd is the FRR BGP daemon.
RIB Update: bgpd processes the update and passes the new route information to zebra, FRR’s core component that acts as the Routing Information Base (RIB) manager.
Kernel and FPM Handoff: zebra performs two critical actions. First, it injects a route into the host Linux kernel’s forwarding table via a Netlink message. Second, it sends the same route information to the fpmsyncd process using the Forwarding Plane Manager (FPM) interface, a protocol designed for pushing routing updates from a RIB manager to a forwarding plane agent.
Publishing to Redis: The fpmsyncd process acts as the first bridge between the traditional routing world and SONiC’s database-centric architecture. It receives the route from zebra and writes it into the APPL_DB table in the Redis database. At this point, the route has been successfully onboarded into the SONiC ecosystem.
Orchestration and Translation: The Orchestration Agent (orchagent), a key process within the Switch State Service (SWSS) container, is constantly subscribed to changes in the APPL_DB. When it sees the new route entry, it performs a crucial translation. It converts the high-level application intent (“route to prefix X via next-hop Y”) into a hardware-agnostic representation and writes this new state to the ASIC_DB table in Redis.
Synchronization to the Data Plane: The final step in the SONiC control plane is handled by the syncd container. This process subscribes to the ASIC_DB. When it detects the new route entry created by orchagent, it knows it must program this state into the underlying forwarding plane.

This entire flow is made possible by the architectural decision to use Redis as a central, asynchronous message bus. In a traditional, monolithic NOS, the BGP daemon might make a direct, tightly coupled function call to a forwarding plane driver. This creates brittle dependencies. SONiC’s pub/sub model, by contrast, ensures that each component is fully decoupled. The BGP container’s only responsibility is to publish routes to the APPL_DB; it has no knowledge of who will consume that information.
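If you want to see this hand-off for yourself on a running box, you can peek at what fpmsyncd has published. The snippet below is a minimal sketch using the redis-py client; the database index and key layout are assumptions based on typical SONiC deployments, so check your image's database_config.json before relying on them:

import redis  # pip install redis

# Assumption: APPL_DB lives at index 0 of the Redis instance inside the
# database container (verify against database_config.json on your image).
APPL_DB = 0
r = redis.Redis(host="127.0.0.1", port=6379, db=APPL_DB, decode_responses=True)

# fpmsyncd stores learned routes as hashes keyed like "ROUTE_TABLE:<prefix>".
for key in r.scan_iter("ROUTE_TABLE:*"):
    entry = r.hgetall(key)
    print(key, entry)  # e.g. ROUTE_TABLE:10.20.2.0/24 {'nexthop': '10.0.1.2', ...}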
This allows the final consumer, the data plane, to be swapped out with zero changes to any of the upstream control plane components. This decoupled architecture is what allows VPP to be substituted for a hardware ASIC so cleanly, and implies that other data planes could be integrated in the future simply by creating a new SAI implementation.

The Integration Foundation: libsaivpp.so

The handoff from syncd to the data plane is where the specific SONiC-VPP integration occurs. In a standard SONiC deployment on a physical switch, the syncd container would be loaded with a vendor-provided shared library (e.g., libsai_broadcom.so). When syncd reads from the ASIC_DB, it calls the appropriate standardized SAI API function (e.g., sai_api_route->create_route_entry()), and the vendor library translates this into proprietary SDK calls to program the physical ASIC. In the SONiC-VPP architecture, this vendor library is replaced by a purpose-built shared library: libsaivpp.so. This library is the critical foundation of the entire system. It implements the full SAI API, presenting the exact same interface to syncd as any hardware SAI library would. However, its internal logic is completely different. When syncd calls a function like create_route_entry(), libsaivpp.so does not communicate with a hardware driver. Instead, it translates the SAI object and its attributes into a binary API message that the VPP process understands. It then sends this message to the VPP engine, instructing it to add the corresponding entry to its software forwarding information base (FIB). This completes a “decision-to-execution” loop, bridging SONiC’s abstract control plane with VPP’s high-performance software data plane.

Component (Container) | Key Process(es) | Role in Integration
BGP Container | bgpd | Receives BGP updates from external peers using the FRRouting stack.
SWSS Container | zebra, fpmsyncd | zebra manages the RIB. fpmsyncd receives route updates from zebra and publishes them to the Redis APPL_DB.
Database Container | redis-server | Acts as the central, asynchronous message bus for all SONiC components. Hosts the APPL_DB and ASIC_DB.
SWSS Container | orchagent | Subscribes to APPL_DB, translates application intent into a hardware-agnostic format, and publishes it to the ASIC_DB.
Syncd Container | syncd | Subscribes to ASIC_DB and calls the appropriate SAI API functions to program the data plane.
VPP Platform | libsaivpp.so | The SAI implementation for VPP. Loaded by syncd, it translates SAI API calls into VPP binary API messages.
VPP Process | vpp | The user-space data plane. Receives API messages from libsaivpp.so and programs its internal forwarding tables accordingly.

In the second part of our series, we will move from architecture to action – building and testing a complete SONiC-VPP software router in a containerized lab. We’ll configure BGP routing, verify control-to-data plane synchronization, and analyze performance benchmarks that showcase the real-world potential of this disaggregated design.
Sources SONiC (operating system) – Wikipedia https://en.wikipedia.org/wiki/SONiC_(operating_system) Broadcom https://www.broadcom.com/products/ethernet-connectivity/software/enterprise-sonic Vector Packet Processing Documentation – FD.io https://docs.fd.io/vpp/21.06/ FD.io VPP Whitepaper — Vector Packet Processing Whitepaper https://fd.io/docs/whitepapers/FDioVPPwhitepaperJuly2017.pdf SONiC Virtual Switch with FD.io Vector Packet Processor (VPP) on Google Cloud https://ronnievsmith.medium.com/sonic-virtual-switch-with-fd-ios-vector-packet-processor-vpp-on-google-cloud-89f9c62f5fe3 Simplifying Multi-Cloud Networking with SONiC Virtual Gateway https://sonicfoundation.dev/simplifying-multi-cloud-networking-with-sonic-virtual-gateway/ Deep dive into SONiC Architecture & Design – SONiC Foundation https://sonicfoundation.dev/deep-dive-into-sonic-architecture-design/ Vector Packet Processing – Wikipedia https://en.wikipedia.org/wiki/Vector_Packet_Processing Kernel Bypass Networking with FD.io and VPP — Toonk.io https://toonk.io/kernel-bypass-networking-with-fd-io-and-vpp/index.html PANTHEON.tech*, Delivers Fast Data and Control Planes – Intel® Network Builders https://builders.intel.com/docs/networkbuilders/pantheon-tech-intel-deliver-fast-data-and-control-planes-1663788453.pdf VPP Guide — PANTHEON.tech https://pantheon.tech/blog-news/vpp-guide/ The post Disaggregated Routing with SONiC and VPP: Architecture and Integration – Part One appeared first on Linux.com.
-
Arduino Alternative Microcontroller Boards for Your DIY Projects in the Post-Qualcomm Era
by: Pulkit Chandak Wed, 22 Oct 2025 07:13:10 GMT Arduino has been the cornerstone of embedded electronics projects for a while now. Be it DIY remote-controlled vehicles, binary clocks, power laces, or, as is relevant to the month of publishing, flamethrowing Jack-O'-Lanterns! The versatility and affordability of these boards have been unparalleled. But now that Qualcomm has acquired Arduino, projecting more AI-forward features and more powerful hardware, there might be some changes around the corner. Perhaps I am reading too much between the lines, but not all of us have favorable views about Big Tech and corporate greed. We thought it might be a good time to look at some alternatives. Since Arduino has a lot of different models with different features, we will not draw a comparison between Arduino and other boards, but just highlight the unique features these alternative boards have.

1. Raspberry Pi Pico

Raspberry Pi needs no introduction, it being the one company besides Arduino that has always been the favorite of tinkerers. While Raspberry Pi is known for its full-fledged single-board computers, the Pico is a development board for programming dedicated tasks like the Arduino boards. There are two releases of the Pico at the time of writing this article, 1 and 2, the major upgrade being the processor. There are certain suffixes which denote model features, "W" denoting wireless capabilities and "H" denoting pre-soldered headers. Here, I describe the cutting-edge model, the Pico 2 W with Headers.

Processors: Dual Cortex-M33 (ARM) up to 133 MHz and optional Hazard3 processors (RISC-V)
Memory: 520 KB on-chip SRAM
Input-Output: 26 GPIO pins
Connectivity: Optionally 2.4 GHz Wi-Fi and Bluetooth 5.2 on the W model
Power: Micro-USB
Programming Software or Language: MicroPython or C/C++
Price: $8
Extra Features: Temperature sensor

The greatest advantage of Raspberry Pi is the huge userbase, second probably only to Arduino. Besides that, the GPIO pins make projects easier to construct, and the optional RISC-V processors give it an open-source experimental edge that many long for.

2. ESP32

ESP32 is a SoC that has soared in popularity in the past decade, and for all the right reasons. It comes in very cheap, screaming "hobbyist", and is committed to good documentation and an open SDK (software development kit). It came as a successor to the already very successful and still relevant ESP8266 SoC. The categorization is a little hard to get the hang of because of the sheer number of boards available. The original ESP32 SoC boards come with dual-core Xtensa LX6 processors that go up to 240 MHz, and they come with Wi-Fi + Bluetooth classic/LE built-in. The ESP32-S series are a little enhanced, with more GPIO pins for connectivity. The ESP32-C series transitioned to RISC-V chips, and finally the ESP32-H series are designed for ultra low-power IoT applications. If the board name has WROOM, it belongs to the original basic family, but the ones with WROVER indicate modules with PSRAM and more memory in general. You can find all the "DevKits" here. (Both the Pico and most ESP32 boards can be programmed with MicroPython; see the short sketch below for a taste of what that looks like.)
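Here is a minimal MicroPython blink sketch of the kind you would flash to either board. Treat it as a hedged example: the "LED" pin identifier works for the onboard LED on recent Pico / Pico W firmware, while on an ESP32 DevKit you would substitute the GPIO number your LED is wired to.

from machine import Pin
from time import sleep

# "LED" addresses the onboard LED on recent Raspberry Pi Pico (W) firmware.
# On an ESP32 DevKit, replace it with a GPIO number, e.g. Pin(2, Pin.OUT).
led = Pin("LED", Pin.OUT)

while True:
    led.value(not led.value())  # toggle the LED
    sleep(0.5)                  # blink twice per second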
Getting over the whole naming culture, I will directly describe one board here that might fulfill your Arduino-alternative needs, the ESP32-DevKitC-VE:

Processors: Dual-core 32-bit LX6 up to 240 MHz
Memory: 8 MB
Input-Output: 34 programmable GPIOs
Connectivity: 802.11 Wi-Fi, Bluetooth 4.2 with BLE
Power: Micro-USB
Programming Software or Language: Arduino IDE, PlatformIO IDE (VS Code), LUA, MicroPython, Espressif IDF (IoT Development Framework), JavaScript
Price: $11
Extra Features: Breadboard friendly, rich set of peripheral interfaces

I encourage you to do your own research based on your needs and choose one, as the support and hardware are rock solid, but the sheer number of options can be a little tricky to figure out.

3. Adafruit Feather

Adafruit Feather isn't a single board, but a category of hardware boards that each come with all sorts of different features and processors. The idea is getting a "feather", which is the board, and then getting "wings", which are hats/shields, basically extending the features and abilities of the board, and there are a huge number of them. This extensible versatility is the most attractive feature of these boards, but also the reason why I cannot describe one board that best suits the needs of any user. I can, however, tell you what options they provide.

All Feathers:
Can be programmed with the Arduino IDE
Come with Micro-USB or USB-C
Are 0.9" long and breadboard-compatible
Can be run with either USB power or a LiPo battery

Processors
The boards are available with several different processors, such as:
Atmel ATmega32u4 and ATmega328P - 8-bit AVR
Atmel ATSAMD21 - 32-bit ARM Cortex M0+
Atmel ATSAMD51 - 32-bit ARM Cortex M4
Broadcom/Cypress WICED - STM32 with WiFi
Espressif ESP8266 and ESP32 - Tensilica with WiFi/BT
Freescale MK20 - ARM Cortex M4, as the Teensy 3.2 Feather Adapter
Nordic nRF52832 and nRF52840 - ARM Cortex & Bluetooth LE
Packet radio modules featuring SemTech SX1231
LoRa radio modules featuring SemTech SX127x

A good model to look into for an Arduino alternative is the Adafruit ESP32 Feather V2.

Connectivity and wings
The "feathers" have different categories based on their connectivity. The categories include:
Basic Feathers
Wi-Fi Feathers
Bluetooth Feathers
Cellular Feathers
LoRa and Radio Feathers

This doesn't mean that these connectivity features are mutually exclusive; there are several boards which have more than one of these connectivity options. The Wings add all the functionality to the boards, and the number of options is immense. I cannot possibly list them here.

4. Seeeduino

As Arduino alternatives go, this board seems to be one of the most worthy of holding that title. It looks like an Arduino, works with the software that Arduino is compatible with, and even supports the shields made for the UNO-R3. Here is the description of the most recent model at the time of writing this, the Seeeduino V4.3:

Processors: ATmega328
Memory: 2 KB RAM, 1 KB EEPROM and 32 KB Flash Memory
Input-Output: 14 digital IO pins, 6 analog inputs
Power: Micro-USB, DC Input Jack
Programming Software or Language: Arduino IDE
Price: $7.60

If you need a no-brainer Arduino alternative that delivers what it does with stability and efficiency, this should be your go-to choice.

5. STM32 Nucleo Boards

STM32 offers a very, very wide range of development boards, among which the Nucleo boards seem like the best alternatives for Arduino. They come in three series as well: Nucleo-32, Nucleo-64 and Nucleo-144, the numbers at the end of which denote the number of connectivity pins that the board offers.
Every single series has a number of models within, again. Here, I will describe the one most appropriate as an Arduino alternative, the STM32 Nucleo-F103RB:

Microcontroller: STM32
Input-Output: 64 IO pins; Arduino shield-compatible
Connectivity: Arduino Uno V3 expansion connector
Power: Micro-USB
Programming Software or Language: IAR Embedded Workbench, MDK-ARM, STM32CubeIDE, etc.
Price: $10.81
Extra Features: 1 programmable LED, 1 programmable button, 1 reset button
Optional Features: Second user LED, cryptography, USB-C, etc.

STM32 provides great hardware abstraction, ease of development, GUI-based initialization, good resources and more. If that is the kind of thing you need, then this should probably be your choice.

6. micro:bit

micro:bit boards are designed mostly for younger students and kids to learn programming, but offer some really interesting features that can help anyone make a project without buying many extra parts. In fact, this is one of the ideal tools for introducing STEM education to young children. Here are the details of the most recent version at the time of writing, the micro:bit v2:

Processors: Nordic Semiconductor nRF52833
Memory: 128 KB RAM, 512 KB Flash Memory
Input-Output: 25 pins (4 dedicated GPIO, PWM, I2C, SPI)
Connectivity: Bluetooth 5.0, radio
Power: Micro-USB
Programming Software or Language: MakeCode (blocks/JavaScript), MicroPython
Price: $17.95 (other more expensive bundles with extra hardware are also available)

The extra built-in features of the board include:
2 built-in buttons that can be programmed in different ways
Touch sensor on the logo, temperature sensor
Built-in speaker and microphone
25 programmable LEDs
Accelerometer and compass
Reset and power button

If you want a plethora of extra hardware features capable of executing almost anything you might want, or a development board with extensive documentation for younger audiences, this should be your go-to choice. The company doesn't only make great boards, but also supports inclusive technological education for children of all abilities, and sustainability, which is admirable.

7. Particle Photon 2

The Particle Photon 2 is a board designed with ease of prototyping in mind. It enables IoT projects, giving broad customization options to both hardware and software. The Photon is also Feather-compatible (from Adafruit), giving the ability to attach Wings to extend the features.

Processors: ARM Cortex M33, up to 200 MHz
Memory: 3 MB RAM, 2 MB Flash Memory
Input-Output: 16 GPIO pins
Connectivity: Dual-band Wi-Fi and BLE 5.3
Power: Micro-USB
Programming Software or Language: VS Code plug-in
Price: $17.95

The Photon also has a built-in programmable LED. Particle also provides a Wi-Fi antenna add-on component if your project requires that. If building new product ideas is your need, this might just be what you're looking for.

8. Teensy Development Boards

The Teensy board series, as the name suggests, aims for a small board with a minimal footprint and a lot of power packed in at an affordable price. There have been several releases of the board, with the most recent one at the time of writing being the Teensy 4.1:

Processors: ARM Cortex-M7 at 600 MHz
Memory: 1024K RAM, 8 MB Flash Memory
Input-Output: 55 digital IO pins, 18 analog input pins
Power: Micro-USB
Programming Software or Language: Arduino IDE + Teensyduino, Visual Micro, PlatformIO, CircuitPython, command line
Price: $31.50
Extra Features: Onboard Micro SD card

If you need a stable base for your project that just works, this might be your choice.
It is worth noting that the Teensy boards have excellent audio libraries and offer a lot of processing power.

9. PineCone

PineCone is a development board from one of the foremost open source companies, Pine64. It provides amazing features and connectivity, making it ideal for a lot of tinkering purposes.

Processors: 32-bit RV32IMAFC RISC-V “SiFive E24 Core”
Memory: 2 MB Flash Memory
Input-Output: 18 GPIO pins
Connectivity: Wi-Fi, BLE 5.0, Radio
Power: USB-C
Programming Software or Language: Rust
Price: $3.99
Extra Features: 3 on-board LEDs

The RISC-V processor gives it the open-source hardware edge that many other boards lack. That makes it quite good for IoT prototyping in devices and technologies that might be very new and untapped.

10. Sparkfun Development Boards

Sparkfun has a whole range of boards on their website, out of which the two most notable series are the "RedBoard" series and the "Thing" series. A big part of some of these boards is the Qwiic ecosystem, in which I2C sensors, actuators, shields, etc. can be connected to the board with the same 4-pin connector. Not only that, but you can daisy-chain the boards in one string, making it more convenient and less prone to errors. Here's a great article to learn about the Qwiic ecosystem.

Sparkfun RedBoard Qwiic
This is another board that is a perfect alternative to Arduino with extra features, because it was designed to be so. It is an Arduino-compatible board, supporting the software, shields, etc.

Microcontroller: ATmega328 with UNO's Optiboot Bootloader
Input-Output: 20 Digital IO pins, 1 Qwiic connector
Connectivity: 20 Digital I/O pins with 6 PWM pins
Power: Micro-USB, Pin input
Programming Software or Language: Arduino IDE
Price: $21.95

Sparkfun Thing Plus Series
The Sparkfun Thing Plus series comes with all sorts of different processors and connection abilities like RP2040, RP2350, nRF9160, ARM Cortex-M4, ESP32-based, STM32-based, etc. We've chosen to describe one of the most popular models here, the SparkFun Thing Plus - ESP32 WROOM (USB-C).

Microcontroller: ESP32-WROOM Module
Input-Output: 21 Multifunctional GPIO
Connectivity: Wi-Fi 2.4GHz, dual integrated Bluetooth (classic and BLE)
Power: USB-C, Qwiic connector
Programming Software or Language: Arduino IDE
Price: $33.73
Extra Features: RGB status LED, built-in SD card slot, Adafruit Feather compatible (you can attach the "Wings")

Sparkfun offers a lot of options, especially based on the form factor. They not only provide new unique features of their own, but also utilize the open technologies provided by other companies very well, as you can see.

Conclusion
The Arduino boards clearly have a lot of alternatives, varying in size, features and practicality. If Arduino being acquired leaves a bad taste in your mouth, or even if you just want to explore what the alternatives offer, I hope this article has been helpful for you. Please let us know in the comments if we missed your favorite one. Cheers!
-
PenTesting 101: Using TheHarvester for OSINT and Reconnaissance
by: Hangga Aji Sayekti Wed, 22 Oct 2025 11:49:45 +0530 Ever wonder how security pros find those hidden entry points before the real testing even begins? It all starts with what we call reconnaissance—the art of gathering intelligence. Think of it like casing a building before a security audit; you need to know the doors, windows, and air vents first. In this digital age, one of the go-to tools for this initial legwork is TheHarvester. At its heart, TheHarvester is a Python script that doesn't try to do anything fancy. Its job is straightforward: to scour publicly available information and collect things like email addresses, subdomains, IPs, and URLs. It looks in all the usual places, from standard search engines to specialized databases like Shodan, which is essentially a search engine for internet-connected devices. We did something like this by fingerprinting with WhatWeb in an earlier tutorial. But TheHarvester is a different tool with more diverse information. 📋To put this into practice, we're going to get our hands dirty with a live example. We'll use vulnweb.com as our test subject. This is a safe, legal website specifically set up by security folks to practice these very techniques, so it's the perfect place to learn without causing any harm. Let's dive in and see what we can uncover.Step 1: Installing TheHarvesterIf you're not using Kali Linux, you can easily install TheHarvester from its GitHub repository. Option A: Using apt (Kali Linux / Debian/Ubuntu)sudo apt update && sudo apt install theharvester Option B: Installing from source (Latest Version)git clone https://github.com/laramies/theHarvester.git cd theHarvester python3 -m pip install -r requirements.txt You can verify the installation by checking the help menu: theHarvester -h Step 2: Understanding the basic syntaxThe basic command structure of TheHarvester is straightforward: theHarvester -d <domain> -l <limit> -b <data_source> Let's break down the key options: -d or --domain: The target domain name (e.g., vulnweb.com).-l or --limit: The number of results to fetch from each data source (e.g., 100, 500). More results take longer.-b or --source: The data source to use. You can specify a single source like google or use all to run all available sources.-f or --filename: Save the results to an HTML and/or XML file.Step 3: Case Study: Reconnaissance on vulnweb.comLet's use TheHarvester to discover information about our target, vulnweb.com. We'll start with a broad search using the google and duckduckgo sources. Run a basic scantheHarvester -d vulnweb.com -l 100 -b google,duckduckgo If you're seeing the error The following engines are not supported: {'google'}, don't worry—you're not alone. This is a frequent problem that stems from how TheHarvester interacts with public search engines, particularly Google. Let's break down why this happens and walk through the most effective solutions. Why Does This Happen?The short answer: Google has made its search engine increasingly difficult to scrape programmatically. Here are the core reasons: Advanced Bot Detection: Google uses sophisticated algorithms to detect and block automated requests that don't come from a real web browser. TheHarvester's requests are easily identified as bots.CAPTCHAs: When Google suspects automated activity, it presents a CAPTCHA challenge. 
TheHarvester cannot solve these, so the request fails, and the module is disabled for the rest of your session.Lack of an API Key (for some sources): Some data sources, like Shodan, require a free API key to be used effectively. Without one, the module will not work.In the case of our example domain, vulnweb.com, this means we might miss some results that could be indexed on Google, but it's not the end of the world. Solution: Use the "All" flag with realistic expectationsYou can use -b all to run all modules. The unsupported ones will be gracefully skipped, and the supported ones will run. theHarvester -d vulnweb.com -l 100 -b all Now the output will look something like that. Read proxies.yaml from /etc/theHarvester/proxies.yaml ******************************************************************* * _ _ _ * * | |_| |__ ___ /\ /\__ _ _ ____ _____ ___| |_ ___ _ __ * * | __| _ \ / _ \ / /_/ / _` | '__\ \ / / _ \/ __| __/ _ \ '__| * * | |_| | | | __/ / __ / (_| | | \ V / __/\__ \ || __/ | * * \__|_| |_|\___| \/ /_/ \__,_|_| \_/ \___||___/\__\___|_| * * * * theHarvester 4.8.0 * * Coded by Christian Martorella * * Edge-Security Research * * cmartorella@edge-security.com * * * ******************************************************************* [*] Target: vulnweb.com Read api-keys.yaml from /etc/theHarvester/api-keys.yaml An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected An exception has occurred: Server disconnected [*] Searching Baidu. Searching 0 results. ..... ..... ..... The output is actually huge, spanning over 600 lines. You can view the complete output in this GitHub gist. Analyzing the outputWhen TheHarvester finishes its work, the real detective work begins. The Initial Chatter: Warnings and Status MessagesRight off the bat, you'll see a series of status checks and warnings: Read api-keys.yaml from /etc/theHarvester/api-keys.yaml An exception has occurred: Server disconnected [*] Searching Baidu. Searching 0 results. [*] Searching Bing. Searching results. Don't let these alarm you. The "Server disconnected" and similar exceptions are TheHarvester's way of telling you that certain data sources were unavailable or timed out—this is completely normal during reconnaissance. The tool gracefully skips these and moves on to working sources. The reconnaissance gold: Key findingsHere's where we strike valuable intelligence: Network infrastructure (ASN)[*] ASNS found: 1 -------------------- AS16509 This reveals the Autonomous System Number, essentially telling us which major network provider hosts this infrastructure (in this case, AS16509 is Amazon.com, Inc.). The attack surface - interesting URLs[*] Interesting Urls found: 15 -------------------- http://testphp.vulnweb.com/ https://testasp.vulnweb.com/ http://testphp.vulnweb.com/login.php This is your target list! Each URL represents a potential entry point. Notice we've found: Multiple applications (testphp, testasp, testhtml5)Specific functional pages (login.php, search.php)Both HTTP and HTTPS services.IP address mapping[*] IPs found: 2 ------------------- 44.228.249.3 44.238.29.244 Only two IP addresses serving all this content? 
This suggests virtual hosting, where multiple domains share the same server, which is valuable for understanding the infrastructure setup.

The subdomain treasure trove
[*] Hosts found: 610
---------------------
testphp.vulnweb.com:44.228.249.3
testasp.vulnweb.com:44.238.29.244

This massive list of 610 hosts reveals the true scale of the environment. You can see patterns emerging:
Application subdomains (testphp, testasp)
Infrastructure components (compute.vulnweb.com, elb.vulnweb.com)
Geographic distribution across AWS regions

What's not there matters too
[*] No emails found.
[*] No people found.

For a test site like vulnweb.com, this makes sense. But in a real engagement, missing email addresses might mean you need different reconnaissance approaches.

From reconnaissance to action
So what's next with this intelligence? Your penetration testing roadmap becomes clear:
Prioritize targets - Start with the login pages and search functions
Scan the applications - Use tools like nikto or nuclei on the discovered URLs
Probe the infrastructure - Run nmap scans on the identified IP addresses
Document everything - Each subdomain is a potential attack vector

In just minutes, TheHarvester has transformed an unknown domain into a mapped-out territory ready for deeper security testing.

Step 4: Expanding the search with more data sources
The real power of TheHarvester comes from using multiple data sources. Let's run a more comprehensive scan using bing, linkedin, and threatcrowd.

theHarvester -d vulnweb.com -l 100 -b bing,linkedin,threatcrowd

Bing: Often returns different and sometimes more results than Google.
LinkedIn: Can be useful for finding employee names and profiles associated with a company, which can help in social engineering attacks. For vulnweb.com, this won't yield results, but for a real corporate target, it's invaluable.
Threat Crowd: An open-source threat intelligence engine that can often provide a rich list of subdomains.

Step 5: Using all sources and saving results
For the most thorough reconnaissance, you can use nearly all sources with the -b all flag.
🚧 This can be slow and may trigger captchas on some search engines.
It's also crucial to save your results for later analysis. Use the -f flag for this.

theHarvester -d vulnweb.com -l 100 -b all -f recon-results

This command will:
Query all available data sources.
Limit results to 100 per source.
Save the final output to recon-results.json and recon-results.xml.

Read the JSON file with cat and jq (a short Python sketch for the same kind of post-processing appears at the end of this article):

cat recon-results.json | jq '.'

Important notes and best practices
Rate Limiting: Be respectful of the data sources. Using high limits or running scans too frequently can get your IP address temporarily blocked.
Legality: Only use TheHarvester on domains you own or have explicit permission to test. Unauthorized reconnaissance can be illegal.
Context is Key: TheHarvester is a starting point. The data it collects must be verified and analyzed in the context of a broader security assessment.

TheHarvester is a cornerstone tool for any penetration tester or security researcher. By following this guide, you can effectively use it to map out the digital footprint of your target and lay the groundwork for a successful security assessment.
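And if you prefer a script over the jq one-liner from Step 5, here is a small Python sketch for post-processing the saved results. The top-level keys ("hosts", "ips", "emails") match what recent theHarvester releases write, but the schema can differ between versions, so treat the key names as assumptions and adjust them to whatever your recon-results.json actually contains.

import json

# Load the JSON report written by: theHarvester -d vulnweb.com -b all -f recon-results
with open("recon-results.json") as f:
    data = json.load(f)

# Deduplicate and sort the discovered artifacts (key names are assumptions).
hosts = sorted(set(data.get("hosts", [])))
ips = sorted(set(data.get("ips", [])))
emails = sorted(set(data.get("emails", [])))

for host in hosts:
    print(host)

print(f"\n{len(hosts)} hosts, {len(ips)} IPs, {len(emails)} emails found")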
-
Google Chrome & Iframe `allow` Permissions Problems
by: Chris Coyier Mon, 20 Oct 2025 18:06:24 +0000 If you’re a CodePen user, this shouldn’t affect you aside from potentially seeing some console noise while we work this out. Carry on! At CodePen we have Embedded Pens which are shown in an <iframe>. These contain user-authored code at a non-same-origin URL from where they are placed. We like to be both safe and as permissive as possible with what we allow users to build and test. The sandbox attribute helps us with safety, and while there are some issues with it that we’ll get to later, this is mostly about the allow attribute. Here’s an example. A user wants to use the navigator.clipboard.writeText() API. So they write JavaScript like:

button.onclick = async () => {
  try {
    await navigator.clipboard.writeText(`some text`);
    console.log('Content copied to clipboard');
  } catch (err) {
    console.error('Failed to copy: ', err);
  }
}

The Embedded Pen is placed on arbitrary origins, for example: chriscoyier.net. The src of the <iframe> is at codepen.io, so there is an origin mismatch there. The JavaScript in the iframe is not same-origin JavaScript, thus is subject to permissions policies. If CodePen were to not use the allow attribute on our <iframe>, it would throw an error when the user tries to execute that JavaScript.

Failed to copy: NotAllowedError: Failed to execute 'writeText' on 'Clipboard': The Clipboard API has been blocked because of a permissions policy applied to the current document. See https://crbug.com/414348233 for more details.

This is an easy fix. We make sure that allow attribute is on the <iframe>, like this, targeting the exact feature we want to allow at any origin:

<iframe src="https://codepen.io/..." allow="clipboard-write *;"> </iframe>

But here’s where the problem comes in…

The (new) Nested Iframe Issue

CodePen’s Embedded Pens are actually nested <iframe>s, in code structured like this:

<iframe src="https://codepen.io/...">
  CodePen UI
  <iframe src="...">
    User-Authored Code
  </iframe>
</iframe>

We need to put the allow attribute on the user-authored code, so it works, like this:

<iframe src="https://codepen.io/...">
  CodePen UI
  <iframe src="..." allow="clipboard-write *;" >
    User-Authored Code
  </iframe>
</iframe>

This is the problem! As soon as the nested iframe has the allow attribute, as of recently (seems like Chrome 136) this will throw an error:

[Violation] Potential permissions policy violation: clipboard-write is not allowed in this document.

With our complete list (which I’ll include below), this error list is very intense. Can’t we just put the allow attributes on both <iframe>s? Yes and no. Now we run into a second problem that we’ve been working around for many years. That problem is that every browser has a different set of allow attribute values that it supports. If you use a value that isn’t supported, it throws console errors or warnings about those attributes. This is noisy or scary to users who might think it’s their own code causing the issue, and it’s entirely outside of their (or our) control.

The list of allow values for Google Chrome

We know we need all these to allow users to test browser APIs. This list is constantly being adjusted with new APIs, often ones that our users ask for directly.
<iframe allow="accelerometer *; bluetooth *; camera *; clipboard-read *; clipboard-write *; display-capture *; encrypted-media *; geolocation *; gyroscope *; language-detector *; language-model *; microphone *; midi *; rewriter *; serial *; summarizer *; translator *; web-share *; writer *; xr-spatial-tracking *" ></iframe> There are even some quite-new AI-related attributes in there reflecting brand new browser APIs. Example of allow value errors If were to ship those allow attribute values on all <iframe>s that we generate for Embedded Pens, here’s what it would look like in Firefox: At the moment, Firefox actually displays three sets of these warning. That’s a lot of console noise. Safari, at the moment, isn’t displaying errors or warnings about unsupported allow attribute values, but I believe they have in the past. Chrome itself throws warnings. If I include an unknown policy like fartsandwich, it will throw a warning like: Unrecognized feature: 'fartsandwich'. Those AI-related attributes require a trial which also throw warnings, so most users get that noise as well. We (sorry!) Need To Do User-Agent Sniffing To avoid all this noise and stop scaring users, we detect the user-agent (client-side) and generate the iframe attributes based on what browser we’re pretty sure it is. Here’s our current data and choices for the allow attribute export default { allowAttributes: { chrome: [ 'accelerometer', 'bluetooth', 'camera', 'clipboard-read', 'clipboard-write', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'language-detector', 'language-model', 'microphone', 'midi', 'rewriter', 'serial', 'summarizer', 'translator', 'web-share', 'writer', 'xr-spatial-tracking' ], firefox: [ 'camera', 'display-capture', 'geolocation', 'microphone', 'web-share' ], default: [ 'accelerometer', 'ambient-light-sensor', 'camera', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'microphone', 'midi', 'payment', 'serial', 'vr', 'web-share', 'xr-spatial-tracking' ] } }; We’ve been around long enough to know that user-agent sniffing is rife with problems. And also around long enough that you gotta do what you gotta do to solve problems. We’ve been doing this for many years and while we don’t love it, it’s mostly worked. The User-Agent Sniffing Happens on the Client <script> /* We need to user-agent sniff at *this* level so we can generate the allow attributes when the iframe is created. */ </script> <iframe src="..." allow="..."></iframe> CodePen has a couple of features where the <iframe> is provided directly, not generated. Direct <iframe> embeds. Users choose this in situations where they can’t run JavaScript directly on the page it’s going (e.g. RSS, restrictive CMSs, etc) oEmbed API. This returns an <iframe> to be embedded via a server-side call. The nested structure of our embeds has helped us here where we have that first level of iframe to attempt to run the user-agent sniff an apply the correct allow attributes to the internal iframe. The problem now is that if we’re expected to provide the allow attributes directly, we can’t know which set of attributes to provide, because any browser in the world could potentially be loading that iframe. Solutions? Are the allow attributes on “parent” iframes really necessary? Was this a regression? Or is this a feature? It sorta seems like the issue is that it’s possible for nested iframes to loosen permissions on a parent, which could be a security issue? It would be good to know where we fall here. 
Could browsers just stop erroring or warning about unsupported allow attributes? Looks like that’s what Safari is doing and that seems OK? If this is the case, we could just ship the complete set of allow attributes to all browsers. A little verbose but prevents needing to user-agent sniff. This could also help with the problem of needing to “keep up” with these attributes quite as much. For example, if Firefox starts to support the “rewriter” value, then it’ll just start working. This is better than some confused or disappointed user writing to support about it. Even being rather engaged with web platform news, we find it hard to catch when these very niche features evolve and need iframe attribute changes. Could browsers give us API access to what allow attributes are supported? Can the browser just tell us which ones it supports and then we could verify our list against that? Navigator.allow? Also… It’s not just the allow attribute. We also maintain browser-specific sets for the sandbox attribute. Right now, this isn’t affected by the nesting issues, but we could see it going that road. This isn’t entirely about nested iframes. We use one level of iframe anywhere on codepen.io we show a preview of a Pen, and we need allow attributes there also. This is less of an immediate problem because of the user-agent sniffing JS we have access to do get them right, but ideally we wouldn’t have to do that at all.
-
Building a Honeypot Field That Works
by: Zell Liew Mon, 20 Oct 2025 16:11:40 +0000 Honeypots are fields that developers use to prevent spam submissions. They still work in 2025. So you don’t need reCAPTCHA or other annoying mechanisms. But you got to set a couple of tricks in place so spambots can’t detect your honeypot field. Use This I’ve created a Honeypot component that does everything I mention below. So you can simply import and use them like this: <script> import { Honeypot } from '@splendidlabz/svelte' </script> <Honeypot name="honeypot-name" /> Or, if you use Astro, you can do this: --- import { Honeypot } from '@splendidlabz/svelte' --- <Honeypot name="honeypot-name" /> But since you’re reading this, I’m sure you kinda want to know what’s the necessary steps. Preventing Bots From Detecting Honeypots Here are two things that you must not do: Do not use <input type=hidden>. Do not hide the honeypot with inline CSS. Bots today are already smart enough to know that these are traps — and they will skip them. Here’s what you need to do instead: Use a text field. Hide the field with CSS that is not inline. A simple example that would work is this: <input class="honeypot" type="text" name="honeypot" /> <style> .honeypot { display: none; } </style> For now, placing the <style> tag near the honeypot seems to work. But you might not want to do that in the future (more below). Unnecessary Enhancements You may have seen these other enhancements being used in various honeypot articles out there: aria-hidden to prevent screen readers from using the field autocomplete=off and tabindex="-1" to prevent the field from being selected <input ... aria-hidden autocomplete="off" tabindex="-1" /> These aren’t necessary because display: none itself already does the things these properties are supposed to do. Future-Proof Enhancements Bots get smarter everyday, so I won’t discount the possibility that they can catch what we’ve created above. So, here are a few things we can do today to future-proof a honeypot: Use a legit-sounding name attribute values like website or mobile instead of obvious honeypot names like spam or honeypot. Use legit-sounding CSS class names like .form-helper instead of obvious ones like .honeypot. Put the CSS in another file so they’re further away and harder to link between the CSS and honeypot field. The basic idea is to trick spam bot to enter into this “legit” field. <input class="form-helper" ... name="occupation" /> <!-- Put this into your CSS file, not directly in the HTML --> <style> .form-helper { display: none; } </style> There’s a very high chance that bots won’t be able to differentiate the honeypot field from other legit fields. Even More Enhancements The following enhancements need to happen on the <form> instead of a honeypot field. The basic idea is to detect if the entry is potentially made by a human. There are many ways of doing that — and all of them require JavaScript: Detect a mousemove event somewhere. Detect a keyboard event somewhere. Ensure the the form doesn’t get filled up super duper quickly (‘cuz people don’t work that fast). 
Now, the simplest way of using these (I always advocate for the simplest way I know), is to use the Form component I’ve created in Splendid Labz: <script> import { Form, Honeypot } from '@splendidlabz/svelte' </script> <Form> <Honeypot name="honeypot" /> </Form> If you use Astro, you need to enable JavaScript with a client directive: --- import { Form, Honeypot } from '@splendidlabz/svelte' --- <Form client:idle> <Honeypot name="honeypot" /> </Form> If you use vanilla JavaScript or other frameworks, you can use the preventSpam utility that does the triple checks for you: import { preventSpam } from '@splendidlabz/utils/dom' let form = document.querySelector('form') form = preventSpam(form, { honeypotField: 'honeypot' }) form.addEventListener('submit', event => { event.preventDefault() if (form.containsSpam) return else form.submit() }) And, if you don’t wanna use any of the above, the idea is to use JavaScript to detect if the user performed any sort of interaction on the page: export function preventSpam( form, { honeypotField = 'honeypot', honeypotDuration = 2000 } = {} ) { const startTime = Date.now() let hasInteraction = false // Check for user interaction function checkForInteraction() { hasInteraction = true } // Listen for a couple of events to check interaction const events = ['keydown', 'mousemove', 'touchstart', 'click'] events.forEach(event => { form.addEventListener(event, checkForInteraction, { once: true }) }) // Check for spam via all the available methods form.containsSpam = function () { const fillTime = Date.now() - startTime const isTooFast = fillTime < honeypotDuration const honeypotInput = form.querySelector(`[name="${honeypotField}"]`) const hasHoneypotValue = honeypotInput?.value?.trim() const noInteraction = !hasInteraction // Clean up event listeners after use events.forEach(event => form.removeEventListener(event, checkForInteraction) ) return isTooFast || !!hasHoneypotValue || noInteraction } } Better Forms I’m putting together a solution that will make HTML form elements much easier to use. It includes the standard elements you know, but with easy-to-use syntax and are highly accessible. Stuff like: Form Honeypot Text input Textarea Radios Checkboxes Switches Button groups etc. Here’s a landing page if you’re interested in this. I’d be happy to share something with you as soon as I can. Wrapping Up There are a couple of tricks that make honeypots work today. The best way, likely, is to trick spam bots into thinking your honeypot is a real field. If you don’t want to trick bots, you can use other bot-detection mechanisms that we’ve defined above. Hope you have learned a lot and manage to get something useful from this! Building a Honeypot Field That Works originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Chris’ Corner: Stage 2
by: Chris Coyier Mon, 20 Oct 2025 15:47:26 +0000 We get all excited when we get new CSS features. Well, I do anyway. It’s amazing, because sometimes it unlocks something we’ve literally never been able to do before. It’s wonderful when an artist finishes a new painting, and something to be celebrated. But this is more akin to a new color dropping, making possible a sight never before seen. Just as exciting, to me, is the evolution of new features. Both from the perspective of the feature literally gaining new abilities, or us users figuring out how to use it more effectively. We point to CSS grid as an incredibly important feature addition to CSS in the last decade. And it was! … but then later we got subgrid. … but then later gap was improved to work across layouts. … but then later we got safe alignment. And this journey isn’t over! Masonry is actively being hashed out, and has gone back-and-forth whether it will be part of grid itself. (It looks like it will be a new display type but share properties with other layout types.) Plus another one I’m excited about: styling the gap. Just as gap itself is just for the spacing between grid items, now row-rule and column-rule can draw lines in those gaps. Actual elements don’t need to be there, so we don’t need to put “fake” elements there just to draw borders and whatnot. Interestingly, column-rule isn’t even new as it was used to draw lines between multi-column layouts already, now it just does double-duty which is kinda awesome. Chrome Developer Blog: A new way to style gaps in CSS Microsoft Edge Blog: Minding the gaps: A new way to draw separators in CSS If we’re entering an era where CSS innovation slows down a little and we catch our breath with Stage 2 sorta features and figuring out what to do with these new features, I’m cool with that. Sorta like… We’ve got corner-shape, so what can we actually do with it? We’ve got @layer now, how do we actually get it into a project? We’ve got View Transitions now, maybe we actually need to scope them for variety of real-world situations.
-
I Used This Open Source Library to Integrate OpenAI, Claude, Gemini to Websites Without API Keys
by: Bhuwan Mishra Mon, 20 Oct 2025 03:31:08 GMT When I started experimenting with AI integrations, I wanted to create a chat assistant on my website, something that could talk like GPT-4, reason like Claude, and even joke like Grok. But OpenAI, Anthropic, Google, and xAI all require API keys. That means I needed to set up an account for each of the platforms and upgrade to one of their paid plans before I could start coding. Why? Because most of these LLM providers require a paid plan for API access. Not to mention, I would need to cover API usage billing for each LLM platform. What if I could tell you there's an easier approach to start integrating AI within your websites and mobile applications, even without requiring API keys at all? Sounds exciting? Let me share how I did exactly that. Integrate AI with Puter.js Thanks to Puter.js, an open source JavaScript library that lets you use cloud features like AI models, storage, databases, user auth, all from the client side. No servers, no API keys, no backend setup needed here. What else can you ask for as a developer? Puter.js is built around Puter’s decentralized cloud platform, which handles all the stuff like key management, routing, usage limits, and billing. Everything’s abstracted away so cleanly that, from your side, it feels like authentication, AI, and LLM just live in your browser. Enough talking, let’s see how you can add GPT-5 integration within your web application in less than 10 lines. <html> <body> <script src="https://js.puter.com/v2/"></script> <script> puter.ai.chat(`What is puter js?`, { model: 'gpt-5-nano', }).then(puter.print); </script> </body> </html>Yes, that’s it. Unbelievable, right? Let's save the HTML code into an index.html file place this a new, empty directory. Open a terminal and switch to the directory where index.html file is located and serve it on localhost with the Python command: python -m http.serverThen open http://localhost:8000 in your web browser. Click on Puter.js “Continue” button when presented. Integrate ChatGPT with Puter JS🚧 It would take some time before you see a response from ChatGPT. Till then, you'll see a blank page. ChatGPT Nano doesn't know Puter.js yet but it will, soonYou can explore a lot of examples and get an idea of what Puter.js does for you on its playground. Let’s modify the code to make it more interesting this time. It would take a user query and return streaming responses from three different LLMs so that users can decide which among the three provides the best result. 
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>AI Model Comparison</title> <script src="https://cdn.twind.style"></script> <script src="https://js.puter.com/v2/"></script> </head> <body class="bg-gray-900 min-h-screen p-6"> <div class="max-w-7xl mx-auto"> <h1 class="text-3xl font-bold text-white mb-6 text-center">AI Model Comparison</h1> <div class="mb-6"> <label for="queryInput" class="block text-white mb-2 font-medium">Enter your query:</label> <div class="flex gap-2"> <input type="text" id="queryInput" class="flex-1 px-4 py-3 rounded-lg bg-gray-800 text-white border border-gray-700 focus:outline-none focus:border-blue-500" placeholder="Write a detailed essay on the impact of artificial intelligence on society" value="Write a detailed essay on the impact of artificial intelligence on society" /> <button id="submitBtn" class="px-6 py-3 bg-blue-600 hover:bg-blue-700 text-white rounded-lg font-medium transition-colors" > Generate </button> </div> </div> <div class="grid grid-cols-1 md:grid-cols-3 gap-4"> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-blue-400 mb-3">Claude Opus 4</h2> <div id="output1" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-green-400 mb-3">Claude Sonnet 4</h2> <div id="output2" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-purple-400 mb-3">Gemini 2.0 Pro</h2> <div id="output3" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> </div> </div> <script> const queryInput = document.getElementById('queryInput'); const submitBtn = document.getElementById('submitBtn'); const output1 = document.getElementById('output1'); const output2 = document.getElementById('output2'); const output3 = document.getElementById('output3'); async function generateResponse(query, model, outputElement) { outputElement.textContent = 'Loading...'; try { const response = await puter.ai.chat(query, { model: model, stream: true }); outputElement.textContent = ''; for await (const part of response) { if (part?.text) { outputElement.textContent += part.text; outputElement.scrollTop = outputElement.scrollHeight; } } } catch (error) { outputElement.textContent = `Error: ${error.message}`; } } async function handleSubmit() { const query = queryInput.value.trim(); if (!query) { alert('Please enter a query'); return; } submitBtn.disabled = true; submitBtn.textContent = 'Generating...'; submitBtn.classList.add('opacity-50', 'cursor-not-allowed'); await Promise.all([ generateResponse(query, 'claude-opus-4', output1), generateResponse(query, 'claude-sonnet-4', output2), generateResponse(query, 'google/gemini-2.0-flash-lite-001', output3) ]); submitBtn.disabled = false; submitBtn.textContent = 'Generate'; submitBtn.classList.remove('opacity-50', 'cursor-not-allowed'); } submitBtn.addEventListener('click', handleSubmit); queryInput.addEventListener('keypress', (e) => { if (e.key === 'Enter') { handleSubmit(); } }); </script> </body> </html> Save the above file in the index.html file as we did in the previos example and then run the server with Python. This is what it looks like now on localhost. 
Comparing output from different LLM provider with Puter.jsAnd here is a sample response from all three models on the query "What is It's FOSS". Looks like It's FOSS is well trusted by humans as well as AI 😉 My Final Take on Puter.js and LLMs IntegrationThat’s not bad! Without requiring any API keys, you can do this crazy stuff. Puter.js utilizes the “User pays model” which means it’s completely free for developers, and your application user will spend credits from their Puter’s account for the cloud features like the storage and LLMs they will be using. I reached out to them to understand their pricing structure, but at this moment, the team behind it is still working out to come up with a pricing plan. This new Puter.js library is superbly underrated. I’m still amazed by how easy it has made LLM integration. Besides it, you can use Puter.js SDK for authentication, storage like Firebase. Do check out this wonderful open source JavaScript library and explore what else you can build with it. Puter.js - Free, Serverless, Cloud and AI in One Simple LibraryPuter.js provides auth, cloud storage, database, GPT-4o, o1, o3-mini, Claude 3.7 Sonnet, DALL-E 3, and more, all through a single JavaScript library. No backend. No servers. No configuration.Puter
-
LHB Linux Digest #25.31: syslog guide, snippet manager, screen command and more
by: Abhishek Prakash Fri, 17 Oct 2025 18:31:53 +0530 Welcome back to another round of Linux magic and command-line sorcery. Weirdly scary opening line, right? That's because I am already in the Halloween spirit 🎃 And I'll take this opportunity to crack a dad joke: Q: Why do Linux sysadmins confuse Halloween with Christmas? A: Because 31 Oct equals 25 Dec. Hint: Think octal. Think decimal. Jokes aside, we are working on a few new series and courses. The CNCF series should be published next week, followed by either the networking or the Kubernetes microcourse. Stay awesome 😄 This post is for subscribers only.
-
How to Fingerprint Websites With WhatWeb - A Practical, Hands-On Guide
by: Hangga Aji Sayekti Fri, 17 Oct 2025 17:59:33 +0530 This short guide will help you get started with WhatWeb, a simple tool for fingerprinting websites. It's written for beginners who want clear steps, short explanations, and practical tips. By the end, you'll know how to run WhatWeb with confidence.

What is WhatWeb?

Imagine you're curious about what powers a website: the CMS, web server, frameworks, analytics tools, or plugins behind it. WhatWeb can tell you all that right from the Linux command line. It's like getting a quick peek under the hood of any site. In this guide, we'll skip the long theory and go straight to the fun part. You'll run the commands, see the results, and learn how to understand them in real situations.

Legal and ethical note

Before you start, here's a quick reminder. Only scan websites that you own or have clear permission to test. Running scans on random sites can break the law and go against ethical hacking practices. If you just want to practice, use safe test targets that are made for learning. For the examples in this guide, we will use http://www.vulnweb.com/ and some of its subdomains as safe test targets. These sites are intentionally provided for learning and experimentation, so they are good places to try WhatWeb without worrying about legal or ethical issues.

Install WhatWeb

Kali Linux often includes WhatWeb. Check the version with:

whatweb --version

If it is not present, install it with:

sudo apt update
sudo apt install whatweb

Quick basic scan

Run a fast scan with this command. Replace the URL with your target.

whatweb http://testphp.vulnweb.com

This prints a one-line summary for the target. You will see the status code, server, CMS, and other hints:

Beyond the basic scan: getting more out of WhatWeb

The above was just the very basic use of WhatWeb. Let's see what else we can do with it.

1. Verbose output

whatweb -v http://testphp.vulnweb.com

This shows more details and the patterns WhatWeb matched.

WhatWeb report for http://testphp.vulnweb.com
Status  : 200 OK
Title   : Home of Acunetix Art
IP      : 44.228.249.3
Country : UNITED STATES, US

Summary : ActiveX[D27CDB6E-AE6D-11cf-96B8-444553540000], Adobe-Flash, Email[wvs@acunetix.com], HTTPServer[nginx/1.19.0], nginx[1.19.0], Object[http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0][clsid:D27CDB6E-AE6D-11cf-96B8-444553540000], PHP[5.6.40-38+ubuntu20.04.1+deb.sury.org+1], Script[text/JavaScript], X-Powered-By[PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1]

Detected Plugins:
[ ActiveX ]
    ActiveX is a framework based on Microsoft's Component Object Model (COM) and Object Linking and Embedding (OLE) technologies. ActiveX components officially operate only with Microsoft's Internet Explorer web browser and the Microsoft Windows operating system.
    - More info: http://en.wikipedia.org/wiki/ActiveX
    Module : D27CDB6E-AE6D-11cf-96B8-444553540000

[ Adobe-Flash ]
    This plugin identifies instances of embedded adobe flash files.
    Google Dorks: (1)
    Website : https://get.adobe.com/flashplayer/

[ Email ]
    Extract email addresses. Find valid email address and syntactically invalid email addresses from mailto: link tags. We match syntactically invalid links containing mailto: to catch anti-spam email addresses, eg. bob at gmail.com. This uses the simplified email regular expression from http://www.regular-expressions.info/email.html for valid email address matching.
    String : wvs@acunetix.com
    String : wvs@acunetix.com

[ HTTPServer ]
    HTTP server header string. This plugin also attempts to identify the operating system from the server header.
    String : nginx/1.19.0 (from server string)

[ Object ]
    HTML object tag. This can be audio, video, Flash, ActiveX, Python, etc. More info: http://www.w3schools.com/tags/tag_object.asp
    Module : clsid:D27CDB6E-AE6D-11cf-96B8-444553540000 (from classid)
    String : http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0

[ PHP ]
    PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. This plugin identifies PHP errors, modules and versions and extracts the local file path and username if present.
    Version : 5.6.40-38+ubuntu20.04.1+deb.sury.org+1
    Google Dorks: (2)
    Website : http://www.php.net/

[ Script ]
    This plugin detects instances of script HTML elements and returns the script language/type.
    String : text/JavaScript

[ X-Powered-By ]
    X-Powered-By HTTP header
    String : PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1 (from x-powered-by string)

[ nginx ]
    Nginx (Engine-X) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.
    Version : 1.19.0
    Website : http://nginx.net/

HTTP Headers:
    HTTP/1.1 200 OK
    Server: nginx/1.19.0
    Date: Mon, 13 Oct 2025 07:29:42 GMT
    Content-Type: text/html; charset=UTF-8
    Transfer-Encoding: chunked
    Connection: close
    X-Powered-By: PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1
    Content-Encoding: gzip

2. Aggressive scan (more probes)

whatweb -a 3 http://testphp.vulnweb.com

Use aggressive mode when you want more fingerprints. Aggressive mode is slower and noisier. Use it only with permission.

3. Scan a list of targets

Create a file named targets.txt with one URL per line.

nano targets.txt

When nano opens, paste the following lines exactly (copy and right-click to paste in many terminals):

http://testphp.vulnweb.com/
http://testasp.vulnweb.com/
http://testaspnet.vulnweb.com/
http://rest.vulnweb.com/
http://testhtml5.vulnweb.com/

Save and exit nano by pressing Ctrl+X. Confirm the file was created:

cat targets.txt

You should see the five URLs listed. Then run:

whatweb -i targets.txt --log-json results.json

This saves the results in JSON format in results.json. On screen, WhatWeb prints a per-host summary while it runs. When finished, open the JSON file to inspect it:

less results.json

If you want a pretty view and you have jq installed, run:

jq '.' results.json | less -R

4. Save a human-readable log

whatweb -v --log-verbose whatweb.log http://testphp.vulnweb.com

Let's see the log:

cat whatweb.log

5. Use a proxy (for example Burp Suite)

whatweb --proxy 127.0.0.1:8080 http://testphp.vulnweb.com

6. Custom user agent

If a site blocks you, slow down the scan or change the user agent.

whatweb --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" http://testphp.vulnweb.com

7. Limit scan to specific ports

WhatWeb accepts a URL with a port, for example:

whatweb http://example.com:8080

Interpreting the output

A typical WhatWeb line looks like this:

http://testphp.vulnweb.com [200 OK] Apache[2.4.7], PHP[5.5.9], HTML5

200 OK - the HTTP status code. It means the request succeeded.
Apache[2.4.7] - the web server software and version.
PHP[5.5.9] - the server-side language and version.
HTML5 - content hints.

If you see a CMS such as WordPress, you may also see plugins or themes. WhatWeb reports probable matches. It is not a guarantee.

Combine WhatWeb with other tools

WhatWeb is best for reconnaissance.
Use it with these tools for a fuller picture:

nmap - for network and port scans
dirsearch or gobuster - for directory and file discovery
wpscan - for deeper WordPress checks

A simple workflow (a combined sketch follows after the conclusion):

Run WhatWeb to identify technologies.
Use nmap to find open ports and services.
Use dirsearch to find hidden pages or admin panels.
If the site is WordPress, run wpscan for plugin vulnerabilities.

Conclusion

WhatWeb is a lightweight and fast tool for fingerprinting websites. It helps you quickly understand what runs a site and gives you leads for deeper testing. Use the copy-paste commands here to get started, and combine WhatWeb with other tools for a full reconnaissance workflow. Happy pen-testing 😀
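As promised, here is a minimal shell sketch of that workflow against a single test target. It assumes nmap, dirsearch, and wpscan are installed alongside WhatWeb; the target and output file names are only placeholders, so treat it as a starting point rather than a finished recon script.

#!/usr/bin/env bash
# recon-sketch.sh - hypothetical example chaining the tools from the workflow above.
# Target and file names are examples only; adjust them for your own lab.
set -euo pipefail

TARGET_URL="http://testphp.vulnweb.com"
TARGET_HOST="testphp.vulnweb.com"

# 1. Fingerprint technologies with WhatWeb and keep a JSON record
whatweb -v --log-json whatweb-results.json "$TARGET_URL"

# 2. Find open ports and service versions with nmap
nmap -sV -oN nmap-results.txt "$TARGET_HOST"

# 3. Look for hidden directories and files with dirsearch
dirsearch -u "$TARGET_URL" -o dirsearch-results.txt

# 4. Only if WhatWeb reported WordPress: enumerate vulnerable plugins with wpscan
# wpscan --url "$TARGET_URL" --enumerate vp

As with everything else in this guide, run it only against targets you own or have explicit permission to test.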