  1. by: Mat Marquis Wed, 22 Oct 2025 19:08:23 +0000 Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript expressions. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course. Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at JavaScript for Everyone to teach you JavaScript. What we have here is a lesson from the JavaScript for Everyone module on lexical grammar and analysis — the process of parsing the characters that make up a script file and converting it into a sequence of discrete “input elements” (lexical tokens, line ending characters, comments, and whitespace), and how the JavaScript engine interprets those input elements. An expression is code that, when evaluated, resolves to a value. 2 + 2 is a timeless example. 2 + 2 // result: 4 As mental models go, you could do worse than “anywhere in a script that a value is expected you can use an expression, no matter how simple or complex that expression may be:” function numberChecker( checkedNumber ) { if( typeof checkedNumber === "number" ) { console.log( "Yep, that's a number." ); } } numberChecker( 3 ); // result: Yep, that's a number. numberChecker( 10 + 20 ); // result: Yep, that's a number. numberChecker( Math.floor( Math.random() * 20 ) / Math.floor( Math.random() * 10 ) ); // result: Yep, that's a number. Granted, JavaScript doesn’t tend to leave much room for absolute statements. The exceptions are rare, but it isn’t the case absolutely, positively, one hundred percent of the time: console.log( -2**1 ); // result: Uncaught SyntaxError: Unary operator used immediately before exponentiation expression. Parenthesis must be used to disambiguate operator precedence Still, I’m willing to throw myself upon the sword of “um, actually” on this one. That way of looking at the relationship between expressions and their resulting values is heart-and-soul of the language stuff, and it’ll get you far. Primary Expressions There’s sort of a plot twist, here: while the above example reads to our human eyes as an example of a number, then an expression, then a complex expression, it turns out to be expressions all the way down. 3 is itself an expression — a primary expression. In the same way the first rule of Tautology Club is Tautology Club’s first rule, the number literal 3 is itself an expression that resolves in a very predictable value (psst, it’s three). console.log( 3 ); // result: 3 Alright, so maybe that one didn’t necessarily need the illustrative snippet of code, but the point is: the additive expression 2 + 2 is, in fact, the primary expression 2 plus the primary expression 2. Granted, the “it is what it is” nature of a primary expression is such that you won’t have much (any?) occasion to point at your display and declare “that is a primary expression,” but it does afford a little insight into how JavaScript “thinks” about values: a variable is also a primary expression, and you can mentally substitute an expression for the value it results in — in this case, the value that variable references. 
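Here's a quick illustrative snippet (mine, not the course's) showing that substitution in action: the identifier is itself a primary expression that stands in for the value it references.
const theNumber = 3;
// `theNumber` is a primary expression; anywhere a value is expected,
// it resolves to the value it references.
console.log( theNumber ); // result: 3
console.log( theNumber + 2 ); // result: 5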
That’s not the only purpose of an expression (which we’ll get into in a bit) but it’s a useful shorthand for understanding expressions at their most basic level. There’s a specific kind of primary expression that you’ll end up using a lot: the grouping operator. You may remember it from the math classes I just barely passed in high school: console.log( 2 + 2 * 3 ); // result: 8 console.log( ( 2 + 2 ) * 3 ); // result: 12 The grouping operator (singular, I know, it kills me too) is a matched pair of parentheses used to evaluate a portion of an expression as a single unit. You can use it to override the mathematical order of operations, as seen above, but that’s not likely to be your most common use case—more often than not you’ll use grouping operators to more finely control conditional logic and improve readability: const minValue = 0; const maxValue = 100; const theValue = 50; if( ( theValue > minValue ) && ( theValue < maxValue ) ) { // If ( the value of `theValue` is greater than that of `minValue` ) AND less than `maxValue`): console.log( "Within range." ); } // result: Within range. Personally, I make a point of almost never excusing my dear Aunt Sally. Even when I’m working with math specifically, I frequently use parentheses just for the sake of being able to scan things quickly: console.log( 2 + ( 2 * 3 ) ); // result: 8 This use is relatively rare, but the grouping operator can also be used to remove ambiguity in situations where you might need to specify that a given syntax is intended to be interpreted as an expression. One of them is, well, right there in your developer console. The syntax used to initialize an object — a matched pair of curly braces — is the same as the syntax used to group statements into a block statement. Within the global scope, a pair of curly braces will be interpreted as a block statement containing a syntax that makes no sense given that context, not an object literal. That’s why punching an object literal into your developer console will result in an error: { "theValue" : true } // result: Uncaught SyntaxError: unexpected token: ':' It’s very unlikely you’ll ever run into this specific issue in your day-to-day JavaScript work, seeing as there’s usually a clear division between contexts where an expression or a statement is expected: { const theObject = { "theValue" : true }; } You won’t often be creating an object literal without intending to do something with it, which means it will always be in a context where an expression is expected. It is the reason you’ll see standalone object literals wrapped in a grouping operator throughout this course — a syntax that explicitly says “expect an expression here”: ({ "value" : true }); However, that’s not to say you’ll never need a grouping operator for disambiguation purposes. Again, not to get ahead of ourselves, but an Immediately Invoked Function Expression (IIFE), an anonymous function expression used to manage scope, relies on a grouping operator to ensure the function keyword is treated as a function expression rather than a declaration: (function(){ // ... })(); Expressions With Side Effects Expressions always give us back a value, in no uncertain terms. There are also expressions with side effects — expressions that result in a value and do something. For example, assigning a value to an identifier is an assignment expression.
If you paste this snippet into your developer console, you’ll notice it prints 3: theIdentifier = 3; // result: 3 The resulting value of the expression theIdentifier = 3 is the primary expression 3; classic expression stuff. That’s not what’s useful about this expression, though — the useful part is that this expression makes JavaScript aware of theIdentifier and its value (in a way we probably shouldn’t, but that’s a topic for another lesson). That variable binding is an expression and it results in a value, but that’s not really why we’re using it. Likewise, a function call is an expression; it gets evaluated and results in a value: function theFunction() { return 3; }; console.log( theFunction() + theFunction() ); // result: 6 We’ll get into it more once we’re in the weeds on functions themselves, but the result of calling a function that returns an expression is — you guessed it — functionally identical to working with the value that results from that expression. So far as JavaScript is concerned, a call to theFunction effectively is the simple expression 3, with the side effect of executing any code contained within the function body: function theFunction() { console.log( "Called." ); return 3; }; console.log( theFunction() + theFunction() ); /* Result: Called. Called. 6 */ Here theFunction is evaluated twice, each time calling console.log then resulting in the simple expression 3 . Those resulting values are added together, and the result of that arithmetic expression is logged as 6. Granted, a function call may not always result in an explicit value. I haven’t been including them in our interactive snippets here, but that’s the reason you’ll see two things in the output when you call console.log in your developer console: the logged string and undefined. JavaScript’s built-in console.log method doesn’t return a value. When the function is called it performs its work — the logging itself. Then, because it doesn’t have a meaningful value to return, it results in undefined. There’s nothing to do with that value, but your developer console informs you of the result of that evaluation before discarding it. Comma Operator Speaking of throwing results away, this brings us to a uniquely weird syntax: the comma operator. A comma operator evaluates its left operand, discards the resulting value, then evaluates and results in the value of the right operand. Based only on what you’ve learned so far in this lesson, if your first reaction is “I don’t know why I’d want an expression to do that,” odds are you’re reading it right. Let’s look at it in the context of an arithmetic expression: console.log( ( 1, 5 + 20 ) ); // result: 25 The primary expression 1 is evaluated and the resulting value is discarded, then the additive expression 5 + 20 is evaluated, and that’s resulting value. Five plus twenty, with a few extra characters thrown in for style points and a 1 cast into the void, perhaps intended to serve as a threat to the other numbers. And hey, notice the extra pair of parentheses there? Another example of a grouping operator used for disambiguation purposes. Without it, that comma would be interpreted as separating arguments to the console.log method — 1 and 5 + 20 — both of which would be logged to the console: console.log( 1, 5 + 20 ); // result: 1 25 Now, including a value in an expression in a way where it could never be used for anything would be a pretty wild choice, granted. 
That’s why I bring up the comma operator in the context of expressions with side effects: both sides of the , operator are evaluated, even if the immediately resulting value is discarded. Take a look at this validateResult function, which does something fairly common, mechanically speaking; depending on the value passed to it as an argument, it executes one of two functions, and ultimately returns one of two values. For the sake of simplicity, we’re just checking to see if the value being evaluated is strictly true — if so, call the whenValid function and return the string value "Nice!". If not, call the whenInvalid function and return the string "Sorry, no good.": function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; if( theValue === true ) { whenValid(); return "Nice!"; } else { whenInvalid(); return "Sorry, no good."; } }; const resultMessage = validateResult( true ); // result: Valid result. console.log( resultMessage ); // result: "Nice!" Nothing wrong with this. The whenValid / whenInvalid functions are called when the validateResult function is called, and the resultMessage constant is initialized with the returned string value. We’re touching on a lot of future lessons here already, so don’t sweat the details too much. Some room for optimizations, of course — there almost always is. I’m not a fan of having multiple instances of return, which in a sufficiently large and potentially-tangled codebase can lead to increased “wait, where is that coming from” frustrations. Let’s sort that out first: function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; if( theValue === true ) { whenValid(); } else { whenInvalid(); } return theValue === true ? "Nice!" : "Sorry, no good."; }; const resultMessage = validateResult( true ); // result: Valid result. resultMessage; // result: "Nice!" That’s a little better, but we’re still repeating ourselves with two separate checks for theValue. If our conditional logic were to be changed someday, it wouldn’t be ideal that we have to do it in two places. The first — the if/else — exists only to call one function or the other. We now know function calls to be expressions, and what we want from those expressions are their side effects, not their resulting values (which, absent a explicit return value, would just be undefined anyway). Because we need them evaluated and don’t care if their resulting values are discarded, we can use comma operators (and grouping operators) to sit them alongside the two simple expressions — the strings that make up the result messaging — that we do want values from: function validateResult( theValue ) { function whenValid() { console.log( "Valid result." ); }; function whenInvalid() { console.warn( "Invalid result." ); }; return theValue === true ? ( whenValid(), "Nice!" ) : ( whenInvalid(), "Sorry, no good." ); }; const resultMessage = validateResult( true ); // result: Valid result. resultMessage; // result: "Nice!" Lean and mean thanks to clever use of comma operators. Granted, there’s a case to be made that this is a little too clever, in that it could make this code a little more difficult to understand at a glance for anyone that might have to maintain this code after you (or, if you have a memory like mine, for your near-future self). 
The siren song of “I could do it with fewer characters” has driven more than one JavaScript developer toward the rocks of, uh, slightly more difficult maintainability. I’m in no position to talk, though. I chewed through my ropes years ago. Between this lesson on expressions and the lesson on statements that follows it, well, that would be the whole ballgame — the entirety of JavaScript summed up, in a manner of speaking — were it not for a not-so-secret third thing. Did you know that most declarations are neither statement nor expression, despite seeming very much like statements? Variable declarations performed with let or const, function declarations, class declarations — none of these are statements: if( true ) let theVariable; // Result: Uncaught SyntaxError: lexical declarations can't appear in single-statement context if is a statement that expects a statement, but what it encounters here is one of the non-statement declarations, resulting in a syntax error. Granted, you might never run into this specific example at all if you — like me — are the sort to always follow an if with a block statement, even if you’re only expecting a single statement. I did say “one of the non-statement declarations,” though. There is, in fact, a single exception to this rule — a variable declaration using var is a statement: if( true ) var theVariable; That’s just a hint at the kind of weirdness you’ll find buried deep in the JavaScript machinery. 5 is an expression, sure. 0.1 * 0.1 is 0.010000000000000002, yes, absolutely. Numeric values used to access elements in an array are implicitly coerced to strings? Well, sure — they’re objects, and their indexes are their keys, and keys are strings (or Symbols). What happens if you use call() to give this a string literal value? There’s only one way to find out — two ways to find out, if you factor in strict mode. That’s where JavaScript for Everyone is designed to take you: inside JavaScript’s head. My goal is to teach you the deep magic — the how and the why of JavaScript. If you’re new to the language, you’ll walk away from this course with a foundational understanding of the language worth hundreds of hours of trial-and-error. If you’re a junior JavaScript developer, you’ll finish this course with a depth of knowledge to rival any senior. I hope to see you there. JavaScript for Everyone is now available and the launch price runs until midnight, October 28. Save £60 off the full price of £249 (~$289) and get it for £189 (~$220)! Get the Course An Introduction to JavaScript Expressions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  2. by: Pulkit Chandak Wed, 22 Oct 2025 07:13:10 GMT Arduino has been the cornerstone of embedded electronics projects for a while now. Be it DIY remote-controlled vehicles, binary clocks, power laces, or, as is relevant to the month of publishing, flamethrowing Jack-O'-Lanterns! The versatility and affordability of the board have been unparalleled. But now that Qualcomm has acquired Arduino, projecting more AI-forward features with more powerful hardware, there might be some changes around the corner. Perhaps I am reading too much between the lines, but not all of us have favorable views about Big Tech and corporate greed. We thought it might be a good time to look at some alternatives. Since Arduino has a lot of different models with different features, we will not draw a comparison between Arduino and other boards, but just highlight the unique features these alternative boards have.
1. Raspberry Pi Pico
Raspberry Pi needs no introduction, it being the one company besides Arduino that has always been the favorite of tinkerers. While Raspberry Pi is known for its full-fledged single-board computers, the Pico is a development board for programming dedicated tasks like the Arduino boards. There are two releases of the Pico at the time of writing this article, 1 and 2, with the major upgrade being the processor. There are certain suffixes which denote model features, "W" denoting wireless capabilities, "H" denoting pre-soldered headers. Here, I describe the cutting-edge model, the Pico 2 W with Headers.
Processors: Dual Cortex-M33 (ARM) up to 133 MHz and optional Hazard3 processors (RISC-V)
Memory: 520 KB on-chip SRAM
Input-Output: 26 GPIO pins
Connectivity: Optionally 2.4 GHz Wi-Fi and Bluetooth 5.2 on the W model
Power: Micro-USB
Programming Software or Language: MicroPython or C/C++
Price: $8
Extra Features: Temperature sensor
The greatest advantage of Raspberry Pi is the huge userbase, second probably only to Arduino. Besides that, the GPIO pins make projects easier to construct, and the optional RISC-V processors give it an open-source experimental edge that many long for.
2. ESP32
ESP32 is a SoC that has soared in popularity in the past decade, and for all the right reasons. It comes in very cheap, screaming "hobbyist," and is committed to good documentation and an open SDK (software development kit). It came as a successor to the already very successful and still relevant ESP8266 SoC. The categorization is a little tricky to get a hang of because of the sheer number of boards available. The original ESP32 SoC boards come with dual-core Xtensa LX6 processors that go up to 240 MHz, and they come with Wi-Fi + Bluetooth Classic/LE built-in. The ESP32-S series are a little enhanced, with more GPIO pins for connectivity. The ESP32-C series transitioned to RISC-V chips, and finally the ESP32-H series are designed for ultra low-power IoT applications. If the board name has WROOM, it belongs to the original basic family, but the ones with WROVER indicate modules with PSRAM and more memory in general. You can find all the "DevKits" here.
Getting over the whole naming culture, I will directly describe one board here that might fulfill your Arduino-alternative needs, the ESP32-DevKitC-VE:
Processors: Dual-core 32-bit LX6 up to 240 MHz
Memory: 8 MB
Input-Output: 34 programmable GPIOs
Connectivity: 802.11 Wi-Fi, Bluetooth 4.2 with BLE
Power: Micro-USB
Programming Software or Language: Arduino IDE, PlatformIO IDE (VS Code), LUA, MicroPython, Espressif IDF (IoT Development Framework), JavaScript
Price: $11
Extra Features: Breadboard friendly, rich set of peripheral interfaces
I encourage you to do your own research based on your needs and choose a board, as the support and hardware are rock solid, but the sheer number of options can be a little tricky to figure out.
3. Adafruit Feather
Adafruit Feather isn't a single board, but a category of hardware boards that each come with all sorts of different features and processors. The idea is getting a "feather", which is the board, and then getting "wings", which are hats/shields, basically extending the features and abilities of the board, and there are a huge number of them. This extensible versatility is the most attractive feature of the boards, but also the reason why I cannot describe one board that best suits the needs of any user. I can, however, tell you what options they provide.
All Feathers
Can be programmed with the Arduino IDE
Come with Micro-USB or USB-C
Are 0.9" long and breadboard-compatible
Can be run with either USB power or a LiPo battery
Processors
The boards are available with several different processors, such as:
Atmel ATmega32u4 and ATmega328P - 8-bit AVR
Atmel ATSAMD21 - 32-bit ARM Cortex M0+
Atmel ATSAMD51 - 32-bit ARM Cortex M4
Broadcom/Cypress WICED - STM32 with WiFi
Espressif ESP8266 and ESP32 - Tensilica with WiFi/BT
Freescale MK20 - ARM Cortex M4, as the Teensy 3.2 Feather Adapter
Nordic nRF52832 and nRF52840 - ARM Cortex & Bluetooth LE
Packet radio modules featuring SemTech SX1231
LoRa radio modules featuring SemTech SX127x
A good model to look into for an Arduino alternative is the Adafruit ESP32 Feather V2.
Connectivity and wings
The "feathers" have different categories based on their connectivity. The categories include:
Basic Feathers
Wi-Fi Feathers
Bluetooth Feathers
Cellular Feathers
LoRa and Radio Feathers
This doesn't mean that these connectivity features are mutually exclusive; there are several boards which have more than one of these connectivity options. The Wings add all the functionality to the boards, and the number of options is immense. I cannot possibly list them here.
4. Seeeduino
As Arduino alternatives go, this board seems to be one of the most worthy of holding that title. It looks like an Arduino, works with the software that Arduino is compatible with, and even supports the shields made for the UNO-R3. Here is the description of the most recent model at the time of writing this, Seeeduino V4.3:
Processors: ATmega328
Memory: 2 KB RAM, 1 KB EEPROM and 32 KB Flash Memory
Input-Output: 14 digital IO pins, 6 analog inputs
Power: Micro-USB, DC Input Jack
Programming Software or Language: Arduino IDE
Price: $7.60
If you need a no-brainer Arduino alternative that delivers what it does with stability and efficiency, this should be your go-to choice.
5. STM32 Nucleo Boards
STM32 offers a very, very wide range of development boards, among which the Nucleo boards seem like the best alternatives for Arduino. They come in three series as well: Nucleo-32, Nucleo-64 and Nucleo-144, the numbers at the end of which denote the number of connectivity pins that the board offers.
Every single series has a number of models within, again. Here, I will describe the one most appropriate as an Arduino alternative: the STM32 Nucleo-F103RB.
Microcontroller: STM32
Input-Output: 64 IO pins; Arduino shield-compatible
Connectivity: Arduino Uno V3 expansion connector
Power: Micro-USB
Programming Software or Language: IAR Embedded Workbench, MDK-ARM, STM32CubeIDE, etc.
Price: $10.81
Extra Features: 1 programmable LED, 1 programmable button, 1 reset button
Optional Features: Second user LED, cryptography, USB-C, etc.
STM32 provides great hardware abstraction, ease of development, GUI-based initialization, good resources and more. If that is the kind of thing you need, then this should probably be your choice.
6. micro:bit
micro:bit boards are designed mostly for younger students and kids to learn programming, but offer some really interesting features that can help anyone make a project without buying many extra parts. In fact, this is one of the ideal tools for introducing STEM education to young children. Here are the details of the most recent version at the time of writing, micro:bit v2:
Processors: Nordic Semiconductor nRF52833
Memory: 128 KB RAM, 512 KB Flash Memory
Input-Output: 25 pins (4 dedicated GPIO, PWM, I2C, SPI)
Connectivity: Bluetooth 5.0, radio
Power: Micro-USB
Programming Software or Language:
Price: $17.95 (other more expensive bundles with extra hardware are also available)
The extra built-in features of the board include:
2 built-in buttons that can be programmed in different ways
Touch sensor on the logo, temperature sensor
Built-in speaker and microphone
25 programmable LEDs
Accelerometer and compass
Reset and power button
If you want a plethora of extra hardware features capable of executing almost anything you might want, or a development board with extensive documentation for younger audiences, this should be your go-to choice. The company doesn't only make great boards, but also supports inclusive technological education for children of all abilities, and sustainability, which is admirable.
7. Particle Photon 2
The Particle Photon 2 is a board designed with ease of prototyping in mind. It enables IoT projects, giving broad customization options for both hardware and software. The Photon is also Feather-compatible (from Adafruit), giving the ability to attach Wings to extend the features.
Processors: ARM Cortex M33, up to 200 MHz
Memory: 3 MB RAM, 2 MB Flash Memory
Input-Output: 16 GPIO pins
Connectivity: Dual-band Wi-Fi and BLE 5.3
Power: Micro-USB
Programming Software or Language: VS Code plug-in
Price: $17.95
The Photon also has a built-in programmable LED. Particle also provides a Wi-Fi antenna add-on component if your project requires that. If building new product ideas is your need, this might just be what you're looking for.
8. Teensy Development Boards
The Teensy board series, as the name suggests, aims for a small board with a minimal footprint and a lot of power packed in at an affordable price. There have been several releases of the board, with the most recent one at the time of writing being the Teensy 4.1:
Processors: ARM Cortex-M7 at 600 MHz
Memory: 1024 KB RAM, 8 MB Flash Memory
Input-Output: 55 digital IO pins, 18 analog input pins
Power: Micro-USB
Programming Software or Language: Arduino IDE + Teensyduino, Visual Micro, PlatformIO, CircuitPython, command line
Price: $31.50
Extra Features: Onboard Micro SD card
If you need a stable base for your project that just works, this might be your choice.
It is worth noting that the Teensy boards have excellent audio libraries and offer a lot of processing power.
9. PineCone
PineCone is a development board from one of the foremost open source companies, Pine64. It provides amazing features and connectivity, making it ideal for a lot of tinkering purposes.
Processors: 32-bit RV32IMAFC RISC-V "SiFive E24 Core"
Memory: 2 MB Flash Memory
Input-Output: 18 GPIO pins
Connectivity: Wi-Fi, BLE 5.0, Radio
Power: USB-C
Programming Software or Language: Rust
Price: $3.99
Extra Features: 3 on-board LEDs
The RISC-V processor gives it the open-source hardware edge that many other boards lack. That makes it quite good for prototyping IoT devices and technologies that might be very new and untapped.
10. Sparkfun Development Boards
Sparkfun has a whole range of boards on their website, out of which the two most notable series are the "RedBoard" series and the "Thing" series. A big part of some of these boards is the Qwiic ecosystem, in which I2C sensors, actuators, shields, etc. can be connected to the board with the same 4-pin connector. Not only that, but you can daisy-chain the boards in one string, making it more convenient and less prone to errors. Here's a great article to learn about the Qwiic ecosystem.
Sparkfun RedBoard Qwiic
This is another board that is a perfect alternative to Arduino with extra features, because it was designed to be so. It is an Arduino-compatible board, supporting the software, shields, etc.
Microcontroller: ATmega328 with UNO's Optiboot Bootloader
Input-Output: 20 Digital IO pins, 1 Qwiic connector
Connectivity: 20 Digital I/O pins with 6 PWM pins
Power: Micro-USB, Pin input
Programming Software or Language: Arduino IDE
Price: $21.95
Sparkfun Thing Plus Series
The Sparkfun Thing Plus series comes with all sorts of different processors and connection abilities like RP2040, RP2350, nRF9160, ARM Cortex-M4, ESP32-based, STM32-based, etc. We've chosen to describe one of the most popular models here, the SparkFun Thing Plus - ESP32 WROOM (USB-C).
Microcontroller: ESP32-WROOM Module
Input-Output: 21 Multifunctional GPIO
Connectivity: Wi-Fi 2.4 GHz, dual integrated Bluetooth (Classic and BLE)
Power: USB-C, Qwiic connector
Programming Software or Language: Arduino IDE
Price: $33.73
Extra Features: RGB status LED, built-in SD card slot, Adafruit Feather compatible (you can attach the "Wings")
Sparkfun offers a lot of options, especially based on the form factor. They not only provide new unique features of their own, but also utilize the open technologies provided by other companies very well, as you can see.
Conclusion
The Arduino boards clearly have a lot of alternatives, varying in size, features and practicality. If Arduino being acquired leaves a bad taste in your mouth, or even if you just want to explore what the alternatives offer, I hope this article has been helpful for you. Please let us know in the comments if we missed your favorite one. Cheers!
  3. by: Hangga Aji Sayekti Wed, 22 Oct 2025 11:49:45 +0530 Ever wonder how security pros find those hidden entry points before the real testing even begins? It all starts with what we call reconnaissance—the art of gathering intelligence. Think of it like casing a building before a security audit; you need to know the doors, windows, and air vents first. In this digital age, one of the go-to tools for this initial legwork is TheHarvester. At its heart, TheHarvester is a Python script that doesn't try to do anything fancy. Its job is straightforward: to scour publicly available information and collect things like email addresses, subdomains, IPs, and URLs. It looks in all the usual places, from standard search engines to specialized databases like Shodan, which is essentially a search engine for internet-connected devices. We did something like this by fingerprinting with WhatWeb in an earlier tutorial. But TheHarvester is a different tool with more diverse information.
📋 To put this into practice, we're going to get our hands dirty with a live example. We'll use vulnweb.com as our test subject. This is a safe, legal website specifically set up by security folks to practice these very techniques, so it's the perfect place to learn without causing any harm. Let's dive in and see what we can uncover.
Step 1: Installing TheHarvester
If you're not using Kali Linux, you can easily install TheHarvester from its GitHub repository.
Option A: Using apt (Kali Linux / Debian/Ubuntu)
sudo apt update && sudo apt install theharvester
Option B: Installing from source (Latest Version)
git clone https://github.com/laramies/theHarvester.git
cd theHarvester
python3 -m pip install -r requirements.txt
You can verify the installation by checking the help menu:
theHarvester -h
Step 2: Understanding the basic syntax
The basic command structure of TheHarvester is straightforward:
theHarvester -d <domain> -l <limit> -b <data_source>
Let's break down the key options:
-d or --domain: The target domain name (e.g., vulnweb.com).
-l or --limit: The number of results to fetch from each data source (e.g., 100, 500). More results take longer.
-b or --source: The data source to use. You can specify a single source like google or use all to run all available sources.
-f or --filename: Save the results to an HTML and/or XML file.
Step 3: Case Study: Reconnaissance on vulnweb.com
Let's use TheHarvester to discover information about our target, vulnweb.com. We'll start with a broad search using the google and duckduckgo sources.
Run a basic scan
theHarvester -d vulnweb.com -l 100 -b google,duckduckgo
If you're seeing the error The following engines are not supported: {'google'}, don't worry—you're not alone. This is a frequent problem that stems from how TheHarvester interacts with public search engines, particularly Google. Let's break down why this happens and walk through the most effective solutions.
Why Does This Happen?
The short answer: Google has made its search engine increasingly difficult to scrape programmatically. Here are the core reasons:
Advanced Bot Detection: Google uses sophisticated algorithms to detect and block automated requests that don't come from a real web browser. TheHarvester's requests are easily identified as bots.
CAPTCHAs: When Google suspects automated activity, it presents a CAPTCHA challenge.
TheHarvester cannot solve these, so the request fails, and the module is disabled for the rest of your session.
Lack of an API Key (for some sources): Some data sources, like Shodan, require a free API key to be used effectively. Without one, the module will not work.
In the case of our example domain, vulnweb.com, this means we might miss some results that could be indexed on Google, but it's not the end of the world.
Solution: Use the "All" flag with realistic expectations
You can use -b all to run all modules. The unsupported ones will be gracefully skipped, and the supported ones will run.
theHarvester -d vulnweb.com -l 100 -b all
Now the output will look something like this:
Read proxies.yaml from /etc/theHarvester/proxies.yaml
[theHarvester 4.8.0 ASCII banner — Coded by Christian Martorella, Edge-Security Research, cmartorella@edge-security.com]
[*] Target: vulnweb.com
Read api-keys.yaml from /etc/theHarvester/api-keys.yaml
An exception has occurred: Server disconnected
(this line repeats ten times)
[*] Searching Baidu. Searching 0 results.
.....
The output is actually huge, spanning over 600 lines. You can view the complete output in this GitHub gist.
Analyzing the output
When TheHarvester finishes its work, the real detective work begins.
The Initial Chatter: Warnings and Status Messages
Right off the bat, you'll see a series of status checks and warnings:
Read api-keys.yaml from /etc/theHarvester/api-keys.yaml
An exception has occurred: Server disconnected
[*] Searching Baidu. Searching 0 results.
[*] Searching Bing. Searching results.
Don't let these alarm you. The "Server disconnected" and similar exceptions are TheHarvester's way of telling you that certain data sources were unavailable or timed out—this is completely normal during reconnaissance. The tool gracefully skips these and moves on to working sources.
The reconnaissance gold: Key findings
Here's where we strike valuable intelligence:
Network infrastructure (ASN)
[*] ASNS found: 1
--------------------
AS16509
This reveals the Autonomous System Number, essentially telling us which major network provider hosts this infrastructure (in this case, AS16509 is Amazon.com, Inc.).
The attack surface - interesting URLs
[*] Interesting Urls found: 15
--------------------
http://testphp.vulnweb.com/
https://testasp.vulnweb.com/
http://testphp.vulnweb.com/login.php
This is your target list! Each URL represents a potential entry point. Notice we've found:
Multiple applications (testphp, testasp, testhtml5)
Specific functional pages (login.php, search.php)
Both HTTP and HTTPS services.
IP address mapping
[*] IPs found: 2
-------------------
44.228.249.3
44.238.29.244
Only two IP addresses serving all this content?
This suggests virtual hosting, where multiple domains share the same server—valuable for understanding the infrastructure setup.
The subdomain treasure trove
[*] Hosts found: 610
---------------------
testphp.vulnweb.com:44.228.249.3
testasp.vulnweb.com:44.238.29.244
This massive list of 610 hosts reveals the true scale of the environment. You can see patterns emerging:
Application subdomains (testphp, testasp)
Infrastructure components (compute.vulnweb.com, elb.vulnweb.com)
Geographic distribution across AWS regions
What's not there matters too
[*] No emails found.
[*] No people found.
For a test site like vulnweb.com, this makes sense. But in a real engagement, missing email addresses might mean you need different reconnaissance approaches.
From reconnaissance to action
So what's next with this intelligence? Your penetration testing roadmap becomes clear:
Prioritize targets - Start with the login pages and search functions
Scan the applications - Use tools like nikto or nuclei on the discovered URLs
Probe the infrastructure - Run nmap scans on the identified IP addresses
Document everything - Each subdomain is a potential attack vector
In just minutes, TheHarvester has transformed an unknown domain into a mapped-out territory ready for deeper security testing.
Step 4: Expanding the search with more data sources
The real power of TheHarvester comes from using multiple data sources. Let's run a more comprehensive scan using bing, linkedin, and threatcrowd.
theHarvester -d vulnweb.com -l 100 -b bing,linkedin,threatcrowd
Bing: Often returns different and sometimes more results than Google.
LinkedIn: Can be useful for finding employee names and profiles associated with a company, which can help in social engineering attacks. For vulnweb.com, this won't yield results, but for a real corporate target, it's invaluable.
Threat Crowd: An open-source threat intelligence engine that can often provide a rich list of subdomains.
Step 5: Using all sources and saving results
For the most thorough reconnaissance, you can use nearly all sources with the -b all flag.
🚧 This can be slow and may trigger captchas on some search engines.
It's also crucial to save your results for later analysis. Use the -f flag for this.
theHarvester -d vulnweb.com -l 100 -b all -f recon-results
This command will:
Query all available data sources.
Limit results to 100 per source.
Save the final output to recon-results.json and recon-results.xml.
Read JSON files with cat and jq:
cat recon-results.json | jq '.'
Important notes and best practices
Rate Limiting: Be respectful of the data sources. Using high limits or running scans too frequently can get your IP address temporarily blocked.
Legality: Only use TheHarvester on domains you own or have explicit permission to test. Unauthorized reconnaissance can be illegal.
Context is Key: TheHarvester is a starting point. The data it collects must be verified and analyzed in the context of a broader security assessment.
TheHarvester is a cornerstone tool for any penetration tester or security researcher. By following this guide, you can effectively use it to map out the digital footprint of your target and lay the groundwork for a successful security assessment.
  4. by: Chris Coyier Mon, 20 Oct 2025 18:06:24 +0000 If you’re a CodePen user, this shouldn’t affect you aside from potentially seeing some console noise while we work this out. Carry on! At CodePen we have Embedded Pens which are shown in an <iframe>. These contain user-authored code at a non-same-origin URL relative to where they are placed. We like to be both safe and as permissive as possible with what we allow users to build and test. The sandbox attribute helps us with safety and while there are some issues with it that we’ll get to later, this is mostly about the allow attribute. Here’s an example. A user wants to use the navigator.clipboard.writeText() API. So they write JavaScript like: button.onclick = async () => { try { await navigator.clipboard.writeText(`some text`); console.log('Content copied to clipboard'); } catch (err) { console.error('Failed to copy: ', err); } } The Embedded Pen is placed on arbitrary origins, for example: chriscoyier.net. The src of the <iframe> is at codepen.io, so there is an origin mismatch there. The JavaScript in the iframe is not same-origin JavaScript, thus is subject to permissions policies. If CodePen were to not use the allow attribute on our <iframe>, it would throw an error when the user tries to execute that JavaScript. Failed to copy: NotAllowedError: Failed to execute 'writeText' on 'Clipboard': The Clipboard API has been blocked because of a permissions policy applied to the current document. See https://crbug.com/414348233 for more details. This is an easy fix. We make sure that allow attribute is on the <iframe>, like this, targeting the exact feature we want to allow at any origin: <iframe src="https://codepen.io/..." allow="clipboard-write *;"> </iframe> But here’s where the problem comes in… The (new) Nested Iframe Issue CodePen’s Embedded Pens are actually nested <iframe>s, in code structured like this: <iframe src="https://codepen.io/..."> CodePen UI <iframe src="..."> User-Authored Code </iframe> </iframe> We need to put the allow attribute on the user-authored code, so it works, like this: <iframe src="https://codepen.io/..."> CodePen UI <iframe src="..." allow="clipboard-write *;" > User-Authored Code </iframe> </iframe> This is the problem! As soon as the nested iframe has the allow attribute, as of recently (seems like Chrome 136) this will throw an error: [Violation] Potential permissions policy violation: clipboard-write is not allowed in this document. With our complete list (which I’ll include below), this error list is very intense. Can’t we just put the allow attributes on both <iframe>s? Yes and no. Now we run into a second problem that we’ve been working around for many years. That problem is that every browser has a different set of allow attribute values that it supports. If you use a value that isn’t supported, it throws console errors or warnings about those attributes. This is noisy or scary to users who might think it’s their own code causing the issue, and it’s entirely outside of their (or our) control. The list of allow values for Google Chrome We know we need all these to allow users to test browser APIs. This list is constantly being adjusted with new APIs, often ones that our users ask for directly.
<iframe allow="accelerometer *; bluetooth *; camera *; clipboard-read *; clipboard-write *; display-capture *; encrypted-media *; geolocation *; gyroscope *; language-detector *; language-model *; microphone *; midi *; rewriter *; serial *; summarizer *; translator *; web-share *; writer *; xr-spatial-tracking *" ></iframe> There are even some quite-new AI-related attributes in there reflecting brand new browser APIs. Example of allow value errors If were to ship those allow attribute values on all <iframe>s that we generate for Embedded Pens, here’s what it would look like in Firefox: At the moment, Firefox actually displays three sets of these warning. That’s a lot of console noise. Safari, at the moment, isn’t displaying errors or warnings about unsupported allow attribute values, but I believe they have in the past. Chrome itself throws warnings. If I include an unknown policy like fartsandwich, it will throw a warning like: Unrecognized feature: 'fartsandwich'. Those AI-related attributes require a trial which also throw warnings, so most users get that noise as well. We (sorry!) Need To Do User-Agent Sniffing To avoid all this noise and stop scaring users, we detect the user-agent (client-side) and generate the iframe attributes based on what browser we’re pretty sure it is. Here’s our current data and choices for the allow attribute export default { allowAttributes: { chrome: [ 'accelerometer', 'bluetooth', 'camera', 'clipboard-read', 'clipboard-write', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'language-detector', 'language-model', 'microphone', 'midi', 'rewriter', 'serial', 'summarizer', 'translator', 'web-share', 'writer', 'xr-spatial-tracking' ], firefox: [ 'camera', 'display-capture', 'geolocation', 'microphone', 'web-share' ], default: [ 'accelerometer', 'ambient-light-sensor', 'camera', 'display-capture', 'encrypted-media', 'geolocation', 'gyroscope', 'microphone', 'midi', 'payment', 'serial', 'vr', 'web-share', 'xr-spatial-tracking' ] } }; We’ve been around long enough to know that user-agent sniffing is rife with problems. And also around long enough that you gotta do what you gotta do to solve problems. We’ve been doing this for many years and while we don’t love it, it’s mostly worked. The User-Agent Sniffing Happens on the Client <script> /* We need to user-agent sniff at *this* level so we can generate the allow attributes when the iframe is created. */ </script> <iframe src="..." allow="..."></iframe> CodePen has a couple of features where the <iframe> is provided directly, not generated. Direct <iframe> embeds. Users choose this in situations where they can’t run JavaScript directly on the page it’s going (e.g. RSS, restrictive CMSs, etc) oEmbed API. This returns an <iframe> to be embedded via a server-side call. The nested structure of our embeds has helped us here where we have that first level of iframe to attempt to run the user-agent sniff an apply the correct allow attributes to the internal iframe. The problem now is that if we’re expected to provide the allow attributes directly, we can’t know which set of attributes to provide, because any browser in the world could potentially be loading that iframe. Solutions? Are the allow attributes on “parent” iframes really necessary? Was this a regression? Or is this a feature? It sorta seems like the issue is that it’s possible for nested iframes to loosen permissions on a parent, which could be a security issue? It would be good to know where we fall here. 
Could browsers just stop erroring or warning about unsupported allow attributes? Looks like that’s what Safari is doing and that seems OK? If this is the case, we could just ship the complete set of allow attributes to all browsers. A little verbose but prevents needing to user-agent sniff. This could also help with the problem of needing to “keep up” with these attributes quite as much. For example, if Firefox starts to support the “rewriter” value, then it’ll just start working. This is better than some confused or disappointed user writing to support about it. Even being rather engaged with web platform news, we find it hard to catch when these very niche features evolve and need iframe attribute changes. Could browsers give us API access to what allow attributes are supported? Can the browser just tell us which ones it supports and then we could verify our list against that? Navigator.allow? Also… It’s not just the allow attribute. We also maintain browser-specific sets for the sandbox attribute. Right now, this isn’t affected by the nesting issues, but we could see it going down that road. This isn’t entirely about nested iframes. We use one level of iframe anywhere on codepen.io we show a preview of a Pen, and we need allow attributes there also. This is less of an immediate problem because of the user-agent sniffing JS we’re able to run there to get them right, but ideally we wouldn’t have to do that at all.
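To make the client-side sniffing described above concrete, here's a minimal sketch of how per-browser lists like the allowAttributes data shown earlier could be turned into an allow string. The helper names (pickBrowserKey, buildAllowAttribute) and the user-agent checks are illustrative assumptions, not CodePen's actual implementation:
const allowAttributes = {
  chrome: [ 'clipboard-read', 'clipboard-write', 'geolocation', 'web-share' /* ... */ ],
  firefox: [ 'camera', 'display-capture', 'geolocation', 'microphone', 'web-share' ],
  default: [ 'camera', 'geolocation', 'microphone', 'web-share' /* ... */ ]
};

function pickBrowserKey(ua = navigator.userAgent) {
  // Order matters: Chrome's UA string also contains "Safari",
  // so check for Firefox and Chrome/Chromium explicitly first.
  if (/firefox/i.test(ua)) return 'firefox';
  if (/chrome|chromium/i.test(ua)) return 'chrome';
  return 'default';
}

function buildAllowAttribute(key = pickBrowserKey()) {
  // Produces a string like "clipboard-write *; geolocation *"
  return allowAttributes[key].map(feature => `${feature} *`).join('; ');
}

// Generate the embed iframe on the client so the allow list matches the browser
const iframe = document.createElement('iframe');
iframe.src = 'https://codepen.io/...';
iframe.setAttribute('allow', buildAllowAttribute());
document.body.appendChild(iframe);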
  5. by: Zell Liew Mon, 20 Oct 2025 16:11:40 +0000 Honeypots are fields that developers use to prevent spam submissions. They still work in 2025. So you don’t need reCAPTCHA or other annoying mechanisms. But you got to set a couple of tricks in place so spambots can’t detect your honeypot field. Use This I’ve created a Honeypot component that does everything I mention below. So you can simply import and use them like this: <script> import { Honeypot } from '@splendidlabz/svelte' </script> <Honeypot name="honeypot-name" /> Or, if you use Astro, you can do this: --- import { Honeypot } from '@splendidlabz/svelte' --- <Honeypot name="honeypot-name" /> But since you’re reading this, I’m sure you kinda want to know what’s the necessary steps. Preventing Bots From Detecting Honeypots Here are two things that you must not do: Do not use <input type=hidden>. Do not hide the honeypot with inline CSS. Bots today are already smart enough to know that these are traps — and they will skip them. Here’s what you need to do instead: Use a text field. Hide the field with CSS that is not inline. A simple example that would work is this: <input class="honeypot" type="text" name="honeypot" /> <style> .honeypot { display: none; } </style> For now, placing the <style> tag near the honeypot seems to work. But you might not want to do that in the future (more below). Unnecessary Enhancements You may have seen these other enhancements being used in various honeypot articles out there: aria-hidden to prevent screen readers from using the field autocomplete=off and tabindex="-1" to prevent the field from being selected <input ... aria-hidden autocomplete="off" tabindex="-1" /> These aren’t necessary because display: none itself already does the things these properties are supposed to do. Future-Proof Enhancements Bots get smarter everyday, so I won’t discount the possibility that they can catch what we’ve created above. So, here are a few things we can do today to future-proof a honeypot: Use a legit-sounding name attribute values like website or mobile instead of obvious honeypot names like spam or honeypot. Use legit-sounding CSS class names like .form-helper instead of obvious ones like .honeypot. Put the CSS in another file so they’re further away and harder to link between the CSS and honeypot field. The basic idea is to trick spam bot to enter into this “legit” field. <input class="form-helper" ... name="occupation" /> <!-- Put this into your CSS file, not directly in the HTML --> <style> .form-helper { display: none; } </style> There’s a very high chance that bots won’t be able to differentiate the honeypot field from other legit fields. Even More Enhancements The following enhancements need to happen on the <form> instead of a honeypot field. The basic idea is to detect if the entry is potentially made by a human. There are many ways of doing that — and all of them require JavaScript: Detect a mousemove event somewhere. Detect a keyboard event somewhere. Ensure the the form doesn’t get filled up super duper quickly (‘cuz people don’t work that fast). 
Now, the simplest way of using these (I always advocate for the simplest way I know), is to use the Form component I’ve created in Splendid Labz: <script> import { Form, Honeypot } from '@splendidlabz/svelte' </script> <Form> <Honeypot name="honeypot" /> </Form> If you use Astro, you need to enable JavaScript with a client directive: --- import { Form, Honeypot } from '@splendidlabz/svelte' --- <Form client:idle> <Honeypot name="honeypot" /> </Form> If you use vanilla JavaScript or other frameworks, you can use the preventSpam utility that does the triple checks for you: import { preventSpam } from '@splendidlabz/utils/dom' let form = document.querySelector('form') form = preventSpam(form, { honeypotField: 'honeypot' }) form.addEventListener('submit', event => { event.preventDefault() if (form.containsSpam) return else form.submit() }) And, if you don’t wanna use any of the above, the idea is to use JavaScript to detect if the user performed any sort of interaction on the page: export function preventSpam( form, { honeypotField = 'honeypot', honeypotDuration = 2000 } = {} ) { const startTime = Date.now() let hasInteraction = false // Check for user interaction function checkForInteraction() { hasInteraction = true } // Listen for a couple of events to check interaction const events = ['keydown', 'mousemove', 'touchstart', 'click'] events.forEach(event => { form.addEventListener(event, checkForInteraction, { once: true }) }) // Check for spam via all the available methods form.containsSpam = function () { const fillTime = Date.now() - startTime const isTooFast = fillTime < honeypotDuration const honeypotInput = form.querySelector(`[name="${honeypotField}"]`) const hasHoneypotValue = honeypotInput?.value?.trim() const noInteraction = !hasInteraction // Clean up event listeners after use events.forEach(event => form.removeEventListener(event, checkForInteraction) ) return isTooFast || !!hasHoneypotValue || noInteraction } } Better Forms I’m putting together a solution that will make HTML form elements much easier to use. It includes the standard elements you know, but with easy-to-use syntax and are highly accessible. Stuff like: Form Honeypot Text input Textarea Radios Checkboxes Switches Button groups etc. Here’s a landing page if you’re interested in this. I’d be happy to share something with you as soon as I can. Wrapping Up There are a couple of tricks that make honeypots work today. The best way, likely, is to trick spam bots into thinking your honeypot is a real field. If you don’t want to trick bots, you can use other bot-detection mechanisms that we’ve defined above. Hope you have learned a lot and manage to get something useful from this! Building a Honeypot Field That Works originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  6. by: Chris Coyier Mon, 20 Oct 2025 15:47:26 +0000 We get all excited when we get new CSS features. Well, I do anyway. It’s amazing, because sometimes it unlocks something we’ve literally never been able to do before. It’s wonderful when an artist finishes a new painting, and something to be celebrated. But this is more akin to a new color dropping, making possible a sight never before seen. Just as exciting, to me, is the evolution of new features. Both from the perspective of the feature literally gaining new abilities, or us users figuring out how to use it more effectively. We point to CSS grid as an incredibly important feature addition to CSS in the last decade. And it was! … but then later we got subgrid. … but then later gap was improved to work across layouts. … but then later we got safe alignment. And this journey isn’t over! Masonry is actively being hashed out, and has gone back-and-forth whether it will be part of grid itself. (It looks like it will be a new display type but share properties with other layout types.) Plus another one I’m excited about: styling the gap. Just as gap itself is just for the spacing between grid items, now row-rule and column-rule can draw lines in those gaps. Actual elements don’t need to be there, so we don’t need to put “fake” elements there just to draw borders and whatnot. Interestingly, column-rule isn’t even new as it was used to draw lines between multi-column layouts already, now it just does double-duty which is kinda awesome. Chrome Developer Blog: A new way to style gaps in CSS Microsoft Edge Blog: Minding the gaps: A new way to draw separators in CSS If we’re entering an era where CSS innovation slows down a little and we catch our breath with Stage 2 sorta features and figuring out what to do with these new features, I’m cool with that. Sorta like… We’ve got corner-shape, so what can we actually do with it? We’ve got @layer now, how do we actually get it into a project? We’ve got View Transitions now, maybe we actually need to scope them for variety of real-world situations.
  7. by: Bhuwan Mishra Mon, 20 Oct 2025 03:31:08 GMT When I started experimenting with AI integrations, I wanted to create a chat assistant on my website, something that could talk like GPT-4, reason like Claude, and even joke like Grok. But OpenAI, Anthropic, Google, and xAI all require API keys. That means I needed to set up an account for each of the platforms and upgrade to one of their paid plans before I could start coding. Why? Because most of these LLM providers require a paid plan for API access. Not to mention, I would need to cover API usage billing for each LLM platform. What if I could tell you there's an easier approach to start integrating AI within your websites and mobile applications, even without requiring API keys at all? Sounds exciting? Let me share how I did exactly that.
Integrate AI with Puter.js
This is all thanks to Puter.js, an open source JavaScript library that lets you use cloud features like AI models, storage, databases, and user auth, all from the client side. No servers, no API keys, no backend setup needed here. What else can you ask for as a developer? Puter.js is built around Puter’s decentralized cloud platform, which handles all the stuff like key management, routing, usage limits, and billing. Everything’s abstracted away so cleanly that, from your side, it feels like authentication, AI, and LLMs just live in your browser. Enough talking, let’s see how you can add GPT-5 integration within your web application in less than 10 lines.
<html> <body> <script src="https://js.puter.com/v2/"></script> <script> puter.ai.chat(`What is puter js?`, { model: 'gpt-5-nano', }).then(puter.print); </script> </body> </html>
Yes, that’s it. Unbelievable, right? Let's save the HTML code into an index.html file and place it in a new, empty directory. Open a terminal, switch to the directory where the index.html file is located, and serve it on localhost with the Python command:
python -m http.server
Then open http://localhost:8000 in your web browser. Click on the Puter.js “Continue” button when presented.
Integrate ChatGPT with Puter.js
🚧 It would take some time before you see a response from ChatGPT. Till then, you'll see a blank page.
ChatGPT Nano doesn't know Puter.js yet, but it will, soon.
You can explore a lot of examples and get an idea of what Puter.js does for you on its playground. Let’s modify the code to make it more interesting this time. It would take a user query and return streaming responses from three different LLMs so that users can decide which among the three provides the best result.
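The core of the comparison page below is puter.ai.chat() called with stream: true, which makes it return an async iterable of partial responses. Isolated from the markup, that pattern looks roughly like this (a minimal sketch; the model id and callback are just examples):
async function streamAnswer(query, model, onText) {
  // With stream: true, puter.ai.chat() resolves to an async iterable of parts
  const response = await puter.ai.chat(query, { model: model, stream: true });
  for await (const part of response) {
    if (part?.text) onText(part.text); // handle each chunk as it arrives
  }
}

// Example: log chunks from one model as they stream in
streamAnswer('What is Puter.js?', 'claude-sonnet-4', text => console.log(text));
With that pattern in mind, here's the full comparison page: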
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>AI Model Comparison</title> <script src="https://cdn.twind.style"></script> <script src="https://js.puter.com/v2/"></script> </head> <body class="bg-gray-900 min-h-screen p-6"> <div class="max-w-7xl mx-auto"> <h1 class="text-3xl font-bold text-white mb-6 text-center">AI Model Comparison</h1> <div class="mb-6"> <label for="queryInput" class="block text-white mb-2 font-medium">Enter your query:</label> <div class="flex gap-2"> <input type="text" id="queryInput" class="flex-1 px-4 py-3 rounded-lg bg-gray-800 text-white border border-gray-700 focus:outline-none focus:border-blue-500" placeholder="Write a detailed essay on the impact of artificial intelligence on society" value="Write a detailed essay on the impact of artificial intelligence on society" /> <button id="submitBtn" class="px-6 py-3 bg-blue-600 hover:bg-blue-700 text-white rounded-lg font-medium transition-colors" > Generate </button> </div> </div> <div class="grid grid-cols-1 md:grid-cols-3 gap-4"> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-blue-400 mb-3">Claude Opus 4</h2> <div id="output1" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-green-400 mb-3">Claude Sonnet 4</h2> <div id="output2" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> <div class="bg-gray-800 rounded-lg p-4"> <h2 class="text-xl font-semibold text-purple-400 mb-3">Gemini 2.0 Pro</h2> <div id="output3" class="text-gray-300 text-sm leading-relaxed h-96 overflow-y-auto whitespace-pre-wrap"></div> </div> </div> </div> <script> const queryInput = document.getElementById('queryInput'); const submitBtn = document.getElementById('submitBtn'); const output1 = document.getElementById('output1'); const output2 = document.getElementById('output2'); const output3 = document.getElementById('output3'); async function generateResponse(query, model, outputElement) { outputElement.textContent = 'Loading...'; try { const response = await puter.ai.chat(query, { model: model, stream: true }); outputElement.textContent = ''; for await (const part of response) { if (part?.text) { outputElement.textContent += part.text; outputElement.scrollTop = outputElement.scrollHeight; } } } catch (error) { outputElement.textContent = `Error: ${error.message}`; } } async function handleSubmit() { const query = queryInput.value.trim(); if (!query) { alert('Please enter a query'); return; } submitBtn.disabled = true; submitBtn.textContent = 'Generating...'; submitBtn.classList.add('opacity-50', 'cursor-not-allowed'); await Promise.all([ generateResponse(query, 'claude-opus-4', output1), generateResponse(query, 'claude-sonnet-4', output2), generateResponse(query, 'google/gemini-2.0-flash-lite-001', output3) ]); submitBtn.disabled = false; submitBtn.textContent = 'Generate'; submitBtn.classList.remove('opacity-50', 'cursor-not-allowed'); } submitBtn.addEventListener('click', handleSubmit); queryInput.addEventListener('keypress', (e) => { if (e.key === 'Enter') { handleSubmit(); } }); </script> </body> </html> Save the above file in the index.html file as we did in the previos example and then run the server with Python. This is what it looks like now on localhost. 
Comparing output from different LLM providers with Puter.js And here is a sample response from all three models on the query "What is It's FOSS". Looks like It's FOSS is well trusted by humans as well as AI 😉 My Final Take on Puter.js and LLM Integration That’s not bad! Without requiring any API keys, you can do all this crazy stuff. Puter.js uses the “user pays” model, which means it’s completely free for developers; your application’s users spend credits from their own Puter accounts for the cloud features, like storage and LLMs, that they use. I reached out to the team to understand the pricing structure, but at the moment they are still working out a pricing plan. This new Puter.js library is seriously underrated. I’m still amazed by how easy it has made LLM integration. Besides AI, you can use the Puter.js SDK for authentication and storage, much like Firebase (see the short sketch below). Do check out this wonderful open source JavaScript library and explore what else you can build with it. Puter.js - Free, Serverless, Cloud and AI in One Simple Library: Puter.js provides auth, cloud storage, database, GPT-4o, o1, o3-mini, Claude 3.7 Sonnet, DALL-E 3, and more, all through a single JavaScript library. No backend. No servers. No configuration. Puter
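To give you a feel for those non-AI features, here is a small sketch of signing a user in and saving a file from the browser. The method names (puter.auth.signIn(), puter.fs.write(), puter.fs.read()) reflect my reading of the Puter.js docs, so double-check them against the current documentation before building on this:

<html>
<body>
  <script src="https://js.puter.com/v2/"></script>
  <script>
    // Hedged sketch: method names as documented by Puter.js at the time of writing.
    async function saveNote() {
      // Prompts the user to sign in with their Puter account if they aren't already.
      await puter.auth.signIn();

      // Writes a small text file to the user's Puter cloud storage.
      await puter.fs.write('hello.txt', 'Hello from Puter.js!');

      // Reads it back and prints the contents to the page.
      const blob = await puter.fs.read('hello.txt');
      puter.print(await blob.text());
    }
    saveNote();
  </script>
</body>
</html>

As with the AI calls, the user’s own Puter account covers the storage, so there is still nothing for you to configure on a server.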
8. by: Abhishek Prakash Fri, 17 Oct 2025 18:31:53 +0530 Welcome back to another round of Linux magic and command-line sorcery. Weirdly scary opening line, right? That's because I am already in the Halloween spirit 🎃 And I'll take this opportunity to crack a dad joke: Q: Why do Linux sysadmins confuse Halloween with Christmas? A: Because 31 Oct equals 25 Dec. Hint: Think octal. Think decimal. (There's a one-line proof just below.) Jokes aside, we are working towards a few new series and courses. The CNCF series should be published next week, followed by either networking or Kubernetes microcourses. Stay awesome 😄 This post is for subscribers only.
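And here is that one-line proof, using plain Bash arithmetic (nothing exotic required):

# Bash treats 8#N as "N in base 8", so octal 31 is decimal 25.
echo $((8#31))
# prints: 25, which is why 31 OCT reads the same as 25 DEC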
  9. by: Hangga Aji Sayekti Fri, 17 Oct 2025 17:59:33 +0530 This short guide will help you get started with WhatWeb, a simple tool for fingerprinting websites. It’s written for beginners who want clear steps, short explanations, and practical tips. By the end, you’ll know how to run WhatWeb with confidence. What is WhatWeb?Imagine you’re curious about what powers a website: the CMS, web server, frameworks, analytics tools, or plugins behind it. WhatWeb can tell you all that right from the Linux command line. It’s like getting a quick peek under the hood of any site. In this guide, we’ll skip the long theory and go straight to the fun part. You’ll run the commands, see the results, and learn how to understand them in real situations. Legal and ethical noteBefore you start, here’s a quick reminder. Only scan websites that you own or have clear permission to test. Running scans on random sites can break the law and go against ethical hacking practices. If you just want to practice, use safe test targets that are made for learning. For the examples in this guide, we will use http://www.vulnweb.com/ and some of its subdomains as safe test targets. These sites are intentionally provided for learning and experimentation, so they are good places to try WhatWeb without worrying about legal or ethical issues. Install WhatWebKali Linux often includes WhatWeb. Check version with: whatweb --version If not present, install with: sudo apt update sudo apt install whatweb Quick basic scanRun a fast scan with this command. Replace the URL with your target. whatweb http://testphp.vulnweb.com This prints a one-line summary for the target. You will see status code, server, CMS, and other hints: Beyond basic scan: Getting more out of whatwebThe above was just the very basic usse of whatweb. Let's see what else we can do with it. 1. Verbose outputwhatweb -v http://testphp.vulnweb.com This shows more details and the patterns WhatWeb matched. WhatWeb report for http://testphp.vulnweb.com Status : 200 OK Title : Home of Acunetix Art IP : 44.228.249.3 Country : UNITED STATES, US Summary : ActiveX[D27CDB6E-AE6D-11cf-96B8-444553540000], Adobe-Flash, Email[wvs@acunetix.com], HTTPServer[nginx/1.19.0], nginx[1.19.0], Object[http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0][clsid:D27CDB6E-AE6D-11cf-96B8-444553540000], PHP[5.6.40-38+ubuntu20.04.1+deb.sury.org+1], Script[text/JavaScript], X-Powered-By[PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1] Detected Plugins: [ ActiveX ] ActiveX is a framework based on Microsoft's Component Object Model (COM) and Object Linking and Embedding (OLE) technologies. ActiveX components officially operate only with Microsoft's Internet Explorer web browser and the Microsoft Windows operating system. - More info: http://en.wikipedia.org/wiki/ActiveX Module : D27CDB6E-AE6D-11cf-96B8-444553540000 [ Adobe-Flash ] This plugin identifies instances of embedded adobe flash files. Google Dorks: (1) Website : https://get.adobe.com/flashplayer/ [ Email ] Extract email addresses. Find valid email address and syntactically invalid email addresses from mailto: link tags. We match syntactically invalid links containing mailto: to catch anti-spam email addresses, eg. bob at gmail.com. This uses the simplified email regular expression from http://www.regular-expressions.info/email.html for valid email address matching. String : wvs@acunetix.com String : wvs@acunetix.com [ HTTPServer ] HTTP server header string. 
This plugin also attempts to identify the operating system from the server header. String : nginx/1.19.0 (from server string) [ Object ] HTML object tag. This can be audio, video, Flash, ActiveX, Python, etc. More info: http://www.w3schools.com/tags/tag_object.asp Module : clsid:D27CDB6E-AE6D-11cf-96B8-444553540000 (from classid) String : http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,29,0 [ PHP ] PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. This plugin identifies PHP errors, modules and versions and extracts the local file path and username if present. Version : 5.6.40-38+ubuntu20.04.1+deb.sury.org+1 Google Dorks: (2) Website : http://www.php.net/ [ Script ] This plugin detects instances of script HTML elements and returns the script language/type. String : text/JavaScript [ X-Powered-By ] X-Powered-By HTTP header String : PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1 (from x-powered-by string) [ nginx ] Nginx (Engine-X) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. Version : 1.19.0 Website : http://nginx.net/ HTTP Headers: HTTP/1.1 200 OK Server: nginx/1.19.0 Date: Mon, 13 Oct 2025 07:29:42 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: close X-Powered-By: PHP/5.6.40-38+ubuntu20.04.1+deb.sury.org+1 Content-Encoding: gzip 2. Aggressive scan (more probes)whatweb -a 3 http://testphp.vulnweb.com Use aggressive mode when you want more fingerprints. Aggressive mode is slower and noisier. Use it only with permission. 3. Scan a list of targetsCreate a file named targets.txt with one URL per line. nano targets.txt When nano opens, paste the following lines exactly (copy and right-click to paste in many terminals): http://testphp.vulnweb.com/ http://testasp.vulnweb.com/ http://testaspnet.vulnweb.com/ http://rest.vulnweb.com/ http://testhtml5.vulnweb.com/ Save and exit nano by pressing ctrl+X. Confirm the file was created for the sake of it: cat targets.txt You should see the five URLs listed. Then run: whatweb -i targets.txt --log-json results.json This saves results in JSON format in results.json. What to expect on screen: WhatWeb prints a per-host summary while it runs. When finished, open the JSON file to inspect it: less results.json If you want a pretty view and you have jq installed, run: jq '.' results.json | less -R 4. Save a human readable logwhatweb -v --log-verbose whatweb.log http://testphp.vulnweb.com Let's see the log: cat whatweb.log 5. Use a proxy (for example Burp Suite)whatweb --proxy 127.0.0.1:8080 http://testphp.vulnweb.com 6. Custom user agentIf a site blocks you, slow down the scan or change the user agent. whatweb --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" http://testphp.vulnweb.com 7. Limit scan to specific portsWhatWeb accepts a URL with port, for example: whatweb http://example.com:8080 Interpreting the outputA typical WhatWeb line looks like this: http://testphp.vulnweb.com [200 OK] Apache[2.4.7], PHP[5.5.9], HTML5 200 OK - HTTP status code. It means the request succeeded.Apache[2.4.7] - the web server software and version.PHP[5.5.9] - server side language and version.HTML5 - content hints.If you see a CMS such as WordPress, you may also see plugins or themes. WhatWeb reports probable matches. It is not a guarantee. Combine WhatWeb with other toolsWhatWeb is best for reconnaissance. 
Use it with these tools for a fuller picture:
nmap - for network and port scans
dirsearch or gobuster - for directory and file discovery
wpscan - for deeper WordPress checks
A simple workflow:
Run WhatWeb to identify technologies.
Use nmap to find open ports and services.
Use dirsearch to find hidden pages or admin panels.
If the site is WordPress, run wpscan for plugin vulnerabilities.
Conclusion
WhatWeb is a lightweight and fast tool for fingerprinting websites. It helps you quickly understand what runs a site and gives leads for deeper testing. Use the copy-paste commands here to get started, and combine WhatWeb with other tools for a full reconnaissance workflow. Happy pen-testing 😀
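One last follow-up to the JSON logging tip from earlier: once you have results.json, you can pull out just the targets and the plugin names WhatWeb matched using jq. Treat the field names here (target and plugins) as an assumption based on recent WhatWeb versions, and adjust if your output differs:

# List each scanned target together with the plugins WhatWeb matched for it.
# Entries without a "target" key (WhatWeb sometimes logs empty objects) are skipped.
jq -r '.[] | select(.target != null) | "\(.target): \(.plugins | keys | join(", "))"' results.json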
10. by: Pulkit Chandak Fri, 17 Oct 2025 05:20:49 GMT E-ink display technology arrived on the scene as the answer to a long list of issues and desires people had with digital book reading. The strain on the eyes, the distractions, the short battery life: all of it fixed in one swoop. While the most popular option in the category remains the Amazon Kindle, not every one of us wants a DRM-restricted Big Tech ecosystem. As a Linux user and open source enthusiast, I wanted something more 'open', so I scoured the World Wide Web and came up with a few interesting options. I have put them into two categories:
DIY: You use a board like the Raspberry Pi Pico and build the reader yourself, thanks to the blueprint provided by the project developer. This is for hardware tinkerers.
Non-DIY: A couple of pre-built options that may be considered here.
Needless to say, you should not expect a polished, out-of-the-box eBook experience like the Amazon Kindle, but that's not what we are aiming for here, are we? Also, I have not tested these projects on my own. As much as I would like to, I don't have enough money to get all of them and experiment with them.
1. The Open Book
The Open Book project is the definitive DIY ebook reader project. It is based on the Raspberry Pi Pico and makes a point of requiring you to buy a minimum number of components. The pins on the Pico make it easy to control all necessary actions, including button controls, power controls, etc. The firmware is called libros, which needs to be flashed onto the Pico. It also uses a library called Babel that gives it the ability to display text in all the world's languages, which is a major advantage.
Display: 4.2" GDEW042T2 display, designed for fast refreshing
Formats supported: Plain UTF-8 text, TXT files (a converter is given by the creator)
Battery: 2 AAA batteries
Cost: Can differ depending on the hardware you decide to go with, but a decent build can be made for about $130.
The PCB for the main board as well as the e-paper driver are easily printable because the schematics are given by the creator. The instructions for setting up the device and getting books ready to be read on it are given very clearly and concisely on the website.
2. ZEReader
ZEReader is a device inspired by The Open Book, another iteration of the Raspberry Pi Pico based e-ink device. This project is relatively more convenient as it provides a USB-C port for charging. The convenience is not limited to the usage, but extends to the assembly. The software is based on the Zephyr real-time OS, which makes it easier to adapt to other hardware boards as well.
Display: 7.5" Waveshare ePaper display
Formats supported: EPUB, very basic HTML parsing
Battery: LiPo battery
Cost: Unknown
For navigation, there are 4 buttons designed into the casing. The board is printable with schematics available online, and the parts can be gathered as the user pleases according to the requirements. A micro SD card is necessary for storing files. The instructions can all be found on the GitHub page, along with information on the parts and software commands. Get more information in our news article about the device.
3. Dual-Screen E-Reader
The big idea behind this project is getting back to the feeling of reading a two-page book instead of the single-page, pamphlet-like structure a Kindle provides. A button press to change the page moves both pages ahead, making it feel more natural, similar to an actual book.
Instead of a full single-board computer like a Raspberry Pi, this uses an SoC, the ESP32-S3. This gives it a significant edge in power consumption: it draws very little power in reading mode, and in deep sleep mode, which kicks in after 10 minutes of inactivity, consumption drops even more dramatically, so the device basically never needs to be turned off.
Display: 2 x 4.2" panels
Formats supported: EPUB, basic HTML
Battery: 2 x 1300 mAh batteries
Cost: The original creator's estimate is a little over $80.
The parts are all laid out in a very concise list on the originating Reddit post, with all the relevant information linked there. The project is covered on Yanko Design as well in a well-written post.
4. piEreader
The piEreader aims for a fully open approach that includes the hardware, the software, and even a server to host a library. The heart of the device is a Raspberry Pi Compute Module, giving it more capabilities than an average microcontroller. The display on the build has a touch screen as well as a backlight. The software revolves around MuPDF, a very well-known e-book reader on the Linux platform.
Display: 4.2" e-paper display
Formats supported: EPUB, MOBI, CBZ, PDF, etc.
Battery: Lithium battery
Cost: Unknown
The Hackaday page contains all the necessary information, and the GitLab page hosts all the necessary code. It is worth noting that the creator has been able to successfully try out the software on other boards like the PINE64-LTS, SOQUARTZ, etc. as well. Read more about this device in our news article.
5. TurtleBook
Taking an extremely practical approach, the creator of TurtleBook made some really innovative choices. First, as they mention, most e-book readers have a lot of unnecessary features when mostly all that is needed is turning a page. As such, the reader doesn't have any physical buttons. It works on gestures, which can be used to switch pages, open menus, and adjust brightness, among other things. Also, since e-ink technology doesn't require a lot of power, the power setup is solar with hybrid capacitors, making it truly autonomous and one-of-a-kind. The device is based on an Arduino MEGA2560 board.
Display: Waveshare 5.3" e-ink display, and a small OLED panel for easily accessing the menu options
Formats supported: CB files (a custom formatting website is given by the creator)
Battery: Hybrid capacitors
Cost: $80-$120
All the necessary parts and the links to them are provided by the creator in a list on the GitHub page, as well as the schematics for the PCBs and the 3D-printable casing. There are two options: one with SRAM, a charger, and WiFi capabilities, and the other with no charger or WiFi. The Instructables page for the device has very detailed instructions for the entire process, making it one of the most friendly options on this list.
6. EPub-InkPlate
Inkplate 6 from Soldered Electronics is basically an ESP32-based e-paper display. Inkplate uses recycled screens from old, discarded e-book readers. Excellent initiative. The project is open source, both software- and hardware-wise. While you can build a lot of cool devices on top of it, the EPub-InkPlate project allows you to convert it into an eBook reader. Although the GitHub repo hasn't seen any new updates since 2022, it could be worth giving a shot if you already have an Inkplate display.
7. PineNote (not DIY)
While not DIY like the other projects on the list, PineNote is from the company Pine64, which has been one of the most actively pro-open source companies in recent times.
Since it is pre-built by a proper manufacturer, it can provide a lot of stable features that the DIY projects might lack. To start with, it is immensely powerful and has a Linux-based OS. It has 128 GB of eMMC storage, 4 GB of RAM, and an ARM processor.
Display: 10.3" multi-touch e-ink panel with frontlighting and an optional Wacom EMR pen
Formats supported: PDF, MOBI, CBZ, TXT, etc., virtually any format
Battery: 4000 mAh lithium battery
Cost: $400 (I know, but it's not just an e-book reader)
It is also charged over USB-C and can be expanded into different sorts of projects, not just an e-book reader, since it is based on an unrestricted Linux OS.
Special Mention: paper 7
Don't confuse this paper 7 with the Paper 7 e-ink tablet from Harbor Innovations. That is also an excellent device, but not open source. Yes, paper 7 is an open source device, or at least it is in the process of becoming one. It is developed by a company called paperless paper, based in Leipzig, Germany. It has been designed mainly as a photo frame, but I think it can be repurposed into an e-book reader. Presently, the official integration shows that you can save and read webpages on it. Adding the ability to read PDF and ePUB files would be wonderful. paper 7
Conclusion
There are a lot of options to choose from, each with something more distinct than the last. The extent of the open-source philosophy, the amount of effort it might require, and the extra features the devices have are some of the factors that might influence your decision when choosing the right device for yourself. Whatever your choice may be, you might find yourself with a new device as well as a new interest, perhaps, after dabbling in the DIY side of open technology. We wish you the very best for it. Let us know what you think about it in the comments. Cheers!
  11. by: Abhishek Prakash Thu, 16 Oct 2025 04:50:27 GMT In the previous newsletter, I asked what kind of advice someone looking to switch from Windows to Linux would have. I got so many responses that I am still replying to all the suggestions. I am also working on the 'Windows to Linux migration' page. Hopefully, we will have that up by next week. Hope to see more people coming to Linux as Windows 10 support has ended now. 💬 Let's see what you get in this edition: Mastering alias command.A bug that broke Flatpaks on Ubuntu 25.10.Controversy over Framework supporting Hyprland project.New Flatpak software center.Open source game development arriving on iPhone.And other Linux news, tips, and, of course, memes!📰 Linux and Open Source NewsXogot is now available on Apple iPhone for open source game development.The German state of Schleswig-Holstein has completed a massive transition to open source email systems.Ubuntu 25.10 has been released as the second and final interim release of Ubuntu for 2025, with a bug briefly breaking flatpak installations on it.Zorin OS 18 is also available now, looking prettier than ever.Framework has found itself in a controversy over its recent endorsements of Hyprland project. Framework is Accused of Supporting the Far-right, Apparently for Sponsoring the Hyprland ProjectThe announcement has generated quite some buzz but for all the wrong reasons.It's FOSS NewsSourav Rudra🧠 What We’re Thinking AboutTelegram banned our community group without reasons. It's a deja vu moment, as Facebook was also banning links to Linux websites some months ago. Telegram, Please Learn Who’s a Threat and Who’s NotOur Telegram community got deleted without an explanation.It's FOSS NewsSourav RudraProprietary ecosystems are great at keeping creative people locked in, but you can break free with the power of FOSS. 5 Signs Your Proprietary Workflow Is Stifling Your Creativity (And What You Can Do About It)If these signs feel familiar, your creativity may be stifled by proprietary constraints.It's FOSS NewsTheena Kumaragurunathan🧮 Linux Tips, Tutorials, and LearningsYou can greatly improve your efficiency in the Linux terminal by using aliases.Ubuntu/GNOME customization tips.Our beginner's guide to the Nano text editor will teach you the basics without overwhelming you.Understanding software update management in Linux Mint.Getting Started With ManjaroThis is a collection of tutorials that are useful for new Manjaro users.It's FOSSAbhishek Prakash👷 AI, Homelab and Hardware CornerWe have a Pironman alternative for you that saves your wallet and desk space. The Affordable Pironman Alternative Mini PC Case for Raspberry Pi 5We have a new option in tower cases for Raspberry Pi 5. This one has a lower price tag but does that make it worth a purchase?It's FOSSAbhishek PrakashUbo Pod is an open source AI assistant that works for you, not for your data. It is based on Raspberry Pi. Bhuwan tried them all but llama.cpp finally nailed the local LLM experience. I have been using Keychron mechanical keyboard for two years now. I recently came across their upcoming product that has ceramic mechanical keyboards. Interesting materials choice, right? Keychron's Ceramic Keyboards🎫 Event Alert: First Ever UbuCon in IndiaThe Ubuntu India LoCo is hosting the first ever UbuCon event in India, and we are the official media partners for it! 
India’s First UbuCon Set to Unite Ubuntu Community in Bengaluru This NovemberIndia gets its first UbuCon!It's FOSS NewsSourav RudraProprietary ecosystems are great at keeping creative people locked in, but ✨ Project HighlightsBazaar is getting all the hype right now; it is a neat app store for GNOME that focuses on providing applications and add-ons from Flatpak remotes, particularly Flathub. GitHub - kolunmi/bazaar: New App Store for GNOMENew App Store for GNOME. Contribute to kolunmi/bazaar development by creating an account on GitHub.GitHubkolunmiA new, open source personal finance application. John Schneiderman’s - DRNAn application to manage your personal finances using a budget.DRNJohn Schneiderman📽️ Videos I Am Creating for YouYour Linux Mint setup deserves a stunning makeover! Subscribe to It's FOSS YouTube Channel Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing the existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus 💡 Quick Handy TipIn KDE Plasma, open settings and go into Colors & Themes → Window Decorations → Configure Titlebar. Here, add the "On all desktops" and "Keep above other windows" options to the title bar by dragging and dropping. Click on "Apply" to confirm the changes. Now, you can use: The On all desktops button to pin an app to all your desktops.The Keep above other windows button to keep a selected window always on top.🎋 Fun in the FOSSverseCan memory match terminal shortcuts with their actions? Memory Match Terminal Shortcuts With Their ActionsAn enjoyable way to test your memory by matching the Linux terminal shortcuts with their respective actions.It's FOSSAbhishek Prakash🤣 Meme of the Week: Windows 10 will be missed by many, but there are much better Linux choices to replace it with. 🗓️ Tech Trivia: On October 16, 1959, Control Data Corporation introduced the CDC 1604, one of the first fully transistorized computers. It was designed by Seymour Cray, who later became known as the father of supercomputing. The CDC 1604 was among the fastest machines of its time and was used for scientific research, weapons control, and commercial data processing. 🧑‍🤝‍🧑 From the Community: Windows 10 has reached end of life, and our FOSSers are discussing the event. Windows 10 reaches EOL tomorrow!Hi everybody, it’s that time again, that happens approx. every 10 or so years: A Windows version is reaching its end of life. I was doing some research and asked Brave Search about it. And the facts said that Windows 10 has 47% of overall Windows market share, which is roughly 35% of the overall share. Let’s just hope that they will do the right thing and switch to Linux. I wanted to know: what are others opinions on this? Do you know somebody who migrated from Windows?It's FOSS CommunityGeorge1❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  12. by: Temani Afif Wed, 15 Oct 2025 13:39:39 +0000 Let’s suppose you have N elements with the same animation that should animate sequentially. The first one, then the second one, and so on until we reach the last one, then we loop back to the beginning. I am sure you know what I am talking about, and you also know that it’s tricky to get such an effect. You need to define complex keyframes, calculate delays, make it work for a specific number of items, etc. Tell you what: with modern CSS, we can easily achieve this using a few lines of code, and it works for any number of items! The following demo is currently limited to Chrome and Edge, but will work in other browsers as the sibling-index() and sibling-count() functions gain broader support. You can track Firefox support in Ticket #1953973 and WebKit’s position in Issue #471. CodePen Embed Fallback In the above demo, the elements are animated sequentially and the keyframes are as simple as a single to frame changing an element’s background color and scale: @keyframes x { to { background: #F8CA00; scale: .8; } } You can add or remove as many items as you want and everything will keep running smoothly. Cool, right? That effect is made possible with this strange and complex-looking code: .container > * { --_s: calc(100%*(sibling-index() - 1)/sibling-count()); --_e: calc(100%*(sibling-index())/sibling-count()); animation: x calc(var(--d)*sibling-count()) infinite linear(0, 0 var(--_s), 1, 0 var(--_e), 0); } It’s a bit scary and unreadable, but I will dissect it with you to understand the logic behind it. The CSS linear() function When working with animations, we can define timing functions (also called easing functions). We can use predefined keyword values — such as linear, ease, ease-in, etc. — or steps() to define discrete animations. There’s also cubic-bezier(). But we have a newer, more powerful function we can add to that list: linear(). From the specification: animation-timing-function: linear creates a linear interpolation between two points — the start and end of the animation — while the linear() function allows us to define as many points as we want and have a “linear” interpolation between two consecutive points. It’s a bit confusing at first glance, but once we start working with it, things becomes clearer. Let’s start with the first value, which is nothing but an equivalent of the linear value. linear(0 0%, 1 100%) We have two points, and each point is defined with two values (the “output” progress and “input” progress). The “output” progress is the animation (i.e., what is defined within the keyframes) and the “input” progress is the time. Let’s consider the following code: .box { animation: move 2s linear(0 0%, 1 100%); } @keyframes move { 0% {translate: 0px } 100% {translate: 80px} } In this case, we want 0 of the animation (translate: 0px) at t=0% (in other words, 0% of 2s, so 0s) and 1 of the animation (translate: 80px) at t=100% (which is 100% of 2s, so 2s). Between these points, we do a linear interpolation. CodePen Embed Fallback Instead of percentages, we can use numbers, which means that the following is also valid: linear(0 0, 1 1) But I recommend you stick to the percentage notation to avoid getting confused with the first value which is a number as well. 
The 0% and 100% are implicit, so we can remove them and simply use the following: linear(0, 1) Let’s add a third point: linear(0, 1, 0) As you can see, I am not defining any “input” progress (the percentage values that represent the time) because they are not mandatory; however, introducing them is the first thing to do to understand what the function is doing. The first value is always at 0% and the last value is always at 100%. linear(0 0%, 1, 0 100%) The value will be 50% for the middle point. When a control point is missing its “input” progress, we take the mid-value between two adjacent points. If you are familiar with gradients, you will notice the same logic applies to color stops. linear(0 0%, 1 50%, 0 100%) Easier to read, right? Can you explain what it does? Take a few minutes to think about it before continuing. Got it? I am sure you did! It breaks down like this: We start with translate: 0px at t=0s (0% of 2s). Then we move to translate: 80px at t=1s (50% of 2s). Then we get back to translate: 0px at t=2s (100% of 2s). CodePen Embed Fallback Most of the timing functions allow us to only move forward, but with linear() we can move in both directions as many times as we want. That’s what makes this function so powerful. With a “simple” keyframes you can have a “complex” animation. I could have used the following keyframes to do the same thing: @keyframes move { 0%, 100% { translate: 0px } 50% { translate: 80px } } However, I won’t be able to update the percentage values on the fly if I want a different animation. There is no way to control keyframes using CSS so I need to define new keyframes each time I need a new animation. But with linear(), I only need one keyframes. In the demo below, all the elements are using the same keyframes and yet have completely different animations! CodePen Embed Fallback Add a delay with linear() Now that we know more about linear(), let’s move to the main trick of our effect. Don’t forget that the idea is to create a sequential animation with a certain number (N) of elements. Each element needs to animate, then “wait” until all the others are done with their animation to start again. That waiting time can be seen as a delay. The intuitive way to do this is the following: @keyframes move { 0%, 50% { translate: 0px } 100% { translate: 80px } } We specify the same value at 0% and 50%; hence nothing will happen between 0% and 50%. We have our delay, but as I said previously, we won’t be able to control those percentages using CSS. Instead, we can express the same thing using linear(): linear(0 0%, 0 50%, 1 100%) The first two control points have the same “output” progress. The first one is at 0% of the time, and the second one at 50% of the time, so nothing will “visually” happen in the first half of the animation. We created a delay without having to update the keyframes! @keyframes move { 0% { translate: 0px } 100% { translate: 80px } } CodePen Embed Fallback Let’s add another point to get back to the initial state: linear(0 0%, 0 50%, 1 75%, 0 100%) Or simply: linear(0, 0 50%, 1, 0) CodePen Embed Fallback Cool, right? We’re able to create a complex animation with a simple set of keyframes. Not only that, but we can easily adjust the configuration by tweaking the linear() function. This is what we will do for each element to get our sequential animation! The full animation Let’s get back to our first animation and use the previous linear() value we did before. We will start with two elements. CodePen Embed Fallback Nothing surprising yet. 
Both elements have the exact same animation, so they animate the same way at the same time. Now, let’s update the linear() function for the first element to have the opposite effect: an animation in the first half, then a delay in the second half. linear(0, 1, 0 50%, 0) This literally inverts the previous value: CodePen Embed Fallback Tada! We have established a sequential animation with two elements! Are you starting to see the idea? The goal is to do the same with any number (N) of elements. Of course, we are not going to assign a different linear() value for each element — we will do it programmatically. First, let’s draw a figure to understand what we did for two elements. When one element is waiting, the other one is animating. We can identify two ranges. Let’s imagine the same with three elements. This time, we need three ranges. Each element animates in one range and waits in two ranges. Do you see the pattern? For N elements, we need N ranges, and the linear() function will have the following syntax: linear(0, 0 S, 1, 0 E, 0) The start and the end are equal to 0, which is the initial state of the animation, then we have an animation between S and E. An element will wait from 0% to S, animate from S to E, then wait again from E to 100%. The animation time equals to 100%/N, which means E - S = 100%/N. The first element starts its animation at the first range (0 * 100%/N), the second element at the second range (1 * 100%/N), the third element at the third range (2 * 100%/N), and so on. S is equal to: S = (i - 1) * 100%/N …where i is the index of the element. Now, you may ask, how do we get the value of N and i? The answer is as simple as using the sibling-count()and sibling-index() functions! Again, these are currently supported in Chromium browsers, but we can expect them to roll out in other browsers down the road. S = calc(100%*(sibling-index() - 1)/sibling-count()) And: E = S + 100%/N E = calc(100%*sibling-index()/sibling-count()) We write all this with some good CSS and we are done! .box { --d: .5s; /* animation duration */ --_s: calc(100%*(sibling-index() - 1)/sibling-count()); --_e: calc(100%*(sibling-index())/sibling-count()); animation: x calc(var(--d)*sibling-count()) infinite linear(0, 0 var(--_s), 1, 0 var(--_e), 0); } @keyframes x { to { background: #F8CA00; scale: .8; } } I used a variable (--d) to control the duration, but it’s not mandatory. I wanted to be able to control the amount of time each element takes to animate. That’s why I multiply it later by N. CodePen Embed Fallback Now all that’s left is to define your animation. Add as many elements as you want, and watch the result. No more complex keyframes and magic values! Note: For unknown reasons (probably a bug) you need to register the variables with @property. More variations We can extend the basic idea to create more variations. For example, instead of having to wait for an element to completely end its animation, the next one can already start its own. CodePen Embed Fallback This time, I am defining N + 1 ranges, and each element animates in two ranges. The first element will animate in the first and second range, while the second element will animate in the second and third range; hence an overlap of both animations in the second range, etc. I will not spend too much time explaining this case because it’s one example among many we create, so I let you dissect the code as a small exercise. And here is another one for you to study as well. 
CodePen Embed Fallback Conclusion The linear() function was mainly introduced to create complex easing such as bounce and elastic, but combined with other modern features, it unlocks a lot of possibilities. Through this article, we got a small overview of its potential. I said “small” because we can go further and create even more complex animations, so stay tuned for more articles to come! Sequential linear() Animation With N Elements originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  13. by: Chris Coyier Tue, 14 Oct 2025 13:52:25 +0000 We’re over 13 years old as a company now. We decide that we’re not a startup anymore (we’re a “small business” with big dreams) but we are still indie. We’ve seen trends come and go. We just do what we do, knowing the tradeoffs, and plan to keep getting better as long as we can. Links Timeline – Chris Coyier 115: Adam Argyle on Cracking the 2025 Web Dev Interview | Front-End Fire Time Jumps 00:05 Are we still an indie startup? 04:32 Remote working at CodePen 19:20 Progressing and advancement in a small business 22:51 Career opportunities in tech 25:39 Startups starting at free 29:17 2.0 for the future
14. by: Chris Coyier Mon, 13 Oct 2025 17:01:15 +0000 Damning opening words from Edwin Heathcote in Why designers abandoned their dreams of changing the world. The situation is, if you wanna make money doing design work, you’re probably going to be making it from some company hurting the world, making both you and them complicit. Kinda dark. But maybe it is a course correction for designers thinking they are the world’s salvation, a swing too far in the other direction. This pairs very nicely with Pavel Samsonov’s UX so bad that it’s illegal, again opening with a banger: big companies’ products are so dominant that users are simply going to use them no matter what. Young designers will be hired to make the products more profitable no matter what, and they will like it, damn it. Using design to make money is, well, often kind of the point. And I personally take no issue with that. I do take issue with using design for intentional harm. I take issue with using the power of design to influence users to make decisions against their own better judgement. It makes me think of the toy catalog that showed up at my house from Amazon recently. It’s early October. Christmas is 3 months away, but the message is clear: get your wallets ready. This design artifact, for children, chockablock with every toy under the sun, exists to set their desire ablaze, to ensure temper tantrums until the temporary soothing that only a parent clicking a Buy Now button gives. It isn’t asking kids to thoughtfully pick out a toy they might want; it says give me them all, I want every last thing. The pages are nicely designed with great photography. A designer might make the argument: let’s set all the pages on white with product cutouts and plenty of white space, so kids can easily circle all the things they want. Let their fingers bleed with capitalism. Making a list isn’t just implied, though: the first page is thicker-weight paper, a literal 15-item wish list designed to be filled out and torn out. More. Morrrreeeee. And just as a little cherry on top, it’s a sticker book too. It begs to travel with you, becoming an accessory to the season. It’s cocaine for children, with the same mandates as the Instagram algorithm has for older kids and adults.
15. by: Saleh Mubashar Mon, 13 Oct 2025 14:31:35 +0000 You’ve probably heard the buzz about CSS Masonry. You might even be current on the ongoing debate about how it should be built, with two big proposals on the table: one from the Chrome team and one from the WebKit team. The two competing proposals are interesting in their own right. Chrome posted about its implementation a while back, and WebKit followed it up with a detailed post stating their position (which evolved out of a third proposal from the Technical Architecture Group). We’ll rehash some of that in this post, but even more interesting to me is that this entire process is an excellent illustration of how the CSS Working Group (CSSWG), browsers, and developers coalesce around standards for CSS features. There are tons of considerations that go into a feature, like technical implementations and backwards compatibility. But it can be a bit political, too. That’s really what I want to do here: look at the CSS Masonry discussions and what they can teach us about the development of new CSS features. What is the CSSWG’s role? What influence do browsers have? What can we learn from the way past features evolved? Masonry Recap A masonry layout is different from, say, Flexbox and Grid, stacking unevenly-sized items along a single track that automatically wraps into multiple rows or columns, depending on the direction. It’s called the “Pinterest layout” for the obvious reason that it’s the hallmark of Pinterest’s feed. Pinterest’s masonry layout We could go deeper here, but talking specifically about CSS Masonry isn’t the point. When Masonry entered CSS Working Group discussions, the first prototype actually came from Firefox back in 2019, based on an early draft that integrated masonry behavior directly into Grid. The Chrome team followed later with a new display: masonry value, treating it as a distinct layout model. They argued that masonry is a different enough layout from Flexbox and Grid to deserve its own display value. Grid’s defaults don’t line up with how masonry works, so why force developers to learn a bunch of extra Grid syntax? Chrome pushed ahead with this idea and prototyped it in Chrome 140: .container { display: masonry; grid-template-columns: repeat(auto-fit, minmax(160px, 1fr)); gap: 10px; } Meanwhile, the WebKit team has proposed that masonry should be a subset of Grid, rather than its own display type. They endorsed a newer direction based on a recommendation by the W3C Technical Architecture Group (TAG), built around a concept called Item Flow that unifies flex-flow and grid-auto-flow into a single set of properties. Instead of writing display: masonry, you’d stick with display: grid and use a new item-flow shorthand to collapse rows or columns into a masonry-style layout: .container { display: grid; grid-template-columns: repeat(auto-fill, minmax(14rem, 1fr)); item-flow: row collapse; gap: 1rem; } The debate here really comes down to mental models and how you think about masonry. WebKit sees it as a natural extension of Grid, not a brand-new system. Their thinking is that developers shouldn’t need to learn an entirely new model when most of it already exists in Grid. With item-flow, you’re not telling the browser “this is a whole new layout system,” you’re more or less adjusting the way elements flow in a particular context. How CSS Features Evolve This sort of horse-trading isn’t new. Both Flexbox and Grid went through years of competing drafts before becoming the specs we use today.
Flexbox, in particular, had a rocky rollout in the early 2010s. Those who were in the trenches at the time likely remember multiple conflicting syntaxes floating around at once. The initial release had gaps, and browsers implemented the features differently, leading to all kinds of things, like proprietary properties, experimental releases, and different naming conventions that made the learning curve rather steep, and even Frankenstein-like usage in some cases to get the most browser support. In other words, neither Flexbox (nor Grid, for that matter) enjoyed a seamless release, but we’ve gotten to a place where browser implementations are interoperable with one another. That’s a big deal for developers like us who often juggle inconsistent support for various features. Heck, Rob O’Leary recently published the rabbit hole he traveled trying to use text-wrap: pretty in his work, and that’s considered “Baseline” support that is “widely available.” But I digress. It’s worth noting that Flexbox faced unique challenges early on, and masonry has benefitted from those lessons learned. I reached out to CSSWG member Tab Atkins-Bittner for a little context since they were heavily involved in editing the Flexbox specification. In other words, Flexbox was sort of a canary in the coal mine as the CSSWG considered what a modern CSS layout syntax should accomplish. This greatly benefited the work put into defining CSS Grid because a lot of the foundation for things like tracks, intrinsic sizing, and proportions had already been tackled. Atkins-Bittner goes on to explain that the Grid specification process also forced the CSSWG to rethink several of Flexbox’s design choices in the process. This also explains why Flexbox underwent several revisions following its initial release. It also highlights another key point: CSS features are always evolving. Early debate and iteration are essential because they reduce the need for big breaking changes. Still, some of the Flexbox mistakes (which do happen) became widely adopted. Browsers had widely implemented their own approaches, and the specification caught up to them while trying to establish a consistent language that helps user agents implement the features and developers use them. All this to say: Masonry is in a much better spot than Flexbox was at its inception. It benefits from the 15+ years that the CSSWG, browsers, and developers contributed to Flexbox and Grid over that time. The discussions are now less about fixing under-specified details and more about high-level design choices. Hence the novel ideas born from Masonry that combine the features of Flexbox and Grid into the new Item Flow proposal. It’s messy. And weird. But it’s how things get done. The CSSWG’s Role Getting to this point requires process. And in CSS, that process runs through the Working Group. The CSS Working Group (CSSWG) runs on consensus: members debate in the open, weigh pros and cons, and push browsers towards common ground. Miriam Suzanne, an invited expert with the CSSWG (and CSS-Tricks alumna), describes the process like this: But consensus only applies to the specifications. Browsers still decide when and how those features are shipped, as Suzanne continues: So, while the CSSWG facilitates discussions around features, it can’t actually stop browsers from shipping those features, let alone dictate how they’re implemented. It’s a consensus-driven system, but consensus is only about publishing a specification.
In practice, momentum can shift if one vendor is the first to ship or prototype a feature. In most cases, though, the specification adoption process results in a stronger proposal overall. By the time features ship, the idea is that they’ve already been thoroughly debated, which in theory, reduces the need for significant revisions later that could lead to breaking changes. Backwards compatibility is always at the forefront of CSSWG discussions. Developer feedback also plays an important role, though there isn’t a single standardized way that it is solicited, collected, or used. For the CSSWG, the csswg-drafts GitHub repo is the primary source of feedback and discussion, while browsers also run their own surveys and gather input through various other channels such as Chrome’s technical discussion groups and Webkit’s mailing lists. The Bigger Picture Browsers are in the business of shaping new features. It’s also in their best interest for a number of reasons. Proposing new ideas gives them a seat at the table. Prototyping new features gets developers excited and helps further refine edge cases. Implementing new features (particularly first) gives them a competitive edge in the consumer market. All that said, prototyping features ahead of consensus is a bit of a tightrope walk. And that’s where Masonry comes back into the bigger picture. Chrome shipped a prototype of the feature that leans heavily into the first proposal for a new display: masonry value. Other browsers have yet to ship competing prototypes, but have openly discussed their positions, as WebKit did in subsequent blog posts. At first glance, that might suggest that Chrome is taking a heavy-handed approach to tip the scales in its favorable direction. But there’s a lot to like about prototyping features because it’s proof in the pudding for real-world uses by allowing developers early access to experiment. Atkins-Bittner explains it nicely: This kind of “soft” commit moves conversations forward while leaving room to change course, if needed, based on real-world use. But there’s obviously a tension here as well. Browsers may be custodians of web standards and features, but they’re still employed by massive companies that are selling a product at the end of the day. It’s easy to get cynical. And political. In theory, though, allowing browsers to voluntarily adopt features gives everyone choice: browsers compete in the market based on what they implement, developers gain new features that push the web further, and everyone gets to choose the browser that best fits their browsing needs. If one company controls access to a huge share of users, however, those choices feel less accessible. Standards often get shaped just as much by market power as by technical merit. Where We’re At At the end of the day, standards get shaped by a mix of politics, technical trade-offs, and developer feedback. Consensus is messy, and it’s rarely about one side “winning.” With masonry, it might look like Google got its way, but in reality the outcome reflects input from both proposals, plus ideas from the wider community. As of this writing: Masonry will be a new display type, but must include the word “grid” in the name. The exact keyword is still being debated. The CSSWG has resolved to proceed with the proposed **item-flow** approach. Grid will be used for layout templates and explicitly placing items in them. Some details, like a possible shorthand syntax and track listing defaults, are still being discussed. 
Further reading This is a big topic, one that goes much deeper and further than we’ve gone here. While working on this article, a few others popped up that are very much worth your time to see the spectrum of ideas and opinions about the CSS standards process: Alex Russell’s post about the standards adoption process in browsers. Rob O’Leary’s article about struggling with text-wrap: pretty, explaining that “Baseline” doesn’t always mean consistent support in practice. David Bushell’s piece about the WHATWG. It isn’t about the CSSWG specifically, but covers similar discussions on browser politics and standards consensus. Masonry: Watching a CSS Feature Evolve originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
