
Everything posted by Blogger
-
413: Still indie after all these years
by: Chris Coyier Tue, 14 Oct 2025 13:52:25 +0000 We’re over 13 years old as a company now. We’ve decided that we’re not a startup anymore (we’re a “small business” with big dreams), but we are still indie. We’ve seen trends come and go. We just do what we do, knowing the tradeoffs, and plan to keep getting better as long as we can.

Links
Timeline – Chris Coyier
115: Adam Argyle on Cracking the 2025 Web Dev Interview | Front-End Fire

Time Jumps
00:05 Are we still an indie startup?
04:32 Remote working at CodePen
19:20 Progressing and advancement in a small business
22:51 Career opportunities in tech
25:39 Startups starting at free
29:17 2.0 for the future
-
Chris’ Corner: Design (and you’re going to like it)
by: Chris Coyier Mon, 13 Oct 2025 17:01:15 +0000 Damning opening words from Edwin Heathcote in Why designers abandoned their dreams of changing the world. The situation is, if you wanna make money doing design work, you’re probably going to be making it from some company hurting the world, making both you and them complicit. Kinda dark. But maybe it is course correction for designers thinking they are the world’s salvation, a swing too far in the other direction. This pairs very nicely with Pavel Samsonov’s UX so bad that it’s illegal, again opening with a banger: Big companies’ products are so dominant that users are simply going to use them no matter what. Young designers will be hired to make the products more profitable no matter what, and they will like it, damn it. Using design to make money is, well, often kind of the point. And I personally take no issue with that. I do take issue with using design for intentional harm. I take issue with using the power of design to influence users to make decisions against their own better judgement. It makes me think of the toy catalog that showed up at my house from Amazon recently. It’s early October. Christmas is 3 months away, but the message is clear: get your wallets ready. This design artifact, for children, chockablock with every toy under the sun, sets their desire ablaze, ensuring temper tantrums until the temporary soothing that only a parent clicking a Buy Now button can give. It isn’t asking kids to thoughtfully pick out a toy they might want; it says give me them all, I want every last thing. The pages are nicely designed with great photography. A designer might make the argument: let’s set all the pages on white with product cutouts and plenty of white space, so kids can easily circle all the things they want. Let their fingers bleed with capitalism.
Making a list isn’t just implied, though: the first page is a thicker-weight paper that is a literal 15-item wish list page designed to be filled out and torn out. More. Morrrreeeee. And just as a little cherry on top, it’s a sticker book too. It begs to travel with you, becoming an accessory to the season. It’s cocaine for children, with the same mandates as the Instagram algorithm has for older kids and adults.
-
Masonry: Watching a CSS Feature Evolve
by: Saleh Mubashar Mon, 13 Oct 2025 14:31:35 +0000 You’ve probably heard the buzz about CSS Masonry. You might even be current on the ongoing debate about how it should be built, with two big proposals on the table: one from the Chrome team and one from the WebKit team. The two competing proposals are interesting in their own right. Chrome posted about its implementation a while back, and WebKit followed it up with a detailed post stating their position (which evolved out of a third proposal from the Technical Architecture Group). We’ll rehash some of that in this post, but even more interesting to me is that this entire process is an excellent illustration of how the CSS Working Group (CSSWG), browsers, and developers coalesce around standards for CSS features. There are tons of considerations that go into a feature, like technical implementations and backwards compatibility. But it can be a bit political, too. That’s really what I want to do here: look at the CSS Masonry discussions and what they can teach us about the development of new CSS features. What is the CSSWG’s role? What influence do browsers have? What can we learn from the way past features evolved?

Masonry Recap

A masonry layout is different from, say, Flexbox and Grid, stacking unevenly sized items along a single track that automatically wraps into multiple rows or columns, depending on the direction. It’s called the “Pinterest layout” for the obvious reason that it’s the hallmark of Pinterest’s feed. We could go deeper here, but talking specifically about CSS Masonry isn’t the point. When Masonry entered CSS Working Group discussions, the first prototype actually came from Firefox back in 2019, based on an early draft that integrated masonry behavior directly into Grid. The Chrome team followed later with a new display: masonry value, treating it as a distinct layout model.
They argued that masonry is a different enough layout from Flexbox and Grid to deserve its own display value. Grid’s defaults don’t line up with how masonry works, so why force developers to learn a bunch of extra Grid syntax? Chrome pushed ahead with this idea and prototyped it in Chrome 140:

.container {
  display: masonry;
  grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));
  gap: 10px;
}

Meanwhile, the WebKit team has proposed that masonry should be a subset of Grid, rather than its own display type. They endorsed a newer direction, based on a recommendation by the W3C Technical Architecture Group (TAG), built around a concept called Item Flow that unifies flex-flow and grid-auto-flow into a single set of properties. Instead of writing display: masonry, you’d stick with display: grid and use a new item-flow shorthand to collapse rows or columns into a masonry-style layout:

.container {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(14rem, 1fr));
  item-flow: row collapse;
  gap: 1rem;
}

The debate here really comes down to mental models and how you think about masonry. WebKit sees it as a natural extension of Grid, not a brand-new system. Their thinking is that developers shouldn’t need to learn an entirely new model when most of it already exists in Grid. With item-flow, you’re not telling the browser “this is a whole new layout system,” you’re more or less adjusting the way elements flow in a particular context.

How CSS Features Evolve

This sort of horse-trading isn’t new. Both Flexbox and Grid went through years of competing drafts before becoming the specs we use today. Flexbox, in particular, had a rocky rollout in the early 2010s. Those who were in the trenches at the time likely remember multiple conflicting syntaxes floating around at once.
The initial release had gaps, and browsers implemented the features differently, leading to all kinds of things, like proprietary properties, experimental releases, and different naming conventions that made the learning curve rather steep, and even Frankenstein-like usage in some cases to get the most browser support. In other words, neither Flexbox (nor Grid, for that matter) enjoyed a seamless release, but we’ve gotten to a place where the browser implementations are interoperable with one another. That’s a big deal for developers like us who often juggle inconsistent support for various features. Heck, Rob O’Leary recently published the rabbit hole he traveled trying to use text-wrap: pretty in his work, and that’s considered “Baseline” support that is “widely available.” But I digress. It’s worth noting that Flexbox faced unique challenges early on, and masonry has benefitted from those lessons learned. I reached out to CSSWG member Tab Atkins-Bittner for a little context, since they were heavily involved in editing the Flexbox specification. In other words, Flexbox was sort of a canary in the coal mine as the CSSWG considered what a modern CSS layout syntax should accomplish. This greatly benefited the work put into defining CSS Grid because a lot of the foundation for things like tracks, intrinsic sizing, and proportions had already been tackled. Atkins-Bittner goes on to explain that the Grid specification process also forced the CSSWG to rethink several of Flexbox’s design choices. This also explains why Flexbox underwent several revisions following its initial release. It also highlights another key point: CSS features are always evolving. Early debate and iteration are essential because they reduce the need for big breaking changes. Still, some of Flexbox’s early mistakes (which do happen) became widely adopted.
Browsers had widely implemented their approaches, and the specification caught up to them while trying to establish a consistent language that helps both user agents and developers implement and use the features, respectively. All this to say: Masonry is in a much better spot than Flexbox was at its inception. It benefits from the 15+ years that the CSSWG, browsers, and developers contributed to Flexbox and Grid over that time. The discussions are now less about fixing under-specified details and more about high-level design choices. Hence the novel ideas born from Masonry that combine the features of Flexbox and Grid into the new Item Flow proposal. It’s messy. And weird. But it’s how things get done.

The CSSWG’s Role

Getting to this point requires process. And in CSS, that process runs through the Working Group. The CSS Working Group (CSSWG) runs on consensus: members debate in the open, weigh pros and cons, and push browsers towards common ground. Miriam Suzanne, an invited expert with the CSSWG (and a CSS-Tricks alum), describes the process like this: But consensus only applies to the specifications. Browsers still decide when and how those features are shipped, as Suzanne continues: So, while the CSSWG facilitates discussions around features, it can’t actually stop browsers from shipping those features, let alone dictate how they’re implemented. It’s a consensus-driven system, but consensus is only about publishing a specification. In practice, momentum can shift if one vendor is the first to ship or prototype a feature. In most cases, though, the specification adoption process results in a stronger proposal overall. By the time features ship, the idea is that they’ve already been thoroughly debated, which, in theory, reduces the need for significant revisions later that could lead to breaking changes. Backwards compatibility is always at the forefront of CSSWG discussions.
Developer feedback also plays an important role, though there isn’t a single standardized way that it is solicited, collected, or used. For the CSSWG, the csswg-drafts GitHub repo is the primary source of feedback and discussion, while browsers also run their own surveys and gather input through various other channels, such as Chrome’s technical discussion groups and WebKit’s mailing lists.

The Bigger Picture

Browsers are in the business of shaping new features. It’s also in their best interest for a number of reasons. Proposing new ideas gives them a seat at the table. Prototyping new features gets developers excited and helps further refine edge cases. Implementing new features (particularly first) gives them a competitive edge in the consumer market. All that said, prototyping features ahead of consensus is a bit of a tightrope walk. And that’s where Masonry comes back into the bigger picture. Chrome shipped a prototype of the feature that leans heavily into the first proposal for a new display: masonry value. Other browsers have yet to ship competing prototypes, but have openly discussed their positions, as WebKit did in subsequent blog posts. At first glance, that might suggest that Chrome is taking a heavy-handed approach to tip the scales in its favored direction. But there’s a lot to like about prototyping features, because it provides real-world proof by allowing developers early access to experiment. Atkins-Bittner explains it nicely: This kind of “soft” commit moves conversations forward while leaving room to change course, if needed, based on real-world use. But there’s obviously a tension here as well. Browsers may be custodians of web standards and features, but they’re still built by massive companies that are selling a product at the end of the day. It’s easy to get cynical. And political.
In theory, though, allowing browsers to voluntarily adopt features gives everyone choice: browsers compete in the market based on what they implement, developers gain new features that push the web further, and everyone gets to choose the browser that best fits their browsing needs. If one company controls access to a huge share of users, however, those choices feel less accessible. Standards often get shaped just as much by market power as by technical merit.

Where We’re At

At the end of the day, standards get shaped by a mix of politics, technical trade-offs, and developer feedback. Consensus is messy, and it’s rarely about one side “winning.” With masonry, it might look like Google got its way, but in reality the outcome reflects input from both proposals, plus ideas from the wider community. As of this writing:

Masonry will be a new display type, but must include the word “grid” in the name. The exact keyword is still being debated.
The CSSWG has resolved to proceed with the proposed item-flow approach.
Grid will be used for layout templates and explicitly placing items in them.
Some details, like a possible shorthand syntax and track listing defaults, are still being discussed.

Further reading

This is a big topic, one that goes much deeper and further than we’ve gone here. While working on this article, a few others popped up that are very much worth your time to see the spectrum of ideas and opinions about the CSS standards process:

Alex Russell’s post about the standards adoption process in browsers.
Rob O’Leary’s article about struggling with text-wrap: pretty, explaining that “Baseline” doesn’t always mean consistent support in practice.
David Bushell’s piece about the WHATWG. It isn’t about the CSSWG specifically, but covers similar discussions on browser politics and standards consensus.

Masonry: Watching a CSS Feature Evolve originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
The Affordable Pironman Alternative Mini PC Case for Raspberry Pi 5
by: Abhishek Prakash Mon, 13 Oct 2025 07:48:52 GMT SunFounder's Pironman cases for Raspberry Pi are a huge hit. This bestselling device converts the naked Raspberry Pi board into a miniature tower PC. The RGB lighting, OLED display, and glass casing make it look cool. Full HDMI ports, NVMe ports, and active-passive cooling options enhance the functionality of the Pi 5. This great gadget is too expensive for some people to buy at $76 for the Pironman and $95 for the dual-NVMe Pironman Max. SunFounder knows it, and that's why they have introduced the Pironman 5 Mini at $45, but have removed the OLED display and full HDMI ports and reduced the number of fans. Dealbreaker? Maybe. Maybe not. But I have come across a new case that has most of the features at a much lower price.

Elecrow's Pitower

Like SunFounder, Elecrow has been offering gadgets and accessories for Raspberry Pi and other embedded devices for years. Their CrowView Note and all-in-one starter kits have been popular among SBC enthusiasts. They have just revealed a new product, a mini PC case for your Raspberry Pi 5 and Jetson Orin Nano. Yes, that doubles the excitement.

Compatible Devices: Raspberry Pi 5 / Jetson Orin Nano
Display: 1.3″ OLED Screen
Material: Aluminum Alloy + Acrylic
Cooling System: 3 × Cooling Fans
Power Control: Integrated Power Button
PCIe Interface (Raspberry Pi Version): PCIe M.2
Supported SSD Sizes: 2230 / 2242 / 2260 / 2280
RTC (Real-Time Clock) Support: Supported (Raspberry Pi Version)
Dimensions: 120 × 120 × 72 mm
Weight: 500 g
Ports: 2 × Full HDMI, 4 × USB, 1 × Ethernet, 1 × Type-C for power
Included Accessories: 1 × Case (Unassembled), 1 × PCBA Board, 3 × Cooling Fans, 1 × Heatsink (for Raspberry Pi), 1 × User Manual

And all this comes at a lower price tag of nearly $40 (more on this later). That sounds tempting, right? Let's see how good this case is. 📋 Elecrow sent me this case for review.
The views expressed are my own.

Features meet affordability

Let's take a look at the appearance of Elecrow's mini PC case. It is slightly bigger than the Pironman cases and has a somewhat boxier look. The OLED display and power button are at the top. The micro SD card slot is at the bottom, and to accommodate it, the case has taller feet. There is nothing in the front of the device except a transparent acrylic sheet. The main look of the case comes from the side, which gives you a broader look at the circuits. It looks magnificent with the RGB lights. The GPIO pins are accessible from here, and they are duly marked.

Front view

There are three RGB fans here. Two in the back throw air out and one at the top sucks air in. This is done to keep the airflow circulating inside the case. The official Raspberry Pi Active Cooler is also added to provide some passive cooling. All the other ports are accessible from the back. In addition to all the usual Raspberry Pi ports, there are two full-size HDMI ports replacing the mini HDMI ports.

Back view

The NVMe board is inside the case, and it is better to insert the SSD while assembling the case. Yes, this is also an assembly kit. 📋 I used the case with a Raspberry Pi 5, and hence this section focuses on the Pi 5-specific features.

Assembling the case

Mini PC case box

Since Elecrow's tower case is clearly inspired by SunFounder's Pironman case, they have also kept the DIY angle here. This simply means that you have to assemble the kit yourself. It is while assembling that you decide whether you want to use it for the Raspberry Pi 5 or the Jetson Orin Nano. Assembly instructions differ slightly between the devices. There is an official assembly video, and you should surely watch it to get a feel for how much effort is required to build this case. In my case, I was not aware of the assembly video, as I was sent this device at the time the product was announced.
I used the included paper manual, and it took me nearly two hours to complete the assembly. If I had had the help of the video, and if I had not encountered a couple of issues, this could have been done within an hour.

Assembling the case

Did I say issues? Yes, a few. First, the paper manual didn't specifically mention connecting one of the FPC cables. The video mentions it, thankfully. One major issue was putting in the power button. It seems to me that they sized the hole according to the power button but applied the black coating later on, and this reduced the size of the hole the power button passes through. I don't see the official assembly video mentioning this issue, and it could create confusion. The workaround is to simply use an object to remove the coating. I used scissors to scrape it. Another issue was putting the tiny screws in even tinier spaces at times. The situation worsened for me as the paper manual suggested joining the main board and all the adapter boards in the initial phases. This made putting the screws in even harder. As the video shows, this could be done in steps. My magnetic screwdriver helped a great deal in placing the tiny screws in narrow places, and I think Elecrow should have provided a magnetic screwdriver instead of a regular one.

User experience

To make full use of all the cool features, i.e., OLED display, RGB fans, etc., you need to install a few Python scripts first.

Scripts to add support for power button actions and OLED screen

And here's the thing that I have noticed with most Elecrow products: they are uncertain about the appropriate location for their documentation. The paper manual that comes with the package has a QR code that takes you to a Google Drive that contains various scripts and a readme file. But there is also an online wiki page, and I think this page should be considered and distributed as the official documentation.
After running 12 or so commands, including a few that grant 777 permissions, the OLED screen started showing system stats such as CPU temperature and usage, RAM usage, disk stats, and date and time. It would have been nice if it displayed the IP address too.

Milliseconds of light sync issue, which is present in SunFounder cases too

Like Pironman, Elecrow's RGB fan lighting is also out of sync by a few milliseconds. Not an issue unless you have acute OCD. The main issue is that it has three fans, and the fans start running as soon as the device is turned on. For such a tiny device, three continuously running fans generate considerable noise. The problem is that there is no user-facing way of controlling the fans without modifying the scripts themselves. Another issue is that if you turn off the Pi from the operating system, i.e., use the shutdown command or the graphical option of Raspberry Pi OS, the RGB lights and fans stay on. Even the OLED screen keeps displaying whatever message it had when the system was shut down.

Top of the case has the OLED display and power button

If you shut down the device by long-pressing the power button, everything is turned off normally. Staying powered after an OS shutdown should not be the intended behavior. I have notified Elecrow about it, and hopefully their developers will work on fixing their script. Barring these hiccups, there are plenty of positives. There is an RTC battery to give you the correct time between long shutdowns, although it works only with Raspberry Pi OS at the moment. The device stays super cool, thanks to three fans maintaining a good airflow and the active cooler adding to the overall cooling. The clear display with RGB lights surely gives it an oomph factor.
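Incidentally, stats like the CPU temperature shown on the OLED aren't magic: on Raspberry Pi OS they can be read straight from sysfs. Here is a minimal, hypothetical sketch (not Elecrow's actual script) of how such a reading could be taken and parsed:

```python
def parse_cpu_temp(raw: str) -> float:
    """Convert the millidegree string found in
    /sys/class/thermal/thermal_zone0/temp into degrees Celsius."""
    return int(raw.strip()) / 1000.0

def read_cpu_temp(path: str = "/sys/class/thermal/thermal_zone0/temp") -> float:
    # On a Raspberry Pi, this file holds the SoC temperature in
    # thousandths of a degree Celsius, e.g. "48200" meaning 48.2 °C
    with open(path) as f:
        return parse_cpu_temp(f.read())

if __name__ == "__main__":
    # Only works on a device that exposes the thermal zone above
    print(f"CPU temp: {read_cpu_temp():.1f}°C")
```

A display script would simply poll a function like this in a loop and push the value to the OLED driver.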
My photography skills don't do it justice

Conclusion

There is room for improvement here, and I hope Elecrow updates their scripts to address these issues in the future:

Proper handling of lights/fans shutdown instead of relying on the power button.
Options to configure the RGB lights and control the fans.
IP address in the OLED display (optional).

Other than that, I have no complaints. The case is visually appealing, the device remains cool, and the price is reasonable in comparison to the popular Pironman cases. Coming to the pricing: the device costs $32 for the Jetson Nano version and $40 for the Raspberry Pi version. I am guessing this is because the Pi version includes the additional active cooler. Do note that the pricing displayed on the website DOES NOT include shipping charges and customs duty. Those will be additional. Alternatively, at least for our readers in the United States of America, the device is available on Amazon (partner link), but at a price tag of $59 at the time of writing this review. You don't have to worry about extra shipping or customs duty fees if you order from Amazon.

Get it from Amazon US (for $59)
Get it from the official website (shipping/customs extra)
-
I Switched From Ollama And LM Studio To llama.cpp And Absolutely Loving It
by: Bhuwan Mishra Sat, 11 Oct 2025 02:26:37 GMT My interest in running AI models locally started as a side project: part curiosity and part irritation with cloud limits. There’s something satisfying about running everything on your own box. No API quotas, no censorship, no signups. That’s what pulled me toward local inference.

My struggle with running local AI models

My setup, being an AMD GPU on Windows, turned out to be the worst combination for most local AI stacks. The majority of AI stacks assume NVIDIA + CUDA, and if you don’t have that, you’re basically on your own. ROCm, AMD’s so-called CUDA alternative, doesn’t even work on Windows, and even on Linux, it’s not straightforward. You end up stuck with CPU-only inference or inconsistent OpenCL backends that feel a decade behind.

Why not Ollama and LM Studio?

I started with the usual tools, i.e., Ollama and LM Studio. Both deserve credit for making local AI look plug-and-play. I tried LM Studio first. But soon after, I discovered how LM Studio hijacks my taskbar. I frequently jump from one application window to another using the mouse, and it was getting annoying for me. Another thing that annoyed me is its installer size of 528 MB. I’m a big advocate for keeping things minimal yet functional. I’m a big admirer of a functional text editor that fits under 1 MB (Dred), a reactive JavaScript library and React alternative that fits under 1 KB (Van JS), and a game engine that fits under 100 MB (Godot). Then I tried Ollama. Being a CLI user (even on Windows), I was impressed with Ollama. I don’t need to spin up an Electron JS application (LM Studio) to run an AI model locally. With just two commands, you can run any AI model locally with Ollama:

ollama pull tinyllama
ollama run tinyllama

But once I started testing different AI models, I needed to reclaim disk space afterwards. My initial approach was to delete the model manually from File Explorer. I was a bit paranoid!
But soon, I discovered these Ollama commands:

ollama rm tinyllama # remove the model
ollama ls # list all models

Upon checking how lightweight Ollama is, it comes to nearly 4.6 GB on my Windows system, although you can delete unnecessary files to make it slimmer (it comes bundled with all the backend libraries, like rocm, cuda_v13, and cuda_v12). After trying Ollama, I was curious: does LM Studio even provide a CLI? Upon researching, I came to know that, yeah, it does offer a command line interface. I investigated further and found out that LM Studio uses Llama.cpp under the hood. With these two commands, I can run LM Studio via CLI and chat with an AI model while staying in the terminal:

lms load <model name> # load the model
lms chat # start the interactive chat

I was generally satisfied with the LM Studio CLI at this point. Also, I noticed it came with Vulkan support out of the box. Now, I had been looking to add Vulkan support to Ollama. I discovered an approach to compile Ollama from source code and enable Vulkan support manually. That’s a real hassle! I had just three additional complaints at this point. Every time I needed to use the LM Studio CLI (lms), it would take some time to wake up its Windows service. The lms CLI is not feature-rich. It does not even provide a CLI way to delete a model. And the last one was how it takes two steps to load the model first and then chat. After the chat is over, you need to manually unload the model. This mental model doesn’t make sense to me. That’s where I started looking for something more open, something that actually respected the hardware I had. That’s when I stumbled onto Llama.cpp, with its Vulkan backend and refreshingly simple approach.

Setting up Llama.cpp

🚧 The tutorial was performed on Windows because that's the system I am using currently. I understand that most folks here on It's FOSS are Linux users and I am committing blasphemy of sorts, but I just wanted to share the knowledge and experience I gained with my local AI setup.
You could actually try a similar setup on Linux, too. Just use the Linux-equivalent paths and commands.

Step 1: Download from GitHub

Head over to its GitHub releases page and download the latest release for your platform. 📋 If you’ll be using Vulkan support, remember to download assets suffixed with vulkan-x64.zip, like llama-b6710-bin-ubuntu-vulkan-x64.zip or llama-b6710-bin-win-vulkan-x64.zip.

Step 2: Extract the zip file

Extract the downloaded zip file and, optionally, move the directory to where you usually keep your binaries, like /usr/local/bin on macOS and Linux. On Windows 10, I usually keep it under %USERPROFILE%\.local\bin.

Step 3: Add the Llama.cpp directory to the PATH environment variable

Now, you need to add its directory location to the PATH environment variable. On Linux and macOS (replace path-to-llama-cpp-directory with your exact directory location):

export PATH=$PATH:"<path-to-llama-cpp-directory>"

On Windows 10 and Windows 11:

setx PATH "%PATH%;<path-to-llama-cpp-directory>"

Now, Llama.cpp is ready to use.

llama.cpp: The best local AI stack for me

Just grab a .gguf file, point to it, and run. It reminded me why I love tinkering on Linux in the first place: fewer black boxes, more freedom to make things work your way. With just one command, you can start a chat session with Llama.cpp:

llama-cli.exe -m e:\models\Qwen3-8B-Q4_K_M.gguf --interactive

If you carefully read its verbose output, it clearly shows signs of the GPU being utilized. With llama-server, you can even download AI models from Hugging Face, like:

llama-server -hf itlwas/Phi-4-mini-instruct-Q4_K_M-GGUF:Q4_K_M

The -hf flag tells it to download the model from the Hugging Face repository. You even get a web UI with Llama.cpp. For example, run the model with this command:

llama-server -m e:\models\Qwen3-8B-Q4_K_M.gguf --port 8080 --host 127.0.0.1

This starts a web UI on http://127.0.0.1:8080, along with the ability to send an API request from another application to Llama.
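As a quick sketch of what “another application” could look like, here’s the same kind of request made from Python with nothing but the standard library. The host, port, and /completion endpoint follow the llama-server example above; the prompt and parameter values are just placeholders:

```python
import json
import urllib.request

def build_completion_request(prompt: str, temperature: float = 0.7,
                             max_tokens: int = 128) -> bytes:
    """Build the JSON body for llama-server's /completion endpoint."""
    body = {
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return json.dumps(body).encode("utf-8")

def complete(prompt: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """POST a prompt to a running llama-server and return the generated text."""
    req = urllib.request.Request(
        f"{base_url}/completion",
        data=build_completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # llama-server returns the generated text in the "content" field
        return json.loads(resp.read())["content"]

if __name__ == "__main__":
    # Requires a llama-server instance running locally, as started above
    print(complete("Explain the difference between OpenCL and SYCL in short."))
```

The same payload works from curl, a shell script, or any HTTP client, which is exactly the appeal of running the server yourself.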
Let’s send an API request via curl:

curl http://127.0.0.1:8080/completion -H "Content-Type: application/json" -d "{\"prompt\":\"Explain the difference between OpenCL and SYCL in short.\",\"temperature\":0.7,\"max_tokens\":128}"

temperature controls the creativity of the model’s output.
max_tokens controls whether the output will be short and concise or a paragraph-length explanation.

llama.cpp for the win

What am I losing by using llama.cpp? Nothing. Like Ollama, I get a feature-rich CLI, plus Vulkan support. It all comes in under 90 MB on my Windows 10 system. Now, I don’t see the point of using Ollama or LM Studio. I can directly download any model with llama-server, run the model directly with llama-cli, and even interact with its web UI and API requests. I’m hoping to do some benchmarking on how performant AI inference on Vulkan is as compared to pure CPU and SYCL implementations in some future post. Until then, keep exploring AI tools and the ecosystem to make your life easier. Use AI to your advantage rather than getting into endless debates over questions like: will AI take our jobs?
-
We Completely Missed width/height: stretch
by: Daniel Schwarz Fri, 10 Oct 2025 14:03:52 +0000 The stretch keyword, which you can use with width and height (as well as min-width, max-width, min-height, and max-height, of course), shipped in Chromium web browsers back in June 2025. But the value is actually a unification of the non-standard -webkit-fill-available and -moz-available values, the latter of which has been available to use in Firefox since 2008. The issue was that, before the @supports at-rule, there was no nice way to implement the right value for the right web browser, and I suppose we just forgot about it after that, until, whoops, one day I see Dave Rupert casually put it out there on Bluesky a month ago. Layout pro Miriam Suzanne recorded an explainer shortly thereafter. It’s worth giving this value a closer look.

What does stretch do?

The quick answer is that stretch does the same thing as declaring 100%, but ignores padding when looking at the available space. In short, if you’ve ever wanted 100% to actually mean 100% (when using padding), stretch is what you’re looking for:

div {
  padding: 3rem 50vw 3rem 1rem;
  width: 100%; /* 100% + 50vw + 1rem, causing overflow */
  width: stretch; /* 100% including padding, no overflow */
}

The more technical answer is that the stretch value sets the width or height of the element’s margin box (rather than the box determined by box-sizing) to match the width/height of its containing block. Note: It’s never a bad idea to revisit the CSS Box Model for a refresher on different box sizings. And on that note — yes — we can achieve the same result by declaring box-sizing: border-box, something that many of us do as a CSS reset, in fact:

*, ::before, ::after {
  box-sizing: border-box;
}

I suppose that it’s because of this solution that we forgot all about the non-standard values and didn’t pay any attention to stretch when it shipped, but I actually rather like stretch and don’t touch box-sizing at all now.
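To see why width: 100% plus padding overflows while stretch doesn’t, it helps to do the box-model arithmetic by hand. A small sketch, with made-up pixel values purely for illustration:

```python
def content_box_rendered_width(declared_width: float, padding: float) -> float:
    """With box-sizing: content-box (the default) and width: 100%,
    the declared width sizes the content box, so padding is added on top."""
    return declared_width + 2 * padding

def stretch_rendered_width(container_width: float) -> float:
    """width: stretch sizes the margin box to match the containing block,
    so padding is absorbed rather than added."""
    return container_width

container = 400  # hypothetical containing block width, in px
padding = 16     # padding on each side, in px

# width: 100% resolves the content box to 400px, but padding pushes
# the rendered box to 432px: 32px of overflow
overflowing = content_box_rendered_width(container, padding)

# width: stretch keeps the whole box at the container's 400px,
# with the content box shrinking to 368px instead
fitting = stretch_rendered_width(container)
```

Same declared numbers, but only the content-box case spills past its container, which is exactly the overflow the CSS comment above points out.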
Yay stretch, nay box-sizing

There isn’t an especially compelling reason to switch to stretch, but there are several small ones. Firstly, the universal selector (*) doesn’t apply to pseudo-elements, which is why the CSS reset typically includes ::before and ::after, and not only are there way more pseudo-elements than we might think, but the rise in declarative HTML components means that we’ll be seeing more of them. Do you really want to maintain something like the following?

*, ::after, ::backdrop, ::before, ::column, ::checkmark, ::cue (and ::cue()), ::details-content, ::file-selector-button, ::first-letter, ::first-line, ::grammar-error, ::highlight(), ::marker, ::part(), ::picker(), ::picker-icon, ::placeholder, ::scroll-button(), ::scroll-marker, ::scroll-marker-group, ::selection, ::slotted(), ::spelling-error, ::target-text, ::view-transition, ::view-transition-image-pair(), ::view-transition-group(), ::view-transition-new(), ::view-transition-old() {
  box-sizing: border-box;
}

Okay, I’m being dramatic. Or maybe I’m not? I don’t know. I’ve actually used quite a few of these and having to maintain a list like this sounds dreadful, although I’ve certainly seen crazier CSS resets. Besides, you might want 100% to exclude padding, and if you’re a fussy coder like me you won’t enjoy un-resetting CSS resets.

Animating to and from stretch

Opinions aside, there’s one thing that box-sizing certainly isn’t and that’s animatable.
If you didn’t catch it the first time, we can indeed transition to and from 100% and stretch: CodePen Embed Fallback Because stretch is a keyword though, you’ll need to interpolate its size, and you can only do that by declaring interpolate-size: allow-keywords (on the :root if you want to activate interpolation globally):

:root {
  /* Activate interpolation */
  interpolate-size: allow-keywords;
}

div {
  width: 100%;
  transition: 300ms;

  &:hover {
    width: stretch;
  }
}

The calc-size() function wouldn’t be useful here due to the limited web browser support of stretch and the fact that calc-size() doesn’t support its non-standard alternatives. In the future though, you’ll be able to use width: calc-size(stretch, size) in the example above to interpolate just that specific width.

Web browser support

Web browser support is limited to Chromium browsers for now:

Opera 122+
Chrome and Edge 138+ (140+ on Android)

Luckily though, because we have those non-standard values, we can use the @supports at-rule to implement the right value for the right browser. The best way to do that (and strip away the @supports logic later) is to save the right value as a custom property:

:root {
  /* Firefox */
  @supports (width: -moz-available) {
    --stretch: -moz-available;
  }

  /* Safari */
  @supports (width: -webkit-fill-available) {
    --stretch: -webkit-fill-available;
  }

  /* Chromium */
  @supports (width: stretch) {
    --stretch: stretch;
  }
}

div {
  width: var(--stretch);
}

Then later, once stretch is widely supported, switch to:

div {
  width: stretch;
}

In a nutshell

While this might not exactly win Feature of the Year awards (I haven’t heard a whisper about it), quality-of-life improvements like this are some of my favorite features. If you’d rather use box-sizing: border-box, that’s totally fine — it works really well. Either way, having more ways to write and organize code is never a bad thing, especially if certain ways don’t align with your mental model.
Plus, using a brand new feature in production is just too tempting to resist. Irrational, but tempting and satisfying! We Completely Missed width/height: stretch originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
LHB Linux Digest #25.30: New Systemd Automation Course, LoggiFly, Docker Storage and More
by: Abhishek Prakash Fri, 10 Oct 2025 16:36:22 +0530 Our latest course, Advanced Automation With Systemd, is available now. Believe it or not, systemd is the future of automation on Linux. Its automation framework lets you precisely schedule tasks, create complex, dependent workflows, and sandbox risky jobs for security. You can even create containers with systemd. Advanced Automation with systemd: Take Your Linux Automation Beyond Cron (Linux Handbook, Umair Khurshid). The idea is to focus on small, niche topics and provide you with streamlined learning. Next, we are working on adding videos to the Docker course (I think I already told you about that), a micro course 'Linux Networking at Scale', and a tutorial series on building an open source product from scratch and publishing it to CNCF-level standards. I have not forgotten core Linux stuff. There are additional series and microcourse ideas around them, too. Stay tuned 😄 This post is for subscribers only.
-
412: 2.0 Embedded Pens
by: Chris Coyier Thu, 09 Oct 2025 15:45:43 +0000 Or just “Embeds”, as we more frequently refer to them. Stephen and Chris talk about the fairly meaty project of re-writing our Embeds for a CodePen 2.0 world. No longer can we assume Pens are just one HTML, CSS, and JavaScript “file”, so they needed a bit of a redesign, but doing as little as possible so that existing Embed Themes still work. This was plenty tricky as it was a re-write from Rails to Next.js, with everything needing to be Server-Side Rendered and as lightweight as possible (thanks, urql!).

Time Jumps
00:06 Welcome back to CodePen Land
00:35 What’s new about Pens in CodePen 2.0
05:20 Designing with custom themes in mind
10:40 What the editor looks like in the 2.0 Editor
16:09 Converting old Pens to new Pens
17:20 Debating using Apollo in embeds
-
FOSS Weekly #25.41: Windows 11 Fiasco, Ubuntu 25.10 Releasing, Joplin Tips, NeoVim Journals and More Linux Stuff
by: Abhishek Prakash Thu, 09 Oct 2025 04:35:13 GMT Microsoft is all set to kill existing methods to set up a local account on fresh Windows 11 installs. I am not really surprised. This is Microsoft being Microsoft. Microsoft Kills Windows 11 Local Account Setup Just as Windows 10 Reaches End of Life: Local account workarounds removed just before Windows 10 goes dark (It’s FOSS News, Sourav Rudra). And this comes just days before Windows 10 support is scheduled to end. That is a pivotal moment for us desktop Linux users. I have seen an influx of people migrating to Linux when Windows XP and 7 support ended. Some of those went back to Windows with newer systems, whereas some became lifelong Linux users. We are reorganizing and also creating new guides to make the Windows 10 to Linux migration smooth for new users. Please share your suggestions on what difficulties a new user may face when they switch to Linux and what kind of questions they might have about switching. Let's work toward a broader Linux userbase 💪

💬 Let's see what you get in this edition:

A new openSUSE Leap release.
Codes of Conduct being called a disaster.
Linus being unhappy with some Rust code.
And other Linux news, tips, and, of course, memes!

📰 Linux and Open Source News

Linus Torvalds recently called out Rust format checking.
openSUSE Leap 16 is here with major modernization work.
The FSF has turned 40 and has launched the LibrePhone initiative.
Amazon launches the 'Linux-based' Vega OS for its Fire TV devices.
Wikidata has launched an open source vector database for use with AI.

By the way, Ubuntu 25.10 will be releasing today. Do check out the new features it is getting. Ubuntu 25.10: Release Date and New Features in Questing Quokka. Take a look at the new features and changes you’ll see in the upcoming Ubuntu 25.10 release (It’s FOSS News, Sourav Rudra).

🧠 What We’re Thinking About

Open Source legend Eric S. Raymond says Codes of Conduct are a disaster.
The Man Who Started Open Source Initiative Advocates for Abolishing Codes of Conduct. Between Anarchy and Bureaucracy: The Code of Conduct Debate Ignited by Eric Raymond (It’s FOSS News, Sourav Rudra).

You can balance cost and effort if you go the FOSS way as a creative. Beyond Free: The Value Proposition of Open Source for Creatives. FOSS is free as in cost, but not free as in effort. The loss of convenience is real, especially at the start. But for creatives who are willing to invest, the long-term rewards—flexibility, control, and a workflow built to last—are more than worth the price (It’s FOSS News, Theena Kumaragurunathan).

🧮 Linux Tips, Tutorials, and Learnings

Obsidian + Git = simple version control for ideas.
Learn the pros and cons of using Btrfs on a Linux system.
Skip the syntax and focus on writing with WYSIWYG Markdown editors for Linux.

Speaking of Obsidian and Markdown editors, the popular open source notes software Joplin can be made more effective with these tips. Mastering Joplin Notes: Tips and Tweaks. Joplin is an awesome open source note taking application. Here’s how you can make the best of it (It’s FOSS, Sreenath).

👷 AI, Homelab and Hardware Corner

IBM has launched Granite 4.0, their hybrid AI model that beats rivals twice its size. IBM Unveils Granite 4.0 Hybrid Model That Competes with Rivals Twice Its Size. These models sure pack a punch (It’s FOSS News, Sourav Rudra).

✨ Project Highlights

telekasten.nvim is a Neovim Lua plugin that lets you manage a markdown-based zettelkasten/wiki + journal inside Neovim, based on telescope.nvim (GitHub: nvim-telekasten/telekasten.nvim).

📽️ Videos I Am Creating for You

I don't usually do rant videos but this is a first.
An argument against 'sudo apt update' and 'sudo apt upgrade'. Is it time to unify these two into a single command? Please take this opinion video lightly even if you disagree (and you have every right to disagree and express your opinion). Subscribe to It's FOSS YouTube Channel.

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community. Join It's FOSS Plus

💡 Quick Handy Tip

In the GNOME Files app (Nautilus), you can left-click and drag to select multiple items. To add more items to your selection, hold the Ctrl key while dragging; this lets you include additional files lower in the list without losing your previous selection.

🎋 Fun in the FOSSverse

Take this personality quiz to find out what kind of terminal user you are. What Type of Terminal User Are You? [Personality Quiz]. Find out which terminal persona you are because your Linux habits say more about you than your horoscope ever could (It's FOSS, Abhishek Prakash).

🤣 Meme of the Week: Linux, the savior of old hardware and those wronged by Microsoft and Apple.

🗓️ Tech Trivia: On October 6, 1942, Chester Carlson patented electrophotography, a way to make dry copies of text and images on paper without using ink or chemicals. A few years later, the Haloid Company licensed his patent, renamed the process xerography, and eventually became Xerox, turning document copying into a global industry.

🧑🤝🧑 From the Community: FOSSers are talking about the planned Android sideloading policy change from Google. Got any insights to add?
About Android Sideloading Apps Policy Changes: I’ve been reading and seeing videos about some Google policy changes that would affect side-loading of apps on Android in the next few years. Doesn’t sound like it’s going to be a positive change for developers or Free and Open Source projects like F-Droid. I’m wondering what others think of the situation and if they’ve come across any interesting work-arounds to keep side-loaded apps on their phones (It's FOSS Community, Laura_Michaels).

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your News feed. Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
-
The thing about contrast-color
by: Geoff Graham Wed, 08 Oct 2025 14:52:58 +0000 One of our favorites, Andy Clarke, on the one thing keeping the CSS contrast-color() function from true glory: Word. White and black are two very safe colors to create contrast with another color value. But while solid white or black offers the most contrast with another color, it may not offer the best contrast overall. This was true when I added a dark color scheme to my personal website. The contrast between the background color, a dark blue (hsl(238.2 53.1% 12.5%)), and solid white (#fff) was too jarring for me. To tone that down, I’d want something a little less opaque than that, say hsl(100 100% 100% / .8), which is white at 80% opacity. Can’t do that with contrast-color(), though. That’s why I reach for light-dark() instead:

body {
  color: light-dark(hsl(238.2 53.1% 12.5%), hsl(100 100% 100% / .8));
}

Will contrast-color() support more than a black/white duo in the future? The spec says yes: I’m sure it’s one of those things that’s easier said than done, as the “right” amount of contrast is more nuanced than simply saying it’s a ratio of 4.5:1. There are user preferences to take into account, too. And then it gets into the weeds of work being done on WCAG 3.0, which Danny does a nice job summarizing in a recent article detailing the shortcomings of contrast-color(). The thing about contrast-color originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
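For readers who haven’t tried the function being discussed, the basic pattern looks something like this — a minimal sketch, where the .button class and --brand custom property are made-up names, and browser support for contrast-color() is still limited at the time of writing:

```css
.button {
  --brand: hsl(238.2 53.1% 12.5%); /* the dark blue from the article */
  background-color: var(--brand);
  /* Resolves to either white or black, whichever contrasts more with
     --brand. There is no in-between value like an 80%-opacity white,
     which is the limitation the article is pointing out. */
  color: contrast-color(var(--brand));
}
```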
-
Getting Creative With shape-outside
by: Andy Clarke Mon, 06 Oct 2025 15:45:40 +0000 Last time, I asked, “Why do so many long-form articles feel visually flat?” I explained that: Then, I touched on the expressive possibilities of CSS Shapes and how, by using shape-outside, you can wrap text around an image’s alpha channel to add energy to a design and keep it feeling lively. There are so many creative opportunities for using shape-outside that I’m surprised I see it used so rarely. So, how can you use it to add personality to a design? Here’s how I do it. Patty Meltt is an up-and-coming country music sensation. My brief: Patty Meltt is an up-and-coming country music sensation, and she needed a website to launch her new album and tour. She wanted it to be distinctive-looking and memorable, so she called Stuff & Nonsense. Patty’s not real, but the challenges of designing and developing sites like hers are. Most shape-outside guides start with circles and polygons. That’s useful, but it answers only the how. Designers need the why — otherwise it’s just another CSS property. Whatever shape its subject takes, every image sits inside a box. By default, text flows above or below that box. If I float an image left or right, the text wraps around the rectangle, regardless of what’s inside. That’s the limitation shape-outside overcomes. shape-outside lets you break free from those boxes by enabling layouts that can respond to the contours of an image. That shift from images in boxes to letting the image content define the composition is what makes using shape-outside so interesting. Solid blocks of text around straight-edged images can feel static. But text that bends around a guitar or curves around a portrait creates movement, which can make a story more compelling and engaging. 
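For context, the circles-and-polygons starting point those guides cover is only a couple of declarations — a minimal sketch, with .avatar as a made-up class name:

```css
.avatar {
  float: left;                /* shape-outside only applies to floats */
  width: 200px;
  height: 200px;
  border-radius: 50%;         /* clips the visible image to a circle */
  shape-outside: circle(50%); /* wraps the text along that same circle */
  shape-margin: 1rem;         /* breathing room between text and shape */
}
```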
CSS shape-outside enables text to flow around any custom shape, including an image’s alpha channel (i.e., the transparent areas):

img {
  float: left;
  width: 300px;
  shape-outside: url('patty.webp');
  shape-image-threshold: .5;
  shape-margin: 1rem;
}

First, a quick recap. For text to flow around elements, they need to float either left or right and have their width defined. The shape-outside URL selects an image with an alpha channel, such as a PNG or WebP. The shape-image-threshold property sets the alpha channel threshold for creating a shape. Finally, there’s the shape-margin property which — believe it or not — creates a margin around the shape. Interactive examples from this article are available in my lab.

Multiple image shapes

When I’m adding images to a long-form article, I ask myself, “How can they help shape someone’s experience?” Flowing text around images can slow people down a little, making their experience more immersive. Visually, it brings text and image into a closer relationship, making them feel part of a shared composition rather than isolated elements. Columns without shape-outside applied to them. Patty’s life story — from singing in honky-tonks to headlining stadiums — contains two sections: one about her, the other about her music. I added a tall vertical image of Patty to her biography and two smaller album covers to the music column:

<section id="patty">
  <div>
    <img src="patty.webp" alt="">
    [...]
  </div>
  <div>
    <img src="album-1.webp" alt="">
    [...]
    <img src="album-2.webp" alt="">
    [...]
  </div>
</section>

A simple grid then creates the two columns:

#patty {
  display: grid;
  grid-template-columns: 2fr 1fr;
  gap: 5rem;
}

I like to make my designs as flexible as I can, so instead of specifying image widths and margins in static pixels, I opted for percentages so their actual size is relative to whatever the size of the container happens to be:

#patty > *:nth-child(1) img {
  float: left;
  width: 50%;
  shape-outside: url("patty.webp");
  shape-margin: 2%;
}

#patty > *:nth-child(2) img:nth-of-type(1) {
  float: left;
  width: 45%;
  shape-outside: url("album-1.webp");
  shape-margin: 2%;
}

#patty > *:nth-child(2) img:nth-of-type(2) {
  float: right;
  width: 45%;
  shape-outside: url("album-2.webp");
  shape-margin: 2%;
}

Columns with shape-outside applied to them. See this example in my lab. Text now flows around Patty’s tall image without clipping paths or polygons — just the natural silhouette of her image shaping the text. Building rotations into images: when an image is rotated using a CSS transform, ideally, browsers would reflow text around its rotated alpha channel. Sadly, they don’t, so it’s often necessary to build that rotation into the image.

shape-outside with a faux-centred image

For text to flow around elements, they need to be floated either to the left or right. Placing an image in the centre of the text would make Patty’s biography design more striking. But there’s no center value for floats, so how might this be possible? Patty’s image set between two text columns. See this example in my lab. Patty’s bio content is split across two symmetrical columns:

#dolly {
  display: grid;
  grid-template-columns: 1fr 1fr;
}

To create the illusion of text flowing around both sides of her image, I first split it into two parts: one for the left and the other for the right, both of which are half, or 50%, of the original width. Splitting the image into two pieces.
Then I placed one image in the left column, the other in the right:

<section id="dolly">
  <div>
    <img src="patty-left.webp" alt="">
    [...]
  </div>
  <div>
    <img src="patty-right.webp" alt="">
    [...]
  </div>
</section>

To give the illusion that text flows around both sides of a single image, I floated the left column’s half to the right:

#dolly > *:nth-child(1) img {
  float: right;
  width: 40%;
  shape-outside: url("patty-left.webp");
  shape-margin: 2%;
}

…and the right column’s half to the left, so that both halves of Patty’s image combine right in the middle:

#dolly > *:nth-child(2) img {
  float: left;
  width: 40%;
  shape-outside: url("patty-right.webp");
  shape-margin: 2%;
}

Faux-centred image. See this example in my lab.

Faux background image

So far, my designs for Patty’s biography have included a cut-out portrait with a clearly defined alpha channel. But, I often need to make a design that feels looser and more natural. Faux background image. See this example in my lab. Ordinarily, I would place a picture as a background-image, but for this design, I wanted the content to flow loosely around Patty and her guitar. Large featured image. So, I inserted Patty’s picture as an inline image, floated it, and set its width to 100%:

<section id="kenny">
  <img src="patty.webp" alt="">
  [...]
</section>

#kenny > img {
  float: left;
  width: 100%;
  max-width: 100%;
}

There are two methods I might use to flow text around Patty and her guitar. First, I might edit the image, removing non-essential parts to create a soft-edged alpha channel. Then, I could use the shape-image-threshold property to control which parts of the alpha channel form the contours for text wrapping (the threshold must be a value between 0 and 1):

#kenny > img {
  shape-outside: url("patty.webp");
  shape-image-threshold: .2;
}

Edited image with a soft-edged alpha channel. However, this method is destructive, since much of the texture behind Patty is removed.
Instead, I created a polygon clip-path and applied that as the shape-outside, around which text flows while preserving all the detail of my original image:

#kenny > img {
  float: left;
  width: 100%;
  max-width: 100%;
  shape-outside: polygon(…);
  shape-margin: 20px;
}

Original image with a non-destructive clip-path. I have little time for writing polygon path points by hand, so I rely on Bennett Feely’s CSS clip-path maker. I add my image URL, draw a custom polygon shape, then copy the clip-path values to my shape-outside property. Bennett Feely’s CSS clip-path maker.

Text between shapes

Patty Meltt likes to push the boundaries of country music, and I wanted to do the same with my design of her biography. I planned to flow text between two photomontages, where elements overlap and parts of the images spill out of their containers to create depth. Text between shapes. See this example in my lab. So, I made two montages with irregularly shaped alpha channels. Irregularly shaped alpha channels. I placed both images above the content:

<section id="johnny">
  <img src="patty-1.webp" alt="">
  <img src="patty-2.webp" alt="">
  […]
</section>

…and used those same image URLs as values for shape-outside:

#johnny img:nth-of-type(1) {
  float: left;
  width: 45%;
  shape-outside: url("patty-1.webp");
  shape-margin: 2%;
}

#johnny img:nth-of-type(2) {
  float: right;
  width: 35%;
  shape-outside: url("patty-2.webp");
  shape-margin: 2%;
}

Content now flows like a river in a country song, between the two image montages, filling the design with energy and movement.

Conclusion

Too often, images in long-form content end up boxed in and isolated, as if they were dropped into the page as an afterthought. CSS Shapes — and especially shape-outside — give us a chance to treat images and text as part of the same composition. This matters because design isn’t just about making things usable; it’s about shaping how people feel.
Wrapping text around the curve of a guitar or the edge of a portrait slows readers down, invites them to linger, and makes their experience more immersive. It brings rhythm and personality to layouts that might otherwise feel flat. So, next time you reach for a rectangle, pause and think about how shape-outside can help turn an ordinary page into something memorable. Getting Creative With shape-outside originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
Chris’ Corner: Discontent
by: Chris Coyier Mon, 06 Oct 2025 15:16:00 +0000 Nothing is above a little healthy criticism. Here’s Den Odell’s article We Keep Reinventing CSS, but Styling Was Never the Problem. If I can critique the critique, it makes some good and astute points, but pitting CSS evolution as the enemy up front doesn’t sit right. CSS itself, in my opinion, is doing great. It’s not the job of CSS to chase fashion or bend toward fickle developer tooling trends. To be fair, Den also says it’s not really the fault of CSS, it’s a matter of choosing trade-offs with your eyes open, which I fully agree with. Jono Alderson isn’t really pleased with modern web development at all. I’ll quote some of his intro: Go look around at “modern” media-driven sites and you’ll see he’s not wrong. We all see it every day. It’s almost embarrassing these days to tell people you make websites, because everybody knows websites are pretty rough these days. If I can, again, critique the critique, it’s that nobody (you, me, Jono) needs to do this. The web isn’t forcing us to do this. The web is amazingly backward compatible. If you are nostalgic for some way of building websites gone by, well, just start doing it that way again, because it will still work. Jono does mention this, but misses naming the real enemy: the companies themselves that put up with this and hire specifically to enforce it. I can and do say you don’t need to build websites with the latest in JavaScript cargo-culting, but in order to get a decent-paying job in web development, well, you just might have to. That’s the real shame here. Yes, this is a problem, and we’re not going to fix it unless companies start writing checks to employees to remove cruft, reduce complexity, and give freedom to developers who care to improve the situation. For a random little bonus… ain’t it a shame, the lack of love for sin() and cos() in CSS? Most hated?! Jeepers.
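If you haven’t played with the CSS trig functions, they’re handy for things like radial layouts. A minimal sketch — the .dot class and the --i index custom property are made-up names, and each dot would set its own --i in the HTML:

```css
.dot {
  /* 8 dots spaced 45deg apart; --i is 0–7, set per element,
     e.g. <span class="dot" style="--i: 3"></span> */
  --angle: calc(var(--i, 0) * 45deg);
  --radius: 120px;
  position: absolute;
  /* cos() and sin() return plain numbers, so multiplying
     by a length inside calc() yields a length */
  left: calc(50% + cos(var(--angle)) * var(--radius));
  top: calc(50% + sin(var(--angle)) * var(--radius));
}
```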
-
Can't Recall the Syntax? Try These WYSIWYG Markdown Editors on Linux
by: Community Mon, 06 Oct 2025 15:05:57 GMT From GitHub repositories to technical documentation, Markdown is an extremely popular lightweight markup language. Basically, Markdown files are plain text, but they follow a certain syntax, and when they are rendered, you see a beautiful document with headings, bullet points, code boxes and more. There are many Markdown editors available for Linux users, but mostly they are two-paned editors where you write Markdown syntax on the left and it gets rendered on the right. Though lightweight and easy to begin with, you still have to familiarize yourself with Markdown syntax. That might be out of the comfort zone for many. This is why I have compiled a list of Markdown editors with a WYSIWYG feature. WYSIWYG sounds like one of those 2000s terms that didn't succeed, but it is actually an acronym for an extremely convenient category of editing software. WYSIWYG stands for "What You See Is What You Get", and these editors render the Markdown code in real time, showing the output immediately as the code is typed. Best of all, you get a toolbar that you can use to easily create formatted text. WYSIWYG editors come with a toolbar, so you don't need to remember the syntax, although you can still use it. This makes things a lot easier, especially for people who only work with Markdown once in a while. Let's see what WYSIWYG Markdown editors you can use on Linux.

✋ Non-FOSS Warning! Some of the applications mentioned here are not open source. They have been included in the context of Linux usage.

1. MarkText

MarkText is an Electron-based, modern-looking editor that offers a simple and clean working environment.
The features are:

Interface: Custom, minimal, desktop-environment-independent single-pane interface.
Highlight features: Several writing modes (such as Focus mode, Typewriter mode, Source Code mode, each with a different emphasis), options for formatting and block options on the menu.
Navigation: Sidebar for file tree and table of contents, tab bar for open files; all optional.
Exports to: HTML, PDF
Customizability: 6 inbuilt themes
Misc.: A searchable Command Palette for easy access to keyboard shortcuts for all possible commands that can be given to the editor and the text blocks.

MarkText is your ideal choice if what you need is a no-fuss editor that gets the job done. Another advantage is that the DE-independent Electron interface works consistently in any OS, so you don't have to worry about visual inconsistencies.

Install MarkText on Linux

MarkText is easy to install because it comes in both AppImage and Flatpak formats, which makes it available straight from the software store in certain distributions. The AUR contains the package for direct installation, too. If none of these options are suitable, the binary file is available on the GitHub page as well.

2. Marknote

A part of the KDE software ensemble, Marknote is a great option if you're on a KDE distribution. It comes with a structure where you can create multiple notebooks and notes within them for better organization.

Interface: Qt graphic kit, KDE style.
Highlight features: Easily accessible bottom panel for quickly formatting text and adding blocks.
Navigation: The notes are organized in notebooks, which show on a sidebar. Focus Mode hides all navigation options except for the note being edited, for a cleaner view.
Exports to: HTML, ODT, PDF
Customizability: Inbuilt Qt Breeze themes
Misc.: A Command Bar that gives easy access to some general operations such as making/deleting notes, etc.

🚧 Marknote seems to support a limited set of Markdown, such as headings, italics, bold, lists, etc.
There is a lack of support for code blocks, quotations, and so on. Some inspection is required to check whether it fits your needs. Marknote should be your go-to option if you want to get straight to work with all options within very easy grasp. It is worth noting that it is not as customizable as KDE software usually is. Another important reminder is that it might not look great or very native on a DE like GNOME or Cinnamon.

Install Marknote on Linux

Installing Marknote is easiest through either Flatpak or the Snap Store. There are binaries available, too, which can be found on the product page under releases.

3. MarkFlowy

MarkFlowy is a modern, rising Markdown editor that flaunts a slick interface and AI capabilities.

Interface: Custom independent design with a very clean look.
Highlight features: There is an easy-access toolbar on top for various formatting and code-block options. Source Code and WYSIWYG modes are both available, plus an AI input feature in the side panel.
Navigation: Optional, toggleable file tree on the left and table of contents on the right.
Exports to: HTML, JPG
Customizability: A bright and a dark inbuilt theme, with the option for custom CSS themes.
Misc.: There are customizable keybindings, and the AI interface settings can be configured as well. There is an optional autosave feature that might come in handy for a lot of users.

While MarkFlowy is still in the beta stage at the time of writing this, the application shows a lot of promise and is quite stable and lightweight as it is.

Install MarkFlowy on Linux

MarkFlowy is available in DEB, RPM and AppImage formats on the release page. I had some issues with the AppImage, but the RPM package worked flawlessly on Fedora.

4. Notesnook

Notesnook is first and foremost a note-taking app that looks more like a document editor than a note-making app, which might be more comfortable for certain users. It is great in terms of organization of ideas.
Interface: The interface is designed like a notebook with all formatting and code-block options on the menu on top.
Highlight features: Multiple notebooks can be made, with the possibility of making an account and syncing them across devices. There is a Focus mode that hides everything apart from the active note.
Navigation: There is a categorical notebook panel on the left, with notes within it in the panel in the middle. The notes can be opened in multiple tabs like a browser.
Exports to: HTML, Text, but only all notes at once.
Customizability: There are light and dark modes built in for multiple theme styles by default, and more can be added on top of that.
Misc.: End-to-end encryption is available for synced notes, along with bidirectional note linking, and an app lock is available as well.

There are paid tiers available for the app which provide more features and more online storage, such as an unlimited number of notebooks as opposed to the free tier's limit of 20.

Install Notesnook on Linux

Notesnook is available as Flatpak, AppImage, as well as on the Snap Store via the official website.

5. Znote (not FOSS)

Znote is another up-and-coming freemium Markdown note-making app that puts emphasis on AI and organization. WYSIWYG is one of the available options, along with a Source Code mode and a panned view. I liked it for the fact that it is the effort of a lone developer.

Interface: The interface is modern and peppy with important options well within the view of the note itself. It is easily accessible and editable, with the AI prompt option at the bottom of the window.
Highlight features: Apart from the usual note-making, there is a unique option of audio recording. The notes can be synced with an internal Znote account across devices.
The keybinding menu can be accessed very easily using a button at the bottom.
Navigation: A collapsible menu on the left shows a file tree as well as bookmarks. The notes appear in a tabbed view.
Exports to: HTML, PDF, TXT
Customizability: There are several options for the light and dark themes separately within the application.
Misc.: AI features like copying responses and summarizing notes are available. You can even run code for certain languages within the app itself. Many of these features come with the paid version, though.

While the paid versions provide more features, the free plan gets the job done as well, and quite smoothly at that.

Install Znote on Linux

The app is only available as an AppImage on Linux, right on the official website.

6. Typora (not FOSS)

Typora, albeit not FOSS, is one of the most popular Markdown editors and has been around for a while. It is designed to be very simple, elegant, and feature-rich. That's one of the reasons why I included it in this list.

Interface: Fairly streamlined interface with a top menu and a status bar.
Highlight features: There are Focus and Typewriter modes for a more efficient workflow.
Navigation: Optional, toggleable sidebar with a file tree and outline menu.
Exports to: HTML, ODT, PDF, DOCX, LaTeX, ePub, etc.
Customizability: 5 built-in themes, all with very different looks, fonts, and feel. Beyond that, a plethora of themes are available on the website.
Misc.: Typora has a free trial of 15 days, after which a license needs to be bought, which is a major drawback.

Install Typora on Linux

There is a Flatpak available, which is the simplest option. Other than that, the install page provides an option to add a repository on Debian-based distributions, a DEB file for direct installation, as well as binary files.

7. Octarine (not FOSS)

Another non-FOSS entry, but Octarine is a really loaded Markdown editor with interesting features left and right. It has a free tier that includes most of the necessary features.
Interface: Clean interface, but with all the extra options within easy reach.
Highlight features: Multiple workspaces, easily searchable keyboard shortcuts.
Navigation: Files open in a tabbed view, with a file tree in a panel on the left and collapsible code blocks in the file itself.
Exports to: No export options; the text remains within the app.
Customizability: 3 built-in themes for the free version, with 4 more in the paid one.
Misc.: There are a lot of extra features, such as syncing with a Git repository, a Graph View (similar to Obsidian), a Writing Assistant, a Calendar Planner, a Command Bar, as well as tagging to stitch notes together, but many of these are for paid versions only.

The free tier of Octarine is enough to get a lot of work done, but not being able to export notes is a big drawback, except when you only make personal notes, in which case you can sync them with Git. There are many more features that can be explored on the website.

Install Octarine on Linux

Octarine provides an AppImage as well as DEB, RPM, and binary files to install on Linux, available on their official website.

Embeddable WYSIWYG Markdown Editors for Web Developers

Instead of a desktop application for writing Markdown documents, if you're looking for something you can embed within a web-based project to easily upload and manage formatted text, or even demonstrate the usage of Markdown, you can utilize one of these:

Toast UI Editor: Toast UI Editor is a ready-to-use JavaScript Markdown editor that can also be embedded via its React/Vue wrappers. It boasts a simple workflow with both source code and WYSIWYG options. TUI focuses on an extensible approach, with a lot of plugins for different kinds of features such as UML rendering, colored syntax highlighting, etc.
Milkdown: Milkdown is an editor engine built on ProseMirror which lets you build your own Markdown editor, with bindings for React and Vue.
It is inspired by Typora, and is also very plugin-driven, for things like modifying the syntax, UI, themes, embedding math, etc.
MDXEditor: MDXEditor is a React component that focuses on simplicity and consistent output across all use cases and devices. The code blocks have syntax highlighting and autocomplete, among other practical features listed on the website.

Conclusion

The ease of using Markdown has brought about several options in the market, each offering something it does better than the others. While I hope this post has helped you narrow down the choices, I would still encourage trying the applications out yourself.

Which one do you think is the best contender on the list? Did we miss the one that you favor? Let us know in the comments below. Cheers!

I also feel a little guilty for including some non-open-source apps here, but those are popular and good, and I could not find more open source WYSIWYG Markdown editors. Perhaps you can help me out by suggesting a few? Please mind the WYSIWYG feature; that's what we are looking for here.

Article submitted by community member Pulkit Chandak.
-
Module 6: Debugging Automated Services
by: Umair Khurshid Mon, 06 Oct 2025 13:10:10 +0530 This lesson is for paying subscribers only.
-
Module 5: Sandboxing Systemd Directives for Safer Automation
by: Umair Khurshid Mon, 06 Oct 2025 13:03:22 +0530 This lesson is for paying subscribers only.
-
Module 4: Automated Resource Management With Systemd
by: Umair Khurshid Mon, 06 Oct 2025 12:41:06 +0530 This lesson is for paying subscribers only.
-
Module 3: Systemd-nspawn and Machinectl for Repeatable Environments
by: Umair Khurshid Mon, 06 Oct 2025 11:29:31 +0530 This lesson is for paying subscribers only.
-
How I am Using Git and Obsidian for Note Version Management
by: Sreenath Sun, 05 Oct 2025 07:22:20 GMT

Git is a powerful tool that helps you keep track of changes in your files over time. While it is highly popular among the developer community, you can also use Git as a note storage vault. In this case, the source files are Obsidian Markdown files. When you use Obsidian for note-taking, Git can be very useful for managing different versions of your notes. You can easily go back to previous versions, undo mistakes, and even collaborate with others.

In this tutorial, I'll share how I set up Git with Obsidian on a Linux system, connect it with GitHub or GitLab, and use the Obsidian Git plugin to make version control simple and accessible right inside your notes app. This is all at the beginner level, where all you are doing is setting up Git for your knowledge base version management.

🚧 I am assuming you are taking simpler Markdown notes, where individual note and file sizes are small. If you are using large files, you may want to try Git Large File Storage, which is out of the scope of this article.

Step 1: Create a remote repository

📋 I am going to use GitHub in the examples here. If you use a GitHub alternative like GitLab, similar steps should be valid.

Go to the GitHub official webpage and log in to your account. Now, on the profile page, click on the "Create repository" button.

Create a new repository

Provide all the details. Make sure you have set the repository to private. Once you've entered the necessary details, click on the Create Repository button.

Enter Details and Create

That's it. You have a new private repository, which only you can access.

Step 2: Create a simple README

You need to create a simple README file in the newly created repo. For this, click on the Create a new file button.

Create a new file

On the next page, enter the name of the file and add placeholder text.

Enter file contents

Click on Commit changes and add a message when asked. Done! You have added a simple README to your repo.
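If you prefer the terminal, Steps 1 and 2 can also be sketched with plain git. This is a minimal, illustrative sketch, not part of the tutorial's own flow: the vault name, demo identity, and remote URL are placeholders, and the empty private repository itself still has to be created on GitHub (via the web UI above, or for example with the GitHub CLI's `gh repo create`).

```shell
# Sketch of Steps 1-2 from the terminal (placeholder names and URL).
work=$(mktemp -d) && cd "$work"          # throwaway location for this demo
git init -q obsidian-notes && cd obsidian-notes
git config user.name "demo"              # use your own identity here
git config user.email "demo@example.com"
echo "# My Obsidian vault" > README.md   # the simple README from Step 2
git add README.md
git commit -q -m "Add README"
# Point the local repo at the private remote created on GitHub:
git remote add origin https://github.com/yourname/obsidian-notes.git
git log --oneline                        # shows the "Add README" commit
```

Doing it this way skips the web-based file editor entirely; the rest of the tutorial works the same either way.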
Step 3: Install Git on your system

Now, let's install Git on your system. I also suggest installing the GitHub or GitLab CLI tool. Since you are into version control, these CLI tools can greatly improve your experience.

💡 With the GitHub or GitLab CLI tool, you can also commit and push changes to GitHub/GitLab from the terminal, in case there is a failure in the Obsidian GUI.

In Ubuntu, you can install both Git and the GitHub CLI using the command:

sudo apt install git gh

For Fedora, use:

sudo dnf install git gh

For Arch Linux, the package name is a bit different:

sudo pacman -S git github-cli

If you are using GitLab instead of GitHub, install the glab package instead of the gh/github-cli package. Its name is the same on all three of the above Linux distributions.

Step 4: Authenticate GitHub

Once you have Git and the GitHub CLI installed, you need to authenticate with your user credentials. First, make sure the GitHub credentials are properly saved. For this, you can use libsecret:

git config --global credential.helper libsecret

Now, let's set the username and email so that Git knows who is committing changes. Open a terminal and run:

git config --global user.name "your username"
git config --global user.email "your email"

Add username and email

Run the GitHub CLI:

gh auth login

If you are using GitLab, use:

glab auth login

🚧 For the rest of the tutorial, I am using GitHub, so GitLab users should follow their own on-screen instructions.

It will ask some questions, and you can select a choice and press enter. This is shown in the screenshot below:

Initial choices

When you press enter, it will open in the browser. You will be prompted to continue as the logged-in account.

Continue website login

On the next page, enter the code you were provided in the terminal.

Enter the code

This will ask you to verify the details before proceeding. Check once again and press Authorize github.

Confirm login

That's it. Your device is connected. Close the browser.
You can see that the terminal got updated with the successful login message.

Device connection success

Step 5: Clone the repository

Now that you have set up GitHub, it's time to clone the private notes repo to somewhere convenient for you. I am cloning the repo into my ~/Documents directory:

cd ~/Documents
git clone https://your-git-remote-repo-link.git

You will get a message that you have cloned into an empty repo. This is what we need.

Cloned the repository

You can open it in the file manager to see that the only contents of the cloned directory are a .git directory and a README file.

Step 6: Copy contents

Now, you have to copy your Markdown notes from their earlier location to this new vault location. You can do this using your file manager. While copying, make sure that you copy the .obsidian folder as well, because the rest of your plugins and settings are in the .obsidian directory. The folder is hidden, so use CTRL+H to show hidden items and then select all.

Copy all contents from existing vault

Step 7: Create a .gitignore

Once you copy all the contents to the new location, you will notice that you have a .obsidian folder that contains all the plugins and cache files. Usually, this does not need to be pushed to GitHub. So, we will create a .gitignore file in the root vault location. Inside this file, add the content:

# to exclude Obsidian's settings (including plugin and hotkey configurations)
.obsidian/

# to only exclude plugin configuration. Might be useful to prevent some plugin from exposing sensitive data
.obsidian/plugins

# OR only to exclude workspace cache
.obsidian/workspace.json

# to exclude workspace cache specific to mobile devices
.obsidian/workspace-mobile.json

# Add below lines to exclude OS settings and caches
.trash/
.DS_Store

📋 The above gitignore code is directly taken from the Git plugin documentation.

Step 8: Open a new vault in Obsidian

Open Obsidian, click on your vault name at the bottom, and select Manage Vaults.
Select Manage

From the new window, select the open button adjacent to "Open a folder as vault".

Open an existing vault

In the file chooser, select the directory you cloned recently. A new Obsidian window is opened with the notes in the new location.

You will be asked to trust the author. This is because you have copied all the contents, including plugins, from your previous notes. So, in order for the plugins to work, you need to enable community plugins, and that needs user permission. Accept that you trust the plugins and continue.

Trust author

Step 9: Install the Obsidian Git Plugin

We need the Git plugin in Obsidian for version control. Click on the settings button in Obsidian.

Click on the settings button

Go to Community plugins and click on Browse. Here, search for the Git plugin and install it.

Search and Install Git

Once installed, enable it.

Enable Git Plugin

You have now set up the basics of Git with Obsidian. Click on the Git button in Obsidian to see the Git status.

Obsidian Git Status

As you can see, there is a .gitignore file under changes.

Step 10: Stage changes

I suggest you stage changes in batches and commit. To stage a file, you can either press the + button adjacent to that file or use the + button in the top menu to stage all.

Stage Changes

Everything is staged now for me:

Stage everything

Step 11: Commit and Push

🚧 I am assuming you are the only one managing the notes, and there is no other collaborator.

If you are a solo user of your personal notes, then you can commit the changes and push them to the remote repository. For this, once all changes are staged, use the commit button.

Commit all staged changes

When the commit is finished, use the Push button.

Push Changes

Step 12: Pull changes

Let's say you have edited the notes on another system and pushed the changes to GitHub from there. In this case, when you start on the original system, you should pull the changes from GitHub first. Use the Pull button in the Obsidian Git control panel.
Pulled files from remote

Now that your local copy is in sync with the remote, you can work effortlessly.

Wrapping Up

The Git plugin also allows you to automatically commit/pull/push at pre-defined times. But I like keeping things under my control and thus prefer the manual method of handpicking my files. It's up to you how you want to go about it, though. Integrating Git with Obsidian is a great way of syncing your notes in the cloud without additional cost.
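For reference, the buttons the plugin exposes (stage, commit, push, pull) map onto a handful of plain git commands, which is the terminal fallback mentioned back in Step 3. The sketch below runs the same cycle in a throwaway repository so it cannot touch your real vault; the identity, file names, and paths are illustrative, and the push/pull lines are commented out because they need the real remote:

```shell
# The plugin's buttons map to these git commands (demonstrated in a
# throwaway repo; names and paths are illustrative).
vault=$(mktemp -d) && cd "$vault"
git init -q
git config user.name "demo"                  # use your own identity
git config user.email "demo@example.com"
printf '.obsidian/\n.trash/\n.DS_Store\n' > .gitignore
echo "# Daily note" > note.md
git add -A                                   # "Stage all" (respects .gitignore)
git commit -q -m "Update notes"              # "Commit"
# Sanity check: the rule from Step 7 really excludes the workspace cache.
git check-ignore .obsidian/workspace.json && echo "workspace cache is ignored"
git log --oneline                            # shows the new commit
# git push    # "Push" - needs the real remote configured
# git pull    # "Pull" - same
```

Knowing this mapping also makes the plugin's history view less mysterious: every entry there is just an ordinary commit you could have made by hand.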
-
LHB Linux Digest #25.29: Git for DevOps eBook, getfacl, Crontab Recovery and More
by: Abhishek Prakash Fri, 03 Oct 2025 17:22:02 +0530

The Git for DevOps course has been converted into an eBook. Considering that it is less hands-on and more theory, it makes sense to read it as a book. LHB Pro members can download the eBook for free from the resources page (scroll down a bit). If you are not a Pro member yet, either opt for one and get access to everything we create, or just purchase this eBook.

One more chapter has been added to the systemd automation course. I plan to publish the entire course by next week. After that, I'll work on adding videos to the Docker course. Stay tuned for more educational content.

⭐ If you think your Linux/DevOps knowledge and experience can be shared in micro-course format, please reach out to me. Remember, some of our courses have been contributed by readers like you. I help with editing so that the courses are easy to understand.

This post is for subscribers only.
-
Module 2: Automating Complex Workflows with Targets
by: Umair Khurshid Fri, 03 Oct 2025 15:53:16 +0530 This lesson is for paying subscribers only.
-
FOSS Weekly #25.40: Fedora 43 Features, Kernel 6.17, Zorin OS 18, Retro Gaming Setup and More Linux Stuff
by: Abhishek Prakash Thu, 02 Oct 2025 04:43:03 GMT

Last month, Austria's armed forces ditched Microsoft Office for LibreOffice. This is surely positive news, but it also makes us think about something crucial. The move to switch to open source is often driven by monetary benefits. Since these organizations often save a hefty amount, should they not contribute some part of their savings back to the open source projects they rely on? What do you think?

Austria's Armed Forces Gets Rid of Microsoft Office (Mostly) for LibreOffice: The Austrian military prioritizes independence over convenience. (It's FOSS News, Sourav Rudra)

💬 Let's see what you get in this edition:

ZimaOS adding a paid tier.
A new Linux kernel release.
GUI apps in the terminal.
Fedora floating a proposal on AI.
Revamped Proton Mail mobile apps.
And other Linux news, tips, and, of course, memes!

📰 Linux and Open Source News

Calibre eBook reader has introduced its first AI feature.
Kali Linux 2025.3 is a packed release with many new tools.
The mobile apps for Proton Mail have received a major revamp.
Austria's armed forces have moved away from Microsoft Office.
Linux 6.17 has landed with many performance and reliability buffs.
Homelab-focused ZimaOS 1.5 adds a paid tier while keeping the core features untouched.
The Fedora Project is looking for feedback on its policy for AI-assisted contributions.

Fedora 43 is due soon. Here are the new features arriving with it:

Fedora 43 Release Date and New Features: A close look at the new features coming in Fedora 43. (It's FOSS News, Sourav Rudra)

🧠 What We're Thinking About

FOSS is an important consideration for creatives in 2025.

From Disillusionment to Freedom: Why Creatives Need FOSS Now More Than Ever: More than ever, creative professionals need to exert control over their digital footprint. Big tech will not give us control; we have to take it. Free and Open Source (FOSS) software gives us a path forward.
The path isn't easy, but I argue nothing worthwhile is. (It's FOSS News, Theena Kumaragurunathan)

Ruby's ecosystem is under threat from corporations.

🧮 Linux Tips, Tutorials, and Learnings

Explore terminal shortcuts to enhance your efficiency. I have shared it in the past too, but it's worth a reshare. Speaking of enhancing efficiency, here are a few tips Linux users can use to be more productive. I understand that not everyone is a keyboard shortcut maestro, so here are a few tips to master finger swipe gestures in the GNOME desktop environment.

👷 AI, Homelab and Hardware Corner

These 3D-printed cases for the Raspberry Pi will not disappoint.

13 Amazingly Innovative 3D Printed Cases for Raspberry Pi I Came Across: So what if I don't have a 3D printer to print these cases. I can at least appreciate the creativity. (It's FOSS, Abhishek Kumar)

Raspberry Pi has quietly launched the 500+, a blingy, faster version of the original 500 model. WebScreen is a crowdfunded secondary display for gamers and creators. The Raspberry Pi can be used for retro gaming, you know. The other Abhishek shows it with his latest work.

✨ Project Highlights

I recently discovered Sync-in, an open source platform that facilitates file sharing, sync, and collaboration.

Sync-in: The secure, open-source platform for file storage, sharing, collaboration, and syncing. (GitHub)

Another interesting tool I discovered is term.everything, which allows you to run 'any' GUI app in the terminal. I am still exploring it and will be doing a full review soon.

mmulet/term.everything: ❗Run any GUI app in the terminal❗ (GitHub, mmulet)

🛍️ Deal worth a look

This ebook bundle from No Starch is a curated collection of titles to help you explore embedded electronics with Raspberry Pi and Arduino. Plus, your purchase supports the Electronic Frontier Foundation.
Humble Tech Book Bundle: Electronics for the Curious by No Starch: Pay what you want to deepen your knowledge of video games and technology with our latest Tech Book Bundle. (Humble Bundle)

📽️ Videos I Am Creating for You

Zorin OS 18 is coming up with new features specially planned for new Linux users migrating from Windows 10. I discuss those features in the latest video.

Subscribe to the It's FOSS YouTube Channel

Desktop Linux is mostly neglected by the industry but loved by the community. For the past 13 years, It's FOSS has been helping people use Linux on their personal computers. And we are now facing an existential threat from AI models stealing our content. If you like what we do and would love to support our work, please become an It's FOSS Plus member. It costs $24 a year (less than the cost of a McDonald's burger a month), and you get an ad-free reading experience with the satisfaction of helping the desktop Linux community.

Join It's FOSS Plus

💡 Quick Handy Tip

In Firefox, you can forget about one site by erasing its browsing history, download history, cookies, logins, etc. First, go to Menu → History → Manage History. Here, locate the website you want to forget about (one of those spicy ones, perhaps?), right-click on it, and then select "Forget About This Site..." When asked, click on "Clear data" to clear any data related to that website. Following this method means that the website will be gone forever from your history, unless you visit it again.

🎋 Fun in the FOSSverse

Seeing that Halloween is close, are you in the mood to hunt a daemon in our latest crossword?

Daemon Hunter: Crossword Edition: Background processes, foreground fun! Can you summon all the daemons and solve this Linux crossword? (It's FOSS, Abhishek Prakash)

🤣 Meme of the Week: One of the worst crimes in the world of Linux.

🗓️ Tech Trivia: On October 2, 1955, the ENIAC, the world's first general-purpose electronic computer, was retired. Built by John Mauchly and J.
Presper Eckert, it could perform 5,000 operations per second.

🧑‍🤝‍🧑 From the Community: Pro FOSSer Neville asked a really important question in the forum a few days ago, and the replies so far have been wonderful.

Why do people come to this forum? Feedback please: Let's see if we can find out what aspects of this forum are most appreciated by our members. I will start it off. What I mostly appreciate from this forum is some mental challenge, helping to solve computing issues, inspiration, the flow of new ideas. Can each of you attempt to summarize what you see as important or rewarding in our forum? (It's FOSS Community, nevj)

Fellow Pro FOSSer Xander started a thread asking for ideas to make the most unusable desktop environment.

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux subreddits and community forums. Follow us on Google News and stay updated in your news feed. Opt for It's FOSS Plus membership and support us. 🙏

Enjoy FOSS 😄
-
411: The Power of Tree-Sitter
by: Chris Coyier Wed, 01 Oct 2025 13:24:46 +0000 Alex and Chris hop on the show to talk about a bit of technology that Alex calls "the 2nd best technological choice he's ever made." That technology is called Tree-sitter. It's a code parsing tool for building ASTs (Abstract Syntax Trees) out of code. GitHub uses it to power search and "go to" functionality. The creators now work on Zed, where a code parser is paramount. We use it to understand an entire Pen very quickly so we can understand how it all links together (among other things) and make a plan for how to process the Pen (a "build plan"). It's fast, accurate, forgiving, and extensible. Just a heck of a learning curve. Jump Links 00:07 CodePen 2.0 is more than just a fresh coat of paint 03:00 Tree-sitter explained 12:04 Making the right choices with technology 21:50 How we parse your code 26:10 We don't want you to have to be in config hell 28:48 Type type type stop see what happens
-
Same Idea, Different Paint Brush
by: Geoff Graham Wed, 01 Oct 2025 13:02:47 +0000 There’s the idiom that says everything looks like a nail when all you have is a hammer. I also like the one about worms in horseradish seeing the world as horseradish. That’s what it felt like for me as I worked on music for an album of covers I released yesterday. I was raised by my mother, a former high school art teacher (and a gifted artist in her own right), who exposed me to a lot of different tools and materials for painting and drawing. I’m convinced that’s what pointed me in the direction of web development, even though we’re talking years before the internet of AOL and 56K dial-up modems. And just as there’s art and craft to producing a creative 2D visual on paper with wet paint on a brush, there’s a level of art and craft to designing user interfaces that are written in code. You might even say there’s a poetry to code, just as there’s code to writing poetry. I’ve been painting with code for 20 years. HTML, CSS, JavaScript, and friends are my medium, and I’ve created a bunch of works since then. I know my mom made a bunch of artistic works in her 25+ years teaching and studying art. In a sense, we’re both artists using a different brush to produce works in different mediums. Naturally, everything looks like code when I’m staring at a blank canvas. That’s whether the canvas is paper, a screen, some Figma artboard, or what have you. Code is my horseradish and I’ve been marinating in this horseradish ocean for quite a while. This is what’s challenging to me about performing and producing an album of music. The work is done in a different medium. The brush is no longer code (though it can be) but sounds, be they vibrations that come from a physical instrument or digital waves that come from a programmed beat or sample. There are parallels between painting with code and painting with sound, and it is mostly a matter of approach.
The concepts, tasks, and challenges are the same, but the brush and canvas are totally different.

What’s in your stack?

Sound is no different than the web when it comes to choosing the right tools to do the work. Just as you need a stack of technical tools to produce a website or app, you will need technical tools to capture and produce sounds, and the decision affects how that work happens. For example, my development environment might include an editor app for writing code, a virtual server to see my work locally, GitHub for version control and collaboration, some build process that compiles and deploys my code, and a host that serves the final product to everyone on the web to see. Making music? I have recording software, microphones, gobs of guitars, and an audio interface that connects them together so that the physical sounds I make are captured and converted to digital sound waves. And, of course, I need a distributor to serve the music to be heard by others just as a host would serve code to be rendered as webpages. Can your website’s technical stack be as simple as writing HTML in a plain text editor and manually uploading the file to a hosting service via FTP? Of course! Your album’s technical stack can just as easily be a boombox with a built-in mic for recording. Be as indie or punk as you want! Either way, you’ve gotta establish a working environment to do the work, and that environment requires you to make decisions that affect the way you work, be it code, music, or painting for that matter. Personalize your process and make it joyful. It’s the “Recording Experience” (EX) to what we think of as Developer Experience (DX).

What’re you painting on?

If you’re painting, it could be paper. But what kind of paper? Is college-rule cool or do you need something more substantial with heavier card stock? You’re going to want something that supports the type of paint you’re using, whether it’s oil, water, acrylic… or lead? That wouldn’t be good.
On the web, you’re most often painting on a screen that measures its space in pixel units. Screens are different than paper because they’re not limited by physical constraints. Sure, the hardware may pose a constraint as far as how large a certain screen can be. But the screen itself is limitless where we can scroll to any portion of it that is not in the current frame. But please, avoid AJAX-based infinite scrolling patterns in your work for everyone’s sake. I’m also painting music on a screen that’s as infinite as the canvas of a webpage. My recording software simply shows me a timeline and I paint sound on top of time, often layering multiple sounds at the same point in time: sound pictures, if you will. That’s simply one way to look at it. In some apps, it’s possible to view the canvas as movements that hold buckets of sound samples. Same thing with code. Authoring code is as likely to happen in a code editor you type into as it is to happen with a point-and-click setup in a visual interface that doesn’t require touching any code at all (Dreamweaver, anyone?). Heck, the kids are even “vibe” coding now without any awareness of how the code actually comes together. Or maybe you’re super low-fi and like to sketch your code before sitting behind a keyboard.

How’re people using it?

Web developers be like all obsessed with how their work looks on whatever device someone is using. I know you know what I’m talking about because you not only resize browsers to check responsiveness but probably also have tried opening your site (and others!) on a slew of different devices. It’s no different with sound. I’ve listened to each song I’ve recorded countless times because the way they sound varies from speaker to speaker. There’s one song in particular that I nearly scrapped because I struggled to get it sounding good on my AirPods Max headphones that are bass-ier than your typical speaker.
I couldn’t handle the striking difference between that and a different output source that might be more widely used, like car speakers. Will anyone actually listen to that song on a pair of AirPods Max headphones? Probably not. Then again, I don’t know if anyone is viewing my sites on some screen built into their fridge or washing machine, but you don’t see me rushing out to test that. I certainly do try to look at the sites I make on as many devices as possible to make sure nothing is completely busted. You can’t control what device someone uses to look at a website. You can’t control what speakers someone uses to listen to music. There’s a level of user experience and quality assurance that both fields share. There’s a whole other layer about accessibility and inclusive design that fits here as well. There is one big difference: The cringe of listening to your own voice. I never feel personally attached to the websites I make, but listening to my sounds takes a certain level of vulnerability and humility that I have to cope with.

The creative process

I mentioned it earlier, but I think the way music is created shares a lot of overlap with how websites are generally built. For example, a song rarely (if ever) comes fully formed. Most accounts I read of musicians discussing their creative process talk about the “magic” of a melody in which it pretty much falls in the writer’s lap. It often starts as the germ of an idea and it might take minutes, days, weeks, months, or even years to develop it into a comprehensive piece of work. I keep my phone’s Voice Memos app at the ready so that I’m able to quickly “sketch” ideas that strike me in the moment. It might simply be something I hum into the phone. It could be strumming a few chords on the guitar that sound really nice together. Whatever it is, I like to think of those recordings as little low-fidelity sketches, not totally unlike sketching website layouts and content blocks with paper and pencil.
I’m partial to sketching websites on paper and pencil before jumping straight into code.

It’s go time!

And, of course, there’s what you do when it’s time to release your work. I’m waist-deep in this part of the music and I can most definitely say that shipping an album has as many moving parts, if not more, than deploying a website. But they both require a lot of steps and dependencies that complicate the process. It’s no exaggeration that I’m more confused and lost about music publishing and distribution than I ever felt learning about publishing and deploying websites. It’s perfectly understandable that someone might get lost when hosting a website. There’s so many ways to go about it, and the “right” way is shrouded in the cloak of “it depends” based on what you’re trying to accomplish. Well, same goes for music, apparently. I’ve signed up for a professional rights organization that establishes me as the owner of the recordings, very similar to how I need to register myself as the owner of a particular web domain. On top of that, I’ve enlisted the help of a distributor to make the songs available for anyone to hear and it is exactly the same concept as needing a host to distribute your website over the wire. I just wish I could programmatically push changes to my music catalog. Uploading and configuring the content for an album release reminds me so much of manually uploading hosted files with FTP. Nothing wrong with that, of course, but it’s certainly an opportunity to improve the developer recording experience.

So, what?

I guess what triggered this post is the realization that I’ve been in a self-made rut. Not a bad one, mind you, but more like being run by an automated script programmed to run efficiently in one direction. Working on a music project forced me into a new context where my development environment and paint brush of code are way less effective than what I need to get the job done. It’s sort of like breaking out of the grid.
My layout has been pretty fixed for some time, and I’m drawing new grid tracks that open my imagination up to a whole new way of working that’s been right in front of me the entire time, but drowned in my horseradish ocean. There’s so much we can learn from other disciplines, be it music, painting, engineering, architecture, working on cars… it turns out front-end development is a lot like other things. So, what’s your horseradish, and what helps you look past it? Same Idea, Different Paint Brush originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
-
When Should Testing Start in the Development Process? – Complete Guide
by: Neeraj Mishra Wed, 01 Oct 2025 07:33:55 +0000 Typically speaking, developing software is a long series of stages that moves from requirements gathering to development to testing to final release. Each stage requires its members to contribute to the final product in their own capacity. The business analyst’s job is to collect requirements from the client and validate their feasibility with a technical architect. The technical architect studies the whole environment and performs an impact analysis of placing a new solution in it. Based on feasibility, they may recommend changes to the requirements. After long discussions and back-and-forth over requirements, development of the product begins. Then the development team faces its own challenges. They encounter unforeseen events while building the software that may require either updating the design or changing the requirements themselves. Then the next stage, testing, arrives, when the product is tested against different criteria. Even this stage may push the software back to any of the previous stages, depending on the nature of the defects found. So, what we understand from all this is that software development is not a linear process. It goes back and forth between the many stages required to give it its final shape. And hence the question arises: when should testing ideally begin? This is what we will explore in detail in the sections below.

Why the Timing of Testing Matters

Testing is surely needed to develop a product that is reliable and safe. There’s no doubt about it. But what matters most is when a defect is found. That timing has a direct impact on both the vendor and the client. If a defect is found in production, it immediately breaks customer trust, and your brand reputation falters in the customer’s eyes. Beyond that, studies have shown that the later a defect is discovered, the more expensive it is to fix.
That is because a software solution is built on a large set of algorithms and logic. Each part affects the others through data exchange, dependent logic, and the sequence of the flow. A defect found at one spot can cause all dependent programs and subroutines to fail as well, and it may disrupt the complete flow of the code. Wrong logic generating a value that is used in multiple places can have cascading effects. Hence, it becomes exponentially more costly and laborious to fix a defect later rather than sooner. So, the conclusion is: the sooner you discover a bug, the better in every sense. And that leads us to the question: what is the ideal time for testing to begin? Of course, it makes no sense to start testing until there’s enough substance to test. On the other hand, it is equally risky to postpone it, leaving defects to be discovered later, when they will have a higher impact on the overall solution.

The Role of Testing in Each Stage of Development

To understand the role of testing in each stage, let’s categorize the stages into three phases of development: requirements gathering and design, code development, and integration and deployment.

Requirements Gathering and Design

In this stage, testing is not about catching bugs, because there’s no code developed yet. It is mostly about testing assumptions. Requirements raised by the client must be validated against technical feasibility and alignment with business goals. This kind of testing happens at the functional level, where the requirements are examined for their impact on other related processes, both business and technical. For example, a change in the workflow that follows after a customer places an order may impact downstream events like updating the database, the customer notification process, and product delivery.
A business analyst validates the workflow at the functional level, while a technical architect checks the feasibility of developing such a solution. The earlier faulty assumptions are uncovered, the less impact they have on everything that follows.

Code Development and Unit Testing

This is the stage where testing becomes more tangible. Here, a unit of functionality is developed, such as a stand-alone program, and it can be tested against its expected outputs. The dependent programs need not be developed yet, as data and functional exchanges with them can be simulated through hard-coded values. Unit testing checks how a single unit of functionality works independently, and whether it generates the expected outcome in both ideal and negative scenarios. For effective unit testing, using an automation framework is wise; testRigor, as a software testing tool, is one product that can perform this task through simulated scenarios. The ideal practice in this stage is to create test cases even before the program has been coded. This responsibility falls on the developer, who is expected not just to write the code but also to validate its results honestly.

Integration and Deployment

After unit testing, which validates the functionality of each component, the integration process comes into play. In this stage, all the different components that were developed and tested separately are brought together, and their behavior is tested in relation to each other. While unit testing was about testing a component individually, integration testing tests their relationships. It validates whether the whole is greater than the sum of its parts. Once the integration works flawlessly, usability is checked against customer expectations. This part involves testing the software from a human perspective. All the technical aspects are useful only as long as they ultimately meet users’ expectations.
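To make the unit-testing idea concrete — exercising one unit while its dependencies are simulated through hard-coded values — here is a minimal sketch in shell. The function names and tax-rate scenario are hypothetical illustrations, not from the article:

```shell
# Unit under test: computes a price including tax. It depends on
# get_tax_rate, which in production would query a real service; here we
# stub it with a hard-coded value, as described above.
get_tax_rate() { echo "0.10"; }   # stub standing in for the real dependency

price_with_tax() {
  local price=$1 rate
  rate=$(get_tax_rate)
  # bash has no floating point, so delegate the arithmetic to awk
  awk -v p="$price" -v r="$rate" 'BEGIN { printf "%.2f\n", p * (1 + r) }'
}

# Ideal scenario: expected outcome under the stubbed rate
[ "$(price_with_tax 100)" = "110.00" ] && echo "PASS: ideal case"

# Negative scenario: a zero price should still produce a valid number
[ "$(price_with_tax 0)" = "0.00" ] && echo "PASS: negative case"
```

In integration testing, the stub would be replaced by the real get_tax_rate and the two units exercised together; user-facing acceptance testing then checks the assembled product against expectations.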
This testing can be done in a User Acceptance environment, which is almost a replica of production.

Closing Statement: When Should Testing Begin?

Having understood the various stages of testing, it is fair to ask when the ideal time to start testing is. The simple answer: as soon as possible. Before you start testing your product, you must cultivate a mindset of quality within your team at all levels. Testing is not just about finding defects; it is primarily about developing a critical outlook on every stage of development. How foolproof is this requirement? How robust is the code? Will it stand up to adverse scenarios? These are questions that do not require a particular stage to ask, and they should be asked right from the start through deployment. So, the final answer is: testing starts the moment we start imagining a product. Every requirement must be met with a “What if?” question, every assumption should be unearthed, and every piece of functionality should be tested against tangible results. Once you cultivate a critical mindset in your team, all your testing efforts will be a manifestation of it, with a deeper impact on the quality of the product. The post When Should Testing Start in the Development Process? – Complete Guide appeared first on The Crazy Programmer.
-
Utilizing My Raspberry Pi 4 for Retro Gaming With RetroPie
by: Abhishek Kumar Wed, 01 Oct 2025 05:59:24 GMT Do you remember the thrill of powering up your old console, the satisfying clunk of the cartridge clicking into place, and the vibrant, pixelated characters that transported you to another realm? Whether you were a Mario fanatic (like me), a Sonic speedster, or a Pokémon trainer, those retro games hold a special place in our hearts. Thanks to RetroPie, you can dive back into your favorite classic games. This is one of the easier projects you can build with a Raspberry Pi. Since I use a Pi 5 for my homelab setup, I thought of utilizing my older Raspberry Pi 4 for the retro gaming project. In this guide, I will show you how I set up RetroPie on my Raspberry Pi 4. I'll also share some tips for that authentic retro gaming experience.

What is RetroPie, again?

🚧 RetroPie only works up to the Raspberry Pi 4 and has not seen a new release since 2022. It still works fine with the Pi 4.

Before I share the setup, let's talk about what RetroPie is. It’s a collection of emulators that enables you to play games from a wide range of classic consoles such as the Game Boy, GameCube, SNES, and PlayStation 1 & 2. You can even play some MS-DOS games. Think of it as a pre-built package that turns your Pi into a retro gaming console with minimal setup. But there are a few things to understand here. You won't just get access to hundreds of retro games. You'll have to get the game ROMs (digital files of the old classic games) and then upload them to the appropriate emulator folder inside RetroPie. There are websites that let you download the retro games of your choice. The problem is that downloading ROMs could be illegal in your country. That's the thing about corporate greed. Even if they have not been selling those games and devices for years, they won't let you enjoy that little piece of your childhood. The purely legal way is that if you have those old game cartridges, you can build ROMs on your own.
There are specialized devices, like the Retrode 2, that let you create ROMs from old cartridges. For more details on retro game ROMs, watch the video below. Ready to get started? Here’s how to set up RetroPie on your Raspberry Pi.

What you’ll need:

- A Raspberry Pi (ideally a Pi 3 or 4 for better performance, but it could work with a Zero as well)
- A microSD card (at least 16 GB recommended)
- Official power supply for the Raspberry Pi
- Monitor & HDMI cable (or composite video cable for CRT TVs)
- Keyboard and mouse
- Controller or joysticks (optional)
- An internet connection (optional)

✋ The official RetroPie image for the Raspberry Pi 5 isn’t available yet, but you can use Batocera (tutorial coming soon).

RetroPie installation

There are two ways of getting RetroPie on a Raspberry Pi:

- You can install RetroPie from a standalone image by flashing it onto your microSD card.
- If you’re already running an operating system like Raspberry Pi OS, you can install RetroPie right on top of it.

Method 1: Installing from the pre-built RetroPie image

I have already downloaded the image on my system from RetroPie's downloads page. Next, flash it to a microSD card. I am using the Raspberry Pi Imager tool, but you can use Balena Etcher or even Rufus (if you are on Windows). Select your device, as shown in the image below (I have selected 'No filtering' for my Pi 3). Select "Custom Image" and browse to the downloaded RetroPie image. Choose the installation medium (microSD card) and hit "Next". That's it. Wait for the process to complete and then take out the SD card.

Method 2: Installing RetroPie on top of an existing Raspberry Pi OS

Updating Raspberry Pi OS is the most basic thing you should do first:

sudo apt update && sudo apt upgrade -y

Installing necessary packages

With your Raspberry Pi’s OS updated, it's time to install a couple of essential packages for RetroPie.
First, you'll need the “dialog” package, which the RetroPie setup script uses to create dialog boxes in the terminal. Next, the “git” package is crucial, as it allows us to clone the setup script repository directly to the Raspberry Pi. You can install both packages by running the following command:

sudo apt install -y git dialog

Cloning the RetroPie setup script

Now that we’ve got the required packages, let's move on to cloning the RetroPie setup script. This script will install RetroPie on your Raspberry Pi. Switch to your home directory:

cd

Now, use the command below to clone the RetroPie setup script into your home directory:

git clone --depth=1 https://github.com/RetroPie/RetroPie-Setup.git

Running the RetroPie setup script

Next, navigate to the “RetroPie-Setup” directory that was created when you cloned the repository:

cd RetroPie-Setup

Once inside the directory, you can start the setup script. This script will handle the installation of all the necessary packages for a few basic emulators:

sudo ./retropie_setup.sh

Starting the installation process

You should now see the RetroPie setup dialog on your screen. Just press OK. The next menu offers several options, but for now, focus on the “Basic Install” option. This will install the core and main packages needed to get RetroPie up and running. Navigate to “Basic Install” using the arrow keys, and press Enter to select it.

Confirming the installation

You'll be prompted to confirm whether you want to install the “Core” and “Main” components of RetroPie. Select “Yes” to proceed.

📋 Keep in mind that this step might take a while, since the Raspberry Pi needs to download and install numerous packages.

Once the installation is complete, you’ll return to the main menu of the RetroPie setup script.

Final steps: Rebooting

Finally, to ensure everything is working correctly, reboot your Raspberry Pi. In the main menu, select the “Perform reboot” option.
💡 To have EmulationStation start automatically with your Raspberry Pi, head to the “Configuration / Tools” menu in RetroPie, find the “autostart” option, and select “Start Emulation Station at boot.” This way, it’ll launch on its own every time you power up!

Adding games (ROMs) to RetroPie

🚧 This guide is for educational purposes only. We’re not liable for any legal issues, nor are we promoting piracy. It seems that downloading classic game ROMs can be illegal even if the games are no longer being sold anywhere. It is up to you to decide if you want to download and use ROMs.

So, you've set up RetroPie on your Raspberry Pi, and now you're ready for the fun part: adding games, aka ROMs!

What are ROMs?

ROMs are essentially digital copies of games from old consoles. They allow you to play your favorite classics on modern hardware, like our little friend the Pi here.

📋 A quick reminder again: only download and use ROMs for games you legally own.

How to add game ROMs to RetroPie

Adding ROMs to your RetroPie setup is easier than you might think. Here's how you can do it:

Method 1: Transferring ROMs via USB drive

This is the most straightforward method. Just format a USB drive to FAT32. I'm doing a quick format in Windows. In Linux, you can use the GNOME Disks utility or a command-line tool like this:

sudo mkfs -t vfat /dev/sda1

- mkfs is a command used to format block storage devices.
- -t specifies the type of file system.
- /dev/sda1 is the location of my storage device.

Create a folder on the drive (I named mine "retropie") and plug it into your Pi. RetroPie will automatically create sub-folders for each console. Next, copy your ROM files into the appropriate folders, plug the USB drive back into your Pi, and RetroPie will handle the rest. Here is the unzipped version of the ROM.

Method 2: Transferring ROMs over the network

If your Raspberry Pi is connected to your home network, you can transfer ROMs directly over WiFi using Samba, SFTP, etc.
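Whichever transfer method you use, the end state on the Pi is the same: the ROM file sits inside the right per-system folder under ~/RetroPie/roms. Here's a runnable local dry-run of that layout (hypothetical paths and filename; the commented scp line sketches the real network equivalent):

```shell
# Local dry-run of RetroPie's ROM layout (hypothetical paths, runs anywhere).
# On a real Pi the base directory is ~/RetroPie/roms, one sub-folder per system.
ROMS="./RetroPie/roms"
mkdir -p "$ROMS/n64" "$ROMS/snes"

# Stand-in file for a ROM you legally own:
touch "Super Mario 64.z64"

# "Adding a game" is just copying the file into the right system folder:
cp "Super Mario 64.z64" "$ROMS/n64/"

# Over the network, the equivalent one-liner would be something like:
#   scp "Super Mario 64.z64" pi@retropie.local:RetroPie/roms/n64/

ls "$ROMS/n64"   # the game now shows up under the N64 system
```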
Here, I'm using WinSCP to transfer my ROMs over SFTP. After logging in, just go to the RetroPie directory, or wherever you want to save your games. I'm saving my ROMs in this directory: /home/user/RetroPie/roms/n64. That's it! Now that your ROMs are added, you're ready to boot up and start gaming.

First boot of RetroPie

You'll see the RetroPie splash screen on the first boot, followed by EmulationStation's welcome message. (Sorry for the image quality, as I don't have an HDMI capture device.) Next, you will be prompted to configure your controller. This only takes a minute, and once it's done, you'll have full control over the system. Once your controller is set up, you'll be taken to the main EmulationStation menu. Here, you'll see a list of all the systems for which you've added ROMs. In my case, it's Nintendo 64. The interface is clean and easy to navigate. You can use your controller to scroll through the different consoles, select a game, and dive straight into the action. Here I have added Super Mario 64, a true classic that never gets old. It's showing two copies because I've added one compressed and one uncompressed. When we select it from the menu, you'll see the familiar startup screen, and there he is: Mario himself, ready for action! The game loads a bit slowly, but it's manageable, and with just a press of a button, you are back in the colorful world of Mario.

Final thoughts

While RetroPie is an amazing way to bring back the nostalgia of classic gaming, it's not without its quirks, especially if you are using an older Pi model like the Pi 3. If you are aiming for a smooth, lag-free experience, I'd highly recommend using a Raspberry Pi 4. RetroPie may not have seen a new release in the last few years, but it still works. I'm curious: what does your retro gaming setup look like? What games are you playing? Share your setups and experiences in the comments below.