-
Linus Torvalds Defends Windows' Blue Screen of Death
by: Abhishek Prakash Wed, 03 Dec 2025 04:48:52 GMT

We have all seen countless memes and jokes about Microsoft Windows' blue screen of death, popularly known as BSOD.

A popular but fake image poking fun at Windows' blue screen of death

Microsoft did make changes to handle the criticism and jokes. They changed the blue color to black 😆 So it is still BSOD; blue or black doesn't matter. The black screen surely blends in with Linux's very own kernel panic screen. Microsoft is taking notes from Linux, it seems.

The reason I am talking about the Blue Screen of Death is that Linux creator Linus Torvalds recently defended Microsoft over these error screens. Well, sort of.

Not entirely a software issue: Torvalds

By now you might be aware that Linus Torvalds did a non-serious, fun video with Linus Sebastian of Linus Tech Tips. They built a PC together. In that video, Sebastian discussed Torvalds' fondness for ECC (Error Correction Code). I am using their last names because otherwise Linus would be confused with Linus. This is where Torvalds said:

I am convinced that all the jokes about how unstable Windows is and blue screening, I guess it's not a blue screen anymore, a big percentage of those were not actually software bugs. A big percentage of those are hardware being not reliable.

Torvalds further mentioned that gamers who overclock get extra unreliability. Essentially, Torvalds believes that having ECC on a machine makes it more reliable and makes you trust your machine. Without ECC, the memory will go bad, sooner or later. He thinks that, more often than software bugs, it is hardware behind Microsoft's blue screen of death. I am no hardware expert, and even if I were, I could not disagree with the OG Linus.

The part where Torvalds talks about the Windows blue screen of death starts around 9:37. The ECC part comes just before that. The embedded video below starts at 9:37 for your convenience. If you have not already, do watch the full video. It is good to see the casual, fun, human side of one of the greatest computing legends alive, Linus Sebastian. Oops, sorry. Linus Torvalds 😀
-
Scrollytelling on Steroids With Scroll-State Queries
by: Lee Meyer Tue, 02 Dec 2025 16:47:14 +0000

Do you think of scrolling as a more modern way of reading than turning pages in a book? Nope, the concept originated in ancient Egypt, and it’s older than what we now classify as books. It’s based on how our ancestors read ancient physical scrolls, the earliest form of editable text in the history of writing. I am Jewish, so I remember my earliest non-digital scrolling experience was horizontally scrolling the Torah, which can be more immersive than traditionally scrolling a webpage. The physical actions to navigate texts have captured the imagination of many a storyteller, leading authors to gamify the act of turning pages and to create stories that incorporate the physical actions of opening a book and turning pages as part of the narrative. However, innovative experiences using non-standard scrolling haven’t been explored as thoroughly.

Photo by Taylor Flowe on Unsplash

I can sympathize with those who dismiss scrollytelling as a gimmick: it can be an annoyance if it’s just for the sake of cleverness, but my favorite examples I’ve seen over the years tell stories we couldn’t otherwise. There’s something uniquely immersive about stories driven by a mechanic that has lived in our species’ collective muscle memory since ancient days. Still unconvinced of the value of scrollytelling? Alright, hypothetical annoying skeptic, let’s first warm up with some common use cases for scroll-based styling.

Popular scroll-based designs we can simplify with modern CSS

It’s awesome that Chrome has solid support for native scroll-driven animations without requiring JavaScript, and both Safari and Firefox are actively working on support for the new scroll-driven standards. These new features facilitate optimized, smooth scroll-driven animations, and the pure CSS syntax makes scroll-driven animation a more approachable option for designers who may be more comfortable with CSS than with the equivalent JavaScript. Indeed, even though I am a full-stack developer who is supposed to know everything, I find that having scroll-driven animation built into the browser and available with a few lines of CSS gets my creativity flowing, inspiring me to experiment more than if I had to jump through the hoops of a proprietary library and write JavaScript, which in the past might have meant messing with Intersection Observer and other fiddly code.

If animation timelines weren’t enough, Chrome has now introduced support for CSS carousels, scroll-initial-target, and scroll-state queries—all of which provide opportunities to control scrolling behaviors in CSS and style all the things based on scrolling. In my opinion, scroll-state is more of an evolutionary than revolutionary addition to the growing range of scroll-related CSS features. Animation timelines are so powerful that they can be hacked to achieve many of the same effects we can implement with scroll-state queries. Therefore, think of scroll-state as a highly convenient, simplified subset of what we can do in more verbose, hacky ways with animation timelines and/or view timelines. Some examples of effects scroll-state simplifies are: Before scroll-state queries existed, you could hack view progress timelines to create scroll-triggered animations, but we now have snapped scroll-state queries to achieve similar effects. Before snapped queries existed, Bramus demonstrated a hack to simulate a hypothetical :snapped selector using scroll-driven animations.
Before scrollable queries existed, Bramus showed how we could do similar things using scroll-timeline. Take a moment to appreciate that Bramus is from the future, and to reflect on how scroll-state can simplify common UI patterns, such as scroll shadows, which Chris Coyier said might be his “favorite CSS trick of all time.” This year, Kevin Hamer showed how scroll-timeline can achieve scroll shadows in CSS with fewer tricks. It’s excellent, but the only thing better than clever CSS tricks is that scroll shadows no longer require a trick at all. Hacking CSS is fun, but there is something to be said for that warm fuzzy feeling that CSS was made just for your use case. This demo from the Chrome blog shows how scroll shadows and other visual affordances are easy to implement with scroll-state. But the popularity of Kevin’s article suggests that normal, sane people will gravitate to practical use cases for the new CSS scroll-based features. In fact, a normal and sane author might end the article here. Unfortunately, as I revealed in a previous article, I have been cursed by a spooky shopkeeper who sells CSS tricks at a haunted carnival, so I now roam the earth attempting the unthinkable with pure CSS.

Decision time

As you reach this paragraph in the article, you realize that when you scroll, it fast-forwards reality. Therefore, after we end the discussion of scroll shadows, the shadows swallow the world outside your window, except for two glowing words hovering near your house: CSS TRICKS. You wander out through your front door and meet a street vendor standing beneath the neon sign. The letters give her multiple shadows, as if she has thrown them down like discarded masks, undecided about which shade of night to wear. On the table before her lies a weathered scroll. It unrolls on its own, whispering misremembered fragments from a forgotten CSS-Tricks article: “A scroll trigger is a point of no return, like a trap sprung once the hapless user scrolls past a certain point.” The neon flickers like a glitch, revealing another of the shopkeeper’s faces: a fire demon doppelganger of yourself who is the villain of the CodePen we’ll descend into if you scroll further. “Will you continue?” the fire demon hisses. “Will you scroll deeper into the madness at the far edges of CSS?”

Non-linear scrollytelling

Evidently, you are game to play with fire, so check out the pure CSS experiment below, which demonstrates a technique I call “nonlinear scrollytelling,” in which the user controls the outcome of a visual story by deciding which direction to scroll next. It’s a scrolling Choose Your Own Adventure. But if your browser is less adventurous than you are, watch the screen recording instead. The experiment will only work on Chromium-based browsers for now, because it relies on scroll-state, animation-timeline, scroll-initial-target, and CSS inline conditionals.

CodePen Embed Fallback

I haven’t seen this technique in the wild, so let me know in the comments if you have seen other examples of the idea. For now, I’ll claim credit for pioneering the mechanics — but I give credit to the talented Dead Revolver for creating the awesome, affordable pixel art bundle I used for most of the graphics. The animated lightsaber icon was ripped from this cool CodePen by Ujjawal Anand, and I used ChatGPT to draw the climbable building.
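As an aside, if all you want is the scroll shadow pattern I mentioned a moment ago, a scroll-state query handles it in a handful of declarations. Here is a minimal sketch of my own (the class names are made up for illustration; this is not the Chrome team’s exact demo code):

.scroller {
  /* the scroller exposes its scroll state to container queries */
  container-type: scroll-state;
  overflow-y: auto;
  max-height: 20rem;
}

.scroller .top-shadow {
  /* a shadow pinned to the top edge of the scroller, hidden by default */
  position: sticky;
  top: 0;
  height: 0.75rem;
  opacity: 0;
  background: linear-gradient(to bottom, rgb(0 0 0 / 0.3), transparent);
  pointer-events: none;
  transition: opacity 0.3s;
}

/* when there is overflowed content above (the container can still scroll toward its top),
   fade the shadow in, with no JavaScript and no background-attachment trickery */
@container scroll-state(scrollable: top) {
  .top-shadow {
    opacity: 1;
  }
}

The scroller itself is the query container, and the sticky shadow element inside it only shows while there is hidden content above to hint at.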
To make the bad guy, I reused the same spritesheet from the player character, but I implemented the Mirror Match trope from Mortal Kombat, using color shifting to create a “new” character who I evilized by casting the following spell in CSS:

.evil-twin {
  transform: rotateY(180deg);
  filter: invert(24%) sepia(99%) saturate(5431%) hue-rotate(354deg) brightness(93%) contrast(122%);
  background-image: url(/* same spritesheet as the player character */);
}

It’s cool that CSS helps recycle existing assets for those like me who are drawing-challenged. I also wanted to make sure that well-supported CSS features like transform and filter didn’t feel left out of the fun in an experiment filled with newer, emergent CSS features. But if you’ve come this far, you’re probably eager to understand the scroll-related CSS logic.

Our story begins in the middle of the end

You may have noticed our experiment earns extra crazy points as soon as it loads, by starting at the middle of the bottom of the page so that the player can choose whether to scroll left to run away, or scroll right to walk unarmed towards the bad guy if the player wants to compete with the madness level of the game’s creator. This explainer for the emergent scroll-initial-target property shows that controlling scroll position on load was previously possible by hacking CSS animations and the scroll-snap-align property. However, similar to what we discussed above about the value proposition of scroll-state, a feature like scroll-initial-target is exciting because it simplifies something that previously required verbose, fragile hacks, which can now be replaced with more succinct and reliable CSS:

.spawn-point {
  position: absolute;
  left: 400vw;
  scroll-initial-target: nearest;
}

As cool as this is, we should only subvert expectations for how a webpage behaves if we have a sufficient reason. For instance, CSS like the above could have simplified my pure CSS swiper experiment, but Chrome only added scroll-initial-target in February 2025, the month after I wrote that article. Using scroll-initial-target would be justified in the swiper scenario, since the crux of that design was that the user started in the middle with the option to swipe left or right. A similar dilemma is central to the opening of our scrollytelling narrative. The disorienting experience of finding ourselves in an unexpected scroll position with only the option to scroll horizontally heightens the drama, as the user has to adapt to an unusual way of interacting while the bad guy rapidly approaches. I’m feeling generous, so let’s give the user 20 seconds to figure it out, but you can experiment with different timeframes by editing the --chase-time custom property at the top of the source file.

We’re going to create a CSS implementation of the slasher movie trope in which a walking aggressor can’t be outrun. We do that by marking the bad guy as position: fixed, then adding an infinite walk-cycle animation and another animation that moves him relentlessly from right to left across the screen. Meanwhile, we give the player character a running animation and position him based on a horizontal animation timeline. He can run, but he can’t hide.
body {
  .idle {
    animation: idleAnim 1s steps(6) infinite;
  }

  /* --scroll-direction is populated using the clever property Bramus demonstrates here
     https://www.bram.us/2023/10/23/css-scroll-detection */
  .sprite {
    transform: rotateY(calc(1deg * min(0, var(--scroll-direction) * 180)));
  }

  @container not style(--scroll-direction: 0) {
    .sprite {
      animation: runAnim 0.8s steps(8) infinite;
    }
  }

  .evil-twin-wrapper {
    position: fixed;
    bottom: 5px;
    z-index: 1000;
    margin-left: var(--enemy-x-offset);
    /* we'll explain later how we detect the way the game should end */
    --follow: if(style(--game-state: ending): paused; else: running);
    animation: var(--chase-time) forwards linear evil-twin-chase var(--follow);
  }
}

He can’t hide, but we’ll next introduce a second scroll-based decision point using scroll-state to detect when our hero has been backed into a corner and see if we can help him.

How scroll-state could save your life

As our hero runs away to the left, the buildings and sky in the cityscape background show off a few layers of parallax scrolling by assigning each layer an anonymous animation timeline and an animation that moves each layer faster than the layer behind it.

.sky,
.buildings-back,
.buildings-mid,
.sky-vertical,
.buildings-back-vertical,
.buildings-mid-vertical {
  position: fixed;
  top: 0;
  left: 0;
  width: 800%;
  height: max(100vh, 300px);
  background-size: auto max(100vh, 300px);
  background-repeat: repeat-x;
  animation-timing-function: linear;
  animation-timeline: scroll(x);
}

/*...repetitively assign the corresponding animations to each layer...*/

@keyframes move-sky {
  from { transform: translateX(0); }
  to { transform: translateX(-2.5%); }
}

@keyframes move-back {
  from { transform: translateX(0); }
  to { transform: translateX(-6.25%); }
}

@keyframes move-mid {
  from { transform: translateX(0); }
  to { transform: translateX(-12.5%); }
}

This usage of animation timelines is what they were designed for, which is why the code is straightforward. If we had to, we could push the boundaries and use the same technique to set a Houdini variable in an animation timeline to detect when the player reaches the left corner of the screen — but thanks to scroll-state queries, we have a cleaner option.

@container scroll-state((scrollable: left)) {
  body {
    overflow-y: hidden;
  }
}

@container scroll-state((scrollable: bottom)) {
  body {
    width: 0;
  }
}

That’s all we need to toggle vertical and horizontal scrolling based on position! This is the basis that allows the player to escape from being slashed by the bad guy. Now we can scroll up and down to climb the ladder only when the player reaches the left corner where the ladder is, and disallow horizontal scrolling while he is climbing. I could have made the game detect reaching the left of the screen using animation timelines, but that would involve custom property toggles, which are more verbose and error-prone. When the player climbs to the top of the ladder to collect the lightsaber, we do need one toggle property so the game will remember we have collected the weapon, but it’s simpler than if we had used animation timelines.
@keyframes collect-saber {
  from { --player-has-saber: false; }
  to { --player-has-saber: true; }
}

body {
  animation: .25s forwards var(--saber-collection-state, paused) collect-saber;
}

@container scroll-state(not (scrollable: top)) {
  body {
    --saber-collection-state: running;
  }
}

@container style(--player-has-saber: true) {
  .sprite {
    background-image: url(/*combat spritesheet*/);
  }
  .lightsaber {
    visibility: hidden;
  }
}

Contrariwise, the animation cycle while the sprite is climbing the ladder is a job for animation-timeline, used to assign an anonymous vertical timeline to the player sprite. This is applied conditionally when our scroll-state query detects that the player is between the bottom and the top of the ladder. It’s a nice example of how animation timelines and scroll-state queries are good at different things, and work well together.

@container scroll-state((scrollable: top) and ((scrollable: bottom))) {
  .player-wrapper {
    .sprite {
      animation: climbAnim 1s steps(8);
      animation-timeline: scroll(root y);
      animation-iteration-count: 10;
    }
  }
}

Finish him with fatal conditionality

We apply the techniques I discovered in my CSS collision detection article to detect when the two characters meet for their showdown. At that point, we want to disable scrolling entirely and display the appropriate non-interactive endgame cutscene depending on the choices our user made. Notice that if we detect the good guy won, he only strikes with the sword once, whereas the bad guy will continue to slash infinitely, even after the good guy is dead. What can I say — I was working on this CodePen around Halloween. In the past, I wrote an article questioning the need for inline CSS conditionals — but now that they’ve landed in Chrome, I find them addictive, especially when creating a heavily conditional CSS experiment like nonlinear scrollytelling. I like to imagine that the new if() function stands for Interactive Fiction. Below is how I detect the endgame conditions and choose which animations to play in the final cutscene. I am not sure of the most readable way to space out if() code in CSS, so feel free to start holy wars on that topic in the comments.

body {
  --min-of-player-and-enemy-x: min(var(--player-x-offset), var(--enemy-x-offset) - 10px);
  --max-of-player-and-enemy-y: max(var(--player-y-offset, 5px));
  --game-state: if(
    style(--min-of-player-and-enemy-x: calc(var(--enemy-x-offset) - 10px)) and
    style(--max-of-player-and-enemy-y: 5px): ending;
    else: playing
  );
  overflow: if(
    style(--game-state: ending): hidden;
    else: scroll
  );
}

@container style(--player-has-saber: true) and style(--game-state: ending) {
  .player-wrapper {
    .sprite {
      animation: attack 0.7s steps(4) forwards;
    }
    .speech-bubble {
      animation: show-endgame-message 3s linear 1s forwards;
      &::before {
        content: 'Refresh the page to play again';
      }
    }
  }
  .evil-twin-wrapper {
    .evil-twin {
      animation: evil-twin-die 0.8s steps(4) .7s forwards;
    }
  }
}

@container style(--player-has-saber: false) and style(--game-state: ending) {
  .player-wrapper {
    .sprite {
      animation: player-die .8s steps(6) .7s forwards;
    }
  }
  .evil-twin-wrapper {
    .speech-bubble {
      animation: show-endgame-message 3s linear 1s forwards;
      display: block;
      &::before {
        content: 'Baha! Refresh the page to fight me again';
      }
    }
    .evil-twin {
      animation: attack 0.8s steps(4) infinite;
    }
  }
}

Should we non-linearly scrollytell all the things?

I am glad you asked, hypothetical troll who wrote that heading. Of course, even putting the technical challenges aside, you know that this won’t always be the right approach for a website.
As Andy Clarke recently pointed out here on CSS-Tricks, design is storytelling. The needs of every story are different, but I found my little pixel art guy’s emotional story arc requires non-linear scrollytelling. I think this particular example isn’t a gimmick and is a legitimate form of web design expression. The demo tells a simple story, but my wife pointed out that a personal situation I am dealing with has strong analogies to the pixel guy’s journey. He finds himself in a situation where the only sane option is to allow himself to be backed into a corner, but when all seems lost, he finds a way to rise above the adversity. Then he learns that the moral high ground is its own form of trap, so he must put his own spin on the wisdom of Sun Tzu that “to know your enemy, you must become your enemy.” He apparently lowers himself back to the aggressor’s level — but he only does what is necessary. The bittersweet moral is that survival sometimes requires taking a leaf out of the enemy’s book — but the user has been guiding the hero through this story, which helps the audience to understand that the good guy’s motivations are not comparable to those of his adversary. While testing the CodePen, I found the story moving and even suspenseful in an 8-bit nostalgia kind of way, even if some of that suspense was my uncertainty about whether I would get it working.

From a technical point of view, I think building a full-scale website based on this idea would require a mix of CSS and JavaScript, because storing state in CSS currently requires hacks (like this one, which is cool but also highly experimental). The paused animation approach to remember that the player collected the sword can glitch due to timer drift, so there is a small chance the dude will start the game with the lightsaber already in his hand! If you resize the window during the endgame, you can glitch the game, and then things get really weird. By contrast, something like the scroll snap events — already supported in Chrome — would allow us to store state and even play sounds using a script that fires based on scroll interactions. It seems like we already have enough in CSS to build a site like this one, which uses horizontal multimedia scrollytelling to raise awareness that interpersonal violence exists on a continuum and tends to escalate if the target is unable to recognize the early warning signs. That’s a worthy topic I unfortunately have some experience with, and the usage of horizontal scrollytelling to address it demonstrates that a wide variety of stories can be told engagingly through scrollytelling.
-
Google's New AI Tool Solves a Problem for Every Lazy Developer
by: Sourav Rudra Tue, 02 Dec 2025 11:17:07 GMT

Back on November 13, Google launched Code Wiki in public preview. The platform automatically generates and maintains documentation for code repositories using Gemini. The tool addresses what Google calls software development's biggest, most expensive bottleneck: "reading existing code".

In simple terms, Code Wiki keeps documentation constantly updated as the codebase develops. Instead of static content that becomes outdated, it serves as a living wiki that evolves with every code change. Here's what it brings to the table. 👇

Code Wiki: Ciao Manual Documentation?

Code Wiki creates interactive documentation that links high-level explanations directly to specific code files, classes, and functions. Every wiki section is hyperlinked to relevant code files and definitions, merging reading and exploring into a relatively simple workflow. The platform scans the full codebase and creates fresh documentation after each change. It can automatically generate architecture diagrams, class diagrams, and sequence diagrams that change with the code. A Gemini-powered chat agent is also built in; it uses content from the up-to-date wiki as context to answer specific questions about the code repository.

An Early Preview

Code Wiki does a surprisingly good job! I tested it by searching for the Kubernetes repository. It displayed a detailed page with video, diagrams, and text explanations of the project structure. After that, I asked the integrated Gemini chat a basic question about what the repository contains. It listed everything cleanly and organized the information in a way that made sense (using bullet points and so on).

If you want to check it out yourself, the Code Wiki website is already live as a public preview. It should work well for searches of public repositories.

Code Wiki (public preview)

Google is also developing a Gemini CLI extension for private repositories, where developers and teams will be able to run Code Wiki locally on their internal codebases. It is not live yet, but you can join the waitlist to get access when it launches.
-
Not Every Browser is Built on Chrome: Explore These Firefox-based Options
by: Pulkit Chandak Tue, 02 Dec 2025 10:54:21 GMT

Chrome is undoubtedly the most popular browser on the market. Backed by Google and preinstalled on most Android devices, which have the largest smartphone market share, Chrome checks a lot of boxes and makes it immensely easy to sync your browsing data across devices. There are, however, some caveats. Even though it is based on the open source Chromium project, Chrome has been under fire again and again over the years because of privacy concerns.

The biggest alternative available to us is Firefox. But not everyone is a fan of Firefox either. It could be the user interface, it could be a dislike for the Mozilla Foundation (don't be surprised, some people do dislike it), it could be some other reason. Yet, if you want a browser that is neither Chromium based nor stock Firefox, how about trying some Firefox-based browsers? Let me list the best available options for you.

1. LibreWolf

LibreWolf takes up the ambitious goal of removing all sorts of security issues from the usual Firefox. It does so first off by removing all telemetry, including all experimental data surveys. It provides privacy-oriented search engines by default, like DuckDuckGo, Searx, and Qwant. LibreWolf also includes uBlock Origin by default to block ads and tracking. Other than that, it disables Firefox Sync, as some people don't want cloud-based sync either. Altogether, you get a more Free Software Foundation-styled open source version of Firefox, meaning it hews closer to the core principles of open source.

LibreWolf Browser

2. Waterfox

A similar option to LibreWolf, Waterfox offers extra privacy features. It comes with tracking protection by default, with things like Oblivious DNS, which makes it harder for the ISP to track online activity. It allows you to open private tabs within the window of the regular tabs, making it easier to access the non-recording privacy measures. Correct me if I am wrong, but I don't think Firefox has this option, at least by default. Telemetry, obviously, is disabled except for the bare minimum necessary for browser updates. It also provides clean link sharing, stripping links of their tracking parameters, along with privacy-friendly defaults for search engines. Other than the privacy features, Waterfox offers ease-of-use features such as smooth import from existing Firefox accounts, vertical tabs, container tabs for tab grouping, and so on. Waterfox is beautifully designed, highly customizable, and delivers on its privacy promises, for which reasons it has been quite well reviewed.

Waterfox Browser

3. Zen Browser

Zen Browser has gained quite some popularity in the last couple of years, and for good reason. It features an iconic vertical tab sidebar, with workspace support and a compact mode for less visual clutter. Talking about cosmetic changes, it gives you heavy customizability for the browser themes, with gradients, textures, and colors. It even offers community themes via the Themes Store. For easier access to different tabs, it offers a split view, in which you can split the browser window to include multiple tabs side-by-side for multitasking. You'll love it if you have an ultrawide monitor. Other than that, it also has privacy features with minimal telemetry, and offers a "calm" internet experience. I've used Zen personally for a stretch of time, and I had no complaints with it except for a slightly longer start-up time, which might not necessarily be the case for you (because my teammates at It's FOSS disagree).
Zen Browser

4. Tor Browser

Tor Browser has its roots in the privacy business, with the onion routing protocol. Onion routing started off in the U.S. Naval Research Lab, where it was designed primarily for isolated, secure transmission of data. In simple words, the onion routing protocol works like the layers of an onion: data is sent through network nodes called "onion routers", each of which peels away a single layer, revealing the next destination of the data, essentially making it harder to track where the data is coming from or going to next. Each intermediary knows only the location of the next router, and no more.

Coming to the actual browser, it is pretty much a Firefox clone with multiple security levels, fingerprinting protection, and censorship circumvention using ".onion" sites. The security is so good that it has historically been used for whistleblowing purposes and anonymously published journalism, such as WikiLeaks. Due to the intricate protocol, however, it also tends to be quite slow. A number of convenience features, especially ones that require geolocation, tend to work poorly on Tor. Basically, use Tor Browser when you need extra privacy, but it may not be suitable for your regular day-to-day web browsing activity.

Tor Browser

5. Mullvad Browser

Mullvad Browser is often described as Tor without the Tor protocol. It has proper private browsing by default, and an interesting and strong anti-fingerprinting policy. Mullvad makes all users seem exactly the same in terms of identification parameters such as window sizes, fonts, and so on, to prevent special identification of any one user. Mullvad also has built-in ad and telemetry blocking. Mullvad insists on pairing the browser with a VPN service, even offering one themselves (which is not free, but is affordable at $5-6 a month). It is configured with DNS-over-HTTPS (DoH) by default, which reduces the chances of DNS leaks and improves privacy. Some UI changes make it a little inconvenient, such as locked window sizes, but otherwise, it seems identical to Firefox and works just as well.

Mullvad Browser

6. Floorp Browser

Floorp's major selling point is customization. The top bar can be moved, the title bar can be hidden, vertical-style and tree-style tabs are available, and it offers a wide array of UI themes. Floorp offers many more interesting, unique features. Some of those are containerized workspaces, where each workspace can have its own login information and settings. It also offers split tabs and custom mouse gestures. You can also use web apps on an internal Floorp Hub. It offers built-in note-making integration, which can be really useful for productivity work. Because of this unique set of features, Floorp can be really great for productive work, and the customization options further help the cause. If you're looking for a Vivaldi-like option but in Firefox land, Floorp is a good choice.

Floorp Browser

7. Pale Moon

Pale Moon is one of the earliest forks of Firefox. They replaced the Gecko engine with a fork called Goanna. It has the interface of traditional Firefox, which works well but surely looks dated. Of course, you can customize almost every aspect of the browser. It claims no data collection and telemetry, making it a great choice for privacy as well. Another unique point here is that Pale Moon supports some obsolete web technologies and legacy Firefox plugins, which is handy if you have to use legacy plugins or you are nostalgic about Firefox before the Quantum shift.
Pale Moon Browser

Conclusion

All in all, there are several Firefox-based browsers, and each of them offers something special, something different. Zen and Floorp have been creating quite a bit of buzz recently due to their interesting features, often being considered better than Firefox for both productivity as well as privacy. Let us know in the comments which Firefox-based browser is your favorite. Cheers!
-
ONLYOFFICE Docs 9.2 Release Brings AI Grammar Checks to the Free Office Suite
by: Sourav Rudra Tue, 02 Dec 2025 09:07:48 GMT

ONLYOFFICE continues to offer a compelling alternative to proprietary suites like Microsoft Office, with strong document format compatibility and a privacy-respecting approach. The open source suite has successfully built a reputation for combining professional features with user data protection. Now, the developers have released ONLYOFFICE Docs 9.2, building on the work of the earlier release. Let's dig in! 🤓

🆕 ONLYOFFICE Docs 9.2: What's New?

It is now possible to customize keyboard shortcuts to suit your workflow needs. To do that, open the File tab and head into the Advanced Settings, where you will find options to reconfigure the shortcuts. This should come in really handy if you have spent years on a certain office suite and want to replicate your workflow on ONLYOFFICE. Power users who rely heavily on keyboard navigation also benefit from this.

Next up is the new macro recording feature, which helps automate repetitive tasks that slow down your workflow. Head to the View tab and click "Record macro" to start capturing your actions.

Left to right: custom keyboard shortcuts, macro recording, and AI spell check.

With this, the editor records your sequence of steps, whether you are applying consistent formatting across multiple sections, performing data manipulation, or executing any other routine operation. Once saved, you can replay the macro whenever needed.

Grammar and spelling checks get an AI upgrade through the ONLYOFFICE AI plugin. With the plugin set up, you can trigger the feature from the AI tab in the toolbar or via the right-click context menu. The tool scans your text and offers correction suggestions, typically including explanations for the recommendations. You have the option to check your full document or just selected portions. Every suggestion appears for individual review, so you decide which corrections to implement.

Left to right: form field role assignment and colored PDF redactions.

The Form Editor gets usability improvements in this release. You can now add descriptive text labels to checkboxes and radio buttons, making it clearer what each option represents when someone fills out your form. Additionally, when you are inserting new fields into a form, you can assign specific roles to them during the creation process. This is useful for collaborative environments where different team members need to fill in different parts of the same form.

📥 Download ONLYOFFICE Docs 9.2

Self-hosting users of ONLYOFFICE can get the latest packages from the official website. For the rest of us, the desktop editors will be receiving this update very soon. The changelog is also a handy resource if you want to learn more about this release.

ONLYOFFICE Docs 9.2

Suggested Read 📖
6 Best Open Source Alternatives to Microsoft Office for Linux
Looking for Microsoft Office on Linux? Here are the best free and open-source alternatives to Microsoft Office for Linux.
-
Chris’ Corner: Web Components
by: Chris Coyier Mon, 01 Dec 2025 18:25:26 +0000

I’d never heard of a CEM before. That’s a “Custom Elements Manifest” or a custom-elements.json file in your project. It’s basically generated documentation about all the web components your project has. Dave calls them the killer feature of web components. I love the idea of essentially getting “free” DX just by generating this file. I particularly like the language server idea so that code editors can offer all the fancy autocomplete and linting for your bespoke elements.

Sometimes web components seem so practical and straightforward, like Eric Meyer’s recent <aside-note>. It yanks out some text into another element that gets positioned somewhere new, if a few media queries pass. And sometimes web components scare me, like when you read advice to make sure to have an asynchronous promise-resolving disconnectedCallback that is a mirror image of your connectedCallback because you can’t predict how the DOM will be changed. Maybe it’s best to roll up your sleeves and write your own define functions.

Let’s just take a quick stroll through some web components I saw people writing about recently. I do find it significant there is a low constant simmer of web component writing/sharing like this. Lea Verou made a <bluesky-likes> component. I gave it a whirl just for fun. Zach Leatherman took the Web Awesome copy button and incorporated it into the Eleventy docs in his own way, with all the thinking toward performance and progressive enhancement as you’d expect. Pontus Horn got all declarative with ThreeJS thanks to web components.
-
Prevent a page from scrolling while a dialog is open
by: Geoff Graham Mon, 01 Dec 2025 17:25:28 +0000

Bramus: YES! Way back in 2019, I worked on “Prevent Page Scrolling When a Modal is Open” with Brad Wu about this exact thing. Apparently this was mere months before we got our hands on the true HTML <dialog> element. In any case, you can see the trouble with active scrolling when a “dialog” is open:

CodePen Embed Fallback

The problem is that the dialog itself is not a scroll container. If it were, we could slap overscroll-behavior: contain on it and be done with it. Brad demoed his solution, a JavaScript-y approach that sets the <body> to fixed positioning when the dialog is in an open state:

CodePen Embed Fallback

That’s the tweak Bramus is talking about. In Chrome 144, it’s no longer needed. Going back to that first demo, we can do a couple of things to avoid all the JS mumbo-jumbo. First, we declare overscroll-behavior on both the dialog element and the backdrop and set it to contain:

body {
  overscroll-behavior: contain;
}

#dialog {
  overscroll-behavior: contain;
}

You’d think that would do it, but there’s a super important final step. That dialog needs to be a scroll container, which we can do explicitly:

#dialog {
  overscroll-behavior: contain;
  overflow: hidden;
}

Chrome 144 needed, of course:

CodePen Embed Fallback

The demo that Bramus provided is much, much better, as it deals with the actual HTML <dialog> element and its ::backdrop:

CodePen Embed Fallback
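For reference, here is roughly what that looks like when pointed at the real element and its backdrop. This is a minimal sketch of my own for Chrome 144+, not Bramus’ exact demo code (the max-height is an arbitrary choice):

dialog {
  /* make the dialog itself a scroll container so overscroll-behavior has something to latch onto */
  overflow: auto;
  max-height: 80vh;
  /* stop scroll chaining from the dialog to the page behind it */
  overscroll-behavior: contain;
}

dialog::backdrop {
  /* wheel and touch gestures over the backdrop shouldn't scroll the document either */
  overscroll-behavior: contain;
}

Same idea as the demo above, just applied to the native <dialog> and its ::backdrop instead of a stand-in.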
-
"Less Bugfixing Noise": Last Kernel Release of 2025 is Here and it Could be an LTS
by: Sourav Rudra Mon, 01 Dec 2025 10:53:54 GMT

More than two months since the last version, a new Linux release has been introduced, offering, as usual, better hardware support and many new additions covering a broad range of subsystems. As with every development cycle, work from thousands of contributors has brought incremental improvements across CPUs, GPUs, storage, networking, and security. Linus Torvalds had this to say about the release:

So I'll have to admit that I'd have been happier with slightly less bugfixing noise in this last week of the release, but while there's a few more fixes than I would hope for, there was nothing that made me feel like this needs more time to cook. So 6.18 is tagged and pushed out.

📋 This coverage is based on the detailed reporting from Phoronix.

Linux Kernel 6.18: What's New?

If things go as planned, Linux kernel 6.18 is expected to become 2025's long-term support (LTS) release. Users can expect longer maintenance, steady security fixes, and a stable base that many distributions could rely on for years. This release continues the kernel's longstanding focus on supporting the latest hardware from vendors like Intel, AMD, Arm, NVIDIA, and others. Many architecture-specific tweaks, driver updates, and power-management refinements land in this cycle.

Intel Upgrades

We kick things off with display support for Intel's upcoming Wildcat Lake series of CPUs, which targets budget laptops and mini PCs and includes an integrated GPU for handling graphics and video tasks. The release also adds the Panther Lake SoC Power Slider, allowing users of this platform to choose from one of three power profiles: "low-power", "balanced", and "performance". The Intel P-State driver has been updated, allowing it to enable Hardware P-States (HWP) without Energy Performance Preference (EPP) when the new Dynamic Efficiency Control (DEC) hardware feature is enabled. There's also support for Intel TDX with kexec, with the kernel now handling memory correctly for TDX workloads. Some early Xeon Sapphire Rapids processors with known hardware issues are not supported.

New Device Tree Inclusions

Linux 6.18 adds device trees for Arm C1, Apple M2 Pro, M2 Max, and M2 Ultra chips. The Apple-related work is tied to Asahi Linux's efforts and brings better support for high-end Apple Silicon Macs into the mainline kernel. Several Snapdragon X1 laptops now have mainline support: Dell Inspiron 14 Plus, Dell Latitude 7455, HP OmniBook X14, and Lenovo ThinkBook T16. Owners of these machines should see improved Linux compatibility. The SiFive HiFive Premier P550 RISC-V development board finally gets mainline support too.

AMD Refinements

Similarly, this kernel release adds support for what's most likely AMD EPYC Venice processors with 16-channel memory support. The AMD64 EDAC driver now recognizes these chips along with what appears to be the EPYC 8004 successor. Virtualization improvements include Secure AVIC for SEV-SNP virtual machines, providing better security and performance. And, to round out this section, the firmware bug affecting VMs with more than 255 vCPUs is fixed. CPU topology detection now works correctly for large virtual machines on AMD EPYC servers.

Tyr Driver for ARM Mali GPUs

Linux kernel 6.18 gets the Tyr driver, bringing Rust-based GPU support for Arm Mali CSF GPUs. Jointly developed by Collabora, Google, and Arm, it is a Rust port of the Panthor driver. The driver remains experimental.
It can power up the GPU, query hardware metadata through MMIO, and provide metadata to userspace via the DRM device ioctl. Future releases are set to expand functionality toward becoming a full Panthor replacement.

Yoohoo! There is a New Rust-Based GPU Driver for Linux in Development
That's quite a surprising development, I must say.

Storage Improvements

As expected, Bcachefs is out of the mainline kernel. After marking it "externally maintained" in 6.17, Linus Torvalds has now removed the code entirely. Users need the DKMS module going forward. Btrfs gets a nice speed boost for read-heavy workflows. The file system avoids locking contention when searching checksums, cutting sync times from minutes to seconds in some cases. Initial support for block sizes larger than the kernel page size also arrives, though it is experimental with several limitations. XFS enables online filesystem checking by default. The feature has been tested for a year without problems, so it is now standard. Some old deprecated mount options are also gone.

New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash
Things have taken a bad turn for Bcachefs as Linux supremo Linus Torvalds is not happy with their objections.

Miscellaneous Changes

Rounding out this kernel release are the following additions:

Initial support for those fancy haptic touchpads found on newer laptops.
The kernel can now detect and use MIPS vendor extensions for RISC-V chips.
The Loongson Security Engine chip is now supported, along with its associated driver stack.
UDP receive performance has been improved under heavy load, especially during high-rate or DDoS-like traffic.
Starting with this release, the nouveau driver now defaults to using the GSP firmware on supported NVIDIA GPUs.

Installing Linux Kernel 6.18

Those on rolling release distributions (like Arch Linux), Fedora, and any of its derivatives will be able to take advantage of this kernel very soon. For those on other distributions, you have two options: wait for your distro's official release of Linux kernel 6.18, or manually install the latest mainline kernel yourself. That said, I don't recommend this for new or regular users, as it carries a certain degree of risk. If you do choose this route, backing up your data beforehand is essential.

Linux Kernel 6.18

Suggested Read 📖
Install the Latest Mainline Linux Kernel Version in Ubuntu
This article shows you how to upgrade to the latest Linux kernel in Ubuntu. There are two methods discussed. One is manually installing a new kernel and the other uses a GUI tool providing an even easier way.
-
Avoid These 10 Mistakes for an Efficient, Enjoyable, and Safe Homelab
by: Umair Khurshid Mon, 01 Dec 2025 16:19:53 +0530

Mistakes are part of the learning process, but in homelabbing, they can be not only costly but also time-consuming and take the fun out of it. Over the years, I have made countless mistakes that cost me time, energy, and a lot of sweat. It's easy to lose the joy of homelabbing after that. To ensure your pursuit of new knowledge doesn't falter due to unforeseen obstacles, I sat down and wrote this comprehensive article. I will give you a behind-the-scenes look and share my personal insights. Find out where I wasted a lot of money and where more thoughtful planning would have saved me from a lot of guilt.

1. Overly ambitious planning

When I first decided to set up a homelab, my ambitions were enormous. In my mind's eye, I already envisioned a server rack crammed full of enterprise hardware: dozens of high-performance rack servers, a high-performance 10 Gbps switch stack, and a fully-fledged firewall failover with dual WAN connectivity. I won't even get started on the storage cluster. Later, when I started researching, I quickly realized how expensive even old hardware can be, and that a server rack also costs a fortune, especially in the Global South. Ultimately, I abandoned my plans and wasted valuable time that I could have better used for my professional development.

Photo by Taylor Vick / Unsplash

And let's be honest, nobody needs a 42U rack full of enterprise hardware at home to improve their admin or DevOps skills. In retrospect, a simple thin client like the HP T630 with Proxmox and three virtual machines would have sufficed back then. A managed switch would have been completely unnecessary as well, since my provider's box couldn't handle VLANs anyway. Because of such overambitious expectations, my project simply didn't get off the ground for a long time. So don't make the mistake of trying to start with something far too big. These days, it's perfectly possible to run a virtual Kubernetes cluster or a dozen virtual servers on a powerful mini PC. Even running a pure Docker host is a cost-effective way to get started. After all, virtually all homelab services are available in containerized form.

At the moment, my homelab mixes compact nodes with heavier hardware that carries the demanding workloads, and each system has settled into a role that matches its strengths. The Minisforum NAB6 Lite with an Intel Core i5 12600H handles most of the always-on services, including Docker stacks, light Kubernetes testing, and a few monitoring agents. The MinisForum UM250 with a Ryzen 5 Pro acts as a dependable secondary node where I spin up short-lived VMs, test new tools, and run anything experimental that I do not want on the main box. The EliteMini HM90 has become my remote Steam machine for indie gaming and doubles as a small GPU-capable host for media tasks when needed. These three mini PCs save me a lot of space and electricity cost. I also have an ACEPC AK1, whose Intel Celeron J3455 fills the role of a low-power utility system for status dashboards, uptime monitoring, and a couple of background scripts that do not need real compute. A Synology DiskStation DS925 Plus handles shared storage, snapshots, and offloading backups for the entire lab, and a Cisco switch keeps the network organized and gives me the VLAN separation I need. The monstrous HP Z640 with dual Xeon E5 2680 v4 processors anchors the setup by taking on Proxmox, but I will soon replace it with something that consumes less power and isn't as noisy.
For networking, I rely on pfSense, which runs on a barebones box that I got from AliExpress.

How my homelab looks these days

📋 Start small. Start with existing hardware at hand. Don't burn money at the beginning.

2. Using only open source software

I am all for open source, and Proxmox is my hypervisor of choice. On the surface, there's nothing wrong with this approach, and you might be wondering why I included it in the list. The problem is, most companies don't rely on an open source technology stack. In practice, you are more likely to find VMware vSphere as their virtualization solution. In most cases, no one is familiar with the Proxmox Backup Server either. Veeam, on the other hand, is known to almost everyone. Therefore, your homelab should also include a traditional enterprise environment. This offers the advantage of creating synergies. Update processes are already familiar from your own lab, and you can calmly test a wide variety of configurations there. You might argue that such software is extremely expensive and simply unaffordable. However, that's not the case:

NFR (Not-for-Resale) licenses: These are licenses used by vendors to authorize content creators, partners, and others to use their software. This is a great source of free software. If you like a particular piece of software or solution, contact the vendor and check if they offer such licenses that you can use.

VMUG Advantage: When I first started running VMware vSphere at home, I wasn't really aware of VMUG Advantage. It's a subscription that allows you to use the full VMware software suite in your home lab for a year without any restrictions. It's not cheap, but certainly reasonable if you are thinking of a career and are not just a hobbyist.

📋 If you are using enterprise software at work, you can use its freemium version in your homelab. It reduces the learning curve compared to picking up a new (open source) tool.

3. Inadequate planning

Those who don't think things through sufficiently may find themselves having to purchase additional hardware more frequently. Thinking back to the beginning of my homelab, the 8-port switch was too small after only 3 months, the Raspberry Pi 4 was too underpowered after just a few days, and the inexpensive firewall couldn't route more than 500 Mbps of traffic. Then came the first Proxmox cluster with two nodes and a NAS as shared storage. The latter was only connected at 1 Gbps, which resulted in incredibly poor performance for my VMs, and the former immediately went into read-only mode whenever a node restarted. When I added another QDevice, I first had to replace the cheap triple power strip.

The NAS in question

You don't need to start big, but accurately assessing your needs is crucial. Just jumping in isn't a good idea. Sit down and think carefully about what you want to implement after completing the smaller projects, because inexpensive server hardware quickly reaches its performance limits, the backup pool fills up faster than expected, and cheap SSDs in the servers tend to fail under multiple simultaneous random write operations. As a result, everything has to be replaced and the setup rebuilt from scratch. This not only wastes valuable time but also incurs unnecessary expenses.

📋 If you have to buy hardware at the beginning, assess your needs. What you buy today may not be sufficient in a few months.

4. Forgoing backups

A home lab serves as a learning environment for new skills and is therefore often reconfigured regularly. In this process, backups are often considered less important and deemed unnecessary.
However, many forget the backbone of their network, namely the firewall, switch, wireless access point, and smart home hub. A lack of backups in these areas can have devastating consequences in a worst-case scenario. Even a malfunctioning DNS ad blocker like the popular Pi-hole or the more modern AdGuard Home can cripple the entire home network. Therefore, it's advisable to create backups, and the built-in tools of these services are sufficient for this. Exporting configurations is possible almost everywhere.

Users with ample storage space who are running Proxmox can utilize the integrated backup function. However, this continuously creates full backups, which can quickly deplete available storage space with frequent backups. A more efficient solution is offered by the Proxmox Backup Server, which enables incremental backups. Furthermore, PBS has a very practical feature: you can view and even download files in the graphical user interface, just like in a file browser. Additionally, there's the Proxmox Backup Server client tool, which allows you to back up physical servers, making it particularly useful for the PVE hypervisor itself. Veeam, especially the free edition, also proves to be a powerful backup solution and is definitely worth considering. However, the number of backup clients is limited. Furthermore, Synology NAS systems offer a great built-in solution with Active Backup for Business. Those who prefer can also consider cloud services like the Hetzner Storage Box. Since this is only cloud storage, you still need software to perform backups.

Ultimately, the question remains: what exactly needs to be backed up? In practice, you often hear the term "critical infrastructure." In my opinion, the following systems in a home lab fall into this category:

Virtual machines
Containers (LXC, Docker, or K8s)
Switch/firewall configurations
Hypervisor settings

You should also consider the following:

Offsite backups: Carefully consider which backups or configuration files are essential for you. It makes perfect sense to also move a selection of your backups to the cloud. This might seem paranoid at first. But consider this: What good is a nightly backup on your own NAS if a lightning strike destroys both the hypervisor and the network storage?

Storage replication: Do you have a central storage device that provides the majority of the storage for your network? Then you should back it up as well. RAID is not a substitute for backups. Although RAID offers higher availability and redundancy, it doesn't protect against accidental deletion, corruption, or a disaster that takes out the whole machine. The simplest way to implement a backup is to run an rsync job from the primary to the secondary storage once a night.

📋 Have multiple backup copies. And no, RAID is not backup.

5. Everything in one network

Combining all smart home devices, network storage, and servers into a single network introduces numerous problems. Network performance suffers significantly, especially when VoIP, video streaming, and storage replication are running simultaneously on the same network. Configuring Quality of Service (QoS) alone no longer provides a solution. Such a network design has a very negative impact on time-critical applications like VoIP or online gaming. Therefore, I recommend using VLANs, not least because they simplify management and improve security. Guests can be isolated in a separate VLAN, while the management VLAN is only accessible to specific devices. Furthermore, I am a big fan of operating smart home devices in a dedicated IoT network.
This allows for better monitoring of their traffic and enables a quick response to any data privacy concerns. Moreover, IoT devices are a frequent target for attacks, so they must not be allowed to connect to sensitive areas of my network. Security is my top priority. Furthermore, I see the use of VLANs as an excellent opportunity for system administrators to deepen their firewall and switching skills. A solid understanding of network traffic will be important sooner or later, even if it's just the connection between the load balancer and the backend. The following structure is recommended as a general guideline:

LAN: A dedicated VLAN for normal home network traffic. This could include laptops, PCs, tablets, and smartphones.
Server: A dedicated VLAN where you make your self-hosted services available.
IoT: Due to the security risks of IoT devices, they should be placed in their own VLAN. Ensure that firewall rules prevent these devices from connecting to sensitive network areas.
Guests: A special VLAN where visitors' devices are located. Ensure that the devices are isolated at Layer 2, although not every switch and wireless access point supports this.
MGMT: A dedicated VLAN in which administrative access to the infrastructure components is only possible from specific IP addresses.
DMZ: A VLAN where you can place publicly accessible services. Examples include web servers, game servers, and VPN servers. You need to design your firewall rules particularly strictly here.

Created using Lucidchart

A positive side effect of network segmentation for me was the opportunity to delve deeper into IPv6. In particular, setting up Stateless Address Autoconfiguration (SLAAC) for the different subnets was interesting and significantly improved my understanding of IPv6.

📋 Take advantage of VLANs and divide the network and devices as per usage.

6. Insufficient resources

Another common mistake when homelabbing is underestimating the resources required. Initially, Linux might seem like a major hurdle that needs to be overcome, but after a few weeks, the picture changes. When you then want to start your first project, for example a Nextcloud instance, you could be in for a nasty surprise. The inexpensive thin client might only have one hard drive connection, and integrating network storage doesn't make much sense with a 1 Gbps connection. Therefore, purchasing more powerful hardware with more hard drive connections is necessary. Several months later, when attempting to create a Kubernetes cluster, bottlenecks arise again. The two installed consumer SSDs can't handle the read and write demands of multiple virtual machines. Furthermore, failures become increasingly frequent because ZFS completely overloads the consumer drives. Then the cycle starts all over again, and used Intel SSDs are purchased, or NVMe drives are supposed to be the solution to all the problems. All these problems could have been avoided with more careful planning and a more precise resource estimate. Therefore, it's important to consider your long-term goals. Do you simply want to run individual VMs or even just a few Docker containers? Or are you aiming to operate failover clusters?

Photo by Thomas Jensen / Unsplash

For those interested in entering the data center field, it might be wise to invest in more hardware from the outset. In professional practice, one is often confronted with complex setups that require a thorough understanding down to the smallest detail.
Resource clustering, in particular, is ubiquitous and demands comprehensive expertise. For example, if you want to familiarize yourself with Kubernetes and SUSE Rancher, you need sufficient hardware to run an upstream and a downstream cluster. In addition, the containerized workloads require adequate computing power, and of course, you also need to factor in the resources required for a few more virtual machines. Just think about GitOps in the form of a Gitea server or S3-compatible storage like MinIO. Often you also need a host to run Ansible, Puppet, or Terraform code. As you can see, resource planning can be highly individual. As a rough guide, I can give you the following standalone hypervisor recommendations: 
- Budget-friendly setup for trainees: 4C/8T Intel Core i5 or AMD Ryzen 5, 16 GB RAM, 2 x 500 GB SSDs and a 1 Gigabit network card.
- Price-performance winner for admins and DevOps: 8C/16T Intel Core i7 or AMD Ryzen 7, 64 GB RAM, 2 x 1 TB NVMe drives and a 2.5 Gigabit network card.
- Resource-packed powerhouse for enthusiasts: 12C/24T Intel Core i9 or AMD Ryzen 9, 128 GB RAM, 2 x 2 TB NVMe drives and a 10 Gigabit network card.
Although storage is extremely affordable these days, many people still opt for slow SMR HDDs. These don't achieve high transfer rates for either reading or writing. Furthermore, they often don't last very long and are hardly any cheaper than enterprise SATA hard drives. 
📋 Reassess your needs and be prepared to spend more on hardware as your needs expand.
7. Missing documentation
One of the biggest and most common mistakes in a home lab is the lack of documentation. How often have you told yourself: “Oh, I’ll remember where that is,” or which network it’s connected to, or how the upgrade process works in detail? I have often regretted not writing certain things down. Start with the network: write down all the assigned IP addresses. It's also a good idea to properly document the physical cabling in the rack. It's so easy for a cable to come loose during renovations, and labeling the cables has saved me a lot of time and frustration in this regard. 
This is what my cables looked like when I started. I know, pretty laughable!
A few months or years later, when you suddenly need to know how things are configured, it's great to be able to pull out your documentation and understand how all the connections are established. Depending on what you want to document, there are several interesting solutions available. For IP management, I can recommend GestioIP or the classic phpIPAM. If you want to document your setup down to the last detail, I highly recommend the wiki software BookStack, which can be deployed as a Docker container in no time. 
📋 Documentation is important even if you build and manage everything on your own. Create a separate, proper knowledge base for your homelab. Your future self will thank you.
8. No UPS in use
For a long time, I hesitated to purchase a UPS for my small rack. The costs seemed far too high and the benefits questionable, but despite CoW filesystems like ZFS on my servers, the file systems of my switches, firewall, and wireless access points can become corrupted during a sudden power loss. 
After a while, I only switched the machines on sporadically to explore new technologies. Nevertheless, electricity consumption remained quite high, and I had to pay a hefty additional bill. What I hadn't considered was the immense standby power consumption of the three Dell PowerEdge servers.
A full 30 watts per device were wasted just so that the iDRAC could dutifully display the latest system statistics. 
Picture of my old Dell server board
This is why I emphasize using mini PCs instead of buying old enterprise hardware unless you have no other option. Their only advantage is the remote management interface, but you can build your own PiKVM and connect it to various servers via a KVM switch. If that seems too complicated, you can also get a used Lantronix Spider KVM. Another option is to use an old monitor with an inexpensive KVM switch. 
📋 Even if you live in a first world country that (almost) never sees power cuts, it is a good idea to have a power backup in place.
10. Lack of interaction
Don't despair over a problem for days. If you are stuck on something, someone else has most likely had the same or a similar problem and can help you solve it. A great way to give something back is to actively participate in forums, social media, or other community platforms. If you have a question, post it, and chances are good that experienced IT professionals will be willing to share their knowledge with you. Many open-source projects appreciate active support. Don't worry if programming isn't your thing; you will still find a good fit, for example in reviewing pull requests. Furthermore, many developers struggle with writing helpful tutorials, so improvements to documentation are always welcome. With that in mind, get involved in the community. 
📋 Don't hesitate to seek help. Don't shy away from helping others. Join relevant forums.
Wrapping Up
A homelab feels worthwhile once the entire setup stops fighting you and starts making space for real experimenting. You do not need an overly expensive setup that your internet connection cannot fully utilize. Likewise, a system that offers high performance at a low price can become frustrating if the components constantly cause problems, such as frequent driver reinstalls, network interface failures, video output issues, or compatibility problems with the kernel. Every decision, from hardware selection to basic topology, shapes the stability of your lab. Think of it as a slow project that benefits from steady improvement rather than dramatic overhauls. With a bit of care, you can avoid the usual traps and end up with a setup that keeps you curious instead of frustrated.
-
Upgrade Your DevOps Skills Cheap: Linux Foundation Cyber Week Brings 65% Off Certifications
by: Sourav Rudra Mon, 01 Dec 2025 09:40:19 GMT The Linux Foundation has been a major driving force in the Linux and open source space. Beyond their work on the Linux kernel and hosting critical projects like Kubernetes, they run one of the most comprehensive technology training and certification programs around. Their courses cover Linux system administration, cloud native development, and cybersecurity. If you have been eyeing their certifications, then their annual sales are the best time to grab them. And, as it happens, the Linux Foundation's Cyber Week 2025 sale is here with the biggest discounts of the year. 
Linux Foundation Cyber Sale
⏲️ The last date for the sale is December 9, 2025. 
📋 This article contains affiliate links. Please read our affiliate policy for more information.
Cyber Week 2025 Deal
The sale has already begun, with no extensions planned. You can expect discounts of up to 65% depending on which bundle, certification, or course you opt for. For instance, certification and subscription bundles are 65% off. These pair popular certifications like CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), CKS (Certified Kubernetes Security Specialist), and LFCS (Linux Foundation Certified System Administrator) with the THRIVE-ONE annual subscription. Kubestronaut bundles are 50% off; the standard Kubestronaut bundle includes five certifications (KCNA, KCSA, CKA, CKAD, CKS), while the Golden Kubestronaut packs 16 certifications in total. IT Professional Programs are 60% off. The Cloud Engineer IT Professional Program includes five courses plus the LFCS and CKA certifications. There is also a new DevOps & GitOps IT Professional Program with five courses plus the CAPA and CGOA certifications. THRIVE-ONE subscriptions are also discounted. The annual subscription drops from $360 to $252 (30% off for the first year), and the monthly subscription goes from $35 to $25 (30% off for the first three months). You will get unlimited access to the Linux Foundation's entire course catalog with this. Plus, every Cyber Week 2025 purchase gets you a $100 gift voucher for future training purchases between January 1 and October 31, 2026. What are you waiting for? Grab the deal before it goes away! Get The Deal
We also have a detailed list of Black Friday deals for Linux users. You can check it out to grab some really good bargains on popular tools and services. Black Friday Deals for Linux Users 2025 [Continually Updated With New Entries]: Save big on cloud storage, privacy tools, VPN services, courses, and Linux hardware. It's FOSS, Abhishek Prakash
💬 Have you grabbed any Linux Foundation certifications or courses before? Planning to take advantage of this sale? Let me know below!
-
ObsidianOS Review: A New, Innovative Linux Distro Built Around A/B Partitioning
by: Neville Ondara Mon, 01 Dec 2025 07:42:54 GMT Most new Linux distributions tend to follow a familiar formula: take a well-known base, add a desktop environment, sprinkle in some theming, and call it a day. Sometimes it works; sometimes the distro disappears in six months. What you don’t often see, however, is a distro trying to rethink how updates, rollbacks, and system integrity fundamentally work. ObsidianOS is one of those unusual projects that immediately caught my attention for exactly that reason. It’s Arch-based, yes, but the defining feature isn’t Arch at all; it’s the implementation of an A/B partitioning layout using good old ext4, not btrfs, snapshots, or any of the usual suspects. I’ll admit, when I first heard about ObsidianOS, I had the same confused grin many Reddit users did: “Wait… A/B partitions? On a traditional Linux desktop? Without btrfs?” But after digging into the documentation, installing it on a virtual machine, and exploring it for a bit, I walked away genuinely impressed! ✨ Let’s break things down. 
ObsidianOS: Doing things a bit differently
At its core, ObsidianOS is a UEFI‑only, systemd‑based operating system for x86_64 systems, designed with an A/B partition layout for reliability. The A/B partition scheme means your root filesystem exists twice, as partition A and partition B. When you update, the system writes to the inactive slot. If something goes wrong, you reboot into the previously working one. So, no complex snapshot rolling, no broken bootloaders, no “reinstall Arch because Pacman broke at 2 AM.” Just a simple flip back to the other partition. It’s very similar to how ChromeOS, Android, and some embedded systems work, but brought to the broader Linux desktop. The project originally launched with a single edition, but now it offers: 
- Base Edition (minimal, TUI installer)
- KDE Edition (the recommended one)
- COSMIC Edition (beta, but functional)
- Void Edition (for experts who want a Void base with Obsidian's tooling)
This is already more variety than I expected for a distro still considered “early.” 
Minimum system requirements
From my experience and the project's own documentation, ObsidianOS is quite modest in its hardware needs, but plan for more than the stated minimum. 
- 2 GB RAM (yeah, we definitely need more)
- 20+ GB storage
- UEFI firmware
- 64-bit CPU
🚧 ObsidianOS is in the early stages of development. I would consider it experimental. For this reason, either try it in a virtual machine or on a spare test system. Don't replace your stable Ubuntu/Debian/Mint/Fedora with Obsidian yet.
Installing ObsidianOS
Installation depends on the edition. The Base Edition sticks to a TUI installer, which is a bit old-school and familiar to anyone who has installed Arch Linux. The KDE and COSMIC editions, on the other hand, come with a Qt6 + Python GUI installer built by the project itself. I tested the KDE version, and honestly, it's refreshing to see an independent project ship its own installer instead of relying on Calamares. It walks you through: 
- Selecting the target disk
- Setting up the A/B partitions automatically
- Selecting the system image
- Selecting the timezone
- Selecting the keyboard layout
- Bootloader setup
The installer still feels young, and a few dialogs could be clearer, but it works, and more importantly, it handles the A/B scheme seamlessly without throwing technical jargon at the user. 
First impressions of ObsidianOS
Booting into ObsidianOS for the first time feels much like entering a polished Arch environment. KDE is clean and mostly vanilla, without unnecessary patches or wild theming choices.
The application launcher provides quick access to system settings, utilities, and installed applications, keeping everything organized and easily reachable. It feels intuitive, responsive, and minimalistic, much like what you'd expect from a well-curated Arch-based environment: no clutter, just efficiency at your fingertips. But the interesting parts aren't immediately visible. They show up once you open the ObsidianOS Control Center, a Qt6 GUI frontend for the obsidianctl command-line tool. This is where the distro starts to feel like its own thing rather than “Arch with an unusual partition setup.” The Control Center shows you: 
- Which slot (A or B) you're currently running
- Available updates
- Rollback options
- Logs and system information
For a project of its size, the amount of infrastructure built around these tools is impressive. 😎 
User-Mode Overlays: Probably the most intriguing feature
This is where ObsidianOS goes beyond just A/B partitions. The distro introduces user-mode overlays, an experimental system written in Rust that intercepts libc calls to create layered filesystem behavior, without touching kernel modules or requiring special filesystems. 📝 To simplify: it adds an overlay on top of the root filesystem, but entirely in user space. This gives you: 
- Layered modifications
- Reversible changes
- A “sandboxed” feel for certain operations
- No risk to the base system unless you commit changes
It's clever, experimental, and absolutely the kind of thing that appeals to tinkerers. The overlay mechanism also powers another new component. 
opm: Overlaid packages
opm is the ObsidianOS Package Manager, also written in Rust, which works alongside pacman. When you install a package with opm: 
- It downloads the package via pacman.
- It creates an overlay image of that package.
- The overlay is applied on top of the system.
This is miles away from how most distros handle packaging, and while it's still experimental, it hints at a future where package changes have a much smaller chance of trashing your root system. Arch fans might raise eyebrows, but power users will probably be curious. 😎 
ObsidianOS Plugins
This is another Rust-powered system that lets scripts respond to system events, like: 
- battery changes
- connection/disconnection events
- hardware triggers
Not entirely new in the Linux world, but ObsidianOS wraps it in a clean, unified framework instead of leaving users to dig through systemd units and ACPI handlers. 
Daily usage and performance
Being Arch-based, the performance story is exactly what you'd expect: 
- Fast boot times
- Responsive KDE experience
- Recent kernels
- Access to the entire Arch repository
The difference is that ObsidianOS doesn't try to be flashier or heavier than necessary. It stays lean, even in the KDE edition. The COSMIC edition is still in beta, so I'll reserve judgment until the desktop hits maturity. The only thing I noticed is that some experimental components, the overlays and opm, are still evolving. They work, but occasionally feel like tools intended for people who enjoy digging into logs and understanding what's happening under the hood. That's not a criticism; it's simply the current reality of the project. 🤷 
Too much reliance on experiments?
To keep things realistic, it's important to mention some concerns that other people (and I) have: 
- Small team and young project: Several users on Reddit said they avoid distros with very small maintainer counts. It's a fair point; longevity matters.
- Experimental components: The overlays, opm, and the plugin system are fascinating, but still in the “early tech demo but highly functional” stage. 
- A/B partitioning is unusual for the desktop: not bad, just unfamiliar. Some users will love it; others may feel unsure. 
But none of these are dealbreakers if you know what you're walking into. 
Final Thoughts
ObsidianOS is one of the rare Arch-based distributions that genuinely tries to solve a long-standing problem instead of repackaging Arch with a new desktop theme. The A/B partitioning approach makes system updates dramatically safer, and the Rust-based overlay tools point toward a future where system state is much more predictable and much less prone to accidental breakage. Is it ready for absolute beginners? Probably not yet. Is it suitable for people who install Arch manually for fun? Absolutely. And for users who love the idea of transactional updates but don't want to commit to NixOS, Fedora Silverblue, or openSUSE MicroOS, ObsidianOS hits a very interesting middle ground. It's ambitious, clever, and while still young, it's doing something bold that most distros simply don't attempt. I'll be keeping an eye on how it develops, and if the team keeps pushing features like overlays and opm, it might end up becoming one of the more innovative Linux projects to come out of the Arch ecosystem in a while. 
🏅 These are the reasons why ObsidianOS is It's FOSS's Distro of the Month.
What do you think of bringing A/B partitioning to desktop Linux and the other unusual features in ObsidianOS? I'm curious to hear other experiences.
-
Ubuntu 26.04 LTS: Release Date and New Features
by: Abhishek Prakash Sat, 29 Nov 2025 10:55:28 GMT The development of Ubuntu 26.04, codenamed 'Resolute Raccoon', has already begun. It is a long-term support (LTS) release and a particularly important one as we venture further into the Wayland-only era of Linux. Let's have a look at the release schedule of Ubuntu 26.04 and its planned features. 
📋 Since the development is in progress and the final version comes in April 2026, I'll be updating this article from time to time when there are new developments.
Ubuntu 26.04 Release Schedule
Ubuntu 26.04 LTS is going to be released on 23rd April, 2026. Here's the release schedule with important milestones. 
- February 19: Feature Freeze
- March 12: User Interface Freeze
- March 19: Kernel Feature Freeze
- March 26: Beta Release
- April 9: Kernel Freeze
- April 16: Release Candidate
- April 23: Final Release
Please note that the release schedule may change as the development progresses, although the final release date should stay the same. 
💡 Fun fact: A new version of Ubuntu is always released on a Thursday. For the October releases (version number ending with XX.10), it is the second Thursday of the month. For the April releases (version number ending with XX.04), it is the fourth Thursday of the month. The two extra weeks compensate for the Christmas holidays.
New features coming to Ubuntu 26.04 Resolute Raccoon
Since it is in the very early stages of development, I will include some predictions as well, which means some of the listed features may change in the final release. 
GNOME 50
For sure, Ubuntu 26.04 LTS will be rocking the latest GNOME at the time of its release. And that latest GNOME will be version 50. What does GNOME 50 offer? Well, that too is under development, and the picture will be a lot clearer as we enter 2026. I will say: be prepared to see some of your classic GNOME apps replaced by modern versions. We have seen this trend in the past where GNOME changed the default text editor, document viewer, terminal, etc. 
New default video player
Totem has been the default video player in Ubuntu for as long as I remember. Not that I can remember like an elephant, but I am not Leonard Shelby from Memento either. GNOME has been moving to Showtime as its default video player, and Ubuntu 26.04 is expected to follow. Showtime feels sleek and modern and fits quite well with the modern GNOME design language built on libadwaita. The interface is minimalist, but you still get some controls. You can click the gear symbol at the bottom right or right-click anywhere in the player for that. Showtime is only referred to as Video Player, and its icon is similar to Totem's (referred to as Videos) in the screenshot below. 
Showtime is Video Player, Totem is Videos. MPV is, well... MPV
New default system monitor
GNOME 50 will also have a new default system monitor, Resources. This is surprising because Resources is not a GNOME Core app, although it is a GNOME Circle app, which means a community-made tool that meets GNOME's standards. The current system monitor is not that bad, in my opinion, though. 
Current default system monitor
x86-64-v3 and amd64v3 versions for all packages
Ubuntu 26.04 will have amd64v3/x86-64-v3 variants for all the packages, and they will be well tested, too. Some packages are already available in this format in the recently released Ubuntu 25.10; the LTS release will have all the packages in this variant. What is x86-64-v3? Well, you know what x86-64 and amd64 are, right? They are two names for the same 64-bit architecture used by both Intel and AMD processors, and it has been around for more than two decades now. But not all 64-bit processors are created equal.
Newer generations of CPUs support more instruction set extensions than their predecessors, and that's why they are grouped into the v2/v3/v4 architecture levels. Basically, if you have a newer CPU, you can switch to the v3 variants of the packages and you should see some performance improvements (a quick way to check what your own CPU supports is shown at the end of this article). Don't worry, the v3 variant won't be the default. Nothing to bother about if you are rocking an older machine. 
Introducing architecture variants: amd64v3 now available in Ubuntu 25.10
Ubuntu prides itself on being among the most compatible Linux distributions. Compatibility is often a conscious trade-off against bleeding-edge performance. In Ubuntu 25.10, we have added support for packages that target specific silicon variants, meaning you can have your cake and eat it too! Back in 2023 I wrote an article talking about the history of the amd64/x86-64 architecture and described the “levels” x86-64-v2, -v3, and -v4 (often referred to as amd64v3, amd64v4, etc.). Since then, we'… (Source: Ubuntu Community Hub, mwhudson)
Download Ubuntu 26.04 (if you want to test it)
🚧 This is a development release and not suitable for running on your main machine. Only download and install it if you want to help with testing. Use it in a virtual machine or on a spare system that has no data on it. You have been warned.
The first monthly snapshot of the Ubuntu 26.04 development release is now available for those who want to test it. And if you do test it, report bugs in a timely manner, otherwise what's the point of testing? Download Ubuntu 26.04 Snapshot
What do you want to see in Ubuntu 26.04 LTS?
This is a long-term support release. Expectations are high. What are yours? What features do you want to see in this upcoming version? Please share your views in the comment section.
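If you are curious which of these levels your own machine supports, here is a small, hedged check that works on recent glibc-based systems; the loader path below is the usual one on Ubuntu and may differ on other distributions:
# Ask the glibc dynamic loader which x86-64 levels it considers supported
/lib64/ld-linux-x86-64.so.2 --help | grep "x86-64-v"
On a v3-capable machine, the output should list x86-64-v2 and x86-64-v3 as "supported, searched"; if x86-64-v3 is missing, the v3 package variants would not benefit you.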
-
Mission Center vs. Resources: The Ultimate Linux System Monitor Showdown
by: Roland Taylor Sat, 29 Nov 2025 08:31:26 GMT The GNOME app ecosystem is on fire these days. Whatever your needs, there's probably an app for that. Or two. Or three (no kidding)! Two of the sleekest apps for monitoring your system (aptly called "system monitors", of course) are Mission Center and Resources. Both use libadwaita to provide slick visuals, responsive GUIs, and familiar functionality for the GNOME desktop environment. But which one is right for you? I'll attempt to help you answer that question in this article. 
Quick Intro of Both Awesome System Monitors
Now that you understand the premise of what we're about, let's get acquainted with both apps. You'll see where they're quite similar in some ways, yet distinct enough to each stand alone. 
Mission Center
Mission Center 1.1.0 in GNOME 48
Mission Center is a detail-oriented system monitor app for the GNOME desktop environment, written primarily in Rust, using GTK4 and libadwaita. Geared towards high efficiency and smooth displays, Mission Center has hardware-accelerated graphs for complex CPU, memory, and GPU breakdowns. 
Resources
Resources 1.9.1 in GNOME 48
Resources is a relatively minimalist system monitor for the GNOME desktop environment. As a GNOME Circle app, it conforms strictly to the GNOME HIG and its patterns, with an emphasis on simplicity and reduced user effort. Resources is written in Rust and uses GTK4 and libadwaita for its GUI. 
Usage: The First Glance
First impressions matter, and with any system monitor, what you see first tells you what's going on before you even click on anything else. So how do these two stack up? Let's see. 
Mission Center: Hardware First, Stats & Figures Upfront
Mission Center drops you right into the hardware action
On first launch, Mission Center surfaces your hardware resources right away: CPU, GPUs, memory, drives, and network, with detailed readouts right before your eyes. Combining clean, accessible visuals with thorough device info, Mission Center makes you feel you've hooked up your computer to an advanced scanner, where nothing is hidden from view. If you like to jump right into the stats and details, Mission Center is just for you. 
Resources: Apps & Hardware Side-by-side
Resources puts your apps and hardware resources side by side
Resources displays a combined overview of your apps and hardware resources at first glance. You can get a quick view of which apps are using the most resources, side by side with which hardware resources are most in use. You also get a graph for the system's battery (if present) in the sidebar (not shown here). It doesn't give you detailed hardware stats and readouts until you "ask" (by clicking on any individual component), but you can still see which resources are under strain at a glance and compare this with which apps are using the most resources. 
CPU Performance & Memory Usage
A system monitor is no good if it hogs system resources for itself. It needs to be lean and quick to help us wrangle the other applications that aren't. So where do our two contenders fall? 
💡 Note: Plasma System Monitor was used for resource measurements.
Different apps, including both Mission Center and Resources, measure resource usage differently.
Mission Center: Stealthy on the CPU, kind to memory
Mission Center uses around 160 MiB (168 MB) during casual usage
Mission Center barely sips the CPU, negligible enough that it does not show up in your active processes (if you choose this filter) in GNOME System Monitor, even while displaying live details for a selected application. This is likely because Mission Center uses GPU acceleration for graphs, thereby reducing strain on the CPU. It's also relatively light on memory usage, hitting roughly 168 MB even while showing detailed process info. 
Resources: Light on CPU, easier on memory use
Resources hits roughly 130 MiB (136 MB) in typical usage
Keeping well within its balanced, lightweight approach, Resources sips the CPU while also keeping memory usage low, at around 136 MB. While its use of hardware acceleration could not be confirmed, it's worth noting that Resources keeps graphs visible and active, even when displaying process details. Still, it manages to keep resource usage to a minimum. 
Differences: Negligible
As this is one of the few areas where the comparison veers beyond subjectivity, it's important to note that the difference here is not that significant. Both apps are light on resources, especially in the critical area of CPU usage. The difference in memory usage between the two isn't particularly significant, though for users with limited RAM to spare, Mission Center's slightly higher memory usage could be a consideration to keep in mind. 
Process Management & Control
Mission Center (left, background) and Resources (right, foreground) showing their app views
Perhaps the most critical aspect of any system monitor is not just how well it can show you information, but how much it actually lets you do with the information you're given. That's where process management and control come in, so let's look at how these two compare. 
What both have in common
As you might expect, each app gives you the typical "Halt/Stop", "Continue", "End", and "Kill" signal controls as standard fare for Linux process management. Both allow you to view details for an individual app or process. Of course, you also get the common, critical stats, like CPU, memory, and GPU usage. However, there are distinct, notable differences that can help you decide which one you'd prefer. 
💡 Note: Processes in Linux are not the same as "apps". Apps can consist of multiple processes working in tandem.
Mission Center: More details up front
Viewing the details for Google Chrome in Mission Center
Both apps and processes are displayed in the same tree view in Mission Center, just separated with a divider. It tries to put more info before you by default, including the process ID (PID, though only for processes), shared memory, and drive I/O. You can also combine parent and child process data, and show which CPU core any app is running on. 
Despite the detailed view, there's no control over process priority in Mission Center
While you get more signals for controlling your processes, like 'Interrupt' (INT), 'Hangup' (HUP), and 'Terminate' (TERM), you don't get the option to display or adjust the 'niceness' of any process, which, for those not in the know, tells the system what priority a process should have.
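For reference, niceness is the same value you can adjust from a terminal; here is a minimal, hedged example where the command name and PID are made up for illustration:
# Start a program with a lower priority (higher niceness value)
nice -n 10 some-command
# Lower the priority of an already running process (PID 1234 is hypothetical)
renice -n 10 -p 1234
Lower niceness values mean higher priority, and raising a process' priority (negative values) generally requires root.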
Standout feature: Service management
Mission Center lets you start, stop, and restart systemd services from a familiar GUI
One thing that sets Mission Center apart from other system monitors is its ability to display and control services through systemd. With systemd being pretty much the standard across most distros, this is a feature that many power users will want in their toolkit, especially those who would prefer to avoid the CLI for such tasks as restarting services like PipeWire. 
Resources: Crouching data, hidden customization
Resources showing app details for Nextcloud Desktop
Interestingly, while Resources might appear to be the more conservative choice, it actually gives more options for what data you can display. As an example, Resources allows you to view GPU video encoder/decoder usage on a per-app basis. Another handy feature is the option to change a process' niceness value, though you must first enable this in the preferences. In Resources, apps and processes are displayed in separate views, which have some notable differences. For instance, there is no "User" column in the 'Apps' view, and you cannot change the priority of an app. 
Standout feature: Changing processor affinity
Changing processor affinity in Resources is quick and simple
Resources features a hidden gem in its process view, which is the ability to change processor affinity on a per-process basis. This is especially handy for power users who want to make use of modern multi-core systems, where efficiency and performance cores often dwell in the same CPU. With a clever combination of niceness values (priority) and CPU affinity, advanced users can use Resources to pull maximum performance or power savings without having to jump into the terminal. 
Installation & Availability
Mission Center: A package for everyone
Mission Center is included by default with Aurora, Bazzite, Bluefin and DeLinuxCo. It's also available through an official Flatpak hosted on Flathub. The project provides AppImage downloads for both AMD64 and ARM64 architectures, and a Snap package in the Snap Store. Ubuntu users can install Mission Center with Snap by running: 
# Install Mission Center:
sudo snap install mission-center
If even these are not enough, you can also get Mission Center in many distributions directly from their repositories (though mileage may vary on the version that's actually available in such instances). The project provides a full list of repositories (with version numbers) in their Readme file. 
Resources: A conservative but universal approach
Being part of the GNOME Circle, Resources is assuredly packaged as a Flatpak and available via Flathub. These are official packages and provide the experience most likely to offer the best stability and newest available features. Unofficial packages are also available for Arch and Fedora. Arch users can install it with: 
pacman -S resources
Whereas Fedora users can install it using dnf and Copr: 
dnf copr enable atim/resources
dnf install resources
Final thoughts: Which one's for you?
That's a question only you can answer, but hopefully you now have enough information to help you make an informed decision. With the diversity of apps arising in this season of mass Linux development and adoption, it's only a matter of time before you find (or create) your favourite. If you're looking for deep hardware monitoring up front and don't need heavy customization, Mission Center is more likely to be a good fit for you.
However, if you're looking for a quick bird's-eye view of apps and hardware at a glance, with the option to dig deeper where needed, Resources is probably more your speed. Of course, you can install and try both apps if you'd like (a quick Flatpak example follows below); that's part of the fun and freedom of Linux. Feel free to let us know what you think in the comments.
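If you want to try them via Flatpak, and assuming Flatpak with the Flathub remote is already set up on your system, these are, to the best of my knowledge, the application IDs both projects use on Flathub:
# Install both system monitors from Flathub (IDs assumed from the projects' Flathub listings)
flatpak install flathub io.missioncenter.MissionCenter
flatpak install flathub net.nokyan.Resources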
-
LHB Linux Digest #25.36: 2 New Courses - Terraform (FREE) and Kubernetes, Disk Space Management and More
by: Abhishek Prakash Fri, 28 Nov 2025 18:54:18 +0530 Happy Thanksgiving. To celebrate the occasion, I am announcing a new course that teaches you Infrastructure as Code with Terraform. This course is contributed by Akhilesh, who is also the creator behind the Living DevOps platform. The Terraform course is free for all LHB members. 
Learn Infrastructure as Code with Terraform: Learn Terraform from scratch with a Linux-first approach. Master Infrastructure as Code concepts, modules, state, best practices, and real-world workflows. Linux Handbook, Akhilesh Mishra
That's not all. We also have a Kubernetes course for beginners now. It is a blend of essential concept explanations with handy examples. This one is for Pro members only. 
Mastering Kubernetes as a Beginner: Stop struggling with Kubernetes. Learn it properly with a structured and practical course crafted specially for beginners. Linux Handbook, Mead Naji
With these two, our catalog now has 16 courses. And we are not stopping here. We are working on more courses, series, and videos. Stay tuned and enjoy the membership benefits 💙 By the way, we are also running a limited-time Black Friday deal. You get $10 off on both yearly and lifetime membership (for $89 instead of $139) for the next 7 days. This is probably the last time you'll see such low prices, as we plan a price increase in 2026 to survive the inflation. Get Lifetime Pro Membership 
This post is for subscribers only. Subscribe now.
-
Chapter 12: From Command to Cluster - How Kubernetes Actually Works
by: Mead Naji Fri, 28 Nov 2025 18:02:30 +0530 This lesson is for paying subscribers only. Subscribe now.