Blog Entries posted by Blogger

  1. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:35:51 +0530

    Basic Variable Types
    Terraform has three basic types: string, number, and bool.
variable "name" {
  type        = string
  description = "User name"
  default     = "World"
}

variable "counts" {
  type    = number
  default = 5
}

variable "enabled" {
  type    = bool
  default = true
}

Use them:
resource "local_file" "example" {
  content  = "Hello, ${var.name}! Count: ${var.counts}, Enabled: ${var.enabled}"
  filename = "output.txt"
}

🚧 You cannot use reserved words like count as a variable name.

Change values:
terraform apply -var="name=Alice" -var="counts=10"

Always add a description. Future you will thank you.
    Advanced Variable Types
    Real infrastructure needs complex data structures.
    Lists
    Ordered collections of values:
variable "availability_zones" {
  type    = list(string)
  default = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

Access elements:
locals {
  first_az  = var.availability_zones[0] # "us-west-2a"
  all_zones = join(", ", var.availability_zones)
}

Use in resources:
resource "aws_subnet" "public" {
  count             = length(var.availability_zones)
  availability_zone = var.availability_zones[count.index]
  # ... other config
}

Maps
    Key-value pairs:
variable "instance_types" {
  type = map(string)
  default = {
    dev  = "t2.micro"
    prod = "t2.large"
  }
}

Access values:
resource "aws_instance" "app" {
  instance_type = var.instance_types["prod"]

  # Or with the lookup function (an attribute can only be set once,
  # so this alternative is shown commented out):
  # instance_type = lookup(var.instance_types, var.environment, "t2.micro")
}

Objects
    Structured data with different types:
variable "database_config" {
  type = object({
    instance_class    = string
    allocated_storage = number
    multi_az          = bool
    backup_retention  = number
  })
  default = {
    instance_class    = "db.t3.micro"
    allocated_storage = 20
    multi_az          = false
    backup_retention  = 7
  }
}

Use in resources:
resource "aws_db_instance" "main" {
  instance_class          = var.database_config.instance_class
  allocated_storage       = var.database_config.allocated_storage
  multi_az                = var.database_config.multi_az
  backup_retention_period = var.database_config.backup_retention
}

Map of Objects
    The power combo - multiple structured items:
variable "servers" {
  type = map(object({
    size = string
    disk = number
  }))
  default = {
    web-1 = { size = "t2.micro", disk = 20 }
    web-2 = { size = "t2.small", disk = 30 }
  }
}

resource "aws_instance" "servers" {
  for_each      = var.servers
  instance_type = each.value.size
  tags = {
    Name = each.key
  }
  root_block_device {
    volume_size = each.value.disk
  }
}

Sets and Tuples
    Set - Like list but unordered and unique:
variable "allowed_ips" {
  type    = set(string)
  default = ["10.0.0.1", "10.0.0.2"]
}

Tuple - Fixed-length list with specific types:
variable "server_config" {
  type    = tuple([string, number, bool])
  default = ["t2.micro", 20, true]
}

Rarely used. Stick with lists and maps for most cases.
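If you do need to move between collection types, Terraform's built-in conversion functions handle it. A small sketch (the values here are illustrative, not from the original):

```hcl
locals {
  ips_with_dupes = ["10.0.0.1", "10.0.0.1", "10.0.0.2"]
  unique_ips     = toset(local.ips_with_dupes) # duplicates collapse; ordering is lost
  back_to_list   = tolist(local.unique_ips)    # convert back where a list is required
}
```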
    Variable Validation
    Add rules to validate input:
variable "environment" {
  type        = string
  description = "Environment name"
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "instance_count" {
  type    = number
  default = 1
  validation {
    condition     = var.instance_count >= 1 && var.instance_count <= 10
    error_message = "Instance count must be between 1 and 10."
  }
}

Catches errors before Terraform runs.
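Validation rules aren't limited to contains(); the condition can be any expression that returns a bool. Here is a sketch using Terraform's built-in can() and regex() functions to enforce a naming pattern (the variable name and pattern are illustrative, not from the original):

```hcl
variable "bucket_name" {
  type        = string
  description = "S3 bucket name (illustrative example)"

  validation {
    # can() returns false instead of raising an error when the regex does not match
    condition     = can(regex("^[a-z0-9-]{3,63}$", var.bucket_name))
    error_message = "Bucket name must be 3-63 characters: lowercase letters, digits, or hyphens."
  }
}
```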
Sensitive Variables
    Mark secrets as sensitive:
variable "db_password" {
  type      = string
  sensitive = true
}

Won’t appear in logs or plan output. Still stored in state though (encrypt your state!).
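One common way to keep a value like this out of files entirely is to supply it through an environment variable using Terraform's TF_VAR_ prefix. A minimal sketch (the variable name matches the example above; the value is obviously a placeholder):

```shell
# Terraform maps TF_VAR_<name> to variable "<name>".
# Here, TF_VAR_db_password feeds the sensitive variable db_password.
export TF_VAR_db_password="placeholder-not-a-real-secret"

# terraform plan / terraform apply would now pick up var.db_password
# without the value ever appearing in a .tf or .tfvars file.
echo "db_password supplied via the environment"
```

Combined with sensitive = true, the value stays out of both your repository and your logs.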
    Variable Precedence
    Multiple ways to set variables. Terraform picks in this order (highest to lowest):
1. Command line: -var="key=value"
2. *.auto.tfvars files (alphabetical order)
3. terraform.tfvars file
4. Environment variables: TF_VAR_name
5. Default value in the variable block

Setting Variables with Files
    Create terraform.tfvars:
environment   = "prod"
instance_type = "t2.large"
database_config = {
  instance_class    = "db.t3.large"
  allocated_storage = 100
  multi_az          = true
  backup_retention  = 30
}

Run terraform apply - it picks up the values automatically.
    Or environment-specific files:
# dev.tfvars
environment   = "dev"
instance_type = "t2.micro"

terraform apply -var-file="dev.tfvars"

Locals: Computed Values
    Variables are inputs. Locals are calculated values you use internally.
variable "project_name" {
  type    = string
  default = "myapp"
}

variable "environment" {
  type    = string
  default = "dev"
}

locals {
  resource_prefix = "${var.project_name}-${var.environment}"
  common_tags = {
    Project     = var.project_name
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
  is_production = var.environment == "prod"
  backup_count  = local.is_production ? 3 : 1
}

resource "aws_s3_bucket" "data" {
  bucket = "${local.resource_prefix}-data"
  tags   = local.common_tags
}

Use var. for variables, local. for locals.
    Outputs
    Display values after apply:
output "bucket_name" {
  description = "Name of the S3 bucket"
  value       = aws_s3_bucket.data.id
}

output "is_production" {
  value = local.is_production
}

output "db_endpoint" {
  value     = aws_db_instance.main.endpoint
  sensitive = true # Don't show in logs
}

View outputs:
terraform output
terraform output bucket_name

Real-World Example
variable "environment" {
  type = string
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Must be dev, staging, or prod."
  }
}

variable "app_config" {
  type = object({
    instance_type = string
    min_size      = number
  })
}

locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }

  # Override for production
  min_size = var.environment == "prod" ? 3 : var.app_config.min_size
}

resource "aws_autoscaling_group" "app" {
  name             = "myapp-${var.environment}-asg"
  min_size         = local.min_size
  desired_capacity = local.min_size

  tags = [
    for key, value in local.common_tags : {
      key                 = key
      value               = value
      propagate_at_launch = true
    }
  ]
}

Quick Reference
    Basic types:
variable "name" { type = string }
variable "counts" { type = number } # "count" is reserved, so pick another name
variable "enabled" { type = bool }

Complex types:
variable "zones" { type = list(string) }
variable "types" { type = map(string) }
variable "config" { type = object({ name = string, size = number }) }
variable "servers" { type = map(object({ size = string, disk = number })) }

Validation:
validation {
  condition     = contains(["dev", "prod"], var.env)
  error_message = "Must be dev or prod."
}

Locals and Outputs:
locals {
  name = "${var.project}-${var.env}"
}

output "result" {
  value     = aws_instance.app.id
  sensitive = true
}

Variables make your code flexible. Complex types model real infrastructure. Locals keep things DRY. Outputs share information.
    With variables and locals in your toolkit, you now know how to make your Terraform code flexible and maintainable. But where does Terraform store the information about what it created? And how does it connect to AWS, Azure, or other cloud providers? That’s what we’ll explore next with state management and providers.
  2. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:34:33 +0530

    Step 1: Install Terraform
    For macOS users:
brew install terraform

For Windows users: Download from the official Terraform website and add it to your PATH.
    For Linux users:
wget https://releases.hashicorp.com/terraform/1.12.0/terraform_1.12.0_linux_amd64.zip
unzip terraform_1.12.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/

Step 2: Verify Installation
terraform version

You should see something like:

Terraform v1.12.0

Step 3: Create Your First Terraform File
    Create a new directory for your first Terraform project:
mkdir my-first-terraform
cd my-first-terraform

Create a file called main.tf and add this simple configuration:
# This is a comment in Terraform
resource "local_file" "hello" {
  content  = "Hello, Terraform World!"
  filename = "hello.txt"
}

This simple example creates a text file on your local machine. Not very exciting, but it’s a great way to see Terraform in action without needing cloud credentials.
    Step 4: The Magic Commands
    Now comes the fun part! Run these commands in order:
    Initialize Terraform:
terraform init

This downloads the providers (plugins) needed for your configuration.
    See what Terraform plans to do:
terraform plan

This shows you exactly what changes Terraform will make.
    Apply the changes:
terraform apply

Type yes when prompted, and watch Terraform create your file!
    Clean up:
terraform destroy

This removes everything Terraform created.
    What Just Happened?
    Congratulations! You just used Terraform to manage infrastructure (even if it was just a simple file). Here’s what each command did:
terraform init: Set up the working directory and downloaded necessary plugins
terraform plan: Showed you what changes would be made
terraform apply: Actually made the changes
terraform destroy: Cleaned everything up

This same pattern works whether you’re creating a simple file or managing thousands of cloud resources.
    Essential Terraform Commands
    Beyond the basic workflow, here are commands you’ll use daily:
    terraform validate - Check if your configuration is syntactically valid:
terraform validate

Run this before plan. Catches typos and syntax errors instantly.
    terraform fmt - Format your code to follow standard style:
terraform fmt

Makes your code consistent and readable. Run it before committing.
    terraform show - Inspect the current state:
terraform show

Shows you what Terraform has created.
    terraform output - Display output values:
terraform output

Useful for getting information like IP addresses or resource IDs.
    terraform console - Interactive console for testing expressions:
terraform console

Test functions and interpolations before using them in code. Type exit to quit.
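As a quick illustration, a console session might look like this (the expressions are just examples of built-in functions, not from the original):

```
> join("-", ["app", "prod"])
"app-prod"
> length(["a", "b", "c"])
3
> exit
```

Handy for sanity-checking an expression before pasting it into a resource block.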
    terraform refresh - Update state to match real infrastructure:
terraform refresh

📋 Deprecated in favor of terraform apply -refresh-only, but worth knowing.

Common Command Patterns
    See plan without applying:
terraform plan -out=tfplan

Apply saved plan:
terraform apply tfplan

Auto-approve (careful!):
terraform apply -auto-approve

Destroy specific resource:
terraform destroy -target=aws_instance.example

Format all files recursively:
terraform fmt -recursive

These commands form your daily Terraform workflow. You’ll use init, validate, fmt, plan, and apply constantly.
Now that you understand what Terraform is and how to use its basic commands, let’s dive deeper into the core concepts that make Terraform powerful. We’ll start with variables and locals—the building blocks that make your infrastructure code flexible and reusable.
I have also built Living DevOps, a real-world DevOps education platform.
    I’ve spent years building, breaking, and fixing systems in production. Now I teach what I’ve learned in my free time.
    You’ll find resources, roadmaps, blogs, and courses around real-world DevOps. No fluff. No theory-only content. Just practical stuff that actually works in production.
    Living With DevOps
  3. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:33:10 +0530

If you go back two decades, everyone used physical servers (produced by IBM, HP, and Cisco), which took weeks to set up correctly before we could run applications on them.
    Then came the time of virtualization. Sharing computing resources across multiple OS installations using hypervisor-based virtualization technologies such as VMware became the new normal. It reduced the time to spin up a server to run your application but also increased complexity.
Subsequently, we got AWS, which revolutionized computing and ushered in a new era of cloud computing. After AWS, other big tech companies such as Microsoft and Google launched their cloud offerings, Azure and Google Cloud Platform, respectively.
    In the cloud, you can spin up a server in a few minutes with just a few clicks. Creating and managing a few servers was very easy, but as the number of servers and their configurations grew, manual tracking became a significant challenge.
    That’s where Infrastructure as Code (IaC) and Terraform came to the rescue, and trust me, once you understand what they can do, you’ll wonder how you ever lived without them.
    What is Infrastructure as Code?
    Infrastructure as Code is exactly what it sounds like – managing and provisioning your infrastructure (servers, networks, databases, etc.) through code instead of manual processes. Instead of clicking through web consoles or running manual commands, you write code that describes what you want your infrastructure to look like.
    The Problems IaC Solves
    Manual configuration chaos and deployment failures
“It works on my machine” syndrome
Scaling nightmares across multiple environments
Lost documentation and tribal knowledge
Slow disaster recovery

Then came Terraform, and it changed the game
    So what is Terraform? Terraform is an open-source Infrastructure as Code tool developed by HashiCorp that makes managing infrastructure as simple as writing a shopping list.
    Here’s what makes Terraform special:
    1. It’s Written in Go
    Terraform is built in Golang, which gives it superpowers for creating infrastructure in parallel. While other tools are still thinking about what to do, Terraform is already building your servers, networks, and databases simultaneously.
    2. Uses HCL (HashiCorp Configuration Language)
    Terraform uses HCL, which is designed to be human-readable and easy to understand. Don’t worry if you haven’t heard of HCL – it’s so intuitive that you’ll be writing infrastructure code in no time.
    Here’s a simple example of what Terraform code looks like:
resource "aws_instance" "web_server" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Name        = "My Web Server"
    Environment = "Production"
  }
}

See how readable that is? We’re creating an AWS instance (a virtual server) called “web_server” with specific settings. Even if you’ve never seen Terraform code before, you can probably guess what this does.
    3. Cloud-Agnostic Magic
    Here’s where Terraform really shines – it works with ANY cloud provider. AWS, Azure, Google Cloud, DigitalOcean, even on-premises systems. You learn Terraform once, and you can manage infrastructure anywhere.
    4. State Management
    Terraform keeps track of what it has created in something called a “state file.” This means it knows exactly what exists and what needs to be changed, created, or destroyed. It’s like having a super-smart assistant who remembers everything.
    Why Terraform Became the King of IaC
    You might be wondering: “Why should I learn Terraform when there are other tools like AWS CloudFormation or Azure Resource Manager?”
    Great question! Here’s why Terraform has become the go-to choice for infrastructure management:
    1. One Tool to Rule Them All
    Most cloud providers have their own IaC tools (AWS CloudFormation, Azure ARM templates, etc.), but they only work with their specific cloud. Terraform works with over 1,000 providers, from major cloud platforms to niche services. Learn it once, use it everywhere.
    2. Huge Community and Ecosystem
    Terraform has a massive community creating and sharing modules (think of them as infrastructure blueprints). Need to set up a web application with a database? There’s probably a module for that. Want to configure monitoring? There’s a module for that too.
    3. Declarative Approach
    With Terraform, you describe what you want (the end state), not how to get there. You say “I want a web server with these specifications,” and Terraform figures out all the steps needed to make it happen.
    4. Plan Before You Apply
    One of Terraform’s best features is the ability to see exactly what changes will be made before applying them. It’s like having a crystal ball that shows you the future of your infrastructure.
    Real-World Example: Why You Need This
    Let me paint you a picture of why this matters. Imagine you’re working at a company that needs to:
Deploy a web application across development, staging, and production environments
Ensure all environments are identical
Scale up during peak times
Quickly recover from disasters
Maintain security and compliance standards

Without Terraform: You’d spend weeks manually setting up each environment, documenting every step, praying nothing breaks, and probably making small mistakes that cause mysterious issues months later.
    With Terraform: You write the infrastructure code once, test it in development, then deploy identical environments to staging and production with a single command. Need to scale up? Change a number in your code and redeploy. Disaster recovery? Run the same code in a different region.
  4. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:31:09 +0530

    Stop clicking around cloud dashboards. Start building reproducible, version-controlled, scalable infrastructure using Terraform, the industry standard for Infrastructure as Code.
    This course takes you from first terraform init to real-world Terraform architectures with modules, best practices, and production workflows.
    👉 Designed for Linux users, DevOps engineers, cloud learners, and sysadmins transitioning to modern IaC.
    Most Terraform tutorials either stay too basic or jump straight into complex setups without building strong foundations.
    This course does both. You don’t just learn commands. You understand the logic and design decisions behind Terraform infrastructure.
    🧑‍🎓 Who is this course for?
    This course is built for people who want real skills, not just certificates:
Linux users who want to move into cloud & DevOps
System administrators shifting towards Infrastructure as Code
Aspiring DevOps engineers building their toolchain
Developers tired of manual server configuration
Anyone who wants to treat infrastructure like code (the right way)

🕺 No prior Terraform experience required, but basic Linux command-line knowledge will help.
    🧩 What you’ll learn in this course?
    Chapter 1: Infrastructure as Code – Here We Go
    Understand what IaC really means, why Terraform matters and how it fits into modern infrastructure.
    Chapter 2: Getting Started – Your First Steps
    Install Terraform, your first configuration, understanding providers, init, plan, and apply.
    Chapter 3: Terraform Variables and Locals
    Learn how to write reusable and parameterized configurations using variables and locals.
    Chapter 4: Terraform State and Providers
    Dive deep into state files, provider configuration, remote state, and dangers of bad state handling.
    Chapter 5: Resources, Data Sources, and Dependencies
    Understand how Terraform actually builds infrastructure graphs and manages dependencies.
    Chapter 6: Count, For_Each, and Conditionals
    Dynamic infrastructure with loops, conditional logic, and scalable configuration patterns.
    Chapter 7: Dynamic Blocks in Terraform
    Create flexible and advanced configurations using dynamic blocks.
    Chapter 8: Terraform Modules – Building Blocks You Can Reuse Everywhere
    Learn how to design, use, and structure modules like real production setups.
    Chapter 9: Provisioners and Import
    Handle legacy infrastructure, migration strategies, provisioners, and importing existing resources.
    Chapter 10: Terraform Functions – Your Code’s Swiss Army Knife
    Use built-in functions to manipulate data, strings, numbers, and collections.
    Chapter 11: Workspaces, Null Resources, and Lifecycle Rules
    Advanced control: multi-environment setups, resource lifecycle management, and more.
    Chapter 12: Terraform Best Practices and Standards
    The chapter that converts you from a Terraform user to a Terraform practitioner.
    Folder structure, naming, workflows, and professional practices.
I built Living DevOps as a real-world DevOps education platform.
    Living With DevOps
  5. by: Sourav Rudra
    Fri, 28 Nov 2025 09:50:08 GMT

Pebble, the e-paper smartwatch that first launched on Kickstarter in 2012, gained a cult-like following for its innovative approach to wearable tech. Sadly, Fitbit acquired it and shut it down in 2016, taking the brand's intellectual property (IP) with it.
    The IP eventually landed with Google after their Fitbit acquisition in 2021.
    Earlier this year, the original creator, Eric Migicovsky, relaunched Pebble through Core Devices LLC, a self-funded company operating via the rePebble consumer brand. This resurrection became possible after Google open-sourced PebbleOS in January 2025.
    Now, Core Devices has announced something significant for the Pebble community.
    Great News for Pebble Enthusiasts
A screenshot from the demo in this YouTube video

The complete Pebble software stack is now open source. Everything you need to operate a Pebble watch is now available on GitHub. All of this didn't materialize overnight; Core Devices has been improving PebbleOS since its open-sourcing and has been pushing those changes to the public repository.
    The rebuilt mobile companion apps for Android and iOS just got released as open source too. Without these apps, a Pebble watch is basically a paperweight. These are built on libpebble3, a Kotlin multiplatform library for interacting with Pebble devices.
    Similarly, the developer tools have been completely overhauled, with the old Ubuntu VirtualBox VM-based workflow being replaced with a modern browser-based one that allows anyone to develop Pebble apps in a web browser.
    The Pebble Time 2 is very close to coming to market!
    Hardware schematics are public as well. The complete electrical and mechanical design files for the Pebble 2 Duo are now available with KiCad project files included. You could literally build your own Pebble-compatible device from these files.
    There are some non-free components still in the mix. The heart rate sensor library for the Pebble Time 2, Memfault crash reporting, and Wispr Flow speech recognition all use proprietary code. But, fret not, these are all optional. You can compile and run the core Pebble software without touching any of them.
    Core Devices also launched two major software systems alongside the open source releases. The Pebble mobile app now supports multiple app store feeds that anyone can create and operate.
This works similarly to Linux package managers such as APT or the AUR. Users can subscribe to different feeds and browse apps from multiple sources instead of relying on a single centralized server.
    Core Devices already operates its own feed at appstore-api.repebble.com. This feed backs up to the Internet Archive, preserving community-created watchfaces and apps that have been around over the years.
    Plus, developers can upload new or existing apps through the new Developer Dashboard. Monetization remains possible through services like KiezelPay, so creators can still get paid for their hard work.
    Why Open Source Everything?
    Migicovsky learned some painful lessons from Pebble's first shutdown. When Fitbit killed the project in 2016, the community was left scrambling with limited options.
    The gap between 95% and 100% open source turned out to matter more than anyone expected. Android users couldn't easily get the companion app. Many iOS users faced the same problem.
    "This made it very hard for the Pebble community to make improvements to their watches after the company behind Pebble shut down," Eric explained in his blog post.
    The reasoning behind this open source push is straightforward. If Core Devices disappears tomorrow, the community has everything they need to keep their watches running. No dependencies, no single point of failure.
Apart from that, these new Pebble devices will focus on repairability. The upcoming Pebble Time 2 (expected March-April 2026) features a screwed-in back cover, allowing users to replace the battery themselves instead of buying a new device when the battery gives out.
    💬 What are your thoughts on Pebble's comeback? I certainly look forward to new launches by them!
  6. by: Theena Kumaragurunathan
    Fri, 28 Nov 2025 08:29:16 GMT

    In a previous column, I argued that self-hosting is resistance in an age where ownership is increasingly illusory.
There is increasing evidence that self-hosting is becoming popular among a certain kind of user, say the typical readership of It's FOSS.
    There is a simple explanation for this shift: people want their data, dollars, and destiny back. Centralized platforms optimized for engagement and extraction are colliding with real-world needs — privacy, compliance, predictability, and craft. Linux, containers, and a flood of polished open-source apps have turned what used to be an enthusiast’s project into a practical step for tech‑savvy users and teams.
The demand and supply of self-hosting are both headed in the right direction.
    The Economics of Self-Hosting
Photo by Chris Briggs / Unsplash

I spoke about the demand side of the equation in a previous column. Today, I would like to talk about the supply side.
    Put simply, self-hosting got easier: Dockerized services, one‑click bundles, and opinionated orchestration kits now cover mail, identity, storage, media, automation, and analytics. And the hardware needed is trivial: a mini‑PC, a NAS, or a Pi can host most personal stacks comfortably.
Click-and-deploy OSes and interfaces make it so easy

An increasing portion of these users are also conscious of the environmental impact of unchecked consumerism: recycling older hardware for your home lab is an easy way to ensure you aren't contributing to the mountains of e-waste that pose risks to communities and the environment.
    The numbers reinforce the vibe. The 2025 selfh.st community survey (~4081 respondents) shows more than four in five self‑hosters run Linux, and Docker is the dominant runtime by a wide margin. While this hasn't become mainstream yet, it highlights one of my arguments: there are costs to trusting big tech with your most important data and services, financial and otherwise. Once such costs outweigh the costs of self-hosting, once the vast majority of users can no longer deny such costs are draining their wallets and their sense of agency, we can expect this shift to become mainstream.
    Self-Hosting is Independence from Big Tech
Photo by Jonathan Borba / Unsplash

When your calendar, contacts, photo library, and documents sit on your own box behind your own reverse proxy, you remove third‑party analytics, shadow data enrichment, and surprise policy drift. You also reduce the surface area for “account lockouts” that nuke access to life‑critical records. For users burned by sudden platform changes — forced accounts, feature removals, data portability barriers — self‑hosting is an antidote.

Cost predictability over time. Cloud convenience is real, but variable charges accumulate as you scale storage, bandwidth, and API calls. With self‑hosting, you pay upfront (hardware + power), then amortize. For steady, continuous workloads — backups, photo libraries, media servers, home automation, docs, password vaults — the math is often favorable.

Reliability through ownership. Services die. Companies pivot. APIs change. By running key utilities yourself — RSS, password vaults, photo libraries, file sync, smart‑home control — you guarantee continuity and can script migrations on your timeline. That resilience matters when consumer vendors sunset features or shove core capabilities behind accounts and subscriptions.

Curiosity and capability‑building. There’s a practical joy, as I can attest, in assembling a stack and knowing how each layer works. For Linux users, self‑hosting is an ideal next step: you practice containerization, networking, monitoring, backups, and threat modeling in a low‑risk environment.

The Linux‑first baseline
    Photo by Hc Digital / UnsplashLinux dominates self‑hosting because it’s stable, well‑documented, and unfussy (in the context of servers; I am aware Linux desktop has some ways to go before mainstream users will flock towards Linux).
    Package managers and container runtimes are mature. Community tutorials cover everything from Traefik/Caddy reverse proxies to WireGuard tunnels and PostgreSQL hardening. The selfh.st survey shows Docker adoption near 90 percent, with Proxmox, Home Assistant OS, and Raspberry Pi OS widely used. It’s not gatekeeping; it’s pragmatism. Linux is simply the easiest way to stitch a small, reliable server together today.
    Where the rubber meets the road
    Most start with a single box and a few services: identity and secrets (Vaultwarden, Authelia, Keycloak); files and backups (Nextcloud, Syncthing, Borgmatic); media (Jellyfin, Navidrome, Photoprism/Immich); home (Home Assistant); networking (Nginx/Traefik/Caddy, WireGuard); knowledge (FreshRSS, Paperless‑ngx, Ghost). The payoff is a system where each function is yours.
    AI is accelerating the trend
    Self‑hosted AI moved from novelty to necessity for teams with sensitive workloads. Local inference avoids model‑provider data policies, reduces latency, and stabilizes costs. Smaller models now run on consumer hardware; hybrid patterns route easy requests locally and escalate only high‑uncertainty tasks to cloud. For regulated data, self‑hosting is often the only sane route.
    The economics are getting clearer
    “Is self‑hosting cheaper?” depends on workload shape and rigor. Cloud Total Cost of Ownership (TCO) includes convenience and externalized maintenance; self‑hosting TCO includes your time, updates, and electricity. But for persistent, predictable personal workloads—photo/video storage, backups, calendars, private media—self‑hosting tends to win.
    What self‑hosting doesn’t fix
You still need to operate. Patching, backups, monitoring, and basic security hygiene are on you. Automated update pipelines and off‑site backups reduce pain, but they require setup and discipline.

Internet constraints exist. Residential ISPs throttle uploads or block SMTP; dynamic IPs complicate inbound routes; power outages happen. In practice, most personal stacks work fine with dynamic DNS, tunneling, and a small VPS for exposed services, but know your constraints.

Some services are better bought. Global‑scale delivery, high‑throughput public sites, and compliance‑heavy email sending can be more efficient with a trustworthy provider. “Self‑host everything” isn’t the point — “self‑host what’s sensible” is.

The cultural angle
    Self‑hosting isn’t anti‑cloud; it’s pro‑agency. It’s choosing the right locus of control for the things you care about. For FOSS communities, it’s consistent with the ethos: own your stack, contribute upstream, and refuse enshittification through slow, patient craft. For Linux users, it’s the obvious next rung: turn your knowledge into durable systems that serve people you love, not just platforms that serve themselves.
    If you value predictability, privacy, and the quiet confidence of owning the tools you rely on, self‑hosting stops being a hobby and starts being common sense. The shift is already underway. It’s not loud. It’s steady. And Linux is where it happens.
  7. by: Sourav Rudra
    Thu, 27 Nov 2025 17:00:46 GMT

    A growing number of Linux desktop environments (DEs) are moving towards Wayland, the modern display protocol designed to replace the aging X11 window system.
    X11 has been the foundation of Linux graphical interfaces for over three decades now, but it carries significant technical debt and security limitations that Wayland aims to address.
    Projects like Fedora, GNOME, and KDE have been leading the charge on this by being among the first ones to adopt Wayland.
    Now, KDE has announced it is sunsetting the Plasma X11 session entirely.
    What's Happening: The KDE Plasma team has made it clear that the upcoming Plasma 6.8 release will be Wayland-exclusive and that the Plasma X11 session will not be included in it.
    Support for X11 applications will be handled entirely through Xwayland, a compatibility layer that allows X11 apps to run on Wayland compositors. The Plasma X11 session itself will continue to receive support until early 2027.
That said, the developers have not provided a specific end date yet, as they are working on additional bug-fix releases for Plasma 6.7.
    The rationale behind this change is to allow the Plasma team to move faster on improving the stability and functionality of the DE. They stated that dropping X11 support will help them adapt without dragging forward legacy support that holds back development.
    What to Expect: For most users, this change is said to have minimal immediate impact. KDE says that the vast majority of their users are already using the Wayland session, and it has been the default on most distributions.
    Users who still require X11 can opt for long-term support distributions like AlmaLinux 9, for example, which includes the Plasma X11 session and will be supported until 2032.
    The developers also note that gaming performance has improved on Wayland. The session supports adaptive sync, optional tearing, and high-refresh-rate multi-monitor setups out of the box. HDR gaming works with some additional configuration.
    Plus, users of NVIDIA GPUs can breathe easy now, as Wayland support in the proprietary NVIDIA driver has matured significantly. Graphics cards supported by the manufacturer work well nowadays. For older NVIDIA hardware, the open source Nouveau driver can be used instead.
    There are some issues that the Plasma team is actively working on addressing, things like output mirroring, session restore, and remembering window positions. But overall, they seem well-prepared for this massive shift.
    Suggested Read 📖
U Turn! X11 is Back in GNOME 49, For Now: A temporary move that gives people some breathing room. (It's FOSS, Sourav Rudra)
  9. by: Sourav Rudra
    Thu, 27 Nov 2025 14:13:02 GMT

    If you spend a lot of time on a computer, then fonts matter more than you think. A good one reduces eye strain and makes reading the contents of the screen easier. The right one can drastically improve your entire desktop experience.
    In my case, I like to use Inter on my Fedora-powered daily driver, and I don't really mess around with it. But everyone's different. Some like rounded fonts. Others want sharp, clean lines. Having options matters. Your eyes, your choice after all.
    Anyhow, Google just open-sourced a new option worth checking out.
    Google Sans Flex: What to Expect?
Google Sans Flex
Released under the SIL Open Font License, Google Sans Flex is an open source font touted as Google's next-gen brand typeface, designed by David Berlow.
    Sans Flex is a variable font with five axes: weight, width, optical size, slant, and rounded terminals. One file holds multiple styles instead of separate files, delivering different looks from a single download.
    Google designed it for screens of various sizes and modern operating systems. Plus, it should look sharp on high-resolution displays with fractional scaling. Basically, one Sans Flex file replaces dozens of individual font files.
Just a demo of this font. I used GNOME Tweaks to apply it system-wide.
Get Google Sans Flex
    You can get the font file from the official website, and after that, you can install it on Ubuntu or any other Linux distribution with ease by following our handy guide.
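If you'd rather do it by hand, the manual route is just copying the font into your per-user font directory and refreshing the font cache. A minimal sketch (the `.ttf` filename is a placeholder for whatever file you actually downloaded):

```shell
# Manual per-user font install: a minimal sketch.
# "GoogleSansFlex.ttf" is a placeholder; use the file you downloaded.
FONT_DIR="${HOME}/.local/share/fonts"
mkdir -p "$FONT_DIR"
if [ -f GoogleSansFlex.ttf ]; then
    cp GoogleSansFlex.ttf "$FONT_DIR"/
fi
# Rebuild the font cache so applications pick up the new font
if command -v fc-cache >/dev/null 2>&1; then
    fc-cache -f "$FONT_DIR"
fi
```

After this, the font should show up in GNOME Tweaks and other font pickers; running `fc-list | grep -i flex` is a quick way to confirm it registered.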
    Keep in mind that the variable font features won't work in Linux desktop environments, and you will only get the regular style when using it system-wide.
    If you need help or have any questions, then you can ask the helpful folks over at our community forum.
Google Sans Flex
Suggested Read 📖: Learn to install fonts in Linux.
How to Install New Fonts in Ubuntu and Other Linux Distros: Wondering how to install additional fonts in Ubuntu Linux? Here is a screenshot tutorial to show you how to easily install new fonts. (It's FOSS, Abhishek Prakash)
  10. by: Abhishek Prakash
    Thu, 27 Nov 2025 10:33:18 GMT

    As Linux users, most of us prefer open-source software. But if you’ve been using Linux for a while, you know this truth too: in daily workflows, you may have to rely on proprietary software.
And sometimes, you use software that feels like an open source project but actually isn't. I am going to list some of those applications that are popular among Linux users, even though we often don't realize they are not open source. I'll also suggest open source alternatives for you.
    Obsidian: Personal knowledge base
    Obsidian has become incredibly popular among developers, researchers, and anyone who takes their notes seriously. Its local-first approach, Markdown support, and graph view make it ideal for building a personal knowledge base.
    While it supports community plugins and customization, the core application itself is proprietary. This may come as a surprise because it always feels like Obsidian is open source. Alas! It is not.
🐧The most suitable open source alternative to Obsidian is Logseq. You can also try Joplin for its simplicity.
Termius: Modern SSH client
Termius is a sleek, cross-platform SSH client used by sysadmins and developers, especially those who manage multiple servers.
    It offers synchronization across devices, organized host management, and secure key handling. However, it’s a fully closed-source commercial product. How I wish it was open source.
🐧Tabby could be somewhat of an open source alternative here.
MobaXterm: Accessing Linux servers from Windows
    MobaXterm is primarily a Windows tool, but many Linux users interact with it while managing remote Linux servers from work or university environments. At least that's what I used around 12 years ago at work.
    It combines SSH, X11 forwarding, and remote desktop features under one roof. And it does the job very effectively and offers a lot more than PuTTY.
🐧Not sure if there is a single application that has the same features as MobaXterm. Perhaps PuTTY and X2Go, or Remmina, could be used.
Warp: The AI-powered terminal
    Warp is a new-age terminal focused on modern developer and devops workflows. It offers command blocks, AI suggestions and AI agents, team sharing features, and a highly polished interface.
But it’s completely closed-source. I would have appreciated it if they had offered it as open source and provided their proprietary AI as an optional add-on.
🐧I believe Wave is the most suitable open source alternative to Warp. Similar features, and you can also use local AI.
Docker Desktop: For easy container management
    Docker itself is open source, but Docker Desktop is not.
    It provides a GUI, system integration, container management tools and additional features that simplify your container-based workflows on personal machines. After all, not everyone is a command line champion.
    Despite the licensing controversies, many people still use it because of convenience and integration with development environments.
🐧Rancher Desktop is worth looking at as an alternative here.
Visual Studio Code: Microsoft's not so open offering
    VS Code sits in a slightly grey area:
    The base project (Code – OSS) is open source. The official Microsoft build of VS Code is proprietary due to licensed components and telemetry. Nevertheless, it remains the most popular code editor for developers, including Linux users, thanks to its extensions, easy GitHub integration, and huge plugin ecosystem.
🐧Code - OSS is available in the official repositories of many Linux distributions. Think of it like the Chromium browser, the open source base of Chrome.
Discord: The developer community hub
    There was a time when developers used to dwell in IRC servers. That was 20 years ago. These days, Discord seems to have taken over all other instant messaging services.
    Surprisingly, Discord started as a gaming platform but has become a central communication tool for tech communities, open source projects, and developer groups.
    Many open source project communities now live there, even though Discord itself is fully proprietary.
🐧Matrix-based Element can be an alternative here.
Vivaldi: Chrome alternative browser
    Vivaldi is a popular web browser among Linux users. It is based on open-source Chromium, but its UI, branding, and feature layer are proprietary.
    Its deep customization, built-in tools (notes, mail, calendar), and privacy-focused philosophy make it a suitable choice for many Linux users.
    Wondering why it is not open source? They have a detailed blog post about it.
🐧You may consider the Brave web browser.
VMware Workstation: Enterprise-level virtualization
    But since it is 'enterprise' level stuff, how can it be open source?
    Despite all the licensing controversy, VMware’s Workstation and Fusion products are still heavily used for virtualization in both personal and enterprise environments.
    They’re well-optimized, reliable, and offer features that are sometimes ahead of open-source alternatives. But yes, they are completely proprietary.
🐧GNOME Boxes is my preferred way of managing virtual machines.
Ukuu: Easy kernel management on Ubuntu
Ukuu stands for Ubuntu Kernel Upgrade Utility. It allows you to install mainline Linux kernels on Ubuntu. You can also use it to install a kernel of your choice and to add or delete kernels from the comfort of a GUI.
    A few years ago, Ukuu switched to a paid license, unfortunately.
🐧Mainline is an actively maintained open source fork of Ukuu.
Plex: Media server for self-hosting enthusiasts
    Plex is extremely popular among Linux users who build homelabs and/or media servers.
What started as a self-hosted media server gradually became a streaming platform of its own. Oh! The irony.
    Not just that, most of its ecosystem is closed-source and cloud-dependent. Recently, they have started cracking down on free remote streaming of personal media.
🐧Forget Plex, go for Jellyfin (an open source fork of Emby). Kodi is another good open source option.
Tailscale: Easy remote access for self-hosters
    Tailscale uses the open-source WireGuard protocol but offers a proprietary product and service on top of it.
It makes secure networking between your devices ridiculously easy. This is perfect for self-hosters and homelabbers, as you can securely access your self-hosted services from outside your home network.
    This simplicity is why several users accept the closed-source backend.
🐧You can go for Headscale as an alternative.
Snap Store: Open front, closed backend
    Ubuntu's Snap-based software center, Snap Store, is closed source software.
    Snapd, the package manager, is open source. But the Snap Store backend is proprietary and controlled by Canonical. This has sparked debate in the Linux community for years.
    Still, most Ubuntu users rely on it daily for installing and managing applications. It comes by default, after all.
🐧As an Ubuntu user, you can get the actual GNOME Software back.
Steam: The backbone of Linux gaming
    Surprised? Yes, our beloved Steam client is not open source software. Yet we use it. None of us can deny that Steam has been crucial for improving the state of gaming on Linux.
    From Proton to native Linux support for thousands of games, Steam has played a huge role in improving Linux as a gaming platform, even though the platform itself is proprietary.
🐧If you must, you could try Lutris or Heroic Games Launcher.
Conclusion
    Using open-source software is about freedom, not necessarily forced purity.
Many Linux users aim to replace proprietary software whenever possible, but they also value productivity, reliability, and workflow efficiency. If a closed-source tool genuinely helps you work better today, use it, but keep supporting open alternatives alongside.
    The good thing is that for almost every popular proprietary tool, the open-source ecosystem continues to offer strong alternatives.
    To me, the important thing isn’t whether your entire stack is open source. It’s that you’re aware of your choices and the trade-offs behind them.
    And that awareness is where true freedom begins.

  11. by: Abhishek Prakash
    Thu, 27 Nov 2025 04:41:37 GMT

    Happy Thanksgiving 🦃
    I’m incredibly thankful for this community. To our Plus members who support us financially, and to our free members who amplify our work by sharing it with the world — you all mean a lot to us. Your belief in what we do has kept us going for 13 amazing years.
    This Thanksgiving, let’s also extend our gratitude beyond our personal circles to the open-source contributors whose work silently powers our servers, desktops, and daily digital lives. From code to distributions to documentation, their relentless effort keeps the Linux world alive 🙏
    Here's the highlight of this edition of FOSS Weekly:
Zorin OS upgrade tool.
Arduino's future looking precarious.
Dell prioritizing Linux with its recent launch.
Backing up Flatpak and Snap applications.
And other Linux news, tips, and, of course, memes!
Thanksgiving is also associated with offers, deals and shopping. Like every year, I have curated a list of deals and offers that may interest you as a Linux user. See if there is something that you need (or want).
Black Friday Deals for Linux Users 2025 [Continually Updated With New Entries]: Save big on cloud storage, privacy tools, VPN services, courses, and Linux hardware. (It's FOSS, Abhishek Prakash)
There is also a wholesome deal that will deliver fresh cranberry sauce to your doorstep while supporting keystone open source maintainers.
    📰 Linux and Open Source News
Blender 5.0 has arrived with major changes across the board.
TUXEDO Computers has shelved its plans for an Arm notebook.
Ultramarine 43 is here with a fresh Fedora 43 base and some major changes.
Raspberry Pi Imager 2.0 has arrived with a clean redesign and new features.
Dell has launched the Dell Pro Max 16 Plus, with the Linux version being available before Windows.
Collabora has relaunched its desktop office suite, which is basically LibreOffice at the core but with a more modern and fresh user interface.
Collabora Launches Desktop Office Suite for Linux: The new office suite uses modern tech for a consistent online-offline experience; the existing offering is renamed ‘Classic’ and it maintains a traditional approach. (It's FOSS, Sourav Rudra)
🧠 What We’re Thinking About
    Arduino's enshittification might've begun as Qualcomm carries out some massive policy changes.
Enshittification of Arduino Begins? Qualcomm Starts Clamping Down: New Terms of Service introduce perpetual content licenses, reverse-engineering bans, and widespread data collection. (It's FOSS, Sourav Rudra)
🧮 Linux Tips, Tutorials, and Learnings
    You can backup and restore your Flatpak and Snap apps and settings between distro hops.
Backup and Restore Your Flatpak Apps & Settings: Make a backup of your Flatpak apps and application data and restore them to a new Linux system where Flatpak is supported. (It's FOSS, Roland Taylor)
Move Between the Distros: Back Up and Restore Your Snap Packages: Make a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too. (It's FOSS, Roland Taylor)
The Zorin OS developers have given early access to the upgrade path from Zorin OS 17 to 18.
    And check out this list of OG applications that were reborn as NG apps.
Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay Dead: These Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag. (It's FOSS, Roland Taylor)
Linux runs the world’s servers, but on desktops, it’s still fighting for attention.
That’s why It’s FOSS exists: to make Linux easier, friendlier, and more approachable for everyday users.
    We’re funded not by VCs, but by readers like you. This Thanksgiving, we’re grateful for your trust and your support.
    If you believe in our work, if we ever helped you, do consider upgrading to an It’s FOSS Plus membership — just $3/month or a single payment of $99 for lifetime access.

Help us stay independent and, more importantly, stay human in the age of AI slop.
Join It's FOSS Plus
👷 AI, Homelab and Hardware Corner
    Don't neglect your homelab. Manage it effectively with these dashboard tools.
9 Dashboard Tools to Manage Your Homelab Effectively: See which server is running what services with the help of a dashboard tool for your homelab. (It's FOSS, Abhishek Kumar)
🛍️ Linux eBook bundle

    This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative!
Explore the Humble offer here
✨ Project Highlights
    A return from the dead? These open source apps sure did.
Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay Dead: These Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag. (It's FOSS, Roland Taylor)
📽️ Videos I Am Creating for You
    In the latest video, I share how I customize and set up my Firefox browser.
Subscribe to It's FOSS YouTube Channel
💡 Quick Handy Tip
    In Nautilus file manager, you can select files according to certain pre-set conditions.
To do that, first press CTRL+S and enter the pattern you want to select by. Nautilus will then select files or directories matching the given pattern.
You can press CTRL+SHIFT+I to invert the selection as well.
    PS: The tip was tested using Nautilus, but other file managers should also have such functionality; only the shortcuts will vary.

    🎋 Fun in the FOSSverse
    Test your skills by reviewing Fedora's interesting history in this quick quiz.
The Fedora Side of Linux: Quiz: Fedora has an interesting history. Take this quiz to find out a little more about it. (It's FOSS, Ankush Das)
🤣 Meme of the Week: Step aside mortals, your god is here.
    🗓️ Tech Trivia: On November 24, 1998, America Online announced it would acquire Netscape Communications in a stock-for-stock deal valued at $4.2 billion, a move that signaled the shifting balance of power in the browser wars and highlighted the rapid consolidation occurring during the late-1990s Internet boom.
    🧑‍🤝‍🧑 From the Community: Long-time FOSSer Ernest has posted an interesting thread on obscure Linux distributions.
Obscure GNU/Linux Distributions that May Interest You: In the ZDNET Tech Today newsletter that came into my inbox today, there’s an item that interested me, and I immediately thought about all my fellow !T’S FOSS’ers! You can read the item for yourself here, but one distribution in particular caught my attention, because it offers only open source software throughout, and it eschews systemd, and instead offers several other init systems users can choose from, including OpenRC, Runit, s6, and SysV (list copied directly from the article), which brough… (It's FOSS Community, ernie)
❤️ With love
    Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here).
    Share the articles in Linux Subreddits and community forums.
    Follow us on Google News and stay updated in your News feed.
    Opt for It's FOSS Plus membership and support us 🙏
    Enjoy FOSS 😄
  12. by: Sourav Rudra
    Wed, 26 Nov 2025 12:54:17 GMT

Collabora Productivity is well-known for two of its flagship offerings: Collabora Online, their web-based document editor that powers many organizations, and their LibreOffice-based enterprise suite. That second one just got a makeover, and the existing offering has been renamed.
    They announced Collabora Office for desktop today. It brings their online editor's interface to local desktop apps for Linux, Windows, and macOS. The previous enterprise suite is now called Collabora Office Classic.
    Collabora Office: What's Fresh?
    From left to right: Writer, Impress, and Calc. Click to expand.
    The new suite covers the basics like word processing, spreadsheets, presentations, and vector graphics. You get Writer for documents, Impress for presentations, and Calc for spreadsheets. But the way it is all put together is quite different.
    Under the hood, it uses LibreOffice's core technology, but the interface is where things get interesting. Instead of relying on VCL, they built it with JavaScript, CSS, WebGL, and Canvas.
    There is no Java dependency either. The result is a smaller download that installs cleanly. Everything you need comes in one package.
    File compatibility looks good too. Microsoft Office formats like DOCX, XLSX, and PPTX work as expected. OpenDocument formats are obviously supported as well.
    During my brief use of it, the interface felt modern with a familiar tabbed layout and easy-to-use toolbars. The developers mention that they have simplified the defaults and settings compared to typical desktop office apps. This should result in less clutter and more productivity for people who use Collabora Office daily.
Speaking on the release, Michael Meeks, the CEO of Collabora Productivity, shared his thoughts.
    Similar to LibreOffice, Yet Different
    Both products use the same LibreOffice foundation. But that's where the similarities end. The new one mimics Collabora Online's web interface using JavaScript and CSS. Classic sticks with the traditional VCL-based desktop interface that longtime LibreOffice users will know well.
    Classic includes the Base database app with its Java components. The new version skips Base entirely and drops the Java requirement.
    Macros work on both, but differently. Classic gives you full editing capabilities with BASIC, Python, and UNO support. The new version just runs macros, no advanced tools.
    For business users, the support difference will matter the most. Classic has long-term enterprise support available now. The new Collabora Office is a fresh release that isn't yet tailored for enterprise deployment.
    Collabora is working on bringing enterprise support to the new suite. They expect to have it ready sometime in 2026. Until then, organizations needing production-ready support should stick with Classic.
    Download Collabora Office
    You can grab Collabora Office from the official website. The suite is available as a Flatpak for Linux, an appx file for Windows 11, and an app bundle for macOS 15 Sequoia or later.
    If you need help with deployment or documentation, you can check out the support page for the relevant resources. The source code is available on GitHub.
Collabora Office
Suggested Read 📖
ODF 1.4 Release Marks 20 Years of OpenDocument Format: Accessibility and compatibility upgrades mark the 20th anniversary of the document standard at OASIS Open. (It's FOSS, Sourav Rudra)
  13. by: Roland Taylor
    Wed, 26 Nov 2025 11:03:03 GMT

One of the greatest things about open-source software is that anyone can pick up where a project left off and bring it back to life, whether by continuing its legacy or by building a spiritual successor on a new foundation.
    In this article, I'll share some of the popular Linux apps that got new lives as "New/Next Generation" (-ng) versions of their former selves.
    1. iotop-c
iotop-c gives iotop a refreshed look
You've heard of top and htop, but did you know there's also a tool specifically for monitoring disk I/O? That's what iotop was created to do, but it hasn't seen development activity for some time, and being written in Python, it can get a bit slow (sorry, Python lovers).
    That's where iotop-c comes in. It's a rewrite of the original iotop in C, of course, and it's not only much faster, but richer in features, and actively maintained.
    Installation
    Iotop-c is packaged as iotop-c in most distros. You can also check out the GitHub page to grab the source code, star the project, or report bugs.
    For Debian/Ubuntu you can run:
sudo apt install iotop-c
💡Want to learn how to make the most of iotop? Check out this guide to iotop and ntopng on Linux Handbook.
2. vokoscreenNG
vokoscreen NG makes screen recording a breeze
vokoscreen NG (vokoscreen Next Generation) is the modernized rewrite of vokoscreen, a popular open-source screen recording app from the previous decade. Where the original version used FFmpeg and was limited to X11 (not because of its backend, to be clear), vokoscreenNG uses GStreamer and has a fresh Qt interface.
    It's also got support for Wayland, which the previous generation lacked.
    Installation
You can grab vokoscreenNG from Flathub, or install it on most distros directly from your package manager. On Debian/Ubuntu, you can install vokoscreenNG with:
sudo apt install vokoscreen-ng
3. WoeUSB-ng
WoeUSB-ng makes it easy to create bootable Windows USB drives
WoeUSB-ng is a total rewrite of WoeUSB, an open-source Linux app for creating bootable Windows USB flash drives. It was created by the same developers, but rewritten in Python and given a GUI to make it easier to create Windows installers from Linux.
    Ironically, despite an active community, WoeUSB-ng seems abandoned again, as it hasn't been updated in at least two years. For instance, there's an open pull request to add AppImage packaging, and pave the way for others, but the main repository appears stalled. Maybe some day WoeUSB-ng will rise again.
🚧WoeUSB was popular in the 2010s. Then it was abandoned and WoeUSB-ng took its place. From what I see, WoeUSB-ng's development has stagnated as well. Until we see a WoeUSB-ng++ or WoeUSB-GenZ, we have Ventoy to make bootable Windows USBs on Linux.
Installation
    If you're on Arch (or, if you use Arch on your distro of choice via Distrobox), you can install WoeUSB-ng with:
yay -S woeusb-ng
4. eSpeak NG
The Screen Reader in GNOME uses eSpeak NG at the backend
eSpeak NG is a speech synthesizer with support for over a hundred languages. It's a true fork that builds on the preexisting eSpeak engine, adding more languages and new features while possessing a cleaner codebase and remaining fully compatible with the original.
    This means eSpeak NG serves as a drop-in replacement for the original.
    Installation
    eSpeak NG is included with most distros as their text-to-speech engine. You can also install espeak-ng from your package manager of choice, for example:
sudo apt install espeak-ng
This will install it on Debian/Ubuntu (if you don't already have it).
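Once installed, a quick smoke test looks something like this (options taken from the espeak-ng manual; guarded so it exits cleanly on systems without the tool):

```shell
# Hedged usage sketch for espeak-ng
if command -v espeak-ng >/dev/null 2>&1; then
    # Synthesize a phrase to stdout as WAV data instead of playing audio
    espeak-ng -v en --stdout "Hello from the terminal" > /dev/null
    # List a few of the available English voices
    espeak-ng --voices=en | head -n 5
    espeak_result="ran"
else
    espeak_result="skipped"
fi
echo "$espeak_result"
```

Dropping `--stdout` makes it speak through your audio device instead, which is how a screen reader would drive it.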
    5. stress-ng
The safest move with stress-ng (unless you know what you're doing)
stress-ng (stress next generation) is an app designed to do exactly what its name suggests, but for a good cause. It generates system load to stress-test both hardware and software subsystems to uncover bugs and limitations. Let me stress, no pun intended, that it is not meant for casual use.
    As you might guess, stress-ng is the remake of stress, the original app. After stress was abandoned, stress-ng became the standard, adding new features and methods for a broader range of systems.
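To give a feel for typical usage: a short, strictly bounded run can be sketched like this (flags per the stress-ng manual; the command is guarded so it no-ops where stress-ng isn't installed):

```shell
# Hedged sketch: two CPU workers, a hard 5-second cap, brief summary at the end
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --cpu 2 --timeout 5s --metrics-brief
    stress_result="ran"
else
    stress_result="skipped"
fi
echo "$stress_result"
```

The `--timeout` cap is the important part; never run stress-ng unbounded on a machine you care about.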
    Installation
    You can install stress-ng from your distro's package manager. For Debian/Ubuntu, the command would be:
sudo apt install stress-ng
⚠️Warning: stress-ng is not a toy and can genuinely cause your system to overheat or become unresponsive. It should only be used by professionals, in controlled conditions.
6. aircrack-ng
aircrack-ng is a great pen-test tool
aircrack-ng is a total remake and expansion of aircrack, an app used for professional security auditing of WiFi networks by attempting to "crack" their passwords (hence the name). The original aircrack was a WEP/WPA recovery tool from the early 2000s.
    Designed when WPA2 was new, it lacked the coverage and hardware support needed for the modern era. By contrast, aircrack-ng is a full suite, with broader hardware support, various attack types, automation features, and more.
    Installation
You can get aircrack-ng on most distros through the package manager. It's included with many security-focused distros, like Kali, Parrot, and BlackArch.
    To install aircrack-ng on Debian/Ubuntu, you can run:
sudo apt install aircrack-ng
7. tomboy-ng
Tomboy-ng keeps the note nostalgia alive
Tomboy-ng is a total rewrite of Tomboy, which was once the standard notes tool on the GNOME desktop and shipped with several distros, including Ubuntu. Tomboy was written in C# and required Mono, which was too heavy in the days of CDs and DVDs.
    For this reason, Tomboy was dropped from Ubuntu, and its C# dependency raised issues for some. Later, the legacy Tomboy codebase was abandoned, and Tomboy-ng, written in Pascal, took its place.
    Installation
You can install Tomboy-ng on most distros from the default repositories. On Debian/Ubuntu, you can run:
sudo apt install tomboy-ng
8. radiotray-ng
Radiotray-ng lets you listen to online radios easily
Radiotray-ng is a complete rewrite of Radiotray, a minimalist Python/GTK2 app for playing online radio stations right from the system tray. The rewrite uses C++ and Glib/Gtkmm, and is not only more stable but also less prone to breakage from GTK updates.
    Radiotray-ng brings better codec handling, lower resource usage, more stable stream reconnection and uses JSON for saving its configuration (as opposed to XML).
    Installation
    Radiotray-ng is packaged for Fedora and can be installed directly with:
sudo dnf install radiotray-ng
For Ubuntu users, .deb packages are typically provided with each release.
    9. GoldenDict-ng
GoldenDict-ng is way more than a basic dictionary app
GoldenDict-ng is a true fork of GoldenDict, a popular open-source dictionary and translation app. GoldenDict-ng maintains the original's support for multiple dictionary formats (StarDict, Babylon, Webster, and more), audio pronunciations, web lookups, and scan-to-translate functionality.
    On top of these, it brings an updated interface based on Qt 6, various bug fixes, better multimedia support, and improved dictionary rendering. It also adds other niceties like dark mode, better scanning behavior, and more robust indexing, making it suitable for dictionary power users.
    Installation
    Goldendict-ng is available on Flathub, for those who'd prefer to use a Flatpak. You can also install it from most distro repos. Debian/Ubuntu users can run:
sudo apt install goldendict-ng
10. ntopng
ntopng gives a bird's eye view of your network activity
ntopng is the next-generation rewrite of ntop, a powerful real-time network traffic analyzer. The original ntop was already groundbreaking, and ntopng brings a new architecture, modern web UI, deep packet inspection, powerful metrics and flow analysis, and real-time bandwidth monitoring.
    It also adds Lua scripting, network flow export, and integration with PF_RING for high-performance environments.
    Installation
ntopng is packaged for most distros. On Debian/Ubuntu systems you can run:
# Install ntopng
sudo apt install ntopng
💡Note: You can learn how to put ntopng to good use by following this tutorial.
11. Shutter: revived, not replaced
Shutter's back like it never left
Shutter is a popular Linux screenshot app with a slew of useful features that served countless users for many years. It was abandoned for some time, neither working on modern distros nor supporting Wayland. Despite apps like Flameshot and Gradia arising in its absence, Shutter still held a special place for many.
    Fortunately, Shutter has been revived and even has initial support for Wayland. It's actively maintained by a community of enthusiastic users and contributors.
    Where to get it:
    Shutter is packaged for most popular distros, so you can grab it right from your package manager. On Debian/Ubuntu, you can run the following to install it:
sudo apt install shutter
Conclusion
Open-source projects are rarely ever truly dead: the right person or community can bring them back to life. From humble desktop apps to critical system utilities, open-source finds new ways to preserve old ideas.
    If you rely on any of these apps, consider contributing or making a donation. After all, it's we, the community, who keep open-source alive.
  14. by: Abhishek Prakash
    Tue, 25 Nov 2025 16:01:55 GMT

    Thanksgiving is around the corner, and the market is flooded with Black Friday and Cyber Monday deals on everything from gadgets to software subscriptions.
    For Linux users and open source enthusiasts, finding deals that respect privacy can be tricky. We have handpicked offers on secure cloud storage, VPNs, learning platforms, and Linux-friendly hardware.
    My advice for picking the right deals
    For someone who often takes advantage of deals, here are a few things you should note for making an informed decision.
- Money-back policy: If it's a service/SaaS, like a cloud storage service, check its money-back policy and time period. If you don't like the service, you can get a refund as long as you initiate the refund request within the time specified in the policy.
- Renewal pricing: It is nice to use a service at a reduced rate, but this may not last forever. For example, StartMail is offering reduced pricing for new accounts at $29 for the first year, but it renews at $58 the next year.
- Avoid vendor lock-in: Imagine you bought a service that doesn't allow you to export your data in a universally accepted format. You'll be stuck with that service forever or lose your data. If you store data in a service, do check how you can get it back. For example, if you choose Proton Pass, you can easily export your data if you decide to switch to some other password manager.
- Lifetime plans: I am a huge fan of lifetime offers. They help me cut down on recurring subscription costs, as I pay a single fee, just once. I use lifetime plans of pCloud and Internxt for dumping data, and I am going to get Filen's too. It is good to check if a service offers a lifetime plan.
- Plan ahead for Christmas gifts: Take advantage of Black Friday sales to purchase Christmas gifts, too. For example, you can get Raspberry Pi kits and other DIY gadgets at a lower price now and gift them to your children, nephews/nieces, etc. later. Just an idea to save money.
- Want vs need vs budget: It is easy to fall down the rabbit hole of deal shopping. Evaluate what you need and what you want; those are two separate things. You might not need all the things you want, but that doesn't mean you should only get what you need. Check your budget and decide how much it allows you to splurge. Keep in mind that these are limited-time offers, so decide fast and smart.
📋Some of the links here are affiliate links, which means we may get a commission when you purchase at no additional cost to you. Please read our affiliate policy.
Proton — A Range of Privacy-Focused Services
    Proton started as an encrypted email service. Today it is a complete privacy ecosystem trusted by over 100 million people worldwide. Its services take advantage of Swiss privacy laws and open source code.
    Proton Mail offers end-to-end encrypted email with an ad-free inbox. Proton VPN encrypts your internet traffic and masks your location. Proton Pass manages passwords and creates hide-my-email aliases to protect your inbox.
    Proton Drive provides encrypted cloud storage for files and photos. Lumo AI is their new privacy-respecting AI assistant that uses zero-access encryption and keeps no chat logs, unlike Big Tech alternatives.
💸 Offer: Up to 70% off
Get The Deal
pCloud — Secure, Reliable Cloud Storage
    pCloud has protected 22 million users across 134 countries for over a decade. They have never had a security breach, and their specialty is lifetime plans where you pay once and own forever.
    This year's flagship deal is the 3-in-1 bundle. You get 5 TB of cloud storage, lifetime access to pCloud Pass password manager, and lifetime access to Cloud Crypto. All three products will cover your storage and security needs permanently.
    For people tired of subscriptions, the one-time payment means no recurring fees.
💸 Offer: Up to 62% off
Get The Deal
Filen — Encrypted Cloud Storage
Germany-based Filen offers zero-knowledge, client-side, end-to-end encrypted cloud storage. They use AES 256-bit file encryption, which is considered to be quantum resistant. All of their data centers are located in Germany and owned by Filen itself, not rented from someone else.
    They are quite affordable, actually. Their 200 GB storage plan costs just 19.99€, and just 13.99€ in the Black Friday sale.
    As I said earlier, I like lifetime deals. Filen is offering lifetime plans for the last time. I would suggest going for the lifetime plan. There is a 14-day refund period.
💸 Offer: Up to 30% off. Take advantage of their soon-to-be-removed lifetime plan.
Get The Deal
Internxt — An Inexpensive Cloud Storage
    Internxt offers post-quantum encrypted cloud storage with additional privacy tools. Plans include Drive for storage and backups, Antivirus for securing your devices, VPN for encrypted connections, Cleaner to keep your system tidy, and Meet for video calls.
    All services use zero-knowledge encryption and only you can access your files.
    Note that some people have complained about lack of support from Internxt. Use it as an alternative cloud storage in that case. They also have a 30-day money back policy, so worth checking out if it meets your requirements or not.
💸 Offer: Up to 90% off (slightly more discounted for Black Friday than it usually is)
Get The Deal
DataCamp — Land Your Dream Job
    DataCamp teaches data science, AI, and machine learning through interactive courses. The platform offers 570+ courses, career tracks, and certifications. Learn Python, SQL, Power BI, ChatGPT, and other in-demand skills.
    The hands-on approach lets you practice real skills and build projects you can add to your portfolio. Premium plans give unlimited access to the entire catalog.
💸 Offer: Up to 50% off
Get The Deal
NordVPN — For Keeping Nosy Trackers at Bay
    NordVPN is one of the most popular VPN services globally. It combines strong security, fast speeds, and competitive pricing. Servers in 60+ countries provide reliable connections and help bypass geo-restrictions.
    Apps work seamlessly on Linux, Windows, macOS, Android, and iOS. Features include automatic kill switch, split tunneling, and multiple device connections.
💸 Offer: Up to 77% off
Get The Deal
System76 — Hardware Tailored for Linux
    System76 builds computers specifically for Linux users. Based in the US, they also develop the community favorite, Pop!_OS, a distribution for both general users and developers alike. Every machine can be configured to ship with Linux pre-installed and fully supported.
    The Thelio line offers powerful desktops for demanding workloads. Lemur Pro laptops deliver portability without compromising performance. All hardware is customizable to match your exact needs and budget.
💸 Offer: Up to $300 off
Get The Deal
Pironman 5-Max — The Best Raspberry Pi Case
Of all the mini PC cases for Raspberry Pi, I like the Pironman 5 Max the most. It looks beautiful, and it has more NVMe ports and real HDMI ports. I have shared my experience in a detailed review of the Pironman 5 Max.
    While the official website has not listed any reduced pricing, I see that at least Amazon US is offering 20% off on most SunFounder products. This means, you get this awesome case for $76 instead of $96.
💸 Offer: 20% off but only on Amazon, not on the official SunFounder website
Get The Deal on Amazon US
You can also get 20% off on the Pironman Mini and Pironman 5 variants.
    Tuta — Become a Legend
    Tuta offers private email and calendar services to over 10 million users. Formerly known as Tutanota, they are committed to making privacy a fundamental right.
    Quantum-resistant cryptography protects against future threats; zero-access infrastructure means even Tuta can't read your data; and many of its apps are open source and independently audited for security vulnerabilities.
    The Legend Plan includes 500 GB of storage, priority support, 30 extra email addresses, and unlimited custom domain addresses.
💸 Offer: Up to 62% off
Get The Deal
Codecademy — Upskill in The Age of AI
    Codecademy has taught millions of people to code through interactive, hands-on courses. Learn Python, web development, data science, cybersecurity, or machine learning. All courses let you write actual code in the browser.
    The learn-by-doing approach makes coding accessible to beginners. Advanced learners can dive deep into specialized topics. The Pro plans unlock the full catalog and career services.
💸 Offer: Up to 60% off
Get The Deal
Juno Computers — Linux Laptops from the UK
    Juno Computers is a UK-based manufacturer offering laptops, tablets, and mini PCs with Ubuntu pre-installed. Operating from London and Sunny Isles Beach, they specialize in Linux-ready hardware. Their lineup includes various models for different needs and budgets.
    All systems ship with Ubuntu, LibreOffice, and full Linux support, with some exceptionally good compatibility across different kernel versions.
💸 Offer: Up to 10% off
Get The Deal
TerraMaster — NAS and DAS Storage Solutions

    TerraMaster specializes in network-attached storage and direct-attached storage devices for home users and small businesses. Their Black Friday sale covers NAS and DAS products with discounts up to 30%. The promotion runs from November 20 to December 1.
Popular models include the F2-424 dual-bay NAS with an Intel N95 processor and dual 2.5GbE ports. It supports TOS 6 and Plex 4K transcoding. The F4-425 Plus features an Intel N150 CPU with dual 5GbE interfaces for 8K streaming.
    For high-capacity needs, the F6-424 Max six-bay NAS includes an Intel i5 processor and TRAID support. DAS options like the D4-320 connect directly to PCs via USB 3.2 Gen 2 for local backup. The D1 SSD Plus supports USB 4 with speeds up to 40Gbps for video editing.
💸 Offer: Up to 30% off
Get The Deal
Khadas — Your Destination for Mini PCs
Khadas manufactures single-board computers and mini PCs for makers and developers. Their product lineup includes the VIM series of SBCs and the modular Mind series of portable workstations.
    The Mind workstation features Intel Core processors in an ultra-slim design with magnetic modular connections. Past deals have included significant discounts on these products during the sale period too.
💸 Offer: Up to $100 or 20% off
Get The Deal
Zima — Experts in Homelab Products
    Zima makes homelab and personal server hardware for self-hosters and DIY enthusiasts. Their products are perfect for building your own private cloud. Every device includes ZimaOS Plus benefits out of the box.
    Discounted products include ZimaBoard 2 for Plex and Docker with PCIe support, ZimaBlade for NAS and VPN projects, and ZimaCube with multiple drive bays for media transcoding.
💸 Offer: Up to 40% off
Get The Deal
More offers will be added...
    I'll keep on adding more interesting deals and offers as I come across them. Keep watching this page.
And if you know of other offers that should interest us Linux users, please share them in the comment section and I may add them to the list here.
  15. by: Sourav Rudra
    Tue, 25 Nov 2025 14:57:07 GMT

    TUXEDO Computers specializes in Linux-first hardware, recently launching the InfinityBook Max 15 (Gen10) with AMD Ryzen AI 300 processors. The German manufacturer has built a reputation for well-built Linux systems that work reliably.
    However, 18 months of work on an ARM-powered notebook has come to an abrupt halt. The company announced that it is shelving its Snapdragon X Elite laptop project.
    A Tricky SoC Architecture
Just a placeholder image of TUXEDO Computers' recent launch.
The notebook was built around Qualcomm's Snapdragon X Elite (X1E) SoC. TUXEDO faced numerous technical roadblocks that prevented a viable Linux experience. KVM virtualization support was missing entirely on their model. This eliminated a critical feature for developers and power users who rely on virtual machines.
    USB4 ports failed to deliver the high transfer rates expected from the specification. Fan control through standard Linux interfaces proved impossible to implement. BIOS updates under Linux presented another problem.
    Battery life fell far short of expectations. The long runtimes ARM devices typically achieve under Windows never materialized on Linux. Video hardware decoding exists at the chip level. However, most Linux applications lack support to utilize it, making the feature essentially useless.
    Some Hope for the Future
    TUXEDO Computers is open to the possibility of this work being carried over. If the newer Snapdragon X2 Elite (X2E) proves more suitable, development may resume. The X2E chip launches in the first half of 2026, and reusing a significant portion of existing work would make the project viable again.
    Nonetheless, they will be contributing the device tree and other related work they developed to the mainline kernel, improving Linux support for many devices.
    Suggested Read 📖
Best Linux Laptop of 2025? TUXEDO InfinityBook Pro 15 (Gen10) Launches
Beast specifications. Pre-orders open now, mid-August shipping. (It's FOSS, Sourav Rudra)
  16. by: Roland Taylor
    Tue, 25 Nov 2025 03:08:57 GMT

    Flatpak has pretty much become the de-facto standard for universal packages on the Linux desktop, with an increasing number of distros supporting the format in their default installs. Yet, even with how easy it is to install and update Linux apps with Flatpak, moving them to a new system can be tricky, especially if you’ve installed dozens over time.
Sure, you could list and reinstall everything manually, but that's tedious work, and prone to human error. Fortunately, there's a simple way to export your Flatpak apps, remotes, and even overrides so you can recreate your setup on another machine with just a few commands.
    You can even backup and restore your settings on another system.
    1. Exporting your Flatpak apps
On the system where you've got all your apps, you'll first want to save a list of your installed apps as Flatpak "refs", including where each one is installed. Flatpaks can be installed either system-wide (and thus available to all users) or per-user. The process differs depending on whether you're running a single-user setup or have to back up and restore for multiple users.
    For single-user systems
This assumes you have no other users on your system. Back up both the user and system apps you have access to:
flatpak list --app --columns=installation,ref > flatpak-apps.txt
For a multi-user setup
    First, you'll need to copy any system-level installations:
# Backup only system-installed apps
flatpak list --system --app --columns=ref > flatpak-apps-system.txt
❗This will not copy any user-installed Flatpaks.
Next, copy any user-installed Flatpaks. You'll need to do this for every user individually. Have each user run this to back up their personal installations:
flatpak list --user --app --columns=ref > flatpak-apps-user-$USER.txt
Then, back up your Flatpak remotes (the repositories your apps came from):
flatpak remotes --columns=name,url > flatpak-remotes.txt
Each Flatpak app has a unique "ref" (short for "reference") that identifies its source, branch, and architecture. Saving these ensures you reinstall the exact same apps later.
    Exporting your overrides (optional)
Overrides are the individual settings that you can modify for each Flatpak with an app like Flatseal. Note that flatpak override --show prints only your global overrides; per-app overrides live as plain files in ~/.local/share/flatpak/overrides/, which you can copy wholesale. Exporting the global overrides preserves those settings across installs.
To do this, you can run the following command:
# Export global Flatpak overrides to a file
flatpak override --show > flatpak-overrides.txt
You can later restore these overrides on your target system.
    Exporting your app data
    Flatpak app data, like configuration files and saved sessions, is stored in ~/.var/app/. You can copy this folder to your target system any time you want to transfer your app settings. For individual apps, you can copy their individual folders.
    For example, for GIMP, you can copy ~/.var/app/org.gimp.GIMP.
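If you'd rather grab everything at once, a single archive works well. Here's a minimal sketch; the archive name is just an example:

```shell
# Pack all Flatpak app data into one archive; restore it on the new system
# with: tar -xzf flatpak-appdata.tar.gz -C ~/.var
if [ -d "$HOME/.var/app" ]; then
    # -C keeps the paths relative, so the archive restores cleanly anywhere
    tar -czf flatpak-appdata.tar.gz -C "$HOME/.var" app
fi
```

Copy the resulting file to the target system along with your other backup files.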
    2. Preparing the target system (optional)
    ℹ️I assume that you're transferring your apps to another system. If that's not the case, you can skip this step.It goes without saying, but if you're going to transfer your Flatpak apps to another system, you should ensure that the target system has Flatpak support. To check this, you can run:
# Check if Flatpak is installed
flatpak --version
Checking that Flatpak is installed and working
If you got a version number, you're good to go. Most popular distros, including Fedora, Mint, and Pop!_OS, have Flatpak preinstalled.
    If you're planning on migrating to a fresh installation of Ubuntu, you'll need to install Flatpak first:
# Install Flatpak
sudo apt -y install flatpak
3. Recreating your setup on the new system
    On your new Linux install, the first step is to re-add your Flatpak remotes:
# Add saved Flatpak remotes
while read -r name url; do
    flatpak remote-add --if-not-exists "$name" "$url"
done < flatpak-remotes.txt
Remember to run this command in the same directory where you have your flatpak-remotes.txt saved.
    Reinstalling your apps
    Once you've added your Flatpak remotes, you can now reinstall all your apps to their original locations:
# Restore Flatpaks
while read -r inst ref; do
    if [ "$inst" = "user" ]; then
        flatpak install -y --user "$ref"
    else
        flatpak install -y --system "$ref"
    fi
done < flatpak-apps.txt
Once this process completes, you can confirm that everything worked by running:
flatpak list --app
You can compare this output with your original flatpak-apps.txt file to verify all your apps are back.
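If you followed the multi-user backup earlier, the restore mirrors it: an admin restores the system-wide list first, then each user restores their own file. A sketch, using the file names from the backup step:

```shell
# System-wide apps (run with admin privileges); one ref per line
if [ -f flatpak-apps-system.txt ]; then
    while read -r ref; do
        [ -n "$ref" ] && flatpak install -y --system "$ref"
    done < flatpak-apps-system.txt
fi

# Per-user apps; each user runs this against their own backup file
if [ -f "flatpak-apps-user-$USER.txt" ]; then
    while read -r ref; do
        [ -n "$ref" ] && flatpak install -y --user "$ref"
    done < "flatpak-apps-user-$USER.txt"
fi
```

These per-ref lists contain only the ref column, so no installation column needs to be parsed.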
    Restoring overrides (optional)
If you've saved your Flatpak overrides, you can restore them. Since flatpak override --show prints the global overrides in keyfile format (not as command-line flags), the simplest way to restore them is to copy the file into place:
# Restore your global Flatpak overrides
mkdir -p ~/.local/share/flatpak/overrides
cp flatpak-overrides.txt ~/.local/share/flatpak/overrides/global
Optional bonus for advanced users: Automating your setup
    If you frequently install or test new Flatpak apps, you can automate this process so your backups stay up to date, and you can quickly move your apps to a new system at any time.
    Create a simple script (e.g., ~/bin/flatpak-backup.sh):
#!/bin/bash
flatpak list --app --columns=installation,ref > ~/flatpak-apps.txt
flatpak remotes --columns=name,url > ~/flatpak-remotes.txt
flatpak override --show > ~/flatpak-overrides.txt
echo "Flatpak backup completed on $(date)" >> ~/flatpak-backup.log
Then, make the shell script executable:
    chmod +x ~/bin/flatpak-backup.sh Then schedule it to run weekly with cron:
    crontab -e Add this line (runs every Sunday at 10 AM):
    0 10 * * SUN ~/bin/flatpak-backup.sh This way, your Flatpak list and overrides stay current without any manual work.
    Wrapping up
You now know how to quickly back up and migrate your Flatpak apps between systems in a clean, scriptable way. It’s lightweight, doesn’t require extra tools, and makes distro hopping or system rebuilds much easier.
    If you'd like to take this to the next level, here's another quick tip: you can keep your Flatpak backup files in a version control system like git or a personal storage solution like Nextcloud. This way, if disaster strikes, you’ll be able to rebuild your app environment in minutes.
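For example, a tiny wrapper (the paths and commit message here are just illustrative, and it assumes git is installed) can snapshot each backup into a local git repository; you could then add a remote and push it wherever you like:

```shell
#!/bin/bash
# Keep Flatpak backup files under version control
BACKUP_REPO="${BACKUP_REPO:-$HOME/flatpak-backup}"

mkdir -p "$BACKUP_REPO"
cd "$BACKUP_REPO" || exit 1
[ -d .git ] || git init -q

# Copy whichever backup files currently exist
for f in "$HOME"/flatpak-apps.txt "$HOME"/flatpak-remotes.txt "$HOME"/flatpak-overrides.txt; do
    [ -f "$f" ] && cp "$f" .
done

git add -A
# Commit only when something changed since the last snapshot
git diff --cached --quiet || git -c user.name=backup -c user.email=backup@localhost \
    commit -q -m "Flatpak backup $(date +%F)"
```

You could call this from the cron job above instead of (or after) the plain backup script.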
You can also back up and restore Snap packages in a similar fashion.
Move Between the Distros: Back Up and Restore Your Snap Packages
Make a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too. (It's FOSS, Roland Taylor)
I hope you find it useful 😄
  17. by: Chris Coyier
    Mon, 24 Nov 2025 15:38:52 +0000

I’ve been using Kagi for search for the last many months. I just like the really clean search results. Google search results feel all junked up with ads and credit-bereft AI sludge, like the incentives to provide a useful experience have been overpowered by milking profit and corporate mandates on making sure your eyeballs see as much AI as possible.
I’m also not convinced Google cares about AI slop. Like, do they care if a movie review for Predator: Badlands was written by a human after actually watching the movie, or if Gemini farted out a review because the LLM knows basically what a movie review reads like? Me, I sure would like to know. So I’m pleased with Kagi’s SlopStop idea.
    But I’ve managed to start this column with something I didn’t even really intend to talk about.
Naturally, I’d like to talk about the typography on Kagi’s blog (follow that SlopStop link).
    Look at those single words at the end of both of those headers. Looks off. I can’t remember if those are “widows” or “orphans”, but upon looking it up, it’s neither, it’s a “runt” (lol).
    Obviously we can’t have that.
    One option is to text-wrap: balance; on the headers. Here’s what that looks like:
    Ehhhhhhhhh. Also not ideal. It makes those headers like half the width of the available space. Balancing is just way nicer with center-aligned headers. Which actually makes me think of how style queries should work with arbitrary styles…
    h1, h2, h3, h4 { /* doesn't actually work, style queries only work on --custom-properties */ @media style(text-align: center) { text-wrap: balance; } } Whatever — let’s not balance here anyway, let’s try text-wrap: pretty; (which lacks Firefox support). There we go:
Better. The pretty value does a bunch of stuff, and runt protection is among its effects.
    Honestly though it’s the line-height that bugs me the most. It’s just too much for a big header. Let’s bring it in and even pull the letters a little bit with negative letter-spacing.
    Now we’ve got to fight hierarchy and organization a bit. All the text is pure black… fine. Everything is about the same distance away from each other… that’s a little weird. So we’re just leaning on text size and weight (and one little instance of italic).
I think we bring in just a smidge more to help here. Kagi has a wonderful little dog logo, so we bring her in on the title to set it apart. The nav can sit inline with the title. We use the nice yellow brand color to better set off the title and date, then let it ride.
    They should probably just get a CodePen account to work this stuff out right?
  18. by: Daniel Schwarz
    Mon, 24 Nov 2025 14:22:30 +0000

    Sometimes I want to set the value of a CSS property to that of a different property, even if I don’t know what that value is, and even if it changes later. Unfortunately though, that’s not possible (at least, there isn’t a CSS function that specifically does that).
    In my opinion, it’d be super useful to have something like this (for interpolation, maybe you’d throw calc-size() in there as well):
    /* Totally hypothetical */ button { border-radius: compute(height, self); border-radius: compute(height, inherit); border-radius: compute(height, #this); } In 2021, Lea Verou explained why, despite being proposed numerous times, implementing such a general-purpose CSS function like this isn’t feasible. Having said that, I do remain hopeful, because things are always evolving and the CSSWG process isn’t always linear.
    In the meantime, even though there isn’t a CSS function that enables us to get the value of a different property, you might be able to achieve your outcome using a different method, and those methods are what we’re going to look at today.
    The fool-proof CSS custom properties method
    We can easily get the value of a different CSS property using custom properties, but we’d need to know what the value is in order to declare the custom property to begin with. This isn’t ideal, but it does enable us to achieve some outcomes.
    Let’s jump back to the example from the intro where we try to set the border-radius based on the height, only this time we know what the height is and we store it as a CSS custom property for reusability, and so we’re able to achieve our outcome:
    button { --button-height: 3rem; height: var(--button-height); border-radius: calc(var(--button-height) * 0.3); } We can even place that --button-height custom property higher up in the CSS cascade to make it available to more containment contexts.
:root { /* Declare here to use anywhere */ --button-height: 3rem; header { --header-padding: 1rem; padding: var(--header-padding); /* Height is unknown (but we can calculate it) */ --header-height: calc(var(--button-height) + (var(--header-padding) * 2)); /* Which means we can calculate this, too */ border-radius: calc(var(--header-height) * 0.3); button { /* As well as these, of course */ height: var(--button-height); border-radius: calc(var(--button-height) * 0.3); /* Oh, what the heck */ padding-inline: calc(var(--button-height) * 0.5); } } }
I guess my math teacher wasn’t lying when she said that I’d need algebra one day!
    The unsupported inherit() CSS function method
    The inherit() CSS function, which isn’t currently supported by any web browser, will enable us to get the value of a parent’s property. Think: the inherit keyword, except that we can get the value of any parent property and even modify it using value functions such as calc(). The latest draft of the CSS Values and Units Module Level 5 spec defines how this’d work for custom properties, which wouldn’t really enable us to do anything that we can’t already do (as demonstrated in the previous example), but the hope is that it’d work for all CSS properties further down the line so that we wouldn’t need to use custom properties (which is just a tad longer):
    header { height: 3rem; button { height: 100%; /* Get height of parent but use it here */ border-radius: calc(inherit(height) * 0.3); padding-inline: calc(inherit(height) * 0.5); } } There is one difference between this and the custom properties approach, though. This method depends on the fixed height of the parent, whereas with the custom properties method either the parent or the child can have the fixed height.
This means that inherit() wouldn’t interpolate values. For example, an auto value that computes to 3rem would still be inherited as auto, which might compute to something else when inherit()-ed. Sometimes that’d be fine, but other times it’d be an issue. Personally, I’m hoping that interpolation becomes a possibility at some point, making it far more useful than the custom properties method.
    Until then, there are some other (mostly property-specific) options.
    The aspect-ratio CSS property
    Using the aspect-ratio CSS property, we can set the height relative to the width, and vice-versa. For example:
    div { width: 30rem; /* height will be half of the width */ aspect-ratio: 2 / 1; /* Same thing */ aspect-ratio: 3 / 1.5; /* Same thing */ aspect-ratio: 10 / 5; /* width and height will be the same */ aspect-ratio: 1 / 1; } Technically we don’t “get” the width or the height, but we do get to set one based on the other, which is the important thing (and since it’s a ratio, you don’t need to know the actual value — or unit — of either).
    The currentColor CSS keyword
    The currentColor CSS keyword resolves to the computed value of the color property. Its data type is <color>, so we can use it in place of any <color> on any property on the same element. For example, if we set the color to red (or something that resolves to red), or if the color is computed as red via inheritance, we could then declare border-color: currentColor to make the border red too:
body { /* We can set color here (and let it be inherited) */ color: red; button { /* Or set it here */ color: red; /* And then use currentColor here */ border-color: currentColor; border: 0.0625rem solid currentColor; background: hsl(from currentColor h s 90); } }
This enables us to reuse the color without having to set up custom properties, and of course if the value of color changes, currentColor will automatically update to match it.
    While this isn’t the same thing as being able to get the color of literally anything, it’s still pretty useful. Actually, if something akin to compute(background-color) just isn’t possible, I’d be happy with more CSS keywords like currentColor.
    In fact, currentBackgroundColor/currentBackground has already been proposed. Using currentBackgroundColor for example, we could set the border color to be slightly darker than the background color (border-color: hsl(from currentBackgroundColor h s calc(l - 30))), or mix the background color with another color and then use that as the border color (border-color: color-mix(in srgb, currentBackgroundColor, black 30%)).
    But why stop there? Why not currentWidth, currentHeight, and so on?
    The from-font CSS keyword
    The from-font CSS keyword is exclusive to the text-decoration-thickness property, which can be used to set the thickness of underlines. If you’ve ever hated the fact that underlines are always 1px regardless of the font-size and font-weight, then text-decoration-thickness can fix that.
    The from-font keyword doesn’t generate a value though — it’s optionally provided by the font maker and embedded into the font file, so you might not like the value that they provide, if they provide one at all. If they don’t, auto will be used as a fallback, which web browsers resolve to 1px. This is fine if you aren’t picky, but it’s nonetheless unreliable (and obviously quite niche).
    We can, however, specify a percentage value instead, which will ensure that the thickness is relative to the font-size. So, if text-decoration-thickness: from-font just isn’t cutting it, then we have that as a backup (something between 8% and 12% should do it).
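    To sketch how that backup might look, the two declarations can be layered; the 10% here is an arbitrary choice, and note that browsers which understand from-font will prefer it (falling back to auto if the font embeds no value), so the percentage only kicks in where from-font isn’t supported:

    ```css
    /* A sketch: percentage thickness scales with font-size;
       from-font overrides it where the keyword is supported. */
    a {
      text-decoration-line: underline;
      text-decoration-thickness: 10%;       /* fallback for older browsers */
      text-decoration-thickness: from-font; /* wins where supported */
    }
    ```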
    Don’t underestimate CSS units
    You probably already know about vw and vh units (viewport width and viewport height units). These represent a percentage of the viewport’s width and height respectively, so 1vw for example would be 1% of the viewport’s width. These units can be useful by themselves or within a calc() function, and can be used within any property that accepts a <length> unit.
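    As a quick sketch of that “within a calc() function” part, viewport units are handy for fluid type (the specific numbers here are arbitrary):

    ```css
    /* Fluid heading: grows with the viewport width, but never
       drops below 1.5rem or exceeds 3rem. */
    h1 {
      font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
    }
    ```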
    However, there are plenty of other, lesser-known units that can be useful in a similar way:
    1ex: equal to the computed x-height
    1cap: equal to the computed cap height
    1ch: equal to the computed width of the 0 glyph
    1lh: equal to the computed line-height (as long as you’re not trimming or adding to its content box, for example using text-box or padding, respectively, lh units could be used to determine the height of a box that has a fixed number of lines)
    Source: W3
    And again, you can use them, their logical variants (e.g., vi and vb), and their root variants (e.g., rex and rcap) within any property that accepts a <length> unit.
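    That lh parenthetical deserves a sketch of its own: capping a box at an exact number of lines, with no magic pixel values (the class name and numbers are illustrative):

    ```css
    /* A five-line teaser: 1lh is the computed line-height, so the
       cap tracks any font-size or line-height change automatically. */
    .teaser {
      line-height: 1.5;
      max-height: calc(5 * 1lh);
      overflow: hidden;
    }
    ```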
    In addition, if you’re using container size queries, you’re also free to use the following container query units within the containment contexts:
    1cqw: equal to 1% of the container’s computed width
    1cqh: equal to 1% of the container’s computed height
    1cqi: equal to 1% of the container’s computed inline size
    1cqb: equal to 1% of the container’s computed block size
    1cqmin: equal to 1cqi or 1cqb, whichever is smaller
    1cqmax: equal to 1cqi or 1cqb, whichever is larger
    That inherit() example from earlier, you know, the one that isn’t currently supported by any web browser? Here’s the same thing but with container size queries:
    header {
      height: 3rem;
      container: header / size;

      @container header (width) {
        button {
          height: 100%;
          border-radius: calc(100cqh * 0.3);
          padding-inline: calc(100cqh * 0.5);
        }
      }
    }
    Or, since we’re talking about a container and its direct child, we can use the following shorter version that doesn’t create and query a named container (we don’t need to query the container anyway, since all we’re doing is stealing its units!):
    header {
      height: 3rem;
      container-type: size;

      button {
        height: 100%;
        border-radius: calc(100cqh * 0.3);
        padding-inline: calc(100cqh * 0.5);
      }
    }
    However, keep in mind that inherit() would enable us to inherit anything, whereas container size queries only enable us to inherit sizes. Also, container size queries don’t work with inline containers (that’s why this version of the container is horizontally stretched), so they can’t solve every problem anyway.
    In a nutshell
    I’m just going to throw compute() out there again, because I think it’d be a really great way to get the values of other CSS properties:
    button {
      /* self could be the default */
      border-radius: compute(height, self);

      /* inherit could work like inherit() */
      border-radius: compute(height, inherit);

      /* Nice to have, but not as important */
      border-radius: compute(height, #this);
    }
    But if it’s just not possible, I really like the idea of introducing more currentColor-like keywords. With the exception of keywords like from-font where the font maker provides the value (or not, sigh), keywords such as currentWidth and currentHeight would be incredibly useful. They’d make CSS easier to read, and we wouldn’t have to create as many custom properties.
    In the meantime though, custom properties, aspect-ratio, and certain CSS units can help us in the right circumstances, not to mention that we’ll be getting inherit() in the future. These are heavily geared towards getting widths and heights, which is fine because that’s undoubtedly the biggest problem here, but hopefully there are more CSS features on the horizon that allow values to be used in more places.
    On Inheriting and Sharing Property Values originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  19. by: Sourav Rudra
    Mon, 24 Nov 2025 13:19:55 GMT

    Dell has a solid track record with Linux-powered OSes, particularly Ubuntu. The company has been shipping developer-focused laptops with Ubuntu pre-installed for years.
    Many of their devices come with compatible drivers working out of the box. Audio, Wi-Fi, Thunderbolt ports, and even fingerprint readers mostly work without hassle. My daily workhorse is a Dell laptop that hasn't had a driver-related issue for quite some time now.
    And a recent launch just reinforces their Linux approach.
    Dell Pro Max 16 Plus: What's Inside?
    Dell just launched the Pro Max 16 Plus. It is being marketed as the first mobile workstation with an enterprise-grade discrete NPU, the Qualcomm AI 100 PC Inference Card. It packs 64GB of dedicated AI memory and dual NPUs on a single card.
    Under the hood, you get Intel Core Ultra processors (up to Ultra 9 285HX), memory up to 256GB CAMM2 at 7200MT/s, GPU options up to NVIDIA RTX PRO 5000 Blackwell with 24GB VRAM, and storage topping out at 12TB with RAID support.
    Interestingly, Phoronix has received word that the Windows 11 version of the Dell Pro Max 16 Plus won't ship until early 2026, while the validated Ubuntu 24.04 LTS version is already available.
    With this, Dell is targeting professionals who can't rely on cloud inferencing. It says that the discrete NPU keeps data on-device while eliminating cloud latency, enabling work in air-gapped environments, disconnected locations, and compliance-heavy industries.
    📝 Key Specifications
    The Dell Pro Max 16 Plus ships with the following components:
    Video: Up to 16″ UHD+, 120Hz, OLED touch display.
    Power: 6-cell, 96 Wh, Li-ion.
    Audio: 1x 3.5 mm combined audio jack, 2x 2 W stereo speakers.
    Camera: 1080p at 30 fps HDR FHD RGB camera, 8MP 30 fps HDR RGB+IR camera.
    USB: 1x USB 3.2 Gen 1 (5 Gbps) with PowerShare, 1x USB 3.2 Gen 1 (5 Gbps).
    Thunderbolt: 2x Thunderbolt 5 (80 Gbps), 1x Thunderbolt 4 (40 Gbps), all with Power Delivery and DisplayPort.
    Networking: 1x RJ45 (2.5 Gbps), Wi-Fi 7 BE200, Bluetooth 5.4, and Qualcomm Snapdragon X72 eSIM.
    Slots: 1x SD card reader, 1x smart card reader.
    Weight: 5.63 lb (2.55 kg)
    🛒 Pricing & Availability
    The Dell Pro Max 16 Plus starts at $3,329 (excl tax and shipping). You can configure and order it directly from the official website.
    Dell Pro Max 16 Plus
    Suggested Read 📖
    Best Linux Laptop of 2025? TUXEDO InfinityBook Pro 15 (Gen10) Launches: Beast specifications. Pre-orders open now, mid-August shipping. (It's FOSS, Sourav Rudra)
  20. by: Ani
    Mon, 24 Nov 2025 10:15:48 +0000

    I struggled to combine work and family. It took me years, mistakes, and a lot of self-reflection to understand what really matters. When I had more balance, I became happier, more creative, and ultimately more effective. I also learned that personal happiness matters.
    About me
    I am the Head of DevOps and AI at Eficode. I have vast experience in IT service organizations. A significant part of my focus is on AI upskilling. We run several AI-related initiatives, including weekly demos and knowledge-sharing sessions. In addition, I am part of the Eficode Finland Steering Group, which meets weekly, and we also hold regular gatherings for all Eficode leaders.
    My Journey from Chemistry to IT and Beyond
    When I think back to my days at Ressu High School, I remember being equally fascinated by chemistry and psychology, but eventually I chose chemistry. That decision led me to pursue my first master’s degree in chemical engineering. The job market for chemists wasn’t exactly booming in 1999, while the IT industry was exploding with opportunities. 
    My first job was at Hewlett-Packard, where I worked as a sales representative. I was responsible for selling Unix servers to a major telecom company in Finland. It introduced me to the world of technology, but after two years, I realized that sales alone weren’t enough for me. I wanted to go deeper.
    Katja Saarela, Head of DevOps and AI, Eficode
    From Academia to Consulting

    That curiosity led me back to university. I began working toward my PhD, exploring big data and bioinformatics long before those terms became buzzwords. I loved the research, the depth, and the challenge, but I also discovered that academia moves quite slowly.
    That’s when I realized that consulting might be my perfect fit. In consulting, every project brings a new question, a new client, a new opportunity to learn. It’s fast, dynamic, and exactly what my curious mind craves.
    A Lifelong Learner
    In technology, and in life, you’re never “done learning.” I want to learn new things all the time, and for me it’s really interesting to start learning a new area or a new topic. So much so that along the way I earned additional degrees, such as a Master of Computer Science and a Master of Economics, and even explored theology and philosophy. In this field, you need the joy of lifelong learning. It is important to never feel that, okay, now I know everything.
    Managing Stress: Lessons from a Career in IT and Parenthood

    In the early years of my career, work was at the centre of my life. I thought it was the most important thing in the world. But I was wrong. I struggled to combine work and family. It took me years, mistakes, and a lot of self-reflection to understand what really matters. 
    I have five kids, and in those early years, my values weren’t right. I was giving my best to my job, but not to the people who needed me most: my family. Over time, I realized something that completely changed my perspective: in my family, I can’t be replaced. But at work, no one is truly indispensable. 
    I began to set my priorities clearly: family first, then work. Ironically, when I started working less, I finally began moving forward in my career. That was one of the most surprising lessons of my life. When I had more balance, I became happier, more creative, and ultimately more effective. I also learned that personal happiness matters. If I have time for my hobbies and my studies, I’m happier. And when I’m happy, I’m a better leader and colleague.
    Now, with older kids and more experience, I don’t see the need for such strict boundaries. I might do small work tasks in the evening, but it doesn’t feel like a burden anymore. After 25 years in the IT field, I trust myself. I know what I’m doing, and I no longer worry as much.
    Working At Eficode
    My days are filled with planned meetings, but also spontaneous discussions with colleagues. I spend a lot of time at our Helsinki office because meeting people face-to-face and exchanging ideas energizes and inspires me.
    No two days look the same in my role. I’m responsible for the delivery and performance of our consultants. I have four teams in my unit. I regularly meet with my Team Leads, collaborate with our Sales team to review ongoing and upcoming cases, and lately, I’ve also been conducting many job interviews as we are recruiting new consultants for the unit.
    Learning Skills from Sports
    Scouting has been my most important hobby for as long as I can remember. I’ve held different positions over the years. I started as a scout leader at the age of 15. Looking back, that was my first real leadership experience. I didn’t realize it at the time, but those years of leading groups, organizing activities, and motivating people taught me lessons that became the foundation of my professional life.
    Years later, when I transitioned from an expert role to a leadership position in my career, I struggled at first. It wasn’t easy to move from doing the work myself to guiding others to do it. Then I remembered my early days in scouting — and it clicked. I had been leading people since I was a teenager. That gave me confidence. Leadership wasn’t new to me after all.
    Scouting also taught me one of the most practical skills of all: time management. As a student, I had school, hobbies, and responsibilities in the scouts. I had to learn how to divide my time carefully, and that skill has stayed with me to this day. Now, in my work life, I still structure my time the same way: focus on my tasks but always make space for my hobbies and family.
    But the most important lesson I learned from scouting was listening to myself and my feelings. It’s easy to plan your week, to fill your calendar with activities and goals. But sometimes, it just doesn’t feel right. Scouting taught me to pay attention to recognize when I need to adjust my schedule or slow down. It’s not just about efficiency; it’s about balance and well-being.
    Managing and Leading
    Over the years, I have come to realize the difference between managing and leading — two roles that often overlap but are not the same. Managing is about things: tasks, deadlines, and structures. Things don’t have feelings; they can be organized logically into a schedule. But leading is about people, and people are complex. They have families, challenges, and emotions. 
    Real leadership means being able to handle both managing tasks and people, but it mostly means understanding that people are not robots. It’s about connecting with them, listening to their worries, hearing their ideas, and being flexible when life happens. Sometimes plans need to change, and that’s okay. What matters is building trust and respect so that people feel valued and supported.
    AI agents and trust
    Lately, I’ve been deeply interested in the relationship between AI agents and trust. I often listen to Eficode’s tech talks, especially those by our CTO, Marko, who shares fascinating insights into the world of AI agent orchestration. At Eficode, for example, we’ve developed a demo in which six different AI agents collaborate to build software: one writes specifications, another codes, and others handle testing. What makes this so intriguing is not just the technology itself, but the human element behind it: how do these agents trust one another, and how can we trust the results they produce? This question of trust is at the heart of today’s AI revolution. 
    The post Role Model Blog: Katja Saarela, Eficode first appeared on Women in Tech Finland.
  21. by: Sourav Rudra
    Mon, 24 Nov 2025 09:53:40 GMT

    Last month, Zorin OS 18 dropped just in time for the Windows 10 EOL, bringing about an assortment of improvements like Linux kernel 6.14, rounded corners for the desktop interface, and a new window tiling manager.
    So, it didn't come as a surprise to me when Zorin OS 18 hit the 1 million downloads milestone just over a month after its release. Alongside that announcement, the developers have made available an upgrade path from Zorin OS 17, which is intended for users of Core, Education, and Pro editions.
    Let me walk you through the upgrade process. 😃
    🚧 This upgrade path is currently in the testing phase. I don't recommend using it on your main computer or any production machine until the full rollout.
    Before You Upgrade to Zorin OS 18
    Zorin OS uses Déjà Dup as the backup utility.
    First, ensure that you are running Zorin OS 17.3, the last point release. Then, create a backup of your files before upgrading the system. This is an optional step, as Zorin OS' upgrade tool is quite reliable.
    The easiest way to do so is by using the pre-installed "Backups" tool. You can search for it in the Zorin Menu (the app launcher).
    You can select the folders you want to back up and the location for their storage.
    After you launch it, click on "Create My First Backup," and select the folders you want saved and the ones ignored. Then, select the storage location for the backup. I suggest you store these on external storage or upload them to Google Drive.
    📋 In the screenshot above, I just used a dummy folder located on-device to demonstrate the steps.
    You can choose to encrypt your Zorin OS backups.
    Should you choose to, there is an option to encrypt the backup using a password; you will need it to update the existing backup or restore the files to the system.
    For a more comprehensive backup solution, I recommend opting for Timeshift instead.
    Guide to Backup and Restore Linux Systems with Timeshift: This beginner's guide shows you how to back up and restore Linux systems easily with the Timeshift application. (It's FOSS, Abhishek Prakash)
    Time for The Upgrade
    Open the Zorin Menu by clicking on its logo in the taskbar or pressing the Super key on your keyboard and search for "Software Updater". If you have any pending updates, get them by clicking on "Install Now".
    Just search for the Software Updater in the Zorin Menu.
    You will be prompted to enter your account password. Enter it to authenticate the upgrade and wait for the process to complete. Towards the end, you might be asked to restart your computer.
    Now, open the terminal via the Zorin Menu or by using the handy keyboard shortcut Ctrl + Alt + T and run the following command on it:
    gsettings set com.zorin.desktop.upgrader show-test-upgrades true
    When the upgrade path comes out of testing, you won't need to run the above-mentioned command and can directly skip over to the step below.
    Finding the "Upgrade Zorin OS" tool is easy.
    Now, launch the "Upgrade Zorin OS" tool and select the Zorin OS 18 edition that matches your current installation. In my case, that is Zorin OS 18 Core, going up from Zorin OS 17 Core.
    You will be prompted to enter your password again. Go ahead and authenticate.
    Remember to read the disclaimers!
    After an upgrade requirements check, a long list of disclaimers will be shown. Ensure that you go through them before clicking on "Upgrade" to begin the upgrade process from Zorin OS 17 to 18.
    The final stretch of the Zorin OS upgrade process.
    Now it is just a matter of waiting. The upgrade time depends on your internet speed and hardware. Once done, restart your computer when prompted, and you will boot into Zorin OS 18.
    If you run into any issues, you can ask the helpful FOSSers over at It's FOSS Community for help.
    Suggested Read 📖
    Move Between the Distros: Back Up and Restore Your Snap Packages. Make a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too. (It's FOSS, Roland Taylor)
  22. by: Geoff Graham
    Fri, 21 Nov 2025 18:53:05 +0000

    Sketch is getting a massive UI overhaul, codenamed Copenhagen:
    Makes a lot of sense for an app that’s so tightly integrated with the Mac to design around the macOS UI. Big Sur was a big update. Apple called it the biggest one since Mac OS X. So big, indeed, that they renamed Mac OS to macOS in the process. Now we have macOS Tahoe, and while it isn’t billed as the “biggest update since Big Sur,” it does lean into an entirely new Liquid Glass aesthetic that many are calling the biggest design update to the Apple ecosystem since iOS 7.
    Sketch probably didn’t “have” to redesign its UI to line up with macOS Tahoe, but a big part of its appeal is the fact that it feels like it totally belongs to the Mac. It’s the same for Panic apps.
    The blog post I linked to sheds a good amount of light on the Sketch team’s approach to the updates. I came to the blog post to read about the attention they put into new features (individual page and frame link for the win!) and tightening up existing ones (that layer list looks nice), but what I really stayed for was their approach to Liquid Glass. Turns out they decided to respect it, but split lanes a bit:
    Spend a few seconds with an early prototype that leaned more heavily into Liquid Glass and it’s uber clear why a custom route was the best lane choice:
    Still taken from one of the blog post’s embedded videos
    Choosing a design editor can feel personal, can’t it? I know lots of folks are in the Figma Or Bust camp. Illustrator is still the favorite child for many, after all these… decades! There’s a lot of buzz around Affinity now that it’s totally free. I adopted Sketch a long time ago. How long? I dug up this dusty old blog post I wrote about Sketch 3 back in 2014, so at least 11 years.
    But I’m more of a transient in the design editor space. Being a contractor and all, I have to be open to any app my clients might use internally, regardless of my personal preference. I’d brush up on Sketch’s UI updates even if it wasn’t my go-to.
    Sketch: A guided tour of Copenhagen originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
  23. by: Sourav Rudra
    Fri, 21 Nov 2025 10:34:13 GMT

    When Qualcomm announced its acquisition of Arduino in October 2025, the tinkerer and maker community watched nervously. Large corporate acquisitions rarely end well for open platforms after all, and enshittification is something that often follows.
    And now, what's followed is unsettling. Adafruit Industries, makers of popular development boards and a respected voice in the open hardware space, have sounded the alarm.
    This Looks Concerning
    Qualcomm has quietly made some massive changes to Arduino's Terms of Service and Privacy Policy, marking a clear departure from the platform's founding principles.
    According to Adafruit, the new policies introduce sweeping user-license provisions, broaden data collection (particularly around AI usage), and embed long-term account data retention, all while integrating user information into Qualcomm’s broader data ecosystem.
    Section 7.1 grants Arduino a perpetual, irrevocable license over anything you upload. Your code, projects, forum posts, and comments all fall under this. This remains in effect even after you delete your account. Arduino retains rights to your content indefinitely.
    The license is also royalty-free and sublicensable. Arduino can use your content however they want, distribute it, modify it, and even sublicense it to others.
    This is not unfounded; see for yourself.
    The terms further state that users are not allowed to reverse engineer or attempt to understand how the platform works unless Arduino gives permission. Adafruit argues that this contradicts the values that made Arduino attractive to educators, researchers, and hobbyists.
    The Privacy Policy states Arduino is wholly owned by Qualcomm Technologies, Inc. User data, including from minors, flows to other Qualcomm Group companies.
    While these policy changes have raised eyebrows, Qualcomm and Arduino maintain that the acquisition will not alter the core spirit of the platform. They also state that existing Arduino boards built on non-Qualcomm microcontrollers will continue to be supported.
    Nonetheless, there are good reasons to take Adafruit's concerns seriously. The updated Terms of Service and Privacy Policy do contain sweeping language that feels out of place for a platform built on openness and transparency. The community is entitled to scrutinize these changes closely.
    At the same time, the hardware side of Arduino doesn't seem to have changed much so far. Going forward, how these two organizations respond to criticism like this should paint a clearer picture of Arduino's future at its new home.
  24. by: Roland Taylor
    Fri, 21 Nov 2025 04:43:20 GMT

    The Snap packaging system makes it easy to install and update software on any Linux distribution that supports them. However, if you’ve ever had to reinstall your system, you’ve probably been burned by the fact that Snap, like most other packaging systems, doesn’t provide any built-in means for exporting your apps or moving them to a new machine.
    Thankfully, there's good news: you still can. With just a few commands and a bit of organisation, you can export and restore your Snap applications on any other system where Snap is supported.
    🚧 Some things to keep in mind
    Before you dive in, there are some key things to understand about the Snap packaging system and how it works. Snap doesn’t yet have a built-in “export/import” tool like Flatpak. Neither packaging format allows you to repackage any packages you've already installed.
    Furthermore, with the Snap system, reinstalling restores the latest version of the package, not necessarily the exact revisions you previously had. Since many apps store extra data under /var/snap/, you'll likely need to restore this data as well, if you're seeking to retain your settings when migrating. This article will show you how you can back up and restore this directory as well.
    ⚠️ What this tutorial cannot cover
    Occasionally, Snap packages require hooks to enable certain features and integrations. Unfortunately, this is a more complicated process and must be done on a per-package basis. For this reason, we won't cover how to do this for individual packages, as that process can differ for each package that requires it.
    Now, let's dive in.
    Step 1. Creating a list of installed snap packages
    To get started, you'll first need to save a list of every Snap package currently installed on your system:
    snap list --all | awk 'NR>1 {print $1}' > snap-list.txt
    This will create a text file with the names of all your Snap packages. As with most other packaging systems, package names are all you need to refer to the packages you want to manage. However, if you'd like to keep a note of further details in this list, you can do so with the following command:
    snap list --all > snap-list-detailed.txt
    🗒️ Unlike with other packaging systems, you cannot restore particular revisions of an application with the Snap packaging system. This command is only useful for record-keeping purposes.
    Step 2. Backing up your app data
    Snap packages store their data and settings in your home folder within the ~/snap directory. Each app saves its data in a subdirectory of the same name. For example, Inkscape saves its data in ~/snap/inkscape, Firefox in ~/snap/firefox, and so on.
    Each Snap package has its own config and data directory.
    You can back up individual apps if you'd like, but for the purposes of this tutorial, we'll run through how to back up the entire directory.
    To do this, you can run:
    tar -czf snap-data-backup.tar.gz -C ~ snap
    The -C ~ snap form stores the snap directory relative to your home folder, so it restores cleanly into any user's home later. Remember to copy this file along with snap-list.txt to the target system where you'll be restoring your packages.
    🗒️ If you're on a multi-user system, you'll need to run this for each user on the system you're transferring from.
    Step 3. Transferring to the target system
    On the target system, you should first ensure Snap is installed and working.
    For the best results, it's safest if the target system has the same or a newer version of Snap compared to the original. You can check the Snap version on both systems by running the following command:
    snap version
    If you get output showing your snap version and other data, you’re ready to go.
    🗒️ If you're running Ubuntu, Snap will be preinstalled. Most other major distributions do not ship with Snap preinstalled, so you'll need to install it before continuing.
    Installing your packages on the target system
    In the same directory where you've copied the snap-list text file, run the following command to install the snap packages from your list:
    xargs -a snap-list.txt sudo snap install
    Once this command is finished running, you'll have all the same Snap apps and packages you'd have had on your previous system. Now, you can move on to restoring your app data.
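    One caveat with the xargs approach: a single failure (for example, a snap that requires --classic confinement, which xargs won't add for you) can interrupt the rest. A per-package loop is more forgiving. Here's a dry-run sketch; SNAP_CMD and the package names are illustrative, and you'd replace the echo with the real install command:

    ```shell
    # Dry-run: SNAP_CMD echoes the command instead of running it.
    # Set SNAP_CMD="sudo snap install" for the real thing.
    SNAP_CMD="${SNAP_CMD:-echo sudo snap install}"
    printf 'firefox\ncore22\n' > /tmp/snap-list-demo.txt
    while read -r pkg; do
      $SNAP_CMD "$pkg" || echo "failed: $pkg (does it need --classic?)"
    done < /tmp/snap-list-demo.txt
    # prints:
    # sudo snap install firefox
    # sudo snap install core22
    ```
    
    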
    You can verify your apps successfully installed by running:
    snap list
    Step 4. Restoring app data
    Now that you've successfully restored your Snap apps and packages, you can restore your Snap package data. To do this, you can decompress the archive you created earlier in your home folder:
    tar -xzf snap-data-backup.tar.gz -C ~/
    Remember, if you've done this for multiple users, this will need to be done in each user's home folder individually.
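    If you'd like to sanity-check the backup-and-restore round trip before touching your real data, here's a self-contained sketch that uses a throwaway directory in place of your home folder (all paths and file names are illustrative):

    ```shell
    # Simulate backup and restore of a snap data directory.
    mkdir -p /tmp/snap-demo/snap/firefox/common
    echo "settings" > /tmp/snap-demo/snap/firefox/common/prefs.txt

    # Back up: paths are stored relative to the demo "home"...
    tar -czf /tmp/snap-demo/backup.tar.gz -C /tmp/snap-demo snap

    # ...wipe the data, then restore into the same "home".
    rm -rf /tmp/snap-demo/snap
    tar -xzf /tmp/snap-demo/backup.tar.gz -C /tmp/snap-demo

    cat /tmp/snap-demo/snap/firefox/common/prefs.txt
    # prints: settings
    ```
    
    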
    Optional bonus for advanced users: Automating your setup
    If you regularly install or change your Snap packages, and you'd like to run this step more smoothly, you can automate it all with some simple scripting and cron.
    Creating the script
    For the script, you just need to tie these commands together. Create a file such as ~/bin/snap-back.sh, and give it executable permission:
    # Make sure ~/bin exists, then create the script:
    mkdir -p ~/bin
    touch ~/bin/snap-back.sh

    # Give it executable permission:
    chmod +x ~/bin/snap-back.sh
    Now edit the script with the text editor of your choice, and add the following:
    #!/bin/bash
    # Backup Snap package list and user data
    snap list --all | awk 'NR>1 {print $1}' > ~/snap-list.txt
    tar -czf ~/snap-data-backup.tar.gz -C "$HOME" snap

    # Optional: enable logging
    echo "Snap backup completed on $(date)" >> ~/snap-backup.log
    If you don't need to keep a log, you can remove the last line (and the comment above it).
    Automating it all
    If you'd like to have this back up run at regular intervals, you can schedule this process with cron:
    # Open your crontab
    crontab -e
    In the editor, add this line:
    0 10 * * SUN ~/bin/snap-back.sh
    This will run your Snap package backup automatically at 10:00 every Sunday. You can choose any interval you'd prefer, of course.
    Conclusion
    Even though Snap doesn't offer the same level of convenience as Flatpak, these steps still give you a dependable and scriptable way to preserve and transfer your setup. This is especially useful if you love to maintain the same setup across devices or like to do a fresh installation on upgrade. Remember, you can always keep your setup synced to a version control system or your personal cloud server.
  25. by: Sunkanmi Fafowora
    Thu, 20 Nov 2025 15:10:26 +0000

    For the past few months, I’ve been writing a lot of entries on pseudo-selectors in CSS, like ::picker() or ::checkmark. And, in the process, I noticed I tend to use the :open pseudo-selector a lot in my examples — and in my work in general.
    Borrowing words from the fine author of the :open entry in the Almanac:
    So, given this:
    details:open {
      background: lightblue;
      color: darkred;
    }
    We expect that the <details> element gets a light blue background and dark red text when it is in an open state (everywhere but Safari at the time I’m writing this):
    But what if we want to select the “closed” state instead? That’s what we have the :closed pseudo-class for, right? It’s supposed to match an element’s closed state. I say “supposed” because it’s not specced yet.
    But does it need to be specced at all? I only ask because we can still target an element’s closed state without it using :not():
    /* When details is _not_ open, but closed */ details:not(:open) { /* ... */ } So, again: do we really need a :closed pseudo-class? The answer may surprise you! (Just kidding, this isn’t that sort of article…)
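To make the comparison concrete, the two states can be styled side by side today with no :closed at all (a sketch using the same <details> example as above):

```css
/* Open state: the existing :open pseudo-class */
details:open { background: lightblue; }

/* Closed state: negation does the job */
details:not(:open) { background: lightgray; }
```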
    Some background
Talks surrounding :open started in May 2022 when Mason Freed raised the issue of adding :open (which, at the time, was also considered under the name :top-layer) to target elements in the top layer (like popups):
    Today, the OpenUI WC similarly resolved to add a :top-layer pseudo class that should apply to (at least) elements using the Popup API which are currently in the top layer. The intention for the naming and behavior, though, was that this pseudo class should also be general purpose. It should match any type of element in the top layer, including modal <dialog>, fullscreen elements, and ::backdrop pseudo elements.
This sparked discourse on whether the pseudo-class targeting the top layer of any type of element (e.g., popups, pickers, etc.) should be named :open or :top-layer. I, for one, was thrilled when the CSSWG eventually decided on :open in August 2022. The name makes a lot more sense to me because “open” implies something is in the top layer.
    To :close or :not(:open)?
Hold on, though! In September that same year, Mason asked whether or not we should have something like a :closed pseudo-class to accompany :open. That way, we can match elements in their “closed” states just as we can their “open” states. That makes a lot of sense, at least on the surface. Tab Atkins chimed in:
    And guess what? Everyone seemed to agree. Why? Because it made sense at the time. I mean, since we have a pseudo-class that targets elements in their :open state, surely it makes sense to have :closed to target elements in their closed states, right? Right??
    No. There’s actually an issue with that line of reasoning. Joey Arhar made a comment about it in October that same year:
    Wait, what happened to consensus? It’s the same question I raised at the top of this post. According to Luke Warlow:
    There is no :closed… for now
Fast forward to November 2024, when consensus was reached to start out with just :open and drop :closed for the time being.
    Dang. Nevertheless, according to WHATWG and CSSWG, that decision could change in the future. In fact, Bramus dropped a useful note in there just a month before WHATWG made the decision:
    Just dropping this as an FYI: :read-only is defined as :not(:read-write), and that shipped.
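That precedent is easy to demonstrate: since :read-only is defined as the negation of :read-write, these two rules select exactly the same elements (a sketch, assuming a read-only text input):

```css
/* A read-only input, selected two equivalent ways */
input:read-only { opacity: 0.6; }
input:not(:read-write) { opacity: 0.6; }
```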
    Which do you find easier to understand?
Personally, I’m okay with :closed — or even with using :not(:open) — so long as it works. In fact, I went ahead and swapped :closed for :not(:open) in my ::checkmark and ::picker() examples. That’s why they are the way they are today.
But! If you were to ask me which one comes easier to me on a typical day, I think I would say :closed. It’s easier for me to think in literal terms than in negated statements.
    What do you think, though? Would you prefer having :closed or just leaving it as :not(:open)?
    If you’re like me and you love following discussions like this, you can always head over to CSSWG drafts on GitHub to watch or participate in the fun.
    Should We Even Have :closed? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
