Everything posted by Blogger

  3. by: Akhilesh Mishra Fri, 28 Nov 2025 15:42:59 +0530 Think of Terraform as a construction manager. Resources are the buildings you construct. Data sources are the surveys you conduct before building. Dependencies are the order in which construction must happen. You can't build the roof before the walls, right?

Resources: The Heart of Everything

If Terraform were a programming language, resources would be the objects. They're things you create, modify, and delete. Every piece of infrastructure — servers, databases, networks, load balancers — starts as a resource in your code.

The anatomy of a resource: Two parts matter most. The type tells Terraform what kind of thing to create. The name is how you refer to it in your code. That's it.

resource "aws_instance" "web" { ami = "ami-12345678" instance_type = "t2.micro" }

Here's what beginners often miss: the name web isn't the name your server gets in AWS. It's just a label for your Terraform code. Think of it like a variable name in programming. The actual AWS resource might be named something completely different (usually via tags).

Arguments vs Attributes - the key distinction: You provide arguments (the input values). Terraform gives you attributes (the output values). You tell Terraform instance_type = "t2.micro". Terraform tells you back id = "i-1234567890abcdef0" and public_ip = "54.123.45.67" after creation. This distinction is crucial because attributes only exist after Terraform creates the resource. You can't reference an instance's IP address before it exists. Terraform figures out the order automatically.

References connect everything: When you write aws_instance.web.id, you're doing three things:
- Referencing the resource type (aws_instance)
- Referencing your local name for it (web)
- Accessing an attribute it exposes (id)

This is how infrastructure connects. One resource references another's attributes. VPC ID goes into subnet configuration. Subnet ID goes into instance configuration. These references tell Terraform the construction order.

Why the two-part naming? Because you might create multiple instances of the same type. You could have aws_instance.web, aws_instance.db, and aws_instance.cache. The type describes what it is. The name describes which one.

Data Sources: Reading the Existing World

Resources create. Data sources read. That's the fundamental difference. Real infrastructure doesn't exist in a vacuum. You're deploying into an existing VPC someone else created. You need the latest Ubuntu AMI that changes monthly. You're reading a secret from a vault. None of these things should you create — you just need to reference them.

Data sources are queries: Think of them as SELECT statements in SQL. You're querying existing infrastructure and pulling information into your Terraform code.

data "aws_ami" "ubuntu" { most_recent = true owners = ["099720109477"] filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*"] } }

This doesn't create an AMI. It searches for one that already exists and gives you its ID.

Why data sources matter for infrastructure code: Imagine hardcoding AMI IDs. Next month, there's a new Ubuntu release with security patches. You have to find the new AMI ID and update your code. Or, use a data source that always finds the latest. Code stays the same, infrastructure stays updated. The same principle applies to everything external: VPCs, DNS zones, availability zones, TLS certificates, secrets. If it exists before your Terraform code runs, use a data source.
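To tie resources and data sources together, here is a small sketch (reusing the aws_ami query above; the instance itself is hypothetical) showing how a data source's attribute flows into a resource, which is also what gives Terraform the ordering information discussed below:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]   # Canonical's AWS account, as in the example above

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id   # read first, then used to create
  instance_type = "t2.micro"
}

Because the instance references data.aws_ami.ubuntu.id, the query runs before the instance is created, and the AMI ID is always the latest match rather than a hardcoded value.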
The reference difference: Resources are type.name.attribute. Data sources are data.type.name.attribute. That extra data. prefix tells Terraform and you that this is a read operation, not a create operation. Data sources run first: Before Terraform creates anything, it runs all data source queries. This makes sense—you need to read information before you can use it to create things. String Interpolation: Building Dynamic InfrastructureInfrastructure can’t be static. You need bucket names that include environment names. Server names that include region. Tags that reference other resources. String interpolation is how you build these dynamic values. The rule is simple: Use ${} when building strings. Don’t use it for direct references. bucket = "myapp-${var.environment}-data" # String building - USE ${} ami = data.aws_ami.ubuntu.id # Direct reference - NO ${} Why the distinction? In Terraform’s early days (before version 0.12), you needed "${var.name}" everywhere. It was verbose and ugly. Modern Terraform is cleaner — interpolation only when actually building strings. What you can put inside interpolation: Everything. Variables, resource attributes, conditional expressions, function calls. If it produces a value, you can interpolate it. name = "${var.project}-${var.environment}-${count.index + 1}" Common beginner mistake: Writing instance_type = "${var.instance_type}". The ${} is unnecessary here — you’re not building a string, just referencing a variable. Just write instance_type = var.instance_type. When interpolation shines: Multi-part names. Constructing URLs. Building complex strings from multiple sources. Any time “I need to combine these values into text.” Dependencies: The Hidden GraphThis is where Terraform’s magic happens. You write resources in any order. Terraform figures out the correct creation order automatically. How? By analyzing dependencies. Implicit Dependencies: The Automatic KindWhen you reference one resource’s attribute in another resource, you’ve created a dependency. Terraform sees the reference and knows the order. Mental model: Think of dependencies as arrows in a diagram. VPC -> Subnet -> Instance. Each arrow means “must exist before.” Terraform builds this diagram automatically by finding all the attribute references in your code. resource "aws_vpc" "main" { cidr_block = "10.0.0.0/16" } resource "aws_subnet" "app" { vpc_id = aws_vpc.main.id # Reference creates dependency cidr_block = "10.0.1.0/24" } resource "aws_instance" "web" { subnet_id = aws_subnet.app.id # Another dependency ami = "ami-12345678" instance_type = "t2.micro" } You can write these in any order in your files. Terraform sees aws_vpc.main.id referenced in the subnet, and aws_subnet.app.id referenced in the instance. It builds the dependency graph: VPC -> Subnet -> Instance. Why this matters: Terraform creates things in parallel when possible. If you define 10 S3 buckets with no dependencies, Terraform creates all 10 simultaneously. If you define a VPC with 10 subnets, it creates the VPC first, then all 10 subnets in parallel. The key insight: Every attribute reference is a dependency. resource.name.attribute means “I need this resource to exist first.” Explicit Dependencies: The Manual KindSometimes Terraform can’t detect dependencies automatically. The relationship exists, but there’s no attribute reference to signal it. Classic example - IAM: You create an IAM role. You attach a policy to it. You launch an instance with that role. The instance references the role, but not the policy. 
Terraform might launch the instance before the policy attaches, causing errors.

resource "aws_instance" "app" { ami = "ami-12345678" instance_type = "t2.micro" depends_on = [aws_iam_role_policy.app_policy] }

The depends_on argument says "don't create this until that other thing exists," even though we're not referencing any of its attributes.

When you need explicit dependencies:
- Timing matters but there's no direct attribute reference
- Resources must exist in a certain order for external reasons
- You're working around provider bugs or limitations

Use sparingly: Explicit dependencies reduce parallelism. Terraform must wait for the dependency before proceeding. Only use them when implicit dependencies won't work.

The Dependency Graph

Behind the scenes, Terraform builds a directed acyclic graph (DAG) of all your resources. Nodes are resources. Edges are dependencies. This graph determines everything:
- What to create first
- What can be created in parallel
- What to destroy first when tearing down

Directed: Dependencies have direction. A depends on B, not the other way around. Acyclic: No loops allowed. If A depends on B, B can't depend on A (even indirectly). Terraform will error on circular dependencies — they're impossible to resolve.

Why you should care: Understanding the dependency graph helps you debug. If Terraform is creating things in a weird order, check the references. If it's failing on circular dependencies, look for cycles in your attribute references.

Viewing the graph: Run terraform graph to see the actual graph Terraform built. It's mostly useful for debugging complex configurations.

How It All Fits Together

Every Terraform configuration is a combination of these concepts:
- Resources define what to create
- Data sources query what exists
- Interpolation builds dynamic values
- Dependencies determine the order

The workflow: Data sources run first (they're just queries). Terraform analyzes all resource definitions and builds the dependency graph. It creates resources in the correct order, parallelizing when possible. References between resources become the glue.

The mental shift: You're not writing a script that executes top-to-bottom. You're describing desired state. Terraform figures out how to achieve it. That's declarative infrastructure.

Why beginners struggle: They think procedurally. "First create this, then create that." Terraform doesn't work that way. You declare everything you want. Terraform analyzes the dependencies and figures out the procedure.

Common Mistakes and How to Avoid Them

Mistake 1: Using resource names as identifiers - Resource names in Terraform are local to your code. They're not the names resources get in your cloud provider. Use tags or name attributes for that.

Mistake 2: Trying to reference attributes before resources exist - You can't use aws_instance.web.public_ip in a variable default value. The instance doesn't exist when Terraform evaluates variables. Use locals or outputs instead.

Mistake 3: Over-using explicit dependencies - If you're writing lots of depends_on, you're probably doing something wrong. Most dependencies should be implicit through attribute references.

Mistake 4: Confusing data sources with resources - Data sources don't create anything. If you need to create something, use a resource, not a data source.

Mistake 5: Hardcoding values that data sources should provide - Don't hardcode AMI IDs, availability zones, or other values that change. Use data sources to query them dynamically.
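To make Mistake 2 concrete, here is a minimal, hypothetical sketch of the recommended pattern: attributes that only exist after creation are exposed through outputs or computed in locals, never baked into variable defaults. The resource name and AMI ID below are placeholders.

# Anti-pattern (Terraform rejects this): a variable default cannot reference a resource attribute
# variable "web_ip" { default = aws_instance.web.public_ip }

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI
  instance_type = "t2.micro"
}

# Recommended: expose the attribute after creation via an output
output "web_public_ip" {
  value = aws_instance.web.public_ip
}

# Or derive values from it in a local
locals {
  web_url = "http://${aws_instance.web.public_ip}"
}

After terraform apply, the output shows the IP once the instance exists; trying to read it before creation is exactly what Mistake 2 warns against.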
Quick ReferenceResources: resource "type" "name" { argument = "value" } # Reference: type.name.attribute Data Sources: data "type" "name" { filter = "value" } # Reference: data.type.name.attribute String Interpolation: "prefix-${var.name}-suffix" # Building strings var.name # Direct reference Dependencies: # Implicit (automatic) subnet_id = aws_subnet.main.id # Explicit (manual) depends_on = [aws_iam_role.app] Master these four concepts and you’ll understand 80% of Terraform. Everything else builds on this foundation. You now understand the core building blocks: resources, data sources, and dependencies. But what if you need to create multiple similar resources? Copy pasting code isn’t the answer. In the next chapter, we’ll explore count, for_each, and conditionals—the tools that make your infrastructure code truly dynamic and scalable.
  4. by: Akhilesh Mishra Fri, 28 Nov 2025 15:39:09 +0530 How does Terraform remember what it created? How does it connect to AWS or Azure? Two concepts answer these questions: State (Terraform’s memory) and Providers (Terraform’s translators). Without state and providers, Terraform would be useless. Let’s understand them. What is Terraform State?State is Terraform’s memory. After terraform apply, it stores what it created in terraform.tfstate. Run this example: resource "local_file" "example" { content = "Hello from Terraform!" filename = "example.txt" } After terraform apply, check your folder – you’ll see example.txt and terraform.tfstate. Expected Files after applyState answers three questions: What exists? – Resources Terraform createdWhat changed? – Differences from your current configWhat to do? – Create, update, or delete?Change the content and run terraform plan. Terraform compares the state with your new config and shows exactly what will change. That’s the power of state. Local vs Remote StateLocal state works for solo projects. But teams need remote state stored in shared locations (S3, Azure Storage, Terraform Cloud). Remote state with S3:terraform { backend "s3" { bucket = "my-terraform-state" key = "terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" # Enables locking } } State locking prevents disasters when multiple people run Terraform simultaneously. Person A locks the state, Person B waits. Simple, but crucial for teams. Backend ConfigurationBackends tell Terraform where to store state. Local backend uses files on your computer. Remote backends use cloud storage. Local backend (default): # No configuration needed - stores terraform.tfstate locally S3 backend (AWS): terraform { backend "s3" { bucket = "my-terraform-state" key = "prod/terraform.tfstate" region = "us-west-2" encrypt = true dynamodb_table = "terraform-locks" } } Azure backend: terraform { backend "azurerm" { resource_group_name = "terraform-state" storage_account_name = "tfstatestore" container_name = "tfstate" key = "prod.terraform.tfstate" } } GCS backend (Google Cloud): terraform { backend "gcs" { bucket = "my-terraform-state" prefix = "prod" } } Terraform Cloud: terraform { backend "remote" { organization = "my-org" workspaces { name = "production" } } } Backend InitializationAfter adding backend config, initialize: terraform init Terraform downloads backend provider and configures it. If state already exists locally, Terraform asks to migrate it to remote backend. Migration example: Initializing the backend... Do you want to copy existing state to the new backend? Pre-existing state was found while migrating the previous "local" backend to the newly configured "s3" backend. No existing state was found in the newly configured "s3" backend. Do you want to copy this state to the new "s3" backend? Enter "yes" to copy and "no" to start with an empty state. Enter a value: yes Type yes and Terraform migrates your state. Partial Backend ConfigurationDon’t hardcode sensitive values. 
Use partial configuration: backend.tf: terraform { backend "s3" { # Dynamic values provided at init time } } backend-config.hcl: bucket = "my-terraform-state" key = "prod/terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" Initialize with config: terraform init -backend-config=backend-config.hcl Or via CLI: terraform init \ -backend-config="bucket=my-terraform-state" \ -backend-config="key=prod/terraform.tfstate" \ -backend-config="region=us-west-2" Use case: Different backends per environment without changing code. Changing BackendsSwitching backends? Change config and re-run init: terraform init -migrate-state Terraform detects backend change and migrates state automatically. Reconfigure without migration: terraform init -reconfigure Starts fresh, doesn’t migrate existing state. Backend Best PracticesFor S3: - Enable bucket versioning (rollback bad changes) - Enable encryption at rest - Use DynamoDB for state locking - Restrict bucket access with IAM For teams: - Always use remote backends - Never use local backends in production - One state file per environment - Use separate AWS accounts for different environments Example S3 setup: # Create S3 bucket aws s3api create-bucket \ --bucket my-terraform-state \ --region us-west-2 # Enable versioning aws s3api put-bucket-versioning \ --bucket my-terraform-state \ --versioning-configuration Status=Enabled # Create DynamoDB table for locking aws dynamodb create-table \ --table-name terraform-locks \ --attribute-definitions AttributeName=LockID,AttributeType=S \ --key-schema AttributeName=LockID,KeyType=HASH \ --billing-mode PAY_PER_REQUEST What Are Providers?Providers are translators. They connect Terraform to services like AWS, Azure, Google Cloud, and 1,000+ others. Basic AWS provider: provider "aws" { region = "us-west-2" } resource "aws_s3_bucket" "my_bucket" { bucket = "my-unique-bucket-12345" # Must be globally unique } Authentication: Use AWS CLI (aws configure) or environment variables. Never hardcode credentials in your code. Provider Requirements and VersionsAlways specify provider versions to prevent surprises: terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 5.0" # 5.x but not 6.0 } } } provider "aws" { region = "us-west-2" } resource "random_string" "suffix" { length = 6 special = false upper = false } resource "aws_s3_bucket" "example" { bucket = "my-bucket-${random_string.suffix.result}" } Version operators: = (exact), >= (minimum), ~> (pessimistic constraint). Provider Aliases: Multiple RegionsNeed the same provider with different configurations? Use aliases: provider "aws" { region = "us-west-2" } provider "aws" { alias = "east" region = "us-east-1" } resource "aws_s3_bucket" "west" { bucket = "west-bucket-12345" } resource "aws_s3_bucket" "east" { provider = aws.east bucket = "east-bucket-12345" } This creates buckets in two different regions. Perfect for multi-region deployments or backups. State Best PracticesMust do: - Add .tfstate to .gitignore (state files contain secrets) - Use remote state with encryption for teams - Enable state locking to prevent conflicts - Enable versioning on state storage (S3, etc.) 
Never do: - Manually edit state files - Commit state to git - Ignore state locking errors - Delete state without backups Essential State CommandsView state: terraform state list # List all resources terraform state show aws_s3_bucket.example # Show resource details Modify state: terraform state mv <old> <new> # Rename resource terraform state rm <resource> # Remove from state terraform import <resource> <id> # Import existing resource Example - Renaming a resource: # Change resource name in code, then: terraform state mv aws_s3_bucket.old aws_s3_bucket.new terraform plan # Should show "No changes" Advanced State ManagementBeyond basic commands, here’s what you need for real-world scenarios: Pulling and Pushing StatePull state to local file: terraform state pull > backup.tfstate Creates a backup. Useful before risky operations. Push state from local file: terraform state push backup.tfstate Restore state from backup. Use with extreme caution. Moving Resources Between ModulesRefactoring code? Move resources without recreating them: # Moving to a module terraform state mv aws_instance.web module.servers.aws_instance.web # Moving from a module terraform state mv module.servers.aws_instance.web aws_instance.web Removing Resources Without DestroyingRemove from state but keep the actual resource: terraform state rm aws_s3_bucket.keep_this Use case: You created a resource with Terraform but now want to manage it manually. Remove it from state, and Terraform forgets about it. Importing Existing ResourcesSomeone created resources manually? Import them into Terraform: # Import an existing S3 bucket terraform import aws_s3_bucket.imported my-existing-bucket # Import an EC2 instance terraform import aws_instance.imported i-1234567890abcdef0 Steps: Write the resource block in your code (without attributes)Run import command with resource address and actual IDRun terraform plan to see what attributes are missingUpdate your code to match the actual resourceRun terraform plan again until it shows no changesState Locking DetailsWhen someone is running Terraform, the state is locked. If a lock gets stuck: # Force unlock (dangerous!) terraform force-unlock <lock-id> Only use this if you’re absolutely sure no one else is running Terraform. Replacing ProvidersMigrating from one provider registry to another: terraform state replace-provider registry.terraform.io/hashicorp/aws \ registry.example.com/hashicorp/aws Useful when moving to private registries. State Inspection TricksShow specific resource: terraform state show aws_instance.web Shows all attributes of a single resource. Filter state list: terraform state list | grep "aws_instance" Find all EC2 instances in your state. Count resources: terraform state list | wc -l How many resources does Terraform manage? When Things Go WrongState out of sync with reality? terraform refresh # Or newer approach: terraform apply -refresh-only Corrupted state? Check your state backups (S3 versioning saves you here)Restore from backup using terraform state pushAlways test in a non-prod environment firstConflicting states in team? 
Enable state locking (DynamoDB with S3) Use remote state, never local for teams - Implement CI/CD that runs Terraform centrally Quick ReferenceBackends: # S3 terraform { backend "s3" { bucket = "my-state-bucket" key = "terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" } } # Azure terraform { backend "azurerm" { resource_group_name = "terraform-state" storage_account_name = "tfstatestore" container_name = "tfstate" key = "terraform.tfstate" } } terraform init # Initialize backend terraform init -backend-config=file.hcl # Partial config terraform init -migrate-state # Migrate to new backend Providers: # Single provider provider "aws" { region = "us-west-2" } # With version constraint terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 5.0" } } } # Multiple regions with aliases provider "aws" { alias = "east" region = "us-east-1" } resource "aws_s3_bucket" "east_bucket" { provider = aws.east bucket = "my-bucket" } Common Commands: terraform state list # List resources terraform state mv <old> <new> # Rename resource terraform state rm <resource> # Remove from state terraform import <res> <id> # Import existing resource You now understand how Terraform remembers (state) and connects (providers). These two concepts are fundamental to everything else you’ll do with Terraform. State and providers handle the “how” and “where” of Terraform. Now let’s explore the “what”—the actual infrastructure you create. In the next chapter, we’ll dive deep into resources, data sources, and the dependency system that makes Terraform intelligent about the order of operations.
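As a companion to the "Importing Existing Resources" steps above, here is a minimal, hypothetical sketch of what the code side of an import looks like; the bucket name is a placeholder, not a real resource:

# Step 1: write a bare resource block for the existing bucket
resource "aws_s3_bucket" "imported" {
  # attributes get filled in after the import
}

# Step 2: import the real resource into state (run in the shell):
#   terraform import aws_s3_bucket.imported my-existing-bucket

# Step 3: run terraform plan, copy the real attributes (bucket name, tags, ...)
# into the block above, and repeat until the plan reports no changes.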
  5. by: Akhilesh Mishra Fri, 28 Nov 2025 15:35:51 +0530 Basic Variable TypesTerraform has three basic types: string, number, and bool. variable "name" { type = string description = "User name" default = "World" } variable "counts" { type = number default = 5 } variable "enabled" { type = bool default = true } Use them: resource "local_file" "example" { content = "Hello, ${var.name}! Count: ${var.counts}, Enabled: ${var.enabled}" filename = "output.txt" } 🚧You cannot use reserved words like count as variable name.Change values: terraform apply -var="name=Alice" -var="counts=10" Apply VariableAlways add description. Future you will thank you. Advanced Variable TypesReal infrastructure needs complex data structures. ListsOrdered collections of values: variable "availability_zones" { type = list(string) default = ["us-west-2a", "us-west-2b", "us-west-2c"] } Access elements: locals { first_az = var.availability_zones[0] # "us-west-2a" all_zones = join(", ", var.availability_zones) } Use in resources: resource "aws_subnet" "public" { count = length(var.availability_zones) availability_zone = var.availability_zones[count.index] # ... other config } MapsKey-value pairs: variable "instance_types" { type = map(string) default = { dev = "t2.micro" prod = "t2.large" } } Access values: resource "aws_instance" "app" { instance_type = var.instance_types["prod"] # Or with lookup function instance_type = lookup(var.instance_types, var.environment, "t2.micro") } ObjectsStructured data with different types: variable "database_config" { type = object({ instance_class = string allocated_storage = number multi_az = bool backup_retention = number }) default = { instance_class = "db.t3.micro" allocated_storage = 20 multi_az = false backup_retention = 7 } } Use in resources: resource "aws_db_instance" "main" { instance_class = var.database_config.instance_class allocated_storage = var.database_config.allocated_storage multi_az = var.database_config.multi_az backup_retention_period = var.database_config.backup_retention } List of ObjectsThe power combo - multiple structured items: variable "servers" { type = map(object({ size = string disk = number })) default = { web-1 = { size = "t2.micro", disk = 20 } web-2 = { size = "t2.small", disk = 30 } } } resource "aws_instance" "servers" { for_each = var.servers instance_type = each.value.size tags = { Name = each.key } root_block_device { volume_size = each.value.disk } } Sets and TuplesSet - Like list but unordered and unique: variable "allowed_ips" { type = set(string) default = ["10.0.0.1", "10.0.0.2"] } Tuple - Fixed-length list with specific types: variable "server_config" { type = tuple([string, number, bool]) default = ["t2.micro", 20, true] } Rarely used. Stick with lists and maps for most cases. Variable ValidationAdd rules to validate input: variable "environment" { type = string description = "Environment name" validation { condition = contains(["dev", "staging", "prod"], var.environment) error_message = "Environment must be dev, staging, or prod." } } variable "instance_count" { type = number default = 1 validation { condition = var.instance_count >= 1 && var.instance_count <= 10 error_message = "Instance count must be between 1 and 10." } } Catches errors before Terraform runs. Validation CheckSensitive VariablesMark secrets as sensitive: variable "db_password" { type = string sensitive = true } Won’t appear in logs or plan output. Still stored in state though (encrypt your state!). Variable PrecedenceMultiple ways to set variables. 
Terraform picks in this order (highest to lowest): Command line: -var="key=value"*.auto.tfvars files (alphabetical order)terraform.tfvars fileEnvironment variables: TF_VAR_nameDefault value in variable blockSetting Variables with FilesCreate terraform.tfvars: environment = "prod" instance_type = "t2.large" database_config = { instance_class = "db.t3.large" allocated_storage = 100 multi_az = true backup_retention = 30 } Run terraform apply - picks up values automatically Or environment-specific files: # dev.tfvars environment = "dev" instance_type = "t2.micro" terraform apply -var-file="dev.tfvars" Locals: Computed ValuesVariables are inputs. Locals are calculated values you use internally. variable "project_name" { type = string default = "myapp" } variable "environment" { type = string default = "dev" } locals { resource_prefix = "${var.project_name}-${var.environment}" common_tags = { Project = var.project_name Environment = var.environment ManagedBy = "Terraform" } is_production = var.environment == "prod" backup_count = local.is_production ? 3 : 1 } resource "aws_s3_bucket" "data" { bucket = "${local.resource_prefix}-data" tags = local.common_tags } Use var. for variables, local. for locals. OutputsDisplay values after apply: output "bucket_name" { description = "Name of the S3 bucket" value = aws_s3_bucket.data.id } output "is_production" { value = local.is_production } output "db_endpoint" { value = aws_db_instance.main.endpoint sensitive = true # Don't show in logs } View outputs: terraform output terraform output bucket_name Real-World Examplevariable "environment" { type = string validation { condition = contains(["dev", "staging", "prod"], var.environment) error_message = "Must be dev, staging, or prod." } } variable "app_config" { type = object({ instance_type = string min_size = number }) } locals { common_tags = { Environment = var.environment ManagedBy = "Terraform" } # Override for production min_size = var.environment == "prod" ? 3 : var.app_config.min_size } resource "aws_autoscaling_group" "app" { name = "myapp-${var.environment}-asg" min_size = local.min_size desired_capacity = local.min_size tags = [ for key, value in local.common_tags : { key = key value = value propagate_at_launch = true } ] } Quick ReferenceBasic types:variable "name" { type = string } variable "count" { type = number } variable "enabled" { type = bool } Complex types:variable "zones" { type = list(string) } variable "types" { type = map(string) } variable "config" { type = object({ name = string, size = number }) } variable "servers" { type = map(object({ size = string, disk = number })) } Validation:validation { condition = contains(["dev", "prod"], var.env) error_message = "Must be dev or prod." } Locals and Outputs:locals { name = "${var.project}-${var.env}" } output "result" { value = aws_instance.app.id, sensitive = true } Variables make your code flexible. Complex types model real infrastructure. Locals keep things DRY. Outputs share information. With variables and locals in your toolkit, you now know how to make your Terraform code flexible and maintainable. But where does Terraform store the information about what it created? And how does it connect to AWS, Azure, or other cloud providers? That’s what we’ll explore next with state management and providers.
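To see the precedence order above in action, here is a small hypothetical sketch; the variable, file contents, and commands are illustrative only:

# variables.tf
variable "instance_type" {
  type    = string
  default = "t2.nano"              # lowest precedence: the default
}

# terraform.tfvars (loaded automatically):
#   instance_type = "t2.micro"

# Environment variable (lower precedence than terraform.tfvars):
#   export TF_VAR_instance_type="t2.small"

# Command line wins over everything:
#   terraform apply -var="instance_type=t2.large"
#
# With all four set, Terraform uses "t2.large"; drop the -var flag and
# terraform.tfvars wins with "t2.micro".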
  6. by: Akhilesh Mishra Fri, 28 Nov 2025 15:34:33 +0530 Step 1: Install TerraformFor macOS users: brew install terraform For Windows users: Download from the official Terraform website and add it to your PATH. For Linux users: wget https://releases.hashicorp.com/terraform/1.12.0/terraform_1.12.0_linux_amd64.zip unzip terraform_1.12.0_linux_amd64.zip sudo mv terraform /usr/local/bin/ Step 2: Verify Installationterraform version You should see something like: Terraform v1.12.0 Step 3: Create Your First Terraform FileCreate a new directory for your first Terraform project: mkdir my-first-terraform cd my-first-terraform Create a file called main.tf and add this simple configuration: # This is a comment in Terraform resource "local_file" "hello" { content = "Hello, Terraform World!" filename = "hello.txt" } This simple example creates a text file on your local machine. Not very exciting, but it’s a great way to see Terraform in action without needing cloud credentials. Step 4: The Magic CommandsNow comes the fun part! Run these commands in order: Initialize Terraform: terraform init Terraform InitThis downloads the providers (plugins) needed for your configuration. See what Terraform plans to do: terraform plan Terraform PlanThis shows you exactly what changes Terraform will make. Apply the changes: terraform apply Terraform ApplyType yes when prompted, and watch Terraform create your file! Clean up: terraform destroy Terraform DestroyThis removes everything Terraform created. What Just Happened?Congratulations! You just used Terraform to manage infrastructure (even if it was just a simple file). Here’s what each command did: terraform init: Set up the working directory and downloaded necessary pluginsterraform plan: Showed you what changes would be madeterraform apply: Actually made the changesterraform destroy: Cleaned everything upThis same pattern works whether you’re creating a simple file or managing thousands of cloud resources. Essential Terraform CommandsBeyond the basic workflow, here are commands you’ll use daily: terraform validate - Check if your configuration is syntactically valid: terraform validate Terraform ValidateRun this before plan. Catches typos and syntax errors instantly. terraform fmt - Format your code to follow standard style: terraform fmt Terraform FormatMakes your code consistent and readable. Run it before committing. terraform show - Inspect the current state: terraform show Terraform ShowShows you what Terraform has created. terraform output - Display output values: terraform output Terraform OutputUseful for getting information like IP addresses or resource IDs. terraform console - Interactive console for testing expressions: terraform console Test functions and interpolations before using them in code. Type exit to quit. terraform refresh - Update state to match real infrastructure: terraform refresh 📋Deprecated in favor of terraform apply -refresh-only, but worth knowing.Common Command PatternsSee plan without applying: terraform plan -out=tfplan See PlanApply saved plan: terraform apply tfplan Auto-approve (careful!): terraform apply -auto-approve Destroy specific resource: terraform destroy -target=aws_instance.example Format all files recursively: terraform fmt -recursive These commands form your daily Terraform workflow. You’ll use init, validate, fmt, plan, and apply constantly. Terraform Daily CommandsNow that you understand what Terraform is and how to use its basic commands, let’s dive deeper into the core concepts that make Terraform powerful. 
We’ll start with variables and locals—the building blocks that make your infrastructure code flexible and reusable. I have also built Living DevOps platform as a real-world DevOps education platform. I’ve spent years building, breaking, and fixing systems in production. Now I teach what I’ve learned in my free time. You’ll find resources, roadmaps, blogs, and courses around real-world DevOps. No fluff. No theory-only content. Just practical stuff that actually works in production. Living With DevOps
  7. by: Akhilesh Mishra Fri, 28 Nov 2025 15:33:10 +0530 If you go back two decades, everyone used physical servers (produced by IBM, HP, and Cisco), which took weeks to set up correctly before we could run the applications on them. Then came the era of virtualization. Sharing computing resources across multiple OS installations using hypervisor-based virtualization technologies such as VMware became the new normal. It reduced the time to spin up a server to run your application but also increased complexity. Subsequently, we got AWS, which revolutionized computing, and a new era of cloud computing began. After AWS, other big tech companies such as Microsoft and Google launched their cloud offerings, named Azure and Google Cloud Platform respectively.

In the cloud, you can spin up a server in a few minutes with just a few clicks. Creating and managing a few servers was very easy, but as the number of servers and their configurations grew, manual tracking became a significant challenge. That's where Infrastructure as Code (IaC) and Terraform came to the rescue, and trust me, once you understand what they can do, you'll wonder how you ever lived without them.

What is Infrastructure as Code?

Infrastructure as Code is exactly what it sounds like – managing and provisioning your infrastructure (servers, networks, databases, etc.) through code instead of manual processes. Instead of clicking through web consoles or running manual commands, you write code that describes what you want your infrastructure to look like.

The Problems IaC Solves
- Manual configuration chaos and deployment failures
- "It works on my machine" syndrome
- Scaling nightmares across multiple environments
- Lost documentation and tribal knowledge
- Slow disaster recovery

Then came Terraform, and it changed the game

So what is Terraform? Terraform is an open-source Infrastructure as Code tool developed by HashiCorp that makes managing infrastructure as simple as writing a shopping list. Here's what makes Terraform special:

1. It's Written in Go

Terraform is built in Golang, which gives it superpowers for creating infrastructure in parallel. While other tools are still thinking about what to do, Terraform is already building your servers, networks, and databases simultaneously.

2. Uses HCL (HashiCorp Configuration Language)

Terraform uses HCL, which is designed to be human-readable and easy to understand. Don't worry if you haven't heard of HCL – it's so intuitive that you'll be writing infrastructure code in no time. Here's a simple example of what Terraform code looks like:

resource "aws_instance" "web_server" { ami = "ami-12345678" instance_type = "t2.micro" tags = { Name = "My Web Server" Environment = "Production" } }

See how readable that is? We're creating an AWS instance (a virtual server) called "web_server" with specific settings. Even if you've never seen Terraform code before, you can probably guess what this does.

3. Cloud-Agnostic Magic

Here's where Terraform really shines – it works with ANY cloud provider. AWS, Azure, Google Cloud, DigitalOcean, even on-premises systems. You learn Terraform once, and you can manage infrastructure anywhere.

4. State Management

Terraform keeps track of what it has created in something called a "state file." This means it knows exactly what exists and what needs to be changed, created, or destroyed. It's like having a super-smart assistant who remembers everything.
Why Terraform Became the King of IaCYou might be wondering: “Why should I learn Terraform when there are other tools like AWS CloudFormation or Azure Resource Manager?” Great question! Here’s why Terraform has become the go-to choice for infrastructure management: 1. One Tool to Rule Them AllMost cloud providers have their own IaC tools (AWS CloudFormation, Azure ARM templates, etc.), but they only work with their specific cloud. Terraform works with over 1,000 providers, from major cloud platforms to niche services. Learn it once, use it everywhere. 2. Huge Community and EcosystemTerraform has a massive community creating and sharing modules (think of them as infrastructure blueprints). Need to set up a web application with a database? There’s probably a module for that. Want to configure monitoring? There’s a module for that too. 3. Declarative ApproachWith Terraform, you describe what you want (the end state), not how to get there. You say “I want a web server with these specifications,” and Terraform figures out all the steps needed to make it happen. 4. Plan Before You ApplyOne of Terraform’s best features is the ability to see exactly what changes will be made before applying them. It’s like having a crystal ball that shows you the future of your infrastructure. Real-World Example: Why You Need ThisLet me paint you a picture of why this matters. Imagine you’re working at a company that needs to: Deploy a web application across development, staging, and production environments.Ensure all environments are identicalScale up during peak timesQuickly recover from disastersMaintain security and compliance standardsWithout Terraform: You’d spend weeks manually setting up each environment, documenting every step, praying nothing breaks, and probably making small mistakes that cause mysterious issues months later. With Terraform: You write the infrastructure code once, test it in development, then deploy identical environments to staging and production with a single command. Need to scale up? Change a number in your code and redeploy. Disaster recovery? Run the same code in a different region.
  8. by: Akhilesh Mishra Fri, 28 Nov 2025 15:31:09 +0530 Stop clicking around cloud dashboards. Start building reproducible, version-controlled, scalable infrastructure using Terraform, the industry standard for Infrastructure as Code. This course takes you from first terraform init to real-world Terraform architectures with modules, best practices, and production workflows. 👉 Designed for Linux users, DevOps engineers, cloud learners, and sysadmins transitioning to modern IaC. Most Terraform tutorials either stay too basic or jump straight into complex setups without building strong foundations. This course does both. You don’t just learn commands. You understand the logic and design decisions behind Terraform infrastructure. 🧑‍🎓 Who is this course for?This course is built for people who want real skills, not just certificates: Linux users who want to move into cloud & DevOpsSystem administrators shifting towards Infrastructure as CodeAspiring DevOps engineers building their toolchainDevelopers tired of manual server configurationAnyone who wants to treat infrastructure like code (the right way)🕺No prior Terraform experience required, but basic Linux command-line knowledge will help. 🧩 What you’ll learn in this course?Chapter 1: Infrastructure as Code – Here We Go Understand what IaC really means, why Terraform matters and how it fits into modern infrastructure. Chapter 2: Getting Started – Your First Steps Install Terraform, your first configuration, understanding providers, init, plan, and apply. Chapter 3: Terraform Variables and Locals Learn how to write reusable and parameterized configurations using variables and locals. Chapter 4: Terraform State and Providers Dive deep into state files, provider configuration, remote state, and dangers of bad state handling. Chapter 5: Resources, Data Sources, and Dependencies Understand how Terraform actually builds infrastructure graphs and manages dependencies. Chapter 6: Count, For_Each, and Conditionals Dynamic infrastructure with loops, conditional logic, and scalable configuration patterns. Chapter 7: Dynamic Blocks in Terraform Create flexible and advanced configurations using dynamic blocks. Chapter 8: Terraform Modules – Building Blocks You Can Reuse Everywhere Learn how to design, use, and structure modules like real production setups. Chapter 9: Provisioners and Import Handle legacy infrastructure, migration strategies, provisioners, and importing existing resources. Chapter 10: Terraform Functions – Your Code’s Swiss Army Knife Use built-in functions to manipulate data, strings, numbers, and collections. Chapter 11: Workspaces, Null Resources, and Lifecycle Rules Advanced control: multi-environment setups, resource lifecycle management, and more. Chapter 12: Terraform Best Practices and Standards The chapter that converts you from a Terraform user to a Terraform practitioner. Folder structure, naming, workflows, and professional practices. I built Living DevOps platform as a real-world DevOps education platform. I’ve spent years building, breaking, and fixing systems in production. Now I teach what I’ve learned in my free time. You’ll find resources, roadmaps, blogs, and courses around real-world DevOps. No fluff. No theory-only content. Just practical stuff that actually works in production. Living With DevOps
  9. by: Sourav Rudra Fri, 28 Nov 2025 09:50:08 GMT Pebble, the e-paper smartwatch that first launched on Kickstarter in 2012, gained a cult-like following for its innovative approach to wearable tech. Sadly, Fitbit acquired and shut it down in 2016, taking with it the intellectual property (IP) of the brand. The IP eventually landed with Google after their Fitbit acquisition in 2021. Earlier this year, the original creator, Eric Migicovsky, relaunched Pebble through Core Devices LLC, a self-funded company operating via the rePebble consumer brand. This resurrection became possible after Google open-sourced PebbleOS in January 2025. Now, Core Devices has announced something significant for the Pebble community. Great News for Pebble EnthusiastsA screenshot from the demo in this YouTube videoThe complete Pebble software stack is now open source. Everything you need to operate a Pebble watch is now available on GitHub. All of this didn't just materialize overnight; Core Devices has been improving PebbleOS since its open-sourcing and has been pushing those to the public repository. The rebuilt mobile companion apps for Android and iOS just got released as open source too. Without these apps, a Pebble watch is basically a paperweight. These are built on libpebble3, a Kotlin multiplatform library for interacting with Pebble devices. Similarly, the developer tools have been completely overhauled, with the old Ubuntu VirtualBox VM-based workflow being replaced with a modern browser-based one that allows anyone to develop Pebble apps in a web browser. The Pebble Time 2 is very close to coming to market! Hardware schematics are public as well. The complete electrical and mechanical design files for the Pebble 2 Duo are now available with KiCad project files included. You could literally build your own Pebble-compatible device from these files. There are some non-free components still in the mix. The heart rate sensor library for the Pebble Time 2, Memfault crash reporting, and Wispr Flow speech recognition all use proprietary code. But, fret not, these are all optional. You can compile and run the core Pebble software without touching any of them. Core Devices also launched two major software systems alongside the open source releases. The Pebble mobile app now supports multiple app store feeds that anyone can create and operate. This works similar to Linux package managers such as APT or AUR. Here, users can subscribe to different feeds and browse apps from multiple sources instead of relying on a single centralized server. Core Devices already operates its own feed at appstore-api.repebble.com. This feed backs up to the Internet Archive, preserving community-created watchfaces and apps that have been around over the years. Plus, developers can upload new or existing apps through the new Developer Dashboard. Monetization remains possible through services like KiezelPay, so creators can still get paid for their hard work. Why Open Source Everything?Migicovsky learned some painful lessons from Pebble's first shutdown. When Fitbit killed the project in 2016, the community was left scrambling with limited options. The gap between 95% and 100% open source turned out to matter more than anyone expected. Android users couldn't easily get the companion app. Many iOS users faced the same problem. "This made it very hard for the Pebble community to make improvements to their watches after the company behind Pebble shut down," Eric explained in his blog post. 
The reasoning behind this open source push is straightforward. If Core Devices disappears tomorrow, the community has everything they need to keep their watches running. No dependencies, no single point of failure. Apart from that, these new Pebble devices will focus on repairability, with the upcoming Pebble Time 2 (expected March-April 2026) featuring a screwed-in back cover, allowing users to replace the battery themselves instead of needing to buy a new device when the battery gives out. 💬 What are your thoughts on Pebble's comeback? I certainly look forward to their new launches!
  10. by: Theena Kumaragurunathan Fri, 28 Nov 2025 08:29:16 GMT In a previous column, I argued that self-hosting is resistance in an age where ownership is increasingly illusory. There is increasing evidence that self-hosting is becoming popular among a certain kind of user, say the typical readership of It's FOSS. There is a simple explanation for this shift: people want their data, dollars, and destiny back. Centralized platforms optimized for engagement and extraction are colliding with real-world needs — privacy, compliance, predictability, and craft. Linux, containers, and a flood of polished open-source apps have turned what used to be an enthusiast's project into a practical step for tech‑savvy users and teams. The demand and supply of self-hosting are headed in the right direction.

The Economics of Self-Hosting

I spoke about the demand side of the equation in a previous column. Today, I would like to talk about the supply side. Put simply, self-hosting got easier: Dockerized services, one‑click bundles, and opinionated orchestration kits now cover mail, identity, storage, media, automation, and analytics. And the hardware needed is trivial: a mini‑PC, a NAS, or a Pi can host most personal stacks comfortably. Click-and-deploy OSes and interfaces make it easy.

An increasing portion of these users are also conscious of the environmental impact of unchecked consumerism: recycling older hardware for your home-lab is an easy way to ensure that you aren't contributing to mountainous e-waste that poses risks to communities and the environment.

The numbers reinforce the vibe. The 2025 selfh.st community survey (~4081 respondents) shows more than four in five self‑hosters run Linux, and Docker is the dominant runtime by a wide margin. While this hasn't become mainstream yet, it highlights one of my arguments: there are costs to trusting big tech with your most important data and services, financial and otherwise. Once such costs outweigh the costs of self-hosting, once the vast majority of users can no longer deny such costs are draining their wallets and their sense of agency, we can expect this shift to become mainstream.

Self-Hosting is Independence from Big Tech

When your calendar, contacts, photo library, and documents sit on your own box behind your own reverse proxy, you remove third‑party analytics, shadow data enrichment, and surprise policy drift. You also reduce the surface area for "account lockouts" that nuke access to life‑critical records. For users burned by sudden platform changes — forced accounts, feature removals, data portability barriers — self‑hosting is an antidote.

Cost predictability over time. Cloud convenience is real, but variable charges accumulate as you scale storage, bandwidth, and API calls. With self‑hosting, you pay upfront (hardware + power), then amortize. For steady, continuous workloads — backups, photo libraries, media servers, home automation, docs, password vaults — the math is often favorable.

Reliability through ownership. Services die. Companies pivot. APIs change. By running key utilities yourself — RSS, password vaults, photo libraries, file sync, smart‑home control — you guarantee continuity and can script migrations on your timeline. That resilience matters when consumer vendors sunset features or shove core capabilities behind accounts and subscriptions.

Curiosity and capability‑building. There's a practical joy in assembling a stack and knowing how each layer works, and I can attest to that.
For Linux users, self‑hosting is an ideal next step: you practice containerization, networking, monitoring, backups, and threat modeling in a low‑risk environment.The Linux‑first baselinePhoto by Hc Digital / UnsplashLinux dominates self‑hosting because it’s stable, well‑documented, and unfussy (in the context of servers; I am aware Linux desktop has some ways to go before mainstream users will flock towards Linux). Package managers and container runtimes are mature. Community tutorials cover everything from Traefik/Caddy reverse proxies to WireGuard tunnels and PostgreSQL hardening. The selfh.st survey shows Docker adoption near 90 percent, with Proxmox, Home Assistant OS, and Raspberry Pi OS widely used. It’s not gatekeeping; it’s pragmatism. Linux is simply the easiest way to stitch a small, reliable server together today. Where the rubber meets the roadMost start with a single box and a few services: identity and secrets (Vaultwarden, Authelia, Keycloak); files and backups (Nextcloud, Syncthing, Borgmatic); media (Jellyfin, Navidrome, Photoprism/Immich); home (Home Assistant); networking (Nginx/Traefik/Caddy, WireGuard); knowledge (FreshRSS, Paperless‑ngx, Ghost). The payoff is a system where each function is yours. AI is accelerating the trendSelf‑hosted AI moved from novelty to necessity for teams with sensitive workloads. Local inference avoids model‑provider data policies, reduces latency, and stabilizes costs. Smaller models now run on consumer hardware; hybrid patterns route easy requests locally and escalate only high‑uncertainty tasks to cloud. For regulated data, self‑hosting is often the only sane route. The economics are getting clearer“Is self‑hosting cheaper?” depends on workload shape and rigor. Cloud Total Cost of Ownership (TCO) includes convenience and externalized maintenance; self‑hosting TCO includes your time, updates, and electricity. But for persistent, predictable personal workloads—photo/video storage, backups, calendars, private media—self‑hosting tends to win. What self‑hosting doesn’t fixYou still need to operate. Patching, backups, monitoring, and basic security hygiene are on you. Automated update pipelines and off‑site backups reduce pain, but they require setup and discipline. ​Internet constraints exist. Residential ISPs throttle uploads or block SMTP; dynamic IPs complicate inbound routes; power outages happen. In practice, most personal stacks work fine with dynamic DNS, tunneling, and a small VPS for exposed services, but know your constraints. ​⁠Some services are better bought. Global‑scale delivery, high‑throughput public sites, and compliance‑heavy email sending can be more efficient with a trustworthy provider. “Self‑host everything” isn’t the point—“self‑host what’s sensible” is.The cultural angleSelf‑hosting isn’t anti‑cloud; it’s pro‑agency. It’s choosing the right locus of control for the things you care about. For FOSS communities, it’s consistent with the ethos: own your stack, contribute upstream, and refuse enshittification through slow, patient craft. For Linux users, it’s the obvious next rung: turn your knowledge into durable systems that serve people you love, not just platforms that serve themselves. If you value predictability, privacy, and the quiet confidence of owning the tools you rely on, self‑hosting stops being a hobby and starts being common sense. The shift is already underway. It’s not loud. It’s steady. And Linux is where it happens.
  11. by: Sourav Rudra Thu, 27 Nov 2025 17:00:46 GMT A growing number of Linux desktop environments (DEs) are moving towards Wayland, the modern display protocol designed to replace the aging X11 window system. X11 has been the foundation of Linux graphical interfaces for over three decades now, but it carries significant technical debt and security limitations that Wayland aims to address. Projects like Fedora, GNOME, and KDE have been leading the charge on this by being among the first ones to adopt Wayland. Now, KDE has announced it is sunsetting the Plasma X11 session entirely. What's Happening: The KDE Plasma team has made it clear that the upcoming Plasma 6.8 release will be Wayland-exclusive and that the Plasma X11 session will not be included in it. Support for X11 applications will be handled entirely through Xwayland, a compatibility layer that allows X11 apps to run on Wayland compositors. The Plasma X11 session itself will continue to receive support until early 2027. Though, the developers have not provided a specific end date yet, as they are working on additional bug-fix releases for Plasma 6.7. The rationale behind this change is to allow the Plasma team to move faster on improving the stability and functionality of the DE. They stated that dropping X11 support will help them adapt without dragging forward legacy support that holds back development. What to Expect: For most users, this change is said to have minimal immediate impact. KDE says that the vast majority of their users are already using the Wayland session, and it has been the default on most distributions. Users who still require X11 can opt for long-term support distributions like AlmaLinux 9, for example, which includes the Plasma X11 session and will be supported until 2032. The developers also note that gaming performance has improved on Wayland. The session supports adaptive sync, optional tearing, and high-refresh-rate multi-monitor setups out of the box. HDR gaming works with some additional configuration. Plus, users of NVIDIA GPUs can breathe easy now, as Wayland support in the proprietary NVIDIA driver has matured significantly. Graphics cards supported by the manufacturer work well nowadays. For older NVIDIA hardware, the open source Nouveau driver can be used instead. There are some issues that the Plasma team is actively working on addressing, things like output mirroring, session restore, and remembering window positions. But overall, they seem well-prepared for this massive shift. Suggested Read 📖 U Turn! X11 is Back in GNOME 49, For NowA temporary move that gives people some breathing room.It's FOSSSourav Rudra
  13. by: Sourav Rudra Thu, 27 Nov 2025 14:13:02 GMT If you spend a lot of time on a computer, then fonts matter more than you think. A good one reduces eye strain and makes reading the contents of the screen easier. The right one can drastically improve your entire desktop experience. In my case, I like to use Inter on my Fedora-powered daily driver, and I don't really mess around with it. But everyone's different. Some like rounded fonts. Others want sharp, clean lines. Having options matters. Your eyes, your choice after all. Anyhow, Google just open-sourced a new option worth checking out. Google Sans Flex: What to Expect?Google Sans FlexReleased under the SIL Open Font License, Google Sans Flex is an open source font that is touted to be their next-gen brand typeface, designed by David Berlow. Sans Flex is a variable font with five axes: weight, width, optical size, slant, and rounded terminals. One file holds multiple styles instead of separate files, delivering different looks from a single download. Google designed it for screens of various sizes and modern operating systems. Plus, it should look sharp on high-resolution displays with fractional scaling. Basically, one Sans Flex file replaces dozens of individual font files. Just a demo of this font. I used GNOME Tweaks to apply it system-wide.Get Google Sans FlexYou can get the font file from the official website, and after that, you can install it on Ubuntu or any other Linux distribution with ease by following our handy guide. Keep in mind that the variable font features won't work in Linux desktop environments, and you will only get the regular style when using it system-wide. If you need help or have any questions, then you can ask the helpful folks over at our community forum. Google Sans FlexSuggested Read 📖: Learn to install fonts in Linux. How to Install New Fonts in Ubuntu and Other Linux DistrosWondering how to install additional fonts in Ubuntu Linux? Here is a screenshot tutorial to show you how to easily install new fonts.It's FOSSAbhishek Prakash
  14. by: Abhishek Prakash Thu, 27 Nov 2025 10:33:18 GMT As Linux users, most of us prefer open-source software. But if you’ve been using Linux for a while, you know this truth too: in daily workflows, you may have to rely on proprietary software. And sometimes, you use software that feels like an open source project but actually is not. I am going to list some of those applications that are popular among Linux users, even though we often don't realize that they are not open source. I'll also suggest their open source alternatives for you. Obsidian: Personal knowledge baseObsidian has become incredibly popular among developers, researchers, and anyone who takes their notes seriously. Its local-first approach, Markdown support, and graph view make it ideal for building a personal knowledge base. While it supports community plugins and customization, the core application itself is proprietary. This may come as a surprise because it always feels like Obsidian is open source. Alas! It is not. 🐧The most suitable open source alternative to Obsidian is Logseq. You can also try Joplin for its simplicity.Termius: Modern SSH clientTermius is a sleek, cross-platform SSH client used by sysadmins and developers, especially the ones who manage multiple servers. It offers synchronization across devices, organized host management, and secure key handling. However, it’s a fully closed-source commercial product. How I wish it were open source. 🐧Tabby could be somewhat of an open source alternative here.MobaXterm: Accessing Linux servers from WindowsMobaXterm is primarily a Windows tool, but many Linux users interact with it while managing remote Linux servers from work or university environments. At least that's what I used around 12 years ago at work. It combines SSH, X11 forwarding, and remote desktop features under one roof. And it does the job very effectively and offers a lot more than PuTTY. 🐧Not sure if there is a single application that has the same features as MobaXterm. Perhaps PuTTY and X2Go or Remmina could be used.Warp: The AI-powered terminalWarp is a new-age terminal focused on modern developer and devops workflows. It offers command blocks, AI suggestions and AI agents, team sharing features, and a highly polished interface. But it’s completely closed-source. I would have appreciated it if they offered it as open source and used their proprietary AI offering as an optional add-on. 🐧I believe Wave is the most suitable open source alternative to Warp. Similar features, and you can also use local AI.Docker Desktop: For easy container managementDocker itself is open source, but Docker Desktop is not. It provides a GUI, system integration, container management tools and additional features that simplify your container-based workflows on personal machines. After all, not everyone is a command line champion. Despite the licensing controversies, many people still use it because of convenience and integration with development environments. 🐧Rancher Desktop is worth looking at as an alternative here.Visual Studio Code: Microsoft's not so open offeringVS Code sits in a slightly grey area: The base project (Code – OSS) is open source.The official Microsoft build of VS Code is proprietary due to licensed components and telemetry.Nevertheless, it remains the most popular code editor for developers, including Linux users, thanks to its extensions, easy GitHub integration, and huge plugin ecosystem. 🐧Code - OSS is available in the official repositories of many Linux distributions. 
Think of it as the Chromium browser, which is the open source version of Chrome.Discord: The developer community hubThere was a time when developers used to dwell on IRC servers. That was 20 years ago. These days, Discord seems to have taken over all other instant messaging services. Surprisingly, Discord started as a gaming platform but has become a central communication tool for tech communities, open source projects, and developer groups. Many open source project communities now live there, even though Discord itself is fully proprietary. 🐧Matrix-based Element can be an alternative here.Vivaldi: Chrome alternative browserVivaldi is a popular web browser among Linux users. It is based on open-source Chromium, but its UI, branding, and feature layer are proprietary. Its deep customization, built-in tools (notes, mail, calendar), and privacy-focused philosophy make it a suitable choice for many Linux users. Wondering why it is not open source? They have a detailed blog post about it. 🐧You may consider Brave web browser.VMware Workstation: Enterprise-level virtualizationBut since it is 'enterprise' level stuff, how can it be open source? Despite all the licensing controversy, VMware’s Workstation and Fusion products are still heavily used for virtualization in both personal and enterprise environments. They’re well-optimized, reliable, and offer features that are sometimes ahead of open-source alternatives. But yes, they are completely proprietary. 🐧GNOME Boxes is my preferred way of managing virtual machines.Ukuu: Easy kernel management on UbuntuUkuu stands for Ubuntu Kernel Upgrade Utility. It allows you to install the mainline Linux kernel on Ubuntu. You can also use it to install a kernel of your choice, and to add or delete kernels from the comfort of a GUI. A few years ago, Ukuu switched to a paid license, unfortunately. 🐧Mainline is an actively maintained open source fork of Ukuu.Plex: Media server for self-hosting enthusiastsPlex is extremely popular among Linux users who build homelabs and/or media servers. Plex started as a self-hosted media server but gradually moved to become a streaming platform of its own. Oh! The irony. Not just that, most of its ecosystem is closed-source and cloud-dependent. Recently, they have started cracking down on free remote streaming of personal media. 🐧Forget Plex, go for Jellyfin. Kodi is another good open source option (Emby, like Plex, is no longer open source).Tailscale – Easy remote access for self-hostersTailscale uses the open-source WireGuard protocol but offers a proprietary product and service on top of it. It makes secure networking between your devices ridiculously easy. This is perfect for self-hosters and homelabbers, as you can securely access your self-hosted services from outside your home network. This simplicity is why several users accept the closed-source backend. 🐧You can go for Headscale as an alternative.Snap Store: Open front, closed backendUbuntu's Snap-based software center, Snap Store, is closed source software. Snapd, the package manager, is open source. But the Snap Store backend is proprietary and controlled by Canonical. This has sparked debate in the Linux community for years. Still, most Ubuntu users rely on it daily for installing and managing applications. It comes by default, after all. 🐧As an Ubuntu user, you can get the actual GNOME Software back.Steam: The backbone of Linux gamingSurprised? Yes, our beloved Steam client is not open source software. Yet we use it. None of us can deny that Steam has been crucial for improving the state of gaming on Linux. 
From Proton to native Linux support for thousands of games, Steam has played a huge role in improving Linux as a gaming platform, even though the platform itself is proprietary. 🐧If you must, you could try Lutris or Heroic Games Launcher.ConclusionUsing open-source software is about freedom, not necessarily forced purity. Many Linux users aim to replace proprietary software whenever possible, but they also value productivity, reliability, and workflow efficiency. If a closed-source tool genuinely helps you work better today, use it, but keep supporting open alternatives alongside. The good thing is that for almost every popular proprietary tool, the open-source ecosystem continues to offer strong alternatives. To me, the important thing isn’t whether your entire stack is open source. It’s that you’re aware of your choices and the trade-offs behind them. And that awareness is where true freedom begins.
  15. by: Abhishek Prakash Thu, 27 Nov 2025 04:41:37 GMT Happy Thanksgiving 🦃 I’m incredibly thankful for this community. To our Plus members who support us financially, and to our free members who amplify our work by sharing it with the world — you all mean a lot to us. Your belief in what we do has kept us going for 13 amazing years. This Thanksgiving, let’s also extend our gratitude beyond our personal circles to the open-source contributors whose work silently powers our servers, desktops, and daily digital lives. From code to distributions to documentation, their relentless effort keeps the Linux world alive 🙏 Here are the highlights of this edition of FOSS Weekly: Zorin OS upgrade tool. Arduino's future looking precarious. Dell prioritizing Linux with its recent launch. Backing up Flatpak and Snap applications. And other Linux news, tips, and, of course, memes! Thanksgiving is also associated with offers, deals and shopping. Like every year, I have curated a list of deals and offers that may interest you as a Linux user. See if there is something that you need (or want). Black Friday Deals for Linux Users 2025 [Continually Updated With New Entries]Save big on cloud storage, privacy tools, VPN services, courses, and Linux hardware.It's FOSSAbhishek PrakashThere is also a wholesome deal that will deliver fresh cranberry sauce to your doorstep while supporting keystone open source maintainers. 📰 Linux and Open Source NewsBlender 5.0 has arrived with major changes across the board. TUXEDO Computers has shelved its plans for an Arm notebook. Ultramarine 43 is here with a fresh Fedora 43 base and some major changes. Raspberry Pi Imager 2.0 has arrived with a clean redesign and new features. Dell has launched the Dell Pro Max 16 Plus, with the Linux version being available before Windows. Collabora has relaunched its desktop office suite, which is basically LibreOffice at its core but with a more modern and fresh user interface. Collabora Launches Desktop Office Suite for LinuxThe new office suite uses modern tech for a consistent online-offline experience; the existing offering is renamed ‘Classic’ and it maintains a traditional approach.It's FOSSSourav Rudra🧠 What We’re Thinking AboutArduino's enshittification might've begun as Qualcomm carries out some massive policy changes. Enshittification of Arduino Begins? Qualcomm Starts Clamping DownNew Terms of Service introduce perpetual content licenses, reverse-engineering bans, and widespread data collection.It's FOSSSourav Rudra🧮 Linux Tips, Tutorials, and LearningsYou can back up and restore your Flatpak and Snap apps and settings between distro hops. Backup and Restore Your Flatpak Apps & SettingsMake a backup of your Flatpak apps and application data and restore them to a new Linux system where Flatpak is supported.It's FOSSRoland TaylorMove Between the Distros: Back Up and Restore Your Snap PackagesMake a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too.It's FOSSRoland TaylorThe Zorin OS developers have given early access to the upgrade path from Zorin OS 17 to 18. And check out this list of OG applications that were reborn as NG apps. Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay DeadThese Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag.It's FOSSRoland Taylor Linux runs the world’s servers, but on desktops, it’s still fighting for attention. 
That’s why It’s FOSS exists: to make Linux easier, friendlier, and more approachable for everyday users. We’re funded not by VCs, but by readers like you. This Thanksgiving, we’re grateful for your trust and your support. If you believe in our work, if we ever helped you, do consider upgrading to an It’s FOSS Plus membership — just $3/month or a single payment of $99 for lifetime access. Help us stay independent and stay human in the age of AI slop. Join It's FOSS Plus 👷 AI, Homelab and Hardware CornerDon't neglect your homelab. Manage it effectively with these dashboard tools. 9 Dashboard Tools to Manage Your Homelab EffectivelySee which server is running what services with the help of a dashboard tool for your homelab.It's FOSSAbhishek Kumar🛍️ Linux eBook bundle This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative! Explore the Humble offer here✨ Project HighlightsA return from the dead? These open source apps sure did. Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay DeadThese Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag.It's FOSSRoland Taylor📽️ Videos I Am Creating for YouIn the latest video, I share how I customize and set up my Firefox browser. Subscribe to It's FOSS YouTube Channel💡 Quick Handy TipIn Nautilus file manager, you can select files according to certain pre-set conditions. To do that, first press CTRL+S and enter the pattern you want to select by. This will then make Nautilus select files or directories based on the given pattern. You can press CTRL+SHIFT+I to invert the selection as well. PS: The tip was tested using Nautilus, but other file managers should also have such functionality; only the shortcuts will vary. 🎋 Fun in the FOSSverseTest your skills by reviewing Fedora's interesting history in this quick quiz. The Fedora Side of Linux: QuizFedora has an interesting history. Take this quiz to find out a little more about it.It's FOSSAnkush Das🤣 Meme of the Week: Step aside mortals, your god is here. 🗓️ Tech Trivia: On November 24, 1998, America Online announced it would acquire Netscape Communications in a stock-for-stock deal valued at $4.2 billion, a move that signaled the shifting balance of power in the browser wars and highlighted the rapid consolidation occurring during the late-1990s Internet boom. 🧑‍🤝‍🧑 From the Community: Long-time FOSSer Ernest has posted an interesting thread on obscure Linux distributions. Obscure GNU/Linux Distributions that May Interest YouIn the ZDNET Tech Today newsletter that came into my inbox today, there’s an item that interested me, and I immediately thought about all my fellow !T’S FOSS’ers! You can read the item for yourself here, but one distribution in particular caught my attention, because it offers only open source software throughout, and it eschews systemd, and instead offers several other init systems users can choose from, including OpenRC, Runit, s6, and SysV (list copied directly from the article), which brough…It's FOSS Communityernie❤️ With lovePlease share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed. 
Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
  16. by: Sourav Rudra Wed, 26 Nov 2025 12:54:17 GMT Collabora Productivity is well-known for two of its flagship offerings, Collabora Online, their web-based document editor that powers many organizations, and their LibreOffice-based enterprise suite. That second one just got a makeover and the existing offering was moved to a new name. They announced Collabora Office for desktop today. It brings their online editor's interface to local desktop apps for Linux, Windows, and macOS. The previous enterprise suite is now called Collabora Office Classic. Collabora Office: What's Fresh?From left to right: Writer, Impress, and Calc. Click to expand. The new suite covers the basics like word processing, spreadsheets, presentations, and vector graphics. You get Writer for documents, Impress for presentations, and Calc for spreadsheets. But the way it is all put together is quite different. Under the hood, it uses LibreOffice's core technology, but the interface is where things get interesting. Instead of relying on VCL, they built it with JavaScript, CSS, WebGL, and Canvas. There is no Java dependency either. The result is a smaller download that installs cleanly. Everything you need comes in one package. File compatibility looks good too. Microsoft Office formats like DOCX, XLSX, and PPTX work as expected. OpenDocument formats are obviously supported as well. During my brief use of it, the interface felt modern with a familiar tabbed layout and easy-to-use toolbars. The developers mention that they have simplified the defaults and settings compared to typical desktop office apps. This should result in less clutter and more productivity for people who use Collabora Office daily. Speaking on this, Michael Meeks, the CEO of Collabora Productivity, added that: We’re excited to bring a first release of Collabora Office to the desktop, letting desktop users work both on-line and off-line in comfort. We look forward to working with and gaining valuable feedback from our partners, customers, users and community.Similar to LibreOffice, Yet DifferentBoth products use the same LibreOffice foundation. But that's where the similarities end. The new one mimics Collabora Online's web interface using JavaScript and CSS. Classic sticks with the traditional VCL-based desktop interface that longtime LibreOffice users will know well. Classic includes the Base database app with its Java components. The new version skips Base entirely and drops the Java requirement. Macros work on both, but differently. Classic gives you full editing capabilities with BASIC, Python, and UNO support. The new version just runs macros, no advanced tools. For business users, the support difference will matter the most. Classic has long-term enterprise support available now. The new Collabora Office is a fresh release that isn't yet tailored for enterprise deployment. Collabora is working on bringing enterprise support to the new suite. They expect to have it ready sometime in 2026. Until then, organizations needing production-ready support should stick with Classic. Download Collabora OfficeYou can grab Collabora Office from the official website. The suite is available as a Flatpak for Linux, an appx file for Windows 11, and an app bundle for macOS 15 Sequoia or later. If you need help with deployment or documentation, you can check out the support page for the relevant resources. The source code is available on GitHub. 
Collabora OfficeSuggested Read 📖 ODF 1.4 Release Marks 20 Years of OpenDocument FormatAccessibility and compatibility upgrades mark 20th anniversary of document standard at OASIS Open.It's FOSSSourav Rudra
  17. by: Roland Taylor Wed, 26 Nov 2025 11:03:03 GMT One of the greatest things about open-source software is that anyone can pick up where a project left off and bring it back to life, whether it's to continue a legacy or to build a spiritual successor on a new foundation. In this article, I'll share some of the popular Linux apps that got new lives as "New/Next Generation" (-ng) versions of their former selves. 1. iotop-ciotop-c gives iotop a refreshed lookYou've heard of top and htop, but did you know there's also a tool specifically for monitoring disk I/O? That's what iotop was created to do, but it hasn't seen development activity for some time, and being written in Python, it can get a bit slow (sorry Python lovers). That's where iotop-c comes in. It's a rewrite of the original iotop in C, of course, and it's not only much faster, but richer in features, and actively maintained. InstallationIotop-c is packaged as iotop-c in most distros. You can also check out the GitHub page to grab the source code, star the project, or report bugs. For Debian/Ubuntu you can run: sudo apt install iotop-c💡Want to learn how to make the most of iotop? Check out this guide to iotop and ntopng on Linux Handbook.2. vokoscreenNGvokoscreen NG makes screen recording a breezevokoscreen NG (vokoscreen Next Generation) is the modernized rewrite of vokoscreen, a popular open-source screen recording app from the previous decade. Where the original version used FFmpeg and was limited to X11 (not because of its backend, to be clear), vokoscreenNG uses GStreamer and has a fresh Qt interface. It's also got support for Wayland, which the previous generation lacked. InstallationYou can grab vokoscreenNG from Flathub, or install it on most distros directly from your package manager. On Debian/Ubuntu, you can install vokoscreen NG with: sudo apt install vokoscreen-ng3. WoeUSB-ngWoeUSB-ng makes it easy to create bootable Windows USB drivesWoeUSB-ng is a total rewrite of WoeUSB, an open-source Linux app for creating bootable Windows USB flash drives. It was created by the same developers, but rewritten in Python and given a GUI to make it easier to set up Windows installers from Linux. Ironically, despite an active community, WoeUSB-ng seems abandoned again, as it hasn't been updated in at least two years. For instance, there's an open pull request to add AppImage packaging, and pave the way for others, but the main repository appears stalled. Maybe some day WoeUSB-ng will rise again. 🚧WoeUSB was popular in the 2010s. Then it was abandoned and WoeUSB-ng took its place. From what I see, WoeUSB-ng's development has stagnated as well. Until we see a WoeUSB-ng++ or WoeUSB-GenZ, we have Ventoy to make bootable Windows USBs on Linux.InstallationIf you're on Arch (or, if you use Arch on your distro of choice via Distrobox), you can install WoeUSB-ng with: yay -S woeusb-ng4. eSpeak NGThe Screen Reader in GNOME uses eSpeak NG at the backendeSpeak NG is a speech synthesizer with support for over a hundred languages. It's a true fork that builds on the preexisting eSpeak engine, adding more languages and new features while possessing a cleaner codebase and remaining fully compatible with the original. This means eSpeak NG serves as a drop-in replacement for the original. InstallationeSpeak NG is included with most distros as their text-to-speech engine. You can also install espeak-ng from your package manager of choice; for example, sudo apt install espeak-ng will install it on Debian/Ubuntu (if you don't already have it). 5. 
stress-ngThe safest move with stress-ng (unless you know what you're doing)stress-ng (stress next generation) is an app designed to do exactly what its name suggests, but for a good cause. It generates system load to stress-test both hardware and software subsystems to uncover bugs and limitations. Let me stress, no pun intended, it is not meant for casual use. As you might guess, stress-ng is the remake of stress, the original app. After stress was abandoned, stress-ng became the standard, adding new features and methods for a broader range of systems. InstallationYou can install stress-ng from your distro's package manager. For Debian/Ubuntu, the command would be: sudo apt install stress-ng⚠️Warning: stress-ng is not a toy and can genuinely cause your system to overheat or become unresponsive. It should only be used by professionals, in controlled conditions.6. aircrack-ngaircrack-ng is a great pen-test toolaircrack-ng is a total remake and expansion of aircrack, an app used for professional security auditing of WiFi networks by attempting to "crack" their passwords (hence the name). The original aircrack was a WEP/WPA recovery tool from the early 2000s. Designed when WPA2 was new, it lacked the coverage and hardware support needed for the modern era. By contrast, aircrack-ng is a full suite, with broader hardware support, various attack types, automation features, and more. InstallationYou can get aircrack-ng on most distros through the package manager. It's included with many security-focused distros, like Kali, Parrot, and BlackArch. To install aircrack-ng on Debian/Ubuntu, you can run: sudo apt install aircrack-ng7. tomboy-ng:Tomboy-ng keeps the note nostalgia aliveTomboy-ng is a total rewrite of Tomboy, which was once the standard notes tool on the GNOME desktop, and shipped with several distros, including Ubuntu. Tomboy was written in C#, and required Mono, which was too heavy in the days of CDs and DVDs. For this reason, Tomboy was dropped from Ubuntu, and its C# dependency raised issues for some. Later, the legacy Tomboy codebase was abandoned, and Tomboy-ng, written in Pascal, took its place. InstallationYou can install Tomboy-ng on most distros from the default repositories. On Debian/Ubuntu, you can run: sudo apt install tomboy-ng8. radiotray-ng:Radiotray-ng lets you listen to online radios easilyRadiotray-ng is a complete rewrite of Radiotray, a minimalist Python/GTK2 app for playing online radio stations right from the system tray. This rewrite uses C++ and GLib/gtkmm, and is not only more stable, but less prone to breakage from GTK updates. Radiotray-ng brings better codec handling, lower resource usage, more stable stream reconnection, and uses JSON for saving its configuration (as opposed to XML). InstallationRadiotray-ng is packaged for Fedora and can be installed directly with: sudo dnf install radiotray-ngFor Ubuntu users, .deb packages are typically provided with each release. 9. GoldenDict-ngGoldenDict-ng is way more than a basic dictionary appGoldenDict-ng is a true fork of GoldenDict, a popular open-source dictionary and translation app. GoldenDict-ng maintains the original's support for multiple dictionary formats (StarDict, Babylon, Webster, and more), audio pronunciations, web lookups, and scan-to-translate functionality. On top of these, it brings an updated interface based on Qt 6, various bug fixes, better multimedia support, and improved dictionary rendering. 
It also adds other niceties like dark mode, better scanning behavior, and more robust indexing, making it suitable for dictionary power users. InstallationGoldendict-ng is available on Flathub, for those who'd prefer to use a Flatpak. You can also install it from most distro repos. Debian/Ubuntu users can run: sudo apt install goldendict-ng10. ntopngntopng gives a bird's eye view of your network activityntopng is the next-generation rewrite of ntop, a powerful real-time network traffic analyzer. The original ntop was already groundbreaking, and ntopng brings a new architecture, modern web UI, deep packet inspection, powerful metrics and flow analysis, and real-time bandwidth monitoring. It also adds Lua scripting, network flow export, and integration with PF_RING for high-performance environments. Installationntopng is packaged for most distros. On Debian/Ubuntu systems you can run: # Install ntopng sudo apt install ntopng💡Note: You can learn how to put ntopng to good use by following this tutorial.11. Shutter: revived, not replacedShutter's back like it never leftShutter is a popular Linux screenshot app with a slew of useful features that served countless users for many years. It was abandoned for some time, no longer working on modern distros or supporting Wayland. Despite apps like Flameshot and Gradia arising in its absence, Shutter still held a special place for many. Fortunately, Shutter has been revived and even has initial support for Wayland. It's actively maintained by a community of enthusiastic users and contributors. Where to get it:Shutter is packaged for most popular distros, so you can grab it right from your package manager. On Debian/Ubuntu, you can run the following to install it: sudo apt install shutterConclusionOpen-source projects are rarely ever truly dead: the right person or community can bring them back to life. From humble desktop apps to critical system utilities, open source finds new ways to preserve old ideas. If you rely on any of these apps, consider contributing or making a donation. After all, it's we, the community, who keep open-source alive.
  19. by: Abhishek Prakash Tue, 25 Nov 2025 16:01:55 GMT Thanksgiving is around the corner, and the market is flooded with Black Friday and Cyber Monday deals on everything from gadgets to software subscriptions. For Linux users and open source enthusiasts, finding deals that respect privacy can be tricky. We have handpicked offers on secure cloud storage, VPNs, learning platforms, and Linux-friendly hardware. My advice for picking the right dealsAs someone who often takes advantage of deals, here are a few things you should note to make an informed decision. Money-back policy: If it's a service/SaaS, like a cloud storage service, check their money-back policy and time period. If you don't like the service, you can get a refund if you initiate the refund request within the time specified in their policy. Renewal pricing: It is nice to use a service at a reduced rate, but this may not last forever. For example, StartMail is offering reduced pricing for new accounts for the first year at $29, but it renews next year at $58. Avoid vendor lock-ins: Imagine you bought a service that doesn't allow you to export your data in a universally accepted format. Then you'll be stuck with that service forever or lose your data. If you store data in a service, do check how you can get it back. For example, if you choose Proton Pass, you can easily export your data back if you decide to switch to some other password manager. Lifetime plans: I am a huge fan of lifetime offers. It helps me cut down on recurring subscription pricing as I pay a single fee, just once. I use lifetime plans of pCloud and Internxt for dumping data. And I am going to get Filen's too. It is good to check if a service offers a lifetime plan. Plan ahead for Christmas gifts: Take advantage of Black Friday sales to purchase Christmas gifts, too. For example, you can get Raspberry Pi kits and other DIY gadgets at lower pricing now and gift them to your children, nephews/nieces, etc. later. Just an idea to save money. Want vs Need vs Budget: It is easy to fall down the rabbit hole of deal shopping. Evaluate what you need and what you want. Those are two separate things. You might not need all the things you want. That doesn't mean you should only get what you need. Check your budget and decide how much it allows you to splurge. But you should take into account that these are limited-time offers, so decide fast and smart. 📋Some of the links here are affiliate links, which means we may get a commission when you purchase at no additional cost to you. Please read our affiliate policy. Proton — A Range of Privacy-Focused ServicesProton started as an encrypted email service. Today it is a complete privacy ecosystem trusted by over 100 million people worldwide. Its services take advantage of Swiss privacy laws and open source code. Proton Mail offers end-to-end encrypted email with an ad-free inbox. Proton VPN encrypts your internet traffic and masks your location. Proton Pass manages passwords and creates hide-my-email aliases to protect your inbox. Proton Drive provides encrypted cloud storage for files and photos. Lumo AI is their new privacy-respecting AI assistant that uses zero-access encryption and keeps no chat logs, unlike Big Tech alternatives. 💸 Offer: Up to 70% offGet The DealpCloud — Secure, Reliable Cloud StoragepCloud has protected 22 million users across 134 countries for over a decade. They have never had a security breach, and their specialty is lifetime plans where you pay once and own forever. 
This year's flagship deal is the 3-in-1 bundle. You get 5 TB of cloud storage, lifetime access to pCloud Pass password manager, and lifetime access to Cloud Crypto. All three products will cover your storage and security needs permanently. For people tired of subscriptions, the one-time payment means no recurring fees. 💸 Offer: Up to 62% offGet The DealFilen - Encrypted cloud storageGermany-based Filen offers zero-knowledge, client-side, end-to-end encrypted cloud storage. They use AES 256-bit file encryption, which is considered to be quantum resistant. All of their data centers are located in Germany and owned by Filen itself, not rented from someone else. They are quite affordable, actually. Their 200 GB storage plan costs just 19.99€, and just 13.99€ in the Black Friday sale. As I said earlier, I like lifetime deals. Filen is offering lifetime plans for the last time. I would suggest going for the lifetime plan. There is a 14-day refund period. 💸 Offer: Up to 30% off. Take advantage of their soon-to-be-removed lifetime plan.Get The DealInternxt — An Inexpensive Cloud StorageInternxt offers post-quantum encrypted cloud storage with additional privacy tools. Plans include Drive for storage and backups, Antivirus for securing your devices, VPN for encrypted connections, Cleaner to keep your system tidy, and Meet for video calls. All services use zero-knowledge encryption and only you can access your files. Note that some people have complained about a lack of support from Internxt. Use it as an alternative cloud storage option in that case. They also have a 30-day money-back policy, so it is worth checking whether it meets your requirements. 💸 Offer: Up to 90% off (slightly more discounted for Black Friday than it usually is)Get The DealDataCamp — Land Your Dream JobDataCamp teaches data science, AI, and machine learning through interactive courses. The platform offers 570+ courses, career tracks, and certifications. Learn Python, SQL, Power BI, ChatGPT, and other in-demand skills. The hands-on approach lets you practice real skills and build projects you can add to your portfolio. Premium plans give unlimited access to the entire catalog. 💸 Offer: Up to 50% offGet The DealNordVPN — For Keeping Nosy Trackers at BayNordVPN is one of the most popular VPN services globally. It combines strong security, fast speeds, and competitive pricing. Servers in 60+ countries provide reliable connections and help bypass geo-restrictions. Apps work seamlessly on Linux, Windows, macOS, Android, and iOS. Features include an automatic kill switch, split tunneling, and multiple device connections. 💸 Offer: Up to 77% offGet The DealSystem76 — Hardware Tailored for LinuxSystem76 builds computers specifically for Linux users. Based in the US, they also develop the community favorite, Pop!_OS, a distribution for general users and developers alike. Every machine can be configured to ship with Linux pre-installed and fully supported. The Thelio line offers powerful desktops for demanding workloads. Lemur Pro laptops deliver portability without compromising performance. All hardware is customizable to match your exact needs and budget. 💸 Offer: Up to $300 offGet The DealPironman 5-Max — The Best Raspberry Pi caseOf all the mini PC cases for the Raspberry Pi, I like Pironman 5 Max the most. It looks beautiful, and it has more NVMe ports and real HDMI ports. I have shared my experience in a detailed review of Pironman 5 Max. 
While the official website has not listed any reduced pricing, I see that at least Amazon US is offering 20% off on most SunFounder products. This means you get this awesome case for $76 instead of $96. 💸 Offer: 20% off but only on Amazon, not on official SunFounder websiteGet The Deal on Amazon USYou can also get 20% off on Pironman Mini and Pironman 5 variants. Tuta — Become a Legend Tuta offers private email and calendar services to over 10 million users. Formerly known as Tutanota, they are committed to making privacy a fundamental right. Quantum-resistant cryptography protects against future threats; zero-access infrastructure means even Tuta can't read your data; and many of its apps are open source and independently audited for security vulnerabilities. The Legend Plan includes 500 GB of storage, priority support, 30 extra email addresses, and unlimited custom domain addresses. 💸 Offer: Up to 62% offGet The DealCodecademy — Upskill in the Age of AI Codecademy has taught millions of people to code through interactive, hands-on courses. Learn Python, web development, data science, cybersecurity, or machine learning. All courses let you write actual code in the browser. The learn-by-doing approach makes coding accessible to beginners. Advanced learners can dive deep into specialized topics. The Pro plans unlock the full catalog and career services. 💸 Offer: Up to 60% offGet The DealJuno Computers — Linux Laptops from the UKJuno Computers is a UK-based manufacturer offering laptops, tablets, and mini PCs with Ubuntu pre-installed. Operating from London and Sunny Isles Beach, they specialize in Linux-ready hardware. Their lineup includes various models for different needs and budgets. All systems ship with Ubuntu, LibreOffice, and full Linux support, with some exceptionally good compatibility across different kernel versions. 💸 Offer: Up to 10% offGet The DealTerraMaster — NAS and DAS Storage Solutions TerraMaster specializes in network-attached storage and direct-attached storage devices for home users and small businesses. Their Black Friday sale covers NAS and DAS products with discounts up to 30%. The promotion runs from November 20 to December 1. Popular models include the F2-424 dual-bay NAS with an Intel N95 processor and dual 2.5GbE ports. It supports TOS 6 and Plex 4K transcoding. The F4-425 Plus features an Intel N150 CPU with dual 5GbE interfaces for 8K streaming. For high-capacity needs, the F6-424 Max six-bay NAS includes an Intel i5 processor and TRAID support. DAS options like the D4-320 connect directly to PCs via USB 3.2 Gen 2 for local backup. The D1 SSD Plus supports USB 4 with speeds up to 40Gbps for video editing. 💸 Offer: Up to 30% offGet The DealKhadas — Your Destination for Mini PCsKhadas manufactures single-board computers and mini PCs for makers and developers. Their product lineup includes the VIM series of SBCs and the modular Mind series of portable workstations. The Mind workstation features Intel Core processors in an ultra-slim design with magnetic modular connections. Past deals have included significant discounts on these products during the sale period too. 💸 Offer: Up to $100 or 20% offGet The DealZima — Experts in Homelab ProductsZima makes homelab and personal server hardware for self-hosters and DIY enthusiasts. Their products are perfect for building your own private cloud. Every device includes ZimaOS Plus benefits out of the box. 
Discounted products include ZimaBoard 2 for Plex and Docker with PCIe support, ZimaBlade for NAS and VPN projects, and ZimaCube with multiple drive bays for media transcoding. 💸 Offer: Up to 40% offGet The DealMore offers will be added...I'll keep on adding more interesting deals and offers as I come across them. Keep watching this page. And if you know of some other offers that should interest us Linux users, please share them in the comment section and I may add them to the list here.
  19. by: Sourav Rudra Tue, 25 Nov 2025 14:57:07 GMT TUXEDO Computers specializes in Linux-first hardware, recently launching the InfinityBook Max 15 (Gen10) with AMD Ryzen AI 300 processors. The German manufacturer has built a reputation for well-built Linux systems that work reliably. However, 18 months of work on an ARM-powered notebook has come to an abrupt halt. The company announced that it is shelving its Snapdragon X Elite laptop project. A Tricky SoC ArchitectureJust a placeholder image of TUXEDO Computers' recent launch.The notebook was built around Qualcomm's Snapdragon X Elite (X1E) SoC. TUXEDO faced numerous technical roadblocks that prevented a viable Linux experience. KVM virtualization support was missing entirely on their model. This eliminated a critical feature for developers and power users who rely on virtual machines. USB4 ports failed to deliver the high transfer rates expected from the specification. Fan control through standard Linux interfaces proved impossible to implement. BIOS updates under Linux presented another problem. Battery life fell far short of expectations. The long runtimes ARM devices typically achieve under Windows never materialized on Linux. Video hardware decoding exists at the chip level. However, most Linux applications lack support to utilize it, making the feature essentially useless. Some Hope for the FutureTUXEDO Computers is open to the possibility of this work being carried over. If the newer Snapdragon X2 Elite (X2E) proves more suitable, development may resume. The X2E chip launches in the first half of 2026, and reusing a significant portion of existing work would make the project viable again. Nonetheless, they will be contributing the device tree and other related work they developed to the mainline kernel, improving Linux support for many devices. Suggested Read 📖 Best Linux Laptop of 2025? TUXEDO InfinityBook Pro 15 (Gen10) LaunchesBeast specifications. Pre-orders open now, mid-August shipping.It's FOSSSourav Rudra
  20. by: Roland Taylor Tue, 25 Nov 2025 03:08:57 GMT Flatpak has pretty much become the de facto standard for universal packages on the Linux desktop, with an increasing number of distros supporting the format in their default installs. Yet, even with how easy it is to install and update Linux apps with Flatpak, moving them to a new system can be tricky, especially if you’ve installed dozens over time. Sure, you could list and reinstall everything manually, but that’s tedious work, and easily prone to human error. Fortunately, there’s a simple way to export your Flatpak apps, remotes, and even overrides so you can recreate your setup on another machine with just a few commands. You can even back up and restore your settings on another system. 1. Exporting your Flatpak appsOn the system where you've got all your apps, you'll first want to save a list of your installed apps as Flatpak "refs", including where each one is installed. Flatpaks can be installed either system-wide (and thus available to all users) or per-user. The process is different depending on whether you're running a single-user setup, or if you have to back up and restore for multiple users. For single-user systemsThis assumes you have no other users on your system. Back up both user and system apps you have access to. flatpak list --app --columns=installation,ref > flatpak-apps.txt For a multi-user setupFirst, you'll need to copy any system-level installations: # Backup only system-installed apps flatpak list --system --app --columns=ref > flatpak-apps-system.txt ❗This will not copy any user-installed Flatpaks.Next, copy any user-installed Flatpaks. You'll need to do this for every user individually. Have each user run this to back up their personal installations. flatpak list --user --app --columns=ref > flatpak-apps-user-$USER.txt Then, back up your Flatpak remotes (the repositories your apps came from): flatpak remotes --columns=name,url > flatpak-remotes.txt Each Flatpak app has a unique “ref” (short for "reference") that identifies its source, branch, and architecture. Saving these ensures you reinstall the exact same apps later. Exporting your overrides (optional)Overrides are the individual settings that you can modify for each Flatpak with an app like Flatseal. By exporting all overrides together at once, you can preserve your settings across installs. To do this, you can run the following command: # Export Flatpak overrides to a file flatpak override --show > flatpak-overrides.txtYou can later restore these overrides on your target system. Exporting your app dataFlatpak app data, like configuration files and saved sessions, is stored in ~/.var/app/. You can copy this folder to your target system any time you want to transfer your app settings. For individual apps, you can copy their individual folders. For example, for GIMP, you can copy ~/.var/app/org.gimp.GIMP. 2. Preparing the target system (optional)ℹ️I assume that you're transferring your apps to another system. If that's not the case, you can skip this step.It goes without saying, but if you're going to transfer your Flatpak apps to another system, you should ensure that the target system has Flatpak support. To check this, you can run: # Check if Flatpak is installed flatpak --version Checking that Flatpak is installed and workingIf you got a version number, you’re good to go. Most popular distros, including Fedora, Mint, and Pop!_OS, have Flatpak preinstalled. 
If you're planning on migrating to a fresh installation of Ubuntu, you'll need to install Flatpak first: # Install Flatpak sudo apt install -y flatpak3. Recreating your setup on the new systemOn your new Linux install, the first step is to re-add your Flatpak remotes: # Add saved Flatpak remotes while read -r name url; do flatpak remote-add --if-not-exists "$name" "$url" done < flatpak-remotes.txtRemember to run this command in the same directory where you have your flatpak-remotes.txt saved. Reinstalling your appsOnce you've added your Flatpak remotes, you can now reinstall all your apps to their original locations: # Restore Flatpaks: while read -r inst ref; do if [ "$inst" = "user" ]; then flatpak install -y --user "$ref" else flatpak install -y --system "$ref" fi done < flatpak-apps.txtOnce this process completes, you can confirm that everything worked by running: flatpak list --appYou can compare this output with your original flatpak-apps.txt file to verify all your apps are back. Restoring overrides (optional)If you've saved your Flatpak overrides, you can restore them by running: # Restore your Flatpak Overrides while read -r line; do # Skip empty lines and comments [[ -z "$line" || "$line" =~ ^# ]] && continue flatpak override $line done < flatpak-overrides.txt Optional bonus for advanced users: Automating your setupIf you frequently install or test new Flatpak apps, you can automate this process so your backups stay up to date, and you can quickly move your apps to a new system at any time. Create a simple script (e.g., ~/bin/flatpak-backup.sh): #!/bin/bash flatpak list --app --columns=installation,ref > ~/flatpak-apps.txt flatpak remotes --columns=name,url > ~/flatpak-remotes.txt flatpak override --show > ~/flatpak-overrides.txt echo "Flatpak backup completed on $(date)" >> ~/flatpak-backup.log Then, make the shell script executable: chmod +x ~/bin/flatpak-backup.sh Then schedule it to run weekly with cron: crontab -e Add this line (runs every Sunday at 10 AM): 0 10 * * SUN ~/bin/flatpak-backup.sh This way, your Flatpak list and overrides stay current without any manual work. Wrapping upYou now know how to quickly back up and migrate your Flatpak apps between systems in a clean, scriptable way. It’s lightweight, doesn’t require extra tools, and makes distro hopping or system rebuilds much easier. If you'd like to take this to the next level, here's another quick tip: you can keep your Flatpak backup files in a version control system like git or a personal storage solution like Nextcloud. This way, if disaster strikes, you’ll be able to rebuild your app environment in minutes. You can also back up and restore Snap packages in a similar fashion. Move Between the Distros: Back Up and Restore Your Snap PackagesMake a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too.It's FOSSRoland TaylorI hope you find it useful 😄
  21. by: Chris Coyier Mon, 24 Nov 2025 15:38:52 +0000 I’ve been using Kagi for search for the last many months. I just like the really clean search results. Google search results feel all junked up with ads and credit-bereft AI sludge, like the incentives to provide a useful experience have been overpowered by milking profit and corporate mandates on making sure your eyeballs see as much AI as possible. I’m also not convinced Google cares about AI slop. Like do they care if a movie review for Predator: Badlands was written by a human after actually watching the movie, or if Gemini farted out a review because the LLM knows basically what a movie review reads like. Me, I sure would like to know. So I’m pleased with Kagi’s SlopStop idea. But I’ve managed to start this column with something I didn’t even really intend to talk about. Naturally, I’d like to talk about the typography on Kagi’s blog (follow that SlopStop link). Look at those single words at the end of both of those headers. Looks off. I can’t remember if those are “widows” or “orphans”, but upon looking it up, it’s neither, it’s a “runt” (lol). Obviously we can’t have that. One option is to text-wrap: balance; on the headers. Here’s what that looks like: Ehhhhhhhhh. Also not ideal. It makes those headers like half the width of the available space. Balancing is just way nicer with center-aligned headers. Which actually makes me think of how style queries should work with arbitrary styles… h1, h2, h3, h4 { /* doesn't actually work, style queries only work on --custom-properties */ @media style(text-align: center) { text-wrap: balance; } } Whatever — let’s not balance here anyway, let’s try text-wrap: pretty; (which lacks Firefox support). There we go: Better. The pretty value does a bunch of stuff, and runt protection is among them. Honestly though it’s the line-height that bugs me the most. It’s just too much for a big header. Let’s bring it in and even pull the letters a little bit with negative letter-spacing (there’s a rough sketch of these tweaks at the end of this post). Now we’ve got to fight hierarchy and organization a bit. All the text is pure black… fine. Everything is about the same distance away from each other… that’s a little weird. So we’re just leaning on text size and weight (and one little instance of italic). I think we bring in just a smidge more to help here. Kagi has a wonderful little dog logo, so we bring her in on the title to set it apart. The nav can sit inline with the title. We use the nice yellow brand color to better set the title and date, then let it ride. They should probably just get a CodePen account to work this stuff out right?
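For anyone who wants to poke at the same idea, here is a rough sketch of the kind of header tweaks described above; the selector and the exact values are just guesses, not what Kagi actually ships:

h2 {
  text-wrap: pretty;       /* avoids runts, among other things (no Firefox support yet) */
  line-height: 1.15;       /* tighter than the default, which feels like too much on a big header */
  letter-spacing: -0.01em; /* pull the letters in just a touch */
}

Tune the numbers to taste; the point is only that large headers usually want tighter line-height than body text, plus a hint of negative tracking.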
  22. by: Daniel Schwarz Mon, 24 Nov 2025 14:22:30 +0000 Sometimes I want to set the value of a CSS property to that of a different property, even if I don’t know what that value is, and even if it changes later. Unfortunately though, that’s not possible (at least, there isn’t a CSS function that specifically does that). In my opinion, it’d be super useful to have something like this (for interpolation, maybe you’d throw calc-size() in there as well): /* Totally hypothetical */ button { border-radius: compute(height, self); border-radius: compute(height, inherit); border-radius: compute(height, #this); } In 2021, Lea Verou explained why, despite being proposed numerous times, implementing a general-purpose CSS function like this isn’t feasible. Having said that, I do remain hopeful, because things are always evolving and the CSSWG process isn’t always linear. In the meantime, even though there isn’t a CSS function that enables us to get the value of a different property, you might be able to achieve your outcome using a different method, and those methods are what we’re going to look at today. The fool-proof CSS custom properties method We can easily get the value of a different CSS property using custom properties, but we’d need to know what the value is in order to declare the custom property to begin with. This isn’t ideal, but it does enable us to achieve some outcomes. Let’s jump back to the example from the intro where we try to set the border-radius based on the height, only this time we know what the height is and we store it as a CSS custom property for reusability, and so we’re able to achieve our outcome: button { --button-height: 3rem; height: var(--button-height); border-radius: calc(var(--button-height) * 0.3); } We can even place that --button-height custom property higher up in the CSS cascade to make it available to more containment contexts. :root { /* Declare here to use anywhere */ --button-height: 3rem; header { --header-padding: 1rem; padding: var(--header-padding); /* Height is unknown (but we can calculate it) */ --header-height: calc(var(--button-height) + (var(--header-padding) * 2)); /* Which means we can calculate this, too */ border-radius: calc(var(--header-height) * 0.3); button { /* As well as these, of course */ height: var(--button-height); border-radius: calc(var(--button-height) * 0.3); /* Oh, what the heck */ padding-inline: calc(var(--button-height) * 0.5); } } } I guess when my math teacher said that I’d need algebra one day, she wasn’t lying! The unsupported inherit() CSS function method The inherit() CSS function, which isn’t currently supported by any web browser, will enable us to get the value of a parent’s property. Think: the inherit keyword, except that we can get the value of any parent property and even modify it using value functions such as calc(). The latest draft of the CSS Values and Units Module Level 5 spec defines how this’d work for custom properties, which wouldn’t really enable us to do anything that we can’t already do (as demonstrated in the previous example), but the hope is that it’d work for all CSS properties further down the line so that we wouldn’t need to use custom properties (which is just a tad longer): header { height: 3rem; button { height: 100%; /* Get height of parent but use it here */ border-radius: calc(inherit(height) * 0.3); padding-inline: calc(inherit(height) * 0.5); } } There is one difference between this and the custom properties approach, though. 
This method depends on the fixed height of the parent, whereas with the custom properties method either the parent or the child can have the fixed height. This means that inherit() wouldn’t interpolate values. For example, an auto value that computes to 3rem would still be inherited as auto, which might compute to something else when inherit()-ed. Sometimes that’d be fine, but other times it’d be an issue. Personally, I’m hoping that interpolation becomes a possibility at some point, making it far more useful than the custom properties method. Until then, there are some other (mostly property-specific) options. The aspect-ratio CSS property Using the aspect-ratio CSS property, we can set the height relative to the width, and vice-versa. For example: div { width: 30rem; /* height will be half of the width */ aspect-ratio: 2 / 1; /* Same thing */ aspect-ratio: 3 / 1.5; /* Same thing */ aspect-ratio: 10 / 5; /* width and height will be the same */ aspect-ratio: 1 / 1; } Technically we don’t “get” the width or the height, but we do get to set one based on the other, which is the important thing (and since it’s a ratio, you don’t need to know the actual value — or unit — of either). The currentColor CSS keyword The currentColor CSS keyword resolves to the computed value of the color property. Its data type is <color>, so we can use it in place of any <color> on any property on the same element. For example, if we set the color to red (or something that resolves to red), or if the color is computed as red via inheritance, we could then declare border-color: currentColor to make the border red too: body { /* We can set color here (and let it be inherited) */ color: red; button { /* Or set it here */ color: red; /* And then use currentColor here */ border-color: currentColor; border: 0.0625rem solid currentColor; background: hsl(from currentColor h s 90); } } This enables us to reuse the color without having to set up custom properties, and of course if the value of color changes, currentColor will automatically update to match it. While this isn’t the same thing as being able to get the color of literally anything, it’s still pretty useful. Actually, if something akin to compute(background-color) just isn’t possible, I’d be happy with more CSS keywords like currentColor. In fact, currentBackgroundColor/currentBackground has already been proposed. Using currentBackgroundColor for example, we could set the border color to be slightly darker than the background color (border-color: hsl(from currentBackgroundColor h s calc(l - 30))), or mix the background color with another color and then use that as the border color (border-color: color-mix(currentBackgroundColor, black, 30)). But why stop there? Why not currentWidth, currentHeight, and so on? The from-font CSS keyword The from-font CSS keyword is exclusive to the text-decoration-thickness property, which can be used to set the thickness of underlines. If you’ve ever hated the fact that underlines are always 1px regardless of the font-size and font-weight, then text-decoration-thickness can fix that. The from-font keyword doesn’t generate a value though — it’s optionally provided by the font maker and embedded into the font file, so you might not like the value that they provide, if they provide one at all. If they don’t, auto will be used as a fallback, which web browsers resolve to 1px. This is fine if you aren’t picky, but it’s nonetheless unreliable (and obviously quite niche). We can, however, specify a percentage value instead, which will ensure that the thickness is relative to the font-size. So, if text-decoration-thickness: from-font just isn’t cutting it, then we have that as a backup (something between 8% and 12% should do it). 
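Here is a minimal sketch of that percentage fallback; the exact figure is just an arbitrary pick from that 8–12% range:

a {
  /* percentages are relative to 1em of the element's font, so the underline scales with font-size */
  text-decoration-thickness: 10%;
}

Unlike from-font, this doesn't depend on the font maker having shipped a sensible value in the font file.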
We can, however, specify a percentage value instead, which will ensure that the thickness is relative to the font-size. So, if text-decoration-thickness: from-font just isn’t cutting it, then we have that as a backup (something between 8% and 12% should do it).

Don’t underestimate CSS units

You probably already know about vw and vh units (viewport width and viewport height units). These represent a percentage of the viewport’s width and height respectively, so 1vw, for example, would be 1% of the viewport’s width. These units can be useful by themselves or within a calc() function, and can be used within any property that accepts a <length>. However, there are plenty of other, lesser-known units that can be useful in a similar way:

1ex: equal to the computed x-height
1cap: equal to the computed cap height
1ch: equal to the computed width of the 0 glyph
1lh: equal to the computed line-height (as long as you’re not trimming or adding to the content box, for example with text-box or padding respectively, lh units can be used to determine the height of a box that has a fixed number of lines)

Source: W3

And again, you can use them, their logical variants (e.g., vi and vb), and their root variants (e.g., rex and rcap) within any property that accepts a <length>. In addition, if you’re using container size queries, you’re also free to use the following container query units within the containment contexts:

1cqw: equal to 1% of the container’s computed width
1cqh: equal to 1% of the container’s computed height
1cqi: equal to 1% of the container’s computed inline size
1cqb: equal to 1% of the container’s computed block size
1cqmin: equal to 1cqi or 1cqb, whichever is smaller
1cqmax: equal to 1cqi or 1cqb, whichever is larger

That inherit() example from earlier, you know, the one that isn’t currently supported by any web browser? Here’s the same thing but with container size queries:

header {
  height: 3rem;
  container: header / size;

  @container header (width) {
    button {
      height: 100%;
      border-radius: calc(100cqh * 0.3);
      padding-inline: calc(100cqh * 0.5);
    }
  }
}

Or, since we’re talking about a container and its direct child, we can use the following shorter version that doesn’t create and query a named container (we don’t need to query the container anyway, since all we’re doing is stealing its units!):

header {
  height: 3rem;
  container-type: size;

  button {
    height: 100%;
    border-radius: calc(100cqh * 0.3);
    padding-inline: calc(100cqh * 0.5);
  }
}

However, keep in mind that inherit() would enable us to inherit anything, whereas container size queries only enable us to inherit sizes. Also, container size queries don’t work with inline containers (that’s why this version of the container is horizontally stretched), so they can’t solve every problem anyway.

In a nutshell

I’m just going to throw compute() out there again, because I think it’d be a really great way to get the values of other CSS properties:

button {
  /* self could be the default */
  border-radius: compute(height, self);
  /* inherit could work like inherit() */
  border-radius: compute(height, inherit);
  /* Nice to have, but not as important */
  border-radius: compute(height, #this);
}

But if it’s just not possible, I really like the idea of introducing more currentColor-like keywords. With the exception of keywords like from-font where the font maker provides the value (or not, sigh), keywords such as currentWidth and currentHeight would be incredibly useful.
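Purely to illustrate what I mean (none of these keywords actually exist, so treat this as wishful-thinking syntax rather than anything you can ship):

button {
  height: 3rem;
  background: steelblue;
  /* Hypothetical: reuse the resolved height, no custom property required */
  border-radius: calc(currentHeight * 0.3);
  padding-inline: calc(currentHeight * 0.5);
  /* Hypothetical: a border slightly darker than the background */
  border-color: hsl(from currentBackgroundColor h s calc(l - 30));
}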
They’d make CSS easier to read, and we wouldn’t have to create as many custom properties. In the meantime though, custom properties, aspect-ratio, and certain CSS units can help us in the right circumstances, not to mention that we’ll be getting inherit() in the future. These are heavily geared towards getting widths and heights, which is fine because that’s undoubtedly the biggest problem here, but hopefully there are more CSS features on the horizon that allow values to be used in more places.

On Inheriting and Sharing Property Values originally published on CSS-Tricks, which is part of the DigitalOcean family.
  23. by: Sourav Rudra Mon, 24 Nov 2025 13:19:55 GMT

Dell has a solid track record with Linux-powered OSes, particularly Ubuntu. The company has been shipping developer-focused laptops with Ubuntu pre-installed for years. Many of their devices come with compatible drivers working out of the box. Audio, Wi-Fi, Thunderbolt ports, and even fingerprint readers mostly work without hassle.

My daily workhorse is a Dell laptop that hasn't had a driver-related issue for quite some time now. And a recent launch just reinforces their Linux approach.

Dell Pro Max 16 Plus: What's Inside?

Dell just launched the Pro Max 16 Plus. It is being marketed as the first mobile workstation with an enterprise-grade discrete NPU, the Qualcomm AI 100 PC Inference Card. It packs 64GB of dedicated AI memory and dual NPUs on a single card.

Under the hood, you get Intel Core Ultra processors (up to Ultra 9 285HX), memory up to 256GB CAMM2 at 7200MT/s, GPU options up to NVIDIA RTX PRO 5000 Blackwell with 24GB VRAM, and storage topping out at 12TB with RAID support.

Interestingly, Phoronix has received word that the Windows 11 version of the Dell Pro Max 16 Plus won't ship until early 2026, while the validated Ubuntu 24.04 LTS version is already available.

With this, Dell is targeting professionals who can't rely on cloud inferencing. It says that the discrete NPU keeps data on-device while eliminating cloud latency, enabling work in air-gapped environments, disconnected locations, and compliance-heavy industries.

📝 Key Specifications

The Dell Pro Max 16 Plus ships with the following components:

Video: Up to 16″ UHD+, 120Hz, OLED touch display.
Power: 6-cell, 96 Wh, Li-ion.
Audio: 1x 3.5 mm combined audio jack, 2x 2 W stereo speakers.
Camera: 1080p at 30 fps HDR FHD RGB camera, 8MP 30 fps HDR RGB+IR camera.
USB: 1x USB 3.2 Gen 1 (5 Gbps) with PowerShare, 1x USB 3.2 Gen 1 (5 Gbps).
Thunderbolt: 2x Thunderbolt 5 (80 Gbps), 1x Thunderbolt 4 (40 Gbps), all with Power Delivery and DisplayPort.
Networking: 1x RJ45 (2.5 Gbps), Wi-Fi 7 BE200, Bluetooth 5.4, and Qualcomm Snapdragon X72 eSIM.
Slots: 1x SD card reader, 1x smart card reader.
Weight: 5.63 lb (2.55 kg)

🛒 Pricing & Availability

The Dell Pro Max 16 Plus starts at $3,329 (excl. tax and shipping). You can configure and order it directly from the official website.

Suggested Read 📖: "Best Linux Laptop of 2025? TUXEDO InfinityBook Pro 15 (Gen10) Launches" by Sourav Rudra on It's FOSS.
  24. by: Ani Mon, 24 Nov 2025 10:15:48 +0000

I struggled to combine work and family. It took me years, mistakes, and a lot of self-reflection to understand what really matters. When I had more balance, I became happier, more creative, and ultimately more effective. I also learned that personal happiness matters.

About me

I am the Head of DevOps and AI at Eficode. I have vast experience in IT service organizations. A significant part of my focus is on AI upskilling. We run several AI-related initiatives, including weekly demos and knowledge-sharing sessions. In addition, I am part of the Eficode Finland Steering Group, which meets weekly, and we also hold regular gatherings for all Eficode leaders.

My Journey from Chemistry to IT and Beyond

When I think back to my days at Ressu High School, I remember being equally fascinated by chemistry and psychology, but eventually I chose chemistry. That decision led me to pursue my first master's degree in chemical engineering. The job market for chemists wasn't exactly booming in 1999, while the IT industry was exploding with opportunities. My first job was at Hewlett-Packard, where I worked as a sales representative. I was responsible for selling Unix servers to a major telecom company in Finland. It introduced me to the world of technology, but after two years, I realized that sales alone weren't enough for me. I wanted to go deeper.

Katja Saarela, Head of DevOps and AI, Eficode

From Academia to Consulting

That curiosity led me back to university. I began working toward my PhD, exploring big data and bioinformatics long before those terms became buzzwords. I loved the research, the depth, and the challenge, but I also discovered that academia moves quite slowly. That's when I realized that consulting might be my perfect fit. In consulting, every project brings a new question, a new client, a new opportunity to learn. It's fast, dynamic, and exactly what my curious mind craves.

A Lifelong Learner

In technology, and in life, you're never "done learning." I want to learn new things all the time, and for me it's really interesting to start learning a new area or a new topic. So much so that along the way I earned additional degrees, such as a Master of Computer Science and a Master of Economics, and even explored theology and philosophy. In this field, you need to have a joy of lifelong learning. It is important to never feel that, okay, now I know everything.

Managing Stress: Lessons from a Career in IT and Parenthood

In the early years of my career, work was at the centre of my life. I thought it was the most important thing in the world. But I was wrong. I struggled to combine work and family. It took me years, mistakes, and a lot of self-reflection to understand what really matters. I have five kids, and in those early years, my values weren't right. I was giving my best to my job, but not to the people who needed me most: my family.

Over time, I realized something that completely changed my perspective: in my family, I can't be replaced. But at work, no one is truly indispensable. I began to set my priorities clearly: family first, then work. Ironically, when I started working less, I finally began moving forward in my career. That was one of the most surprising lessons of my life. When I had more balance, I became happier, more creative, and ultimately more effective.

I also learned that personal happiness matters. If I have time for my hobbies and my studies, I'm happier. And when I'm happy, I'm a better leader and colleague.
Now, with older kids and more experience, I don't see the need for such strict boundaries. I might do small work tasks in the evening, but it doesn't feel like a burden anymore. After 25 years in the IT field, I trust myself. I know what I'm doing, and I no longer worry as much.

Working At Eficode

My days are filled with planned meetings, but also spontaneous discussions with colleagues. I spend a lot of time at our Helsinki office because meeting people face-to-face and exchanging ideas energizes and inspires me. No two days look the same in my role. I'm responsible for the delivery and performance of our consultants. I have four teams in my unit. I regularly meet with my Team Leads, collaborate with our Sales team to review ongoing and upcoming cases, and lately, I've also been conducting many job interviews as we are recruiting new consultants for the unit.

Learning Skills from Sports

Scouting has been my most important hobby for as long as I can remember. I've held different positions over the years. I started as a scout leader at the age of 15. Looking back, that was my first real leadership experience. I didn't realize it at the time, but those years of leading groups, organizing activities, and motivating people taught me lessons that became the foundation of my professional life.

Years later, when I transitioned from an expert role to a leadership position in my career, I struggled at first. It wasn't easy to move from doing the work myself to guiding others to do it. Then I remembered my early days in scouting, and it clicked. I had been leading people since I was a teenager. That gave me confidence. Leadership wasn't new to me after all.

Scouting also taught me one of the most practical skills of all: time management. As a student, I had school, hobbies, and responsibilities in the scouts. I had to learn how to divide my time carefully, and that skill has stayed with me to this day. Now, in my work life, I still structure my time the same way: focus on my tasks but always make space for my hobbies and family.

But the most important lesson I learned from scouting was listening to myself and my feelings. It's easy to plan your week, to fill your calendar with activities and goals. But sometimes, it just doesn't feel right. Scouting taught me to pay attention and recognize when I need to adjust my schedule or slow down. It's not just about efficiency; it's about balance and well-being.

Managing and Leading

Over the years, I have come to realize the difference between managing and leading, two roles that often overlap but are not the same. Managing is about things: tasks, deadlines, and structures. Things don't have feelings; they can be organized logically into a schedule. But leading is about people, and people are complex. They have families, challenges, and emotions.

Real leadership means being able to handle both managing tasks and people, but it mostly means understanding that people are not robots. It's about connecting with them, listening to their worries, hearing their ideas, and being flexible when life happens. Sometimes plans need to change, and that's okay. What matters is building trust and respect so that people feel valued and supported.

AI agents and trust

Lately, I've been deeply interested in the relationship between AI agents and trust. I often listen to Eficode's tech talks, especially those by our CTO, Marko, who shares fascinating insights into the world of AI agent orchestration.
At Eficode, for example, we’ve developed a demo in which six different AI agents collaborate to build software: one writes specifications, another codes, and others handle testing. What makes this so intriguing is not just the technology itself, but the human element behind it: how do these agents trust one another, and how can we trust the results they produce? This question of trust is at the heart of today’s AI revolution. The post Role Model Blog: Katja Saarela, Eficode first appeared on Women in Tech Finland.
  25. by: Sourav Rudra Mon, 24 Nov 2025 09:53:40 GMT

Last month, Zorin OS 18 dropped just in time for the Windows 10 EOL, bringing an assortment of improvements like Linux kernel 6.14, rounded corners for the desktop interface, and a new window tiling manager. So, it didn't come as a surprise to me when Zorin OS 18 hit the 1 million downloads milestone just over a month after its release.

Alongside that announcement, the developers have made available an upgrade path from Zorin OS 17, which is intended for users of the Core, Education, and Pro editions. Let me walk you through the upgrade process. 😃

🚧 This upgrade path is currently in the testing phase. I don't recommend using it on your main computer or any production machine until the full rollout.

Before You Upgrade to Zorin OS 18

First, ensure that you are running Zorin OS 17.3, the last point release. Then, create a backup of your files before upgrading the system. This is an optional step, as Zorin OS' upgrade tool is quite reliable.

Zorin OS uses Déjà Dup as the backup utility, and the easiest way to create a backup is with this pre-installed "Backups" tool; you can search for it in the Zorin Menu (the app launcher). After you launch it, click on "Create My First Backup", select the folders you want saved and the ones to ignore, and then select the storage location for the backup. I suggest you store backups on external storage or upload them to Google Drive.

There is also an option to encrypt the backup using a password; you will need it to update the existing backup or restore the files to the system.

For a more comprehensive backup solution, I recommend opting for Timeshift instead. See the It's FOSS guide "Guide to Backup and Restore Linux Systems with Timeshift" by Abhishek Prakash for the details.

Time for The Upgrade

Open the Zorin Menu by clicking on its logo in the taskbar or pressing the Super key on your keyboard and search for "Software Updater". If you have any pending updates, get them by clicking on "Install Now". You will be prompted to enter your account password; enter it to authenticate, then wait for the updates to install. Towards the end, you might be asked to restart your computer.

Now, open the terminal via the Zorin Menu or by using the handy keyboard shortcut Ctrl + Alt + T and run the following command:

gsettings set com.zorin.desktop.upgrader show-test-upgrades true

When the upgrade path comes out of testing, you won't need to run the above-mentioned command and can skip directly to the step below.

Now, launch the "Upgrade Zorin OS" tool and select the Zorin OS 18 edition that matches your current installation. In my case, that is Zorin OS 18 Core, going up from Zorin OS 17 Core. You will be prompted to enter your password again. Go ahead and authenticate.

After an upgrade requirements check, a long list of disclaimers will be shown. Ensure that you go through them before clicking on "Upgrade" to begin the upgrade process from Zorin OS 17 to 18. Now it is just a matter of waiting.
The upgrade time depends on your internet speed and hardware. Once done, restart your computer when prompted, and you will boot into Zorin OS 18. If you run into any issues, you can ask the helpful FOSSers over at It's FOSS Community for help.

Suggested Read 📖: "Move Between the Distros: Back Up and Restore Your Snap Packages" by Roland Taylor on It's FOSS. Make a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too.
